forum_id: stringlengths 9–20
forum_title: stringlengths 3–179
forum_authors: sequencelengths 0–82
forum_abstract: stringlengths 1–3.52k
forum_keywords: sequencelengths 1–29
forum_decision: stringclasses (22 values)
forum_pdf_url: stringlengths 39–50
forum_url: stringlengths 41–52
venue: stringclasses (46 values)
year: stringdate 2013-01-01 00:00:00 – 2025-01-01 00:00:00
reviews: sequence
5zMKxmc1eh
OscillationInversion: Understand the structure of Large Flow Model through the Lens of Inversion Method
[ "Yan Zheng", "Zhenxiao Liang", "Xiaoyan Cong", "Yuehao Wang", "Peihao Wang", "Lanqing Guo", "Zhangyang Wang" ]
We investigate oscillation phenomena observed in inversion methods applied to large text-to-image diffusion models, particularly the ``Flux'' model. Using a fixed-point-inspired iteration method to invert real-world images, we find that the solution does not converge but instead oscillates between distinct clusters. Our results, validated on both real diffusion models and toy experiments, show that these oscillated clusters exhibit significant semantic coherence. We propose that this phenomenon arises from oscillatory solutions in dynamic systems, linking it to the structure of rectified flow models. The oscillated clusters serve as local latent distributions that allow for effective semantic-based image optimization. We provide theoretical insights, linking these oscillations to fixed-point dynamics and proving conditions for stable cluster formation and differentiation in flow models.
[ "diffusion models; image generation" ]
https://openreview.net/pdf?id=5zMKxmc1eh
https://openreview.net/forum?id=5zMKxmc1eh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "lfegJTX0ZF", "cxuCjvpLOL", "SVM9JsVG1o", "IACbNBsi7f", "Hy2ZvKKiUC" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730483357512, 1730635638634, 1730665778222, 1731653026947, 1730695938393 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3533/Reviewer_bPut" ], [ "ICLR.cc/2025/Conference/Submission3533/Reviewer_Bmmb" ], [ "ICLR.cc/2025/Conference/Submission3533/Reviewer_5nuJ" ], [ "ICLR.cc/2025/Conference/Submission3533/Authors" ], [ "ICLR.cc/2025/Conference/Submission3533/Reviewer_e8X8" ] ], "structured_content_str": [ "{\"summary\": \"The paper describes a method called 'Oscillation Inversion' that allows one to recover the latent noise corresponding to an image in a rectified flow model. Using this method, the paper discovers the phenomenon that repeated mapping and inversion between the latent representation and the pixel-space images does not converge to a fixed point; rather, it oscillates between distinct clusters.\\n\\nThen, the paper proposes a method for guiding this oscillation using finetuned inversion. This refers to finetuning the flow model to align the velocity field of an inpainted image with the original image. This finetuning procedure is used to perform high-quality inpainting.\\n\\nFinally, the paper proposes a method for further quality enhancement: post-inversion optimization. 
This enables the optimization of the latent representation to improve image fidelity metrics.\\n\\nThe experimental results show good performance on image inpainting and image enhancement tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper showcases a very interesting finding: if one uses the proposed inversion method, the images and their latent representations oscillate between different clusters.\\n\\nThe paper proposes two further methods, finetuned inversion and post-inversion optimization, that are applied to tackle denoising and image inpainting tasks. The paper showcases impressive results on these tasks.\", \"weaknesses\": \"The oscillatory behavior in the fixed-point iteration is induced by the error in the approximate inversion. A flow model defines a 1-to-1 mapping between the noise representations and latent representations. If the inversion process were exact, the process would not oscillate. It would map back and forth between the noise representation and the latent representation. Therefore, the finding of the oscillatory behavior is not a general finding about flow models; rather, it's something specific to this inversion process.\", \"the_paper_does_not_provide_sufficient_evidence_for_its_central_claim\": \"The images and their latent representations oscillate between local clusters. The paper claims this is the case empirically, but provides no experimental data. The claim also isn't justified theoretically, nor is an intuitive reason given for why that would be the case.\\n\\nLack of justification for methods. It is not clear where the training objectives come from and why the methods achieve the desired results.\\n* Finetuned inversion: It is not clear why this training objective would induce more diverse oscillatory behavior in latent space. 
The paper lacks justification or experimental data that supports this claim.\\n* Post-inversion optimization: It is unclear why this complex training objective is chosen over a simple fidelity loss with an MSE regularization term.\", \"paper_formatting\": [\"Figure 6 is not legible.\", \"Page 5 margin violation\", \"Top of Page 6 odd figure spacing?\", \"Table formatting does not follow ICLR convention.\"], \"questions\": [\"Is the GP loss in Eq 9 equivalent to MSE? If z_i is fixed, the first and last terms are constant. The middle term for an RBF kernel is simply the squared distance, so this loss should be equivalent to the MSE.\", \"In post-inversion optimization: How are the points assigned to clusters? How are the clusters identified?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies the oscillation problem in the inversion of rectified flows. The authors use a numerical method to justify the claim that oscillations exist as groups. Then they introduce finetuned inversion to make the separated clusters align with customized semantics. The observation in the paper is largely experiment-based and the method introduced is straightforward.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The oscillation phenomenon in the inversion problem of rectified flow is identified numerically. A solution to utilize the oscillation is proposed to improve the quality of image generation.\", \"weaknesses\": \"The main weakness is that the observation is identified through experiments, not analysis. It is hard to fully convince readers of the contribution. The optimization method to deal with oscillation is straightforward, without further explanation.\", \"questions\": \"1.\\tThe reference format in section 2 looks very strange. 
It should be updated.\\n2.\\tAs the paper is a direct follow-up of Liu et al. 2022, it would be much better to elaborate the formulation in section 3.1.1. \\n3.\\tIn (2), the integral over t is missing compared to Liu et al. 2022.\\n4. The lines 270-272 are not correctly displayed\\n5. In line 370, \\\"Section ??\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method for solving the image inversion problem. The authors observe that when using a fixed-point iterative method to solve the inversion problem, the solution oscillates between a number of points. Based on this observation, a method for image editing, image reconstruction, and super-resolution is derived.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The suggested method solves the inversion problem, which is of interest to a large audience.\\n2. The observation of oscillatory behavior for fixed-point methods on flow models and its relation to semantic meaning is novel. And the authors are able to reproduce this behavior on a smaller model as well.\\n3. The method is able to produce SOTA results.\", \"weaknesses\": \"1. No clear algorithm for applying the method is provided. This makes it hard to understand the order in which the 3 stages of the method, Group Inversion, Finetuned Inversion, and Post-inversion Optimization, are applied, or whether all three stages are always applied.\\n\\n2. It is not clear how the multiple latent encodings in equation 6 are used for image reconstruction and image super-resolution. \\n\\n3. In equation 8 appears $\\\\mathcal{L}_{\\\\text{rgb}}$, and it is said to be \\\"a customized loss function\\\", but no example of such a loss function is given.\\n\\n\\n\\n4. 
Images in Figure 6 are too small\", \"questions\": \"Can the authors please expand on weaknesses 1 and 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper presents a novel method called Oscillation Inversion for enhancing image manipulation capabilities in rectified flow-based large text-to-image diffusion models. The authors observe that, unlike conventional inversion methods that converge smoothly, their proposed approach oscillates between distinct semantic clusters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method's applicability to diverse tasks (e.g., image restoration, enhancement, and makeup transfer) demonstrates its flexibility and potential for real-world uses.\"], \"weaknesses\": [\"Are there other models to perform the experiments with? The experiments are run on just one checkpoint (FLUX).\"], \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5zGuFj0y9V
Boosting Semi-Supervised 2D Human Pose Estimation by Revisiting Data Augmentation and Consistency Training
[ "Huayi Zhou", "Mukun Luo", "Fei Jiang", "Yue Ding", "Hongtao Lu", "Kui Jia" ]
2D human pose estimation (HPE) is a fundamental visual problem. However, its supervised learning requires massive keypoint labels, which are labor-intensive to collect. Thus, we aim at boosting a pose estimator by excavating extra unlabeled data with semi-supervised learning (SSL). Most previous SSHPE methods are consistency-based and strive to maintain consistent outputs for differently augmented inputs. Under this genre, we find that SSHPE can be boosted from two cores: advanced data augmentations and concise consistency training ways. Specifically, for the first core, we discover the synergistic effects of existing augmentations, and reveal novel paradigms for conveniently producing new superior HPE-oriented augmentations which can more effectively add noise to unlabeled samples. We can therefore establish paired easy-hard augmentations with larger difficulty gaps. For the second core, we propose to repeatedly augment unlabeled images with diverse hard augmentations, and generate multi-path predictions sequentially for optimizing multiple losses in a single network. This simple and compact design is interpretable, and easily benefits from newly found augmentations. Compared to state-of-the-art SSL approaches, our method brings substantial improvements on public datasets. Code will be released for academic use.
[ "semi-supervised learning", "human pose estimation", "data augmentation", "consistency training" ]
Reject
https://openreview.net/pdf?id=5zGuFj0y9V
https://openreview.net/forum?id=5zGuFj0y9V
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z3fulfXXPk", "ym9CxJ6bAP", "yEajoE76ND", "qh9mR1DTiH", "pLf3o1RlC8", "kXuYAC3kJn", "kWByMz5rcA", "iDl1uAs8Mb", "foLXelOXGD", "f2l8gY1l7z", "ZLATmv7bvV", "XpvgiYYFrP", "R9dvm2chz6", "OuOPXF9rsu", "OhkjSm1l3E", "OFYWTq6Dqp", "Ldzbp0S4xY", "KTmc5xvtFE", "Gm0uWHa1Yo", "FL9j5UQm2a", "DYgl8VIJyR", "D8eWYMmyoW", "CaWsIjeQmq", "C9GpgFFF7t", "9COyrxnGVa", "5N9er3kUfb", "3Q00weDJR6" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732042506226, 1732436672931, 1732043092199, 1732676380260, 1730271592058, 1732041982695, 1732683909642, 1732042916770, 1730341479511, 1732043244021, 1732041525480, 1732978052860, 1732043404135, 1732496558311, 1734406375720, 1737523527649, 1732616132994, 1732041178700, 1733071147995, 1733048968400, 1730348324024, 1732043907607, 1732042717355, 1732042027220, 1732684521087, 1730840870800, 1730637736230 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_r3Sz" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_4hx9" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_z46M" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_r3Sz" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2733/Reviewer_r3Sz" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Area_Chair_ms24" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_z46M" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_iqhT" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Authors" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_TvMD" ], [ "ICLR.cc/2025/Conference/Submission2733/Reviewer_4hx9" ] ], "structured_content_str": [ "{\"title\": \"Official Responses by Authors\", \"comment\": \"Dear Reviewer iqhT,\\n\\nThank you for your detailed review, positive appreciation and critical points about our paper. Your feedback is essential for refining our work, and we have addressed each of your concerns below.\\n\\n***\\n***W1: Concerns about novelty***\\n\\n**R1:** Firstly, the research goal of this paper is to try to propose a new paradigm based on existing basic augmentations, so as to obtain a new advanced augmentation combination conveniently and efficiently. Therefore, we adopt the Joint Cutout augmentation in the existing similar work PoseDual and the Joint Cut-Occlude augmentation in SSPCM as elements of the basic augmentation set. These do not conflict with the innovation of this paper.\\n\\nSecondly, our proposed superior augmentation generation paradigm and multi-path consistency loss strategy do not overlap with the methods SSPCM or POST. 
Specifically, SSPCM is a triple-network framework that improves PoseDual by adding an auxiliary network to provide additional help for examining the quality of pseudo-labels. Our approach is different from SSPCM in both structure and core strategy. Moreover, our quantitative results in Tables 3 and 4 are clearly better than those of SSPCM. In contrast, POST applies the mean-teacher architecture most commonly used in unsupervised domain adaptation to gradually update the teacher model to better predict pseudo labels. During training, only the student model uses backward propagation to update parameters, and the teacher model uses an exponential moving average (EMA) to update parameters without gradients. It can be seen that POST is completely different from the teacher-student alternating network used in our paper.\\n\\nFinally, our proposed strong augmentation generation paradigm is experimentally demonstrated to be concise and effective for the task of 2D HPE. In contrast, PoseAug targets the 3D HPE problem, which usually requires a 2D-to-3D pose estimator/lifter whose input is a series of detected 2D keypoints. It is a completely different field from our 2D keypoint detection using the RGB image as input. This means that augmentations used by these two fields are not mutually compatible.\\n\\n***\\n***W2: Lack of experiments***\\n\\n**R2:** We appreciate the reviewer pointing out many other topics and research directions that are somehow related to our study, such as 3D human pose estimation [3], 3D human pose and shape estimation [5], source-free domain adaptive human pose estimation [1], and human vision foundation models [4]. Although these methods show various advantages and generalization in human-related topics, they are not completely consistent with the SSHPE topic studied in this paper. Almost all of the above methods have not been or cannot be trained and tested on the 2D HPE datasets COCO and MPII used in this paper. 
Therefore, we cannot fairly or quickly compare our method quantitatively with these cross-proposition methods. Nevertheless, we agree that in future research, we would try to obtain the 2D human keypoint prediction results of the large foundation model or 3D human model in a zero-shot manner in order to compare with our method. This is indeed a more interesting and promising research area.\\n\\n***\\n***W3: Provide more results***\\n\\n**R3:** Regarding the loss function effectiveness, we agree that the weight of the supervised loss and the unsupervised loss is a very important hyper-parameter. In the experiments of this paper, in order to compare fairly with previous methods (including PoseDual and SSPCM), we follow the same setting as them and use the same weight for these two losses. We did not deliberately optimize this hyper-parameter, which would be unnecessary or unfair.\\n\\nRegarding the unsupervised domain adaptation HPE methods, we agree that they have something in common with the topic SSHPE in this paper, but these two research fields cannot be confused. The UDA HPE method is only applicable when there is a clear domain difference between the labeled source domain and the unlabeled target domain. However, our semi-supervised HPE research does not emphasize the domain difference between the labeled set and the unlabeled set. Therefore, it is unfair and inappropriate to directly compare the SOTA algorithm UDAPE [6] with SSHPE approaches including PoseDual, SSPCM and our method in Table 4. Nevertheless, we expect that our future research will apply the core innovations proposed in this paper (including the new superior augmentation and multi-path consistency loss) to address the UDA HPE problem.\\n\\n***\\n***W4: Improve content clarity***\\n\\n**R4:** Thank you for pointing out these omissions. We have clarified these details in our updated paper. 
Please refer to the first paragraph of Section 3, where the revised part is marked in red color.\\n\\n***\\nWe hope these revisions address your concerns and provide a more comprehensive understanding of our method and its unique contributions. We are dedicated to making the necessary updates to enhance the clarity and impact of our paper.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response to my questions.\\n\\n1. Response to W1R1: While I agree that \\\"there is no exactly the same work before us to try to propose and verify the combination synergy effect between multiple basic augmentations\\\", my point was that such a proposed method needs to run a set of optimal strategy experiments (similar to searching for optimal hyperparameters) and simply proposing a sequence of augmentation schemes (from existing ones) does not require formulating the problem in a novel sense. Furthermore, the gains are not significant enough to be considered an extraordinary contribution.\\n\\nFor example, SimCLR [1] not only proposes the composition of augmentations but also shows how the overall proposed simple framework of contrastive learning gives highly noticeable performance gains.\\n\\n\\n2. Response to W2R2: I believe it is unfair to base your comparison on a supervised approach in terms of relative improvement. A fair comparison would be a direct comparison to SSPCM (an SSHPE method) where it is evident that the gains are marginal (except ResNet101).\\n\\n\\n[1] Chen et al., \\\"A Simple Framework for Contrastive Learning of Visual Representations\\\"\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"***W6: Explanation of results in Table 8***\\n\\n**R6:** Similar to our explanation in W2 and W4, the small absolute performance improvement on the MPII test-set is completely understandable when using the powerful backbone HRNet-w32. 
In fact, the MPII dataset was released earlier than the COCO dataset, so it faces a more severe problem of accuracy saturation, or cannot keenly reflect the differences between SOTA algorithms. As an alternative, we suggest that reviewers can refer to the Table 11 in **Appendix A.2**, which shows obvious advantages of our method over the other two SOTA algorithms (PoseDual and SSPCM) when the label rate is lower.\\n\\n***\\n**Q1: Division criteria in Table 5**\\n\\n**A1:** In fact, the upper, middle and lower areas in Table 5 show CNN-based fully supervised HPE methods, transformers-based fully supervised HPE methods and semi-supervised HPE methods respectively. We apologize for not stating this clearly, and we have included the corresponding statement in the revised paper (marked in red color in the description of setup S3).\\n\\n***\\n**Q2: Training efficiency of our method**\\n\\n**A2:** This is a good and important question. In order to fairly and reasonably reflect the efficiency of our method, we follow the setting S1 (using ResNet-18 as the backbone, batch size is set to 32, total training epochs are 30, and the amount of labeled data is 1K), and conduct each experiment on four 3090 graphics cards (with each containing 24 GB memory) to compare the training time of our method with that of PoseCons and PoseDual. The strong augmentation used by PoseCons or PoseDual is $T_{JC}$. Considering that our method often uses different strong augmentations, their computation is not the main bottleneck. Therefore, in order to be fair, all strong augmentations in our method are also replaced into $T_{JC}$. Assuming that the total training time of PoseCons is one unit time T0, which is actually about 7 hours. 
Then the total training time of running other methods is summarized as follows.\\n\\n```\\n-----------------------------------------------------------------------------------------------------\\nMethod\\t| PoseCons\\t| Ours(Single,2#)\\t| Ours(Single,3#)\\t| Ours(Single,4#)\\t| \\n-----------------------------------------------------------------------------------------------------\\nTime\\t| T0\\t\\t| 1.36 * T0\\t\\t| 1.50 * T0\\t\\t| 1.83 * T0\\t\\t|\\n-----------------------------------------------------------------------------------------------------\\nMethod\\t| PoseDual\\t| Ours(Dual,2#)\\t\\t| Ours(Dual,3#)\\t\\t| Ours(Dual,4#)\\t\\t|\\n-----------------------------------------------------------------------------------------------------\\nTime\\t| 2.49 * T0\\t| 2.62 * T0\\t\\t| 2.88 * T0\\t\\t| 3.14 * T0\\t\\t|\\n-----------------------------------------------------------------------------------------------------\\n```\\n\\nwhere an integer with the marker # in our method means how many multi-path losses are used. From these results, we can see that when using four-path losses, although the training time increases, it is still faster than PoseDual (1.83\\\\*T0 vs. 2.49\\\\*T0). Referring to the quantitative results in Table 3 of the main paper, our method based on single-network using four-path losses achieves higher mAP than PoseDual. In addition, when using dual networks with four-path losses, the total training time does not increase significantly (2.49\\\\*T0 vs. 3.14\\\\*T0). These indicate that our method is both efficient and effective. We have added these analyses in **Appendix A.5**.\\n\\n***\\n***Q3: The significance of Figure 7***\\n\\n**A3:** Although Table 3 has shown the quantitative comparison results, we still cannot intuitively and quickly perceive the performance differences between different methods under different annotation rates from these plain numbers. Therefore, we considerately visualize these mAP values in Figure 7. 
From it, we can easily see that our method not only obtains leading results, but also achieves a more obvious leading advantage when the data annotations are scarcer. This property is the most important and valuable characteristic of semi-supervised algorithms.\\n\\n***\\nYour feedback has been invaluable in helping us improve the clarity and comprehensiveness of our research. We are committed to addressing these points and enhancing the overall quality of our paper.\"}", "{\"comment\": [\"I appreciate empirical studies and greatly value the additional experiments included in the response. Some issues have been resolved. However, based on other review and my own understanding, I feel that while I could raise my score, the paper still requires reorganization and further polishing. Especially,\", \"For mixup, domain gaps do exist in the real world (e.g., daytime vs. nighttime, sunny vs. rainy), even these gaps may be smaller than those in syn-to-real situations.\", \"For basic augmentations including rankings, combinations and parameter selections, due to so many variations, simply providing specific examples in this paper is not sufficient and convincing. General tools and guidelines might be more important, and should be added and highlighted in the very beginning of the main paper.\"]}", "{\"summary\": \"The paper proposes a data augmentation method with the consistency training strategy to improve the performance of semi-supervised 2D human pose estimation,\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.This paper identifies that the existing SSHPE methods lack rigor in ranking the difficulty levels of applied data augmentations and discovers synergistic effects among different augmentations. It proposes a more rigorous difficulty ranking for data augmentations.\\n2.This paper provides a comprehensive evaluation of existing advanced data augmentation methods. 
Rather than designing new augmentation techniques, the paper employs rule-based constraints to combine existing augmentations, nominating the most likely superior combinations: TJOCO and TJCCM.\\n3.A novel multi-path approach that applies multi-path augmentations and multi-path loss training to a single network can surpass certain dual network.\", \"weaknesses\": \"1.The introduction of new data augmentation requires a re-evaluation of the selection of the optimal augmentation combination.\\n2.The combined data augmentation will increase the training time. Is this time consumption still lower than that of stacked networks? \\n3.Different combinations may yield varying performance across datasets, so can the selected optimal combination consistently provide the best results?\\n4.The selection process for the optimal augmentation combination is time-intensive.\\n\\n5. Although the paper provides a ranking of data augmentation techniques, the criteria for evaluating the difficulty levels of augmentations appear somewhat heuristic and may lack a strong quantitative foundation.\\n\\n6. The paper lacks experiments using only the multi-path loss, making it difficult to determine the individual performance improvements contributed by the multi-path loss and the two new data augmentations.\\n\\n7. The paper does not clearly explain the rationale for choosing the TJOCO and TJCCM data augmentations.\", \"questions\": \"Please see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"***W4: Parameters of each augmentation***\\n\\n**R4:** Thanks a lot for pointing out this important detail. The hyper-parameters involved in each augmentation are indeed important. In order to make a fair comparison, each basic augmentation we selected is derived from various compared methods without additional fine-tuning. 
For example, the parameters of Joint Cutout are the same as those in PoseDual which used JC5, and the parameters of Joint Cut-Occlude are the same as those in SSPCM which used JO2. We list these parameters in **Appendix A.4** so that readers can quickly and clearly know these details.\\n\\n***\\n***W5: Clearer ablation studies***\\n\\n**R5:** Thank you for raising this concern. In fact, when we were conceiving the overall framework of this paper, we also struggled with the issue of how to present the ablation experiments. After much deliberation, we finally decided to present the ablation results of our proposed augmentation combination and multi-path loss in advance in Sections 3.1 and 3.2 as quantitative data for empirical studies, so that readers can quickly understand and get into the two major themes, data augmentation and consistency training. Therefore, the results in Tables 1 and 2 are essentially ablation studies. To some extent, the results in Table 3 also have the same effect, showing the comparison results of different network structures and label annotation rates. In Section 5.3, we present and compare additional important components, including other similar augmentation combinations and different training techniques. We hope that this arrangement can satisfy most readers and help them capture the key points.\\n\\n***\\n***W6: Showing qualitative results***\\n\\n**R6:** Thank you for your sincere suggestion. We have added qualitative visualization comparison results in **Appendix A.3**, mainly including the conventional human images from the COCO val-set and the fisheye camera images from the WEPDTOF-Pose dataset. We expect these results will make our advantages more intuitively demonstrated.\\n\\n***\\n***W7: The multi-path consistency loss***\\n\\n**R7:** This is indeed a good question. 
In fact, when designing the ablation experiments with PoseCons using a single-path loss, we chose a fixed batch size 32 to perform all experiments (all using backbone ResNet-18). All relevant results can be found in Table 1. When using multi-path consistency losses, we still set the batch size to 32, including the two-path losses (including $m_{1} \\\\sim m_{7}$ and $m_{9} \\\\sim m_{11}$) and four-path losses (including $m_{8}$ and $m_{12}$) in Table 2. Therefore, in final comparative experiments (see Table 3), we still keep the batch size as 32 and use the optimal four-path losses. Now, in order to investigate the possible impact of different batch sizes, we report the effects of PoseCons and PoseDual when the batch size is 128.\\n\\n```\\n-----------------------------------------------------------\\nMethod\\t\\t| Nets.\\t| Losses | BS | 1K | 5K | 10K \\n-----------------------------------------------------------\\nPoseCons\\t| 1\\t| 1\\t | 32 | 42.1 | 52.3 | 57.3 \\nPoseCons\\t| 1\\t| 1\\t | 128 | 42.3 | 52.6 | 57.5 \\nPoseDual\\t| 2\\t| 1\\t | 32 | 44.6 | 55.6 | 59.6 \\nPoseDual\\t| 2\\t| 1\\t | 128 | 44.9 | 58.7 | 59.6 \\nOurs (Single)\\t| 1\\t| 4\\t | 32 | 45.5 | 56.2 | 59.9\\nOurs (Dual)\\t| 2\\t| 4\\t | 32 | 49.7 | 58.8 | 61.8\\n-----------------------------------------------------------\\n```\\n\\nAs can be seen, after increasing the batch size of PoseCons or PoseDual accordingly, the final mAP results under different labeling rates (e.g., 1K, 5K and 10K) did not get significantly better. 
This indicates that batch size does not have a large impact on the performance of existing methods.\\nWe also conducted additional experiments to address another concern, namely whether to use a single constant easy augmentation as input for multi-path losses (the pair {$I_e$} + {$I_{h_1},...,I_{h_n}$}, termed as 1-vs-n) or to use different easy augmentations multiple times as input (the pair {$I_{e_1}$,...,$I_{e_n}$} + {$I_{h_1}$,...,$I_{h_n}$}, termed as n-vs-n).\\n\\n```\\n----------------------------------------------------------------------\\nMethod\\t\\t| Nets.\\t| Losses | BS | Input\\t | 1K | 5K | 10K \\n----------------------------------------------------------------------\\nOurs (Single)\\t| 1\\t| 4\\t | 32 | 1-vs-n | 45.5 | 56.2 | 59.9\\nOurs (Single)\\t| 1\\t| 4\\t | 32 | n-vs-n | 45.6 | 56.4 | 59.8\\nOurs (Dual)\\t| 2\\t| 4\\t | 32 | 1-vs-n | 49.7 | 58.8 | 61.8\\nOurs (Dual)\\t| 2\\t| 4\\t | 32 | n-vs-n | 49.7 | 58.9 | 61.9\\n----------------------------------------------------------------------\\n```\\n\\nAs shown in the table above, whether using 1-vs-n augmented input or n-vs-n augmented input, the final mAP results obtained under various labeling rates are not significantly different. This is mainly because the easy augmentation used is always fixed (e.g., $T_{A30}$), so the input does not change in essence. We have added these additional ablation studies in **Appendix A.5**.\\n\\n***\\n\\n**Continue in the next comment.**\"}
We think there are two possible reasons: fewer training epochs and fewer network parameters. In fact, after submitting the paper, we trained another 400 epochs according to this setting (based on the backbone HRNet-w48, setting the batch size to 16 and using 8 A100s with 80GB each), which took about 20 days. The final mAP and mAR values were 77.4% and 82.3%, respectively, which still did not exceed SSPCM (77.5% and 82.3%). We could not find a better explanation, and can only conjecture that SSPCM uses a triple-network framework to bring better generalization on the COCO test-dev that has never been seen during training.\\n\\nThis phenomenon is indeed puzzling, considering that our results in other settings are significantly better than SSPCM (please refer to Tables 3, 4, 8 and 9). To this end, we tried to contact the authors of SSPCM to seek their final trained model weights based on HRNet-w48 for self-testing and comparison, but we have not received a response so far. And their public training code does not contain the details of this part.\\n\\n***\\n***W4: Performance on the test-set***\\n\\n**R4:** Similar to our explanation in W2, it is reasonable to achieve a small absolute performance improvement on COCO test-dev where the accuracy indicator is close to saturation when using the most powerful backbone HRNet-w48. In fact, the semi-supervised HPE methods shown in Table 5 performed almost the same, which makes it difficult to significantly and objectively distinguish the advantages and disadvantages of each method. Nevertheless, we present it here for complete comparison with previous methods. Considering that the main purpose of this paper is to use semi-supervised algorithms to try to improve network performance when labels are scarce, we believe that the setting based on smaller-scale labeled data can more sensitively reflect the advantages of an SSHPE method. 
Please refer to the results in Tables 3 and 9, which contain experiments under very small annotation rates.\\n\\n***\\n**W5: Instructions on dataset setup in S1**\\n\\n**R5:** Firstly, for the dataset setup in setting S1, we followed the previous work PoseDual in order to maintain fairness and reproducibility. Specifically, we select data from the first 1K, 5K, and 10K samples of the training dataset as the labeled set, and the remaining samples are used as the unlabeled set. The randomness here just follows the data representation in PoseDual. To avoid ambiguity, we have corrected the statement in our main paper (marked in red color in Section 5.1). We are sorry that this oversight caused confusion to the reviewers.\\n\\nAs additional supplementary information, when implementing the code, we can set the number of labeled samples as TRAIN_LEN in the configuration file, and directly select the first TRAIN_LEN samples when initially loading the training dataset. You can refer to the original PoseDual code to confirm this in https://github.com/xierc/Semi_Human_Pose/blob/master/lib/dataset/coco.py#L138 \\n\\nSecondly, we totally agree that the conclusions drawn from testing and evaluation on small-scale data may not necessarily be generalized to other datasets. Therefore, we repeated the comparison in setting S1 and Table 3 by replacing the COCO dataset with the MPII dataset. Specifically, we conducted experiments using the first 1K samples as labeled data and the remaining 39K samples as unlabeled data in MPII. The validation set of MPII is used for evaluation. The backbone is ResNet-18. 
The final comparison results are shown below.\\n\\n```\\n---------------------------------------------------------------------------------\\nMethods \\t| Hea \\t| Sho\\t| Elb\\t| Wri\\t| Hip\\t| Kne\\t| Ank\\t| Total\\n---------------------------------------------------------------------------------\\nSupervised\\t| 89.6\\t| 84.8\\t| 72.0\\t| 58.4\\t| 57.8\\t| 49.4\\t| 41.2\\t| 65.3 \\nPoseCons\\t| 92.7\\t| 87.6\\t| 74.5\\t| 67.9\\t| 72.3\\t| 64.2\\t| 59.4\\t| 75.2\\nPoseDual\\t| 93.3\\t| 88.4\\t| 75.0\\t| 67.3\\t| 72.6\\t| 65.3\\t| 59.7\\t| 75.6\\nSSPCM\\t\\t| 93.5\\t| 90.6\\t| 80.2\\t| 71.3\\t| 75.9\\t| 68.9\\t| 62.3\\t| 78.3\\nOurs (Single)\\t| 94.1\\t| 91.1\\t| 80.5\\t| 72.2\\t| 76.3\\t| 69.2\\t| 62.8\\t| 79.1\\nOurs (Dual)\\t| 94.7\\t| 92.4\\t| 81.2\\t| 73.3\\t| 76.8\\t| 70.6\\t| 63.9\\t| 79.7\\n---------------------------------------------------------------------------------\\n```\\n\\nNot surprisingly, our method still maintains a clear lead in performance, both in terms of overall accuracy and the specific accuracy of each joint. These experiments once again demonstrate that our method is indeed universally effective and superior across different datasets. We have updated these results in our revised paper in **Appendix A.2**.\\n\\n***\\n**Continue in the next comment.**\"}", "{\"summary\": \"The authors propose a method for boosting 2D human pose estimation performance in a semi-supervised setting. Towards this, the authors propose two cores: advanced data augmentation and concise consistency training ways. They hypothesize that these two cores help the model\\u2019s interpretability, benefitting from the augmentations, and aid it in performing superior to SoTA semi-supervised approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The authors propose a simple and seemingly effective way to improve 2d human pose estimation from RGB images using advanced data augmentation combinations and a multi-path augmentation framework for training a single network.\\n2. Table 3: the proposed approach outperforms other baselines showing its effectiveness on the COCO val set with different labeled train set sizes.\\n3. Table 4: the proposed approach outperforms other baselines when using the Resnet50 and ResNet101 architectures showing its effectiveness on the COCO val set when using the entire COCO labeled train set with unlabeled wild set for training.\\n4. Table 9: Results on WEPDTOF-Pose which is an indoor dataset indicate that the proposed approach may be generalizable across datasets.\", \"weaknesses\": \"1. While the overall approach seems to work somewhat effectively, the contribution in the novelty aspect is limited. The authors propose a method to combine the different augmentations to create sensible hard augmentations but the takeaways are generalized in nature and seem obvious. For example, selecting combining augmentations in Sec 3.1 does not seem to take out-of-the-box thinking and may be deduced from running a combination of experiments.\\n2. Table 4: while the gains are noticeable when using ResNet as the backbone, the same cannot be said for HRNet as the backbone. This raises concerns regarding the effectiveness of using the proposed augmentations and approach. Can the authors justify why choosing a different architecture may not result in a noticeable performance boost? The current results indicate the proposed approach may be architecture-dependent.\\n3. Sec 5.2 S3 \\u2192 is there a reason the authors do not train for 400 epochs if it is the only other thing besides the number of networks presenting their method from performing not as well as SSPCM?\\n4. Table 5: the proposed approach performs similarly to DUAL on the test set. 
This raises concerns over the generalizability of the method.\\n5. Sec 5.2 S1 \\u2192 Superior performance on 1 dataset for a small number of samples may not necessarily generalize to using small train sets from other datasets. Is it possible to show results on other datasets, e.g. MPII (which is already evaluated), H3.6, or LSP? Additionally, can the authors justify this statement? The authors state in Dataset Setup that the small subsets of 1K, 5K, and 10K were randomly selected. From the reported numbers, the corresponding baselines do not seem to be trained on the same selected labeled set. It may then be likely (not certainly) that the reported numbers for the proposed approach may be biased towards the evaluation set because of the selected subset for training.\\n6. S5 Table 8: While the overall performance is better by 0.1 (not significant), the method outperforms 4 of 7 joints (as opposed to claiming all).\", \"questions\": \"1. The authors need to explain how they divide the different sections of Table 5. This is currently unclear from the table.\\n2. What is the effective training time of the proposed approach under different conditions/networks?\\n3. What is the significance of Fig.7 when Table 3 already presents the same results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Dear Reviewer z46M,\\n\\nThank you for your inspiring affirmation, detailed review and insightful queries regarding our paper. Your feedback is invaluable in improving our work. Below, we address each of your points to clarify our methodology and findings.\\n\\n***\\n***W1: What if a new augmentation is introduced?***\\n\\n**R1:** We acknowledge that for a newly added basic augmentation, some evaluation is still required to select possible advanced augmentation combinations. 
However, as described in the previous responses to W3 raised by `Reviewer 4hx9` and W1 raised by `Reviewer r3Sz`, thanks to the proposed paradigm for generating superior augmentations in this paper, we can simplify and speed up this process, requiring only a small amount of work.\\n\\nSpecifically, for the newly added basic augmentations, we first need to evaluate and rank their difficulty to get a preliminary understanding of their effects and usability. Then, we can roughly judge which augmentations are mutually exclusive or synergistic based on experience. For example, occlusion of keypoints (e.g., $T_{JC}$ and $T_{JO}$) and randomly generated occlusion (e.g., $T_{CO}$ and $T_{CM}$) are essentially synergistic. We can also roughly classify and filter through singular value decomposition (SVD) analysis, such as that shown in Figure 11. After that, we can confidently recommend new and stronger augmentation combinations based on the three concise criteria proposed in Section 3.1, including not using MixUp, not repeating the same type of augmentation, and not stacking too many augmentations.\\n\\nFinally, let's give an additional example. For instance, in Section 5.1, we introduced a new basic augmentation, YOCO (You Only Cut Once), which is based on other existing augmentations such as RandAugment or TrivialAugment. We have actually evaluated its basic effect in Table 6, showing that it has roughly the same difficulty as $T_{JC}$ or $T_{JO}$. Then, considering that YOCO's operation is to crop and re-stitch the same image, it is a completely new type in form. Therefore, it is natural to consider combining it with the strongest augmentation $T_{JOCO}$ or $T_{JCCM}$ that we have already introduced, that is, to get a better combination $T_{JOCO+YOCO}$ or $T_{JCCM+YOCO}$. 
The process is neither very time-consuming nor difficult to follow, so it has the potential to be widely adopted for unsupervised data augmentation generation.\\n\\n***\\n***W2: Training time after using augmentations combination***\\n\\n**R2:** Thanks for pointing out this important detail. In fact, the combination of different augmentations will eventually result in one augmentation after continuous execution, and will not significantly increase the overall time. The total time consumption is related to the basic augmentation itself. For example, the time consumption of $T_{JCCM}$ or $T_{JOCO}$ is similar to that of $T_{JC}$ or $T_{JO}$.\\n\\nThe design that actually increases the training time is another strategy we proposed, the multi-path consistency loss. We have quantitatively calculated and compared the time cost of different numbers of augmentation paths in our reply to Q2 raised by `Reviewer r3Sz`. Please refer to it for more details. In short, the final conclusion is that the design of the multi-path loss will not significantly prolong the training time, but will bring great performance improvements. It is a simple and efficient strategy for addressing the SSHPE problem.\\n\\n***\\n***W3: Stability of optimal augmentations combination***\\n\\n**R3:** This is also an important issue. In our response to W5 raised by `Reviewer r3Sz`, we newly added a detailed comparative experiment on the MPII dataset following the setting S1, and the trend of the final results is consistent with the conclusions on the COCO dataset in Table 3. Our method still achieved clear advantages whether using a single network or dual networks. We have updated these results in our revised paper in **Appendix A.2**. 
These results further prove that the optimal augmentation combination and multi-path consistency loss we proposed are better and can continue to achieve leading results across different datasets.\\n\\n***\\n**Continue in the next comment.**\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Dear Reviewer 4hx9,\\n\\nWe greatly appreciate your thorough review, constructive feedback and praise for the groundbreaking aspects of our paper. Your points have helped us identify areas for clarification and improvement. Please find our responses to your queries below.\\n\\n***\\n\\n***W1: Why not using MixUp?***\\n\\n**R1:** First, the MixUp operation is to superimpose and mix two images of the same size pixel by pixel, and the global mixing ratio $\\\\alpha$ is randomly generated in advance. We summarize this process as ${Img} = \\\\alpha*{Img}_1 + (1-\\\\alpha)*{Img}_2$. Global here means the entire image is superimposed, using a predetermined blending ratio value.\\n\\nIn addition, MixUp is indeed used in heatmap-based methods like [R1] and performs well. We think that the main emphasis is on using MixUp to alleviate the problem of domain discrepancies in the domain transfer process of sim2real. Making it as difficult as possible for the model to distinguish between the source domain and the target domain is a common practice in the field of unsupervised domain adaptation from synthetic to real data. Therefore, it is reasonable that MixUp may be helpful in [R1] after mixing synthetic images and real images as input. However, our paper does not emphasize the domain adaptation setting. In each independent experiment, all labeled and unlabeled images used are from the real world and there are no obvious inter-domain differences. 
At this point, we lack a theoretical basis for using the MixUp operation, and our related experiments in Figure 2 and Table 1 also verify that MixUp is inappropriate for solving the SSHPE problem.\\n\\n***\\n\\n***W2: Modification of Table 1***\\n\\n**R2:** Thank you for your suggestion. We have added the data of these three base augmentations in Table 1 to facilitate quick comparison. At the same time, the corresponding convergence curves in Figure 4 have also been updated.\\n\\n***\\n\\n***W3: Principles for combining augmentations***\\n\\n**R3:** First, we need to rank the difficulty of some well-known basic augmentations, which we have already shown in Figure 2 and the first paragraph of Section 3.1. We also provide a theoretical analysis of the superior augmentation in **Appendix A.1** as an explanation support. Next, we need to recognize the synergy between basic augmentations, that is, which and how many base augmentations to combine have general rules to follow. We provide an intuitive explanation in the second paragraph of Section 3.1, and its quantitative verification is shown in Table 1 and Figure 4. In practice, when combining different basic augmentations, we summarized three simple operating principles by integrating the above two pieces of information. These are introduced in the third paragraph of Section 3.1, and there are sufficient empirical experiments in the fourth paragraph to support the feasibility of these principles. These experiences guide us to quickly find the optimal augmentation combination instead of experimenting with all possible combinations one by one. It should be noted that the empirical researches involved here are not the essential reason for deducing optimal combinations, but for quantitative verification.\\n\\nFor $T_{JC}$ and $T_{CO}$, namely Joint Cutout and trivial Cutout, they both generate several small square zero-value masks/patches in the image. 
The difference is that the former mainly generates occlusion around the keypoint area of the human body, while the position of the latter is random. Obviously, the former will bring greater challenges to the keypoint prediction task because of more frequent occlusion. The same explanation applies to $T_{JO}$ and $T_{CM}$, except that these patches of them are cropped from another image instead of using zero values.\\n\\nFinally, for a newly added augmentation, we only need to repeat the first step, that is, to determine its difficulty ranking among basic augmentations. Then we can get a better augmentation combination according to the three principles proposed. For example, we have verified in Table 6 that the augmentation YOCO can produce synergistic effects with RandAugment or TrivialAugment. Then we can rank it, and recommend new and better augmentation combinations, such as $T_{JOCO+YOCO}$ and $T_{JCCM+YOCO}$. These are essentially similar to the procedures we have demonstrated when selecting out $T_{JOCO}$ and $T_{JCCM}$, so the experiments are not repeated in our main paper due to space limitations and minor significance.\\n\\n***\\n\\n**Continue in the next comment.**\"}", "{\"title\": \"Response to Authors\", \"comment\": \"\\u200b\\u200bI thank the authors for their detailed responses. While most of my concerns are wholly or partially resolved, the major concern about proposing the solution or formulating the problem in a novel aspect as explained in my responses to authors remains. I have slightly raised my score based on the authors\\u2019 responses to experimental concerns.\\n\\n1. W1R1: The concern is about the novelty of designing a method as opposed to running a set of experiments for hyperparameters search.\\n\\n2. W3R3: While I understand that a code inference might help with the quantitative reproduction of results, it may not help in determining why their network performs better. 
As the authors claim to design an efficient network performing on par with other SSHPE methods, the comparison has to be made accordingly. The current performance does not reflect the claims and no clear explanation is available.\\n\\n3. W2R2, W4R4, W5R5, and W6R6: The responses resolve my concern.\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"***W4: Selection process of augmentations combination***\\n\\n**R4:** In fact, in the field of semi-supervised learning, it is not easy to discover or invent new advanced augmentations. We can find that there is still a lot of research exploring this issue to date; please refer to the third paragraph of Section 2 and the second paragraph of Section 5.3 of the main text. Under this consensus, this paper attempts to propose a standardized paradigm to produce strong augmentations, rather than just proposing a new one and ending it. Our approach is to continue to use the new basic augmentations proposed by the community to further obtain a better one. Therefore, although the advanced augmentation generation process described in Section 3.1 seems complicated, it provides a feasible route and can help save trial and error costs in practice. If we can eventually find a superior augmentation at this cost, we think it is worthwhile and meaningful.\\n\\n***\\n***W5: Quantitative analysis of augmentation difficulty ranking***\\n\\n**R5:** This is indeed an issue worth exploring in depth. As pointed out in our paper, we initially found that the approach in PoseDual was to augment the images directly on the test set. Then, in order to reflect the difficulty of different augmentations, they evaluate the same model when facing different types of augmented inputs, and record the severity of the drop in accuracy indicators. 
The obvious drawback of this strategy is that if we directly input a noisy image, the final accuracy will be close to 0, but such augmentation is not desirable.\\nTherefore, our approach of utilizing ablation experiments on the training set is more intuitive and convincing. Although it seems rather heuristic, the quantitative data comparison shown in Tables 1 and 2 provides us with indisputable assurance and confidence in proposing a new superior augmentation. We also confirmed in the experimental part that this paradigm is indeed effective.\\n\\nIn addition, we also try to give a theoretical explanation for why different augmentations have different levels of difficulty in **Appendix A.1**. From the perspective of singular value decomposition (SVD), we find that the so-called advanced augmentation is more difficult because the average entropy of its singular values is higher, which means that after applying the corresponding augmentation, the augmented input may span a larger feature space, thereby helping to improve the generalization of the model. We hope that this explanation can help make up for the theoretical foundation of the proposed strategy in Section 3.1.\\n\\n***\\n***W6: Ablation studies of the multi-path loss***\\n\\n**R6:** This is indeed an important concern. In fact, we have described in detail the effectiveness verification of the multi-path consistency loss strategy in the last two paragraphs of Section 3.2. For more detailed results, please refer to Table 2 and Figure 3. Among them, schemes $m_9$ and $m_{10}$ show the experiments of using the same strong augmentation to calculate two-path losses. 
After comprehensive comparison with scheme $m_{11}$, it can be found that different multi-path strong augmentations also have synergistic effects, which perform better than using only a single type of augmentation as in schemes $m_9$ and $m_{10}$.\\nIn addition, by comparing some experiments in Table 1 and Table 2, such as $c_{10}$-$m_6$, $c_{11}$-$m_7$, $c_{12}$-$m_4$ and $c_{13}$-$m_5$, we can also find that the multi-path loss itself has a positive effect, which can help alleviate the negative impact of excessive accumulation of multiple basic strong augmentations.\\n\\n***\\n***W7: Reasons for selecting two optimal augmentations***\\n\\n**R7:** The reasons for choosing $T_{JOCO}$ and $T_{JCCM}$ can be found in the second and third paragraphs of Section 3.1. Specifically, first, we focus on the HPE task in this paper, and it is not beneficial to choose MixUp-related augmentations. For specific explanations, please refer to our response to W1 raised by `Reviewer 4hx9`. Secondly, considering the synergistic effects between different augmentations, we tend not to apply the same type of augmentation to the image. According to this criterion, we can exclude the combinations $T_{COCM}$, $T_{JCCO}$, $T_{JOCM}$ and $T_{JOJC}$, leaving only $T_{JOCO}$ and $T_{JCCM}$. Finally, an intuitive idea is that it is better not to apply too many augmentations to an image, otherwise it may backfire and destroy the semantic information in the image. In extreme cases, it may directly make the image unrecognizable or meaningless. Therefore, we do not recommend stacking too many augmentations. These empirical conclusions or criteria are finally quantitatively verified in Table 1 and Figure 2.\\n\\n***\\nWe hope these responses address your concerns and provide a clearer understanding of our approach and its capabilities. 
We are committed to making the necessary revisions to reflect these clarifications and enhance the overall quality of our paper.\"}", "{\"metareview\": \"This paper received five reviews, and the authors submitted a response addressing the queries raised. While two reviewers shifted to a positive stance after the rebuttal, albeit with some reservations, the others retained their initial ratings, which were largely below the acceptance threshold. Key concerns included the paper's marginal improvements over existing work and the lack of essential experiments for fair comparisons.\\n\\nThe overall consensus is that the paper requires significant revisions to meet publication standards. 
After careful consideration, the Area Chair panel decided not to accept the paper in its current form. We encourage the authors to address the reviewers' feedback thoroughly and submit a stronger, more comprehensive version in the future.\", \"additional_comments_on_reviewer_discussion\": \"Post rebuttal reviewers 4hx9 and r3Sz raised the score to 6 reluctantly pointing out that this work lacks novel formulation and requires major re-organization/polishing. Whereas other reviewers iqhT and z46M were not satisfied with the response and retained their negative score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the efforts in the response. However, the rebuttal cannot address most of my concerns and I am willing to keep my final rating.\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Dear Reviewer TvMD,\\n\\nThank you for your high recognition, valuable feedback and insightful comments on our paper. We appreciate the opportunity to clarify and address your concerns. All our responses are as follows.\\n\\n***\\n\\n***W1: Effectiveness under a single network***\\n\\n**R1:** In this paper, we mainly use single-network to demonstrate the efficiency and effectiveness of the proposed optimal augmentations combination and multi-path loss strategies. In the experimental comparison, the results involving a single network are mainly concentrated in Tables 3, 4, and 9. When there is only a single network, our method always outperforms the baseline method, namely PoseCons. Please compare line 4 vs. line 8 in Table 3, and line 2 vs. line 5 in Table 9. Even when the baseline method PoseDual uses dual networks, our single network based results are still much better. Please compare line 5 vs. line 8 in Table 3, lines 2/8/14 vs. lines 5/11/17 in Table 4 and line 3 vs. line 5 in Table 9. 
These advantages based on a single network clearly demonstrate the effectiveness of our core method design.\\n\\nWhen we upgrade the proposed method to a dual network structure, the results are naturally better, indicating that it has become the dominant approach. And compared with SSPCM using triple networks, our dual-network approach still has obvious advantages in most cases, while the single-network approach also shows comparable performance.\\n\\n***\\n\\n***W2: Performance with using ResNet-50***\\n\\n**R2:** We have shown the comparison results based on ResNet-50 in Table 4, mainly including methods PoseDual, SSPCM and Pseudo-HMs. Actually, for a fair comparison, we follow the experimental settings in these compared methods and keep using ResNet-18 in Table 3 for convenient ablation researches, HRNet-w48 in Table 5 to compare performance limits, and HRNet-w32 in Tables 7 and 8 to highlight the best performance.\\n\\nIn addition, we strongly agree that reporting the comparison results based on the more important ResNet-50 is more convincing than ResNet-18. Therefore, we replaced the backbone in Table 3 with ResNet-50 according to setting S1 and re-conducted the comparative experiments. The results are as follows.\\n\\n```\\n------------------------------------------------------\\nMethod\\t\\t| Nets.\\t| 1K | 5K | 10K | All\\n------------------------------------------------------\\nSupervised\\t| 1\\t| 34.8 | 50.6 | 56.4 | 70.9\\t\\nPoseCons\\t| 1\\t| 43.1 | 57.2 | 61.8 | ---\\nPoseDual\\t| 2\\t| 48.2 | 61.1 | 65.0 | ---\\t\\nSSPCM\\t\\t| 3\\t| 49.8 | 61.8 | 65.5 | ---\\nOurs (Single)\\t| 1\\t| 49.3 | 61.4 | 65.2 | ---\\nOurs (Dual)\\t| 2\\t| 51.7 | 62.9 | 66.3 | ---\\n------------------------------------------------------\\n```\\n\\nAs shown in the table, similar to using ResNet-18, our method can still achieve a clear advantage. When using a single network, our method outperforms PoseCons and Posedual, while being comparable to SSPCM. 
And our dual-network-based approach achieves significant advantages. We have updated our paper with these additional results in **Appendix A.2**.\\n\\n***\\n\\n***W3: Marginal improvement on COCO***\\n\\n**R3:** In our paper, the settings S2 and S3 are based on the labeled COCO train-set and the unlabeled COCO wild-set. For setting S2, we validate models on the COCO val-set, and report results in Table 4. Our approach has achieved not-insignificant advantages. Of course, the performance improvement gradually decreases as the number of parameters and sophistication of the backbones used grows (e.g., from ResNet-50 to ResNet-101 and HRNet-w48). This phenomenon also holds for the previously compared methods such as PoseCons, PoseDual, SSPCM and Pseudo-HMs.\\n\\nFor setting S3, we validate models on the COCO test-dev, and report results in Table 5. The same explanation applies to this similar phenomenon. It should be noted that some SOTA supervised learning methods (e.g., UDP and ViTPose) in Table 5 also achieved similar performance, which indicates that the accuracy on this dataset is suspected to be saturated. These situations actually prevent us from objectively evaluating different methods on it.\\n\\nAs an alternative, we recommend evaluating the pros and cons of different SSL methods by looking at the significant performance differences on human images with low annotation rates (see Table 3) or captured using unconventional fisheye cameras (see Table 9).\\n\\nFor similar concerns, please refer to our responses to W2/W3/W6 raised by `Reviewer r3Sz`.\\n\\n***\\n\\nWe hope these clarifications address your concerns and contribute to a better understanding of our work. 
We are committed to making the necessary revisions in our paper to reflect these clarifications and enhance its overall quality.\"}", "{\"title\": \"More New Experiments on Semi-Supervised Human Hand Keypoints\", \"comment\": \"Dear Area Chairs and Reviewers,\\n\\nRecently, in order to further verify the wide applicability and superiority of our proposed method MultiAugs, we have conducted comparative experiments on the human hand keypoint detection task which is very similar to the SSHPE task for the human body. The final conclusion is still impressive. These experimental results can serve as a strong supplement to the responses to `reviewer TvMD` in **W3R3**, `reviewer iqhT` in **W3R3**, `reviewer r3Sz` in **W3R3** and **W4R4**, and `reviewer z46M` in **W3R3**. Please consider it as an additional reference.\\n\\nSpecifically, we selected relevant human hand images and corresponding keypoint annotations from the COCO-WholeBody dataset [R1] as the annotated dataset (where the number of hand keypoints is 21), and obtained a train-set and a val-set containing approximately 76K and 3.8K samples, respectively. In addition, we used the BPJDet detector [R2] to extract approximately 118K samples from COCO wild-set as the unlabeled dataset. Then, we repeated setups **S1** and **S2** in the experimental setting and obtained results similar to those in Tables 3 and 4, respectively. 
These completely new experiments and conclusions are as follows.\n\n***\n```\n------------------------------------------------------\nMethod\t\t| Nets.\t| 1K | 5K | 10K\n------------------------------------------------------\nSupervised\t| 1\t| 33.4 | 38.9 | 41.9\t\nPoseCons\t| 1\t| 52.1 | 57.4 | 59.4\nPoseDual\t| 2\t| 55.9 | 60.1 | 62.0\nSSPCM\t\t| 3\t| 58.8 | 62.4 | 64.5\nOurs (Single)\t| 1\t| 56.3 | 60.9 | 64.1\nOurs (Dual)\t| 2\t| 62.8 | 65.5 | 67.4\n------------------------------------------------------\n```\nFirst, we still used ResNet18 as the backbone in setup **S1**, and then conducted comparative experiments on the methods SimpleBaseline, PoseCons, PoseDual, SSPCM, and the proposed MultiAugs. As shown in the table above, when using a single-network structure, our method is still significantly better than PoseCons using a single network or PoseDual using two networks. When we use a dual-network structure, our method is significantly better than SSPCM using a triple network. 
These results and the trends they reveal are consistent with Tables 3, 9, 10, and 11 shown in our paper.\\n\\n***\\n```\\n----------------------------------------------------------\\nMethod\\t\\t| Backbone\\t| Nets\\t| AP\\t| AR \\n----------------------------------------------------------\\nSupervised\\t| ResNet50\\t| 1\\t| 62.1\\t| 74.9\\nPoseDual\\t| ResNet50\\t| 2\\t| 65.9\\t| 78.3\\nSSPCM\\t\\t| ResNet50\\t| 3\\t| 66.3\\t| 78.8\\nOurs (Single)\\t| ResNet50\\t| 1\\t| 66.8\\t| 79.2\\nOurs (Dual)\\t| ResNet50\\t| 2\\t| 67.4\\t| 79.7\\n----------------------------------------------------------\\nSupervised\\t| ResNet101\\t| 1\\t| 64.5\\t| 76.8\\nPoseDual\\t| ResNet101\\t| 2\\t| 68.0\\t| 80.2\\nSSPCM\\t\\t| ResNet101\\t| 3\\t| 69.1\\t| 81.1\\nOurs (Single)\\t| ResNet101\\t| 1\\t| 69.9\\t| 81.7\\nOurs (Dual)\\t| ResNet101\\t| 2\\t| 71.8\\t| 83.3\\n-----------------------------------------------------------\\n```\\nThen, we followed the setup **S2**, and conducted experiments on COCO-WholeBody by using the annotated train-set as the labeled set and the COCO wild-set as the unlabeled set. Due to time constraints, we only performed various experiments under backbones ResNet50 and ResNet101 and did not have time to perform tests on HRNet-w48. The quantitative results on the val-set are summarized in the table above. From these results, we can find that our method still has an undoubted advantage, which is consistent with the phenomenon shown in Table 4.\\n\\nThe above experimental results show that the advanced augmentation combination and multi-path consistency loss strategy we proposed is indeed sustainable, effective and easy to promote. For example, for keypoint detection tasks, whether it is human body or hand, MultiAugs has reliable transferability and great potential in versatility. 
We expect that these experiments can further help demonstrate the core contribution of this paper.\\n\\nBest Regards,\\n\\nAll Authors\\n\\n***\\n\\n**References**\\n* *[R1] Whole-Body Human Pose Estimation in the Wild, ECCV2020*\\n* *[R2] BPJDet: Extended Object Representation for Generic Body-Part Joint Detection, TPAMI2024*\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Thank you again for your hard work in reviewing and responsible discussions.\\n\\nThere are indeed some parts of our work that need to be further improved, and in fact we are still in the process of continuous exploration and optimization. Compared with the comparison on the COCO or MPII keypoint dataset that is close to performance saturation, we prefer to generalize the design and findings of this paper to other similar SSL fields where data labels are more scarce and thus more challenging. In addition to the fisheye camera dataset mentioned in Table 9 in `Appendix A.2` where we can achieve a more obvious advantage over DualPose or SSPCM, we also intend to apply our findings to ego keypoint detection or hand keypoint detection. They are equally more important visual foundation capabilities to existing embodied intelligence research.\"}", "{\"summary\": \"The authors propose a method to enhance semi-supervised human pose estimation (SSHPE) through synergistic data augmentation and multi-path consistency training. Instead of creating isolated or complex augmentations, they combine existing augmentations in a complementary way that intuitively benefits SSHPE, achieving stronger results through these collaborative transformations. 
For consistency training, they forgo traditional multi-network stacking in favor of training a single network with multi-path consistency losses across multiple augmented views of the same unlabeled image batch, yielding both efficiency and accuracy gains.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1.\\tFocus on a Practical Problem: The paper addresses a fundamental and labor-intensive challenge in 2D human pose estimation (HPE)\\u2014the need for extensive labeled data. By leveraging semi-supervised learning (SSL) to utilize unlabeled data, the approach targets a practical solution that could reduce the dependency on costly and time-consuming data annotation.\\n2.\\tInnovative Augmentation Strategies: The authors identify a unique contribution by combining existing data augmentations to create \\u201ceasy-hard\\u201d augmentation pairs that introduce a wider difficulty spectrum. This approach leverages the synergistic effects of established augmentations to generate novel HPE-specific augmentations, potentially enhancing model robustness by training on more challenging, noise-introduced variations of data.\\n3.\\tSimplified and Effective Consistency Training: Instead of relying on complex multi-network architectures, the paper proposes a single-network design that optimizes multiple losses through sequential multi-path predictions on heavily augmented data. 
This simplified approach is both interpretable and efficient, making it accessible for broader implementation while maintaining strong performance gains.\\n4.\\tCommitment to Open Science: By releasing code for academic use, the authors support transparency and reproducibility, enabling other researchers to build upon this work and further validate the approach across diverse HPE tasks and datasets.\", \"weaknesses\": \"### Limited Novelty\\n\\nThe proposed method in this paper lacks sufficient novelty and fails to contribute meaningful advancements to the field of semi-supervised human pose estimation (SSHPE). Human pose estimation, as noted, is already a thoroughly researched area with many well-established methods for both data augmentation and consistency training. Specifically, the augmentations Joint Cutout (TJC) and Joint Cut-Occlude\\u2014are already known [2]. Similarly, the concept of consistency training has been explored extensively in prior work (e.g., Xie et al., 2021; Moskvyak et al., 2021; Li & Lee, 2023; Huang et al., 2023), reducing the originality of the current approach.\\n\\nAdditionally, this paper\\u2019s methodology appears closely aligned with SSPCM [2], sharing a similar approach to consistency training without offering meaningful differentiation both in terms of novelty and accuracy (Table 3,4). The methods do not extend beyond existing approaches like POST [1], which employs consistency training alongside augmentation but tackles a more complex problem involving significant distribution shifts\\u2014something that requires a more robust approach. Given that the paper neither introduces novel methodologies nor demonstrates improvements over current state-of-the-art techniques, its contributions are limited in both theoretical and practical impact. 
Consequently, the paper falls short in providing sufficient novelty or relevance to warrant further consideration.\n\nThe current approach to selecting augmentations is somewhat rudimentary; exploring more advanced augmentation generation paradigms, such as PoseAug [3], could substantially enhance the contribution by introducing more targeted, pose-specific transformations. To further strengthen the paper\u2019s impact, the authors could also address more complex challenges within semi-supervised human pose estimation, such as managing significant distribution shifts or effectively handling occlusions, both of which would demonstrate the robustness and adaptability of the proposed method in diverse and realistic scenarios.\n\n### Lack of experiments\n\nThe proposed method for semi-supervised human pose estimation (SSHPE) would benefit from a deeper comparison with existing foundation models, particularly in a zero-shot setting. For instance, models like Sapiens [4] have demonstrated strong generalization in zero-shot applications, and evaluating the proposed SSHPE approach against such foundation models would clarify its relative strengths and weaknesses. Additionally, the paper overlooks a critical experiment involving the use of high-accuracy 3D pose estimation models, such as those utilizing the SMPL model, which effectively parameterizes human poses, handles occlusions, and supports 3D-to-2D projection via camera transformations. Testing whether training a 2D pose estimation model under a semi-supervised setting indeed outperforms simply projecting a state-of-the-art 3D model (e.g., BEDLAM [5], which is trained solely on synthetic data) into 2D would be crucial in establishing the necessity of the SSHPE approach. 
Given BEDLAM\\u2019s potential for zero-shot 2D projection, it is likely that both Sapiens and BEDLAM could outperform the proposed SSHPE model without additional training.\\n\\nThese omissions raise an essential question about the need for SSHPE approaches in the presence of robust, generalizable foundation models and accurate 3D pose estimators. To justify the SSHPE setting, it is recommended that the authors add a dedicated section explaining its relevance and necessity. Additionally, expanded experimental results are needed to establish a clear advantage of the proposed method over these established models, particularly to demonstrate that the proposed approach fills a specific gap that existing foundation models and 3D estimators do not.\\n\\n### Results\", \"loss_function_effectiveness\": \"Currently, the paper does not provide sufficient insight into the effectiveness of the proposed loss functions, $\\\\mathcal{L}_u$ and $\\\\mathcal{L}_S$. Including an ablation study to examine the individual impact of these losses on labeled and unlabeled datasets would provide a clearer understanding of their contributions. This would help substantiate the effectiveness of each component and offer more transparency on how these loss functions drive model performance.\", \"comparison_with_uda_methods\": \"The semi-supervised setting described shares similarities with unsupervised domain adaptation (UDA) approaches, suggesting a relevant opportunity for comparison. Including benchmark results against state-of-the-art UDA methods, such as UDAPE [6], which leverages consistency loss minimization, would highlight the unique value of the proposed method. Adding these comparisons in Table 4 would show how the approach fares relative to established UDA methods and underscore its novelty and advantage in this space.\\n\\n### Lack of clarity\\nThe paper lacks essential clarity, making it challenging for readers unfamiliar with the field to follow the methodology and results. 
Specifically:\", \"undefined_variables\": \"Key variables such as $u$, $l$, $N$, and $M$ in lines 181-182 are not clearly defined, which disrupts the reader\\u2019s ability to interpret the equations and understand their significance within the proposed method.\", \"unclear_justification_for_downsampled_heatmaps\": \"In line 161, there is a reference to predicting downsampled heatmaps without any clear explanation or citation to support this choice. Given that downsampling can impact resolution and accuracy, it\\u2019s essential to provide references to prior methods that utilize this approach and clarify why it\\u2019s suitable in this context.\\n\\n### References\\n\\n[1] Prior-guided Source-free Domain Adaptation for Human Pose Estimation, ICCV 2023 \\n[2] Semi-Supervised 2D Human Pose Estimation Driven by Position Inconsistency Pseudo Label Correction Module, CVPR 2023 \\n[3] PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose Estimation, CVPR 2021 \\n[4] Sapiens: Foundation for Human Vision Models, ECCV 2024 \\n[5] BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion, CVPR 2023 \\n[6] A Unified Framework for Domain Adaptive Pose Estimation, ECCV 2022\", \"questions\": \"It would be helpful if the authors could address key areas including Novelty and Differentiation, Comparison with Foundation Models, Justification of the SSHPE Setting, as well as provide insights on the suggested experimentation, ablation studies, and clarity improvements. Clarifying these aspects would significantly enhance the understanding of the paper\\u2019s contributions. Addressing these points would also help position the work within the broader landscape of current methods, highlighting any distinct impact and practical relevance.\\n\\n# UPDATE (After Discussion Period)\\n\\nI appreciate the author's response and thanks for sharing a detailed response and addressing concerns related to novelty, experimentation and ablation studies. 
The statement made by the authors, \\\"Almost all of the above methods have not been or cannot be tested on the 2D HPE datasets used in this paper,\\\" appears unjustified and lacks merit. Several existing approaches, including foundational models like SAPIENS and BEDLAM, offer the capability to perform zero-shot evaluations on 2D HPE datasets without requiring additional training. Conducting such evaluations is straightforward and would provide a meaningful benchmark against proposed methods. This comparison is critical to establish the relevance and competitive performance of the proposed method in the broader context of foundation models.\\n\\nWithout such comparisons, the study's scope of semi-supervised learning remains speculative, as it lacks a clear understanding of how the proposed method measures up to existing works. Hence, **I would like to retain my current rating**, as the lack of rigorous comparative evaluation constitutes a significant shortcoming.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Submission of Revised Paper\", \"comment\": [\"We are writing to inform you that we have submitted the revised version of our paper. We would like to express our sincere gratitude for your insightful and constructive comments and feedback. Your expert critiques have been invaluable in guiding the improvements we have made to our paper.\", \"In response to the points raised during the review process, we have made the following comprehensive revisions:\", \"**Clearer Description:** We have added explanations of some variables and references of default input processing in the problem definition of Section 3 (for ***W4*** by `Reviewer iqhT`). We also reiterated that the labeled datasets of size 1K, 5K, and 10K from the COCO training set are selected in a fixed pattern and can be reproduced in Section 5.1 (for ***W5*** by `Reviewer r3Sz`). 
And we explained the rationale for the regional divisions in Table 5 (for ***Q1*** by `Reviewer r3Sz`).\", \"**More Performance Comparison Details:** In Appendix A.2, we added new comparison experiments based on ResNet-50 in Table 10 (for ***W2*** by `Reviewer TvMD`) and new comparison experiments using MPII dataset in Table 11 (for ***W5*** by `Reviewer r3Sz`) as more convincing supplements to Table 3 based on ResNet-18 and COCO dataset in the main paper.\", \"**Qualitative Visualization Comparison:** In Appendix A.3, we added the visualization comparison of predicted results on COCO val-set and WEPDTOF-Pose test-set of various methods (for ***W6*** by `Reviewer 4hx9`).\", \"**Parameters of Basic Augmentations:** In Appendix A.4, we described in detail the hyper-parameters of basic augmentations used in this paper (for ***W4*** by `Reviewer 4hx9`).\", \"**Additional Ablation Studies:** In Appendix A.5, we tested and analyzed the impact of different batch sizes or easy augmented input ways on the multi-path loss function in Tables 13 and 14 (for ***W7*** by `Reviewer 4hx9`). We also analyzed and verified the efficiency and effectiveness of multi-path consistency loss training in Table 15 (for ***Q2*** by `Reviewer r3Sz`).\", \"We believe that these comprehensive revisions have notably elevated the quality, clarity, and robustness of our research. We are hopeful that the paper now aligns more closely with the esteemed standards of the conference. We are grateful for the opportunity to refine our work and appreciate the dedication and effort you have invested in reviewing our paper.\", \"Thank you once again for your invaluable assistance in enhancing the quality of our work. 
We look forward to your continued guidance and are hopeful for favorable consideration of our paper.\"]}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Dear Reviewer r3Sz,\\n\\nThank you for your high appreciation, constructive feedback and insightful questions regarding our paper. We appreciate the opportunity to clarify the aspects you have highlighted. Please find our responses to your queries below.\\n\\n***\\n***W1: Selection of augmentations combination***\\n\\n**R1:** First of all, thank you for your affirmation of the effectiveness of the method we proposed to obtain new advertisements. In fact, although it seems intuitive, there is no exactly the same work before us to try to propose and verify the combination synergy effect between multiple basic augmentations. A technical route that has a different starting point from our work but very similar final results is the AutoAug families (see the second paragraph in Section 5.3), which require offline search for optimal parameters of different augmentations and are usually highly dependent on the dataset utilized.\\n\\nDifferently, we first ranked the difficulty of the existing basic augmentations, and then proposed three concise guidelines to recommend more advanced augmentation combinations, and used a large number of ablation experiments to verify these criteria. According to these criteria explained in Section 3.1, we do not have to verify all possible combinations one by one, and can rapidly be compatible with new basic augmentations, which provides a feasible pipeline for us to quickly obtain the optimal augmentation combination. For more details, we recommend reading our response to W3 raised by `Reviewer 4hx9`, which explains similar concerns.\\n\\n***\\n***W2: Effects of different backbones***\\n\\n**R2:** Generally, the performance improvement in Table 4 is negatively correlated with the total number of parameters or basic capabilities of the backbone used. 
That is to say, for the same training and testing settings, using ResNet-50, ResNet-101 and HRNet-w48 as backbones respectively, the final performance will inevitably get better and better, regardless of whether full supervision or semi-supervision is used. Meanwhile, the predictable accuracy of the test set itself has an upper limit. This means that the higher the accuracy (here is mAP), the closer to performance saturation. Smaller absolute improvements are most likely related to the indicator limits of the evaluated dataset. Therefore, after using the most powerful HRNet-w48 as the backbone, the absolute improvement in mAP appears to be minor. As a more sensitive measure, we can calculate the relative performance improvement from SSPCM to Ours in each case.\\n\\n```\\n---------------------------------------------------------------------------------\\nMethod\\t\\t| Backbone\\t| Nets\\t| AP\\t| Abs.Imp.\\t| Rel.Imp. \\n---------------------------------------------------------------------------------\\nSupervised\\t| ResNet50\\t| 1\\t| 70.9\\t| 0.0\\t\\t| ---\\nSSPCM\\t\\t| ResNet50\\t| 3\\t| 74.2\\t| 3.3\\t\\t| ---\\nOurs (Dual)\\t| ResNet50\\t| 2\\t| 74.6\\t| 3.7\\t\\t| 12.12%\\n---------------------------------------------------------------------------------\\nSupervised\\t| ResNet101\\t| 1\\t| 72.5\\t| 0.0\\t\\t| ---\\nSSPCM\\t\\t| ResNet101\\t| 3\\t| 75.5\\t| 3.0\\t\\t| ---\\nOurs (Dual)\\t| ResNet101\\t| 2\\t| 76.4\\t| 3.9\\t\\t| 30.00%\\n---------------------------------------------------------------------------------\\nSupervised\\t| HRNetw48\\t| 1\\t| 77.2\\t| 0.0\\t\\t| ---\\nSSPCM\\t\\t| HRNetw48\\t| 3\\t| 79.4\\t| 2.2\\t\\t| ---\\nOurs (Dual)\\t| HRNetw48\\t| 2\\t| 79.5\\t| 2.3\\t\\t| 4.55%\\n---------------------------------------------------------------------------------\\n```\\n\\nIt can be found that although the absolute improvement based on HRNet is not obvious, our relative improvement cannot be ignored (about 5%). 
This shows that our method is not necessarily related to the network structure. In addition, we choose to use the current backbones for fair comparison with previous SOTA methods. Using more advanced backbones (such as vision transformers) to pursue higher accuracy/mAP is beyond the scope of this paper. However, our proposed method does not rule out the possibility of being applicable to other network structures, which will be left for future research.\n\n***\n**Continue in the next comment.**\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"***W8: Revision of minor comments***\\n\\n**R8:** Thank you for pointing out these detailed typos. We have corrected them in the main text. At the same time, we have also added commas or periods at the end of other formulas to make the paper more standardized.\\n\\n***\\n\\nYour feedback has been instrumental in enhancing the clarity and accuracy of our work. We are committed to making the necessary revisions to address these points thoroughly.\"}", "{\"title\": \"Official Responses by Authors\", \"comment\": \"Thank you for your kind reply and heartfelt affirmation.\\n\\nFor MixUp, it is indeed a widely used strategy to alleviate inter-domain differences. However, we did not address the inter-domain adaptation problem in this paper, so we did not focus on it. We will actively explore its great potential in future work.\\n\\nAs for the hyper-parameters of basic augmentations, this is indeed a very important component, almost the core of any kind of data augmentation. Unfortunately, we need to make a fair and reliable comparison with previous similar works, so we did not explore it in depth. 
Perhaps our future work can focus on the study of AutoAug algorithms suitable for SSHPE tasks.\\n\\nThank you again for your hard work in reviewing this paper.\"}", "{\"summary\": \"This paper presents a method for SSL 2D Human Pose estimation that improves existing methods from two aspects: data augmentation and better consistency loss design.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow\\n2. Personally, the idea of ranking basic augmentation looks interesting to me. From my own experience, heatmap-based prediction has different behavior from augmentations from traditional classifier-based models so this line of work looks interesting to me.\\n3. The experiment results show decent amount of improvement.\", \"weaknesses\": \"1. It seems that the dual network design is still a dominant approach based on experiment results. The single network with proposed components still cannot outperform dual network approaches.\\n2. How does the performance look like with ResNet-50? Some other work also evaluates with this model. Honestly, I don't really think ResNet-18 is that important these days.\\n3. The improvement on the coco training + coco wild seems to be marginal.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to boost semi-supervised human pose estimation from two perspectives: data augmentation and consistency training. For data augmentation, this paper conducts empirical studies to get the easy-hard rank of augmentations (joint cutout, joint cut-occlude, randaugment, cutmix, etc.) for human pose and recommend two augmentation combinations (i.e., a cutout after joint cut-occlude and a cutmix after joint cutout). 
For consistency training, the paper proposes using one easy augmentation to supervise multiple hard augmentations. Experiments on COCO and MPII datasets show that the proposed method outperforms existing SOTA methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Based on the results in Table 3, the proposed method exceeds previous SOTA results.\", \"The rank and the combination of augmentations for semi-supervised learning is important, and this work is the first to try to dive deeper into this study.\"], \"weaknesses\": \"1. L.95 introduces a principle \\\"Do not combine MixUp-related augmentations\\\" while L.224 P1 claims that \\\"A global $T_{MU}$ does not make sense for the HPE task.\\\" It would be better to provide more details. What is the meaning of global? Moreover, it seems mixup is used in some heatmap-based works like [R1], and it works well.\\n\\n- [R1] From Synthetic to Real: Unsupervised Domain Adaptation for Animal Pose Estimation. CVPR2021.\\n\\n2. It would be better to include $T_{CM}$, $T_{CO}$ and $T_{MU}$ in Table 1.\\n\\n3. Section 3.1 concludes that the rank and the combination of data augmentations are entirely based on empirical studies. For example, it is easy to understand $T_{A60}$ is harder than $T_{A30}$, but it is not intuitive to understand $T_{JC}$ is harder than $T_{CO}$. I am wondering if any insights or theoretical analysis can be provided to help readers understand more intuitively. For example, L. 228 claims that \\\"we thus nominate the most likely superior combinations: TJOCO and TJCCM\\\". The two combinations are useful. However, can we have any insights or principles for other augmentations which are not discussed in this paper? It would be meaningful if we can quickly rank any new augmentations. \\n\\n4. Section 3.1 does not discuss the parameters of each augmentation. I am not sure about the default parameters of each augmentation. 
Moreover, I am wondering if the parameters will have significant influences on the rank and the combination. For example, Dual-Network (Xie et al., 2021) prefers to use JC 5 and RA 20 to show the parameters directly.\\n\\n5. Section 5.3 discusses more augmentations and training techniques. But they are not ablation study. It would be better to clearly show the baseline and the effect of each module/strategy.\\n\\n6. It would be better to show some qualitative results in the main paper or in the supplementary.\\n\\n7. I am not sure the comparison between the multi-path consistency loss and others is fair enough due to the difference of batch size. It would be better to compare the pair {$I_{e1}$,...,$I_{en}$} + {$I_{h1}$,...,$I_{hn}$} with {$I_{e}$} + {$I_{h1}$,...,$I_{hn}$}, and also discuss the influence of batch size of other SOTA methods.\\n\\n8. Minor comments\\n- there is an incorrect usage of cite in L.47.\\n- there are incorrect punctuation in formulas like Eqs.3 and 4.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5zDU4pFxkg
VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for MLLMs
[ "Qiucheng Wu", "Handong Zhao", "Michael Saxon", "Trung Bui", "William Yang Wang", "Yang Zhang", "Shiyu Chang" ]
With the recent introduction of vision understanding capabilities in large language models, multimodal LLMs (MLLMs) have inherited and advanced a series of intriguing capabilities from classical LLMs. Among these capabilities, visual spatial planning - the ability to comprehend the spatial arrangements of objects and devise action plans to achieve specific desired outcomes - remains under-explored in MLLMs. In our study, we introduce VSP, a benchmark specifically designed to 1) evaluate the spatial planning capability in these models in general, and 2) break down the visual planning task into finer-grained sub-tasks, including perception and reasoning, and measure their capabilities in these sub-tasks. Contrary to expectations that MLLMs should naturally process scene images and reason effectively, evaluation on the benchmark shows that both open-source and private MLLMs fail to generate effective plans for even simple spatial planning tasks. The fine-grained analysis further reveals that while MLLMs have flaws in both perception and reasoning, the deficiency in the former capabilities is significantly worse. Evaluations on these tasks reveal fundamental deficiencies in the models’ visual perception and reasoning abilities, explaining their worse performance in the general spatial planning tasks. Our work illuminates future directions for improving multimodal LLMs' abilities in spatial planning.
[ "multimodal LLM", "spatial planning" ]
https://openreview.net/pdf?id=5zDU4pFxkg
https://openreview.net/forum?id=5zDU4pFxkg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oOZKKbA3rp", "nvBN176VXC", "RdAX3djmYO", "CCrfZKtKYa", "1MZqpnz5RB" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730623678544, 1730651011753, 1730789579305, 1730358956582, 1731654941455 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7966/Reviewer_3Mmr" ], [ "ICLR.cc/2025/Conference/Submission7966/Reviewer_tYBX" ], [ "ICLR.cc/2025/Conference/Submission7966/Reviewer_oHsn" ], [ "ICLR.cc/2025/Conference/Submission7966/Reviewer_nsRH" ], [ "ICLR.cc/2025/Conference/Submission7966/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces VSP (Visual Spatial Planning), a benchmark designed to evaluate the spatial planning capabilities of multimodal large language models (MLLMs). VSP consists of various tasks, including maze navigation, block manipulation, collision detection, and pathfinding in real-world maps. These tasks are further broken down into subtasks focusing on perception and reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Necessary benchmark: spatial planning is one of the most important abilities for MLLMs, and VSP provides a framework for assessing MLLMs\u2019 ability to understand and manipulate spatial environments;\n2. This paper breaks down tasks into perception and reasoning subtasks, which helps identify specific bottlenecks that limit model performance in spatial planning.\n3. Several leading MLLMs are evaluated, exposing room for future improvements.\n4. Open-sourced dataset as mentioned in the paper\", \"weaknesses\": \"1. The current evaluation does not cover dynamic and realistic environments, and a more extensive evaluation in these settings would be beneficial. Some possible trials on autonomous driving or robotic manipulation would be welcome.\n2. 
Though this paper is mainly on LLMs, some quick comparison with traditional/classic planning methods would also be welcome, as it could at least show some potential room for improvement.\\n3. Limited analysis of model internals; further analysis of the performance gap or other factors would help deepen understanding.\", \"questions\": \"1. How can the benchmark be adapted to evaluate other forms of spatial planning, such as 3D environments or object manipulation?\\n\\n2. How can the findings from VSP inform the development of new MLLM architectures and training methods?\\n\\n3. How can VSP be integrated into the development process of MLLMs to guide their design and optimization?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This is a benchmark dataset paper. It particularly focuses on testing the spatial planning capability of multimodal LLMs. The paper shows that existing multimodal LLMs fail to generate effective plans for even simple spatial planning tasks. The benchmark will encourage researchers to make their VLMs more spatial-information aware.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper focuses on spatial reasoning capabilities, also relevant to multiple real-world scenarios in robotics.\", \"Four different scenarios including 2D Maze Navigation, 3D Blocks World and Collision, as well as 2D Google Map were used for the evaluation.\", \"The supplementary material with detailed prompts and procedures is helpful.\", \"Experimental results confirming the weaknesses of the existing VLMs are meaningful.\"], \"weaknesses\": [\"The selection of the VLMs tested now seems slightly outdated. 
For the final version, maybe include newer models like Qwen2-VL, Claude-3.5, LLaVA-OneVision, BLIP-3, and so on?\"], \"questions\": \"Any further discussion on how this benchmark could be extended in the future?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper builds a new perception and spatial planning benchmark for multimodal LLMs. The key insight is to use different visual planning tasks, such as Maze Navigation, and let LLMs solve these tasks by reasoning. They test the performance of several SOTA open-sourced and closed-sourced MLLMs and find that GPT-4o is the best in their tasks for perception and spatial planning. However, MLLMs still have flaws in both perception and reasoning, and the deficiency in the former capability is significantly worse.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. This paper has a good dataset and benchmark contribution. The authors said they will open-source the dataset.\\n\\n2. This paper is well-written, especially its task definition sections.\", \"weaknesses\": \"1. As far as I know, the tasks you choose in your paper have already been proposed by many papers. I don't think you are the first in this area. See these references: [1], [2]\\n\\n2. You state \\\"The first two scenarios, Maze Navigation and Blocks World, are basic environments developed from classical planning tasks. Additionally, we challenge the model\\u2019s abilities in dynamic and realistic applications through the Collision and Google Map scenarios, respectively. All environments are fully observable through input images.\\\"\\n\\n- But why are these tasks important for LLMs? Besides MLLMs, there are many other models that can be used for these tasks. I don't think these tasks are necessary for MLLMs. Just like navigation, we can easily transfer the question into language format. 
Thus LLMs can solve these questions in another way.\\n\\n3. I am really confused with the concept of visual spatial planning in your paper. Most of the time we say visual spatial relations in the human's vision. If my understanding is right, it should be visual spatial relations in perception, and then make planning by reasoning. To better design your task, you should first let the model output the scene description for perception. And then guide the MLLMs to plan for your subtask. (Figure 2 in your paper.)\\n\\n4. Maybe you should compare these MLLMs with humans in Table 2, thus we could know the importance of your task design. If humans are also hard to solve, I don't think these tasks are necessary for MLLMs.\\n\\n5. We may need more details for Table 2. For example, how many images you input for each models? Are you using different system prompts (not each input prompt) for these models? What is the scale of these open-sourced models? (7B, 34B, or larger)\", \"ref\": \"[1] Ghosal, Deepanway, et al. \\\"Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning.\\\" arXiv preprint arXiv:2403.03864 (2024).\\n[2] Chia, Yew Ken, et al. \\\"PuzzleVQA: Diagnosing Multimodal Reasoning Challenges of Language Models with Abstract Visual Patterns.\\\" arXiv preprint arXiv:2403.13315 (2024).\", \"questions\": \"I have raised most of the questions in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a benchmark called Visual Spatial Planning (VSP) designed to evaluate the capabilities of multimodal large language models (MLLMs) in visual spatial planning. This includes understanding spatial arrangements of objects and devising action plans to achieve desired outcomes in visual scenes. 
The VSP benchmark comprises various scenarios, including Maze Navigation, Blocks World, Collision, and Google Map scenarios, each with different levels of complexity. The benchmark not only tests end-to-end spatial planning but also breaks down the task into finer-grained sub-tasks such as perception and reasoning. The evaluation results show that both open-source and private MLLMs struggle with even simple spatial planning tasks, revealing deficiencies in visual perception and reasoning abilities.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The benchmark presented in the paper tests the spatial planning capabilities of MLLMs through four different tasks, revealing their shortcomings.\\n2. The experiments are well-established, encompassing many of the MLLMs currently available in open and closed source, and also demonstrate performance improvements through fine-tuning.\\n3. The paper is generally well-written. figures can clearly reflect the setting of the tasks.\", \"weaknesses\": \"1. The proposed four tasks seem to have some gap with our real life. In real life, similar tasks (e.g., navigation) can currently be obtained by direct planning from maps without the need for planning by MLLM. In Collision Sceneario, speed and direction determination can be done by similar specialized algorithms. In other words, if MLLMs perform well on this benchmark, does it mean that MLLMs can demonstrate \\u201cspatial perception\\u201d in real life? The paper would have been more convincing if the authors could explain the necessity of these tasks for real-life applications.\\n2. From a psychological and physiological point of view, humans have the ability to perceive space to a certain extent dependent on the parallax brought about by binocular vision and the temporal difference brought about by the movement of objects. However, benchmark is more of a single RGB image and a task on 2D.\\n3. Depth is not introduced in benchmark. 
Depth is very important in spatial perception, and can directly reflect the distance of different objects and thus the distribution of objects. I think the authors should discuss and compare this, like SpatialBot [1].\\n4. If the authors could provide some examples connecting the benchmark to real applications, I think the benchmark would be more compelling.\\n\\n[1] SpatialBot: Precise Spatial Understanding with Vision Language Models\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"details_of_ethics_concerns\": \"No\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
5z9GjHgerY
DPLM-2: A Multimodal Diffusion Protein Language Model
[ "Xinyou Wang", "Zaixiang Zheng", "Fei YE", "Dongyu Xue", "Shujian Huang", "Quanquan Gu" ]
Proteins are essential macromolecules defined by their amino acid sequences, which determine their three-dimensional structures and, consequently, their functions in all living organisms. Therefore, generative protein modeling necessitates a multimodal approach to simultaneously model, understand, and generate both sequences and structures. However, existing methods typically use separate models for each modality, limiting their ability to capture the intricate relationships between sequence and structure. This results in suboptimal performance in tasks that require joint understanding and generation of both modalities. In this paper, we introduce DPLM-2, a multimodal protein foundation model that extends the discrete diffusion protein language model (DPLM) to accommodate both sequences and structures. To enable structural learning with the language model, 3D coordinates are converted to discrete tokens using a lookup-free quantization-based tokenizer. By training on both experimental and high-quality synthetic structures, DPLM-2 learns the joint distribution of sequence and structure, as well as their marginals and conditionals. We also implement an efficient warm-up strategy to exploit the connection between large-scale evolutionary data and structural inductive biases from pre-trained sequence-based protein language models. Empirical evaluation shows that DPLM-2 can simultaneously generate highly compatible amino acid sequences and their corresponding 3D structures, eliminating the need for a two-stage generation approach. Moreover, DPLM-2 demonstrates competitive performance in various conditional generation tasks, including folding, inverse folding, and scaffolding with multimodal motif inputs.
[ "protein foundation model", "diffusion language model", "multimodal language model" ]
Accept (Poster)
https://openreview.net/pdf?id=5z9GjHgerY
https://openreview.net/forum?id=5z9GjHgerY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vmpJevUOKs", "vPtY6HrtXp", "vGPRopAUY6", "umFUMxrISM", "sddvF4OscD", "rQ9nvEutnV", "ppF66TgZxk", "nCLEy99BqH", "m3hZK53ior", "lQkkr8zXKl", "kUASIelVcH", "kFDUUfZkUi", "kESSYfSjDq", "ixk0QkLo0h", "iR1XwUrD8b", "hs0SFi50YQ", "g7I5Qh0vAX", "fNrr88akfg", "fCJeXldWhX", "fBb4xOcoeM", "bMo322NunH", "XG59vES9M6", "WTlZ1fTrkI", "Tn0zuI1f6q", "Tl2GnZi00n", "SbR8xflvBx", "RtFVXLGJGL", "QJnWLTPxl5", "P9ByjPrwdK", "Ku8QyghRjJ", "KDmpgbPFfH", "IxPWOGgAM1", "I4XvLf8gjY", "H2oF3YfGwS", "FZ4Auk5X7B", "FS3kVyW3hY", "FFF0KnILx6", "E8fq7UUKkg", "DV3E6YpqPY", "BmHZmGOTis", "AsEx2RV2aq", "9FfWj0IFZe", "5P3qIdN2gO", "2wpYNncqC5", "22eq2NQnzB", "1QliHgzQ8M", "0lAKHDTcAs", "0SzAIzgx79" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732416309394, 1732427592446, 1733193438999, 1732417138727, 1732420609891, 1732419229394, 1732421214745, 1732421626427, 1732418518453, 1732423084002, 1732416904132, 1732416818903, 1732994284096, 1732456727254, 1730724454118, 1730029921078, 1732420542117, 1732417565557, 1732418010662, 1732441265667, 1732593311127, 1732417461565, 
1732419002236, 1733194600211, 1732417557504, 1732417308987, 1732417845057, 1733148446445, 1734616673583, 1732419355863, 1732417726930, 1733194164750, 1732419848086, 1732417351276, 1732417275907, 1732504613748, 1732417209077, 1737524286355, 1733196167671, 1733196049484, 1732416780515, 1732421444103, 1732419738305, 1733192362274, 1733192638071, 1730455294841, 1732418555125, 1733194491591 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_FqCq" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_f8By" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_4A34" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_f8By" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_FqCq" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13865/Area_Chair_FtBn" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Reviewer_4A34" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ], [ "ICLR.cc/2025/Conference/Submission13865/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you so much for your constructive suggestions. We understand your concerns about the clarity issues and missing technical details, potential inconsisency in some result intepretation, and requiring analysis and discussion on structure tokenizer. To address these issues, we have made our major efforts in (1) elaborating on technical details and accordingly updating the manuscript for improved clarity; (2) providing discussion on the implications of in silico designablity metrics; and (3) providing more analysis and discussion on structure tokenization. Your insightful comments and suggestions have greatly improved our manuscript. 
We sincerely thank you once again and welcome any further feedback!\"}", "{\"title\": \"Thank you very much!\", \"comment\": \"Thank you for reading our rebuttal and for your supportive words! We're happy that we have addressed all your concerns! We would like to once again thank you for your comments, which are super inspiring and have indeed helped greatly enhance our paper.\\n\\n\\nAuthors\"}", "{\"comment\": \"> `Q3:` In the presented setting, sc-TM and sc-RMSD evaluate the degree of consistency between the predictions in two modalities, sequence and structure. Both \\\"quality\\\" and \\\"designability\\\" are misnomers.\\n\\nThanks for your suggestions. Considering that scTM and scRMSD represent the consistency between structure and sequence, we will rename the evaluation metric to \\\"structure-sequence compatibility/consistency\\\". If you have a more appropriate way of expressing it, we would be glad to hear your suggestions!\\n\\n> `Q4:` I appreciate the results on the novelty evaluation using TM-score on the full training set, not only PDB, which is a subset thereof. The results show that the model generates more structures similar to those in the training data, however, in this case reporting mean and std is insufficient as it may mask issues with memorization. It is advised to plot the distribution.\\n\\nThanks for your suggestions. To more comprehensively demonstrate the similarity between the generated proteins and training data, we have plotted the distribution as you suggested. Specifically, for each generated protein, we search the structure against the training set with foldseek and calculate the TMscore and sequence identity with the most similar protein in the training set. \\n\\nAs seen in [this figure](https://anonymous.4open.science/r/supple_dplm2-2342/novelty_tm_id.pdf), a considerable number of points are distributed in the bottom right area, indicating a high TMscore but a relatively low sequence identity. 
This suggests that DPLM-2 is likely to generate proteins with structures similar to those in the training set but often with novel sequences.\\n\\n\\n> `Q5:` Regarding argmax vs. stochastic sampling. This important topic requires a thorough discussion, since different strategies are used for different tasks. In DPLM, which DPLM-2 is based upon, Gumbel-Max trick was used (at least) in unconditional generation to alleviate mode collapse. I have not found any mentions of this in the manuscript. Here, argmax sampling is used for inverse folding, while stochastic sampling is used for unconditional generation and scaffolding. Why different tasks use different sampling approaches? What stochastic approach is used, please provide the details? What is the reasoning behind this and how is it supported experimentally?\\n\\n\\nWe apologize for the confusion and the lack of details about the sampling strategies in the manuscript. We would like to make further clarifications as follows. We will include these missing details in the next version. Thanks again for your suggestions.\\n\\n**Why different tasks use different sampling approaches?**\\nIn our original manuscript, we utilize argmax decoding for conditional generation tasks (e.g., folding and inverse folding) to maximize generation accuracy and ensure a fair comparison with DPLM. On the other hand, stochastic sampling was employed for unconditional generation or motif-scaffolding tasks to encourage generation diversity while maintaining good generation quality. In the original rebuttal Q4, we also demonstrate that stochastic sampling with temperature annealing can be used in the inverse folding task to sample more diverse sequences while ensuring structural plausibility.\\n\\n**What stochastic approach is used, please provide the details?**\\nWe utilize a temperature-based stochastic approach. 
With a normally fixed temperature $\\\\tau$ (we used $\\\\tau=0.7$ in the original manuscript), the full process can be referred to DPLM paper Appendix A, Algorithm 1. \\n\\nHere we mainly focus on the temperature-annealed version used in our initial rebuttal for better sampling diversity. The details are shown below:\\n1. Determine the minimum temperature $\\\\tau\\\\_{\\\\min}$, the maximum temperature $\\\\tau\\\\_{\\\\max}$ and the total sampling steps $T$. Initialize the start timestep $t = 0$.\\n2. For each timestep $t$, calculate the temperature $\\\\tau \\\\leftarrow \\\\tau\\\\_{\\\\min} + (1-\\\\frac{t}{T})(\\\\tau\\\\_{\\\\max} - \\\\tau\\\\_{\\\\min})$.\\n3. DPLM-2 performs a sampling iteration based on the $\\\\tau$. Increment the timestep: $t \\\\leftarrow t + 1$.\\n4. Repeat (2) if $t \\\\leq T$, otherwise proceed to (5).\\n5. End sampling.\\n\\nWe will include a more formal presentation of the technical details of this temp-annealed sampling in next version of manuscript.\\n\\n**What is the reasoning behind this and how is it supported experimentally?**\\nThe gumbel-argmax trick used in the original DPLM is akin to sampling with fixed temperature 1.0 at every timestep, and we found the unconditional generation performance with gumbel-argmax trick is sub-optimal in terms of diversity. \\nTo this end, the temperature annealing sampling approach introduces more randomness during the initial stage of sampling by using a large temperature, and more fidelity during the final stage of sampling by using a small temperature. 
This method improves generation diversity while maintaining generation quality.\"}", "{\"comment\": \"**Regarding ablation study of training strategies.**\\n\\nAs you suggested, here we provide a comprehensive ablation study extending the Table 3 of our original manuscript to examine the effects of four training strategies on DPLM-2-650M: sequence pre-training (finetuning from seq.-pretrained DPLM), data augmentation with predicted structures, random length cropping (cutting off long proteins with a 50% probability per sample to increase diversity), and self-mixup for mitigating exposure bias in discrete diffusion. \\n\\nAs seen in the following table, predicted structure from swissprot benefit designability (exp 2 vs exp 1); (2) finetuning from pre-trained DPLM leads to improved designability and sampling diversity (exp 5 vs exp 2, exp 3 vs exp 1); (3) absence of length cropping results in considerable degradation of diversity (exp 4 vs exp 5) ; and (4) self-mixup training significantly boosts sampling diversity while preserving decent designability (exp 6 vs exp 5). Remarkably, training a multimodal PLM from scratch is challenging when only limited structural data is available and the extensive sequence data is underutilized. To address this, the four proposed training strategies have proven highly effective, enabling DPLM-2 to be established efficiently (1.5 days on 16 A100 GPUs for the DPLM-2 650M model). 
Moreover, we believe that a well-designed data mixture policy incorporating all available multimodal protein data can further improve multimodal PLM training, as demonstrated by the success of ESM-3.\\n| Exp id | finetuning from seq.-pretrained DPLM | predicted structures | random length cropping | self-mixup training | scTM | MaxCluster |\\n|-----------|--------------------------------------|----------------------|------------------------|---------------------|---------------|------------|\\n| 1 | no | no | yes | no | 0.702 \\u00b1 0.259 | 0.274 |\\n| 2 | no | yes | yes | no | 0.886 \\u00b1 0.137 | 0.214 |\\n| 3 | yes | no | yes | no | 0.889 \\u00b1 0.158 | 0.396 |\\n| 4 | yes | yes | no | no | 0.937 \\u00b1 0.061 | 0.168 |\\n| 5 | yes | yes | yes | no | 0.916 \\u00b1 0.099 | 0.440 |\\n| 6 (paper) | yes | yes | yes | yes | 0.925 \\u00b1 0.085 | 0.545 |\\n\\n`--end of Q1--`\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q6`**: Regarding repetition: In practice, we observed that diffusion protein language models tend to generate a large number of repetitive amino acids. I noticed that DPLM employed a resampling scheme to address this issue. Therefore, regarding the DPLM-2 model, I would like to know whether it also encounters similar repetition issues. If so, what approach did you adopt to resolve it? If not, I would like to know what prevented this issue in DPLM-2.\\n\\n**`A6`**: This is a great question. \\n\\n**About repetition issues in DPLM.** Generative models, especially language models, are good at learning what you provide with them. DPLM trained with UniRef50 contains a great number of sequences with high proportion repetition patterns. As a consequence, maximum-a-posterior (MAP) based sampling methods, such as default mask-predict sampling method for masked discrete diffusion, are likely to firstly recover these high likelihood patterns, and then get stuck in this local optimal and turn out to be low quality generation. 
This phenomenon, i.e., high likelihood != high generation quality, has been widely noticed in the studies of neural text/sequence generation [1,2]. In the original DPLM, the authors address this issue with resampling for repetition patterns given a threshold.\\n\\n**About repetition issues in DPLM-2.** In DPLM-2, we train the model on pairs of structure tokens and amino acid tokens. As the structure vocabulary is much larger than the amino acid vocabulary and structure data are much more complicated, there are basically no such repetitive structure tokens in the resulting structure token training data. Meanwhile, PDB and SwissProt data are curated much more carefully than UniRef and AFDB in general (predicted structures from unannotated sequences, incl. UniRef), also resulting in less amino acid repetition in our training data. As a result, DPLM-2 just samples \\\"good-looking\\\" proteins (while amino acid repetitions often lead to long helical or disordered loopy proteins) using a straightforward sampling strategy with no need for resampling. We can also further enhance sampling diversity in general using temperature annealing sampling, starting with a high temperature (e.g., 2.2) for high stochasticity and gradually decreasing towards a low temperature (e.g., 0.1) for high likelihood and fidelity, similar to Langevin dynamics or simulated annealing. \\n\\n[1] Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation. Coling 2020\\n\\n[2] The Curious Case of Neural Text Degeneration. ICLR 2020.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q2`**: Regarding scTM and scRMSD metrics: In Table 2, under the unconditional backbone generation task, DPLM-2 achieves the highest scTM metric while also showing a significantly worse scRMSD metric, both in terms of mean and variance, which are at high levels. Is this phenomenon somewhat unusual? 
Given that these two metrics should have a certain level of correlation.\\n\\n **`A2`**: Thank you for pointing this out. We would like to address your concern as follows: \\nThe pLDDT metric reflects the structural plausibility of sequence. In addition to the pLDDT metric, we also include sequence-related metric that we calculate sequence diversity using mmseq2 clustering at sequence identy = 0.5 for good quality samples with pLDDT > 70. This quality threshold for diversity is inspired by Multiflow, which is more informative by avoiding diverse but messy sequences. We follow the experimental setting in DiMA [1], generating 2048 sequences with length sampled from the length distribution of PDB + SwissProt. The results highlight that DPLM-2 is able to generate structurally plausible and diverse sequences for protein generation. We also find that training data distillation greatly helps Multiflow's sequence quality in terms of pLDDT and diversity.\\n\\n| | scTM | scRMSD | helix ratio | strand ratio | coil ratio |\\n|--------------------------------------------|----------------|---------------|-------------|--------------|------------|\\n| native PDB samples | 0.904 \\u00b1 0.129 | 4.623 \\u00b1 5.688 | 0.36 | 0.22 | 0.42 |\\n| Multiflow (w/ distilation) | 0.930 \\u00b1 0.098 | 3.208 \\u00b1 4.741 | 0.75 | 0.10 | 0.15 |\\n| Multiflow (w/o distillation) | 0.750 \\u00b1 0.163 | 9.306 \\u00b1 8.499 | 0.73 | 0.06 | 0.21 |\\n| Multiflow (retrained on our training data) | 0.871 \\u00b1 0.934 | 6.580 \\u00b1 6.258 | 0.56 | 0.17 | 0.26 |\\n| DPLM-2 | 0.925 \\u00b1 0.085 | 3.899 \\u00b1 3.723 | 0.47 | 0.16 | 0.37 |\\n\\nWe first need to highlight that the generated samples from DPLM-2 share similar scTM (0.925) and scRMSD (3.9) as native PDB samples, which also exhibit good scTM (0.904) with a little bit higher scRMSD (4.623). 
Additionally, DPLM-2 maintains a balanced structural composition (helix: 0.4, strand: 0.2, coil: 0.45), closely resembling natural distributions. In contrast, for MultiFlow, the officially released model with distillation attains much lower scRMSD (3.2), while the performance of our retrained version (on the same DPLM-2 training set) degrades in both scTM (0.871) and scRMSD (6.58). The lower scRMSD of MultiFlow with distillation appears to be driven by overrepresentation of structured elements (Figure 4A), i.e., significantly biasing towards proteins with more helices and fewer strands and loops (also see Figure 4C). This overrepresentation drives the observed scRMSD improvement but deviates from natural protein diversity.\\n\\nTM-score emphasizes global topology, while RMSD is sensitive to local structural errors. As such, although scTM and scRMSD are generally correlated, discrepancies can arise. TM-score was designed to address this sensitivity of RMSD: because RMSD is an average distance over all residue pairs in two structures, a local error (e.g., a misorientation of the tail) will raise a big RMSD value even though the global topology is correct. In TM-score, however, small distances are weighted more strongly than big distances, which makes the score insensitive to local modeling errors. \\n\\nAs shown in Fig 4B, some samples from DPLM-2 with a higher loop proportion are more conformationally flexible, and hence may show high scTM (>0.9) but worse scRMSD (>2.0), similar to natural proteins. However, this does not necessarily indicate a limitation in generation quality but reflects differences in metric sensitivity. \\n\\nAs a result, the in-silico designability of protein generation should be evaluated comprehensively using both scTM and scRMSD, as each metric offers distinct insights and serves different purposes. 
For users aiming to generate samples with accurate global topology, scTM serves as a reliable indicator, whereas scRMSD may occasionally exclude reasonable structures. Conversely, for applications requiring structurally rigid and stable proteins, such as functional designs (e.g., binder design), scRMSD has been shown to correlate more strongly with in vitro success rates, as suggested by RFDiffusion.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q7`**: Regarding VQ-VAE: I am curious about the vocabulary usage rate of VQ-VAE when the vocabulary size is 1024. I noticed a significant performance difference between VQ-VAE and LFQ in this case, so I wonder if the vocabulary usage rate might be a contributing factor to this issue.\\n\\n**`A7`**: \\nThank you for your suggestion. Following your advice, we calculated the codebook utilization, as shown in the table below. We observed that LFQ-based tokenizers consistently achieve nearly 100% codebook utilization, with more evenly distributed code usage, whereas the vanilla VQ-VAE suffers from codebook collapse (63.5%). This suggests that severe codebook collapse limits the vanilla VQ-VAE\\u2019s ability to learn meaningful vocabulary, at least in our implementation. We are also aware of the successful use of vanilla VQ-VAE in ESM-3, so there might be significant optimization efforts that matter. Nevertheless, the combination of pre-trained structure encoder (GVP-Transformer) + LFQ + AF2-style structure decoder has been found to be an effective and efficient approach, requiring minimal twists for robust development.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q9`**: Regarding the structural information: Similar to ESM-3, you also use discrete structure tokens to represent structural information. This approach offers scalability benefits for the model, yet it also limits the model\\u2019s ability to capture finer structural details accurately. 
This trade-off represents an important challenge in the current field of multimodal protein language models. I am interested to know whether you would consider any methods to mitigate this issue when applying DPLM-2 to more structure-related tasks, as you described in Section 5.\\n\\n**`A9`**: This is a great question, and we fully agree that it represents a crucial challenge in token-based multimodal protein language models. Albeit being the key enabler of multimodal PLMs, structure tokenization essentially clusters similar local structural environments, which results in lossy compression and the loss of fine-grained structural variation. We are very aware of this issue. The primary principle of the solution is that we need to \\\"recover\\\" and preserve the high-frequency variation that gets lost during quantization. We propose some potential directions for mitigation:\\n- Separate structure encodings for DPLM-2. We can introduce different structure encoders for encoding and generation purposes, respectively. For parts of a protein where atomic coordinates are already provided, lossy tokenization may not be necessary. Instead, we can use robust features from powerful structure encoders like GVP-Transformer while continuing to use structure tokens for generating the remaining parts. To achieve this, the model can be trained to alternate between these two types of encodings. A similar approach has been applied successfully in recent vision-language MLLMs [1], as the vision-language community has also recognized that understanding and generation often require different types of representations.\\n- Modeling continuous structure features with hybrid tokenization. In the structure tokenizer, the vector quantizer module converts encoder features into discrete structure token features, but the residuals\\u2014differences between the original and quantized features\\u2014are lost, removing fine-grained structural details. 
To address this, we can use continuous generative modeling, such as diffusion/flow-based models, to learn to recover these residuals. This would work by conditioning on the structure tokens and possibly the final hidden states of DPLM-2. The protein structure generation process would involve first generating discrete structure tokens that capture the overall topology, then using those tokens to generate the missing residuals. These residuals would be added to the structure token embeddings to recover a more complete and accurate structure representation, closer to the features produced by the structure encoder. This approach could significantly improve structure generation. By combining this idea with hybrid structure encodings, DPLM-2 could not only interpret given structures at atomic accuracy but also generate structures that include the missing fine-grained variations. Similar strategies have shown significant success in visual autoregressive generation with visual tokenizers [2].\\n\\n[1] Janus: Decoupling visual encoding for unified multimodal understanding and generation. Arxiv 2024. \\n\\n[2] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer\"}", "{\"comment\": \"> `Q10:` One of the main claims of the paper states that the co-generation guarantees consistency between structure and sequence. This is a strong statement that requires strong evidence. However, on line 223 the assumption of conditional independence is made. Can you provide a rigorous mathematical proof that guarantees such consistency?\\n\\n`A10`: Thanks for this valuable question. Conditional independence is not a special assumption made by DPLM-2; it is a fundamental assumption made by diffusion models in general and their multimodal extensions, derived from the nature of their forward and backward processes. 
Previous theoretical studies on diffusion models have shown that the convergence between the generated sample distribution and the data distribution is guaranteed under such conditional independence. In this paper, we have empirical evidence showing the consistency/compatibility between co-generated structures and sequences (e.g., scTM for co-generation), and we believe a mathematical proof of this is beyond the scope of this paper; we refer to the established theoretical results on diffusion. Nevertheless, we are happy to elaborate on our thoughts and understanding of this as follows.\\n\\n**Conditional independence in diffusion models in general.**\\n\\n\\nConditional independence over the elements of high-dimensional data, i.e., $p\\_\\theta(\\mathbf{x}\\_{t-1} | \\mathbf{x}\\_t) = \\textstyle\\prod\\_{i=1}^d p\\_\\theta(x_{t-1, [i]} | \\mathbf{x}\\_t)$, is a prevailing assumption in diffusion probabilistic models, both continuous and discrete variants, thanks to their iterative nature of probabilistic modeling. For example, in continuous diffusion models for vision generation, the denoising networks learn to reconstruct a denoised image at each timestep $t-1$ by simultaneously and independently operating over all pixels conditioned on the previous noisier pixels of the image and the current timestep (or equivalently noise level) $t$. The same holds for discrete diffusion, where models for text or protein sequences treat the tokens of $\\mathbf{x\\_{t-1}}$ independently given $\\mathbf{x\\_t}$. 
Several recent works have established the theoretical foundations for the convergence analysis of both continuous diffusion [1] and discrete diffusion [2,3], showing that the convergence of the generated sample distribution to the data distribution is theoretically guaranteed, which means that a well-learned diffusion model can preserve the statistical structure of the data (in other words, the consistency between the elements $\\mathbf{x} = \\\\{ x\\_1, ..., x\\_d \\\\}$ of the generated samples).\"}"
We understand your concerns about the potential risks of reduced sampling diversity and mode collapse, the need for more evaluation metrics, results, and explanations for a comprehensive assessment, and the theoretical guarantee of structure-sequence consistency under the conditional independence assumption in diffusion models. To address these issues, we have made our major efforts in (1) providing a discussion on diversity metrics with updated results and an ablation study on how the training strategies influence generation diversity; (2) including sequence-based metrics for unconditional generation, conducting sampling from length distribution of natural proteins, analysis on structure tokens and discussion of sampling strategies for conditional generation (3) providing discussions in theoretical aspect on the conditional independence assumption in diffusion models, including DPLM-2 and beyond. Your insightful comments and suggestions have greatly improved our manuscript. We sincerely thank you once again and welcome any further feedback!\\n\\n> `Q1:` The DPLM-2 model is trained starting from the weights of DPLM model. As evident from e.g. Table 2 (poor diversity) and Table 5 (high AAR), DPLM suffers from mode collapse. Training DPLM-2 starting from such a checkpoint leads to severe mode collapse in DPLM-2. Generating the same result over and over again makes a generative model useless. Training both models from scratch rather than finetuning might help and provide the insight on how adding structural information to the DPLM architecture affects the generation ability. Even better would be a thourough ablation study of the training procedure. The dataset preparation included random cropping. It is not clear if it has a detrimental effect on the model behaviour.\\n\\n`A1`: Thanks for this valuable question. 
We understand you are concerned about the sampling diversity and the risk of mode collapse of DPLM-2, and are wondering how specific training strategies (e.g., multimodal finetuning from pre-trained DPLM) relate to the model behaviors. \\n\\n**Regarding sampling diversity and risk of mode collapse.**\\n\\nWe thank the reviewer for bringing this to our attention. We noticed that the diversity results in Table 2 are inconsistent between `avg inner-TM` (averaged pairwise TM-score among samples) and `MaxCluster` (number of distinct structure clusters). For example, DPLM-2 (co-generation) attains a `MaxCluster=0.545` (i.e., there are 54.5 distinct fold clusters out of 100 samples determined by Foldseek with TM-score threshold = 0.5), which is better than the `MaxCluster=0.500` for the official MultiFlow (w/ distillation), whereas the `avg inner-TM` is unreasonably high (0.703 vs 0.468 for MultiFlow), which contradicts the value of MaxCluster and the visual impression we get from the predicted structures.\\n\\nWe have accordingly checked our code and found that our implementation of avg inner-TM was incorrect. Here we provide updated results in the table below with corrected avg inner-TM, where DPLM-2 gets a reasonable avg inner-TM similar to native PDB samples (with the same length intervals) and lower than MultiFlow and RFDiffusion. These results empirically indicate that DPLM-2 does not struggle with mode collapse during sampling. Plus, we can further improve sampling diversity by using temperature-annealed sampling with a higher initial temperature. \\n\\n| Model | scTM | scRMSD | avg. 
inner-TM | MaxCluster | helix ratio | strand ratio |\\n|----------------------------------------------------|---------------|---------------|---------------|------------|-------------|--------------|\\n| Native PDB | 0.904 \\u00b1 0.129 | 4.623 \\u00b1 5.688 | 0.271 \\u00b1 0.020 | 0.776 | 0.36 | 0.22 |\\n| MultiFlow (official ckpt) | 0.930 \\u00b1 0.098 | 3.208 \\u00b1 4.741 | 0.356 \\u00b1 0.013 | 0.500 | 0.75 | 0.10 |\\n| RFDiffusion | 0.914 \\u00b1 0.155 | 1.969 \\u00b1 4.073 | 0.352 \\u00b1 0.025 | 0.598 | 0.62 | 0.18 |\\n| DPLM-2 | 0.925 \\u00b1 0.085 | 3.899 \\u00b1 3.723 | 0.270 \\u00b1 0.018 | 0.545 | 0.46 | 0.16 |\\n| DPLM-2 (temperature-annealed sampling: 2.2 -> 0.1) | 0.883 \\u00b1 0.120 | 5.447 \\u00b1 5.477 | 0.275 \\u00b1 0.031 | 0.584 | 0.43 | 0.18 |\"}", "{\"title\": \"Official Comment by Reviewer f8By\", \"comment\": \"Dear Authors,\\n\\nThank you for the detailed response to my initial review. Some of my concerns have been addressed. However, several important points require further attention:\\n\\n1. I appreciate the ablations. It would be valuable for the community if the analysis of the results of the ablation study end up in the final version of the manuscript.\\n\\n2. DPLM-2 is a finetuned protein language model. So its evaluation should contain a great deal of sequence analysis. The authors provide only pLDDT, which is primarily a structural quality measure, and MMseqs2 clustering. MMseqs2 clustering (and also structural clustering for that matter) should be performed at different thresholds to capture different aspects of the diversity. Adding perplexity as a measure of naturalness/quality and novelty through sequence identity to nearest neighbor in the dataset to the evaluation toolkit would make analysis more sound.\\n\\n3. In the presented setting, sc-TM and sc-RMSD evaluate the degree of consistency between the predictions in two modalities, sequence and structure. Both \\\"quality\\\" and \\\"designability\\\" are misnomers.\\n\\n4. 
I appreciate the results on the novelty evaluation using TM-score on the full training set, not only PDB, which is a subset thereof. The results show that the model generates more structures similar to those in the training data; however, in this case reporting the mean and std is insufficient, as it may mask issues with memorization. It is advised to plot the distribution.\\n\\n5. Regarding argmax vs. stochastic sampling. This important topic requires a thorough discussion, since different strategies are used for different tasks. In DPLM, which DPLM-2 is based upon, the Gumbel-Max trick was used (at least) in unconditional generation to alleviate mode collapse. I have not found any mention of this in the manuscript. Here, argmax sampling is used for inverse folding, while stochastic sampling is used for unconditional generation and scaffolding. Why do different tasks use different sampling approaches? What stochastic approach is used? Please provide the details. What is the reasoning behind this and how is it supported experimentally?\\n\\n6. Even with added baselines, important comparisons are missing. There is a lot of active research on inverse folding, so there is no lack of good baselines and good methodology ([1-4] to name a few). At the end of the day, it is not about beating the SOTA result, but about providing an objective comparison for the benefit of the protein design community.\\n\\n7. Thank you for adding the codebook utilization metrics. However, they are purely quantitative and don't provide insight into the semantic meaning of tokens. Since the structure tokenization is an important part of the work, it would very much benefit from a deep analysis of the used tokens. Since the codebook sizes far exceed the number of secondary structure features, it would be great to map tokens to known structural motifs beyond secondary structure and provide visualizations like in the ESM-3 paper.\\n\\n8. 
Regarding the representation learning comparison, there is a huge body of work out there for comparison on these tasks. Why not compare against strong baselines? Again, it is not about beating SOTA, but about correctly reflecting the state of affairs in the experiments. Also, there is a more recent and more strongly performing Gear-net version (https://arxiv.org/abs/2303.06275).\\n\\n9. Regarding one of the main claims of the paper, you do not answer the question. The following is outlined in the main contributions: \\\"DPLM-2 enables unconditional co-generation of designable and diverse proteins that **guarantees** consistency between structure and sequence\\\". It seems like a gross overstatement without a formal rigorous proof. If there are no guarantees of consistency, then only empirical evidence should be declared.\\n\\n10. Adding a discussion of the used self-mixup training strategy would strengthen the work. Also, the experiment and data in table 8 are not discussed. If the absolute numbers of clusters are reported, the number of samples that underwent clustering should be stated in the caption.\\n\\nReferences:\\n\\n[1] https://arxiv.org/abs/2305.15151\\n\\n[2] https://arxiv.org/abs/2306.16819\\n\\n[3] https://arxiv.org/abs/2312.06297v2\\n\\n[4] https://arxiv.org/abs/2310.11802\"}", "{\"comment\": \"I have carefully reviewed all the reviews and responses. I believe the responses have addressed all my concerns, and the authors have conducted extensive experiments and presented their work effectively. Therefore, I have raised my score to 8.\"}", "{\"summary\": \"This paper introduces DPLM-2, a multimodal discrete diffusion protein language model that can simultaneously generate both protein sequences and their 3D structures. The model extends the DPLM protein language model by incorporating structural information through a lookup-free quantization-based tokenizer that converts 3D coordinates into discrete tokens. 
DPLM-2 is trained on both experimental structures from PDB and synthetic structures from AFDB-SwissProt.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly written.\\n\\n2. The model achieves simultaneous structure-sequence generation without requiring a two-stage training approach.\\n\\n3. The experiments on learning structure tokenization are valuable for the community.\", \"weaknesses\": \"1. The DPLM-2 model is trained starting from the weights of the DPLM model. As evident from e.g. Table 2 (poor diversity) and Table 5 (high AAR), DPLM suffers from mode collapse. Training DPLM-2 starting from such a checkpoint leads to severe mode collapse in DPLM-2. Generating the same result over and over again makes a generative model useless. Training both models from scratch rather than finetuning might help and provide insight into how adding structural information to the DPLM architecture affects the generation ability. Even better would be a thorough ablation study of the training procedure.\\n\\n2. Authors compare DPLM and DPLM-2 only with EvoDiff on unconditional sequence generation. Adding more baselines of different architectures (AR transformers, CARP, continuous diffusion, flow-matching, etc.) would greatly improve the work.\\n\\n3. There are some issues with model evaluation. First, the model is evaluated only structurally, but it is a language model after all. Using sequence-based evaluation metrics, including sequence clustering for diversity, would benefit the soundness of the work. Second, it is not clear why the Authors use pdb-TM (calculate TM-score against PDB) if the model is also trained on synthetic data. Third, the designability metric used in this work evaluates the consistency between the generated structure and the prediction of ESMFold on the generated sequence. It does not measure protein \\\"quality\\\".\\n\\n4. 
Authors compare DPLM-2 on the inverse folding task with weak baselines. Adding recognized IF models would greatly benefit the work. The same goes for other tasks, e.g. representation learning.\\n\\n5. The dataset preparation included random cropping. It is not clear if it has a detrimental effect on the model behaviour.\\n\\n6. The paper lacks analysis of the trained structural tokens. Additional exploration of the interpretability of the tokens themselves and utilization of the codebooks would greatly benefit the paper. Do the tokens correspond to some local environments in structure, or are they just abstract entities?\", \"questions\": \"1. The model does not learn the distribution of protein lengths. Have you tried to overcome this limitation?\\n\\n2. The ablation results presented in 4.1.3 and table 3 are controversial. Could you please clarify the procedure, how many samples were used for evaluation, and so on? \\n\\n3. On page 9, line 442, it is stated that DPLM-2 adopts argmax decoding. In the original DPLM paper argmax did not work. Can you elaborate on this?\\n\\n4. The experiment on DeepLoc is not described in sufficient detail. Why did you choose to use only one tiny dataset? Could you provide experiments that show the described catastrophic forgetting issue, which is of high importance?\\n\\n5. One of the main claims of the paper states that the co-generation guarantees consistency between structure and sequence. This is a strong statement that requires strong evidence. However, on line 223 the assumption of conditional independence is made. Can you provide a rigorous mathematical proof that guarantees such consistency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a multimodal diffusion protein language model that integrates information from both sequence and structure modalities. 
The model is built upon a pre-trained DPLM model, using LoRA technology to extend the original sequence-only model\\u2019s capability to process structural knowledge. This approach not only reuses the knowledge embedded in the existing sequence-based protein language model but also reduces training costs and expenses. Compared to traditional autoregressive language models, the diffusion language model offers more flexible generation capabilities, making it better suited for modeling protein data with extensive non-unidirectional semantic dependencies. Overall, this work represents a natural extension and generalization of the DPLM model, exploring the performance of such diffusion language models on multimodal protein data and demonstrating the potential advantages of multimodal protein language models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This is the first multimodal diffusion protein language model, which effectively explores the application of such models in multimodal scenarios. Considering the potential of diffusion language models, I believe this is a highly meaningful research question.\\n\\n2. The paper has a clear logical structure, and its content is straightforward and easy to understand.\\n\\n3. The paper demonstrates the performance of DPLM-2 across a variety of tasks, providing substantial and comprehensive content that analyzes the model\\u2019s performance from multiple perspectives.\", \"weaknesses\": \"1. The main results reported in this paper are based on unconditional generation tasks; however, some evaluation metrics may have inherent limitations.\\n\\n2. The model in this paper is trained on a relatively small protein structure dataset, lacking exploration on larger-scale structural data. I believe that extending multimodal protein language models to larger protein sequence and structure datasets could further unlock the potential of this approach.\\n\\n3. 
There are some minor spelling and formatting errors: the equation on line 197 is missing a right parenthesis, and $ b_{i}(t) $ introduced in equation (1) (on line 192) is undefined in the paper.\", \"questions\": \"1. **Regarding structure tokens:** Figure 2B shows that most structure tokens are concentrated in alpha helix and beta sheet regions, with lower density in loop regions. However, in line 409 of the paper, you state that \\\"loops are highly flexible.\\\" Could this pose a potential issue? More variable regions typically contain richer and more diverse information, which may require more structure tokens to model this diversity effectively. Yet, your experimental results do not align with this. Moreover, your results in Figure 4B also indicate that the model\\u2019s designability is notably worse in regions with a higher proportion of loops. Could one possible reason for this issue be that the structure tokens in loop regions are not well-learned, resulting in weaker modeling capability?\\n\\n2. **Regarding the pLDDT metric:** Table 2 shows that MultiFlow (retrained on our training data) and DPLM-2 (co-generation) perform similarly on scTM and scRMSD metrics but diverge significantly on the pLDDT metric. A similar trend is observed when comparing Figures 3A and 3C. In Figure 3A, scTM and scRMSD metrics remain relatively stable as sequence length increases, with scRMSD even increasing for sequences of length 500. However, Figure 3C shows a notable upward trend in pLDDT as sequence length increases (from 100 to 300), without a significant drop in pLDDT even for sequences of length 500. These phenomena indicate that the scTM and scRMSD metrics convey somewhat contradictory information to the pLDDT metric. Generally, a significant drop in pLDDT should correspond with a decrease in scTM and an increase in scRMSD. My question here is whether this inconsistency might indicate that the pLDDT metric is unreliable. 
I raise this because, in previous experiments, we observed that pLDDT, being predicted by a neural network (e.g., ESM-2) trained on natural protein datasets, sometimes fails to handle model-generated proteins that deviate significantly or exhibit severe irregularities, leading to inflated pLDDT values. This phenomenon is quite common in unconditional generation. Therefore, I believe using a more diverse set of metrics is essential. However, you did not provide evaluation results for MultiFlow (retrained on our training data) on the Novelty and Diversity dimensions, which makes it harder to assess the model\\u2019s performance on this part of the task.\\n\\n3. **Regarding scTM and scRMSD metrics:** In Table 2, under the unconditional backbone generation task, DPLM-2 achieves the highest scTM metric while also showing a significantly worse scRMSD metric, both in terms of mean and variance, which are at high levels. Is this phenomenon somewhat unusual? Given that these two metrics should have a certain level of correlation.\\n\\n4. **Regarding the inverse folding task:** Comparing the performance of ESM3 and DPLM-2 in Table 5 reveals a pattern where DPLM-2 often exhibits a higher AAR while frequently showing a lower scTM. Could this possibly be due to biases in the protein structure prediction model (ESMFold) or because the sequences generated by DPLM-2, despite having higher ARR, may deviate further from the target protein\\u2019s data distribution?\\n\\n5. **Regarding repetition:** In practice, we observed that diffusion protein language models tend to generate a large number of repetitive amino acids. I noticed that DPLM employed a resampling scheme to address this issue (code link: [here](https://github.com/bytedance/dplm/blob/5545c6d4166f515b4eb66ada41d0ab3178dfe6ca/src/byprot/models/lm/dplm.py#L279)). Therefore, regarding the DPLM-2 model, I would like to know whether it also encounters similar repetition issues. If so, what approach did you adopt to resolve it? 
If not, I would like to know what prevented this issue in DPLM-2.\\n\\n6. **Regarding VQ-VAE:** I am curious about the vocabulary usage rate of VQ-VAE when the vocabulary size is 1024. I noticed a significant performance difference between VQ-VAE and LFQ in this case, so I wonder if the vocabulary usage rate might be a contributing factor to this issue.\\n\\n7. **Regarding catastrophic forgetting:** I would like to confirm whether DPLM-2 in Table 7 still utilizes structural pre-training, given that neither of these models used large-scale sequence pre-training. If structural pre-training was indeed applied to DPLM-2, then there are two differences between DPLM and DPLM-2: one is the difference in model architecture, and the other is that DPLM-2 was pre-trained on structural data. With these two variables present, how can we determine that the performance decline in DPLM-2 is due to catastrophic forgetting?\\n\\n8. **Regarding the structural information:** Similar to ESM-3, you also use discrete structure tokens to represent structural information. This approach offers scalability benefits for the model, yet it also limits the model\\u2019s ability to capture finer structural details accurately. This trade-off represents an important challenge in the current field of multimodal protein language models. I am interested to know whether you would consider any methods to mitigate this issue when applying DPLM-2 to more structure-related tasks, as you described in Section 5.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q5`**: Regarding the inverse folding task: Comparing the performance of ESM3 and DPLM-2 in Table 5 reveals a pattern where DPLM-2 often exhibits a higher AAR while frequently showing a lower scTM. 
Could this possibly be due to biases in the protein structure prediction model (ESMFold) or because the sequences generated by DPLM-2, despite having higher ARR, may deviate further from the target protein\\u2019s data distribution?\\n\\n| | AAR (mean/median) | scTM (mean/median) |\\n|------------------------------------------------------------------------------------------------------------|-------------------|--------------------|\\n| ESM-3 (1.4B) | 47.06/46.24 | 0.90/0.95 |\\n| DPLM-2 (650M) + argmax | 49.01/50.10 | 0.88/0.93 |\\n| DPLM-2 (650M) + temperature annealing sampling (linearly annealing from 2.2 -> 0.1 for 100 decoding steps) | 43.15/42.24 | 0.88/0.93 |\\n\\n**`A5`**: Thanks for your insightful question!\\n\\n**About scTM**: Although the actual gap between DPLM-2 (650M) and ESM-3 is not that large (0.88 vs 0.90 on cameo 2022), we suggest that this difference can be attributed to the lossy structure encoding caused by discrete tokenization. ESM3 introduces an important geometric attention module to encode atomic coordinate of input structure when available, while DPLM-2 currently relies on pure structure tokens. This also leads to the discussion of trade-offs and further directions of structure modeling in multimodal PLM, which exactly corresponds to your last question. Please take a look at our elaborate discussion on this in Q9.\\n\\n**About high AAR**: This mainly arises from the sampling strategy. Here we ablated the sampling strategy to study its impact. By default, we followed DPLM's approach, using argmax decoding, which selects the token with the highest probability at each timestep. This method generates sequences with high probabilities, resulting in strong amino acid recovery (AAR). In contrast, we introduced a sampling strategy with annealing temperature ranging from 2.2 to 0.1 to enhance sequence diversity. While this approach lowers AAR, it maintains the same scTM score as argmax decoding. 
This demonstrates that the temperature annealing strategy generates more diverse sequences. Although these sequences are less similar to the ground truth, they still satisfy the structural requirements, highlighting the trade-off between diversity and similarity to the target sequence.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q8`**: In Section 3.3, you state, \\u201cRecent efforts have applied this approach to protein structure coordinates (Van Kempen et al., 2024; Liu et al., 2023; Gao et al., 2024; Lu et al., 2024). This allows language models to better learn the composition of local structural elements. However, how to learn an effective structure tokenizer remains an active research question.\\u201d Given the similarity of your method to prior approaches, particularly with the use of LFQ as in Gao et al., 2024, could you elaborate on how your method contributes to addressing this active research question?\\n\\n**`A8`**: \\nThank you for your question. Our approach addresses the active research question of structure tokenization through a simpler and more practical design compared to Gao et al., 2024:\\n\\n**Simplicity of design and training**: We use a strong pretrained structure encoder (from ESM-IF) with an AF2-style decoder (triangular modules + IPA) trained using FAPE loss. There are also no unnecessary modifications to LFQ. In contrast, Gao et al introduced very complicated modifications to LFQ (SoftLFQ). This makes our method fairly easy and straightforward to implement, as well as effective and efficient for training.\\n\\n**Performance**: Despite its simplicity, our tokenizer performs competitively in structure reconstruction (recRMSD), demonstrating its ability to effectively capture structural features. Higher reconstruction does not necessarily lead to better generation quality, which is widely observed in VQ-VAE literature while the LFQ paper [2] also clearly elaborated on this. 
In this paper, we have shown that our tokenizer can actually support various generation tasks with decent generation performance, while Gao et al. (2024) mainly focused on evaluation of reconstruction. For generation, they only assessed antibody CDR infilling, which is fairly easy, and the general protein generation ability of their tokenizer remains unclear. \n\n[1] Gao et al.: FoldToken: Learning Protein Language via Vector Quantization and Beyond. Arxiv 2024\n\n[2] Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation. ICLR 2024\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you so much for your support of our work and for so many insightful and thoughtful comments! We would like to address your concerns and questions point-by-point below. Your insightful comments and suggestions have greatly improved our manuscript. Please have a check, and we are happy to address any further feedback!\"}", "{\"title\": \"General Response & Summary of Rebuttal\", \"comment\": [\"Dear Reviewers, ACs and SACs,\", \"We want to express sincere appreciation for all reviewers' efforts in reviewing and providing valuable suggestions!
We have tried our best to address reviewers' concerns respectively, mainly including:\", \"Added comprehensive ablation study of training strategies and analysis of structure tokens, as suggested by Reviewer f8BY.\", \"Added recognized baselines and explored temperature-annealed sampling strategy for improved sequence novelty in inverse folding as suggested, by Reviewer f8BY and Reviewer FqCq.\", \"Added representation learning experiments for investigating the factors contributing to the performance gap between DPLM-2 and DPLM, as suggested by Reviewer f8BY and Reviewer FqCq.\", \"Added discussion about theoretical convergence guarantees on the consistency between structure and sequence under the conditional independence assumption in diffusion model, as suggested by Reviewer f8BY.\", \"Added essential technical details and improved clarity in our paper as suggested by Reviewer 4A34.\", \"Added comprehensive analysis of scTM, scRMSD and secondary structure for unconditional protein generation, as suggested by Reviewer 4A34 and Reviewer FqCq.\", \"Added discussion on comparison of structure tokenization method between our method and Gao et al., 2024, as suggested by Reviewer 4A34.\", \"Added exploration on training on larger-scale structure dataset, as suggested by Reviewer FqCq.\", \"Added discussion on information that structure tokens learn and how to address the potential issue of lossy compression and the absence of fine-grained structural variation introduced by discrete structure tokens, as suggested by Reviewer FqCq.\", \"Added discussion on the repetition issues in DPLM and DPLM-2 as suggested by FqCq.\", \"Also fixed some abnormal evaluation results in unconditional protein generation caused by incorrect implementation or sampling configurations.\", \"We again thank everyone's time and effort in discussing, providing valuable and inspiring feedback, and helping us improve our manuscript. 
Moreover, we have accordingly revised the paper to best include most of the insightful suggestions and comments from the reviewers. We do sincerely appreciate you!\", \"We are very happy to address any further feedback during the discussion phase!\", \"Many thanks, and cheers!\", \"Authors\"]}", "{\"comment\": \"Dear Reviewer f8By,\\n\\nHi, thank you for taking the time to provide such thoughtful and valuable feedback on our manuscript. We deeply appreciate your insights, and we have tried our best to address your concerns in our response. Please kindly check it out.\\n\\nAs the final deadline for manuscript revisions is approaching (Nov 26 AoE), we would be very grateful for any further feedback you might have. Please don\\u2019t hesitate to reach out if you have additional questions\\u2014we\\u2019d be happy to provide further clarifications!\\n\\nLooking forward to hearing from you, and many many thanks!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q6`**: In Table 2, what is meant by DPLM-2 (seq \\u2192 structure) or (structure \\u2192 seq)? If these indicate modality-by-modality generation, could you clarify how this is implemented?\\n\\n**`A6`**: \\nThanks for pointing this out, we have updated our paper to improve clarity. The co-generation can be performed in simultaneous generation (co-generation) and cascaded workflow: first generating the structure then the sequence conditioned on generated structure (struct \\u2192 seq), and the reverse way (seq \\u2192 struct), without the need of other folding or inverse folding models.\\n\\n> **`Q7`**: For the protein representation learning evaluation, it might be useful to include a broader range of baselines, such as GNN-based models, for a more comprehensive comparison.\\n\\n**`A7`**: \\nThanks for your valuable suggestion! We have accordingly added the results of GearNet [1], as the GNN-based baseline, as you suggested. 
We have updated the results in Table 6 of our paper.\\n\\n[1] Protein representation learning by geometric structure pretraining. ICLR 2023\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q1`**: The model in this paper is trained on a relatively small protein structure dataset, lacking exploration on larger-scale structural data. I believe that extending multimodal protein language models to larger protein sequence and structure datasets could further unlock the potential of this approach.\\n\\n**`A1`**: \\nThanks for your suggestions! As we stated in the discussion section, we fully agree that the limited numbers of structure data may hinder DPLM-2 from unlocking its full potential. As you suggested, during the last week, we curated a 1.3 million predicted structure dataset from AlphaFoldDB, where predicted structures were representative entries obtained by MMSeq2 sequence clustering as well as Foldseek structure clustering. This data size is an order of magnitude larger than the current structure data we used, and not too much larger to help us perform an efficient proof-of-concept of whether enlarging data size can boost our model in a course of a week. \\n\\nThe following table summarizes our primary explorations on how the size of the data, data mixing strategy as well as training strategies affect the model performance (DPLM-2 650M). 
\\n| exp_id | | size | scTM | scRMSD | Foldseek cluster |\\n|--------|--------------------------------------------------------------|-----------------|---------------|-----------------|------------------|\\n| 0 | PDB + swissprot (unclustered, original paper) | 200K | 0.925 \\u00b1 0.085 | 3.899 \\u00b1 3.723 | 54.5 |\\n| 1 | PDB + swissprot (unclustered, original paper) w/o selfmixup | 200K | 0.916 \\u00b1 0.099 | 4.656 \\u00b1 6.366 | 44.0 |\\n| 2 | PDB+swissprot (clustered with seq-identity=0.5) | 110K clusters | 0.883 \\u00b1 0.128 | 5.661 \\u00b1 6.532 | 52.8 |\\n| 3 | AFDB_reps (clustered) | 1.3M clusters | 0.726 \\u00b1 0.237 | 22.557 \\u00b1 24.516 | 60.8 |\\n| 4 | pretrained on exp 3 then finetuned using data as exp 2 | 1.3M then 110K | 0.904 \\u00b1 0.101 | 5.411 \\u00b1 6.882 | 53.0 |\\n\\nMeanwhile, we find that training on the larger-scale structural data can further enhance the representation learning, please refer to **Q8** for more details.\"}", "{\"comment\": \"> `Q8:` Regarding the representation learning comparison, there is a huge body of work out there for comparison on these tasks. Why not to compare against strong baselines? Again, it is not about beating SOTA, but about correctly reflecting the state of affairs in the experiments. Also, there is a more recent and more strongly performing Gear-net version (https://arxiv.org/abs/2303.06275).\\n\\n\\nThanks for your valuable suggestion. We have accordingly added more recent strong baselines and updated the stronger GearNet version as you suggested. The results of newly supplemented baseline models are mainly focused on the EC and GO downstream tasks. Meanwhile, we discovered discrepancies between the new results and previous results in the EC and GO tasks. We utilized old version of SaProt codebase as the evaluation pipeline. 
After consulting with the authors of SaProt, it is because there was an issue in the early SaProt codebase in calculating the metrics for these tasks, which had been fixed later. As such, we have updated the EC and GO results for both SaProt and DPLM-2.\\n\\n| Models | Thermostability | HumanPPI | Metal Ion Binding | EC | GO | | | DeepLoc | |\\n|------------------------|-----------------|----------|-------------------|-------|-------|-------|-------|-------------|--------|\\n| | | | | | MF | BP | CC | Subcellular | Binary |\\n| SaProt | 0.724 | 86.41 | 75.75 | 0.882 | 0.682 | 0.486 | 0.479 | 85.57 | 93.55 |\\n| SaProt-GearNet | 0.660 | 85.80 | 74.44 | 0.889 | 0.678 | 0.522 | 0.508 | 84.16 | 93.63 |\\n| MIF-ST | 0.694 | 75.54 | 75.08 | 0.807 | 0.633 | 0.375 | 0.322 | 78.96 | 91.76 |\\n| GearNet | 0.571 | 73.86 | 71.26 | 0.874 | 0.644 | 0.481 | 0.476 | 69.45 | 89.18 |\\n| GearNet updated | -- | -- | -- | 0.890 | 0.681 | 0.488 | 0.464 | -- | -- |\\n| CoupleNet [1] | -- | -- | -- | 0.866 | 0.669 | 0.467 | 0.494 | -- | -- |\\n| CDConv [2] | -- | -- | -- | 0.820 | 0.654 | 0.453 | 0.479 | -- | -- |\\n| ESM2-650M-S [3] | 0.668 | -- | -- | 0.823 | 0.649 | 0.463 | 0.519 | -- | -- |\\n| VABS-NET [4] | -- | -- | -- | 0.900 | 0.695 | 0.531 | 0.579 | -- | -- |\\n| ESM-GearNet-INR-MC [5] | -- | -- | -- | 0.896 | 0.683 | 0.518 | 0.504 | -- | -- |\\n| ESM2-650M | 0.691 | 84.78 | 71.88 | 0.868 | 0.670 | 0.473 | 0.470 | 83.68 | 92.28 |\\n| DPLM | 0.695 | 86.41 | 75.15 | 0.875 | 0.680 | 0.357 | 0.409 | 84.56 | 93.09 |\\n| DPLM-2 | 0.714 | 84.44 | 74.28 | 0.881 | 0.682 | 0.493 | 0.481 | 82.98 | 93.64 |\\n\\n----\\n\\n\\n[1] Learning Complete Protein Representation by Dynamically Coupling of Sequence and Structure. NIPS 2024\\n\\n[2] Continuous-Discrete Convolution for Geometry-Sequence Modeling in Proteins. ICLR 2023\\n\\n[3] Structure-informed Protein Language Model. Arxiv 2024\\n\\n[4] Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains. 
ICML 2024\\n\\n[5] Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning. ICLR 2024\\n\\n\\n\\n\\n> `Q9:` Regarding one of the main claims of the paper, you do not answer the question. The following is outlined in the main contributions: \\\"DPLM-2 enables unconditional co-generation of designable and diverse proteins that guarantees consistency between structure and sequence\\\". It seems like a gross overstatement without a formal rigorous proof. If there is no guarantees of consistency, then only empirical evidence should be declared.\\n\\nWe apologize for the confusion due to our inaccurate use of \\\"guarantees\\\". We never aimed to over-claim that the structure-sequence consistency is mathematically and rigorously guaranteed. Instead, we mainly claim our empirical findings. To make the statements more precise, we will rephrase our wording to \\\"DPLM-2 enables unconditional protein co-generation of both structure and sequence, which demonstrates good structure-sequence consistency.\\\" \\n\\nBesides, your initial question on this greatly encouraged us to delve deep into this hence we provided a discussion with literature review in our initial response. We are also truly grateful for your inspirations!\"}", "{\"comment\": \"> `Q4:` On page 9, line 442 is stated that DPLM-2 adopts argmax decoding. In the original DPLM paper argmax did not work. Can you elaborate on this?\\n\\n`A4`: What you mention here is about the decoding strategy for inverse folding and forward folding, where strong and clear conditioning information is given. In the original DPLM paper, their inverse-folding results were actually also obtained by argmax decoding so as to maximize generation accuracy, while stochastic sampling was used for unconditional generation or scaffolding. 
In DPLM-2, we follow their settings for a fair comparison.\n\nWe understand that you are also concerned about whether the high AAR indicates that DPLM-2 simply overfits or collapses to the native sequence and is less generalizable for \"de novo design\". This is not the case. Here, we provide an ablation study on the sampling strategy in inverse folding. The argmax decoding strategy picks the token with the highest probability at each timestep, yielding sequences with high probability and resulting in high amino acid recovery (AAR). On the other hand, we employ a sampling strategy with annealing temperature from 2.2 to 0.1 to improve diversity, and the generated sequence has a lower AAR while maintaining the same scTM as argmax decoding. This demonstrates that the temperature annealing sampling strategy is capable of generating more diverse sequences that, while not similar to the ground truth, still meet the given structural conditions.\n\n| | AAR | scTM |\n|---|---|---|\n| argmax decoding | 49.01/50.10 | 0.88/0.93 |\n| Temperature-annealed sampling | 43.15/42.24 | 0.88/0.93 |\n\n\n\n> `Q5:` Authors compare DPLM-2 on inverse folding task with weak baselines. Adding recognized IF models would greatly benefit the work. The same goes to other tasks, e.g. representation learning\n\n`A5:` Thanks for your valuable suggestion. **For inverse folding task**, we mainly focus on the comparison with other multimodal generative models (MultiFlow, ESM3) in our paper. We have also added more recognized baseline methods in inverse folding evaluation (ProteinMPNN & LM-Design [1]). We conduct experiments on the CAMEO 2022 test set. We find that DPLM-2 is able to achieve results close to the strong baselines despite a slightly lower scTM.
To further improve scTM to bridge the last gap, there are several potential directions: (1) Inverse folding SFT: DPLM-2 conducts this task in a zero-shot manner while other systems are purpose-built models, thus task-oriented SFT could help as we have observed in folding ; (2) better structure modeling includes introducing separate structure encoders for structure encoding and generation purposes [3], or hybrid tokenization for recovering the lost fine-grain structural variations [4].\\n\\n| Model | AAR | scTM |\\n|---------------------------------------------|-------------|-----------|\\n| ProteinMPNN | 44.45/45.93 | 0.91/0.97 |\\n| LM-Design | 56.40/58.24 | 0.88/0.96 |\\n| DPLM-2 650M (argmax decoding) | 49.01/50.10 | 0.88/0.93 |\\n| DPLM-2 650M (temperature-annealed sampling) | 43.15/42.24 | 0.88/0.93 |\\n\\n\\n*For representation learning evaluation*, as you suggested, we have added GearNet [2], as the GNN-based baseline, for a broader range of baselines. We have updated the results in Table 6 of our paper.\\n\\n---\\n\\n[1] Structure-informed language models are protein designers. ICML 2023\\n\\n[2] Protein representation learning by geometric structure pretraining. ICLR 2023\\n\\n[3] Janus: Decoupling visual encoding for unified multimodal understanding and generation. Arxiv 2024. \\n\\n[4] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer.\"}", "{\"comment\": \"> `Q2:` Authors compare DPLM and DPLM-2 only with EvoDiff on unconditional sequence generation. Adding more baselines of different architectures (AR transformers, CARP, continuous diffusion, flow-matching, etc.) would greatly improve the work. The model is evaluated only structurally, but it is langugae model after all. Using sequence-based evaluation metrics, including sequence clustering for the diversity should benefit the soundness of the work.\\n\\n\\n`A2:` Thanks for your valuable suggestion on adding more baselines and sequence clustering for sequence diversity. 
We have added more baselines with various architectures on unconditional sequence generation, including autoregressive language models (ProGen2), CNNs (CARP), continuous diffusion on ESM2 embedding space (DiMA) and flow matching (MultiFlow). We also include sequence diversity using mmseq2 clustering at sequence identity = 0.5 for good quality samples with pLDDT > 70. This quality threshold for diversity is inspired by Multiflow, which is more informative by avoiding diverse but messy sequences. We follow the experimental setting in DiMA [3], generating 2048 sequences with lengths sampled from the length distribution of PDB + SwissProt. The results highlight that DPLM-2 is able to generate structurally plausible and diverse sequences for protein generation. We also find that training data distillation greatly helps Multiflow's sequence quality in terms of pLDDT and diversity.\n\n\n| | ProGen2 [1] | CARP [2] | DiMA [3] (result from their paper) | EvoDiff | MultiFlow (official w/ distillation) | MultiFlow (retrained on DPLM-2 data) | DPLM | DPLM2 |\n|---|---|---|---|---|---|---|---|---|\n| pLDDT | 57.2 | 30.0 | 83.3 | 35.846 | 79.4 | 62.6 | 84.0 | 83.7 |\n| diversity (\u2191) / mmseq cluster at seq-id=0.5 & plddt > 70 | - | 0.0 | - | 0.020 | 0.860 | 0.294 | 0.745 | 0.755 |\n\n[1] ProGen2: Exploring the Boundaries of Protein Language Models. 2022\n\n[2] CARP: Convolutions are competitive with transformers for protein sequence pretraining. Cell Systems 2024\n\n[3] DiMA: Diffusion on language model embeddings for protein sequence generation. Arxiv 2024\n\n\n\n> `Q3:` There are some issues with model evaluation. It is not clear why the Authors use pdb-TM (calculate TM-score against PDB), if the model is trained also on the synthetic data.
Third, the designability metric used in this work evaluates the consistency between the generated structure and the prediction of ESMFold on the generated sequence. It does not measure protein \"quality\".\n\n`A3:` Thanks for your suggestions!\n\n**About \"pdb-TM\"**. We use \"pdb-TM\" to assess the novelty of proteins generated by DPLM-2 against natural proteins. As you suggested, it is indeed more reasonable to have TM-scores against swissprot and pdb structures (swissprot & pdb TM). As shown in the table, the \"SwissProt & PDB TM\" scores for DPLM-2 (650M) align closely with the \"pdb-TM\" results, consistent with the findings presented in our paper. These results from unconditional sampling indicate that DPLM-2, as a generative model of protein structure and sequence, learns the training data distribution and serves as a foundation for various potential conditional purposes. Finally, we will change to \"avg. max-TM\" in our revised version, as used in FoldFlow-2 [1], for a more accurate terminology.\n\n| | pdb-TM | swissprot & pdb TM |\n|---|---|---|\n| DPLM-2 (650M) | 0.640 \u00b1 0.204 | 0.744 \u00b1 0.170 |\n\n**About \"quality\"**. Thank you for pointing this out! We initially followed FrameDiff and Multiflow in using the term \"quality\" for self-consistency-based designability of generated structures. To avoid any ambiguity, we will rephrase our wording, changing \"quality\" to \"designability\".\n\n[1] Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Backbone Generation. NeurIPS 2024\"}", "{\"comment\": \"> `Q9:` The experiment on DeepLoc is not described in sufficient detail. Why you chose to use only one tiny dataset? Could you provide experiments that show the described catastrophic forgetting issue, which is of high importance?\n\n`A9`: Thank you for your question.
In our study, we selected the DeepLoc subcellular dataset because it highlights a pronounced performance gap between DPLM-2 and DPLM, which serves as the initialization model for DPLM-2. This dataset provides an ideal testbed for investigating the factors contributing to this performance drop.\", \"we_hypothesize_two_potential_causes_for_the_observed_degradation\": \"1. DPLM-2 needs to accommodate additional structural representations given the same model capacity (parameters), which could negatively impact the representation learning performance.\\n2. As continuous training on smaller magnitude of structure data, DPLM-2 may experience catastrophic forgetting of the representation power gained during DPLM's large-scale sequence pretraining.\\n\\nTo explore (1), we eliminated pretraining factors by retraining both DPLM and DPLM-2 with random initialization on the SwissProt and PDB datasets for 100K training steps. Additionally, we evaluated performance across all three tasks (HumanPPI, MetalIonBinding & DeepLoc) where DPLM-2 underperformed compared to DPLM. As shown in the table below, when large-scale sequence pretraining is removed, DPLM-2 significantly outperforms DPLM (exp 2 vs exp 1). This indicates that incorporating structural information enhances performance rather than harming it, which rejects the hypothesis (1).\\n\\nHowever, when DPLM undergoes large-scale pretraining and DPLM-2 is subsequently trained from the pretrained DPLM, the performance of DPLM-2 on certain tasks diminishes (exp 4 vs exp 3). Given the relatively smaller structure data for DPLM-2 training, this suggests that catastrophic forgetting occurs during DPLM-2's multimodal training, reducing the advantages of large-scale pretraining. To verify and mitigate this, during the course of rebuttal of last week, we have curated additional 1.3M predicted structures from AFDB_rep [1], and trained DPLM-2 on this larger data. 
The experimental results show that the amount of structure data is indeed a key factor for better multimodal protein representations, leading to significantly improved performance over the original data (exp 5 vs exp 4). In particular, on HumanPPI, enlarging data from 200K to 1.5M helps DPLM-2 attain 2.3% improvement, and also outperforms SaProt, a strong multimodal PLM trained with 40M foldseek tokenized AFDB data. \\n\\n| exp id | | HumanPPI (Acc%) | MetalIonBinding (Acc%) | DeepLoc (Acc%) |\\n|---|---|---|---|---|\\n| 0 | SaProt | 86.41 | 75.75 | 85.57 |\\n| 1 | DPLM (PDB + swissprot only) | 73.33 | 62.25 | 63.49 |\\n| 2 | DPLM-2 (PDB + swissprot only) | 77.22 | 69.47 | 66.77 |\\n| 3 | DPLM w/ fully pretraining on UniRef50 | 86.41 | 75.15 | 84.56 |\\n| 4 | DPLM-2 w/ seq-pretraining (finetuned from DPLM) data: swissprot + pdb (200K) | 84.44 | 74.28 | 82.98 |\\n| 5 | DPLM-2 w/ seq-pretraining (finetuned from DPLM) data: AFDB_rep + Swissprot + pdb (1.3M + 200K) | 87.78 | - | 83.42 |\\n\\n\\n[1] Clustering predicted structures at the scale of the known protein universe. Nature 2023\"}", "{\"comment\": \"Dear Reviewer f8By,\\n\\nThank you so much for your detailed and constructive feedback! We truly value your input and are currently working hard on additional experiments and analyses to address each of your points. Please rest assured that we take your comments very seriously and will provide a thorough response soon. Thanks!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper proposes a joint generative model for protein structure (represented by discrete tokens) and sequence. The paper details a comprehensive study. The authors and two of three reviewers engaged in a thorough discussion. Two referees are strongly supporting acceptance and the non-engaged referee the opposite. 
Based upon the discussion and browsing through the paper, acceptance is recommended.\", \"additional_comments_on_reviewer_discussion\": \"None.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q3`**: Regarding structure tokens: Figure 2B shows that most structure tokens are concentrated in alpha helix and beta sheet regions, with lower density in loop regions. However, in line 409 of the paper, you state that \\\"loops are highly flexible.\\\" Could this pose a potential issue? More variable regions typically contain richer and more diverse information, which may require more structure tokens to model this diversity effectively. Yet, your experimental results do not align with this. Moreover, your results in Figure 4B also indicate that the model\\u2019s designability is notably worse in regions with a higher proportion of loops. Could one possible reason for this issue be that the structure tokens in loop regions are not well-learned, resulting in weaker modeling capability?\\n\\n**`A3`**: Thanks for this great question! This leads to the discussion on how to explain different in silico designability metrics (scTM & scRMSD) and what information the structure tokens learn. For the former, we have provided our thoughts on this in the previous question, and we empirically notice that scTM for the samples indeed drops when the proportion of loops becomes higher, but it still stays above 0.8 overall. \\n\\n**About what information the structure tokens learn.** For interpretability, we update a more informative simplex plot of structure tokens vs. secondary structure in Fig. 2B. As you point out, we can observe a strong correlation between a vast majority of the structure tokens and structured local environments, where many structure tokens concentrate on the alpha helix and beta sheet vertices, while some tokens lie between regions or at the loop vertex. There is also a subset of structure tokens with less clear correspondence to specific secondary structures.
This suggests that structure tokens mostly capture clear secondary elements, some may correspond to structured local environments (in between helix and sheet), while others could be high-level abstract entities or just not well-learned entries. On one hand, we agree that \"more variable regions contain richer and diverse information\" as these regions are of high entropy. On the other hand, the nature of lossy and clustering-like vector-quantization methods is highly likely to eliminate such high-frequency high-entropy structural variations, and only keep low-frequency, low-entropy content. We suggest that this could be the major reason for the poorly learned flexible regions. And this also exactly corresponds to your last question about the trade-offs and further directions of structure modeling in multimodal PLM. Please take a look at our elaborate discussion on this in **Q9**.\"}", "{\"comment\": \"> `Q6:` The paper lacks analysis on the trained structural tokens. Additional exploration of the interpretability of the tokens themselves and utilization of the codebooks would greatly benefit the paper. Do the tokens correspond to some local environments in structure, or are they just abstract entities?\n\n`A6:` Thanks for your suggestion. In Figure 2 of the original submission, we have shown the reconstruction accuracy and an interpretation analysis on the correspondence of structure tokens and local structural elements in terms of secondary structures. As you suggested, we calculate the codebook utilization in the following table. We find that LFQ-based tokenizers always achieve nearly 100% codebook utilization with more evenly distributed code usage, while vanilla VQ-VAE struggles with codebook collapse. \n\nFor interpretability, we also update a more informative simplex plot of structure tokens vs. secondary structure in Fig. 2B.
We can observe a strong correlation between a vast majority of the structure tokens and structured local environments, where many structure tokens concentrate on the alpha helix and beta sheet vertices, while some tokens lie between regions or at the loop vertex. There is also a subset of structure tokens with less clear correspondence to specific secondary structures. This suggests that structure tokens mostly capture clear secondary elements, some may correspond to structured local environments (in between helix and sheet), while others could be high-level abstract entities. \n\n| tokenizer | codebook size | codebook utilization | train | cameo 2022 | | |\n|---|---|---|---|---|---|---|\n| | | | lddt_ca | lddt_ca | tm-score | rmsd |\n| VQ-VAE-1k | 1024 | 63.50% | 0.76 | 0.71 | 0.8 | 6.14 |\n| LFQ-1k | 1024 | 100% | 0.82 | 0.77 | 0.86 | 4.35 |\n| LFQ-2k | 2048 | 100% | 0.84 | 0.79 | 0.88 | 3.62 |\n| LFQ-4k | 4096 | 100% | 0.86 | 0.82 | 0.91 | 3.31 |\n| LFQ-8k | 8192 | 99.50% | 0.92 | 0.86 | 0.93 | 2.58 |\n| LFQ-16k | 16384 | 98.60% | 0.92 | 0.87 | 0.94 | 2.32 |\n\n\n> `Q7:` The model does not learn the distribution of protein lengths. Have you tried to overcome this limitation?\n\n`A7`: Thanks for your question. In our paper, our primary purpose is to conduct fair comparisons with previous models under similar settings to better assess the strengths and limitations of our models, hence we follow MultiFlow in sampling within length intervals. Meanwhile, in many protein design applications users have prior knowledge of the target lengths or the ranges of lengths, or indeed require explicit length control and manipulation. As such, we may not need to directly learn the length distribution.\n\nDPLM-2 is capable of generating proteins from the empirical length distribution. Specifically, we sample 2048 sequences with lengths sampled from the length distribution of PDB + SwissProt.
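Drawing generation lengths from an empirical training-set length distribution, as described above, can be sketched as follows. The toy length list is an assumption for illustration; in practice the lengths would come from the actual PDB + SwissProt records.

```python
import random
from collections import Counter

def sample_lengths(train_lengths, n, seed=0):
    """Draw n generation lengths from the empirical length distribution,
    i.e., resample the observed lengths uniformly with replacement."""
    rng = random.Random(seed)
    return [rng.choice(train_lengths) for _ in range(n)]

# Toy stand-in lengths; frequencies in the draws approach the 2:3:1
# proportions of the training list as n grows.
train_lengths = [120, 120, 250, 250, 250, 430]
draws = sample_lengths(train_lengths, 2048)
freq = Counter(draws)
```

Each generated protein then uses one of these sampled lengths, rather than a fixed grid of length intervals.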
The table below demonstrates that DPLM-2 can generate highly plausible proteins, consistent with sampling within length intervals.\n\n| | scTM | scRMSD | pLDDT |\n|---|---|---|---|\n| Original: [100, 200, ..., 500] | 0.925 \u00b1 0.085 | 3.899 \u00b1 3.723 | 82.686 |\n| Training set (PDB+Swissprot) length dist. | 0.929 \u00b1 0.086 | 3.967 \u00b1 3.257 | 83.698 |\n\n> `Q8:` The ablation results presented in 4.1.3 and table 3 are controversial. Could you please clarify the procedure, how many samples was used for evaluation and so on?\n\n`A8`: Thanks for pointing this out. In the ablation study, we evaluate the effects of sequence pre-training and data augmentation on unconditional protein generation. Specifically, we investigate the effect of sequence pre-training by randomly initializing DPLM-2 instead of using DPLM parameters, while for the effect of predicted structures we leverage only PDB structures for training. We conduct experiments on a 150M parameter DPLM-2. For each DPLM-2 variant, we sample 100 examples for each length in 100, 200, 300, 400 and 500. We evaluate designability by scTM and diversity by the number of different clusters for each length. The results of Table 3 in our paper demonstrate the effectiveness of sequence pre-training and data augmentation.\"}", "{\"comment\": \"> `Q10:` Adding a discussion of the used self-mixup training strategy would strengthen the work. Also the experiment and data in table 8 are not discussed. If the absolute numbers of clusters are reported, the number of samples that underwent clustering should be stated in the caption.\n\nThanks for your suggestions. We would like to further clarify the rationale behind the self-mixup training strategy and its details as follows.
\n\n**Clarification on self-mixup**\n\nThe exposure bias problem, which is described as the input mismatch between training and sampling, has already garnered attention in the research of continuous diffusion [1,2,3] and NLP [4,5]. We find that the discrete diffusion model also encounters this issue. According to Eq. 4 in the manuscript, the model is trained to model $p\_{\theta}(\mathbf{x}^{(0)}\_i|\mathbf{x}^{(t)})$, essentially doing masked-prediction. During training, the model makes predictions conditioned on $\mathbf{x}^{(t)}$, which is a mixup of ground-truth tokens and mask tokens as noise: ${\mathbf{x}}^{(t)} = \alpha\_t {\mathbf{x}^{(0)}} + (1-\alpha\_t)\mathbf{q}\_{\text{noise}}$; however, during inference, the model predicts $p\_{\theta}(\mathbf{x}^{(0)}\_i|\hat{\mathbf{x}}^{(t)})$ conditioned on the previously generated sample $\hat{\mathbf{x}}^{(t)}$, which is a mixup of model prediction and masks, essentially requiring denoising and masked-prediction. The difference between $\mathbf{x}^{(t)}$ and $\hat{\mathbf{x}}^{(t)}$ causes a discrepancy between $p\_{\theta}(\mathbf{x}^{(0)}\_i|\mathbf{x}^{(t)})$ and $p\_{\theta}(\mathbf{x}^{(0)}\_i|\hat{\mathbf{x}}^{(t)})$, potentially leading to error accumulation since the model tends to be over-confident in its predictions (as in training the model is always exposed to ground-truth, hence the name exposure bias), and negatively impacting the generation performance.\n\nTo mitigate this, we propose to bridge this gap by training the model to make predictions conditioned on its own predicted results:\n\n1. Predict $\hat{\mathbf{x}}^{(0)}$ conditioned on the ground truth training sample $\mathbf{x}^{(t)}$\n2. Construct the generated sample: $\hat{\mathbf{x}}^{(t)} \leftarrow \alpha\_t \hat{\mathbf{x}}^{(0)} + (1-\alpha\_t)\mathbf{q}\_{\text{noise}}$\n3.
Compute the self-mixup loss according to Eq. 4:\n$\hat{\mathcal{J}}\_t = \mathbf{E}\_{q(\mathbf{x}^{(0)})} \left[\lambda^{(t)} \sum\_{1 \leq i \leq L} b_i(t) \cdot \log p\_{\theta}(\mathbf{x}^{(0)}\_i|\hat{\mathbf{x}}^{(t)})\right]$\n\nWe can illustrate this more clearly with a break-down example. Let the ground truth $\mathbf{x}^{(0)}$ be `A B C D E` and $\mathbf{x}^{(t)}$ be `[m] [m] [m] D E` as in masked discrete diffusion, where `[m]` represents the mask token.\n1. Run a model forward pass to obtain the model prediction $\hat{\mathbf{x}}^{(0)}$, which is `a b c D E` (with the ground-truth tokens `D E` preserved at non-masked positions), where `a b c` are model predictions by argmax.\n2. Construct the self-mixup input $\hat{\mathbf{x}}^{(t)}$. In our experiments, we always replace the ground-truth tokens in $\mathbf{x}^{(t)}$ (`D E` in this case) with the mask token, so $\hat{\mathbf{x}}^{(t)}$ becomes `a b c [m] [m]`.\n3. Compute the self-mixup loss: the cross entropy of the model's predictions conditioned on $\hat{\mathbf{x}}^{(t)}$ (`a b c [m] [m]`) against $\mathbf{x}^{(0)}$ (`A B C D E`) at all positions. More specifically, this can be seen as applying a masked language modeling loss at mask positions and a denoising autoencoder loss at non-masked positions. Moreover, this also improves sample-efficiency compared to typical masked discrete diffusion, where the training loss is applied only to mask positions.\n\nIn our experiments, we first train DPLM-2 with the original loss $\mathcal{J}\_t$ in Eq. 4 for 50K steps to ensure the prediction quality. This step is crucial; otherwise, the model's predictions might be poor, leading to an excessively large self-mixup loss and causing training instability. 
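For illustration, the re-masking in step 2 of the break-down example above can be sketched in plain Python (toy code, not the actual DPLM-2 implementation; token strings stand in for tensor ids):

```python
MASK = "[m]"

def self_mixup_input(x_t, model_pred):
    """Build the self-mixup input: keep the model's predictions at the
    positions that were masked in x_t, and re-mask the positions that
    held ground-truth tokens."""
    return [pred if tok == MASK else MASK for tok, pred in zip(x_t, model_pred)]

x0   = ["A", "B", "C", "D", "E"]      # ground truth x(0)
x_t  = [MASK, MASK, MASK, "D", "E"]   # masked sample x(t)
pred = ["a", "b", "c", "D", "E"]      # model prediction (argmax), ground truth kept

x_hat_t = self_mixup_input(x_t, pred)
print(x_hat_t)  # ['a', 'b', 'c', '[m]', '[m]']
# The self-mixup loss is then the cross entropy of the model's predictions
# conditioned on x_hat_t against x0, at all positions.
```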
After this initial phase, we continue training with the self-mixup loss $\hat{\mathcal{J}}\_t$ to mitigate the exposure bias issue.\n\n**Regarding the experimental details of Table 8**\n\nFor the results presented in Table 8, we conduct experiments with the DPLM-2 650M model on the unconditional generation task. We sample 100 proteins within each length interval and calculate scTM for structure-sequence compatibility and the number of clusters for diversity. We will supplement the necessary experimental details in the captions of all diagrams, tables, and figures in our paper in the next version, as you suggested.\n\n-----\n[1] Elucidating the Exposure Bias in Diffusion Models. ICLR 2024\n\n[2] Input Perturbation Reduces Exposure Bias in Diffusion Models. ICML 2023\n\n[3] Alleviating Exposure Bias in Diffusion Models through Sampling with Shifted Time Steps. ICLR 2024\n\n[4] Sequence level training with recurrent neural networks. ICLR 2016\n\n[5] Scheduled sampling for sequence prediction with recurrent neural networks. NIPS 2015\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"The pLDDT metric reflects the structural plausibility of a sequence. In addition to pLDDT, we also include a sequence-related metric: sequence diversity, computed via MMseqs2 clustering at sequence identity = 0.5 over good-quality samples with pLDDT > 70. This quality threshold, following MultiFlow, makes the diversity metric more informative by excluding diverse but low-quality sequences. We follow the experimental setting in DiMA [1], generating 2048 sequences with lengths sampled from the length distribution of PDB + SwissProt. The results highlight that DPLM-2 is able to generate structurally plausible and diverse sequences for protein generation. 
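For clarity, the diversity computation can be sketched as follows (illustrative Python; in practice the cluster assignments come from the `mmseqs easy-cluster` output, which is mocked here as a simple mapping):

```python
def diversity(cluster_of, quality, threshold):
    """Diversity = (#clusters) / (#samples), restricted to samples whose
    quality score (e.g. pLDDT or scTM) exceeds the threshold."""
    kept = [s for s, q in quality.items() if q > threshold]
    if not kept:
        return 0.0
    return len({cluster_of[s] for s in kept}) / len(kept)

# Toy example: four samples, two of which fall into the same cluster.
cluster_of = {"s1": "rep1", "s2": "rep1", "s3": "rep2", "s4": "rep3"}
plddt = {"s1": 85.0, "s2": 90.0, "s3": 70.5, "s4": 60.0}
print(diversity(cluster_of, plddt, 70))  # 2 clusters / 3 kept samples ≈ 0.667
```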
We also find that training data distillation greatly helps Multiflow's sequence quality in terms of pLDDT and diversity.\n\n| | ProGen2 [2] | CARP [3] | DiMA [1] (result from their paper) | EvoDiff | MultiFlow (official w/ distillation) | MultiFlow (retrained on DPLM-2 data) | DPLM | DPLM2 |\n|---|---|---|---|---|---|---|---|---|\n| pLDDT | 57.2 | 30.0 | 83.3 | 35.846 | 79.4 | 62.6 | 84.0 | 83.7 |\n| diversity (\u2191) / mmseq cluster at seq-id=0.5 & plddt > 70 | - | 0.0 | - | 0.020 | 0.860 | 0.294 | 0.745 | 0.755 |\n\n**<End of `Q4` >**\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q5`**: While DPLM-2 enables flexible generation, what are the trade-offs in structural invariance when using structure tokens instead of 3D coordinates?\n\n**`A5`**:\nThank you for your insightful question. \n\nOur approach can be seen as delegating the learning of structurally invariant topology to the tokenizer and the language model, and geometric reasoning over 3D coordinates to the structure decoder, parameterized by triangular modules and IPAs from AlphaFold2 and trained with the FAPE loss. This is similar to AF2 and ESMFold, where sequence encoding modules like Evoformer (co-evolution encoding) and ESM (amino-acid encoding) provide invariant features (in the form of single and pair embeddings) to a structure decoder that learns to convert invariant features into 3D coordinates. The AF2-style structure decoder does not enforce strict equivariance to rigid transformations. Instead, it relies on the FAPE loss to ensure structural consistency, which minimizes coordinate errors in a manner that is invariant to global rotations and translations. 
\\n\\nAs such, we suggest that the primary trade-off when using invariant structure tokens instead of 3D coordinates mainly lies in the potential loss of fine-grained structural details. Structure tokens cluster similar local structures into discrete representations, which inherently introduce quantization errors. This trade-off, on one hand, enables efficient multimodal learning and generation by simplifying the representation space, on the other hand, represents an important challenge in the current field of multimodal protein language models as suggested by the Reviewer FqCq (Q9) and in our discussion section. Future efforts should be made towards better structure modeling to mitigate this trade-off. Some potential directions include introducing separate structure encoders for structure encoding and generation purposes [1], or hybrid tokenization for recovering the lost fine-grain structural variations [2].\\n\\n[1] Janus: Decoupling visual encoding for unified multimodal understanding and generation. Arxiv 2024. \\n\\n[2] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q4`**: Sequence tokens use a smaller vocabulary than structure tokens. Are the corresponding structural embeddings in DPLM-2 trained from scratch during fine-tuning?\\n\\n**`A4`**: That's correct. The structural token embeddings in DPLM-2 are trained from scratch. To efficiently utilize the evolutionary information from pre-trained sequence-based DPLM, DPLM-2 uses a warm-up strategy (outlined in Section 3.2 of our paper). This approach initializes DPLM-2 with the weights of the sequence-trained DPLM.\\n\\nMore specifically, since DPLM's vocabulary consists only of amino acids, DPLM-2 expands this with discrete structure tokens. The embeddings for these new tokens are initialized using the mean and standard deviation of the learned amino acid embeddings. 
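A minimal sketch of this initialization (illustrative NumPy; the vocabulary sizes and embedding dimension are placeholders, not the actual DPLM-2 configuration):

```python
import numpy as np

def init_new_token_embeddings(old_embed, n_new, rng):
    """Draw embeddings for newly added (structure) tokens so that they
    match the per-dimension mean/std of the pretrained embedding table."""
    mu = old_embed.mean(axis=0)
    sigma = old_embed.std(axis=0)
    return mu + sigma * rng.standard_normal((n_new, old_embed.shape[1]))

rng = np.random.default_rng(0)
aa_embed = rng.standard_normal((33, 64))      # placeholder amino-acid table
struct_embed = init_new_token_embeddings(aa_embed, 8192, rng)
full_embed = np.concatenate([aa_embed, struct_embed])  # expanded vocabulary
```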
This embedding initialization keeps the distributional statistics of the embedding space consistent with the pre-trained DPLM, ensuring stable early-stage training (for learning structure-sequence alignment) and reducing the risk of extreme gradients that could cause training instability.\"}", "{\"title\": \"Thank you so much!\", \"comment\": \"Dear Reviewer 4A34,\\n\\nWe are thrilled that our responses have addressed all of your concerns, and we sincerely appreciate your supportive feedback and the increased rating!\\n\\nYour thoughtful suggestions and encouraging words mean a lot to us. We will surely make more efforts to include suggestions, discussions, and new results in a better form of presentation in the future revision. Your feedback has been invaluable in helping us improve, and we cannot thank you enough for your support!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q3`**: Mixed Results: The results on certain tasks are not fully convincing. In particular, while DPLM-2 exhibits high scTM in unconditional protein generation, it does not achieve lower scRMSD compared to other methods, indicating potential limitations in generation quality.\\n\\n**`A3`**: Thank you for pointing this out. This is a great question about the evaluation metrics and their interpretation for the in silico designability of protein generation or design. 
We would like to address your question as follows: \n| | scTM | scRMSD | helix ratio | strand ratio | coil ratio |\n|---|---|---|---|---|---|\n| native PDB samples | 0.904 \u00b1 0.129 | 4.623 \u00b1 5.688 | 0.36 | 0.22 | 0.42 |\n| Multiflow (w/ distillation) | 0.930 \u00b1 0.098 | 3.208 \u00b1 4.741 | 0.75 | 0.10 | 0.15 |\n| Multiflow (w/o distillation) | 0.750 \u00b1 0.163 | 9.306 \u00b1 8.499 | 0.71 | 0.06 | 0.23 |\n| Multiflow (retrained on our training data) | 0.871 \u00b1 0.934 | 6.580 \u00b1 6.258 | 0.57 | 0.16 | 0.26 |\n| DPLM-2 | 0.925 \u00b1 0.085 | 3.899 \u00b1 3.723 | 0.48 | 0.17 | 0.35 |\n\nWe first need to highlight that the generated samples from DPLM-2 exhibit similar scTM (0.925) and scRMSD (3.9) to native PDB samples, which also show good scTM (0.904) with a slightly higher scRMSD (4.623). Additionally, DPLM-2 maintains a balanced structural composition (helix: 0.48, strand: 0.17, coil: 0.35), closely resembling the natural distribution. In contrast, for MultiFlow, the officially released model with distillation attains a much lower scRMSD (3.2), while the performance of our retrained version (on the same DPLM-2 training set) degrades in both scTM (0.871) and scRMSD (6.58). The lower scRMSD of MultiFlow with distillation appears to be driven by overrepresentation of structured elements (Figure 4A), i.e., a significant bias towards proteins with more helices and fewer strands and loops (also see Figure 4C). This overrepresentation drives the observed scRMSD improvement but deviates from natural protein diversity.\n\nTM-score emphasizes global topology, while RMSD is sensitive to local structural errors. As such, although scTM and scRMSD are generally correlated, discrepancies can arise. 
TM-score was designed to address this sensitivity of RMSD: since RMSD averages the distances over all residue pairs of two structures, a local error (e.g., a misoriented tail) inflates the RMSD even though the global topology is correct. TM-score, in contrast, weights small distances more strongly than large ones (each residue pair contributes $1/(1+(d_i/d_0)^2)$, where $d_0$ is a length-dependent normalization scale), which makes the score insensitive to local modeling errors. \n\nAs shown in Fig 4B, some samples from DPLM-2 with a higher loop proportion are more conformationally flexible, hence may show high scTM (>0.9) but worse scRMSD (>2.0), similar to natural proteins. However, this does not necessarily indicate a limitation in generation quality but reflects differences in metric sensitivity. \n\nAs a result, the in-silico designability of protein generation should be evaluated comprehensively using both scTM and scRMSD, as each metric offers distinct insights and serves different purposes. For users aiming to generate samples with accurate global topology, scTM serves as a reliable indicator, whereas scRMSD may occasionally exclude reasonable structures. Conversely, for applications requiring structurally rigid and stable proteins, such as functional designs (e.g., binder design), scRMSD has been shown to correlate more strongly with in vitro success rates, as suggested by RFDiffusion.\"}", "{\"comment\": \"> `Q6:` Even with added baselines, important comparisons are missing. There is a lot of active research on inverse folding, so there is no lack of good baselines and good methodology ([1-4] to name a few). 
At the end of the day, it is not about beating SOTA result, but about providing objective comparison for the benefit of the protein design community.\\n\\nThanks for your valuable suggestion. We have added more good inverse folding baselines as you suggested to provide objective comparison for the benefit of the protein design community.\\nWe conduct experiments on the CATH 4.2 testset, and * means results are quoted from their respective paper. We will include these results in the next version of our manuscript.\\n| Model | AAR | scTM |\\n|---|---|---|\\n| Knowledge-Design* [1] | 60.77 | -- |\\n| GraDe-IF* [2] | 52.21 | -- |\\n| MMDesign* [3] | 54.88 | -- |\\n| VFN-IFE* [4] | 62.67 | -- |\\n| PiFold* [5] | 51.66 | -- |\\n| Bridge-IF* [6] | 58.59 | -- |\\n| ProteinMPNN | 45.96 | 0.87 |\\n| LM-Design | 54.41 | 0.88 |\\n| DPLM-2 Argmax decoding | 42.7 | 0.84 |\\n| DPLM-2 temp-annealed sampling | 36.3 | 0.84 |\\n\\n----\\n[1] Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement. ICLR 2024\\n\\n[2] Graph Denoising Diffusion for Inverse Protein Folding. NIPS 2023\\n\\n[3] Progressive Multi-Modality Learning for Inverse Protein Folding. ICME 2024\\n\\n[4] De novo protein design using geometric vector field networks. ICLR 2024\\n\\n[5] PiFold: Toward effective and efficient protein inverse folding. ICLR 2023\\n\\n[6] Bridge-IF: Learning Inverse Protein Folding with Markov Bridges. NIPS 2024\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q1`**: Missing Details: Key details are missing in certain sections. For instance:\\n\\n> **`Q1.1`**: A core contribution is the DPLM-2 model itself, but essential details, such as how sequence and structure tokens are combined and how the structure tokenizer is trained, are not included in the main text. 
Moving these details from the appendix to the primary text, with clear explanations including the frozen pretrained structure encoder and the sequence prediction head on top of the structure decoder, would significantly improve clarity.\n\n**`A1.1`**: Thanks for your valuable suggestions on clarity issues! We have updated our paper by moving the essential details into the main text and moving the ablation study (Section 4.1.3) to the appendix. \n\n> **`Q1.2`**: The distinct noise level for each modality could be better explained, as this aspect is currently underdeveloped.\n\n**`A1.2`**: Thank you for pointing this out. We introduce a distinct scheduler to control the noise level of structure and sequence flexibly during training (Section 3.1 in our paper). Different combinations of structure and sequence schedulers (denoted as $t_z$ and $t_s$, respectively) imply training for different applications. Specifically, we mainly focus on: (1) sequence-conditioned structure generation (e.g., folding), (2) structure-conditioned sequence generation (e.g., inverse-folding), (3) sequence generation, (4) structure generation, (5) structure-sequence co-generation. \n\nFor conditional generation tasks (e.g., folding and inverse-folding), we set the noise scheduler of the conditioning modality to 0, i.e., no noise in the conditioning modality. Specifically, in the folding task $t_s$ is always set to $0$, while in the inverse-folding task $t_z$ is always set to $0$. \n\nIn the structure-sequence co-generation task, we keep $t_z$ and $t_s$ the same, enhancing structure-sequence consistency in co-generation. The structure-only or sequence-only generation tasks do not depend on the other modality, so we set the other modality's noise scheduler to $T$, the maximum timestep, i.e., 100% noise in that modality. For example, in the structure generation task, $t_s$ is always set to $T$. 
\n\nDuring training, we jointly train the above 5 tasks simultaneously. We divide the training data in a batch into 5 parts according to a preset proportion, and each part is used for training a specific task. In our experiments, the proportion for each task is the same, i.e., 20%. \n\n> **`Q1.3`**: In Section 4.2, the authors mention that performance can improve with supervised fine-tuning using a folding objective; however, the paper lacks details on this fine-tuning process.\n\n**`A1.3`**: Following the scheme above, we can further enhance a specific generation task by supervised finetuning (SFT). This involves continuing training for the specific task with a proportion of 100%, while the proportion for all other tasks is set to 0%. For example, in Tab. 4, the folding supervised finetuning is performed by continuing training from a pre-trained DPLM-2 with a 100% proportion of the folding objective, using the same training data, for an additional 50K steps with a constant learning rate (5e-5, vs 1e-5 for joint pre-training).\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q8`**: Regarding catastrophic forgetting: I would like to confirm whether DPLM-2 in Table 7 still utilizes structural pre-training, given that neither of these models used large-scale sequence pre-training. If structural pre-training was indeed applied to DPLM-2, then there are two differences between DPLM and DPLM-2: one is the difference in model architecture, and the other is that DPLM-2 was pre-trained on structural data. With these two variables present, how can we determine that the performance decline in DPLM-2 is due to catastrophic forgetting?\n\n**`A8`**: The DPLM-2 in Table 7 does not utilize structural pretraining. We conduct this experiment because we find DPLM-2 demonstrates a significantly larger performance gap compared with DPLM, which is used for parameter initialization for DPLM-2. 
As such, we think this task can be a good testbed for figuring out the factors causing this degradation. We hypothesize two potential causes for the observed degradation:\n\n1. DPLM-2 needs to accommodate additional structural representations given the same model capacity (parameters), which could negatively impact the representation learning performance.\n2. As training continues on a much smaller amount of structure data, DPLM-2 may experience catastrophic forgetting of the representation power gained during DPLM's large-scale sequence pretraining.\n\nTo explore (1), we eliminated pretraining factors by retraining both DPLM and DPLM-2 with random initialization on the SwissProt and PDB datasets for 100K training steps. Additionally, we evaluated performance across all three tasks (HumanPPI, MetalIonBinding & DeepLoc) where DPLM-2 underperformed compared to DPLM. As shown in the table below, when large-scale sequence pretraining is removed, DPLM-2 significantly outperforms DPLM (exp 2 vs exp 1). This indicates that incorporating structural information enhances performance rather than harming it, which refutes hypothesis (1).\n\nHowever, when DPLM undergoes large-scale pretraining and DPLM-2 is subsequently trained from the pretrained DPLM, the performance of DPLM-2 on certain tasks diminishes (exp 4 vs exp 3). Given the relatively small amount of structure data for DPLM-2 training, this suggests that catastrophic forgetting occurs during DPLM-2's multimodal training, reducing the advantages of large-scale pretraining. To verify and mitigate this, during the rebuttal period we curated an additional 1.3M predicted structures from AFDB_rep [1] and trained DPLM-2 on this larger dataset. The experimental results show that the amount of structure data is indeed a key factor for better multimodal protein representations, leading to significantly improved performance over the original data (exp 5 vs exp 4). 
In particular, on HumanPPI, enlarging data from 200K to 1.5M helps DPLM-2 attain 2.3% improvement, and also outperforms SaProt, a strong multimodal PLM trained with 40M foldseek tokenized AFDB data. \\n\\n| exp id | | HumanPPI (Acc%) | MetalIonBinding (Acc%) | DeepLoc (Acc%) |\\n|--------|------------------------------------------------------------------------------------------------|-----------------|------------------------|----------------|\\n| 0 | SaProt | 86.41 | 75.75 | 85.57 |\\n| 1 | DPLM (PDB + swissprot only) | 73.33 | 62.25 | 63.49 |\\n| 2 | DPLM-2 (PDB + swissprot only) | 77.22 | 69.47 | 66.77 |\\n| 3 | DPLM w/ fully pretraining on UniRef50 | 86.41 | 75.15 | 84.56 |\\n| 4 | DPLM-2 w/ seq-pretraining (finetuned from DPLM) data: swissprot + pdb (200K) | 84.44 | 74.28 | 82.98 |\\n| 5 | DPLM-2 w/ seq-pretraining (finetuned from DPLM) data: AFDB_rep + Swissprot + pdb (1.3M + 200K) | 87.78 | - | 83.42 |\\n\\n[1] Clustering predicted structures at the scale of the known protein universe. Nature 2023\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> **`Q4`**: Regarding the pLDDT metric: Table 2 shows that MultiFlow (retrained on our training data) and DPLM-2 (co-generation) perform similarly on scTM and scRMSD metrics but diverge significantly on the pLDDT metric. A similar trend is observed when comparing Figures 3A and 3C. In Figure 3A, scTM and scRMSD metrics remain relatively stable as sequence length increases, with scRMSD even increasing for sequences of length 500. However, Figure 3C shows a notable upward trend in pLDDT as sequence length increases (from 100 to 300), without a significant drop in pLDDT even for sequences of length 500. These phenomena indicate that the scTM and scRMSD metrics convey somewhat contradictory information to the pLDDT metric. Generally, a significant drop in pLDDT should correspond with a decrease in scTM and an increase in scRMSD. 
My question here is whether this inconsistency might indicate that the pLDDT metric is unreliable. I raise this because, in previous experiments, we observed that pLDDT, being predicted by a neural network (e.g., ESM-2) trained on natural protein datasets, sometimes fails to handle model-generated proteins that deviate significantly or exhibit severe irregularities, leading to inflated pLDDT values. This phenomenon is quite common in unconditional generation. Therefore, I believe using a more diverse set of metrics is essential. However, you did not provide evaluation results for MultiFlow (retrained on our training data) on the Novelty and Diversity dimensions, which makes it harder to assess the model\u2019s performance on this part of the task.\n\n**`A4`**: Thanks for pointing this out! After checking our code, we find that the results of MultiFlow (retrained on our training data) were obtained incorrectly. This occurred because we used the default MultiFlow sampling configuration, which samples proteins at lengths [70, 100, 200, 300], inconsistent with the experimental settings in our paper, which sample at lengths [100, 200, 300, 400, 500]. We apologize for this mistake and provide updated results in the table below with corrected scTM and scRMSD, where MultiFlow (retrained on our training data) achieves significantly lower scTM and higher scRMSD compared with DPLM-2, aligning with the performance gap seen in pLDDT. We believe this is due to MultiFlow's weaker ability to generate long proteins, resulting in a significant decline in designability when generating proteins of lengths 400 and 500.\n\n| Model | scTM | scRMSD | pLDDT | avg. 
inner-TM | MaxCluster |\\n|----------------------------------------------|----------------|----------------|--------|---------------|------------|\\n| Native PDB | 0.904 \\u00b1 0.129 | 4.623 \\u00b1 5.688 | -- | 0.271 \\u00b1 0.020 | 0.776 |\\n| MultiFlow (official ckpt) | 0.930 \\u00b1 0.098 | 3.208 \\u00b1 4.741 | 79.447 | 0.356 \\u00b1 0.013 | 0.500 |\\n| Multiflow (w/o distillation) * | 0.750 \\u00b1 0.163* | 9.306 \\u00b1 8.499* | 65.861 | 0.350 \\u00b1 0.038 | 0.490 |\\n| Multiflow (retrained on our training data) * | 0.871 \\u00b1 0.934* | 6.580 \\u00b1 6.258* | 67.870 | 0.331 \\u00b1 0.052 | 0.440 |\\n| DPLM-2 (650M) | 0.925 \\u00b1 0.085 | 3.899 \\u00b1 3.723 | 82.686 | 0.270 \\u00b1 0.018 | 0.545 |\"}", "{\"title\": \"Further Responses\", \"comment\": \"Dear Reviewer f8By,\\n\\nThank you for your thoughtful and encouraging feedback, which we truly appreciate. Your detailed and insightful comments are invaluable to us, and we are deeply grateful for the time and effort you have invested in reviewing our work.\\n\\nWe have addressed your concerns and questions point by point below and kindly invite you to review our responses. Since there is an additional day for authors to reply, we welcome any further feedback you may have and are happy to provide further clarifications if needed.\\n\\nFinally, we will carefully incorporate all results and discussions into the revised version of the manuscript to ensure it meets the highest standards. Thanks for helping us make our work a better one!\"}", "{\"comment\": \"> `Q2:` DPLM-2 is a finetuned protein language model. So its evaluation should contain a great deal of sequence analysis. The authors provide only pLDDT, which is primarily a structural quality measure, and MMseqs2 clustering. MMseqs2 clustering (and also structural clustering for that matter) should be performed at different thresholds to capture different aspects of the diversity. 
Adding perplexity as a measure of naturalness/quality and novelty through sequence identity to nearest neighbor in the dataset to the evaluation toolkit would make analysis more sound.\\n\\nThanks for your suggestions. We have conducted more comprehensive evaluations as you advised, including:\\n1. sequence and structural diversity: We conduct MMseqs2 clustering and foldseek structural clustering at different thresholds. We calculate diversity for high-quality samples, following the practice of multiflow. For MMseqs2 clustering, we select samples with pLDDT > 70, while for foldseek clustering we choose samples with scTM > 0.5. \\n2. sequence naturalness: We calculate perplexity as a measure of naturalness with ProGen2-large [1],\\n3. sequence novelty: We calculate novelty through sequence identity to the nearest neighbor in the training set.\\n4. we plan to also provide evaluation of conditional perplexity derived from $p(\\\\text{seq} | \\\\text{struct})$ using invfold model like ESM-IF [2] in the future manuscript. \\n\\nAll models generate 100 samples per length in the range of [100, 200, 300, 400, 500] for evaluation, with the results summarized in the table below.\\n\\nOne particularly insightful observation is the distinct behavior of MultiFlow (w/ distillation) and DPLM-2 regarding structural diversity. Specifically, DPLM-2 exhibits greater diversity under strict TM-score thresholds (\\u22640.5), while MultiFlow achieves better diversity at higher TM-score thresholds (\\u22650.7). Combined with the average inner-TM scores (DPLM-2: 0.275, MultiFlow: 0.356) presented in Q1 of our initial response, this suggests that DPLM-2 excels at generating diverse structures in terms of global topologies but exhibits limited structural variation within each cluster. 
This finding highlights a key limitation of the current structural tokenization approach: the loss of fine-grained structural variations, emphasizing the need for future improvements in this area.\\n\\nAdditionally, DPLM-2 achieves the lowest ProGen2 perplexity, while its sequence identity to training data (0.475) is higher than that of DPLM and MultiFlow. This indicates that the sequences generated by DPLM-2 align more closely with the natural distribution.\\n\\n\\n| | MultiFlow (official w/ distillation) | MultiFlow (retrained on DPLM-2 data) | DPLM | DPLM2 |\\n|---|---|---|---|---|\\n| pLDDT | 79.4 | 62.6 | 84.0 | 83.7 |\\n| seq-diversity (\\u2191) / mmseq cluster at seq-id=0.3 & plddt > 70 | 0.804 | 0.204 | 0.740 | 0.745 |\\n| seq-diversity (\\u2191) / mmseq cluster at seq-id=0.5 & plddt > 70 | 0.860 | 0.294 | 0.745 | 0.755 |\\n| seq-diversity (\\u2191) / mmseq cluster at seq-id=0.7 & plddt > 70 | 0.862 | 0.294 | 0.815 | 0.795 |\\n| seq-diversity (\\u2191) / mmseq cluster at seq-id=0.9 & plddt > 70 | 0.862 | 0.294 | 0.885 | 0.895 |\\n| struct-diversity (\\u2191) / foldseek at tmscore=0.3 & scTM > 0.5 | 0.030 | 0.080 | -- | 0.198 |\\n| struct-diversity (\\u2191) / foldseek at tmscore=0.5 & scTM > 0.5 | 0.500 | 0.440 | -- | 0.545 |\\n| struct-diversity (\\u2191) / foldseek at tmscore=0.7 & scTM > 0.5 | 0.962 | 0.830 | -- | 0.646 |\\n| struct-diversity (\\u2191) / foldseek at tmscore=0.9 & scTM > 0.5 | 0.990 | 0.910 | -- | 0.746 |\\n| seq-naturalness / progen2 ppl | 8.11 \\u00b1 2.08 | 9.15 \\u00b1 2.77 | 4.33 \\u00b1 2.51 | 4.08 \\u00b1 2.00 |\\n| seq-novelty / mmseq search against PDB+swissprot | 0.306 | 0.312 | 0.304 | 0.475 |\\n\\n----\\n\\n[1] Progen2: Exploring the Boundaries of Protein Language Models. Cell system 2022\\n\\n[2] Learning inverse folding from millions of predicted structures. ICML 2022\"}", "{\"summary\": \"The paper presents DPLM-2, a multimodal protein foundation model designed to represent and generate protein sequences and structures. 
DPLM-2 introduces a token-based approach to protein structures, converting 3D backbone coordinates into discrete structure tokens. The model then processes structure token sequences alongside amino acid sequences, with aligned position encodings to reinforce residue-level correspondence between them, and is trained using a denoising objective.\\n\\nAddressing the limitations of previous models like Multiflow, which lack sequence-based pretraining to capture co-evolutionary relationships, DPLM-2 leverages pretraining on unlabeled sequences and finetuning on structures, to learn the joint distribution of sequences and structures. This allows DPLM-2 to capture both modalities effectively and enables it to model joint, marginal, and conditional distributions.\\n\\nAdditionally, DPLM-2 demonstrates competitive performance across tasks such as unconditional generation, folding, inverse folding, and motif-scaffolding, showing its capability as a comprehensive multimodal protein modeling tool.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality: The work's originality lies primarily in (1) the DPLM framework that combines sequence tokens and structure tokens with a lookup-free quantization (LFQ) VAE, and (2) the use of sequence pretraining followed by LoRA fine-tuning. These contributions provide valuable insights to the field. The authors also address relevant concurrent work effectively.\", \"Clarity: The paper is generally easy to follow, though some sections would benefit from further explanation and technical details.\", \"Empirical Performance: DPLM-2 demonstrates strong empirical performance across a wide range of generative tasks, including sequence-structure co-generation, folding, inverse folding, and motif-scaffolding. 
The paper also includes valuable ablation studies on sequence pretraining and data augmentation, offering insights into the model\\u2019s effectiveness and robustness.\"], \"weaknesses\": \"-Missing Details: Key details are missing in certain sections. For instance:\\n\\n (1)A core contribution is the DPLM-2 model itself, but essential details, such as how sequence and structure tokens are combined and how the structure tokenizer is trained, are not included in the main text. Moving these details from the appendix to the primary text, with clear explanations including the frozen pretrained structure encoder and the sequence prediction head on top of the structure decoder, would significantly improve clarity.\\n\\n (2)The distinct noise level for each modality could be better explained, as this aspect is currently underdeveloped.\\n\\n (3)In Section 4.2, the authors mention that performance can improve with supervised fine-tuning using a folding objective; however, the paper lacks details on this fine-tuning process.\\n\\n- Clarity and Consistency Issues: Minor inconsistencies reduce clarity, such as inconsistent bolding of best results in Tables 2 and 6. In Table 4, DPLM-2 is presented as performing well in zero-shot folding, yet its RMSD is high compared with ESM3, PVQD, and ESMFold, achieving competitive performance only after fine-tuning (SFT). Additionally, clarifying the presentation of mean and median values could help with data interpretation.\\n\\n- Mixed Results: The results on certain tasks are not fully convincing. In particular, while DPLM-2 exhibits high scTM in unconditional protein generation, it does not achieve lower scRMSD compared to other methods, indicating potential limitations in generation quality.\", \"questions\": [\"Sequence tokens use a smaller vocabulary than structure tokens. 
Are the corresponding structural embeddings in DPLM-2 trained from scratch during fine-tuning?\", \"While DPLM-2 enables flexible generation, what are the trade-offs in structural invariance when using structure tokens instead of 3D coordinates?\", \"In Table 2, what is meant by DPLM-2 (seq \\u2192 structure) or (structure \\u2192 seq)? If these indicate modality-by-modality generation, could you clarify how this is implemented?\", \"For the protein representation learning evaluation, it might be useful to include a broader range of baselines, such as GNN-based models, for a more comprehensive comparison.\", \"In Section 3.3, you state, \\u201cRecent efforts have applied this approach to protein structure coordinates (Van Kempen et al., 2024; Liu et al., 2023; Gao et al., 2024; Lu et al., 2024). This allows language models to better learn the composition of local structural elements. However, how to learn an effective structure tokenizer remains an active research question.\\u201d Given the similarity of your method to prior approaches, particularly with the use of LFQ as in Gao et al., 2024, could you elaborate on how your method contributes to addressing this active research question?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Conditional independence in multimodal diffusion models.**\\n\\nMultimodal diffusion models aim to accommodate two or more modalities using a unified models. In this case, conditional independence between modalities is generally made $p\\\\_\\\\theta(\\\\mathbf{x}\\\\_{t-1}, \\\\mathbf{y}\\\\_{t-1} | \\\\mathbf{x}\\\\_t, \\\\mathbf{y}\\\\_{t}) = \\\\textstyle\\\\prod\\\\_i p\\\\_\\\\theta(x\\\\_{t-1, [i]} | \\\\mathbf{x}\\\\_t, \\\\mathbf{y}\\\\_{t-1}) \\\\textstyle\\\\prod\\\\_j p\\\\_\\\\theta(y\\\\_{t-1, [j]} | \\\\mathbf{x}\\\\_t, \\\\mathbf{y}\\\\_{t})$. 
For instance, UniDiffuser [4] is a multimodal continuous diffusion model that handles text and image modalities independently at each timestep, conditioned on the predictions from the previous timestep. Multiflow [3], on the other hand, factorizes protein data into three modalities\\u2014translation, orientation, and amino acid type\\u2014assuming conditional independence. It establishes a multimodal diffusion/flow-based model by combining three types of stochastic processes over Euclidean, SO(3), and categorical spaces for these modalities. In DPLM-2, we adopt a unified discrete diffusion approach where structure tokens and amino acid tokens are treated as conditionally independent. While theoretical guarantees for the convergence of mixture diffusion processes are still under-explored, existing discrete diffusion theory [2,3] ensures that a well-trained DPLM-2 can converge to the tokenized structure-sequence data distribution, supporting consistency between structure and sequence tokens.\\n\\nAdditionally, theoretical studies on non-autoregressive Transformers (NATs) for text generation, which are akin to masked discrete diffusion, indicate that the learning difficulty of such models can be evaluated through conditional total correlation, a dataset-dependent and model-free measure captures the discrepancy between a joint data distribution and a fully factorized distribution under conditional independence [6]. 
These studies suggest that simplifying the original complex target data (e.g., by using instance-level knowledge distillation from other models as in NATs for text generation or in MultiFlow for sequence generation, or by using tokenized structure instead of 3D coordinates as in DPLM-2/ESM3) reduces conditional total correlation, thereby enhancing both learning and generation quality.\\n\\nGiven that the consistency between structure tokens and amino acids in DPLM-2 is ensured by previous theoretical results [2,3,6], overall structure and sequence consistency can be achieved with a decent structure tokenizer, such as the one proposed in this paper, which can map structure tokens to their atomic coordinates with sufficient accuracy.\\n\\n----\\n\\n[1] Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. ICLR 2023\\n\\n[2] Convergence analysis of discrete diffusion model: Exact implementation through uniformization. arXiv 2024\\n\\n[3] Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis. arXiv 2024\\n\\n[4] UniDiffuser: One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale. ICML 2023\\n\\n[5] Generative Flows on Discrete State-Spaces: Enabling Multimodal Flows with Applications to Protein Co-Design. ICML 2024. \\n\\n[6] On the Learning of Non-Autoregressive Transformers. ICML 2021\\n\\n\\n`---end of Q10---`\"}"
Since the codebook sizes far exceed the number of secondary structure features, it would be great to map tokens to known structural motifs beyond secondary structure and provide visualizations like in ESM-3 paper.\\n\\nThank you for the valuable suggestions. We now fully understand your point and completely agree that mapping structure tokens to structural motifs can provide more fine-grained insights into what structure tokens learn.\\n\\nSince mapping each structure token to a dataset of \\\"known structural motifs\\\" is quite challenging for us within this short period, we propose an alternative approach. Specifically, as structure tokens are residue-wise representations, we aim to map each structure token to structural motifs defined as the nearest-neighbor local structural environment of a residue in the training dataset (for efficiency, we used only the PDB dataset). The process is as follows:\\n\\n1. For each structure in the PDB dataset (~20K in total), we first tokenize the structure into structure tokens and save the (structure token, 30-nearest-neighbors structural motif) pair for each residue. We use 30 nearest neighbors because the pre-trained GVPTransformerEncoder, which we used as the structure encoder, employs 30 nearest neighbors as the hyperparameter for geometric features.\\n2. After processing all structures, we obtain a table where each row corresponds to a structure token and its associated structural motifs (i.e., num_structural_motifs).\\n3. To analyze whether a structure token tends to occur in a similar local structural environment, we use Foldseek (TM-threshold = 0.5) to cluster the structural motifs for each structure token (i.e., motif_clusters). 
Although Foldseek may not be entirely accurate in clustering such short and discontinuous structural regions, it provides a reasonable comparative sense of the similarity/difference among all structural motifs associated with each structure token.\\n\\nIn [this figure](https://anonymous.4open.science/r/supple_dplm2-2342/struct_token_hist.pdf), we plot the histogram of num_structural_motifs vs. motif_clusters for each structure token (randomly sampling 500 out of 8,192 structure tokens to ensure readability). From the visualization, we observe that many structure tokens correspond to highly similar structural motifs (evidenced by a small ratio of motif_clusters to num_structural_motifs), while others exhibit a high degree of ambiguity.\\n\\nAdditionally, leveraging this mapping between structure tokens and structural motifs, we can create visualizations similar to those in ESM-3, as you suggested. In [this figure](https://anonymous.4open.science/r/supple_dplm2-2342/struct_tokens_vis.pdf), we showcase two structure tokens and their corresponding similar structural motifs across four different PDB structures, illustrating the diversity or consistency in the mapped local structural environments.\"}" ] }
5yDS32hKJc
Time After Time: Deep-Q Effect Estimation for Interventions on When and What to do
[ "Yoav Wald", "Mark Goldstein", "Yonathan Efroni", "Wouter A.C. van Amsterdam", "Rajesh Ranganath" ]
Problems in fields such as healthcare, robotics, and finance require reasoning about the value both of what decision or action to take and when to take it. The prevailing hope is that artificial intelligence will support such decisions by estimating the causal effect of policies such as how to treat patients or how to allocate resources over time. However, existing methods for estimating the effect of a policy struggle with \emph{irregular time}. They either discretize time, or disregard the effect of timing policies. We present a new deep-Q algorithm that estimates the effect of both when and what to do, called Earliest Disagreement Q-Evaluation (EDQ). EDQ makes use of recursion for the Q-function that is compatible with flexible sequence models, such as transformers. EDQ provides accurate estimates under standard assumptions. We validate the approach through experiments on survival time and tumor growth tasks.
[ "effect estimation", "treatment times", "irregular times", "sequential decision making" ]
Accept (Poster)
https://openreview.net/pdf?id=5yDS32hKJc
https://openreview.net/forum?id=5yDS32hKJc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zH2LkDqmx7", "x61kCIFi8g", "wwKuquVVnM", "wjQmwwMKIs", "tn2j0ytVOm", "t0dLbWsoD1", "qpLmYY6ocJ", "lYAUPHrrtU", "itmuzZuQf4", "gkPuGjee8l", "fVxEsviTuj", "f6mx4DUETY", "cHd05CjQN2", "bhZnXIBtzN", "awYpdsuqEE", "aKVrCfbNpX", "VYDIKHoWj9", "UN7Dew0yjs", "UBSQtnVrO4", "Nmgc2QKByB", "Lo1EjOXTMg", "KhOHieJERu", "IUiudHeSu6", "GsHLJtTMOx", "G5uswV7xh0", "D1pRObwFRR", "Ak6OIt0s39", "AYHp0N22er", "7U86OGzZrm", "5SfaWSkfHI", "4xlpaoOHqx" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732569586614, 1732576260303, 1732568338263, 1734770778108, 1729994718431, 1732715609558, 1732567483154, 1730586362840, 1730700334524, 1732794518354, 1732567749119, 1730636914768, 1732569630731, 1732687759038, 1732794328299, 1732567118975, 1732794238390, 1732568247885, 1732640183577, 1732569268874, 1730678022827, 1732569307514, 1732640653790, 1731042950838, 1732567950225, 1732568268693, 1732640555836, 1737524176761, 1732568850283, 1732667222054, 1732574724048 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_5s6h" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Area_Chair_7kb3" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_jyT3" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_u2Uo" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12267/Reviewer_WaHo" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_jmvH" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_u2Uo" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_jyT3" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_5s6h" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_vFET" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_vFET" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12267/Authors" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_jmvH" ], [ "ICLR.cc/2025/Conference/Submission12267/Reviewer_WaHo" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer jyT3 (1/2)\", \"comment\": \"We thank the reviewer for engaging with our paper and for the valuable comments. To respond on the questions and comments:\\n\\n**Paper title promises insights on When and What to do, but primarily delivers evaluation of pre-specified timings.**\\nRegarding the evaluation of pre-specified policies, we titled our work \\u201c**Effect Estimation** for Interventions on When and What to do\\u201d because we focus on evaluation and not on policy optimization. 
It is common to handle these problems separately, and off-policy evaluation is a common standalone topic for works [1,2,3]. The conceptual leap in adapting EDQ to perform policy optimization instead of evaluation is not large, as policy iteration can be done based on an evaluation procedure, similarly to the way it is done in discrete time. However, we believe this deserves separate work that deals with choices around policy optimization, such as how to specify a policy that is amenable to optimization, etc.\\n\\nAs for interventions on \\u201cwhat\\u201d to do, we started with interventions on \\u201cwhen\\u201d because this is an aspect that is often unaddressed in existing literature. That said, our revision includes simulations where the policies also intervene on the \\u201cwhat\\u201d part, corresponding to the type of treatment (radiotherapy, chemotherapy, combined therapy) in the newly added tumor growth simulation. While this part does not require novel methodology beyond the one we introduced for intervention on times, it demonstrates that the method can be applied to this scenario.\\n\\n**Experimental validation relies on an overly simplified setting:** Please see the general comment for the cancer simulation we experimented with. In this simulation, processes draw treatments based on the past observed tumor volumes, which the reviewer may find more realistic, and it has been used in other works on effect estimation with neural nets [4,5]. We are also working to add multiple vital signs in our time-to-failure simulation to show that the method is able to cope with multivariate covariates; these will be included in the next revision.\\n\\n**Accessibility and reproducibility:** Please see the general comment for details on computational complexity. The complexity of EDQ and FQE is similar; please note that FQE is scalable and has been used in various large-scale RL problems (see e.g. [6] for an evaluation using it).
The only difference in computation time per iteration is due to sampling from the target policy in order to draw the treatments used in the $Q$ update. In most applications, the added complexity due to this difference is small w.r.t. the cost of evaluating the $Q$-function, and in turn the cost of function evaluation is the same for FQE and EDQ. The computational complexity of sampling from the policy depends on how we represent and implement it. For instance, policies may be specified in terms of their intensity functions, which requires sampling using the thinning algorithm [7], with neural networks that output the time-to-next-event (e.g. [8] for one example among networks that predict time), or with closed-form decision rules. For instance, in our first simulation we sample exponential variables from times when a feature crosses a certain threshold, and in the cancer simulation we sample actions from a discrete time policy (which means EDQ needs to generate a few more samples until disagreement is achieved, instead of FQE that samples one). In both simulations the time is negligible w.r.t. the evaluation of the $Q$ function.\\n\\nAs for guidelines on practical implementation, we will release our code upon publication to enable reproducibility.\\n\\n**Definition of $\\\\tilde{P}^{a}_t$:** Regrettably, while editing definition 5 we wound up overcomplicating the term $\\\\tilde{P}^{a}_t$. A simple and well-posed definition is as follows: $\\\\tilde{P}^{a}_t(u \\\\vert \\\\mathcal{H})$ is a point process on the interval $(t, T]$ with intensity $\\\\lambda^a(u \\\\vert \\\\mathcal{H}_u)$, i.e. the marginal intensity of the treatments under the policy we wish to evaluate (given by $\\\\lambda^a$. We may also include the distribution $\\\\pi(a \\\\vert \\\\mathcal{H}_u)$ for the mark of the treatment).
The reason we include the tilde symbol in $\\\\tilde{P}$ is to emphasize that at each $u\\\\in{(t, T]}$ the distribution conditions on $\\\\mathcal{H}_u$, the *observed* trajectory up until that time. That is to discern this type of sampling from sampling entire trajectories of treatments and observations from the interventional distribution $P$. We amended definition 5 (now definition 4 in the revision) accordingly.\"}", "{\"comment\": \"Thank you for your detailed responses\\u2014they are very helpful. In my first point, I was referring to the Bellman expectation equation which uses the tower property, but I appreciate your clarification regarding the discrete-time analogy of Theorem 1. I have thus increased my score.\"}", "{\"title\": \"Response to Reviewer 5s6h (2/2)\", \"comment\": \"**Assumption 1-3 in line 251:** The reference to assumptions 1-3 became inconsistent due to our final edits and we apologize for that, the assumptions are the ones detailed in section 2.2, and we fixed the theorem statement to make the reference to these assumptions precise.\\n\\n> In line 105, should each trajectory be indexed\\u2026\\n\\nOur apologies, this is a typo, the trajectory should be denoted by $\\\\mathcal{H}_i$ and not $m_i$. We fixed this and thank the reviewer for catching this and for the rest of the comments.\\n\\n[1] Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. (2016). Safe and efficient off-policy\\nreinforcement learning. Advances in neural information processing systems, 29.\\n\\n[2] Precup, D. (2000). Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80.\\n\\n[3] Uehara, M., Shi, C., and Kallus, N. (2022). A review of off-policy evaluation in reinforcement\\nlearning. 
arXiv preprint arXiv:2212.06355.\"}", "{\"metareview\": \"The paper considered estimating the effect of a continuous-time treatment policy, defined as a treatment intensity function (when to treat) coupled with a mark distribution (what treatment to select), focusing on off-policy evaluation. The authors tackle a novel problem and present an elegant solution. The paper leverages the property of countable decision points in a point process and introduces a simple yet efficient method based on earliest disagreement times. The paper also provides identifiability conditions for the causal estimands, ensuring accurate estimation. The simulation setup is intuitive and clearly presented, and the proposed method outperforms alternatives. There are several suggestions and concerns raised by reviewers and the authors addressed them carefully by conducting further healthcare analysis to support their major contribution and further discussions regarding related works, strengthening the experimental results, clarifying the computational complexity, and many others. We appreciate the updates and clarifications and encourage the authors to incorporate all suggestions into the revised version.\", \"additional_comments_on_reviewer_discussion\": \"There are several suggestions and concerns raised by reviewers and the authors addressed them carefully by conducting further healthcare analysis to support their major contribution and further discussions regarding related works, strengthening the experimental results, clarifying the computational complexity, and many others. We appreciate the updates and clarifications and encourage the authors to incorporate all suggestions into the revised version.\"}", "{\"summary\": \"This paper proposes Earliest Disagreement Q-Evaluation (EDQ), a method for estimating causal effects of sequential treatments with irregular timing. 
The key contribution is adapting Q-learning approaches to continuous-time settings while maintaining model-free properties. The work focuses on healthcare and financial applications where treatment timing is crucial.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses a crucial and previously underexplored problem in causal inference: estimating the effect of interventions on treatment timing in sequential decision-making. This problem has practical implications in healthcare and finance, where treatment timing often has substantial impact on outcomes. The authors provide compelling motivation and clearly articulate why existing methods are insufficient.\\n\\n2. The proposed EDQ method demonstrates impressive scalability and flexibility by being compatible with modern deep learning architectures like transformers. This is a significant advancement over existing methods that either require time discretization or don't scale to large models, making the approach more practical for real-world applications.\", \"weaknesses\": \"1. The paper's title promises insights into \\\"When and What to Do,\\\" but primarily delivers a method for evaluating pre-specified timing policies. While the proposed EDQ method effectively handles off-policy evaluation of treatment timing effects, it provides no framework for discovering optimal timing strategies. The experimental section further highlights this gap, focusing solely on estimation accuracy rather than demonstrating practical utility in finding better treatment schedules. Additionally, despite the title's suggestion, the paper explicitly omits treatment selection (the \\\"what\\\" aspect) to focus on timing, making the scope narrower than advertised.\\n\\n2. The experimental validation relies on overly simplified simulation settings, particularly in modeling treatment delays with an exponential distribution $\\\\delta\\\\sim exp(\\\\lambda_a)$. 
Although the method's theoretical guarantees don't depend on specific distributional assumptions, the paper would be stronger with experiments using more realistic delay distributions and multiple vital signs. This limitation raises questions about the method's practical performance in real-world healthcare settings where treatment timing follows more complex patterns.\\n\\n3. The technical presentation has several gaps that affect its accessibility and reproducibility. The paper lacks analysis of computational complexity and clear guidelines for practical implementation.\", \"questions\": \"1. Definition 5, which is the most important definition, is not clear. For instance, what does $\\\\tilde P_t^a$ mean? I cannot find its definition throughout the whole article including appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the response. I will keep my positive assessment.\"}", "{\"title\": \"Response to Reviewer vFET (1/2)\", \"comment\": \"We thank reviewer vFET for the careful reading of our paper, and for appreciating the solution we present, and the identifiability, related work and simulation sections. We are further grateful for the valuable comments that helped us improve parts of the paper. Below we comment on each of these points.\\n\\n**Experimental results underdeveloped:** Please see the general response. We\\u2019ve added experiments on a cancer simulator developed in [1] and used in other papers on causal inference with irregularly sampled data, e.g. [2,3]. Note that [2,3] do not evaluate dynamic policies, and also suffice with this dataset for the purpose of their simulations. We think our additional simulation has some qualitative differences, e.g. 
densely sampled observations and irregularly sampled treatments, which complement the cancer simulation well.\\n\\nAs for a real dataset, please note that without performing an experiment in the real world, it is very difficult to obtain a credible \\u201creal\\u201d dataset. This is further aggravated when we are interested in a scenario of *sequential* decision making like ours. For instance, we might take the route of using real patient vitals from MIMIC-IV, and introducing synthetic treatments and outcomes to form a semi-synthetic dataset. However, then a treatment $a$ at time $t$ cannot affect the covariates $x$ at time $u>t$, as this yet again requires a simulation to reason about counterfactual covariates. The consequence is that we must select between working in a synthetic setting, or having treatments that only affect future outcomes and treatments, without affecting future vitals. Like several other works in the field, we chose the former, and included a more realistic simulator for this purpose.\\nA semi-synthetic experiment where treatments do not affect future covariates is rather unrealistic, and eliminates aspects of planning from the problem (i.e. effect estimation in this case can be done without accounting for which state the current treatment may lead us to). Nonetheless, we will work to include such a simulation in our next revision.\\n\\n**Definition of $\\\\tilde{P}^a_t$:** Regrettably, while editing definition 5 (now definition 4 in our revision) we wound up overcomplicating the term $\\\\tilde{P}^{a}_t$. A simple and well-posed definition is as follows: $\\\\tilde{P}^{a}_t(u \\\\vert \\\\mathcal{H})$ is a point process on the interval $(t, T]$ with intensity $\\\\lambda^a(u \\\\vert \\\\mathcal{H}_u)$, i.e. the marginal intensity of the treatments under the policy we wish to evaluate (given by $\\\\lambda^a$. We may also include the distribution $\\\\pi(a \\\\vert \\\\mathcal{H}_u)$ for the mark of the treatment).
The reason we include the tilde symbol in $\\\\tilde{P}$ is to emphasize that at each $u\\\\in{(t, T]}$ the distribution conditions on $\\\\mathcal{H}_u$, the *observed* trajectory up until that time. That is to discern this type of sampling from sampling entire trajectories of treatments and observations from the interventional distribution $P$. \\n\\n\\nWe emphasize that in practice, this just means taking the observed history up to time $u$ and inserting it into the policy of interest to obtain whether or not there is a treatment in the next increment.\\nThere are several specific algorithms to do this correctly, depending on how the policy is represented. For instance, the thinning algorithm can be used for sampling from point processes when we are given access to the intensity function. We amended the definition accordingly. \\n\\n**Defining an intensity as a policy is interesting, but in reality decisions in fixed intervals are more relevant:** We agree with the reviewer that deciding whether to treat at fixed time intervals is a practically interesting case. Indeed, this is one of the intensities that one could specify for our method (e.g. the cancer simulation is performed in discrete times, but with sparse occurrence of events). There are a few benefits to formalizing the problem with a more general object such as intensities. Besides providing a more honest representation of the real world data generating process in applications like healthcare, the method and formalism allow us to bypass the need to assume a minimal fixed time interval. Instead, the data is handled as an \\u201cevent stream\\u201d, as reflected by our choice of architecture (which represents the time gaps between events as features) and other dedicated architectures that model this type of data [4, 5]. Furthermore, it supports the specification of policies in different manners.
For instance, we may be explicitly provided with the intensity function as we have in our first simulation; we could specify a discrete time policy; it is also possible to have a neural network that outputs the time of the next treatment; or even a simple decision rule.\"}", "{\"summary\": \"This paper presents a novel approach for handling irregular timing in off-policy evaluation within sequential decision-making frameworks. It extends the classical Bellman equation in fitted Q-evaluation (FQE) to account for cases with irregular time intervals and introduces the concept of measuring the \\\"earliest disagreement time\\\" in Q-function updates.\\n\\nOverall, the paper is well-written, with a natural flow that guides the reader from foundational ideas to specific methods. While reading the introduction, I was somehow expecting a real-data analysis that could demonstrate the approach, perhaps with an example like ASCVD control or another scenario where OPE might reveal insights under varied intervention timings. I understand the challenges of conducting OPE with observational data, so this is not a request for inclusion in the rebuttal. However, such an analysis could add an interesting dimension to showcase differences in performance in real-world applications and offer potential interpretations. Some terms and assumptions could benefit from clearer explanations, but overall, this paper provides a practical and effective method for handling irregular time in OPE without imposing heavy modeling or causal assumptions.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper\\u2019s motivation is well-illustrated with examples. I appreciate the effort to clarify the setup in Section 2-3, which makes the new approach easier to understand. The comparison with classical FQE and the challenges outlined at the end of Section 3 are also helpful.\\n\\n2. 
The simulation studies demonstrate promising results, effectively showcasing the benefits of the proposed EDQ by comparing it with FQE and MC methods.\", \"weaknesses\": \"1. Some notations are unclear on first mention. Here are a few areas where clarification could improve readability (please correct me if I missed something):\\n\\n (a) In Definition 4, Line 170, does $\\\\ll$ in $P\\\\ll P_{obs}$ denote absolute continuity? It might be helpful to define this symbol for readers who may not be familiar with the concept.\\n \\n (b) In Theorem 1, Line 251, assumptions 1-3 don\\u2019t appear to be clearly defined or referenced in Section 2.2.\\n\\n (c) In Algorithm 2, Line 279, the loss function $l(Q_t,y)$ seems to be undefined. While it may be user-specified and flexible, adding a brief note about its role and possible choices could provide helpful context.\", \"questions\": \"1. How does the computational time of your algorithm compare with classical FQE?\\n\\n2. In cases where stages are regularly spaced in time, would EDQ reduce to classical FQE? A brief discussion or proof in the paper about how EDQ behaves in the limit as time intervals become regular would suffice.\\n\\n3. In practical applications, a common workaround for irregular timing is to include the interval between the $k-1$th and the $k$th stages as a component of the state $\\\\boldsymbol{X}$, then proceed with classical FQE for Q-function estimation. While this may lack methodological novelty, industry practitioners often find it to be a straightforward and effective way to account for \\\"irregular time\\\" in value estimation. 
Could the authors elaborate on the motivation for considering \\\"earliest disagreement times\\\" and explain situations where this approach might perform better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper aims to tackle the temporal irregularity when estimating causal effects of policies, by proposing a method called EDQ (earliest disagreement Q-evaluation) based on dynamic programming. The simulation demonstrates more accurate estimations of EDQ.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The work investigated an important problem, i.e., when to intervene with irregular times.\", \"The work is well motivated, given many human-related backgrounds such as healthcare and finance requires a carefully evaluation for when to provide proper intervenes.\", \"The paper is well structured and polished.\"], \"weaknesses\": \"- Given the work is called into category of off-policy evaluation. Related works in off-policy evaluation should be thoroughly discussed in the paper, either in Introduction or Related work. I understand there can exist some key differences between traditional off-policy evaluation and EDQ, but should be carefully and thoroughly discussed and compared in experimental settings. Also, there exists work regarding when-to-treat problem (e.g., [1]). A further comparison and discussion regarding EDQ and those works would be great.\\n\\n- Since one major motivation of the work is human-related when-to-treat problem, and the paper used a lot of healthcare examples (which is comprehensive). I\\u2019m curious about whether the work can be examined on some related settings. 
It\u2019s understandable that running real-world experiments would not be feasible and would be high-stakes, but it would be more impressive to provide experiments on some empirically motivated settings, e.g., sepsis [2], autism [3], etc.\\n\\n- Minors:\\n\\nIt would be great if code and/or data can be released for reproducibility.\\n\\nLine 99. a was defined as action and then represented number of multivariate unobserved process\\n\\nLine 185. FQE needs careful citations, given it\u2019s a well-established work in off-policy evaluation and optimization.\\n\\nLine 308, 355, 363. Lower cases for bolded words?\\n\\n[1] Learning When-to-Treat Policies\\n\\n[2] Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models\\n\\n[3] Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding\\n\\n**Update after rebuttal**:\\n\\nThe authors conducted further healthcare analysis to support their major contribution, and added further discussions regarding related works.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for the helpful comments and for revising your score. We sincerely appreciate the time and effort put into the review and for taking our response into account.\"}", "{\"title\": \"Response to Reviewer vFET (2/2)\", \"comment\": \"**The title is a bit misleading.** Please see the general comment: we added a simulation on what to do by intervening on the application of chemotherapy and radiotherapy in the tumor growth example of the experimental section. In addition to this, we are also adding an intervention on treatment dosages for our existing time-to-failure experiment. 
However, this might only be added in our next revision.\\n\\n**There are quite a few typos toward the end of the paper.** Thank you, we have done a thorough pass to fix grammatical errors and typos in the paper.\\n\\n**Answers to questions:**\\n> Why do these assumptions imply that the process is self-exciting?\\n\\nSelf-exciting here was simply meant to convey that the intensity may depend on the history of the process, not necessarily that jumps at certain times make jumps at subsequent times more likely. The literature uses the term for both meanings. We changed the terminology to clarify that the jumps can depend on history in an arbitrary fashion instead of the narrow definition of strictly increasing intensities upon additional jumps.\\n\\n> How well does the algorithm scale?\\n\\nA response to this question is included in the general response. We will reiterate some points from it here, and include additional details.\\n* The only difference between EDQ and FQE in computation time per iteration is due to sampling from the target policy in order to draw the treatments used in the $Q$ update. Note that FQE is scalable and has been used in various large scale RL problems (see e.g. [6] for an evaluation using it).\\n* In most applications, the added complexity due to this difference is small w.r.t the cost of evaluating the $Q$-function, and in turn the cost of function evaluation is the same for FQE and EDQ. The computational complexity of sampling from the policy depends on how we represent and implement it. For instance, policies may be specified in terms of their intensity functions which requires sampling using the thinning algorithm [7], with neural networks that output the time-to-next-event (e.g. [8] for one example among networks that predict time), or with closed-form decision rules. 
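To make the thinning-based option concrete, here is a minimal, illustrative sketch (not code from the paper) of Lewis-Shedler thinning for drawing the next event time from a bounded intensity; the toy intensity and the bound `lam_bar` are assumptions for the example:

```python
import random

def sample_next_event(intensity, t0, t_max, lam_bar, rng):
    """Lewis-Shedler thinning: draw the next event time of a point process
    with intensity `intensity(t)` on (t0, t_max], assuming
    intensity(t) <= lam_bar holds on that interval.
    Returns None if no event occurs before t_max."""
    t = t0
    while True:
        # Candidate gap from a homogeneous Poisson process with rate lam_bar.
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            return None
        # Accept the candidate with probability intensity(t) / lam_bar.
        if rng.random() * lam_bar <= intensity(t):
            return t

rng = random.Random(0)
# Toy intensity (an assumption for this sketch): pressure to treat grows with time.
next_t = sample_next_event(lambda t: 0.5 * t, t0=0.0, t_max=10.0, lam_bar=5.0, rng=rng)
```

With a constant intensity equal to the bound, every candidate is accepted and the sampler reduces to drawing exponential gaps.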
For instance, in our first simulation we sample exponential variables from times when a feature crosses a certain threshold, and in the cancer simulation we sample actions from a discrete-time policy (which means EDQ needs to generate a few more samples until disagreement is achieved, whereas FQE samples one). In both simulations this sampling time is negligible w.r.t. the evaluation of the $Q$ function.\\n\\n[1] Geng, Changran, Harald Paganetti, and Clemens Grassberger. \"Prediction of treatment response for combined chemo- and radiation therapy for non-small cell lung cancer patients using a bio-mathematical model.\" Scientific Reports 7.1 (2017): 13542.\\n\\n[2] Seedat, Nabeel, et al. \"Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations.\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[3] Vanderschueren, Toon, et al. \"Accounting for informative sampling when learning to forecast treatment outcomes over time.\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[4] McDermott, Matthew, et al. \"Event Stream GPT: a data pre-processing and modeling library for generative, pre-trained transformers over continuous-time sequences of complex events.\" Advances in Neural Information Processing Systems 36 (2023): 24322-24334.\\n\\n[5] Mei, Hongyuan, and Jason M. Eisner. \"The Neural Hawkes Process: A neurally self-modulating multivariate point process.\" Advances in Neural Information Processing Systems 30 (2017).\\n\\n[6] Voloshin, Cameron, et al. \"Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning.\" Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).\\n\\n[7] Lewis, P. A. W., and Gerald S. Shedler. \"Simulation of nonhomogeneous Poisson processes by thinning.\" Naval Research Logistics Quarterly 26.3 (1979): 403-413.\\n\\n[8] Nagpal, Chirag, Vincent Jeanselme, and Artur Dubrawski. 
\\\"Deep parametric time-to-event regression with time-varying covariates.\\\" Survival prediction-algorithms, challenges and applications. PMLR, 2021.\"}", "{\"summary\": \"This paper introduces off-policy evaluation through the framework of stochastic point processes and presents Earliest Disagreement Q-evaluation (EDQ) as a method for estimation. Under the causal validity assumption, the authors demonstrate that EDQ can accurately identify the true policy value. Additionally, this assumption is elucidated using a local independence graph. Experiments conducted on simulated data confirm the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"**1**. The formulation of sequential causal effect estimation using point processes is novel, effectively addressing the irregular nature of treatments and covariates in various real-world applications.\\n\\n**2**. The EDQ method for estimation is compelling, with corresponding consistency results rigorously demonstrated in Theorem 1.\\n\\n**3**. The assumptions required for causal identification are elucidated through a local independence graph, providing an intuitive understanding of their implications.\", \"weaknesses\": \"Experiments should be conducted on real-world datasets or, at the very least, on simulated datasets generated from real-world data to provide more convincing validation.\", \"questions\": \"**1**. To my knowledge, consistency and the SUTVA assumptions are also essential in causal effect estimation. Why are these assumptions not required in your work?\\n\\n**2**. What is the impact of different sequential modeling architectures (such as Transformers, RNNs, regression models, etc.) on your method? 
Is the estimation sensitive to the choice of architecture and hyperparameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jyT3 (2/2)\", \"comment\": \"**References**\\n\\n[1] Oberst, Michael, and David Sontag. \\\"Counterfactual off-policy evaluation with gumbel-max structural causal models.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[2] Thomas, Philip, and Emma Brunskill. \\\"Data-efficient off-policy policy evaluation for reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2016.\\n\\n[3] Precup, Doina, Richard S. Sutton, and Satinder P. Singh. \\\"Eligibility Traces for Off-Policy Policy Evaluation.\\\" Proceedings of the Seventeenth International Conference on Machine Learning. 2000.\\n\\n[4] Seedat, Nabeel, et al. \\\"Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations.\\\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[5] Vanderschueren, Toon, et al. \\\"Accounting for informative sampling when learning to forecast treatment outcomes over time.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[6] Voloshin, Cameron, et al. \\\"Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning.\\\" Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).\\n\\n[7] Lewis, PA W., and Gerald S. Shedler. \\\"Simulation of nonhomogeneous Poisson processes by thinning.\\\" Naval research logistics quarterly 26.3 (1979): 403-413.\\n\\n[8] Nagpal, Chirag, Vincent Jeanselme, and Artur Dubrawski. \\\"Deep parametric time-to-event regression with time-varying covariates.\\\" Survival prediction-algorithms, challenges and applications. PMLR, 2021.\"}", "{\"comment\": \"Thank you for your response. 
I will improve my scores.\"}", "{\"comment\": \"Thank you very much for the time and effort put into the review and the engagement in the discussion.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their thorough and insightful reviews. We are glad to see that the reviewers engaged with the paper and recognized that we:\\n* Tackle an underexplored [jyT3, vFET, u2Uo] and crucial [jyT3, jmvH] problem\\n* Present an elegant/compelling [vFET, u2Uo], impressively scalable [jyT3] solution\\n* Elucidate identifiability conditions [vFET, 5s6h, u2Uo]\\n* Provide intuitive simulations that demonstrate the efficacy of our method [vFET, WaHo].\\n\\nThe comments put forward by the reviewers are insightful and will help us significantly improve the paper. We address them in depth with a response to each review, and summarize below our response to questions that were raised by more than one reviewer.\\n\\n**Experimental evaluation:** Reviewers commented that additional experiments would be appreciated. In our revision, we included experiments on a cancer growth simulator, following previous work on large-scale effect estimation with sequential treatments [1,2,3]. [2,3] also use this simulator to study irregular sampling times, but do not work with dynamic treatments. Reviewers further noted that the experiments do not include an intervention on \u201cwhat\u201d to do, but only on \u201cwhen\u201d. In the cancer simulation, policies determine the type of treatment as well as its timing. Additionally, we are adding an intervention on dosage for our existing time-to-failure simulation, along with higher-dimensional covariates. These experiments might not finish until the end of the discussion period but will be included in our next revision.\\n\\n**Computational complexity:** Reviewers [vFET, WaHo, jyT3] asked that we comment on the computational complexity of EDQ. 
To give a brief response to this question, we consider two parts that together form an iteration of both EDQ and FQE: (i) evaluating $Q(\\\\mathcal{H}_t)$ and $Q(\\\\tilde{\\\\mathcal{H}}_u)$ where $u=t+\\\\delta$ (for FQE $\\\\delta=1$) and performing a gradient update, (ii) sample actions from the target policy to form $\\\\tilde{\\\\mathcal{H}}$. **In most cases**, where $Q$-functions are parameterized by neural networks, **the first part is substantially more costly than the second**, scaling as $O(d)$ where $d$ is the number of parameters in the network. The cost of sampling an action from a policy in both FQE and EDQ depends on the policy we are interested in and its implementation (e.g. it may involve sampling times from a point process using the thinning algorithm [4], when we are given access to the intensity function), and we specify a few options in the response to reviewers that asked about this. A comment on this, along with examples of how one may implement policies has been added to section 5.1 and the appendix. Apologies for not subscripting $t+\\\\delta$ directly, OpenReview didn't allow complex subscripts.\\n\\nWe appreciate your thorough reviews and are happy to answer any further comments.\\n\\n[1] Bica, I., Alaa, A. M., Jordon, J., and van der Schaar, M. (2020). Estimating counterfactual treatment outcomes over time through adversarially balanced representations. In International Conference on Learning Representations.\\n\\n[2] Seedat, Nabeel, et al. \\\"Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations.\\\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[3] Vanderschueren, Toon, et al. \\\"Accounting for informative sampling when learning to forecast treatment outcomes over time.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[4] Lewis, PA W., and Gerald S. Shedler. 
\\\"Simulation of nonhomogeneous Poisson processes by thinning.\\\" Naval research logistics quarterly 26.3 (1979): 403-413.\"}", "{\"comment\": \"Thank you very much for the helpful comments and for revising your score. We greatly appreciate the time and effort put into the review.\"}", "{\"title\": \"Response to Reviewer 5s6h (1/2)\", \"comment\": \"We thank the reviewer for the comments and hope that the responses below answer the questions they raise.\\n\\n**Connection between Bellman Equation in discrete time and Theorem 1:** To keep the discussion concise, we wish to note that equation 1 is an identity on conditional expectations (we used the name tower property), while the name Bellman equation is often used to refer to an optimality equation that is similar but with a value function that maximizes over the actions. While clearly the two are related, we state this to avoid any inaccuracies in the discussion, as our work studies evaluation and not optimization of policies.\\n\\nAs the reviewer suggested, if we take the continuous time analogue of the discrete time equation 1, we can arrive at a stochastic differential equation that is satisfied by the $Q$-function. However, theorem 1 is not that equation, and it is also not a differential equation. The relation between the equation we derive in theorem 1 and the tower property of conditional expectations/Bellman equation, is perhaps best understood by examining the discrete time version of Theorem 1, which we derive in the appendix (it is referenced in lines 257-258 of the main paper in the original submission). The derived equation in discrete time is not equation 1, but rather an earliest disagreement in discrete time, and the closest methods in the literature are $N$-step policy evaluation methods in Reinforcement Learning, e.g. [1, 2] which is also mentioned in section 3 of the main paper. 
We have clarified this in the revision.\\n\\nAs mentioned in the review, we take advantage of the countable number of jump times and write the equation that is stopped at the time of the earliest disagreement. The benefit of deriving Theorem 1, rather than a stochastic differential equation obtained from finding an analogue of equation 1 in continuous time, is that the equation in Theorem 1 easily leads to an efficient numerical implementation for evaluation of the $Q$ function. This is similar to how the discrete-time Bellman equation is useful since it easily lends itself to an efficient numerical implementation, while the Hamilton-Jacobi-Bellman differential equation is not straightforward to solve numerically. We would be glad to know whether the response properly addresses the reviewer\u2019s concerns.\\n\\n**Can the outcome become too large when the trajectory has many events?** The conditions of the theorem, detailed in the appendix, are that the times we consider are stopped at time $T$ (i.e. are in the interval $[0, T)$) for some $T>0$ and that the number of events is bounded (this is stated in the literature by taking $T_k \\rightarrow \\infty$ as $k \\rightarrow \\infty$, where $T_k$ is the time of the $k$-th event). This means the expected rewards are at the very least non-divergent. Note that we\u2019ve limited the results to population-level statements (i.e. the identity is given in expectations rather than analyzing finite samples, numerical convergence, etc.). Thus they do not capture considerations like numerical stability, handling large rewards, and convergence of $Q$-iterations. To form stable algorithms on cumulative rewards, one might need to introduce discounting as in other RL algorithms (in our simulation the reward only appears at the end of the trajectory). 
We added this explanation to the paper, and we thank the reviewer for bringing it to our attention.\\n\\n**Writing suggestions:** We thank the reviewer; we incorporated the suggestions into the revision and answer the questions that require clarification below. We would be glad to know if anything remains unclear.\\n\\n**Distinction between a marked decision point process and a marked temporal point process (Upadhyay et al. 2018):** The mathematical objects we define are the same (up to their studying a Markov process, whereas we do not use this independence constraint in our presentation). In terms of the task studied, they assume online interaction while we study off-policy evaluation. Notice that the title of Upadhyay et al. 2018 is about \u201cLearning of Marked Temporal Point Processes\u201d, which describes well the fact that they learn a policy that is a marked temporal point process. We used the term \u201cdecision point process\u201d for the entire joint distribution of policy, covariates, and outcomes (instead of just the policy). The latter is in concert with terminology in Reinforcement Learning and sequential decision making, where the distributions are called decision processes instead of just stochastic processes (see e.g. [3]). Thus our choice to name the joint distributions \u201cdecision point process\u201d instead of \u201cpoint processes\u201d seems suitable. We are open to reconsidering this name in case the reviewer has further thoughts about this.\"}", "{\"comment\": \"Thank you very much for the response and for updating your score.\\n\\nWe think there is a revision available now through the OpenReview page, which includes the updates mentioned in our responses. We hope you find it interesting, and we will upload a more polished revision once revision uploads become available again. 
Thanks again for the engagement both in the review and the discussion.\"}", "{\"title\": \"Response to Reviewer WaHo (1/2)\", \"comment\": \"We thank the reviewer for the useful suggestions and for the positive review. Please find our response to each point below:\\n\\n**Notation comments:** These are great points, indeed $\\\\ll$ denotes absolute continuity, we added a clarification for that. The reference to assumptions 1-3 became inconsistent due to our final edits and we apologize for that, the assumptions are the ones detailed in section 2.2, and we fixed the theorem statement to make the reference to these assumptions precise. Finally, we agree regarding the loss function, which has to be set such that it yields the conditional expectation at optimality. We specified this as the squared loss for simplicity, where for categorical outcomes this would be set to cross entropy.\\n\\n**Practical workaround for Irregular Time?** This is a crucial question, hopefully our answer will illustrate the details that necessitate EDQ instead of this type of simpler workaround. *Apologies in advance*, OpenReview did not allow for complex subscripts, so in the following $j := i+1$.\\n\\nThe suggestion for a practical workaround is to include time (or time gap) into the state $X$, i.e. the states at the $i$-th place of the trajectory are now $(t_i, \\\\mathbf{x}_i)$. Our goal is to intervene on time, therefore time also has to be included in the treatment $A$. Continuing with this workaround, actions are of the form $(t_i, \\\\mathbf{a}_i)$ and updates in FQE are along the form that fits $Q(t_i, \\\\mathbf{x}_i, \\\\mathbf{a}_i)$ to the quantity $y_j + Q(t_j, \\\\mathbf{x}_j, \\\\mathbf{a}_j)$.\\n\\nThe workaround becomes problematic once we intervene on time $t_i$ and wish to formulate a correct $Q$ update. 
For instance, if we intervene on time $t_j$ and change it to $\\\\tilde{t}_j$, then how should we set the state $\\\\mathbf{x}_j$ in the $Q(t_j, \\\\mathbf{x}_j, \\\\mathbf{a}_j)$ term of the update (or the reward $y_j$)? We cannot simply keep it as $\\\\mathbf{x}_j$ since the intervention on time may change the distribution of this feature. As an alternative, we could take further steps and define some null tokens for $\\\\tilde{\\\\mathbf{x}}_j$ where appropriate, and analyze cases where we intervene on time to set it later than observed events in the trajectory. Following this line of thought would likely eventually lead to a solution like EDQ. However, EDQ is not equivalent to the workaround suggested by the reviewer that runs FQE on adjusted feature spaces. We would be happy to hear whether this provides a full answer to the reviewer\\u2019s question.\\n\\n**Reduction to FQE in Discrete Time?** EDQ does not reduce to FQE in discrete time, as the answer to the previous question may hint. Instead, EDQ in discrete time, like its generalization to continuous time, is an update rule based on disagreement between the target policy and the observations. FQE and EDQ in discrete time do not apply the same update to $Q_t$ whenever the action sampled from the target policy for time $t+1$ is the same as the one in the observed trajectory. For completeness, we derive the discrete time equation for EDQ in the appendix after the proof of Theorem 1. The resulting algorithm for discrete time is perhaps closest in spirit to $N$-step off-policy evaluation methods in Reinforcement Learning (though we are not aware of works that study an earliest disagreement algorithm), which showed some favorable results in the past [1,2]. 
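As a schematic illustration of the disagreement-driven update described above (not the paper's exact procedure, and with hypothetical helper names), a discrete-time earliest-disagreement regression target could be sketched as:

```python
def edq_target(traj, t, q_fn, target_policy):
    """Schematic discrete-time earliest-disagreement target for Q at step t.

    traj: list of (state, observed_action, reward) tuples.
    q_fn(state, action): current Q estimate.
    target_policy(state): samples the target policy's action.

    Rewards are accumulated along the observed trajectory until the first
    step where the target policy disagrees with the logged action; the
    bootstrap term is then evaluated at that disagreement point.
    """
    total_reward = 0.0
    for k in range(t + 1, len(traj)):
        state, observed_action, reward = traj[k]
        proposed = target_policy(state)
        if proposed != observed_action:
            # Earliest disagreement: bootstrap with the target policy's action.
            return total_reward + q_fn(state, proposed)
        total_reward += reward  # policies agree, keep following the data
    return total_reward  # no disagreement before the trajectory ends
```

Whether the reward at the disagreement step itself is included depends on the reward convention; the sketch accumulates rewards only along steps where the policies agree, whereas a one-step FQE target would bootstrap immediately at step t+1.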
We choose to focus on the irregular sampling case, since it seems like the scenario where the earliest disagreement approach is most warranted, and also since the setting is more general than discrete time.\\n\\n**Computational complexity w.r.t. FQE:** A response to this question is included in the general response. We will reiterate some points from it here and include additional details.\\nThe only difference between EDQ and FQE in computation time per iteration is due to sampling from the target policy in order to draw the treatments used in the $Q$ update. In most applications, the added complexity due to this difference is small w.r.t. the cost of evaluating the $Q$-function, and in turn the cost of function evaluation is the same for FQE and EDQ. The computational complexity of sampling from the policy depends on how we represent and implement it. For instance, policies may be specified in terms of their intensity functions, which requires sampling using the thinning algorithm [3], with neural networks that output the time-to-next-event (e.g. [4] for one example among networks that predict time), or with closed-form decision rules. Concretely, in our first simulation we sample exponential variables from times when a feature crosses a certain threshold, and in the cancer simulation we sample actions from a discrete-time policy (which means EDQ needs to generate a few more samples until disagreement is achieved, whereas FQE samples one). In both simulations this sampling time is negligible w.r.t. the evaluation of the $Q$ function.\"}", "{\"summary\": \"The paper proposes a method for estimating the causal effect of policies using Earliest Disagreement Q-Evaluation (EDQ), where a policy determines both the action and its timing. Specifically, the authors aim to estimate the outcome of a target policy by leveraging dynamic programming, extending fitted Q-evaluation from discrete to continuous time. 
The update function is driven by the earliest point of disagreement between the behavior policy and the target policy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper considers the problem of evaluating a policy using continuous-time causal inference, focusing on off-policy evaluation.\", \"To tackle the challenges of policy evaluation in continuous time, where the conventional Q-function collapses, the paper leverages the property of countable decision points in a point process and introduces a simple yet efficient method based on earliest disagreement times.\", \"The paper also provides identifiability conditions for the causal estimands, ensuring accurate estimation.\"], \"weaknesses\": [\"Could the authors discuss the connection between the Bellman equation in discrete time and Theorem 1? Equation (1) is equivalent to the Bellman equation for a finite-horizon problem in discrete times, and Theorem 1 appears to be a direct extension using differential equations and stopping times.\", \"The proposed framework allows the trajectory outcome to be a sum of time-specific outcomes (lines 79, 204-205). Are there any constraints on the number of observations in the trajectory? If the total number of $k$ is indefinite, the scale of $Y$ will vary with the number of observed time-specific outcomes, potentially affecting the outcome\\u2019s interpretation and stability.\", \"Writing improvement suggestions\", \"In the second paragraph of Section 1, the motivation could be clearer by explicitly stating the challenges related to sequential treatment and irregular timing, with a particular emphasis on the primary challenge about decision times. In the current introduction, it is not immediately clear to me which difficulties are being discussed in lines 37-39.\", \"In the third paragraph of Section 1, it would help to introduce the components of an intervention. 
From my understanding, there are two key components: the timing of the intervention and the intervention itself. Otherwise, readers might not yet understand whether treatment timing in line 49 is determined by the environment or chosen by the policy.\", \"In line 79, $Y_k$ has not been defined yet.\", \"At the end of line 88, should $d N(t)$ be $d N^l (t)$?\", \"Could you clarify the distinction between a marked decision point process in Definition 1 and a marked temporal point process (Upadhyay et al. 2018)?\", \"A period is missing from line 103.\", \"In line 105, should each trajectory be indexed by $i$ or $m_i$? It seems that $m$ is the total number of trajectories and $i$ takes values from $[m]$.\", \"What are assumptions 1-3 in line 251?\", \"In equation (2), should the right-hand side condition on $\\\\mathcal{H}_t$ as in equation (1)?\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer WaHo (2/2)\", \"comment\": \"**References**\\n\\n[1] Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. (2016). Safe and efficient off-policy\\nreinforcement learning. Advances in neural information processing systems, 29.\\n\\n[2] Precup, D. (2000). Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80.\\n\\n[3] Lewis, PA W., and Gerald S. Shedler. \\\"Simulation of nonhomogeneous Poisson processes by thinning.\\\" Naval research logistics quarterly 26.3 (1979): 403-413.\\n\\n[4] Nagpal, Chirag, Vincent Jeanselme, and Artur Dubrawski. \\\"Deep parametric time-to-event regression with time-varying covariates.\\\" Survival prediction-algorithms, challenges and applications. 
PMLR, 2021.\"}", "{\"comment\": \"We are glad to learn that our responses addressed your concerns, thank you very much for engaging in the discussion.\"}", "{\"summary\": \"The authors propose a method for estimating the effect of a continuous-time treatment policy, defined as a treatment intensity function (when to treat) coupled with a mark distribution (what treatment to select), from observational data. Their approach, Earliest Disagreement Q-Evaluation (EDQ), is based on the principle that effects of observed versus proposed policies should not diverge until the point when the two policies disagree. The authors first review the identifiability of causal effects in this setting, then present the proposed estimator and associated algorithm. They construct a simulation setting in which a supposed vital sign is responsive to treatment, with diminishing returns. They show that EDQ estimates the effect of alternative (non-observed) treatment policies more accurately than comparator methods in this setting.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors tackle a novel problem and present an elegant solution. Identifiability of causal effects follows from previous results, and is presented clearly. The related work section is outstanding: comprehensive, highly informative, and well-written. The simulation setup is intuitive and clearly presented, and the proposed method outperforms alternatives.\", \"weaknesses\": [\"The key weakness is that the experimental results are underdeveloped. I would have liked to see more variations on the simulation results as well as application to at least one real dataset.\", \"I was confused by the notation and presentation in Definition 5, which then made it difficult to understand the details of EDQ. Specifically, I am not clear on how $\\\\tilde{P}_t^a$ is defined or how to sample from it. 
I have lowered my score for this reason, but I'm happy to increase it if the authors clarify the presentation and/or if other reviewers did not have similar difficulty.\", \"While defining the policy as an intensity (i.e. rate) is interesting, I have a hard time imagining a realistic scenario where it would make sense to sample treatment times rather than deciding whether/how to treat at fixed or given intervals.\", \"(minor) The title (\\\"When and What\\\") is a bit misleading. I take the point that learning when to treat (i.e., the rate portion of the policy) is the challenging part, but saying both when and what in the title is misleading, since the paper and experimental results focus only on the former.\", \"(minor) There are quite a few typos / minor errors, particularly in later sections of the paper.\"], \"questions\": [\"Line 89: Why do these assumptions imply that the process is self-exciting? Please elaborate.\", \"How well does the algorithm scale? Could the authors comment on the computational complexity?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jmvH\", \"comment\": \"We thank the reviewer for the valuable comments and for appreciating positive aspects of our work. Below we provide responses to the comments and questions raised by the reviewer.\\n\\n**Experiments on related settings.** We thank the reviewer for the great suggestion and reference to simulators. As mentioned in the general response, we added an experiment with a cancer simulator that is popular in works on causal inference with sequential (but non-dynamic) treatments [1, 2], and used it to study evaluation of dynamic treatments. It has also been studied under an irregular sampling setting, which makes it rather suitable for our work. 
We hope that this addresses concerns about required experiments.\\nWe have also referred to the simulators suggested by the reviewer on sepsis and diabetes [3,4] as possible additional settings.\\n\\n**Discussion on Off-Policy Evaluation Literature.** We thank the reviewer for pointing out references. Specifically, the seminal when-to-treat work should have been mentioned and was regrettably left out of our discussion. It deals with regret bounds and doubly-robust estimation when a single discrete treatment start (or stopping) time is scheduled. This is different from the problem we study, which seeks to schedule multiple sequential treatments in continuous time. However, it would be interesting if future work can adopt ideas like the regret bounds, and advantage learning with \\u201cuniversal\\u201d nuisances to the irregularly sampled setting. We did our best to provide an adequate background on all the related fields, but are happy to take further suggestions from the reviewer on other work to include. Please note that many of the algorithmic components like doubly robust estimation are complementary to our contribution, and in principle can be incorporated on top of it. We think that since our contribution is a \\u201cdirect method\\u201d for irregularly sampled data, it makes the most sense to compare it with the direct method in discrete time (i.e. FQE). If there are additional baselines that the reviewer thinks can better support the results, then we are happy to incorporate them in our experiments.\\n\\n**Response to minor comments:**\\n\\n> Code release.\\n\\nThe code will be released upon publication.\\n> $a$ was defined as action and then represented number of multivariate unobserved process.\\n\\nWe might be misunderstanding the comment, does this refer to the sentence \\u201cand a multivariate unobserved process with intensity $\\\\lambda^u$\\u201d? 
If so then \\u2018a\\u2019 is not a number here, it is just the word \\u201ca\\u201d.\\n\\n> FQE citations.\\n\\nThank you for the comment, we included Watkins and Dayan [5] and Le et al. [6] as citations in our revision. We are happy to take other suggestions on this as well.\\n> Lower cases for bolded words.\\n\\n Thanks, this is fixed in our revision.\\n\\n[1] Seedat, Nabeel, et al. \\\"Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations.\\\" International Conference on Machine Learning. PMLR, 2022.\\n\\n[2] Vanderschueren, Toon, et al. \\\"Accounting for informative sampling when learning to forecast treatment outcomes over time.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[3] Oberst, Michael, and David Sontag. \\\"Counterfactual off-policy evaluation with gumbel-max structural causal models.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[4] Namkoong, Hongseok, et al. \\\"Off-policy policy evaluation for sequential decisions under unobserved confounding.\\\" Advances in Neural Information Processing Systems 33 (2020): 18819-18831.\\n\\n[5] Watkins, C. J. and Dayan, P. (1992). Q-learning. Machine learning, 8:279\\u2013292.\\n\\n[6] Le, H., Voloshin, C., and Yue, Y. (2019). Batch policy learning under constraints. In International\\nConference on Machine Learning, pages 3703\\u20133712. PMLR.\"}", "{\"comment\": \"Thanks for your response. I appreciate the updates and clarifications and look forward to the revised version. I have increased my score to Marginal Accept.\"}", "{\"comment\": \"Thank you very much for the response and for increasing your score.\\n\\nWe are glad our clarifications were helpful, and hope the revision adequately resolves the points raised in the review. 
Thank you for clarifying your intent in the first point as well.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer u2Uo\", \"comment\": \"We thank the reviewer for the positive review and useful comments. Our response to questions is below:\\n\\n\\n**Experiments:** Please see the general response, we included an additional simulation from a realistic tumor growth simulator, also used in other papers on the topic (e.g. [1,2] use this simulator as their only experimental evaluation).\\n\\nFor real-world data, while we plan to do future work on finding suitable data, e.g. from existing experiments, please note that to the best of our knowledge there is no such data currently available that is being used in papers on the topic. We will work to have another experiment that is semi-synthetic for the next revision (after the discussion period), e.g. by taking vitals from MIMIC and introducing synthetic treatments and outcomes, but we must emphasize that such a setting is inherently limited when the task involves sequential decisions. This is because a treatment $a$ at time $t$ cannot affect the covariates $x$ at time $u>t$, as this yet again requires a simulation (to form the subsequent covariates affected by $a$). The consequence is that we must select between working in a fully-synthetic setting (i.e. defining synthetic effects on future vitals), or having treatments that only affect future outcomes and treatments, without affecting future vitals. The latter option is somewhat unrealistic, and eliminates aspects of planning from the problem. 
Considering these caveats, we opted, like several other works in the field, to work in a fully synthetic setting that is based on a realistic simulator that is used in the pharmaceutical industry.\\n\\n**Why are consistency and SUTVA not mentioned?** The formalism we use in our section on identifiability is not based on potential outcomes notation, but rather on the graphical approach. As Pearl [1, p.128] comments, it is possible to show that assumptions included in SUTVA, such as consistency, are automatically satisfied in the structural interpretation of causality. Upon reviewing the literature, there are works that explicitly mention SUTVA and those that assume the graphical interpretation. We fell into the latter, but are glad to explicitly mention it if you find this important.\\n\\n\\n**Impact of architecture and hyperparameters:** In this work our main focus is on laying out the formal framework and methodology to tackle the estimation problem on treatment times. From the experimental point of view, this involved modifying the transformer to suit our task and algorithm (that is, after validating our solution with toy linear models that are not included in the paper). This includes defining different types of events, embedding times and time differences, and including a target network in the training procedure. Our experiments with hyperparameters were thus confined to the variations of these modifications, where the goal was to create the simplest working version of the algorithm. While we ended up choosing rather standard hyperparameters (e.g. a learning rate of $1e-3$ with an Adam optimizer, soft-Q updates with rates of 0.001-0.01, etc.), some of the choices did have an impact on results; e.g., larger batch sizes helped stabilize optimization. We will include the tools to reproduce all these experiments in our codebase. 
Since $Q$-methods are used with many architectures in the RL literature, we have cautious optimism that the algorithm can be successfully applied to other sequence modeling architectures besides the transformer we use here, but we leave this exploration for future work. If there is a specific architecture that the reviewer would find helpful for comparison, we would be happy to take suggestions and include it in our next revision.\\n\\n[1] Pearl, Judea. \\\"Causal inference in statistics: An overview.\\\" (2009): 96-146.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Dear authors, I would like to thank you for the time and effort to answer my questions. I'd be happy to increase my score.\"}", "{\"comment\": \"I appreciate the authors' detailed responses to my questions. These points addressed my concerns. Therefore, I maintain my positive score for this paper.\"}" ] }
5y3QbuK6HD
Burning RED: Unlocking Subtask-Driven Reinforcement Learning and Risk-Awareness in Average-Reward Markov Decision Processes
[ "Juan Sebastian Rojas", "Chi-Guhn Lee" ]
Average-reward Markov decision processes (MDPs) provide a foundational framework for sequential decision-making under uncertainty. However, average-reward MDPs have remained largely unexplored in reinforcement learning (RL) settings, with the majority of RL-based efforts having been allocated to episodic and discounted MDPs. In this work, we study a unique structural property of average-reward MDPs and utilize it to introduce Reward-Extended Differential (or RED) reinforcement learning: a novel RL framework that can be used to effectively and efficiently solve various subtasks simultaneously in the average-reward setting. We introduce a family of RED learning algorithms for prediction and control, including proven-convergent algorithms for the tabular case. We then showcase the power of these algorithms by demonstrating how they can be used to learn a policy that optimizes, for the first time, the well-known conditional value-at-risk (CVaR) risk measure in a fully-online manner, without the use of an explicit bi-level optimization scheme or an augmented state-space.
[ "Reinforcement Learning", "Average Reward Reinforcement Learning", "Risk-Sensitive Reinforcement Learning", "Markov Decision Processes", "CVaR" ]
Reject
https://openreview.net/pdf?id=5y3QbuK6HD
https://openreview.net/forum?id=5y3QbuK6HD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zfWtOajJS9", "xTovc5ijF4", "wxpypWbOk7", "vKc3k0vpbw", "sgWBkOYTLT", "o8VnSDxGmT", "jdYbz9HPqK", "jWeKqGeN4n", "ibojzB3RqC", "hiao7UhxB6", "gjt4EmSkww", "gTRsKqJl1A", "cQrtpUGEYs", "bcMsdMCF0A", "YCfUHdnorp", "VVctQEGkYH", "VAHtLAMDbH", "Lj407K8ppD", "L7MpWEnpxc", "Gk82HPKVck", "F59sHXvMwf", "ElU71Mq9Kx", "AIx7gGpPwQ", "AIQmAgdlXo", "8Fij7dH1Uv", "4zqbOXg39K", "4CV37GG9qa", "27TDxo8rgw", "21kjpvn9Ya", "1jFbFqYvWA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1731957331320, 1733226534203, 1733201537720, 1732123044671, 1731957475282, 1730219000067, 1732068448241, 1733208678413, 1731957599904, 1730237340546, 1733266341194, 1731122564886, 1731955840632, 1733145549457, 1731955250426, 1731955762764, 1731956565333, 1733145611761, 1729906081534, 1733201580691, 1731956628128, 1732134269492, 1731955963258, 1732219764156, 1733647376815, 1733265980534, 1731956219620, 1733145696410, 1737523683842, 1733187562747 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_zhPt" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_NkZR" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_dBpb" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_FEZJ" ], [ 
"ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_dBpb" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_zhPt" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_FEZJ" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_dBpb" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Area_Chair_ktcw" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Submission5096/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5096/Reviewer_FEZJ" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their consideration and review of our paper. Please see our response below. Due to the character limit, we split our response across three comments:\\n\\n**(1 / 3)**\\n\\n**The statements of Thms. 4.1 and 4.2 are vague and should be formalized: what do they show? Why not focus on just one subtask as this is the case of CVaR optimization?**: \\n\\nBased on the reviewer\\u2019s comments, we have updated the proof for Theorem 4.1 (now Theorem 5.1), such that the concept is presented in a clearer manner (it no longer uses the vague function inverse notation). 
Moreover, the updated draft now includes CVaR-specific proofs of Theorems 4.1 and 4.3 (now Theorems 5.1 and 5.3) in Appendix D (see Theorems D.1.1, D.1.2).\\n\\nThe generic theorems (5.1-5.3) show that we can solve any subtask that meets definition 4.1 (now defn 5.1) using the TD error. This is significant because it means that we do not have to rely on other types of gradients (for example, the gradient of the quantile loss) to solve the subtasks. The CVaR-specific theorems (D.1.1, D.1.2) show that, by only using the TD error, our algorithms converge to the optimal long-run VaR and CVaR of the observed reward.\\n\\nWe note that the underlying methods presented in our work are not CVaR-specific. Hence, we present them in a generalized way in the main text, so that we do not give the impression that the methods presented can only be used in the context of CVaR.\\n\\n**How does definition 4.1 translate to the max in Eq. (10) ?**:\\n\\nConceptually, we can think of the subtask function as the expression inside the expectation in Equation 10 (Equation 7 in the updated draft), such that $X$ corresponds to the observed per-step reward, $b$ corresponds to the subtask that we want to solve, and the output of this expression corresponds to a modified per-step reward, $\\\\tilde{r}$, whose long-run average we can optimize using the average-reward MDP.\", \"the_basic_idea_is_that\": \"the max in Equation 10 (Equation 7 in the updated draft) motivates having to optimize the subtask $b$ because if we optimize $b$, as well as the average of the modified reward, then we will have optimized the CVaR of the observed reward (see Corollaries D.1-D.4). 
As such, the max does not directly correspond to defn 4.1 (now defn 5.1), but motivates our framework for solving subtasks (as defined in defn 5.1).\\n\\nNow, in practice Equation 10 (Equation 7 in the updated draft) needs to be modified because directly using it as the subtask function may result in multiple solutions (see lines 1383-1450), so in Appendix D.1 we augment it to narrow the set of possible solutions to those such that $b=$VaR.\\n\\n**Notably, [2] focuses on that same criterion rather than the discounted one.**\\n\\nWe now mention [2] in Section 2.1 of the updated draft.\\n\\nThe key difference between our work and [2] lies in the average-reward (i.e. $\\\\bar{R}_t$) update. In [2], the average-reward update is as follows (step 5 in Table 1 of Bhatnagar, et. al., where we align the notation with that of our draft):\\n\\n$$\\\\bar{R}_{t+1} = (1 - \\\\alpha) \\\\bar{R}_t + \\\\alpha R_t$$\\n\\nwhere, $R_t$ is the observed reward at time t, and $\\\\alpha$ is the step size. \\n\\nIn contrast, the algorithms in our work estimate the average-reward using the TD error, $\\\\delta$ (e.g. Eqn 6d in our draft):\\n\\n$$\\\\bar{R}_{t+1} = \\\\bar{R}_t + \\\\alpha \\\\delta_t$$\\n\\nHence, the analytical results of Bhatnagar, et. al. are not directly applicable. \\n\\nAs a side note, the update used in Bhatnagar, et. al. is restricted to the on-policy case, whereas our update is applicable to both the on-policy and off-policy cases.\\n\\n**In the risk-neutral case, [2] do function approximation on the average reward setting. I think the same could be done for CVaR + function approximation.**:\\n\\nWe provide algorithms with function approximation in Appendix B and D.2 of the updated draft (where algorithm 6 in Appendix D.2 does function approximation + CVaR). Moreover, we use CVaR algorithms with function approximation in the inverted pendulum experiment. 
We also note that as previously mentioned, the work that the reviewer mentioned does not use the TD error to estimate the average reward, and hence the results are not directly applicable.\\n\\n**As a side comment, I am unaware of any provably convergent AC algorithms for the risk-neutral discounted return. In that respect, it does not surprise me that the same holds for risk-sensitive MDPs, which questions the significance of this work.**\\n\\nWe would be grateful if the reviewer could elaborate on this point, as it is not clear to us how the convergence of AC algorithms impacts the significance of our work. We would respectfully argue that the significance of our work lies in the ability to solve various subtasks simultaneously in a fully-online manner, with theoretical guarantees in the tabular case. The CVaR case study is an important result, however our approach is not specific to CVaR, and can be applied beyond the risk-aware domain.\\n\\n**...(Continued in next comment)**\"}", "{\"comment\": [\"Thanks for detailed responses and revising the paper.\", \"Regarding figures, the legends are small. Consider using larger font sizes.\", \"Regarding the comment about asymptotic convergence: I am not sure if I was clear enough. I am not sure either if I fully understood your response. Even though the average-reward notion is an asymptotic one, non-asymptotic bounds could still be relevant, and this is done in the context of many learning algorithms derived for this setting (e.g., non-asymptotic regret bounds for regret minimization in average-reward RL, and many others). In essence, the notion of gain $g^\\\\pi$ of a policy $\\\\pi$ is asymptotic, but it can be related to the running average $\\\\sum_{t=1}^T r_t/T$ for $r_t \\\\sim R(s_t,a_t)$ with $a_t\\\\sim \\\\pi(s_t)$ for any finite $T$ and corresponding deviation bounds could be derived. It is fine if the current version only derives asymptotic bounds. 
Yet, I think deriving non-asymptotic ones is of relevance as future direction.\"]}", "{\"comment\": \"We thank the reviewer for their most recent comments. We are happy to provide the requested clarifications. Due to the character limit, we split our response across two comments:\\n\\n**(1 / 2)**\\n\\n**I am still confused about the meaning of subtasks. According to your definition in Def 5.1, a subtask is a constant value and a subtask function is a weighted sum of the reward and all subtasks.**\\n\\nYour interpretation is correct. A subtask, $z_i$, is indeed a constant. Definition 5.1 states that in order to satisfy the definition of a \\u201csubtask\\u201d (in the context of our work), the constant $z_i$ must belong to a suitable function (a \\u201csubtask function\\u201d) that satisfies the criteria listed in Definition 5.1. In essence, the subtask function must be of the form:\\n\\n$$\\\\tilde{R}_t = R_t + a_0 + a_1z_1 + a_2z_2 + \\\\ldots + a_iz_i + \\\\ldots + a_nz_n.$$\\n\\n(Where we have $n$ subtasks).\", \"a_natural_question_to_ask_might_be\": \"*Why must the subtask function be of this form?* Well, as we will discuss below, a subtask function of this form makes it possible to estimate and/or optimize any given subtask using only a modified version of the TD error.\\n\\n**Aren\\u2019t all the zs given?**\\n\\nAlthough the subtasks (**zs**) are constant, they are not given, and so we need a way to estimate them. To this end, the primary contribution of our paper is a general-purpose framework that allows us to estimate and/or optimize the subtasks using only a modified version of the TD error. In other words, we start with an initial guess for a given subtask $z_i$ (i.e., $Z_{i, t=0}$), then, through our framework, we can estimate/optimize this estimate using only a modified version of the TD error (this happens in parallel to the regular value function learning/optimizing that occurs in the MDP). 
We note that Appendix C includes convergence proofs for the subtask estimates for the tabular case (i.e. $Z_{i, t} \\\\to z_i$ as $t \\\\to \\\\infty$). Also see Theorems 5.2, 5.3, and D.1.2.\", \"a_natural_follow_up_question_to_ask_might_be\": \"*What is the point of estimating or optimizing a subtask?* In our work, we assume that there is some underlying motivation or benefit to estimating and/or optimizing the subtask $z_i$. For example, we might want to know what the constant $z_i$ is for a given policy (hence, we want to \\u2018estimate\\u2019 the subtask), or, we may want to know what the constant $z_i$ is for the optimal policy (hence, we want to \\u2018optimize\\u2019 the subtask). In the case of CVaR, we want to know what the value-at-risk (VaR) of the optimal policy is, because if we know what this (constant) value is, then we can turn the computationally-expensive process of optimizing CVaR (which can involve solving multiple MDPs) into a trivial one (that only requires solving a single MDP). See lines 210-240 for a thorough discussion on this. Importantly, our framework is able to simultaneously estimate/optimize any arbitrary number of subtasks simultaneously in a fully-online manner.\\n\\nWe note that the \\u2018weights\\u2019 $a_0, \\u2026, a_n$ are given (they are problem-specific). For example, in the CVaR case study we have single subtask, VaR, with the following \\u2018weight\\u2019: \\n\\n$a_{\\\\text{VaR}, t} = 1$ if $R_t >= \\\\text{VaR}_t$, and \\n\\n$a_{\\\\text{VaR}, t} = \\\\frac{\\\\tau - 1}{\\\\tau}$ if $R_t < \\\\text{VaR}_t$ (where $\\\\tau$ is the known CVaR parameter). 
\\n\\nWe note that a full derivation of these \\u2018weights\\u2019 for CVaR can be found in Appendix D.\\n\\n**...(Continued in next comment)**\"}", "{\"comment\": \"We thank the reviewer for their quick response, their willingness to reconsider their initial score, as well as their most recent comments.\\n\\nWe have now included the average-reward CVaR objective in the main body of the latest draft (Equation 9).\\n\\nWe have also added a discussion on Blackwell-optimality, as suggested by the reviewer (lines 165-171). In short, the Blackwell optimality criterion is related to discounted MDPs, and serves as a metric that indicates whether a discounted-optimal policy is also average-optimal (i.e., whether it satisfies the long-run average-reward optimality criteria). In the context of our paper, we employ methods that utilize the standard average-reward MDP, which only aims to optimize the long-run behavior (hence the solution is not Blackwell-optimal). We note that it is the unique properties of the standard average-reward MDP that enable our subtask-driven approach. As such, although our methods do not yield Blackwell-optimal policies, they do yield our subtask-driven approach, along with the key CVaR result.\\n\\nFinally, we are more than happy to work with the reviewer to improve the ordering of the paper. In particular, we would be grateful if the reviewer could specify what aspect of the ordering is confusing, so that we can address it.\\n\\nWe look forward to further discussions with the reviewer to address any remaining concerns and answer any more questions.\"}", "{\"comment\": \"**(2 / 3)**\\n\\n**The experiments are somewhat unclear and not very convincing.**:\\n\\nWhile our experiments are limited, they, in combination with the theoretical work and impactful case study presented, address several critical questions and potential concerns about the capabilities of our algorithms. 
In particular, they show that:\\n\\n- Our algorithms are able to successfully optimize CVaR even if it results in a lower average-reward.\\n\\n- Our algorithms have comparable (if not better) performance to the baseline risk-neutral algorithms, as shown in the inverted pendulum experiment where both methods share the same optimal solution.\\n\\n- Even with function approximation and an actor-critic architecture, the algorithms are still able to find the optimal CVaR policy.\\n\\n- Our algorithms are robust to the initial guesses for the VaR/CVaR estimates.\\n\\nWe have also included an additional experiment in Appendix E (line 1806) of the update draft, which further validates that our algorithms can optimize at the desired risk level.\\n\\nAs such, we believe that our experiments, in conjunction with the theoretical results, sufficiently demonstrate the capabilities of our algorithms. \\n\\n**The theoretical contribution seems to be a simple adaptation of risk-neutral TD to CVaR-TD.**:\\n\\nThere are two points that we would like to make. \\n\\nThe first is that the theoretical work presented in our work goes beyond CVaR. In particular, we present a general-purpose framework that makes it possible to solve various learning objectives simultaneously and in a fully-online manner in the average-reward setting using only (a potentially modified version of) the TD error. This includes convergence proofs for tabular algorithms derived from this framework. For clarity, the key theoretical contributions of this work are not CVaR-specific, and can be applied beyond the risk-aware domain.\", \"the_second_point_is_that_we_are_able_to_leverage_this_general_purpose_framework_to_achieve_an_important_result\": \"optimizing CVaR without augmenting the state-space or needing an explicit bilevel optimization scheme. To our knowledge, our algorithm is the first to achieve this in an MDP-based setting. 
By contrast, a simple adaptation of risk-neutral TD to CVaR-TD, such as the methods proposed by Xia et al. (2023), would need to have an augmented state-space and a bilevel optimization to optimize CVaR. This can potentially mean having to solve multiple MDPs or a standalone optimization at every step.\\n\\n**Explaining the analytical challenges encountered under risk-sensitive criteria would be helpful.**:\\n\\nWe have included a discussion of this in the updated draft in Section 4.\\n\\nThe primary non-triviality lies in that we need to know what the optimal VaR is in order to calculate the optimal CVaR. However, one does not typically know this value beforehand, so existing methods have to perform some version of the optimization presented in Equation 11 of our paper (Equation 8 in the updated draft).\\n\\nIn a standard/naive implementation of Equation 11 (Equation 8 in the updated draft), we need to augment the state-space with VaR, which can be any real (potentially-bounded) number. Moreover, a naive implementation often implies solving multiple MDPs (each with a different guess for VaR), which compounds the computational costs induced by a larger state-space.\\n\\nNow consider more clever methods that attempt to mitigate the computational costs. One of the most well-known, computationally-efficient examples is Chow et al. (2015), who utilized a clever but cumbersome decomposition technique that made it possible to only need to augment the state-space with a value between 0 and 1, as well as only needing to solve a single MDP. 
However, even this clever method requires the use of linear interpolation, as well as having to solve a standalone optimization at every iteration.\\n\\nBy contrast, the average-reward formulation, in combination with our proposed approach, allows us to circumvent these issues altogether, such that we can optimize both VaR and CVaR simultaneously in a fully-online manner.\\n\\n**...(Continued in next comment)**\"}", "{\"summary\": \"This paper focuses on the infinite horizon average criterion, to learn CVaR return without bi-level optimization. Indeed, CVaR-MDPs i.e., MDPs under CVaR objective, require solving an optimization problem at each policy evaluation step. By switching to the CVaR of the average reward (instead of discounted), the authors introduce RED-CVaR, a TD-type algorithm that avoids the inner optimization problem. Convergence results are provided under standard assumption. The approach is validated on a two-state MDP and inverted pendulum.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to read, and the writing skills are good.\", \"I am unaware of previous work that proposed CVaR optimization for the average reward criterion, so this is original (as far as I can tell).\"], \"weaknesses\": \"I reviewed this paper for another venue where reviewers voted for rejection unanimously.\\nThere have not been any substantial updates since that submission, so my concerns apply to this one. I copy below the most critical concerns I had then: \\n\\n- Even in the risk-neutral case, the average reward criterion has some analytical advantages. Notably, [2] focuses on that same criterion rather than the discounted one. As a side comment, I am unaware of any provably convergent AC algorithms for the risk-neutral discounted return. 
In that respect, it does not surprise me that the same holds for risk-sensitive MDPs, which questions the significance of this work.\\n- Another missing related work is [5], which considers infinite horizon average reward but with entropic risk instead of CVaR\", \"a_discussion_on_the_nature_of_the_risk_considered_in_this_work_is_missing\": [\"is it nested or static? It looks static, i.e., the objective is $\\\\text{CVaR}(\\\\bar{r}_{\\\\pi})$, not nested, see [3, 4]. Therefore, time-consistency issues may arise and if not, they should be discussed. On the other hand, the nested formulation enables doing DP but lacks interpretability.\", \"The initial claims in Sec 2.1 are incorrect: average criteria have been extensively studied already in the 60-s with Howard and Blackwell. In particular, the Blackwell optimality criterion bridges the gap between discounted and average returns. See Chaps 8-9 of [1].\", \"Eqs (4)-(5) are called Poisson equations, see [2].\", \"In the risk-neutral case, [2] do function approximation on the average reward setting. I think the same could be done for CVaR + function approximation.\", \"Def 4.1 is unclear. How does this definition translate to the max in Eq. (10) ?\", \"The statements of Thms. 4.1 and 4.2 are vague and should be formalized: what do they show? Why not focus on just one subtask as this is the case of CVaR optimization?\", \"Formal algorithm pseudo-codes should appear instead of a list of equations (17)\", \"The learning rates $\\\\eta$, $\\\\alpha$ are sometimes constant, sometimes time or even state-dependent.\", \"The convergence plots seem to show one run per experiment. How is the seed chosen? Is it random? Have the algorithms been run on more seeds? 
The authors are encouraged to plot error curves with mean\\u00b1std.\", \"Broadly speaking, the following concerns led to my grading:\", \"Some claims are inaccurate, including related works.\", \"The experiments are somewhat unclear and not very convincing.\", \"Although optimizing a CVaR-average return criterion is new, the theoretical contribution seems to be a simple adaptation of risk-neutral TD to CVaR-TD. In particular, explaining the analytical challenges encountered under risk-sensitive criteria would be helpful. In particular, why is the unichain assumption still necessary? I think this comes from the static nature of the CVaR -- I don't think the same assumption would be required/enough if CVaR here were nested.\", \"*[1] Puterman, Martin L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014. [2] Bhatnagar, Shalabh, et al. \\\"Incremental natural actor-critic algorithms.\\\" Advances in neural information processing systems 20 (2007). [3] Hau, Jia Lin, Marek Petrik, and Mohammad Ghavamzadeh. \\\"Entropic risk optimization in discounted MDPs.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2023. [4] Shen, Yun, Wilhelm Stannat, and Klaus Obermayer. \\\"Risk-sensitive Markov control processes.\\\" SIAM Journal on Control and Optimization 51.5 (2013): 3652-3672. [5] Murthy, Yashaswini, Mehrdad Moharrami, and R. Srikant. \\\"Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs.\\\" Learning for Dynamics and Control Conference. PMLR, 2023.*\"], \"questions\": \"I encourage the authors to account for previous reviews' comments and suggestions and update their paper accordingly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Good rebuttal with possible improvement\", \"comment\": [\"I have revised my rating based on the authors' response. 
Nevertheless, the organization of the main paper remains difficult to follow and the clarity of their proofs could be further improved. I thank the authors for their detailed rebuttal, which strongly highlights the significance of their contributions and has largely convinced me of their merit.\", \"I would still advise the authors to explicitly include their average reward CVaR objective in the main body, i.e. $\\\\lim_{n \\\\to \\\\infty} \\\\frac{1}{n} \\\\sum_{t=1}^n CVaR_\\\\tau[R_t | S_0=s,A_{0:t-1} \\\\sim \\\\pi]$.\", \"Since this paper focuses on the long-term average reward, I also agree with Reviewer NkZR that the Blackwell optimality criterion should be analyzed or discussed to strengthen the paper.\"]}", "{\"comment\": \"Thanks. So the agent maintains subtask estimates Z_i and wants to estimate z_i. But there must be some signals that are grounded to z_i and these signals are observed by the agent, right? What are these signals?\"}", "{\"comment\": \"**(3 / 3)**\\n\\n **Why is the unichain assumption still necessary? I think this comes from the static nature of the CVaR -- I don't think the same assumption would be required/enough if CVaR here were nested.**:\\n\\nAs per our previous discussions, the unichain requirement may vary depending on whether we consider a nested or a static risk measure. Here the CVaR that we aim to optimize is the CVaR of the limiting reward distribution, which is a stationary measure.\\n\\n**The convergence plots seem to show one run per experiment. How is the seed chosen? Is it random? Have the algorithms been run on more seeds? The authors are encouraged to plot error curves with mean\\u00b1std.**:\\n\\nWe kindly point out that convergence plots show the mean and 95% confidence interval for 50 random seed runs.\\n\\n**A discussion on the nature of the risk considered in this work is missing**: \\n\\nWe included a discussion on the nature of the risk in Appendix C of the submitted draft (lines 1844-1847). 
We have expanded upon this section in the updated draft (now Appendix D, lines 1608-1619). We recognize that reviewers are not required to look at appendices; however, we felt that this section was best suited to be in the Appendix, where we discuss the CVaR-specific approach in great detail.\\n\\n**The learning rates are sometimes constant, sometimes time or even state-dependent.**, **Another missing related work is [5], which considers infinite horizon average reward but with entropic risk instead of CVaR**, and **The initial claims in Sec 2.1 are incorrect**:\\n\\nWe have rectified these points in the updated draft.\\n\\n**Formal algorithm pseudo-codes should appear instead of a list of equations (17)**:\\n\\nWe include full formal algorithm pseudo-codes in Appendix B (and D). We recognize that reviewers are not required to look at appendices; however, we felt that the space in the main body was better used for explaining other concepts than for including a detailed algorithm that may not be of immediate interest to the reader.\\n\\n**Eqs (4)-(5) are called Poisson equations**:\\n\\nCorrect. We note that Eqs (4)-(5) are also commonly referred to as Bellman equations in the RL literature. We have added \\\"Poisson\\\" in brackets in the updated draft.\\n\\n**Even in the risk-neutral case, the average reward criterion has some analytical advantages.**\\n\\nAgreed! We hope that our work will encourage more exploration of methods in the average-reward setting, beyond the risk-aware domain.\\n\\n**References**:\\n\\nXia, Li, Luyao Zhang, and Peter W. Glynn. \\\"Risk\\u2010sensitive Markov decision processes with long\\u2010run CVaR criterion.\\\" Production and Operations Management 32.12 (2023): 4049-4067.\\n\\nYinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision making: a CVaR optimization approach. 
In Advances in neural information processing systems 28, 2015.\"}", "{\"summary\": \"This paper extends the risk-averse average-reward MDP framework from [1] and introduces a new approach called \\\"Reward Extended Differential (RED)\\\" for solving various subtasks (e.g., scalar prediction or control objectives) concurrently. Instead of using the observed reward $R$ directly, the TD error is defined using a modified reward $\\\\tilde{R} = f(R,Z_1,Z_2 ...,Z_n)$ where $f$ is an invertible function mapping the observed reward and all subtasks to a modified reward $\\\\tilde{R}$. The authors demonstrate their algorithm\\u2019s application to risk-averse (CVaR) decision-making in a fully online setting.\", \"references\": \"[1] Bellini, Fabio, and Valeria Bignozzi. \\\"On elicitable risk measures.\\\" Quantitative Finance 15.5 (2015): 725-733.\\n\\n[2] Shen, Yun, et al. \\\"Risk-sensitive reinforcement learning.\\\" Neural computation 26.7 (2014): 1298-1328.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(a) The abstract, introduction, and preliminaries on average reward reinforcement learning are well-written and clearly presented.\\n\\n(b) The TD and Q-learning with stochastic approximation algorithms, along with Theorems 4.1\\u20134.3, appear to be rigorously verified with proofs in Appendix B. These proofs effectively extend the results from [1, 2, 3] to the multi-subtasks setting proposed in this work.\", \"weaknesses\": \"Despite the authors' in-depth understanding of stochastic approximation and model-free Q-learning proofs, the paper lacks sufficient validation regarding the extension to risk awareness in average reward MDPs.\\n\\n(a) The paper demonstrates a limited engagement with prior work and foundational concepts in risk-averse CVaR MDPs. 
The authors inaccurately claim that \\u201cour work is the first to propose an MDP-based CVaR optimization algorithm that does not require an explicit bi-level optimization scheme or an augmented state-space.\\u201d However, several existing approaches, such as dynamic risk-averse MDPs [1], risk-averse distributional RL [2, 3, 11], and average-criteria CVaR [9], also avoid state-space augmentation and employ stationary Markov policies, similar to this work. Furthermore, the proposed algorithm still seems to be bilevel, as it aims to optimize CVaR but updates the VaR estimate at every level. Moreover, the authors mentioned that \\\"the CVaR that we aim to optimize most closely matches the static category\\\". However, restricting to stationary Markov policies can impair both the optimality and interpretability of static CVaR MDPs (see [4, 5, 6, 7]); since the sum over $t \\in 1:n$ in the average criterion is outside of the CVaR operator, this is closer to the dynamic category, where an optimal deterministic stationary policy exists (see Theorem 1 of [9]). Additionally, the authors overlook related works: [8] applies a similar TD update, and [10] considers a time-consistent policy set. It should be noted that the \\\"notable works such as [6]\\\" described in the related work section are known to be sub-optimal for policy optimization (see [7]). For this reason, augmented state-space primal methods with bi-level optimization, as in static CVaR MDP algorithms [4, 5, 9], are generally preferred.\\n\\n(b) The CVaR analysis in Appendix C.1 is focused solely on evaluation, leaving out an analysis for the policy optimization claim \\\"We can now optimize the expectation in Equation C.5f using the RED RL framework\\\". Additionally, the average criterion CVaR objective function itself is not explicitly presented in the paper. 
Sections 4 and 5 feel somewhat disconnected; providing a clearer explanation to link these sections, along with an explicit proof that the proposed algorithm can optimize the CVaR objective, would significantly strengthen the paper\\u2019s claims regarding risk-aware reinforcement learning in average reward MDPs.\\n\\n(c) Limited empirical results: The results in Section 5 do not demonstrate that the proposed algorithm effectively optimizes the desired CVaR risk level. The evaluation would be more convincing if the authors trained the algorithm across multiple distinct CVaR risk levels (e.g. $\\\\tau \\\\in [0.01,0.05,0.1,0.5,1]$), and subsequently assessed performance by calculating the CVaR of the average reward over the final $n$ steps for each risk level $\\\\tau' \\\\in [0.01,0.05,0.1,0.5,1]$. Ideally, the maximum performance at each evaluated risk level should correspond to the training run specifically conducted at that CVaR risk level, reinforcing that the algorithm correctly optimizes for the specified risk. Furthermore, comparing the proposed algorithm\\u2019s performance with other approaches [4,9,11] under an average reward criterion could also provide a clearer benchmark for its effectiveness.\\n\\n(d) The claim to \\u201clearn a policy that optimized the CVaR value without using an explicit bi-level optimization scheme or an augmented state-space, thereby alleviating some of the computational challenges\\u201d is not substantiated. 
This claim would be more convincing if the authors compared the computational complexity or running time of the proposed method with that of the algorithm proposed in [9].\", \"questions\": \"Do we know whether the proposed algorithm, which updates VaR and CVaR simultaneously, would converge to the optimal fixed point, and not to any other fixed point?\\n\\n(a) The quantile regression stochastic approximation from equation (C.2) provides a quantile estimate, which may not be unique for a discrete random variable; VaR is only an element of the quantile set, which is not an elicitable risk measure (see [1]). Therefore, quantile regression may not converge to VaR; perhaps VaR is not necessary and any quantile estimate is sufficient? However, CVaR is also not elicitable, which makes it unclear how stochastic approximation can approximate these values accurately. There may be an assumption missing for the subtask function $f$ to handle the nuances of the problem discussed here. \\n\\n(b) It is unclear why the VaR approximation in Algorithm 7 is updated with $\\\\delta$ instead of the gradient of the quantile (L1) loss update (C.2) (see [2]). Note that the gradient of the L1 loss is piecewise constant. \\n\\n(c) In Appendix C, the claim that \\\"We can see that when the VaR estimate is equal to the actual VaR value, the quantile regression-inspired terms in Equation C.5f become zero\\\" holds only for continuous distributions during policy evaluation. Furthermore, this is insufficient; the authors may need to demonstrate that, starting from any initial estimate, the VaR estimate converges to the actual VaR, and similarly, that the CVaR estimate converges to the actual CVaR. 
Even if convergence is achieved in policy evaluation, there is no proof validating this statement for the discrete case or for policy optimization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their most recent comments.\\n\\nWe will increase the font size in our plots to make them more readable (including the legends).\\n\\nWe also thank the reviewer for the clarification on the non-asymptotic bounds. We agree that deriving non-asymptotic bounds is a fruitful direction to pursue in future work.\\n\\nWe again thank the reviewer for their time and consideration.\"}", "{\"summary\": \"This paper studies a class of average-reward reinforcement learning problems, which includes risk-sensitive RL as special case. The main contribution is a new framework called Reward-Extended Differential (RED) RL, which leverages structural properties of average-reward RL. RED RL can be used to devise RL algorithms for a rather broad range of objectives admitting some notion called \\u201csubtask\\u201d. In particular, the paper showcases the efficacy of this framework in the design of risk-averse RL algorithms under the CVaR risk measure, and this appears to be main motivation behind RED RL. The key benefit of the new framework here would be to avoid bi-level optimization or state-space augmentation that appear in the existing RL algorithms under CVaR criterion.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper studies an interesting problem in average-reward RL, which leverages a structural property that is specific to average-reward MDPs. The introduced framework appears interesting in its generic form, although its presentation in the paper is done in a rather high and abstract level. I found its application to CVaR RL quite interesting. 
In addition, that it removes the need to solve bi-level optimization problems explicitly is definitely a plus.\\n\\nThe paper is mostly well-organized and well-written. I have a couple of minor comments about writing and organization that I defer until the next section. Whenever applicable, the paper uses figures to illustrate concepts, which proved quite helpful. \\n\\nThe paper includes numerical experiments, which is a positive aspect. The two domains used for the experiments sound interesting and relevant to showcase the framework.\", \"weaknesses\": [\"Main Comments:\", \"-\", \"One main comment is regarding the assumption. In view of the statements in lines 123-124, it appears to me that effectively a unichain assumption is made both for prediction and control.\", \"As a weak aspect, the presented framework is only shown to enjoy asymptotic convergence (in the tabular case).\", \"Regarding CVaR RL, the use of an augmented state-space is mentioned as a standard technique. Of course, it is clear that we lack interest in extending the state-space \\u2013 especially if there is some workaround \\u2013 given the classical performance bounds that deteriorate as the size of the state-space grows. However, it is worth remarking that an \\u201caugmented but highly structured\\u201d state-space is not necessarily a weak aspect if one could leverage the underlying structure. Could you explain whether this is the case for CVaR RL?\", \"In Section 5, Equation 19: could you clarify what the choice of the function $f$ is?\", \"As a general comment, I wonder whether RED performs simultaneous learning of multiple subtasks without any sacrifice? If not, it is not highlighted enough in the paper (or maybe I am missing something).\", \"Subtask may bring some confusion because it is standard terminology in hierarchical RL. Also, I do not think that this choice of naming effectively reflects what it actually serves. 
Other candidates?\", \"I found the literature review part rather weak. Admittedly, there is a rarity of prior work dealing with learning multiple goals/objectives in average-reward MDPs. However, in other MDP settings and bandits \\u2013 which are obviously more straightforward to analyze \\u2013 there might exist a relatively richer literature. Further, one key contribution of the paper falls into the realm of risk-sensitive RL. It is therefore expected to see better coverage of the related literature (including for discounted and episodic settings).\", \"The preliminary on average-reward MDPs and RL is rather long. Despite comparatively less work on them, they are standard settings and notions for a venue such as ICLR. I suggest that Section 3.1 be compressed so that the space in the main text could be used for more novel aspects.\"], \"minor_comments\": \"- \\n- Figures are not readable.\", \"typos\": [\"-\", \"Line 84: builds off of Wan et al. ==> Did you mean \\u201cbuilds on Wan et al.\\u201d?\", \"Line 61: in the Appendix ==> I think it is more correct to use \\u201cin Appendix\\u201d or \\u201cin the appendix\\u201d.\", \"Line 105: $S$ is a finite set of states, $A$ is \\u2026 ==> $\\\\mathcal S$ is \\u2026, $\\\\mathcal A$ is \\u2026\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**(2 / 3)**\\n\\n**The claim to \\u201clearn a policy that optimized the CVaR value without using an explicit bi-level optimization scheme or an augmented state-space, thereby alleviating some of the computational challenges\\u201d is not substantiated.**:\\n\\nWe would respectfully argue that reducing the size of the state-space, or only having to, for example, solve a single MDP instead of multiple MDPs (due to an explicit bilevel optimization), can be reasonably interpreted as alleviating computational challenges. 
We do not claim any specific numerical advantage.\\n\\n**Do we know whether the proposed algorithm, which updates VaR and CVaR simultaneously, would converge to the optimal fixed point, and not to any other fixed point?**:\\n\\nThe updated draft now includes a CVaR-specific proof in Appendix D (see Theorem D.1.2; line 1570) that addresses this. In short, we know that (as is the case with the standard average-reward formulation) the proposed (tabular) algorithm converges to the optimal fixed point, up to an additive constant. We note that this does not affect our ability to find the CVaR-optimal policy, given that the relative ordering of policies is what is of interest.\\n\\n**It is unclear why the VaR approximation in Algorithm 7 is updated with the TD error instead of the gradient of the quantile (L1) loss update (C.2)**\\n\\nThis highlights the appeal of our approach and the core contribution of our paper. Instead of having to rely on the gradient of the quantile loss to estimate our subtask (in this case, VaR), our approach allows us to estimate VaR in a theoretically-sound way using a modified version of the TD error. The updated draft now includes CVaR-specific proofs in Appendix D (see Theorems D.1.1 and D.1.2) that formalize this logic.\\n\\nWe have also included a clarifying statement in Appendix D (lines 1419-1422) to make it clear that we do not use the quantile loss directly. The reason that we bring up quantile regression in our paper is to motivate the terms that get added to the expression inside the expectation in Equation 10 (Equation 7 in the updated draft), which yields the final subtask function for CVaR (see Appendix D.1 in the updated draft). 
These quantile regression-inspired terms are needed because they narrow the set of possible solutions to those with a reasonable VaR estimate.\\n\\n**The quantile regression stochastic approximation from equation (C.2) provides a quantile estimate, which may not be unique for a discrete random variable; VaR is only an element of the quantile set, which is not an elicitable risk measure (see [1]). Therefore, quantile regression may not converge to VaR; perhaps VaR is not necessary and any quantile estimate is sufficient? However, CVaR is also not elicitable, which makes it unclear how stochastic approximation can approximate these values accurately. There may be an assumption missing for the subtask function to handle the nuances of the problem discussed here.**\\n\\nAs stated in the above response, the key appeal of our approach is that we do not need to rely on the quantile loss to estimate our subtask (in this case, VaR). In particular, our approach allows us to estimate VaR (and consequently CVaR) in a theoretically-sound way using a modified version of the TD error. The updated draft now includes CVaR-specific proofs in Appendix D (see Theorems D.1.1 and D.1.2) that formalize this logic.\\n\\nThe reason that we bring up quantile regression in our paper is to motivate the terms that get added to the expression inside the expectation in Equation 10 (Equation 7 in the updated draft), which yields the final subtask function for CVaR (see Appendix D.1 in the updated draft). These quantile regression-inspired terms are needed because they narrow the set of possible solutions to those with a reasonable VaR estimate.\\n\\n**In Appendix C, the claim that \\\"We can see that when the VaR estimate is equal to the actual VaR value, the quantile regression-inspired terms in Equation C.5f become zero\\\" holds only for continuous distributions during policy evaluation. 
Furthermore, this is insufficient; the authors may need to demonstrate that, starting from any initial estimate, the VaR estimate converges to the actual VaR, and similarly, that the CVaR estimate converges to the actual CVaR. Even if convergence is achieved in policy evaluation, there is no proof validating this statement for the discrete case or for policy optimization.**\\n\\nThe updated draft now includes a CVaR-specific proof in Appendix D (see Theorem D.1.2; line 1570).\\n\\n**...(Continued in next comment)**\"}", "{\"comment\": \"Dear Reviewer zhPt,\\n\\nWe hope that our updated draft and response to your comments have addressed your concerns and provided the necessary clarifications. \\n\\nWe are happy to further engage with you to address any remaining concerns and/or answer any more questions.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": [\"We would like to thank the reviewers for their detailed and considerate reviews of our paper. We have uploaded a modified version of our paper that incorporates the reviewers\\u2019 comments. Namely, in the updated draft:\", \"We fixed the typos/errors identified by the reviewers,\", \"We expanded the literature review section (Section 2) and incorporated the relevant works identified by the reviewers,\", \"We added a section (Section 4 in the updated draft) that explains the challenges of CVaR optimization, as well as how the subtask approach can be used to mitigate these challenges,\", \"We simplified the proof for Theorem 4.1 (now Theorem 5.1) to make it less vague,\", \"We included CVaR-specific proofs in Appendix D (see Theorems D.1.1 and D.1.2) that show that our CVaR optimization approach is valid, and\", \"We ran an additional experiment and included results in Appendix E (line 1806). This new experiment shows that our approach is able to successfully optimize CVaR at the desired risk level.\", \"All in all, we believe that these changes address the reviewers' concerns. 
We look forward to further engaging with the reviewers to address any remaining concerns, and answer any remaining questions.\"]}", "{\"comment\": \"We thank the reviewer for their thorough and insightful review of our paper. We appreciate the amount of detail and consideration given. Please see our response below. Due to the character limit, we split our response across three comments:\\n\\n**(1 / 3)**\\n\\n**Several existing approaches such as dynamic risk-averse MDPs [1], risk-averse distributional RL [2, 3,11] and average-criteria CVaR [9] also avoid state-space augmentation and employ stationary Markov policies, similar to this work.**:\\n\\nWe are happy to engage with the reviewer further on this topic. However, we would argue that our claim stands based on the following:\\n\\n*Dynamic risk-averse MDPs [1]*: It was shown by Boda (2006) (see below for our rebuttal references) that CVaR is not a time consistent risk measure, hence any time-consistent interpretation of CVaR is only an approximation (note that our claim is only specific to CVaR).\\n\\n*Risk-averse distributional RL [2, 3, 11]*: It was shown in [11] that the CVaR optimization approach utilized in [2, 3] (which avoided the state-space augmentation) converges to neither the optimal dynamic-CVaR nor the optimal static-CVaR policies. 
The authors of [11] then proposed a valid approach that utilizes an augmented state-space.\\n\\n*Average-criteria CVaR [9]*: We kindly point out to the reviewer that this work indeed used an augmented state-space as well as an explicit (sensitivity-based) bi-level optimization (note that this was mentioned in our paper).\\n\\nNote that we have included these points in Section 2.2 of the updated draft.\\n\\n**The proposed algorithm still seems to be bilevel, as it aims to optimize CVaR but updates the VaR estimate at every level.**:\\n\\nCorrect, our algorithm is an *implicit* bilevel optimization, where both VaR and CVaR are updated in a fully-online manner in a single MDP. By contrast, methods such as [4, 5, 9] require an *explicit* bilevel optimization, where, for example, multiple MDPs with different VaR guesses must be solved in order to find the optimal policy. Our claim, as stated in the paper, is that our method does not require an explicit bilevel optimization.\\n\\n**Restricting to stationary Markov policies can impair both the optimality and interpretability of static CVaR MDPs (see [4, 5, 6, 7]); since the sum over $t \\in 1:n$ in the average criterion is outside of the CVaR operator, this is closer to the dynamic category, where an optimal deterministic stationary policy exists (see Theorem 1 of [9])**:\\n\\nWe agree that the CVaR being optimized in our paper has properties of both static and dynamic risk measures (while not perfectly fitting either definition). In the updated draft, we have updated the wording in the paper to be more neutral, such that it includes arguments for each (including the insightful point made by the reviewer). See lines 1608-1619 of the updated draft.\\n\\n**Additionally, the authors overlook related works: [8] applies a similar TD update, and [10] considers a time-consistent policy set.**: \\n\\nWe thank the reviewer for bringing [8] and [10] to our attention, as they do have enough resemblance to our work to merit a discussion in our paper. 
We note that while one of the methods in [8] does use a vaguely similar TD update, all of the methods proposed in [8] require either an augmented state-space or an explicit bi-level optimization. Similarly, while [10] does not use an augmented state-space, they also require an explicit bi-level optimization. In both cases, these works, while relevant, do not impact the novelty or claims made in our paper.\\n\\n**The CVaR analysis in Appendix C.1 is focused solely on evaluation, leaving out an analysis for the policy optimization claim \\\"We can now optimize the expectation in Equation C.5f using the RED RL framework\\\".**: \\n\\nThe updated draft now includes a CVaR-specific proof in Appendix D (see Theorem D.1.2; line 1570).\\n\\n**The average criterion CVaR objective function itself is not explicitly presented in the paper.**: \\n\\nThe updated draft now explicitly presents the CVaR objective as Equation D.6 (line 1450).\\n\\n**Sections 4 and 5 feel somewhat disconnected; providing a clearer explanation to link these sections, along with an explicit proof that the proposed algorithm can optimize the CVaR objective, would significantly strengthen the paper\\u2019s claims regarding risk-aware reinforcement learning in average reward MDPs.**:\\n\\nWe have updated the wording at the start of Sections 4 and 5 to better communicate how they are related to each other. Moreover, the updated draft now includes a CVaR-specific proof in Appendix D (see Theorem D.1.2; line 1570).\\n\\n**...(Continued in next comment)**\"}", "{\"comment\": \"
Due to the character limit, we split our response across two comments:\\n\\n**(1 / 2)**\\n\\n**In view of statements in line 123-124, it appears to me that effectively a unichain assumption is made both for prediction and control.**:\\n\\nWe thank the reviewer for pointing this out. We have updated the wording to highlight that the communicating assumption only guarantees the existence of a unique optimal average-reward. For clarity, our methods only require a communicating assumption for control.\\n\\n**As a weak aspect, the presented framework only is shown to enjoy asymptotic convergence.** \\n\\nWe note that this is also the case for the standard average-reward formulation. We also note that while we do not offer convergence proofs for the non-tabular case, we utilized function approximation in the inverted pendulum experiment, and our algorithm still converged (in fact, it showed faster convergence compared to the risk-neutral differential algorithm).\\n\\n**Regarding CVaR RL, use of an augmented state-space is mentioned as a standard technique. Of course, it is clear that we lack interest in extending state-space \\u2013 especially if there is some workaround \\u2013 for the classical performance bounds that deteriorate as the size of state-space grows. However, it is worth remarking that an \\u201caugmented but highly structured\\u201d state-space is not necessarily a weak aspect if one could leverage the underlying structure. Could you explain whether this is the case for CVaR RL?**:\\n\\nWe are happy to provide an explanation. In short: this is not the case for CVaR RL.\\n\\nThe primary non-triviality lies in that we need to know what the optimal VaR is in order to calculate the optimal CVaR. 
However, one does not typically know this value beforehand, so existing methods have to perform some version of the optimization presented in Equation 11 of our paper (Equation 8 in the updated draft).\\n\\nIn a standard/naive implementation of Equation 11 (Equation 8 in the updated draft), we need to augment the state-space with VaR, which can be any real (potentially-bounded) number. Moreover, a naive implementation often implies solving multiple MDPs (each with a different guess for VaR), which compounds the computational costs induced by a larger state-space.\\n\\nNow consider more clever methods that attempt to mitigate the computational costs. One of the most well-known, computationally-efficient examples is [1] (see below for our rebuttal references), who utilized a clever but cumbersome decomposition technique that made it possible to only need to augment the state-space with a value between 0 and 1, as well as only needing to solve a single MDP. However, even this clever method requires the use of linear interpolation, as well as having to solve a standalone optimization at every iteration.\\n\\nBy contrast, the average-reward formulation, in combination with our proposed approach, allows us to circumvent these issues altogether, such that we can optimize both VaR and CVaR simultaneously in a fully-online manner. As such, we believe that our method is more appealing than even the more clever methods used in the discounted case.\\n\\n**In Section 5, Equation 19: could you clarify what the choice of the subtask function is:**\\n\\nThe subtask function used is presented explicitly as Equation D.6 (line 1450) in the updated draft. In short, the function is a modified version of Equation 10 (Equation 7 in the updated draft). 
The reason Equation 10 (Equation 7 in the updated draft) needs to be modified is that directly using it as the subtask function may result in multiple solutions (see lines 1383-1450), so we need a way to reduce the set of possible solutions to only solutions that have realistic VaR estimates. To accomplish this, we employ techniques from quantile regression. Note that we explain this in great detail in Appendix D.1.\\n\\n**...(Continued in next comment)**\"}", "{\"comment\": \"Dear Reviewer NkZR,\\n\\nWe hope that our updated draft and response to your comments have addressed your concerns and provided the necessary clarifications. \\n\\nWe are happy to further engage with you to address any remaining concerns and/or answer any more questions.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a way to optimizes the conditional value-at-risk (CVaR) risk measure of the average-reward rate in finite MDPs. The CVaR of a random variable $X$ with a parameter $\\\\tau in (0, 1)$ is the expectation of the lower $\\\\tau$ quantile of X. The key idea is to utilize a property of CVaR by Rockafellar and Uryasev (2000) --- the CVaR of X with a parameter $\\\\tau$ is the expectation of a piece-wise linear function of X and the value-at-risk (VaR; the lower $\\\\tau$ quantile of X). By estimating VaR separately and treating the output of this piece-wise function as a new reward, the paper proposes that CVaR can be estimated using an existing average-reward algorithm. The key advantage of this approach to estimate the CVaR of the reward rate is that it does not perform the bi-level optimization and does not augment the state space, whereas existing algorithms need to do one of them.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The claimed contribution has sufficient novelty. 
However, I am not an expert in this area so I cannot confirm if the claim is true.\", \"weaknesses\": \"I have three concerns about this paper.\\n\\nFirst, the writing of this paper is vague, making it hard to understand. For example, it is not clear until Section 5 why subtasks are introduced when the main goal is the CVaR problem. Even in Section 5, the authors didn't explain explicitly how these two ideas are related and how equation (19) was derived. Another example is the discussion about the literature. The paper only mentioned one work for estimating CVaR (Xia et al. 2023) in the average-reward setting. Is that the only work? In addition, was the paper's idea applied to other settings (discounted, episodic) before? If so, what are the differences? \\n\\nSecond, and this is the major problem, the derivation of the results in the paper seems to be problematic. Specifically, the step from 13a to 13b does not hold in general for a piece-wise linear function f (the proof says that f is linear, but Definition 4.1 says that f can be piece-wise linear, and in order to be applicable to CVaR, f needs to be piece-wise linear). Similarly, 14a to 14b does not seem to hold.\\n\\nThird, there are quite a few typos, inaccuracies, and odd statements in this paper. I list some of them here:\\n\\\"Average-reward (or average-cost) MDPs were first studied in works such as Puterman (1994).\\\" Puterman's book summarizes previous works. I don't think it's fair to say that these MDPs were \\\"first studied in works such as Puterman (1994)\\\". \\nAt the beginning of Section 3.1, discreet -> discrete, S -> \\\\mathcal{S}, A -> \\\\mathcal{A}.\\nEquation 1 depends on the start state S_0 while the l.h.s. suggests that it does not.\\n\\\"Such assumptions ensure that, for the induced Markov chain, ...\\\". \\\\mu_\\\\pi here is the limiting distribution, instead of a stationary distribution. 
In addition, the limiting distribution does not exist for periodic Markov chains.\\nMax in equation (10) should be Sup. So does several other places in the paper. \\nDefinition 4.1 (ii) should be a property on the function f, instead of a property on the z_i, because z_i is just a scalar input of f, not a random variable.\\nLower case letters r, s, s' are sometimes used as random variables and sometimes used as scalars.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**(2 / 2)**\\n\\n**What does it mean when you say \\\"An average-reward MDP can simultaneously predict or control any arbitrary number of subtasks \\\"** \\n\\nThis means that in the average-reward setting, we can develop a learning/update rule for a given subtask (that belongs to a subtask function that satisfies the criteria listed in Defn 5.1) based solely on a modified version of the TD error: \\n\\n$$Z_{i, t+1} = Z_{i, t} + \\\\eta\\\\alpha_{t}(-1/a_i)\\\\delta_t$$\\n\\nWhere, $Z_{i, t}$ is the estimate of the subtask, $z_i$, at time $t$, $\\\\eta\\\\alpha_{t}$ is the step size, and $\\\\delta_t$ is the TD error.\\n\\nImportantly, the $(-1/a_i)\\\\delta_t$ term satisfies a TD error-dependent property: it goes to zero as the TD error, $\\\\delta_t$, goes to zero. This implies that the arbitrary subtask update is dependent on the TD error, such that the subtask estimate will only cease to update once the TD error is zero. 
Hence, minimizing the TD error allows us to solve (i.e., estimate or optimize) the arbitrary subtask, $z_i$, simultaneously using the TD error.\\n\\nSee Theorem 5.1 for a more thorough discussion.\\n\\n**Could you write down clearly what the agent observes and what the underlying process is (like in the standard MDP setting, the agent observes a state and reward according to the transition probability of the MDP)?**\\n\\nLet $\\\\mathcal{M} \\\\doteq \\\\langle \\\\mathcal{S}, \\\\mathcal{A}, \\\\mathcal{R}, p \\\\rangle$ denote the standard (average-reward) MDP, where $\\\\mathcal{S}$ is a finite set of states, $\\\\mathcal{A}$ is a finite set of actions, $\\\\mathcal{R} \\\\subset \\\\mathbb{R}$ is a finite set of rewards, and $p: \\\\mathcal{S}\\\\, \\\\times\\\\, \\\\mathcal{A}\\\\, \\\\times\\\\, \\\\mathcal{R}\\\\, \\\\times\\\\, \\\\mathcal{S} \\\\rightarrow{} [0, 1]$ is a probabilistic transition function that describes the dynamics of the environment. At each discrete time step, $t = 0, 1, 2, \\\\ldots$, the agent chooses an action, $A_t \\\\in \\\\mathcal{A}$, based on its current state, $S_t \\\\in \\\\mathcal{S}$, and receives a reward, $R_{t+1} \\\\in \\\\mathcal{R}$, while transitioning to a (potentially) new state, $S_{t+1}$, such that $p(s', r \\\\mid s, a) = \\\\mathbb{P}(S_{t+1} = s', R_{t+1} = r \\\\mid S_t = s, A_t = a)$. 
\\n\\nIn our proposed framework, we only modify the reward via the subtask function, such that:\\n\\n$$\\\\tilde{R}_t = R_t + a_0 + a_1z_1 + a_2z_2 + \\\\ldots + a_iz_i + \\\\ldots + a_nz_n.$$\\n\\nThis results in the modified MDP, $\\\\mathcal{\\\\tilde{M}} \\\\doteq \\\\langle \\\\mathcal{S}, \\\\mathcal{A}, \\\\tilde{\\\\mathcal{R}}, \\\\tilde{p} \\\\rangle$, where the states and actions are the same as in the standard MDP, $\\\\tilde{R_t} \\\\in \\\\tilde{\\\\mathcal{R}}$, and the transition probabilities are essentially identical to that of the standard MDP, such that the probability of obtaining the modified reward, $\\\\tilde{R_t}$, is the same probability as obtaining the corresponding regular reward (from the standard MDP), $R_t$, such that:\\n\\n$\\\\tilde{p}(s', \\\\tilde{r} \\\\mid s, a) = \\\\mathbb{P}(S_{t+1} = s', \\\\tilde{R}_{t+1} = \\\\tilde{r} \\\\mid S_t = s, A_t = a) \\\\ldots$\\n\\n$\\\\quad = \\\\mathbb{P}(S_{t+1} = s', R_{t+1} = r \\\\mid S_t = s, A_t = a) = p(s', r \\\\mid s, a)$\\n\\n(This follows from the subtasks being independent of the states and actions; see Definition 5.1)\\n\\nHence, at each time step:\\n\\n1) The agent chooses an action, $A_t$, based on the current state, $S_t$.\\n\\n2) We observe the reward, $R_{t+1}$, and new state, $S_{t+1}$, from the environment.\\n\\n3) We calculate the modified reward, $\\\\tilde{R}_{t+1}$, using the subtask function\\n\\n(i.e., using $R_{t+1}$ and the subtask estimates $Z_{0,t}, \\u2026, Z_{i,t}, \\u2026, Z_{n,t}$).\\n\\n4) Proceed with the usual learning updates (TD error, value functions, etc.) using the modified reward.\\n\\n5) Update the subtask estimates using a modified version of the TD error (see our response to the previous questions).\\n\\n6) Set $S_t = S_{t+1}$. Go back to step 1.\\n\\nWe note that full pseudocode is provided in Appendix B (line 702).\\n\\nAs such, this process allows us to estimate/optimize the long-run average of the modified reward, as well as the subtasks. 
In the case of CVaR, optimizing the long-run average of the modified reward corresponds to optimizing the long-run CVaR of the reward from the standard MDP (which is our actual objective).\\n\\nWe note that through this subtask-driven approach, we have provided the first algorithm in any MDP-based setting to optimize CVaR without the use of an augmented state-space, or an explicit bi-level optimization, thereby reducing significant computational costs. We also note that this subtask-driven approach is not specific to CVaR and can be applied to other learning problems in the future.\\n\\nWe appreciate the reviewer's time and dedication, and hope that we have provided sufficient clarifications.\"}", "{\"comment\": \"**(2 / 2)**\\n\\n**As a general comment, I wonder whether RED performs simultaneous learning of multiple subtasks without any sacrifice? If not, it is not highlighted enough in the paper (or maybe I miss something).**:\\n\\nFrom our perspective, there are only two sacrifices that need to be made in order to perform the simultaneous learning of multiple subtasks. The first is adhering to the assumptions about the induced Markov chain (unichain or communicating). The second is crafting the subtask function, which may not be straightforward. However, given a valid subtask function, and the willingness to adhere to the Markov chain assumptions, our algorithms are able to learn an arbitrary number of subtasks simultaneously without additional sacrifice (other than having to perform an extra learning update per subtask). In the case of CVaR, this has the added benefit of removing the need to augment the state-space and having to solve multiple MDPs. 
Empirically, we saw that our algorithm could reliably learn the subtask, and even outperformed the risk-neutral differential algorithm (which does not have to estimate a subtask) in the inverted pendulum experiment (where both methods shared a common optimal solution).\\n\\n**Subtask may bring some confusion because of its use as a standard terminology in hierarchical RL terminology. Also, I do not think that this choice of naming effectively reflects what it actually serves. Other candidates?**\\n\\nWe agree that there may be some confusion, however the term \\u2018subtask\\u2019 or \\u2018auxiliary task\\u2019 is used in various contexts in RL (e.g. [2]). In this regard, we have updated Section 2 to contrast our definition of subtask, to that of other subfields, such as hierarchical RL.\\n\\n**I found the literature review part rather weak. Admittedly, there is a rarity of prior work dealing with learning multiple goals/objectives in average-reward MDPs. However, in other MDPs settings and bandits \\u2013 that are obviously more straightforward to analyze \\u2013 there might exist a relatively richer literature. Further, one key contribution of the paper falls into the realm of risk-sensitive RL. It is therefore expected to see a better coverage of the related literature (and for discounted and episodic settings).**:\\n\\nAgreed, we have added some non-average-reward MDP references in the updated draft related to learning multiple goals/objectives in the literature review section (such as [2]). In terms of risk-sensitive RL, we have added a few more additional key (CVaR-related) references for the discounted and episodic settings.\\n\\n**The preliminary on average-reward MDPs and RL is rather long. Despite less work on them comparatively, they are standard settings and notions for a venue such as ICLR. 
I suggest Section 3.1 to be compressed to that the space in the main text could be used for more novel aspects.**:\\n\\nWe thank the reviewer for their recommendation and we have shortened Section 3.1 to expand the literature review.\\n\\n**Overall**: \\n\\nWe hope that we have addressed the reviewer\\u2019s comments. We are happy to engage with the reviewer to provide additional clarifications.\\n\\n**References**:\\n\\n[1] Chow, Yinlam, Aviv Tamar, Shie Mannor, and Marco Pavone. 2015. \\u201cRisk-Sensitive and Robust Decision-Making: A CVaR Optimization Approach.\\u201d In Advances in Neural Information Processing Systems 28.\\n\\n[2] McLeod, Matthew, et al. \\\"Continual auxiliary task learning.\\\" In Advances in Neural Information Processing Systems 34.\"}", "{\"title\": \"Reply to Authors comment\", \"comment\": [\"We thank the author for their quick response and made changes to their draft.\", \"In terms of readability of the paper, I do agree with Reviewer FEZJ that **hard to understand: It is not clear why subtasks are introduced when the main goal is the CVaR problem, until Section 5. Even in Section 5, the authors didn't explain explicitly how these two ideas are related and how equation (19) was derived.** Despite the fact the author has made clarification and **the aim of the paper is to present a framework for solving subtasks simultaneously, with CVaR being an important case study that successfully utilizes this framework. We note that the fundamental approach presented (Theorems 4.1-4.3) is not CVaR-specific.** This paper aims to address general subtask-driven RL, but it provides limited motivation and applications beyond the CVaR objective. Focusing on a clear emphasis either on general subtask-driven RL or the CVaR average criterion, could enhance the readability. If the emphasis is on general subtask-driven RL, providing motivation and demonstrating applications beyond CVaR would strengthen the narrative. 
Alternatively, if the focus is on the CVaR average criterion, then introducing the CVaR problem early on and framing the subtasks as a methodology for addressing the CVaR objective may make the paper easier to follow.\", \"It seems to me that Lemma D.1-4 should be Corollary since their result have already been proven but worded for more intuitive interest.\"]}", "{\"comment\": \"**(3 / 3)**\\n\\n**The results in Section 5 do not demonstrate that the proposed algorithm effectively optimizes the desired CVaR risk level. The evaluation would be more convincing if the authors trained the algorithm across multiple distinct CVaR risk levels.**:\\n\\nThe updated draft now includes an additional experiment (see Appendix E; line 1806) that shows that our CVaR algorithm optimizes at the desired risk level, such that it reliably finds the CVaR optimal policy across various CVaR risk levels.\\n\\nWe would also like to note that, as is the case with the standard average-reward formulation, the optimal solution that the algorithm converges to is correct up to an additive constant. Note that this was mentioned several times in our paper (e.g. lines 147-149, 465-468 in the original draft). As such, comparing the CVaR estimates of algorithms trained using different $\\\\tau$\\u2019s may not be productive, given that comparing the values of the estimates themselves is not meaningful. \\nWhat is meaningful is seeing whether the algorithm converges to the optimal CVaR policy at the desired risk level. This is precisely what the new experiment shows. \\n\\n**Comparing the proposed algorithm\\u2019s performance with other approaches [4,9,11] under an average reward criterion could also provide a clearer benchmark for its effectiveness.**:\\n\\nThe notion of comparing results from an average-reward MDP (as in our method) to those of episodic and discounted MDPs, such as [4, 11], is an interesting one. 
However, there are several intricacies that would need to be addressed to make that comparison, and we fear that it would distract from the main purpose of this paper. For instance, a discounted approach would optimize the CVaR of the discounted sum of rewards, vs. our approach, which optimizes the CVaR of the reward received per time-step, thereby making it challenging to interpret a comparison of such methods. While it is possible to quantify the performance of discounted algorithms via an average reward criterion, doing such a quantification would only answer questions related to the discounted approach, such as, \\u201chow does optimizing the CVaR of the discounted sum of rewards affect the long-run reward CVaR?\\u201d. By contrast, such a quantification would not reveal any insights related to our approach.\\n\\nIn the case of [9], their approach requires the use of an explicit bilevel optimization, which cannot be directly compared to our approach, which optimizes CVaR in a fully-online manner. The only motivation to compare our approach to [9] would be if we did not know what the optimal CVaR policy is (such that we could check whether both methods converge to the same solution). However, in our experiments, we know what the optimal policies are, so there is no need to compare our approach to [9].\\n\\nAs such, while we recognize that our experiments are limited, we hope that the reviewer will take into account that the novelty of our work makes it difficult to directly compare our algorithm to existing works. For instance, even being able to plot (fully-online) learning curves, such as the ones in our paper, cannot be done for most of the (CVaR) algorithms referenced by the reviewer, given the explicit bilevel optimization that is required by many of these methods. 
The only direct comparison that can be made is to the risk-neutral differential algorithms, which we did compare our algorithms against, and found that in the case where both approaches shared a common solution (i.e., the inverted pendulum experiment), our approach had better performance than the risk-neutral algorithm. \\n\\n**Overall**:\", \"we_hope_that_we_have_clarified_and_justified_the_following_to_the_reviewer\": \"- That our paper\\u2019s claims are accurate in the context of the previous works mentioned by the reviewer.\\n\\n- That our CVaR optimization approach is valid, as shown by the CVaR-specific proof in Appendix D of the updated draft (see Theorem D.1.2; line 1570).\\n\\n- That while our experiments are limited, they answer key questions related to our approach, and show that our algorithms can indeed find the optimal CVaR policy. We have conducted a follow-up experiment which shows that the RED CVaR algorithm converges to the CVaR optimal policy for a range of $\\\\tau$'s, thereby showing that our algorithm is able to optimize at the desired risk level. These new results are presented in Appendix E of the updated draft (line 1806).\\n\\nWe are happy to provide additional clarifications and discussion as needed by the reviewer.\\n\\n**References:**\\nK. Boda and J. Filar. Time consistent dynamic risk measures. Mathematical Methods of Operations Research, 63(1):169\\u2013186, 2006\"}", "{\"comment\": \"We thank the reviewer for their continued engagement and quick responses.\\n\\nAs suggested by the reviewer, we have changed Lemmas D.1-D.4 to Corollaries.\\n\\nWe have also added a section to the main body (Section 4 of the latest draft), that explains the challenges of CVaR optimization, as well as how the subtask approach can be used to mitigate these challenges. 
We believe that this new section will better explain to the reader how our subtask approach fits into the goal of optimizing CVaR, and thereby make the paper easier to follow.\\n\\nWe look forward to further discussions with the reviewer to address any remaining concerns and answer any more questions.\"}", "{\"metareview\": \"This paper introduces a Reward Extended Differential (RED) approach for risk-averse AMDPs that aims to handle multiple subtasks concurrently, by defining the TD error through a modified reward generated from the observed rewards and subtasks through an invertible function. They leverage a property of CVaR from Rockafellar and Uryasev and treat CVaR as the expectation of a piecewise linear function of a random variable and its Value-at-Risk, and hence apply the RED method to optimize the risk-averse CVaR objective function.\\n\\nHowever, apart from relatively minor concerns about the related-work discussion and numerical experiments, the paper has a major technical issue: the derivations 13a-13b and 14a-14b do not hold for nonlinear functions. After the revision, the authors modified this proof and made it specific to linear $f$. However, linear $f$ does not cover their main target (CVaR). The authors claim in a hand-waving style that the proof can be trivially extended to piece-wise linear functions by applying it on each piece. However, this claim is wrong. In fact, since piece-wise linear functions provide uniform approximation of continuous functions over compact sets, the authors' claim would suggest that their result holds for arbitrary continuous functions, which is impossible. 
\\n\\nBased on this technical issue, we decide to reject this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised major concerns about the related work, the writing and clarity of presentation, whether the considered risk is nested or not, and the technical validity of the derivations.\\n\\nThe authors have cleared several of the above concerns by making many changes in the writing and derivations. However, their response on technical validity does not convince the AC, and we believe the proof is not correct.\"}", "{\"comment\": \"We thank the reviewer for their quick response and most recent comments. We are happy to provide the additional clarification.\\n\\n**The agent maintains subtask estimates Z_i and wants to estimate z_i. But there must be some signals that are grounded to z_i and these signals are observed by the agent, right? What are these signals?**\", \"consider_a_subtask_function_of_the_form\": \"$$\\\\tilde{R}_t = R_t + a_0 + a_1z_1 + a_2z_2 + \\\\ldots + a_iz_i + \\\\ldots + a_nz_n$$\\n\\nwhere the modified reward, $\\\\tilde{R}_t$, is the output of the subtask function. \\n\\nAs per Theorem 5.1, this yields an update rule for an arbitrary subtask, $z_i$, of the form:\\n\\n$$Z_{i, t+1} = Z_{i, t} + \\\\eta\\\\alpha_{t}(-1/a_i)\\\\delta_t.$$\\n\\nAs previously mentioned, our framework assumes that the constants (or \\u2018weights\\u2019) $a_0, a_1, ..., a_i, ..., a_n$ are known. \\n\\nHence, we can enforce a particular update that is grounded to $z_i$ based on our choice of $a_i$. \\n\\nFor example, in the CVaR case study, we can enforce the desired update for the subtask, VaR, by choosing the following $a_{\\\\text{VaR}}$:\\n\\n$a_{\\\\text{VaR}, t} = 1$ if $R_t \\\\geq \\\\text{VaR}_t$, and \\n\\n$a_{\\\\text{VaR}, t} = \\\\frac{\\\\tau - 1}{\\\\tau}$ if $R_t < \\\\text{VaR}_t$. \\n\\nThis choice comes directly from Equation 7 in the updated draft. This equation defines CVaR in terms of VaR. 
In our work, we adapt this equation into the following subtask function:\\n\\n$$\\\\tilde{R}_t = \\\\text{VaR} - \\\\frac{1}{\\\\tau}(\\\\text{VaR} - R_t)^{+} + \\\\text{other terms}$$\\n\\n(See Equation D.6 for the full function; the other terms are not relevant to this specific response)\\n\\nHence, we get $a_{\\\\text{VaR}}$ by grouping the VaR terms in the subtask function (for both cases: $R_t >= \\\\text{VaR}$ and $R_t < \\\\text{VaR}$).\\n\\nIn other words, we use the definition of CVaR to define the subtask function, thereby yielding an $a_{\\\\text{VaR}}$ that allows us to estimate the desired subtask, VaR.\", \"this_highlights_the_appeal_of_our_approach_and_the_core_contribution_of_our_paper\": \"Instead of having to rely on some other gradient (e.g. quantile regression) to estimate our subtask (in this case, VaR), our approach allows us to estimate the subtask in a theoretically-sound way using a modified version of the TD error.\\n\\nConversely, it also highlights a limitation: it may not always be straightforward to derive a useful subtask function (and the corresponding $a_0, a_1, ..., a_i, ..., a_n$). However, as evidenced by the significant CVaR result in our paper, given an appropriate subtask function, our framework can be quite useful and powerful. \\n\\nHence, to summarize: The \\u2018signal\\u2019 that makes this all possible is the modified reward, $\\\\tilde{R}_t$, which, through our framework, yields the update rules for the subtasks, such that the update for a given subtask $z_i$ is grounded by the choice of $a_i$.\\n\\nWe thank the reviewer for their consideration of our paper. We hope that we have provided an adequate clarification.\"}", "{\"comment\": \"We thank the reviewer for their consideration and review of our paper. Please see our response below:\\n\\n**It is not clear why subtasks are introduced when the main goal is the CVaR problem, until Section 5. 
Even in Section 5, the authors didn't explain explicitly how these two ideas are related and how equation (19) was derived.** \\n\\nWe appreciate the reviewer\\u2019s feedback. We have added a section (Section 4 in the updated draft) that explains the challenges of CVaR optimization, as well as how the subtask approach can be used to mitigate these challenges.\\n\\nFor clarity, the aim of the paper is to present a framework for solving subtasks simultaneously, with CVaR being an important case study that successfully utilizes this framework. We note that the fundamental approach presented (Theorems 5.1-5.3; previously 4.1-4.3) is not CVaR-specific.\\n\\nWe note that Equation 19 (now equation 17 in the updated draft) is derived in Appendix D of the updated draft.\\n\\n**The paper only mentioned one work for estimating CVaR (Xia et al. 2023) in the average-reward setting. Is that the only work?**\\n\\nYes, to the best of our knowledge, Xia et al. (2023) is the only other work that has looked at optimizing CVaR in the average-reward setting. We note that Xia et al.\\u2019s foundational work on the subject is more of a direct adaptation of CVaR optimization methods from the discounted case. By contrast, our work proposes a fundamentally-different approach that does not require the augmented state-space or explicit bilevel optimization that is used by Xia et al.\\n\\n**Was the paper's idea applied to other settings (discounted, episodic) before? If so, what are the differences?**\\n\\nThere is a lot to unpack here, but we will keep it brief. To answer the reviewer\\u2019s question: no, to the best of our knowledge, the paper\\u2019s idea has not been applied to episodic and discounted settings. The reason is that our paper\\u2019s idea critically relies on the stochastic approximation theory that the average-reward MDP is built upon. By contrast, episodic and discounted MDPs rely on the more typical contraction mapping theory. 
Hence, applying our idea in the discounted/episodic case would require careful consideration of the differences between the theoretical underpinnings of the various methods.\\n\\n**The step from 13a to 13b does not hold in general for piece-wise linear function f. Similarly, 14a to 14b does not seem to hold.**\\n\\nWe thank the reviewer for pointing this out, as we should have been more explicit with our explanation. The proof for Theorem 4.1 (now Theorem 5.1) only shows the case for a linear function (we have clarified this in the updated draft), however the results can trivially be extended to the piecewise-linear case, by considering each piecewise segment individually. This is what is done with CVaR, where the resulting update (i.e., Equation 19 in the old draft; Equation 17 in the updated draft) is also piecewise. We have updated the proof of Theorem 4.1 (now Theorem 5.1) to mention this.\\n\\nNote that we have simplified the proof for Theorem 4.1 (now Theorem 5.1) based on another reviewer\\u2019s comments, but the same logic still applies for the updated proof.\\n\\n**There are quite a few typos/incorrectness/weird statements of this paper.**\\n\\nWe thank the reviewer for identifying these typos and minor errors. The updated draft has corrected these typos/errors, as well as other typos identified by us after the submission deadline. Most notably, we have updated our wording from \\u2018stationary\\u2019 to \\u2018limiting\\u2019 where appropriate, as well as tweaked Definition 4.1 (now Definition 5.1), as per your recommendation. \\n\\n**Overall**: \\n\\nWe hope that we have addressed the reviewer\\u2019s concerns regarding the piecewise-linear function. In the updated draft, we have fixed the typos/errors mentioned by the reviewer. 
We are happy to engage with the reviewer to provide additional clarifications.\"}", "{\"comment\": \"Dear Reviewer FEZJ,\\n\\nWe hope that our updated draft and response to your comments have addressed your concerns and provided the necessary clarifications.\\n\\nWe are happy to further engage with you to address any remaining concerns and/or answer any more questions.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Follow-up questions\", \"comment\": \"I am still confused about the meaning of subtasks. According to your definition in Def 5.1, a subtask is a constant value and a subtask function weighted sum up the reward and all subtasks. Then what does it mean when you say \\\"An average-reward MDP can simultaneously predict or control any arbitrary number of subtasks \\\". Also, are all the zs given? Could you write down clearly what the agent observes and what the underlying process is (like in the standard MDP setting, the agent observes a state and reward according to the transition probability of the MDP)?\"}" ] }
5xxGP9x5dZ
Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy
[ "Yangsibo Huang", "Daogao Liu", "Lynn Chua", "Badih Ghazi", "Pritish Kamath", "Ravi Kumar", "Pasin Manurangsi", "Milad Nasr", "Amer Sinha", "Chiyuan Zhang" ]
Machine unlearning algorithms, designed for selective removal of training data from models, have emerged as a promising approach to addressing growing privacy concerns. In this work, we expose a critical yet underexplored vulnerability in the deployment of unlearning systems: the assumption that the data requested for removal is always part of the original training set. We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data \textit{not} present in the training set. We propose white-box and black-box attack algorithms and evaluate them through a case study on image classification tasks using the CIFAR-10 and ImageNet datasets, targeting a family of widely used unlearning methods. Our results show extremely poor test accuracy following the attack—3.6% on CIFAR-10 and 0.4% on ImageNet for white-box attacks, and 8.5% on CIFAR-10 and 1.3% on ImageNet for black-box attacks. Additionally, we evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification, as most of the mechanisms fail to detect stealthy attacks without severely impairing their ability to process valid requests. These findings underscore the urgent need for research on more robust request verification methods and unlearning protocols, should the deployment of machine unlearning systems become more prevalent in the future.
[ "Machine unlearning", "Security", "Privacy", "Attack" ]
Accept (Poster)
https://openreview.net/pdf?id=5xxGP9x5dZ
https://openreview.net/forum?id=5xxGP9x5dZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rDmJv1wa4t", "m5lpWfvZNO", "hyD9NbW3B3", "hxpN0IvoEA", "g5xumBKnCE", "Zwd0OMlEiD", "Zu37ymGVFp", "Wif41SW66O", "VMpHssAE3Z", "V3JXfzymKn", "Rs9WZM5HCu", "RGwrcrNOpj", "Pl4IYlPrqZ", "P1jGq6zjf7", "K3KB54mo9C", "6Ahf0EBgXh", "36KI0Fx4AX", "0Qit9YNijo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732741104795, 1732700022952, 1732764824267, 1732194598350, 1732199282688, 1732919140956, 1732195698991, 1732194773925, 1730720659351, 1734912761900, 1732547299080, 1732937241628, 1732581200663, 1730694326191, 1737523607224, 1732195052492, 1732547282443, 1730366292385 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_hRNb" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_Hn2d" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_jZsQ" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_Hn2d" ], [ "ICLR.cc/2025/Conference/Submission3925/Area_Chair_zTh2" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_jZsQ" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_jZsQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Authors" ], [ "ICLR.cc/2025/Conference/Submission3925/Reviewer_hRNb" ] ], "structured_content_str": [ "{\"comment\": 
\"Dear Reviewer hRNb,\\n\\nThank you for your thoughtful response and clarification. We again apologize for exceeding the page limit and appreciate your understanding.\\n\\n> ... I agree that exploring their application in the machine unlearning context is a valuable contribution. However, I see this value as being more applicative than methodological for the machine learning research community. With these considerations in mind, I have updated my score to reflect the significance of this application while maintaining my view on the limited methodological novelty.\\n\\nWe\\u2019re pleased that you recognize our work as \\\"a valuable contribution.\\\" In response to your comment about its contributions being more applicative than methodological, we\\u2019d like to highlight the following points that make our paper highly relevant to the ICLR community:\\n\\n1. **Alignment with call for papers**: our work directly addresses the [ICLR's call for papers](https://iclr.cc/Conferences/2025/CallForPapers) on \\\"societal considerations including fairness, safety, and privacy\\\". By exposing critical security vulnerabilities in machine unlearning, we advance the understanding of risks in this emerging field.\\n\\n2. **Methodological contributions**: our paper introduces the following methodological novelties:\\n- We propose novel defense strategies against the introduced attack (Section 4).\\n- We explore subset-selection-based attack strategies, extending beyond the input perturbation techniques commonly seen in poisoning literature (Section 4.2).\\n- We scale our attacks to challenging black-box and high-dimensional settings (Algorithm 3).\\n\\n3. 
**Theoretical contributions**: our theoretical results (Appendix C) provide rigorous insights into the existence of the proposed attack, complementing our empirical findings and offering a strong foundation relevant to the machine learning community.\\n\\nWe believe these contributions offer a blend of **methodological and theoretical insights**, complemented by the **practical implications** you have acknowledged, making our work highly relevant to the machine learning community.\\n\\nThank you again for your constructive feedback and careful review.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response and for revising the paper to comply with the conference guidelines. However, exceeding the 10-page limit could be considered unfair to other authors who followed the guidelines. ICLR explicitly states, \\\"*The main text must be between 6 and 10 pages (inclusive).* ***This limit will be strictly enforced.***\\\". Despite that, the paper has not been desk-rejected. Honestly, I can understand how such an oversight might occur, and I would feel personally disappointed if it had happened to me. I hope the authors can appreciate that my concern is rooted in a matter of fairness and is not intended as an outright criticism of the paper itself.\\n\\n\\nRegarding the contribution, I appreciate the clarification provided by the authors. While the attack methods are adaptations of established poisoning techniques, I agree that exploring their application in the machine unlearning context is a valuable contribution. However, I see this value as being more *applicative* than *methodological* for the machine learning research community. 
That said, it may hold greater relevance and value for the security community, where its implications could be further explored and appreciated.\nWith these considerations in mind, I have updated my score to reflect the significance of this application while maintaining my view on the limited methodological novelty.\"}", "{\"comment\": \"Dear Reviewer jZsQ,\\n\\nThank you for your thoughtful response and follow-up questions. We address them as follows.\\n\\n**Q2: Threat model**\\n\\n> I'm generally okay with the threat model's justification but would recommend being precise when introducing a new threat model...I think the author's response should be able to address the 1st point, which was the root cause of confusion. It's thus recommended to refine the motivation regarding this gap.\\n\\nWe\\u2019re glad to hear that our clarification about the threat model largely addressed your concern. We also agree with your suggestion to further clarify our motivation, particularly by discussing the two mentioned points. We plan to restructure the introduction flow of the paper as follows:\\n\\nWhile machine unlearning algorithms have been proposed, there has been limited discussion on their deployment, particularly regarding whether these algorithms should accept user-provided inputs to construct D-forget. In this paper, we highlight two potential vulnerabilities that attackers could exploit if user-provided inputs are allowed to construct D-forget:\\n- Adversarially perturbed examples from D-forget (Section 3), which can be mitigated by certain defenses (Section 4.1).\\n- Adversarially chosen valid examples from D-forget, which pose a greater challenge for the deployer to defend against (Section 4.2).\\n\\nWe are happy to update the paper accordingly if this revised flow makes sense to you.\\n\\n**Q3: Section 4.2**\\n\\n> I double-checked the results and the results are somewhat confusing. 
IIRC, it shows that the attacker can adversarially select 10 samples to construct the D-forget, which will reduce 30% accuracy if executed by the model deployer. I wonder where these 10 samples were selected and if there were additional assumptions. Were they selected from the entire training set (which the attacker should not have access to), or a chosen portion of the training set that the attacker initially provided for model training?\\n\\nThank you for your question. In this setting, the attacker adversarially selects N examples to construct D-forget. Figure 5 illustrates that for N=10, this results in the model achieving ~30% classification error after unlearning (compared to only 3.8% classification error for an average-case forget set).\\n\\nWe note that this is a proof-of-concept experiment, where we allow the attacker to exploit its maximal power by randomly sampling N images *from the entire training set*. For your reference, we also conducted experiments where the attacker has access to **only a portion of the data**. Below are the results for N=10:\\n\\n\\n| Attacker's portion to training data | Max Retain Error (%) |\\n|---------|----------------------|\\n| 5% | 21.6 |\\n| 10% | 26.8 |\\n| 20% | 27.3 |\\n| 50% | 28.7 |\\n| 100% | 30.9 |\\n\\nWe are happy to include these additional results for different values of N's in the final version of the paper.\\n\\n\\n**Q4: max vs. average**\\n> Since this paper is an attack paper, the results are higher the better. So it would make more sense to report the mean and std of the attack's bare effectiveness across several runs. \\n\\nThank you for your follow-up question. We\\u2019ve included the average-case results for the main tables (Table 1 and Table 4) in Appendix B.3 for your reference.\"}", "{\"title\": \"Response to Reviewer jZsQ (Part 1)\", \"comment\": \"Thank you for your feedback! We are glad to hear that you find our paper solid, well-written, and our evaluation generally comprehensive. 
We address your questions and comments below.\\n\\n**Q1: Related work of security risks of unlearning tasks and poisoning attacks**\\n> have there been prior efforts to study the security risks of machine unlearning tasks? \\n\\n**A**: Thanks. As discussed in the related work section of our submission (Section 6), prior studies have explored various security risks in machine unlearning systems, including:\\n- Unintended privacy leakage: Carlini et al. (2022), Hayes et al. (2024), Shi et al. (2024)\\n- Vulnerability to re-introducing unlearned knowledge: Shumailov et al. (2024), Lucki et al. (2024)\\n- Insufficient removal of poisoned data: Di et al. (2022)\\n\\nTo the best of our knowledge, our work is the first to demonstrate that adversarial unlearning requests can cause catastrophic performance degradation in unlearned models.\\n\\n> what's the difference between the threat models of this work and poisoning attacks? attacker is also able to execute poisoning attacks\\n\\n**A**: You raised a valid point about poisoning attacks being a plausible alternative when the attacker has control over part of the training data. However, we\\u2019d like to make two important clarifications: \\n\\n- Our attack could operate under a slightly **weaker** threat model compared to poisoning attacks\\u2014it can be executed even without the attacker having control over a subset of the training data (Section 5.2).\\n- Our attack identifies a new vulnerability **unique** to machine unlearning systems, whereas poisoning is a well-known attack vector applicable to all systems involving model training. 
These two types of attacks target different stages of a machine learning system, and thus our attack is **NOT** intended to compete with or replace standard poisoning attacks.\\n\\nTherefore, we view our contribution as orthogonal and possibly complementary to poisoning attacks.\\n\\n**Q2: Practicalness of threat model**\\n\\n> I'm not sure if it ever makes sense for a model trainer to rely on user-submitted data to execute unlearning algorithms. In practice, it's usually the model provider's responsibility to identify which data belongs to the user and unlearn those data stored in the model provider's datastore. \\n\\n**A**: We completely agree with the reviewer's opinion that the model provider has the responsibility to verify the unlearning requests, which is one of the main implications we are advocating in this paper. We note that while unlearning has been a promising research direction recently, it has not yet been widely deployed. Therefore, we do not have concrete examples of deployed systems to demonstrate this attack. However, all the mainstream formulations of unlearning have largely overlooked this problem. Our systematic study hopes to guide the evolution of unlearning protocols to address such vulnerabilities before widespread deployment.\\n\\nIn the following, we address the two concrete questions raised by the reviewer:\\n\\n1. *Why might a model provider not store all training data despite having originally trained the model?*\\n\\nThere are two potential reasons:\\n- **Storage limitations**: In cases of training on streaming data, the model deployer may only retain the most recent data due to limited storage capacity and have to discard older training data.\\n- **Regulatory compliance**: Regulations often require organizations to delete raw training data after a predefined retention period to minimize data exposure. Typically, the retention period for raw training data is shorter than that of the model.\\n\\n2. 
*Why might a model provider accept user-submitted samples for unlearning instead of identifying them themselves?*\\n\\nUnder the GDPR, individuals have the \\\"right to be forgotten,\\\" which mandates that organizations remove personal data upon user request. In such cases, the model provider may rely on user-submitted samples to execute an unlearning operation.\\n\\n> the proposed threat model may also cause some other issues, where User A may submit benign samples belonging to User B and request deletion on behalf of User B without User B's consent. \\n\\n**A**: Thanks! While the scenario you described is orthogonal to our primary contribution and can be covered by the current attack framework, it is an interesting point. We will include a discussion of this in the final version.\\n\\n> I believe what's actually happening is that the model provider will go through their data store and identify those belonging to the requester, which they know for sure (1) belong to the requester and (2) were indeed used in the model training.\\n\\n**A**: Strong defenses like data ownership verification, as you suggested, could indeed prevent certain attacks. However, implementing them in practice is challenging because privacy regulations typically require data to be anonymized, making it sometimes prohibitive to store user IDs or similar identifiers that link data to specific users.\"}", "{\"comment\": \"Thank you for the clarification and the revision.\"}", "{\"comment\": \"Thanks for the response. I think the revised flow makes much more sense now.\\n\\nThe updated results for Q3 make sense now. However, it's worth pointing out that the attacker in Sec 4.2 is much stronger than the main threat model, as they control more than the samples they submitted. 
A more precise modeling of the attacker would be that they can only sample the N points from their controlled part of the data -- where 5% would still be unusually high, as in Tabs 3 and 4 the attacker only had control of D-forget with 10 - 100 samples.\\n\\nI think it's acceptable to increase the attacker's capability to quantify the defender's hardness, but it may have overestimated the damage caused by the attacker if the paper wants to make a point on the attack's side (I understand it's in a defense section). I would recommend clarifying the gap in the threat model and focusing on the defense's side here, and being conservative when discussing the attacker's capability.\\n\\nOverall I think the paper should be in good shape with all the above updates implemented. I've adjusted my score accordingly.\"}", "{\"title\": \"Response to Reviewer Hn2d\", \"comment\": \"We appreciate your feedback and are glad to hear you enjoyed our paper. We address your questions and comments below.\\n\\n**Q1: Scalability Limitations in Black-Box Attacks**\\n\\n> While the paper\\u2019s black-box attacks yield promising results, they might face challenges when applied to high-dimensional data. Including a more detailed discussion of this limitation could add depth and insight. Addressing these challenges would provide valuable context about the trade-offs and bottlenecks in black-box attacks, enhancing the practical understanding for researchers and practitioners in the field of machine unlearning and security.\\n\\n**A**: We appreciate the comment. From a theoretical perspective, the gradient estimators in the black-box setting can have larger variance in the high-dimensional setting, and our experiments confirm that attacking high-dimensional data (e.g., ImageNet) is more challenging (Table 2). Increasing the number of estimators and averaging them could partially mitigate this issue. 
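For intuition, here is a minimal toy sketch of such a two-point zeroth-order estimator (our illustration with a placeholder quadratic loss, not the implementation used in the paper); averaging over more random directions is exactly the variance-reduction step mentioned above:

```python
import numpy as np

def zo_gradient(loss, x, mu=1e-3, n_dirs=1, rng=None):
    """Two-point zeroth-order estimate of grad loss(x).

    Each random direction u contributes (loss(x+mu*u)-loss(x-mu*u))/(2*mu)*u;
    the estimator's variance grows with dim(x), so averaging over n_dirs
    directions helps in high-dimensional settings.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.size)
        grad += (loss(x + mu * u) - loss(x - mu * u)) / (2.0 * mu) * u
    return grad / n_dirs

# Sanity check on a toy quadratic, whose true gradient at x is 2*x:
x = np.array([1.0, -2.0, 3.0])
est = zo_gradient(lambda z: float(z @ z), x, n_dirs=5000)
```

With `n_dirs=1` the estimate is very noisy; with a few thousand directions it closely tracks the true gradient `2*x`, at the cost of proportionally more loss queries.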
As suggested, we have included this discussion in the revised PDF.\\n\\n**Q2: Limited Exploration of Adaptive Defense Mechanisms**\\n\\n> This paper makes a valuable contribution by assessing the effectiveness of various defensive mechanisms and highlighting potential vulnerabilities. However, investigating additional adaptive defenses, such as anomaly detection or similarity-based clustering methods, could strengthen this section.\\n\\n**A**: Thank you for the comment. We note that Section 4 already explores several anomaly detection approaches, including hashing-based, embedding-based, and pixel distance-based methods. If you have specific suggestions for additional defenses, we would be happy to consider and discuss their incorporation.\\n\\n**Q3: Ambiguity in the definition of \\u201cBlack-Box\\u201d Attack**\\n> Although termed a \\u201cblack-box\\u201d attack, the method used by this paper still requires access to the training loss. This might be somewhat misleading, as traditional black-box adversarial attacks are generally assumed to operate without any internal information about the target model. Adjusting this terminology or clarifying the scope could improve precision and align expectations.\\n\\n**A**: Thank you for this valuable feedback. In our setup, we define the black-box attack as one that requires access to the training loss. We acknowledge that this differs from the stricter definition, where the attacker has access only to the model's outputs. However, similar setups have also been referred to as \\\"black-box\\\" in prior literature [1, 2, 3]. We have clarified this in the revised PDF as suggested (lines 135-136). \\n\\n**Q4: Potential defenses**\\n> Can you think about possible ways to enhance the defense against such adversarial machine unlearning attacks?\\n\\n**A**: Thank you for the comment. 
As noted in Section 4.2, the most robust defense we can propose is for the model deployer to avoid directly using user-submitted inputs for unlearning (the paragraph titled with \\u201cSimilarity searching and indexing\\u201d). Instead, they could compare the submitted inputs to already stored examples and use these for unlearning. However, this defense requires the model deployer to pre-store such examples and still remains susceptible to indexing attacks, where an attacker selects a subset of valid training examples that degrade model performance after unlearning.\\n\\n**References**:\\n\\n[1] Chen, Pin-Yu, et al. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models.\\n\\n[2] Tu, Chun-Chen, et al. Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks.\\n\\n[3] Liu, Yiyong, et al. Membership inference attacks by exploiting loss trajectory.\"}", "{\"title\": \"Response to Reviewer jZsQ (Part 2)\", \"comment\": \"**Q3: defences with full access to training examples**\\n> If so, wouldn't the best defense be not trusting the user-provided data at all?... Then they could just identify the precise set of user's data used for model training, and then execute the unlearning algorithm.\\n\\n**A**: We appreciate the comment. We kindly note that your suggestion of not trusting user-provided data is exactly the defense strategy we explored in Section 4.2 (the paragraph titled \\\"Similarity searching and indexing\\\"). Specifically, we examined a defense mechanism where the model deployer could compare submitted inputs against stored training examples to identify and unlearn them. 
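As a toy illustration of this comparison step (our sketch, not the paper's exact defense; the raw feature vectors and the threshold `tau` are placeholders), the deployer would accept a request only if it closely matches some stored training example:

```python
import numpy as np

def match_request(submitted, stored, tau):
    """Return (accepted, index): accept the unlearning request only if the
    submitted example is within L2 distance tau of some stored training
    example, and report which stored example it matched."""
    dists = np.linalg.norm(stored - submitted, axis=1)
    idx = int(np.argmin(dists))
    return bool(dists[idx] <= tau), idx

stored = np.array([[0.0, 0.0], [1.0, 1.0]])
accepted, idx = match_request(np.array([0.98, 1.01]), stored, tau=0.1)
# accepted is True and idx == 1: the request maps to the second stored example
```

The same check could equally be run in an embedding or perceptual-hash space, in the spirit of the detection approaches discussed in Section 4.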
However, we demonstrate that this approach still remains vulnerable to the indexing attacks: In such attacks, an adversary can strategically select a subset of valid training examples that, when removed, significantly degrades model performance (see Figure 5).\\n\\n**Q4: Experiment settings** \\n> In Section 3, the forget set has 10 to 100 samples. It's unclear how this range is devised, but 100 may sound too much IMO, as it means the model provider will accept up to 100 samples, which the requester claims belong to them, to update their model weights. It's suggested to provide more concrete justification for why such numbers are chosen.\\n\\n**A**: The range of forget set sizes was selected to capture a spectrum of unlearning requests, from small (10 samples) to relatively large (100 samples).\\n\\nRegarding the concern about the size of 100 samples, we note that this is not unusually large in the context of prior work. The table below highlights the largest forget set sizes used in previous studies that use CIFAR-10 as a benchmark for unlearning:\\n\\n| Study | Largest forget set size |\\n|---------------------------------------------------------------------------------|---------------------------------------------|\\n| Eternal sunshine of the spotless net: Selective forgetting in deep networks [1] | 5000 |\\n| Towards adversarial evaluations for inexact machine unlearning [2] | 4000 |\\n| Machine Unlearning Competition, NeurIPS 2023 [3] | 5000 (10% of the CIFAR-10 training dataset) |\\n| Towards Unbounded Machine Unlearning [4] | 25 (small-scale), 5000 (large-scale) |\\n\\nCompared to these studies, our choice of a maximum of 100 samples is relatively modest.\\n\\n> The evaluation quantifies the attack's performance as the maximum accuracy degradation observed across a grid search of hyperparameters. I'm curious (and concerned) why the maximum instead of other more common metrics are reported, such as the average, medium, or confidence interval. 
\\n\\n**A**: Our metric reports the **average** (across five randomly chosen forget sets) of the **maximum** accuracy degradation under attack. Using the \\\"average\\\" ensures statistical rigor, while the \\\"maximum accuracy degradation\\\" highlights the worst-case scenario, which is critical for evaluating system robustness. We argue that from a model deployer's perspective, focusing on non-optimal attack hyperparameters is less informative, as it fails to capture the most damaging potential outcomes.\\n\\n> It's also not very convincing that, in reality, the attacker would be able to have a grid search and obtain the best results.\\n\\n**A**: While online grid search may seem computationally intensive, we note that attackers can feasibly conduct **offline** hyperparameter optimization. In our implementation, we optimized hyperparameters for each forget set size during an initial search and reused them for all subsequent attacks of that size. This simulates a realistic attack scenario, where an adversary balances computational cost with maximizing effectiveness.\\n\\n\\n**References**:\\n\\n[1] Golatkar, Aditya, Alessandro Achille, and Stefano Soatto. Eternal sunshine of the spotless net: Selective forgetting in deep networks. CVPR 2020\\n\\n[2] Goel, Shashwat, et al. Towards adversarial evaluations for inexact machine unlearning. arXiv preprint arXiv:2201.06640 (2022)\\n\\n[3] NeurIPS 2023 Machine Unlearning Challenge. https://unlearning-challenge.github.io/\\n\\n[4] Kurmanji, Meghdad, et al. Towards unbounded machine unlearning. NeurIPS 2024\"}", "{\"summary\": \"This paper explores security risks within machine unlearning schemes, a topic gaining importance as the demand for ensuring \\\"the right to be forgotten\\\" grows. Specifically, the paper introduces a type of attack that degrades a model's accuracy by submitting adversarial unlearning requests, with evaluations demonstrating its effectiveness. 
Additionally, the paper shows that various existing verification mechanisms fail to detect these proposed attacks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The concept of adversarial machine unlearning attacks is innovative and contributes a new perspective to the field.\", \"In addition to empirical evaluations, the paper provides a theoretical demonstration of the proposed attack, enhancing its depth and rigor.\", \"The paper is well-structured, with clear and accessible content that is easy to follow.\"], \"weaknesses\": [\"Overall, I enjoyed reading this paper. Below are some suggestions that could help improve it further:\", \"*Scalability Limitations in Black-Box Attacks*: While the paper\\u2019s black-box attacks yield promising results, they might face challenges when applied to high-dimensional data. Including a more detailed discussion of this limitation could add depth and insight. Addressing these challenges would provide valuable context about the trade-offs and bottlenecks in black-box attacks, enhancing the practical understanding for researchers and practitioners in the field of machine unlearning and security.\", \"*Limited Exploration of Adaptive Defense Mechanisms*: This paper makes a valuable contribution by assessing the effectiveness of various defensive mechanisms and highlighting potential vulnerabilities. However, investigating additional adaptive defenses, such as anomaly detection or similarity-based clustering methods, could strengthen this section.\", \"*Ambiguity in the definition of \\u201cBlack-Box\\u201d Attack*: Although termed a \\u201cblack-box\\u201d attack, the method used by this paper still requires access to the training loss. This might be somewhat misleading, as traditional black-box adversarial attacks are generally assumed to operate without any internal information about the target model. 
Adjusting this terminology or clarifying the scope could improve precision and align expectations.\"], \"questions\": [\"Can you think about possible ways to enhance the defense against such adversarial machine unlearning attacks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper received ratings of 8,8,3. The paper presents promising results on black-box attacks and machine unlearning but has several key weaknesses. It overlooks scalability challenges for high-dimensional data in black-box attacks, and fails to fully explore adaptive defense mechanisms like anomaly detection. The definition of \\u201cblack-box\\u201d attack is ambiguous, as the method still requires access to training loss, which deviates from traditional black-box assumptions. Additionally, the threat model may not be realistic, as it assumes model providers accept user-submitted data for unlearning, which is uncommon in practice. There are also concerns about the model provider\\u2019s ability to reliably identify user data for deletion and the practicality of the attack in real-world scenarios. The evaluation method, including the use of maximum accuracy degradation, is questioned for its relevance and realism. Lastly, the paper would benefit from a more detailed exploration of related work on the security risks of machine unlearning and poisoning attacks.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal addressed most issues raised by the reviewers. One reviewer is still rejecting it. The main weakness reported by the reject reviewer is: I see the value of this paper as being more applicative than methodological for the machine learning research community. 
That said, it may hold greater relevance and value for the security community, where its implications could be further explored and appreciated.\n\nThe AC believes the submission has sufficient positive aspects so can be accepted.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nThanks again for your detailed comments and suggestions. Since the rebuttal period is approaching its end, we would appreciate learning whether our responses have addressed your concerns. We are also happy to engage in further discussions.\"}", "{\"comment\": \"Dear Reviewer jZsQ,\\n\\nWe sincerely appreciate your constructive feedback and for raising your score.\\n\\nWe\\u2019re happy to hear that you find the revised flow clearer. We\\u2019ll include this update in the final paper (unfortunately, it seems we can\\u2019t update the PDF submission at this stage).\\n\\nWe also agree with your point that the results in Section 4.2 may appear stronger than those under the main threat model. In the final version, we\\u2019ll revise the main results to focus on scenarios where the attacker has access to very few examples (e.g., $\\\\leq$ 5%), while keeping the results for cases with greater access for reference.\\n\\nThank you again for your feedback.\"}", "{\"comment\": \"Thanks for the detailed response. Below are my remaining concerns.\\n\\n**Q2: Threat model.**\\n\\nI'm generally okay with the threat model's justification but would recommend being precise when introducing a new threat model. I agree with the statement that ML unlearning algorithms haven't discussed scenarios where D-forget does not belong to D-train. But this does not necessarily mean that such algorithms assume taking user-provided inputs to construct D-forget. If the latter is a given, then the proposed concerns would totally make sense. This means the paper's motivation is better decoupled into two steps:\\n1. 
There is a lack of discussion regarding whether ML unlearning algorithms should accept user-provided inputs to construct D-forget.\\n2. In scenarios where the unlearning algorithms have to take user inputs, two types of adversarial inputs can happen: (1) adversarially chosen benign D-forget and (2) adversarial examples present in D-forget.\\n\\nI think the author's response should be able to address the 1st point, which was the root cause of confusion. It's thus recommended to refine the motivation regarding this gap.\\n\\n**Q3: Section 4.2**\\n\\nI double-checked the results and the results are somewhat confusing. IIRC, it shows that the attacker can adversarially select 10 samples to construct the D-forget, which will reduce 30% accuracy if executed by the model deployer. I wonder where these 10 samples were selected and if there were additional assumptions. Were they selected from the entire training set (which the attacker should not have access to), or a chosen portion of the training set that the attacker initially provided for model training?\\n\\n**Q4: max vs. average**\\n\\nSince this paper is an attack paper, the results are higher the better. So it would make more sense to report the mean and std of the attack's bare effectiveness across several runs. Only in this way, we can understand the true distribution of the attack's performance. The hyper-parameter tuning should belong to a separate ablation study to show how the attack's performance is sensitive to these factors, and the possibility & stability of offline hyper-parameter optimization.\"}", "{\"summary\": \"This paper investigates the problem of adversarially degrading model accuracy by submitting adversarial unlearning requests for data not presented in the training set. Two settings are considered, white box and black box, depending on whether the attacker can access the model's gradients. Experiments show that such requests can significantly degrade the model's performance. 
The paper also presents the challenges of detecting the legitimacy of unlearning requests.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"### Originality\", \"The idea of studying adversarial unlearning requests is novel and interesting. This is a valid concern when the model deployer accepts user-submitted unlearning data.\", \"The overall methodology is novel, including the use of the gradient-through-gradient technique, regardless of some inevitably similar techniques to poisoning attacks and gradient estimation in black-box attacks.\", \"### Quality\", \"The methodology is solid under the proposed threat model, and correctly verified using experiments on CIFAR-10 and ImageNet, with three unlearning algorithms.\", \"The evaluations are generally comprehensive.\", \"### Clarity\", \"The paper is well-written and easy to follow.\", \"### Significance\", \"The overall threat model and takeaways are useful, under the assumption that it makes sense for a model deployer to accept user-submitted samples to execute the unlearning algorithm (see weakness for why this may not be the case).\"], \"weaknesses\": \"### Originality\\n\\n**Q1: Related work of security risks of unlearning tasks and poisoning attacks.**\\n\\nSince this paper's threat model is similar in spirit to poisoning attacks, it's suggested to discuss more details on the related work and address two main questions. First, have there been prior efforts to study the security risks of machine unlearning tasks? Second, what's the difference between the threat models of this work and poisoning attacks, given that the attacker in this paper is also able to execute poisoning attacks from the beginning?\\n\\n### Quality\\n\\n**Q2: Practicalness of the threat model.**\\n\\nThe assumption of this paper is that the user may submit malicious images as unlearning requests to the model deployer, who will execute the unlearning algorithm on such images. 
However, I'm not sure if it ever makes sense for a model trainer to rely on user-submitted data to execute unlearning algorithms. In practice, it's usually the model provider's responsibility to identify which data belongs to the user and unlearn those data stored in the model provider's datastore. Can the authors provide some concrete use cases, preferably realistic, where any model provider will (1) not store the training data despite their original ability to train the model and (2) accept user-submitted samples to delete the data?\\n\\nExtending from this concern, the proposed threat model may also cause some other issues, where User A may submit benign samples belonging to User B and request deletion on behalf of User B without User B's consent. I believe what's actually happening is that the model provider will go through their data store and identify those belonging to the requester, which they know for sure (1) belong to the requester and (2) were indeed used in the model training.\\n\\nThe second aspect of this question is that the attacker already controls a subset of the training data. In this case, why can't the attacker simply execute poisoning attacks, which are more stealthy than this attack? In this case, the attacker's identity is easy to trace, yet it's much harder to identify who has sent the poisoning samples.\\n\\n**Q3: Regarding the defences with full access to training examples.**\\n\\nThis is similar to Q2 but applies specifically to Section 4.2. In this subsection, the defense has full access to the training samples, so I guess in this case the model provider does not have any limitations in Q2. If so, wouldn't the best defense be not trusting the user-provided data at all? As a model provider, or in fact, the service provider, they must have the ability to identify where the data was originally collected from. Otherwise, they won't be able to delete the user's data in their training set. 
Then they could just identify the precise set of user's data used for model training, and then execute the unlearning algorithm.\\n\\n### Clarity\\n\\n**Q4: Concerns regarding the experiment settings.**\\n\\n1. In Section 3, the forget set has 10 to 100 samples. It's unclear how this range is devised, but 100 may sound too much IMO, as it means the model provider will accept up to 100 samples, which the requester claims belong to them, to update their model weights. It's suggested to provide more concrete justification for why such numbers are chosen.\\n2. The evaluation quantifies the attack's performance as the *maximum* accuracy degradation observed across a grid search of hyperparameters. I'm curious (and concerned) why the maximum instead of other more common metrics are reported, such as the average, median, or confidence interval. It's also not very convincing that, in reality, the attacker would be able to have a grid search and obtain the best results.\\n\\n### Significance\\n\\nI think the overall idea and evaluation of executing machine unlearning algorithms on malicious inputs are interesting and useful. However, my main concern regarding the significance is that the threat model may not be realistic (Q2-Q3). It's true that unlearning algorithms have been implicitly assuming that the data to be deleted are indeed used for model training, but such an assumption is built upon another assumption that, in order to delete the user's data, the model provider must have the basic ability to identify the correct set of user data in their data store, and only then go to the next level of deleting data from the model. 
It's suggested to provide more justification for the practicalness of the proposed threat model, where the model provider is expected to delete data from the model without the ability to reliably identify the user's precise set of data stored in their database.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer hRNb\", \"comment\": \"Thank you for your feedback! We're glad to hear that you find our paper interesting and the soundness and presentation to be good. We address your questions and comments below.\\n\\n**Q1: Exceeds page limit**\\n\\n> The paper currently extends beyond the 10-page maximum allowed by ICLR, utilizing in total 11 pages of core content. \\n**A**: Thank you for pointing this out and we sincerely apologize for exceeding the page limit by 2 additional newlines, due to a last minute update that we did not carefully check. We\\u2019ve fixed this in the revised PDF. \\n\\n> Additionally, several crucial aspects, such as unlearning algorithm specifics, hyperparameters, and theoretical results, are deferred to the appendix, making it challenging to assess the completeness of the methods presented within the main body.\\n\\nThank you for the comment. Regarding the placement of technical content:\\n- **Unlearning algorithm specifics, hyperparameters**: the core unlearning algorithms and their hyperparameters are fully described in the main text (lines 222-239). Only the hyperparameter selection rationale is in the appendix. We can certainly move this discussion to the main paper if you feel it would improve readability, space permitting. \\n- **Theoretical result**: we have summarized the main theorem statement in the body of the paper (lines 298-301). 
This theorem establishes the existence of our proposed attacks in the context of linear model unlearning. The detailed proof remains in the appendix to maintain narrative flow while preserving technical completeness.\\n\\n**Q2 Mischaracterization as a novel adversarial attack & Lack of novelty in attack mechanisms**\\n\\n> While the approach is framed as a new type of adversarial attack, it is more accurately an adaptation of established poisoning techniques. The attack uses a standard poisoning approach where the unlearning update replaces fine-tuning or training in the model. As outlined, the objective of minimizing model accuracy while removing poisoning samples mirrors conventional poisoning attacks. This is detailed in prior work such as [1-4] or survey [5] which already connect poisoning attacks to meta-learning approaches, similar to the attack formulation in Section 2.2 of this paper. \\n\\n> The two attack methods (white-box and black-box) do not introduce any new methodologies beyond existing techniques. The white-box method is essentially a re-implementation of approaches discussed in [1] and [3], while the black-box method, based on gradient estimation, has been used in [5]. Consequently, the attack does not present a unique contribution in terms of adversarial methods.\\n\\n**A**: We appreciate the reviewer\\u2019s comments and agree that our attacking algorithm is similar to some previous poisoning techniques. However, we note they all boil down to the same standard formulation with gradient based searching of adversarial inputs. The main novelty (of previous poisoning papers and our work) is applying such techniques to a novel application scenario and systematically studying the implications and defense. 
Specifically, we highlight that\\n- Our primary contribution lies in identifying a novel **attack interface** specific to machine unlearning pipelines, which fundamentally targets a different system and interface (unlearning) compared to poisoning attacks (in standard learning or meta-learning). \\n- With this new attack surface, we also explore corresponding defensive methods. We also note that the defensive landscape for unlearning differs significantly from that of training-time poisoning attacks. In unlearning, the unlearned set must resemble valid training data previously used to train the model. This enables defenses based on similarity comparison to training examples (Section 4). \\n\\nWe discussed some of the previous poisoning attacks in the manuscript, but we apologize for missing some of the relevant citations and have addressed this in the revised manuscript.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nThanks again for your detailed comments and suggestions. Since the rebuttal period is approaching its end, we would appreciate learning whether our responses have addressed your concerns. We are also happy to engage in further discussions.\"}", "{\"summary\": \"This paper introduces Unlearn and Burn, an adversarial attack targeting machine unlearning algorithms to significantly degrade model performance by submitting malicious unlearning requests. The attack exploits the assumption that all unlearning requests correspond to legitimate training data, allowing an adversary to request the removal of data not present in the training set. The authors propose both white-box and black-box versions of the attack and demonstrate its effectiveness on CIFAR-10 and ImageNet, with results showing considerable drops in accuracy. 
The study also highlights the challenges of detecting such stealthy attacks, suggesting that existing verification methods for unlearning requests may be inadequate.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents an interesting and relevant idea, yet it has some notable issues that need addressing.\", \"weaknesses\": \"Exceeds page limit: The paper currently extends beyond the 10-page maximum allowed by ICLR, utilizing in total 11 pages of core content. Additionally, several crucial aspects, such as unlearning algorithm specifics, hyperparameters, and theoretical results, are deferred to the appendix, making it challenging to assess the completeness of the methods presented within the main body.\", \"mischaracterization_as_a_novel_adversarial_attack\": \"While the approach is framed as a new type of adversarial attack, it is more accurately an adaptation of established poisoning techniques. The attack uses a standard poisoning approach where the unlearning update replaces fine-tuning or training in the model. As outlined, the objective of minimizing model accuracy while removing poisoning samples mirrors conventional poisoning attacks. This is detailed in prior work such as [1-4] or survey [5] which already connect poisoning attacks to meta-learning approaches, similar to the attack formulation in Section 2.2 of this paper. Given these similarities, I observe a lack of novelty compared to the state of the art and a risk of creating confusion with respect to something that already exists in the literature.\", \"lack_of_novelty_in_attack_mechanisms\": \"The two attack methods (white-box and black-box) do not introduce any new methodologies beyond existing techniques. The white-box method is essentially a re-implementation of approaches discussed in [1] and [3], while the black-box method, based on gradient estimation, has been used in [6]. 
Consequently, the attack does not present a unique contribution in terms of adversarial methods.\\n\\n\\nGiven the issues above, particularly the excessive page count and the lack of novelty, I recommend rejection of the paper. I encourage the authors to reconsider the framing of their approach, positioning it as an adaptation of poisoning attacks within unlearning settings rather than a new adversarial attack. I hope these comments will serve as constructive feedback for further refinement.\\n\\n\\n\\n[1] Metapoison: Practical general-purpose clean-label data poisoning. NeurIPS 2020.\\n\\n[2] The hammer and the nut: Is bilevel optimization really needed to poison linear classifiers? IJCNN 2021.\\n\\n[3] Towards poisoning of deep learning algorithms with back-gradient optimization. ACM workshop on artificial intelligence and security 2017.\\n\\n[4] Witches' brew: Industrial scale data poisoning via gradient matching ICLR 2021.\\n\\n[5] Wild patterns reloaded: A survey of machine learning security against training data poisoning. ACM Computing Surveys 2024.\\n\\n[6] Generative poisoning attack method against neural networks. arXiv 2017.\", \"questions\": \"Could you clarify the specific differences between your proposed attacks and prior poisoning methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Research integrity issues (e.g., plagiarism, dual submission)', 'Yes, Other reasons (please specify below)']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"The paper initially exceeded the 10-page maximum (by 2-3 lines) allowed by ICLR, totaling 11 pages of core content. ICLR explicitly states, \\u201c*The main text must be between 6 and 10 pages (inclusive).* ***This limit will be strictly enforced.***\\u201d Despite this, the paper has not been desk-rejected. 
I wanted to highlight this point to ensure that submission guidelines are consistently and fairly applied to all submissions.\"}" ] }
5xwx1Myosu
Expressivity of Neural Networks with Random Weights and Learned Biases
[ "Ezekiel Williams", "Alexandre Payeur", "Avery Hee-Woon Ryoo", "Thomas Jiralerspong", "Matthew G Perich", "Luca Mazzucato", "Guillaume Lajoie" ]
Landmark universal function approximation results for neural networks with trained weights and biases provided the impetus for the ubiquitous use of neural networks as learning models in neuroscience and Artificial Intelligence (AI). Recent work has extended these results to networks in which a smaller subset of weights (e.g., output weights) are tuned, leaving other parameters random. However, it remains an open question whether universal approximation holds when only biases are learned, despite evidence from neuroscience and AI that biases significantly shape neural responses. The current paper answers this question. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets. We further show an analogous result for the approximation of dynamical systems with recurrent neural networks. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on recent fine-tuning methods for large language models, like bias and prefix-based approaches.
[ "random neural networks", "recurrent neural networks", "plasticity", "deep learning", "neuroscience", "multi-task learning" ]
Accept (Poster)
https://openreview.net/pdf?id=5xwx1Myosu
https://openreview.net/forum?id=5xwx1Myosu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zOJhWJFwtu", "oiKQ0OIpjF", "o4SlqYp5J2", "jjC2oPw7j0", "glvKErR2pL", "foPkhE9TlR", "edrh8tyODT", "eQJYYEXxCe", "e5hcSjHPfi", "dnUReu8vyj", "ZjHKwqSTO0", "YgVrjq9M2y", "VKzpyUmyST", "TqKaEdUBnC", "MC3HM2gSbL", "LEsanvSsm5", "H4MHMacw23", "GSbrALIl7a", "GS52O7kJmy", "EfA7dH7Gll", "EPSPliO5gr", "D1XPVn9Cml", "5UgbNx6C3s", "4uw1N1RuLO", "3mQ12as9WV", "2i74abqNPW" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732868224721, 1731436733363, 1732832441167, 1732636071079, 1735003746322, 1732463644789, 1733166175576, 1732245454699, 1732245731657, 1732832306185, 1737523878294, 1732898324743, 1730566853828, 1732557252253, 1732764019392, 1730975126160, 1732641252690, 1732246005455, 1732557616097, 1732551757907, 1730654253058, 1732468726362, 1732557414701, 1732246642821, 1732557118015, 1732246609808 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_WzyV" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_1PGv" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_1PGv" ], [ "ICLR.cc/2025/Conference/Submission7964/Area_Chair_y18U" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_jeUd" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_WzyV" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_jeUd" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_jeUd" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_kfuh" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_kfuh" ], [ "ICLR.cc/2025/Conference/Submission7964/Reviewer_WzyV" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ], [ "ICLR.cc/2025/Conference/Submission7964/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors for the modifications and responses. Regarding Figure D3, it seems that clustering is stronger in the weight-trained network, but quantifying this would require much more experiments.\\nThe claims made in the rebuttal regarding a mix of synaptic and bias learning are very speculative.\\nAfter carefully reading all the discussion, I am raising my score to 6.\"}", "{\"summary\": \"Previous work has investigated the expressivity of feed-forward neural networks (FNNs) when only subsets of parameters are trained (ie. only the output layer, normalization parameters, etc\\u2026). In the same vein, the authors introduce a method of training feed-forward neural networks by randomly sampling fixed weights and subsequently learning only the biases, termed bias learning. 
They provide theoretical and empirical evidence that demonstrates that FNNs trained through bias learning can approximate any continuous function on compact sets - meaning that they are theoretically as expressive as fully-trained FNNs.\\n\\nThey start with a theoretical treatment of bias learning where they carefully define their terms and introduce their theorems. A simplified version of the rigorous proof is as follows: 1) Train a fully connected network (N1) where the weights are constrained to lie in some fixed range. 2) Create a new network (N2) by randomly sampling the hidden neuron weights from the fixed range in 1. 3) After sufficient sampling, there exists a subnetwork of neurons in N2 that is \\u2018identical\\u2019 to the neurons in N1. 4) By training the biases, the outputs of neurons outside of this subnetwork can be removed, recovering N1 from N2 via bias training. They provide a similar proof for recurrent neural networks (RNNs).\\nNext, the authors provide empirical evidence supporting their theory and explore the expressivity of bias-learned networks. They do this in multiple ways, including performing multi-task learning with bias learning in 7 tasks (MNIST, KMNIST, Fashion MNIST, etc.), comparing bias learning and mask learning in FNNs, and applying bias learning on an RNN trained on both an autonomous and non-autonomous dynamical system. 
The main takeaways are as follows: 1) multi-task bias learning leads to emergence of task-specific functional organization revealed by clusters of activation patterns measured by task variance, 2) compared to mask learning, bias learning had less sparse solutions and higher unit variance values, 3) bias learning in RNNs can succeed in time-series forecasting of non-linear dynamical systems with high enough gains.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Overall, there is strength in its novelty of proving that bias learning in neural networks can have high expressivity that performs almost as well as a fully-trained network. This is significant because bias learning trains fewer parameters than a full network.\", \"Nature of bias learning is more behaviorally relevant in the context of tonic inputs, intrinsic cell parameters, threshold adaptation, and intrinsic excitability\", \"The theoretical proofs are very thorough, and backed up by numerical proofs.\"], \"weaknesses\": [\"In response to bias learning having fewer parameters to learn, no data was shown on training time\", \"Little background was given on mask learning (the mask learning section was also super short - felt less developed relative to other parts of the paper). This is important because two of their highlights in the results relate to mask learning.\", \"Makes claims (i.e. lines 253 - 259, lines 418-420) that could have been easily backed up by data, but were not.\", \"Figure 1 color scheme is weird\", \"Practically, not sure how exciting this is (i.e. 
other models can do what this model does - it's just that they use a different approach)\", \"The work seems highly related, in spirit, to neural tangent kernel approaches and other methods that consider wide NNs, but no references to that work were made.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Understood; thank you so much for the endorsement of our work, and for the time put into your detailed reviewing!\"}", "{\"comment\": \"Thanks to the authors for a clear response. My most significant concerns were addressed by the authors response, and the improved manuscript (improvements mostly stemming from other reviewer's comments) has, I believe, pushed this work above the bar for acceptance. (I have changed my score from a 5 to a 6). I still believe additional work is necessary to understand how these approaches perform (e.g. training time) against fully trained networks, but I do believe introducing the approach and its impact on ideas in computational neuroscience to the ICLR community has value.\"}", "{\"metareview\": \"This paper makes a significant theoretical contribution by demonstrating that neural networks with fixed random weights and trainable biases are universal function approximators. The authors provide rigorous proofs complemented by empirical experiments, including tasks relevant to neuroscience and AI, such as motor control and multi-task learning benchmarks. All reviewers voted for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"While some reviewers raised concerns about the scalability of bias-trained networks and the simplicity of the chosen tasks, the authors addressed these points through revisions that clarified theoretical bounds, expanded experimental scope, and enhanced presentation. 
Though the practical utility of bias learning remains speculative, its relevance to neuroscience and its potential for inspiring structured weight initialization in AI are notable strengths.\"}", "{\"comment\": \"I have gone over the authors' response and the updated pdf. They have addressed my concerns satisfactorily.\"}", "{\"title\": \"[Addressed] IMPORTANT: addressing score mismatch between comment and system\", \"comment\": \"UPDATE: Thank you for rectifying the score entered in the system. The score now reflects the intent communicated in the discussion.\\n\\n-----\\nHello,\\n\\nWe noticed that the reviewer has indicated that they would raise their score (see last reviewer comment from Nov 29) but the system is currently not reflecting this change. Could it be that the score change did not get saved? As the discussion period is drawing to an end, we kindly point out this discrepancy and once again thank the reviewer for the fruitful exchange. We remain available to address any additional points.\"}", "{\"title\": \"Author Comment\", \"comment\": [\"We respond point-wise, to the reviewer's questions and comments, below.\", \"# Strengths\", \"We thank the reviewer for the kind words about our theoretical contributions!\", \"# Weaknesses\", \"Thank you for the observation! We do not claim that bias learning would require fewer parameters than fully trained networks\\u2013at least not with the fully random weights studied in our paper. To make this more clear we show parameter-matched performance results in the main manuscript (see Fig.1A and Fig.4A). Regarding training time, we have not focused on such efficiency comparisons because bias-learned networks with fully random weights do not train faster than fully-trained networks (a 100 unit fully-trained network will take less time to train and still perform slightly better than a 10000 unit bias-learned network) on the tasks we explored. 
We agree that this would be problematic if our paper\\u2019s objective was to provide out-of-the-box methods for SOTA accuracy or efficiency gains, but this is not our objective. We believe the value in our paper is in theoretical insights for models of learning in neuroscience and for future ML development; for example, as a starting point for understanding/designing bias learning algorithms that rely on weights with some amount of structure intermediate between fully random and fully trained (see discussion starting at line 528 in new manuscript).\", \"We thank the reviewer for the helpful criticism. We have re-worked Figure 2 to better illustrate the solutions found by mask versus bias learning (see updated manuscript), and have edited the text accordingly (see section 3.2 of the new pdf). We have also added a methods section to the manuscript\\u2019s appendix, giving full details on the implementation of the mask learning process. Please let us know if there is anything that is still unclear with this portion of the paper.\", \"We thank the reviewer for the comment. We have removed lines 418-420 because the statement, while theoretically correct, has little practical relevance for very small matrix gains. Could the reviewer elaborate on which claim in lines 253-259 could be backed up with data?\", \"We have updated the color scheme of figure 1 (see new manuscript pdf).\", \"We agree that taken as a novel method, this approach might seem limited. In the Common Comment above, we argue that the main contribution of this work is to establish new theoretical insights and guarantees on optimization approaches that do not target connections, but rather rely on constant inputs to units (i.e. biases). Above, we also stress the relevance of our work to computational neuroscience and in-context learning in LLMs. 
Overall, the fundamental expressivity of input-driven networks is not well-studied, and just like the universal approximation theorem for standard deep networks lays out idealized cases to provide foundations, our result contributes a first step in input-driven expressivity. We incorporated these points into the revised introduction.\", \"Our work is fundamentally different from neural tangent kernels in that it studies wide but finite networks, rather than the infinite-width limit. However, we believe that exploring infinite-width limits of bias-learned networks could represent an exciting future direction, and we thank the reviewer for the inspiration!\"]}", "{\"title\": \"Author Comment\", \"comment\": [\"We respond point-wise, to the reviewer's questions, below.\", \"# Questions\", \"We thank the reviewer for the feedback on section 3.3.2\\u2013now section 3.4 in the new manuscript. To address these clarity issues we have written a methods section, in the manuscript appendix, for section 3.4 along with all the other numerical work. We have also adjusted the main text for clarity. In particular, regarding your questions here, we note:\", \"The recurrent state is not given in our experiments\\u2013only an input, in the non-autonomous case (no inputs are given in the autonomous case). We have tried to make this more clear in our current edits, with changes in the first paragraph of section 3.4.\", \"We apologize for the lack of clarity here. In the standard case the RNN predicts future steps of the dynamical system given past steps of a partially observed history as an input (this is the classic time series forecasting problem). In the self-sustained case the RNN receives its very own prediction of future states as the input from which it is predicting the future (here the RNN is now generating samples from the time series when it was only trained for forecasting). 
We have updated the text in the second paragraph of section 3.4 and in the Fig.4 caption to better communicate this.\", \"We thank the reviewer for this comment which led us to expand the discussion of key neuroscience mechanisms which are typically modeled as bias modulations or bias learning (listed in a new table in supplemental section A). In particular, if we model a brain area as a local neuronal circuit, then the effects of tonic inputs from other areas onto the local circuit can be modelled by setting specific values for the biases. These biases can switch certain parts of a network on/off as in models of motor behavior. In the first model, based on a coupled thalamocortical-basal ganglia neural network, the basal ganglia projection to thalamic neurons are modeled as inhibitory biases, which turn off thalamic populations to induce specific motor patterns (Logiaco et al. Cell Reports (2021)). In an extension of this model (Recanatesi et al. Neuron (2022); Mazzucato Elife (2022)), the secondary motor cortex generates tonic inputs to the primary motor cortex, modeled as biases in the motor cortex network: each set of biases represent initial conditions for a particular action and are active for the whole duration of the action. Input-driven learning from upstream regions sending inputs to the motor cortex has also been able to explain short-timescale, trial-by-trial behavioral adaptation to predictable perturbations (Perich et al. Neuron 2018; Feulner et al. Nature Communications (2022)). In the typical neuroscience approach where biases to the local circuit represent external inputs from other brain areas, biases can be positive or negative depending on the particular neurotransmitter which mediates such long range projections (i.e., AMPA, GABA, dopamine, acetylcholine, norepinephrine, serotonin, etc). 
We have elaborated on this in the main text, adding reference to the new table, in lines 51-55, to address your astute feedback.\", \"Thank you to the reviewer for the suggestion! We have included a more detailed description of the differences between our proof and Malach\\u2019s proof in the appendix (see lines 917-927 of the new pdf), and referenced this in the introduction (see lines 95-98). Please let us know if this is sufficient.\"]}", "{\"comment\": \"Thank you very much for all the engagement, and for the kind words about our work. As a first stab at addressing your question about training time we ran some experiments, which we report as $\\\\mathrm{MEAN}\\\\pm 1\\\\mathrm{SD}$ over 5 seeds. On our hardware, training a fully-trained single-layer MLP with $10^4$ hidden units on MNIST for 5 epochs takes $69.17\\\\pm0.22$ seconds. Training only the biases for the same task and architecture takes $61.51\\\\pm0.92$ seconds. Thus, for matched layer widths training biases with standard pytorch implementations provides a slight advantage. Note that one can maintain strong performance and get faster training by simply reducing the width of the fully-trained network (a 100 hidden unit fully trained net takes $37.77\\\\pm0.04$ seconds on the same test). However, as mentioned (see our `common response\\u2019, or the first bullet point in our response to your detailed review), our objective with bias learning is not to present an efficient, out-of-the-box method but rather to provide theoretical insights to communities like computational neuroscience. For this reason, while these efficiency results provide extra context we don\\u2019t see them as absolutely critical given the objectives of our paper. 
Nonetheless, we appreciate you advocating for a test of these train-time differences, and we will add them to the supplementary section of our manuscript.\\n\\nIf there is anything else that we can provide that might compel you to increase your support of our work, and our chances of getting to communicate it to the community, please do let us know. Thank you again for all your great feedback\\u2013your suggestions have improved the quality of our manuscript!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you very much for your support of our paper, and for your careful and detailed reviewing! We agree that it appears, qualitatively, that there is greater selectivity in the bias-learned network, and that this, while outside the scope of the current work, would represent an intriguing direction for future study. We will attempt to fit mention of this into the discussion section. We also confirm that the idea of combining simple synaptic structuring with bias learning is indeed speculative (we are actually beginning to investigate this currently!), which is why we have limited this speculation to the discussion portion of the manuscript. We believe that it is worth discussing because of the growing interest in combinations of synaptic and input-driven learning (see e.g. citations in second paragraph of revised manuscript). Regarding your support of our paper, we see that the score was not yet updated in your main review and just wanted to add a reminder about this, in case this step was forgotten, for the sake of the openreview stats. We are also happy to address any other questions or clarifications you might have that could lead you to further strengthen your support. 
Thank you again so much for all your help in the reworking of our manuscript!\"}", "{\"summary\": \"Note: I\\u2019m not an expert in the field of pruning and universality.\\nThe authors show that random neural networks with trained biases are universal approximators. This is shown both for feedforward and recurrent networks. The authors use an argument that is similar to masking, and use negative biases to implement the masking. Numerical simulations show that several benchmarks reach similar performance when training biases or training weights as well.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The questions of pruning, masking and universality are all important questions for neuroscience and machine learning. In neuroscience, it is known that many cell-autonomous adaptation mechanisms exist (e.g., spike threshold adaptation), and the wide distributions of firing rates hint that these properties could be a form of long-term plasticity as well.\\nProving universality results has opened the door for more application-oriented research in the past. The combination of mathematical proofs with systematic simulations and a wide literature review is a strength.\", \"weaknesses\": \"My main concern is the scaling of network size, which seems exponential and is also missing from the main text. The results of Malach et al 2020 suggest that masking is weaker than weight-pruning, unless the size of the network is exponential.\\nIf I understand correctly, I expect a scaling of (R/eps)^n. Because there are n independent events with probability (R/eps). Line 944 (appendix) states a scaling, which by approximating log(1+z)=z is indeed (1/eps)^(n^2).\\nGiven this large scaling, and the results of Malach et al that with polynomial scaling neuron-pruning is weak, it seems strange that bias learning is as strong as weight learning. Indeed \\u2013 the actual numerical results do not show comparable performance. 
As the tasks become harder, the gap widens. Also when controlling for the number of learned parameters, bias learning is still weaker.\", \"questions\": \"1.\\tDefinition 2: bounded by gamma?\\n2.\\tLine 186 large hidden layer width. Can you provide a rough estimate? What is the scaling? I assume it is roughly (R/eps)^n. In Malach et al, the size was polynomial. If I understand correctly, line 944 gives such scaling, and by approximating log(1+z)=z it is indeed (1/eps)^(n^2).\\n3.\\tFig E2. It is hard to compare what happens from 5000 parameters or so. Perhaps a logarithmic or ratio plot would help.\\n4.\\tFig E2 \\u2013 fully trained was still better than bias-only, even when controlling for parameter number. Do you know why this is? Is this related to the main concern raised above?\\n5.\\tLine 304. Correlation between TV and bias. Was this computed for every unit, and then averaged within each cluster?\\n6.\\tLine 304 If the theory is aligned with training, then the number of units should be much higher than simply the square of fully trained network. If Figure E2 suggests otherwise, then why expect the mechanism of the proof to hold? Further \\u2013 what are the values of biases? Are some of them extremely low \\u2013 effectively shutting down neurons?\\n7.\\tCorrelation values between mask and bias \\u2013 what is the correlation between different realizations of the training process?\\n8.\\tLine 461 \\u2013 similar scaling in RNN and FNN. Figure E2C shows that the fully trained saturates at 64^2 parameters. The bias trained network is shown up to 64^2 parameters, so we can\\u2019t see whether fully-trained RNNs with less than 64^2 parameters behaves similarly to FNN.\\n9.\\tLine 462 stability in larger windows. Perhaps I didn\\u2019t understand this, but is this really stability or simply more test sets? 
Because the network is fed the true dynamics, is there a meaning to larger windows?\\n10.\\tFigure 4C \\u2013 How does the fully trained network generalize in this scenario?\\n11.\\tLine 481 \\u2013 I think some discussion of scaling should go into the main text, even if the proof is in the appendix.\\n12.\\tLine 483 \\u2013 task-selective clusters similar to fully trained. This was not shown. Specifically, quantification of how task-selective are bias-trained vs. fully-trained networks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for the sound feedback you have given us; your insights have shaped our manuscript into a more worthy contribution. We are grateful that you see the value in our work as a first-pass characterization of bias learning. Please don't hesitate to inquire after any other information that would inspire you to further increase your score, and the chance of having our results on bias learning made available to the ICLR community. Thank you again!\"}", "{\"comment\": \"To follow-up on the last message, we have added a new figure to the appendix of the updated manuscript showing the task selectivity plot for fully-trained networks, as requested (see Figure D.3 and associated methods). Please let us know if there is any extra information we can provide to aid you in your evaluation of our paper. In the meantime, thank you again for all your feedback. We believe this review period has meaningfully strengthened our manuscript, and your insights have played a crucial role in the process.\"}", "{\"summary\": \"The authors show that both feedforward and recurrent neural networks can act as universal function approximators for functions and dynamical systems respectively, even when only biases are learned. 
They propose an alternative proof for the theorem that masking is all you need (Strong Lottery Ticket Hypothesis), and extend that and the bias result to RNNs approximating dynamical systems. The authors demonstrate their results using simple simulations, and discuss relevance to AI and neuroscience.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors connect well to the neuroscience and AI/ML literature and explain the proofs in an intuitive manner.\\nThe extension to RNNs and dynamical systems is also commendable as these often receive reduced attention in the ML community.\\nThe issue with the \\\"gain\\\" g in the weight distribution is well brought out.\", \"weaknesses\": \"The section 3.3.2 on the Lorenz system is not clearly written and the architecture and external input to the network are not clear.\\n\\nAt first glance, the result seems to be a simple extension of the masking theorem of Malach et al 2020. The difference with that proof should be made clear.\", \"questions\": \"In Section 3.3.2 - The authors write RNN, but then say that the recurrent state is given? Also what is provided as an external input? Is it the recurrent state? The difference between the 'standard' and the 'self-sustained' networks is not clear. To me the self-sustained way is the standard, and if somehow the recurrent state is provided (at each time step?), then the network is just acting as a feedforward network. Then in this case, I suspect that to actually use the RNN (usual self-sustained way) to learn the dynamics, the authors would need a lot more units.\\n\\nThe authors have not explained how (positive & negative) biases may arise in neuroscience if not by synaptic weights. As they mention, threshold changes etc. change the neural gain (and possibly have a strongly non-linear effect).
What about the role of inhibition and other brain areas switching parts of the network on and off?\\n\\nThe authors should bring out the differences between their proof and the Malach et al proof.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I do not think anything further in the limited amount of time will lead to increasing my score further, since I had already given an 8. Resolving the earlier unclearness(es) only confirms what I had assumed, while making the paper clearer, and so I maintain my score.\"}", "{\"title\": \"Author Comment\", \"comment\": \"We respond point-wise to the reviewer's questions below.\\n\\n# Weaknesses\\n- Thank you to the reviewer for the astute question. We know for certain that bias learning scales better than the extreme scaling suggested by the theory, as we can calculate the network sizes suggested by the theory and find that, numerically, bias learning scales a lot more reasonably (see remark 2 on lines 1080-1086 of the appendix for details). As such, we view the theorems as a statement that bias learning will work for some sufficiently wide layer, rather than a characterization of the precise scaling of bias learning. What then, of the key question of scaling itself? Because bias learning, with the activation functions tested, encompasses mask learning (since bias learning can shut off units), we know that the scaling will perform at least as well as masking units. Past work has related the scaling of unit masking to the scaling of random feature networks, and we have added mention of this in the new manuscript accordingly (see lines 513-517). 
We appreciate the idea of calculating scaling from the numerical work performed; to this end we reworked the scaling sub-figures, Fig.1A and Fig.4A, so that they directly compare performance as a function of trainable parameter count with a log-axis to better visualize the different curves. We have also plotted a log-log axis (Fig.D.A) to attempt to extrapolate the scaling, as you suggested. For power law (polynomial) scaling we would expect a linear relationship on the log-log plot, which is what we, approximately, observe. However, we hesitate to draw general conclusions from this one experiment because, as you mention, we would expect quantities like task difficulty and hyper-parameter choice to impact things. To address scaling numerically we would ideally run an array of experiments over a wider range of layer widths, a task that we believe is beyond the scope of the current work (we mention this as a future direction in the discussion\\u2013see lines 517-519). In the meantime, we believe that the reasonable performance of bias learning on the benchmarks tested renders it a potentially useful neuroscientific model for rapid adaptation of network dynamics, and a step towards two lines of AI application: models that initialize weights from more structured distributions and hardware-constrained settings, where one may want to use the same weights for multiple tasks, e.g. in small devices where weight change is prohibitive.
We believe the reviewer\\u2019s insight here has significantly improved the manuscript, and that the new results provide deeper evidence for the relevance of this work for neuroscientific modelling.\"}", "{\"comment\": \"We would like to follow-up to see if there is any further clarification we can provide to the reviewer that might lead them to potentially raise their score, and support the communication of our results to the ICLR community. Thank you again, very much, for the detailed review!\"}", "{\"comment\": \"I appreciate the authors' revisions, especially the inclusion of the motor control task. I still feel that a more complete investigation of required scaling in bias-only networks is called for, but the authors do a reasonable first-pass characterization here. I thus have raised my score to a 6.\"}", "{\"summary\": \"In this work, the authors prove a universal approximation result for the expressivity of neural networks with frozen weights but trainable biases. In particular, they build upon well-known universal approximation results of feedforward and recurrent architectures, and show via a simple mask learning-like argument that sufficiently large networks with randomly chosen weights can be constructed to approximate any function (feedforward) or finite-time trajectory (recurrent). They conduct experiments comparing fully trainable architectures to bias-learning variants, demonstrating that bias-only learning can achieve reasonable performance on some simple tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main expressivity results shown are well-explained and seem mathematically tight. 
Considering that these results made use of a reduction to mask learning problems, the authors also do a good job discussing the relationship between their findings and those of the mask learning literature.\", \"weaknesses\": \"A crucial aspect of this work with regards to its practical relevance is how large a bias-trained network needs to be to achieve similar performance to a fully trained network. Surely the scaling is better than the extreme network expansions constructed for the existence proofs, but how much better? The authors allude to performance as a function of trainable parameter count scaling similarly to fully trained networks, and thus only needing quadratic scaling in layer width, but they only evidence this explicitly with comparisons to mask learning networks, which in my view are also less expressive than standard networks (for fixed number of parameters). I would like to see a more detailed investigation of this question. For example, could the authors extrapolate from the MNIST experiments (Fig. 1a) whether the required scaling is indeed quadratic? I imagine this scaling would also depend significantly on the task difficulty and the frozen weight initialization.\\n\\nOverall, the tasks the authors used to demonstrate efficacy of bias-only learning seemed restrictively simple, by the standards of both the machine learning and computational neuroscience literature. In particular, for the RNN experiments, only simple 1D pattern generation tasks were considered. I would be interested in seeing how biased-trained RNNs perform on simple \\\"cognitive\\\" tasks often used to assess task-trained RNNs in the computational neuroscience literature (e.g., interval timing, delayed match-to-sample). 
For example, I imagine that any task that requires the construction of many stable fixed points could be quite difficult for bias-learning RNNs, and might require prohibitive scaling of network size compared to fully-trained counterparts (e.g., N-bit flip flop).\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response\", \"comment\": [\"I read the rebuttal and updated PDF.\", \"Regarding my comments:\", \"The log scale in figD.2 is easier to read. Thanks\", \"Figure 2 is also easier to read. Thanks.\", \"Correlation between training runs \\u2013 this is what i meant. A baseline to know how different is different.\", \"Task-selective clusters. What I meant is quantifying the fully trained case (as in Yang et al 2019, which is cited in that section). Training all tasks simultaneously \\u2013 just to see whether the degree of task selectivity is similar or not.\", \"Regarding the new reaching task \\u2013 do you have any baseline of weight training? Also - if I understood correctly, the output of the network after reaching the target is zero. Therefore, it is not clear whether there are distinct fixed points of the dynamics.\"]}", "{\"comment\": \"Thank you, we are very grateful for your attentive reviewing of our manuscript!\", \"to_address_your_bottom_three_bullet_points\": [\"On correlation: perfect, please let us know if there is anything else we can clarify with regards to these statistics.\", \"On task-selectivity: thank you very much for the clarification. Our objective with task selectivity was to provide a characterization of this phenomenon for bias learning rather than a comparison across different learning paradigms. While this is an intriguing direction, we think a detailed comparison of neural representations for fully-trained vs bias-trained networks is a little outside the scope of this work. 
Regardless, we are setting up a task-selectivity experiment for a fully-trained MLP and will post it here if it is ready before the end of the rebuttal period.\", \"On the reaching task: to address this comment we ran the same experiments for a fully-trained RNN. We observed qualitatively similar results and have included them in the appendix (see Fig.D4); please let us know if you have any clarifying questions. Regarding the fixed point question, yes, you are absolutely right\\u2013the dynamics observed could be due to something else, for example dynamics that move into the nullspace of the output matrix. For this reason we didn\\u2019t reference fixed-points specifically in the updated manuscript. We did mention \\u201cfixed point-like dynamics\\u201d in another reviewer response, which we have subsequently deleted given the ambiguity that you bring up.\", \"Thank you again for the detailed and valuable feedback which, we believe, has improved the manuscript. Please let us know if there is any other information we can provide that might help you in judging potential changes in your rating of our paper.\"]}", "{\"title\": \"Author Comment Regarding Questions\", \"comment\": [\"# Questions\", \"Yes, thank you. This is now fixed.\", \"Thank you for the detailed question! Yes, our theory yields the scaling bound in the appendix but we know that this bound is very weak\\u2013significantly under-cut by the network sizes observed in numerical results (see Remark on lines 1080-1086)\\u2013and therefore not very informative. We thus view our theory results not as a statement about scaling but as a statement of existence: for some sufficiently large, but finite, layer width one can approximate the desired function with bias learning. We leave the interesting question of results on the scaling of bias learning for the future. 
We also note that Malach et al.\\u2019s results are for mask learning\\u2013which may or may not provide useful insight for bias learning since, for activations like the ReLU, bias learning is more flexible than mask learning (bias learning can mask units but also translate the activation\\u2019s zero point).\", \"We have changed our feed-forward network scaling figures to log-log plots to try to remedy this; please let us know if this helps.\", \"Indeed, bias learning does not catch up to fully trained networks for the layers tested. We believe this could be due to two things (we have included mention of these in the main body of the manuscript (lines 263-266)):\", \"Bias learning requiring a very large layer width, as suggested\", \"Standard hyperparameter formatting being better suited for fully trained networks than bias networks\", \"No, for each cluster the correlation was computed between:\", \"Bias for each task/unit pair\", \"Within task variance for each task/unit pair\", \"Thank you for the feedback! Based on the numerical experiments we believe the theory overestimates the layer widths, but that the intuition about biases shutting off units is partially correct. Regarding the second question: the magnitude of a bias is not in itself a good measure of whether a neuron is shut off, as a bias may only need to be small and negative if the input weights to the given unit are sufficiently small. Instead, we have looked at task variance as a measure of whether units are \\u2018off\\u2019. We observe that bias learning leads to many (roughly 3000) units having a low enough task variance to not significantly contribute to performance, demonstrating many units that are functionally shut off (Supplementary Figure D3.A). We have re-worked figure 2 to better compare these \\u2018functionally off\\u2019 units with those in mask learning. Please let us know if this answers your questions.\", \"Could the reviewer elaborate on what is meant here? 
If this helps, we have also tested the correlation between two different training runs of a mask-learned network on the same weights, and observe that it is almost precisely 1 (0.99 with an almost vanishing standard deviation over 5 samples).\", \"We have added smaller layer sizes for the fully-trained model accordingly; please see the new figure for section 3.3.2 (section 3.4 in the new manuscript).\", \"We thank the reviewer for this astute observation; indeed, we realize that this is not a meaningful test of the network and have reworked the figure accordingly.\", \"Thank you for noting the lack of a fully-trained benchmark here. We have added it to the figure and observe that the performance of the fully-trained network degrades similarly to the bias-learned network.\", \"We have expanded on the discussion of scaling in the theory (see lines 180-183) and discussion sections (see lines 511-519). Thank you for the feedback; please let us know if this addresses your concern.\", \"We thank the reviewer for the intriguing idea. To calculate task selectivity for fully trained networks one would need to train a single set of weights for all 8 tasks, so that neurons can be identified and compared across tasks. The difficulty here is that one will quickly encounter catastrophic forgetting if one trains a single set of weights. Overcoming catastrophic forgetting is an on-going research topic and, thus, we believe beyond the current scope of the paper. However, such a comparison would be a fantastic future direction.\"]}", "{\"comment\": \"We are very grateful to the reviewer for the care put into reviewing our manuscript--your feedback has been invaluable in crafting a stronger paper. Please don't hesitate to let us know if there is any more information we can provide that might lead you to further increase your support of our work. 
Thank you so much for all the insight!\"}", "{\"title\": \"Author Comment Regarding Weaknesses\", \"comment\": \"We respond point-wise to questions and comments, below.\\n\\n# Weaknesses\", \"we_split_this_response_into_responses_to_the_two_main_sub_comments_in_the_weakness_section\": [\"We are grateful for the precise feedback. You are correct that the proof uses an approach that leads to a scaling that is exponential in $n$, given the iid sampling of units. However, as we mention below (see answer to question 2), this scaling result is a very very loose estimate of the worst-case number of units needed to solve a task (as elaborated on in Remark on lines 1080-1086), and therefore not informative.\", \"Regarding the Malach et al. paper: it provides a comparison between random feature networks and mask learning. Given that, for appropriate activation functions, mask learning is simply a particular parameterization of bias learning, the scaling results in Malach et al. could be interpreted as a worst case scaling for bias learning. We have added mention of this now in the manuscript (lines 515-517). A deeper quantification of the scaling properties of bias learning\\u2013whether the added flexibility of bias learning leads to better scaling than mask learning\\u2013is beyond the scope of this paper as it would require a broad array of numerical experiments or some novel theory work to properly test. However, we believe the results in our paper are relevant regardless of whether or not bias learning is ultimately found to scale better than mask learning. 
The reasoning is twofold: first, our empirical results show that bias learning can solve neuroscience-relevant problems with a reasonable hidden layer size, thus bias learning should not immediately be discounted as a potential learning strategy employed by the brain, or in hardware constrained settings where one may want to use a single set of weights for multiple different tasks; second, we also believe that this work provides a useful starting point, and theoretical grounding, for old and new bias-learning approaches. For example, methods, like bias fine-tuning, that could use some amount of structure in the weights intermediate between fully random (as in our results) and fully trained. Such weights would likely improve scaling, and having the expressivity results in our paper provides a theoretical backbone for this research direction.\"]}" ] }
5xmXUwDxep
Manifold Constraint Reduces Exposure Bias in Accelerated Diffusion Sampling
[ "Yuzhe YAO", "Jun Chen", "Zeyi Huang", "Haonan Lin", "Mengmeng Wang", "Guang Dai", "Jingdong Wang" ]
Diffusion models have demonstrated significant potential for generating high-quality images, audio, and videos. However, their iterative inference process entails substantial computational costs, limiting practical applications. Recently, researchers have introduced accelerated sampling methods that enable diffusion models to generate samples with far fewer timesteps than those used during training. Nonetheless, as the number of sampling steps decreases, the prediction errors significantly degrade the quality of generated outputs. Additionally, the exposure bias in diffusion models further amplifies these errors. To address these challenges, we leverage a manifold hypothesis to explore the exposure bias problem in depth. Based on this geometric perspective, we propose a manifold constraint that effectively reduces exposure bias during accelerated sampling of diffusion models. Notably, our method involves no additional training and requires only minimal hyperparameter tuning. Extensive experiments demonstrate the effectiveness of our approach, achieving a FID score of 15.60 with 10-step SDXL on MS-COCO, surpassing the baseline by a reduction of 2.57 in FID.
[ "Diffusion Models", "Exposure Bias" ]
Accept (Poster)
https://openreview.net/pdf?id=5xmXUwDxep
https://openreview.net/forum?id=5xmXUwDxep
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zEnxK5TA7C", "z2tm1MNxLu", "yAAUuvdZSn", "xv84t7FQkD", "vAGE5919El", "srAiciwPVT", "sCZiWZi8V4", "s5XNVOBBKL", "oSkfzreX7B", "oBaefAOu28", "lhbPhWh24e", "lF7taQpOxO", "kV9jipJkZJ", "jK6G01OdPX", "hfol0z0Wh4", "hbwg611YzP", "hSXjlKMiRf", "cpTtvntz0X", "aVrvVJo9r9", "YZnQ66GpeV", "Wzrjl8yRW7", "WWU0OLjS8d", "Td8DVr2Scm", "SfInZGKEke", "QYPq0OqOt2", "NN1gMr0aEa", "HGg5KlJIAz", "BG2yFhjHoA", "AuiuBC7q7N", "Ao3L9bT205", "9lNhhrlzpj", "8QnGj3bQkt", "6CMTy9Obpp", "3lTUF8H4mr", "3a7Obpgv1M", "0RG6y0o0mF", "0143flCOrH" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523457541, 1732464233803, 1730020209097, 1730648921822, 1732469281011, 1734568405626, 1730558269186, 1730616399452, 1732472930484, 1732466752126, 1732465385312, 1730702577384, 1732612960928, 1732515984503, 1732794551952, 1732476442610, 1732972865707, 1732697916973, 1732463473311, 1733122463385, 1733102376331, 1733115099271, 1732508328448, 1729844239796, 1732691904985, 1732477251213, 1733119386779, 1732460044492, 1732470730759, 1732473841361, 1732461830340, 1733111879695, 1732794857732, 1732794978464, 1732467583359, 1732794630919, 1732613375631 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1547/Reviewer_Tp1G" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_JP98" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_Tp1G" ], [ "ICLR.cc/2025/Conference/Submission1547/Area_Chair_cJkf" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_fuNA" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_oFBR" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_rs44" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_oFBR" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_6dod" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_fuNA" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_6dod" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_6dod" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_JP98" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_Tp1G" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Reviewer_6dod" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ], [ "ICLR.cc/2025/Conference/Submission1547/Authors" ] ], 
"structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer oFBR (part 2)\", \"comment\": \">### **Q4: Explanation of L271 and generalization of observation presented in figure 3**\\n\\n**A5** Thanks for raising this question. In fact, **the objective of Equation 16 is to elucidate the relation between** $d(x_t,1,\\mathcal{M}_t)$ **and the deviation of** $\\text{Var}(x_t)$. **This perspective can provide a manifold view to understand the effectiveness of the previously published work TS-DDIM [2]**, which is based on the assumption of using the pixel variance of a single image to estimate the distribution variance.\\n\\n**We appreciate your insightful observation that a decrease in pixel variance could potentially make $d(x_t, 1, \\mathcal{M}_t)$ larger than $r_t$.** However, we do not claim that the pixel variance of $x_t$ will necessarily decrease with fewer NFEs in all cases, as this would be too strong an assumption, even though such an observation might hold true statistically. Again, thank you for pointing this out.\\n\\n>### **Q5: Saturated examples**\\n\\n**A6** Thanks for mentioning this. We would like to answer this question in detail below:\\n1. As the number of sampling steps decreases, blurriness is observed in many baseline samples. Therefore, it is reasonable for solutions which improve image quality to make the generated results look \\"relatively saturated\\", as is also observed in experiments in previously published work [3].\\n2. We attribute the higher saturation to our method's effectiveness in reducing models' prediction error, **producing images with finer quality (sharper textures and richer structural details), as can be observed in all visualizations presented in the paper.**\\n3. To provide a fair illustration of the saturation level, we have included qualitative results generated with 1,000 steps in Appendix A.13 of the revised manuscript.
**These results demonstrate that our method achieves a saturation level similar to that of high-quality images generated with 1,000 steps**, when compared to the images produced without our method.\\n\\n[2] Li et al. Alleviating Exposure Bias in Diffusion Models through Sampling with Shifted Time Steps. ICLR 2024.\\n[3] Chen et al. On the Trajectory Regularity of ODE-based Diffusion Sampling. ICML 2024.\"}", "{\"summary\": \"From the manifold hypothesis, the paper proposes a method to reduce Exposure Bias by adjusting the variance of $x_t$ at each step to match the variance of $q_t$ made by the forward diffusion process. The paper demonstrates that this approach yields FID gains across various models and datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"They validated the method on various datasets and methods.\\n\\nIt is good to see that the method proposed in this work can be combined with the method from previous literature, i.e., DDIM-MCDO^\\\\dagger.\", \"weaknesses\": \"**1. strong assumption**\\n\\nAssuming that the manifold can be understood solely by considering variance is too strong an assumption. It might be better to tone it down to suggest that this approximation is sufficient for performance improvement.\", \"questions\": \"**Q1**: Using the statistics of the data directly in generation\\u2014could this be seen as an FID \\"hack\\"? For example, I\\u2019m curious how the FID would change if the mean and variance of the generated data w/o MCDO were adjusted to match \\( q_0 \\).\\n\\n**Q2**: Can this method also improve other ODE solvers like DPM-solver++ [1] or PNDM [2]? I know that DDIM performs poorly when the NFE is below 50.
I also want to see the results where NFE is around 50.\\n\\n[1]: DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models ([Arxiv](https://arxiv.org/abs/2211.01095))\\n[2]: Pseudo Numerical Methods for Diffusion Models on Manifolds ([ICLR22](https://arxiv.org/abs/2202.09778))\\n\\n**Q3**: (Minor) When comparing performance, I recommend plotting FID on the y-axis and NFE on the x-axis for clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper proposes a method for improving the performance in accelerated diffusion sampling algorithms\", \"The paper identifies the exposure bias in accelerated diffusion sampling\", \"The method applies manifold constraint for reducing the exposure bias that occurs in accelerated sampling.\", \"The paper presents evaluations of the methods showing improvements over the baseline.\", \"A discussion on the geometric view of the exposure bias is presented.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written\", \"The method is evaluated on multiple diffusion models trained on different datasets\", \"The method is simple yet effective. 
It does not require any further training and the additional computations are marginal.\", \"The approach shows an improvement over the baselines in most cases\"], \"weaknesses\": [\"The derivation of section 4.2 is relatively weak, many assumptions and loose steps need to be either refined or omitted.\", \"In section 4.2, you assume that $E[\\\\epsilon_\\\\theta^t]=0$, but this is not always the case.\", \"In Equation (12) the authors utilize the fact that $\\\\frac{1}{n}\\\\sum_{i=1}^n \\\\hat{x}_i \\\\approx 0$, which does not always hold.\", \"Equation (15) does not hold, an expectation value is required for it to be true\", \"The evaluations include only comparison to DDIM, even though there are many accelerated samplers that achieve much better results [1,2,3], adding them to the tables is very important for evaluating the method.\", \"[1] GENIE: Higher-Order Denoising Diffusion Solvers\", \"[2] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps\", \"[3] DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models.\"], \"questions\": [\"Figure 3: the x-axis title is not clear, what do you mean by denoising steps if the sampling steps are given in the legend? and in general the figure needs to be explained properly\", \"In section 4.2, you assume that $E[\\\\epsilon_\\\\theta^t]=0$, but this is not always the case.\", \"Equation 15 does not make any sense without expectation value.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your hard work. 
Responding to six rebuttals must have been very challenging; I appreciate it.\", \"i_had_two_concerns\": \"one was the FID hack, and the other was the strong assumption.\\n\\n---\\n\\n### FID hack \\n\\n> Since the FID score is calculated by measuring the distributional distance between two sets of Inception-v3 features, it fundamentally differs from the pixel variance or L2-norm of an image.\\n\\nI know your point. However, I am still curious. In my experience, the FID score is greatly influenced by the color distribution of images. In the ablation study, you applied MCDO from $T$ down to $t$, but couldn't my experiment be considered as applying MCDO only at $t = 1$? I didn't think it was a baseless experiment. If I'm wrong, please correct me. \\n\\n> In addition to the improvements in FID score, our approach consistently achieves better results across other metrics, such as CLIP score, IS score, and sFID.\\n\\nI agree.\\n\\n> We agree with your opinion that using the statistics collected from the training data could potentially lead to even better results.\\n\\nI had misunderstood the method. I'm sorry, and thank you for clarifying.\\n\\n--- \\n\\n### Strong Assumption\\n\\n> Firstly, we introduce manifold assumptions extended or borrowed from previous works [1][2].\\n\\n**I kindly disagree with this statement because [1] and [2] do not use mean and variance information to approximate the manifold**. MCG utilizes the property that the input gradient of the diffusion model $\\\\nabla_{x_t} \\\\epsilon(\\\\cdot)$ is aligned with the manifold. This is also the case in DPS [4]. MPGD approximates the manifold using a VAE. 
That is, while they begin their theoretical development by asserting from the Gaussian annulus theorem that $x_t$ has a low-dimensional manifold structure, they do not claim that this can be approximated using mean and variance.\n\nPersonally, the reason I cannot readily give a positive evaluation of this work is that **I find it hard to accept that resolving exposure bias through mean and variance should be considered a Manifold Constraint**. Generally, the tangent vectors of the latent space manifold have clean, interpretable signals, as seen in the guidance vectors of DPS and MCG. This is even more evident in [5, Figure 6]. However, since the Gaussian annulus through mean and variance does not possess such manifold structure, the claim that improvements are due to the diffusion model manifold constraint sounds somewhat like overclaiming to me. I question whether the Manifold Constraint is an appropriate mathematical framework for explaining your method.\n\nOr... it could just be me thinking this way. I would not consider this point a crucial one for the evaluation. \n\n[4] Chung et al. Diffusion Posterior Sampling for General Noisy Inverse Problems\n\n[5] Park et al. Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry\n\n---\n\nTo this end, some of my concerns regarding the FID hack have been resolved. However, it\u2019s still difficult to be fully convinced because I haven\u2019t seen results where the mean/variance regularization is applied only at $t=1$ (or $t=0$). As for the strong assumption, I find it difficult to agree with the authors. Therefore, I will maintain my score for now. 
If the FID experiment yields positive results, I am considering raising my score to a 6.\"}", "{\"metareview\": \"This paper proposes a novel method to address exposure bias in accelerated diffusion sampling using a manifold constraint, demonstrating significant improvements in FID and CLIP scores across various datasets and diffusion models. Reviewers praised the method\\u2019s simplicity, training-free implementation, and consistent performance gains, particularly with low NFE scenarios. However, some concerns were raised regarding the theoretical assumptions, limited comparisons with high-order solvers, and occasional visual artifacts, suggesting further refinements in the theoretical framework and broader empirical evaluations would enhance its impact.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about the strong assumptions in the theoretical framework, limited evaluations with high-order solvers, and the potential for FID improvements being overly influenced by color distribution adjustments. The authors addressed these by conducting additional experiments with DPM-Solver++, evaluating under varying CFG scales, and clarifying the theoretical basis, providing empirical evidence of their method's robustness across metrics like FD and KD. They also included further visualizations and ablation studies to showcase qualitative and quantitative improvements, resolving most concerns and demonstrating the method\\u2019s generalizability and practicality.\"}", "{\"summary\": \"This paper presents an approach to enhance the efficiency and accuracy of diffusion models by addressing the issue of exposure bias in accelerated sampling. Leveraging a manifold hypothesis, the authors introduce a manifold constraint that reduces error accumulation during sampling without requiring additional training or extensive tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The use of the manifold constraint is an interesting idea, which addresses exposure bias without adding additional training costs during sampling.\n2. The use of well-chosen visualizations enhances the readability of the method section, conveying key information and clarifying the approach.\n3. The paper is technically clear and well-organized, with the proposed method thoroughly explained.", \"weaknesses\": \"1. The term \\\"denoising observations\\\" requires a clear and precise definition, as its current use lacks specificity. A more rigorous description would help readers understand it and would improve the technical clarity of the paper.\\n2. The pre-computation process still requires a full denoising sequence (e.g., 1000 steps), which incurs substantial computational cost, especially when applying the proposed method to new datasets or domains. It is suggested that potential strategies for reducing this computational cost be discussed or that an analysis of the trade-offs between the number of steps in pre-computation and the method's performance be provided.\\n3. The number of samples and their diversity will influence the resulting approximation $v_t$. However, the experiments simply set the sample number to 20 without discussing the diversity of prompts or other characteristics of the samples. Including these details would be valuable for readers to have a better understanding, and would provide insights for the community. 
It is recommended to conduct an ablation study on the impact of sample number and diversity on the performance of the proposed method, or to provide more details on how you selected the 20 samples used in the experiments.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper views the issue of quality degradation during accelerated diffusion model inference from the perspective of exposure bias. The authors point out that exposure bias is an important factor contributing to the loss of image quality when reducing inference steps. The noisy predictions drift toward inaccurate manifolds, and thus the errors are accumulated and amplified. A manifold constraint is proposed to mitigate this bias and thus lead to better image quality when significantly reducing inference steps.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is well supported by a series of proofs with some assumptions.\\n2. The quantitative results look very promising and largely outperform the alternative. E.g., in Table 3, MCDO with 4 steps has better results than DDIM with 5 steps.\\n3. The method is well motivated, and the writing is logically reasonable and flows well.\", \"weaknesses\": \"1. Assumption 3 is a very important part in deriving the manifold constraint. I think it is too strong. I understand it's motivated by Equation (17); however, \\\\hat{x}_0 could have a different distribution from x_0, which is inaccessible during inference. Could you elaborate more?\\n2. There are some details not explained well for some key equations/explanations. I added those in the questions below.\", \"questions\": \"1. In Table 4, why do fewer steps have lower FID?\\n2. Can you add more details on how to get Equations 11 and 16 in the appendix?\\n3. 
I didn\u2019t understand why \\epsilon_t is equal to \\sqrt{n} when n is large in L261? Can you explain?\\n4. Can you elaborate on L271? In my understanding, in Fig. 3, var(x_t) decreases faster when reducing steps, thus potentially making d(.) larger than r_t in later steps. But Figure 3 is only on one sample; how do you generalize this observation?\\n5. The qualitative examples look a bit over-saturated; it would be helpful if you could also show oracle results (e.g., 1000 steps) on the side for comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Tp1G (part 1)\", \"comment\": \"We greatly appreciate your recognition of our method\\u2019s compatibility with previous work. Below, we provide answers to your concerns.\\n\\n>### **W1: Assumption in variance-manifold relation**\\n\\n**A1** Thank you for your suggestion. We agree that it is not sufficient to fully understand the data manifold with the variance concept alone. 
However, we would like to clarify that **we do not claim that \"manifold can be understood solely by considering variance.\"** To better explain our method, we summarize the main ideas of our paper as follows:\\n\\n1. Firstly, we introduce manifold assumptions extended or borrowed from previous works [1] [2].\\n2. Then, we establish a connection between the data-to-manifold distance and data pixel variance, by assuming: $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}\\\\_\\\\theta^t]=0, \\\\mathbb{E}[\\\\boldsymbol{x}\\\\_t]=0$. These assumptions are justified as follows:\\n 1. The noise $\\\\boldsymbol{\\\\epsilon}\\\\_t$ added to clean data during the training phase follows a normal distribution: $\\\\boldsymbol{\\\\epsilon}\\\\_t \\\\sim \\\\mathcal{N}(\\\\mathbf{0}, \\\\boldsymbol{I})$, which implies $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}\\\\_t]=0$. Consequently, assuming $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}^t\\\\_\\\\theta]=0$ is reasonable for well-trained diffusion models.\\n 2. 
The predicted noisy data at timestep $t-1$ can be expressed as $\\\\hat{x}\\_{t-1} = \\\\sqrt{\\\\bar{\\\\alpha}\\_{t-1}} \\\\frac{x\\_t - \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_t} \\\\, \\\\boldsymbol{\\\\epsilon}^{(t)}\\_\\\\theta\\\\left(x\\_t\\\\right)}{\\\\sqrt{\\\\bar{\\\\alpha}\\_t}} + \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_{t-1} - \\\\sigma\\_t^2} \\\\, \\\\boldsymbol{\\\\epsilon}\\_\\\\theta^{(t)}\\\\left(\\\\mathbf{x}\\_t\\\\right) + \\\\sigma\\_t \\\\boldsymbol{\\\\epsilon}'\\_t,\\\\quad \\\\boldsymbol{\\\\epsilon}'\\_t \\\\sim \\\\mathcal{N}\\\\left(\\\\mathbf{0}, \\\\mathbf{I}\\\\right).$ Then, as long as $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\boldsymbol{x}\\_{t\\_i}] =0$, one has $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\boldsymbol{x}\\_{{t-1}\\_i}] =0$. Considering that the initial noise is sampled from a normal distribution: $\\\\boldsymbol{x}\\_T \\\\sim \\\\mathcal{N}(\\\\mathbf{0},\\\\boldsymbol{I})$, we have $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\hat{\\\\boldsymbol{x}}\\\\_{t-1}]=0, \\\\forall t \\\\in [1, T]$.\\n3. Leveraging the pixel variance-manifold relation, we achieve the following:\\n 1. provide a new perspective on the effectiveness of prior work [3] from the viewpoint of the data manifold.\\n 2. introduce a training-free and tuning-free method, whose effectiveness is demonstrated across various models and datasets.\\n\\nWe hope this explanation addresses your concerns and clarifies any potential misunderstandings. 
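The zero-mean reasoning above is easy to check numerically. Below is a minimal sketch (fresh Gaussian noise stands in for the model's noise prediction, and the mixing coefficients are illustrative rather than a real noise schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12288                              # latent dimension from the LDM-4 example
x_T = rng.standard_normal((256, n))    # initial noise x_T ~ N(0, I), batch of 256
eps = rng.standard_normal((256, n))    # zero-mean stand-in for the noise prediction

# A fixed linear combination of zero-mean terms, as in the denoising update,
# keeps the per-sample pixel mean near zero (fluctuations are O(1/sqrt(n))).
a, b = 0.7, 0.5
x_next = a * x_T + b * eps
print(np.abs(x_next.mean(axis=1)).max())
```

With $n=12288$, the largest per-sample pixel mean is on the order of $10^{-2}$, consistent with treating the zero-mean assumption as introducing only negligible error.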
Please do not hesitate to let us know if further clarification or additional details are required.\\n\\n>### **Q1: \\\"FID Hack\\\" & using $q_0$ to Implement MCDO**\\n\\n**A2** We do not consider the proposed method a \\\"hack\\\" of the FID score. We explain our approach as follows:\\n\\n1. The primary objective of our method is to improve the model's performance in low-NFE situations. By guiding the deviated data toward their manifold, we succeed in reducing the exposure bias during generation. To achieve this, **we leverage pixel variance estimates collected from the generation process.**\\n2. **Since the FID score is calculated by measuring the distributional distance between two sets of Inception-v3 features, it fundamentally differs from the pixel variance or $L_2$-norm of an image.** For instance, modifying the pixel variance of an image does not necessarily improve its quality.\\n3. **In addition to the improvements in FID score, our approach consistently achieves better results across other metrics, such as CLIP score, IS score, and sFID.** This suggests that **the effectiveness of our method stems from its ability to reduce the prediction errors made by the models, rather than manipulating any metric.** \\n\\n**We agree with your opinion that using the statistics collected from the training data, $q_0$, could potentially lead to even better results**. In this case, the model would have access to the real training data and could directly modify the generated data distribution to \\\"hack\\\" FID without improving its prediction accuracy.\\n\\n[1] Chung et al. Improving diffusion models for inverse problems using manifold constraints. NeurIPS 2022.\\n\\n[2] He et al. Manifold preserving guided diffusion. ICML 2024.\\n\\n[3] Li et al. Alleviating exposure bias in diffusion models through sampling with shifted time steps. 
ICLR 2024.\"}", "{\"title\": \"Response to Reviewer fuNA\", \"comment\": \"We thank reviewer fuNA for the acknowledgement of our contributions and the helpful comments. Below, we provide a point-by-point response to all comments.\\n\\n>### **W1: Explanation of denoising observation**\\n\\n**A1** We appreciate that you pointed this out. In this paper, we borrow the term *denoising observation* from the DDIM paper [1], which proposes the widely used accelerated sampling technique.\\n\\n>### **W2: Discussion on the Pre-computation of the Full Denoising Sequence**\\n\\n**A2** Thank you for highlighting the trade-off between the number of steps in pre-computation and the performance of the method. In the paper, we use a long denoising sequence (1,000 steps for SDXL experiments on the MS-COCO dataset) to better capture the statistics required by the proposed method. However, **denoising trajectories with fewer timesteps are also feasible**. For instance, in experiments on the CelebA-HQ dataset, we collect statistics using denoising sequences of 500 timesteps, and **the proposed method still demonstrates noticeable improvement with statistics derived from a shorter sampling trajectory**.\\n\\nNevertheless, we believe that, in general, increasing the number of sampling steps (and thus reducing the sampling intervals) leads to more accurate estimation of the pixel variance of $\\\\hat{x}_0^{(t)}$.\\n\\n>### **W3: Diversity of Reference Samples and Details in the Collection**\\n\\n**A3** To ensure a sufficient level of diversity in the samples collected for $v_t$ estimation: \\n1. **For text-to-image generation with MCDO, we randomly select 20 distinct prompts from the dataset** to guide the generation of reference samples (one sample per prompt). \\n2. **In class-conditional generation on ImageNet, 64 randomly sampled class IDs were used** to guide the generation of 64 images (one sample per class). 
\\n\\nWe appreciate your insights into the trade-offs between performance and the number of reference samples. For ease of implementation, we use $S=20$ for text-to-image generation with SDXL, and $S=64$ for the other experiments. **While we agree that the $v_t$ estimation could potentially benefit from an increased number of reference samples, considering the improvements already achieved by MCDO in these experiments, we believe that the current choice of hyperparameters is practical.** Therefore, we have decided not to further tune them for potentially better quantitative results.\\n\\n[1] Song et al, Denoising Diffusion Implicit Models, ICLR 2021.\"}", "{\"summary\": \"To narrow the gap between the training of the sampling phase of diffusion models, the authors analyze the diffusion processes from the view of the manifold. They propose to compute the statics of all intermediate diffusion variables and calibrate the sampling process based on the computed statics (variance, mean, \\\\etc). Experiments show that the proposed method can reduce the sampling steps while maintain the generation quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method improves sample quality without adding substantial computational overhead.\\n2.Unlike some prior methods, this approach does not require model retraining or intensive hyperparameter tuning.\\n3. The manifold constraint method shows improved performance across various high-resolution datasets, achieving better FID scores with fewer sampling steps.\\n4. The paper provides both theoretical analysis and empirical evidence to support its method, including experiments on multiple tasks like image and video generation.\", \"weaknesses\": \"1. The proposed method needs to be verified on more diffusion schedulers, such as DPMSolver, PNDM.\\n\\n2. 
Some ODE-based diffusion models such as rectified flow and consistency models can reduce the sampling steps to two or even one. The proposed method focuses on accelerating the sampling process but is not compared with these fast-sampling diffusion models. The authors are encouraged to apply their methods to more recent diffusion models (i.e., SD3, FLUX) to show their superiority and generality.\", \"questions\": \"See the weakness above\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer oFBR,\\n\\nThank you very much for your valuable suggestions and for engaging in the discussion.\\n\\nWe greatly appreciate your recognition of the novelty and contributions of our work. We agree that further improvements in this direction could provide additional insights, and we will certainly explore this avenue in future work.\\n\\nOnce again, thank you for your constructive comments and your engagement with our work.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Increase score\", \"comment\": \"I appreciate the prompt answers, which addressed all my questions. I increased my score to 6 based on my understanding of the novelty/contribution. I think this is a valid work; pushing the boundary to extreme cases like single-step generation or comparing with other methods with more steps (similar to Appendix A.13) would better demonstrate the effectiveness of the proposed method. A.13 does show that the saturation level is closer to 1,000 steps, but the numbers in Tables 2-5 are still much worse compared with more steps. Though it is understandable, showing more improvement along this direction would make the contribution stronger. 
Thanks.\"}", "{\"title\": \"Response to further comments (part 1)\", \"comment\": \"Dear reviewer,\\n\\nThank you once again for your thoughtful engagement in the discussion, as well as your valuable feedback. We sincerely appreciate your acknowledgment of our effort. Below, we address each of your concerns in detail.\\n\\n> ### Zero mean assumption\\n\\n**A1**\\nFirst of all, we agree that the variables mentioned\\u2014$\\\\\\\\hat{\\\\\\\\boldsymbol{x}}_t$, $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$, $\\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\theta^{(t)}$, and $\\\\\\\\boldsymbol{x}\\\\_T$\\u2014**do not strictly exhibit zero mean in all cases**. However, in diffusion models, these variables possess certain characteristics that **keep their pixel means within an acceptable range around zero**. As a result, the assumption in our work\\u2014that their pixel mean is approximately zero\\u2014introduces only negligible errors. Given this, we believe that this assumption is viable for some applications.\\n\\n> ### Figs. 4 and 6\\n\\n**A2**\\nThank you for your suggestion regarding the use of additional data to enhance the generalizability of the observed variance-norm relation. In response, we have replaced Figure 4 with a **new curve based on statistics from 6,400 samples, which demonstrates a similar relationship to that observed with 64 samples**. Following thorough discussions among all authors, we have decided to remove Figure 6, as it is not critical for illustrating the core variance-norm relationship.\\n\\n> ### Error analysis for assuming zero pixel mean for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ and $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$\\n\\n**A3**\\nAgain, we agree that more refined assumptions could potentially lead to better results. However, the assumptions we make simplify the relationship between exposure bias in diffusion models and pixel variance, allowing us to propose a straightforward yet effective method. 
\\n\\nIn response to your concern regarding the pixel mean assumption, we provide experimental results below to assess the error introduced by this assumption. Considering that the objective of assuming the zero mean for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ and $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$ is to relate their pixel variance with their L2 norm, we report the relative error of this approximation: $e=\\\\\\\\frac{\\\\\\\\sqrt{n\\\\\\\\text{Var}(\\\\\\\\boldsymbol{x})} - \\\\\\\\|\\\\\\\\boldsymbol{x}\\\\\\\\|}{ \\\\\\\\|\\\\\\\\boldsymbol{x}\\\\\\\\|} \\\\times 100\\\\\\\\%$ in the tables below:\\n\\n| Denoising Timestep | 951 | 901 | 851 | 801 | 751 | 701 | 651 | 601 | 551 | 501 |\\n| ------------------------------------------------------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| Relative error for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$ (%) | 0.0019 | 0.0017 | 0.0010 | 0.0006 | 0.0038 | 0.0096 | 0.0196 | 0.0352 | 0.0585 | 0.0909 |\\n| Relative error for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ (%) | 4.0269 | 3.0110 | 1.8091 | 1.1320 | 0.8772 | 0.8084 | 0.8015 | 0.8252 | 0.8448 | 0.8669 |\\n\\n| Denoising Timestep | 451 | 401 | 351 | 301 | 251 | 201 | 151 | 101 | 51 | 1 |\\n| ------------------------------------------------------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| Relative error for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$ (%) | 0.1337 | 0.1873 | 0.2506 | 0.3217 | 0.3973 | 0.4738 | 0.5472 | 0.6141 | 0.6725 | 0.7250 |\\n| Relative error for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ (%) | 0.8783 | 0.8785 | 0.8670 | 0.8498 | 0.8285 | 0.8049 | 0.7807 | 0.7567 | 0.7321 | 0.7086 |\", \"the_results_demonstrate_that_throughout_the_entire_generation_process\": \"- The **approximation error remains consistently small**, with its value being **below 1%** for most timesteps. 
\n- The maximum relative error **does not exceed 4.5%**. \n\nThis shows that **assuming the zero mean of $\\\\hat{\\\\boldsymbol{x}}\\_0^{(t)}$** and using its pixel variance as an approximation for its L2 norm **introduces minimal error**, which is acceptable for practical applications.\n\nThe evolution of this relative error is also provided in Appendix A.14 of the revised manuscript.\"}", "{\"title\": \"Response to Reviewer 6dod (part 2)\", \"comment\": \">#### **W1.4 Different derivation with/without $\\\\| \\\\boldsymbol{x}_0 \\\\|$**\\n\\n**A4** Thank you for your thorough and thoughtful review. We agree that the method you propose is effective, and we would like to offer further clarification to address some potential misunderstandings regarding our method:\\n\\n1. **We did not claim that it is essential to consider $\\\\boldsymbol{x}_0$.** \\n2. **Our proposal introduces an alternative approach that incorporates $\\\\boldsymbol{x}_0$ into the process.**\\n3. **We view both approaches (with and without $\\\\| \\\\boldsymbol{x}_0 \\\\|$) as practical** (see Sections 4.2 and 4.3). The approach without $\\\\boldsymbol{x}_0$ can be interpreted as the objective of TS-DDIM [1].\\n4. However, on large-resolution datasets (e.g., $1024\\\\times 1024$ for SDXL), we observe that our proposed method, which incorporates $\\\\boldsymbol{x}_0$, leads to improved results.\\n\\nWe hope this clarification addresses your concerns. Please do not hesitate to let us know if further explanation is required.\\n\\n[1] Li et al. Alleviating exposure bias in diffusion models through sampling with shifted time steps. ICLR 2024.\\n\\n>#### **W1.5 Assumption on pixel mean of denoising observation $\\\\boldsymbol{\\\\hat{x}}\\_0$**\\n**A5** Thanks for pointing this out. We would like to explain further.\\n1. 
The diffusion model is trained to minimize $\\\\| \\\\boldsymbol{\\\\epsilon}\\_t - \\\\boldsymbol{\\\\epsilon}\\_\\\\theta^{(t)} \\\\|$, where $\\\\boldsymbol{\\\\epsilon}\\_t \\\\sim \\\\mathcal{N}(\\\\mathbf{0},\\\\boldsymbol{I})$. With a well-trained model, one has: $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}^t\\_\\\\theta] \\\\rightarrow \\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}\\_t] = 0$. For the first timestep in the denoising process ($t=T$), $\\\\boldsymbol{\\\\hat{x}}\\_t \\\\sim \\\\mathcal{N}(\\\\mathbf{0},\\\\boldsymbol{I})$. The expectation of the pixel mean of $\\\\hat{\\\\boldsymbol{x}}\\_0^{(t)}$ is $0$, as it is a linear composition of $\\\\boldsymbol{\\\\hat{x}}\\_t$ and $\\\\boldsymbol{\\\\epsilon}\\_\\\\theta^{(t)}$: $\\\\hat{\\\\boldsymbol{x}}\\_0^{(t)} = \\\\frac{\\\\hat{\\\\boldsymbol{x}}\\_t - \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_t} \\\\, \\\\boldsymbol{\\\\epsilon}^{(t)}\\_\\\\theta\\\\left(\\\\boldsymbol{x}\\_t\\\\right)}{\\\\sqrt{\\\\bar{\\\\alpha}\\_t}}$.\\n2. Considering that the noisy data prediction for timestep $t-1$ can be expressed as $\\\\hat{\\\\boldsymbol{x}}\\_{t-1} = \\\\sqrt{\\\\bar{\\\\alpha}\\_{t-1}} \\\\hat{\\\\boldsymbol{x}}\\_0^{(t)} + \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_{t-1} - \\\\sigma\\_t^2} \\\\, \\\\boldsymbol{\\\\epsilon}\\_\\\\theta^{(t)}\\\\left(\\\\mathbf{x}\\_t\\\\right) + \\\\sigma\\_t \\\\boldsymbol{\\\\epsilon}'\\_t,\\\\quad \\\\boldsymbol{\\\\epsilon}'\\_t \\\\sim \\\\mathcal{N}\\\\left(\\\\mathbf{0}, \\\\mathbf{I}\\\\right)$, we have: $\\\\mathbb{E}[\\\\boldsymbol{\\\\hat{x}}\\_{t-1}] = 0$. 
Then, according to the definition of denoising observation, the expectation of the pixel mean of the denoising observation at timestep $t-1$ satisfies: $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^{n}\\\\hat{\\\\boldsymbol{x}}\\_0^{(t-1)}] = 0$. \\n\\n>### **W3: Visualization with photorealistic and obvious color shift**\\n\\n**A6** We appreciate your observation and agree that further refinement of the correction strategy could potentially improve the sample quality. We acknowledge that some aspects related to the visualization presented in Figures 7 and 12 require clarification and would like to address your concerns in detail:\\n\\n1. **Higher saturation alone does not equate to higher image quality, and it should not be interpreted as a flaw unique to our method.** While some color shifts (compared to the baseline, which sometimes generates blurred and unrecognizable images) are noticeable, the higher saturation arises from our method's effectiveness in reducing the models' prediction errors, therefore **producing images with finer quality (sharper textures and richer structural details), as can be observed in all visualizations presented in the paper.** \\n2. As the number of sampling steps decreases, blurriness is observed in many baseline samples. As a consequence, it is reasonable that solutions which improve image quality make the generated results look \\\"relatively saturated\\\", which is observed in many previously published works [2][3][4]. \\n3. **In Appendix A.13 of the revised manuscript, we include the visualization results generated using 1,000 steps**, which we believe will serve as a fair reference.\\n\\n[2] Chen et al. On the Trajectory Regularity of ODE-based Diffusion Sampling, ICML 2024.\\n[3] Si et al. FreeU: Free Lunch in Diffusion U-Net. CVPR 2024 Oral.\\n[4] Lin et al. Common diffusion noise schedules and sample steps are flawed. WACV 2024.\"}", "{\"comment\": \"Thanks for the comprehensive explanations and clarifications. 
I highly appreciate the great efforts and dedication of the authors. I think all my concerns except for W1.4 are well addressed.\\n\\nI am willing to raise my score if W1.4 is also clearly clarified and the authors update both the analysis and the reasonability of the zero-mean assumption in the revised version.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your thoughtful feedback and for engaging in the discussion to improve our work. We sincerely appreciate your recognition of our method\\u2019s simplicity and effectiveness, as well as your valuable suggestions for further improvement. We will carefully consider these recommendations in the revised version.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Response to Reviewer oFBR (part 1)\", \"comment\": \"Thank you for your insightful feedback. Our answers to each question are shown below:\\n\\n>### **W1 Explanation of Assumption 3**\\n\\n**A1** Thanks for raising this question about Assumption 3. We agree that for most timesteps, especially those in the early denoising stage, $\\\\hat{x}_0$ has a very different distribution from that of $x_0$. However, **this difference is a fundamental aspect of our method\\u2019s design**, which is detailed in Algorithm 1. \\n\\nThe objective of MCDO is to dynamically adjust $\\\\hat{x}_0$, bringing it closer to its corresponding distribution based on the current timestep $t$, rather than adhering to the fixed distribution of $x_0$. For example, if the distribution of $x_0$ were used to correct that of $\\\\hat{x}_0^{(t)}$ at the initial timestep ($t=T$), the entire process would be adversely affected.\\n\\n>### **Q1 Fewer steps have lower FID on ImageNet**\\n\\n**A2** Thank you for pointing this out. We also observed this phenomenon during our experiments and, as a result, re-evaluated all methods using a new environment. 
The outcomes of this re-evaluation were consistent with the results presented in the paper.\\n\\nAs shown in the visualization of the generated images provided in Appendix A.8, **while fewer steps yield lower FID values, the corresponding image quality is noticeably worse, which aligns with general intuition.**\\n\\nSince FID is calculated based on the distance between estimated distributions of two Inception-v3 features, it may not always perfectly reflect image quality in all scenarios. We believe this could explain the observed behavior on this dataset.\\n\\n>### **Q2 Explanation of Equation 11 and Equation 16**\\n\\n**A3** Equation 11 is based on the triangle inequality. Equation 16 is derived by plugging $\\| \\sqrt{\\bar{\\alpha}_t} x_0 + \\sqrt{1-\\bar{\\alpha}_t} \\epsilon_t \\| \\approx \\sqrt{\\| \\sqrt{\\bar{\\alpha}_t} x_0 \\|^2 + \\| \\sqrt{1-\\bar{\\alpha}_t} \\boldsymbol{\\epsilon}_t \\|^2}$ into $d(\\hat{x}_t, \\mathcal{M}_t) \\gtrapprox \\left| \\| \\sqrt{\\bar{\\alpha}_t} x_0 + \\sqrt{1-\\bar{\\alpha}_t}\\boldsymbol{\\epsilon}_t \\| - \\sqrt{n\\text{Var}(\\hat{x}_t)} \\right|$. Typos in Equation 16 are corrected in the revised manuscript.\\n\\n>### **Q3 Explanation of approximation in L261**\\n\\n**A4** To better explain L261, we would like to refer you to Equation 14 in L258-259. Equation 14 says that, for random data sampled from an n-dimensional normal distribution: $\\epsilon \\sim \\mathcal{N}(0,1,\\mathbb{R}^n)$, the expectation of its norm satisfies: $|\\\\mathbb{E}[\\\\|\\\\boldsymbol{\\\\epsilon}\\\\|] - \\\\sqrt{n} | \\\\leq \\\\frac{1}{\\\\sqrt{n}}$ [1]. As $n$, the latent space dimension, is usually large in diffusion models (e.g. $n=12288$ for LDM-4 trained on CelebA-HQ), we have: $\\\\frac{1}{\\\\sqrt{n}} \\\\rightarrow 0$. 
Therefore, $| \\\\mathbb{E}[\\\\|\\\\boldsymbol{\\\\epsilon}\\\\|] - \\\\sqrt{n} | \\\\rightarrow 0$. That is to say, with high probability, we have $\\\\|\\\\boldsymbol{\\\\epsilon}\\\\| \\\\approx \\\\sqrt{n}$. \\n\\nAlternatively, with large $n$, one can directly consider this: $\\\\mathbb{E}(\\\\|\\\\boldsymbol{\\\\epsilon}\\\\|^2) = \\\\mathbb{E}(\\\\Sigma\\_{i=0}^{n}\\\\boldsymbol{\\\\epsilon}\\_i^2)= \\\\Sigma\\_{i=0}^{n}\\\\mathbb{E}(\\\\boldsymbol{\\\\epsilon}\\_i^2) = \\\\Sigma\\_{i=0}^{n}\\\\left(\\\\mathbb{E}(\\\\boldsymbol{\\\\epsilon}\\_i)^2+\\\\text{Var}\\\\left(\\\\boldsymbol{\\\\epsilon}\\_i\\\\right)\\\\right)= \\\\Sigma\\_{i=0}^{n}\\\\left(\\\\text{Var}\\\\left(\\\\boldsymbol{\\\\epsilon}\\_i\\\\right)\\\\right) =n$, which leads to $\\\\mathbb{E}[\\\\|\\\\boldsymbol{\\\\epsilon}\\\\|] \\\\approx \\\\sqrt{n}$. \\n\\n[1] Sven-Ake Wegner. Lecture notes on high-dimensional data, 2024.\"}", "{\"comment\": \"We sincerely appreciate your engagement with our work and your insightful comment regarding the differences between the methods developed from Eq. (16) and Eq. (19). In response, we provide additional quantitative results to support our previous explanation.\\n\\nFor clarity, we refer to the Noisy Data Variance Adjustment (NDVA) method as being derived from Eq. 
(16) **with $\\\\boldsymbol{x}_0$ excluded from the lower bound expression**, and the pixel variance of $\\\\hat{\\\\boldsymbol{x}}_t$ is directly adjusted across timesteps.\\n\\n>### Comparison with TS-DDIM\\n\\nBelow, we compare the performance of TS-DDIM[1], NDVA, and MCDO on the CelebA-HQ dataset. Each FID evaluation involves generating 50k images.\\n\\n| Method | 20 Steps | 10 Steps | 5 Steps |\\n| ----------------------------- | ---------------- | ---------------- | --------------- |\\n| DDIM | 10.59 | 21.08 | 54.99 |\\n| TS-DDIM | **8.29** | 15.71 | 50.69 |\\n| DDIM + NDVA | 9.21 | 16.45 | 47.52 |\\n| DDIM + MCDO (ours) | 9.49 | **15.63** | **30.76** |\\n\\nFrom the table, it can be observed that the NDVA method:\\n\\n1. **Outperforms the baseline method**, supporting your perspective and our earlier response: \\n>We view both approaches (with and without $\\\\\\\\|\\\\boldsymbol{x}_0\\\\\\\\|$) as practical.\\n2. **Exhibits performance comparable to TS-DDIM**. It also outperforms MCDO slightly at 20 timesteps.\\n3. **Performs notably worse than MCDO as the number of timesteps decreases**, with a clear gap evident at 10 and 5 steps.\\n\\n>### Results on large resolution dataset\\n\\nWe evaluate NDVA on the MS-COCO dataset using SDXL and DPM-Solver at an image resolution of $1024 \\\\\\\\times 1024$. In this evaluation, we generate 10k images, employing the same settings as in our previous experiments on SDXL (e.g., the number of samples for statistical estimation and $t_{\\\\text{thre}}$ for MCDO).\\n\\n| Method | 10 Steps | 8 Steps | 6 Steps | 5 Steps |\\n| ------------------------ | --------- | --------- | --------- | --------- |\\n| DDIM | 15.10 | 15.57 | 17.05 | 19.16 |\\n| DDIM + NDVA | 15.76 | 15.76 | 17.13 | 19.02 |\\n| DDIM + MCDO (ours) | **14.88** | **15.32** | **16.39** | **18.51** |\\n\\nThe results demonstrate that, on larger resolution datasets, MCDO consistently outperforms NDVA across various low NFE experiments. 
It also supports our earlier statement:\\n> on large resolution dataset (e.g., for SDXL), we observe that our proposed method, which incorporates $\\\\|\\\\boldsymbol{x}_0\\\\|$, leads to improved results.\\n\\nWe hope these experimental results clarify the distinction between the methods (with and without $\\\\|\\\\boldsymbol{x}_0\\\\|$) and adequately address your concerns. We sincerely appreciate your interest in our work and your engagement in the discussion. Thank you again for your time and effort in providing such insightful feedback.\\n\\n\\n[1] Li et al. Alleviating exposure bias in diffusion models through sampling with shifted time steps. ICLR 2024.\"}", "{\"comment\": \"I appreciate the authors' efforts in rebuttal. I decide to keep my positive rating. Thank you.\"}", "{\"comment\": \"I highly appreciate the efforts of the authors in the rebuttal. And below is my feedback.\\n\\n- Figs. 4 and 6 only employ 64 samples, which seems unconvincing. The authors seemed to miss this one.\\n\\n- The clarification in W1.4 is not that convincing. I hope the authors could provide some experimental or theoretical results beyond the current response.\\n\\n- Response in W1.5 is still wrong.\\n 1. First, there is a gap in the original Diffusion Model paper: $\\mathbf{x}_T$ in the forward diffusing process only **approximates** a Gaussian distribution with zero mean, *i.e.*, $\\mathbf{x}_T\\approx\\mathcal{N}(\\alpha_T\\mathbf{x}_0,\\sigma_T^2\\mathbf{I})$. Both theoretically and empirically, $\\alpha_T$ is extremely small but still nonzero. Therefore, the conclusion of zero mean by induction at each $t<T$ is wrong in your rebuttal.\\n 2. Second, if the underlying data follows a distribution with extremely large mean, from my perspective, the zero-mean assumption is unreasonable and may lead to strong error in your pipeline. 
I hope the authors could conduct experiments on toy data to convince me, *e.g.*, following setting of Reviewer Tp1G, the authors could apply your method on a Dirac Delta distribution $\\\\delta(\\\\mathbf{x}_0)$, in which $\\\\mathbf{x}_0$ is extremely far away from the origin. Such distribution has large mean and zero variance, and its score function has analytical form.\"}", "{\"summary\": \"The paper studies the exposure bias of accelerated diffusion model sampling from geometry perspective. The authors extend the previous manifold constraint theory with more detailed description of pixel variance, claiming that both exposure bias and truncation error account for the performance degradation. To this end, the authors propose to pre-calculate the reference pixel variance, which serves as a correction during inference. Such method achieves a training-free and easy-to-implement solution to performance degradation. Comprehensive experiments confirm the efficacy of the proposed algorithm.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well structured and easy to follow. Most of the derivation is clear.\", \"The paper extends the previous manifold constraint strategy with deeper study of the pixel variance and the consequent exposure bias, providing a novel perspective of geometric technique in diffusion models.\", \"The proposed algorithm is overall both efficient and effective, inference time cost barely increases.\", \"Quantitative experiments are convincing and extensive.\"], \"weaknesses\": \"- There seem some theoretical flaws in the draft, harming the soundness:\\n 1. Eq. (15) is wrong, when $x$ and $y$ are orthogonal, one can only deduce that $|x+y|^2=|x|^2+|y|^2$. Besides, what is the definition of $x_0$ and $\\\\epsilon_t$? $\\\\epsilon_t$ and $x_0$ may not be orthogonal if no further assumption are made.\\n 2. I cannot understand the relation between Eq. 
(18) and analytical form of $Var(x^{(t)}_0)$ and $Var(x_t)$, $Var(\\\\cdot)$ is supposed to be the pixel variance as claimed in Eq. (12) which is a scalar.\\n 3. Eq. (16) and Eq. (19) are similar, but the authors conclude differently. If minimizing right hand side in Eq. (19) could lead to the distance reduction, then so could minimizing right hand side in Eq. (16) be. They are all the **lower bounds** of distance of samples to manifold.\\n 4. If the authors insist that nonzero $|x_0|$ affects the derivation, then (1) since $\\\\hat{x_t}\\\\in\\\\mathcal{M}_t$, one can simply choose $x_0=0$ (which is reasonable since the authors have already assumed zero mean in L346), or (2) move the term with $|x_0|$ outside the absolute value using $|a+b-c|\\\\geqslant|b-c|-|a|$, which is similar to the form of Eq. (19). The authors should make further clarification.\\n 5. Why assume zero mean in L346? $\\\\hat{x}_0^{(t)}$ is the denoise observation at timestep $t$, somewhat a data sample with no noise. Then why is the case? The authors could calculate the mean to confirm the reasonability.\\n\\n- Figs. 4 and 6 only employ 64 samples, which seems inconvincing.\\n\\n- Visualization in Fig. 7 fails to be photorealistic with obvious color shift artifact, which is also the case in Fig. 12.\", \"questions\": [\"It is intuitive that applying MCDO with larger NFEs or better sampler will achieve weaker improvements. I am curious about the comparison on better sampler like Heun or DPM-Solver. There is also no discussion about applicability on high-order samplers.\", \"MCDO is proposed for manifold constraint to relieve exposure bias, how will the efficacy vary if different CFG scales are set? 
Larger CFG scale may also lead to severe exposure bias.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your response.\", \"Regarding the assumptions: While I agree that the assumptions are reasonable and work well in practice, my concern lies in the derivation of the analytical bounds. Specifically, omitting certain terms based on their negligible impact in practice may undermine the accuracy of the analysis. For the results to be both correct and accurate, these terms should either be explicitly considered or quantitatively justified.\", \"On the method's improvement: I appreciate that your approach improves DPM-Solver++, although the improvement is marginal compared to the improvement you got using DDIM. This observation supports your claims, and I have increased my score accordingly. However, I recommend standardizing the evaluation to align more closely with the literature (my other suggestions in the review). This would provide a more robust and comparable assessment of your method's effectiveness. Please consider incorporating this suggestion in your revised version.\"]}", "{\"title\": \"Response to Reviewer 6dod (part 3)\", \"comment\": \"We sincerely thank you for your insightful reviews and valuable questions regarding the applicability of MCDO to high-order samplers and its effectiveness under large CFG scales. Below, we present detailed experimental results to address these points.\\n\\n>### **Q1**: Applicability to High-Order Sampler\\n\\n**A7** To evaluate the performance of MCDO with a high-order sampler, we conducted experiments using **DPM-Solver++** on **SDXL** with 5, 6, 10 steps. 60 samples were used for statistics estimation. 
The results, based on 10k generated images, are summarized below: \\n| **Timesteps** | **Method** | **FID** | CLIP Score | **$t\\\\_{thre}$** |\\n| ----------------------- | ----------------------- | --------- | ---------- | -------------- |\\n| 5 | DPM-Solver++(2M) | 19.16 | 31.52 | -- |\\n| 5 | DPM-Solver++(2M) + MCDO | **18.51** | **31.62** | 100 |\\n| 6 | DPM-Solver++(2M) | 17.05 | 31.65 | -- |\\n| 6 | DPM-Solver++(2M) + MCDO | **16.39** | **31.72** | 100 |\\n| 8 | DPM-Solver++(2M) | 15.57 | 31.74 | -- |\\n| 8 | DPM-Solver++(2M) + MCDO | **15.32** | **31.79** | 100 |\\n| 10 | DPM-Solver++(2M) | 15.10 | 31.74 | -- |\\n| 10 | DPM-Solver++(2M) + MCDO | **14.88** | **31.85** | 100 |\\n\\nThe results demonstrate that **the proposed method consistently enhances performance when combined with DPM-Solver++**. While the improvement of MCDO is more pronounced with DDIM compared to DPM-Solver++, this is consistent with the expectation that high-order solvers, such as DPM-Solver++, inherently reduce prediction errors. The results are also presented in Appendix A.10 of the revised manuscript.\\n\\n\\n\\n>### **Q2: Efficacy of MCDO with combined with larger CFG Scales**\\n\\n**A8** To assess the efficacy of MCDO under varying CFG scales, we performed additional experiments using DDIM sampler on SDXL with larger CFG scales (8, 10 ,12) using 10 sampling steps. The number of samples used for statistics estimation is the same as that in the paper. 
The results are summarized below:\\n\\n| **CFG Scale** | **Method** | **FID** | **CLIP Score** | **$t\\\\_{thre}$** |\\n| --------- | --------------- | ------- | ------- | -------------- |\\n| 8 | DDIM | 18.51 | 31.81 |-- |\\n| 8 | DDIM + MCDO | **17.38** | **32.03** | 100 |\\n| 10 | DDIM | 18.99 | 31.82 | -- |\\n| 10 | DDIM + MCDO | **17.49** | **32.13** | 100 |\\n| 12 | DDIM | 20.62 | 31.70 | -- |\\n| 12 | DDIM + MCDO | **17.80** | **32.18** | 100 |\\n\\nThe findings demonstrate that **MCDO significantly improves performance under large CFG scales, achieving a notable reduction in FID.** This suggests that **MCDO effectively alleviates the increased exposure bias introduced by higher CFG scales.** The results are also presented in Appendix A.12 of the revised manuscript.\"}", "{\"comment\": \"Dear reviewer 6dod,\\n\\nThank you for your engagement and for raising your score. We sincerely appreciate your recognition of this work. We also want to thank you for being active and constructive during the whole discussion. In the revised version, we will incorporate your suggestions, including further clarification of the pipeline and additional empirical results supporting the theoretical analysis.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer rs44\", \"comment\": \"Thank you for your insightful feedback and for highlighting the need for comparisons with high-order diffusion samplers and recent diffusion models. We recognize the importance of demonstrating the scalability and general applicability of our proposed method. Below, we address each concern in detail.\\n\\n>### **W1**: Verification on more diffusion schedulers, such as DPM solver\\n\\n**A1** We conducted additional experiments combining our method with the DPM++2M scheduler on the MS-COCO dataset using SDXL. The experiments followed the same settings as in our original submission (image resolution, classifier-free guidance scale). 
Due to limited computational resources and the rebuttal timeline, we evaluated performance using 10,000 generated images. The results, now included in Appendix A.10 of the revised manuscript, are summarized below:\\n\\n| **Number of Timesteps** | **Method** | **FID** | CLIP Score | **$t_{thre}$** |\\n| ----------------------- | ----------------------- | --------- | ---------- | -------------- |\\n| 5 | DPM-Solver++(2M) | 19.16 | 31.52 | -- |\\n| 5 | DPM-Solver++(2M) + MCDO | **18.51** | **31.62** | 100 |\\n| 6 | DPM-Solver++(2M) | 17.05 | 31.65 | -- |\\n| 6 | DPM-Solver++(2M) + MCDO | **16.39** | **31.72** | 100 |\\n| 8 | DPM-Solver++(2M) | 15.57 | 31.74 | -- |\\n| 8 | DPM-Solver++(2M) + MCDO | **15.32** | **31.79** | 100 |\\n| 10 | DPM-Solver++(2M) | 15.10 | 31.74 | -- |\\n| 10 | DPM-Solver++(2M) + MCDO | **14.88** | **31.85** | 100 |\\n\\nThese results indicate that our method **enhances the performance of high-order solvers** like DPM-Solver++. The improvements are slightly smaller than for first-order solvers (e.g. a 2.57 FID decrease for DDIM)\\u2014likely because DPM-Solver++ inherently reduces prediction error and causes less exposure bias at low NFE. \\n\\nNonetheless, the consistent performance gains highlight the versatility of our method as a **plug-and-play module** compatible with a wide range of solvers.\\n\\n>### **W2**: Results on recent diffusion models\\n\\n**A2** We applied our method on **Stable Diffusion 3** (SD3) [1], a recent model based on flow matching. 
The quantitative results, obtained by generating 10,000 images at \\\\(1024 \\\\times 1024\\\\) resolution, are summarized below:\\n\\n| **Model** | **Method** | **Steps** | **FID\\u2193** | **CLIP Score\\u2191** | **$t_{thre}$** |\\n| ------------------ | ----------- | --------- | --------- | --------------- | -------------- |\\n| Stable Diffusion 3 | DDIM | 6 | 33.31 | 30.56 | -- |\\n| Stable Diffusion 3 | DDIM + MCDO | 6 | **27.57** | **31.18** | 0 |\\n| Stable Diffusion 3 | DDIM | 9 | 22.71 | 31.44 | -- |\\n| Stable Diffusion 3 | DDIM + MCDO | 9 | **22.26** | **31.70** | 0 |\\n\\nThe results demonstrate **consistent improvements in both FID and CLIP scores** when integrating our method with SD3. These findings highlight the scalability and compatibility of our approach with cutting-edge diffusion models.\\n\\nThe results are now included in Appendix A.11 of the revised manuscript.\\n\\n[1] Esser et al. Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. ICML 2024 Oral.\"}", "{\"title\": \"Response to Reviewer 6dod (part 1)\", \"comment\": \"Thank you for your insightful comments and questions. Please find our response below:\\n\\n>### **W1: Theoretical details in the paper**\\n\\n>#### **W1.1 Definition of $\\\\boldsymbol{x}\\_0$ and $\\\\boldsymbol{\\\\epsilon}\\_t$**\\n\\n**A1** Thanks for pointing this out. $x_0$ is the clean image sampled from the data distribution, while $\\epsilon_t$ is the noise sampled from the normal distribution. Since $\\epsilon_t$ is independent of $x_0$, with a high enough latent dimension $n$ [1], one has: $\\mathbb{E}[\\| \\boldsymbol{x}_0 + \\boldsymbol{\\epsilon}_t\\|^2 ] = \\mathbb{E}[\\| \\boldsymbol{x}_0\\|^2] + \\mathbb{E}[\\|\\boldsymbol{\\epsilon}_t\\|^2]$. This typo is corrected in the revised manuscript.\\n\\n[1] Sven-Ake Wegner. Lecture notes on high-dimensional data, 2024. 
\\n\\n>#### **W1.2: Relation between the Analytical Forms of $\\\\\\\\boldsymbol{\\\\\\\\hat{x}}_t$ and $\\\\\\\\boldsymbol{\\\\\\\\hat{x}}_0^{(t)}$ and Their Pixel Variance**\\n\\n**A2** Thank you for pointing this out. We would like to provide further clarification. The analytical forms of $\\\\\\\\hat{x}\\\\_0^{(t)}$ and $\\\\\\\\hat{x}\\\\_t$ given in Equation 18 are presented to demonstrate the following:\\n\\n1. During the generation process, the noisy data prediction $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$ contains a component of scaled-down prediction error, denoted as $n\\\\_t \\\\\\\\cdot \\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$, where $n\\\\_t$ is the scale factor. Conversely, the denoising observation $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ incorporates a scaled-up prediction error, expressed as $d\\\\_t \\\\\\\\cdot \\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$, where $d\\\\_t$ is the scale factor.\\n2. The large magnitude of $d\\\\_t$ suggests that **$\\\\\\\\hat{x}\\\\_0^{(t)}$ is predominantly influenced by the amplified noise prediction error term $d\\\\_t \\\\\\\\cdot e\\\\_\\\\\\\\theta^{(t)}$ at most timesteps.** Thus, it is possible to correct the noise estimation error $\\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$ by incorporating information from the denoising observation $\\\\\\\\hat{x}_0^{(t)}$.\\n\\nIn other words, **while the noisy data prediction $\\\\\\\\hat{x}\\\\_t$ primarily reflects information about its ground truth noisy data $\\\\\\\\boldsymbol{x}\\\\_t$, the denoising observation $\\\\\\\\hat{x}\\\\_0^{(t)}$ conveys information about the prediction noise $e_\\\\\\\\theta^{(t)}$. This discrepancy leads to the following results**:\\n\\n1. **Correcting the pixel variance of $\\\\\\\\hat{x}\\\\_t$ using a scale factor will eventually scales the entire term**, $\\\\\\\\boldsymbol{x}\\\\_t + n\\\\_t \\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$. 
Consequently, this **leads to distortion of the $\\\\\\\\boldsymbol{x}\\\\_t$ component in $\\\\\\\\hat{x}\\\\_t$.**\\n2. When correcting the pixel variance of $\\\\\\\\hat{x}\\\\_0^{(t)}$ using the proposed method, one is correcting $\\\\\\\\boldsymbol{x}\\\\_0 + d\\\\_t \\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$. While errors in $\\\\\\\\boldsymbol{e}\\\\_\\\\\\\\theta^{(t)}$ also influence the $\\\\\\\\boldsymbol{x}_0$ term, **it is important to note that by recalculating the noise estimation with the corrected denoising observation (Equation 21)** : $\\\\\\\\tilde{\\\\\\\\boldsymbol{\\\\\\\\epsilon}}^{(t)}\\\\_\\\\\\\\theta = \\\\\\\\frac{x\\\\_{t} - \\\\\\\\sqrt{\\\\\\\\bar{\\\\\\\\alpha}\\\\_{t}} \\\\\\\\tilde{x}\\\\_0^{(t)}}{\\\\\\\\sqrt{1 - \\\\\\\\bar{\\\\\\\\alpha}\\\\_{t}}}$ **the prediction error is reduced while the original $\\\\\\\\boldsymbol{x}\\\\_t$ is preserved.**\\n\\nWe hope this clarification addresses your concerns. Please feel free to reach out if further explanation is needed.\\n\\n\\n>#### **W1.3 Comparison between Equation 16 and Equation 19 regarding their role in reducing data-manifold distance.**\\n\\n**A3:** **We respectfully disagree with your statement that \\\"the authors conclude differently.\\\"** It appears there may be some misunderstanding, and we would like to clarify this point further.\\n\\n1. **As clearly stated in Section 4.2, Equation (16) offers a manifold-based perspective to understand the effectiveness of the previous method, TS-DDIM [1]**, which relies on the assumption of \\\"using pixel variance to estimate the sample distribution variance.\\\" \\n2. We agree with your observation that \\\"minimizing the right-hand side in Eq. (16) could also lead to the reduction of the distance.\\\" As we view this approach as an effective explanation of the timestep-shifting strategy in TS-DDIM [1], from the manifold perspective. 
**We acknowledge the simplicity and effectiveness of their proposed method, which inspired us to explore the concept of pixel variance further, and to leverage Equation 19 to develop the proposed method.**\\n\\n[1] Li et al. Alleviating exposure bias in diffusion models through sampling with shifted time steps. ICLR 2024.\"}", "{\"comment\": \"Thank you for your hard work! I really appreciate it.\\n\\nBy the way, what do you think about the second point, strong assumption? I want to hear your (possibly informal) opinion about this.\"}", "{\"title\": \"Response to Reviewer JP98\", \"comment\": \"Thank you very much for your helpful reviews and feedback; we now reply to your concerns:\\n\\n>### **W1&Q2&Q3: Assumptions on the mean value of noise estimation and noisy data, and refinement for Eq. 15** \\n\\n**A1** Thank you for pointing this out. We explain each as follows.\\n\\n1. In the proposed method, the assumption that the expectation of the pixel sum of the noise estimation and the noisy data is zero **serves as a tool to relate the pixel variance of the noisy data and its deviation to the manifold.**\\n2. The relationship between these two concepts provides a novel insight into the methodology of TS-DDIM, and forms the basis for developing our proposed method.\\n3. **We made these assumptions based on statistical facts listed as follows**:\\n 1. Denoting the noise added to clean data during the training phase as $\\\\boldsymbol{\\\\epsilon}_t$. As it follows the normal distribution: $\\\\boldsymbol{\\\\epsilon}_t \\\\sim \\\\mathcal{N}(\\\\mathbf{0}, \\\\boldsymbol{I})$, we have $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}_t]=0$. Therefore, when diffusion models are well-trained, assuming $\\\\mathbb{E}[\\\\boldsymbol{\\\\epsilon}^{t}\\_\\\\theta]=0$ is reasonable.\\n 2. 
Given that the predicted noisy data of timestep $t-1$ can be expressed as $\\\\hat{x}\\_{t-1} = \\\\sqrt{\\\\bar{\\\\alpha}\\_{t-1}} \\\\frac{x\\_t - \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_t} \\\\, \\\\boldsymbol{\\\\epsilon}^{(t)}\\_\\\\theta\\\\left(x\\_t\\\\right)}{\\\\sqrt{\\\\bar{\\\\alpha}\\_t}} + \\\\sqrt{1 - \\\\bar{\\\\alpha}\\_{t-1} - \\\\sigma\\_t^2} \\\\, \\\\boldsymbol{\\\\epsilon}\\_\\\\theta^{(t)}\\\\left(\\\\mathbf{x}\\_t\\\\right) + \\\\sigma\\_t \\\\boldsymbol{\\\\epsilon}'\\_t,\\\\quad \\\\boldsymbol{\\\\epsilon}'\\_t \\\\sim \\\\mathcal{N}\\\\left(\\\\mathbf{0}, \\\\mathbf{I}\\\\right).$ Then, as long as $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\boldsymbol{x}\\_{t\\_i}] =0$, one has $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\boldsymbol{x}\\_{{t-1}\\_i}] =0$. Considering that the initial noise is sampled from the normal distribution: $\\\\boldsymbol{x}\\_T \\\\sim \\\\mathcal{N}(\\\\mathbf{0},\\\\boldsymbol{I})$, we have $\\\\mathbb{E}[\\\\frac{1}{n}\\\\Sigma\\_{i=0}^n\\\\hat{\\\\boldsymbol{x}}\\_{{t-1}\\_i}]=0, \\\\forall t \\\\in [1, T]$. \\n4. In addition, **extensive experiments demonstrate the effectiveness of our approach across various models and datasets**. Therefore, we believe these assumptions are acceptable.\\n5. Nevertheless, we agree that these assumptions may not hold in every case. In some applications, there might exist cases where the mean of the latent does not approach zero; we leave this as further investigation for future work.\\n6. Thanks for your suggestion to improve Equation 15. 
We correct the problem you mentioned in the revised manuscript.\\n\\n>### **W2: Evaluation on other accelerated samplers**\\n\\n**A2** We appreciate your suggestion to include evaluations with other samplers for a more comprehensive comparison. **Following your feedback, we have incorporated evaluations with the DPM++2M scheduler on the MS-COCO dataset using SDXL**. The experiments followed the same settings as in our original submission. Due to limited computational resources and the rebuttal timeline, we evaluated performance using 10k generated images. The results, now included in Appendix A.10 of the revised manuscript, are summarized below:\\n\\n| **Timestep** | **Method** | **FID** | **CLIP Score** | **$t\\_{thre}$** |\\n| ---------- | ----------| ---------- | ---------- | ----------|\\n| 5 | DPM-Solver++(2M) | 19.16 | 31.52 | -- |\\n| 5 | DPM-Solver++(2M) + MCDO | **18.51** | **31.62** | 100 |\\n| 6 | DPM-Solver++(2M) | 17.05 | 31.65 | -- |\\n| 6 | DPM-Solver++(2M) + MCDO | **16.39** | **31.72** | 100 |\\n| 8 | DPM-Solver++(2M) | 15.57 | 31.74 | -- |\\n| 8 | DPM-Solver++(2M) + MCDO | **15.32** | **31.79** | 100 |\\n| 10 | DPM-Solver++(2M) | 15.10 | 31.74 | -- |\\n| 10 | DPM-Solver++(2M) + MCDO | **14.88** | **31.85** | 100 |\\n\\nThese results indicate that our method **enhances the performance of high-order solvers** like DPM-Solver++. \\n\\n>### **Q1 Explanation of x-axis title in Figure 3** \\n\\n**A3** Thank you for highlighting this point. The timesteps on the x-axis correspond to the timesteps $t$, one of the inputs to the noise estimation network: $\\epsilon_\\theta(x_t,t,y)$. In contrast, the legends represent the number of inference steps used during sampling. Based on your feedback, we have revised and improved Figure 3 in the updated manuscript.\"}", "{\"comment\": \"Thanks for the further clarifications and extensive quantitative results. 
I at present believe the whole story is both theoretically and empirically solid, and the work will encourage future works on manifold constraints.\\n\\nTherefore, I raise my score accordingly. I hope that the authors could revise the draft according to the rebuttal, especially the further clarification of the pipeline and refinement of the theory part.\"}", "{\"title\": \"Response to further comments (part 3)\", \"comment\": \"Thank you for the insightful comments. We believe that the differences in our understanding may stem from the gap between the theoretical foundations of diffusion models and their practical applications. We would like to offer further clarification regarding the concerns raised.\\n\\n> ### In the forward diffusing process, $x\\\\_T$ only **approximates** a Gaussian distribution with zero mean, *i.e.*, $x\\\\_T \\\\\\\\approx \\\\\\\\mathcal{N}(\\\\\\\\alpha\\\\_T x\\\\_0, \\\\\\\\sigma\\\\_T^2 I)$. Both theoretically and empirically, $\\\\\\\\alpha\\\\_T$ is extremely small but still nonzero.\\n\\n**A5**\\n1. We agree with your point regarding the theoretical form of the mean of $x\\\\_T$ during training. \\n2. At the same time, we would like to mention **the gap between the theoretical and the practical aspects of diffusion models**. In practice, during the sampling process of diffusion models, the initial noise is typically sampled from $\\\\\\\\mathcal{N}(\\\\\\\\mathbf{0}, \\\\\\\\mathbf{I})$. \\n3. Therefore, the **assumption $\\\\\\\\boldsymbol{x}_T \\\\\\\\sim \\\\\\\\mathcal{N}(\\\\\\\\mathbf{0},\\\\\\\\mathbf{I})$ used in our explanation for W1.5 reflects the accurate distribution of $x\\\\_T$ in the practical sampling process.**\\n\\nBased on these points, we believe the assumptions we made are within an acceptable margin of error.\\n\\n> ### If the underlying data follows a distribution with an extremely large mean, the zero-mean assumption seems unreasonable and may introduce significant error in your pipeline.\\n\\n**A6**\\n1. 
We agree with your opinion that when the data follows a distribution with an extremely large mean, the zero-mean assumption may indeed lead to errors in the pixel variance-L2 norm approximation. \\n2. However, we would also like to highlight that **this is not a very common scenario in many widely used diffusion models. (*e.g.* LDM[1], SD3[2], DiT[3] and SDXL[4])**. For example, in LDM[1], the distribution of $x\\\\_0$ in the latent space is regularized to approach a standard normal distribution:\\n\\n> \\\"To avoid arbitrarily high-variance latent spaces, we experiment with two different kinds of regularizations. The first variant, KL-reg., imposes a slight KL penalty towards a standard normal on the learned latent, similar to a VAE.\\\" [1]\\n\\nThis regularization encourages the latent variables to follow a distribution with **a mean close to zero, preventing the scenario of extremely large means**. Therefore, in well-trained diffusion models, it is not typical for $x_0$ to have an extremely large mean, and thus the zero-mean assumption holds reasonably well.\\n\\nWe hope this clarification addresses your concerns. To summarize, we would like to:\\n\\n1. acknowledge your insightful point regarding the **limitations of our assumptions in extreme scenarios**, which we agree are an important consideration.\\n2. highlight that our method is **empirically supported in typical use cases**, as demonstrated by a range of performance metrics (FID, CLIP Score, sFID, IS, FID-DINO, KD) across various datasets, models (LDM [1], SD3 [3], SDXL [4]), and conditions (e.g., different CFG scales).\\n3. emphasize that all the presented **assumptions are supported by statistical evidence**, which falls within an acceptable margin of error. \\n4. 
Finally, given the simplicity of our approach\\u2014requiring no training and minimal hyperparameter tuning\\u2014we believe that these approximations are reasonable and acceptable for some practical applications.\\n\\nWe are happy to provide further clarification in response to any additional concerns.\\n\\n[1] Rombach et al. High-Resolution Image Synthesis with Latent Diffusion Models. CVPR 2022. \\n[2] Esser et al. Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. ICML 2024. \\n[3] Peebles and Xie. Scalable Diffusion Models with Transformers. ICCV 2022 Oral. \\n[4] Podell et al. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. ICLR 2024 Spotlight.\"}", "{\"title\": \"Response to Reviewer Tp1G (part 2)\", \"comment\": \">### **Q2: Results with other solvers & results on DDIM with NFE = 50**\\n\\n**A3** Thank you for your suggestion. We summarize the results with the DPM++ sampler as follows:\\n\\n| **Timesteps** | **Method** | **FID** | CLIP Score | **$t\\\\_{thre}$** |\\n| ----------------------- | ----------------------- | --------- | ---------- | -------------- |\\n| 5 | DPM-Solver++(2M) | 19.16 | 31.52 | -- |\\n| 5 | DPM-Solver++(2M) + MCDO | **18.51** | **31.62** | 100 |\\n| 6 | DPM-Solver++(2M) | 17.05 | 31.65 | -- |\\n| 6 | DPM-Solver++(2M) + MCDO | **16.39** | **31.72** | 100 |\\n| 8 | DPM-Solver++(2M) | 15.57 | 31.74 | -- |\\n| 8 | DPM-Solver++(2M) + MCDO | **15.32** | **31.79** | 100 |\\n| 10 | DPM-Solver++(2M) | 15.10 | 31.74 | -- |\\n| 10 | DPM-Solver++(2M) + MCDO | **14.88** | **31.85** | 100 |\\n\\n**These results indicate that our method enhances the performance of high-order solver: DPM-Solver++**. Given that DPM-Solver++ inherently reduces prediction error and causes less exposure bias at low NFE, the improvements of our method are slightly smaller compared to first-order solvers (e.g. a 2.57 FID decrease for DDIM). 
The results are also presented in Appendix A.10 of the revised manuscript.\\n\\nWe conducted experiments with DDIM using NFE = 50 on the SDXL dataset but did not observe significant improvements in the quantitative results with the current hyperparameters ($S=20$, $t_{\\\\text{thre}}=100$). This suggests that, **at higher NFE values, where step sizes are smaller, diffusion models likely make fewer prediction errors, and consequently, the exposure bias problem discussed in the paper becomes less severe.**\\n\\nHowever, it is important to note that our method is specifically designed for accelerated sampling in diffusion models, with a primary focus on low NFE scenarios. While it is true that increasing NFE can improve the results to some extent, we believe that this does not diminish the core value of our approach. **The main benefit of our method lies in its ability to address exposure bias in scenarios with fewer sampling steps, which is the key challenge in accelerated sampling**.\"}", "{\"title\": \"Response to further comments (part 2)\", \"comment\": \"> ### Pixel mean of $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$, $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ and $\\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\\\\\theta^{(t)}$ across timesteps\\n\\n**A4**\\nTo address your concerns, we collect the average absolute pixel mean for the noisy data prediction: $\\\\\\\\mathbb{E}[|\\\\\\\\frac{1}{n}\\\\\\\\sum_{i=0}^n \\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t |]$, denoising observation: $\\\\\\\\mathbb{E}[|\\\\\\\\frac{1}{n}\\\\\\\\sum\\\\_{i=0}^n \\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)} |]$, and noise estimation: $\\\\\\\\mathbb{E}[|\\\\\\\\frac{1}{n}\\\\\\\\sum\\\\_{i=0}^n \\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\\\\\theta^{(t)} |]$ across the generation process. The statistics were collected from 6,400 samples generated using a 100-step LDM on CelebA-HQ. 
It can be observed that the average **pixel mean for $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t$, $\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}$ and $\\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\\\\\theta^{(t)}$ exhibits only minor deviations from 0 across timesteps**.\\n\\n\\n| **Timestep** | 951 | 901 | 851 | 801 | 751 | 701 | 651 | 601 | 551 | 501 |\\n| ------------------------------------------------------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t\\\\|]$ | 0.0075 | 0.0079 | 0.0087 | 0.0101 | 0.0122 | 0.0154 | 0.0196 | 0.0248 | 0.0311 | 0.0382 |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}\\\\|]$ | 0.1119 | 0.1093 | 0.1095 | 0.1099 | 0.1101 | 0.1109 | 0.1114 | 0.1118 | 0.1116 | 0.1119 |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\\\\\theta^{(t)}\\\\|]$ | 0.0068 | 0.0070 | 0.0070 | 0.0070 | 0.0069 | 0.0067 | 0.0064 | 0.0060 | 0.0056 | 0.0051 |\\n\\n\\n| **Timestep** | 451 | 401 | 351 | 301 | 251 | 201 | 151 | 101 | 51 | 1 |\\n| ------------------------------------------------------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_t\\\\|]$ | 0.0460 | 0.0543 | 0.0629 | 0.0716 | 0.0800 | 0.0880 | 0.0954 | 0.1020 | 0.1077 | 0.1126 |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\hat{\\\\\\\\boldsymbol{x}}\\\\_0^{(t)}\\\\|]$ | 0.1121 | 0.1123 | 0.1124 | 0.1125 | 0.1125 | 0.1126 | 0.1127 | 0.1128 | 0.1128 | 0.1128 |\\n| $\\\\\\\\mathbb{E}[\\\\|\\\\\\\\frac{1}{n}\\\\\\\\Sigma\\\\\\\\boldsymbol{\\\\\\\\epsilon}\\\\_\\\\\\\\theta^{(t)}\\\\|]$ | 0.0045 | 0.0040 | 0.0036 | 0.0032 | 0.0028 | 0.0025 | 0.0022 | 0.0021 | 0.0020 | 0.0011 |\\n\\nThe 
entire evolution of this relative error is provided in Appendix A.14 of the revised manuscript.\"}", "{\"comment\": \"Dear Reviewer 6dod,\\n\\nWe sincerely appreciate your valuable suggestions and the contributions you\\u2019ve made to enhancing our work.\\n\\nWe will conduct the additional experiments you suggested, addressing your concerns in detail. Once we have the results, we will provide a more comprehensive response.\\n\\nBest regards,\\n\\nThe authors\"}" ] }
5xfAcRHfgP
Hydra-MDP++: Advancing End-to-End Driving via Hydra-Distillation with Expert-Guided Decision Analysis
[ "Kailin Li", "Zhenxin Li", "Shiyi Lan", "Jiayi Liu", "Yuan Xie", "zhizhong zhang", "Zuxuan Wu", "Zhiding Yu", "Jose M. Alvarez" ]
We introduce Hydra-MDP++, a novel end-to-end autonomous driving framework that integrates rule-based and neural planners by learning from human demonstrations and distilling knowledge from rule-based experts. We propose a teacher-student knowledge distillation framework with a multi-head student decoder that integrates feedback from rule-based expert teachers. The student model achieves state-of-the-art performance on the NAVSIM benchmark with a tiny image encoder. Moreover, to address limitations in existing evaluation metrics, we expand the teacher model to include traffic light compliance, lane-keeping ability, and extended comfort. This is intended to ensure a more robust decision synthesis in driving. Hydra-MDP++ demonstrates robust and efficient performance across diverse driving scenarios, achieving a 91.0% drive score on NAVSIM by simply scaling the image encoder. Our work contributes to developing more reliable and adaptable autonomous driving systems that combine the strengths of rule-based and neural planning approaches.
[ "end-to-end autonomous driving", "expert guidance", "knowledge distillation", "open-loop metrics" ]
https://openreview.net/pdf?id=5xfAcRHfgP
https://openreview.net/forum?id=5xfAcRHfgP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "k4aegb4SyA", "gmpzGmvaks", "YAIaPjO15n", "UgFTfNiLjc", "EIly091V5q" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1729805490571, 1730673086560, 1730797559408, 1731514946520, 1730716252125 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6004/Reviewer_LUMZ" ], [ "ICLR.cc/2025/Conference/Submission6004/Reviewer_B2Ks" ], [ "ICLR.cc/2025/Conference/Submission6004/Reviewer_77mX" ], [ "ICLR.cc/2025/Conference/Submission6004/Authors" ], [ "ICLR.cc/2025/Conference/Submission6004/Reviewer_TbsM" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes to complement imitation learning with an extra term that punishes or rewards different trajectories based on their rollouts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The empirical results are strong.\\n2. The proposed new metrics are a good update for NAVSIM.\\n3. The paper is easy to follow.\", \"weaknesses\": \"Limited novelty: the contributions of this work are on the decoder/planner side, which is actually not quite related to end-to-end autonomous driving. On the pure planning side, **it is not new to conduct rollouts during training and then reward/punish the planner, as in [1, 2]**.\\n\\n[1] Closing the Planning-Learning Loop with Application to Autonomous Driving. TRO\\n[2] Rethinking Imitation-based Planners for Autonomous Driving. ICRA 2024\\n\\nIn summary, it seems that this work simply adopts the long-existing idea from the planning field (offline RL via rollouts during training) and applies it to the new NAVSIM benchmark. As a result, I do not think implementing an image encoder with existing decoder designs has enough contribution and excitement to reach the bar of ICLR. Thus, I give a reject rating. 
If the authors could demonstrate advantages over planning baselines like [1,2] in planning benchmarks like nuPlan, I would be glad to improve my rating.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a motion-prediction driving approach for driving agents\\n which fuses a direct error-prediction loss with simulation metrics based on NAVSIM.\\nThe paper shows promising results on the NAVSIM benchmark.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper goes in an important direction, which is incorporating actual driving metrics into end-to-end driving learning instead of pure action or target-point prediction. This is probably more relevant and translates better to actual driving quality results.\", \"weaknesses\": [\"The main weakness I observed is that I failed to understand the explanation of this as a hybrid method where there would be some additional framework to include IDM-type driving with a learned method. What I understood is the use of NAVSIM simulation-based metrics, which adds closed-loop-type metrics into the loss function, giving a bit more driving rollout supervision. I guess this is arguably similar to a hybrid approach because the rule base is embedded in NAVSIM. Still, that is not fully clear.\", \"I think the idea is simple and effective but is masked by a description that is bigger and more complex than necessary.\", \"The results are promising, but I would guess that optimizing for NAVSIM simulation metrics would lead to better NAVSIM results. 
I would be more confident in this learning strategy if it were used in full closed-loop benchmarks with other platforms like CARLA or some other realistic simulator.\"], \"questions\": \"I wonder what the potential drawbacks of using simulation-based targets like NAVSIM are in terms of compute cost for training, as that should be considerably more expensive than using datasets directly for the evaluation.\\n\\nAlso, what are the impacts on overall determinism when using simulation as a part of the training framework? That should increase the training complexity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Hydra-MDP++, an end-to-end autonomous driving framework that bridges rule-based and neural planners by incorporating human demonstrations and expert guidance. Using a teacher-student knowledge distillation model, Hydra-MDP++ enables a multi-head student decoder to learn and validate trajectory proposals across various aspects of safe driving. Tested on the NAVSIM benchmark, Hydra-MDP++ shows impressive performance, achieving high compliance with metrics like lane-keeping, comfort, and traffic light observance. The framework demonstrates the potential for more adaptive and resilient autonomous driving by balancing human-like flexibility with rule-based reliability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The integration of rule-based and neural planning is innovative. The paper\\u2019s approach effectively combines rule-based and neural planners, adding interpretability and adaptability to neural models, which may benefit real-world applications.\\n2. By incorporating traffic light compliance, lane-keeping ability, and extended comfort, the model considers crucial safety and regulatory aspects, showing a clear improvement over existing methods on the NAVSIM benchmark.\\n3. 
The model achieves high accuracy and compliance using only a ResNet-34 backbone, highlighting its computational efficiency.\", \"weaknesses\": \"For me, I have no main concerns about this paper. Here are some personal questions.\\n1. The specifics of the multi-head decoder\\u2019s feedback integration could be further clarified. While it\\u2019s stated that each head represents a different safety component, it would be beneficial to describe how conflicts between heads are resolved.\\n2. The qualitative examples primarily showcase straightforward scenarios like lane-following and traffic light compliance. Could the authors also show some challenging cases, such as complex intersections or mixed traffic environments?\\n3. I am not sure whether this method can be evaluated on more than one dataset to prove its generalization abilities.\", \"questions\": \"Here is one question: The model performs well on NAVSIM, yet it would be beneficial to discuss its scalability to other benchmarks or real-world tests.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents Hydra-MDP++, an extension of the existing work Hydra-MDP, which produces decisions learned from rule-based experts and human demonstrations. Following the framework proposed in Hydra-MDP, Hydra-MDP++ achieves state-of-the-art performance on the NAVSIM benchmark by receiving feedback from multiple expert teachers to help its trajectory decoder integrate the knowledge of rule-based and neural planners. 
Moreover, Hydra-MDP++ leverages a temporal Squeeze-and-Excitation network for temporal feature fusion from tiny image encoders, and expands teacher models to include more human knowledge.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is easy to follow.\\n2. The idea of leveraging both simulation feedback and human demonstrations as supervision is interesting and intuitive.\\n3. Clear illustration of the framework architecture. (Fig. 2).\", \"weaknesses\": \"1. Missing Hydra-MDP baseline in experiments (Table 1 and Table 2). As the proposed method Hydra-MDP++ is an extension of Hydra-MDP, it is important to compare their results under the same evaluation protocol to demonstrate performance improvement. However, the results of the original Hydra-MDP are missing in the main experiments.\\n2. Lack of discussion on the difference between Hydra-MDP and Hydra-MDP++. The authors did mention several modifications throughout the paper such as in Sec. 3.4. However, it's still less obvious how Hydra-MDP++ differs from Hydra-MDP as a whole and what design choices are made to improve the original Hydra-MDP. A detailed section to elaborate on their differences and the motivation behind these modifications should be included. Without such clarification, I can hardly tell the novelty behind this paper.\\n3. A potential architectural flaw. The key innovation of this work is to distill the simulation feedback in the training stage, and then use the predicted feedback as guidance to select the best action to execute. However, a notable limitation of this design would be the over-reliance on a pre-developed driving simulator, which does not exist for most driving datasets. Therefore, the application of this method might be limited to certain datasets with the paired simulator. It's encouraged to include this drawback in the discussion or limitation section.\\n4. 
Missing details of inference latency which is critical for autonomous driving models. As Hydra-MDP++ is a sampling-based planning method, the planner estimates multiple action proposals and finally executes the most plausible one, it is important to report the latency/speed of such multi-round estimation. Also, the inference speed of the whole proposed pipeline should be included.\", \"questions\": \"Please see the weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5xbKFaaqkS
Protecting Copyrighted Material with Unique Identifiers in Large Language Model Training
[ "Shuai Zhao", "Linchao Zhu", "Ruijie Quan", "Yi Yang" ]
A primary concern regarding training large language models (LLMs) is whether they abuse copyrighted online text. With the increasing training data scale and the prevalence of LLMs in daily lives, two problems arise: (1) false positive membership inference results misled by similar examples; (2) membership inference methods are usually too complex for general users to understand and use. To address these issues, we propose an alternative \textit{insert-and-detect} methodology, advocating that web users and content platforms employ \textbf{\textit{unique identifiers}} for reliable and independent membership inference. Users and platforms can create their identifiers, embed them in copyrighted text, and independently detect them in future LLMs. As an initial demonstration, we introduce \textit{\textbf{ghost sentences}} and a user-friendly last-$k$ words test, allowing general users to chat with LLMs for membership inference. Ghost sentences consist primarily of unique passphrases of random natural words, which can come with customized elements to bypass possible filter rules. The last-$k$ words test requires a significant repetition time of ghost sentences~($\ge10$). For cases with fewer repetitions, we designed an extra perplexity test, as LLMs exhibit high perplexity when encountering unnatural passphrases. We also conduct a comprehensive study on the memorization and membership inference of ghost sentences, examining factors such as training data scales, model sizes, repetition times, insertion positions, wordlist of passphrases, alignment, \textit{etc}. Our study shows the possibility of applying ghost sentences in real scenarios and providing instructions for the potential application.
[ "LLM", "Copyright" ]
https://openreview.net/pdf?id=5xbKFaaqkS
https://openreview.net/forum?id=5xbKFaaqkS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jgzB3v4H8O", "fILK24FwDp", "Tp4WoKxgaZ", "QcfUcz2PgX", "Kkr0matVi0", "A0NUU32ydU" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730662656950, 1730232532198, 1730274679015, 1733131583538, 1730429543862, 1730644761725 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4487/Reviewer_xPDZ" ], [ "ICLR.cc/2025/Conference/Submission4487/Reviewer_HCgh" ], [ "ICLR.cc/2025/Conference/Submission4487/Reviewer_YYzf" ], [ "ICLR.cc/2025/Conference/Submission4487/Authors" ], [ "ICLR.cc/2025/Conference/Submission4487/Reviewer_TMjr" ], [ "ICLR.cc/2025/Conference/Submission4487/Reviewer_9zEV" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a method to detect the presence of copyrighted material of a user in the training corpus of LLMs. The authors suggest a method that uses unique, randomly generated passphrases as identifiers, which are to be embedded into the user's text data. To detect these passphrases within the model's outputs, the study suggests two main techniques: a \\\"last-k words\\\" test and a perplexity-based test. The paper argues that these detection methods are accessible and user-friendly. They also conduct ablations analyzing several aspects such as scale of training data, model sizes, repetition of passphrases, etc.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This is definitely an important problem which is clearly motivated. I appreciated that the authors conducted detailed ablations analyzing the memorization of the ghost sentences and presented detailed results with repetition rates, model sizes, length and insertion position, etc. 
The suggested techniques are also simple and intuitive.\\n\\nI also liked that the paper proposes a method to get interpretable p-values for both the user-friendly last-k words test and the perplexity-based test.\", \"weaknesses\": \"The paper has the following limitations:\\n\\n- In my view, the work lacks novelty considering the overlap with [1] and [2], which introduce multiple varieties of data watermarks (or copyright traps) into the text and detect membership using loss-based metrics. The more practical perplexity-based test is also similar to the one proposed in [1], which compares the loss-based metric of a watermark against the empirical distribution. I believe it would help to include a detailed comparison with [1] and [2] by highlighting significant benefits of the passphrases proposed in the paper.\\n\\n- Another limitation is that the more aggressive perplexity-based test requires the watermark to be repeated a significant number of times (5 times) and still offers a recall of just ~0.393.\\n\\n\\n1. [Proving membership in LLM pre training data via data watermarks](https://arxiv.org/abs/2402.10892) by Wei et al.\\n\\n2. [Copyright Traps for Large Language Models](https://arxiv.org/abs/2402.09363) by Meeus et al.\", \"questions\": [\"The paper suggests that the watermarks in [1] run the risk of being filtered. Could you provide evidence of a practice that data providers use to filter these?\", \"The paper offers details on the fine-tuning in terms of the number of examples used; could you please share the details in terms of token counts?\", \"What would be the result of the baseline provided for the min-k test using the best-performing hyperparameters (k value)?\", \"1. 
[Proving membership in LLM pre training data via data watermarks](https://arxiv.org/abs/2402.10892) by Wei et al.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of membership inference in LLMs, where the goal is to determine whether a particular document or set of documents was used for training. The authors motivate this with concerns about the use of copyrighted material. They propose the insertion of _ghost sentences_ \\u2014 sentences made up of randomly sampled words \\u2014 into online documents. To test whether a model has been trained on documents containing a specific ghost sentence, they test two methods: 1. Computing the perplexity of the ghost sentence under the model probability distribution compared with the perplexity of other, randomly sampled sentences. 2. Prompting the model to guess words in the ghost sentence. A significantly lower perplexity (in case 1) or a successful guess (in case 2) indicates the presence of the ghost sentence in the training data. The authors perform various experiments covering both pretraining and fine-tuning settings, different model size, ghost sentence lengths and other parameters. They show that, given enough ghost sentences, their method can successfully detect the presence of ghost sentences in the training data of LLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors motivate the last-k word test by the need for easy-to-use membership inference methods that can be deployed by average users. 
I agree that this is an important and often overlooked aspect that can stand in the way of a broad adoption of such methods.\", \"Besides the ease of use, the method also comes with the advantage of not relying on output probabilities that are often not available for commercial models.\", \"The authors do a good job at covering various settings in their experiments. I particularly appreciate that they perform an experiment to determine whether ghost sentences inserted during pretraining are still detectable after alignment training.\", \"The introduction serves as a very good overview and summary of the paper.\"], \"weaknesses\": [\"My main criticism of the paper is the lack of novelty. The authors cite Wei et al. [1], who propose a very similar approach of introducing random sequences of characters (instead of random sequences of words) into documents. They mention three disadvantages of [1] when compared to their own method: 1. The model trainer could filter out the random character sequences. 2. There is a risk of false positives due to the prevalence of such random sequences in training data. 3. Random character sequences do not allow for a user-friendly detection method. I think that criticism only holds to some degree: 1. The model trainer could filter out random sequences of words using a method similar to the proposed perplexity test applied to a previously trained, small language model. This would arguably be harder than simply filtering out very uncommon sequences of characters, but still seems feasible. 2. If the sequence of random characters is sufficiently long, this is not an issue, because a collision becomes very unlikely. 3. 
It seems possible to me that the last-k prompt could be used with random sequences of characters, where instead of asking the model to complete a sentence, one asks it to complete a sequence of characters.\", \"I would have liked a comparison with [1] in terms of detection success, and an experiment confirming or refuting point 3. above.\", \"Taken independently, neither the perplexity-based hypothesis test [2] nor the idea of checking whether the model can determine a correct word in a sentence better than random guessing [3,4] is new.\", \"The instruction tuning task \\u201cContinue writing the given content\\u201d is very similar to the pretraining task and not a task one would expect during instruction tuning.\", \"It seems like when applying the method to pretraining data, an unrealistically large percentage of the data needs to be ghost sentences (cf. Table 3b), given that modern LLMs are trained on trillions of tokens. However, that is the training phase where most data is used and thus the most relevant one for detecting the use of copyrighted material. It could in principle be that the required percentage decreases to manageable levels for large models that have been shown to memorize more data than small models. To avoid the cost of (continuous) pretraining of such models, one way to test that hypothesis might be to follow an approach similar to that of [1] and try to find instances of random word sequences in the training data of large models with publicly available training sets such as BLOOM.\", \"Minor: The sentence starting in l. 185 is not grammatical. And the first half of Sec. 4 has worse writing than the rest of the paper. Besides several typos and grammar mistakes, the beginning of the paragraph in ll. 
324-329 (\\\"To figure out the average repetition $\\\\mu$ [...]\\\") does not seem to make sense in the context of the rest of the paragraph.\", \"Minor: Throughout the paper, the authors use the term 'word length' several times, which I would interpret as the number of characters in a word. It seems like what is rather meant is the number of words in a ghost sentence. If that is the case, I suggest changing the term to 'sentence length'.\", \"[1] Wei, Johnny Tian-Zheng et al. Proving membership in LLM pretraining data via data watermarks. arXiv preprint arXiv:2402.10892, 2024.\", \"[2] Carlini, Nicholas et al. Membership inference attacks from first principles. 2022 IEEE Symposium on Security and Privacy.\", \"[3] Lukas, Nils et al. Analyzing leakage of personally identifiable information in language models. 2023 IEEE Symposium on Security and Privacy.\", \"[4] Chang, Kent K. et al. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.\"], \"questions\": [\"195-196: As far as I can see, we can always upper bound $1/V^*$ by $1/V_g$. Seeing ghost sentences other than the one we are currently interested in gives the model information about the set of words from which the ghost sentences are created, but no information beyond that. Thus, can't we simply plug $V_g$ into Eq. 3? Since we can choose $V_g$ arbitrarily large, we do not need to hope that we get $n_g=2$. Could you clarify?\", \"Why do you use beam search for decoding instead of sampling-based decoding methods that are more commonly used for general-purpose LLMs? How would using such a decoding method change your results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an insert-and-detect method to detect whether a text is included in LLM training data. 
The proposed method is based on unique identifiers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Detecting training data in LLMs is important for copyright protection.\\n\\n2. The proposed method is effective in detecting training data.\\n\\n3. The experiments are extensive and solid.\", \"weaknesses\": \"1. Can the perplexity test be applied to closed-source models?\\n\\n2. This paper uses finetuning to inject the unique identifiers. However, the problem studied in this paper is detecting training data in pretraining, right? Is it possible that the findings and conclusions obtained from finetuning experiments could be different for pretraining?\\n\\n3. What are the existing data filtering methods for LLM pretraining data preprocessing? Is the proposed method robust to these data filtering methods?\\n\\n4. Is it possible that the proposed method can harm the quality and readability of the data to be protected?\", \"questions\": \"1. Can the perplexity test be applied to closed-source models?\\n\\n2. This paper uses finetuning to inject the unique identifiers. However, the problem studied in this paper is detecting training data in pretraining, right? Is it possible that the findings and conclusions obtained from finetuning experiments could be different for pretraining?\\n\\n3. What are the existing data filtering methods for LLM pretraining data preprocessing? Is the proposed method robust to these data filtering methods?\\n\\n4. 
Is it possible that the proposed method can harm the quality and readability of the data to be protected?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel method for protecting copyrighted content in language models by embedding unique identifiers, termed \\\"ghost sentences,\\\" into text data. This technique enables content owners to identify unauthorized use of their data by querying LLMs. The proposed framework includes the \\\"last-k words\\\" and \\\"perplexity\\\" tests for efficient, accessible membership inference.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Presents an innovative approach to copyright detection using ghost sentences, filling a niche not fully addressed in LLM training.\", \"Provides extensive experimental validation with different model sizes, data scales, and insertion strategies.\"], \"weaknesses\": [\"The writing needs improvement, including but not limited to the introduction\", \"The example in Figure 1 is not self-explanatory\", \"Notations are confusing: in Section 3.1, an example is indicated by both $x_i$ (with subscript) and $x$ (without subscript).\", \"Dependence on specific models and configurations, which might not generalize well across all LLM architectures.\", \"Experiments were only performed on the LLaMA family. There are alternative open-source models such as Mistral.\", \"If I understand correctly, the method should also work on proprietary models. 
However, experiments on state-of-the-art proprietary models such as GPT-4o are missing (see https://openai.com/index/gpt-4o-fine-tuning/)\", \"Line 73 \\\"Figure Figure 1\\\"\"], \"questions\": [\"Does training with ghost sentence impact generative fluency (lower the models' abilities to generate fluent sentences)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes using \\\"ghost sentences\\\"\\u2014unique, embedded passphrases\\u2014to help creators detect if their copyrighted content has been used in LLM training, offering methods like the last-k words and perplexity tests for reliable, user-friendly detection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method introduces accessible detection techniques, allowing non-technical users to check if their content has been used for LLM training.\\n2. The study thoroughly examines factors like model size, training data scale, and repetition frequency, ensuring the robustness of the proposed approach across different LLM configurations.\", \"weaknesses\": \"1. Perplexity-based filtering is a common method for cleaning pre-training data. How might the proposed ghost sentences method be adapted to remain effective in the presence of perplexity-based data cleaning techniques commonly used in LLM training?\\n2. Could the authors provide a more detailed analysis of how different wordlist impact the effectiveness of ghost sentences? Additionally, what are the trade-offs of using the entire LLM vocabulary as the wordlist versus a smaller, curated list?\\n3. With LLM-generated or rephrased training data becoming more popular, it would be interesting to investigate whether ghost sentences persist after LLM-based rewriting.\\n4. 
The paper lacks baseline comparisons to validate its effectiveness.\", \"questions\": \"N.A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5xSRg3eYZz
VVC-Gym: A Fixed-Wing UAV Reinforcement Learning Environment for Multi-Goal Long-Horizon Problems
[ "Xudong Gong", "Feng Dawei", "Kele Xu", "weijia wang", "Zhangjun Sun", "Xing Zhou", "Si Zheng", "Bo Ding", "Huaimin Wang" ]
Multi-goal long-horizon problems are prevalent in real-world applications. The additional goal space introduced by multi-goal problems intensifies the spatial complexity of exploration; meanwhile, the long interaction sequences in long-horizon problems exacerbate the temporal complexity of exploration. Addressing the great exploration challenge posed by multi-goal long-horizon problems depends not only on the design of algorithms but also on the design of environments and the availability of demonstrations to assist in training. To facilitate the above research, we propose a multi-goal long-horizon Reinforcement Learning (RL) environment based on realistic fixed-wing UAV's velocity vector control, named VVC-Gym, and generate multiple demonstration sets of various quality. Through experimentation, we analyze the impact of different environment designs on training, assess the quantity and quality of demonstrations and their influence on training, and assess the effectiveness of various RL algorithms, providing baselines on VVC-Gym and its corresponding demonstrations. The results suggest that VVC-Gym is suitable for studying: (1) the influence of environment designs on addressing multi-goal long-horizon problems with RL. (2) the assistance that demonstrations can provide in overcoming the exploration challenges of multi-goal long-horizon problems. (3) the RL algorithm designs with the least possible impact from environment designs on the efficiency and effectiveness of training.
[ "Reinforcement Learning Environment", "Demonstrations", "Goal-Conditioned Reinforcement Learning", "Fixed-wing UAV Velocity Vector Control" ]
Accept (Poster)
https://openreview.net/pdf?id=5xSRg3eYZz
https://openreview.net/forum?id=5xSRg3eYZz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tPbYDFMyBM", "r4ECwnmEOU", "qiqPs6VVcx", "nLZinErWvz", "ljX5x2u5fk", "iaIb4Kk9JZ", "ge2Uk3oTVm", "fyP3oQCaqz", "fLvkh8pryD", "ZOCYI87wsq", "Z2TIuKMtDT", "Xy6u4L5o1T", "RRRxesGYcK", "RCfbNgz3Zb", "QdCgaOsy4I", "Nq4B5vYKHw", "NKfJjV26F7", "NGoAcTaVFG", "LFkrZG5n0T", "IfDfThC5Q5", "HbmhS3gZRS", "DULzdpjjty", "8DHah6najB", "8CC1j2rZkW", "7LNIBie00N", "6bGD3o917t", "503eaAa8Gq", "2xu8SgGU4H", "19ghgqGK9z", "0X19jrFcnM", "02lNWWjE9L" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732120228896, 1732115261698, 1732255642311, 1732520893372, 1732332120976, 1732303688845, 1732163318292, 1732115777120, 1732477426999, 1732477396427, 1732115534366, 1732116088964, 1730699327445, 1732116256493, 1730687005097, 1732676861392, 1732116164748, 1732505221888, 1732331787282, 1732450473412, 1737524218185, 1732163104633, 1732636528628, 1732450524977, 1732163130795, 1729787142665, 1733023081556, 1729599120411, 1732116306272, 1734673944884, 1732115592905 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_soQJ" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_exNf" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Area_Chair_CCAi" ], [ "ICLR.cc/2025/Conference/Submission12834/Area_Chair_CCAi" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_HNY5" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_exNf" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_exNf" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_HNY5" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_ecAp" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Reviewer_soQJ" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ], [ "ICLR.cc/2025/Conference/Submission12834/Area_Chair_CCAi" ], [ "ICLR.cc/2025/Conference/Submission12834/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> `W3: After searching on google, I find some repo doing similar tasks like this: https://github.com/liuqh16/LAG, which is also based on fixed wings. Can you also discuss existing simulators of fixed-wing UAV control?`\", \"existing_fixed_wing_simulators_that_support_rl_training_include\": \"Fixed-Wing-Gym [1], Gym-JSBSim [2], Markov-Pilot [3], LAG [4], etc. We discuss the differences between VVC-Gym and these simulators from two perspectives: (1) the simulator itself, and (2) support for RL research:\\n\\n1. 
**From the perspective of the simulator itself**: Our goal is to provide RL researchers with a more extensible and computationally efficient simulator. This allows RL researchers to customize VVC-Gym to their tasks of interest and efficiently validate their algorithm designs.\\n\\n (1) **Scalability**. We have made the scalability of VVC-Gym the most important design objective from the beginning. Researchers can easily extend VVC-Gym to investigate new control tasks, new fixed-wing UAV models, and generate new demonstrations, among other applications.\\n\\n * *Using other aircraft models*: VVC-Gym uses an open-source, more realistic fixed-wing aircraft model, and can replace the aircraft model with other open-source aircraft models [(link)](https://mirrors.ibiblio.org/flightgear/ftp/Aircraft-2020/) according to actual needs.\\n * *Defining new tasks*: Task and Simulator are decoupled (overall architecture as shown in Fig.1), so when solving new types of tasks, such as Attitude Control, Basic Flight Maneuvers, etc., only new tasks need to be added without modifying existing source code. We have provided some of these new tasks, which can be found at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/blob/main/VVC-Gym.zip) (locate at directory VVC-Gym/vvcgym/tasks/).\\n\\n (2) **Efficiency**. For RL researchers, the computational efficiency of the simulator is crucial as it affects the efficiency of validating algorithm designs. Since the core of fixed-wing UAV simulation lies in the computation of the Equations of Motion (EoM), VVC-Gym employs C++ for the EoM calculations, which is more computationally efficient. 
To demonstrate the computational efficiency of VVC-Gym, we evaluate the FPS of VVC-Gym and Fixed-Wing-Gym on the machine described in Appendix A, and the results are as follows:\\n\\n ||Fixed-Wing-Gym|VVC-Gym|\\n |:-:|:-:|:-:|\\n |FPS|1325|2356|\\n\\n _**Note**: The above training used the PPO algorithm from the StableBaselines framework, with 64 rollout workers, and $10^6$ environmental steps._\\n\\n It can be seen that VVC-Gym has a much higher computational efficiency than the Python-based Fixed-Wing-Gym. Additionally, in Appendix A, we also compared the FPS of VVC-Gym with commonly used RL environments, and the results show that VVC-Gym achieves the sampling speed of environments commonly used in academic research and even surpasses several of them.\\n\\n2. **From the perspective of supporting RL algorithm research**:\\n\\n ||VVC-Gym|Fixed-Wing-Gym|Gym-JSBSim|Markov-Pilot|LAG|\\n |:-:|:-:|:-:|:-:|:-:|:-:|\\n |Task|VVC, AC, BFMs|AC|Level turn|Gliding descent|AC, 1v1, 2v2|\\n |MDP type|Goal-Augmented MDP, standard MDP|standard MDP|standard MDP|standard MDP|standard MDP, Multi-agent MDP|\\n |Demonstrations|\\u221a|\\u2613|\\u2613|\\u2613|\\u2613|\\n |RL baselines|PPO, SAC, HER, GCBC, GCBC+PPO, MEGA, RIG, DISCERN|PPO|PPO|DDPG|PPO, MPPO|\\n\\n _**Note**: VVC is the abbreviation for Velocity Vector Control, AC is the abbreviation for Attitude Control, and BFMs is the abbreviation for Basic Flight Maneuvers._\", \"it_can_be_seen_that\": \"Firstly, in terms of tasks, VVC-Gym is the first publicly available RL environment for velocity vector control. Secondly, only VVC-Gym is modeled as a Goal-Augmented MDP, which better supports GCRL research. Thirdly, only VVC-Gym is accompanied by demonstrations, which can support research on demonstration-based RL. Fourthly, VVC-Gym provides baselines for 8 GCRL algorithms, while other environments only provide baselines for 1 to 2 RL algorithms. 
In summary, VVC-Gym provides stronger support for conducting GCRL research.\\n\\n[1] B\\u00f8hn E, Coates E M, Moe S, et al. Deep reinforcement learning attitude control of fixed-wing uavs using proximal policy optimization[C]//2019 international conference on unmanned aircraft systems (ICUAS). IEEE, 2019: 523-533.\\n\\n[2] Rennie G. Autonomous control of simulated fixed wing aircraft using deep reinforcement learning[J]. 2018.\\n\\n[3] Eckstein F, Schiffmann W. Learning to fly\\u2013building an autopilot system based on neural networks and reinforcement learning[D]. Master's thesis. FernUniversit\\u00e4t Hagen, Hagen, Germany, 2020.\\n\\n[4] https://github.com/liuqh16/LAG\"}", "{\"comment\": \"Thank you for your thorough review and valuable feedback on our work. We have addressed the three issues you raised and address your concerns in the following:\\n\\n> `Q1: In Fig. 6c, we can conclude that there are several distinct stages in training where different termination conditions are triggered. However, what can we conclude from Fig. 6d?`\\n\\nWe aim to explain why the application of termination conditions can enhance exploration efficiency through Fig.6d. Taking **ES** (Extreme State, where the episode is terminated when an unreasonable extreme state occurs, such as excessive roll angular rate) and **T** (Timeout, where the episode is terminated after reaching the maximum simulation length of max_episode_length steps) as examples, we provide the following explanation:\\n\\n* Without ES, all failed episodes would be simulated up to the predefined maximum simulation length of 400 steps in T. Even if unreasonable extreme states occur, the environment would still collect subsequent meaningless transitions until reaching 400 steps.\\n* With ES, during training, ES terminates the corresponding unreasonable episodes around 200-250 steps. This means that for each trajectory with an extreme state, employing ES can save 150-200 steps. 
Therefore, the application of ES can, to some extent, avoid collecting meaningless transitions like extreme states, thus improving exploration efficiency.\"}", "{\"title\": \"Thanks for the detailed response\", \"comment\": \"Thanks for the detailed response of the author. While I'm not a researcher in this field, the response of authors have solved my concerns to some extent, yet I still wonder we could have some (10~20) standardized settings in final version, if possible. I would recommend authors to provide a clear guidance for users on how to use the repo in the final version. I have raised my score.\"}", "{\"comment\": \"Dear Reviewer exNf,\\n\\nWe express our heartfelt gratitude for the time and dedication you've invested in reviewing our manuscript. Your constructive feedback and perceptive comments have been immensely valuable in refining our paper.\\n\\nThank you once again for your valuable feedback.\"}", "{\"comment\": \"Dear Reviewer soQJ,\\n\\nWe sincerely appreciate the time and effort you invested in reviewing our manuscript. Your insightful comments and concerns have greatly contributed to the improvement of our paper. We are sure to provide a clear guidance for users on how to use the proposed environment and demonstrations in the final version.\\n\\nThank you once again for your valuable feedback.\"}", "{\"comment\": \"I would like to thank the authors for the response. Have you updated the paper to include the additional results and to address the writing issues I proposed?\"}", "{\"comment\": \"> `W2: I feel it is more like a technical paper introducing a platform in reinforcement learning than a research paper.`\\n\\nOur work falls under the category of datasets and benchmarks. We are dedicated to providing the GCRL community with: (1) **an environment** for studying multi-goal long-horizon problems; (2) **demonstrations** to assist in GCRL training; and (3) **baselines** on some GCRL algorithms. 
Our core contribution is to help future GCRL researchers investigate how to better solve multi-goal long-horizon problems. We collect and study in detail some related work on environments, datasets, and benchmarks [1,2,3,4,5] from **ICLR 2024**, and we believe that **our work meets the requirements for research papers in the Datasets and Benchmark Track**.\\n\\n[1] Bonnet C, Luo D, Byrne D J, et al. Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX[C]//The Twelfth International Conference on Learning Representations, 2024.\\n\\n[2] Tec M, Trisovic A, Audirac M, et al. SpaCE: The Spatial Confounding Environment[C]//The Twelfth International Conference on Learning Representations, 2024.\\n\\n[3] Huang S, Weng J, Charakorn R, et al. Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n[4] Zhou S, Xu F F, Zhu H, et al. Webarena: A realistic web environment for building autonomous agents[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n[5] Yuan Y, Hao J, Ma Y, et al. Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n> `W3: Additionally, I suggest the authors add more baseline algorithms to their experiments, not just PPO.`\\n\\nThank you for your suggestion. In our manuscript, Section 4.3 reports the results of a total of **8 algorithms**, including PPO, SAC, HER, GCBC, GCBC+PPO, MEGA, RIG, and DISCERN. The corresponding training scripts are included in the [Supplementary Material](https://openreview.net/attachment?id=5xSRg3eYZz&name=supplementary_material) we submitted. Additionally, the table below shows the baselines provided by other Fixed-wing UAV environments, indicating that we include a larger number of baselines compared to other environments. 
We are currently evaluating Goal-conditioned offline RL [1] algorithms on VVC-Gym, and we will extend our code repository to include the relevant training scripts and evaluating results in the future.\\n\\n||VVC-Gym|Fixed-Wing-Gym [2]|Gym-JSBSim [3]|Markov-Pilot [4]|LAG [5]|\\n|:-:|:-:|:-:|:-:|:-:|:-:|\\n|RL baselines|PPO, SAC, HER, GCBC, GCBC+PPO, MEGA, RIG, DISCERN|PPO|PPO|DDPG|PPO, MPPO|\\n\\n[1] Park S, Frans K, Eysenbach B, et al. OGBench: Benchmarking Offline Goal-Conditioned RL[J]. arxiv preprint arxiv:2410.20092, 2024.\\n\\n[2] B\\u00f8hn E, Coates E M, Moe S, et al. Deep reinforcement learning attitude control of fixed-wing uavs using proximal policy optimization[C]//2019 international conference on unmanned aircraft systems (ICUAS). IEEE, 2019: 523-533.\\n\\n[3] Rennie G. Autonomous control of simulated fixed wing aircraft using deep reinforcement learning[J]. 2018.\\n\\n[4] Eckstein F, Schiffmann W. Learning to fly\\u2013building an autopilot system based on neural networks and reinforcement learning[D]. Master's thesis. FernUniversit\\u00e4t Hagen, Hagen, Germany, 2020.\\n\\n[5] https://github.com/liuqh16/LAG\"}", "{\"comment\": \"We sincerely appreciate your careful review of our paper and valuable feedback. We address your concerns below.\\n\\n> `W1: I am not a researcher on fixed-wing UAV control and multi-goal long-horizon problem. The biggest problem for me is that while the authors present an environment for future researchers, they failed to give a clear guidance on how to use this environment for benchmark.`\\n\\nThank you for your suggestion. We have also considered how to facilitate researchers' use of our environment and dataset before submitting our manuscript. Therefore, we have included three parts in the [Supplementary Material](https://openreview.net/attachment?id=5xSRg3eYZz&name=supplementary_material) submitted:\\n\\n1. 
**The source code repository of VVC-Gym**, in which we have detailed the installation, configuration, and several usage examples of VVC-Gym in the README.md.\\n2. **The case repository for using VVC-Gym**, which includes scripts for generating demonstrations and training scripts for the baselines mentioned in the manuscript. We have detailed the methods for running these scripts in the README.md.\\n3. **Some visualized demonstrations** of the RL policy we have trained.\\n\\nDue to the double-blind review mechanism of ICLR, we have not open-sourced our work, including the environment source code, dataset, case source code, and trained policies, etc. Thank you again for your suggestion. We will:\\n\\n1. enrich our case repository by providing more baseline algorithms and more detailed usage instructions.\\n2. open-source the environment source code, dataset, case source code, and trained policies, etc.\\n3. add a section in the Appendix of the manuscript to introduce how to conduct research on the proposed environment, dataset, and baselines.\\n\\n> `W1.1: What are the number of tasks in this environment, and what are their difficulties, respectively? Seems to me there is only one task. In this way, users might not use this environment in their study since it is likely to be rejected by other reviewers by the reason like \\\"small amount of experiments\\\".`\\n\\nIn our submitted manuscript, we introduced the task of Fixed-wing UAV's Velocity Vector Control (VVC). Currently, we have expanded our code to include Attitude Control and several Basic Flight Maneuvers (BFMs), such as level turn and barrel roll. All these tasks belong to multi-goal problems. The extended code can be found at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/blob/main/VVC-Gym.zip). 
These new tasks are located in the directory VVC-Gym/vvcgym/tasks/.\\n\\nTaking the VVC task as an example, this task requires the fixed-wing UAV to achieve any arbitrary velocity vector in three-dimensional space. Therefore, in different episodes, the RL agent faces different desired velocity vectors. If the initial velocity vector $s_0$ is fixed, different desired velocity vectors $g$ have different levels of difficulty. We define the difficulty of desired velocity vector based on the offset $d(s_0, g)$ between $s_0$ and $g$. We discretize the entire desired goal space and visualize the difficulty of different desired velocity vectors, the results of which can be seen at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/blob/main/visualization%20of%20goal%20difficulty.pdf).\\n\\nIn summary, we focus on multi-goal problems and aim to train a goal-conditioned policy that can achieve all desired goals. Although VVC is a single task, within this task, there are multiple desired goals with varying levels of difficulty.\\n\\n> `W1.2: What are the recommended configuration, hyperparameters and algorithms for me to start with, for researchers in the field of (1) GCRL (2) demonstration-based RL and (3) fixed-wing UAV control?`\\n\\nAll baselines mentioned in our manuscript can be found in our case repository in the [Supplementary Material](https://openreview.net/attachment?id=5xSRg3eYZz&name=supplementary_material). Additionally, we provide the default parameters for these algorithms. For GCRL researchers, we recommend starting with SAC, HER, and MEGA; for demonstration-based RL, we suggest starting with GCBC; and for fixed-wing UAV control researchers, we recommend beginning with GCBC+PPO+MEGA.\\n\\n> `W1.3: As for demonstrations, if I want to generate trajectories to support my own task, do I have tools for that in the environment? 
Are there standardized demonstrations recommended by the authors?`\\n\\nIn the case repository within the [Supplementary Material](https://openreview.net/attachment?id=5xSRg3eYZz&name=supplementary_material), we provide and describe the methods for generating the 8 demonstration datasets mentioned in the manuscript (please refer to the README.md file in this code repository). We will also open-source these 8 demonstration datasets in the future.\"}", "{\"title\": \"Please respond to rebuttal ASAP\", \"comment\": \"Dear reviewer,\\nThe process only works if we engage in discussion. Can you please respond to the rebuttal provided by the authors ASAP?\"}", "{\"title\": \"Please respond to rebuttal ASAP\", \"comment\": \"Dear reviewer,\\nThe process only works if we engage in discussion. Can you please respond to the rebuttal provided by the authors ASAP?\"}", "{\"comment\": \"> `Q2: Seems that the success rates of all the baseline methods are often low (less than 50%). What might be the possible reasons that the GCRL algorithms fail to achieve a higher success rate?`\", \"we_address_your_concern_from_the_following_three_aspects\": [\"1. **Fixed-wing UAV's velocity vector control (VVC) is a challenging task.** The difficulties lie in:\", \"The large exploration space of the policy, which is a continuous state, continuous action problem, and the policy requires additional exploration of the goal space during training.\", \"The long interaction sequences, with the average length of demonstrations exceeding 280. Even well-trained policies require an average of over 100 steps to achieve a goal, and more challenging goals can demand upwards of 300 steps (see Table 1 in the manuscript for corresponding experimental data).\"], \"we_provide_evidence_that_vvc_is_a_challenging_task_through_the_following_three_sets_of_experiments\": \"(1) ***Standard RL algorithms struggle to solve the VVC task.*** The table below shows the % success rates of SAC and PPO. 
It is evident that SAC and PPO struggle to solve the VVC task.\\n\\n ||SAC|PPO|\\n |:-:|:-:|:-:|\\n |VVC|1.08\\u00b10.48|0.04\\u00b10.03|\\n\\n (2) ***Existing GCRL algorithms can effectively solve common multi-goal tasks in academic research, but they can only solve the VVC task to a certain extent.*** We compare the performance of different GCRL algorithms on VVC and common multi-goal tasks in academic research, PointMaze (PointMaze_Large_DIVERSE_G-v3 [(link)](https://robotics.farama.org/envs/maze/point_maze/)) and Reach (PandaReach-v3 [(link)](https://panda-gym.readthedocs.io/en/latest/usage/environments.html)). The results are shown in the table below. It can be seen that these GCRL algorithms can almost completely solve Reach and PointMaze, but the best algorithm achieves only a 38.31% success rate on VVC. These results indirectly reflect that VVC is a challenging task.\\n\\n ||MEGA|GCBC|GCBC+PPO|\\n |:-:|:-:|:-:|:-:|\\n |Reach|100.0\\u00b10.0|70.63\\u00b12.99|100.0\\u00b10.0|\\n |PointMaze|100.0\\u00b10.0|75.96\\u00b15.34|93.33\\u00b13.06|\\n |VVC|8.32\\u00b11.86|17.08\\u00b10.57|38.31\\u00b11.62|\\n\\n **Note**: *The demonstrations used in the Reach experiments are from the official script provided by Panda-Gym [(link)](https://panda-gym.readthedocs.io/en/latest/usage/manual_control.html), the demonstrations used in the PointMaze experiments are from Minari [(link)](https://minari.farama.org/datasets/D4RL/pointmaze/), and the demonstrations used in the VVC experiments are $\\\\mathcal{D}_E^0$ from the manuscript.*\\n\\n (3) ***The human-designed classical PID controller (detailed in Appendix C of the manuscript) has only a 20.08% success rate***, which also indirectly reflects that VVC is a challenging task.\\n\\n2. **Although the success rates of these GCRL algorithms are less than 50%, the difference between using and not using GCRL is significant.** The table below shows some comparison results (data from Table 2 in the manuscript). 
Taking MEGA as an example, when using MEGA to sample behavioral goals during training, the algorithm's success rate can be increased from 38.31% to 48.62%, achieving a 26.91% improvement.\\n\\n |Algorithm|GCRL methods|Success Rate|\\n |:-:|:-:|:-:|\\n |SAC|w/o|1.08\\u00b10.48|\\n |SAC|w/ HER|8.32\\u00b11.86|\\n |PPO|w/o|0.04\\u00b10.03|\\n |PPO|w/ GCBC|38.31\\u00b11.62|\\n |PPO|w/ GCBC+MEGA|48.62\\u00b12.35|\\n\\n3. We believe that for academic research, the difficulty of the task should progress in tandem with the research on algorithms. **The task should have appropriate levels of difficulty to properly evaluate different algorithms** [1]. If the task is too easy, too hard, or unsolvable, it will fail to provide a useful signal for benchmarking [1]. Therefore, we believe that the current success rates of GCRL algorithms on VVC being less than 50% is helpful for researchers to discover more insights when designing algorithms.\\n\\n[1] Park S, Frans K, Eysenbach B, et al. OGBench: Benchmarking Offline Goal-Conditioned RL[J]. arxiv preprint arxiv:2410.20092, 2024.\"}", "{\"comment\": \"> `W2: If conventional algorithms such as PID/MPC can already solve the task, why should we use RL for this problem? Maybe we can significantly reduce the complexity of the problem by planning?`\\n\\nWe utilize PID as an example to compare classical control methods with RL. We believe that in the complex nonlinear problem of fixed-wing UAV's velocity vector control, PID struggles to provide high-quality policies. PID is suitable for relatively simple linear systems and models with known dynamics, whereas RL is suitable for complex, dynamic, nonlinear systems, especially when models are unknown. 
**Therefore, we consider PID and RL to have different areas of application.** **The control of fixed-wing UAV is a typical nonlinear problem [1], and PID struggles to provide high-quality solutions.** In our environment, the PID controller (detailed in Appendix C) achieves only a 20.08% success rate, while the best RL policy achieves a 71.68% success rate. Numerous studies in the field of fixed-wing UAV have also found that RL can yield better policies than PID [1,2,3].\\n\\nIn our research, we find that although the success rate of PID is not as good as that of the RL policy, **the data sampled by the PID controller can serve as demonstrations to assist RL in training**. The following table (results from Table 2 and Table 3 of the manuscript) compares the success rates of PID with RL policies trained under different conditions.\\n\\n||PID|RL w/o Pre-train|Pre-train w/ BC|Fine-tune w/ RL|\\n|:-:|:-:|:-:|:-:|:-:|\\n|Success Rate (%)|20.08|0.04\\u00b10.03|17.08\\u00b10.57|**38.31\\u00b11.62**|\", \"it_can_be_observed_that\": \"* The PID controller has a relatively low success rate.\\n* If trained from scratch, the RL policy can barely achieve any goals within a limited training budget.\\n* Although the success rate of the policy pre-trained on demonstrations is lower than that of PID, when the policy is fine-tuned with RL, its success rate far exceeds that of PID.\\n\\nIn summary, in the complex nonlinear problem of fixed-wing UAV's velocity vector control, PID, due to its limitations, struggles to achieve goals with high quality, but the data sampled by PID can be used as demonstrations to assist RL in training well-performing policies.\\n\\n[1] B\\u00f8hn E, Coates E M, Reinhardt D, et al. Data-efficient deep reinforcement learning for attitude control of fixed-wing UAVs: Field experiments[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 35(3): 3168-3180.\\n\\n[2] Koch III W F. Flight controller synthesis via deep reinforcement learning[D]. 
Boston University, 2019.\n\n[3] Eckstein F, Schiffmann W. Learning to fly\u2013building an autopilot system based on neural networks and reinforcement learning[D]. Master's thesis. FernUniversit\u00e4t Hagen, Hagen, Germany, 2020.\"}", "{\"summary\": \"The paper presents a Multi-Goal Long Horizon RL Environment based on Fixed Wing UAV Velocity Vector control. The paper further provides a set of various demonstrations, analyzing the quantity and quality of the demonstrations and their effect on training. The paper claims that the environment presented is suitable for studying curriculum learning that combines imitation learning and RL, and the influence of the environment designs on multi-goal long-horizon problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper provides a detailed analysis of the presented environment, including evaluation of Demonstrations and training of various RL Algorithms with and without Curriculum Methods.\n\nClear problem formulation and transition function.\n\nDetailed Appendix and Supplementary Materials containing demonstrations.\", \"weaknesses\": [\"It is unclear what the novelty of the environment is compared to other simulation based fixed wing control environments.\", \"It is unclear if the machine generated demonstrations can be substituted by human demonstrations.\"], \"questions\": [\"It is unclear how the environment would work with the presence of Adversarial Fixed Wing Agents [1, 2].\", \"Is the environment applicable to Inverse Reinforcement Learning applications?\", \"How much reward engineering is required to train an agent to learn a different policy than what is provided?\", \"What are the challenges of transferring models trained in this environment to real world UAVs?\", \"[1] Strickland L. G., \u201cCoordinating Team Tactics for Swarm-vs-Swarm Adversarial Games,\u201d Ph.D. Thesis, Georgia Inst.
of Technology, Atlanta, GA, July 2022, [https://smartech.gatech.edu/handle/1853/67090](https://smartech.gatech.edu/handle/1853/67090)\", \"[2] B. Vlahov, E. Squires, L. Strickland and C. Pippin, \\\"On Developing a UAV Pursuit-Evasion Policy Using Reinforcement Learning,\\\" 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 2018, pp. 859-864, doi: 10.1109/ICMLA.2018.00138.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> `W2: It is unclear if the machine generated demonstrations can be substituted by human demonstrations.`\\n\\n**We believe that human demonstrations cannot replace machine-generated demonstrations. Both of them play a crucial role in RL training.** The design of the PID controller encapsulates expert knowledge, and while the collected demonstrations may not be optimal, it offers good interpretability and is suitable for data collection on physical systems. Moreover, it allows for the collection of a large amount of demonstrations. Compared to PID, human expert data may be of higher quality in terms of achieving desired goals but is limited in quantity and may contain some jitter[1]. Therefore, as two manifestations of human expert knowledge, collecting from human and human-designed classical controllers, such as PID, each have their own advantages and disadvantages.\\n\\nWe provide further experiment results on human demonstrations below. We have demonstrations collected from human experts, totaling 613 demonstrations, which corresponds to approximately 2.5 hours of data. [(link)](https://github.com/ForAnonymousUse/ForICLR2025/tree/main/pilot_trajs) provides four screenshots of these human demonstrations. We trained policies with GCBC+PPO on these human play demonstrations and compared their performance with policies trained on demonstrations generated by the PID controller. 
The results of this comparison are presented in the table below.\\n\\n|Data Source|Data Collecting Time|Demonstration Quantity $\\\\uparrow$|Average Demonstration Length $\\\\downarrow$|Policy Success Rate $\\\\uparrow$|\\n|:-:|:-:|:-:|:-:|:-:|\\n|Human|2.5(h)|613|143.71\\u00b123.91|0.29\\u00b10.02|\\n|PID|10.1(min)|10264|281.83\\u00b1149.48|0.38\\u00b10.02|\", \"the_results_indicate_that\": \"* Human demonstrations have shorter lengths (indicating faster goal achievement), thus they are of higher quality.\\n* Collecting 613 human demonstrations took approximately 2.5 hours, whereas collecting 10264 demonstrations with PID took only 10.1 minutes. Therefore, using PID allows for more efficient collection of demonstrations.\\n* In terms of policy performance, demonstrations generated by the PID controller can assist RL in achieving higher success rates. On the other hand, although the number of human demonstrations is only $613 / 10264 = 5.97$% of the PID demonstrations, they can assist the RL policy in achieving a success rate that is $0.29 / 0.38 = 76.32$% of the PID.\\n\\nIn summary, in our scenario, human demonstrations are fewer but of higher quality, while machine-generated demonstrations are of lower quality but in greater quantity. The above experiments show that both forms of demonstrations can effectively assist in RL training. Therefore, we believe that human demonstrations cannot replace machine-generated demonstrations. Both of them play a crucial role in RL training.\\n\\n> `Q1: It is unclear how the environment would work with the presence of Adversarial Fixed Wing Agents [1, 2].`\\n\\nIn this manuscript, we have not considered the topic of adversarial agents in our contributions.\\n\\nFrom an algorithmic perspective, we aim to provide Goal-Conditioned RL researchers with an environment, dataset, and baselines that facilitate the study of multi-goal long-horizon problems. 
From an application perspective, we focus on solving the inner-loop control problem of a single fixed-wing UAV.\\n\\nAlthough we have not yet focused on multi-agent adversarial scenarios, our environment supports extension to multi-agent environments. Based on the current architecture (detailed in Fig.1 of our manuscript), a _situation management module_ can be established above the task module to manage multiple agents. We recommend referring to [1] for reward design and training adversarial RL agents.\\n\\n[1] https://github.com/liuqh16/LAG\\n\\n> `Q2: Is the environment applicable to Inverse Reinforcement Learning applications?`\\n\\nDue to the inclusion of demonstrations, VVC-Gym is applicable for studying Inverse Reinforcement Learning (IRL). We provide training scripts for two IRL algorithms on VVC-Gym: modeling reward function with Kernel Density [1], AIRL [2]. The training scripts are available at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/tree/main/eval_on_irl), the training logs are available at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/tree/main/eval_on_irl/logs), and the visualizations of the training results are available at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/tree/main/eval_on_irl/screenshots).\\n\\nThank you again for your suggestion. We will continue to expand on these experiments and include the relevant content in our manuscript.\\n\\n[1] https://imitation.readthedocs.io/en/latest/algorithms/density.html\\n\\n[2] Fu J, Luo K, Levine S. Learning Robust Rewards with Adverserial Inverse Reinforcement Learning[C]//International Conference on Learning Representations. 2018.\"}", "{\"summary\": \"This paper proposes a multi-goal long-horizon Reinforcement Learning (RL) environment based on realistic fixed-wing UAV velocity vector control, named VVC-Gym. 
The proposed environment is studied through various ablation studies using different Goal-Conditioned RL (GCRL) algorithms and is equipped with multi-quality demonstration sets. Baselines are also provided on the environment.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed benchmark is useful for the GCRL community.\\n\\n2. The benchmark is equipped with demonstrations with different quality and several baseline algorithms. \\n\\n3. The influence of the environment-related parameters is studied via ablations.\\n\\n4. Baseline methods are provided with the benchmark.\\n\\n5. The paper is well-written and easy to read.\", \"weaknesses\": [\"1. Some writing issues:\", \"The citation format is not very suitable. The authors may consider using `\\\\citep` for most of the citations.\", \"Line 349 and Line 357: I guess Fig. 2a and Fig. 2b should be Table. 2a and Table. 2b.\", \"Figure 5 is provided but not mentioned in the text.\", \"2. The only task provided is the velocity vector control. If various types of different tasks are provided, the paper will have a larger influence and contribution.\"], \"questions\": \"1. In Fig. 6c, we can conclude that there are several distinct stages in training where different termination conditions are triggered. However, what can we conclude from Fig. 6d?\\n\\n2. Seems that the success rates of all the baseline methods are often low (less than 50%). What might be the possible reasons that the GCRL algorithms fail to achieve a higher success rate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer HNY5,\\n\\nWe sincerely appreciate the time and effort you invested in reviewing our manuscript. Your insightful comments and concerns have greatly contributed to the improvement of our paper. 
We are sure to add these discussions to our final version.\\n\\nThank you once again for your valuable feedback.\"}", "{\"comment\": \"We sincerely appreciate your careful review of our paper and valuable feedback. We address your concerns below.\\n\\n> `W1: It is unclear what the novelty of the environment is compared to other simulation based fixed wing control environments.`\", \"existing_fixed_wing_simulators_that_support_rl_training_include\": \"Fixed-Wing-Gym [1], Gym-JSBSim [2], Markov-Pilot [3], LAG [4], etc. We discuss the differences between VVC-Gym and these simulators from two perspectives: (1) the simulator itself, and (2) support for RL research:\\n\\n1. **From the perspective of the simulator itself**: Our goal is to provide RL researchers with a more extensible and computationally efficient simulator. This allows RL researchers to customize VVC-Gym to their tasks of interest and efficiently validate their algorithm designs.\\n\\n (1) **Scalability**. We have made the scalability of VVC-Gym the most important design objective from the beginning. Researchers can easily extend VVC-Gym to investigate new control tasks, new fixed-wing UAV models, and generate new demonstrations, among other applications.\\n\\n * *Using other aircraft models*: VVC-Gym uses an open-source, more realistic fixed-wing aircraft model, and can replace the aircraft model with other open-source aircraft models [(link)](https://mirrors.ibiblio.org/flightgear/ftp/Aircraft-2020/) according to actual needs.\\n * *Defining new tasks*: Task and Simulator are decoupled (overall architecture as shown in Fig.1), so when solving new types of tasks, such as Attitude Control, Basic Flight Maneuvers, etc., only new tasks need to be added without modifying existing source code. We have provided some of these new tasks, which can be found at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/blob/main/VVC-Gym.zip) (locate at directory VVC-Gym/vvcgym/tasks/).\\n\\n (2) **Efficiency**. 
For RL researchers, the computational efficiency of the simulator is crucial as it affects the efficiency of validating algorithm designs. Since the core of fixed-wing UAV simulation lies in the computation of the Equations of Motion (EoM), VVC-Gym employs C++ for the EoM calculations, which is more computationally efficient. To demonstrate the computational efficiency of VVC-Gym, we evaluate the FPS of VVC-Gym and Fixed-Wing-Gym on the machine described in Appendix A, and the results are as follows:\\n\\n ||Fixed-Wing-Gym|VVC-Gym|\\n |:-:|:-:|:-:|\\n |FPS|1325|2356|\\n\\n _**Note**: The above training used the PPO algorithm from the StableBaselines framework, with 64 rollout workers, and $10^6$ environmental steps._\\n\\n It can be seen that VVC-Gym has a much higher computational efficiency than the Python-based Fixed-Wing-Gym. Additionally, in Appendix A, we also compared the FPS of VVC-Gym with commonly used RL environments, and the results show that VVC-Gym achieves the sampling speed of environments commonly used in academic research and even surpasses several of them.\\n\\n2. **From the perspective of supporting RL algorithm research**:\\n\\n ||VVC-Gym|Fixed-Wing-Gym|Gym-JSBSim|Markov-Pilot|LAG|\\n |:-:|:-:|:-:|:-:|:-:|:-:|\\n |Task|VVC, AC, BFMs|AC|Level turn|Gliding descent|AC, 1v1, 2v2|\\n |MDP type|Goal-Augmented MDP, standard MDP|standard MDP|standard MDP|standard MDP|standard MDP, Multi-agent MDP|\\n |Demonstrations|\\u221a|\\u2613|\\u2613|\\u2613|\\u2613|\\n |RL baselines|PPO, SAC, HER, GCBC, GCBC+PPO, MEGA, RIG, DISCERN|PPO|PPO|DDPG|PPO, MPPO|\\n\\n _**Note**: VVC is the abbreviation for Velocity Vector Control, AC is the abbreviation for Attitude Control, and BFMs is the abbreviation for Basic Flight Maneuvers._\", \"it_can_be_seen_that\": \"Firstly, in terms of tasks, VVC-Gym is the first publicly available RL environment for velocity vector control. Secondly, only VVC-Gym is modeled as a Goal-Augmented MDP, which better supports GCRL research. 
Thirdly, only VVC-Gym is accompanied by demonstrations, which can support research on demonstration-based RL. Fourthly, VVC-Gym provides baselines for 8 GCRL algorithms, while other environments only provide baselines for 1 to 2 RL algorithms. In summary, VVC-Gym provides stronger support for conducting GCRL research.\\n\\n[1] B\\u00f8hn E, Coates E M, Moe S, et al. Deep reinforcement learning attitude control of fixed-wing uavs using proximal policy optimization[C]//2019 international conference on unmanned aircraft systems (ICUAS). IEEE, 2019: 523-533.\\n\\n[2] Rennie G. Autonomous control of simulated fixed wing aircraft using deep reinforcement learning[J]. 2018.\\n\\n[3] Eckstein F, Schiffmann W. Learning to fly\\u2013building an autopilot system based on neural networks and reinforcement learning[D]. Master's thesis. FernUniversit\\u00e4t Hagen, Hagen, Germany, 2020.\\n\\n[4] https://github.com/liuqh16/LAG\"}", "{\"comment\": \"Thanks for uploading the revised manuscript. I'll keep my positive score.\"}", "{\"comment\": \"Thank you for your valuable feedback and prompt response. We have added clarifications for Fig. 6(d) in lines 418-427, corrected the three writing issues, and introduced a new section (Appendix J) that discusses the challenges of the VVC task and presents relevant experimental results. We have re-uploaded the manuscript. We will continue to proofread our manuscript and ensure to introduce the new tasks discussed above in the final version. We are grateful once again for your careful review of our manuscript.\"}", "{\"comment\": \"Dear Reviewer HNY5:\\n\\nWe are approaching the conclusion of the author-reviewer discussion period. Should you have any further questions or require clarification on any points, please feel free to reach out. We are committed to addressing your queries promptly. 
We greatly value your feedback and look forward to your insights.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your valuable feedback on our work. We appreciate the opportunity to address your concerns.\\n\\n> `Q1: Generally, I strongly recommend several papers, as shown below, in which authors can learn how to improve academic writing skills and organize corresponding ideas from them.`\\n\\nThank you for your suggestions on our manuscript. Our current writing thought is structured as follows: we begin by introducing the concept of multi-goal long-horizon problems, followed by an exploration of the challenges that arise when attempting to solve these problems using Goal-Conditioned Reinforcement Learning (GCRL). We then explain why current multi-goal environments fall short in supporting GCRL researchers in tackling these challenges. Subsequently, we present VVC-Gym in detail, outlining its MDP definition and demonstrations. Finally, we illustrate how VVC-Gym can facilitate various research endeavors for GCRL researchers through the introduction of baselines and the conduct of ablation studies.\\n\\nAdditionally, in the appendix, we provide a detailed introduction to Goal-Augmented MDP (the mathematical definition of the problems solved by GCRL), the calculation of aircraft aerodynamic equations, the generation method of demonstrations, and more ablation studies, etc.\\n\\nThank you for your suggestions again. We will add one additional page to the manuscript and adjust the sequence of some content in the main text and Appendix to ensure the coherence of the paper.\"}", "{\"comment\": \"I would like to thank the authors for their detailed response to the questions regarding applications and novelty. I have updated my scores.\"}", "{\"comment\": \"Dear Reviewer ecAp:\\n\\nWe are approaching the conclusion of the author-reviewer discussion period. 
Should you have any further questions or require clarification on any points, please feel free to reach out. We are committed to addressing your queries promptly. We greatly value your feedback and look forward to your insights.\"}", "{\"comment\": \"> `W1: Moreover, I believe this problem might belong to the multi-objective multi-agent area. Please check the paper: R\\u0103dulescu, R., Mannion, P., Roijers, D.M. et al. Multi-objective multi-agent decision making: a utility-based analysis and survey. Auton Agent Multi-Agent Syst 34, 10 (2020).`\\n\\n**We focus on single-agent multi-goal problems:**\\n\\n1. **Our manuscripts focuses on \\\"multi-goal RL\\\"**. There is a fundamental difference between \\\"multi-goal RL\\\" and \\\"multi-objective RL\\\":\\n\\n* **Multi-Objective RL (MORL)**: In MORL, the reward is a vector [1]. In other words, the agent pursues multiple objectives within a single episode [1]. A simple example is that the agent must maximize a reward $r_1 \\\\in [0, 1]$ indicating the completion of the task and minimize a penalty $r_2 \\\\in [-1,0]$ based on control gain or energy consumption [2]. The reward is a vector $\\\\mathbf{r} = [r_1,r_2]$. Since RL algorithms rely on scalar rewards, a preference function $f$ is needed to convert the vector reward into a scalar for optimization. In the example, a linear preference function $f(\\\\mathbf{r}) = 0.8 r_1 + 0.2 r_2$ can be used to transform the reward vector into a scalar reward. In MORL, the policy is trained based on the reward vector and the preference function. When the preference is known, the problem reduces to standard RL. When the preference is unknown, the main task of MORL is to find Pareto-optimal solutions [1].\\n\\n* **Multi-goal RL** (also known as **goal-conditioned RL** or **goal-oriented RL**): The reward remains scalar [3]. In multi-goal RL, the agent pursues a single objective within an episode, which is to achieve a specific desired goal. 
The concept of \\\"multi-goal\\\" is reflected in that the agent may be required to achieve different desired goals across different episodes. The objective of \\\"multi-goal RL\\\" is to obtain a policy that is capable of achieving any desired goal. For example, a robotic arm may be tasked with reaching a point 1 meter to the left (goal $g_1$) in one episode and a point 0.5 meters to the right (goal $g_2$) in another episode. We aim for an agent capable of achieving any desired goal ($\\\\pi$ that can achieve both $g_1$ and $g_2$), rather than using different agents for different goals ($\\\\pi_1$ for $g_1$ and $\\\\pi_2$ for $g_2$). From the reward type perspective, multi-goal RL is consistent with standard RL, both being scalar. The only difference is that multi-goal RL uses a goal-conditioned reward $r_g : \\\\mathcal{S} \\\\times \\\\mathcal{A} \\\\rightarrow \\\\mathbb{R}, g \\\\in \\\\mathcal{G} $. The objective of multi-goal RL becomes optimizing the standard RL objective over the desired goal distribution, $ E_{g \\\\sim p_{dg}, \\\\pi}[ \\\\sum_{t=0}^{\\\\infty} \\\\gamma^t r_g (s_t) ] $.\\n\\n* **Velocity Vector Control**: The objective is to enable the fixed-wing UAV to achieve any desired goal velocity vector. However, at any given moment, the fixed-wing UAV has only one goal velocity vector to pursue. Therefore, Velocity Vector Control falls under the category of typical multi-goal problems.\\n\\n**In summary, multi-objective RL and multi-goal RL represent different RL research fields. Velocity Vector Control is a multi-goal problem, and Our focus is on multi-goal RL.**\\n\\n2. Our manuscript does not include contents related to multi-agent settings. From the application perspective, **we focus on solving the control problem of _a single fixed-wing UAV_**.\\n\\n[1] Yang R, Sun X, Narasimhan K. A generalized algorithm for multi-objective reinforcement learning and policy adaptation[J]. 
Advances in neural information processing systems, 2019, 32.\\n\\n[2] B\\u00f8hn E, Coates E M, Reinhardt D, et al. Data-efficient deep reinforcement learning for attitude control of fixed-wing UAVs: Field experiments[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 35(3): 3168-3180.\\n\\n[3] Liu M, Zhu M, Zhang W. Goal-conditioned reinforcement learning: Problems and solutions[J]. IJCAI International Joint Conference on Artificial Intelligence, 2022.\"}", "{\"summary\": \"This paper propose a multi-goal long-horizon Reinforcement Learning (RL) environment based on realistic fixed-wing UAV\\u2019s velocity vector control, named VVC-Gym, and generate multiple demonstration sets of various quality. I suggest that the authors improve their academic writing skills, especially in organizing their ideas properly and expressing their points of view.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper propose a multi-goal long-horizon Reinforcement Learning (RL) environment based on realistic fixed-wing UAV\\u2019s velocity vector control, named VVC-Gym, and generate multiple demonstration sets of various quality.\", \"weaknesses\": \"I suggest that the authors improve their academic writing skills, especially in organizing their ideas properly and expressing their points of view. For example, the paper first presents the concept of multi-goal long-horizon problems. It does not clearly define the problem or list related works to support the issues, but it spends many words discussing the GCRL, which confuses me about the paper's writing logic. If the authors think the multi-goal long-horizon problems belong to the GCRL, they need to discuss the background of the GCRL and the current state-of-the-art. Then, they need to analyze their relationships and provide recent research on the challenge. Moreover, I believe this problem might belong to the multi-objective multi-agent area. 
Please check the paper:\\nR\\u0103dulescu, R., Mannion, P., Roijers, D.M.\\u00a0et al.\\u00a0Multi-objective multi-agent decision making: a utility-based analysis and survey.\\u00a0Auton Agent Multi-Agent Syst\\u00a034, 10 (2020).\\n\\nI feel it is more like a technical paper introducing a platform in reinforcement learning than a research paper. Additionally, I suggest the authors add more baseline algorithms to their experiments, not just PPO.\", \"questions\": \"Generally, I strongly recommend several papers, as shown below, in which authors can learn how to improve academic writing skills and organize corresponding ideas from them.\\n\\n1) Yang, Q., & Parasuraman, R. Bayesian strategy networks based soft actor-critic learning. ACM Transactions on Intelligent Systems and Technology (TIST).\\n\\n2) Mannion P, Devlin S, Duggan J, Howley E. Reward shaping for knowledge-based multi-objective multi-agent reinforcement learning. The Knowledge Engineering Review. 2018\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer ecAP,\\n\\nWe wish to remind you that the author-reviewer discussion period concludes in only one day. We kindly request that you review the rebuttal. Should you have further questions or concerns, please rest assured that we are committed to addressing them promptly. Your feedback is immensely valuable to us, and we eagerly await your insights.\"}", "{\"summary\": \"This paper presents VVC-Gym, a fix-wing UAV RL environment for multi-goal long-horizon tasks. The environment is built upon real dynamics and rooted in a real-world UAV control problem. 
Various simulation shows VVC-Gym is suitable for studying the influence of environment designs on multi-goal long-horizon problems, the necessity of expert demonstrations on the task and suitable RL algorithm factors for the environment.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Fix-wing UAV control inherently requires multi-goal and long-horizon RL. The biggest strength to me is that the problem and the environment is based on real need, not simulations. I also appreciate the explorations of the authors take on how to design the environment and how to make RL work in this context.\", \"weaknesses\": \"1. I am not a researcher on fixed-wing UAV control and multi-goal long-horizon problem. The biggest problem for me is that while the authors present an environment for future researchers, they failed to give a clear guidance on how to use this environment for benchmark. For example, as a user of the environment, I want to know:\\n\\n* What are the number of tasks in this environment, and what are their difficulties, respectively? Seems to me there is only one task. In this way, users might not use this environment in their study since it is likely to be rejected by other reviewers by the reason like \\\"small amount of experiments\\\".\\n\\n* What are the recommended configuration, hyperparameters and algorithms for me to start with, for researchers in the field of (1) GCRL (2) demonstration-based RL and (3) fixed-wing UAV control?\\n\\n* As for demonstrations, if I want to generate trajectories to support my own task, do I have tools for that in the environment? Are there standardized demonstrations recommended by the authors?\\n\\n2. If conventional algorithms such as PID/MPC can already solve the task, why should we use RL for this problem? Maybe we can significantly reduce the complexity of the problem by planning?\\n\\n3. 
After searching on google, I find some repo doing similar tasks like this: https://github.com/liuqh16/LAG, which is also based on fixed wings. Can you also discuss existing simulators of fixed-wing UAV control?\\n\\n4. Some minors: in page 7, ... with the results presented in Fig. 2b, which should be Table. 2b. Please also check others.\\n\\nTo sum up, as RL researchers, we always need real-world environments for simulations and I appreciate the large amount of work by authors to develop this environment and make it work. However, I'm not sure how to use the environment without pain. Maybe the authors could revise the paper by giving more recommendations to users?\", \"questions\": \"Please see \\\"Weakness\\\" section. I will read the rebuttal and adjust scores in reviewer-author discussion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> `Q3: How much reward engineering is required to train an agent to learn a different policy than what is provided?`\", \"we_address_your_concern_from_two_aspects\": \"1. The specific workload involved in our reward engineering; 2. The reason behind designing the reward function in a seemingly complex manner.\\n\\n1. This is a practical question. We believe that the design of the reward should align with the actual application, so the workload of reward engineering depends on the requirements of the application. Taking the fixed-wing UAV's velocity vector control (VVC) as an example, our reward engineering work mainly covers the content shown in the table below. 
It can be seen that we conducted a total of $5 \\\\times 1 \\\\times 5= 25$ groups of experiments to find the optimal parameter combinations in the reward function.\\n\\n |Parameters in Reward|Physical meaning of parameters|Whether to search|Search set|\\n |:-:|:-:|:-:|:-:|\\n |$w_d, w_v$|Weights of the error of direction and magnitude of the velocity vector|\\u221a|(0.25, 0.75), (0.375, 0.625), (0.5, 0.5), (0.625, 0.375), (0.75, 0.25)|\\n |$\\\\delta_d, \\\\delta_v$|Normalization factors for the error of direction and magnitude of the velocity vector|\\u2613|(180.0, 100.0)|\\n |b|Exponential scaling factor for the error of direction and magnitude of the velocity vector|\\u221a|0.25, 0.5, 1.0, 2.0, 4.0|\\n\\n Among them, the best-performing set is $w_d=0.75, w_v=0.25, b=0.5$. We use this set of parameters as the default parameters for VVC-Gym. We believe that **for GCRL or RL researchers, using the default parameters we provide is sufficient**.\\n\\n2. The reward form commonly used in multi-goal environments is $r(s) = -d(s,g)$ [1], where $d(\\\\cdot, \\\\cdot)$ is some distance metric. We found that this reward is very broad and does not work well when directly applied to VVC. The reasons are: (1) The importance of eliminating errors in various parts of the goal is different; (2) The rate of change of the reward function on the error ($b$) needs to be comprehensively designed according to factors such as the length of the control sequence and the precision of judgment of arrival (we have detailed discussions in Section 3.3 of the manuscript). Through experiments, we found that these different settings of the reward will simultaneously affect the training process and the final results. Therefore, we hope that **VVC-Gym can serve as a testbed for RL researchers to explore the impact of different reward settings on training**.\\n\\n[1] Liu M, Zhu M, Zhang W. Goal-conditioned reinforcement learning: Problems and solutions[J]. 
IJCAI International Joint Conference on Artificial Intelligence, 2022.\\n\\n> `Q4: What are the challenges of transfering models trainined in this environment to real world UAVs?`\", \"we_list_three_typical_challenges_below\": \"1. VVC-Gym employs an open-source UAV model [1], so we need to consider the differences between the UAV model in the environment and real-world UAVs.\\n2. We have not yet incorporated environmental dynamic factors into VVC-Gym, such as changes in wind, air temperature, and humidity. Before deploying the model to real-world UAVs, it is necessary to thoroughly evaluate the robustness of the model.\\n3. It is necessary to consider the computational capabilities of the onboard computer to ensure that it can complete the model's inference within the physical time of an environment step.\\n\\nThank you again for your suggestions. We will incorporate this discussion into our manuscript to help application-focused researchers understand the challenges of applying VVC-Gym in a production environment.\\n\\n[1] https://mirrors.ibiblio.org/flightgear/ftp/Aircraft-2020/\"}", "{\"metareview\": \"This paper provides a new benchmark for fixed wing aircrafts solved as multi-goal long horizon RL, with velocity control. The authors provide an easy to use benchmark, demonstrations, RL and IRL algorithms and perform a detailed empirical study.\", \"strengths\": \"Reasonable benchmark which is not saturated, and potentially useful for the GCRL and RL community\\nWell designed benchmark with lots of baselines and ablations\", \"weaknesses\": \"Given this is not a typically used benchmark, more motivation is needed on why this benchmark should be adopted. \\nA larger range of tasks could be useful\\n\\nOverall this benchmark was well appreciated by the reviewers, seem like a benchmark to add to the set of standard GCRL evals. 
I would suggest the authors work with farama foundation and Gymnasium or with other folks to incorporate this into standard evals.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers did have questions about why this benchmark is general and broadly applicable, how it compares to other fixed wing benchmark, what the demos look like and also how this compares to standard methods. Moreover they made some great suggestions about how to make the system more usable. Reviewer ecAp brought up concerns about writing quality, but did not make concrete and actionable suggestions. This was downweighted.\"}", "{\"comment\": \"> `W: The only task provided is the velocity vector control. If various types of different tasks are provided, the paper will have a larger influence and contribution.`\\n\\nThank you for your suggestion. In this paper, our main goal is to provide the GCRL community with a task of appropriate difficulty level, along with demonstrations and baselines. Therefore, we choose the velocity vector control task. As you suggested, providing various types of different tasks can indeed make more contributions to the RL community. **We have extended VVC-Gym based on existing interfaces, providing attitude control and several Basic Flight Maneuvers (BFM) tasks**, including level turn and barrel roll. The extended code can be found at [(link)](https://github.com/ForAnonymousUse/ForICLR2025/blob/main/VVC-Gym.zip). These new tasks locate at directory *VVC-Gym/vvcgym/tasks/*. Additionally, we are working to provide more BFM tasks, including Half Cuban Eight, Immelmann, and others.\"}" ] }
5xP1HDvpXI
Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs
[ "Haowen Pan", "Xiaozhi Wang", "Yixin Cao", "Zenglin Shi", "Xun Yang", "Juanzi Li", "Meng Wang" ]
Knowledge editing aims to update outdated information in Large Language Models (LLMs). A representative line of study is locate-then-edit methods, which typically employ causal tracing to identify the modules responsible for recalling factual knowledge about entities. However, we find these methods are often sensitive only to changes in the subject entity, leaving them less effective at adapting to changes in relations. This limitation results in poor editing locality, which can lead to the persistence of irrelevant or inaccurate facts, ultimately compromising the reliability of LLMs. We believe this issue arises from the insufficient precision of knowledge localization. To address this, we propose a Fine-grained Neuron-level Knowledge Editing (FiNE) method that enhances editing locality without affecting overall success rates. By precisely identifying and modifying specific neurons within feed-forward networks, FiNE significantly improves knowledge localization and editing. Quantitative experiments demonstrate that FiNE efficiently achieves better overall performance compared to existing techniques, providing new insights into the localization and modification of knowledge within LLMs.
[ "Large Language Models", "Transformers", "Model Editing", "Neural Networks", "Neuron Activation" ]
Accept (Poster)
https://openreview.net/pdf?id=5xP1HDvpXI
https://openreview.net/forum?id=5xP1HDvpXI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "r3N9znNQJt", "qnZDoDUzoM", "lFF1WRPz7t", "iQcoBsjiEt", "fJbepJPWA1", "bECLXTafqI", "ZvONINnbmP", "ZfQYQURMrW", "VGntOgaVOQ", "ToioYdk1hp", "NU6ygqYJIs", "EZAgf0XU97", "9rRU7BYnCJ", "9Fsr0Y4lx2", "6Bi6EYms2r", "5kV9UwfPlF", "4uy24JS8CD", "4Zf4WxEQYL", "4SyQ6c6oiN" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732646550789, 1732565765077, 1732154799931, 1730650401463, 1732155267368, 1737523738078, 1732174987769, 1732732406230, 1732154585690, 1730345722142, 1732583238591, 1732155089423, 1732154739604, 1732154423685, 1734636180338, 1732167942609, 1732499098976, 1730301277964, 1730231706491 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_z62H" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_AZce" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_z62H" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_AZce" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_b16A" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Area_Chair_nKEA" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_b16A" ], [ "ICLR.cc/2025/Conference/Submission6006/Authors" ], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_mBEd" 
], [ "ICLR.cc/2025/Conference/Submission6006/Reviewer_AZce" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your response. Some concerns have been addressed, and now I have no more questions.\"}", "{\"comment\": \"Thank you for the detailed response! I have one more question for clarification: does FiNE require neuron localization for each individual piece of knowledge that needs to be edited?\"}", "{\"title\": \"Response to Reviewer mBEd\", \"comment\": \"We appreciate your thoughtful comments and have addressed your concerns as follows.\\n\\n> Regarding Weakness 1.\\n\\nWe've now added our code as part of the supplementary material. Apologies for not including it as part of the initial submission.\\n\\n> Regarding Weakness 2.\\n\\nThanks for your comment. We have revised Figures 1, 3, and 8 for better readability. As for the scale of the plots in Figures 2 and 5, since the metrics of different subplots are different and incomparable, we believe the current plots are preferable. Using the same scale across them might render certain variations less noticeable.\\n\\n> Regarding Weakness 3.\\n\\nThank you for pointing this out. We have revised Table 3 to add Fluency. Besides, \\\"Succ.\\\" is the same as \\\"Edit Succ.\\\". We have revised the phrasing that could lead to misunderstandings and standardized it as \\\"Edit Succ.\\\".\\n\\n> Regarding Question 1.\\n\\nWe have defined \\\"over-editing rates\\\" and \\\"unchanging rates\\\" in Appendix B (Line 756). In short, \\\"over-editing rates\\\" calculates the proportion of responses in which LLMs still answer with the editing target object, indicating excessive editing, and \\\"unchanging rates\\\" represents the proportion of responses that remain consistent with answers prior to editing.\\n\\n> Regarding Question 2.\\n\\nYes, for a given piece of knowledge, FiNE may select the same neuron multiple times. However, this does not affect the construction of the neuron set. 
We retain repeated neurons, as this may indicate that these neurons are of greater importance.\"}", "{\"summary\": \"The paper introduces a Fine-grained Neuron-level Knowledge Editing (FiNE) method for Large Language Models (LLMs). The authors identify limitations in existing locate-then-edit methods, such as poor locality and over-reliance on subject entities, leading to ineffective knowledge updates. FiNE addresses this by targeting individual neurons within feed-forward networks (FFNs) responsible for specific knowledge, thereby enhancing the precision of knowledge localization. Quantitative results demonstrate FiNE's superior performance in terms of edit success, locality, and efficiency compared to state-of-the-art methods like ROME and MEMIT.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: The paper presents a clear and well-structured exposition of the motivation and background knowledge, making it easy to understand the context of the proposed method and enhancing the impact of its contributions.\", \"s2\": \"The method greatly reduces the number of modified parameters compared to existing approaches (e.g., ROME, MEMIT), making it faster and more memory-efficient.\", \"s3\": \"The paper focuses on the precise localization of knowledge. The use of extensive benchmarks, ablation studies, and metrics (e.g., locality, fluency) offers a thorough validation of FiNE's effectiveness.\", \"weaknesses\": \"W1: My primary concern is about the novelty. The paper draws on previous work for calculating contribution scores in multi-modal LLMs and extends the approach of modifying neuron parameters within the FFNs. The description in the Methodology section is relatively brief. 
Could you please clarify the main contributions of this paper?\", \"w2\": \"The paper assumes that the FFN layer is the primary location for knowledge storage but lacks discussion on whether components like the self-attention layer also play a significant role in knowledge representation and generation when conducting knowledge editing.\", \"questions\": \"Q1: One advantage of FiNE is its smaller number of modified parameters. Could this be a major reason for its significant lead over other methods in the Locality metric?\", \"q2\": \"In Table 9, under the LLaMA-2 model in the Locality - FA column, it seems that the highest value should be 73.7, achieved by PMET, rather than 72.1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": [\"Dear reviewers,\", \"We want to thank you for your thorough and detailed reviews. We believe your suggestions have improved our paper. In the following, we summarize the changes made to the revision, which are marked in $\\\\text{\\\\textcolor{red}{red}}$. We have also responded to each of your concerns under your respective reviews.\", \"We have added some missing references to the introduction. (@R b16A)\", \"We have added pseudocode in Section 4.1 to help understand our procedure of neuron localization. (@R b16A)\", \"We have revised the colors of Figures 1, 3, and 8 for better readability. (@R mBEd)\", \"We have revised our tables to correctly show the results and revised the phrasing to avoid misunderstandings. (@R z62H, b16A, mBEd)\", \"We have added new lines in Table 1 to show results of using our method to edit neurons located by KN. We have added efficiency experiments when restricting editing to a single layer in Table 13, overlapping neuron counts for rephrased prompts in Table 14, and experiments on other model sizes in Table 15. 
(@R b16A, AZce)\", \"We have added our code as part of the supplementary material. (@R mBEd)\", \"Your insights have been invaluable in enhancing the overall quality of our work.\", \"Sincerely,\", \"Authors of Submission 6006.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for acknowledging our work.\", \"comment\": \"Thank you for acknowledging our work. We will attempt to analyze the connections and differences between the neurons located by FiNE and KN. However, as noted in our response above, the overlap between the two is minimal, and KN exhibits poor performance, making such analysis challenging. Nevertheless, we will strive to provide new insights in the final version.\"}", "{\"comment\": \"Thank you for your response. I don\\u2019t have any further questions and will keep my positive score.\"}", "{\"title\": \"Response to Reviewer b16A (1/2)\", \"comment\": \"We appreciate your thoughtful comments and have addressed your concerns as follows.\\n\\n> Regarding Weakness 1.\\n\\nThank you for your valuable suggestions.\\n\\n(1) We compare the neurons located by FiNE and KN (under the original setup), and compute Intersection over Union (IoU) between these neurons. Experimental results are shown below. We can find that there is almost no overlap neurons between FiNE and KN, which demonstrates FiNE can locate different neurons from KN. These neurons are important but overlooked by KN.\\n\\n| Model | IoU |\\n|:-|:-|\\n| GPT-J | $2.3\\\\times10^{-3}$ |\\n| LLaMA-2 | $3.4\\\\times10^{-4}$ |\\n| LLaMA-3 | $0.0$ |\\n\\n(2) As suggested, we conduct experiments on using our editing method on top of neurons located by KN. Results on GPT-J, LLaMA-2 and LLaMA-3 are listed below. For methods with low Edit Succ., we do not present their Locality (i.e., RSA and FA) results, since the Locality is inherently 100% when no edit is effectively applied. 
KN + FiNE outperforms KN but still falls significantly short of FiNE (except for the FA metric on GPT-J). The neurons localized by FiNE are more precise than those identified by KN, resulting in a substantial performance improvement. We have revised Table 1 to add these results.\\n\\n**GPT-J**:\\n\\n| Method | Edit Succ. | SAA | LGA | RA | RSA | FA | Fluency |\\n|:-|-:|-:|-:|-:|-:|-:|-:|\\n| GPT-J | 21.5 | 21.7 | 14.8 | 18.6 | - | - | **612.3** |\\n| KN | 18.1 | 17.9 | 10.8 | 18.5 | - | - | 580.0 |\\n| KN + FiNE | 66.6 | 48.2 | 14.3 | 24.2 | 76.8 | **63.5** | 584.8 |\\n| FiNE | **99.8** | **90.6** | **17.5** | **37.4** | **84.2** | 54.2 | 545.7 |\\n\\n**LLaMA-2**:\\n\\n| Method | Edit Succ. | SAA | LGA | RA | RSA | FA | Fluency |\\n|:-|-:|-:|-:|-:|-:|-:|-:|\\n| LLaMA-2 | 27.0 | 27.8 | 26.1 | 26.2 | - | - | **583.3** |\\n| KN | 21.3 | 21.8 | 16.9 | 24.6 | - | - | 561.4 |\\n| KN + FiNE | 84.6 | 77.1 | 23.2 | 36.6 | 59.4 | 40.6 | 447.3 |\\n| FiNE | **99.9** | **89.8** | **28.8** | **41.5** | **92.6** | **65.0** | 542.3 |\\n\\n**LLaMA-3**:\\n\\n| Method | Edit Succ. | SAA | LGA | RA | RSA | FA | Fluency |\\n|:-|-:|-:|-:|-:|-:|-:|-:|\\n| LLaMA-3 | 23.1 | 23.1 | 21.7 | 22.8 | - | - | **607.1** |\\n| KN | 17.1 | 18.1 | 14.9 | 19.2 | - | - | 593.7 |\\n| KN + FiNE | 61.9 | 55.8 | 14.5 | 34.0 | 84.0 | 56.7 | 546.9 |\\n| FiNE | **100.0** | **89.6** | **22.4** | **38.3** | **90.5** | **63.0** | 567.1 |\\n\\n> Regarding Weakness 2.\\n\\nThank you again for your suggestion. We ask GPT-4o to rephrase the prompts in $\\\\text{WikiData}_\\\\text{counterfact}$ and calculate Intersection over Union (IoU) between their located neurons (under default parameter settings). The results are listed below. The IoU results show that most of our neurons remain unchanged, which provides partial evidence of the robustness of FiNE's localization method and shows that it significantly outperforms KN. 
We have added Table 14 to show these results.\\n\\n| Model | IoU (FiNE) | IoU (KN) |\\n|:-|:-:|:-:|\\n| GPT-J | 0.655 | 0.137 |\\n| LLaMA-2 | 0.704 | 0.235 |\\n| LLaMA-3 | 0.701 | 0.201 |\\n\\n\\n> Regarding Weakness 3.\\n\\nThank you for spotting this error. We have modified it in the revised manuscript, where $W_u \\\\in \\\\mathbb{R}^{v \\\\times d_h}$, $W_{out}^l \\\\in \\\\mathbb{R}^{d_h \\\\times d_m}$ and $W_uW_{out}^l \\\\in \\\\mathbb{R}^{v \\\\times d_m}$.\\n\\n> Regarding Weakness 4.\\n\\nThank you for pointing this out. We have added pseudocode in Section 4.1 (Line 206).\"}", "{\"summary\": \"This article points out that ineffective localization leads to poor editing locality. Under this hypothesis, the author proposes a fine-grained knowledge editing method (FiNE), suggesting that knowledge is stored in different neurons rather than being stored in specific layers. The author demonstrates the accuracy of FiNE's localization through experiments, and ablation studies prove the effectiveness of the editing method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-motivated: it points out that the existing editing methods lack effectiveness in locating, and then proposes a fine-grained approach to improve the editing accuracy.\", \"The method is easy to deploy: memory usage and editing time costs indicate that FiNE is more efficient.\"], \"weaknesses\": [\"**Main Weaknesses**\", \"The effectiveness of the FiNE method may not be convincing enough to me.\", \"*W1*: I suggest the author conduct some comparative experiments with the KN method: 1) compare the overlap rate of the neurons located by FiNE and KN to investigate whether FiNE is able to locate neurons that KN cannot. 
2) use the method in Section 4.2 to edit the neurons located by KN to demonstrate that the neurons located by your method are indeed highly related to knowledge.\", \"*W2*: Additionally, I suggest that the author compare the overlap rate of the localization results before and after modifying the prompt to demonstrate the robustness of the FiNE localization method. For example, the author could compare the overlap rate between \\\"X's wife is Y\\\" and \\\"X married Y\\\".\", \"**Minor Weaknesses**\", \"*W3*: Line 206 has $W_uW^l_{out}\\\\in \\\\mathbb{R}^{d_m \\\\times v}$. However, in Appendix A, we have $W_u \\\\in \\\\mathbb{R}^{d_h \\\\times v}$ and $W^l_{out} \\\\in \\\\mathbb{R}^{d_m \\\\times d_h}$. So it might be $W^l_{out}W_u\\\\in \\\\mathbb{R}^{d_m \\\\times v}$.\", \"*W4*: The description of selecting neurons from Line 211 to Line 215 may be a bit vague. Therefore, I suggest the author to add pseudocode to help understand it.\", \"**Missing References**\", \"A Survey on Knowledge Editing of Neural Networks. (2023)\", \"Editing Large Language Models: Problems, Methods, and Opportunities. (2023)\", \"Knowledge Editing for Large Language Models: A Survey. (2023)\"], \"questions\": \"**Main Questions**\\n* *Q1*: There is \\\"The ineffectiveness of localization may lead to overfitting of editing.\\\" in Line 49. So my question is: inaccurate localization should lead to underfitting, why does the author say it is overfitting here? For example, for the localization method for ROME and MEMIT, selecting the last subject token can give the model a confidence of about 98% in the new knowledge. If the last token is selected for editing the model, the model can only achieve a confidence of about 20%.\\n* *Q2*: The existing knowledge editing methods can have a detrimental impact on the performance of the model [1]. 
Therefore, I am curious whether a more precise positioning method can alleviate the damage to the model's capabilities.\\n\\n**Minor Questions**\\n* *Q3*: Can fine-grained FiNE handle more difficult tasks such as multi-hop editing [2]?\\n\\n$Ref$:\\n\\n[1] Model Editing Can Hurt General Abilities of Large Language Models. (2024)\\n\\n[2] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. (2023)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer AZce\", \"comment\": \"Thank you again for your detailed and insightful comments and feedback.\\n\\nFor each individual piece of knowledge to be edited, we perform neuron localization only once. We emphasize that our neuron localization process is highly efficient. This additional step does not introduce significant complexity and effectively accelerates the editing process, resulting in an overall time cost lower than that of ROME (as shown in Figure 3).\\n\\nIf any aspects still require clarification, we are happy to provide additional details promptly.\"}", "{\"title\": \"Response to Reviewer AZce\", \"comment\": \"We appreciate your thoughtful comments and have addressed your concerns as follows.\\n\\n> Regarding Weakness 1.\\n\\nThank you for your comment. There may be a misunderstanding regarding this aspect of our method. We'd like to clarify that FiNE does not introduce significantly more complexity. In contrast, benefiting from more precise localization and a smaller number of modified parameters, FiNE is highly efficient, enabling significant savings in both time and memory usage. This issue has been discussed in detail in Section 5.4 (Line 434).\\n\\n> Regarding Weakness 2.\\n\\nWe believe the selected models are sufficiently representative. 
Similar to ROME and its Mamba version (https://arxiv.org/html/2404.03646v2), we consider exploring other architectures feasible, but more suitable as separate future work.\\n\\n> Regarding Question 1.\\n\\nBased on differences in the locating step, FiNE exhibits the following distinctions in the editing step compared to other approaches.\\n\\n1. FiNE does not modify the entire second-layer weight matrix of the FFN; instead, it updates only the vectors corresponding to the top neurons.\\n2. FiNE introduces a repetition penalty loss to prevent the post-edited model from generating the editing target repeatedly.\\n3. FiNE utilizes the layer freezing technique for protecting the model's linguistic abilities during the editing process.\\n\\n> Regarding Question 2.\\n\\nThank you for pointing this out. We have conducted experiments with a smaller (i.e., LLaMA-3.2-1B) and a larger (i.e., LLaMA-2-13B) model on $\\\\text{WikiData}_\\\\text{counterfact}$. Results are listed below. FiNE continues to demonstrate strong performance on both smaller and larger models. We have added Table 15 to show these results in the revision.\\n\\n**LLaMA-3.2-1B**:\\n\\n| Method | Edit Succ. | SAA | LGA | RA | RSA | FA | Fluency |\\n| :- | -: | -: | -: | -: | -: | -: | -: |\\n| LLaMA-3.2-1B | 21.0 | 21.5 | 19.6 | 18.2 | - | - | **604.2** |\\n| ROME | 97.4 | 78.8 | 20.5 | 31.2 | 32.0 | 25.6 | 522.9 |\\n| MEMIT | 97.8 | 68.0 | 20.3 | 24.2 | 39.7 | 30.6 | 473.8 |\\n| FiNE | **98.5** | **87.2** | **21.6** | **36.9** | **84.3** | **61.9** | 561.3 |\\n\\n**LLaMA-2-13B**:\\n\\n| Method | Edit Succ. 
| SAA | LGA | RA | RSA | FA | Fluency |\\n| :- | -: | -: | -: | -: | -: | -: | -: |\\n| LLaMA-2-13B | 26.9 | 27.4 | 25.5 | 25.9 | - | - | **591.0** |\\n| ROME | 98.7 | 72.0 | 23.6 | 37.2 | 48.0 | 46.5 | 586.6 |\\n| MEMIT | 98.1 | 80.7 | 21.4 | 36.3 | 41.8 | 43.8 | 571.4 |\\n| FiNE | **99.3** | **83.6** | **26.9** | **37.6** | **94.9** | **76.2** | 562.7 |\\n\\n> Regarding Question 3.\\n\\nWe have discussed efficiency in Section 5.4 (Line 434). Benefitting from more precise localization and a smaller number of modified parameters, FiNE exhibits a significant time advantage over other locate-then-edit methods, particularly at Float32 precision, being approximately 4\\u00d7 to 6\\u00d7 faster. We hypothesize that FiNE offers significant benefits when applied to time-sensitive applications.\\n\\n> Regarding Question 4.\\n\\nWe have discussed performance when restricting editing to a single layer in Section 5.3 (Line 404) and we'll supplement efficiency experiments below. We believe that while manually restricting neuron localization to a single layer can be effective based on experience, relying on our algorithm to automatically locate neurons may be a more appropriate option in the absence of prior information. 
We have added Table 13 to show these results in the revision.\\n\\n**GPT-J**:\\n\\n| Layer | Time (s) | Memory (GB) |\\n|:-:|:-:|:-:|\\n| 5 | 4.10 | 23.82 |\\n| 10 | 5.80 | 23.82 |\\n| 15 | 5.06 | 23.82 |\\n| 20 | 4.43 | 23.82 |\\n| Any | 4.68 | 28.09 |\\n\\n**LLaMA-2**:\\n\\n| Layer | Time (s) | Memory (GB) |\\n|:-:|:-:|:-:|\\n| 5 | 3.92 | 25.87 |\\n| 10 | 3.13 | 25.87 |\\n| 15 | 2.14 | 25.87 |\\n| 20 | 1.83 | 25.87 |\\n| Any | 2.13 | 28.82 |\\n\\n**LLaMA-3**:\\n\\n| Layer | Time (s) | Memory (GB) |\\n|:-:|:-:|:-:|\\n| 5 | 5.89 | 32.44 |\\n| 10 | 6.46 | 32.44 |\\n| 15 | 4.47 | 32.44 |\\n| 20 | 3.03 | 32.44 |\\n| Any | 2.93 | 34.25 |\"}", "{\"title\": \"Response to Reviewer b16A (2/2)\", \"comment\": \"> Regarding Missing Reference.\\n\\nWe have added the missing references to the introduction (Line 34). Thanks for the recommendation.\\n\\n> Regarding Question 1.\\n\\nThanks for pointing this out. We have revised the potentially misleading statement to \\u201cpredominantly subject-driven\\u201d (Line 49).\\n\\n> Regarding Question 2.\\n\\nThanks for the good question. We also believe that precise localization can avoid modifying unrelated neurons, minimizing changes to the model and thereby preserving its other capabilities. We will explore this with comprehensive experiments in future work.\\n\\n> Regarding Question 3.\\n\\nFor more complex editing tasks such as multi-hop editing, we may need to make corresponding modifications to FiNE in order to align with the task objectives, which will be investigated in our future work.\"}", "{\"title\": \"Response to Reviewer z62H\", \"comment\": \"We appreciate your thoughtful comments and have addressed your concerns as follows.\\n\\n> Regarding Weakness 1.\\n\\nDue to space limitations, we placed the detailed description and derivation of our methodology in Appendix A. We will surely enrich the methodology section if more space is permitted. 
The distinction between this paper and the referred previous work on multi-modal LLMs lies in a different focus: the previous work focuses on identifying interpretable multi-modal neurons, whereas our work aims to fundamentally improve localization-based knowledge-editing methods. In this way, we do not seek to claim that the neuron localization technique, which is largely inherited from the previous work, is a novel contribution of this work. Our contributions can be summarized as follows:\\n\\n1. With empirical analyses, we point out that causal tracing encounters issues during localization by focusing excessively on the subject and neglecting overall knowledge.\\n2. We propose a fine-grained neuron-level knowledge editing (FiNE) technique for a more precise localization of memories in LLMs.\\n3. Quantitative and qualitative experiments demonstrate that FiNE significantly outperforms existing locate-then-edit methods based on causal tracing localization.\\n\\n> Regarding Weakness 2.\\n\\nWe primarily discuss FFN for two reasons. First, previous work has demonstrated the importance of FFN, especially in storing factual knowledge [1-5]. Second, as shown in Eqn.11 and Eqn.12, FFN can intuitively reflect the contribution of neurons to each knowledge item. For these two reasons, we focus on deeply exploring the role of FFN in this paper rather than comprehensively investigating the roles of all other modules. We are open to further analyzing other parts of the LLM in future work.\\n\\n[1] Finding skill neurons in pre-trained transformer-based language models.\\n\\n[2] Locating and editing factual associations in GPT.\\n\\n[3] Mass-editing memory in a transformer.\\n\\n[4] Multimodal Neurons in Pretrained Text-Only Transformers.\\n\\n[5] Finding and editing multi-modal neurons in pre-trained transformers.\\n\\n> Regarding Question 1.\\n\\nWe believe this intuition is correct and it is the basic motivation of this work. 
The smaller number of modified parameters is attributed to FiNE's ability to achieve more precise knowledge localization, allowing it to edit as few parameters as possible and thereby minimizing the impact on other knowledge in the model.\\n\\n> Regarding Question 2.\\n\\nThank you for pointing this out. We have revised Table 9 to correctly show the highest value.\"}", "{\"metareview\": \"This paper introduces a novel Fine-grained Neuron-level Knowledge Editing (FiNE) method aimed at improving the precision and locality of knowledge editing in Large Language Models (LLMs). The authors argue that existing locate-then-edit methods, such as ROME and MEMIT, exhibit limitations in localizing knowledge effectively, particularly when the focus is on specific relations rather than entities. FiNE addresses these shortcomings by identifying and modifying individual neurons within feed-forward networks (FFNs), thereby achieving higher editing locality and minimizing collateral effects on other unrelated model knowledge. The paper includes extensive quantitative and qualitative experiments demonstrating that FiNE consistently outperforms state-of-the-art methods across several metrics, including editing success, locality, fluency, and efficiency. Importantly, the authors have committed to releasing their code, enabling reproducibility and further research in this area.\\n\\nThe reviewers broadly agreed on the strengths of the paper. The methodology is novel and well-motivated, addressing a critical limitation in existing knowledge editing techniques. The presentation is clear, with strong empirical validation through extensive ablation studies, benchmarks, and experiments across multiple LLM architectures and scales. The paper also demonstrates significant efficiency improvements by reducing the number of modified parameters, making the proposed approach computationally feasible and memory efficient. 
Additionally, the authors responded thoughtfully to reviewer feedback, addressing concerns and incorporating suggestions such as adding pseudocode, clarifying methodology, and improving visual clarity in figures and tables.\\n\\nHowever, the paper is not without its weaknesses. Some reviewers questioned the novelty of the approach, as it builds upon previous methods in the field of knowledge editing and neuron localization. While FiNE introduces significant enhancements, it is an incremental improvement rather than a fundamental departure from existing methods. Furthermore, the paper heavily emphasizes FFN layers as the primary storage for factual knowledge but does not fully explore the roles of other components, such as self-attention layers. This limitation raises concerns about the broader applicability of the method to alternative architectures or more complex editing scenarios. Additionally, one reviewer noted the increased complexity introduced by neuron-level localization, which, while efficient, might not be as scalable as simpler methods like ROME in time-critical applications.\\n\\nOverall, the decision to accept this paper is based on its clear contributions to the field of LLM knowledge editing. FiNE offers substantial improvements in editing locality and precision, demonstrating its practical utility in preserving model integrity while updating specific knowledge. The extensive experimental results, robust methodology, and responsiveness during the rebuttal period further strengthen the case for acceptance. While the paper could benefit from more detailed exploration of broader architectures and editing scenarios, its current contributions are significant enough to warrant publication and will likely inspire further research in this area.\", \"additional_comments_on_reviewer_discussion\": \"The author-reviewer discussion period was constructive, with the authors actively engaging with reviewer concerns and improving the paper significantly. 
Reviewer z62H raised concerns about the novelty of the approach and the lack of discussion on self-attention layers. The authors clarified that their focus on FFN layers was based on existing evidence of their importance in storing factual knowledge and proposed future work to address other model components. Reviewer b16A requested additional experiments comparing FiNE with the KN method, as well as evaluations of robustness. The authors provided detailed results demonstrating minimal overlap between neurons localized by FiNE and KN, underscoring the precision of their approach, and added robustness evaluations, which showed FiNE\\u2019s consistency across rephrased prompts. Reviewer mBEd suggested improvements to the visual presentation and requested clarification of certain metrics and methodology. The authors revised figures, added pseudocode, and standardized terminology to address these points. Finally, reviewer AZce raised questions about scalability, efficiency, and the impact of restricting editing to fixed layers. The authors conducted additional experiments with smaller and larger models and demonstrated that FiNE retains strong performance across scales while maintaining efficiency advantages.\\n\\nThe authors\\u2019 comprehensive and timely responses addressed all major concerns raised during the review process, leading to an overall positive shift in reviewer sentiment. Most reviewers raised their scores following the rebuttal, noting the improved clarity and additional experimental evidence provided by the authors. After weighing the reviewers' critiques, the authors\\u2019 responses, and the overall contributions of the paper, I believe the strengths of this work clearly outweigh its limitations, and I recommend acceptance.\"}", "{\"title\": \"Response to Submission6006's Authors\", \"comment\": \"Thanks for your reply! Some of my concerns have been addressed. I've raised my score. 
However, I have a few more questions that I would like the authors to elaborate on: I'm curious about the connections and differences between the neurons you localized and the neurons that KN localized. I hope the authors can analyze this in the final version.\"}", "{\"title\": \"A gentle reminder for the close of the author-reviewer discussion.\", \"comment\": \"Dear Reviewers and AC,\\n\\nAs the author-reviewer discussion period is closing soon, we would like to call for any further discussion or comments on our submission. \\n\\nWe understand the constraints of time and workload that reviewers and AC face, and we appreciate the effort already put into evaluating our work. If there are any additional insights, questions, or clarifications on our responses/submission that you would like to discuss with us, we would be very grateful to hear them. \\n\\nYour feedback has been invaluable in enhancing the overall quality of our work.\\n\\nBest regards,\\n\\nAuthors of Submission 6006.\"}
They tested the parameters of their model in various configurations through the ablation study.\", \"weaknesses\": \"W1. The authors claim the code will be available, but I do not understand why they did not upload a zip on OpenReview or used an anonymized git.\\n\\nW2. Some plots are hard to read, particularly when reading the paper in black and white. The scale of plots in Figures 2 and 5 are strange (they should all be the same; it is better if they start at 0).\\n\\nW3. Fluency disappeared in Table 3. Besides, \\\"Succ.\\\" is sometimes called \\\"Edit Succ.\\\" Is it the same thing?\", \"questions\": \"See above.\\nQ1. Can you give a small definition of the \\\"over-editing rates\\\" and \\\"unchanging rates\\\"?\\n\\nQ2. Line 220, is it possible to have several times the same neuron i selected in layer l? If yes, what happens in this case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Fine-grained Neuron-level Knowledge Editing (FiNE) method for precise localization and modification of specific knowledge within large language models. By identifying and modifying individual neurons within feed-forward networks, FiNE enhances editing locality and demonstrates improved performance over existing techniques.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The FiNE method introduced in this paper enhances editing performance and significantly improves locality control, showing high effectiveness across most key metrics.\\n\\n2. The paper is well-structured and clearly presented. It includes thorough ablation studies, efficiency evaluations, and detailed comparisons with existing models, offering a comprehensive assessment of FiNE's effectiveness.\", \"weaknesses\": \"1. While FiNE shows higher performance than ROME and MEMIT, its approach introduces significantly more complexity. 
Unlike ROME, which can perform edits after a single causal tracing run, FiNE requires neuron-level localization for each knowledge item to be edited (if I understand correctly). This additional computational step raises questions about whether the performance gains justify the increased complexity, particularly in cases where fast, scalable edits are required.\\n\\n2. FiNE\\u2019s focus on feed-forward layers in transformers as the primary knowledge storage may restrict its adaptability to other architectures and use cases.\", \"questions\": \"1. FiNE introduces a unique locate step with neuron-level localization, but is there any substantive difference in the actual edit step compared to methods like ROME?\\n\\n2. The experimental models are similar in size, around 7 billion parameters. How would FiNE perform on larger or smaller models, and would neuron-level localization retain its efficacy across varying model scales?\\n\\n3. How does FiNE scale in terms of time efficiency compared to other locate-then-edit methods? Are there noticeable gains or limitations when applying FiNE to time-sensitive applications?\\n\\n4. What would be the impact on FiNE\\u2019s performance if edits were restricted to fixed layers or specific neurons? Could such restrictions improve interpretability or efficiency without sacrificing accuracy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5x9kfRXhBd
Spatial-temporal Graph Attention Network for Forex Forecasting with Hierarchical Transformer
[ "Kaiwen Guan", "Yan Ge" ]
The foreign exchange market, with its daily trading volume reaching nearly trillions of dollars, presents significant opportunities for the application of advanced predictive analytics. Traditional exchange rate forecasting methods often overlook the interdependencies between currencies and struggle with long-range data dependencies, leading to challenges in capturing the true market dynamics. To overcome these limitations, this paper introduces a novel Spatial-Temporal Graph Attention Network with Hierarchical Transformer (STGAT). Our model innovatively combines spatial graph convolutions with a dual-view temporal transformer-based mechanism, utilizing a Temporal Linearity Graph Attention Network (TLGAT) to account for currency relations in a time-sensitive manner. By integrating a linear attention mechanism for enhanced efficiency and capturing both local and global sequential data embeddings, STGAT provides a framework based on a hierarchical transformer for predicting exchange rates. We validate our approach on exchange rates of seventeen currencies over 2,092 trading days, demonstrating superior performance compared to state-of-the-art models.
[ "Graph Attention Networks", "Transformer", "Forex Forecasting" ]
Reject
https://openreview.net/pdf?id=5x9kfRXhBd
https://openreview.net/forum?id=5x9kfRXhBd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xgDIgUoPQz", "sS5Di98evS", "pf1E2Y3B1g", "lIWQ1FjEwK", "I7fCCUn757", "H4xmHA74Qb" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1737524025568, 1730452703551, 1730442224964, 1730430218833, 1734402794248, 1730648671107 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10093/Reviewer_ucBh" ], [ "ICLR.cc/2025/Conference/Submission10093/Reviewer_Zvid" ], [ "ICLR.cc/2025/Conference/Submission10093/Reviewer_SCEZ" ], [ "ICLR.cc/2025/Conference/Submission10093/Area_Chair_1NLX" ], [ "ICLR.cc/2025/Conference/Submission10093/Reviewer_M6hf" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a Spatial-Temporal Graph Attention Network with hierarchical transformer (STGAT) architecture for predicting forex rate. This model addresses two major challenges in forex rate prediction: (1) capturing interdependencies among different currencies and (2) managing long-range temporal dependencies. STGAT combines spatial graph convolutions with a dual-view temporal transformer, incorporating a novel temporal linearity graph attention network (TLGAT) for efficiency. 
The model outperformed ten baseline models across three categories (regression-based, transformer-based, and GNN-based) in experiments with 17 currencies over 2,092 trading days.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: The combination of spatial graph convolutions and dual-view temporal transformers addresses the limitations of traditional forex models by capturing inter-currency dependencies and effectively handling long-range temporal dependencies for improved prediction accuracy.\", \"s2\": \"The implementation of a linear attention mechanism reduces computational complexity, enhancing the model\\u2019s suitability for large-scale forex data involving numerous currency pairs.\", \"weaknesses\": \"W1: There are minor typographical issues in the paper: \\\"firstly\\\" is repeated on lines 240 and 243; \\\"EXPERIMENTAL SETTTINGS\\\" on line 313 should be \\\"EXPERIMENTAL SETTINGS\\\"; and \\\"Traing setup\\\" on line 321 should be corrected to \\\"Training setup.\\\"\", \"w2\": \"The description of the model's input and output in Figure 2 is unclear.\", \"w3\": \"The paper's writing has considerable room for improvement, as the definitions of symbols in the methodology section are rather ambiguous.\", \"w4\": \"While you claim that your model effectively captures long-term dependencies, the experimental setup lacks clarification on the number of time steps used as the model's input.\", \"w5\": \"In Section 3.3, where you mention that \\\"as a result of the Brexit event, there will be an increase in the volatility of the Pound, especially in the face of uncertain events,\\\" it would strengthen your analysis to include specific examples. For instance, in 2023, when the U.S. Federal Reserve raised interest rates, the resulting increase in the dollar\\u2019s exchange rate impacted forex rates globally, leading to a depreciation in other currencies, especially those with close economic ties to the U.S. 
Similarly, other geopolitical or economic shifts, such as fluctuations in oil prices or trade policies, could be cited to illustrate how such events amplify volatility across forex rates, including the Pound.\", \"questions\": \"Q1: For W2, could you provide a more detailed description of each module\\u2019s inputs and outputs?\", \"q2\": \"For W4, could you analyze the model's performance on both long-input and short-input scenarios? Additionally, please specify how many future time steps your model is predicting.\", \"q3\": \"For W5, could you include an analysis of these financial events? For example, identify specific phases where predictions were less accurate, possibly due to complex financial conditions coinciding with particular events.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes an approach to exchange rate forecasting through the Spatial-Temporal Graph Attention Network with Hierarchical Transformer (STGAT). The model integrates spatial graph convolutions with a dual-view temporal transformer mechanism to capture currency interdependencies and long-range data dependencies. The authors present validation results of STGAT's performance on a dataset covering 2,092 trading days across 17 currencies, indicating some improvement in accuracy over certain existing models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. This paper focuses on exchange rate prediction, although exchange rate datasets have already become a standard choice in most time-series forecasting models.\\n\\n2. 
The authors mention some unique characteristics of exchange rate data compared to other time-series data, though the proposed model does not seem, in my view, to effectively leverage these characteristics.\", \"weaknesses\": \"1.From a methodological perspective, the paper primarily combines existing graph neural network structures with transformers. The so-called linear attention mechanism is also a common structure, and the approach does not seem to contribute new insights to the ML field.\\n\\n2.While the authors aim to address exchange rate forecasting, the proposed model feels more like a general time-series forecasting model. Without demonstrating superior performance on commonly used datasets such as ETTh, electricity, or traffic data, it is difficult to find the approach convincing.\\n\\n3.The experimental results in Table 1 appear quite weak, with many models showing R\\u00b2 values below zero, meaning they perform worse than simple mean prediction. This suggests the authors may not have given adequate attention to data normalization or baseline parameter tuning.\\n\\n4. The use of k-means for graph construction seems unnecessary, as self-learned graph structures are now common, and many end-to-end deep clustering methods are available.\", \"questions\": \"Q1: Could the authors test their model on general time-series datasets, given that they compare it with iTransformer?\", \"q2\": \"The authors should provide more detailed experimental comparisons.\", \"q3\": \"The effectiveness of using a k-means constructed graph is questionable; could the authors compare it with self-learning graph methods proposed in papers like Graph WaveNet?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a Spatial-Temporal Graph Attention Network with Hierarchical Transformer (STGAT) for forecasting foreign exchange (forex) rates. 
This model integrates graph attention networks (GATs) with hierarchical transformers to capture spatial and temporal dependencies across currencies. It introduces a dual-view temporal transformer mechanism to manage both local and global currency correlations, alongside using linear GAT to enhance efficiency. Tested on 17 currencies over 2,092 trading days, the model shows improved performance over state-of-the-art models, including XGBRegressor, LSTM, and FourierGNN.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Originality**: The combination of spatial graph convolutions with **temporal transformers** is a novel approach to forex rate forecasting, especially as it addresses spatial and long-range dependencies in the data.\", \"**Quality**: The paper conducts comprehensive experiments, comparing the model against multiple baselines, and presents detailed results using appropriate metrics such as MAE, RMSE, and R\\u00b2.\", \"**Clarity**: The explanations are clear, particularly in describing the technical contributions like GAT and hierarchical transformer components. Diagrams also effectively aid understanding.\", \"**Significance**: Forex rate forecasting has significant applications in financial markets, and the accuracy improvement demonstrated in the experiments suggests a practical and meaningful contribution.\"], \"weaknesses\": [\"**Motivation**: The motivation is somewhat weak. While the authors claim that traditional models do not address the interdependencies between currencies and long-range dependencies, other financial time series prediction models, particularly in stock markets, have already tackled similar issues. This diminishes the novelty of the problem formulation.\", \"**Lack of Macroeconomic Indicators**: The model only uses forex rate data, without considering macroeconomic indicators like GDP or inflation, which are often crucial in driving currency movements.\", \"**Efficiency vs. 
Accuracy**: The ablation study suggests that a non-linear GAT could offer slightly better performance than the linear GAT, but the paper does not adequately discuss the trade-off between computational efficiency and potential accuracy improvements.\", \"**Temporal Granularity**: It is unclear whether the daily data granularity used in the model is optimal. The authors could have experimented with finer temporal granularity, such as hourly or minute-level data, to see if it improves performance.\"], \"questions\": [\"Could incorporating macroeconomic indicators, such as inflation or GDP, improve the model's predictive performance, particularly for long-term predictions?\", \"Have you explored using finer temporal granularity (e.g., hourly or minute-level data) to capture short-term market fluctuations?\", \"What are the primary challenges or limitations of applying this model to other types of financial time series, such as stock market prediction?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a model by integrating a spatial-temporal graph attention network with a hierarchical transformer and applies it to the forecasting of currency exchange rates.\", \"major_strengths\": [\"Forex forecasting is an important application in financial time series forecasting and the proposed model seems to address some limitations of existing models for forex forecasting.\", \"The proposed method has potential to be generalized to other financial time series forecasting applications, though they are not included in this work.\"], \"major_weaknesses\": [\"Though not for financial time series forecasting applications, the combination of spatial-temporal graph convolutional networks with transformers has been studied in other domains. 
The paper does not highlight the unique characteristics and challenges of forex forecasting as compared to the other applications studied.\", \"The experiments can be strengthened, which includes incorporating appropriate data normalization, applying hyperparameter tuning, expanding the dataset to include more currencies, and analyzing if the results are in line with the real situations.\", \"The paper in its current form is not up to the acceptance standard of ICLR. The authors are encouraged to improve their paper for future submission by considering the comments and suggestions of the reviewers.\"], \"additional_comments_on_reviewer_discussion\": \"The authors did not respond to the reviews.\"}", "{\"summary\": \"This paper introduces a novel Spatial-Temporal Graph Attention Network with Hierarchical Transformer (STGAT).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe model architecture is novel, introducing a spatial-temporal graph attention network and hierarchical transformer for Forex rate forecasting.\\n2.\\tThe presentation of methods is clear, and the performance metrics, MAE and RMSE, are reasonable.\\n3.\\tThe research contributes to financial time series analysis and can be generalized to other domains like stock prediction.\", \"weaknesses\": \"1.\\tThe combination of spatial graph convolutions with transformers has been explored in other domains, and the paper does not adequately highlight the unique challenges of Forex forecasting that the proposed model addresses.\\n2.\\tIn the experiments part, line 316, the dataset of 17 currencies against the Chinese Yuan is limited. 
It should be discussed whether the chosen data can generalize to other market behaviors.\\n3.\\tThe performance of graph construction from line 230 with k-means should be examined to determine if the final graph accurately represents the real situation, you should give analysis in the experiment part.\", \"questions\": \"1.\\tCould you provide some cases and analysis for graph construction?\\n2.\\tWere experiments conducted on currencies other than the Chinese Yuan?\\n3.\\tGiven that research on Forex rate prediction is limited, have you considered using methods from stock prediction as a baseline, since they are similar?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5x88lQ2MsH
Bonsai: Gradient-free Graph Condensation for Node Classification
[ "Mridul Gupta", "Samyak Jain", "Vansh Ramani", "HARIPRASAD KODAMANA", "Sayan Ranu" ]
Graph condensation has emerged as a promising avenue to enable scalable training of GNNs by compressing the training dataset while preserving essential graph characteristics. Our study uncovers significant shortcomings in current graph condensation techniques. First, the majority of the algorithms paradoxically require training on the full dataset to perform condensation. Second, due to their gradient-emulating approach, these methods require fresh condensation for any change in hyperparameters or GNN architecture, limiting their flexibility and reusability. To address these challenges, we present Bonsai, a novel graph condensation method empowered by the observation that *computation trees* form the fundamental processing units of message-passing GNNs. Bonsai condenses datasets by encoding a careful selection of *exemplar* trees that maximize the representation of all computation trees in the training set. This unique approach imparts Bonsai as the first linear-time, model-agnostic graph condensation algorithm for node classification that outperforms existing baselines across $7$ real-world datasets on accuracy, while being $22$ times faster on average. Bonsai is grounded in rigorous mathematical guarantees on the adopted approximation strategies, making it robust to GNN architectures, datasets, and parameters.
[ "Graph Neural Networks", "Machine Learning", "Data Distillation", "Graph Distillation", "Dataset Distillation", "Sustainable AI", "Graph Condensation", "Data Condensation", "Dataset Condensation" ]
Accept (Poster)
https://openreview.net/pdf?id=5x88lQ2MsH
https://openreview.net/forum?id=5x88lQ2MsH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yRseMbziyb", "wQDLFhN3fe", "wCiwRPuAvh", "vvsxwFjdzV", "u9O71rSpPV", "tZQOtloOU6", "qlMMbbVd3Q", "pn4RXY6WAW", "pJL8o7Zk2y", "oOMbbe36l7", "muRpRIHoDc", "iGX4nCbXi0", "hYzGtiAFKT", "g4T0JpUl1S", "fQ3w69qWKX", "eGEb7MiyOs", "dOyv5QYXpP", "am71smyJPQ", "UP4uIx0oNV", "ThAFMpMz3B", "QoBFSfYjJO", "PzjNYtDfDg", "PWpPFDMMpV", "JCXD1ZGDNT", "ItJhNEecCP", "IWHiYoHZ1g", "FqDat8QICn", "Dxpjuq1Zy5", "BoJM5G0LXF", "8sBvfbfvFl", "7ddFO2x711", "6ujM4JjgX5", "4p20ATV810", "4QVUQhQeO0", "31YGpHmoL4", "2hQ4caG7zd" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732035181972, 1732534476218, 1730996288774, 1732034360626, 1732721273277, 1732725913386, 1732564219262, 1732723766455, 1732544223488, 1732036056582, 1732033034430, 1732368581734, 1732036220958, 1732035846505, 1732240070730, 1737524129042, 1732470090510, 1734629820176, 1732034391053, 1730429331041, 1732530598496, 1732034212924, 1732417829794, 1732725315362, 1732253370999, 1732034276200, 1732618311718, 1732034788456, 1732035818632, 1732677644546, 1732786957297, 1732725075923, 1732034849188, 1730730853426, 1730085679329, 1732329783070 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_n8k1" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_YqxY" ], 
[ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_t6gR" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_q9ty" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Area_Chair_GC58" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_n8k1" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_YqxY" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_n8k1" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Area_Chair_GC58" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_t6gR" ], [ "ICLR.cc/2025/Conference/Submission11523/Reviewer_q9ty" ], [ "ICLR.cc/2025/Conference/Submission11523/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer n8k1 - Part 3\", \"comment\": \"**Q3. 
From my perspective, selecting datasets using a sampling strategy seems more akin to traditional graph reduction, sparsification, or coarsening methods, rather than directly aligning with the field of graph condensation. Therefore, it\\u2019s challenging to accept the significant improvements claimed over random or herding baselines. Could the authors provide an intuitive example to support their approach?**\\n\\n**Answer:** Bonsai does not sample trees. It is a deterministic algorithm to select a subset of computation trees that best approximates the embeddings of those computation trees that are filtered out. Sampling can be used in reverse $k$-NN computation to approximate tree ranking in exchange for efficiency, but it is not a necessity. Let us outline the logical progression on which Bonsai is based.\\n\\n1. **Similar computation trees = similar GNN embeddings:** The embedding of a node is a function of its $L$-hop computation tree. Hence, if two computation trees are similar, their GNN embeddings are also similar regardless of the underlying message-passing architecture (`Hypothesis 1 and Figure 2`).\\n2. **Similar embeddings = similar gradients:** When two embeddings are similar, the gradients they generate are similar (`Table 3`). **Core question:** This raises the question: *Given the set of all computation trees, can we select the subset that best approximates the remaining computation trees?*\\n3. **Reverse $k$-NN cardinality indicates the approximation power of a tree:** It is due to these observations that we design the reverse $k$-NN based ranking of tree (or node) importances for distillation (also an important distinction from Mirage). Specifically, if tree $T_1$ is among the $k$-NN of tree $T_2$ for a small $k$ (we use $k=5$ for all datasets), this indicates these two trees are similar in WL-embedding. Hence, we seek to include those trees in the distilled set that reside in the $k$-NN of lots of other trees. 
Consequently, if these trees are selected, their GNN embeddings are also likely similar to their $k$-NN neighbors from the WL space. As a result, they can effectively approximate the GNN embeddings of the filtered-out nodes. Since similar GNN embeddings lead to similar gradients (`Table 3`), we minimize the information lost from nodes that are filtered out.\\n4. **Coverage Maximization to identify optimal subset:** The tree selection algorithm begins with the reverse $k$-NN set of each computation tree as input. It then iteratively selects trees based on their *marginal* cardinality - specifically, choosing the tree that appears in the $k$-NN sets of the largest number of yet-uncovered trees. A tree is considered uncovered if none of its $k$-nearest neighbors have been selected for the distilled set. This focus on **marginal contribution naturally promotes diversity**. Consider two similar trees, $T_1$ and $T_2$, both with high reverse $k$-NN cardinality. Due to the transitivity of distance functions, these trees likely share many of the same neighbors in their reverse $k$-NN sets. Consequently, if $T_1$ is selected, $T_2$'s marginal cardinality significantly decreases despite its high initial reverse $k$-NN cardinality, preventing redundant selection of similar trees.\\n\\nRandom tree selection accounts for neither diversity nor the selected trees' power to approximate other trees in the set. Herding, on the other hand, clusters the GNN embeddings and opts for the cluster centers. It does not perform the critical ranking of computation trees based on their neighborhood density. Additionally, clusters often have varying diameters, meaning cluster membership alone doesn't guarantee that two trees can effectively approximate each other. 
\\n\\n----------------\\n\\n### Appeal the reviewer\\n\\nIn addition to the above clarification, we have now also added data on carbon emissions of all algorithms and Bonsai (Table 9). With our proposed approach, we outperform existing condensation algorithms in accuracy (`Tables 5 and 6`), at least 7 times faster in running time (`Table 7`), and at least 17 times lower in carbon emissions (`Table 9`). More importantly, the proposed distillation algorithm is the only algorithm that is faster than full dataset training with lower carbon emissions; existing algorithms are both slower and have higher carbon emissions, putting into question its practical utility. We hope these results would convince the reviewer on the merits of our work.\"}", "{\"comment\": \"Thanks for the author's detailed response. I appreciate your efforts and several last questions and suggestions:\\n\\n1. What's the difference between graph distillation and graph condensation? If there's no difference, why do you choose to term your work graph distillation rather than graph condensation, where the latter is much more prevalent in the graph learning community?\\n\\n2. Can you further polish Figure 3? The current version looks very sloppy, with the main issues being: 1) It is overly simplistic, conveying very limited information to support your ideas. 2) The overall style of the diagram does not align with the style of other charts in your manuscripts. 3) The input and output on the left suggest that the number of nodes in the diagram seems unchanged; it merely specifies the edges.\"}", "{\"summary\": \"This paper proposes a novel graph distillation method empowered by the observation that computation trees form the fundamental processing units of message-passing GNNs. This paper specifically addresses the issue of overly dense edges in graph distillation. 
Experiments on various datasets verify the effectiveness of the method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Compared to previous works, BONSAI is novel.\\n2. The experimental results look very good, especially regarding the training time.\\n3. The theoretical analysis is solid.\", \"weaknesses\": \"1. This paper is not easy to understand.\\n2. In some cases, BONSAI does not perform the best, such as with citeseer.\\n3. Regarding table 5, can you provide experimental results for other compression rates?\\n4. PPR and RkNN involve many parameters, and the ablation study in Fig. 4(b) is insufficient.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer t6gR- Part 3\", \"comment\": \"**Q5. In line 235, Diversity is highlighted. Which part of the method addresses this concern?**\\n\\n**Answer:** Diversity in our approach is achieved through **coverage maximization (`Sec 3.2`)**, another key distinction from Mirage. The process, detailed in `Algorithm 1`, works as follows:\\n\\nThe algorithm begins with the reverse $k$-NN set of each computation tree as input. It then iteratively selects trees based on their *marginal* cardinality - specifically, choosing the tree that appears in the $k$-NN sets of the largest number of yet-uncovered trees. A tree is considered uncovered if none of its $k$-nearest neighbors have been selected for the distilled set.\\n\\nThis focus on **marginal contribution naturally promotes diversity**. Consider two similar trees, $T_1$ and $T_2$, both with high reverse $k$-NN cardinality. Due to the transitivity of distance functions, these trees likely share many of the same neighbors in their reverse $k$-NN sets. 
Consequently, if $T_1$ is selected, $T_2$'s marginal cardinality significantly decreases despite its high initial reverse $k$-NN cardinality, preventing redundant selection of similar trees.\\n\\nWe have now updated our manuscript (`Sec 3.2` as well as start of `Sec 3`) to incorporate the above discussion. Thanks for your suggestion.\\n\\n**Q6. To select the trees with top representativeness, why do the authors choose Reverse K-NN? Would it be possible to simply adopt clustering instead?**\\n\\n**Answer:** Bonsai's effectiveness stems from its explicit **ranking** of computation trees based on their representative power - measured by how many filtered-out trees they can effectively approximate through $k$-NN neighborhood relationships (`Equation 3`).\\n\\nTraditional clustering approaches cannot achieve this critical ranking requirement. Additionally, clusters often have varying diameters, meaning cluster membership alone doesn't guarantee that two trees can effectively approximate each other. This limitation is evident in the baseline Herding method, which essentially performs $k$-means clustering on GNN embeddings. Bonsai's superior performance compared to Herding demonstrates the advantages of our reverse $k$-NN approach.\\n\\nIn general, we believe potential alternatives to reverse $k$-NN would come from the literature on space-partitioning algorithms such as locality sensitive hashing. We hope to explore such alternatives in future work.\"}", "{\"title\": \"Eagerly awaiting feedback on the revised manuscript\", \"comment\": \"Dear Reviewer n8k1,\\n\\nWe apologize if our repeated reminders bother you. However, since the deadline to update the manuscript closes today, we are extremely keen to know if the changes made satisfactorily address the concerns raised. We hope, with the new experiments, clarifications and improved presentation, you will feel convinced on the merits of our work. 
Your support to reconsider the rating based on the updated manuscript will be really valued.\\n\\nregards.\\n\\nAuthors\"}", "{\"title\": \"thank you for your suggestions and support\", \"comment\": \"Dear Reviewer n8k1,\\n\\nthank you for your engagement during the discussion phase. Your suggestions have helped us improve our work and we truly value your support for the revised manuscript.\\n\\nregards,\\n\\nAuthors\"}", "{\"title\": \"Incorporating suggestions of Reviewer n8k1\", \"comment\": \"Dear Reviewer n8k1,\\n\\nThank you for your engagement during the discussion phase. We have carefully addressed your questions below.\\n \\n**Q1. Distillation vs. Condensation:** There is no difference. In fact, we mention this explicitly in line 33 of the first paragraph of our manuscript. We agree that condensation is the more popular keyword in graph condensation space. In the vision community, several papers have used the term distillation instead of condensation, which motivated the seminal GCOND paper (See [1] [2] [3]). **We would be glad to change \\\"distillation\\\" to \\\"condensation\\\" in our title and rest of the manuscript to align more closely with current terminology in the field.**\\n \\n[1] Ondrej Bohdal,Yongxin Yang, and Timothy Hospedales. Flexible dataset distillation: Learn labels instead of images, In 4th Workshop on Meta-Learning (MetaLearn) at NeurIPS 2020.\\n \\n[2] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. ArXiv preprint,2018.\\n \\n[3] Dataset Distillation by Matching Training Trajectories. George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu, CVPR 2022.\\n \\n**Q2. Suggested changes in Figure 3:** We have revised Figure 3 based on your suggestions to provide a more accurate representation of our algorithm. 
Please see the updated manuscript pdf for the revised figure.\\n\\nWe look forward to your feedback on whether the updated manuscript addresses your queries comprehensively.\\n\\nregards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for the detailed response. I am willing to improve my score and strongly recommend incorporating the feedback into the revision to avoid any misinterpretation.\"}", "{\"comment\": \"We appreciate the reviewer's positive assessment of our revised manuscript and the subsequent increase in score.\\n\\nregards,\\n\\nAuthors\"}", "{\"title\": \"Cover letter\", \"comment\": [\"We thank the reviewers for their insights and constructive suggestions. A comprehensive point-by-point response to the reviewers' comments is presented below. **We have updated the main manuscript** to address these comments. The changes made in the manuscript are highlighted in **blue** font. The major additional changes are listed below.\", \"**Additional experiments:** We have incorporated all of the additional experiments requested by the reviewers spanning\", \"Three additional baselines of EXGC, GCSNTK and GEOM\", \"Expanded ablation study in Fig. 4\", \"Additional data on carbon emissions\", \"Empirical analysis that highlights relationship between WL embedding similarities and training gradients\", \"**Presentation:** We have addressed several presentation-related inquiries, such as highlighting differences between Mirage and Bonsai, moving auxiliary details to appendix, and adding clarifications wherever appropriate.\", \"We hope these revisions will satisfactorily address the concerns raised by the reviewers and elevate the overall quality of our work. We remain open to any further suggestions.\"]}", "{\"title\": \"Response to Reviewer YqxY- Part 1\", \"comment\": \"We thank the reviewer for their positive comments on our work as well as the constructive feedback. Please find below our responses to the concerns raised.\\n\\n**Q1. 
This paper is not easy to understand.**\\n\\n**Answer:** We acknowledge this feedback. To improve comprehension of the proposed work, we have now significantly expanded the start of Section 3. In this section, we present a detailed outline of the logical foundation that Bonsai is built upon as well as empirical data supporting this foundation. In general, we aim to orient the reader towards the high-level design of our algorithm before detailing the individual steps.\\n\\nWe hope this will enhance the readability of our work. We would be happy to incorporate any further suggestions from the reviewer in this regard.\\n\\n**Q2. In some cases, BONSAI does not perform the best, such as with citeseer.**\\n\\n**Answer:** While Bonsai may not be optimal in every scenario, our evaluation shows it offers superior performance and consistency across datasets. The evidence is compelling:\\n\\n- Bonsai achieves top accuracy in 12 out of 18 test cases - significantly outperforming the next best method, GDEM, which leads in only 2 out of 18 cases.\\n- In the 6 cases where Bonsai isn't the top performer, it ranks second in 3, demonstrating robust reliability.\\n- Most notably, Bonsai delivers this robust performance without requiring full-dataset training, resulting in more than 7-times speed-up (`Table 7`) over the fastest competitor. Furthermore, Bonsai is CPU-bound, resulting in at least 17-times lower carbon emissions (`Table 9`) than any of the existing distillation algorithms.\\n\\nThis combination of consistent accuracy, reduced computational needs, and lower carbon footprint makes Bonsai unique compared to existing graph distillation works.\\n\\n**Q3. Regarding table 6, can you provide experimental results for other compression rates?**\\n\\n**Answer:** We have expanded `Table 6` to include all three compression rates of 0.5%, 1% and 3% in the updated manuscript. The same table is also provided below for ease of reference.
Bonsai outperforms all baselines across all compression rates.\\n\\n### Model Accuracy Comparison\\n\\n| **Dataset** | **%** | **GNN** | **Random**| **Herding** | **GCond** | **GDEM** | **GCSR** | **Bonsai** | **Full** |\\n|---|--|---|----|---|--|--|---|---|--|\\n| | 0.5 | GAT|41.44\\u00b11.73 | 33.80\\u00b10.07 | 13.21\\u00b11.99| 63.91\\u00b15.91\\u2020| 15.09\\u00b16.19 | **75.42\\u00b11.61** | |\\n| | 1|GAT|42.73\\u00b11.03 | 46.09\\u00b10.86 | 35.24\\u00b10.00| 73.49\\u00b12.64\\u2020|37.60\\u00b11.34|**78.67\\u00b10.89**|85.70\\u00b10.09 |\\n| Cora |3|GAT|60.22\\u00b10.67 | 56.75\\u00b10.45 | 35.24\\u00b10.00 | 75.28\\u00b14.86\\u2020 | 36.72\\u00b10.81 | **80.66\\u00b10.80** | |\\n| | 0.5 | GIN|49.04\\u00b10.50 | 34.39\\u00b11.03 | 14.13\\u00b16.80 | 63.65\\u00b17.11 | 76.05\\u00b10.44\\u2020 | **85.42\\u00b10.74**||\\n| |1|GIN|50.48\\u00b10.85|33.80\\u00b12.42|33.91\\u00b11.23|75.92\\u00b14.24\\u2020|60.70\\u00b14.44|**84.80\\u00b10.41**|86.62\\u00b10.28 |\\n| |3|GIN|59.52\\u00b10.88|36.35\\u00b10.59|31.70\\u00b14.97|59.59\\u00b17.95\\u2020|51.62\\u00b15.00|**85.42\\u00b10.53**||\\n| |0.5 | GAT|42.76\\u00b10.35|36.04\\u00b10.46|21.47\\u00b10.00|69.86\\u00b12.28\\u2020|21.92\\u00b10.76|**68.56\\u00b10.57**||\\n| |1|GAT|46.19\\u00b11.38|52.07\\u00b10.11\\u2020|21.47\\u00b10.00|23.87\\u00b13.05|21.50\\u00b10.06|**69.43\\u00b10.82**|77.48\\u00b10.75 |\\n| CiteSeer |3|GAT|61.65\\u00b10.51|65.17\\u00b10.00\\u2020|21.26\\u00b10.22|22.90\\u00b11.20|21.50\\u00b10.06|**69.94\\u00b11.15**||\\n| |0.5 | GIN|44.86\\u00b10.43|22.97\\u00b10.30|21.47\\u00b10.00|67.69\\u00b13.28\\u2020|50.66\\u00b11.17|**71.80\\u00b10.26**||\\n| |1|GIN|47.90\\u00b10.65|39.67\\u00b10.82|19.49\\u00b11.09|67.64\\u00b14.45\\u2020|64.74\\u00b11.88|**72.16\\u00b10.60**|75.45\\u00b10.23 |\\n| |3|GIN|61.83\\u00b10.68|60.48\\u00b10.26\\u2020|18.65\\u00b12.56|48.65\\u00b18.17|59.95\\u00b19.07|**70.51\\u00b10.54**||\\n| |0.5 | 
GAT|77.73\\u00b10.12|75.44\\u00b10.02|37.49\\u00b14.01|80.06\\u00b11.16\\u2020|38.29\\u00b18.13|**85.66\\u00b10.38**||\\n| |1|GAT|78.85\\u00b10.09|76.64\\u00b10.02|41.55\\u00b13.18|80.75\\u00b10.47\\u2020|40.47\\u00b10.00|**85.88\\u00b10.28**|86.33\\u00b10.08 |\\n| PubMed |3|GAT|82.84\\u00b10.11\\u2020|78.48\\u00b10.03|37.77\\u00b13.61|65.08\\u00b19.53|40.27\\u00b10.20|**85.62\\u00b10.36**||\\n| |0.5 | GIN|77.45\\u00b10.14|48.48\\u00b11.33|30.91\\u00b14.57|78.78\\u00b10.91\\u2020|36.88\\u00b112.06|**84.32\\u00b10.33**||\\n| |1|GIN|78.43\\u00b10.22|62.22\\u00b10.13|32.84\\u00b16.27|78.72\\u00b10.95\\u2020|33.75\\u00b15.58|**85.57\\u00b10.26**|84.66\\u00b10.05 |\\n| |3|GIN|80.56\\u00b10.17|45.40\\u00b10.46|36.11\\u00b13.47|81.08\\u00b10.99\\u2020|32.01\\u00b16.77|**85.66\\u00b10.23**||\\n| |0.5 | GAT|43.64\\u00b10.99\\u2020|36.50\\u00b113.22|40.24\\u00b13.20|25.43\\u00b110.37|28.03\\u00b16.60|**48.22\\u00b13.60**||\\n| |1|GAT|43.56\\u00b11.06\\u2020|36.34\\u00b11.14|40.85\\u00b11.08|18.44\\u00b19.42|OOT|**45.62\\u00b11.85**|51.42\\u00b10.07 |\\n| Flickr |3|GAT|45.71\\u00b11.87\\u2020|42.70\\u00b11.17|41.51\\u00b19.81|25.83\\u00b111.39|OOT|**47.80\\u00b12.06**||\\n| |0.5 | GIN|42.67\\u00b10.83\\u2020 | 39.98\\u00b17.21|13.65\\u00b17.54|14.10\\u00b15.68|5.92\\u00b11.01|**44.97\\u00b12.23**||\\n| |1|GIN|42.90\\u00b10.76\\u2020 | 41.87\\u00b14.52|16.65\\u00b16.55|19.44\\u00b19.68|OOT|**44.90\\u00b10.88**|45.37\\u00b10.57 |\\n| |3|GIN|19.63\\u00b14.21 | 43.72\\u00b13.26\\u2020|24.25\\u00b114.43|20.97\\u00b16.64|OOT|**45.04\\u00b11.94** | |\", \"notes\": [\"Bold numbers (**) indicate the best performance\", \"\\u2020 indicates the second-best performance\", \"OOT indicates \\\"Out of Time\\\"\"]}", "{\"title\": \"Eagerly awaiting feedback from Reviewer t6gR\", \"comment\": \"Dear Reviewer t6gR,\\n\\nWe thank you for your constructive suggestions on our work. 
Based on your suggestions, our updated manuscript includes:\\n* Detailed differentiation with Mirage\\n* Clarifications and updated discussions to highlight the novelty and importance of reverse $k$-NN, ensuring diversity through coverage maximization, empirically validating the hypotheses of similar embeddings leading to similar gradients, etc.\\n* Justification of our empirical setup and why we believe it's a fairer and more transparent setup to evaluate graph distillation algorithms.\\n* New experiments showing that Bonsai produces a 17-times lower carbon footprint, is 7-times faster, and is overall more accurate than existing algorithms.\\n\\nWe are keenly awaiting your reaction to the changes made and if there is anything more we can do to convince you of the merits of our work.\\n\\nregards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer YqxY- Part 2\", \"comment\": \"**Q4a. PPR and RkNN involve many parameters**\\n\\n**Answer:** This appears to be a misunderstanding. Bonsai requires only four parameters, far fewer than most of the baselines. The table below presents the parameters required in the config files for each of the distillation algorithms. We re-emphasize some important points with respect to Bonsai that underscore its robustness to parameters.\\n\\n* As we noted in our introduction, all baselines, except GDEM, mimic gradients on the full train set, and hence require knowledge of all parameters that will be used while training. In addition, they have some distillation-specific parameters. This design significantly inflates their parameter sets as evident from the table below (in addition to making them orders of magnitude slower).\\n\\n* Finally, we emphasize that we **did not tune parameters** for each dataset, GNN architecture or distillation ratio--the same values were used across all experiments to showcase our robustness. In contrast, the parameters for all baselines are specific to each combination of dataset-GNN architecture-distillation ratio.
While the authors have not pointed out how the optimal values were arrived at, it appears grid search is the only possible methodology, making their deployment in practical settings challenging. This behavior makes Bonsai easier to deploy in production settings on unseen datasets.\\n\\n* Even when we vary the parameters of Bonsai (`Fig 7`), they hardly affect the quality, demonstrating its robustness and why dataset or compression specific tuning is not necessary.\\n\\nBonsai|GCond|GDEM|GCSR|GC-SNTK|GEOM|EXGC\\n-----|------|-----|------|-----------------|-----|--\\n$k$ in RkNN|Gradient matching parameter: Number of inner iterations|Number of largest eigenvalues to match|Gradient matching parameter: Number of inner iterations|$K$: Number of neighborhood aggregation rounds|$U$: Initial upper limit of expanding window|Gradient matching parameter: Number of inner iterations\\nSample size $z$ in RkNN|Gradient matching parameter: Number of outer iterations|Number of smallest eigenvalues to match|Gradient matching parameter: Number of outer iterations|$L$: Number of iterations in SNTK|$U'$: Upper bound of expanding window|Gradient matching parameter: Number of outer iterations\\n$\\\\theta$ in PPR to find knee point|Sparsification parameter|$k$: Number of eigenvectors|Sparsification parameter|$\\\\lambda$: Regularization parameter tuned $10^{-6}$ to $10^6$|$\\\\zeta$: Number of epochs for training in buffer phase|Sparsification parameter\\n|Number of layers in WL kernel|Regularization parameter $\\\\alpha$|$\\\\tau_1$|Regularization parameter $\\\\alpha$||Scheduler: Pacing functions (, root, geometric)|Learning rate\\n||Learning rate|$\\\\tau_2$|Learning rate||$lr\\\\_{feat}$: Learning rate for condensed graph features|Dropout\\n||Dropout|Learning rate for eigenvectors|Dropout||$lr\\\\_y$: Learning rate for soft labels (0 for hard labels)|Pruning parameter\\n||Weight decay|Learning rate for features|Weight decay||$p$: Number of training steps for expert GNNs|Mining 
parameter\\n||Number of epochs|Number of epochs|Number of epochs||$q$: Number of checkpoints for student GNNs|Circulation parameter\\n||Early-stopping patience parameter|Loss weight: $\\\\alpha$|Early-stopping patience parameter||$\\\\alpha$: Weight for knowledge embedding loss term|Number of epochs\\n|||Loss weight: $\\\\beta$|Regularization parameter $\\\\beta$ (Eq. 11)|||Regularization parameter $\\\\alpha$\\n|||Loss weight: $\\\\gamma$|Number of experts|||Early-stopping patience parameter\\n||||$\\\\tau$ (Eq. 17)|||Weight decay\\n||||$\\\\gamma$ (Eq. 18)\\n\\n**Q4b. The ablation study in Fig. 4(b) is insufficient.**\\n\\n**Answer:** Bonsai operates through two main mechanisms: **(i)** exemplar tree selection via reverse $k$-nearest neighbors and **(ii)** sparsification using personalized PageRank (PPR).\\n\\n* We have expanded our ablation study to study the impact of each of these components across **all datasets on all compression ratios** (`Fig 4`). \\n\\n* We present detailed analyses in `Appendix B.4.2` examining how different values of $k$ and sample sizes in reverse $k$-NN influence both model accuracy and computational efficiency. These experiments demonstrate Bonsai's parameter robustness and show that the method performs well without requiring dataset-specific parameter tuning.\\n\\nShould the reviewer have further suggestions for the ablation study, we would be happy to incorporate them.\"}", "{\"title\": \"Response to Reviewer q9ty- Part 1\", \"comment\": \"**W2 and Q3: Other graph condensation methods should be included for accuracy comparison, e.g., SGDD [3], SFGC [4], GEOM [5].**\\n\\n**Answer:** We have added GEOM to `Table 5` as well as in our running time comparison. 
We outperform GEOM across all datasets, while being more than 50-times faster (`Table 7`) and producing 10000-times lower carbon emissions (`Table 9` in Appendix).\\n\\nRegarding SFGC, as we note in lines 407-410, we already include three algorithms, namely GDEM, GCSR and GEOM, that have outperformed it in the literature. Second, SFGC requires training on the full dataset 200 times, which makes it impractical for graph distillation, where the goal is to avoid training on the full dataset in the first place. Bonsai and GDEM are the only two methods that achieve this goal.\\n\\nSGDD has also been outperformed in the literature by GCSR and GDEM. Additionally, in our experiments, it produced NaN repeatedly. This issue of non-reproducibility has been reported by multiple users in their github repo (https://github.com/RingBDStack/SGDD/issues/) and the authors have acknowledged a bug in the code with the promise to release an updated version, which has not been released to date.\"}", "{\"comment\": \"I thank the author for conducting so many additional experiments. I raise my score. I don\\u2019t seem to have noticed the threshold for \\\"out of time\\\", perhaps I missed it. If the author hasn\\u2019t included it in the paper, I would recommend adding it.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Request for rebuttal feedback - Discussion closing soon\", \"comment\": \"Dear Reviewers YqxY, t6gR, and n8k1,\\n\\nThank you for your valuable feedback on our submission. We submitted our detailed rebuttal and revised manuscript on November 19th, addressing your comments and suggestions.\\n\\nAs the discussion phase closes in two days, we would greatly appreciate your thoughts on our revisions. Early feedback would allow us time to address any remaining concerns you may have.
We're pleased to note that Reviewer q9ty has reviewed our changes and responded positively, increasing their rating to 6.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"metareview\": \"The authors propose a gradient-free graph distillation/condensation method based on the observation that similar computation trees imply similar embeddings, which in turn imply similar gradients. The method is model-agnostic, retains the original node features, and is scalable. The reviewers were convinced by both the idea and the experimental results. The reviewers also appreciated the theoretical analysis (simple but interesting).\", \"additional_comments_on_reviewer_discussion\": \"Several reviewers asked for the difference between the proposed method Bonsai and Mirage. The authors clarified that the main difference is that Bonsai selects diverse trees in dense neighborhoods, while Mirage selects frequent trees. They were able to convincingly address this and other concerns and subsequently, all four reviewers increased their score from 5 to 6.\"}", "{\"title\": \"Response to Reviewer t6gR- Part 4\", \"comment\": \"**Q7. The experimental settings differ from commonly used ones in the following ways: (a) Different dataset split (i.e., training/validation/test set split) (b) Different metric for compression rate. The authors are suggested to clarify the reasons for choosing a different setting.**\\n\\n**Answer:** Indeed this is an important aspect of our study.\\n\\n* **Different metric for compression:** As discussed in `Section 1.1 (line 64 onwards)`, compression has predominantly been measured by counting nodes in the distilled dataset. However, this metric overlooks several crucial factors. First, the efficiency of a GNN scales linearly with the number of edges, a factor not captured by node-based compression ratios.
Additionally, GPU memory consumption depends on both the edge count in the \\u2113-hop neighborhood of a node and the number of node attributes. This limitation becomes particularly evident in datasets like Cora and Citeseer (`Table 2`), where the distilled datasets produced by Gcond and GDEM actually contain more edges than the full dataset! This significant detail has remained obscured due to the exclusive focus on node count as the compression metric. A more comprehensive approach would consider the total byte consumption of the distilled dataset, providing a holistic measure that accounts for all these factors.\\n\\n* **Different dataset split and compression ratios:** The compression ratios established in GCond have become a de facto standard, with subsequent works largely adopting the same approach. However, our analysis reveals several significant issues with this methodology that necessitate a fresh examination.\\n\\n * The first concern is the inconsistency in compression ratios across datasets. For instance, while Cora uses ratios of 1.3%, 2.6%, and 5.2%, Reddit employs much smaller ratios of 0.05%, 0.1%, and 0.2%. Notably, there appears to be no documented justification for these widely varying ratios. Our approach **implements uniform compression ratios across all datasets, eliminating potential dataset-specific biases.** \\n\\n * The second issue stems from the interaction between data splits and compression ratios. The lack of uniformity in both these aspects makes the compression ratios difficult to interpret meaningfully. As examples, in GCond and GDEM, the **compression ratios of 5.2% and 3.6% in Cora and Citeseer actually translate to using all nodes in the training set** as these percentages match the proportion of nodes in the training data. Thus, in practice, it means no compression with respect to node count, and inflation with respect to edge counts (`Table 2`). 
Such crucial insights remain hidden due to the inconsistent application of splits and condensation ratios across datasets.\\n\\nBy standardizing both data splits and compression ratios across datasets, we provide more transparent evaluation, clearer understanding of true compression effectiveness, fair comparison across different datasets, and unambiguous measurement of compression relative to original datasets. We believe this revised approach enables more meaningful insights into the effectiveness of different distillation methods.\\n\\n**Minor Comments:**\\n\\n**Q8. The details of PPR starting from line 332 could be simplified or moved to the Appendix...**\\n \\n**Answer:** We acknowledge this feedback. We have moved PPR to the appendix.\\n\\n**Q9. Why is Herding OOT (out of time) for Reddit? In my understanding, this method should be efficient.**\\n\\n**Answer:** We have added Herding results for Reddit (`Table 5`). Your observation is correct, and it appears to be an Out-of-memory error that occurred during our execution, perhaps due to multiple workloads. We have thoroughly verified that the OOT reported for GCSR are indeed OOT. GCSR trains on the full train set 100 times, and hence the high running time is expected.\\n\\n-------\\n### Appeal to the reviewer\\n\\nIn addition to the clarifications made above, we have also added data on the carbon emissions of the various algorithms (`Table 9`). The data shows that all existing techniques except Bonsai are in fact slower than training on the full train set (`Table 7`) and have higher carbon emissions (`Table 9`). We hope these results would convince the reviewer on the merits of our work.\"}", "{\"summary\": \"The paper presents Bonsai, a gradient-free graph distillation method for node classification that overcomes limitations in existing approaches.
By selecting exemplar computation trees rather than synthesizing fully connected graphs, Bonsai efficiently distills graphs without needing full dataset training or specific GNN architectures. This approach achieves higher accuracy and faster distillation across various datasets and GNN models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The performance of Bonsai is impressive, including the cross-arch results.\\n2. The proof that maximizing the representative power of exemplars is NP-hard is simple but attractive.\", \"weaknesses\": \"See questions.\", \"questions\": \"1. Could the authors clarify their specific contributions to Mirage [A]? While Mirage highlights the importance of the computation tree in graph dataset condensation, their primary focus is on the graph level (with node-level experiments included in the appendix). Thus, extending their approach to the node level by following the 1-WL test may not represent a substantial novelty.\\n2. Regarding Fig. 4(b), although the authors emphasize the significance of the RkNN and PPR components, the random exemplar selection still yields considerable results. How do the authors interpret this outcome? Could they also provide a performance comparison for $\\\\mathbf{S_r}$ using different exemplars on datasets like Cora, Ogbn-arxiv, and Reddit? Including results for random selection would further clarify the comparison.\\n3. From my perspective, selecting datasets using a sampling strategy seems more akin to traditional graph reduction, sparsification, or coarsening methods, rather than directly aligning with the field of graph condensation. Therefore, it\\u2019s challenging to accept the significant improvements claimed over random or herding baselines.
Could the authors provide an intuitive example to support their approach?\\n\\n[A] Mridul Gupta, Sahil Manchanda, HARIPRASAD KODAMANA, and Sayan Ranu. Mirage: Model-agnostic graph distillation for graph classification. ICLR 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses. I appreciate your effort and I raise the score.\"}", "{\"title\": \"Response to Reviewer t6gR- Part 1\", \"comment\": \"We appreciate the reviewer\\u2019s positive feedback and constructive suggestions. We have carefully considered the comments and incorporated the recommended changes, as well as addressed the concerns raised, as outlined below.\\n\\n**Q1. The idea of Bonsai is very similar to MIRAGE[1], as both methods select frequent trees. This similarity makes Bonsai appear to be a minor adaptation of MIRAGE. Furthermore, much of the theoretical analysis, such as Graph Isomorphism, is borrowed from MIRAGE. Although these two works focus on different tasks, it's strongly recommended to discuss the differences between Bonsai and MIRAGE in the related work section.**\\n\\n**Answer:** We have updated the manuscript (`Sec 1.1` and `App A.1`) with a detailed discussion on the novel contributions of Bonsai with respect to Mirage.\\n\\nThe similarities between Mirage and Bonsai end at their shared goal of analyzing computation trees for distillation. Their approaches and capabilities differ fundamentally. Specifically,\\n* **Bonsai does not select frequent trees**.\\n* **Bonsai does not perform graph isomorphism either.**\\n* Bonsai selects a set of trees located in **dense neighborhoods** and **diverse** from each other.\\n\\nLet us elaborate.\\n\\n**Summary of Mirage:** Given a graph database, Mirage identifies all unique computation trees of a certain depth in the graph database.
Two trees are non-unique if they are isomorphic to each other. Next, each graph is represented as a set of computation trees, on which frequent itemset mining is performed. Trees in these frequent itemsets form the distilled dataset.\\n\\n**What breaks in our setting?** Tree isomorphism does not work when each node is attributed with high-dimensional feature vectors since treating feature vectors as labels means two trees are isomorphic if they are topologically identical and the **mapped nodes across the graphs are annotated with identical feature vectors.** Mirage explicitly looks at only graphs where nodes are annotated with a single discrete label (such as atom-types in chemical compounds) and hence tree isomorphism can be performed. In Bonsai, we make no such assumption and hence any algorithm built on tree isomorphism is not feasible.\", \"bonsai_makes_the_following_novel_contributions_that_shares_no_similarity_to_mirage\": [\"Bonsai embeds computation trees into a feature space using WL-kernel and **ranks** each tree based on the density in its neighborhood (Reverse $k$-NN).\", \"Reverse K-NN is expensive ($O(n^2)$). Bonsai proposes an efficient sampling strategy with provable guarantees on the sample_size-approximation_error trade-off.\", \"The core idea is to select a subset of trees that are representative of the entire set. Hence, we select trees located in dense neighborhoods and diverse from each other. This ensures that all trees that are not selected in the distilled set are likely to have a close neighbor (in the embedding space) in the distilled set. This is achieved through coverage maximization (`Sec 3.2`).\", \"We prove coverage maximization is NP-hard, monotonic and submodular.
Hence, greedy selection provides a $1-1/e$ approximation guarantee.\", \"Sparsification of the distilled dataset is performed through personalized PageRank.\", \"**Empirical validation:** To further leave no room for ambiguity, we applied Mirage on Cora and Citeseer, and in both datasets all computation trees were unique, leading to no compression.\"]}", "{\"title\": \"Eagerly awaiting feedback before closure of discussion phase.\", \"comment\": \"Dear Reviewer n8k1,\\n\\nAs we are days away from the closure of the discussion phase, we are eagerly awaiting your feedback on the revisions made. Our rebuttal discusses in detail how your feedback has been incorporated. We have also uploaded a revised manuscript that includes all suggested changes, such as a more detailed ablation study, a clearer distinction with Mirage, and several other clarifications. In addition, based on suggestions from other reviewers, we have added $3$ more baselines. Overall, our results show that Bonsai is at least 7-times faster than baselines, produces a 17-times lower carbon footprint, and is more accurate on average across datasets as well as GNN architectures. These benefits come along with the advantage of being model-agnostic. Our codebase has been released for easy replication.\\n\\nWe are keenly awaiting your feedback on the changes made and whether we can do anything more to convince you of the merits of our work.\\n\\nregards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for your additional work. All my concerns have been addressed. I will raise my ratings.\"}", "{\"title\": \"thank you for raising score\", \"comment\": \"We thank the reviewer for supporting our work and raising the score. The OOT (out of time) threshold, which is 5 hours, is mentioned in the caption of Table 5.
We feel this is a reasonable time length since full training even on the largest dataset of Reddit finishes in 40 minutes (as shown in Table 7).\\n\\nDo not hesitate to let us know if there is anything more we can do to further convince you of the merits of our work.\"}", "{\"title\": \"Response to Reviewer t6gR- Part 2\", \"comment\": \"**Q2. This paper claims that \\\"distilling to fully-condensed graph\\\" is a problem for previous work. However, most prior methods include a sparsification step, setting a threshold to obtain a sparse graph. Consequently, the number of edges reported in Table 2 is inaccurate. For correct edge counts, please refer to Table 5 in the GCond paper [2].**\\n\\n**Answer:** We apologize for this inaccuracy and not articulating our point precisely. However, as we will argue below, the broader claim that the distilled graph being larger than the full graph in some cases remains intact. We present the updated discussion from the paper verbatim below. We also present the updated Table 2 with corrected edge counts below supporting this claim.\\n\\n>In a message-passing GNN, the computation cost of each forward pass is $\\\\mathcal{O}(|E|)$, where $E$ denotes the set of edges. Consequently, the computational effectiveness of graph distillation is primarily determined by the reduction in edge count between the original and distilled graphs, rather than node count alone. However, current graph distillation algorithms (see Table 1) quantify the condensation ratio based on the node count. Specifically, given a compression ratio $r\\\\_n$, it synthesizes a weighted, fully-connected dense adjacency matrix for the distilled graph of the size $\\\\frac{|V|}{r\\\\_n}\\\\times \\\\frac{|V|}{r\\\\_n}$, where $V$ denotes the node set. **Some of these algorithms sparsify by removing edges with edge weights below a certain threshold. 
This threshold is chosen by studying the drop in accuracy at various sparsity levels and choosing a point providing good accuracy-efficiency trade-off. Consequently, the sparsification process itself requires training on the fully connected initial distilled graph.** This design, which is inconsistent with the computation structure of GNNs, can lead to a distilled graph with small reduction in edge count and, in some cases, may even result in a graph with more edges than the original dataset (See Table 2). \\n\\n| Dataset | GCond | GDEM | Full Dataset |\\n|---------|-------|------|--------------|\\n| Cora (5.2%) | **15,074** | **19,600** | 10,556 |\\n| Citeseer (3.6%) | **10,996**| **14,400** | 9,104 |\\n| Pubmed (0.3%) | 3,557 | 3,600 | 88,648 |\\n| Flickr (1%) | 23,556 | 795,664 | 899,756 |\\n| Ogbn-arxiv (0.5%) | 15,110 | 715,716 | 2,315,598 |\\n| Reddit (0.2%) | 5,756 | 216,225 | 23,213,838 |\\n\\n**Table caption:** We present the number of edges in the distilled graphs produced by GCOND and GDEM and compare them to the full dataset. **Bold** cells indicate cases where the distilled graphs have more edges than the full dataset. The indicated node ratios are taken from the values used in GCOND. For GCOND, we report the number of edges after sparsification as reported in their github repository at https://github.com/ChandlerBang/GCond/tree/main/saved_ours.\\n\\n**Q3. Although this paper presents comprehensive deductions in the theoretical section, some hypotheses appear to be too strong and lack support. For example, Hypothesis 1 (line 201) and Logical Progression 2 (line 215) may not hold true.**\\n\\n**Answer:** Thank you for this important feedback. We acknowledge that our original statement was based on intuitive algorithm design principles rather than theoretical guarantees. 
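As a purely hypothetical illustration (synthetic stand-in data, not the authors' actual analysis code), a correlation check between pairwise embedding similarity and pairwise gradient similarity could be sketched as follows:

```python
# Hypothetical sketch: correlate pairwise cosine similarity of embeddings
# (stand-ins for WL embeddings) with pairwise similarity of gradients.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 8))               # synthetic "WL embeddings"
grad = emb + 0.3 * rng.normal(size=(50, 8))  # synthetic, correlated "gradients"

def pairwise_cosine(x):
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = x @ x.T
    return sim[np.triu_indices(len(x), k=1)]  # one value per distinct pair

r = np.corrcoef(pairwise_cosine(emb), pairwise_cosine(grad))[0, 1]
print(f"Pearson correlation over node pairs: {r:.2f}")
```

On real WL embeddings and real per-node training gradients, a significantly positive correlation of this kind is what would lend support to the hypothesis.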
Following your suggestion, we conducted an empirical analysis to examine the relationship between WL-embedding similarities and training gradients.\\n\\nThe results lend support to our hypothesis, revealing statistically significant ($p$-value$<0.05$) positive correlations:\\n\\n| Dataset | Correlation | $p$-value |\\n| --- | --- | --- |\\n| Cora | 0.74 | $\\\\approx 0$ |\\n| Citeseer | 0.83 | $\\\\approx 0$ |\\n| Pubmed | 0.38 | 0.02 |\\n| Reddit | 0.42 | $\\\\approx 0$ |\\n\\n**Changes in manuscript:** We have incorporated these empirical findings in `Sec 3` of the revised manuscript.\\n\\n**Q4. In Fig. 2, the authors empirically demonstrate the correlation between GNN embedding and WL embedding. When the threshold is large, almost all node pairs are considered, leading to a low correlation. How does this observation inform the method design?**\\n \\n**Answer:** It is due to this observation that we design the reverse $k$-NN based ranking of node importances for distillation (also an important distinction from Mirage). Specifically, if node $v$ is among the $k$-NN of node $u$ for a small $k$ (we use $k=5$ for all datasets), this indicates these two nodes are similar in WL-embedding. Hence, we seek to include those nodes in the distilled set that reside in the $k$-NN of many other nodes. Consequently, if these nodes are selected, their GNN embeddings are also likely similar to their $k$-NN neighbors from the WL space. As a result, they can effectively approximate the GNN embeddings of the filtered-out nodes. Since similar GNN embeddings likely lead to similar gradients (as discussed in Q3 above), we minimize the information lost from nodes that are filtered out.\\n\\nWe have now clarified this explicitly in `Sec 3`.\"}", "{\"title\": \"Awaiting your response\", \"comment\": \"Dear Reviewer t6gR,\\n \\nWe really appreciate your detailed feedback and have made changes to our manuscript to address them. 
We are extremely keen to know if there are any outstanding concerns following our rebuttal. As noted by ICLR, 27th Nov is the last date to make changes to the manuscript. Hence, your engagement within this deadline would be much appreciated.\\n \\nregards,\\n \\nAuthors\"}", "{\"title\": \"Response to Reviewer n8k1 - Part 1\", \"comment\": \"We thank the reviewer for their positive comments and constructive feedback. We have carefully addressed the concerns and incorporated the suggested revisions, as detailed below.\\n\\n**Q1. Could the authors clarify their specific contributions to Mirage [A]? While Mirage highlights the importance of the computation tree in graph dataset condensation, their primary focus is on the graph level (with node-level experiments included in the appendix). Thus, extending their approach to the node level by following the 1-WL test may not represent a substantial novelty.**\\n\\n**Answer:** We have updated the manuscript (`Sec 1.1 and App A.1`) with a detailed discussion on the novel contributions of Bonsai with respect to Mirage.\\n\\nThe similarities between Mirage and Bonsai end at their shared goal of analyzing computation trees for distillation. **The fundamental limitation of Mirage lies in relying on tree isomorphisms, which makes them limited to graphs annotated with only single, discrete node labels** (such as atom-types in chemical compounds). Hence, **Mirage does not work on general purpose graphs** where nodes are annotated with high-dimensional feature vectors.\\n\\nLet us elaborate.\\n\\n**Summary of Mirage:** Given a graph database, Mirage identifies all unique computation trees of a certain depth in the graph database. Two trees are non-unique if they are isomorphic to each other. Next, each graph is represented as a set of computation trees, on which frequent itemset mining is performed. 
Trees in these frequent itemsets form the distilled dataset.\\n\\n**What breaks in our setting?** Tree isomorphism does not work when each node is attributed with high-dimensional feature vectors since treating feature vectors as labels means two trees are isomorphic if they are topologically identical and the **mapped nodes across the graphs are annotated with identical feature vectors.** In Bonsai, we make no such assumption.\\n\\nThe algorithm design of Bonsai is entirely different. We make the following novel contributions:\\n\\n* **Reverse $k$-NN to rank tree importance:** Bonsai embeds computation trees into a feature space using WL-kernel and **ranks** each tree based on the density in its neighborhood (Reverse $k$-NN).\\n* **Fast reverse $k$-NN through sampling:** Reverse $k$-NN is expensive ($O(n^2)$). Bonsai proposes an efficient sampling strategy with provable guarantees on the sample_size-approximation_error trade-off.\\n* **Coverage maximization:** The core idea is to select a subset of trees that are representative of the entire set. Hence, we select trees located in dense neighborhoods and diverse from each other. This ensures that all trees that are not selected in the distilled set are likely to have a close neighbor (in the embedding space) in the distilled set. This is achieved through coverage maximization (`Sec 3.2`).\\n* **Theoretical guarantees:** We prove coverage maximization is NP-hard, monotonic and submodular. Hence, greedy selection provides a $1-1/e$ approximation guarantee. \\n* **Sparsification** of the distilled dataset is performed through personalized page rank.\\n\\n**Empirical validation:** To further leave no room for ambiguity, we applied Mirage on Cora and Citeseer, and in both datasets all computation trees were unique, leading to no compression.\"}", "{\"title\": \"Response to Reviewer q9ty- Part 1\", \"comment\": \"We thank the reviewer for the positive feedback and constructive comments on our work. 
Please find below our clarifications and changes made in the revised manuscript to address the concerns raised.\\n\\n**W1 and Q1: Why the authors choose 0.5%, 1% and 3% as $S_r$? This setting does not align with previous works.**\\n\\n**Answer:** The compression ratios established in GCond have become a de facto standard, with subsequent works largely adopting the same approach. However, our analysis reveals significant issues with this methodology that necessitate a fresh examination.\\n\\n * **Ensuring fairness and consistency:** The first concern is the inconsistency in compression ratios across datasets. For instance, while Cora uses ratios of 1.3%, 2.6%, and 5.2%, Reddit employs much smaller ratios of 0.05%, 0.1%, and 0.2%. Notably, there appears to be no documented justification for these widely varying ratios. **Our approach implements uniform compression ratios across all datasets, eliminating potential dataset-specific biases.** \\n\\n * **Interpretability:** The lack of uniformity obscures important insights. As examples, in GCond and GDEM, the **compression ratios of 5.2% and 3.6% in Cora and Citeseer actually translate to using all nodes in the training set** as these percentages match the proportion of nodes in the training data. More critically, as we point out in `Table 2`, this results in **distilled datasets being larger than the full dataset!** Thus, in practice, it means no compression with respect to node count, and inflation with respect to edge counts. \\n\\nBy standardizing compression ratios across datasets, we provide more transparent evaluation, clearer understanding of true compression effectiveness, and fair comparison across different datasets. \\n\\n**W2 and Q2: Recently, a lot of works have studied the efficiency of graph condensation, e.g., Exgc [1] and GC-SNTK [2]. These two methods should be included as baselines when comparing condensation time. 
By the way, it would be better to present time comparison via table than figure.**\\n\\n**Answer:** We have included comparisons to both Exgc and GC-SNTK in accuracy (`Table 5`) and distillation times (`Table 7`). We outperform both across all datasets except in ogbn-arxiv at 0.5% and 1%. \\n\\nAs suggested, we now present distillation times as a table (`Table 7`). The same table is also produced below for easy reference. In addition, to further showcase the benefit of Bonsai, we present the carbon emissions for distillation from each of the techniques (`Table 9`). As evident from the table below, Bonsai is **7 times faster** on average over EXGC, the 2nd fastest baseline, and **reduces carbon footprint by 17 times.**\\n\\n**Distillation times** (in seconds):\\n\\n| Dataset | Gcond | Gdem | Gcsr | Exgc | Gcsntk | Geom | Bonsai | Full training|\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Cora | 2738 | 105 | 5260 | 34.87 | 82 | 12996 | **2.76** | 24.97 |\\n| Citeseer | 2712 | 167 | 6636 | 34.51 | 124 | 15763 | **2.61** | 24.87 |\\n| Pubmed | 2567 | 530 | 1319 | 114.96 | 117 | OOT | **24.84** | 51.06 |\\n| Flickr | 1935 | 3405 | 17445 | 243.28 | 612 | OOT | **118.23** | 180.08 |\\n| Ogbn-arxiv | 14474 | 569 | OOT | 1594.83 | 12218 | OOT | **348.24** | 524.67 |\\n| Reddit | 30112 | 20098 | OOT | 6903.47 | 29211 | OOT | **1445.00** | 2425.68 |\\n\\n**Carbon emissions**: Estimated CO$_2$ emissions from distillation of various methods at 0.5\\\\% compression ratio. CO$_2$ emissions are computed as 10.8kg per 100 hours for Nvidia A100 GPU and 4.32kg per 100 hours for 10 CPUs of Intel Xeon Gold 6248[1]. 
\\n\\n| Dataset | Gcond | Gdem | Gcsr | Exgc | Gcsntk | Geom | Bonsai | Full training|\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Cora | 82.14 | 3.15 | 157.80 | 1.05 | 2.46 | 389.88 | **0.03** | 0.75 |\\n| Citeseer | 81.36 | 5.01 | 199.08 | 1.03 | 3.72 | 472.89 | **0.03** | 0.75 |\\n| Pubmed | 77.01 | 15.90 | 39.57 | 3.45 | 3.51 | \\u2265540.00 | **0.3** | 1.53 |\\n| Flickr | 58.05 | 102.15 | 523.35 | 7.30 | 18.36 | \\u2265540.00 | **1.42** | 5.40 |\\n| Ogbn-arxiv | 434.22 | 17.07 | \\u2265540.00 | 46.49 | 366.54 | \\u2265540.00 | **4.18** | 15.74 |\\n| Reddit | 903.36 | 602.94 | \\u2265540.00 | 207.10 | 876.33 | \\u2265540.00 | **17.34** | 72.77 |\\n\\n[1] Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning.\"}", "{\"title\": \"Keenly awaiting your feedback on the updated figure and clarifications\", \"comment\": \"Dear Reviewer n8k1,\\n\\nSince we will be unable to update our manuscript after 27th, we are very keen to know if the current version satisfactorily addresses all concerns. We have updated the figure as per your suggestions. We have also clarified that distillation and condensation are synonymous in our context and we would be happy to change \\\"distillation\\\" to \\\"condensation\\\" in our manuscript.\\n\\nregards,\\n\\nAuthors\"}", "{\"comment\": \"I would like to encourage the reviewers to engage with the author's replies if they have not already done so. At the very least, please\\nacknowledge that you have read the rebuttal.\"}", "{\"title\": \"thank you for supporting our revised manuscript\", \"comment\": \"Dear Reviewer t6gR,\\n\\nWe appreciate your support for our work. We have already incorporated all of the suggestions made in our revised manuscript. 
We still have a few hours left to update the manuscript; if there are any further suggestions, we will ensure we incorporate those as well.\\n\\nThank you again for raising the score.\\n\\nregards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer n8k1 - Part 2\", \"comment\": [\"**Q2. Regarding Fig. 4(b), although the authors emphasize the significance of the RkNN and PPR components, the random exemplar selection still yields considerable results. How do the authors interpret this outcome? Could they also provide a performance comparison for using different exemplars on datasets like Cora, Ogbn-arxiv, and Reddit? Including results for random selection would further clarify the comparison.**\", \"**Answer:** We apologize for not being articulate enough in our analysis of the ablation study. To correct this, we have made the following changes to the updated manuscript (`Sec 4.5` to be specific).\", \"**Additional Experiments:** As suggested, we now perform the ablation study **across all datasets and all compression ratios**. The results are available in the updated `Figure 4`.\", \"**Added Random:** We have added Random to the ablation study. As evident from `Fig. 4`, across most datasets, Bonsai is dramatically better than Random. FlickR is an exception, where Random does not outperform, but comes close to the performance of Bonsai. However, this trend has also been reported across all other baselines, where Random has been shown to perform well.\", \"**Random vs. Bonsai-rknn:** We apprehend that the reviewer may have confused Bonsai-rknn with Random. Hence, we now separately show the performance of both in our ablation study (`Fig. 4`). While in Bonsai-rknn, we randomly add computation trees till the distillation budget is exhausted, in Random, we iteratively add random nodes, till their induced subgraph exhausts the budget. 
Consequently, Random covers more diverse nodes with partial local neighborhood information, while Bonsai-rknn compromises diversity by selecting a smaller node set with complete $L$-hop topology, enabling precise GNN embedding computation. Both represent two distinct mechanisms of random selection. In most cases, Bonsai-rknn achieves a higher accuracy indicating that obtaining full $L$-hop topological information for a smaller set of nodes leads to better results than partial $L$-hop information over a broader set of nodes. We note that random computation tree sampling has thus far not been explored in the graph distillation literature to the best of our knowledge.\"]}", "{\"summary\": \"This paper studies critical limitations in existing graph distillation methods. It introduces a new approach that optimizes the training of GNNs by generating smaller, representative datasets without requiring full training on the original data. The proposed method leverages a computation tree-based approach to create a distilled representation that is efficient, adaptable, and capable of maintaining high accuracy across varied datasets and models. Experimental results have demonstrated its effectiveness and efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed gradient-free approach bypasses the need for computationally expensive gradient calculations, resulting in a significantly faster distillation process. This efficiency makes Bonsai highly scalable even for large datasets.\\n2. This model-agnostic method is interesting and saves efforts in hyperparameter tuning when changing condensation models.\\n3. It is the first distillation method that retains the original node features and synthesizes graphs with unweighted edges, which more faithfully represent the original graph structure.\", \"weaknesses\": \"1. The idea of Bonsai is very similar to MIRAGE[1], as both methods select frequent trees. 
This similarity makes Bonsai appear to be a minor adaptation of MIRAGE. Furthermore, much of the theoretical analysis, such as Graph Isomorphism, is borrowed from MIRAGE. Although these two works focus on different tasks, it's strongly recommended to discuss the differences between Bonsai and MIRAGE in the related work section.\\n2. This paper claims that \\\"distilling to fully-condensed graph\\\" is a problem for previous work. However, most prior methods include a sparsification step, setting a threshold to obtain a sparse graph. Consequently, the number of edges reported in Table 2 is inaccurate. For correct edge counts, please refer to Table 5 in the GCond paper [2].\\n3. Although this paper presents comprehensive deductions in the theoretical section, some hypotheses appear to be too strong and lack support. For example, Hypothesis 1 (line 201) and Logical Progression 2 (line 215) may not hold true.\\n4. In Fig. 2, the authors empirically demonstrate the correlation between GNN embedding and WL embedding. When the threshold is large, almost all node pairs are considered, leading to a low correlation. How does this observation inform the method design?\\n5. In line 235, **Diversity** is highlighted. Which part of the method addresses this concern?\\n6. To select the trees with top representativeness, why do the authors choose Reverse K-NN? Would it be possible to simply adopt clustering instead?\\n7. The experimental settings differ from commonly used ones in the following ways: (a) Different dataset split (i.e., training/validation/test set split) (b) Different metric for compression rate. The authors are suggested to clarify the reasons for choosing a different setting.\\n\\n\\n### Minor\\n1. The details of PPR starting from line 332 could be simplified or moved to the Appendix as people are familiar with it.\\n2. Why is Herding OOT (out of time) for Reddit? 
In my understanding, this method should be efficient.\\n\\n### References\\n[1] Mridul Gupta, Sahil Manchanda, HARIPRASAD KODAMANA, and Sayan Ranu. Mirage: Model-\\nagnostic graph distillation for graph classification. In ICLR 2024\\n\\n[2] Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, and Neil Shah. Graph condensa-\\ntion for graph neural networks. In ICLR, 2021.\", \"questions\": \"Please see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Bonsai, a linear-time, model-agnostic graph distillation algorithm for node classification. It observe three limitations of previous works, Full Gnn training, Distilling to a fully-connected graph and Model-specific. To address these limitations, Bonsai aims to identify a set of b exemplar computation trees that optimally represent the full training set.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: The problem is important in this field.\", \"s2\": \"Presentation is good. Good model name.\", \"s3\": \"The solution looks novel to me.\", \"weaknesses\": \"W1: The experiments are not very standard.\", \"w2\": \"Some recent works are not included as baseline.\", \"questions\": \"Q1: Why the authors choose 0.5%, 1% and 3% as $S_r$? This setting does not align with previous works.\", \"q2\": \"Recently, a lot of works have studied the efficiency of graph condensation, e.g., Exgc [1] and GC-SNTK [2]. These two methods should be included as baselines when comparing condensation time. 
By the way, it would be better to present time comparison via table than figure.\", \"q3\": \"Other graph condensation methods should be included for accuracy comparison, e.g., SGDD [3], SFGC [4], GEOM [5].\\n\\n[1] Exgc: Bridging efficiency and explainability in graph condensation\\n\\n[2] Fast Graph Condensation with Structure-based Neural Tangent\\n\\n[3] Does graph distillation see like vision dataset counterpart\\n\\n[4] Structure-free graph condensation: From large-scale graphs to condensed graph-free data\\n\\n[5] Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Eagerly waiting for feedback on revisions made\", \"comment\": \"Dear Reviewer YqxY,\\n\\nWe have provided detailed explanations and additional experiments to address your concerns. We have also uploaded a revised manuscript with all new additions clearly highlighted in blue font. Since the discussion phase will close soon, we are keen to get your reaction to the changes made. \\n\\nregards,\\n\\nAuthors\"}" ] }
5x65bI0aY8
NIAQUE: Neural Interpretable Any-Quantile Estimation - Towards Large Probabilistic Regression Models
[ "Boris N. Oreshkin", "Dmitry Efimov", "Siamak Gilan", "James Nordlund" ]
State-of-the-art computer vision and language models largely owe their success to the ability to represent massive prior knowledge contained in multiple datasets by learning over multiple tasks. However, large-scale cross-dataset studies of deep probabilistic regression models are missing, presenting a significant research gap. To bridge this gap, in this paper we propose, analyze, and evaluate a novel probabilistic regression model, capable of solving multiple regression tasks represented by different datasets. To demonstrate the feasibility of such operation and the efficacy of our model, we define a novel multi-dataset probabilistic regression benchmark LPRM-101. Our results on this benchmark imply that the proposed model is capable of solving a probabilistic regression problem jointly over multiple datasets. The model, which we call NIAQUE, learns a meaningful cross-dataset representation, scores favorably against strong tree-based baselines and Transformer and exhibits positive transfer on unseen datasets after fine-tuning.
[ "deep probabilistic regression", "large regression models" ]
Reject
https://openreview.net/pdf?id=5x65bI0aY8
https://openreview.net/forum?id=5x65bI0aY8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ymxCDxAqeO", "pCxuPG0S2k", "mWn8qB8xMr", "luO7G1TzD3", "hmQ2ssel9Y", "amXNFTFvO7", "TifkSEdYdM", "QRWZVGkjkk", "NQTYDgMT8X", "N3AjOrEydT", "JNPNhwgVZ3", "FbwLAfZMQz", "DsgYnVQSkJ", "3Gg39dyOgu", "28jLLhxYji", "0NM1rvgGON" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment" ], "note_created": [ 1732620352174, 1730578748802, 1732568750109, 1730685199706, 1732786138688, 1732476358364, 1732562166212, 1732823812672, 1732547333681, 1734632043451, 1732517626185, 1732454556137, 1730718372828, 1730582553614, 1737523448588, 1732462410598 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_uKfu" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_JSss" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_p8Dt" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_NJbn" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Area_Chair_NQhh" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_p8Dt" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_uKfu" ], [ "ICLR.cc/2025/Conference/Submission1352/Reviewer_NJbn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1352/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response. 
I choose to keep the same score.\"}", "{\"summary\": \"This paper aims to narrow the gap between deep probabilistic regression models and current powerful deep neural networks. It proposes a large unified model to handle regression tasks across different datasets. Correspondingly, it introduces a framework called NIAQUE to address the probabilistic regression problem. Compared with several baseline methods, the proposed model achieves better empirical results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper aims to narrow the gap between current neural networks and probability regression models, which is still underexplored.\\n2. It proposes a complete working pipeline, from building a benchmark, to proposing a new model with theoretical analysis, to empirical analysis.\\n3. Overall, the draft is easy to read and follow.\", \"weaknesses\": \"1. The problem statement is still not clear: why use a transformer model to handle such tasks, and what kind of real-world applications illustrate the model's practice? More discussions and intuitions are needed to support the motivation of this work.\\n2. Some table and figure formats are not well prepared, such as Figure 3. Necessary descriptions are needed.\\n3. Based on Table 1, there is no clear difference in final results compared with previous works; what is the benefit of proposing such a learning pipeline, and how to illustrate the proposed model's superiority?\", \"questions\": \"Please refer to the weaknesses section above.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Dear Reviewer JSss, we would like to thank you for providing insightful feedback. Our detailed response appears below.\\n\\n1. The Transformer model is used as a baseline. 
The basic principle here is that in order to operate in the multi-task setting, where each dataset represents a different task and has a different number of available features, the model needs to be able to process variable-sized input. Both the proposed NIAQUE model and the Transformer model are capable of this mode of operation, with NIAQUE demonstrating improved accuracy. Additionally, some of the existing distributional models such as the neural process or conditional neural process pointed out by Reviewer p8Dt simply cannot do it. The multi-task operation creates the opportunity to co-train on multiple datasets and hence have one model for multiple problems. Finally, in our updated experiments, we show the feasibility of pretraining a large probabilistic regression model that can be fine-tuned on unseen target datasets, providing a clear performance lift. We have revised the manuscript and included additional experiments and motivation for our work accordingly.\\n\\n2. We have provided additional clarifications in the caption of Figure 3. We are happy to provide additional clarifications if the Reviewer provides additional guidance.\\n\\n3. We have provided additional transfer learning results in Table 2 and the paragraph Transfer Learning Experiment. The learning pipeline we proposed provides the ability to train foundational probabilistic regression models that can be fine-tuned on target datasets, providing a clear accuracy lift compared to the model trained on the same data from scratch. This is akin to the case of training foundational vision or language models that can be used in various downstream tasks through transfer learning. The feasibility of such transfer learning has not been demonstrated before in the context of probabilistic regression models and we are closing this gap in this paper.\"}", "{\"summary\": \"The authors focus on the problem of multi-task probabilistic regression over tabular datasets. 
They contribute a benchmark of 101 regression datasets drawn from public sources, and develop a neural network architecture trained with quantile regression for this setting. They demonstrate that their architecture, NIAQUE, outperforms tree-based and Transformer baselines in both point and distributional metrics. The authors also provide a theoretical result demonstrating the consistency of their any-quantile training algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Good motivation: The problem setting of multi-task learning across tabular regression datasets is a promising and important one, so the paper is well-motivated.\", \"NIAQUE compares reasonably well to strongest baseline: The NIAQUE architecture appears to have decent empirical results, performing slightly better than the Transformer with ~2x reduction in wall-clock training time (L254).\", \"Strong benchmark and set of evaluation datasets: The contributed LPRM-101 benchmark is expansive.\", \"The method (the neural network architecture and quantile regression objective) seem sensible and novel.\", \"The consistency result for NIAQUE appears to be correct.\"], \"weaknesses\": [\"Minimal gains from multi-task transfer: the core pitch of the paper is to contribute an architecture that works well for multi-task learning, but the gains from learning across datasets appear to be pretty marginal (comparing NIAQUE-local and NIAQUE-global in Table 1).\", \"Unclear presentation: Some parts of the presentation of NIAQUE were confusing or unclear. How exactly is the model used for prediction if the encoder takes a variable number of inputs? At test time, do you concatenate each query point with the training dataset (or a subsample of training points) and feed them to the encoder to obtain the encoder representation? 
If so, have you considered various heuristics for selecting these points (e.g., nearest neighbors to the query input)?\"], \"questions\": \"My main concern is minimal gains from multi-task transfer.\\n* One way to address this: is it possible to evaluate this model in the standard meta-learning setup, with entirely held-out datasets, as in [1, 2]? If these results are strong, it might better justify NIAQUE even when in the \\\"held-in\\\" dataset setting, NIAQUE-local and NIAQUE-global perform similarly. In the meta-learning setting, Neural Processes are the relevant baseline. If NIAQUE is competitive with them, it would encourage me to raise my score.\", \"other_medium_priority_concerns\": \"* In Table 1, why is Transformer-local not included? How does it perform?\\n* The architecture appears to resemble Neural Processes [1, 2], another neural architecture for multi-task learning / meta-learning. Could you contextualize NIAQUE with regard to them?\\n* The nomenclature of \\\"co-training\\\" seems to clash with the old-school pseudo-label style setting as in [3]. Could you instead use the established nomenclature for this setup (multi-task learning, pretraining, or meta-learning depending on the context?).\\n\\n[1] Garnelo et al., 2018. Neural Processes. https://arxiv.org/abs/1807.01622\\n\\n[2] Garnelo et al., 2018. Conditional Neural Processes. In ICML. https://proceedings.mlr.press/v80/garnelo18a.html\\n\\n[3] Blum and Michell, 1998. Combining Labeled and Unlabeled Data with Co-Training. In COLT. https://www.cs.cmu.edu/~avrim/Papers/cotrain.pdf\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"response to authors rebuttal\", \"comment\": \"I have carefully considered the feedback from other reviewers, as well as the author's response to neural process questions, and their responses to other reviewers. 
I appreciate the authors' thorough and thoughtful responses and am convinced by the explanations provided, so I will slightly raise my score to marginal acceptance. Good luck!\\n\\nMoreover, I would encourage the authors to share the code [and datasets] when submitting a manuscript.\"}", "{\"title\": \"Response\", \"comment\": [\"Dear Reviewer uKfu, we would like to thank you for providing insightful feedback and for the positive assessment of our work. Our detailed response appears below.\", \"The code to download the benchmark datasets, PyTorch dataloaders, models, training and evaluation scripts/notebooks, along with trained model binaries, will be publicly released if the paper is accepted. Therefore, we do not envision any problems with building on top of this work.\", \"The focus of this paper is to demonstrate the feasibility of multi-task probabilistic learning on heterogeneous regression datasets. Our most recent results in the revised paper also demonstrate positive transfer after fine-tuning on unseen datasets in this setting. To be able to demonstrate these results, we used raw dataset features, uniformly across all datasets, to ensure uniformity and an apples-to-apples comparison. The overwhelming majority of datasets used in our study are not Kaggle competition datasets. Kaggle competitors typically spend a significant amount of time on feature engineering. Your point definitely outlines an interesting direction for future work, that of finding automatic feature extraction methods that work synergistically with large-scale regression models, helping them compete against human feature engineering approaches. This requires significant additional investigative, experiment design, and theoretical work, which is outside of the scope of our current paper.\"]}", "{\"title\": \"Response\", \"comment\": [\"Dear Reviewer NJbn, we would like to thank you for providing insightful feedback. 
Our detailed response appears below.\", \"W1: Indeed we have outlined this as a potential risk in the discussion section of our paper. We would like to note that this is not exclusive to our model, but rather is a common risk relevant for the class of multi-task models as a whole. We would also like to emphasize that most of the prominent present-day models are multi-task models. We believe that an effective strategy to combat this risk factor is fine-tuning on the specific target dataset. In light of our most recent transfer learning results summarized in Table 2 of the revised paper, we conclude that fine-tuning of a pretrained NIAQUE model is viable and produces a tangible overall accuracy lift compared to the control model trained from scratch, even when we vary the size of the training set on the target unseen dataset, as shown in Table 2. This provides evidence that the model is capable of producing improved results for target datasets of varying sizes, addressing concerns raised in this comment. Additionally, based on our preliminary analysis, it appears that the lift is consistent across datasets. The metric comparisons on individual datasets for pretrained/global models and the local model will be presented in an appendix in the camera-ready version of the paper.\", \"W2: The NIAQUE model provides the capability to learn arbitrary quantiles of the target distribution. As such, all metrics of the model remain the same irrespective of the quantiles selected to compute the COVERAGE @ $\\\\alpha$ metric. The only metric that changes is COVERAGE @ $\\\\alpha$. This is because (i) the point estimation metrics are computed based on the median and (ii) the CRPS metric is computed based on 200 quantiles sampled uniformly at random in (0, 1). In this light, the CRPS metric already does provide a measure of model success when operating on arbitrary quantiles, not just the ones used for COVERAGE @ $\\\\alpha$ computation. 
Additionally, for the NIAQUE model reported in Table 1, COVERAGE @ 50 is 52.2, COVERAGE @ 90 is 89.4 and COVERAGE @ 99 is 98.1. Note that this computation is done on the same model, without retraining it in any way for the specific target quantiles, but rather querying it with the user requested quantile value of interest. Therefore, the guideline for the user is as simple as: train the model in the way described in the paper and at inference time, query the model with the quantile of your choice, be it p50 (median), p5, p90 or any other. We have included the following clarification at the beginning of Section 2.1: \\\"Therefore, at inference time the user of the model has the choice of querying the model with any combination of target quantiles that best suit the user's downstream application.\\\"\", \"W3: Our model and training method do not make any parametric assumptions about the nature of the target distribution. Hence, it is able to handle distributions of different properties, naturally. The LPRM-101 benchmark is expansive and it contains datasets of very different nature. Figure 3b demonstrates what happens when features are ranked by the importance weight, calculated per dataset, and then removed from model input based on their importance rank. The low-importance feature removal degrades accuracy visibly less than the removal of high importance features, across many datasets. This clearly demonstrates that the proposed importance works on a wide variety of data. We believe that this addresses the concern raised by the reviewer.\"]}", "{\"comment\": \"Dear Reviewer NJbn, thank you very much!\"}", "{\"title\": \"Response to remaining points\", \"comment\": \"Dear Reviewer p8Dt, we would like to thank you for engaging into a productive discussion and raising your score in response to our revisions! 
We have further revised the paper to address your remaining points.\\n\\nFirst, we have worked on the terminology as you suggested and pivoted towards the multi-task formulation, replacing the co-training terminology that may not be clear.\\n\\nSecond, we have clarified the operation of the architecture on vector independent variables of variable dimensionality throughout the text, but especially in the intro of Section 2.2 as follows: \\n\\nAt inference time, for the $i$-th observation sample, $x_i$, with variable dimensionality $N_i$, it accepts a tensor of values of dimensionality $1 \\\\times N_i$ and a tensor of feature codes of dimensionality $1 \\\\times N_i$, then transforms, embeds and concatenates them into a tensor of size $1 \\\\times N_i \\\\times E_{in}$. The encoder then collapses the independent variable dimension using the prototype approach, resulting in an output embedding of size $1 \\\\times E$.\\n\\n\\nWe would be extremely grateful if you could review the latest revision of the paper and suggest any further improvements you might see fit.\"}", "{\"metareview\": \"This paper introduces NIAQUE, a neural network model for the probabilistic regression problem that excels in any-quantile estimation across multiple datasets. It also presents a new benchmark, LPRM-101, with 101 diverse regression datasets, showing that NIAQUE outperforms existing models like trees and transformers while being more interpretable and consistent in its training.\\n\\nThis paper got a mixed rating with borderline accepts and rejects. The authors acknowledge that NIAQUE is more interpretable with theoretical analysis and can save significant training time. The proposed LPRM-101 benchmark allows for rigorous and expansive evaluation across diverse regression tasks. Several questions and concerns are raised in the initial round, including 1. Experimental performance and comparison, with marginal gains from multi-task transfers. 
It is also suggested to add a comparison with the best single-dataset approaches. 2. Presentation and clarity issues. 3. Bias concern in co-training across datasets. After rebuttal, some reviewers raised their ratings, but the paper is still borderline (e.g., even after the rating raise by p8Dt, it is still a 5). Some concerns, like the marginal gain from multi-task transfer and the challenges of real-world application, still remain. Given all these, I tend to suggest a reject given the current paper status and the ICLR standard.\", \"additional_comments_on_reviewer_discussion\": \"A number of questions and concerns raised in the original reviews can be categorized into the following: 1) Performance and comparison, 2) Presentation and clarity, and 3) Bias and real-world application. Correspondingly, the authors added additional experiments and provided clarification for misunderstandings and unclear points. I am happy to see that the authors resolved most of the reviewers\\u2019 concerns by adding experiments and offering further explanation. However, some inherent problems, such as the marginal gain from multi-task transfers, the bias issue caused by co-training, and the challenges of real-world application, remain not fully addressed.\"}", "{\"title\": \"Response to Author Rebuttal\", \"comment\": \"Thanks for conducting the transfer learning experiments and including the Transformer-local results. Thanks also for the clarification on how this method relates to the (Conditional) Neural Process literature. I have raised my score to a 5.\"}", "{\"title\": \"Additional Experiments\", \"comment\": \"Dear Reviewer p8Dt, we would like to thank you for providing insightful feedback. We have addressed your major points by (i) including the transfer learning experiment on held-out datasets and (ii) providing the Transformer-local results in Table 1. 
Our results demonstrate the value of pretraining NIAQUE in a multi-task fashion and confirm that the learnings on various probabilistic regression tasks are generalizable and can be transferred to unseen regression datasets using our approach. Please refer to the updated manuscript for detailed results. We are sorry that it took some time to code and execute the experiments. Please let us know if this addresses your major points. In the meantime, we will work on providing the response to your other points.\"}", "{\"summary\": \"This paper introduces NIAQUE, a new model for probabilistic regression that learns to approximate the inverse of the posterior distribution. It also introduces a new benchmark LPRM-101 with 101 datasets for multi-dataset probabilistic regression. The authors provide theoretical analysis for NIAQUE and also show that it is competitive with (or better than) prior baselines (trees, transformers) on the new benchmark. Additionally, the authors argue that NIAQUE improves over existing baselines by being more interpretable.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"To my knowledge, the benchmark is novel because it focuses on multi-dataset probabilistic regression\", \"NIAQUE design decisions are explained and also include some theoretical analysis\", \"NIAQUE is more interpretable while having competitive performance when compared with standard approaches\", \"Experimental results sweep over parameters and include confidence intervals\"], \"weaknesses\": [\"While the paper introduces a new benchmark, it lacks detail or tooling that would allow others to use it. I would like to better understand how easy it would be for another researcher to use it and build off this work.\", \"This is a multi-dataset benchmark, but I would like to understand how the best existing single-dataset approaches perform, or if the best approaches are the ones in Table 1. 
For example, are the top submissions to the Kaggle datasets' respective competitions significantly different from Transformers or XGBoost? How do those numbers compare with the \\\"local\\\" approaches in Table 1?\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The current manuscript introduces NIAQUE, a neural network model for probabilistic regression that performs any-quantile estimation across multiple datasets. NIAQUE takes a training batch with a variable number of observations and generates a quantile uniformly at random in the [0,1] range for each training sample. The quantile is used as the input of the quantile decoder to modulate\\nthe observation representation and as the supervision signal in the quantile loss.\\n\\nBesides this, the paper also introduces the LPRM-101 Benchmark, a new benchmark with 101 diverse regression datasets from different sources like UCI, PMLB, OpenML, and Kaggle, aimed at assessing the performance of models on probabilistic regression tasks. The proposed method enables handling a range of quantiles in a single framework. Its encoder-decoder design effectively manages variable data formats across datasets, making it flexible for multi-dataset use.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The authors provided the LPRM-101 benchmark, which allows for rigorous evaluation across diverse regression tasks and data sources.\", \"The proposed NIAQUE seems to be a new probabilistic regression method that learns to approximate the inverse of the posterior distribution during training.\"], \"weaknesses\": [\"W1: co-training across datasets could introduce biases that are absent in single-dataset models. 
This is especially relevant if certain datasets dominate the training process, potentially leading to skewed representations and reduced performance on minority datasets. It would be interesting to see the performance of your model on individual datasets when trained jointly vs. separately.\", \"W2: While the any-quantile framework is flexible, the performance of the model may vary depending on the choice of quantiles. The lack of specific guidance on the choice of quantiles may prevent users from achieving optimal results in different use cases.\", \"W3: NIAQUE relies on specific probabilistic assumptions for feature importance and distributional modeling, which may not be suitable for all regression tasks, particularly those with non-standard distributional characteristics. This could limit the model\\u2019s applicability to a narrower range of tasks than intended.\"], \"questions\": [\"Q1) I am wondering how NIAQUE's performance varies with different quantile choices.\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to the Neural Process question\", \"comment\": \"We thank the reviewer for pointing out the Neural Process and Conditional Neural Process papers. These approaches are not directly applicable to the problem we solve, because they solve the problem in a fixed-dimensional input space. For example, typical settings considered in the papers include 1d regression (curve fitting), 2d regression (image completion), and classification on Omniglot. The approaches operate on the support set of $(x,y)_i$ tuples as explained in Section 2.4 of https://arxiv.org/pdf/1807.01622. The dimensionality of $x$ and $y$ is assumed to be fixed. 
The meta-learning aspect then manifests itself, for example, in the case of Omniglot, when the same-dimensional 32x32 images are fed at both training and inference time, whereas the classes at train and inference time do not overlap. This is quite different from our setup, in which the dimensionality of $x$ changes across different datasets. As an example, let's take the nearest neighbor baseline used by the conditional neural process in the Omniglot experiment (Table 2 of https://proceedings.mlr.press/v80/garnelo18a/garnelo18a.pdf). Suppose we would like to implement cross-dataset knowledge transfer with variable dimensionality $x$; what, then, is the meaning of the nearest neighbor? None really seems to emerge naturally. Yet, our transfer learning experiment shows that positive transfer across datasets with variable-dimensional input spaces is viable. To address the reviewer's point we have included the following paragraph in the related work section.\\n\\nAlternative approaches, such as Neural Processes \\\\citep{garnelo2018neuralprocesses} and Conditional Neural Processes \\\\citep{garnelo2018conditional}, also generate conditional probabilistic solutions to regression problems. However, these methods are limited to fixed-dimensional input spaces and are not directly applicable to the cross-dataset, multi-task learning problem addressed here, where datasets vary in the number of independent variables. Moreover, unlike \\\\citet{garnelo2018neuralprocesses} and \\\\citet{garnelo2018conditional}, our approach demonstrates the ability to transfer knowledge to entirely new datasets, even when their dependent variable domains do not overlap with the training data.\"}" ] }
5x1Gklb3mf
Learning Phase Representations for Microstructural Segmentation in Metallographic Images through Expert Knowledge
[ "Bishal Ranjan Swain", "Kyung Joo Cheoi", "Jaepil Ko" ]
Automated segmentation of metallographic images containing multiple phases such as martensite, ferrite, and pearlite is essential for quantifying different phases and thereby helping in understanding the properties of materials. Segmentation of these phases is challenging as they often exhibit overlapping boundaries, similar textures, and other complexities that require a holistic understanding of the microstructures and correct phase representation within the image. To this end, we propose a novel approach for learning phase representations that captures the subtle differences between phases. Our proposed Phase Learning Module strategically integrates phase ratio information with image encodings to produce ratio-aware features that preserve critical spatial details. Materials scientists can roughly estimate phase ratios by examining an image, and our proposed model leverages this expertise. While we use expert-estimated phase ratios during inference, we train the model using accurate phase ratios obtained from target mask images. To our knowledge, this is the first use of class ratios as input in a deep learning segmentation model, where they serve as constraints to guide consistent phase proportions in predictions. Experimental results demonstrate segmentation performance improvements on both private and public datasets, with a 5.65% increase in Dice scores on the private dataset and a 6.48% improvement on the MetalDAM dataset with only a 1.07% increase in model parameters. Furthermore, visualizations show that our approach leads to the learning of more distinct and better phase representations across models. The code and private dataset will be made publicly available.
[ "Phase Fraction", "Microstructure Segmentation", "Material Segmentation" ]
https://openreview.net/pdf?id=5x1Gklb3mf
https://openreview.net/forum?id=5x1Gklb3mf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yejBlv22f1", "XjPMFLvfN3", "X5gB0b3JZc", "WiPpOo922b", "VSaCLWWMg9", "Rj0bWW2SyM", "PbGXw3IExC", "KRwj2W4vqd", "K31LKWEKJg", "CUWbOmvcWJ", "BZ1XBkYYj5", "AjPXOAugm7", "6StLZ5HpSi", "6NFCFPYo1O", "1YkT8Xki9B" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1732182240333, 1730085510962, 1732611437056, 1730306681233, 1732182285023, 1732218020759, 1732684036910, 1730292385865, 1729104887815, 1732605207582, 1732652415574, 1732182576627, 1737720094494, 1732272498862, 1732182208032 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_4eVt" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_5dQM" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_J4Re" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_JHKH" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_J4Re" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_5dQM" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ], [ "ICLR.cc/2025/Conference/Submission13852/Reviewer_4eVt" ], [ "ICLR.cc/2025/Conference/Submission13852/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank the reviewer for their thoughtful review and for recognizing the strengths of our work. We appreciate the constructive feedback and have addressed the concerns as follows:\\n\\n1. 
**Model Performance Without Accurate Phase Ratio Inputs**\\n \\n We acknowledge the importance of evaluating the model's performance in the absence of accurate phase ratio inputs. In our experiments, we observed that the model performs better when **no phase ratio input is provided** than when incorrect phase ratios are supplied (Section 4). This suggests that the model is robust to the absence of phase ratio information but can be adversely affected by inaccurate inputs. \\n To mitigate this dependency, we are considering a two-stage inference approach in future work, as suggested by Reviewer 4eVt. In the first stage, the model would perform segmentation without any phase ratio input, allowing it to generate an initial prediction based solely on image features. In the second stage, the estimated phase ratios from the initial prediction would be used to refine the segmentation results. This iterative process could enhance performance without requiring precise expert input.\\n \\n2. **Limited Training Data and Dataset Augmentation**\\n \\n We agree that the dataset size is a limitation. To address this, we have performed extensive data augmentation to enhance the diversity and size of our training data. Our private dataset consists of high-resolution images with dimensions approximately 1660\\u00d71640 pixels. We employed various augmentation strategies, including sliding window augmentation, flipping, rotation, intensity adjustments, gamma correction, contrast variations, and the addition of simplex noise. These techniques expanded our private training set to approximately **5,600 image patches** of 512x512. Similarly, for the MetalDAM dataset, which originally contains images of 1024\\u00d7768 pixels, we applied the same augmentation methods. This resulted in a training set of about **7,800 images** of 512\\u00d7512 pixels.\\n \\n3. **Ablation Studies on Individual Modules**\\n \\n Thank you for this valuable suggestion. 
We have expanded our experiments to include comprehensive ablation studies assessing the impact of each component of our proposed method. In the revised manuscript, **Tables 3 and 4** now present performance metrics for the following configurations - baseline, w/ RE, w/ SA, w/ SA+FA, w/ RE+SA and w/ RE+SA+FA.\\n \\n\\nOur findings indicate that adding just the RE does not significantly improve performance because the model cannot effectively distinguish between different phases or understand their relationship with the ratios without spatial context. However, incorporating both RE and SA modules leads to increased accuracy, as the model gains information about the phases and their spatial distribution within the image. The FA module is essential for enabling the model to perform well even when no phase ratio is provided; it merges the original image features with the ratio-enhanced features, ensuring robust performance.\\n\\nAdditionally, we have included more performance metrics in **Tables 5 and 6**, where we compare models across different configurations of the Segment Anything Model (SAM) and provide a detailed parameter breakdown of each module in our proposed method. This analysis helps demonstrate the effectiveness and efficiency of our approach.\"}", "{\"summary\": \"This paper proposes a novel method for learning phase representations in the context of metallographic segmentation, effectively capturing subtle differences between phases. The phase learning module introduced in the paper adaptively integrates phase ratio information with image encoding to generate scale-aware features that preserve critical spatial details. During inference, phase ratios can be coarsely estimated from the image to achieve improved segmentation performance. The paper is clearly articulated and well-written.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The background, motivation, and proposed method are introduced clearly.\\n2. 
The comparison with CNN-based segmentation methods is comprehensive.\\n3. The experimental analysis and explanatory figures are well-presented, and the proposed learnable phase representation method demonstrates a significant improvement in results.\", \"weaknesses\": \"This paper introduces a learnable phase representation by incorporating phase ratios, statistically derived from ground truth, into the network. During testing, the method relies on expert-estimated phase ratios as conditions, yielding notable performance improvements over the baseline segmentation. However, certain aspects concerning innovation and fairness in comparison could be improved.\\n\\nFirstly, the use of ground-truth statistical information was previously employed in [1], where such statistical information was constrained within the loss function, thus avoiding the need for conditional input during inference. \\n[1] Do we really need dice? The hidden region-size biases of segmentation losses.\\n\\nSecondly, as the approach requires expert-estimated phase ratios during inference, it falls into the category of interactive methods, necessitating a fair comparison with interactive approaches in terms of interaction time and final performance.\", \"questions\": \"1. I am skeptical about the claim that precise phase ratios are needed during training but that inference can achieve high performance without accurate or even any phase ratio. 
If the authors' claim holds true, a cascade inference approach could be employed: the first step would involve prediction without phase ratios, followed by phase ratio estimation from the predicted results for a second inference step, thereby potentially removing the need for expert-provided phase ratio assessments.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their valuable feedback and appreciate the opportunity to further clarify our methodology and address the raised concerns.\\n\\n- We understand your concern and agree that using the phase ratio as a scalar input might seem like a weak form of supervision due to its simplicity as a single numerical value. However, in our method, we do not directly use the phase ratio as a scalar with the image encoding. Instead, we employ a **Ratio Encoder (RE) module** that transforms the phase ratios into multi-channel ratio maps.\\n \\n Our **Feature Extraction (FE) module** separates the original image encoding into $n$-channel feature encodings, where $n$ is the number of phases. We then enhance these features spatially using the **Spatial Awareness (SA) module.**\\n \\n The $n$-channel ratio maps from the Ratio Encoder are merged with the spatially enhanced image encodings. Each ratio map corresponds to a specific phase and is combined with its corresponding phase's visual features. This ensures that the phase ratio information is effectively integrated with the image features for each phase.\\n \\n It can be thought of as adding weights to the visual representation of phases based on their ratio. When the phase ratios are changed during inference, the corresponding weights for the phases are adjusted. 
It allows the model to give more attention to phases with higher ratios and less attention to those with lower ratios, and helps in aligning the segmentation output with the expected phase composition.\\n \\n- Our integration of the **Feature Aggregation (FA) module** ensures that the model does not rely strictly on phase ratios. The FA module uses learnable parameters $\\\\gamma$ and $\\\\delta$ to balance the contributions of the original image encoding and the PLM-enhanced encoding. This way we ensure that the model retains its original visual representations, which is crucial when phase ratio information is absent. Highly inaccurate phase ratios, however, can affect the model's performance by misguiding the attention weights. We also added an analysis of the $\\\\gamma$ and $\\\\delta$ parameters in Appendix Section A.4. Our experiments show that the model begins to outperform the baseline when the input phase ratios are approximately **62% accurate or better**.\\n- Our experiments demonstrate that integrating the phase ratio information leads to notable improvements in segmentation accuracy over the baseline models. We conducted ablation studies (as shown in Tables 2 and 3) to assess the impact of each module. We also show that our model uses very few additional trainable parameters (Table 6) and is able to learn and put weights on the phases based on the input ratio. We also visualize the effects of each module in Figure 11. The results confirm that the PLM learns the phase representation well and plays a crucial role in enhancing performance by using the expert\\u2019s knowledge of materials.\"}", "{\"summary\": \"The author builds upon the existing SAM model and designs a new information fusion component called the Phase Learning Module. This module integrates additional information, such as phase ratio data, with image encodings to generate ratio-aware features that enhance segmentation performance. 
The author tested the model on both private and public datasets, achieving promising results. The application of artificial intelligence to explore less mature fields is commendable, and the author's commitment to making the code and datasets publicly available is beneficial to the field. However, the technical contribution of this work is insufficient for an ICLR paper and may be more suitable for a domain-specific journal.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. According to the author, this is the first instance of using class ratios as input in a deep learning segmentation model, where they serve as constraints to guide consistent phase proportions in predictions.\\n2. The author claims that releasing the dataset and code may be meaningful for advancing research in a relatively new materials field.\", \"weaknesses\": \"1. The main issue with this paper is the limited technical contribution. As an ICLR paper, the primary focus should be on the machine learning contribution, but this work mainly relies on fine-tuning SAM. While some technical designs, such as the phase ratio prompt, are introduced, these are clearly minor modifications of SAM, and there are already numerous similar methods. With proper revisions, this could potentially be a good AI for Science paper; however, the current technical contribution is not substantial enough for this problem, and it lacks significant insights for the ICLR audience. It may be more suitable for a domain-specific journal.\\n\\n2. Although the author claims that the Phase Learning Module is a technical contribution, there are some issues with its design and evaluation. The attempt to integrate external information into the SAM-based segmentation model for improved performance is commendable. However, the practical rationale and potential costs of this approach need thorough evaluation. 
During training, prompt information is derived from labels, but at the inference stage, acquiring additional information incurs costs. It is important to assess whether this additional cost is justified in real-world applications. Moreover, if the information is obtained from test labels, there is a risk of data leakage.\\n\\n3. Specific comparison and evaluation shortcomings:\\n \\n a. Fairness of Comparison: The author mainly compares basic segmentation models, but even these comparisons lack comprehensiveness. For example, segmentation models based on fundamental transformer architectures, such as TransUNet and UCTransNet, are not sufficiently evaluated. Furthermore, the model with external information is only compared against SAM, ignoring other deep learning models that focus on similar multimodal information fusion. This raises concerns about whether SAM\\u2019s framework is necessary for multimodal fusion or if a simpler attention mechanism could achieve similar results.\\n\\n b. Metric Selection: The author evaluates the segmentation model using only the Dice coefficient, which may not be sufficient or reliable. Other metrics, such as IoU, NSD, or those assessing boundary accuracy, could provide a more comprehensive evaluation. Additionally, the dataset size is not clearly discussed. If the dataset is small, cross-validation should be performed, and the mean and variance reported, along with statistical tests like a t-test to prove the effectiveness of the newly added module.\\n\\n c. Parameter Comparison: Adding the new module likely increases the number of parameters. Comparing only performance without considering parameter count is not entirely fair. Moreover, the author does not compare different configurations of the SAM model (e.g., small, base, large versions), which should be addressed.\", \"questions\": \"Refer to the discussion in the Weaknesses section. 
The experimental comparisons need to be more comprehensive, with a wider selection of evaluation metrics and more baselines (including both basic segmentation models and multimodal fusion models, not just SAM). Additionally, a detailed comparison of model parameters and inference speed is necessary. Extra manual experiments may also be needed to evaluate the practicality of acquiring prompt information.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the thoughtful review and insights that have greatly helped us identify areas for improvement.\\n\\n1. **Comparison with using statistical information in the loss function.**\\nThank you for bringing this important reference to our attention. We agree that incorporating statistical information into the loss function is a valuable approach for addressing class imbalance and improving segmentation performance. However, our method integrates phase ratio information directly into the model architecture. This allows our model to dynamically adjust its predictions based on phase ratios provided during inference. By incorporating expert-estimated phase ratios during inference, our method leverages domain knowledge to improve segmentation accuracy, especially in challenging cases where visual differences between phases are subtle. This real-time adjustment is particularly useful in metallographic analysis, where phase distributions can vary significantly between samples. Our experiments show that the user needs to input phase ratios that are at least 67% accurate to obtain better segmentation results than the baseline model. \\n2. **Comparison with Interactive Methods**\\n \\n You raise an excellent point about the necessity of comparing our method with existing interactive segmentation approaches. 
However, there are presently no pre-existing models that allow interactive segmentation of metallographic images. These images present complex visual semantics that cannot be handled by models trained on natural images. We will evaluate this direction in future research.\n \n3. **Cascade Inference Approach**\n \n Thank you for this insightful suggestion. We understand your skepticism and agree that exploring a cascade inference approach could be valuable. In our experiments, we observed that the model performs better when no phase ratio input is provided than when incorrect phase ratios are supplied, indicating robustness to the absence of this information. In the first stage, the model would perform segmentation without any phase ratio input, allowing it to generate an initial prediction based solely on image features. In the second stage, the estimated phase ratios from the initial prediction would be used to refine the segmentation results. This iterative process could enhance performance without requiring precise expert input, thereby improving the model's usability in automated systems or settings with limited domain expertise. We would definitely pursue this in future research.\n \n Additionally, we have added expanded comparisons with more models in Table 1 and have expanded the ablation experiments in Tables 2 and 3, which show the importance of each of the proposed modules and how ratio information is effectively added using the proposed modules. We also include Tables 5 and 6, which show the performance comparison across SAM models and the parameter breakdown of each of the proposed components.
The Phase Learning Module adds 1.2M parameters to the existing models and improves performance by over 5% on the private dataset and over 6% on the MetalDAM dataset.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful review and for sharing their concerns. We appreciate the opportunity to clarify the contributions and significance of our work.\\n\\nWe understand the reviewer\\u2019s concern about the perceived limited novelty of our methodology. Our primary goal was to address a significant challenge in microstructure segmentation: **the difficulty of providing effective input conditioning due to the complex and indescribable nature of these images.**\\n\\n In existing models like SAM and similar architectures, input conditioning is typically achieved through text prompts or visual cues such as bounding boxes and points. These methods are effective for natural images where objects are distinct and describable. However, in the domain of microstructure images, such as metallographic images, the intricate patterns and lack of intuitive visual cues make it impractical to provide such conditioning inputs. The components within these images often cannot be easily described or annotated with bounding boxes due to their visual complexity and the subtle differences between phases, which we describe in Figure 1.\\n\\nTo overcome this challenge, after extensive consideration, we introduce the use of **phase ratios** as a novel form of input conditioning. Phase ratios represent the proportions of different material phases within a microstructure and can be estimated by experts through visual inspection. 
By leveraging this readily available expert knowledge, we provide a practical means of conditioning the model without the need for detailed annotations or descriptive prompts. \\nOur proposed **PLM** is designed to effectively integrate phase ratio information into the segmentation model. Instead of simply adding the phase ratios as scalar inputs, we transform them into multi-channel ratio maps using the **Ratio Encoder**. These ratio maps are then effectively merged with their corresponding image phases after being spatially aligned. The input of phase ratios is a single step that can greatly enhance model performance with a slight increase in model parameters.\\n\\n**Justification for the necessity of our method**\", \"the_necessity_of_our_method_arises_from_the_unique_challenges_associated_with_microstructure_images\": \"- **Inadequacy of Traditional Conditioning**: Conventional input conditioning methods are ineffective due to the inability to describe or annotate complex microstructures.\\n- **Leverage of Expert Knowledge**: Experts can estimate phase ratios with reasonable accuracy through visual inspection, providing valuable information that is otherwise difficult to encode.\\n\\nOur method optimally utilizes this expert knowledge by embedding it into the model in a way that enhances segmentation performance without imposing significant additional costs. The integration is seamless and does not require extensive modifications to existing architectures.\\n\\n**Addressing Concerns About Broader Applications**\\n\\nWhile our work focuses on metallographic image segmentation, the underlying concept of using quantitative estimates as conditioning inputs can be extended to other domains facing similar challenges, such as medical imaging and remote sensing.\\n\\nWe believe our work addresses a critical gap in the application of deep learning to complex image segmentation tasks where traditional input conditioning fails. 
By effectively integrating expert-estimated phase ratios through our Phase Learning Module, we provide a novel and practical solution that enhances segmentation performance. Our approach is not only relevant to the materials science community but also offers insights that could inspire broader applications in other fields with similar challenges.\"}", "{\"summary\": \"This paper proposes an approach for segmenting metallographic images by integrating expert knowledge through phase ratios, which are estimated by domain specialists. The proposed Phase Learning Module (PLM) enhances the segmentation model\\u2019s accuracy by refining image encoding with ratio-aware features, achieving improved performance on both public and private datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"By incorporating expert phase ratio input, the model bridges domain knowledge with deep learning, improving interpretability and alignment with real-world observations.\\n\\nThe model demonstrates clear performance improvements in Dice scores, achieving substantial segmentation accuracy increases on challenging microstructural datasets.\\n\\nThe model allows input of phase ratios during inference, improving usability in applications requiring expert oversight.\", \"weaknesses\": \"With only 42 images in MetalDAM and 24 in the private dataset, the training data is limited, potentially impacting the model\\u2019s ability to generalize across diverse materials.\\n\\nThe model\\u2019s effectiveness relies on accurate phase ratios, which may limit its utility when expert estimations are unavailable or imprecise. 
Inaccurate phase ratios significantly reduce model performance, which might affect usability in automated systems or those lacking domain expertise.\", \"questions\": \"How does the model perform in the absence of accurate phase ratio inputs, and are there plans to mitigate this dependency?\\n\\nHave you considered expanding the dataset, or are there augmentation techniques that could address the limited training data?\\n\\nCould you provide ablation studies to assess the impact of individual modules, like Phase Ratio integration, SA, and FA, on overall performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a module that incorporates domain-specific knowledge to guide a segmentation model, to accurately segment metallographic images. This guidance is the ratio of each segment in the image, while during training it is computed using the GT and during inference it is provided by the operating experts.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Problem formulation and motivation is presented really well.\\nThe paper is easy to follow.\\nIt makes sense to utilize a segmentation foundation model and inject domain-specific hints.\\nIt shows pretty consistent (and not negligible) improvements on several segmentation models and on two datasets.\", \"weaknesses\": \"1) The presented scientific background is too short. I suggest presenting a broader related work section that is separate from the introduction section. A bit more information on what is done on the vision-metallography domains may be helpful, and a bit more information at least on LoRA-SAM as your reported baseline utilizes it. 
At least - introduce its main components, since you use them in your encoder and decoder.\\n\\n2) Currently the setting requires the operators to work \\\"harder\\\" as it demands their guidance.\\nI would suggest training another module that will predict the ratio from the input image. Instead of simply calculating it from the GT, predict it from the input and penalize using the GT ratio. This will give you the option to operate using only the image in inference.\\nIt will be interesting to see, in this zero-expert-intervention setting, how well the model performs.\\n\\n3) It will be interesting to see an analysis of gamma and delta. What did the model prefer to focus on?\", \"technical_issues\": \"\", \"line_24\": \"\\\"model\\\" -> \\\"a model\\\"\", \"figure_3\": \"Why is there an arrow from the input to the Phase Ratio Extractor in training? Shouldn't the arrow start from the GT?\", \"figure_4\": \"Fix the squares behind the yellow square below add coords\", \"line_247\": \"\\\"denote\\\" -> \\\"denotes\\\"\", \"questions\": \"The definition of n and k are not clear to me. Is n defined by the total number of phases in the dataset? Something else?\\nk is identical in each segmentation mask? If not, denote that each mask has a different k^i or something like it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
However, I remain unconvinced and still believe this paper does not meet the ICLR standard. The main issue is the limited novelty of the AI methodology. The approach is overly simple and intuitive, with no clear justification for why this particular method is necessary or optimal. It lacks the depth and innovation to inspire broader applications.\\n\\nWhile I acknowledge the scientific value and the improved experiments from the response, I maintain that the contribution is insufficient for acceptance. My score of 3 reflects that the paper\\u2019s average merit does not reach the acceptance threshold.\"}
We have conducted an analysis of the gamma (\\u03b3) and delta (\\u03b4) parameters, which control the influence of the ratio-enhanced features in the Feature Aggregator (FA) module. Our findings are as follows:\\n \\n - Through experiments, we found that the model converged to values of **\\u03b3 = 0.32** and **\\u03b4 = 0.78** on average. This indicates that the model prefers to place a higher emphasis on the ratio-enhanced features (influenced by \\u03b4) while still considering the original image features (controlled by \\u03b3).\\n - However, we are also keen to conduct a more detailed analysis of the gamma and delta parameters in a future study.\\n\\nWe have corrected all the grammatical errors as pointed out by the reviewer. For the clarification of $n$ and $k$, \\n\\n$k$: Represents the total number of distinct phases in the dataset. It is the number of unique classes or labels used to segment the images into different material phases or regions. \\n\\n$i$: Identifies each individual phase within the total of $k$ phases. It ranges from 1 to $k$. For example, if $k=3$, then $i$ can be 1, 2, or 3, corresponding to Phase 1, Phase 2, and Phase 3, respectively.\\n\\n$n$: Represents the total number of pixels in the image; $n_i$ refers to the number of pixels belonging to phase $i$ in a particular image.\\n\\n$r_i$: Represents the ratio of $n_i$ to the total number of pixels in the image, i.e., the proportion of the image occupied by phase $i$.\"}
On the other hand, its contribution to the overall results appears to be relatively limited. I would appreciate a stronger justification and more robust evidence to support this approach.\"}", "{\"comment\": \"We sincerely thank the reviewer for their thorough evaluation of our work and for the valuable feedback. We address these concerns below.\\n\\n**1. Technical Contribution**\\n\\nWe acknowledge the reviewer's concern regarding the technical contribution of our work in the context of ICLR. While our approach builds upon existing models like SAM, we believe that our proposed Phase Learning Module (PLM) introduces a novel methodology by integrating class ratios as input into deep learning segmentation models. This integration serves as a constraint to guide the model towards consistent phase proportions in its predictions, which is valuable in scenarios where classes in images are difficult to distinguish based solely on visual features.\\n\\nOur method allows the model to adjust segmentation outputs based on desired class ratio inputs and to enhance performance in cases with unbalanced classes or indescribable images. This capability is not commonly found in existing segmentation models and represents a meaningful advancement that could benefit other domains facing similar challenges.\\n\\n**2. Practical Rationale and Costs**\\n\\nOur proposed method is designed to assist domain experts who often rely on techniques like Electron Backscatter Diffraction (EBSD) for accurate labeling, which can be costly and time-consuming. By enabling experts to input estimated phase ratios during inference, our method reduces the reliance on expensive labeling methods and leverages expert knowledge to improve segmentation accuracy.\\n\\nDuring the inference stage, we mimic expert input by providing phase ratio inputs with 90% accuracy, as detailed in Appendix A2 of our paper. 
We demonstrate that the model achieves better results than the baseline when the expert's phase ratio accuracy is 67% or higher (Figure 9). This indicates that even approximate estimates from experts can significantly enhance model performance and justifies the practical utility of our approach in real-world applications.\\n\\n**3. Comparison and Evaluation Shortcomings**\\n\\n**a. Fairness of Comparison**\\n\\nWe appreciate the suggestion to include transformer-based segmentation models like TransUNet and UCTransNet in our comparisons. In response, we have added these models to our experiments, and the results are presented in Table 1 of the revised paper. Our method still demonstrates increased performance, indicating that the inclusion of ratio information provides benefits beyond what attention mechanisms alone can achieve. Additionally, our Spatial Awareness (SA) and Feature Aggregator (FA) modules effectively integrate this additional information, leading to the performance improvements observed in Tables 2 and 3.\\n\\n**b. Metric Selection**\\n\\nWe agree with the reviewer that the Dice coefficient alone may not be sufficient, but it is a widely used metric in metallographic segmentation studies that allows for direct comparison with prior and future works. The Dice coefficient effectively measures the overlap between the predicted segmentation and the ground truth. We value this input and will implement other evaluation metrics in future related studies.
Our Phase Learning Module, while accounting for approximately 1.2 million additional parameters, improves the performance of the models by over 5% on the private dataset and over 6% on the MetalDAM dataset. This demonstrates that the performance gains are achieved with a reasonable increase in model complexity.\"}" ] }
5wxCQDtbMo
GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks
[ "Sarp Aykent", "Tian Xia" ]
Understanding complex three-dimensional (3D) structures of graphs is essential for accurately modeling various properties, yet many existing approaches struggle with fully capturing the intricate spatial relationships and symmetries inherent in such systems, especially in large-scale, dynamic molecular datasets. These methods often must balance trade-offs between expressiveness and computational efficiency, limiting their scalability. To address this gap, we propose a novel Geometric Tensor Network (GotenNet) that effectively models the geometric intricacies of 3D graphs while ensuring strict equivariance under the Euclidean group E(3). Our approach directly tackles the expressiveness-efficiency trade-off by leveraging effective geometric tensor representations without relying on irreducible representations or Clebsch-Gordan transforms, thereby reducing computational overhead. We introduce a unified structural embedding, incorporating geometry-aware tensor attention and hierarchical tensor refinement that iteratively updates edge representations through inner product operations on high-degree steerable features, allowing for flexible and efficient representations for various tasks. We evaluated models on QM9, rMD17, MD22, and Molecule3D datasets, where the proposed model consistently outperforms state-of-the-art methods in both scalar and high-degree property predictions, demonstrating exceptional robustness across diverse datasets, and establishes GotenNet as a versatile and scalable framework for 3D equivariant Graph Neural Networks.
[ "graph neural networks", "computational physics", "3D graphs" ]
Accept (Poster)
https://openreview.net/pdf?id=5wxCQDtbMo
https://openreview.net/forum?id=5wxCQDtbMo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x9E0AMYlRn", "rS6IWbomjc", "rQqAZqzrWl", "pyjGMotlDx", "mRqFww82lE", "eG8gaKMTVF", "cNrJlPHoWT", "XhQCy8XJcy", "UUO5gUpHMg", "TYaaETSRoz", "SZtPlPyJB8", "SOftNvOuaU", "QsOBhrAozC", "PeW3gWI3oe", "MQiHTC9LRM", "LhrXZ4L3hA", "LSepnRgiPt", "L42qnvFtD5", "KTMmS4AaYC", "KMcdZHrKKE", "JsY4rF7h7N", "IUpqsLRVyr", "HUirESj2g6", "GbcXULgK1M", "EwjLGyIvYr", "EIzKjo9iwp", "AVHFMNNtHo", "98GxRmUDrS", "7eKkW1bBOk", "4fVV6zxOn8", "3MPtOgTSpF", "2nidyTBcnj", "24jTbRYNp6" ], "note_type": [ "official_review", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "decision", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730705999714, 1732724836487, 1734674415073, 1732127027701, 1732503639350, 1733088009096, 1732126168223, 1730583865421, 1732725129131, 1733176871068, 1732119181242, 1733154894991, 1732468576322, 1732779886131, 1730549717670, 1732314833430, 1737524272937, 1732760243511, 1732757338336, 1732769870024, 1732690825120, 1732406692846, 1732811532329, 1732771703806, 1732118470918, 1733184500780, 1729782425745, 1733110313551, 1732163009195, 1733080950890, 1733155725911, 1732119771951, 1732776059394 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_4mUv" ], [ "~Tatsunori_Taniai1" ], [ "ICLR.cc/2025/Conference/Submission13629/Area_Chair_HWcB" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_bNe3" ], [ "~Tatsunori_Taniai1" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_bNe3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "~Tatsunori_Taniai1" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_BR7w" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_4mUv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "~Tatsunori_Taniai1" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "~Tatsunori_Taniai1" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Reviewer_hFv3" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ], [ "ICLR.cc/2025/Conference/Submission13629/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces GotenNet, a Geometric Tensor Network, that advances molecular representation learning by addressing computational challenges in 3D graph neural networks. GotenNet's novel tensor embedding strategy eliminates the need for complex irreducible representations and Clebsch-Gordan transformations, enhancing computational efficiency while preserving model expressiveness. 
Its Geometry-Aware Tensor Attention mechanism enables refined edge representations, capturing complex geometric relationships for better molecular property predictions. Additionally, the Hierarchical Tensor Refinement approach allows the model to adapt across scales, accommodating both broad patterns and detailed molecular features. Evaluated on benchmark datasets such as QM9 and Molecule3D, GotenNet consistently outperforms existing methods, demonstrating robustness and scalability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a novel framework, GotenNet, which innovatively combines geometric tensor representations with advanced attention mechanisms. This approach addresses the expressiveness-efficiency trade-off in 3D graph modeling, a challenge that has been inadequately tackled in prior works.\\n2. The paper is well-structured and clearly articulates the problem being addressed, the proposed solutions, and the significance of the findings.\", \"weaknesses\": \"1. **Comparison of computational complexity with previous methods**: Could you provide details on the model size, as well as training and inference times, in comparison with existing methods? This information would help highlight GotenNet\\u2019s computational efficiency relative to other models in the field.\", \"questions\": \"Please see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Ask for Clarifications\", \"comment\": \"Dear Authors,\\n\\nI am a researcher working on GNNs and transformers for molecules and crystals. I came across this interesting paper and was very impressed by its strong results. However, as Reviewer bNe3 was concerned, several parts are still unclear to me even in the revised version. 
Hoping to help the authors to improve the presentation, I would like to make some questions and comments about the method.\\n\\n---\\n## **1) About the composition of feature tensors**\\n### **[Resolved] 1-1) What are the dimensions of features $h$ and $\\\\tilde{v}^{(l)}$ and others?**\\nI suppose that the edge tensor $r_{ij}$ represents spherical harmonic functions: $r = \\\\\\\\{ r^0, r^1, ..., r^{L_{max}} \\\\\\\\}$ where $L_{max}$ is the maximum degree of spherical harmonics. Each degree $l$ has $(1 + 2l)$ components. Thus, $r$ constructs a pyramidal dimensional tensor, shaped like below:\\n\\n\\u3000\\u3000\\u25a2\\u3000\\u3000\\u3000$l = 0$ \\n\\u3000\\u25a2\\u25a2\\u25a2\\u3000\\u3000$l = 1$ \\n\\u25a2\\u25a2\\u25a2\\u25a2\\u25a2\\u3000$l = 2$ ($= L_{max}$) \\n\\nThis is my understanding of \\\"Geometric Tensor Notations\\\" in Sec 3.1 and it is clear to me. Then, from line 231, it is written that geometric tensors are decomposed into scalar features $h$ and steerable features $\\\\tilde{v}^{(l)}$. I suppose $h$ corresponds to feature components of degree $l = 0$ and $\\\\tilde{v}^{(l)}$ to those of higher degrees $l \\\\in \\\\\\\\{1,2,...,L_{max}\\\\\\\\}$, right?\\n\\nMy question here is whether $\\\\tilde{v}^{(l)}$ is further structured to have $(1 + 2l)$ components or not? In other words, what are the dimensions of $h$ and $\\\\tilde{v}^{(l)}$? Are they $h \\\\in \\\\mathbb{R}^{d_{ne}}$ and $\\\\tilde{v}^{(l)} \\\\in \\\\mathbb{R}^{(1 + 2l)\\\\times d_{ne}}$ with some feature dimension $d_{ne}$ ? 
Or simply $h \\\\in \\\\mathbb{R}^{d_{ne}}$ and $\\\\tilde{v}^{(l)} \\\\in \\\\mathbb{R}^{d_{ne}}$ ?\\nCarefully clarifying such dimensionality information for $h$ and $\\\\tilde{v}^{(l)}$ and others (such as $m_i$ in Eq 1 and $t_{ij}$ in Eq 3 and Eq 12) will be much helpful for readers.\\n\\n**Answer**: $\\\\boldsymbol{h} \\\\in \\\\mathbb{R}^{d _{ne}}$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} \\\\in \\\\mathbb{R}^{(1+2l) \\\\times d _{ne}}$.\\n\\n### **[Resolved] 1-2) Where does $S := (1 + 2L_{max})$ come from?**\\nFrom several Equations (such as Eqs 7, 8, 9) and texts, I see that feature tensors, such as $V$, $\\\\text{SAE}$, and $o_{ij}$, are structured to have $(1 + 2L_{max})$ components. However, I could not understand the meaning of $(1 + 2L_{max})$. It corresponds to the number of spherical harominic components at the max degree $l = L_{max}$ but why pariticularly the max degree? It would make more sense to me if $(1 + 2L_{max})$ were actually the number of components per degree: $(1 + 2l)$ or the total number of components across degrees: $\\\\sum_{l=0}^{L_{max}} (1 + 2l)$.\\n\\nAt line 300, the paper says that \\\"The $S$ variable introduced to generate different coefficients for each degree of steerable features and formulated as $(1 + 2\\\\times L_{max})$\\\". Stating that $S$ is a variable despite $(1 + 2L_{max})$ is a constant suggests that the authors actually meant $S = (1 + 2l)$ instead of $S = (1 + 2L_{max})$ ? Even with this interpretation, Sec 3.2 still doesn't make sense to me.\\n\\nPerhaps, Sec 3.2 could actually explain the operations for each target degree $l$, assuming $S = (1 + 2l)$. In this case, $L_{max}$ in several places (such as in $S$, Eq 7 and Eq 8) were actually this target degree $l$ ?\\n\\nWhere does $(1 + 2L_{max})$ come from? Which level is Sec 3.2 about? I really want to know these to understand the paper correctly. 
\\n\\n**Answer**: $S = 1 + 2L_{max}$ is the authors' architectural design choice, chosen to implement Eq 8.\\n\\n### **[Resolved] 1-3) $l$ in Eq 8 is unclear**\\nEq 8's essential form is $\\\\Delta v_i^{(l)} = \\\\sum_j ( \\\\\\\\{ x_{ij}^{(l)} \\\\\\\\}_{l=1} ^{L})$, which is unclear because two types of $l$ exist. While $l$ in the left-hand side is a given degree number, $l$ in the right-hand side is a loop variable enumerating $1, 2, ..., L$. Which of the following is the correct interpretation?\\n- $\\\\Delta v_i^{(l)} = \\\\sum_j x_{ij}^{(l)}$ where $l \\\\in \\\\\\\\{ 1, 2, ..., L \\\\\\\\}$ (Here the operation is degree-wise)\\n- $\\\\Delta v_i^{(l)} = \\\\sum_j \\\\text{Concatenate}( \\\\\\\\{ x_{ij}^{(k)} \\\\\\\\}_{k=1}^{l} )$ (Here $l$ gives the max for $k$, assuming $S = 1+2l$.)\\n- $\\\\Delta v_i^{(l)} = \\\\sum_j \\\\text{Concatenate}( \\\\\\\\{ x_{ij}^{(k)} \\\\\\\\}_{k=1}^{L} )$ (Here $l$ in the left-hand side has no effect on the right-hand side)\\n- Something else\\n\\n**Answer**: $\\\\Delta v_i^{(l)} = \\\\sum_j x_{ij}^{(l)}$ where $l \\\\in \\\\\\\\{ 1, 2, ..., L \\\\\\\\}$\"}", "{\"metareview\": \"This paper develops a new $E(3)$ equivariant transformer architecture for molecular data modeling. At the core of the method is a Geometry Aware Tensor Attention (GATA) acting on input node and edge features and providing a good tradeoff between computational complexity and expressiveness. Initial reviewers\\u2019 concerns were regarding presentation and exposition of the paper. The authors addressed these issues during the rebuttal period to the satisfaction of most of the reviewers. 
All reviewers felt the experimental part is solid and demonstrates the efficacy of the suggested architecture.\", \"additional_comments_on_reviewer_discussion\": \"No additional comments.\"}", "{\"title\": \"Weakness and Question\", \"comment\": \"> Weakness: comparison with the current available literature\\n\\nWe thank the reviewer for this valuable suggestion to better contextualize our work within existing literature. We have substantially revised our introduction and related work sections to clarify the evolution of approaches in this field and GotenNet's distinct contributions.\\n\\nThe current landscape of equivariant networks can be broadly categorized into two approaches [1]: (1) scalarization-based methods like Graph Attention Networks, which prioritize computational efficiency but may sacrifice expressiveness, and (2) high-degree steerable models [4,5], which achieve high expressiveness through CG coefficients and tensor products but incur significant computational overhead. **GotenNet bridges these approaches** through two key innovations:\\n\\n1. While state-of-the-art works [4,5] use CG transforms to leverage high-order steerable features, our geometry-aware tensor attention (GATA) directly operates on geometric tensor representations without CG transforms. This crucial difference allows us to maintain the expressiveness of high-degree features while achieving the computational efficiency typically associated with scalarization-based methods.\\n\\n2. Recent works like SO3KRATES [2] and HEGNN [3] demonstrated theoretical connections between CG coefficients and inner products. Both SO3KRATES and GotenNet utilize equivariant attention, but the inner product formulation used in GotenNet is more concise and yields far better results than SO3KRATES. 
As noted by reviewer hFv3, although both HEGNN and GotenNet employ inner product formulation, GotenNet distinguishes itself by focusing on competitive real-world tasks while HEGNN primarily addresses dynamic problems. Therefore, the architectures of the concurrent works, HEGNN and GotenNet, are fundamentally different, tailored for distinct application domains.\\n\\nWe have revised our manuscript to better highlight these distinctions and their implications for both theoretical understanding and practical performance. This includes expanded comparisons with existing architectures in both the introduction and related work sections.\\n\\n\\n> Questions: How does the current architecture generalize to different tasks? As presented, it would work only on molecule type graphs, is it also possible to generalize the architecture to for instance social networks or other types of graphs? In other words, how general is the method.\\n\\nWe sincerely thank the reviewer for their positive assessment and thoughtful suggestions about future directions. The proposed GotenNet framework can be readily extended to any spatial data where interactions between nodes are heavily influenced by geometric properties. While our current work focuses on molecular property prediction, the core architectural components - geometric tensor representations and attention mechanisms - are domain-agnostic and can generalize to various applications. For instance, several promising domains could benefit from our framework:\\n\\n1. Point Clouds: As the reviewer noted, our framework can be adapted to point cloud processing by modifying the initial feature embedding while maintaining the core equivariant operations.\\n\\n2. Protein Structure Analysis: The efficient handling of high-degree steerable features through our GATA and HTR mechanisms could benefit protein structure prediction and analysis.\\n\\n3. 
Dynamic Molecular Simulations: Our model's ability to capture multi-scale geometric relationships makes it particularly suitable for molecular dynamics applications.\\n\\nThese applications represent natural extensions of our framework, as they share the fundamental requirement of processing geometric relationships while preserving symmetries. We appreciate these insightful suggestions and included a discussion of these potential applications in our revised manuscript in Appendix K.\\n\\n\\n[1] Jiaqi Han, Jiacheng Cen, Liming Wu, Zongzhao Li, Xiangzhe Kong, Rui Jiao, Ziyang Yu, Tingyang Xu, Fandi Wu, Zihe Wang, Hongteng Xu, Zhewei Wei, Yang Liu, Yu Rong, and Wenbing Huang. A survey of geometric graph neural networks: Data structures, models and applications, 2024.\\n\\n[2] J Thorben Frank, Oliver T Unke, Klaus-Robert M\\u00fcller, and Stefan Chmiela. A euclidean transformer for fast and stable machine learned force fields. 2024.\\n\\n[3] Jiacheng Cen, Anyi Li, Ning Lin, Yuxiang Ren, Zihe Wang, and Wenbing Huang. Are high-\\ndegree representations really unnecessary in equivariant graph neural networks? 2024. \\n\\n[4] Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. 2023.\\n\\n[5] Yi-Lun Liao, Brandon M Wood, Abhishek Das, and Tess Smidt. Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations. 2024.\"}", "{\"title\": \"Additional Evidence Supporting the Acceptance of This Work\", \"comment\": \"I would like to offer additional support for this paper and encourage all reviewers to consider accepting it. I checked several recent models based on different methodologies applied to the QM9 dataset (e.g., pre-trained models), and found that, even when compared to the latest Frad model [a] (in Nature Machine Intelligence: 18 September 2024, preprint at arXiv:2407.11086), GotenNet still keeps its SOTA . 
This paper can be regarded as a milestone contribution for general equivariant models, further reinforcing my recommendation for **\\\"strong acceptance\\\"**.\\n\\nAdditionally, I recommend that the authors briefly discuss the introduction of high-degree steerable features in the appendix, similar to the theoretical discussions in ViSNet and HEGNN. Even if directly cited, references could be made to the appendix of SEGNN and the preprint of e3nn, including details on tensor products (and why they involve sixth-order complexity), as well as the representation power based on Legendre polynomials, etc. This would provide newcomers to the field with a broader understanding of the context when reading the paper.\\n\\n[a] Pre-training with Fractional Denoising to Enhance Molecular Property Prediction\\n\\n**I do not require the authors to complete these during the discussion stage. I hope the authors can focus on replying to other reviewers, and wish you good luck.**\"}", "{\"title\": \"Follow-up: Have we sufficiently addressed the concerns?\", \"comment\": \"We sincerely thank you for your thoughtful review and constructive feedback. We have carefully addressed both key concerns you raised:\\n\\n1. Regarding the comparison with current available literature, we have substantially expanded our introduction and related work sections to better contextualize GotenNet within the existing landscape. As detailed in our response, we now clearly position our work between scalarization-based methods and high-degree steerable models, highlighting our distinct technical contributions compared to approaches like Equiformer and Graph Attention Networks. The revised manuscript makes these comparisons explicit and accessible even to readers less familiar with the literature.\\n2. Regarding the generalizability of our method, we appreciate your insightful question about extending beyond molecular applications. 
We have added a comprehensive discussion in Appendix K that explores potential applications to point clouds, protein structure analysis, and molecular dynamics simulations. This addition helps clarify the broader applicability of our geometric tensor representations and attention mechanisms.\\n\\nAll these changes are highlighted in blue in our revised manuscript for easy reference. We believe these revisions have substantially strengthened the paper's clarity and accessibility while maintaining its technical depth.\\n\\nWe would greatly appreciate your feedback on whether these revisions adequately address your concerns. We wonder if there are any remaining concerns or aspects of our work that could benefit from further clarification or improvement. We are fully committed to enhancing the quality and impact of our research, and would greatly value any additional suggestions you might have to strengthen our contribution.\\n\\nThank you again for your detailed review that helped us improve this work.\"}", "{\"title\": \"Weakness and Question 3-4\", \"comment\": \"> Q3. It would be good to report training time for completeness.\\n\\n> W4: The comparison is somewhat unfair. For example, on QM9, this work trains the proposed model for 1,000 epochs while some previous works only train for 300 epochs. Besides, the batch size is 32 on QM9 (some previous works are 128), and smaller batch sizes on QM9 can sometimes lead to better results.\\n\\nWe thank the reviewer for raising this important point about experimental fairness. While standardized protocols would be valuable, current literature [1,2,3,4] shows considerable variation in training configurations. 
For instance, recent works like Equiformer [2] (batch size 128, 300 epochs) and EquiformerV2 [3] (batch sizes 48/64, 300 epochs) compare with SphereNet [1] (batch size 32, 1,000 epochs), suggesting that a specific configuration has not been established as a standard.\n\nFor transparency, our models used early stopping and actually trained for an average of ~550 epochs, not the full 1,000 epochs. To address the efficiency concern, we provide a detailed comparison:\n\n| Model | Batch Size | Time per Epoch (s) | Min. GPU Days | Avg. GPU Days | Max. GPU Days | Limit GPU Days | Inference Latency (ms) |\n| -------------- | ---------- | ------------------ | :--: | :--: | :--: | :---: | ------------------------------------ |\n| Equiformer | 128 | 425 | 1.48 | 1.48 | 1.48 | 1.48 | 150 |\n| EquiformerV2 | 64 | 821 | 2.85 | 2.85 | 2.85 | 2.65 | 341 |\n| EquiformerV2 | 48 | 847 | 2.94 | 2.94 | 2.94 | 2.65 | 341 |\n| GotenNet$_{S}$ | 32 | 117 | 0.41 | 0.75 | 1.34 | 1.35 | 37 |\n| GotenNet$_{B}$ | 32 | 180 | 0.75 | 1.15 | 1.92 | 2.08 | 56 |\n| GotenNet$_{L}$ | 32 | 291 | 1.37 | 1.87 | 2.33 | 3.37 | 112 |\n\nEven with batch size 32, GotenNet variants demonstrate superior efficiency. Our largest model (GotenNet$_{L}$) requires 2.33 GPU days at maximum, which is less than EquiformerV2 [3] (2.65 GPU days), while achieving **67% faster inference time** with batch size 128. The higher batch sizes and lower epoch counts in previous works may have been necessitated by their greater computational demands, as evidenced by their significantly higher per-epoch training times.\n\nWe **included these detailed efficiency comparisons** in our Appendix G for completeness.\n\n> W3: The proposed architecture should be simplified.\n\nWe thank the reviewer for the constructive feedback on the presentation of the architecture. The presentation of the architecture in the methodology section has been heavily revised for clarity. The equations are simplified with an improved symbolic system. 
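For reference, the GPU-day figures in the efficiency table above follow directly from per-epoch time and epoch budget; a small sketch reproducing three representative entries (the epoch counts of 300 and 1,000 are taken from the training setups discussed above, and the check is purely illustrative):

```python
# Sanity check: GPU days = (seconds per epoch * number of epochs) / 86,400.
# Per-epoch times are taken from the efficiency table; epoch budgets
# (300 for Equiformer/EquiformerV2, 1,000 as the GotenNet limit) from the
# training setups cited above.
SECONDS_PER_DAY = 86_400

def gpu_days(sec_per_epoch: float, epochs: int) -> float:
    return sec_per_epoch * epochs / SECONDS_PER_DAY

entries = {
    "Equiformer (bs 128)":   gpu_days(425, 300),    # table: 1.48
    "EquiformerV2 (bs 64)":  gpu_days(821, 300),    # table: 2.85
    "GotenNet_L (1k limit)": gpu_days(291, 1_000),  # table: 3.37
}
for name, days in entries.items():
    print(f"{name}: {days:.2f} GPU days")
```

Each computed value rounds to the corresponding table entry, so the table's GPU-day columns are internally consistent with its per-epoch timings.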
The authors believe the modifications will help readers understand the architecture.\n\n> Q4. Have you conducted any experiments on materials/catalyst datasets? I think OC20 IS2RE dataset would be good to test given the compute requirement is not that high and the train/val/test splits are well-defined.\n\nWe thank the reviewer for suggesting the OC20 IS2RE benchmark and acknowledging our **\"great and intensive experimental results.\"** Our current evaluation suite includes four diverse datasets (QM9, Molecule3D, rMD17, and MD22) providing comprehensive validation of our method. Given the distinct hyper-parameters required for OC20, as noted in previous works [3], properly benchmarking on this dataset would exceed the scope of the rebuttal period. We look forward to exploring this direction in future work.\n\n\n[1] Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3D molecular graphs. 2022.\n\n[2] Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. 2023. \n\n[3] Yi-Lun Liao, Brandon M Wood, Abhishek Das, and Tess Smidt. Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations. 2024. \n\n[4] Yusong Wang, Shaoning Li, Tong Wang, Bin Shao, Nanning Zheng, and Tie-Yan Liu. Geometric transformer with interatomic positional encoding. 2023a. \n\n[5] J Thorben Frank, Oliver T Unke, Klaus-Robert M\u00fcller, and Stefan Chmiela. A Euclidean transformer for fast and stable machine learned force fields. 2024.\n\n[6] Jiacheng Cen, Anyi Li, Ning Lin, Yuxiang Ren, Zihe Wang, and Wenbing Huang. Are high-degree representations really unnecessary in equivariant graph neural networks? 2024.\"}
{\"summary\": \"This paper proposes a new network architecture that is SE(3) equivariant and builds on Transformers and previous works on equivariant Transformers. 
The results on various datasets show good performance and speedup.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The experimental results are great and intensive.\", \"weaknesses\": \"1. The writing of the paper should be greatly improved. Please see \"Questions\" below.\n2. The paper does not discuss why the proposed architecture differs from other works and why this is better in terms of accuracy and efficiency even though the results are seemingly good. Also a reference implementation would be great to make sure the better results are reproducible.\n3. The proposed architecture should be simplified (partially because of the way it is presented).\n4. The comparison is somewhat unfair. For example, on QM9, this work trains the proposed model for 1,000 epochs while some previous works only train for 300 epochs. Besides, the batch size is 32 on QM9 (some previous works are 128), and smaller batch sizes on QM9 can sometimes lead to better results.\", \"questions\": \"> Writing\n\n1. I think the title is too general. We can always rethink something and make it more efficient. Be very specific to your proposed method.\n2. Line 16 -- 18: Mention what is the difference from previous works clearly.\n3. Line 22 -- 23: Mention the datasets you tested so that readers can know the scale of experiments in this work.\n4. Figure 1: I think the x and y axes are similar. Also mention which direction is better.\n5. Line 62: What is \"this\" in \"Some recent works have sought to address this by...\"? Make sure the sentence is clear.\n6. Line 113: Equivariant networks model translational \"invariance\".\n7. Line 159: \"Effective Representations\" -> too general. Be specific to what makes it effective.\n8. Figure 2: Please double check the errors in the caption.\n9. 
Line 195 -- 215: Better to link to existing literature since they are similar to (or basically the same as) vector spaces of irreps.\n10. Line 220 -- 221: The notation of h, r, t has no meaning and should be replaced by other variables that directly reflect what they are presenting.\n11. Line 263: $z$ denotes the \"one-hot encoding\" of the atomic number.\n12. Section 3.3: This is unclear. It would be better to first mention the high-level concept and then go to the details. Also state the differences from previous works.\n13. Line 323 or Equation 8: This is essentially an SO(3) linear layer. It would be great to link to previous works.\n14. Line 373: Geoformer Wang et al. -> make sure the format of citation is correct.\n15. Figure 3: Mention the dataset you tested.\n16. Line 423: \"The\" proposed method\n17. Line 448: I don't get the definition of Scalability in the paper. It would be better to use \"Evaluation on Efficiency\"?\n\n> Question \n\n1. Line 66 -- 67: How do you define scalability? I think that means if you invest more compute, you always get better results. Efficiency should be the correct term to use if you are saying some models are slow or take much memory.\n2. Table 1: Somewhat hesitant to trust the results here. From the method section, it is unclear to me what are the differences that can improve the results. Besides, it would be great to discuss some potential oversmoothing, which prevents deeper networks from performing better.\n3. It would be good to report training time for completeness.\n4. Have you conducted any experiments on materials/catalyst datasets? 
I think OC20 IS2RE dataset would be good to test given the compute requirement is not that high and the train/val/test splits are well-defined.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued\", \"comment\": \"### **[Resolved] 1-4) What is the intuition for decomposing $o$ into $o^s$, $o^d$, $o^t$ in Eq 7 and Eq 8?**\\nIn Eq 7, $o$ is decomposed into $o^s$ of one component, $o^d$ of $L_{max}$ components, and $o^t$ of $L_{max}$ components. Then, in Eq 8, $o^s$ is used for scalar $h$, and $o^d$ and $o^t$ are used for steerable $v$. Could you provide an intuitive interpretation to Eq 8 ? What's the difference between $o^s$, $o^d$, and $o^t$? I guess $o^s$ represents scalar features, while $o^d$ and $o^t$ represent steerable features. Then, what is the purpose of $o^d$ and $o^t$ ? Why are $o^d$ and $o^t$ processed differently (ie, multiplied by either $r_{ij}$ or $\\\\tilde{v}_{j}^{(l)}$) in Eq 8 ?\\n\\n**Answer**: $o^s$, $o^d$, and $o^t$ are all invariant features, produced to implement Eq 8.\\n\\n### **[Resolved] 2) Missing $\\\\phi$ in Eq 1**\\nAt line 262, it goes: \\\"A cutoff function $\\\\phi(r_{ij}^{(0)})$ is applied to ....\\\". However, there seems no $\\\\phi$ in Eq 1. \\n\\n### **[Resolved] 3) About $z$ in Eq 1 and Eq 2**\\nIt says that \\\"$z$ denotes the one-hot encoding of the atomic number\\\". I suppose $z_j$ is a $|Z|$-dimensional one-hot vector expressing $j$'s atomic number (say, $k$) and it extracts the $k$-th row of matrix $A$ when written as $z_j A$. 
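As a quick numerical check of this reading (a generic NumPy sketch with illustrative dimensions, not the paper's code):

```python
# If z_j is a one-hot row vector for atomic number k, then z_j A
# simply selects the k-th row of the embedding matrix A.
import numpy as np

rng = np.random.default_rng(0)
num_types, d = 5, 4          # |Z| atom types, embedding dim (illustrative)
A = rng.normal(size=(num_types, d))

k = 2                        # atomic-number index of atom j
z_j = np.zeros(num_types)
z_j[k] = 1.0                 # one-hot encoding

assert np.allclose(z_j @ A, A[k])   # row selection, as supposed above
```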
If so, I think it is better to write it in bold as either $\\\\boldsymbol{z}_j^T \\\\boldsymbol{A}$ (if the paper assumes a column vector) or $\\\\boldsymbol{z}_j \\\\boldsymbol{A}$ (if a row vector).\\n\\n### **[Resolved] 4) $t_{ij,init}W_{rs,ini}$ in Eq 4 is unclear**\\nAt line 285, the paper says that \\\"$W_{rs,init} \\\\in \\\\mathbb{R}^{d_{ed}\\\\times L_{max} \\\\times d_{ne}}$ is a learnable weight matrix\\\". Strictly speaking, this $W$ is a tensor rather than a matrix. So, how the vector-tensor multiplication $t W$ is defined is unclear to me. Also, I suppose $t_{ij,init}$ is a vector, but its dimension is unclear. Is it $t_{ij,init} \\\\in \\\\mathbb{R}^{d_{ed}}$ where $d_{ed}$ is edge feature dimension? \\n\\n### **[Could be improved] 5) What is $E$ in Eq 10 ?**\\nI think $E$ in Eq 10 is never explained. What is it? How it is defined? How does it work?\\n\\n### **[Resolved] 6) What is the difference between $\\\\bigoplus$ and Agg ?**\\nIt is unclear from the paper why there are these two types of aggretation operators. Is that because Agg doesn't necessarily have to be permutation invariant?\\n\\n**Answer**: Correct\\n\\n### **[Resolved] 7) What is the intuition for Eq 13 ?**\\nI would like to know an intuitive interpretation to Eq 13. I guess that $m_1$ and $m_2$ are designed to be invariant, because both of scalar features $h$ and the L2 norm of steerable features $||\\\\tilde{v} W_{vu} ||_2$ are invariant (?). The form of $\\\\tilde{v} + m_2 \\\\circ (\\\\tilde{v} W _{vu})$ keeps equivariance while increasing the model's expressibility ? Does $W _{vu}$ have to be identical in the two equations to ensure equivariance?\\n\\n**Answer**: Correct. $W _{vu}$ can differ.\\n\\n### **A) Additional comments**\\n- **[Resolved]** A-1) In Eq 1: operator $\\\\circ$ is unclear. I can guess $\\\\circ$ is element-wise multiplication. 
But it would be more reader-friendly if it is explicitly explained after Eq 1.\\n- **[Resolved]** A-2) In Eq 5: I think $Q$, $K$, and $V$ in Eq 5 should have subscripts of $i$ or $j$.\\n- **[Could be improved]** A-3) In Eq 5 and others: I often see inconsistent notation styles for scalars, vectors, matrices, and tensors, which confused me. I think it would be helpful if the paper follows a consistent notation style, such as non-bold $x$ for scalars, bold $x$ for vectors, bold ${X}$ for matrices and tensors. (So, I think $Q$ and $K$ in Eq 5 should be $q_i$ and $k_j$ in bold.)\\n- **[Resolved]** A-4) Using $v$ for steerable features may confuse with value features of self-attention. I suggest replacing $v$ with, e.g., $x$.\\n- **[Resolved]** A-5) In Eq 13: Is $\\\\cup$ concatenation? If so, it is better to use the same notation as in Eq 2, or use $(a,b)$ instead of $(a \\\\cup b)$.\\n- A-6) Perhaps, the method could be better explained by moving Sec 3.1 later, since Sec 3.1 relies on one of the main modules, SAE. In this case, the authors could explain the main modules first, given some initialized features, and then provide the initialization procedures. This order may highlight the main contribution part.\\n- **[Could be improved]** A-7) In Figure 2: The captions of \\\"Structure Embedding\\\" in (a) and \\\"Node Embedding\\\" for (d) are inconsistent, if they refer to the same module.\\n- **[Resolved]** A-8) In Fugure 2: The caption texts in Figure 2 use operator notations that are different from main texts, such as $\\\\circledcirc$ vs $\\\\circ$ for element-wise (Hadamard) product. They should be consistent throught the paper. Also, there is a typo \\\"concatinationconcatenation\\\".\\n\\n---\\nSince the paper is really interesting, I would like to deeply understand this work! 
If the authors could answer these questions or update the paper presentation to address them, that would be very helpful.\n\nThank you for the interesting work!\n\nSincerely, \nTatsunori Taniai\"}
{\"comment\": \"> Presentation\n\nThank you for your constructive feedback. We are dedicated to improving the presentation of our work while preserving the essential technical details that distinguish our method.\n\nThe key components of our method are illustrated in Figure 2: specifically, how the degree-wise inner-product operations in the HTR component continuously refine scalar edge features (Figure 2(c)), and how these enhanced edge features guide the attention mechanism (Figure 2(b)). While we understand the desire for simplification, we believe maintaining these technical details is crucial as they represent the core innovations of our work. However, we will improve the visual presentation to make these elements and their interactions clearer.\n\nWe have already incorporated several improvements suggested by reviewers BNe3 and hFv3:\n- Adopted consistent mathematical notation throughout\n- Better contextualized our work within chronological advancements in the field\n- More clearly emphasized our method's contributions to future research directions\n\nThese revisions enhance the accessibility of our technical innovations while maintaining the necessary mathematical rigor.\n\nAs today marks the end of the rebuttal period, we want to ensure we fully understand your vision for Figure 2's improvement. While we have outlined our completed and planned revisions above, could you provide specific suggestions for modifications that would better highlight our key innovations, while preserving the technical completeness that is essential to our contribution? 
Your concrete feedback would be valuable in guiding our revision.\n\n> Efficiency-Expressiveness Trade-off\n\nAs noted by reviewer hFv3, we achieve this trade-off by utilizing the degree-wise inner product of high-degree features instead of tensor products. This crucial technical innovation enables us to maintain expressiveness while significantly reducing computational complexity compared to methods using CG coefficients. Notably, as acknowledged in [a], higher-degree features are indeed theoretically superior and lead to better empirical results when trained on large-scale datasets. Our method specifically addresses the computational challenges, thereby making it practical to leverage expressive representations at scale. We have highlighted this contribution more explicitly in both the introduction and related work sections of our revised manuscript.\n\n> OC20 IS2RE Experiment\n\nWe appreciate your clarification about OC20 IS2RE versus S2EF. To be clear: our previous response about OC20 was referencing the authors of EquiformerV2, who explicitly noted [7] that OC20 datasets require substantially different hyperparameters for optimal performance. Thus, using hyperparameters optimized for MD17/22 without proper adaptation and validation would not provide a fair comparison. \n\n**The significance of dataset-specific hyperparameter optimization**:\nTo illustrate, we can examine Equiformer's implementation. For MD17, they use [one set of hyperparameters](https://github.com/atomicarchitects/equiformer/blob/dc4852858a305bf552506321a28a81f981736f8a/nets/graph_attention_transformer_md17.py#L408), while for OC20 IS2RE they use [substantially different hyperparameters](https://github.com/atomicarchitects/equiformer/blob/master/oc20/configs/is2re/all/graph_attention_transformer/l1_256_nonlinear_g%402_local.yml). 
The differences are substantial, affecting critical parameters such as $L_{max}$ values, node dimensions, number of layers, irreps MLP dimensions, alpha dropout, and number of attention heads, among others. This point was also recently highlighted in an Equiformer V2 GitHub issue [7].\\n\\n**A thorough evaluation takes significant time:**\\nA complete evaluation would require adapting our implementation for OC20, validating it, and conducting thorough hyperparameter optimization. While each training run takes 48 hours on 2 GPUs, a proper hyperparameter search would require multiple runs to optimize various parameters (architecture, learning rate, dropout, etc.). This process could take weeks to complete properly. Conducting a rushed evaluation during the rebuttal period would not be scientifically sound, as it could lead to suboptimal results that misrepresent our method's true capabilities and potentially mislead future readers. \\n\\n**Our evaluation for GotenNet:**\\nInstead, **we have validated our method on four diverse well-benchmarked datasets (QM9, Molecule3D, MD17, and MD22)** - an evaluation scope that exceeds many recent significant contributions:\\n- DimeNet, and LeftNet validated on QM9/MD17\\n- LSRM validated on MD22\\n- SO3Krates utilized MD17/22\\n\\nThis comprehensive evaluation across multiple established benchmarks provides strong evidence of our method's effectiveness and generalizability. **We are certainly excited to explore OC20 IS2RE performance in future work.**\"}", "{\"title\": \"Questions 1-4\", \"comment\": \"> **Q1. Why use the same weight for steerable features of different degrees (in Eqs. (7) and (11))? Can they be different?**\\n\\nWe sincerely thank the reviewer for this insightful suggestion about weight differentiation for steerable features. 
Following your suggestion, we conducted extensive experiments with different weight configurations, particularly focusing on the interaction term: $\\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} + o^{t, (l)}_{ij} \\\\circ \\\\tilde{v}^{(l)}_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$. Our experiments revealed several interesting findings:\\n1. Experimented with all three components to explore performance characteristics. Different coefficients on GATA improves performance, while EQFF shows no significant change. \\n| Variation | $\\\\varepsilon_{\\\\text{HOMO}}$ | $U_0$ |\\n| - | - | - |\\n| $\\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ | 15.61 | 3.53 |\\n| $\\\\big\\\\\\\\{ o^{t, (l)}_{ij} \\\\circ \\\\tilde{v}^{(l)}_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ | 15.84 | 3.67 |\\n| $\\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{v}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L_{max}}\\\\_{l=1}$ | **15.59** | **3.49** |\\n| $\\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{v}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ , $(m\\\\_2^{(l)} \\\\circ \\\\Delta{\\\\tilde{v}{}}^{(l)}\\\\textbf{W}_{vu})$ | 15.62 | 3.51 |\\n| GotenNet$_{B}$ | 16.4 | 3.76 |\\n3. **Using different coefficients for steerable features significantly improves performance.** On QM9 dataset, this modification reduces errors for $\\\\varepsilon_{\\\\text{HOMO}}$ from 16.4 to 15.59 and $U_0$ from 3.76 to 3.49 in GotenNet$\\\\_B$.\\n4. **The improvement scales well with model size.** For GotenNet$\\\\_L$, we observe even more substantial gains, with $\\\\varepsilon_{\\\\text{HOMO}}$ improving from 14.3 to 13.67 and $U_0$ from 3.67 to 3.37.\\n5. 
**The benefits generalize across datasets.** Testing on the rMD17 dataset's Aspirin target shows consistent improvements in both energy (0.0364 \\u2192 0.0346) and forces (0.1338 \\u2192 0.1305).\\nThese results demonstrate both the effectiveness of differentiated weights and the flexibility of our framework. We are currently completing experiments across all targets shown as below and will include these improvements in our final version. We thank the reviewer for this valuable suggestion that has led to further performance gains.\\n\\n\\n| Model | $\\\\alpha$ | $\\\\Delta \\\\varepsilon$ | $\\\\varepsilon_{\\\\text{HOMO}}$ | $\\\\varepsilon_{\\\\text{LUMO}}$ | $\\\\mu$ | $C_{\\\\nu}$ | $G$ | $H$ | $R^2$ | $U$ | $U_0$ | ZPVE |\\n| - | - | -- | - | - | - | - | - | - | - | ---- | ----- | ---- |\\n| GotenNet$\\\\_{S} + \\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{v}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L_{max}}\\\\_{l=1}$ | 34.8 | 23.2 | 16.3 | 14.7 | 7.5 | 20.4 | 5.51 | 3.86 | 26 | 3.76 | 3.82 | 1.15 |\\n| GotenNet$_{S}$ | 37 | 25.4 | 18.4 | 15.7 | 7.5 | 21 | 5.67 | 4.17 | 33 | 3.97 | 3.89 | 1.16 |\\n| GotenNet$\\\\_{B} + \\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ r^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{v}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L_{max}}\\\\_{l=1}$ | 33 | 21.3 | 15.5 | 13.5 | 7.3 | - | - | - | - | - | 3.49 | - |\\n| GotenNet$_{B}$ | 33 | 23 | 16.4 | 14.4 | 7.8 | 20 | 5.42 | 3.74 | 32 | 3.76 | 3.76 | 1.1 |\\n| | | | | | | | | | | | | |\\n\\n\\n> **Q2. Why does Eq. (9) use a permutation-invariant operator?**\\n\\nWe thank the reviewer for observation about Equation (9). Indeed, the operation does not require permutation invariance - we initially used a sum operator (which happens to be permutation-invariant) for simplicity and consistency with message passing aggregation. 
Following your suggestion, we have updated the notation to use the $\\bigcirc$ symbol to indicate a generic aggregation operator, making this distinction clearer in the manuscript.\n\n> **Q3. Is it possible to explore the expressive power of GotenNet using high-degree steerable features?**\n\nWe thank the reviewer for this interesting question about the expressive power of higher-degree steerable features. We are currently conducting experiments with $L\\_{max} \\in \\\\{4, 6\\\\}$ on $\\varepsilon_{\\text{HOMO}}$ and $U_0$ targets to explore this aspect. We look forward to sharing these results as they become available.\n\n> **Q4. Is it possible to explore the expressive power of GotenNet on large-scale geometric graphs and dynamics tasks?**\n\nWe thank the reviewer for this valuable suggestion to explore GotenNet's capabilities on large-scale geometric graphs and dynamics tasks. We are actively working on integrating our framework with these applications. While the rebuttal period may not allow sufficient time for comprehensive results, we are committed to including this analysis in our final revision before publication.\"}
{\"title\": \"Comments on Author Responses\", \"comment\": \"Thank you for your response. Please see my comments below.\n\n> 1. Presentation\n\nI still think the presentation needs to be improved. The issue is that from the presentation, it is unclear why the architecture is better while the authors spend lots of space reiterating things that already exist in the literature. Moreover, Figure 2 needs to be greatly improved to reflect the contribution of this work.\n\n> 2. \"Our primary goal is to address the fundamental trade-off in equivariant networks where models must choose between expressiveness and efficiency.\"\n\nAgain it is unclear to me how this is achieved. The authors are supposed to answer this clearly in their response. \n\n> 3. 
OC20 IS2RE experiment.\\n\\nI asked for experiments on OC20 \\\"IS2RE\\\" instead of OC20 \\\"S2EF\\\". The OC20 IS2RE results take at most 48 hours on 2 GPUs based on Equiformer paper while the authors cite EquiformerV2 takes too much on OC20 S2EF. My intention is to suggest authors to compare their methods on one larger (yet affordable) and well-benchmarked dataset to make sure their evaluation of performance is correct.\\n\\nSince the major points of my initial comments are not well-addressed, I keep my rating at this stage.\"}", "{\"title\": \"Follow-up: Have we sufficiently addressed the Reviewer's concerns?\", \"comment\": \"We sincerely thank you for your detailed and constructive feedback on our submission. We have provided comprehensive responses addressing each of your concerns, including thoroughly **revising the writing and presentation** in our manuscript with all changes highlighted in blue for easy reference, clarifying our definition of scalability, providing access to our reference implementation, explaining the **key technical innovations that enable GotenNet's superior performance**, providing **detailed training time comparisons**, addressing the fairness of experimental settings, and **simplifying the architectural presentation with improved symbolic notation**.\\n\\nWe believe these responses and additional analyses have substantially strengthened our paper. We would greatly appreciate your feedback on whether our responses have adequately addressed your concerns and if the additional efficiency metrics help validate our performance claims. If there are any remaining aspects you would like us to clarify or address, we are happy to provide further information. If you find our responses satisfactory, we would be grateful if you could reconsider your rating of our submission.\\n\\nThank you again for your time and dedication in helping us improve this work.\"}", "{\"comment\": [\"Dear Authors,\", \"Thank you for your clarifications and revisions. 
While checking your reply and the paper, I'd like to write a quick comment on the updates because I know the deadline for the paper revision is very close.\", \"While it is indeed helpful that $X$ follows the notation of HEGNN, I (and probably Reviewer hFv3) suggested that all geometric vectors/matrices/tensors (not limited to $X$ but including all the others such as $m_i$, $t_{ij}$, $q_i$, $k_j$, $v_j$, $sae_{ij}$, $\\\\alpha_{ij}$) follow the consistent notation of HEGNN. Due to limited time, I could not check completely if the paper already satisfies this or not.\", \"For example, $\\\\boldsymbol{v}_j$ in Eq 5 seems to be a matrix in $\\\\mathbb{R}^{S \\\\times d _{ne}}$, so I think it's better to write $\\\\boldsymbol{V}_j$ (?). The same style should apply to $\\\\textbf{sae} _{ij}$ (as it has the same dimensionalities as $\\\\boldsymbol{V}_j$). Likewise, if $\\\\boldsymbol{\\\\alpha} _{ij}$ in Eq 6 is a scalar, then it's better to write $\\\\alpha _{ij}$.\", \"**[Resolved]** Also, $v$ in Eq 6 probably needs the subscript $j$ ?\", \"**[Resolved]** Additionally, I noticed that Q3 and Q4 are not addressed. Because of this, I think $z\\\\boldsymbol{A}$ in Eq 1 and Eq 2 as well as $tW$ in Eq 4 are kind of ill-defined.\", \"Another question: ~~Does the \\\"sae\\\" in Eq 7 have the dimensionalities of $S \\\\times d_{de}$ ? If so, I suppose the splitting in Eq 7 results in $o^s _{ij}$, $o^{d,(l)} _{ij}$, $o^{t,(l)} _{ij}$ that are all scalars (ie, $o^s _{ij}, o^{d,(l)} _{ij} ,o^{t,(l)} _{ij} \\\\in \\\\mathbb{R}$). Correct? If so, these $o$ should be non-bold in Eq 7 and Eq 8, and the two $\\\\circ$ in Eq 8 are probably unnecessary.~~ Sorry, this was my misunderstanding. I suppose $o$'s are all vectors in $\\\\mathbb{R}^{d _{de}}$, correct? 
If so, how are the element-wise multiplications in Eq 8, ie, $\\\\boldsymbol{o}^{d,(l)} _{ij} \\\\circ \\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij}$ and $\\\\boldsymbol{o}^{t,(l)} _{ij} \\\\circ \\\\tilde{\\\\boldsymbol{X}}^{(l)} _{j}$, defined? I suppose $\\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij} \\\\in \\\\mathbb{R}^{1 + 2l}$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _{j} \\\\in \\\\mathbb{R}^{(1+2l)\\\\times d _{ne}}$, and they have different dimensionalities from the vectors $\\\\boldsymbol{o} _{ij} \\\\in \\\\mathbb{R}^{d _{de}}$.\", \"The same question applies to $\\\\boldsymbol{m}_2 \\\\circ \\\\tilde{\\\\boldsymbol{X}}^{(l)} \\\\boldsymbol{W} _{vu}$ in Eq 13.\", \"Other suggestions:\", \"If $\\\\tilde{\\\\cdot}$ is used to represent steerable features, then the notation of $\\\\tilde{\\\\boldsymbol{EQ}}^{(l)} _i$ and $\\\\tilde{\\\\boldsymbol{EK}}^{(l)} _j$ in Eq 10 seems redundant, because both $\\\\tilde{\\\\cdot}$ and $E$ express steerable features. I think simply writing them as $\\\\tilde{\\\\boldsymbol{Q}}^{(l)} _i$ and $\\\\tilde{\\\\boldsymbol{K}}^{(l)} _j$ suffices.\", \"I think the notation of $\\\\textbf{sae}_{ij}$ is not preferable. I suggest simply using a single capital-bold character (assuming that $\\\\textbf{sae} _{ij}$ is a matrix in $\\\\mathbb{R}^{S \\\\times d _{ne}}$).\", \"Thank you\"]}", "{\"summary\": \"The importance of equivariance in neural networks has become ubiquitous\\nin machine learning research. Neural networks operating on input features\\nwith certain symmetries, such as rotations, translations and permutations,\\npredict with higher certainty and better quality if the architecture respects\\nthese symmetries. In Graph Neural Networks, equivariance with respect to\\npermutation of the nodes has been the foundation upon which a plethora\\nof ML approaches have been developed. 
The authors present a deepening\\nof this paradigm, by combining the success of transformers, permutation\\ninvariance and E(3) equivariance and distilling these three components into\\nan architecture for molecular learning. The main contribution of the paper\\nis the development of the Geometry Aware Tensor Attention layer that defines\\nan E(3) equivariant transformer layer for both node and edge embeddings. A\\nstrong highlighted advantage is computational speed, compared to architectures\\nusing Clebsch-Gordan decompositions or architectures based on irreducible\\nrepresentations.\\n\\nThrough a thorough set of experiments, the authors show that their\\narchitecture, GotenNet, is able to outperform a wide variety of benchmarks on\\nmolecule property predictions. Their overall performance and scaling in terms\\nof computation are also shown.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a rather strong experimental section showing that the\\nauthors have taken the time to set up a good set of experiments. In particular,\\nthe fact that all comparison partners have been retrained and compared to\\nthree versions of the model shows dedication to the work and has been a pleasure\\nto read. The experiments addressing the inference times are an important\\ncomplementary addition, since often there is a rather steep tradeoff between\\npracticality and usability, where better performance on metrics is achieved\\nthrough exponential increases in parameter count or long inference times. To\\nsee both good results and good inference time / scalability is reassuring to\\nthe reviewer. The introduction of geometric tensors as a replacement for CG\\ncoefficients and the computation of irreducible representations is a novel and\\noriginal viewpoint which, combined with the favorable tradeoff between good\\nperformance and accuracy, is an impactful contribution. 
Writing is clear,\\nconcise and easy to follow.\", \"weaknesses\": \"Since the reviewer is not too familiar with the literature, the current\\nwriting makes it somewhat hard to compare to the currently available literature.\\nFor instance, what would be the comparable difference between GotenNet and an\\nequivariant transformer (such as Equiformer) or the Graph Attention Networks?\\nIs the current architecture a mix of the two? In that sense, some more context\\nmight be good in the introduction or overview of related work.\\n\\nThe paper as written in its current form is of great quality, and could\\npotentially be further strengthened through the incorporation of the points\\nabove and by addressing the questions below in a section discussing the current\\nlimitations of the method and potential future avenues of research. For\\ninstance, it seems that the current method is _specifically_ tailored to the\\nprediction of molecule properties, and how would that generalize? Another\\npoint to potentially discuss is how this method would extend beyond $E(3)$\\nand $SE(3)$ equivariance (maybe scaling for instance) and/or if the method\\nwould generalize to point clouds (where this line of work is also highly\\nrelevant). Since the paper is already of great quality, addressing these\\ntopics in the rebuttal would be sufficient; however, addressing them in the\\nrevision would be beneficial for the reader.\", \"questions\": \"Some questions:\\n\\n- How does the current architecture generalize to different tasks? As\\npresented, it would work only on molecule-type graphs; is it also possible to\\ngeneralize the architecture to, for instance, social networks or other types\\nof graphs? In other words, how general is the method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thanks for your response. 
My concerns have been addressed, and I will increase the confidence score while maintaining the good paper recommendation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you Reviewer hFv3 for response\", \"comment\": \"Dear Reviewer hFv3,\\n\\nThank you for your response and clarification on Q 5. I totally agree with you on the benefit of using HEGNN's notation. If the paper strictly follows this notation throughout, it will greatly reduce ambiguity and enhance readability. In my opinion, the paper in its current state requires the mature eyes of experts, such as Reviewer hFv3, to accurately understand the content. I am not against this work, but I sincerely hope that the paper's presentation will be improved before publication to make it accessible to a broad audience, for the sake of the community.\\n\\nBest, \\nTatsunori\"}", "{\"comment\": [\"Dear Dr. Taniai, I think I can answer your fifth question. $\\\\boldsymbol{EQ}$ in Eq. (10) should mean \\\"equivariant query\\\", and $E$ may mean that this is a steerable variable and should be treated as a whole (i.e. $\\\\widetilde{\\\\boldsymbol{EQ}}$). In addition, I am very grateful for your many suggestions on presentation. In fact, I was also very confused about the presentation of the first version of this article at the beginning. Although I later suggested that the authors use the HEGNN symbol system, the authors may have made many omissions in the revision due to time constraints (so I must also admit that part of the responsibility lies with me).\", \"I agree with the notation you mentioned in 7-3. However, in order to distinguish single-channel and multi-channel scalars, I would like to make some improvements based on your suggestion. 
In fact, it is the notation used in HEGNN:\", \"$x\\\\in\\\\mathbb{R}$ represents a single-channel scalar\", \"$\\\\boldsymbol{x}\\\\in\\\\mathbb{R}^{C}$ represents a multi-channel scalar (the number of channels is $C$)\", \"$\\\\vec{\\\\boldsymbol{x}}\\\\in\\\\mathbb{R}^{3}$ represents a Cartesian vector\", \"$\\\\vec{\\\\boldsymbol{X}}\\\\in\\\\mathbb{R}^{3\\\\times C}$ represents a Cartesian vector group, which contains $C$ Cartesian vectors\", \"$\\\\tilde{\\\\boldsymbol{x}}^{(l)}\\\\in\\\\mathbb{R}^{2l+1}$ represents an $l$th-degree steerable vector\", \"$\\\\tilde{\\\\boldsymbol{X}}^{(l)}\\\\in\\\\mathbb{R}^{(2l+1)\\\\times C}$ represents the $l$th-degree steerable vector group, which contains $C$ such vectors\", \"$\\\\tilde{\\\\boldsymbol{x}}^{(0:L)}\\\\in\\\\mathbb{R}^{L^2}$ represents the $0$th to $L$th-degree steerable vector group\", \"$\\\\tilde{\\\\boldsymbol{X}}^{(0:L)}\\\\in\\\\mathbb{R}^{L^2\\\\times C}$ represents the $0$th to $L$th-degree steerable vector group, which contains $C$ vectors for each degree\", \"If the numbers of channels for different degrees are different, it is recommended to use e3nn's string representation directly\", \"It is worth noting that Cartesian vectors are also 1st-degree steerable features, but because they are used so frequently, they are given a separate symbol. In addition, this also makes it easier to discuss translation equivariance separately, because steerable features are generally set to be translation invariant.\"]}", "{\"comment\": \"Thank you for your thorough review and strong support of our manuscript. We have implemented several significant improvements based on your valuable feedback:\\n\\n1. Notation and Presentation: We have fixed missing tilde symbols for steerable features and further aligned our notation with HEGNN for consistency. We have also reorganized the presentation of some statements, improving the logical flow of the manuscript.\\n2. 
Model Performance: We are particularly excited to share breakthrough results that emerged from your insightful question about different coefficients for steerable features. Your suggestion led us to explore this direction during the rebuttal phase, resulting in substantial performance improvements. The updated QM9 results demonstrate these remarkable gains:\\n\\n| Model | $\\\\alpha$ | $\\\\Delta \\\\varepsilon$ | $\\\\varepsilon_{\\\\text{HOMO}}$ | $\\\\varepsilon_{\\\\text{LUMO}}$ | $\\\\mu$ | $C_{\\\\nu}$ | $G$ | $H$ | $R^2$ | $U$ | $U_0$ | ZPVE | std. MAE | log MAE |\\n| ----------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------- | --------------------------- | --------------------------- | ----- | --------- | ---- | ---- | ----- | ---- | ----- | ---- | -------- | ------- |\\n| $\\\\text{GotenNet}\\\\_{S} + \\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ \\\\tilde{r}^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{X}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ | 34.8 | 23.2 | 16.3 | 14.7 | 7.5 | 20.4 | 5.51 | 3.86 | 26 | 3.76 | 3.82 | 1.15 | 0.62 | -6.27 |\\n| $\\\\text{GotenNet}\\\\_{S} $ | 37 | 25.4 | 18.4 | 15.7 | 7.5 | 21 | 5.67 | 4.17 | 33 | 3.97 | 3.89 | 1.16 | 0.67 | -6.21 |\\n| $\\\\text{GotenNet}\\\\_{B} + \\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ \\\\tilde{r}^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{X}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ | 33 | 21.3 | 15.5 | 13.5 | 7.3 | 19.5 | 5.33 | 3.52 | 25 | 3.49 | 3.49 | 1.1 | 0.58 | -6.33 |\\n| $\\\\text{GotenNet}\\\\_{B} $ | 33 | 23 | 16.4 | 14.4 | 7.8 | 20 | 5.42 | 3.74 | 32 | 3.76 | 3.76 | 1.1 | 0.61 | -6.26 |\\n| $\\\\text{GotenNet}\\\\_{L} + \\\\big\\\\\\\\{o^{d, (l)}\\\\_{ij} \\\\circ \\\\tilde{r}^{(l)}\\\\_{ij} + o^{t, (l)}\\\\_{ij} \\\\circ \\\\tilde{X}^{(l)}\\\\_j \\\\big\\\\\\\\}^{L\\\\_{max}}\\\\_{l=1}$ | 30 | 19.9 | 13.7 | 12.2 | 7.7 | 19 | 4.98 | 
3.36 | 21 | 3.33 | 3.37 | 1.08 | 0.54 | -6.39 |\\n| $\\\\text{GotenNet}\\\\_{L} $ | 30 | 20.7 | 14.3 | 13.3 | 7.7 | 19 | 5.27 | 3.47 | 25 | 3.58 | 3.67 | 1.08 | 0.56 | -6.34 |\\n\\n3. Comprehensive Model Comparison: Following your suggestion, we have expanded Table 1 with updated QM9 results and added a new comprehensive comparison (Table 8) in the appendix that categorizes different approaches.\\n\\n4. Theoretical Discussion: We appreciate your suggestion regarding the theoretical discussion of high-degree steerable features. In the final version, we plan to enhance the appendix with more detailed theoretical foundations to provide readers with a comprehensive understanding of the underlying concepts.\\n\\nWe remain committed to further improving the manuscript and welcome any additional suggestions you may have. Thank you again for your continuous constructive feedback and strong recommendation for acceptance.\"}", "{\"title\": \"About GotenNet's Engineering Value\", \"comment\": \"First, there are still some minor issues with the presentation:\\n- Some tilde symbols of steerable features are missing\\n- Some statements may need to be repositioned, such as the cutoff function in Line. 261 can be adjusted to near Eq. (4)\\n\\nIn addition, I suggest that the authors test the effects of SO3krates and HEGNN on datasets such as QM9 if they have time. I tested the effect of HEGNN on QM9 by myself. Although it is better than EGNN, it is far inferior to GotenNet (I guess SO3krates is similar). Of course, I don\\u2019t mean to discredit HEGNN (after all, as a simple backbone, HEGNN does not use rbf kernel, cutoff function and other technologies at all). I just think that this can further illustrate the **engineering value** of this article (which is even more important in the field of AI for Science than the architectural innovation brought by the inner product).\\n\\nI hope the authors can give a big table about QM9 (maybe in the appendix). 
On the left are the categories of models, which are invariant models (such as DimeNet++, SphereNet), scalarization-based models (such as EGNN, PAINN, LEFTNet), high-order steerable models (such as Equiformer, EquiformerV2), pre-trained models (such as Frad and others in the Frad paper), models that use inner products to introduce high-degree steerable features (such as SO3krates, HEGNN; the authors can give this category a new name to reflect the scalarization trick), and finally GotenNet.\\n\\n**GotenNet is extremely practical and worth learning from for its many techniques in scalar processing. I did not sufficiently emphasize the engineering value of this article in my previous comments. I would like to reiterate it here and call on other reviewers, especially reviewer bNe3, to raise the rating of this article.**\"}", "{\"comment\": \"We sincerely appreciate your thorough review and are pleased that our response has addressed your concerns. Thank you for your time and feedback throughout this process.\"}", "{\"title\": \"Thanks for the paper updates\", \"comment\": [\"Dear Authors,\", \"I noticed new revisions in the paper. I deeply thank the authors. Given these revisions, Sec 3 is mostly clear to me. Now I have a much better understanding of the method. Although there is still slight abuse of notation (eg, summing a matrix and a vector with implicit vector-matrix reshaping), I think that is allowable.\", \"Meanwhile, there are equations that still seem ill-defined. 
I list them below, hoping that they are addressed in the final version:\", \"Definitions of $\\\\boldsymbol{o}^{d,(l)} _{ij} \\\\circ \\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij}$ and $\\\\boldsymbol{o}^{t,(l)} _{ij} \\\\circ \\\\tilde{\\\\boldsymbol{X}}^{(l)} _{j}$ in Eq 8 are unclear, given $\\\\boldsymbol{o}^{(l)} _{ij} \\\\in \\\\mathbb{R}^{d _{de}}$, $\\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij} \\\\in \\\\mathbb{R}^{(1 + 2l)}$, and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _{j} \\\\in \\\\mathbb{R}^{(1+2l)\\\\times d _{ne}}$.\", \"Probably, the former is $(\\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij})^T \\\\boldsymbol{o}^{d,(l)} _{ij}$ and the latter multiplies the same $\\\\boldsymbol{o}^{t,(l)} _{ij}$ into all $(1+2l)$ components of $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _{j}$ (ie, broadcast in numpy).\", \"Definition of $\\\\boldsymbol{m}_2 \\\\circ \\\\tilde{\\\\boldsymbol{X}}^{(l)} \\\\boldsymbol{W} _{vu}$ in Eq 13 is unclear.\", \"Probably, broadcasting applies to $\\\\boldsymbol{m}_2 \\\\in \\\\mathbb{R}^{d _{ne}}$ again.\", \"Similar issues for the three $\\\\circ$'s in Eq 4.\", \"Additionally, I would like to leave several suggestions, which will hopefully increase the clarity.\", \"Sec 3.1:\", \"When introducing notation, I think it is important to clarify that $\\\\tilde{\\\\cdot}$ expresses steerable features that contain $L_{max}$ degrees and $(1+2l)$ components for each degree $l$. Although Sec 3.1 already explains so for $\\\\tilde{\\\\boldsymbol{r}}^{(l)} _{ij}$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)}$, it is important to clarify that this notation generally applies to other symbols.\", \"Also, it's better to clarify that **vectors in $\\\\mathbb{R}^d$ are row vectors** in the paper.\", \"Eq 1: Although I can guess, it is more reader-friendly to clarify $\\\\boldsymbol{m}_i \\\\in \\\\mathbb{R}^{d _{ne}}$ and $\\\\boldsymbol{z} \\\\in \\\\mathbb{R}^{|\\\\mathcal{Z}|}$.\", \"Eq 3\", \"Please clarify $\\\\boldsymbol{t} _{ij,init} \\\\in \\\\mathbb{R}^{d _{ed}}$. 
I think $\\\\boldsymbol{t} _{ij}$ is as important as $\\\\boldsymbol{h}$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)}$. Thus, such dim information is important.\", \"$(\\\\sigma(\\\\cdot))$ is simplified to $\\\\sigma(\\\\cdot)$ (same for Eq 2). Instead, Eq 3 needs to be clarified if $W$ is applied after or before $\\\\circ$, by inserting ( ) at proper positions. Ie, there is ambiguity whether $(( t_i + t_j ) \\\\circ \\\\sigma (\\\\cdot)) W$ or $( t_i + t_j ) \\\\circ (\\\\sigma (\\\\cdot) W)$.\", \"Eq 5: Given $\\\\boldsymbol{v}_j$ in vector style, it's better to define $\\\\gamma_v: \\\\mathbb{R}^{d _{ne}} \\\\to \\\\mathbb{R}^{S \\\\cdot d _{ne}}$ instead of $\\\\mathbb{R}^{S \\\\times d _{ne}}$ at the line after Eq 5. Accordingly, Eq 7 should use $\\\\gamma_s: \\\\mathbb{R}^{d _{ne}} \\\\to \\\\mathbb{R}^{S \\\\cdot d _{ne}}$. This way, the notation abuse (ie, vector + matrix) is also resolved.\", \"Eq 6: If $\\\\boldsymbol{\\\\alpha} _{ij}$ is a scalar in $\\\\mathbb{R}$, then I strongly recommend to write non-bold ${\\\\alpha} _{ij}$.\", \"Eq 9 and Eq 12: I suggest using \\\"$\\\\gets$\\\" instead of \\\"$=$\\\", ie, $X \\\\gets X+\\\\Delta X$. The same applies to $h$ in Eq 9 and $t$ in Eq 12.\", \"Overall Sec 3: At first, the high-level objectives of Sec 3.3 & 3.4 were unclear to me. Now I have a better perspective:\", \"The network holds node features (ie, invariant $\\\\boldsymbol{h}_i$ and steerable $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$) and edge features (ie, invariant $\\\\boldsymbol{t} _{ij}$) throughout, initializes them in Sec 3.2, and repeatedly updates $\\\\boldsymbol{h}_i$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$ in Sec 3.3 & 3.5 as well as $\\\\boldsymbol{t} _{ij}$ in Sec 3.4.\", \"Sec 3.3 updates $\\\\boldsymbol{h}_i$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$ using attention-based message passing in a degree-wise manner, where the information of $\\\\tilde{\\\\boldsymbol{X}}^{(l)}$ is not mixed across degrees $l$. 
Equivariance is ensured by relying on invariant features (ie, $\\\\boldsymbol{h}_i$, $\\\\boldsymbol{t} _{ij}$, and $\\\\tilde{\\\\boldsymbol{r}}^{(0)} _{ij}$) to compute coefficients for steerable $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$ as well as updates for invariant $\\\\boldsymbol{h}_i$.\", \"Sec 3.4 updates $\\\\boldsymbol{t} _{ij}$ using the information of $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _j$ mixed across degrees $l$, by utilizing their inner products to produce invariant updates for $\\\\boldsymbol{t} _{ij}$.\", \"Sec 3.5 updates $\\\\boldsymbol{h}_i$ and $\\\\tilde{\\\\boldsymbol{X}}^{(l)} _i$ using a node-wise & degree-wise feed-forward net.\", \"Providing such high-level views before presenting details will help readers.\", \"Since Reviewer bNe3 seems absent and his/her clarity concern remains alive, I'd like to personally endorse the paper's clarity. I think the paper (Sec 3) is much clearer now, and addressing the above points will further enhance clarity.\", \"Best,\", \"Tatsunori\"]}", "{\"comment\": \"Dear Dr. Taniai,\\nI couldn't agree more with your point of view. A good article needs a good presentation, so that it can guide the newcomers (especially beginners) in the whole field. Especially for this article, which is bound to become a must-read in equivariant models, a clear and easy-to-follow architecture explanation is very necessary. Enthusiastic people like you are very much needed to provide suggestions (especially those points that hinder understanding), which I believe will also be welcomed by the authors. Thank you again for your valuable insights.\"}", "{\"title\": \"Comparison of computational complexity\", \"comment\": \"We sincerely thank the reviewer for the thorough evaluation of our work. 
We appreciate the recognition of GotenNet's novel contributions in combining geometric tensor representations with advanced attention mechanisms, as well as acknowledging our paper's clear structure and articulation. The reviewer's supportive comments on how our approach addresses the expressiveness-efficiency trade-off in 3D graph modeling are particularly encouraging.\\n\\n> W1. **Comparison of computational complexity with previous methods**: Could you provide details on the model size, as well as training and inference times, in comparison with existing methods? This information would help highlight GotenNet\\u2019s computational efficiency relative to other models in the field.\", \"we_provide_detailed_computational_complexity_comparisons_in_the_table_below\": \"| Model | Batch Size | Time per Epoch (s) | min | avg | max | Limit | Training Latency (ms) | Inference Latency (ms) | Params | std. | log |\\n| -------------- | ---------- | ------------------ | -------- | -------- | -------- | -------- | --------------------- | ----------------------- | ------ | -------- | --------- |\\n| Equiformer | 128 | 425 | 1.48 | 1.48 | 1.48 | 1.48 | 421 | 150 | 3.5M | 0.70 | -5.82 |\\n| EquformerV2 | 64 | 821 | 2.85 | 2.85 | 2.85 | 2.65 | 918 | 341 | 11.2M | 0.67 | -5.87 |\\n| EquformerV2 | 48 | 847 | 2.94 | 2.94 | 2.94 | 2.65 | 918 | 341 | 11.2M | 0.67 | -5.87 |\\n| Geoformer | 32 | 436 | - | - | - | 5.05 | 759 | 264 | 50.6M | 0.75 | -6.12 |\\n| GotenNet$_{S}$ | 32 | **117** | **0.41** | **0.75** | **1.34** | **1.35** | **80** | **37** | 6.1M | **0.67** | **-6.21** |\\n| GotenNet$_{B}$ | 32 | 180 | 0.75 | 1.15 | 1.92 | 2.08 | 120 | 56 | 9.2M | **0.61** | **-6.26** |\\n| GotenNet$_{L}$ | 32 | 291 | 1.37 | 1.87 | 2.33 | 3.37 | 244 | 112 | 18.3M | **0.56** | **-6.34** |\\n\\n\\nGotenNet demonstrates superior efficiency across all variants. GotenNet$\\\\_{S}$ achieves competitive performance with just 6.1M parameters and the lowest latencies (80ms training, 37ms inference). 
Even our largest variant, GotenNet$_{L}$, maintains 42% faster inference and 25% faster training compared to Equiformer while achieving state-of-the-art performance. We have included a detailed efficiency comparison discussion in Appendix G. We welcome any further discussion regarding computational efficiency or other aspects of our work.\"}", "{\"comment\": [\"Thank you for your thorough analysis of our reference implementation and your constructive feedback. We greatly appreciate the time you took to verify our results and examine our code in detail.\", \"Yes, you are correct about the e3nn connection in our `TensorInit` class, and we have added the appropriate citation to the e3nn arXiv paper in our manuscript.\", \"Regarding the notation for element-wise operations, we agree that $\\\\otimes$ could potentially cause confusion with CG tensor products. We will maintain our original notation using $\\\\circ$ for element-wise operations, and have added explicit clarification about its broadcasting behavior in the manuscript to ensure clarity.\", \"We appreciate your suggestion regarding the terminology. We have adopted \\\"Spherical-scalarization models\\\" as the category name for this class of approaches, as it better reflects how these models initialize high-degree steerable features from spherical harmonics and use inner product scalarization. We have updated Section 2.2 to introduce our framework within this context.\", \"We have corrected the remaining instances of \\\"high-order\\\" to \\\"high-degree\\\" in the newly added text.\", \"Your engagement throughout the discussion period has been invaluable in improving both the clarity and technical accuracy of our work. We sincerely appreciate your supportive approach and detailed suggestions that have helped enhance the quality of our manuscript.\"]}", "{\"summary\": \"This paper proposes a novel Geometric Tensor Network (GotenNet) to address trade-offs between expressiveness and computational efficiency. 
The core of this model is to use the inner-product in Eq. (9) to avoid tensor products or CG coefficients.\\n***\\n\\nIn order to better explain my comments and facilitate the understanding of the author, other reviewers, and AC, I will sort out the timetable of related models that are not mentioned in the article, mainly involving the following references:\\n1. SE(3)-Transformer [a] (in NeurIPS 2020): In Eq. (11) of [a], the calculation of attention $\\\\alpha$ used $\\\\bf{q}^\\\\top\\\\bf{k}$, which sums all inner-products of high-degree steerable features together.\\n2. SE(3)-Transformer implemented by e3nn [b]: See code below\\n```python\\ndot = o3.FullyConnectedTensorProduct(irreps_query, irreps_key, \\\"0e\\\")\\n...\\nexp = edge_weight_cutoff[:, None] * dot(q[edge_dst], k).exp()\\n```\\nAnd the main difference is the module $\\\\texttt{o3.FullyConnectedTensorProduct}$, which introduces different learnable weights for each inner-product. However, the model still uses tensor products to generate the key and value pair and is thus limited in efficiency.\\n\\n3. ViSNet [c] (in Nature Communications: 05 January 2024, preprint at arXiv:2210.16518): In Eq. (6-7) of [c], the authors introduced the connection between inner-products and Legendre polynomials (through mathematical common sense). But in methodology, they only use the 1st-degree steerable features (Cartesian vectors) to build their models.\\n\\n4. SO3KRATES [d] (in Nature Communications: 06 August 2024, preprint at arXiv:2309.15126): In the SO3KRATES technique in Eq. (14), the CG coefficient is used in such a form, but it is equivalent to the inner-products of high-degree steerable features. This paper proposed a Euclidean transformer and achieved nice performance in property prediction tasks (e.g. rMD17 and MD22).\\n\\n5. 
HEGNN [e] (in NeurIPS 2024, preprint at arXiv:2410.11443): While studying symmetric geometric graphs, the authors proposed an equivariant GNN model that uses the scalarization trick (i.e., inner product) to introduce high-degree steerable features in Eq. (6) of [e], and pointed out the connection between this approach and SO3KRATES. They used a method similar to Deepsets and Legendre polynomials to prove that this method can recover the information of all angles between each pair of edges, achieved good performance in dynamics tasks (e.g. N-body and MD17), and demonstrated that the introduction of high-degree steerable features can make the model more robust.\\n\\n[a] SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks\\n\\n[b] https://docs.e3nn.org/en/stable/guide/transformer.html \\n\\n[c] Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing\\n\\n[d] A Euclidean transformer for fast and stable machine learned force fields\\n\\n[e] Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?\\n\\nIn subsequent comments, I will use the names of the above models without citation.\\n\\n---\\n> **[First Comment, Rating 3, Confidence 5, with 4/1/4 of Soundness/Presentation/Contribution]**\\n\\nThe model and experiments in this paper are excellent and deserve a clear acceptance or even an award (8-10), but the terrible presentation of this version is totally unworthy of such an achievement. If the author can improve the presentation, I will be very happy to increase the score in the rebuttal. I am very confident that this is a paper that will have a profound impact on the entire field, so I hope it will be accepted in a very perfect manner.\\n\\n>**[Second Comment, Rating 10, Confidence 5, with 4/2/4 of Soundness/Presentation/Contribution]**\\n\\nThe authors have provided a sincere and thoughtful response, addressing most of my concerns. 
The remaining questions, primarily related to experimental hyperparameters (e.g. higher-degree steerable features), would likely require significant time to resolve and may not be feasible during the rebuttal period, which I fully understand. Regarding the shortcomings in the presentation originally raised, the revised version has been significantly improved, and the motivation and logic are now clearer and more coherent. Although the presentation still needs further improvement, it is no longer worth over-focusing on during the discussion. Overall, these minor details do not detract from the paper\u2019s value. I therefore give a **strong accept** recommendation.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"For the increasingly important discipline of AI for Science, dealing with equivariance is undoubtedly very important. According to the classification of [f], there are two mainstream models: scalarization-based models and high-degree steerable models. The timeline I introduced in the summary (and this article) is actually doing something very valuable, that is, bridging the gap between these two types of models.\\n\\nIn fact, the only papers that truly achieve this are SO3KRATES, HEGNN, and GotenNet (this paper). Although the core formulas of these three works are equivalent, their publication dates are close together\u2014especially since HEGNN's preprint appeared after the ICLR submission deadline. I believe this is merely a case of normal concurrent discovery, and therefore I recognize the originality of this paper. Moreover, equivariance is a very strong constraint, and I would even guess that this form is the only feasible form, which makes the contribution seem small in terms of formula form, but in fact it is a very important contribution.\\n\\nMoreover, these three works exhibit significant differences. 
Both SO3KRATES and GotenNet employ equivariant attention, but the inner product formulation used in GotenNet is more concise and yields far better results than SO3KRATES. While both HEGNN and GotenNet point out the inner product formulation, HEGNN\u2019s contributions lie more in theoretical research and model framework, with experiments focusing on dynamical problems, creating a complementary relationship with GotenNet. From my perspective, although GotenNet is on par with these two other works and shares an equivalent formulation, its superior performance makes it an excellent contribution.\\n\\n[f] A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications\", \"weaknesses\": \"I think the whole article is amazingly good except for the presentation, and there is no major weakness. Here I mainly give the shortcomings of the presentation that I think need to be improved. For other parts (such as models and experiments), there are only some suggestions for the author to choose whether to adopt (see questions).\\n\\n> **W1. Lack of a good review of relevant literature.**\\n\\nThis article should introduce several works in the timeline I sorted out in the summary section, especially SO3KRATES and HEGNN, which should be introduced in detail. The results of SO3KRATES should be added to the experimental results for easy comparison (although GotenNet outperforms SO3KRATES).\\n\\nFor models such as eSCN and EquiformerV2 that use spherical activation functions, a simple comparison should be made. For example, the use of the Fibonacci grid requires a large number of sampling points, which brings a large constant in complexity, resulting in a theoretical complexity of $\\\\mathcal{O}(L^3)$, which is actually quite time-consuming (and this can only achieve quasi-equivariance).\\n\\n> **W2. Weird math definitions and confusing math symbols.**\\n\\nI don't know what the meaning of Def. 1, Theo. 1-2 in the main text is. 
I think it is just to make it look like there is some theory. Theo. 3 and Prop. 1, which have some meaning, are too trivial. They can be explained in one sentence: \\\"The product of an invariant scalar and other steerable features does not change its equivariant form.\\\" However, the key initialization by spherical harmonics is not only placed in the appendix, but also a clear formula is not given (while Eq. (11) in SO3KRATES and Eq. (5) in HEGNN are clearly written). I suggest replacing this with a discussion of the model architecture. In fact, it is a very valuable point to explain why GotenNet performs so much better than SO3KRATES with a similar architecture. Theoretical analysis may be very difficult, so it is not required, but it would be great if the authors could come up with some engineering tricks.\\n\\nMoreover, this article will surely have many followers in the future, and a good symbolic system will help future generations understand. The number of superscripts is sometimes one, sometimes two (though it has been explained), and the letters that appear are similar (the $l$ for the number of layers and the $L$ for high-degree steerable features are used at the same time), which is confusing. In addition, the current symbolic representation cannot well show which are invariant scalars and which are high-degree steerable features. It is recommended that the author adopt the HEGNN symbol system (e.g. using $\\\\Delta\\\\boldsymbol{h}$ to represent inter-layer updates, and using fonts such as $\\\\tilde{\\\\boldsymbol{v}}^{(l)}$ to represent $l$-degree steerable features).\\n\\n> **W3. Others.**\\n\\n- It is recommended to unify the terminology of the article, such as high-degree steerable features, and not to mix order and degree, otherwise it always gives readers the feeling that the authors are studying multi-body interactions. \\n\\n- There are too many crossing lines in Fig. 2 (b-d), which makes it very difficult to understand. 
Can it be simplified?\\n\\n- The rotation matrix $R$ in Line 879 (in fact, it should be emphasized that it is an orthogonal matrix) should be the Wigner-D matrix $\\\\boldsymbol{D}^{(l)}(r)$. In addition, the inversion factor should be multiplied according to the parity of the vector itself. For details, refer to [g].\\n\\n[g] https://docs.e3nn.org/en/stable/api/o3/o3_irreps.html\", \"questions\": \"Before anything else, is the author willing to provide an anonymous link to the project as soon as possible (this will not affect my subsequent rating increase) so that the results can be reproduced?\\n\\n> **Q1. Why use the same weight for steerable features of different degrees (in Eqs. (7) and (11))? Can they be different?**\", \"note_that_there_are_several_formula_items\": \"$o\\\\_{ij}^d\\\\circ r\\\\_{ij}^{[1,L\\\\_{max}]}$, $o\\\\_{ij}^t\\\\circ h\\\\_{j}^{l,[1,L\\\\_{max}]}$, $m\\\\_2\\\\circ h^{l,[1,\\\\hat{L}\\\\_{max}]}\\\\bf{W}\\\\_{vu}$. They seem to be multiplied by the same coefficient. Steerable features of different degrees represent different orthogonal bases, so their weights should not be the same (as HEGNN does). Can the authors change it to have different weights for each degree and test the performance of the corresponding model?\\n\\n> **Q2. Why does Eq. (9) use a permutation-invariant operator?**\\n\\nThe results of different degrees should be ordered, so there seems no need for permutation invariance. What are the benefits of using permutation-invariant operations?\\n\\n> **Q3. Is it possible to explore the expressive power of GotenNet using high-degree steerable features?**\\n\\nOne of the great benefits of this model is that the complexity grows slowly with the feature degree. Can authors show the experimental results of using higher-degree features (e.g. $L\\\\_{\\\\max}\\\\in\\\\\\\\{6,8\\\\\\\\}$)? In fact, similar HEGNN and EquiformerV2 have demonstrated the benefits of using such higher-degree features.\\n\\n> **Q4. 
Is it possible to explore the expressive power of GotenNet on large-scale geometric graphs and dynamics tasks?**\\n\\nIs it possible to add speed attributes like EGNN so that GotenNet can be applied to dynamics tasks? In addition, it is easy to associate the fast multipole method with the possibility that high-degree representations may have good effects on large-scale geometric graphs. Can the authors try the effect of GotenNet on large-scale geometric graphs (e.g. 100-body in HEGNN)?\\n\\n> **Q5. Is it possible to test the effect of $L\\\\_{\\\\max}=1$?**\\n\\nAlthough the benefits of high-degree steerable features are obvious, in many cases a low-degree model is good enough. Can authors give a case where only Cartesian vectors ($L\\\\_{\\\\max}=1$) are used?\\n\\n> **Q6. Is it possible to provide an e3nn version of GotenNet implementation later?**\\n\\nFrom the full text, I guess the author may not know the e3nn library, which can easily implement the interaction of multi-channel high-degree steerable features. I believe it will be easier for everyone to follow the work of this article. However, this project may be very large, and I hope the author can provide this version after the article is accepted.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Since this comment involves the code of this article, if it is not suitable for public discussion, please remind me to set the public scope.\\n\\nI checked the author's code, successfully ran it and extracted several indicators on QM9 to verify the effect, which is consistent with the results reported in the article. I have to say that the code given by the authors is very neat and worth learning, and I have benefited a lot from it. I noticed that the TensorInit class in src/models/components/ops.py is actually based on e3nn. 
I suggest that the authors cite e3nn's software library or their arxiv in this article.\\n\\nRegarding the presentation part, I previously suggested using high-degree instead of high-order, and the authors have made changes. However, high-order is still written in several newly added places (Line. 1150 and 1174). In addition, I do not recommend using $\\\\otimes$ to represent element-wise operations, because it is very easy to confuse with CG tensor product, although I have not thought of any better symbolic replacement at present.\\n\\nFinally, regarding the name 'Hybrid geometric models', I think it may not be obvious enough. Maybe 'Sph-scalarization models' would be better? Because all high-degree steerable features are initialized from spherical harmonics, and the core is the inner product scalarization (SO3KRATES is actually the modulus length). Then the authors can use this name to introduce the three models in section 2.2.\"}", "{\"comment\": [\"It is recommended to mark the changes in the PDF file with special colors, and use Anonymous Github (https://anonymous.4open.science/) for anonymous code links.\", \">**D1. The presentation needs further improvement.**\", \"The blank lines between Line 54 and 55 are probably due to the settings in Fig. 1. You can adjust the blank lines by setting the parameters of the **wrapfigure** environment.\", \"Bold font. It seems that all channel values that are not 1 (including multi-channel scalars, matrices, or high-degree steerable features) need to be bolded using the **\\\\bm** command.\", \"The use of $\\\\Delta$. In Eq. (8), in HEGNN, $\\\\Delta \\\\boldsymbol{h}$ seems to mean the **residual** (i.e. the output $\\\\boldsymbol{h}^{(k+1)}$ of $(k+1)$-th layer is the the output $\\\\boldsymbol{h}^{(k)}$ of $k$-th layer plus the residual $\\\\Delta \\\\boldsymbol{h}$).\", \"It is recommended to use $L_{\\\\max}$ instead of $L_{max}$ (i.e. use \\\\max command).\", \"If authors can't find a suitable symbol, such as in Eq. 
10, you can use $\\\\texttt{Agg}$ (i.e. **\\\\texttt{Agg}**).\", \"Some formulas are missing symbols at the end (e.g. Eqs. (2,3,6,8,12)).\", \"Formula citations lack brackets. Authors may consider using packages such as **cleveref** to achieve automatic citations.\"]}", "{\"comment\": [\"Dear Dr. Taniai,\", \"Thank you for your thorough review and continuing engagement with our manuscript, especially during this Thanksgiving week. We truly appreciate your dedication to improving our work during the holiday period. We are **particularly grateful for your endorsement of the paper's clarity and your suggestions that will help further improve the final version**.\", \"We will refine our notation throughout Section 3 to enhance clarity and consistency. Regarding your specific comments:\", \"We will improve the notation based on your suggestions to make the equations more precise and consistent.\", \"We will maintain our original notation using for element-wise operations, and have added explicit clarification about its broadcasting behavior in the manuscript to ensure clarity.\", \"We will introduce $\\\\tilde{\\\\cdot}$ notation in Section 3.1 to consistently denote steerable features containing $l$-degree features with $(1+2l)$ components for each $l$ until $L_{\\\\max}$.\", \"We will clarify dimensions throughout Section 3.2, including $\\\\boldsymbol{m}\\\\_i \\\\in \\\\mathbb{R}^{d_{ne}}$, $\\\\boldsymbol{z} \\\\in \\\\mathbb{R}^{|\\\\mathcal{Z}|}$, and $\\\\boldsymbol{t}\\\\_{ij,\\\\text{init}} \\\\in \\\\mathbb{R}^{d_{ed}}$.\", \"We will update the function definitions after Equations 5 and 7 to $\\\\gamma_v, \\\\gamma_s: \\\\mathbb{R}^{d_{ne}} \\\\to \\\\mathbb{R}^{S \\\\cdot d\\\\_{ne}}$ to maintain consistency with vector representations and resolve notation ambiguity. 
We will also improve equation clarity by adjusting parentheses placement in Eq 3: $\\\\boldsymbol{t}\\\\_{ij, \\\\text{init}} = (\\\\boldsymbol{h}\\\\_{i, {\\\\text{init}}} + \\\\boldsymbol{h}\\\\_{j, {\\\\text{init}}}) \\\\circ \\\\Bigg(\\\\sigma\\\\Big(\\\\mathrm{LN}\\\\big(\\\\varphi(\\\\tilde{\\\\boldsymbol{r}}^{(0)}\\\\_{ij})\\\\mathbf{W}\\\\_{\\\\text{erd}}\\\\big)\\\\Big)\\\\mathbf{W}\\\\_{\\\\text{eru}}\\\\Bigg).$\", \"We keep $\\\\boldsymbol{\\\\alpha}\\\\_{ij}$ in bold as it is a vector $\\\\boldsymbol{\\\\alpha}\\\\_{ij} \\\\in \\\\mathbb{R}^c$ containing attention weights for each head, where $c$ is the number of attention heads.\", \"We will adopt the $\\\\gets$ symbol for update equations where appropriate and improve Figure 2's caption clarity.\", \"We will incorporate your excellent high-level overview of the architecture into Section 3, which indeed helps readers better understand the components before diving into details.\", \"All these improvements will appear in the final version of our manuscript. Thank you again for your detailed feedback and commitment to enhancing the clarity of our work!\"]}", "{\"comment\": \"Dear reviewer bNe3, I would like to express my opinion on your second question. In theory, it is better to use high-degree steerable features. This is not only the consensus in the industry [a], but also discussed in articles such as HEGNN. The core contribution of this paper is to use the inner product of high-degree steerable features instead of the tensor product, thereby reducing the complexity of $\\\\mathcal{O}(L^6)$ to $\\\\mathcal{O}(L^2)$.\\n\\n[a] Official Commentby Reviewer ZLS9. 02 Dec 2024, 23:30. SE(3)-Hyena Operator for Scalable Equivariant Learning. Submitted to ICLR 2025.\"}", "{\"title\": \"Questions 5-6\", \"comment\": \"> **Q5. Is it possible to test the effect of\\u00a0Lmax=1?**\\n\\nWe thank the reviewer for this important question about the impact of different $L_{max}$ values. 
We have evaluated GotenNet with $L_{max}=1$ on the QM9 dataset and found interesting performance/efficiency trade-offs:\\n\\nWhile $L_{max}=2$ generally improves performance across most targets, for some properties like $C_{\\\\nu}$ and ZPVE, $L_{max}=1$ performs equally well (and slightly better on ZPVE: 1.159 vs 1.163, though this difference might be attributed to training variance). This suggests that for certain properties, Cartesian vectors alone may be sufficient, offering potential computational savings.\\n\\nWe will include a comprehensive comparison of $L_{max}=1$ versus $L_{max}=2$ for all targets to help readers make informed choices based on their specific tasks and efficiency requirements.\\n\\n> **Q6. Is it possible to provide an e3nn version of GotenNet implementation later?**\\n\\nWe thank the reviewer hFv3 for suggesting an e3nn implementation of GotenNet. We agree this would be valuable for the research community and commit to providing an e3nn version of our framework at the time of publication. While the rebuttal period is too short to implement and verify this version, we will ensure it's available alongside our primary implementation.\"}", "{\"comment\": \"Dear Dr. Taniai,\\n\\nThank you for your detailed feedback on our manuscript. As authors, we are committed to making our research clear and understandable for future readers. We truly appreciate your constructive feedback and keen interest in our work.\\n\\nWe have revised the manuscript to align with the notation system suggested by reviewer hFv3 (1-1, A-2, A-3, A-4). Most notably replacing $\\\\boldsymbol{\\\\tilde{v}}$ with $\\\\boldsymbol{\\\\tilde{X}} \\\\in \\\\mathbb{R}^{(1 + 2l)\\\\times d_{ne}}$ for improved clarity. We have also revised our manuscript to clarify (1-3, 2, 3, A-1, A-5, A-8).\\n\\n(1-2, 1-4) The value of $S$ is determined by the total number of scalar outputs. 
Specifically, $\\\\\\\\| \\\\\\\\{ \\\\mathbf{o}^{s}\\\\_{ij} \\\\\\\\}\\\\\\\\| + \\\\\\\\|\\\\\\\\{\\\\mathbf{o}^{d, (l)}\\\\_{ij}\\\\\\\\}^{L\\\\_{\\\\max}}\\\\_{l=1}\\\\\\\\| + \\\\\\\\|\\\\\\\\{\\\\mathbf{o}^{t, (l)}\\\\_{ij}\\\\\\\\}^{L\\\\_{\\\\max}}\\\\_{l=1}\\\\\\\\| = 1 + L\\\\_{\\\\max} + L\\\\_{\\\\max} = (1 + 2L\\\\_{\\\\max})$. While it happens to match the number of components at degree $l$, this is coincidental. The intuition for decomposing $\\\\boldsymbol{o}$ into $\\\\boldsymbol{o}^s, \\\\boldsymbol{o}^d, \\\\boldsymbol{o}^t$ is to denote their usage, with all components being invariant scalars. The scalar $\\\\boldsymbol{o}^s$ updates the scalar representations $\\\\boldsymbol{h}$, while for each degree $l$ (where $l \\\\in [1, L\\\\_{\\\\max}]$), the coefficients $\\\\boldsymbol{o}^{d, (l)}\\\\_{ij}$ and $\\\\boldsymbol{o}^{t, (l)}\\\\_{ij}$ scale the $l$-degree $\\\\tilde{\\\\boldsymbol{r}}^{(l)}\\\\_{ij}$ and steerable features $\\\\boldsymbol{\\\\tilde{X}}\\\\_{j}^{(l)}$ respectively. For detailed performance comparisons between separate and shared coefficients, we refer you to our discussion with reviewer hFv3, which will be included in our appendix.\\n\\n(6) The key distinction is that $\\\\texttt{Agg}$ is not required to be permutation invariant, unlike $\\\\bigoplus$.\\n\\n(7) Your intuition about $\\\\boldsymbol{m}\\\\_1$ and $\\\\boldsymbol{m}\\\\_2$ being invariant is correct. $\\\\textbf{W}\\\\_{vu}$ could theoretically differ without compromising equivariance, we opted for shared weights for simplicity, as our experiments showed no significant performance advantage with separate weights.\\n\\nWe hope these clarifications help explain the technical aspects of our work. Please don't hesitate to reach out with any additional questions or suggestions - we welcome further discussion to ensure our work is as clear and useful as possible!\"}" ] }
5wuZyG1ACs
Archon: An Architecture Search Framework for Inference-Time Techniques
[ "Jon Saad-Falcon", "Adrian Gamarra Lafuente", "Shlok Natarajan", "Nahum Maru", "Hristo Todorov", "Etash Kumar Guha", "E. Kelly Buchanan", "Mayee F Chen", "Neel Guha", "Christopher Re", "Azalia Mirhoseini" ]
Inference-time techniques are emerging as highly effective tools to enhance large language model (LLM) capabilities. However, best practices for developing systems that combine these techniques remain underdeveloped due to our limited understanding of the utility of individual inference-time techniques and the interactions between them. Additionally, efficiently and automatically searching the space of model choices, inference-time techniques, and their compositions is challenging due to the large design space. To address these challenges, we introduce Archon, a modular framework for selecting, combining, and stacking layers of inference-time techniques to construct optimized LLM systems for target benchmarks. Rather than relying on a single LLM called once, we leverage a diverse set of LLMs and inference-time techniques, creating LLM systems greater than the sum of their parts. Archon defines an extensible design space, encompassing techniques such as generation ensembling, repeated sampling, ranking, fusion, critiquing, verification, and unit testing. It transforms the problem of building LLM systems into a hyperparameter optimization objective. Given the available LLMs, inference-time techniques, and compute budget, Archon utilizes hyperparameter search techniques to discover optimized architectures for target benchmark(s). We evaluate Archon architectures across a range of instruction-following, reasoning, and coding benchmarks, including MT-Bench, Arena-Hard-Auto, AlpacaEval 2.0, MixEval, MixEval Hard, MATH, and CodeContests. Archon architectures outperform frontier models, such as GPT-4o and Claude 3.5 Sonnet, on these benchmarks, achieving an average accuracy increase of 15.1 percentage points by using all available LLMs.
[ "inference-time techniques", "test-time scaling", "machine learning", "natural language processing" ]
Reject
https://openreview.net/pdf?id=5wuZyG1ACs
https://openreview.net/forum?id=5wuZyG1ACs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xqWb0MsB5v", "pdKniAUXBr", "Ie0q5x6E9x", "A5Wa4Jx24T", "8ipO123jmh", "4h6lRsudDq" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "decision", "meta_review" ], "note_created": [ 1730686387960, 1730596791046, 1731197790431, 1730695018950, 1737523608643, 1734820505852 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3939/Reviewer_2Lsw" ], [ "ICLR.cc/2025/Conference/Submission3939/Reviewer_w3od" ], [ "ICLR.cc/2025/Conference/Submission3939/Reviewer_YLS1" ], [ "ICLR.cc/2025/Conference/Submission3939/Reviewer_q7H3" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3939/Area_Chair_bEqa" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a NAS-like approach to assembling a hodge-podge of LLM inference-time techniques for a target test benchmark. The resulting architecture, called ARCHON, is able to improve by scaling the number of LLM samples/models used as a subunit. Performance is evaluated in a number of benchmarks. The paper overall studies \\\"meta architectures\\\" where specialized LLMs are the unit of computing. Search is done with a classic Bayes optimization algorithm with a Gaussian process.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The approach performs well, both in the closed/open model settings, and can produce combinations of open models that reach close to closed model performance\", \"The approach is relatively inexpensive, using approximately 40x inference calls on a single query (though some opinions may differ on whether this is inexpensive)\", \"The resulting architectures found in the appendix are relatively simple, and scaling is easy for certain components\"], \"weaknesses\": [\"The proposed approach is not quite comparable to the baselines in terms of compute. While ARCHON uses 30+ inference calls, other LLM systems are given just one. 
It would be fairer to use any single building block, scaled up (if possible) to approximately the same inference cost, to demonstrate that the combination is truly driving the performance gains as opposed to the # of inference calls. If performance was better even when giving the baselines similar compute, this could help dampen the briefly discussed limitation of additional inference calls in the conclusion.\", \"Most of the LLM components in section 3.1 do not cite previous work (only the generator does). It would be good to describe what these modules are based on, especially since this paper's contribution is centered around the combination and not the proposal of new parts.\", \"No standard errors reported, and other details are light/missing (i.e. the specific search algorithm)\"], \"questions\": [\"Figure 5's table is somewhat confusing. While ARCHON results appear to have two sets of results (general and task specific), the other LLM systems are not categorized at all. Also, the inference calls are not close to each other---is there a reason why one cannot give the baselines a similar level of compute?\", \"Almost the entirety of the search algorithm is pushed to the appendix A6, which reads like a textbook overview of classic Bayesian optimization with a Gaussian process. This section should really be edited to state what the authors do, not generically about what Bayesian optimization is. For example, Gaussian processes are described as an example of a surrogate, and examples of acquisition functions are given, but the specific one used in the paper for the given approach is not described in A6. What is the specific algorithm being used?\", \"The set of rules may make architecture building with these LLM components as building blocks a bit more complex than traditional architecture search. 
In fact, with all of these rules, the search may end up being mostly parameter search instead of true architecture search, if the overall architecture is already mostly defined by the rules. Are there examples of different \\\"paradigms\\\" of combinations that arise from ARCHON? or are they all mostly the same?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"ARCHON is a modular framework that addresses the challenge of designing efficient and effective inference-time architectures for large language models (LLMs). The framework focuses on optimizing various inference-time techniques, such as ensembling, ranking, fusion, critiquing, verification, and unit testing, to improve LLM performance on tasks like instruction-following, reasoning, and coding. ARCHON employs a hyperparameter optimization approach called Inference-Time Architecture Search (ITAS) using Bayesian optimization to explore the vast design space and select the most effective configuration. The paper demonstrates that ARCHON outperforms existing state-of-the-art models and inference-time architectures, achieving significant performance gains across multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"## Comprehensive Framework Design:\\nThe ARCHON framework introduces a detailed and modular architecture that provides a systematic way to apply and optimize inference-time techniques for large language models. This innovation addresses a critical gap in current methodologies, which often focus on singular techniques. ARCHON's approach is especially compelling in its structured, neural-network-inspired layering of methods, allowing for both parallel and sequential inference-time operations. 
The paper meticulously defines each component, such as Generators, Fusers, Rankers, Critics, and Unit Test Generators, and explains how they interact within the framework to optimize results.\\n## Substantial Performance Improvements: \\nThe quantitative results are striking. ARCHON achieves an average increase of 15.1 percentage points in accuracy across benchmarks, outperforming models like GPT-4 and Claude 3.5 Sonnet. This improvement is not superficial but demonstrated over a diverse set of benchmarks, including instruction-following (e.g., MT-Bench, AlpacaEval 2.0), reasoning (e.g., MixEval, MixEval-Hard), and coding tasks (e.g., CodeContests). The use of detailed performance metrics, including Pass@1 for coding and win rates for instruction-following, adds credibility to these claims.\\n## Detailed Analysis of Technique Interactions: \\nThe paper rigorously examines how different inference-time techniques complement each other. For example, the Ranker is shown to improve response quality by 5.9 percentage points when paired with the Critic, and adding multiple layers of Fusers increases the richness of responses in instruction-following tasks. These insights demonstrate a nuanced understanding of the interplay between techniques, which is often lacking in similar studies.\\n## Effective Use of Hyperparameter Search: \\nThe use of Bayesian Optimization for Inference-Time Architecture Search (ITAS) is a strong point, enabling efficient exploration of a vast design space. The paper details how ITAS outperforms random and greedy search, finding optimal configurations in 96.0% of iterations while reducing the number of evaluations by over 88%. This shows a well-optimized and strategic approach to architecture search.\\n## Extensibility and Practical Utility: \\nARCHON is designed as a plug-and-play framework, allowing practitioners to easily incorporate new LLMs and inference-time techniques. 
This adaptability makes it highly valuable for real-world applications, from academic research to industry use cases. The authors emphasize its potential for wide adoption by providing detailed guidelines on how to customize the framework for different tasks and compute budgets.\", \"weaknesses\": \"The authors themselves acknowledge that some components, such as the Verifier, underperform on simpler tasks, yet there is no clear mechanism or threshold for disabling or skipping them. This omission exposes a weakness in the framework's design: it can introduce unnecessary computational overhead and inefficiency. Without a systematic way of assessing when a particular component's contribution is negligible, the design of ARCHON remains incomplete. Moreover, the paper does not propose mechanisms for adaptive disabling or performance-based activation of components, which would go a long way toward improving the framework's efficiency.\\n\\nThe high variability that ARCHON exhibits across different tasks points to a major design shortcoming. Despite its broadly general setup, the framework generalizes poorly without significant adjustments and is therefore not robust. The authors missed the opportunity to incorporate, or at least discuss, methods that might improve generalization, such as meta-learning approaches or task embeddings. Because of its heavy reliance on manual tuning, the framework remains far from real-world applications where task conditions cannot be predicted. This limitation greatly reduces ARCHON's appeal and practicality, especially for users who need a universally applicable solution.\\n\\nThe ARCHON framework's dependence on empirical testing and pre-filtered configurations points to a critical limitation: it cannot adaptively change the selection of components based on real-time performance feedback or task-specific features. 
This rigidity suggests inefficiency: ARCHON may never be optimal on every task without extensive trial-and-error tuning. A more adaptive strategy, such as reinforcement learning or meta-learning, could be investigated to let the system adjust its configuration automatically. The authors do not discuss why such a strategy was not considered, nor the feasibility challenges involved, leaving a wide gap in the paper's treatment of scalability and adaptability.\", \"questions\": \"Latency vs. Performance Trade-Off: Given the confirmed high latency and compute cost, can ARCHON be adapted for real-time applications without sacrificing significant performance? If not, how do you foresee its practical deployment in latency-sensitive environments?\", \"task_generalization\": \"While ARCHON performs well on a variety of benchmarks, there seems to be variability in how effective certain components are across different tasks. Have you considered any techniques for improving generalization, such as meta-learning approaches or task embeddings, to make component selection more robust across diverse problem sets?\", \"dynamic_adaptation\": \"Your methodology for component selection heavily relies on empirical testing and pre-filtering configurations. Have you considered implementing a more adaptive or dynamic strategy that could optimize component selection in real-time based on task-specific features? If not, what are the main challenges or limitations you foresee in developing such a strategy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents ARCHON, a framework for stacking different LLM components to leverage test-time compute to improve performance. 
While a more general architecture is introduced, the actual pipeline first generates K candidate solutions and then passes them through a number of fusion layers before a final verifier/unit test layer. Many hyperparameters are introduced to control the architecture and then they are optimized either with Bayesian optimization or greedy search on a subset (20%) of the test set. This yields a pipeline that substantially improves performance on a variety of benchmarks like MT-bench and MATH.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The results seem to be generally strong. Even if it requires many calls to the models to yield the final result, the reported performance numbers on a variety of benchmarks are impressive, often matching or exceeding state of the art performance.\\n\\n2. The authors produce an open sourced framework so that others can reproduce and build on the results.\", \"weaknesses\": \"1. There are some weird formatting issues with the paper. All the characters seem to be compressed compared to other submissions I am reviewing. But also, the bottom margins are substantially enlarged.\\n\\n2. The comparisons often don't seem to be very fair and baselines are lacking. We should match the FLOP/token/dollar budgets between different methods to see which methods are most effective at turning additional compute into performance. The results are often confounded by comparing across substantially different budgets. For example, comparing elaborate inference techniques with single model calls. This is interesting to show that open source models can be augmented to out-perform a closed source model, but for good science we would want to see an apples-to-apples comparison of different inference techniques on a fixed model or models. Fixing the budget in this comparison is important. 
For example, in the paper in Figure 3 we see a comparison of different ablations, but some of them require much more compute than others since they add in extra model components. It would be useful to compare with a fixed budget where e.g. if we add fusers, then we need to reduce K to keep the total FLOPs of the system constant.\\n\\n3. It is not clear what is the real novelty of the method or what the underlying principles of the approach are. Each component of the model alone is a standard component from the literature, and there already exist methods for combining these sorts of things and optimizing them like DsPy and mixture of agents. The main contribution seems to be setting up a particular way to parameterize a hyperparameter optimization problem, but it is not clear why this particular version was chosen. There are a variety of ablations showing that adding various components can improve performance, but there does not seem to be much insight into what is actually going on here or how the specific hyperparameter problem was chosen.\\n\\n4. The presentation does not track the actual method in the experiments very well, making it somewhat confusing. The presentation in 3.2 is very broad encompassing many architectures. Then 3.3 seems to choose one very specific architecture, which all the experiments use, but it is not clear exactly what this architecture is. 
Perhaps it would be more clear to just present the actual method used in the experiments directly, and then discuss generalizations as future work rather than flag-planting something that is not actually attempted.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the composition and interactions of popular inference-time techniques in the field of large language models (LLMs) to better understand their relationships and performance across various tasks. It introduces a framework, ARCHON, designed to combine mainstream inference-time methods and select the optimal set of models for specific benchmarks or tasks. The proposed LLM system demonstrates significant improvements across a variety of datasets, outperforming any single LLM, including the most competitive models. The paper also provides a thorough empirical analysis of how various factors influence the outcomes.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper provides an insightful summary of existing inference-time techniques in the LLM field, distilling their concepts into well-structured building blocks that establish a robust paradigm for constructing LLM systems. The proposed framework, ARCHON, demonstrates significant performance improvements on common benchmarks, highlighting its effectiveness. The results from the LLM component interaction experiments are also valuable in real-world practice. 
Additionally, the presentation is generally clear and fluid, facilitating a coherent understanding of the paper's work.\", \"weaknesses\": \"While this work has involved substantial effort, I recommend rejecting the paper for the following reasons:\\n(1) The paper offers few novel insights into inference-time techniques, and the main conclusions drawn from the experiments are rather trivial, relying primarily on parameter tuning without theoretical justification; (2) The experimental setup is neither practical nor comprehensive enough to fully demonstrate the interaction mechanisms of mentioned methods; (3) The proposed framework is of limited practical value due to the high computational costs associated with parameter searching and inference.\\n\\nThe paper contributes little to the understanding of the utility or interactions of inference-time techniques. The improvements in downstream tasks through the combination of techniques, as discussed in Section 4, are predictable and have been partly addressed in prior work. Simply combining these techniques and presenting performance results does not constitute a substantial contribution. Additionally, the analysis of different utilities lacks theoretical grounding. The claimed insights, such as those in Sections 4.4 and A.2, are simply descriptive observations from comparative experiments. As seen in A.2, under the specific settings (using only 70B+ open-source models), the results are unlikely to provide generalizable rules.\\n\\nRegarding the experimental setup, several points require clarification. See the question part for details.\\n\\nIn addition to these points, the proposed framework is significantly limited by its high computational cost. Even with parallel execution of LLMs within the same layer, the system takes five times longer to respond, not to mention the increased demand for computational resources and associated costs, as highlighted in Section A.5. 
Handling such a system for models of different architectures and sizes would be a great challenge to both hardware and operations-and-maintenance services in real-world applications. Given the flexibility of the proposed framework, it would be beneficial to consider budget and cost constraints when searching for the optimal architecture in future work. Providing a more cost-effective version of the general-purpose architecture would also enhance the framework's practicality.\", \"questions\": \"(1) In Figure 2, the example system architecture includes two GPT-4o generators in the first layer, yet Section 3.3 states that generators are selected from the top-K best models. Is there an inconsistency in the figure, or is there something I may have misunderstood?\\n\\n(2) It is puzzling that the paper does not include a lightweight version in the experiments, i.e., exclusively using models with fewer than 8B parameters. On one hand, in practical applications, it is more common to deploy a mixture of smaller models due to resource constraints, making this approach more valuable. On the other hand, the coexistence of models of different sizes might cause the ITAS algorithm to disproportionately favor larger models, potentially disregarding the smaller ones. This tendency is also evident in Appendix A.4, where almost all architectures rely predominantly on large models. Why is there no consideration for the exclusive use of small-sized models?\\n\\n(3) In Section 4.4, the construction strategy for the General-purpose ARCHON Architecture is unclear. What is meant by ``maximizing performance over all benchmarks''? 
Is ITAS performed sequentially on sampled subsets from each dataset, or only once on the concatenated sampled subsets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper combines different LLMs and inference-time techniques to obtain better performance than with one of them alone. The problem of selecting the right combination of LLM and inference-time techniques is viewed as a hyperparameter optimization problem and addresses it with an off-the-shelf Bayesian optimization method. Reviewers highlighted various strengths, including strong empirical results, often exceeding SOTA methods, and a detailed analysis of the interplay between different methods. Criticisms included the low novelty and high compute time, combined with the lack of meta-learning. The most serious concern for me for the decision to reject was that of overselling and overblown contributions by Reviewer YLS1. While I decided for rejection following most reviewers, I do see a lot of potential in this work and encourage the authors to continue this line of work and submit to the next venue.\", \"additional_comments_on_reviewer_discussion\": \"Various concerns were brought up and many of the them addressed. The high compute time is an issue, but the authors did provide a compute-matched evaluation. Reviewer YLS1's remarks on overblown claims and overselling were not effectively rebutted.\"}" ] }
5wmAfwDBoi
UI-Pro: A Hidden Recipe for Building Vision-Language Models for GUI Grounding
[ "Hongxin Li", "Jingran Su", "Jingfan CHEN", "Yuntao Chen", "Qing Li", "Zhaoxiang Zhang" ]
Building autonomous UI agents that automate user interactions with interfaces has long been a vision in the field of artificial intelligence. Central to these agents is the capability for UI element grounding, which involves accurately locating UI elements (e.g., buttons and links) based on referring expression, such as user intents and functionality descriptions. Developing these agents with robust grounding capabilities using vision-language models (VLMs) offers a promising path forward. However, a practical framework for creating VLMs with strong element grounding capabilities remains under-explored. To address this gap, we conduct systematic experiments within the design space of VLMs to uncover an effective recipe for building VLMs with strong UI element grounding ability. Firstly, we find that fine-tuning with general visual grounding tasks as a warming-up step mitigates the challenges of fine-tuning with downstream UI element grounding data. Next, we explore different fine-tuning sequences of UI grounding training data from various sources and find that a simple-to-complex fine-tuning curriculum can maximize data utility. Moreover, we find that scaling up the size of either the warming-up data or the UI grounding data in downstream fine-tuning significantly enhances UI element grounding accuracy. Lastly, we explore various image feature compression techniques and find that using a convolution-based compressor to compress UI sub-image features significantly enhances the grounding capabilities on high-resolution UI images. Integrating these insights, we successfully develop UI-Pro, an expert VLM that achieves state-of-the-art UI grounding accuracy with fewer parameters across multiple benchmarks. We hope this work serves as a valuable roadmap for researchers in the UI-VLM domain and inspires future research.
[ "Vision-language models; GUI understanding; Visual Grounding" ]
https://openreview.net/pdf?id=5wmAfwDBoi
https://openreview.net/forum?id=5wmAfwDBoi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pL40PBrAOI", "kS2o0FXFED", "jdsxFazZ3c", "EMV0opnPnw", "2V5tlVOdvh" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731657933334, 1729678959282, 1731123549728, 1730218502946, 1729090009227 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1832/Authors" ], [ "ICLR.cc/2025/Conference/Submission1832/Reviewer_JSx4" ], [ "ICLR.cc/2025/Conference/Submission1832/Reviewer_JXwL" ], [ "ICLR.cc/2025/Conference/Submission1832/Reviewer_ZrzV" ], [ "ICLR.cc/2025/Conference/Submission1832/Reviewer_qBF2" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The manuscript explores the design space for GUI-grounding vision-language models. It concludes that warming up with visual grounding tasks, using a simple-to-complex fine-tuning approach, scaling up data size, and utilizing effective image feature compression significantly enhance grounding accuracy for high-resolution UI images. These findings are supported by ablative experiments presented in the manuscript.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The manuscript is organized in a very clear and coherent manner.\\n2. The experimental results effectively validate the conclusions drawn.\", \"weaknesses\": \"1. Benchmark Selection: The manuscript focuses solely on grounding tasks for evaluation. While the primary contribution is exploring the design space for GUI-grounding MLLMs, it is essential to evaluate the grounding task within the context of an agent system. Improvements in a single component (e.g., the grounding part) do not necessarily translate to performance gains in the overall system, i.e., an autonomous UI agent that fulfills user intentions.\\n\\n2. 
Comparison with Specialists: There is a need for comparisons with other specialized models.\\n\\n3. Comparison with Other Base Models: Comparisons with other base models, such as InternLM, are necessary. It remains unclear whether techniques like visual grounding warm-up or feature compression are compatible only with LLaVA pre-trained representations or if they are generalizable across different base models.\", \"questions\": \"Please see the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a UI-Pro, a vision-language model (VLM) designed to enhance autonomous interaction with user interfaces. It addresses the core challenge of UI element grounding, which involves accurately identifying and interacting with UI elements like buttons and text fields based on contextual references. UI-Pro achieves improvements on grounding across benchmarks with fewer parameters than other leading models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The validation of results across multiple benchmarks, demonstrating that the proposed methods lead to substantial improvements in grounding accuracy compared to existing models.\\n\\n2. A detailed comparison of different data scaling and compression methods, providing clear evidence of their impact on model performance.\\n\\n3. The use of a warming-up phase with general visual tasks before fine-tuning on UI-specific data is a strategic innovation that enhances adaptability and performance.\", \"weaknesses\": \"1. Limited Exploration of Model Variants in Pre-Training and Fine-Tuning: The study could strengthen its contributions by including a comparative analysis of different pre-training tasks (e.g., using textual data combined with visual data, synthetic GUI elements).\\n\\n2. 
While UI-Pro is tested across multiple benchmarks, these datasets are primarily focused on GUI scenarios without covering diverse UI types (e.g., industrial control interfaces, mobile applications, and accessibility-focused UIs). \\n\\n3. The paper cautions against overfitting when scaling data in fine-tuning, yet it lacks in-depth analysis or metrics on model stability over different scales.\\n\\n4. The paper achieves competitive results with fewer parameters than other models, but it does not address the computational requirements and inference efficiency (e.g., time and resource cost) of deploying UI-Pro on real-world applications, especially those with limited processing power or memory.\\n\\n5. The paper demonstrates state-of-the-art grounding accuracy but does not include an in-depth error analysis, which would be valuable for understanding where the model's grounding might fall short.\", \"questions\": \"See details in 'Weakness' section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work aims to develop Vision-Language Models (VLMs) with robust UI grounding capabilities. The paper presents the following key experimental findings:\\n1. Incorporating a warm-up fine-tuning step with visual grounding tasks enhances model performance.\\n2. A simple-to-complex fine-tuning curriculum maximizes data utility, and scaling up the warm-up dataset further improves grounding accuracy.\\n3. 
Among various image feature compression strategies, a convolution-based compressor is found to be the most effective.\\nBased on these findings, the authors develop UI-Pro\\u2014a VLM fine-tuned to achieve strong grounding capabilities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work explores various design choices and configurations on both model and data for GUI grounding through comprehensive experiments and ablations, providing valuable insights for future research on enhancing GUI-based multimodal models. These findings contribute significantly toward advancing the development of effective UI agents.\", \"weaknesses\": \"1. The primary concern with this paper is that, while it presents several experimental findings specific to UI grounding, it lacks innovative design elements compared to existing vision-language models.\\n\\n2. Although UI grounding is important, it is only one aspect of UI agent functionality. UI navigation, which requires both grounding and action selection, is more critical for UI applications. By focusing solely on grounding, this paper limits its broader applicability.\\n\\n3. The paper makes several assumptions that raise concerns about the generalizability of its findings. The experiments are based on a specific VLM model (e.g., LLaVA), and it is unclear if these findings would hold for other VLM architectures.\\n\\n4. The paper does not cite or discuss Ferret-UI, despite Ferret-UI's use of local patches, which is conceptually similar.\\n\\n5. The model\\u2019s performance does not achieve state-of-the-art results.\\n\\n6. Regarding Finding 1, most current state-of-the-art VLMs (e.g., LLaVA-OV, Qwen-VL2, Phi-3.5-V) already demonstrate strong text-rich visual perception capabilities. This suggests that the proposed warming-up strategy may be unnecessary.\", \"questions\": \"1. 
In Table 2, when focusing on UI grounding performance on ScreenSpot, a zero-shot benchmark, there appears to be no significant difference among r4, r6, and r8.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces four independent strategies to improve the performance of vision-language models for GUI Grounding: (1) warm-up with general visual grounding task data, (2) organizing UI grounding training data in a simple-to-complex curriculum through multi-stage fine-tuning, (3) increasing the sizes of both warm-up and UI grounding data, and (4) using a lightweight convolution-based connector to compress visual features of high-resolution UI images, reducing computational cost to enable processing at the original ratio.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. All proposed findings are well-ablated\\n2. The proposed UI-Pro model achieves good performance using significantly fewer parameters\", \"weaknesses\": \"This paper reads like an experimental report, merely listing a series of methods proposed by others and reporting their best results. Hence, the paper suffers from considerable lack of novelty, as curriculum learning is a well-established field, the benefits of scaling data to improve performance are well-known, and the convolution-based connectors are also directly adopted from existing work. Moreover, the paper fails to provide meaningful discussions or insights into the purported findings, which appear to be extrapolations from inconclusive experiments, thus leading to considerable confusion.\\n> Finding 1: The author tries different types of data and concludes that grounding data on natural images is the most important warm-up data for UI Grounding because it yields the best performance. 
Without additional insights, many different types of data and data combinations not yet explored by the author are possible. For example, perhaps [Gnd on Natural Images] + [QA on Natural Images] or [Gnd on Natural Images] + [QA on Natural Images] + [Text Localizations] combinations could perform even better. It would be beneficial if the author could provide more insights to guide future curation or selection of data. Without such insights, little new knowledge can be gained from these experimental results. Can the author explain why grounding with data from different domains can help? Why not warm up with data from the same domain?\\n\\n> Finding 2: There is an additional dimension not ablated in Tab. 2. When using 2 or 3 stages of SFT with different data types, not only do the data types change, but the size of the data also changes. These two factors should be carefully segregated to draw conclusive results. For example, for r6, what happens if SeeClick data is also used for SFT-1 (a different set of data from SeeClick in SFT-2)?\\n\\n> Finding 3: The data for complex UI-grounding is an order of magnitude smaller than others. Without using the same data size, it's unclear how reliable the conclusion is that we should \\\"remain cautious of potential overfitting when fine-tuning with complex UI grounding data.\\\" Intuitively, I would expect that fine-tuning with data from the same domain should yield the best performance.\", \"questions\": \"1. Clarification on Fig. 4: Based on the descriptions in lines 355-366, the 5M results in Stage 1 should match the 212k results in Stage 2 and 625k results in Stage 3. However, in Fig. 4, why do the 5M results in Stage 1 (left) not match the 212k results in Stage 2 (middle) and the 625k results in Stage 3 (right)? Can the author kindly clarify?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5w8xpFWkns
Neocortical cell type classification from electrophysiology recordings using deep neural networks
[ "Raymond L Wang", "Sang Min Han", "Marta Agnieszka Gajowa", "Chunlei Liu" ]
Understanding the neural code requires identifying different functional units involved in the neural circuits. One way to identify these functional units is to solve a neuron type classification problem. For decades, current-clamp electrophysiology recordings have provided the means to classify the neurons based on subtle differences in action potential shapes and spiking patterns. However, significant variations in neuronal type definitions, classification pipelines, and intrinsic variability in the neuronal activities make unambiguous determination of neuron type challenging. Previous solutions to this electrophysiology-based cell type classification problem consisted of dimensionality reduction juxtaposed with clustering using hand-crafted action potential features. Recent discoveries have allowed genetics-based cell-type classifications, which have fewer ambiguities, but they are less practical in vivo and have even lower throughput. Leveraging the unprecedented ground truth data published in the Allen Institute Cell Types Database, which contains anatomical, genetic, and electrophysiological characterizations of neurons in the mouse neocortex, we construct a robust and efficient convolutional neural network (CNN) that successfully classifies neurons according to their genetic label or broad type (excitatory or inhibitory) solely using current-clamp electrophysiology recordings. The CNN is configured as a multiple-input single-output network consisting of three subnetworks that take in the raw time series electrophysiology recording as well as the real and imaginary components of its Fourier coefficients. Our single pipeline method is fast and streamlined while simultaneously outperforming a previous method. Furthermore, our method achieves classification with more classes using only a single current-clamp time series trace as the input. 
This end-to-end convolutional neural network-based classification method removes the need for hand-crafted features, specific knowledge, or human intervention for quick identification of the neocortical cell type with high accuracy, enabling interpretation of experimental data in a bias-free manner and understanding of a much broader scientific context.
[ "neuroscience", "electrophysiology", "cell type", "classification" ]
https://openreview.net/pdf?id=5w8xpFWkns
https://openreview.net/forum?id=5w8xpFWkns
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sauq54fYJO", "pbJKmbjLPz", "gZUDyZnjmK", "bbXlf4ihJg", "Psh2sPM9F1" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730418576393, 1730681715131, 1730558621811, 1730604339266, 1733287890164 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5082/Reviewer_dBhH" ], [ "ICLR.cc/2025/Conference/Submission5082/Reviewer_UBSu" ], [ "ICLR.cc/2025/Conference/Submission5082/Reviewer_3XJG" ], [ "ICLR.cc/2025/Conference/Submission5082/Reviewer_xJxj" ], [ "ICLR.cc/2025/Conference/Submission5082/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a CNN-based deep learning method for classifying neocortical cell types from in-vitro patch-clamp recordings. The model takes the waveform and the real and imaginary Fourier components of a single spike (within a short window) as input. It is evaluated on the Allen Cell Types Database using both binary and 5-way classification tasks. The authors compare their method primarily with (Ghaderi et al., 2018) and claim it as the state-of-the-art in performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a novel model architecture that uses both raw and Fourier-processed spike data for cell type classification.\\n2. The model trained on a binary classification task demonstrated effective data efficiency, requiring fewer data when transferred to the more challenging 5-way classification task.\", \"weaknesses\": \"Major Weaknesses\\n\\n1. The comparison with previous methods is limited, as the authors only benchmark against (Ghaderi et al., 2018). Other deep-learning-based methods for the same classification task, such as (Ophir et al., arXiv, 2023), are not included, which limits the ability to fully assess whether the proposed approach truly represents the state-of-the-art.\\n2. The performance comparison only uses overall accuracy. 
Given the imbalanced nature of the dataset, it would be more informative to report additional metrics, such as precision, recall, and F1 score, which are also commonly used in similar studies.\\n3. A potential strength of this work is that it avoids hand-crafted features, yet the authors do not empirically demonstrate whether their method significantly outperforms traditional approaches that rely on hand-crafted features.\\n4. The authors focus on the first spike in response to a short square pulse as input for classification, but it\\u2019s unclear why. Different neurons often exhibit distinct firing patterns with longer stimuli, including adaptation or irregular firing properties, which could provide additional classification-relevant information. A discussion on the scalability of the model to incorporate diverse response-stimuli pairs and the robustness of the model if subsequent spikes were used as input would improve clarity.\\n5. The authors mention that, given the same stimuli, the same neuron might exhibit variable responses, known as single-trial variability. Although the Allen Cell Types Database includes multiple trials per neuron for the same stimulus, it is unclear how the model accounts for or performs under such variability.\\n\\nMinor Weaknesses\\n\\n1. The manuscript could improve in clarity. For instance, presenting the ablation analysis in a table or figure format rather than in text would significantly improve readability.\\n2. The authors briefly mention the dataset\\u2019s imbalance problem. This is indeed a critical issue, especially as there are many subclasses within each cell type. 
Discussing how the model may handle or be impacted by this imbalance would provide useful insights for future work.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the challenge of classifying neuron cell types from electrophysiology recordings, specifically noting the scarcity of models and literature focused on using machine learning techniques for this classification beyond basic dimensionality reduction and clustering.\\n\\nThe authors introduce a convolutional neural network that achieves a low error rate on this task with minimal preprocessing. They also demonstrate the advantages of fine-tuning in handling class imbalance effectively.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This is a crucial problem with the potential to drive significant advancements in neuroscience as neuronal technologies continue to evolve. Given the recent annotation of the Allen Institute Cell Types Database, this issue is particularly worth addressing. The paper effectively explains the problem and highlights its importance.\\n\\nFrom the results, it is evident that the architecture presented in the paper works well for cell types with sufficient enough data (i.e. E and PV cells > 90% precision) without fine-tuning and minimal pre-processing, within a 50ms window of an action potential.\", \"weaknesses\": \"**Accuracies are not well-presented:**\\n\\n- The accuracy results should be organized into a table to improve clarity. Currently, the reader has to manually recall multiple accuracy values from related work, which disrupts readability.\\n\\n**N-fold cross-validation:**\\n- The work could greatly benefit from N-fold cross validation, especially considering that the multi-modal architecture is relatively trivial. 
It would greatly strengthen results and potentially avoid choosing bad hyper-parameters.\\n\\n**(Main criticism) Weak baselines:**\\n\\n- Ghaderi et al., 2018 is not a sufficient baseline for evaluating the performance of the proposed model. This work in particular, as stipulated in the paper, has not been evaluated on the same Allen Dataset. This makes it hard to gain perspective on the quality of the results.\\n\\n- We are in the age of neuro-foundational models; it would have been a good exercise to evaluate the proposed model against transfer benchmarks on the LaBram model (from last year's ICLR spotlight - Jiang et al., 2024). This, for example, is a model that has been trained on electrophysiological recordings, in particular, EEG data. Fine-tuning the model on the Allen Cell Types dataset would be a trivial benchmark.\\n\\n- I would guess that a neuro-foundational model like LaBram could do better with imbalanced datasets like that of the Allen dataset, due to the pre-training of the model on EEG data despite it not being single-cell recordings. I'd be intrigued to see how this compares against your model.\\n\\n**Some commentary**:\\n\\nThis is truly interesting work, and I believe that this has the potential to be quite useful in the field of neuroscience and NeuroAI. I think more work needs to be done in answering questions like, \\\"why not a transformer? Is it an overkill?\\\", \\\"how much data is enough data to solve this task?\\\". Your results suggest that a CNN does not need that much data (~500) to identify PV interneurons, which is an interesting find and aligns with the hypothesis that neuron groups possess distinct properties. I think this work (+ Allen Cell Dataset) makes some headway in understanding how to identify cell types but I think more time can be invested in thinking of appropriate baselines.\", \"questions\": \"It is unclear to me what are the implications of finetuning/pre-training on 5-class / 2-class. 
It seems as if pre-training on 2-class and fine-tuning on 5-class underperforms compared to training the network end-to-end on 5 classes directly, but the opposite effect is the case when pre-training on 5-class and fine-tuning on 2 classes.\\n\\n* 5-class: Exc, Pvalb, Sst, Ndnf, and Vip\\n* 2-class: Exc, Inh\\n\\nCould you clarify this result and especially its implications in light of your contributions to the literature?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses a CNN-based method to classify neuron types. Through empirical results, the authors demonstrate that using a 1D-CNN improves the classification performance on the dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Paper is easy to follow, writing is good.\", \"weaknesses\": \"This paper shows that using a 1D-CNN would help improve neuron type classification performance on the dataset. However, I am not convinced this contribution is significant enough to be presented at this conference. The only contribution is improved performance by using a 1D-CNN, which is not major.\", \"questions\": \"No question, as the work is straightforward.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a cell type classification model applied to the Allen cell types datasets. It presents a CNN model that uses multiple inputs, including the raw membrane potential and Fourier components, to classify different cell types. The CNN model was shown to be effective in cell type classification, outperforming an existing baseline model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper attempts to tackle an important problem of cell type classification in neuroscience.\\n2. 
The writing is clear and easy to follow.\", \"weaknesses\": \"1. This method requires more comprehensive evaluation on additional cell type classification datasets and against more baseline methods before we can conclude if it is state-of-the-art or useful.\\n2. The experimental setup is somewhat difficult to follow, and I am confused about how the training, validation, and test sets are partitioned. It may be important for the method to adopt more standard experimental practices commonly found in ML or computational neuroscience literature.\\n3. More qualitative examples are needed to help readers understand and assess the model's usefulness.\", \"questions\": \"Overall, I believe this method requires a more thorough evaluation and comparison with additional baselines before it can be accepted at ICLR. At this stage, the method is not fully mature and has potential for improvement in the future. I hope the following suggestions could help the author improve this paper.\\n\\n**Major:**\\n1. The author has only used the Allen cell type datasets. **Can the method be applied to other datasets such as [1]** to make sure it is not overfitting to a single dataset?\\n2. The author considered only one baseline and did not conduct actual experiments for direct comparison, as noted by the author's statement in the paper, \\u201cAlthough not necessarily a direct comparison of performance, Ghaderi et al. (2018) achieved an overall accuracy \\u2026\\u201d **Why didn\\u2019t the author perform a direct comparison with the method in Ghaderi et al. (2018)?** Additionally, **could the author include other baselines, such as PhysMAP [2] and a VAE-based method [3]?**\\n3. In Section 3.2, the author mentioned \\\"using the maximum validation accuracy over 100 epochs\\\" and stated that \\\"an 8:2 ratio training-validation data set split was used to select the optimal network model configuration.\\\" **I found the experimental setup and train-validation-test partitioning unclear. 
Could the author please clarify this?** The evaluation would be more valid if the author follows common procedures in ML or those adopted by previous work [2-3].\\n\\n**Minor:**\\n1. In the introduction, the author mentioned that \\u201cprevious approaches suffered in classification accuracy as they relied on AP shape, spiking pattern, or cell shape parameters that span a continuous feature space, which often do not have clear separation boundaries.\\u201d I find this statement confusing. Could the author elaborate on this point and explain why the proposed method is an improvement over previous approaches?\\n2. This paper uses raw time series data (membrane potential) and Fourier components. I wonder why single-neuron waveforms were not considered. Additionally, since the CNN model takes three types of inputs, could the author provide an ablation study on the impact of each input type on model performance?\\n3. In Section 3.4, why does the author use the proposed transfer learning strategy to address imbalanced classification? Why not consider a simpler approach, such as upweighting the minority class during training, which is often effective? I\\u2019m curious if there\\u2019s relevant literature that I might be overlooking.\\n\\n[1] Ye, Z., Shelton, A. M., Shaker, J. R., Boussard, J., Colonell, J., Birman, D., ... & Steinmetz, N. A. (2023). Ultra-high density electrodes improve detection, yield, and cell type identification in neuronal recordings. bioRxiv.\\n\\n[2] Lee, E. K., Gul, A., Heller, G., Lakunina, A., Jaramillo, S., Przytycki, P., & Chandrasekaran, C. (2024). PhysMAP-interpretable in vivo neuronal cell type identification using multi-modal analysis of electrophysiological data. BioRxiv, 2024-02.\\n\\n[3] Beau, M., Herzfeld, D. J., Naveros, F., Hemelt, M. E., D\\u2019Agostino, F., Oostland, M., ... & Medina, J. F. (2024). A deep-learning strategy to identify cell types across species from high-density extracellular recordings. 
bioRxiv.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to first express our sincere gratitude to the reviewers and the organizers for considering our submission to ICLR 2025. After careful consideration, we have decided to withdraw our submission in order to take the time necessary to incorporate every valuable feedback as well as helpful suggestion for improvement provided to us by the reviewers. We plan to address all concerns pointed out by the comments regarding both the strengths and weaknesses of our approach for future work so that our submission can be made more competitive. Thank you very much once again for the helpful reviews. We will significantly improve both the impact and contribution of our work, and we will submit our work again after more work in the near future.\"}" ] }
5w51I0XlOP
Self-Choose: Leveraging Diverse Reasoning Solutions to Self-Correct Multimodal Large Language Models
[ "Yexiang Liu", "Jie Cao", "Ran He", "Tieniu Tan" ]
In the past few years, Multimodal Large Language Models (MLLMs) have achieved remarkable advancements in reasoning while still suffering from mistakes. Some existing approaches on LLMs self-correct the answers without external feedback, proven limited in reasoning. We revisit these previous approaches and propose an improved effective strategy dubbed Self-Choose to teach MLLMs to utilize diverse reasoning solutions to self-correct reasoning. Our approach first employs various reasoning methods to generate candidate answers. Then, it evaluates them by comparing the reasoning processes and candidate answers to choose the optimal solution. Finally, it outputs the best candidate or reflects to generate an improved solution if all the answers are deemed inaccurate. We evaluate our method on multiple datasets with mainstream foundation models including LLaVA and Gemini. The extensive experiments show that Self-Choose achieves consistent improvements on different benchmarks and metrics. We hope this study will promote future research on self-correction and its application across various tasks.
[ "Multimodal Large Language Models", "Self-Correct", "Reasoning", "Prompting" ]
https://openreview.net/pdf?id=5w51I0XlOP
https://openreview.net/forum?id=5w51I0XlOP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ZSbQBoPJnh", "WDaphsxdHW", "M7oHHjmif7", "LCK0Gfa5Uz", "AytmS4cYDV" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730430401544, 1730820225978, 1731126398649, 1730509245293, 1734403252257 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13802/Reviewer_XX94" ], [ "ICLR.cc/2025/Conference/Submission13802/Reviewer_BNHH" ], [ "ICLR.cc/2025/Conference/Submission13802/Reviewer_MURc" ], [ "ICLR.cc/2025/Conference/Submission13802/Reviewer_rPNM" ], [ "ICLR.cc/2025/Conference/Submission13802/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a \\\"self-choose\\\" prompting method that allows the multimodal language model (MLLM) to self-correct during its reasoning process. It is composed of three steps:\\n\\n1. The model generates three different types of CoT reasoning: direct answering, writing scene graphs for CoT (CCoT), and decompose-then-answer (DDCoT).\\n2. Then, the model is asked which answer is the best by directly generating the option number (1, 2, or 3)\\n3. If all answers are deemed wrong, the model is asked to generate a more promising answer.\", \"the_method_is_tested_on_three_multimodal_benchmarks\": \"ScienceQA, WHOOPS and MM-Vet. The model outperforms baselines such as using any single CoT method, self-consistency, multi-agent debate, and meta-reasoning prompting.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors showed that previous self-correct methods struggle to improve the reasoning performance on MLLM, a similar finding with Huang et al. [1] who did a similar study in text-only reasoning tasks.\\n2. The authors modified the previous self-correct approaches (which they called self-refine and self-review) and improved the reasoning performance of MLLM on multiple tasks.\\n\\n\\n[1] Huang et al. 
Large language models cannot self-correct reasoning yet.\", \"weaknesses\": \"1. The contribution is incremental. This paper is basically \\\"using different prompting methods to do CoT and then prompt the model to select the best one\\\". This approach is like a combination of **self-consistency** (sampling multiple answers and finding the majority) and **self-review** (asking the model if the generated answer is correct; if wrong then write a new one). All the components in this pipeline have similar ideas in the literature and are very easy to think of based on the prior research works. This pipeline just combined these components and applied it in the multi-modal scenario.\\\\\\nFor example, the idea of \\\"using different CoT prompts to sample answers\\\" can be found in [1,2,3]; the idea of \\\"sampling multiple answers and prompting the LM to select the best\\\" can be found in [4], not to mention it is also an incremental adjustment of the original self-consistency; \\\"writing a more promising answer based on sampled ones\\\" is also a simple adaptation of previous self-correct approaches. \\n\\n2. The pipeline is not tested on some more representative multi-modal reasoning tasks, to name a few, MMMU (multimodal general reasoning), RAVEN (multimodal logical reasoning), and MathVista (multimodal math reasoning). This weakens the soundness of the experiments. Moreover, the improvement on the tested benchmarks is very limited. It improves less than 1.5 points (on average) compared to only using CCoT without self-consistency (but uses more model calls which hurts efficiency). The performance gap is even smaller compared to some variants in the ablation study (Table 4), which questions the effectiveness of the components in this pipeline. I wonder if a significance test can be performed, e.g., using different random seeds for answer sampling.\\n\\n3. 
It looks like the approach is not specifically for multimodal tasks but can also be applied to single-modal (text-only) tasks, whereas the authors only provided a very simple comparison in Appendix E on GSM8k. They only compared to CoT and least-to-most reasoning without other more similar methods like self-consistency. I don't think this is solid enough to show the generalization of this method.\\n\\n[1] Orca 2: Teaching Small Language Models How to Reason\\n[2] MathPrompter: Mathematical Reasoning using Large Language Models\\n[3] MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning\\n[4] Universal Self-Consistency for Large Language Model Generation\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a Self-Choose strategy to teach MLLMs to use diverse reasoning solutions to self-correct. Evaluation shows that self-choose achieves improvements over previous reasoning prompts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The proposed method is simple yet effective, and the experiments show self-choose's great potential.\", \"weaknesses\": \"Since self-consistency (https://arxiv.org/abs/2203.11171), self-refine (https://arxiv.org/abs/2303.17651) and self-correct (https://arxiv.org/abs/2310.01798) have been proposed in the field of large language models, **extending these strategies into multimodal LLMs and gaining performance gives minor contributions.**\\n\\nThe difference between this method and the cited important baselines (i.e. CCoT (CVPR 2024) and DDCoT (NeurIPS 2023) is that **previous works have tackled the specific challenge of multimodal reasoning**. For example, CCoT uses the scene graph of image, and DDCoT divides tasks between LLMs and MLLMs. 
\\n\\n**Unlike these**, the method in this work can not only be applied to multimodal reasoning, but also be directly applied to text-based reasoning. Sampling multiple reasoning paths and aggregating them **have been explored** in text-based reasoning scenarios (e.g. ReConcile (https://arxiv.org/abs/2309.13007), Peer Review (https://arxiv.org/abs/2311.08152, https://arxiv.org/abs/2307.02762 )). \\n\\nTherefore, I think the contribution of this work is limited.\", \"questions\": [\"What is the robustness of this method? For the reported results in Table 3, how many times was the experiment run?\", \"What is the version of Gemini-Vision? When were the experiments conducted?\", \"What are the inference costs of Self-choose compared to IO-SC, CCoT-SC, DDCoT-SC?\", \"Table 1 shows that the performance of self-refine and self-review degrades when increasing the number of rounds. What about your proposed self-choose method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a prompting strategy called Self-Choose to teach models to self-correct without any training or external feedback. This strategy explicitly prompts the models to answer with different reasoning paths and generate a new answer if all candidate answers from the reasoning paths are deemed inaccurate. The authors evaluate their methods on Gemini-vision and LLaVA-1.6-13b and conduct comprehensive ablation studies.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper develops a simple plug-and-play self-correct prompting strategy named Self-Choose that does not require training or fine-tuning.\\n2. The authors present interesting conjectures, making connections with psychology, of why they think Self-Refine and Self-Review do not work.\\n3. The authors provide structured formulations of their proposed method.\\n4. 
The authors conduct comprehensive ablation studies with various variables within Self-Choose.\", \"weaknesses\": \"1. My primary concern is novelty, especially in the popular field of prompting strategies for MLLMs.\\n\\n (a.) Firstly, this paper presents limited literature reviews and comparison studies in the self-correct domain to effectively distinguish itself from existing methods. \\n\\n (b.) Secondly, the idea of having multiple reasoning paths is not new and resembles ToT[1], SCoT[2], StrategyLLM[3], etc., which the authors do not address at all in the paper. \\n\\n (c.) Finally, this method is overly simplified and straight-forward. With existing methods that utilize the search engine (Self-Ask[4]), APIs (Self-Correct[5]), and Python interpreters (Self-Debug[6]) that produce more significant experiment results, this work presents more of an engineering solution than a research advancement. \\n\\n2. The baselines are very limited. The paper only compares Self-Choose against Self-Refine and Self-Review, where Self-Review is an intermediate improvement that the authors come up with on top of Self-Refine. \\n\\n3. The results are insignificant. For example, Self-Choose improves LLaVA-1.6-13b on ScienceQA, of 2017 QAs, by 68.86%-67.63%=1.23%. Especially when the model temperature is set to 0.3 (mentioned in Appendix C), I remain skeptical of the significance of the improvements and thus the effectiveness of such a method.\\n\\n4. Number of models tested are limited. Only Gemini-vision and LLaVA-1.6-13b are tested with these methods. It is unclear if the improvements can be generalized to other MLLMs of various sizes. \\n\\n5. The claim in L62-63 \\u201cthe model fails to correct itself with a fixed thinking pattern\\u201d is also derived from an analogy to psychological phenomenon and has no scientific studies to back it up.\\n\\n (a.) In Table 4, the ablation study \\u201cw/o processes s_i\\u201d presents very similar performance to Self-Choose: 68.81% vs. 
68.86% on ScienceQA, 62.00% vs. 62.65% on WHOOPS. If no thinking processes are presented to the MLLMs and they still achieve similar results, it means that it is not the thinking processes that are boosting the performance, which contradicts with the authors claim that they have \\u201cfixed thinking pattern\\u201d.\\n\\n6. It is unclear why MAD and MRP are introduced in section 5.4.3 and how their results affect authors findings.\\n\\n7. In L104-106, authors mention that they \\u201cconduct experiments applying self-correction techniques originally designed for LLMs to MLLMs\\u201d. It is unclear how this step is done.\\n\\n8. It is unclear how LLMs are utilized as an evaluator for experiments on WHOOPS and MM-Vet as mentioned in L291-292.\\n\\n9. In the ablation study \\u201cGenerate\\u201d, the authors prompt the models saying that \\u201call candidate answers are inaccurate\\u201d which is misleading to the models and not an effective ablation study, as the setup falls back on the issues with Self-Refine. A better ablation would be asking the model to generate at all times regardless of the candidates\\u2019 correctness.\\n\\n[1] Yao, Shunyu, et al. \\\"Tree of thoughts: Deliberate problem solving with large language models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\\\\\n[2] Wang, Yu, et al. \\\"Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation.\\\" arXiv preprint arXiv:2409.03271 (2024).\\\\\\n[3] Gao, Chang, et al. \\\"Strategyllm: Large language models as strategy generators, executors, optimizers, and evaluators for problem solving.\\\" arXiv preprint arXiv:2311.08803 (2023).\\\\\\n[4] Press, Ofir, et al. \\\"Measuring and narrowing the compositionality gap in language models.\\\" arXiv preprint arXiv:2210.03350 (2022).\\\\\\n[5] Welleck, Sean, et al. \\\"Generating sequences by learning to self-correct.\\\" arXiv preprint arXiv:2211.00053 (2022).\\\\\\n[6] Chen, Xinyun, et al. 
\\\"Teaching large language models to self-debug.\\\" arXiv preprint arXiv:2304.05128 (2023).\", \"questions\": \"1. In L49-50 where authors cite Kim et al., 2023, what is the relationship between this work and self-refine? This work is never brought up again in the later sections either.\\n2. In L49-50 where authors claim that \\u201cit fails to correct vision reasoning\\u201d, how do the authors reach this conclusion? And how does it fail?\\n3. In Appendix D, the authors mention that Self-Choose with LLaVA-1.6-13b on ScienceQA incorrectly changed 1.54% original answers from right to wrong and 2.73% from wrong to right. This should mean that the overall performance increase is 2.73%-1.54%=1.19%, but why is it 68.86%-67.63%=1.23% in Table 3? What are the original numbers of correct and wrong answers from the experiments?\\n4. Is there a particular reason that the authors mention MAD and MRP as comparisons? And what are the findings?\\n5. How do the authors \\u201cconduct experiments applying self-correction techniques originally designed for LLMs to MLLMs\\u201d described in L104-106? Any new visual prompting methods?\\n6. How do the authors utilize LLMs as evaluators for experiments on WHOOPS and MM-Vet? While LLMs as evaluators remain a controversial approach, are there any remedies taken to prove its effectiveness in authors use cases such as Cohen\\u2019s Kappa?\\n7. How are the reasoning methods selected? It seems rather arbitrary in the paper.\\n8. What is the \\u201cself-reflection\\u201d issue referring to in L404?\\n9. How do the models determine what are \\u201cwrong candidates\\u201d in the (1) and (2) of \\u201cOther settings for Equation 4\\u201d without human feedback?\\n10. What is the \\u201cSelf-Debate\\u201d referring to in L516-517? 
This is the only place that this word has come up.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a self-correction strategy for MLLMs called Self-Choose. The proposed method addresses the inherent reasoning errors in MLLMs by leveraging diverse reasoning methods to generate multiple candidate answers. Through comparison of the reasoning processes and outcomes, Self-Choose selects the most promising answer or generates an improved solution if all initial answers are inaccurate. This approach, tested on benchmarks like ScienceQA, WHOOPS, and MM-Vet with models such as LLaVA and Gemini, shows improvement in reasoning accuracy.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The Self-Choose method provides an effective alternative to existing self-correction strategies like Self-Refine and Self-Review, which is also supported by the experimental results\", \"The paper is well-written and easy to follow\"], \"weaknesses\": [\"The paper claims to focus on multimodal reasoning, but the research questions investigated do not inherently necessitate a multimodal context. The methods introduced appear to have broader applicability and could potentially be effective in a text-only setting as well. Also, the analysis and case studies in this paper also do not emphasize that why multimodal context is essential.\", \"Based on point 1, the paper lacks a comprehensive analysis that evaluates the effectiveness of these methods in a non-multimodal context (they include the preliminary results on GSM8K in appendix, which should be placed in the main context). This omission weakens the justification for a multimodal focus. 
To strengthen the paper, the authors should either conduct more comprehensive experiments to assess their methods' performance in a non-multimodal setting or provide a clearer rationale in the introduction for why multimodal reasoning is essential for this study.\", \"The benchmarks chosen for evaluation in this paper (ScienceQA, WHOOPS, MM-Vet) do not comprehensively cover multimodal and reasoning-intensive tasks. Widely recognized multimodal math reasoning benchmarks such as Geometry 3K and MathVista should be included to more accurately assess and validate the effectiveness of the proposed methods.\", \"The proposed self-choose approach offers limited performance improvement over the All CCoT baseline. This marginal gain may not justify its significantly higher inference cost, especially when compared to the baseline's efficiency. Further optimization or a more compelling trade-off between performance and inference cost would strengthen the overall value of the self-choose method.\"], \"questions\": [\"The table caption should be placed above the table content\", \"Figure 5 is not clear to the readers\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
5uUr3WFmyZ
Almost sure convergence of stochastic Hamiltonian descent methods
[ "Måns Williamson", "Tony Stillfjord" ]
Gradient normalization and soft clipping are two popular techniques for tackling instability issues and improving convergence of stochastic gradient descent (SGD) with momentum. In this article, we study these types of methods through the lens of dissipative Hamiltonian systems. Gradient normalization and certain types of soft clipping algorithms can be seen as (stochastic) implicit-explicit Euler discretizations of dissipative Hamiltonian systems, where the kinetic energy function determines the type of clipping that is applied. We make use of dynamical systems theory to show in a unified way that all of these schemes converge to stationary points of the objective function, almost surely, in several different settings: a) for $L-$smooth objective functions, when the variance of the stochastic gradients is possibly infinite b) under the $(L_0,L_1)-$smoothness assumption, for heavy-tailed noise with bounded variance and c) for $(L_0,L_1)-$smooth functions in the empirical risk minimization setting, when the variance is possibly infinite but the expectation is finite.
[ "Stochastic optimization", "clipping methods", "non-convex optimization" ]
Reject
https://openreview.net/pdf?id=5uUr3WFmyZ
https://openreview.net/forum?id=5uUr3WFmyZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uvucgat3vm", "kMQynS1w8O", "jz2VVtLjyS", "iDBniUX8Q1", "dIFK5cyADH", "ZeYMEKvbgo", "VsF6oyD7DG", "TRrshyxjbm", "N7N7HNJnqr", "LIoh3iZ9BN", "HHNhe4e5AB", "AyBogmCoqF", "4sj1DrSMnK", "4pGZNbUuLu" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1730690147194, 1732651405745, 1737524027017, 1731665586652, 1731665659164, 1732548362470, 1732718576488, 1732632518953, 1734829935216, 1731665631607, 1732551901094, 1730658503814, 1732795893163, 1730071342356 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_1tRZ" ], [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_AXMi" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_KB2A" ], [ "ICLR.cc/2025/Conference/Submission10110/Area_Chair_rFv5" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_1tRZ" ], [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_KB2A" ], [ "ICLR.cc/2025/Conference/Submission10110/Authors" ], [ "ICLR.cc/2025/Conference/Submission10110/Reviewer_AXMi" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies a family of stochastic gradient methods given by equation 9 and for this family almost sure convergence to stationary points is proved under the following settings:\\n1. Smooth objective functions with stochastic gradients having locally bounded variance,\\n2. $(L_0,L_1)$-smooth objective functions with stochastic gradients having finite second moments,\\n3. 
$(L_0,L_1)$-smooth objective functions occurring in the empirical risk minimization problem with stochastic gradients having bounded expectation.\\n\\nTwo interesting algorithms that are part of this family are SGD with soft clipped momentum and SGD with normalized momentum.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is written in an easy to understand way and the ideas flow smoothly between sections. On the technical side it has the following strengths:\\n1. Previous work has given guarantees that hold either in expectation or with high probability. These do not guarantee the convergence of every trajectory. However, the results presented in this paper can guarantee the convergence of almost all the trajectories i.e. there exists a set of initial points of measure zero whose trajectories are not guaranteed to converge.\\n2. The analysis done in previous work holds true only under strong assumption that the stochastic gradients are bounded. Whereas the current work\\u2019s analysis holds under much more general assumption of bounded variance.\", \"weaknesses\": \"The paper focuses primarily on proving almost sure convergence and does not provide any claims about convergence rate. The paper could benefit from showing convergence rate of the family of methods given by equation 9 under one of the settings.\", \"questions\": \"1. (Clarification for Assumption 4 iii) This assumption is not satisfied when $\\\\varphi(x)=||x||^2/2$. For the family of algorithms under study defined by equation 9 to include SGD with momentum we need $\\\\varphi(x)= ||x||^2/2$. So, the analysis does not hold true for SGD with momentum. Is this correct?\\n2. 
(Suggestion about numerical experiments in Appendix C) The main claim of the paper that the family of algorithms converges to a stationary point of the objective function can be better demonstrated if there are graphs showing the evolution of $||\\\\nabla F||_2$ with the number of epochs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their comments and clarifications. I maintain my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We would like to thank the reviewer for their kind and constructive feedback on our article. The reviewer correctly points out that we do not present any results regarding convergence rates. We agree that this is also interesting and something we consider for future work. The current focus of the article is to investigate the almost sure convergence of clipping algorithms and the interplay with Hamiltonian dynamics. To the best of our knowledge, statements about the order of convergence for stochastic algorithms applied to non-convex functions are typically local in nature. Specifically, such results hold within a neighborhood of a stationary point or for a subsequence of iterates (e.g., $\\\\min_{1 \\\\leq k \\\\leq K} \\\\lVert \\\\nabla F(q_k)\\\\rVert_2 \\\\in \\\\mathcal{O}(\\\\dots)$). In contrast, the results presented in our article are global in the sense that they apply regardless of the location of the iterates and hold for the entire sequence ${q_k}_{k \\\\geq 0}$.\\n\\nRegarding the \\u201dclipping assumption\\u201d (Assumption 4.iii):\\nThis assumption is made in Setting 2 and 3 when the objective function is assumed to be $(L_0,L_1)-$smooth. When the objective function is merely L-smooth (in Setting 1) this assumption is not needed. Hence SGD with momentum is covered by the analysis of Setting 1 but not that of Setting 3 and 4. 
We believe that this illustrates the fact that clipping algorithms are a better option for $(L_0,L_1)$-smooth functions. We will add a remark about this in a revised version of the manuscript.\", \"regarding_numerical_experiments\": \"We\\u2019re presenting the experiments in a similar form to what seems to be most common in the field. It is essentially equivalent to reporting the gradient as the reviewer suggests. (We see that we get convergence in the plot). If the reviewer has very strong opinions about adding numerical experiments that show the evolution of $\\\\lVert \\\\nabla F\\\\rVert$ we would of course be open to considering this. We are, however, not sure that this would contribute significantly to the overall value of the paper or merit the invested time and computational effort.\"}", "{\"comment\": \"We thank the reviewer for providing valuable feedback.\\n\\nRegarding the choice of Lyapunov function in Theorem 5.7:\\n\\nThe reviewer is right in that this is a natural choice since it is also a Lyapunov function for the system (8). We will state this more clearly in a revised version of the article.\\n\\nRegarding what sets the contribution apart from Kushner & Yin (2003):\\n\\nWe cannot invoke Theorem 5.1 of Kushner & Yin (2003) directly as the algorithm is an implicit-explicit discretization of an ODE. The solution is to show that the \\u201cdifference\\u201d between the implicit and explicit evaluation of the algorithm \\n\\\\begin{align*}\\n\\\\kappa_k(t) = \\\\int_0^t \\\\nabla \\\\varphi(P_{k+1}(s)) ds - \\\\int_0^t \\\\nabla \\\\varphi(P_{k}(s)) ds \\n\\\\end{align*}\\nconverges to $0$ uniformly on compact intervals. This is (part of) the proof of Theorem 5.11. We will try to make this clearer in the article. \\nOther reasons that Theorem 5.1 cannot be applied directly are that the noise assumption (i.e. A2.1 in section 5 of Kushner & Yin (2003)) would have to be verified in Settings 1 and 3. 
\\nThe assumption that the iterates are almost surely bounded is also very strong (and something we show holds for the algorithms in the paper).\\n\\n\\nRegarding clipping as studied in Zhang et al. (2020):\\n\\nIt is likely possible to analyse this kind of clipping as well, but then one probably would have to work with differential inclusions rather than ODEs. We agree that this is an interesting perspective and something to consider for future studies. We consider differentiable functions since this is what people tend to implement in practice. For instance, the experiments in Zhang et al. (2020) are implemented using a soft-clipping function.\"}", "{\"title\": \"Revised manuscript\", \"comment\": \"We have now uploaded a revised version of the manuscript, in which we have added the four remarks which we promised in our answers to the referees. They are contained in Remark 5.2, Remark 5.5, the third paragraph of Section 5.1, and the paragraph directly after Theorem A.1.\"}", "{\"comment\": \"We very much appreciate the reviewer's positive comments about the novelty and rigour!\\n\\nWe realize that we probably won't change the reviewer's opinion on this, but we still think the manuscript would be of interest to the machine learning community, and this is why we submitted it to ICLR. The ML area is typically where clipping schemes of this form are encountered, and in particular the convergence results for (L_0,L_1)-smooth functions with heavy tailed noise are aimed at cost functionals typically appearing in ML applications.\\n\\nWe do acknowledge that it might be hard to verify that the equilibria are isolated for a general optimization problem, but for ML problems there are techniques for handling this. For example, it does hold if the Hessian is non-degenerate at the equilibrium, and it is argued in e.g. [1,2,3] that introducing regularization, batch-normalization, or skip connections in neural networks\\nmay lead to such well-behaved problems. 
This is of course by no means a guarantee that it is satisfied for all ML problems. However, there are many applications of interest for which it is reasonable to expect it to be fulfilled.\\n\\nWith this being said, we do agree on Kushner & Yin being sparse on details. That is why we decided to provide more detailed and rigorous proofs, for the convenience of the readers.\\n\\n[1] Jia, Z. & Su, H. (2020). Information-Theoretic Local Minima Characterization and Regularization\\n\\n[2] Orhan, A. E. & Pitkow, X. (2018). Skip connections eliminate singularities.\\n\\n[3] Mehta et al. (2018). The Loss Surface Of Deep Linear Networks Viewed\\nThrough The Algebraic Geometry Lens.\"}", "{\"comment\": \"I find the book by Kushner and Yin (2003) extremely hard to digest \\u2014 it contains notable errors, especially in the use of differential inclusions in the context of projections, but that is another topic. Their approach essentially relies on earlier, almost sure convergence results by Kushner-Clarke.\\n\\nIn my opinion, Bena\\u00efm's notion of pseudotrajectories is much more intuitive and practical. If you study Bena\\u00efm's proofs, you come across familiar tools such as the Arzel\\u00e0-Ascoli theorem and relative compactness arguments. The dynamics of asymptotic pseudotrajectories, in particular the structure of internally chain recurrent sets, allows the recovery of Kushner-Clarke results without having to resort to \\\"unusual\\\" assumptions (see Section 6 of Bena\\u00efm). For example, Proposition 6.4 in Bena\\u00efm (2006) provides a much more \\\"reasonable\\\" characterization of the convergence points when a Lyapunov function is available. This condition is also verifiable, e.g. by applying Sard's theorem in the context of gradient algorithms. In contrast, I have always struggled to imagine how to rigorously prove the Kushner-Clarke condition. 
\\n\\nYou mention that \\\"the function has isolated equilibria in many cases\\\", but even this is not so easy to prove in general!\\n\\nThat said, I see novelty in your approach, particularly in the way you handle the implicit-explicit discretization. While the adaptation itself is relatively straightforward, the rigor with which the paper is written stands out and is something I really appreciate. \\nI simply find the scope of the paper limited, with results that are difficult to apply due to the presence of assumptions that are impossible to verify. Its relevance to the machine learning community, in my opinion, is limited. However, with some reworking, it could make for a solid paper in a mathematics journal, especially if a more modern perspective on stochastic approximation is adopted.\"}", "{\"metareview\": \"This paper considers gradient normalization and soft clipping methods in the context of Hamiltonian systems. The authors view these as a kind of Euler discretization of dissipative systems and use a dynamical systems approach and show convergence to stationary points of the objective function, under several interesting regimes.\\n\\n\\nThis paper was reviewed by three expert reviewers and received the following Scores/Confidence: 6/2, 6/4, 3/5. I think the paper is studying an interesting topic but the authors are not able to convince the reviewers sufficiently well about the novelty of their work. The following concerns were brought up by the reviewers:\\n\\n- The main concern about this paper was its novelty in terms of insights it brings to the field. \\n- The authors rely on classical techniques in this area, which in turn depend on conditions that are difficult to verify or too opaque in practice.\\n\\n\\nAlthough two reviewers were positive, no reviewers championed the paper and they are not particularly excited about the paper. 
As such, based on the reviewers' suggestion, as well as my own assessment of the paper, I recommend not including this paper in the ICLR 2025 program.\", \"additional_comments_on_reviewer_discussion\": \"Authors provided a response which was ultimately deemed insufficient by the main critical reviewer.\"}", "{\"comment\": \"We thank the reviewer for providing constructive criticism as well as raising some valid points of concern.\", \"regarding_the_contribution_of_the_paper\": \"The reviewer is right in that the proof closely follows that of Kushner & Yin (2003) and some parts are included for the sake of completeness and the convenience of the reader since Kushner & Yin (2003) is sparse with details. This is for instance the case with Theorem 5.15.\\n\\nThe results of Kushner & Yin (2003) or Bena\\u00efm (2006) do not work right away since the schemes we consider are implicit-explicit discretizations of an ODE while Kushner & Yin (2003) \\nconsiders explicit discretizations. These are Lemmas 5.11 and 5.9, in which we show that the shifted sequence of interpolations is equi-continuous in the extended sense (even though the discretization is implicit-explicit). \\n\\nAnother part of the novelty of the paper lies in the fact that we consider objective functions that are $(L_0,L_1)-$smooth and pose less restrictive assumptions on the noise, adapted to the machine learning setting. To the best of our knowledge $(L_0,L_1)-$smooth functions are not considered in previous works that make use of the ODE method. We show that the iterates are bounded a.s. 
in these settings and that it is possible to apply the extension of the ODE method.\\n\\nRegarding the verification of the assumptions of Theorem 5.15:\\n\\nThe reviewer is correct in that it is in general a strong assumption that the iterates enter a certain compact set in the domain of attraction of the locally asymptotically stable set of the theorem.\\nWe do show in the proof of Theorem 5.5 that this is the case for the algorithms in the paper (under the given assumptions) and that we can apply Theorem 5.15.\\n\\nRegarding Assumption 1.iii):\\n\\nThe question that the reviewer raises is valid; it is difficult to verify this assumption in practice \\nand we could have elaborated more on this in the article. We will add a remark in the revised version of the manuscript.\\nIt is a technical assumption that rules out pathological behaviour; it is slightly stronger than that of e.g. Proposition 6.4 in Bena\\u00efm (2006) but weaker than that of Prop. 3.2 in Bena\\u00efm (1996).\\nIn many cases the function has isolated equilibria and in this case Assumption 1.iii) is satisfied. That the equilibria are isolated is necessary to obtain convergence to a unique stationary point. \\n\\nRegarding using the approach of Bena\\u00efm (2006):\\n\\nIt is true that this approach can be used as well and it may be the case that some assumptions are less restrictive. (Such as those discussed in the previous paragraph). We prefer the approach of Kushner & Yin (2003) as less \\u201ctechnical machinery\\u201d is needed, e.g. the notions of asymptotic pseudo-trajectories and chain recurrence. We also think that this approach is more intuitive (extracting convergent subsequences of $\\\\{Z_k\\\\}_{k \\\\geq 0}$ and appealing to the Arzel\\u00e0-Ascoli theorem etc.) This is entirely our opinion of course.\\n\\n\\nBena\\u00efm, M. (1996). A dynamical systems approach to stochastic approximation.\\nBena\\u00efm, M. (2006). Dynamics of stochastic approximation algorithms.\\nKushner, H.J. 
& Yin, G. (2003) Stochastic approximation and recursive algorithms and applications.\"}", "{\"comment\": \"I thank the authors for addressing my concerns.\\n\\nThe paper sets out to prove almost sure convergence of stochastic Hamiltonian descent methods and does so. They also extend the Kushner & Yin (2003) proof strategy to the case of implicit explicit discretization of ODE.\"}", "{\"summary\": \"The paper investigates the almost sure convergence of stochastic Hamiltonian descent methods in optimization, especially in the context of machine learning and statistical estimation. Key methods discussed include gradient normalization and soft clipping, which are used to stabilize stochastic gradient descent (SGD) with momentum, commonly applied in non-convex optimization settings. The authors analyze the convergence properties of these modified SGD algorithms, especially when applied to objective functions with heavy-tailed noise and potentially infinite gradient variance, by classifying these techniques into dissipative Hamiltonian systems.\\n\\n One of the main contributions of the paper is to show that gradient normalization and soft clipping can be viewed as stochastic implicit-explicit Euler discretizations of dissipative Hamiltonian systems. By utilizing Hamiltonian dynamics and dynamical systems theory, the authors provide a unified convergence framework for these methods in different objective function settings, including \\\\(L\\\\)-smooth and \\\\((L_0, L_1)\\\\)-smooth functions.\\n\\nThe paper introduces assumptions for the objective function and the stochastic gradient, such as coercivity and locally bounded variance, which ensure that the optimization problem satisfies the necessary conditions for convergence. Moreover, the convergence guarantee is extended to settings with heavy-tailed noise, which is common in real data. 
In particular, the analysis shows that the iterative updates of these modified SGD algorithms converge almost surely to the set of stationary points of the objective function under these assumptions.\\n\\nThe proof strategy involves the use of a Lyapunov function based on the Hamiltonian of the system to demonstrate the finiteness and boundedness of the iterative sequences. The analysis then uses an ODE (Ordinary Differential Equation) approach to show that these sequences converge almost surely to stationary points of the objective function, and not just in expectation. This result is important because it ensures convergence along each individual optimization path and not just on average, which is crucial for the robustness of optimization algorithms.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper is very well-written, and the authors rigorously verify the standard assumptions of the ODE approach, even if this means including assumptions that may seem idealized.\"], \"weaknesses\": \"The paper doesn\\u2019t bring substantial new insights to the field; it essentially revisits classical methods, line by line, and demonstrates the applicability of standard ODE-based techniques. While it is certainly rigorous and methodical, the paper ultimately falls short of deepening our understanding of the algorithms themselves or providing fresh perspectives on their behavior. This makes it a rather conventional contribution, adhering closely to established approaches without pushing beyond them in terms of theoretical or practical insight.\\n\\nThe authors rely on the Kushner and Yin approach, which, although mathematically elegant, has a significant drawback: it depends on assumptions that are notably difficult to verify. 
For example, Theorem 5.15 includes the assumption that \\u201cthere exists a compact set in the domain of attraction of \\\\( A \\\\) that \\\\( \\\\{z_k\\\\}_{k \\\\geq 0} \\\\) visits infinitely often.\\u201d However, the practicality of checking this condition is questionable. For dissipative systems with Lyapunov functions, more robust and accessible conditions are typically available. For instance, Bena\\u00efm\\u2019s 2006 work on the dynamics of stochastic approximation provides an alternative perspective with less restrictive assumptions, and the work by Andrieu, Moulines, and Priouret (2005) presents stability criteria that are generally easier to validate. Both of these references suggest that a more flexible framework is possible and may have been preferable in analyzing the convergence behavior.\\n\\n\\n\\nSee Bena\\u00efm, M. (2006). Dynamics of stochastic approximation algorithms. In Seminaire de probabilites XXXIII (pp. 1-68). Berlin, Heidelberg: Springer Berlin Heidelberg.\\n- Andrieu, C., Moulines, \\u00c9., & Priouret, P. (2005). Stability of stochastic approximation under verifiable conditions. SIAM Journal on control and optimization, 44(1), 283-312.\", \"questions\": [\"How do you check Assumption 1-iii) in practice? Give a clear and easy-to-check criterion (based e.g. on Sard's theorem)\", \"How do you check the conditions of Theorem 5.15\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Just so there is no confusion: we did a last-minute update of the PDF in order to not exceed the 10-page limit. 
There was no change of content, just very minor reformulations of two sentences in Section 4.1.\"}", "{\"summary\": \"This paper shows the almost sure convergence of a class of stochastic Hamiltonian descent methods, including gradient normalization and soft clipping algorithms, under three different sets of assumptions which provide validity of the conclusions to a wide range of settings. The proof relies on the typical argument followed in dynamical systems and control theory, which is to: (1) find a suitable Lyapunov function (in this case the Hamiltonian of the system); and (2) invoke LaSalle's Invariance theorem to establish the convergence to an (isolated) stationary point of the (non-convex) objective function.\", \"typo\": \"Line 112 should read \\\"term\\\" instead of \\\"tern\\\"\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well written, structured, and thus easy to follow. It contributes to the theoretical understanding of SGD with momentum under gradient normalization and soft clipping.\\n\\nAlthough I didn't read every proof in extreme detail, I believe the formal arguments and results are correct.\", \"minor_detail\": \"Since the authors initially present the formulation of the nearly Hamiltonian system in continuous time (see eq. (8)), perhaps the choice of the Lyapunov function in the proof of Theorem 5.7 could be motivated in that context, showing that: $\\\\dot V = -\\\\gamma\\\\|\\\\nabla\\\\varphi\\\\|^2 \\\\leq 0.$\", \"weaknesses\": \"The main issue I have with the paper is that, because the proof strategy closely follows the approach developed by Kushner and Yin (2003) (cf. Section 5 and Theorem 5.2.1), it is not immediately clear why their proof cannot be directly invoked after showing that the sequence of iterates is finite almost surely. For example, Lemma A.3 can in fact be found in the first part of Theorem 5.2.1 but this is not explicitly mentioned by the authors. 
Could the authors explicitly discuss how their proof extends or differs from Kushner and Yin (2003), for example by including a paragraph comparing their approach to Kushner and Yin's and highlighting any novel elements or modifications needed for the Hamiltonian-specific setting.\", \"questions\": [\"Could the authors more clearly explain how their approach seems to extend Kushner and Yin (2003)? This would allow the reader to better evaluate the novel elements needed in the proof of almost sure convergence.\", \"This next question is not related to the perceived weakness of the paper but more for a deeper understanding of the results. Because the kinetic energy function is assumed to be differentiable, almost sure convergence cannot be concluded for clipping as studied by Zhang et al. (2020) in expectation. Could the authors explain the need for the differentiability assumption, and why working with the sub-gradients of $\\\\varphi$ is not possible?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5tjdRyqnSn
PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis
[ "Satoki Ishikawa", "Makoto Yamada", "Han Bao", "Yuki Takezawa" ]
Predictive coding has been established as a promising neuroscientific theory to describe the mechanism of information processing in the retina or cortex. This theory hypothesises that cortex predicts sensory inputs at various levels of abstraction to minimise prediction errors. Inspired by predictive coding, Chen et al. (2024) proposed another theory, temporal prediction hypothesis, to claim that sequence memory residing in hippocampus has emerged through predicting input signals from the past sensory inputs. Specifically, they supposed that the CA3 predictor in hippocampus creates synaptic delay between input signals, which is compensated by the following CA1 predictor. Though recorded neural activities were replicated based on the temporal prediction hypothesis, its validity has not been fully explored. In this work, we aim to explore the temporal prediction hypothesis from the perspective of self-supervised learning (SSL). Specifically, we focus on non-contrastive learning, which generates two augmented views of an input image and predicts one from another. Non-contrastive learning is intimately related to the temporal prediction hypothesis because the synaptic delay is implicitly created by StopGradient. Building upon a popular non-contrastive learner, SimSiam, we propose PhiNet, an extension of SimSiam to have two predictors explicitly corresponding to the CA3 and CA1, respectively. Through studying the PhiNet model, we discover two findings. First, meaningful data representations emerge in PhiNet more stably than in SimSiam. This is initially supported by our learning dynamics analysis: PhiNet is more robust to the representational collapse. Second, PhiNet adapts more quickly to newly incoming patterns in online and continual learning scenarios. For practitioners, we additionally propose an extension called X-PhiNet integrated with a momentum encoder, excelling in continual learning. 
All in all, our work reveals that the temporal prediction hypothesis is a reasonable model in terms of robustness and adaptivity.
[ "Non contrastive learning", "Predictive coding", "Eigenvalue dynamics", "Temporal prediction hypothesis" ]
Accept (Poster)
https://openreview.net/pdf?id=5tjdRyqnSn
https://openreview.net/forum?id=5tjdRyqnSn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yV2xRmeCax", "uWPomgMh55", "sGlu9RUwN0", "oiv2efjYKH", "oT1hpVTWQE", "fP3HyacFCP", "eODKFsidVR", "e2Z3uY7KJf", "Slsb8uct3G", "OKbAZu4Me1", "MtjvOB0goO", "K0lLYD43pU", "Gq4pZZeo63", "2SxYETslIN", "0W31cSJ09u" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732175193018, 1732177961174, 1732602537100, 1730695378190, 1732179122291, 1732278663859, 1737523826196, 1730684441477, 1732248083537, 1730578284574, 1732212559073, 1734384209594, 1729951671975, 1732248241217, 1732921179318 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7245/Authors" ], [ "ICLR.cc/2025/Conference/Submission7245/Authors" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_pS8F" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_pS8F" ], [ "ICLR.cc/2025/Conference/Submission7245/Authors" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_1cHn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_LRvN" ], [ "ICLR.cc/2025/Conference/Submission7245/Authors" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_1cHn" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_QH5V" ], [ "ICLR.cc/2025/Conference/Submission7245/Area_Chair_AQVo" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_QH5V" ], [ "ICLR.cc/2025/Conference/Submission7245/Authors" ], [ "ICLR.cc/2025/Conference/Submission7245/Reviewer_LRvN" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer QH5V\", \"comment\": \"We would like to thank the reviewer for the insightful questions and for correctly acknowledging the strengths of our work.\\n\\n## Weaknesses\\n> Can the authors explicitly explain, with specific reference to any important terms, why stop-gradient leads to 
any temporal lagging? \\n\\nThank you for your question.\\nFirst, our work introduces the novel claim that stop-gradient causes temporal lagging. Thus, there are no existing references that directly address this point. \\nPlease note that the temporal lagging mentioned in Section 3.1 refers specifically to the temporal lag in the weight update direction. \\nTo clarify, we are considering a loss between $f_t(x_i^{(1)})$ and $f_{t-1}(x_i^{(2)})$.\\nIn this loss function, $x_i^{(1)}$ is encoded with weights after $t$-step update while $x_i^{(2)}$ is encoded with weights after $t-1$-step update. This difference in encoding weights is what we refer to as temporal lag made by stop gradient.\\nNote that, although it may be confusing, the temporal lagging between the image inputs $x_i^{(1)}$ and $x_i^{(2)}$ is caused by augmentation, not by temporal lagging in the weights.\\n\\n> I am not able to see any claims in section 5.1 in Figure 5, even though it is referenced in the text. What kind of improvement am I supposed to see and what effect am I looking for?\\n\\nThank you for your question.\\nIn Figure 5, we aim to demonstrate two key points:\\n1. PhiNet (including X-PhiNet) achieves performance comparable to SimSiam.\\n2. Unlike BYOL or SimSiam, PhiNet (including X-PhiNet) exhibits minimal performance degradation when the weight decay is set lower than its optimal value.\\n\\nRegarding point (1), we observe that PhiNet (including X-PhiNet) achieves performance comparable to SimSiam. Furthermore, while RM-SimSiam employs EMA similarly to X-PhiNet, its performance on standard tasks tends to fall below that of PhiNet (and even below that of SimSiam), a limitation that PhiNet does not exhibit.\\n\\nFor point (2), when the weight decay is smaller than $2^{-15}$, PhiNet (including X-PhiNet) achieves higher accuracy compared to both BYOL and SimSiam. 
This finding supports the theoretical analysis presented in Section 4.\\nFurthermore, as discussed in \\\"Bless of Additional CA1 Predictor,\\\" PhiNet with $g = h$ or $g = I$ performs worse than PhiNet with an additional predictor, particularly when the weight decay is small. The poor performance of $g = I$ is especially pronounced at a batch size of 1024. \\nThis suggests that PhiNet with an additional CA1 predictor appears more robust to weight decay, making it a preferable choice from the perspective of weight decay robustness.\"}", "{\"title\": \"Response to Reviewer 1cHn\", \"comment\": \"We would like to thank the reviewer for the insightful questions and for correctly acknowledging the strengths of our work.\\n## Weaknesses\\n> it is unclear if the class of augmentations used here resemble \\\"natural\\\" augmentations in videos\\n\\nThank you for your feedback.\\nWe conducted our experiments under the belief that the data augmentation techniques we applied (such as RandomResizedCrop and RandomHorizontalFlip) closely resemble \\\"natural\\\" augmentations in videos. \\nHowever, we acknowledge that the augmentations may not fully align with natural augmentations. Here, let us clarify the objective of our approach in this work.\\nPredictive coding operates under a framework where it (1) processes a single input from a time series and (2) predicts the adjacent input. This approach is expected to offer the benefits of (A) capturing temporal features and (B) stabilizing learning. \\nFor non-video data, while the framework does not fully satisfy condition (1), it does meet condition (2). Therefore, evaluating predictive coding with existing data augmentations can be reframed as breaking down the framework into components (1) and (2) and initially examining the effectiveness of condition (2).\\nThe primary focus of our paper is to investigate whether condition (2) implies (B). 
Testing this framework in more temporally consistent data settings, which would satisfy both conditions (1) and (2), is indeed a necessary step and represents an important direction for future work.\\n\\n> Fig.5. suggests that setting g=I doesn't influence the performance too much\\n\\nThank you for your question.\\nAs shown in Figure 5, when the batch size is 1024, the accuracy of $g = I$ is lower than that of PhiNet, especially when weight decay is small, resulting in a lower evaluation accuracy (Eval Acc). In addition, we have conducted new experiments on CIFAR-5m and added the results of X-PhiNet with $g = I$ to Table 3. \\nFrom Table 3, it can be observed that even at the optimal weight decay of $2 \\\\times 10^{-5}$, X-PhiNet with $g = I$ achieves an Eval Acc that is 0.6\\\\% lower than the standard X-PhiNet (cos). Moreover, when the weight decay is further reduced to $1 \\\\times 10^{-5}$, X-PhiNet with $g = I$ shows an Eval Acc that is 1.3\\\\% lower than the standard X-PhiNet (cos). \\nThese results suggest that having $g$ as trainable leads to significantly higher accuracy. We agree that conducting experiments with continual learning is also important, and we plan to include these results in the camera-ready version. Furthermore, the theoretical analysis presented in Section 4 also supports the conclusion that having $g$ as trainable is critical for preventing mode collapse.\\n\\n> I'd be curious if this version already outperforms the original SimSiam implementation\\n\\nThank you for your question.\\nIf we remove Sim-2 from PhiNet, it becomes equivalent to SimSiam. Therefore, the ablation concerning Sim-2 is effectively comparison between PhiNet and SimSiam. We acknowledge that the difference between PhiNet and SimSiam may not have been clear initially. 
Based on the advice from Reviewer LRvN, we have improved the presentation by including a figure of SimSiam in Figure 1, making the differences between SimSiam and PhiNet more evident.\\n\\n> In the analytical Section 4, it'd be great if you'd give us an intuition for what phi and gamma in Eq. 1 actually denote - how are they related to g and h, etc.\\n\\nThank you for your valuable feedback.\\nIn Section 4, $\\\\psi$ and $\\\\gamma$ correspond to (one of) the eigenvalues of the linear networks h and g, respectively. Intuitively, we can regard $\\\\psi$ and $\\\\gamma$ as \\u201cscalarization\\u201d of the predictor networks. This formally requires tedious matrix analysis as shown in the Appendix. The theoretical analysis is detailed in the Appendix, and we believe that having an intuitive understanding when reading this section would be beneficial. Therefore, we have added the above intuitive explanation to the Appendix.\\n\\n## Questions\\n\\n> What is the physiological relevance of low weight decay - why is that the important analysis here? (the authors spend a lot of space on it)\\n\\nThank you for your question.\\nWe believe that weight decay is related to synaptic pruning; however, there is still much that remains unknown regarding the strength of pruning in the brain.\", \"we_would_like_to_clarify_our_initial_motivation\": \"it was based on ML research indicating that \\\"non-contrastive learning is sensitive to weight decay and prone to mode collapse.\\\" Our primary concern is that this sensitivity and susceptibility to mode collapse would not be ideal for biological brains either, since mode collapse would prevent the acquisition of meaningful representations.\\nWe believe that our proposed PhiNet offers a biologically plausible approach to mitigating mode collapse. 
\\nConnecting this insight with physiological findings would undoubtedly be a fascinating direction for future work.\"}", "{\"comment\": \"Thanks for the thoughtful response.\\n> The scope of our current paper can thus be described as examining whether condition (2) implies (B).\\n\\nI see the point here, and I hope the manuscript clarifies this. But without the temporal component, I think there is still a slight mismatch between the motivation and the actual method. \\n> While it is challenging to definitively distinguish whether mode collapse occurs during training, we believe that the large variability in the accuracy of SimSiam across seeds and its overall lower accuracy suggest that \\\"more random seeds would result in collapse\\\"\\n\\nI think without a more detailed analysis of the empirically learned representations this argument is a bit handwavy, but I understand that this might be hard to do in the short rebuttal period.\\nI want to thank the authors for addressing most of my concerns. I will keep the score since the original score already assumes these concerns are solved.\"}", "{\"summary\": \"In this paper, the authors propose PhiNet, a self-supervised learning (SSL) method partly inspired by the temporal prediction hypothesis and a previous SSL method SimSiam. PhiNet can be seen as an extended version of SimSiam with an additional predictor and a stable encoder which is an exponential moving average of the main encoder. The authors analyzed the learning dynamics of PhiNet in a linear setting and found that the network is less prone to collapse compared to SimSiam. The method is then evaluated using the CIFAR benchmarks and is shown to outperform classical baselines in SSL, especially in online learning and continual learning settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The hypothesized correspondence of the algorithm to the learning and memory mechanisms in the brain is interesting to both ML and neuroscience communities.\\n2. The authors conducted a detailed analysis of the learning dynamics which nicely explains the advantage over the baseline.\\n3. There are extensive experiments that clearly illustrate the strengths and weaknesses of the models.\\n4. The presentation is clear and a lot of details can be found in the appendix.\\nOverall this is a solid and interesting contribution that bridges predictive coding and SSL algorithms known to the ML community. I thus support acceptance provided that the authors can reasonably address my concerns.\", \"weaknesses\": \"1. While this is understandable for a more brain-inspired algorithm, there is little performance gain on classical SSL benchmarks.\\n2. The link from the model and task setting to the temporal predictive coding idea is not that strong, see below.\", \"questions\": \"1. One major concern is that the proposed method is not really doing \\\"temporal predictive coding\\\", instead, the predictor is predicting the embedding of an augmented image. The way to introduce the model is thus somewhat misleading. It also seems more natural to use some video datasets if the goal is to build a model for temporal predictive coding. I wonder if the authors can explain more about the motivations here.\\n2. When connecting X-PhiNet to the brain as in Fig 1(c), we are basically assuming that layers II and III in EC share the same encoder and the NC encoder is EMA of the one in EC. I wonder if the authors can provide more rationale for these assumptions.\\n3. While there are some qualitative comparisons of the learning dynamics between PhiNet and SimSiam, I think the result would be stronger if there were also quantitative comparisons, e.g., something showing that more random seeds would result in collapse for SimSiam than for PhiNet.\\n4. 
For empirical performance, the errorbar is missing from Fig 5 and Fig 7 so it's a bit hard to measure the margin of error.\\n5. When comparing models on online learning and continual learning benchmarks, it seems that the EMA encoder is critical, but at least for continual learning, this is not very surprising since EMA essentially prevents the representations from drifting away too much. As baselines for comparison here (POYO, SimSiam, etc) are not designed for continual learning, it might make sense to include more baselines from the field of continual learning, otherwise, the claim that X-PhiNet particularly excels at online and continual learning seems a bit unfair.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer LRvN\", \"comment\": \"We would like to thank the reviewer for the helpful comments and suggestions.\\n\\n## Weaknesses\\n> it would be nice to add the diagram of SimSiam to Figure 1 and this is the main work the current manuscript was based on.\\n\\nThank you for your helpful advice.\\nBased on your suggestion, we have added a diagram of SimSiam to Figure 1. We believe this makes the differences between SimSiam and our model more evident. As illustrated in the figure, the architectural novelty of our PhiNet lies in the additional predictor $g$.\\n\\n> But is this going to slow down the convergence rate? What are the drawbacks of having one more layer of prediction.\\n\\nThank you for your question.\\nAs shown in Figure 6, the use of PhiNet does not result in slower convergence. On the contrary, it often mitigates early-stage instability in learning and allows for more stable progression. A potential drawback of adding one more layer of prediction is the additional memory cost. 
However, as indicated in Table 17: Comparison of GPU Memory Costs, the memory consumption only increases slightly from 3.26GB to 3.44GB, which we believe is not a significant issue.\\n\\n> In the performance evaluation plots: Phinet showed sufficient improvement compared to its original version SimSiam but not much improvement compared to other methods such as Barlow twins. \\n\\nThank you for your question.\\nIn the experiments presented in Table 2 on CIFAR-5m and Split C-5m, X-PhiNet achieves an accuracy that is 2\\\\% higher than Barlow Twins, which represents a statistically significant difference. In addition, another advantage of X-PhiNet over Barlow Twins is the reduced computational cost. As shown in Table 18 in the Appendix, when the batch size is as small as 128, Barlow Twins requires nearly three times the training time compared to X-PhiNet.\\n\\n> And it's unclear why mse loss is better than cos loss in $L_{NC}$.\\n\\nThank you for your feedback.\\nTo clarify, our claim is not that \\\"MSE loss is always superior.\\\" In fact, in some experiments using CIFAR-5m, models employing cosine loss can achieve higher performance. Our position is that both models using cosine loss and those using MSE loss are categorized under PhiNet, with the choice of which is better depending on the specific task or setting.\\nThere remains much to be explored regarding the theoretical analysis of selecting the appropriate loss function for different settings, and it would be a promising future work.\\nThis is emphasized in the extended limitations section in the appendix.\\n\\n## Questions\\n\\n> definite needs more explanation about how it approximates modern optimizers and its relationship with learning rate etc.\\n\\nThank you for your question.\\nWeight decay is a standard component used in modern gradient descent and backpropagation methods. 
In Section 4, weight decay appeared suddenly in the formulation of gradient flow, so we have added a note in Section 3 to clarify that our optimization process includes weight decay. \\nRegarding the weight decay introduced in Section 4, we believe that the formulation of gradient flow with weight decay is consistent with discrete gradient descent with weight decay. In the discrete formulation, the weight decay term is indeed proportional to the learning rate; however, when considering the infinitesimal limit of the learning rate in the gradient flow formulation, both sides are divided by the learning rate. \\nThis point has been further explained at the beginning of Appendix B. Thank you for bringing this to our attention.\\n\\n> Does one time step refers to one gradient update? Is gradient update performed in an online manner?\\n\\nThank you for your question.\\nThe \\\"one time step\\\" introduced by the stop-gradient refers to a single gradient update. There are two aspects of time step in our paper:\\n\\n1. The time step associated with the next step of the gradient in gradient flow dynamics.\\n2. The time step where the augmented view is considered as the next frame in a time series.\\n\\nWe acknowledge that discussing these two different aspects of time derection may cause confusion. The augmented views are input to PhiNet, creating a temporal lag between inputs, while the temporal lag in the weights arises due to the stop-gradient mechanism.\\nAdditionally training with CIFAR-5m is conducted in an online learning setting, whereas training with CIFAR10, STL10, and ImageNet follows a mini-batch training.\"}", "{\"title\": \"Response to authors\", \"comment\": \"\\\"The primary focus of our paper is to investigate whether condition (2) implies (B).\\\" I am not sure I agree that the substantial transformations you apply here resemble \\\"next-frame\\\" transformations in natural videos. 
I understand your motivation but I wonder if your approach will actually generalise to the small transformations natural videos bring.\\n\\n\\\"These results suggest that having as trainable leads to significantly higher accuracy.\\\" I don't know if 0.6% or 1.3% lower accuracy warrant this conclusion. It helps but it seems we can do very well without it!\\n\\nIn general, I think my score is fine - there's something interesting for the field here.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The manuscript links the temporal prediction hypothesis observed in the brain's hippocampus-cortex interactions to a novel self-supervised learning structure (PhiNets). PhiNets is largely based on SimSiam but incorporates one more predictor layer and StopGradient module inspired by neuroscience observations. This model demonstrates greater stability and resilience against representational collapse compared to SimSiam and shows superior performance in online and continual learning scenarios. Additionally, the study presents X-PhiNet, an extension incorporating an exponential moving average encoder for improved long-term memory retention in continual learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"A clever observation between the temporal prediction hypothesis and the StopGradient module, making this architecture more brain-like. However, the definition of temporal steps was poorly articulated and needs more work. 
(See weakness)\", \"Rigorously proved the benefits of adding one more predictor layer through the analysis of gradient flow in a linearized case\", \"Demonstrated improved performance in terms of robustness to weight decay in terms of image classification\"], \"weaknesses\": [\"Presentation of the paper needs more improvement: For example, it would be nice to add the diagram of SimSiam to Figure 1 and this is the main work the current manuscript was based on.\", \"From the derivation, we know that adding one more layer of signal prediction could prevent from representation collapse. But is this going to slow down the convergence rate? What are the drawbacks of having one more layer of prediction.\", \"In the performance evaluation plots: Phinet showed sufficient improvement compared to its original version SimSiam but not much improvement compared to other methods such as Barlow twins. And it's unclear why mse loss is better than cos loss in L_{NC}.\"], \"questions\": [\"The weight decay term appeared out of sudden and definite needs more explanation about how it approximates modern optimizers and its relationship with learning rate etc. because the robustness about decay is where PhiNet showed most improvements in the later numerical experiments.\", \"The definition of time steps in the network and its link to temporal prediction is not well articulated. I roughy get the general idea how StopGradient could lead to synaptic delays but how are these two linked mathematically is still vague. Does one time step refers to one gradient update? 
Is gradient update performed in an online manner?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pS8F\", \"comment\": \"We thank the reviewer for your positive evaluation and constructive feedback on our paper.\\n\\n## Weaknesses\\n\\n> While this is understandable for a more brain-inspired algorithm, there is little performance gain on classical SSL benchmarks.\\n\\nThank you for your feedback.\\n\\nIt is true that the performance gain observed with CIFAR10 appears limited. However, a more pronounced performance difference emerges in CIFAR-5m and continual learning scenarios. We believe that CIFAR-5m and continual learning, rather than CIFAR10, represent settings that are more natural both biologically and from an engineering perspective.\\nTraditionally, self-supervised learning experiments have focused on datasets such as CIFAR10 and ImageNet, often utilizing mini-batch training and requiring a substantial number of epochs\\u2014800 epochs for CIFAR10, for example\\u2014far exceeding those used in supervised learning. This extensive training reduces biological plausibility, as animals inherently learn through online processes.\\nIn addition, recent advancements in language models demonstrate the effectiveness of training on large-scale web data with a reduced number of epochs, sometimes as few as one. This progress underscores the importance of developing similar single-epoch training strategies for image-based self-supervised learning. 
Our experiments with CIFAR-5m revealed substantial differences between methods in online learning settings, distinctions that are not evident with CIFAR10.\\n\\n## Questions\\n\\n> One major concern is that the proposed method is not really doing \\\"temporal predictive coding\\\", instead, the predictor is predicting the embedding of an augmented image...I wonder if the authors can explain more about the motivations here.\\n\\nThank you for your question.\\n\\nPredictive coding operates within a framework where (1) it processes a single input from a time series and (2) predicts the next input signal. The advantages of this approach are expected to include (A) capturing temporal features and (B) stabilizing learning. For non-video data, while the framework does not fully satisfy condition (1), it does meet condition (2). Therefore, testing predictive coding with non-video data can be reframed as breaking down the framework into components (1) and (2), focusing initially on evaluating the effectiveness of condition (2).\\nThe scope of our current paper can thus be described as examining whether condition (2) implies (B). Validation of predictive coding on video data, incorporating both conditions (1) and (2), is indeed an essential direction for future work, which we intend to pursue.\\n\\n> When connecting X-PhiNet to the brain as in Fig 1(c), we are basically assuming that layers II and III in EC share the same encoder and the NC encoder is EMA of the one in EC. I wonder if the authors can provide more rationale for these assumptions.\\n\\nThank you for your question.\\n\\nSimSiam and the temporal predictive hypothesis differ primarily in whether they incorporate an additional predictor $g$, assuming we accept that the EC layers are modeled by the same encoder. 
Our research focuses on evaluating whether a biologically plausible recurrent structure involving $g$ has a mathematically significant effect.\\nGiven that Layers II and III receive inputs from two images at different time steps, assuming that these layers share the same encoder is a natural assumption. \\nFurthermore, using EMA to model NC layer is a common approach in the context of CLS theory[McClelland 1995, Pham 2021]. However, it should be noted that this is merely one of the simpler methods for modeling slow and fast learning systems.\\nSince many aspects of the biological brain remain unclear, further investigation is necessary to validate these assumptions.\\n\\n[McClelland 1995] J. L. McClelland, B. L. McNaughton, and R. C. O\\u2019Reilly. Why there are complementary learning\", \"systems_in_the_hippocampus_and_neocortex\": \"insights from the successes and failures of connectionist\\nmodels of learning and memory. Psychological Review, 1995. \\n[Pham 2021] Q. Pham, C. Liu, and S. Hoi. DualNet: Continual learning, fast and slow. NeurIPS, 2021.\"}", "{\"summary\": \"This papers takes another step in building non-contrastive self-supervised learning algorithms by taking inspiration from the architecture and supposed function of the hippocampal circuit. This PhiNets presented in this work are an extension of the SimSiam algorithm and the authors compare their method extensively to SimSiam, showing that it outperforms SimSiam in terms of robustness of low weight decay settings and continual learning. Their algorithm is competitive with many well-known contrastive and non-contrastive SSL methods (BYOL, Barlow Twins, etc.). Taken together, this is a nice paper showcasing the inspiration-from-neuroscience to usefully-deployed-ML-algorithm possibility.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality\\n- As far as I could tell, their conversion of a hippocampal circuit process to a ML algorithm is novel. 
It does go beyond the closest match, SimSiam, in both architecture and abilities.\\n\\nQuality\\n- In addition to empirical results, the analytical treatment is super useful to make us understand *why* their algorithm might outperform SimSiam in terms of robustness to low weight decay (among other things). \\n\\nSignificance\\n- As I wrote in the summary, \\\"this is a nice paper showcasing the inspiration-from-neuroscience to usefully-deployed-ML-algorithm possibility.\\\"\", \"weaknesses\": \"I have two core concerns:\\n1. The authors link their algorithm to the temporal prediction hypothesis. While I can see how in principle their setting (with image augmentations and the StopGradient) is similar to what temporal prediction would entail, results from this project cannot be used easily to validate their success as a success of temporal prediction SSL solely due to the reason that they do not use a temporal sequence. Yes, in a temporal sequence like a video subsequent frames could be thought of as augmentations but it is unclear if this algorithm will necessarily generalise to those settings because it is unclear if the class of augmentations used here resemble \\\"natural\\\" augmentations in videos.\\n2. Keeping the hippocampal inspiration aside for a moment, the addition of g and Sim-2 is what distinguishes this algorithm from SimSiam. However, Fig.5. suggests that setting g=I doesn't influence the performance too much - it would be good to know if the continual learning results and other results do require that g is trainable and not I. Similar concern for Sim-2 - what happens when Sim-2 is left out? These ablation analysis are important to understand what makes PhiNet special. 
Also removing Sim-2 makes the algorithm similar in essence to SimSiam - I'd be curious if this version already outperforms the original SimSiam implementation - this will help us establish a stronger \\\"baseline\\\" for the full PhiNet.\", \"clarity_suggestion\": [\"In the analytical Section 4, it'd be great if you'd give us an intuition for what phi and gamma in Eq. 1 actually denote - how are they related to g and h, etc. This will truly help tie in the section to the rest of the paper (although it is fine as is too).\"], \"questions\": \"What is the physiological relevance of low weight decay - why is that the important analysis here? (the authors spend a lot of space on it)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"I thank the authors for their response. It is clear to me why there is temporal lagging now. I also understand Figure 5 more clearly now.\\n\\nMy initial score already gave the authors the benefit of the doubt so I will keep my score.\"}", "{\"metareview\": \"This paper presents a novel self-supervised learning technique, loosely inspired by work on the hippocampus in neuroscience, called PhiNet. Similar to SimSiam, PhiNet take an input, generates two augmentations. But, unlike PhiNet, it makes a prediction with both augmentations (one of them via two steps of processing), and then compares these to an encoding of the original input. This is supposed to be analagous to the CA1-CA3-EC circuit in the medial temporal lobes.\\n\\nThe authors claim that PhiNet is able to learn useful representations more stably than SimSiam. They also claim that it can adapt more quickly in online learning scenarios. 
Finally, they claim to have a version of the model using an exponential moving average that is even better on continual learning tasks.\\n\\nThe strengths of the paper are that the idea and its link to neuroscience are interesting and the evaluations are robust and convincing. The weaknesses are that the advance over existing SSL methods is not that great, and some aspects of clarity in the paper were lacking.\\n\\nGiven these considerations, and the final scores (6,6,8,8) a decision of accept (poster) was reached.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was constructive, and the authors assuaged the reviwer concerns enough to warrant acceptance.\"}", "{\"summary\": \"This paper introduces PhiNet and X-PhiNet, where the input is processed in three parallel pathways with 0,1,2 predictors and with two pathways having the stop-gradient operation. The authors report favorable results over SimSiam and explore online/continual learning effectiveness in their models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The idea is well motivated and the paper is thoroughly cited. The analysis is rigorous and convincing.\", \"weaknesses\": \"The majority of my weaknesses stem from my lack of understanding about certain claims, and I believe that this paper will be improved if the following points are made clearer:\\n\\n- Can the authors explicitly explain, with specific reference to any important terms, why stop-gradient leads to any temporal lagging? I understand how stop-gradient works, but I fail to see how it is related to time in any way.\\n\\n- I am not able to see any claims in section 5.1 in Figure 5, even though it is referenced in the text. 
What kind of improvement am I supposed to see and what effect am I looking for?\\n\\nI will update my review once I gain a better understanding of these results.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> I think the result would be stronger if there were also quantitative comparisons, e.g., something showing that more random seeds would result in collapse for SimSiam than for PhiNet.\\n\\nThank you for your helpful suggestions.\\nThe results in Figure 5 (now Figure 6) have been updated to include error bars based on multiple seeds. From these results, it can be observed that SimSiam exhibits unstable training when the batch size is 1024 and the weight decay is small, resulting in large variability in accuracy across different seeds.\\nWhile it is challenging to definitively distinguish whether mode collapse occurs during training, we believe that the large variability in the accuracy of SimSiam across seeds and its overall lower accuracy suggest that \\\"more random seeds would result in collapse\\\". \\n\\n> For empirical performance, the errorbar is missing from Fig 5 and Fig 7 so it's a bit hard to measure the margin of error.\\n\\nThank you for your helpful advice.\\nFollowing your suggestion, we have added error bars to Figure 5 (now Figure 6). However, due to time constraints, we have not yet been able to include error bars for the continual learning results in Figure 7 (now Figure 8). \\nFor the error bars of the continual learning results, please refer to Tables 5, 6, and 7 in the Appendix, where error bars are provided. 
While these tables do not include a sweep over weight decay, they demonstrate the statistically significant differences achieved by X-PhiNet.\\n\\n> As baselines for comparison here (POYO, SimSiam, etc) are not designed for continual learning, it might make sense to include more baselines from the field of continual learning\\n\\nThank you for your feedback.\\nAs you remark, SimSiam and BYOL are not designed for continual learning. Although BYOL also employs EMA, its performance in online and continual learning settings is not as strong as X-PhiNet, indicating that EMA does not guarantee high performance. Furthermore, RM-SimSiam, which we employ as a baseline in our comparisons, is designed for continual learning and also utilizes EMA. Despite this, X-PhiNet outperforms RM-SimSiam.\\nIn addition to RM-SimSiam, another self-supervised method for continual learning is CasSSLe. While it would indeed be valuable to include CasSSLe[Fini2022] as a baseline for comparison, the limited rebuttal period make it challenging to conduct additional experiments at this time. A more detailed analysis in the context of continual learning would be a future work.\\nWhile you may feel that there is a lack of strong baselines for continual learning, to the best of our knowledge, within the context of non-contrastive learning, most research has focused on data-structure-based approaches such as replay strategies [Madaan 2022]. \\n\\n[Fini2022] Fini, E., da Costa, V. G. T., Alameda-Pineda, X., Ricci,\\nE., Alahari, K., and Mairal, J. Self-supervised models are continual learners. CVPR, 2022. \\n[Madaan 2022] D. Madaan, J. Yoon, Y. Li, Y. Liu, and S. J. Hwang. Representational continuity for unsupervised continual learning. ICLR, 2022.\\n\\nIf our response satisfies the reviewer, we appreciate it if you would consider adjusting their score accordingly.\"}", "{\"title\": \"Response to authors\", \"comment\": \"I think the authors have addressed and clarified most of my concerns. 
Thanks for the efforts. I'll increase my confidence score.\"}" ] }
5swfKRkCx7
Two Heads are Better than One: Retrieval Augmented LLM for Question Answering with External Knowledge Attention
[ "Yuanhe Tian", "Fei Xia", "Yan Song" ]
Retrieval-augmented generation (RAG) of large language models (LLMs) has recently attracted significant attention owing to its ability to address knowledge gaps in generating reliable answers for specific questions. Existing RAG approaches typically optimize the knowledge processing by filtering out irrelevant or incorrect information and restructuring it for model input, improving the accuracy of answers to given questions. A general approach in doing so is to combine the retrieved knowledge with the input inquiry, which are then fed into the LLM to produce an answer. This approach requires the LLM to have strong knowledge comprehension and reasoning capabilities to effectively utilize the useful information, which may lead to errors when it fails to correctly interpret relevant knowledge. In this paper, we propose a novel approach to augmenting LLMs with external knowledge attention for question answering (QA), where the attention is functionalized as an extra head that is integrated with the internal heads used in LLMs. We develop a memory-based mechanism that dynamically controls the degree of knowledge integration with the extra head based on the relationship between the question and the retrieved knowledge, and allows for differentiated fusion of external knowledge and LLM ability at its different layers. Experiments on both general and specific-domain QA tasks demonstrate the effectiveness of our approach, highlighting its potential to optimize LLMs for similar challenges.
[ "question answering", "large language modeling", "retrieval augmented generation", "knowledge" ]
https://openreview.net/pdf?id=5swfKRkCx7
https://openreview.net/forum?id=5swfKRkCx7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oScO3dKjIW", "cKuyx7C5I3", "PgPheaFHc5", "4EjjaojzcF", "3NQk1qSzp2" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730521624820, 1730118766782, 1730710316129, 1731922157262, 1730699132753 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13840/Reviewer_U16r" ], [ "ICLR.cc/2025/Conference/Submission13840/Reviewer_8beM" ], [ "ICLR.cc/2025/Conference/Submission13840/Reviewer_gndj" ], [ "ICLR.cc/2025/Conference/Submission13840/Authors" ], [ "ICLR.cc/2025/Conference/Submission13840/Reviewer_Gbmc" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on improving the performance of LLMs with RAG on the question-answering (QA) task and proposes an approach to selectively fuse externally retrieved knowledge using different attention mechanisms. Specifically, based on the relevance of the retrieved knowledge, the authors propose encoding and integrating knowledge of varying weights into LLMs at different layers. Experiments on general-domain QA datasets and two domain-specific datasets (medical and counterfactuals) demonstrate improved performance compared to previous baselines. Ablation studies also show the effectiveness of the proposed approach by investigating the knowledge attention weights and the integration of different LLM layers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation and idea of improving the integration of externally retrieved knowledge with LLMs in RAG are good and interesting. Indeed, most existing solutions in RAG simply concatenate the retrieved knowledge with the input and send it to LLMs in a shallow way, which may not fully leverage the knowledge and can lead to errors. 
This paper explores methods to utilize the knowledge in a more manageable way by assigning different attention weights to different pieces of knowledge and integrating them into different layers of the LLM, thereby utilizing the knowledge more deeply.\\n2. The experiments and ablation studies look sound and can demonstrate the design choices of each component in the proposed method (i.e., knowledge attention, memory module, and fusion in LLM layers).\", \"weaknesses\": [\"1. The usage of notations and symbols in the paper is sometimes misleading, and some of them are not consistent throughout the paper. For example,\", \"$s_{l,u}$ in equation (7) becomes $s_{l,n}$ in L173\", \"$f_{K}$ in equation (1) and $f_{KA}$ in L101, L124\", \"The $H^S$ in equation 3 is different from in L145\", \"It should be $H^{X}_{l-1,2}$ in equation (9)\", \"2. The datasets and benchmarks used for the general domain QA experiments are somewhat outdated and are not commonly used for evaluating QA in the era of LLMs. This makes the proposed method incomparable to more recent and advanced methods. For example, the community usually uses the following datasets/benchmarks:\", \"Natural Questions: A Benchmark for Question Answering Research, Kwiatkowski et al., 2019\", \"TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, Joshi et al., 2019\", \"MuSiQue: Multihop Questions via Single-hop Question Composition, Trivedi et al., 2022\", \"When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories, Mallen et al., 2023\", \"3. The experimental settings are not entirely fair. In Table 4, the backbone models for the baselines are not the same, and some methods are optimized using the training data, while others are not. 
Furthermore, there is a lot of missing data in Table 4, and the improvement appears marginal compared to Tables 2, 3, and 4, especially considering that the proposed method requires additional training and inference costs. For example, it\\u2019s only about a 1-4% improvement compared to the base LLaMA in Table 2.\", \"4. The paper overstates some of its findings and results. For example, it only experiments with the LLaMA model (7B/13B) and BERT as the encoder, yet claims \\\"*our approach works with various pre-trained LLMs*\\\" (L314). In addition, it states in L297 that each model was run three times and that the average and standard deviation were reported, but this information is not shown in the paper.\"], \"questions\": \"1. What is the difference between $f_{KA}$ in L101 and $f_{K}$ in equation (1)?\\n2. I don\\u2019t understand equation 1. Why use $f_K$ to subtract the output of LLM?\\n3. It appears that the proposed method requires training and parameter tuning on the training and development sets to achieve optimal results for each dataset. How transferable is the proposed method? Will it perform well on other datasets that it has not been trained on?\", \"typos\": \"1. L109, Figure ??\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a retrieval-augmented question answering model that integrates external knowledge into LLMs through an additional attention head and a memory module controlling knowledge integration across layers. The approach aims to enhance the model's ability to utilize relevant information for answering questions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper's strength lies in its use of vectorized representations for retrieving knowledge, which allows for a more nuanced integration of external information with the LLM's internal representations. 
This approach leverages the ability of vector spaces to capture semantic relationships, enabling the model to better fuse and reason over the retrieved knowledge.\\n2. Another strength is the introduction of a memory module that dynamically adjusts the degree of knowledge integration across different layers of the LLM. This enables more granular control over how external knowledge influences the model's reasoning process, potentially leading to improved performance on complex question-answering tasks.\", \"weaknesses\": \"1. While the paper presents a novel approach to integrating external knowledge into LLMs, it may lack a broader discussion on the generalizability of this method across different LLM architectures.\\n2. It is unclear whether the proposed attention mechanism and memory module would require significant retraining or adaptation for each LLM, which could limit its practical applicability.\\n3. Although the authors explain that this embedding method has advantages (as mentioned in lines 050-085), the article does not explicitly address the robustness of the model to noisy or irrelevant data during retrieval, nor does it provide a comparison with these methods.\\n* Line 109: There are some errors in the figure references.\", \"questions\": \"1. Is it necessary to train additional attention and parameter matrices for each LLM to incorporate the proposed knowledge attention mechanism?\\n2. Regarding noise resistance, how does your method perform compared to other methods under noisy text conditions? For instance, how does the model's performance change when it is fed irrelevant content?\\n3. Could the authors elaborate on the scalability of the proposed approach, especially when dealing with very large retrieval contexts (beyond BERT's limits) or when the complexity of the questions increases?\\n4. 
How does the model handle conflicting information from different knowledge sources, and is there a mechanism in place to resolve or weigh such conflicts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an approach to enhance large language models (LLMs) for question answering (QA) tasks through a novel retrieval-augmented generation (RAG) system. This system incorporates external knowledge dynamically using a specialized attention mechanism and memory modules. The proposed method, termed External Knowledge Attention (EKA), integrates an additional \\\"head\\\" within the multi-head attention framework of LLMs, designed to selectively fuse relevant external knowledge into the model\\u2019s decision-making process across different layers.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of an extra attention head for integrating external knowledge directly within each LLM layer is novel, potentially improving the model's ability to utilize relevant information.\\n2. The paper provides comprehensive experimental setups and results.\", \"weaknesses\": \"1. The architecture\\u2019s complexity, involving multiple components like memory modules and additional attention mechanisms, might pose challenges in practical implementations or scaling.\\n2. The multi-hop datasets are simple and small, except for GrailQA.\", \"questions\": \"1. Can you test your approach on other multi-hop datasets like HotpotQA, Musique, etc?\\n2. Llama3 was published in early 2024. Can you try to run your experiments on it?\\n3. Please remove explanations about marks in the caption of Table 5. It is the same as it is in Table 4. The two tables are close enough.\\n4. Can the knowledge attention module be generalized to fit different scenarios? How is the transferability of the module? 
Have you tried training this module on one dataset, such as WebQSP, and testing it on another, such as GrailQA?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a novel method to enhance large language models (LLMs) for question-answering tasks by incorporating external knowledge through a specialized attention mechanism, termed External Knowledge Attention. This approach extends the conventional retrieval-augmented generation (RAG) framework, where relevant external knowledge is retrieved to supplement the LLM\\u2019s existing knowledge, thereby improving answer accuracy. The proposed model introduces an additional \\\"external knowledge attention\\\" head that operates alongside the LLM's internal attention heads, enabling a more flexible and dynamic integration of retrieved information based on its relevance to the input query. A memory-based mechanism allows the model to modulate the extent of external knowledge fusion across different LLM layers, adapting based on the relationship between the question and the retrieved content. Experimental results highlight the model's significant performance improvements across both general and specialized question-answering tasks, illustrating its effectiveness in utilizing external knowledge for more accurate responses.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The approach of knowledge injection through latent vectors in a decoder-only model is innovative. 
Most prior studies have focused on incorporating latent knowledge into encoder-decoder models, such as T5, whereas this work applies it to a decoder-only model.\\n\\nAlthough the authors do not explicitly address efficiency, the model's use of latent knowledge suggests potential benefits for reducing inference latency and increasing throughput.\", \"weaknesses\": \"The approach's intuition is not fully explained. The authors argue that standard RAG suffers from \\\"less converged integration of knowledge and LLMs\\\" due to concatenation-based methods, yet this claim lacks citation and evidence. Moreover, the experiments do not directly validate that the proposed approach addresses this issue.\\n\\nThe paper is poorly written, with the methodology lacking sufficient detail. Specific aspects remain unclear, as noted in the questions below.\\n\\nCertain methodological choices lack justification and intuitiveness, such as the attempt to align the BERT and LLAMA latent spaces without clear post-training evidence.\\n\\nThe approach seems closer to fine-tuning than instruction tuning. Given that LLMs are designed for multiple tasks, the authors should demonstrate that the approach preserves the model's performance on tasks beyond RAG.\", \"questions\": \"In line 131, the authors mention removing positional embeddings. Could they clarify this choice? Was an ablation study conducted?\\n\\nHow is a single knowledge instance encoded as a single vector? If a pre-trained language model (PLM) like BERT was used, the output would typically be a sequence of latent vectors, not a single vector. The authors should clarify this process.\\n\\nWhy did the authors select BERT specifically? There are more advanced models available, and the choice of BERT requires justification.\\n\\n[Feedback]\\n1. Equation 2 has an error; it should be s_u instead of s_i\\n2. 
Several recent citations on memory use in RAG settings are missing, such as [1]\\n[1] https://arxiv.org/abs/2406.04670\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5sdUTpDlbX
Professor X: Manipulating EEG BCI with Invisible and Robust Backdoor Attack
[ "Xuan-Hao Liu", "Xinhao Song", "Dexuan He", "Bao-liang Lu", "Wei-Long Zheng" ]
While electroencephalogram (EEG) based brain-computer interface (BCI) has been widely used for medical diagnosis, health care, and device control, the safety of EEG BCI has long been neglected. In this paper, we propose **Professor X**, an invisible and robust “mind-controller” that can arbitrarily manipulate the outputs of EEG BCI through backdoor attack, to alert the EEG community of the potential hazard. However, existing EEG attacks mainly focus on single-target class attacks, and they either require engaging the training stage of the target BCI, or fail to maintain high stealthiness. Addressing these limitations, Professor X exploits a three-stage clean label poisoning attack: **1)** selecting one trigger for each class; **2)** learning optimal injecting EEG electrodes and frequencies strategy with reinforcement learning for each trigger; **3)** generating poisoned samples by injecting the corresponding trigger’s frequencies into poisoned data for each class by linearly interpolating the spectral amplitude of both data according to previously learned strategies. Experiments on datasets of three common EEG tasks demonstrate the effectiveness and robustness of Professor X, which also easily bypasses existing backdoor defenses. Code will be released soon.
[ "safety", "backdoor attack", "EEG", "brain-computer interface" ]
Reject
https://openreview.net/pdf?id=5sdUTpDlbX
https://openreview.net/forum?id=5sdUTpDlbX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zeJnzjl85J", "wnis6x39fK", "wCssqcfUio", "uJ0Lqe0G0y", "swCLAVWdyx", "plec2LT4e7", "p9cC5WfaUo", "nt7ZOy9IXw", "nfmmO7MKIC", "jFDbiso5Ue", "j5qM7PvKM6", "idb7BLv6qI", "gm59EQiEWb", "gHGYQmy1HF", "du18XMe0a2", "c8GIY6rR0Q", "bBuOnVXLWn", "ZilbJqWMFi", "ZQwCwXxNQT", "YxNMJeInMh", "VjTSSykJzr", "TrWL2Yjabt", "TFz2IEG573", "SgsIozlZ1G", "Q0PDSjAED1", "OkzwDTz9fT", "KRdbfmeik2", "INCiWCcT2F", "H8mG8UuIeX", "FnPW45yAKl", "EAX1mfFCbN", "CzfKTrFCLP", "BqGWCVYIgg", "A94Rf9Wte0", "7biNaYdsYV", "7ZwVB03mC5", "6w6BSrU4od", "6kjg6wG1Sv", "4YfccSPRq0", "3gPbFG44O8", "0yXnwZM0sW" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732777668104, 1732897106971, 1730883992592, 1730713335152, 1732685271996, 1731992778461, 1734630300263, 1732534743988, 1732719767777, 1731992948002, 1732516518595, 1731992402495, 1732611445650, 1732517027016, 1733047804617, 1737524040170, 1732777164682, 1731992700431, 1732621045590, 1732534831085, 1731993205995, 1731992097500, 1732694264704, 1731993144746, 1731992908608, 1733113956767, 1730125184482, 1731992489874, 1732545356844, 1732331895727, 1731991628529, 1732619761193, 1732521512466, 1732707285021, 1732147738210, 1730396869810, 1730245460540, 1731991711035, 
1732731382928, 1731991987687, 1733052212355 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Area_Chair_r9hc" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_RejD" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_JuBu" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Area_Chair_r9hc" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_RejD" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_dg1E" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_RejD" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_dg1E" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Area_Chair_r9hc" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_JuBu" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10303/Area_Chair_r9hc" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_uEuL" ], [ "ICLR.cc/2025/Conference/Submission10303/Reviewer_kEdv" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ], [ "ICLR.cc/2025/Conference/Submission10303/Authors" ] ], "structured_content_str": [ "{\"title\": \"Kindly request for post-rebuttal comments\", \"comment\": \"Dear Reviewer kEdv:\\n\\nThanks for your recognition of our work and insightful questions about how to defend Professor X. We are writing this comment to kindly request post-rebuttal comments.\\n\\n**We have given a point-by-point answer to each of your questions and we believe we have comprehensively addressed them. Plus, we have added new experiments and a new section on defensive research in our revised paper.** Have we already addressed your concern? Or do you have any further concerns?\\n\\nPlease feel free to contact us; we are more than happy to hear from you. We would really appreciate it if you could reconsider your rating, and we are glad to address your further concerns.\\n\\nBest, \\nAuthors\"}", "{\"comment\": \"Dear Reviewers,\\n\\nIf you haven\\u2019t already done so, I strongly encourage you to engage with the authors. Please note that there is no need to make any commitments regarding the final score at this time; but I would appreciate it if you could acknowledge that you have received and reviewed their responses, and ask any follow-up questions you may have.\\n\\nBest,\\\\\\nAC\"}", "{\"summary\": \"From the outset, the abstract of the submission presents a proposition that appears to be unrealistic and somewhat disconnected from contemporary research realities. 
It remains inaccurate to assert that \\\"While electroencephalogram (EEG) based brain-computer interfaces (BCIs) have been extensively employed in medical diagnosis, healthcare, and device control, the safety of EEG BCIs has long been neglected.\\\"\\n\\nThe manuscript constructs a fictitious framework, suggesting that research regarding BCIs has already translated into widespread applications. The submission relies on historical datasets such as BCIC-IV-2a. It is critical to note that motor imagery may not be effective for the target demographic of individuals with paralysis due to significant neural degeneration. Moreover, the methodology seems to rely on a dubious SEED database to fabricate artificial backdoor attack scenarios, ultimately suggesting solutions that are not based in rigorous academic research. This strategy does not promote the growth of academic inquiry, thus justifying a rejection of this submission.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The reviewer found no substantial strengths in the submission. It is fundamentally inadequate to fabricate problems only to propose solutions. Backdoor attacks do not present a significant concern in the field of BCI research at this time, as established paradigms are still lacking. Nonetheless, BCI research is experiencing significant growth due to advancements in machine learning; however, a considerable distance remains before it can transition to healthcare and broader applications that would necessitate the implementation of protections against backdoor attacks.\", \"weaknesses\": \"The prevailing conditions within the submission exhibit a lack of realism, an insufficiency of novel contributions, and a substantial deficiency in advancements for the BCI community.\", \"questions\": \"Why did the authors construct an entirely unrealistic and artificial scenario? 
Where did the authors encounter such hyperbolic or enthusiastic claims regarding the purported applications of BCI in healthcare and medical diagnostics? Currently, only a limited number of conditionally approved, mostly invasive devices have been tested on a small cohort of subjects within closed clinical studies.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Professor X, a novel, frequency-based EEG attack designed to be stealthy and multi-target. The method involves three main steps: 1) Finding triggers for each class, 2) using reinforcement learning to find optimal electrodes and frequencies injection\\nstrategies, and 3) generating poisoned samples using triggers and clean data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The study has a clear research question, which makes its purpose easy to understand.\\n2. The idea of employing reinforcement learning for finding the optimal electrodes and frequencies for data poisoning is interesting. \\n3. The authors designed multiple experiments to evaluate different parts of their method, and the ones focused on showing the method's robustness are especially valuable.\", \"weaknesses\": \"As mentioned in the related works, a research direction exists that focuses on designing frequency-based backdoor attacks. Although existing methods are designed for images rather than time series, the authors could still compare their proposed method with existing approaches to better highlight the novelty of the work in relation to current frequency-based methods.\\n\\nThe authors designed several baselines (stealthy and non-stealthy) based on BadNets, PP-based BD attacks, and so on, which is great. 
However, it would be great if the authors considered and designed some baselines based on the existing frequency-based BD attack, if applicable.\\n\\nThe stealthiness of the method is one of the claims of the method. Although there are some visualizations in this regard, it would be great if the authors designed an experiment to validate the stealthiness of the method. It may be similar to a previous study [1], which used anomaly detection methods.\", \"the_author_only_considers_three_models_for_the_classifiers\": \"EEGNet, DeepCNN, and LSTM. However, it would be great if the author considered other new models, like TIMESNET [2] and other new transformer-based models.\\n\\nThe quality of the writing needs improvement; here are some points:\\nThe third paragraph of the introduction requires revision for clarity and coherence. \\nFigure 1 consists of five sub-figures that provide a good summary of the method. However, in the introduction (line 050), the authors begin by explaining Figure 1-d, which destroys the flow.\\n\\nThe Methodology section should be improved by first defining the key concepts, symbols, and problems. \\nIt would also be helpful to include a table of abbreviations and symbols, as the multiple terms used throughout the paper may be confusing for readers. \\n\\n\\n\\n[1]. Lin X, Liu Z, Fu D, Qiu R, Tong H. BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting. arXiv e-prints. 2024 Oct:arXiv-2410.\\n\\n[2]. Wu H, Hu T, Liu Y, Zhou H, Wang J, Long M. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. InThe Eleventh International Conference on Learning Representations.\", \"questions\": \"In addition to the points in the 'Weaknesses' section, I'd like to add one more:\\n\\nHow effective is the proposed BD attack when applied to approaches that utilize frequency information of data, such as [3]?\\n\\n\\n\\n[3] Zhang X, Zhao Z, Tsiligkaridis T, Zitnik M. 
Self-supervised contrastive pre-training for time series via time-frequency consistency. Advances in Neural Information Processing Systems. 2022 Dec 6;35:3988-4003.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Replying to the Reviewer RejD\", \"comment\": \"Dear Reviewer RejD:\\n\\nThank you for your feedback and the valuable time you spent reviewing our paper! We would really appreciate it if you could reply to some of the questions in our response, since your latest feedback (not public to everyone or the ACs) overlooked these questions. We list them below to save your valuable time:\\n\\n1. Why do you regard the SEED dataset as dubious? \\n2. What's your opinion about the CHB-MIT dataset; are our attack's results on this dataset unrealistic too?\\n3. What do you want to illustrate by this fact \\\"*It is critical to note that motor imagery may not be effective for the target demographic of individuals with paralysis due to significant neural degeneration*\\\"?\\n\\nWe sincerely hope to hear from you and discuss these questions. This is important for us to better address your concerns.\\n\\nBest, \\nAuthors\"}", "{\"title\": \"Response to the reviewer kEdv (2/4)\", \"comment\": \"> The approach may face challenges in scaling up for larger or more complex datasets, potentially limiting its effectiveness in real-world applications involving diverse user populations.\\n\\nWe would like to clarify that our method is effective no matter how large the size of EEG datasets. As we poison the dataset at a constant poisoning ratio, the larger the target dataset, the more poisoned data we inject into the dataset.\\n\\nOr perhaps the word \\\"large\\\" indicates the number of subjects in the dataset. However, our attack is still effective, as the experiments performed in our paper are all under a cross-subject setting. We adopt the same experiment setup as that in the paper [3]. 
In short, we train the EEG model on the data from n-2 subjects, and test the EEG model on the data of a new subject. Our attack works very well under this cross-subject setting, so we can conclude that our method is effective no matter how large the number of subjects in the EEG datasets.\\n\\nAs for the word \\\"complex\\\", we take this to mean the complexity of the classification task, i.e., the number of label types. If so, we would like to argue that our attack is effective in most situations (> 90%). Because in most cases, the number of label types in EEG BCI is no more than four. The datasets selected in our experiment are the most complex datasets. For emotion recognition, most datasets only focus on binary classification, like DEAP [4] and DREAMER [5]. For motor imagery, most datasets only do binary classification. As far as we know, the BCIC-IV-2a is the only dataset to do four-class classification. For epilepsy detection, most datasets only do *ictal or non-ictal* binary classification. We have refined the task to four categories.\\n\\nFrom Table 1 and the additional experimental results on the P300 tasks above, we are quite confident that our attack will be effective when facing challenges in scaling up for larger or more complex datasets.\\n\\n[3] Meng L, et al. EEG-based brain\\u2013computer interfaces are vulnerable to backdoor attacks[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023.\\n\\n[4] Koelstra S, et al. Deap: A database for emotion analysis; using physiological signals[J]. IEEE transactions on affective computing, 2011.\\n\\n[5] Katsigiannis S, Ramzan N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices[J]. 
IEEE journal of biomedical and health informatics, 2017.\\n\\n### Questions\\n\\n> What ethical frameworks are in place to govern the use of techniques like Professor X, and how can researchers ensure that such methods are not misused?\\n\\nWe can require professionals in the field of EEG BCI to sign relevant guarantees that they will not use any harmful attack, but after all this is not the fundamental solution. Only by developing an effective defensive or detection method can we stop the misuse of such techniques.\\n\\nUnfortunately, we did not find any effective ways to defend against or detect Professor X due to its high stealthiness and robustness. To the best of our knowledge, we cannot ensure that such methods won't be misused at this stage. We will do our utmost to find a way to defend against our attacks. Here, we are calling for research on the defense against Professor X.\\n\\n> How effective is STRIP in detecting backdoor inputs under various levels of perturbation? Are there specific types of perturbations that significantly impact its performance?\", \"strip_detects_the_backdoor_based_on_these_findings\": \"**1)** backdoor trigger is input-agnostic; **2)** backdoor trigger is strong and effective when performing input perturbation; **3)** the backdoor models' outputs (softmax) of poisoned data have very low entropy.\\n\\nAs the original STRIP paper [6] said, their detection method is insensitive to the trigger size, so STRIP is effective no matter whether the trigger is big or small. In our understanding of STRIP, it can detect the trigger that is obvious to models in the original space (like the temporal domain for time series data, and the spatial domain for image data).\\n\\nHowever, any trigger that may affect the above findings may cause STRIP's detection failure. 
For example, **1)** the trigger is input-specific; **2)** the trigger is not that strong and fails when performing input perturbation; **3)** the trigger won't cause the backdoor model to predict with very low entropy.\\n\\nOur trigger is injected in the frequency domain, leading to an input-specific pattern in the temporal domain, causing **finding 1 to be invalid**. Moreover, input perturbation in the temporal domain may damage the frequency information, causing our trigger to disappear, which causes **finding 2 to be invalid**. Last, as EEG is a nonstationary modality, the outputs of EEG models always have high entropy, making **finding 3 invalid**. Thus, STRIP is not effective in detecting the Professor X attack.\\n\\nWe hope our answer addressed your concerns; if you have any questions, please feel free to ask.\"}
Here is a summary of the discussions and the rationale for my decision:\", \"Reviewer RejD who gave a **score of 1**, explicitly mentioned that \\\"The notion of a BCI backdoor vulnerability is unfounded\\\". I respectfully disagree with this and applaud the authors for exploring a novel, interesting, and forward-thinking problem in the sensitive area of BCI.\", \"However, other issues were also identified. The paper was found to be somewhat difficult to understand. For instance, the concept of \\\"stealth\\\" felt unclear and lacked proper definition and justification. Another shortcoming is the absence of benchmarking against recent time-series or signal-processing attack methods. Additionally, the paper does not include a clear discussion of ethics, which is essential for a topic like this. The analysis was generally found to be a bit limited in scope and depth w.r.t. backdoor defense mechanisms.\", \"Reviewer RejD agrees with these shortcomings.\", \"Reviewer uEuL who gave a **score of 8** *agrees* with the shortcomings regarding clarity and insufficient benchmarking, *however, strangely did not change their score after agreeing with these key issues*.\", \"Reviewer dg1E also agrees with the limitations of the benchmarking and analysis in the paper.\", \"Overall, taking everything into account and reducing the impact of the two extreme scores (1 and 8) due to the reasons mentioned above, I believe the paper is still slightly below the threshold for acceptance. However, I highly encourage the authors to continue this line of work and further polish the paper for another round of submission. Better presentation and clarity of the work (e.g., specific definitions), followed by more benchmarking and analysis/discussions, will make the paper acceptable in my opinion.\"]}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful and insightful suggestions. 
We believe we have comprehensively addressed your questions regarding Professor X's realism.\\n\\nWe are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Thank you for your thoughtful consideration!\"}", "{\"title\": \"It is not about dataset but complete misunderstanding of the BCI research field\", \"comment\": \"Dear Authors,\\n\\nThank you for your detailed response. However, the fundamental issue with your work, as accurately identified by the initial review (\\\"From the outset, the abstract of the submission presents a proposition that appears to be unrealistic and somewhat disconnected from contemporary research realities.\\\"), remains its disconnect from real-world medical device security realities. \\n\\nThe medical device field, particularly regarding pacemakers, DBS, cochlear implants, hearing aids, and future BCIs, is significantly more mature than AI or deep learning. This maturity has led to robust security protocols and regulatory oversight, such as those enforced by the FDA, EMA, etc. To date, there have been no documented instances of successful hacking or backdoor attacks on these devices. Medical devices are typically customized for individual patients, and their associated data and models are subject to stringent confidentiality requirements mandated by medical regulatory standards. User model customization for each session, necessitated by electrode placement variations, remains a significant challenge in EEG-based BCI. This practical issue warrants further research attention rather than theoretical explorations of unrealistic backdoor attacks.\\n\\nIt is essential to conduct thorough research and familiarize oneself with the field before proposing unrealistic scenarios and potential solutions. 
While publicly available datasets can be valuable for research, it's crucial to recognize that future approved medical devices are subject to stringent regulations and security measures, limiting the potential for malicious exploitation.\\n\\nWe urge you to consider applying your research efforts to fields such as image processing or speech recognition, where the regulatory landscape may be less restrictive.\\n\\nAs a member of the BCI community, the reviewer cannot endorse a publication that disregards the established security practices and potential consequences of such claims. Therefore, the initial recommendation stands.\"}", "{\"title\": \"Response to the reviewer kEdv (4/4)\", \"comment\": \"> How effective is Fine-Pruning at different pruning ratios beyond 0.7? Are there specific thresholds where the attack's effectiveness is significantly impacted?\\n\\nFrom Fig 7 in our paper, we can see that when pruning ratios are over 0.7, the attack success rates drop for all EEG tasks. However, the clean accuracies drop more sharply and approach the random level. So Fine-Pruning is not that effective for Professor X.\\n\\nIn our understanding of Fine-Pruning, we would like to say that there are no specific thresholds where the attack's effectiveness is significantly impacted. The threshold in our paper seems to be 0.7, but this threshold is for EEGNet models. For other diverse EEG BCI models, many factors like the architecture, size, and activation function of the model will all have an impact on the threshold.\\n\\nIn practical applications of Fine-Pruning, the defender will set a threshold for clean accuracy, then prune the neurons until the clean accuracy drops to the threshold. 
So when the threshold is set relatively high, the backdoor may not be erased.\\n\\n> What future research avenues could be explored to improve defenses against backdoor attacks like Professor X, particularly in the context of Fine-Pruning?\\n\\nAs we discussed above, Fine-Pruning requires that the defender has a validation dataset $D_{valid}$. The performance of Fine-Pruning relies heavily on the quality of the validation dataset, since the low-activated neurons are determined by the validation dataset.\\n\\nSo in the future, building a large, diverse, high quality, and absolutely clean validation dataset is the key for improving the Fine-Pruning's performance. The most important part is the diversity, which not only means the diversity of EEG tasks, but also means the diversity of EEG formats. Thus, improving the defenses against backdoor attacks is not an easy task and needs joint efforts of the medical and academic communities.\\n\\n---\\nWe hope our responses could help address your concerns. We believe that this work contributes to this community and has the potential to serve as a catalyst for its development. We would sincerely appreciate it if you could reconsider your rating and we are more than happy to address any further concerns you may have.\\n\\n\\n[6] Gao Y, Xu C, Wang D, et al. Strip: A defence against trojan attacks on deep neural networks[C]//Proceedings of the 35th annual computer security applications conference. 2019: 113-125.\\n\\n[7] Bolun Wang, Yuanshun Yao, et al. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (S&P), pp. 707\\u2013723. IEEE, 2019.\\n\\n[8] Brandon Tran, Jerry Li, and Aleksander Madry. Spectral signatures in backdoor attacks. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018.\\n\\n[9] Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. 
In International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 273\u2013294. Springer, 2018.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful and insightful suggestions! We believe we have comprehensively addressed your questions regarding Professor X's comparison with vanilla FreBA, its stealthiness against anomaly detection methods, its generalizability on new sophisticated models, and writing improvement. We added the additional experimental results and refined our writing in our revised submission.\n\nWe would like to emphasize that our method is the **first to achieve both highly stealthy and robust backdoor attacks on EEG BCI**. Through a data poisoning approach, our method does not even require controlling the training stage of target models.\n\nWe are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.\n\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Thank you for your thoughtful consideration!\"}", "{\"title\": \"Response to the reviewer uEuL (1/2)\", \"comment\": \"We truly thank you for your appreciation of our work! Your questions about the generalizability of our attack are insightful, which also provide an opportunity to summarize our contributions. Our point-by-point responses are as follows.\n\n### Weaknesses\n> The method might encounter difficulties when scaling to larger or more intricate datasets, which could restrict its effectiveness in real-world applications with varied user populations.\n\nWe would like to clarify that our method is effective no matter how large the EEG dataset is. 
As we poison the dataset at a constant poisoning ratio, the larger the target dataset, the more poisoned data we inject into the dataset.\n\nOr perhaps we misunderstand the word \"large\": does this mean the number of subjects in the dataset is large? If so, our attack is still effective, as the experiments performed in our paper are all under the cross-subject setting. We adopt the same experiment setup as that in the paper [1]. In short, we train the EEG model on the data from n-2 subjects, and test the EEG model on the data of a new subject. Our attack works very well under this cross-subject setting, so we can conclude that our method is effective no matter how large the number of subjects in the EEG dataset is.\n\nAs for the word \"intricate\", does this mean the intricacy of the classification task, i.e., the number of label types? If so, we would like to argue that our attack is effective in most situations ( > 90%). In most cases, the number of label types in EEG BCI is no more than four, and the datasets selected in our experiment are the most intricate datasets. For emotion recognition, most datasets only focus on binary classification, like DEAP [2] and DREAMER [3]. For motor imagery, most datasets only do binary classification. As far as we know, the BCIC-IV-2a is the only dataset to do four-class classification. For epilepsy detection, most datasets only do *ictal or non-ictal* binary classification. We have refined the task to four categories.\n\nFrom Table 1, our method has been proven to be effective when facing various situations (diverse EEG models, tasks, and formats). Furthermore, we evaluate our attack on another public dataset which studies the P300 tasks [4,5]. 
The attack performances of three different EEG models on the dataset are still excellent:\n\n| | Clean | ASR | 0 | 1 |\n|:-:|:-:|:-:|:-:|:-:|\n|EEGNet| 0.818 | 0.993 | 1.000 | 0.986 |\n|DeepCNN| 0.807 | 0.940 | 0.997 | 0.883 |\n|LSTM| 0.779 | 0.855 | 0.995 | 0.714 |\n\nIt is worth mentioning that these results are obtained by running the reinforcement learning for only 30 iterations, which takes only 0.5 hours on each model. These results provide further strong evidence of the generalizability to other EEG datasets and real-world scenarios.\n\nIn conclusion, we are quite confident that our attack will remain effective when scaling to larger or more intricate datasets.\n\n---\n[1] Meng L, Jiang X, Huang J, et al. EEG-based brain\u2013computer interfaces are vulnerable to backdoor attacks[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 2224-2234.\n\n[2] Koelstra S, Muhl C, Soleymani M, et al. Deap: A database for emotion analysis; using physiological signals[J]. IEEE Transactions on Affective Computing, 2011, 3(1): 18-31.\n\n[3] Katsigiannis S, Ramzan N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices[J]. IEEE Journal of Biomedical and Health Informatics, 2017, 22(1): 98-107.\n\n[4] U. Hoffmann, et al. \"An efficient P300-based brain-computer interface for disabled subjects\", J. Neurosci. Methods, 2008.\n\n[5] Rodrigo Ramele. P300-Dataset. https://www.kaggle.com/datasets/rramele/p300samplingdataset\"}", "{\"title\": \"Adding a discussion section about defensive study in the revised paper\", \"comment\": \"Dear ACs, reviewers, and readers:\n\nThanks to reviewers JuBu, uEuL, and kEdv, who asked many questions regarding the defensive study against Professor X. These insightful concerns deepen our understanding of our attack and of how to guard against backdoor attacks in EEG BCIs. 
Thus, we added a new section in the revised paper's appendix (marked in blue) to discuss our humble opinion on the defensive study against Professor X, which we hope will benefit the future research.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful and insightful suggestions! We believe we have comprehensively addressed your questions regarding Professor X's sophisticated design, its generalizability on novel tasks, its scalability on larger or more complex datasets, and defensive research against Professor X.\\n\\ufeff\\n\\nWe would like to emphasize that our method is **the first to achieve both highly stealthy and robust backdoor attacks on EEG BCI**. Through data poisoning approach, our method even does not require controlling the training stage of target models. \\ufeff\\n\\ufeff\\n\\nWe are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them. \\ufeff\\n\\ufeff\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Thank you for your thoughtful consideration!\"}", "{\"comment\": \"Thank you for providing additional information and clarification to the concerns of myself and other reviewers. I have adjusted my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Kindly request for post-rebuttal comments\", \"comment\": \"Dear Reviewer dg1E:\\n\\nThanks for your valuable time in reviewing our paper! We are writing this comment to kindly request for post-rebuttal comments.\\n\\n**We have added the references you provided into out revised paper, and clarify the difference between our work and these references.** Have we already addressed your concern? Or do you have any further concerns?\\n\\nPlease feel free to contact us, we are more than happy to hear from you. 
We would really appreciate it if you could reconsider your rating, and we are glad to address your further concerns.\n\nBest, \nAuthors\"}", "{\"title\": \"Response to the reviewer kEdv (1/4)\", \"comment\": \"We truly thank you for your appreciation of our work! Your question about how to defend against Professor X is important and meaningful, which also helps us deepen our understanding of the Professor X attack. Below we have addressed your questions and concerns point-by-point.\n\n### Weaknesses\n\n> The potential for malicious use raises significant ethical issues, as manipulating EEG outputs could lead to harmful consequences for users and their applications.\n\nWe cannot agree more! As we discussed in the general responses, the malicious use of our techniques raises severe ethical issues. Unfortunately, we did not find any effective ways to defend against Professor X due to its high stealthiness and robustness. We will try our best to find a method to guard against our attack, and we here call for defensive research against Professor X.\n\nHowever, we would like to kindly argue that regarding the high stealthiness and robustness of our attack as a weakness is not suitable. While our work aims to alert the EEG community to the potential hazard of backdoor attacks, addressing the ethical issues raised by our attack is somewhat out of our paper's scope.\n\n> The method involves a sophisticated three-stage clean label poisoning attack, which may be complex to implement in practice, especially for those lacking expertise in reinforcement learning and EEG signal processing.\n\nThank you for raising these concerns. We admit that our work is somewhat sophisticated, but every stage in our attack is meaningful and effective.\n\n1. **We propose to inject triggers in the frequency domain,** which prevents bringing any unnatural frequency into the clean data.\n\n2. 
**We propose to adopt an optimization algorithm for learning the injecting strategy** to determine **which electrodes or frequencies we should inject the trigger into**. Moreover, we design two novel losses (DIS and HF) to enhance the stealthiness and robustness.\n\n3. **We introduce the clean label attack** to enhance the stealthiness since **different classes of EEG data have different frequency distributions**.\n\nYou can see the clear motivation behind the design of these three stages, as each of them improves the stealthiness or effectiveness of Professor X. However, we can still simplify our attack in the following ways:\n\n1. Our method can be applied **without RL** if some performance drop is acceptable. As displayed in Table 3, removing the RL causes an approximately 15% drop in attack performance.\n\n2. If you do not have to consider the different frequency distributions across classes, then **the trigger can be selected arbitrarily** without the clean label attack.\n\n> The design's focus on particular EEG tasks might result in overfitting, reducing its effectiveness in more generalized scenarios or with novel tasks.\n\nWe would like to take this opportunity to emphasize that we did not design our attack by focusing on any particular EEG task, and our attack is generalizable across different EEG tasks, formats, and models. To verify the generalizability of our attack, we carefully selected three EEG datasets as described in Section 4.1. These datasets vary significantly in tasks, electrode numbers, montages, and sampling rates. The experimental results demonstrate the effectiveness across these different datasets, proving the generalizability.\n\nFrom Table 1, our method has been proven to be effective when facing various situations (diverse EEG models, tasks, and formats). Furthermore, we evaluate our attack on another public dataset which studies the P300 tasks [1,2]. 
The attack performances of three different EEG models on the dataset are still excellent:\\n\\n| | Clean | ASR | 0 | 1 |\\n|:-:|:-:|:-:|:-:|:-:|\\n|EEGNet| 0.818 | 0.993 | 1.000 | 0.986 |\\n|DeepCNN| 0.807 | 0.940 | 0.997 | 0.883 |\\n|LSTM| 0.779 | 0.855 | 0.995 | 0.714 |\\n\\nIt is worth mentioning that these results are obtained by only running the reinforcement learning 30 iterations, which takes only 0.5 hour on each model. These results can be another strong evidence to demonstrate the generalizability to other EEG datasets and real-world scenarios.\\n\\nSo we are confident that our attack will be still effective in more generalized scenarios or with novel tasks. Or maybe we misunderstand what you meant, if you have any questions, please ask us.\\n\\n[1] U. Hoffmann, et al. \\\"An efficient P300-based brain-computer interface for disabled subjects\\\", J. Neurosci.Methods, 2008.\\n\\n[2] Rodrigo Ramele. P300-Dataset. https://www.kaggle.com/datasets/rramele/p300samplingdataset\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your recognition of our paper and the improved score! We are deeply grateful for your review, which has greatly assisted us in supplementing and perfecting our paper.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful and insightful suggestions! We believe we have comprehensively addressed your questions regarding Professor X's generalizability on novel tasks, its scalability on larger or more complex datasets, and defensive research against Professor X. \\ufeff\\n\\nWe would like to emphasize that our method is **the first to achieve both highly stealthy and robust backdoor attacks on EEG BCI**. Through data poisoning approach, our method even does not require controlling the training stage of target models. \\ufeff \\ufeff\\n\\nWe are wondering whether you have any additional questions or comments regarding our response to your review comments. 
We will do our best to address them.\n\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Thank you for your thoughtful consideration!\"}", "{\"title\": \"Response to the reviewer dg1E (2/2)\", \"comment\": \"### Questions\n> What motivated the specific design choices made in the three-stage clean-label poisoning strategy?\n\nThanks for your interest in the design of Professor X! Actually, we designed our attack in reverse order; that is, we **(1)** first design the frequency injection stage, **(2)** then design the reinforcement learning stage, and **(3)** lastly design the trigger selection from the true dataset to perform the clean label attack.\nWith this reverse order, you can better understand the motivation behind our design, and you will find the design of Professor X is very **intuitive**.\n\n1. We first notice that all backdoor attacks for time-series and EEG BCI inject triggers in the temporal domain, which will inevitably bring unnatural frequencies into the frequency domain. **Thus, we are the first to propose injecting triggers in the frequency domain.** We combine two data's amplitudes in the frequency domain to generate poisoned data. Since both are natural data, the generated poisoned data is natural in both the temporal and frequency domains. We visualize the poisoned data and clean data in Fig 2 (frequency domain) and Fig 8 (temporal domain) to verify the stealthiness.\n\n2. Now we know that injecting triggers in the frequency domain is stealthy and natural, but we don't know **which electrodes or frequencies we should inject the trigger into**. For different EEG tasks, the EEG electrodes and frequencies strongly related to the performance of EEG BCI are different. **For the first time, we propose to adopt an optimization algorithm for learning the injecting strategy.** After many experiments, we chose reinforcement learning as our optimization algorithm. 
Moreover, we design two novel losses (DIS and HF) to enhance the stealthiness and robustness.\n\n3. Last but not least, having ensured the performance of Professor X, we notice that certain classes of EEG data have specific morphology that can easily be identified by human experts; for example, EEG during the ictal phase contains more spike/sharp waves than EEG during the normal state phase. In short, **different classes of EEG data have different frequency distributions**. To enhance the stealthiness, **we introduce the clean label attack**. The poisoned data has the same label as the trigger data, so the poisoned data's frequency information doesn't contain any frequency distribution from other classes.\n\nConsequently, we design our novel Professor X, a three-stage clean-label poisoning attack.\n\n> How does the proposed method compare in effectiveness to existing backdoor attack methods in the literature?\n\nWe compare our attack with four baselines in our experiments; the results are displayed in Table 2 and Fig 3. We detail the experiment settings in Section 4.3. Specifically:\n\n1. We generate the poisoned data with different backdoor attack methods, and mix the poisoned data into a larger clean training dataset to form a poisoned dataset.\n\n2. We train an EEG BCI model on the poisoned dataset; the trained EEG BCI model is injected with a backdoor, so we call it the backdoor model.\n\n3. We test the backdoor model on the clean test set, and on the poisoned test set where we inject triggers into the clean data. We report the clean accuracy and attack success rate from this experiment.\n\nExcept for the different methods used to generate the poisoned data, the other experimental setups are all the same. By this experiment we compare our attack with previous baselines.\n\n---\n
We believe that this work contributes to this community and has the potential to serve as a catalyst for its development. We would sincerely appreciate it if you could reconsider your rating and we are more than happy to address any further concerns you may have. Thanks again!\"}", "{\"title\": \"Response to the reviewer JuBu (2/2)\", \"comment\": \"> The quality of the writing needs improvement; here are some points: The third paragraph of the introduction requires revision for clarity and coherence. Figure 1 consists of five sub-figures that provide a good summary of the method. However, in the introduction (line 050), the authors begin by explaining Figure 1-d, which destroys the flow.\\n\\nYour suggestions are so helpful, thank you very much! We have reorder the subfigure in Figure 1 and refine our writing in the revised paper. Please kindly refer to the new version of our paper, where the modifications are marked in blue.\\n\\n> The Methodology section should be improved by first defining the key concepts, symbols, and problems. It would also be helpful to include a table of abbreviations and symbols, as the multiple terms used throughout the paper may be confusing for readers.\\n\\nWhat a great suggestion! We added a table of key symbols in the Appendix A (Table 7) due to the page limitation. And we remind the reader the symbol table at the beginning of methodology section. Please kindly refer to the new version of our paper.\\n\\n### Questions\\n> How effective is the proposed BD attack when applied to approaches that utilize frequency information of data, such as [3]?\\n\\nWe are very sure that our attack is effective on any supervised learning-based models no matter they utilize frequency information or not. Because the TimesNet model contains a module to convert 1D time series data into structured 2D tensors using the frequency information, and our attack works well on TimesNet. 
So our attack is effective when applied to approaches that utilize frequency information of data.\\n\\nHowever, the paper [3] is a self-supervised contrastive learning method, which is not supervised learning. Even in the image field, BD attacking self-supervised learning is far different from BD attacking supervised learning [4], not to mention BD attacking contrastive learning [5]. Thus, we don't know whether our attack can work on [3], but we would like to kindly remind that the effectiveness of BD attack on [3] is out of our paper's scope.\\n\\n---\\nWe hope our responses could help address your concerns. We believe that this work contributes to this community and has the potential to serve as a catalyst for its development. We would sincerely appreciate it if you could reconsider your rating and we are more than happy to address any further concerns you may have. Thanks again!\\n\\n[3] Zhang X, Zhao Z, Tsiligkaridis T, Zitnik M. Self-supervised contrastive pre-training for time series via time-frequency consistency. Advances in Neural Information Processing Systems. 2022 Dec 6;35:3988-4003.\\n\\n[4] Jia J, Liu Y, Gong N Z. Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning[C]//2022 IEEE Symposium on Security and Privacy (S&P). IEEE, 2022: 2043-2059.\\n\\n[5] Carlini N, Terzis A. Poisoning and Backdooring Contrastive Learning[C]//International Conference on Learning Representations. 2022, oral.\"}", "{\"title\": \"Response to author question:\", \"comment\": \"A. Why do you regard the SEED dataset dubious?\", \"answer\": \"Sensorimotor Rhythm (SMR)-based motor imagery BCI is widely recognized as unsuitable for late-stage ALS (CLIS) patients due to cognitive decline and altered brain activity patterns [1,2]. While effective in earlier stages, when numerous non-BCI alternatives exist, alternative BCI approaches like P300-based or fNIRS-based systems are more appropriate for CLIS patients. 
While recent research has questioned certain aspects of [2], this seminal work remains a cornerstone in the BCI field. Therefore, exploring attacks on a paradigm with limited practical application in late-stage ALS (CLIS) is not an appropriate submission for a top-tier conference such as ICLR 2025.\n\n1. Foerster BR, Welsh RC, Feldman EL. 25 years of neuroimaging in amyotrophic lateral sclerosis. Nat Rev Neurol. 2013 Sep;9(9):513-24. doi: 10.1038/nrneurol.2013.153. Epub 2013 Aug 6. PMID: 23917850; PMCID: PMC4182931.\n\n2. K\u00fcbler A, Birbaumer N. Brain\u2013computer interfaces and communication in paralysis: Extinction of goal directed thinking in completely paralysed patients?. Clinical Neurophysiology. 2008 Nov 1;119(11):2658-66.\"}", "{\"title\": \"Response to the reviewer dg1E (1/2)\", \"comment\": \"Thanks for your time and effort in reviewing our work! Your questions about the motivation of our attack's design are insightful, which also provide an opportunity to summarize our contributions. Below we have addressed your questions and concerns point-by-point.\n\n### Weaknesses\n> The literature review is notably limited ... A more thorough engagement with relevant studies would enhance the study's contributions and contextual relevance.\n\nThanks for providing the additional references; we have added them in the revised paper, so please kindly refer to the new version of our paper. However, we would like to kindly clarify that papers [1,2] are about adversarial attacks on EEG BCI models, which are **not the same as backdoor attacks**. 
In the revised paper, we have clarified the difference.\n\nAlthough backdoor attack and adversarial attack are both about the vulnerability of deep models, **backdoor attack is different from adversarial attack** in two ways: **(1) Attacking phase**: while adversarial perturbation attacks the model in the inference phase, backdoor attack injects a backdoor in the training phase; **(2) Attacking objective**: while adversarial attack aims to make deep models misclassify (the attacker doesn't care which class the model misclassifies to), backdoor attack aims to make deep models misclassify samples with particular triggers to a target class (the attacker clearly knows which target classes the model will misclassify to, and thus can manipulate the model's output by injecting different triggers).\n\nMoreover, the adversarial perturbation can also be used as a trigger for a backdoor attack, which has been researched in [3]. **We have compared [3] (the baseline AdverMT)** under the multi-trigger and multi-target settings in our paper. Please kindly refer to Table 1; it can be observed that our attack outperforms the adversarial-based backdoor attack. As the adversarial perturbation is designed for single-target attacks, it fails to attack multi-target classes. \n\nIn conclusion, we would like to kindly argue that **overlooking two less relevant papers [1,2] does not diminish the impact and originality of our research**. We sincerely hope you can reconsider your score.\n\n[1] X Zhang, et al. \"On the vulnerability of CNN classifiers in EEG-based BCIs\", IEEE TNSRE, 27(5):814\u2013825, 2019.\n\n[2] Z Liu, et al. \"Universal adversarial perturbations for CNN classifiers in EEG-based BCIs\", 2021.\n\n[3] L. Meng, et al. 
\"Adversarial filtering based evasion and backdoor attacks to EEG-based brain-computer interfaces\", Information Fusion, 2024.\"}", "{\"title\": \"Response to the reviewer kEdv (3/4)\", \"comment\": \"> How does STRIP compare to other defense mechanisms in terms of robustness against backdoor attacks like Professor X? What are its relative strengths and weaknesses?\n\nThanks for your interest in our work; you ask such an insightful question. We evaluated several backdoor defense methods in our paper, including Neural Cleanse [7], STRIP [6], Spectral Signature [8], and Fine-Pruning [9].\", \"strip_has_several_strengths\": \"**1) Insensitive to trigger size**: Neural Cleanse tries to detect the backdoor by reconstructing the trigger. However, Neural Cleanse is sensitive to the trigger size, while STRIP is effective no matter whether the trigger is big or small. STRIP can detect triggers that are obvious to models in the original space (like the temporal domain for time series data, and the spatial domain for image data).\n\n**2) Plug and play**: Neural Cleanse needs full control of the backdoor model, as it inverts the trigger via an adversarial loss. Spectral Signature needs the output of the last hidden layer, then performs singular value decomposition on the covariance matrix of these outputs. Fine-Pruning needs to detect the low-activated neurons and then prune them. STRIP, by contrast, is plug and play and compatible with any model. 
We only need the inputs and outputs of the backdoor models (treated as a black box, as we don't need any intermediate outputs), then calculate the entropy of the outputs.\n\n**3) Backdoor model architecture-agnostic**: STRIP only needs the inputs and outputs of the backdoor model, so it is an architecture-agnostic method and generalizes to many real-world application scenarios.\n\nBut as we discussed before, STRIP has some weaknesses:\n\n**1)** STRIP can only detect backdoors whose trigger is input-agnostic; it fails when the trigger is input-specific, while Neural Cleanse and Spectral Signature can guard against some input-specific backdoor triggers.\n\n**2)** STRIP can only detect backdoors whose trigger is strong and still effective under input perturbation. It fails when the triggers disappear under input perturbation, while other backdoor defense methods don't have this limitation.\n\n**3)** STRIP can only detect backdoors whose trigger causes models to predict with low entropy. If the model's outputs are always of high entropy, STRIP fails, while Fine-Pruning doesn't have this limitation.\n\n> What specific mechanisms within the model lead to the observed drop in attack success rate (ASR) when Fine-Pruning is applied? Can these mechanisms be quantified?\n\nWe are sorry that we don't have enough knowledge to perfectly answer this question. The interpretation of EEG models is beyond our professional knowledge scope. But we try our best to share our understanding with you.\n\nAs far as we know, Fine-Pruning [9] removes neurons in the model that are in a dormant state (i.e., have low activation values) when predicting clean samples, thereby disrupting backdoor behavior. So the drop in ASR is caused by pruning the neurons that are important for predicting poisoned data (these neurons have high activation values when processing inputs with triggers). 
But we don't know whether these mechanisms can be quantified or not.\n\n> How are low-activated neurons determined, and could this method inadvertently remove important features that are crucial for classification?\n\nFine-Pruning [9] assumes that the defender has a validation dataset $D_{valid}$ in which all data are clean. The defender feeds these clean data into the backdoor models, and records the average activation of each neuron. Afterwards, the defender iteratively prunes neurons from the DNN in increasing order of average activations. Thus, the low-activated neurons are those whose average activation is low when fed clean data.\n\nYes, Fine-Pruning might inadvertently remove important neurons that are crucial for classification. Because the average activation is obtained from the small subset $D_{valid}$, the low-activated neurons determined by $D_{valid}$ may be high-activated neurons when feeding another clean validation dataset $D_{valid}\u2019$. That is, the important neurons for classifying a clean sample $x \\in D_{valid}\u2019$ may be low-activated neurons for all samples in $D_{valid}$, resulting in the pruning of these important neurons.\"}", "{\"title\": \"Reminder for Reviewer kEdv\", \"comment\": \"Dear Reviewer kEdv:\n\nThis is a gentle reminder that the discussion period ends in about 30 hours. We are pleased to report that after engaging in the discussion, other reviewers have raised their scores: Reviewer JuBu raised 1 point and Reviewer dg1E raised 2 points. We responded to your original review on 11/19 and wanted to check in to see if you have any further questions or comments. If you find the proposed revisions and the discussion here helpful in clarifying the paper and/or increasing its value, we kindly request that you comment to that effect and consider raising your score before the deadline. Please let us know if you have any final comments, as we aim to address any of your concerns by tomorrow, 12/2. 
Again, the final deadline for your response is in 30 hours. Thank you for your time and thoughtful consideration!\\n\\nBest, \\nAuthors\"}", "{\"summary\": \"This paper presents Professor X, a novel backdoor attack method specifically designed for EEG-based brain-computer interfaces (BCIs). The proposed approach employs a three-stage clean-label poisoning strategy that includes trigger selection, reinforcement learning to identify optimal injection techniques, and the generation of poisoned data in the frequency domain.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The introduction of a three-stage clean-label poisoning strategy represents an innovative approach to backdoor attacks in the context of EEG-based BCIs.\\n\\nThe use of reinforcement learning to optimize injection techniques enhances the potential effectiveness of the attack and contributes to the growing body of research on adversarial techniques in neurotechnology.\", \"weaknesses\": \"The literature review is notably limited, overlooking key studies such as Liu et al. (2021) and Zhang & Wu (2019), which highlight the vulnerabilities of EEG-based BCIs to adversarial attacks on signal integrity.\\n\\nThis oversight significantly diminishes the impact and originality of the research, as the proposed method lacks validation against established vulnerabilities within existing literature.\\n\\nA more thorough engagement with relevant studies would enhance the study's contributions and contextual relevance.\", \"reference\": \"Liu, Z., Meng, L., Zhang, X., Fang, W., & Wu, D. (2021). Universal adversarial perturbations for CNN classifiers in EEG-based BCIs. Journal of Neural Engineering, 18(4), 0460a4.\\nZhang, X., & Wu, D. (2019). On the vulnerability of CNN classifiers in EEG-based BCIs. 
IEEE transactions on neural systems and rehabilitation engineering, 27(5), 814-825.\", \"questions\": \"What motivated the specific design choices made in the three-stage clean-label poisoning strategy?\\n\\nHow does the proposed method compare in effectiveness to existing backdoor attack methods in the literature?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the reviewer uEuL (2/2)\", \"comment\": \"### Questions\\n> What potential research directions could be pursued to enhance defenses against backdoor attacks such as Professor X, especially regarding Fine-Pruning techniques?\\n\\nThanks for your interest of defensive research against Professor X and we are also concerned about its potential dangers. We have evaluated five defensive methods to try to guard Professor X, but all of them failed. The Fine-Pruning method seems to success at the pruning ratio over 0.7, but it damages the clean task's performance.\\n\\nFine-Pruning [4] assumes that the defender has a validation dataset $D_{valid}$ in which all data are clean. The defender feeds these clean data into the backdoor models, and records the average activation of each neuron. Afterwards, the defender iteratively prunes neurons from the DNN in increasing order of average activations. In practical applications of Fine-Pruning, the defender will set a threshold for clean accuracy, then prune the neurons until the clean accuracy drops to the threshold. So when the threshold is set relatively high, the backdoor may not be erased.\\n\\nAs we discussed above, Fine-Pruning requires that the defender has a validation dataset $D_{valid}$. 
The performance of Fine-Pruning relies heavily on the quality of the validation dataset, since the low-activated neurons are determined by the validation dataset.\\n\\nSo in the future, building a large, diverse, high-quality, and absolutely clean validation dataset is the key to improving Fine-Pruning's performance. The most important part is the diversity, which not only means the diversity of EEG tasks, but also means the diversity of EEG formats. Thus, improving the defenses against backdoor attacks is not an easy task and requires the joint efforts of the medical and academic communities.\\n\\n---\\nWe hope this response could help address your concerns. We believe that this work contributes to this community and has the potential to serve as a catalyst for its development. We are more than happy to address any further concerns you may have. Thanks again!\\n\\n[4] Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 273\\u2013294. Springer, 2018.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nAs we approach the end of the discussion period, I would like to encourage you again to review the rebuttal and engage with the authors if you have any additional questions or need further clarification. If you have no further questions, please acknowledge that you have received and reviewed the rebuttal.\\n\\nBest,\\\\\\nAC\"}", "{\"title\": \"Kindly request for post-rebuttal comments\", \"comment\": \"Dear Reviewers:\\n\\nThank you again for your wisdom and valuable comments. We have provided additional results or complete explanations for all the questions. Since the rebuttal process is approaching its end, we would be glad to hear from you whether our rebuttal has addressed your concerns. 
Feel free to comment on our rebuttal if you have further questions or considerations.\"}", "{\"title\": \"Response to the reviewer RejD\", \"comment\": \"We thank the reviewer for your time and effort in reviewing our work. Below we would like to exchange our ideas with you point-by-point.\\n\\n> It remains inaccurate to assert that \\\"While electroencephalogram (EEG) based brain-computer interfaces (BCIs) have been extensively employed in medical diagnosis, healthcare, and device control,...\\\"\\n\\nWe would like to say that the claim that EEG BCIs have been widely employed is not wrong, because employment in academia is also employment. As you agree, **BCI research is experiencing significant growth due to advancements in machine learning**. According to the Web of Science, the number of papers on EEG BCI has increased from 6,185 in 2010 to 16,470 in 2023.\\n\\nOur attack can be used for **falsifying EEG datasets**, which is particularly severe in academia, as one can **draw any neuroscience conclusion from a fake EEG dataset**. The negative impact of academic fraud goes without saying, especially in the field of life sciences.\\n\\n> The submission relies on historical datasets such as BCIC-IV-2a. Moreover, the methodology seems to rely on a dubious SEED database.\\n\\nWe adopt three highly cited public datasets to ensure our experiments are credible. These datasets cover three widely studied EEG tasks: **1)** epilepsy detection (CHB-MIT), **2)** emotion recognition (SEED), **3)** and motor imagery (BCIC-IV-2a).\\n\\nSince you questioned the BCIC-IV-2a and SEED datasets, we would like to know your opinion about the epilepsy detection CHB-MIT dataset. What do you think about the CHB-MIT dataset? Our attack is also effective on the CHB-MIT dataset; is this result unrealistic too? \\n\\nWe sincerely hope that you could explain why you consider the SEED dataset **\\\"dubious\\\"**, and what factors do you refer to when you say **\\\"dubious\\\"**? 
According to the SEED Dataset's Website [1]: *As of December 2023, the cumulative number of applications and research institutions using SEED have reached more than 5800 and 1000, respectively.* Moreover, the SEED paper has been cited by more than 1,900 papers. Are these 1,900 papers dubious too?\\n\\n> It is critical to note that motor imagery may not be effective for the target demographic of individuals with paralysis due to significant neural degeneration.\\n\\nWe would like to say that this fact doesn't imply the uselessness of motor imagery EEG BCI. According to your logic, we can say that **It is critical to note that GLASSES may not be effective for the target demographic of individuals with MYOPIA due to significant neural degeneration.** However, for other people with myopia due to other reasons such as long screen time, glasses are very helpful. Hence, the fact you presented has nothing to do with the effectiveness of motor imagery EEG BCI.\\n\\nIn contrast, for the many patients with paralysis due to other reasons like spinal cord injuries, motor imagery BCI is very helpful. EEG BCI has already helped a paralytic walk again [2], so the wider application of EEG BCI is predictable.\\n\\nTo be honest, we did not understand the rationale of your logic: why do you present this fact, and what do you want to illustrate by it? We sincerely hope you could explain it.\\n\\n### Questions\\n\\n> Why did the authors construct an entirely unrealistic and artificial scenario?\\n\\nNo, we didn't. Please kindly refer to the general response for more details.\\n\\n> Where did the authors encounter such hyperbolic or enthusiastic claims regarding the purported applications of BCI in healthcare and medical diagnostics?\\n\\nWe, as EEG BCI researchers, are conducting cutting-edge EEG BCI research with multiple top hospitals to try to improve the quality of life of epilepsy patients. 
Specifically, we are trying to provide early warning for epileptic seizures with the help of EEG BCI. We are quite sure that the research of epilepsy detection is meaningful.\\n\\nAs shown by the impressive results presented in [2], where a paralytic walked again with the help of EEG BCI, motor imagery EEG BCI has become the hope of many paralytics.\\n\\n> Currently, only a limited number of conditionally approved, mostly invasive devices have been tested on a small cohort of subjects within closed clinical studies.\\n\\n1. We would like to argue that we don't have to wait until the horse has bolted before closing the stable door.\\n\\n2. Although only a limited number of subjects are currently receiving BCI treatment, we would like to say we still need to consider the security risk of using EEG BCI on them.\\n\\n3. Please note that our attack can also be misused in academic research for academic fraud.\\n\\n---\\n\\nWe hope this response could help address your concerns. We believe that this work contributes to this community and has the potential to serve as a catalyst for its development. We are more than happy to further exchange ideas with you.\\n\\n[1] https://bcmi.sjtu.edu.cn/home/seed\\n\\n[2] Lorach H, Galvez A, Spagnolo V, et al. Walking naturally after spinal cord injury using a brain\\u2013spine interface[J]. Nature, 2023, 618(7963): 126-133.\"}", "{\"comment\": \"Thank you to the authors for their clear and detailed responses to my and the other reviewers' questions. This has improved my view of the paper, and I have raised my score to 6.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful and insightful suggestions! 
We believe we have comprehensively addressed your questions regarding the relevant literature, Professor X's specific design, and its effectiveness compared to other baselines including adversarial-based attacks.\\n\\nWe would like to emphasize that our method is **the first to achieve both highly stealthy and robust backdoor attacks on EEG BCI**. Through a data poisoning approach, our method does not even require controlling the training stage of target models.\\n\\nWe are wondering whether you have any additional questions or comments regarding our response to your review comments. We will do our best to address them.\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Thank you for your thoughtful consideration!\"}", "{\"title\": \"Replying to the Reviewer RejD\", \"comment\": \"We greatly appreciate your timely response and valuable comments on the datasets! We now have a better understanding of what your real concern is, and would like to kindly summarize it as: *The paradigm and applications of EEG BCI are not mature enough at the current stage, thus there is no need to focus on the safety issue*.\\n\\n### Major point\\nWe would like to take this opportunity to emphasize the motivation of our paper, which will to some degree answer your concern:\\n1. Although BCI applications are not mature enough at the current stage, **it is still necessary to pay attention to backdoors like Professor X for future BCI applications**. **We don't have to wait until the horse has bolted before closing the stable door.** It is worth noting that EEG BCI is gradually moving towards practical application [1]. We are confident that BCI will have better practical applications in the future.\\n\\n2. Our work provides a fundamental security assessment perspective for the BCI society, promoting the real practical application of BCI. 
After all, the primary consideration before real application is **security issues**.\\n\\n3. EEG BCI is widely used in academia, and our attack can be used for **falsifying EEG datasets**. We provided an example in the **General Response** to illustrate the harm of this malicious use, which we absolutely do not encourage. We hope that our research can serve as a catalyst in the BCI community, raising more attention to BCI safety and promoting its practical application.\\n\\n### Minor point\\nBy the way, as you said the P300-based method is more effective for CLIS patients, we also validated our attack on a P300 dataset [2] and found our attack is still effective. But after all, CLIS patients do not represent everyone who needs a motor imagery BCI; many other patients, like those suffering from spinal cord injuries, need motor imagery BCI. We admit motor imagery BCI is not effective for everybody, but as long as one patient benefits from it [1], it is useful.\\n\\n| | Clean | ASR | 0 | 1 |\\n|:-:|:-:|:-:|:-:|:-:|\\n|EEGNet| 0.818 | 0.993 | 1.000 | 0.986 |\\n|DeepCNN| 0.807 | 0.940 | 0.997 | 0.883 |\\n|LSTM| 0.779 | 0.855 | 0.995 | 0.714 |\\n\\n---\\nIn conclusion, our work is not aiming to attack BCI that has already been applied in the real world (*while Professor X has the ability*), but more about **alerting to a severe potential hazard** of backdoors in EEG BCI and **calling for defensive research** (*that's why we added a new section in the appendix to discuss how to defend against backdoors like Professor X in EEG BCI*). \\n\\nThe datasets used in our study are for validating our attack's efficacy across various EEG tasks and formats. There might be some drawbacks in these datasets, but after all, they are not the focus of our work. Our work promotes the future practical application of EEG BCI (in terms of safety) and the construction of a more honest academic community (in terms of academic integrity).\\n\\nThanks for your valuable time in discussing with us! 
We are more than happy to address any further concerns you may have.\\n\\n[1] Lorach H, Galvez A, Spagnolo V, et al. Walking naturally after spinal cord injury using a brain\\u2013spine interface[J]. Nature, 2023, 618(7963): 126-133. \\n[2] Rodrigo Ramele. P300-Dataset. https://www.kaggle.com/datasets/rramele/p300samplingdataset\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Dear Reviewers,\\n\\nI encourage you to review the rebuttal and reach out to the authors with any additional questions or requests for clarification.\\n\\nBest,\\\\\\nAC\"}", "{\"summary\": \"This paper presents \\\"Professor X,\\\" a new EEG backdoor attack aimed at influencing the outputs of electroencephalogram (EEG)-based brain-computer interfaces (BCIs). While EEG BCIs are widely utilized in medical and device control settings, their security has often been neglected. Professor X improves upon existing EEG attack methods, which typically target single classes and either require interaction with the BCI's training phase or lack stealth. This innovative approach strategically selects specific EEG electrodes and frequencies for injection based on various EEG tasks and formats. By employing a reinforcement learning-based reward function, the method enhances both robustness and stealth. Experimental results demonstrate Professor X's effectiveness, resilience, and generalizability, underscoring vulnerabilities in EEG BCIs and calling for further defensive research in the field. Additionally, Professor X can help protect intellectual property within EEG datasets and BCI models by embedding a concealed watermark. The attack employs a three-stage clean label poisoning strategy: selecting triggers for each class, optimizing injection strategies for electrodes and frequencies, and generating poisoned samples via spectral interpolation. 
Testing on diverse EEG task datasets validates the method\\u2019s effectiveness and its capacity to circumvent existing defenses.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"It presents an innovative approach for manipulating EEG BCI outputs, addressing a gap in the existing literature that predominantly emphasizes single-target attacks. The design leverages reinforcement learning to improve the attack's robustness and stealth, enabling it to evade detection more successfully than earlier methods. Professor X takes into account the specific EEG electrodes and frequency ranges associated with different tasks and formats, making it versatile for various EEG applications. Experimental results show that Professor X is effective across multiple EEG tasks, highlighting its broad applicability beyond just one context.\", \"weaknesses\": \"The method might encounter difficulties when scaling to larger or more intricate datasets, which could restrict its effectiveness in real-world applications with varied user populations.\", \"questions\": \"What potential research directions could be pursued to enhance defenses against backdoor attacks such as Professor X, especially regarding Fine-Pruning techniques?\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns', 'Yes, Privacy, security and safety']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces \\\"Professor X,\\\" a novel EEG backdoor attack designed to manipulate the outputs of electroencephalogram (EEG)-based brain-computer interfaces (BCIs). While EEG BCIs are commonly used for medical and device control applications, their safety has often been overlooked. Professor X addresses the limitations of existing EEG attacks, which typically focus on single-target classes and require interaction with the training stage of the BCI or lack stealthiness. 
This method uniquely considers the specific EEG electrodes and frequencies to be injected based on different EEG tasks and formats. Utilizing a reinforcement learning-based reward function enhances both robustness and stealthiness. Experimental results demonstrate Professor X's effectiveness, robustness, and generalizability, highlighting the potential vulnerabilities in EEG BCIs and urging the community to conduct defensive studies. Additionally, Professor X offers applications in protecting intellectual property within EEG datasets and BCI models by providing a concealed watermark. The attack exploits a three-stage clean label poisoning strategy: selecting triggers for each class, optimizing electrode and frequency injection strategies, and generating poisoned samples through spectral interpolation. Tests on various EEG task datasets confirm the method's efficacy and its ability to bypass existing defenses.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"It introduces a novel method for manipulating EEG BCI outputs, filling a gap in the existing literature that largely focuses on single-target attacks.\\nThe design incorporates reinforcement learning to enhance the attack's robustness and stealthiness, allowing it to evade detection more effectively than previous methods.\\nProfessor X considers the specific EEG electrodes and frequency ranges relevant to different tasks and formats, making it adaptable to various EEG applications.\\nExperimental results demonstrate that Professor X is effective across multiple EEG tasks, indicating a broad applicability beyond a single context.\", \"weaknesses\": \"The potential for malicious use raises significant ethical issues, as manipulating EEG outputs could lead to harmful consequences for users and their applications.\\n\\nThe method involves a sophisticated three-stage clean label poisoning attack, which may be complex to implement in practice, especially for those lacking 
expertise in reinforcement learning and EEG signal processing.\\n\\nThe design's focus on particular EEG tasks might result in overfitting, reducing its effectiveness in more generalized scenarios or with novel tasks.\\n\\nThe approach may face challenges in scaling up for larger or more complex datasets, potentially limiting its effectiveness in real-world applications involving diverse user populations.\", \"questions\": \"What ethical frameworks are in place to govern the use of techniques like Professor X, and how can researchers ensure that such methods are not misused?\\n\\nHow effective is STRIP in detecting backdoor inputs under various levels of perturbation? Are there specific types of perturbations that significantly impact its performance?\\n\\nHow does STRIP compare to other defense mechanisms in terms of robustness against backdoor attacks like Professor X? What are its relative strengths and weaknesses?\\n\\n What specific mechanisms within the model lead to the observed drop in attack success rate (ASR) when Fine-Pruning is applied? Can these mechanisms be quantified?\\n\\nHow are low-activated neurons determined, and could this method inadvertently remove important features that are crucial for classification?\\n\\n How effective is Fine-Pruning at different pruning ratios beyond 0.7? 
Are there specific thresholds where the attack's effectiveness is significantly impacted?\\n\\nWhat future research avenues could be explored to improve defenses against backdoor attacks like Professor X, particularly in the context of Fine-Pruning?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"The potential for malicious use raises significant ethical issues, as manipulating EEG outputs could lead to harmful consequences for users and their applications.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response by Authors\", \"comment\": \"We thank all the reviewers for your proficient and valuable comments and suggestions. We are cheerful to find that most of the reviewers have reached the consensus that our methods are novel and our idea of using reinforcement learning to optimize injection techniques is interesting (JuBu, uEuL, kEdv, dg1E). Moreover, we're also glad to see that our extensive experiments for showing the robustness (JuBu, uEuL, kEdv), the innovative three-stage attack (dg1E), the first multi-target attck on EEG BCI, effectiveness across multiple EEG tasks (uEuL, kEdv) are well recognized, and the research question is thought to be clear and easy to understand (JuBu).\\n\\n**Here, we sincerely invite all the reviewers to read this general response before diving into the detailed responses to your individual concerns,** to help readers to understand **what we are contributing** and **the value of our work (or the severe potential hazard)**.\\n\\n> What we are contributing.\\n\\nBefore the invention of Professor X, people may think EEG BCI is safe and cannot be injected with any backdoor, or the trigger in a backdoor poisoned sample is easily detected. 
However, our work alerts the EEG community that **EEG BCI is absolutely not safe**, and **EEG BCI can actually be injected with an invisible and robust backdoor attack**.\\n\\nWe conducted comprehensive experiments and validated the effectiveness of Professor X on various EEG tasks/formats. Moreover, we employed multiple backdoor defensive approaches to defend against Professor X, but all of them failed. As a result, we didn't find an effective way to prevent Professor X.\\n\\nOur work has determined that **EEG BCI is not safe**, and **alerts the whole community to the potential hazard** of misusing Professor X.\\n\\n> The value of our work (or the severe potential hazard).\\n\\nSome may worry that the application of EEG BCI is not broad, so studying the safety of EEG BCI lacks realism. However, **we don't have to wait until the horse has bolted before closing the stable door**; that is, we don't need to wait for the widespread application of a technique before considering its security risk.\\n\\nWe admit that in clinical practice, the number of patients using EEG BCI is limited as they are all suffering from severe illnesses like epilepsy [1] (for medical diagnosis) and paralysis [2] (for device control). However, **as long as the EEG BCI is not safe, the result of malicious manipulation is catastrophic for any single patient** -- like misleading the localization of epilepsy lesions, or controlling a paralytic to act dangerously.\\n\\n**As long as there is one person who may be harmed by the security risk of EEG BCI, studying the safety is meaningful.** Not to mention that there are **50 million** epilepsy patients and **15 million** patients suffering from spinal cord injuries worldwide (data from the World Health Organization's website). Although only a small number of them have the conditions to receive BCI treatment, this is still not a small number. 
Since EEG BCI has already helped a paralytic walk again [2], the wider application of EEG BCI is predictable.\\n\\nBesides the real-world security risk, our attack can also be misused for **falsifying EEG datasets**, which is **more severe in academia**. With the help of Professor X, one can essentially adjust the classification accuracy of an EEG dataset to any level (by adjusting the poisoning rate $\\\\rho$). This will be a disaster for the EEG research community, since one can **draw any neuroscience conclusion from a fake EEG dataset** by misusing Professor X.\\n\\nFor example, Jack is an EEG researcher and he wants to know whether we can detect Alzheimer's disease from EEG. So he puts in great effort to build a large dataset recording the EEG signals of 100 older adults (half are Alzheimer's patients). But he runs a classification model on this dataset and finds that the accuracy is very low, let's say just 52% (random level is 50%). Jack doesn't want to see his efforts go to waste, and **unethically uses Professor X** to falsify his dataset, increasing the accuracy to 80%. Finally, Jack claims he finds that he can detect Alzheimer's disease from EEG signals. However, this neuroscience conclusion is a completely fake finding.\\n\\n**Every coin has two sides**: our attack can be adopted to protect the intellectual property of EEG datasets and BCI models. But most importantly, we want to **call for defensive research**, as we have not found an effective way to detect and defend against Professor X.\\n\\n[1] Shoeb A, Guttag J. Application of machine learning to epileptic seizure detection[C]//Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010: 975-982.\\n\\n[2] Lorach H, Galvez A, Spagnolo V, et al. Walking naturally after spinal cord injury using a brain\\u2013spine interface[J]. 
Nature, 2023, 618(7963): 126-133.\"}", "{\"title\": \"Replying to the Reviewer RejD\", \"comment\": \"Dear Reviewer RejD:\\n\\nThanks for your timely and detailed response! We are really glad that we and you have already reached some consensus. Below we try to address your concerns:\\n\\n> Unrealistic problem.\\n\\n1. We definitely agree that EEG BCI is not mature at the current stage, including the paradigm and applications. But its potential practical application cannot be denied (we believe you agree with this as you are a member of the BCI community too). Before real application of EEG BCI, we must consider its security issues. Our work promotes developing safer BCI for future real application, which is definitely not unrealistic.\\n\\n> it's crucial to recognize that future approved medical devices are subject to stringent regulations and security measures, limiting the potential for malicious exploitation.\\n\\n2. We believe we and you have reached the consensus that **the primary consideration before real application is security issues**. Unfortunately, the security issues are definitely not resolved by simple and vague \\\"*stringent regulations and security measures*\\\". First of all, how are these regulations to be drafted? **If the members of the BCI community do not realize that BCI can be injected with a backdoor, there is no one to draft the regulations that regulate backdoor attacks in BCI**. Our work provides an important insight about the severe backdoor attack threats to EEG BCI. However, we believe that there must be other security issues in BCI that have never been noticed. It would be dangerous if someone maliciously used some techniques unknown to the public. Our work promotes the drafting of regulations for backdoor attacks in BCI.\\n\\n> The medical device field, particularly regarding pacemakers, DBS, ... 
This maturity has led to robust security protocols and regulatory oversight, such as those enforced by the FDA, EMA, etc.\\n\\n3. The medical devices you cited, including pacemakers, DBS, cochlear implants, etc., do not use AI or deep learning (DL). So they definitely have not been affected by backdoor attacks targeting AI models. These devices are developed based on traditional electronic methods and the industry has adequate knowledge about these devices along with the potential hazards they may face; that's why robust security protocols and regulatory oversight can be made by the FDA, EMA, etc. \\nWe hope to reach a consensus with you that AI or DL has shown some superiority in BCI, due to the complexity and human-unreadability of brain signals. In recent years, there has been an increasing number of articles developing EEG BCI with AI or DL methods. **However, the BCI community does not have sufficient knowledge about AI or DL models along with the potential hazards they may face**. Our work aims to study and point out the potential hazard, promoting the design of safer BCI for future real application.\\n\\n\\n> User model customization for each session, necessitated by electrode placement variations, remains a significant challenge in EEG-based BCI. This practical issue warrants further research attention rather than theoretical explorations of unrealistic backdoor attacks... We urge you to consider applying your research efforts to fields such as image processing or speech recognition...\\n\\n4. Actually, we are BCI researchers, not safety researchers in the image processing or speech recognition fields. We definitely agree that the studies of user model customization, electrode placement, etc., are essential in BCI. However, studying security issues is equally important, and these two are not in conflict. 
**Just like the traditional mature medical devices, some researchers focus on improving their performance, while others must focus on improving their safety.** \\nWe noticed the security issues of EEG BCI; although we know the whole BCI society is focusing on improving decoding accuracy and robustness, we cannot ignore the security issues and pretend that we didn't notice. **Our work first proves that EEG BCI can be injected with an invisible and robust backdoor, and unfortunately the backdoor cannot be detected by any existing methods**. Our work provides valuable insight about the security issues of EEG BCI and promotes the drafting of regulations for limiting backdoor attacks in BCI.\\n\\n> Potential hazard in academia\\n\\n5. Let's set aside the practical application of EEG BCI and talk about academia. As we all know, EEG BCI is not only a medical device, but also a good neuroimaging technique because of its portability and low cost, which is widely used in the neuroscience society. Our attack can also be misused for **falsifying EEG datasets and drawing completely wrong neuroscience conclusions**, as we discussed in the General Response. Our work promotes the construction of a more honest neuroscience society as we pointed out this cheating method. \\n**We think this cheating is probably the most severe hazard that can work now**.\\n\\nBest, \\nAuthors\"}", "{\"title\": \"Response to the reviewer JuBu (1/2)\", \"comment\": \"Thanks for your valuable comments, which help us to greatly improve our paper's quality! The additional experiments you requested have enriched our experimental results and better demonstrated the generalization and robustness of our method! We'd like to express our appreciation that our novel reinforcement learning and comprehensive experiments are well recognized. 
Below we carefully address your questions and concerns point-by-point.\\n\\n### Weaknesses\\n> However, it would be great if the authors considered and designed some baselines based on the existing frequency-based BD attack (FreBA in the following text), if applicable.\\n\\nThanks for your careful reading! Actually, **we have compared our model with the vanilla frequency-based BD attack**. First we would like to illustrate the difference between our attack and previous FreBA; then your concern about the comparison with FreBA can be naturally resolved.\\n\\nCompared to the previous frequency-based backdoor attack, our attack has three differences: **(1) Multi-target vs Single-target:** our attack can fully control the output of the classifier model, but previous FreBA can only attack one target class. **(2) Stealthiness of Time-Series Modality:** previous FreBA is designed for the image modality and loses stealthiness when directly applied to the time-series modality; our attack introduces a novel HF loss to address this, as shown in Fig 8. **(3) Reinforcement Learning:** we introduce RL to optimize the injecting strategy, which greatly improves the performance, stealthiness and robustness. Previous FreBA injects the trigger at a constant location.\\n\\nIn Table 3, we conducted an ablation study to verify the effectiveness of RL; please kindly refer to the revised paper. The variant **Random is basically a vanilla frequency-based BD attack.** The results showed our attack is better than the vanilla FreBA.\\n\\n> It would be great if the authors designed an experiment to validate the stealthiness of the method. It may be similar to a previous study [1], which used anomaly detection methods.\\n\\nThanks for your valuable advice! We have added this experiment in our revised paper, in Table 6. The ROC-AUC is around 0.5 and the F1-score is either around 0.5 or near 0 across all datasets, indicating that the detection results are nearly random guesses. 
These results strongly demonstrate the stealthiness of Professor X.\\n\\n||ER F1-score|ER AUC|MI F1-score|MI AUC|ED F1-score|ED AUC\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:\\n|GDN|0.50|0.51|0.50|0.51|0.50|0.50\\n|USAD|0.00|0.51|0.00|0.51|0.00|0.50\\n\\n> The author only considers three models for the classifiers: EEGNet, DeepCNN, and LSTM. However, it would be great if the author considered other new models, like TimesNet [1] and other new transformer-based models.\\n\\nThanks for your constructive suggestions. We have added new experiment results in our revised paper, in Table 2. It can be seen that our attack works well (the ASRs are almost all over 95%, except conformer on the ER task) on TimesNet [1] and a new Transformer-based model, EEG-conformer [2] (cited over 200 times), further proving the generalizability of our attack. We have detailed the individual ASRs for each category in Table 1 of the main text; please kindly refer to the revised paper.\\n\\nIt takes a lot of time to conduct the experiment with TimesNet, as this model is not designed for multi-channel data like EEG BCI and is thus inefficient in processing EEG signals. We use TimesNet as a single-channel feature extractor and concatenate the features of all EEG channels. Then we feed the concatenated feature into a linear layer for EEG classification. Our code for TimesNet and EEG-conformer will be made public too.\\n\\n||ER Clean|ER Attack|MI Clean|MI Attack|ED Clean|ED Attack\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:\\n|TimesNet|0.485|0.956|0.276|0.997|0.373|0.986\\n|EEG-conformer|0.475|0.894|0.296|0.996|0.419|0.944\\n\\nBy the way, we would like to discuss the reasons why we chose EEGNet, DeepCNN and LSTM. These three models are the most widely used models in the EEG community. And for most real-world applications of EEG BCI, only shallow and simple models are required because deep and sophisticated models may overfit when the data is limited. 
Thus, considering the real-world situation, we chose these three simple models, but this does not mean that our attack is ineffective on newer, sophisticated models.\n\nWe would like to take this opportunity to emphasize that our method is an EEG task-agnostic, model architecture-agnostic, and format-agnostic attack, which can generalize to many scenarios.\n\n---\n[1] Wu H, Hu T, Liu Y, Zhou H, Wang J, Long M. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In The Eleventh International Conference on Learning Representations.\n\n[2] Yonghao Song, Qingqing Zheng, Bingchuan Liu, and Xiaorong Gao. EEG conformer: Convolutional transformer for EEG decoding and visualization. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31:710\u2013719, 2022.\"}", "{\"title\": \"Thanks for the improved rating\", \"comment\": \"Dear Reviewer dg1E:\n\nThank you for the improved score! We are deeply grateful for your review, which has greatly assisted us in supplementing and perfecting our paper.\n\nBest, \nAuthors\"}" ] }
5sU32OCxgZ
TTVD: Towards a Geometric Framework for Test-Time Adaptation Based on Voronoi Diagram
[ "Mingxi Lei", "Chunwei Ma", "Meng Ding", "Yufan Zhou", "Ziyun Huang", "Jinhui Xu" ]
Deep learning models often struggle with generalization when deployed on real-world data, due to the common distributional shift from the training data. Test-time adaptation (TTA) is an emerging scheme used at inference time to address this issue. In TTA, models are adapted online while making predictions on test data. Neighbor-based approaches have gained attention recently, where prototype embeddings provide location information to alleviate the feature shift between training and testing data. However, due to their inherent limitation of simplicity, they often struggle to learn useful patterns and encounter performance degradation. To confront this challenge, we study the TTA problem from a geometric point of view. We first reveal that the underlying structure of neighbor-based methods aligns with the Voronoi Diagram, a classical computational geometry model for space partitioning. Building on this observation, we propose Test-Time adjustment by Voronoi Diagram guidance (TTVD), a novel framework that leverages the benefits of this geometric property. Specifically, we explore two key structures: 1) Cluster-induced Voronoi Diagram (CIVD): This integrates the joint contribution of self-supervision and entropy-based methods to provide richer information. 2) Power Diagram (PD): A generalized version of the Voronoi Diagram that refines partitions by assigning weights to each Voronoi cell. Our experiments under rigid, peer-reviewed settings on CIFAR-10-C, CIFAR-100-C, ImageNet-C, and ImageNet-R show that TTVD achieves remarkable improvements compared to state-of-the-art methods. Moreover, extensive experimental results also explore the effects of batch size and class imbalance, two scenarios commonly encountered in real-world applications. These analyses further validate the robustness and adaptability of our proposed framework.
[ "Test-time adaptation", "out-of-distribution generalization", "distribution shift", "computational geometry" ]
Accept (Poster)
https://openreview.net/pdf?id=5sU32OCxgZ
https://openreview.net/forum?id=5sU32OCxgZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTyUCcU3aO", "xmsLL89APx", "tusSIJzyyj", "tielvQhSgS", "t7fbIHg9lO", "ou92DDWvnZ", "lfjHLYBNDf", "jdlTqzdDFP", "f142pNq4Dt", "bHVfsV4tIO", "ZQd0nHZomP", "TRCpuC12hn", "Rep79WWNlS", "Mxrefqjo1v", "K7MN76zOg4", "D5fOqnJY9N", "D3oLtlI029", "Bm0ytxvfcp", "BIeUnk197e", "AFL74P7PwZ", "9T286o4uof", "8mnrs7FkqF" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_review" ], "note_created": [ 1732509417172, 1733258202569, 1732509547678, 1733163513028, 1732164532417, 1732229905564, 1730709514492, 1732165333101, 1732629598134, 1732163402796, 1732167210189, 1730361053274, 1730693322175, 1732525438724, 1733258247082, 1732167033401, 1737523628524, 1732509386402, 1732522698142, 1732509437047, 1734552810943, 1730261339691 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_ET99" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_WP62" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_WP62" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_rD6K" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_ET99" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_wGB6" ], [ "ICLR.cc/2025/Conference/Submission4254/Authors" ], [ "ICLR.cc/2025/Conference/Submission4254/Area_Chair_5afa" ], [ "ICLR.cc/2025/Conference/Submission4254/Reviewer_wGB6" ] ], "structured_content_str": [ "{\"comment\": \"Thank you once again for your time and effort providing the initial review of our paper! We have carefully addressed the questions you raised and provided detailed responses above. As the discussion period nears its end, we kindly invite you to share any feedback you might have, and we would be happy to discuss them further.\"}", "{\"comment\": \"We sincerely thank you for raising the score! Your feedback is valuable and constructive in helping us improve our paper. Due to time limit, we feel sorry that we are unable to provide results for the additional experiments on the final day of the discussion period, but we will certainly discuss them in the next version of our paper!\"}", "{\"comment\": \"Thank you once again for your time and effort providing the initial review of our paper! We have carefully addressed the questions you raised and provided detailed responses above. As the discussion period nears its end, we kindly invite you to share any feedback you might have, particularly regarding the geometric perspective of our contributions, and we would be happy to discuss them further.\"}", "{\"title\": \"Thanks for your rebuttal!\", \"comment\": \"Thanks for your rebuttal. I get your point that VD \\u2192 CIVD \\u2192 CIPD represent a progressive relationship rather than a combination. The revision of the paper makes it much more clear (e.g. line 354 - 357). Most of my concerns are addressed. The new diagram also helps understanding, thanks for your efforts! 
**I raised my rating to 6, good luck!**\n\nI am still curious about the synergy between entropy minimization and the proposed CIPD. In my opinion, CIPD mostly changes the \"inference\" phase, which makes me wonder: \n1. If we do not update the pretrained feature extractor, how much performance gain can we get, compared to the full algorithm? This can be tested by setting the learning rate to zero (if you don't have running mean / running variance or use the model.eval mode)\n2. Since the feature extractor is updated, but the Voronoi sites won\u2019t be updated, I find this part counter-intuitive: it seems like either (1) the feature extractor won't be significantly adapted, or (2) the Voronoi sites at the beginning and the end of the testing sequence are not guaranteed to match (or be drawn from the same distribution). \n3. Since entropy is known to have trivial solutions (e.g. Figure 3(a) in [1]), do you observe any model collapse when the learning rate is too high? For example, in the testing sequence, the model performance first increases but then decreases. Is it possible that CIPD can help alleviate the trivial solution problem? \n\n[1] Hao Zhao, Yuejiang Liu, Alexandre Alahi, Tao Lin. On Pitfalls of Test-Time Adaptation. ICML 2023. \n\nI really apologize for asking questions at the end of the discussion phase. And I think some of these questions may be out of the scope of this paper. So you don't have to feel obliged to answer these questions in just one day. But I think these questions can help better understand the synergy between inference and adaptation.\"}", "{\"comment\": \"We appreciate your time and effort in reviewing our manuscript. Here are the clarifications and answers to the concerns.\n\n**Response to Weakness 1:**\nThank you for your advice. We have added the pseudo-code for CIVD and CIPD. However, there seems to be a misunderstanding regarding our proposed method, possibly due to our wording. In fact, VD, CIVD, and CIPD are separate structures. 
We have revised our summary of them at the end of Section 3 for clarity. VD \u2192 CIVD \u2192 CIPD represent a progressive relationship rather than a combination. VD is the simplest point-to-point structure among the three. CIVD builds upon it as a cluster-to-point structure, while CIPD further extends CIVD by incorporating the power distance, as described in Eq. 5. Our experiments demonstrate that their performance follows the order: CIPD > CIVD > VD.\n\nAs defined in Eq. 4 and Eq. 6, only a single hyperparameter, $\\gamma$, is introduced in both CIVD and CIPD. We find that $\\gamma$ does not significantly affect accuracy, and we set $\\gamma=-0.8$ as a heuristic value to appropriately scale the influence of distant sites.\n\n**Response to Weakness 2:**\nYou are right that we do inference using VDs instead of linear layers. Actually, not only SHOT but also many current TTA methods (TENT, NOTE, Conjugate PL, SAR) utilize entropy minimization. The unique properties of CIVD and CIPD contribute to the overall improvement: CIVD introduces multi-site influences, enhancing robustness, while CIPD enables more flexible partitioning by incorporating the power distance function.\n\n**Response to Minor Weakness 3:**\nThank you for your advice. We have revised them in our manuscript.\n\n**Response to Question 1:**\nYes, the Voronoi sites won't be updated. They serve as the adaptation guidance.\n\n**Response to Question 2:**\nTo get the soft prediction, simply replace $d$ with $F$ in Equation 3.\n\n**Response to Question 3:**\n$v$ is set according to Lemma 3.1, where it can be calculated from the classifier layer of the pretrained model.\", \"title\": \"Response to Reviewer ET99\"}", "{\"title\": \"Summary of the rebuttal revisions.\", \"comment\": \"We sincerely thank Reviewers WP62, ET99, rD6K, and wGB6 for their constructive comments. 
Below, we summarize the changes made in the revision, which are highlighted in red text.\n\n1. **Additional Figure Explaining VD, CIVD, and CIPD** (ET99, wGB6):\nWe have revised the summary of VD, CIVD, and CIPD at the end of Section 3 (Methodology) and added Figure 3 to illustrate their differences, clarifying the overall pipeline of our method. The original wording may have caused some misunderstanding of our method. In fact, VD \u2192 CIVD \u2192 CIPD represent a progressive relationship rather than a combination. Our proposed TTVD is constructed progressively, transitioning from standard VD to CIVD and, finally, to CIPD.\n\n2. **Algorithms for CIVD and CIPD** (ET99, rD6K):\nWe have added the pseudo-code for CIVD and CIPD. In fact, CIVD and CIPD are advanced forms of VDs, which can be implemented by simply replacing $d$ with $F$ in Algorithm 1. Please note that we have also included our code in the supplementary materials to help public readers better understand our method.\n\nHere are some common questions that reviewers raised regarding our work.\n\n1. **Novelty** (WP62, wGB6): As Reviewer ET99 commented, our method is the first to apply VD for Test-time Adaptation, even though VDs were originally developed for space partitioning. Our contribution mainly lies in exploring the connection between VDs, neighbor-based methods, and TTA. First, we revealed that nearest neighbor algorithms can be analyzed through standard VD. Then, we found that advanced VDs, such as CIVD and PD, offer benefits for TTA because of their unique properties (enhanced robustness from multi-site influence, more flexible partitions). We would like to note that previous papers on CIVD mainly focus on **theoretical aspects in computational geometry[1][2][3]**. In contrast, our work seeks to **explore its potential, bring renewed attention to it, and transition it into machine learning applications, specifically TTA.**\n\n2. 
**Hyperparameters** (WP62, ET99, rD6K, wGB6): The only hyperparameter introduced in our method is $\\gamma$, which controls the magnitude of influence. We found that it does not affect the model's accuracy. Other parameters such as $\\epsilon$ and $\\tau$ are included in the paper for completeness and are set following common practice (e.g., using the standard softmax function). As stated in Section 4 (Experiment), Voronoi sites are precomputed from the dataset. We have added this explanation in Appendix D.\n\nThank you all for your comments and suggestions! Please kindly let us know if you have any further concerns or feedback.\n\n[1] Danny Z. Chen, et al. On clustering induced voronoi diagrams. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.\n\n[2] Danny Z. Chen, et al. On clustering induced voronoi diagrams. SIAM Journal on Computing, 46(6):1679\u20131711, 2017.\n\n[3] Ziyun Huang, et al. Influence-based voronoi diagrams of clusters. Computational Geometry, 96:101746, 2021a. ISSN 0925-7721.\"}", "{\"summary\": \"This paper introduces a novel Test-Time Adaptation (TTA) method using Voronoi Diagrams, termed TTVD. The manuscript highlights the integration of cluster-induced Voronoi Diagrams with Power Diagrams, marking their inaugural application in the TTA domain. This combination aims to ensure both flexibility and robustness within the method. Based on the experiments presented, TTVD demonstrates a clear reduction in errors, which is commendable. The writing is clear, the methodology sound, and the experimental outcomes show significant improvement. However, the manuscript does raise concerns about its level of innovation, primarily since it builds upon pre-existing methodologies without introducing novel concepts.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The proposed TTVD method is presented as both simple and rational, making it easily understandable.\n2. 
The experimental performance of the TTVD method is impressive and showcases the method\\u2019s efficacy.\", \"weaknesses\": \"1. The main innovation of TTVD appears to be the combination of existing methods (Cluster-induced Voronoi Diagram and Power Diagram), which might not sufficiently fulfill the criteria for substantial novelty.\\n2. The manuscript lacks a comprehensive discussion and validation of parameter settings, such as \\\\gamma, \\\\eps, and \\\\tau, which are crucial for the reproducibility and understanding of the research.\\n3. The comparative analysis primarily focuses on methods proposed up to and including 2023. Given the rapid advancements in the field, incorporating more recent methodologies (from 2024) could provide a more current understanding of TTVD's positioning.\", \"questions\": \"1. The parameter gamma seems to be a critical aspect of TTVD; however, its determination and impact on algorithm performance are not thoroughly discussed in the manuscript. Could you provide a detailed explanation on how gamma values are selected and their influence on the method's efficacy?\\n2. Regarding the introduction of parameter eps in equation (3) to avoid the log0 issue, the justification seems unclear. The rationale that including eps prevents log0 errors in equation (3) is not compelling, as the log0 problem might not arise even without eps. Could you elaborate on the necessity of eps in this context?\\n3. The manuscript assumes the Voronoi sites are pre-determined without detailing the process of converting Xtest into Voronoi sites. For clarity and completeness, please provide a comprehensive description of how Xtest data are transformed into Voronoi sites.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer rD6K\", \"comment\": \"Thank you for your constructive comments. 
We address the concerns as follows.\n\n**Response to Weakness 1:**\nWe report the GPU memory usage and processing time per batch on CIFAR-C below. The memory usage is similar to TTT, while TTVD is faster, similar to TENT.\n\n| | Memory Usage (MiB) | Infer and adapt (ms) |\n| --------| -------- | --- |\n| Tent | 614 | 31| \n| TTT | 914 | 54|\n| TTVD | 932 | 39|\n\n**Response to Weakness 2:**\nYou are right that a theoretical analysis of VDs would greatly benefit the understanding of our method. Actually, this is also a challenge for most current TTA methods. We will leave this as valuable future work.\n\n**Response to Weakness 3:**\nThank you for your advice. We have added the pseudo-code for CIVD and CIPD in Appendix H. We also included our code in the supplementary materials for public readers to understand and reproduce our method.\n\n**Response to Weakness 4:**\nThe only hyperparameter introduced in our method is $\\gamma$, which controls the magnitude of influence. We find it does not affect the model's accuracy. We set $\\gamma=-0.8$ as a heuristic value to appropriately scale the influence of distant sites. Additionally, we provide a toy 3D illustration of the influence function to help your understanding, showing that as the distance increases, the influence of a site diminishes (https://anonymous.4open.science/r/iclr-654F/output.png).\n\n| gamma | Error (ImageNet-C) | ECE (ImageNet-C) |\n| --------------- | --------- | --- |\n| 0.9 | 59.8 | 17.8| \n| 0.8 | 59.8 | 21.0|\n| 0.7 | 59.7 | 23.3|\"}", "{\"comment\": \"Thank you for your response, which has addressed some of my concerns. As a result, I have slightly increased my score. 
However, overall, I remain somewhat dissatisfied with the level of innovation in the proposed approach, particularly in terms of the insights and impact it offers to researchers in the field.\"}", "{\"title\": \"Response to Reviewer WP62\", \"comment\": \"We sincerely thank Reviewer WP62 for the valuable time and effort in reviewing our work. We address the concerns as follows.\n\n**Response to Weakness 1:** As reviewer ET99 commented, our method is the first to apply VD for Test-time Adaptation, even though VDs were originally developed for space partitioning. Our contribution mainly lies in exploring the connection between VDs, neighbor-based methods, and TTA. First, we revealed that nearest neighbor algorithms can be analyzed through standard VD. Then, we found that advanced VDs, such as CIVD and PD, offer benefits for TTA because of their unique properties (enhanced robustness from multi-site influence, more flexible partitions). **We would like to note that previous papers on CIVD mainly focus on theoretical aspects in geometry[1][2][3]. In contrast, our work seeks to explore its potential, bring renewed attention to it, and transition it into machine learning applications, specifically TTA.**\n\n**Response to Weakness 2:** We explain the settings of $\\gamma$, $\\epsilon$, and $\\tau$ as follows.\n\n$\\epsilon$ is the machine epsilon, i.e. the smallest number that a computer can recognize as being greater than zero, but still very small in magnitude. It is used in the code implementation to improve numerical stability, rather than as a hyperparameter of the algorithm. We set it to 1e-8, similar to the epsilon used in the PyTorch Adam implementation.\n\n$\\tau$ is the temperature of the softmax function. We use the standard softmax function, where $\\tau = 1$. 
Both $\\epsilon$ and $\\tau$ are included in the equations for completeness and are set according to common implementation practices, rather than being treated as hyperparameters to tune in our framework.\n\n$\\gamma$ is the parameter that controls the magnitude of the influence. We use a small negative value to scale and diminish the influence of distant sites. For example, we provide a toy 3D illustration of the influence function to show that as the distance increases, the influence of a site becomes smaller (https://anonymous.4open.science/r/iclr-654F/output.png). As the table below shows, $\\gamma$ does not significantly affect accuracy, and we set $\\gamma=-0.8$ as a heuristic value to appropriately scale the influence of distant sites.\n\n| gamma | Error (ImageNet-C) | ECE (ImageNet-C) |\n| --------------- | --------- | --- |\n| 0.9 | 59.8 | 17.8| \n| 0.8 | 59.8 | 21.0|\n| 0.7 | 59.7 | 23.3|\n\nAside from these, common parameters such as the learning rate are configured according to the TTAB framework, as discussed in Appendix D. No additional parameters are introduced in our geometric framework. \n\n**Response to Weakness 3:**\nWe include three more recent methods for comparison below. It is worth noting that, since our method is based on VDs, it is orthogonal to most current approaches. This means it can be combined with other methods to further enhance performance.\n\n| | Error (ImageNet-C) |\n| ----------------- | ---------- |\n| IST (CVPR 2024)[4] | 63.4 |\n| MemBN (ECCV 2024)[5] | 65.6 |\n| Decorruptor (ECCV 2024)[6] | 63.8 |\n| TTVD (Ours) | 59.8 |\n\n**Response to Question 1:**\nWe explained this under Weakness 2 above; it does not affect the model's accuracy.\n\n**Response to Question 2:**\nEpsilon is a very small value used to enhance numerical stability in the code implementation, particularly when calling the log-softmax function in PyTorch. This is similar to the Adam optimizer implementation in PyTorch. 
We agree that the log problem might not arise, but we include this detail in the manuscript for completeness, and it does not affect the performance.\n\n**Response to Question 3:**\nThank you for your advice. Voronoi sites are computed from the training set, and the adaptation follows them using test data. We stated this in the experiment section, under \"implementation details\". Additionally, we have included a figure at the end of Section 3 to illustrate the workflow of our method. Furthermore, we have shown that our method is robust to the precision of the Voronoi sites in Table 4.\n\n[1] Danny Z. Chen, et al. On clustering induced voronoi diagrams. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.\n\n[2] Danny Z. Chen, et al. On clustering induced voronoi diagrams. SIAM Journal on Computing, 46(6):1679\u20131711, 2017.\n\n[3] Ziyun Huang, et al. Influence-based voronoi diagrams of clusters. Computational Geometry, 96:101746, 2021a. ISSN 0925-7721.\n\n[4] Ma, Jing. \"Improved Self-Training for Test-Time Adaptation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[5] Kang, Juwon, et al. \"MemBN: Robust Test-Time Adaptation via Batch Norm with Statistics Memory.\" European Conference on Computer Vision. Springer, Cham, 2025.\n\n[6] Oh, Yeongtak, et al. \"Efficient Diffusion-Driven Corruption Editor for Test-Time Adaptation.\" arXiv preprint arXiv:2403.10911 (2024).\"}", "{\"title\": \"Response to Reviewer wGB6 (2/2)\", \"comment\": \"[1] Taesik Gong, et al. NOTE: Robust continual test-time adaptation against temporal correlation. In Advances in Neural Information Processing Systems, 2022.\n\n[2] Zhang, Yifan, et al. \"Adanpc: Exploring non-parametric classifier for test-time adaptation.\" International Conference on Machine Learning. PMLR, 2023.\n\n[3] Ma, Jing. 
\\\"Improved Self-Training for Test-Time Adaptation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[4] Kang, Juwon, et al. \\\"MemBN: Robust Test-Time Adaptation via Batch Norm with Statistics Memory.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n[5] Oh, Yeongtak, et al. \\\"Efficient Diffusion-Driven Corruption Editor for Test-Time Adaptation.\\\" arXiv preprint arXiv:2403.10911 (2024).\\n\\n[6] Danny Z. Chen, et al. On clustering induced voronoi diagrams. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science.\\n\\n[7] Danny Z. Chen, et al. On clustering induced voronoi diagrams. SIAM Journal on Computing, 46(6):1679\\u20131711, 2017.\\n\\n[8] Ziyun Huang, et al. Influence-based voronoi diagrams of clusters. Computational Geometry, 96:101746, 2021a. ISSN 0925-7721.\"}", "{\"summary\": \"The paper targets the challenge of test-time adaptation (TTA) in deep learning models. The authors propose a framework, TTVD (Test-Time adjustment by Voronoi Diagram guidance), which leverages the geometric properties of Voronoi Diagrams to adapt models online during inference. The paper introduces two key geometric structures: Cluster-induced Voronoi Diagram (CIVD) and Power Diagram (PD), to enhance the robustness and adaptability of models facing distributional shifts. Extensive experiments on benchmark datasets like CIFAR-10-C, CIFAR-100-C, ImageNet-C, and ImageNet-R demonstrate the effectiveness of TTVD against state-of-the-art methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a fresh perspective on TTA by employing Voronoi Diagrams, which is a significant departure from traditional approaches and shows promise in handling distributional shifts.\\n2. 
The authors provide a rigorous experimental evaluation, demonstrating TTVD's superiority over existing methods on multiple benchmark datasets, which strengthens the credibility of their approach.\n3. Leveraging Voronoi Diagrams for TTA enhances model interpretability, allowing for clearer visualizations and understanding of partition boundaries, which is a valuable asset in deep learning.\", \"weaknesses\": \"1. While the paper discusses the benefits of TTVD, it lacks a detailed discussion on the computational overhead introduced by the geometric structures, which could be a concern for real-time applications.\n2. While experimental results are promising, it would be valuable to see a comparison with theoretical bounds or guarantees, if available, to understand the limits of TTVD.\n3. Some sections, particularly the methodology, could benefit from more detailed explanations or pseudo-code to aid reproducibility.\n4. The performance of geometric structures like CIVD and PD may be sensitive to hyperparameters. The paper could provide more insights into hyperparameter tuning and the robustness of these parameters.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper uses a combination of different variants of Voronoi Diagrams in test-time adaptation. The proposed method combines the original Voronoi Diagram, Cluster-induced Voronoi Diagram, and Power Diagram. The proposed method outperforms a collection of relevant baselines on 4 benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed method is novel, and seems to be the first work applying Voronoi Diagrams in test-time adaptation, although there are works (e.g. 
T3A, AdaNPC) that share similar intuitions.\", \"The authors choose very appropriate baselines: all of them are highly relevant and share similarities with the proposed method. The proposed method has strong performance.\"], \"weaknesses\": [\"The algorithm part seems unfinished and lacks many details. For example, the paper only includes how to compute the soft prediction $\\hat{y}$ for VD, but not for CIVD and PD. Also, it is not introduced how these three components are combined and whether there are additional hyperparameters or flexibilities.\", \"It seems like this paper changes the way of doing inference (from a simple linear layer to a combination of three types of Voronoi Diagrams). However, the TTA process is still just entropy minimization, like a simpler version of SHOT. Given this similarity, it is highly unclear why the proposed method can solve the challenges in the introduction, and how.\", \"[Minor] The format of references may need to be updated. There are many places where the authors use \\cite when it should be \\citep. Please correct it in the next version.\"], \"questions\": [\"For the Voronoi Diagram method in Section 3.1, is it true that the Voronoi sites won\u2019t be updated once initialized? In Algorithm 1, they are not adapted.\", \"How is the soft prediction obtained based on $F$ in formulas (4) and (6)? How are the three diagrams combined?\", \"What is the purpose of Lemma 3.1? 
Is it how the $v_k$s are initialized?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for raising the score!\"}", "{\"comment\": \"We sincerely thank you for raising the score!\"}", "{\"title\": \"Response to Reviewer wGB6 (1/2)\", \"comment\": \"We thank Reviewer wGB6 for the positive feedback and valuable review!\n\n**Response to Weakness 1:**\nWe will answer your concern below regarding your Question 1.\n\n**Response to Weakness 2:**\nThank you for pointing this out. We agree that the idea of \"fully TTA\" (carefully quoted here from TENT) excludes access to source data. However, as modern methods continue to evolve, it can be observed that SOTA methods often do not strictly adhere to these settings. For instance, NOTE[1] requires pre-training from scratch to implement its proposed batch normalization layers, and AdaNPC[2] similarly relies on pre-training with its proposed KNN-based loss function. From their experiments, we observed that they primarily compare their results to the same baselines we used.\n\nWhile categorizing previous TTA methods is not the main focus of our paper, we think that their primary distinction lies in their methodology rather than their settings: TTT employs self-training, whereas TENT utilizes entropy minimization. Since our adaptation approach aligns more closely with entropy minimization, we included these baselines for comparison. 
However, we understand your concern and have added additional baselines from more recent methods that incorporate self-training or require modifications to the pre-training process.\n\n| | Error (ImageNet-C) |\n| --------------------------- | ---------- |\n| IST (CVPR 2024)[3] | 63.4 |\n| MemBN (ECCV 2024)[4] | 65.6 |\n| Decorruptor (ECCV 2024)[5] | 63.8 |\n| TTVD (Ours) | 59.8 |\n\n**Response to Weakness 3:**\nIn standard VD, each Voronoi cell is dominated/influenced by a single site $\\mu_k$, as shown in Eq. 2. In contrast, in CIVD, each cell is influenced by a cluster of sites $\\mathcal{C}_k$, as shown in Eq. 4. This transition to multi-site influence enhances the robustness of the cells. Essentially, a standard VD can be viewed as a special case of 1-nearest neighbor, as VD serves as a foundational structure for nearest-neighbor methods. We have added a figure at the end of Section 3 to illustrate the differences among VD, CIVD, and CIPD, providing a clearer explanation.\n\n**Response to Weakness 4:**\nYou are right that label y is generated by replacing $d$ with $F$. However, there seems to be a misunderstanding regarding our proposed method. In fact, VD, CIVD, and CIPD are separate structures. We have revised our summary of them at the end of Section 3 for clarity. VD \u2192 CIVD \u2192 CIPD represent a progressive relationship rather than a combination. VD is the simplest point-to-point structure among the three. CIVD builds upon it as a cluster-to-point structure, while CIPD further extends CIVD by incorporating the power distance, as described in Eq. 5. We have added the pseudo-code for CIVD and CIPD in Appendix H. We use CIPD for all datasets, as it is the most advanced structure of the three.\n\n**Response to Weakness 5:**\nThank you for your advice. We submitted our code in the supplementary materials to help public readers understand and reproduce our method. 
$\\\\mu$ is calculated from the class means of the training set, which is stated under the ``implementation details'' in the experiment section. To generate $\\\\mathcal{C}$, we use self-supervision to expand the Voronoi sites, which is stated in detail in Section 3.2. For example, we use rotation to expand $K$ sites to $4K$ sites, and every 4 sites form a cluster $\\\\mathcal{C_k}$. $v$ is set according to Lemma 3.1, where it can be calculated from the classifier layer of the pretrained model.\\n\\n**Response to Question 1:**\\nWe are grateful for the opportunity to explain! We understand that this paper relies heavily on Voronoi Diagrams, a classical structure from computational geometry that is not widely explored in the field of machine learning, which may affect readability.\\n\\nAs reviewer ET99 commented, our method is the first to apply VD for Test-time Adaptation, even though VDs were originally developed for space partitioning. Our contribution mainly lies in exploring the connections between VDs, neighbor-based methods, and TTA. First, we revealed that nearest neighbor algorithms can be analyzed through standard VD. Then, we found that advanced VDs, such as CIVD and PD, offer benefits for TTA, because of their unique properties (enhanced robustness from multi-site influence, more flexible partitions). **We would like to note that previous papers on CIVD mainly focus on theoretical aspects in geometry [6][7][8]. In contrast, our work seeks to explore its potential, bring renewed attention to it, and transition it into machine learning applications, specifically TTA.**
As the discussion period nears its end, we kindly invite you to share any feedback you might have, and we would be happy to discuss them further.\"}", "{\"title\": \"response\", \"comment\": \"The rebuttal provides the differences among three types of VD and experiments with TTT methods. I think my most concerns have been addressed.\"}", "{\"comment\": \"Thank you once again for your time and effort providing the initial review of our paper! We have carefully addressed the questions you raised and provided detailed responses above. As the discussion period nears its end, we kindly invite you to share any feedback you might have, and we would be happy to discuss them further.\"}", "{\"metareview\": \"Thanks for your submission to ICLR. This paper received four reviews, and the reviewers ultimately agreed that the paper is sufficiently strong for publication. On the positive side, reviewers noted the novelty of the method, and its good empirical results. On the negative side, some reviewers felt that the method was incremental, and several reviewers noted some details missing from the paper throughout.\\n\\nDuring the discussion period, the author rebuttal addressed many of the reviewer concerns, and several of the reviewers raised their scores. At this point, all four reviewers are leaning accept on this paper, and I am happy to recommend accepting the paper for publication.\\n\\nPlease do try to address the reviewer comments in the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Three of the four reviewers raised their score during the discussion. One reviewer (who already had a positive score overall) did not participate in the discussion. 
Overall, it seems that the rebuttal helped to clear up major concerns about the paper.\"}", "{\"summary\": \"This paper presents the Test-Time adjustment by Voronoi Diagram (TTVD) framework by leveraging geometric principles, particularly the Voronoi Diagram (VD) and its extensions: the Cluster-induced Voronoi Diagram (CIVD) and the Power Diagram (PD). TTVD addresses the limitations of current test-time methods by using these geometric structures to improve feature alignment and sample filtering. CIVD enhances robustness by considering clusters rather than individual prototypes, while PD allows flexible boundaries to better handle noisy samples near decision boundaries. The proposed TTVD demonstrates substantial improvements over state-of-the-art TTA methods on several corrupted datasets, showing its effectiveness in real-world distribution shift scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"By introducing geometric frameworks like VD, CIVD, and PD, TTVD leverages computational geometry to improve the alignment of test-time features with training distributions. This approach provides a mathematically grounded and visually interpretable solution to feature adaptation in TTA.\", \"TTVD shows consistent improvements over existing methods across multiple datasets, reducing classification error rates and enhancing model calibration as indicated by lower Expected Calibration Error (ECE) scores. The inclusion of diverse corruption types in the evaluation (e.g., noise, blur, and weather-based distortions) demonstrates the framework\\u2019s adaptability to real-world conditions.\"], \"weaknesses\": [\"Both CIVD and PD are well-established geometric structures, raising questions about the novelty of TTVD\\u2019s core contributions. 
The first two contributions mainly apply these established methods to the TTA setting, which may limit the originality of the approach.\", \"The distinction between \\u201ctest-time training\\u201d (TTT) and \\u201ctest-time adaptation\\u201d (TTA) is somewhat blurred. According to the TENT framework, TTA excludes access to source data, while TTT can include self-supervised losses on source data. TTVD\\u2019s reliance on pre-computed Voronoi sites calculated during pre-training suggests it should be categorized as TTT rather than TTA. This distinction impacts baseline comparisons, as the current baselines primarily include TTA methods, potentially leading to an unfair performance comparison.\", \"The paper claims that TTVD extends VD from a point-to-point structure to a cluster-to-point influence mechanism, but it\\u2019s unclear why distances in standard VD (calculated by $\\\\mu_k$) would not already reflect cluster-to-point relationships. A clearer explanation of this transition\\u2019s significance would be beneficial.\", \"The method lacks details on integrating VD, CIVD, and PD into a single loss function. Questions remain regarding whether the label y is generated by substituting $d(\\\\cdot)$ in Equation 3 with $F(\\\\cdot)$, how the components are balanced, and whether this balance is sensitive to different datasets.\", \"The paper lacks computational details on estimating key parameters ($\\\\mu, C, and~ v$) in the TTVD framework. More clarity on these calculations would enhance understanding of the implementation and reproducibility of TTVD.\"], \"questions\": \"The reviewer may have limited familiarity with the Voronoi Diagram, which could have led to some misunderstandings. 
The authors are encouraged to provide additional explanations during the rebuttal to clarify the above points, especially the contributions of the paper and the assumptions underlying TTT and TTA.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5sRnsubyAK
Neuroacoustic Patterns: Constant Q Cepstral Coefficients for the Classification of Neurodegenerative Disorders
[ "Aastha Kachhi", "Shashank Ojha", "Megha Pandey", "Ajay Kumar Sharma", "Anurag Pandey" ]
Early identification of neurodegenerative diseases is crucial for effective diagnosis in neurological disorders. However, the quasi-periodic nature of vocal tract sampling often results in inadequate spectral resolution in traditional spectral features, such as Mel Frequency Cepstral Coefficients (MFCC), thereby limiting their classification effectiveness. In this study, we propose the use of Constant Q Cepstral Coefficients (CQCC), which leverage geometrically spaced frequency bins to provide superior spectrotemporal resolution, particularly for capturing the fundamental frequency and its harmonics in speech signals associated with neurodegenerative disorders. Our results demonstrate that CQCC, when integrated with Random Forest and Support Vector Machine classifiers, significantly outperform MFCC, achieving absolute improvements of 5.6 % and 7.7 %, respectively. Furthermore, CQCC show enhanced performance over traditional acoustic measures, such as Jitter, Shimmer, and Teager Energy. The effectiveness of CQCC is underpinned by the form-invariance property of the Constant Q Transform (CQT), which ensures consistent feature representation across varying pitch and tonal conditions, thereby enhancing classification robustness. Furthermore, the robustness of CQCC features against MFCC features is validated using LDA plots. These findings are validated using the Italian Parkinson’s database and the Minsk2019 database of Amyotrophic Lateral Sclerosis, underscoring the potential of CQCC to advance the classification of neurodegenerative disorders.
[ "Neurodegenerative Disorder", "Constant Q Cepstral Coefficient", "Form Invariance", "Random Forest", "SVM." ]
https://openreview.net/pdf?id=5sRnsubyAK
https://openreview.net/forum?id=5sRnsubyAK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yj2VxWjc8x", "ea1GsIc6Xh", "ahI3J6wSLJ", "YCOoL6rrFf", "G5kRSj5OIS" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1729188193557, 1730689738634, 1732259449478, 1730716058632, 1730806109139 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14296/Reviewer_be3B" ], [ "ICLR.cc/2025/Conference/Submission14296/Reviewer_U5Xt" ], [ "ICLR.cc/2025/Conference/Submission14296/Authors" ], [ "ICLR.cc/2025/Conference/Submission14296/Reviewer_JA4h" ], [ "ICLR.cc/2025/Conference/Submission14296/Reviewer_uVwr" ] ], "structured_content_str": [ "{\"summary\": \"The paper explores the discriminatory ability of Constant Q Cepstral Coefficients (CQCC) to classify neurodegenerative disorders based on the utterance of sustained vowels. The experimental setup includes samples from patients suffering from Parkinson-s Disease (PD) and Amyotrophic Lateral Sclerosis. The proposed pipeline includes using SMOTE to compensate for class-inbalance problems and two classical ML classification models: SVM and RF. The results show a comparison between CQCC against the well-known MFCC and classical acoustic parameters related to the fundamental frequency variability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper presents results demonstrating that the use of CQCC enhances classification accuracy in detecting neurodegenerative disorders compared to traditional MFCC and acoustic parameters.\", \"weaknesses\": \"The main drawback of the paper is its novelty. Moreover, I consider it entirely out of the Conference's scope since it does not introduce any approach incorporating the idea of a \\\"learning representation.\\\" The components related to Machine Learning used in the paper are traditional ML models. 
Regarding its novelty, the set of features analysed in the paper was introduced back in 2017 and has been tested before in several voice/speech processing applications, so its contribution would be more focused on the academic community interested in the specific area of neurodegenerative disorders classification from speech signals. However, even considering the potential contribution in the area of neurodegenerative disorders detection, the comparison proposed in the paper is pretty limited since some previous works have shown that, in the context of PD detection, Rasta-PLP coefficients have better performance than MFCC but more importantly, that sustained vowels lack articulatory information which is critical to PD detection. Indeed, there are not many datasets available out there, but the Italian Dataset used in the experiments has pretty low recording quality, and many works have shown that classifying PD vs. Control in that dataset is not a difficult task, so it should not be used as a benchmark.\", \"questions\": [\"Why do the authors consider the paper suitable for the ICLR venue?\", \"The review of previous work should be improved significantly; literature using spectral/cepstral features is abundant.
Moreover, to evaluate the proposed approach's actual contribution, the paper should analyse (and compare) the proposed approach with works using end-to-end approaches based on Spectrograms or feature vectors obtained from foundational models, such as Wav2vec, Speech2Vec, or HuBERT.\", \"Why did the authors not include experiments using Rasta-PLP if several works have reported better performance than MFCC in the context of PD detection?\", \"Why did the authors not include experiments using oral diadochokinesis tasks or free speech, which are currently the maximum performance tasks for PD detection from speech signals?\", \"Why did the authors not include more datasets in their experiments, such as the GITA (https://www5.informatik.uni-erlangen.de/fileadmin/research/Publikationen/2014/Orozco14-NSS.pdf) or Neurovoz (https://arxiv.org/abs/2403.02371) datasets, which are provided by request. There are also datasets in German, Czech, and English used in many studies that could be used by requesting the material from the authors.\", \"The authors should include cross-dataset experiments, which are the most challenging evaluations, where most of the proposed approaches fail or show significant drops in their performance, so they constitute the gold standard for evaluating advances in this field of application.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The article investigates constant-Q cepstral coefficients (CQCC) to perform classification of neuro-degenerative disorder from speech, and compares the results with respect to standard mel-frequency cepstral coefficients (MFCCs) and other low-level acoustic features, such as jitter, shimmer, teager-energy etc.
The results presented in the paper indicate sufficient performance improvement compared to the MFCC baseline.\\n\\nWhile this is a well-motivated work that has the potential to impact detection of neuro-degenerative diseases using speech as the input modality, it is not completely clear what the main novelty of the paper is. The authors did specify that the constant-Q cepstra is the main novelty presented in this work; however, that is fairly incremental, as such features have been used in speech technologies, perhaps not in the same application area as this article.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper focuses on speech-based detection of neuro-degenerative disease, specifically Parkinson's disease and Amyotrophic lateral sclerosis (ALS). The paper is well motivated, clearly outlines the prior work that has been done and the contribution of the paper. Results presented in the article show a strong performance demonstrated by the proposed approach as compared with the MFCC-based system.\", \"weaknesses\": \"This is an interesting and relevant work focusing on detection/recognition of Parkinson's disease and Amyotrophic lateral sclerosis (ALS) from speech data, consisting of sustained vowels, specifically focusing on constant-Q cepstral coefficients (CQCC) as acoustic features. There are certain aspects that need to be addressed -\\n(1) Given the findings are primarily based on sustained vowels, how do the observations generalize to spontaneous speech? Is it absolutely needed to have speech containing sustained vowels to be able to detect/recognize the condition investigated in this work?\\n(2) Table 2 in the dataset section introduces three datasets: D1, D2 and D3. However it is not clear which one of these corresponds to the datasets detailed in section 4.1.
Also, in section 4.1, there are two datasets that are introduced: (a) Italian Parkinson\\u2019s Voice and Speech dataset, and (b) Minsk2019 ALS database. Table 2 is confusing as it introduces three datasets, and it is not clear what the 3rd dataset is, and which datasets correspond to D1, D2 and D3.\\n(3) Section 4.3 introduces MFCCs as state-of-the-art: I wonder about the rationale behind stating that MFCCs are state-of-the-art. Is there any prior work that established MFCCs as the state-of-the-art feature for this specific application? \\n(4) There are some typing errors that can be addressed by proof-reading the paper: \\n(a) page 2, section 2, line 094: \\\"\\u2022Furthermore, no studies...\\\" >> \\\"\\u2022 Furthermore, no studies... \\\"\\n(b) page 2, section 2, line 097: \\\"this is the first study of it;s kind ... \\\" >> \\\"this is the first study of it's kind ... \\\"\\n(c) page 5, section 4.1, line 264: \\\"..sustained sounds of all vowel sounds .. \\\" > please rephrase this line, \\\"sounds\\\" is repeated twice and it makes the sentence a bit confusing.\", \"questions\": \"The paper presents an interesting and relevant application of speech technologies for detection of Parkinson's disease and Amyotrophic lateral sclerosis (ALS) from speech data, consisting of sustained vowels. Please find below some open questions, which, if addressed, can facilitate the paper to be more accessible to the general reader/audience.\\n\\n(1) What is meant by D1, D2 and D3 in table 2? Is it possible to specify which ones correspond to the two datasets specified earlier: (a) Italian Parkinson\\u2019s Voice and Speech dataset, and (b) Minsk2019 ALS database?\\n\\n(2) The dataset section 4.1 does not provide any detail on how the train, validation and test sets are created/obtained from the data shown in tables 2 and 3. Was there any speaker overlap between the train-dev-test splits?\\n\\n(3) I wonder about the rationale behind stating that MFCCs are state-of-the-art.
Is there any prior work that established MFCCs as the state-of-the-art feature for this specific application? \\n\\n(4) It is also not clear why 20 CQC coefficients were selected versus 13 MFC coefficients. What is the rationale behind using 13 MFCCs only? Typically 13 is selected for speech recognition purposes, as higher cepstral coefficients are known to capture more speaker-related attributes. \\n\\n(5) Section 5.1 presents an interesting analysis using some examples, however it is not clear how much of the observations shared in the analysis is captured by the features. Specifically, the first 13 cepstral features may not capture speaker-specific characteristics, including pitch. It is also not obvious if harmonic energy is captured well in the explored features. \\nHow consistent are these observations w.r.t speakers having varying degrees of ALS or Parkinson's disease?\\n\\n(6) Section 5.2.2 presents an interesting analysis by comparing the findings against other relevant features. However, it would be useful to share if there is any prior art that has proposed the use of these features in isolation.\\nGiven that jitter, shimmer and teager energy features each capture different attributes in the acoustic speech signal, these features are usually used in combination with one another, rather than in isolation. It is not clear why these features were explored in isolation as a baseline. What happens when these features are combined, even when combined with MFCCs or CQCCs?\\n\\n(7) Section 5.2.2, page 8, lines 385-386: \\\"Two new databases D1 and D3 were prepared, where two different pathologies ..\\\" > it is not clear what D1, D2 and D3 represent?
What are the two different pathologies specified here?\\n\\n(8) I am wondering if the authors have considered using some of the paralinguistic feature sets well known in the literature such as the openSMILE features that contain attributes which have been used for analysis (table 6) shared in the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors have proposed Constant Q Cepstral Coefficients (CQCC) as a measure to identify neurodegenerative diseases (like Parkinson\\u2019s and Amyotrophic lateral sclerosis). The proposed measure is compared against basic acoustic features Jitter Shimmer Teager Energy and MFCC using traditional machine learning classifiers like random forest and Support vector machines. The discriminator power of CQCC is demonstrated using two different datasets i.e. Italian Parkinson\\u2019s Voice and Speech dataset and Minsk2019 ALS database.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper contributes towards developing interpretable features representation for neurodegenerative diseases.\\n2. A comparison with mostly commonly used features like MFCC\\n3. evaluation on two different languages and diseases. \\n4. achievement of significant performance over widely used feature sets by neurodegenerative research community. \\n5. demonstration of improved class separation of CQCC over MFCC using LDA plots.\", \"weaknesses\": \"1. hyperparameter optimization is not performed.\\n2. it is not clear what is the dimensionality of each feature set, consider adding a table or paragraph in the methodology section detailing the dimensionality of each feature set used. \\n3. 
Have you considered discussing the trade-offs between your approach and deep learning methods like wav2vec or BERT? This could help contextualize your choice of method and highlight any advantages in terms of interpretability or computational efficiency.\\n4. The research field lacks large datasets. In the limitations section, could you discuss how the scarcity of large datasets in this field might impact the generalizability of the findings, and what implications this has for future research? \\n5. Could you provide more context in the methodology section about why these specific traditional feature sets were chosen for comparison? Are there particular characteristics of these features that make them relevant benchmarks for neurodegenerative disease detection?\\n6. Consider adding more references.\", \"questions\": \"1) why and how your proposed feature is helpful?\\n2) what characteristic of speech the features are representing and how they represent neurodegeneration in speech (interpretability for clinicians?)\\n3) why you have not performed fusion of features?\\n4) any computational cost advantages?\\n5) explain dimensionality of feature sets and the time window for extraction of features, and how did you generate a representation for an audio recording.\\n6) how did you handle the variable duration of audio recordings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new feature extraction method that leverages the form-invariance property of the Constant Q Transform (CQT). It is applied to the classification of neurodegenerative disorders, specifically Parkinson's Disease (PD) and Amyotrophic Lateral Sclerosis (ALS). The authors propose that CQCC, which leverages geometrically spaced frequency bins, provides superior spectrotemporal resolution compared to traditional Mel Frequency Cepstral Coefficients (MFCC).
The study demonstrates that CQCC, when integrated with Random Forest and Support Vector Machine classifiers, significantly outperforms MFCC, achieving absolute improvements of 5.6% and 7.7%, respectively. The effectiveness of CQCC is validated using the Italian Parkinson\\u2019s database and the Minsk2019 database of ALS.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper leverages the form-invariance property of the Constant Q Transform (CQT) to achieve superior spectrotemporal resolution compared to traditional Mel Frequency Cepstral Coefficients (MFCC). The authors demonstrate significant improvements in classification accuracy using Random Forest and Support Vector Machine classifiers, validated across multiple datasets. While the technical complexity may pose challenges for some readers, the research is of high quality and holds significant potential for advancing early diagnosis and treatment of neurodegenerative diseases. The paper's contributions are a valuable addition to the niche application area of disease diagnosis via audio.\", \"weaknesses\": \"1. Rigor in experimentation\\n1a. Error Analysis/ Literature comparison \\n Although the paper includes spectrographic analysis and Linear Discriminant Analysis (LDA) plots to visualize feature separability, it is necessary to conduct a detailed literature analysis of the SoTA models used for this task, such as deep feature extractors. A useful analysis would be to identify common patterns or features that contribute to the errors and provide insights into the specific cases where the proposed method fails. This would help in understanding the contribution of CQCC as an efficient feature extractor. \\n\\n1b. Cross validation/ 10-fold CV:\\nThe paper lacks external validation of the proposed CQCC method.
While simple accuracy score results are promising, additional validation using independent datasets not used in the training phase would provide stronger evidence of the method's effectiveness. This could involve cross-validation with other publicly available datasets or non-intersecting splits from the current dataset to achieve error estimates or uncertainty scores. *(Ref see Uncertainty Quantification of Deep Learning Models)\\n\\n\\n2. Dataset Diversity:\\nThe study primarily uses the Italian Parkinson\\u2019s Voice and Speech dataset and the Minsk2019 ALS database. While these datasets are well-established, the paper could benefit from including more diverse datasets to ensure the generalizability of the findings. It would strengthen the validity of the results and demonstrate the robustness of the proposed method across various contexts.\\n\\n3. Lack of Comparison with Other Deep Models/Deep Feature Extractors:\\nThe paper compares CQCC primarily with traditional acoustic features like MFCC, Jitter, Shimmer, and Teager Energy. However, it does not provide a comparison with other advanced feature extraction methods or machine learning techniques that have become the go-to in most such applications. Including such comparisons would provide a more comprehensive evaluation of the proposed method's performance and highlight its relative strengths and weaknesses.\", \"questions\": \"Include comparisons with other deep feature extraction methods, say Wav2Vec or PASE, or even CNN techniques recently proposed in the literature. This would provide a more comprehensive evaluation of your method's performance and highlight its relative strengths and weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5sQiK2qTGa
On Memorization of Large Language Models in Logical Reasoning
[ "Chulin Xie", "Yangsibo Huang", "Chiyuan Zhang", "Da Yu", "Xinyun Chen", "Bill Yuchen Lin", "Bo Li", "Badih Ghazi", "Ravi Kumar" ]
Large language models (LLMs) achieve good performance on challenging reasoning benchmarks, yet could also make basic reasoning mistakes. This contrasting behavior is puzzling when it comes to understanding the mechanisms behind LLMs' reasoning capabilities. One hypothesis is that the increasingly high and nearly saturated performance on common reasoning benchmarks could be due to the memorization of similar problems. In this paper, we systematically investigate this hypothesis with a quantitative measurement of memorization in reasoning tasks, using a dynamically generated logical reasoning benchmark based on Knights and Knaves (K&K) puzzles. We found that LLMs could interpolate the training puzzles (achieving near-perfect accuracy) after fine-tuning, yet fail when those puzzles are slightly perturbed, suggesting that the models heavily rely on memorization to solve those training puzzles. On the other hand, we show that while fine-tuning leads to heavy memorization, it also consistently improves generalization performance. In-depth analyses with perturbation tests, cross difficulty-level transferability, probing model internals, and fine-tuning with wrong answers suggest that the LLMs learn to reason on K&K puzzles despite training data memorization. This phenomenon indicates that LLMs exhibit a complex interplay between memorization and genuine reasoning abilities. Finally, our analysis with per-sample memorization score sheds light on how LLMs switch between reasoning and memorization in solving logical puzzles.
[ "LLM", "memorization", "logical reasoning", "perturbation", "knights and knaves" ]
Reject
https://openreview.net/pdf?id=5sQiK2qTGa
https://openreview.net/forum?id=5sQiK2qTGa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zjCTInkrWY", "xuwtB6qZE1", "xq6ClkHzAx", "xD3FZN1IaS", "wayNhlM5Cn", "w5SxSHleCP", "qSTPAs2idC", "nXfxBY28sa", "nCIHch37GA", "mflibOfQ2w", "lCqQxvZJCW", "i5c6rZJnoE", "fippaM4GAe", "cHoyQNMxjG", "a7xRaIepsp", "ZECrtKcX9o", "ScFCF3XxeC", "SJhDK6tX4h", "PjpfIGhdbK", "OIgQsghPru", "OHjJLJUKk4", "NkzVuXIOuc", "JOho9607EQ", "HbuhVtEMrc", "CtQ85KSaG6", "AbPygim4qV", "AIyO2ggnX8", "AGomkonGbc", "2gXXJPaCur", "2IxfVz61dD", "1QLzgadOMv" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732475650317, 1734771641106, 1730702278674, 1732807806858, 1732476260429, 1732475542023, 1730712892363, 1732569835731, 1733172963402, 1732474563531, 1732475421303, 1732475891933, 1732615572024, 1732474685964, 1733173848955, 1732476188380, 1729146931904, 1732477206910, 1732527626751, 1732475772434, 1732475088725, 1737523830825, 1732807242490, 1732808117552, 1732615536264, 1732476090463, 1733036694771, 1732883599879, 1732474844415, 1730005366328, 1732597631304 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Area_Chair_aGuu" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_XCvy" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_1uBu" ], [ 
"ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_ofks" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_ofks" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_1uBu" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_ofks" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_XCvy" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_ofks" ], [ "ICLR.cc/2025/Conference/Submission7303/Authors" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_7GXF" ], [ "ICLR.cc/2025/Conference/Submission7303/Reviewer_7GXF" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 7GXF (Part 3)\", \"comment\": \"> Q3: Standard fine-tuning with Chain-of-Thought (CoT) prompting, which the paper highlights, is already a well-known approach. The study would benefit from more innovative methods that build upon these findings.\\n\\nThanks for the feedback. In fact, our study primarily emphasizes analyzing **Direct FT** rather than CoT FT, as discussed in L299-309. 
Direct FT, which trains models using only question-answer pairs without detailed reasoning steps, is intuitively more challenging for LLMs, as it requires the model to infer reasoning procedures independently.\\nSurprisingly, as shown in Fig. 5, we did not observe Direct FTed GPT-4-mini models exhibiting significantly higher memorization scores than CoT FTed ones. The results in Sec. 4.1 show that models can learn to reason through K&K puzzles effectively even from question-answer pairs alone. Our findings highlight that Direct FT can be a viable approach for developing reasoning capabilities on logical reasoning problems, especially in cases where CoT data is unavailable or expensive to produce.\\n\\nTo further explore what the model learns through Direct FT, we conducted *innovative analyses* including *Probing Analysis* (Sec. 4.3) and *Ablation Study with Incorrect Data Fine-Tuning* (Sec. 4.3). \\n\\n\\nFor **CoT FT**, we also performed a novel analysis with low-quality CoT data fine-tuning in Section 5. We introduce incorrect CoT annotations, including shuffled reasoning steps or one wrong step. One wrong CoT step mimics careful annotators who make a small mistake. Interestingly, we found that models could still generalize and learn to reason effectively, demonstrating robustness to low-quality CoT annotations.\\n\\nAdditionally, in Section 6, we propose **puzzle-based and model-based indicators** to distinguish samples solved via reasoning versus memorization. These indicators provide new insights into these two capabilities in LLMs.\\n\\nLastly, our *contributions extend beyond fine-tuning*: we developed an innovative K&K **data generation framework** that supports creating new puzzles, systematically perturbing existing puzzles at various difficulty levels, and computing new solutions. 
This framework is not only essential for our study but also serves as a resource for advancing future research on logical reasoning and memorization in LLMs.\\n\\n\\n\\n> Q4: Insufficient Baselines: The paper evaluates a narrow set of models and approaches. Including a broader range of baseline algorithms, particularly reinforcement learning (RL)-based models or other alternative reasoning frameworks, would provide more context for the performance of LLMs and help assess whether memorization is unique to certain models or training methods\\u2026 Adding comparisons with other learning paradigms (e.g., RLs, decision transformers, or symbolic models) could broaden the understanding of how different models handle reasoning and memorization, especially in dynamic or less static environments than the K&K puzzles.\\n\\nThank you for the thoughtful suggestion. Could the reviewer kindly suggest specific RL-based models or alternative reasoning frameworks that would be particularly relevant for comparison? We would be happy to include those in our revised manuscript.\"}", "{\"metareview\": \"Whether LLMs learn to reason, or whether their perceived reasoning power is rooted in an ability to memorize a huge space of potential answers, is an important question for understanding the mechanism of LLMs. Because of its importance, lots of prior work exists. This paper proposes a \\\"perturbation-based\\\" method to quantify LLMs' memorization ability. Memorization definitely plays a role in reasoning. If a model knows nothing, it definitely cannot reason. Because of this, a method to quantify memorization, so that we can understand to what extent it plays the dominant role, is an important attempt. However, the main concerns are two-fold. Firstly, the novelty of this metric, and the extent to which we can trust it. Perturbation-based methods are not particularly novel conceptually, and a comprehensive inclusion of references is needed. 
Furthermore, given how the state space is perturbed, it is not large enough to fully convince reviewers of the validity of this metric. Secondly, I would not particularly argue over whether the results/conclusions need to be surprising or not. But in some sense, we get a hand-wavy definition of \\\"memorization\\\" and \\\"genuine\\\" reasoning, which makes some discussion less concrete. I would encourage the authors to make a clearer definition, make a more distinct case to separate the two, and include more results on a larger state space and more models.\", \"additional_comments_on_reviewer_discussion\": \"More than one reviewer raised the concern about the distinction between \\\"memorization\\\" and \\\"genuine reasoning\\\". The concern still remains after the rebuttal. The soundness concerns about the experiments (e.g., the concern about limited model evaluations), which were raised by all reviewers, were somewhat addressed by the authors, and one or two reviewers raised their scores accordingly. However, due to the limitation of the state space, reviewers still found the results not convincing enough.\"}", "{\"summary\": \"The paper proposes a memorisation metric (LiMem) to quantify the extent of memorisation vs reasoning exhibited by language models when solving logical reasoning tasks. The metric is based on measuring inconsistency when a model solves a locally perturbed version of a logical reasoning puzzle. The paper also proposes a logical reasoning benchmark based on the Knights and Knaves puzzles, which enables the memorisation study and could be useful for future research on logical reasoning in language models.\\n\\n## Main Experiments\\n- They test on a set of eight open- and closed-source models. 
They compare the scores with and without perturbations across various parameters.\\n- They also run experiments by fine-tuning models on variations of the knight and knave puzzles. They claim that fine-tuning leads to better memorisation and reasoning across various modes of difficulty.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Overall, I believe this paper is well written and easy to understand.\", \"The authors have explained their assumptions and experimental setup clearly.\", \"The main figure explains most of the experimental gist at a glance.\"], \"weaknesses\": \"## Limitations\\n- **State space with perturbations**: The problem space is limited in terms of the number of people, depth, and width, with maximums of only 8, 2, and 2 respectively. These limited dimensions make it relatively easy for models to interpolate the entire problem space with perturbations, potentially inflating perceived generalisation.\\n- **Limited Evaluation**: The authors analyze only 8 models, yet they refer to it as a benchmark, which limits its claim to be a benchmark. A more comprehensive evaluation across diverse models is necessary, particularly with a focus on distinguishing performance in terms of memorization versus reasoning\\u2014an analysis notably missing in the paper.\\n- Boilerplate memorization issues have been raised by other studies (e.g., Sai et al. [1]) that address similar patterns of template memorisation. The work in [1] reaffirms that slight variations in variable names do not disrupt memorisation. This can also in turn explain the strong performance of fine-tuned models with a small number of fine-tuning samples.\\n- Recent analyses, like that by Saparov and He [2], have highlighted LMs' reasoning abilities in Chain-of-Thought (CoT) contexts. 1-shot performance doesn't seem to degrade under various perturbations, as opposed to 0-shot. \\n\\nPlease cite missing relevant work that explores memorization and formal reasoning.\\n[1] P. U. 
Sai et al., \\u201cRecite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon,\\u201d arXiv.org, 2024. https://arxiv.org/abs/2406.17746.\\n[2] A. Saparov and H. He, \\u201cLanguage Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought,\\u201d arXiv:2210.01240 [cs],\\n[3] L. Pan et al \\\"LOGIC-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning\\\" https://arxiv.org/pdf/2305.12295\\n\\n\\u200c\", \"questions\": [\"I have highlighted fundamental limitations in previous section. I have a few questions regarding implementation and experimental details:\", \"The paper seems to omits key fine-tuning details, such as whether standard SFT or LoRA was used, any quantization techniques, configurations or other factors that could significantly impact the observed memorisation and reasoning behaviours.\", \"Although the study mentions language-level and mathematical-level perturbations, it does not examine the effect of progressively stacking these perturbations. A more thorough exploration here could offer insights into model robustness and reasoning depth.\", \"It seems logical to consider test-time inference techniques like self-refine, majority voting or self-consistency, which could enhance reasoning results. Such experiments would help demonstrate robustness.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ofks (Part 1)\", \"comment\": \"Thank you again for your detailed feedback! We address your questions and comments below.\\n\\n\\n\\n> Q1: Clarification on the Importance of K&K Problems and differences between K&K and exisiting logical puzzle datasets\\n\\nThe Knights and Knaves (K&K) problem (i.e., Liar and Truthteller puzzle) is significant in cognitive science, philosophy, math and AI, serving as a key testbed for logical reasoning and problem-solving. 
It was originally introduced in the book \\\"What is the Name of This Book?\\\" back in 1978 [1] and more versions were available in a later book in 1990 [2]\", \"several_studies_and_papers_illustrate_its_importance\": \"1. **The Hardest Logic Puzzle Ever**. The Hardest Logic Puzzle Ever is a logic puzzle so called by American philosopher and logician George Boolos and published in The Harvard Review of Philosophy in 1996. The puzzle is based on Knights and Knaves puzzles [3] \\n2. **Cognitive Science & Philosophy**: K&K puzzles are discussed in many publications in the Cognitive Science and Philosophy areas to explore paradoxical reasoning with liars and truth-tellers and how such puzzles provide insights into human reasoning and logic [4,5,6,7,8]. \\n3. **AI and Logic Reasoning**: K&K puzzles are discussed in AI Magazine in 2017 as a challenging task, which highlights that \\u201ceven studying and solving only single parts of the proposed challenge would represent an important step forward for artificial intelligence.\\u201d [9] \\n4. **Mathematics and programming**: K&K puzzles are translated into Boolean programming (0-1 programming) problems to study graph representations [10], and explored in Mathematics for paradoxical self-reference [11]. \\n\\n\\n\\n\\nWhile Einstein\\u2019s Puzzle [12] and ZebraLogic [13] construct puzzles to analyze the reasoning abilities of LLMs, they differ significantly in their design and focus. These benchmarks are based on scenarios where, given a list of clues/constraints, one must deduce a unique and correct assignment of values for the rest of the variables, assuming that all clues are accurate.\\n\\nIn contrast, K&K is a deductive reasoning task that evaluates LLMs' ability to infer both the truthfulness of statements and their logical implications (e.g., identifying whether a character is a truth-teller or a liar). Unlike traditional benchmarks, K&K does not provide explicit clues. 
Instead, it demands suppositional reasoning, which relies heavily on conjecture, inference, and paradoxical reasoning, rather than on direct or sufficient evidence/constraint. This makes K&K fundamentally distinct and more challenging.\", \"reference\": [\"[1] Raymond M. Smullyan. 1978. What is the Name of This Book?: The Riddle of Dracula and Other Logical Puzzles. Prentice-Hall, Englewood Cliffs, N.J.\", \"[2] P.N. Johnson-Laird and Ruth M.J. Byrne. 1990. Metalogical problems: Knights, knaves, and rips. Cognition, 36(1):69\\u201384\", \"[3] https://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever\", \"[4] Paralogical reasoning: Evans, Johnson-Laird, and Byrne on liar and truth-teller puzzles. *Cognition*, Volume 36, Issue 3, September 1990, Pages 291-314. https://doi.org/10.1016/0010-0277(90)90061-N\", \"[5] Reasoning with knights and knaves: A discussion of Rips. *Cognition*, Volume 36, Issue 1, July 1990, Pages 85-90. -https://doi.org/10.1016/0010-0277(90)90055-O\", \"[6] A general method of solving Smullyan's puzzles. *Logic and Logical Philosophy*, Volume 4 (1996), 97\\u2013103. https://www.marianotomatis.it/blog/materiale/kolany.pdf\", \"[7] Sorting the Liars from the Truth Tellers: The Benefits of Asking Unanticipated Questions on Lie Detection. *Applied Cognitive Psychology*, Appl. Cognit. Psychol., 27: 107-114. [https://doi.org/10.1002/acp.2879](https://doi.org/10.1002/acp.2879)\", \"[8] Reasoning About Agent Types and the Hardest Logic Puzzle Ever. *Minds & Machines* 23, 123\\u2013161 (2013). https://doi.org/10.1007/s11023-012-9287-x\", \"[9] Solving Mathematical Puzzles: A Challenging Competition for AI. *AI Magazine*, 38(3), 83-96. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2736\", \"[10] Boolean programming, truth-teller-liar puzzles and related graphs. *ITI* 2003., Cavtat, Croatia, 2003, pp. 663-668, https://ieeexplore.ieee.org/abstract/document/1225419\", \"[11] Truth-Teller\\u2013Liar Puzzles with Self-Reference. 
*Mathematics* 2020, 8(2), 190. https://www.mdpi.com/2227-7390/8/2/190\", \"[12] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Za\\u00efd Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on compositionality. NeurIPS, 2024.\", \"[13] Bill Yuchen Lin, Ronan Le Bras, and Yejin Choi. ZebraLogic: benchmarking the logical reasoning ability of language models, 2024. URL https://hf.co/spaces/allenai/ZebraLogic.\"]}", "{\"title\": \"Response to Reviewer ofks (Part 5)\", \"comment\": \"> Q10. Although the paper provides a deep dive into model behavior, it lacks a concrete conclusion on how LLMs solve these puzzles without chain-of-thought reasoning. Addressing this question, though difficult, would greatly enhance the paper. But I know it is a hard task and I will not detract from the article for the lack of this.\\n\\nThank you for the thoughtful comment. We hypothesize that the model may develop implicit internal computational graphs among the $N$ characters in the $N$-ppl K&K task, which enables it to solve these puzzles without explicitly outputting a chain-of-thought reasoning process. While we acknowledge the challenge of verifying this hypothesis directly, we consider it an important direction for future work. A similar hypothesis is explored in [1, Section 4.2], where the authors demonstrate that models acquire implicit skills, such as learning all-pair dependencies, after pretraining on grade-school math problems.\\n\\n[1] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. arXiv preprint arXiv:2407.20311, 2024\\n\\n> Q11. The memorization metric, introduced in Figure 1 (Section 1), needs more explanation to help readers understand it. 
Adding details to the figure's caption or in the text in Section 1 would be helpful.\\n\\nThanks for the suggestions. We\\u2019ve added more explanation for the metric to Figure 1\\u2019s caption and Section 1 (L70-72) in the revised PDF.\\n\\n\\n> Q 12. The \\\"reasoner\\\" is described as sequentially examining each individual and checking for contradictions. However, humans likely employ heuristics to determine the order of examination and may use shortcut strategies. The paper could discuss whether this approach is the best analogy to human reasoning.\\n\\nThanks for the valuable comment. We indeed optimized the order of examination for our reasoner design exactly as the reviewer suggested. The details are in Appendix C.3. \\n\\nSpecifically, we note that when explaining the reasoning steps for K&K puzzles, humans or off-the-shelf LLMs rarely use the brute-force assignment search approach adopted in our Solver. Instead, they tend to examine the statement from each person sequentially, construct a *partial* assignment for the people examined so far, and backtrack when a contradiction is found. We design our Reasoner following the same procedure. We maintain a queue of people to be examined next, and a partial assignment of knight/knave for people that have been examined so far. More details can be found in Appendix C.3.\"}", "{\"title\": \"Response to Reviewer 7GXF (Part 2)\", \"comment\": [\"> Q2: Lack of Surprising Results: The finding that reasoning abilities improve with memorization is not particularly novel or surprising. While the paper conducts detailed analyses, it does not present clear guidance on how to leverage this insight for practical improvements\\u2026. How do you envision this result being used in real-world applications or for model improvement?\", \"We thank the reviewer for the feedback. 
While we acknowledge that the relationship between memorization and generalization has been explored (e.g., mostly in classification tasks), our study makes the following novel contributions specific to reasoning tasks for LLMs and provides insights for practical model improvements:\", \"**Relevance to Dataset Contamination in Benchmarking**. Contamination of training data with benchmark test sets is a pervasive issue in LLM evaluation, especially for popular reasoning datasets. This often leads to inflated performance metrics that obscure a model's true reasoning capabilities. Our study is motivated by the need to address this issue and systematically analyze how contamination, when controlled and quantified, influences reasoning and problem-solving abilities. The contamination-controlled design of our benchmark enables rigorous evaluation.\", \"**Transferability Analysis**: Under our controlled setup, our findings demonstrate that, in reasoning tasks, memorization of easier examples\\u2014stemming from fine-tuning\\u2014can significantly improve a model's ability to solve harder tasks. Similarly, fine-tuning on hard tasks can help solve easier tasks. This insight provides practical guidance for designing training strategies: fine-tuning on simpler logical reasoning tasks can serve as an effective method for preparing models for more complex tasks.\", \"**Leveraging Direct Fine-Tuning in Absence of CoT**. We show that direct fine-tuning without CoT can help models acquire general problem-solving skills from question-answer pairs. This is particularly useful for tasks where generating CoT annotations is resource-intensive or infeasible (e.g., due to the lack of human expertise to generate step-by-step reasoning). 
The results highlight a novel avenue for efficiently training models in real-world scenarios where detailed CoT data is unavailable.\", \"**Probing model internals**: Our probing analysis identifies the transformer blocks most relevant for learning K&K skills and shows that solving harder K&K tasks requires more computation, with task-relevant information shifting to later blocks. This potentially provides insights for model developers: fine-tuning specific blocks may optimize performance for complex reasoning tasks.\", \"**Robustness to Low-Quality Data**: Models fine-tuned on wrong answers or wrong CoT steps remain robust, indicating that high-quality data is not strictly necessary for fine-tuning in certain scenarios. This has direct implications for real-world applications where perfect data quality cannot be guaranteed, enabling broader applicability of fine-tuning methods.\", \"We hope these clarifications address concerns about the practical relevance of our results. We would be happy to discuss further if reviewers have additional feedback.\"]}", "{\"summary\": \"This study examines how LLMs balance memorization and reasoning in solving logical reasoning tasks, using a benchmark based on Knights and Knaves (K&K) puzzles. Findings reveal that while fine-tuning enhances LLMs' generalization abilities, it also leads to heavy memorization. The models perform well on familiar tasks but struggle with slight variations, suggesting a nuanced interplay between memorization and genuine reasoning skills in LLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper attempts to reveal the nuanced relationship between memorization and reasoning, contributing to a deeper understanding of LLM capabilities and limitations.\", \"Perturbation tests offer methods to assess LLMs' reasoning abilities independently of memorization.\"], \"weaknesses\": [\"The definition of \\\"memorization\\\" is vague. 
Is that the opposite of \\\"generalization\\\"? Why do we need to create a new term (and even a new metric) compared to the traditional term in machine learning research?\", \"Following the question above, I think the definition of memorization score is too arbitrary and may be misleading. What's the meaning of multiplication of accuracy and CR? For example, (ACC=0.2, CR=0.2) and (ACC=0.8 and CR=0.8) will produce the same score. Do these two results have the same level of memorization under your definition? It's also very counter-intuitive that \\\"off-the-shelf models\\\" show signs of \\\"memorization\\\" when solving these puzzles, even though they are never trained on this it. The name and the motivation in the Introduction left an impression that it's a metric to reflect the generalization gap. However, this doesn't seem to be the case, since a model that has never been fitted on the dataset is also likely to have a memorization score > 0.\", \"Though the authors attempt to \\\"distinguish memorization from reasoning\\\" with rich experiments, they are all based on vague and probably problematic definitions of \\\"memorization\\\", which makes the results not as insightful.\", \"A rich line of research on \\\"grokking\\\" [1, 2] might be very relevant to the research problem in this paper.\", \"[1] Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets\", \"[2] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Why do you believe they should have the same level of memorization? This explanation only states that these two cases are the same under your definition, but does not justify why these two cases **should** be the same from the principle. 
The definition of this metric is still rather arbitrary to me.\", \"we_would_like_to_clarify_two_things\": \"(1) in our response, we were acknowledging that the two cases indeed leads to the same LiMem score, therefore the same level of memorization according to our metric. But we were not claiming that these two cases are the same from the principle. (2) we emphasized the \\\"**low** level of memorization\\\" to clarify that a **low** LiMem score is an indicator of **a lack of memorization**. In the example suggested by the reviewer, a low (0.16) LiMem could either be due to low accuracy or high robustness.\\n\\nThe primary purpose of the LiMem metric is to capture **cases of memorization (high LiMem score)** , inspired by human behavior. We acknowledge that a low LiMem score can have multiple explanations and there is no formal principle to properly quantify which of the two cases should be interpreted as \\\"more level of memorization\\\": (ACC=0.2, CR=0.2) (ACC=0.8 and CR=0.8). As a result, while we use high LiMem to capture memorization (Sec 3), when studying the relation between memorization and reasoning (Sec 4, 5), we complement the low LiMem measurement with additional evidences such as cross-difficult-level generalization and hidden state probing. In summary, memorization is a vague term which motivates us to quantify it in the context of reasoning, and we hope this clarification highlights both the intent and the limitations of the metric. We will include this discussion in the paper.\\n\\n\\n> I don't think data leakage can explain this problem. Let's just imagine we can remove all K&K puzzles in the pertaining data. Do you think the LLM trained this way will have a LiMem score of 0?\\nI would assume it can still learn certain \\\"true reasoning abilities\\\" from the pertaining, so the accuracy will not be 0. There would be some randomness in the prediction, so it may not always consistently solve all cases, which makes CR < 1. 
Therefore, the LiMem would still be > 0.\\n> If the logic above is correct, I think that means the definition of your metric is not really reflective of \\\"memorization\\\".\\n\\nWe agree that several factors, such as token bias, order bias, influence the consistency ratio (CR) observed in LLM behavior. These factors may not necessarily reflect memorization. As a result, even in the absence of K&K data leakage, a model may achieve non-trivial accuracy while exhibiting some inconsistency, leading to a non-zero LiMem score.\\n\\nHowever, we believe that a **high LiMem score** remains an effective indicator of memorization. As illustrated in Figure 5, achieving a memorization score of 0.5, with a training accuracy of 0.75 after fine-tuning (as shown in Figure 4), requires the model to have an inconsistency ratio of at least 0.67. This high level of inconsistency strongly suggests that the model is memorizing and failing to maintain performance on most originally solved problems under perturbations.\\n\\nWe acknowledge the reviewer\\u2019s concerns. Setting a threshold for accuracy to only consider high LiMem score cases may help to more clearly interpret the LiMem score. We will make it clear in revision.\"}", "{\"title\": \"Response to ofks\", \"comment\": \"Thank you for your valuable comments!\\n\\n1. We will incorporate the suggested references into the introduction section in our revision.\\n2. Regarding the revised Figure 5, all evaluations are conducted on the fine-tuned models (i.e., no comparison between pre- and post-fine-tuning). The figure highlights several key conclusions:\\n- The inconsistency ratio on the training set is generally higher than on the test set, indicating greater memorization.\\n- Inconsistency under math-level perturbations is higher than under language-level perturbations.\\n- Harder tasks (e.g., the 8-person puzzle) exhibit higher inconsistency compared to easier tasks (e.g., the 3-person puzzle).\\n3. 
We will run additional experiments with more samples but fewer epochs, as per your suggestion.\\n\\nAdditionally, we will update Figures 3, 5, and 6 to scatter plots, separating \\\"accuracy\\\" (x-axis) and \\\"inconsistency under perturbations\\\" (y-axis), consistent with the style of the [revised Figure 5](https://ibb.co/GWd6503).\\n\\nThank you again for your feedback!\"}", "{\"title\": \"Response to Reviewer 1uBu (Part 1)\", \"comment\": \"Thank you for your thoughtful feedback! We address your questions and comments below.\\n\\n> Q1. The definition of \\\"memorization\\\" is vague. Is that the opposite of \\\"generalization\\\"? Why do we need to create a new term (and even a new metric) compared to the traditional term in machine learning research?\\n\\nMemorization and generalization are not opposites, as they target different aspects of model performance. Memorization is typically assessed on **training data** (in our work, using *small, locally perturbed* training samples), while generalization is evaluated on novel **test data**. As shown in Section 4, a model that memorizes training data can still generalize well to both in-distribution and out-of-distribution test data. \\n\\nModern foundation models, such as LLMs, differ from traditional ML. Unlike typical ML settings (e.g., classification) where the training and test sets can be explicitly separated, LLMs are trained on internet-scale data where the exact training data is often unknown. This makes it challenging to construct clean test sets to measure generalization accurately. Due to potential overlap between training and test data, concerns about benchmark saturation arise, where models may have memorized the same/similar problems during training and thus achieve high accuracy on testing benchmarks. 
In this context, a notion of \\\"memorization\\\" is especially important to underscore the difficulties in accurately assessing model reasoning capabilities.\\n\\nTo address this challenge, we introduce a memorization metric designed to detect signs of memorization under various local perturbations and to quantify this phenomenon.\\n\\n> Q2. I think the definition of memorization score is too arbitrary and may be misleading. What's the meaning of multiplication of accuracy and CR? For example, (ACC=0.2, CR=0.2) and (ACC=0.8 and CR=0.8) will produce the same score. Do these two results have the same level of memorization under your definition? \\n\\nUnder the LiMem definition, these two results indeed reflect the **same low level** of memorization, as both produce the same score (LiMem = 0.2 \\u00d7 (1-0.2) = 0.8 \\u00d7 (1-0.8)=0.16). However, the implications of these cases are different, as clarified in lines 144-147: Low LiMem score could indicate either solving by reasoning (ACC=0.8 and CR=0.8) or not solving (e.g., random guessing) (ACC=0.2, CR=0.2). \\n\\nA low LiMem score can only indicate \\u201csolving by reasoning\\u201d if we separately check that accuracy is high, as seen in the (ACC=0.8, CR=0.8) case.\\n\\n> Q3: It's also very counter-intuitive that \\\"off-the-shelf models\\\" show signs of \\\"memorization\\\" when solving these puzzles, even though they are never trained on this it. The name and the motivation in the Introduction left an impression that it's a metric to reflect the generalization gap. However, this doesn't seem to be the case, since a model that has never been fitted on the dataset is also likely to have a memorization score > 0.\\n\\nWe appreciate the comment and would like to clarify the potential sources of memorization observed in off-the-shelf models. While the puzzles we generate are randomly constructed (e.g., random language expression and math structures), Knights and Knaves (K&K) is a well-known classical puzzle type. 
Existing instances of such puzzles, along with related materials, are available online [1,2] and may have been included in the training data for off-the-shelf models. Since the exact training sets for these models are not disclosed, it is plausible that some degree of exposure to K&K-like problems or related text has occurred during pretraining.\\n\\nIn addition, we conducted a search using existing open-source datasets. Specifically, we utilized the [WIMBD tool](https://wimbd.apps.allenai.org/) to analyze the occurrence of popular names (\\u201cAlice\\u201d, \\u201cBob\\u201d) combined with different roles (e.g., \\\"knight,\\\" \\\"knave,\\\" etc.) in these datasets. The results, summarized in the table below, suggest that K&K types of materials could be included in pretraining data, indicating a potential source for memorization in off-the-shelf models.\\n\\n| Statement | Dolma | The PILE | C4 | Oscar | OpenWebText |\\n|---|---|---|---|---|---|\\n| \\\"Alice is a knave\\\" | 13 | 6 | 2 | 1 | 0 |\\n| \\\"Alice is a knight\\\" | 23 | 8 | 6 | 1 | 0 |\\n| \\\"Bob is a knave\\\" | 11 | 8 | 0 | 1 | 0 |\\n| \\\"Bob is a knight\\\" | 53 | 9 | 22 | 5 | 0 |\\n| \\\"Charlie is a knave\\\" | 3 | 0 | 0 | 0 | 0 |\\n| \\\"Charlie is a knight\\\" | 10 | 1 | 2 | 0 | 0 |\\n\\nReferences \\n- [1] https://philosophy.hku.hk/think/logic/knights.php \\n - [2] https://dmackinnon1.github.io/knaves/\"}", "{\"title\": \"Response to Reviewer 7GXF (Part 1)\", \"comment\": \"Thank you for your valuable comments! We address your questions and comments below.\\n\\n> Q1: Limited Task Scope: The paper focuses solely on logical reasoning, particularly the K&K puzzles. While this allows for deep analysis, it limits the generalizability of the conclusions. Experiments on other reasoning domains, such as mathematical reasoning or different types of logical reasoning, would strengthen the paper's claims and make the results more broadly applicable.\\n\\nThanks for the valuable suggestion. 
We clarify that many existing reasoning datasets, such as those for mathematical reasoning or other logical reasoning tasks, **do not easily allow for automatic local perturbations (especially for math-level perturbations) and new solution generation**, which is an important part of our memorization study. This limitation motivated us to propose a dataset specifically designed to dynamically generate puzzles & solutions & synthetic CoTs and apply configurable perturbations with controllable perturbing components and difficulty levels.\n\nFurthermore, we highlight that **K&K puzzles are representative of SAT problems**, which are NP-complete. This classification implies that they capture the complexity inherent in a wide range of natural reasoning tasks [1]. Additionally, some existing logical reasoning benchmarks including ZebraLogic [3] and Einstein\u2019s Puzzle [4] are Constraint Satisfaction Problems (CSPs), which involve assigning values to variables under certain constraints. Any CSP can be reduced to an SAT problem by converting the variables and constraints into propositional variables and formulas [2]. This connection shows the relevance of our focus on K&K puzzles as a foundational basis for studying logical reasoning behaviors in LLMs.\", \"reference\": [\"[1] Armin Biere, Marijn Heule, Hans van Maaren, and Toby Walsh (Eds.), \\\"Handbook of Satisfiability,\\\" IOS Press, 2009.\", \"[2] Stuart Russell and Peter Norvig, \\\"Artificial Intelligence: A Modern Approach,\\\" 3rd Edition, Prentice Hall, 2010.\", \"[3] Bill Yuchen Lin, Ronan Le Bras, and Yejin Choi. ZebraLogic: benchmarking the logical reasoning ability of language models, 2024. URL https://hf.co/spaces/allenai/ZebraLogic.\", \"[4] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Za\u00efd Harchaoui, and Yejin Choi. 
Faith and fate: Limits of transformers on compositionality. NeurIPS, 2024.\"]}", "{\"title\": \"Response to Reviewer ofks (Part 2)\", \"comment\": \"> Q2: The novelty of the proposed methodology, especially the memorization metric, is somewhat lacking. Perturbation-based scores have been widely used in work focusing on memorization and generalization. For instance, [1] utilizes the AdvGLUE [2] and ANLI [3] benchmarks to test ChatGPT's robustness, both involving word- or semantic-level perturbations. Similar approaches can also be found in works on LLM reasoning such as [4] and [5].\\n\\nThanks for the comment. We acknowledge that adding perturbations is a general principle for evaluating model robustness. However, we would like to highlight the following distinctions between our work and prior studies [1-5]:\\n- **Novel Data Generation Framework for Mathematical Perturbations**:\\nWhile prior works such as [5] focus predominantly on language-level perturbations (e.g., lexical, syntactic, and semantic changes), our methodology introduces a framework that can generate *mathematical-level perturbations*, particularly for evaluating model reasoning capability. These perturbations are not only systematically controllable in terms of type and difficulty but also target the underlying mathematical structure of problems. This is also more challenging than language-level perturbation because the perturbed problem needs to be solved to find the **new groundtruth answer** for evaluation. This allows us to evaluate model generalization to new math problems in a way that is beyond the scope of purely linguistic perturbations.\\n- **Emphasis on Natural Perturbations**: Unlike *adversarial* approaches such as typos and distractions [1, 2] or human-annotated examples specifically designed to exploit model weaknesses [3] that target worst-case test scenarios, our framework focuses on *natural robustness*. 
The perturbations we generate\u2014across both language and mathematics\u2014reflect scenarios commonly encountered in natural K&K QA contexts.\n- **Applicability**: While [4] evaluates graph reasoning through perturbations in graph patterns (e.g., training on connectivity queries but testing on shortest-path queries), our focus is on Boolean satisfiability problems (SAT) using K&K tasks, potentially providing a broader context for assessing logical reasoning capabilities in LLMs. \n\nThe results in Figure 5 show that mathematical-level perturbations induce significantly larger performance declines compared to language-level perturbations. This suggests that while models may adapt to superficial linguistic variations, they struggle to address deeper math structure changes inherent in logical reasoning tasks. This observation underscores memorization over genuine reasoning, a key insight of our study.\n\n> Q3: The paper does not clearly distinguish between memorization and genuine reasoning. The authors should explicitly define how they are conceptualizing memorization vs. genuine reasoning in the context of their study. Concepts such as \"case-based\" or \"rule-based\" reasoning [6] could be a helpful framework for differentiating these cognitive modes\u2026.. A clearer definition of memorization and its distinction from genuine reasoning is needed. Is memorization simply the opposite of generalization or rule-based reasoning? \n\nThanks for the valuable comment. Memorization and generalization are not direct opposites, as they target different aspects of model performance. As shown in Section 4, a model that memorizes training data can still generalize well to both in-distribution and out-of-distribution test data. 
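To make the score under discussion concrete, here is a minimal sketch of the LiMem arithmetic used throughout this thread (illustrative only, not the paper's actual evaluation code): the score is high only when accuracy on the original puzzles is high *and* consistency under local perturbation is low, which is why high memorization can coexist with strong accuracy.

```python
def limem(acc, cr):
    """Sketch of LiMem = Acc * (1 - CR), where Acc is accuracy on the
    original puzzles and CR is the consistency ratio: the fraction of
    originally solved puzzles still solved after a local perturbation."""
    return acc * (1 - cr)

# The two cases debated in this thread: identical scores, different implications.
guessing = limem(acc=0.2, cr=0.2)    # low accuracy ("not solving")
reasoning = limem(acc=0.8, cr=0.8)   # high accuracy kept under perturbation
assert abs(guessing - reasoning) < 1e-9   # both equal 0.16

# High memorization: high accuracy that collapses under perturbation.
memorizing = limem(acc=0.9, cr=0.3)
assert memorizing > reasoning
```

As the two low-score cases show, LiMem must be read jointly with accuracy to separate "solving by reasoning" from "not solving".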
\\n\\nWhile memorization can be evaluated on **training data** (in our work, using *small, locally perturbed* training samples), defining and measuring \\u201c*genuine reasoning*\\u201d is more difficult as it depends on **novel unseen test data** differing from training data.\", \"we_empirically_capture_the_genuine_reasoning_from_several_perspectives\": [\"Evaluating perturbed testing data\", \"Evaluating in-distribution testing data with the same K&K difficulty level.\", \"Evaluating out-of-distribution testing data with the different K&K difficulty levels.\", \"Probing model's internal representations with a different QA task \\u2014 a dataset consisting of correct/incorrect K&K statements.\", \"For example, in our transferability study, the model demonstrates capabilities beyond \\\"case-based reasoning\\\" by solving puzzles with unseen difficulty levels. However, we do not claim this constitutes \\\"rule-based reasoning,\\\" as 100% transferability accuracy is not achieved. Based on these findings, we hypothesize that the model may develop useful internal computational graphs (e.g., dependencies among different characters in K&K) that aid transferability, while also relying on structured shortcuts that remain partially inaccurate.\"]}", "{\"title\": \"Further questions (Part 2)\", \"comment\": \"6. **Further Clarification on Overfitting in Llama-8B Training**\\nThe answer to Q7 has addressed my concern about overfitting, but I still wonder why training a Llama-8B model for 100 epochs resulted in continued increases in test accuracy. I am surprised by this, as when I train LLMs, 100 epochs usually lead to severe overfitting. Could you provide more details about your training process, such as the number of training samples, learning rate, learning rate schedule, or any other settings you believe were crucial in avoiding overfitting? 
Additionally, could you provide the learning curve of the model?\\n\\n**Overall Comments**:\\n\\nI highly appreciate the authors' efforts, and several of my concerns have been addressed. However, the current paper still heavily relies on metrics that are not clearly explained, and the explanation of the motivation for studying the K&K problem remains indirect. As such, I will maintain my current rating and look forward to the authors providing further clarifications on these issues.\\n\\nAdditionally, as a personal suggestion, I believe reorganizing the experiments and discussions based on separate metrics could significantly improve the paper\\u2019s credibility and readability. I am concerned that presenting only part of the results in the appendix will not only lengthen the paper unnecessarily but also force the authors to make strained explanations in order to reconcile with previous conclusions.\"}", "{\"title\": \"Response to Reviewer 1uBu (Part 2)\", \"comment\": \"> Q4: \\\"distinguish memorization from reasoning\\\" are all based on vague and probably problematic definitions of \\\"memorization\\\", which makes the results not as insightful.\\n\\nThanks for the comment. We clarify that the definition of memorization is valid in response to Q1-Q2. \\n\\nAdditionally, for \\\"distinguish memorization from reasoning\\\", we note that our analysis at this stage is based on a *sample-level binary score*\\u2014 whether a sample is consistently solved under perturbation or not, as mentioned in in L459-462. Specifically, we consider *1-point dataset*, and only study **samples for which the model predicts correctly (ACC=1), to rule out the possibility of \\u201cnot solving\\u201d**, and thus provide direct insights into \\u201csolving by reasoning\\u201d v.s . 
\u201csolving by memorization\u201d for each sample.\\n\\n> Q5: A rich line of research on \"grokking\" [1, 2] might be very relevant to the research problem in this paper.\\n\\nWe thank the reviewer for pointing out the related work. We acknowledge that the grokking phenomenon is very interesting; it was first identified by [1] on a small algorithmic dataset where validation accuracy suddenly improves from random chance to near-perfect generalization long after severe overfitting. Recently [2] observed grokking in the domain of complex knowledge-based tasks, showing that implicit reasoning over parametric knowledge emerges only after extensive overfitting.\\n\\nIn this work, we observe a related phenomenon but through the lens of memorization. Through novel (math- and language-level) perturbation tests, transferability, and probing analyses, we verify that LLM reasoning skills emerge alongside memorization. Furthermore, our investigation focuses on logical reasoning, offering new insights into how LLMs acquire logical reasoning skills.\\n\\nWe added the above discussion in the extended related work in Appendix B (due to space limit).\"}", "{\"title\": \"Response to Reviewer XCvy\", \"comment\": \"Thank you for your comments and for raising the score! Please find our response to your questions below:\\n\\n1. Thanks for the feedback and we would like to emphasize that the findings of our work provide several practical insights for model development. Specifically, our contributions are motivated by the dataset contamination issues in benchmarking, and we show that under high memorization, the model exhibits transferability for easier/harder logical reasoning tasks. We also show the effectiveness of direct fine-tuning in the absence of Chain-of-Thought training data, provide interpretability analysis through probing model internals to better understand decision-making processes, and analyze the model's robustness under low-quality or noisy data.\\n\\n2. 
We would like to clarify a potential misunderstanding. In our original response, we stated that \"even with 1-shot and CoT prompting, models **are sensitive** to perturbations in K&K puzzles.\" As shown in Figure 17, for instance, Phi-3-medium exhibits a high memorization score of 0.37 under leaf perturbation on the 2-person task, which corresponds to a significant performance drop.\\n\\n\\n3. We clarify that high LiMem scores effectively capture high memorization cases by reflecting two important characteristics observed in human behavior: high accuracy on previously seen problems and low consistency when problems are slightly perturbed. This underscores the fundamental importance and necessity of LiMem scores for capturing memorization in LLM reasoning tasks.\\n\\nPlease feel free to let us know if there are additional questions or points requiring clarification. Thank you for your time and feedback.\"}", "{\"title\": \"Response to Reviewer ofks (Part 4)\", \"comment\": \"> Q6: The claim in Section 4.1 that \"generalization performance increases with memorization level\" is debatable. Since accuracy is part of the memorization score, it is unsurprising that test accuracy correlates with memorization score on the training set. They could both be the results of increasing training accuracy. To demonstrate that memorization aids generalization, a better comparison would be between test accuracy and memorization score with equal training accuracy.\\n\\nThanks for the suggestion. We acknowledge that achieving equal training and testing accuracy is challenging due to the complexities of training dynamics. However, to address this concern, we have separately reported the consistency ratio in response to Q4, which we hope provides additional clarity.\\n\\n> Q7: The fine-tuned models were trained for 100 or 5 epochs, which may have led to overfitting (i.e., memorization). 
Given the large size of these models, even slight overtraining can inflate memorization scores, raising concerns about whether the results provide meaningful insights into practical reasoning capabilities.\\n\\nThanks for the comment. We clarify that the chosen # epoch is meaningful for the following reasons: \\n- *Deliberate Induction of Memorization for Controlled Study*: Training the models for a large # epochs was a deliberate decision to induce memorization within a controlled setup based on the K&K dataset. While the exact number of epochs or data used for pretraining off-the-shelf models is unavailable, existing literature reveals that common reasoning benchmarks are often contaminated (memorized). This motivates our controlled experimental design, where we simulate a similar memorization-prone environment using the proposed K&K dataset. By inducing memorization with a large # epochs, we aim to systematically study how such conditions affect reasoning performance, thereby gaining insights into the interplay between memorization and reasoning.\\n- *# Epoch*: We use 5 epochs for GPT4o-mini. While 100 is the maximal # epoch for fine-tuning Llama3-8B, we report the memorization score at epoch 50 throughout the paper (as mentioned in Appendix D.2.2.), as this is the point at which the models typically converge, as shown in Figure 19. We have now clarified this in the main paper in our revised PDF.\\n- *Train/Test Accuracy Trends*: As observed in Figure 4, both train and test accuracies for Llama3-8B (up to 50 epochs) and GPT4o-mini (up to 5 epochs) continue to increase, and have not yet achieved 100% accuracy for challenging K&K tasks like 5-ppl puzzles and 8-ppl puzzles, indicating that the models had not entirely achieved overfitting at these points. This supports the practicality of the chosen # epoch. \\n\\n> Q8: The conclusion in Section 4.1 that \"models likely learned to reason K&K puzzles to some extent\" is unclear. 
Figures 5 and 4 indicate that both memorization score and test accuracy are lower on test samples than on training samples, so is the lower memorization score simply a result of reduced accuracy?\\n\\nThanks for the comment. We acknowledge that a low memorization score LiMem alone is indeed not enough to conclude that models learn to reason K&K puzzles, as it could indicate either reasoning or random guessing (L140\u2013147). We conducted further analysis to study the models\u2019 reasoning ability, including generalization across different #ppl puzzles, and the probing tests. We fixed the statement in Section 4.1 (L334-338) to reflect this point. \\n\\n> Q9 In Section 6, while the indicator experiments are interesting, I noticed that the labeling of \"consistently solved\" and \"not consistently solved\" is based on a single perturbation\u2026. I suggest exploring whether this method accurately reflects the model's ability to generalize across a broader range of perturbations.\\n\\nThanks for the comment. We report results under statement perturbation and leaf perturbation in Appendix Figure 33.\"}", "{\"summary\": \"This paper investigates how large language models (LLMs) solve *Knights and Knaves* (K&K) puzzles, aiming to determine whether models rely more on *memorizing similar problems* or on developing *genuine reasoning skills*. The authors introduce a \"*memorization score*\" metric and evaluate both pre-trained and fine-tuned LLMs, finding that while both models heavily depend on memorization, they also exhibit some reasoning capability despite this reliance. They also provide some experiments including probing inner representation and consistency-or-not indicators, which shed light on the mechanism of LLM reasoning.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. 
Their experiments are comprehensive and well-designed, covering both pre-trained models and finetuned models, various difficulty levels of K&K problems, and both open-source and state-of-the-art closed-source models. Particularly, the probing experiments (section 4.2), finetuning with incorrect answers (in section 4.3) and indicators (in section 6) offer valuable insights into the reasoning mechanism of LLMs.\\n\\n2. The use of the K&K puzzle introduces a novel reasoning challenge for LLMs. This benchmark provides a fresh perspective on understanding the reasoning capabilities of these models.\", \"weaknesses\": \"1. A key question arises regarding **the choice of the K&K puzzles**. It is unclear whether this puzzle represents a practical and significant challenge or if it is simply another interesting problem, among many others like those in BigBench. While the need for automatically generated, answerable, and perturbable questions is acknowledged, the authors should further justify why K&K puzzles are essential to study. For example, demonstrating that many real-world or well-known problems can be translated into K&K puzzles would strengthen the argument.\\n2. The novelty of the proposed methodology, especially the memorization metric, is somewhat lacking. Perturbation-based scores have been widely used in work focusing on memorization and generalization. For instance, [1] utilizes the AdvGLUE [2] and ANLI [3] benchmarks to test ChatGPT's robustness, both involving word- or semantic-level perturbations. Similar approaches can also be found in works on LLM reasoning such as [4] and [5].\\n3. The paper does **not clearly distinguish** between *memorization* and *genuine reasoning*. The authors should explicitly define how they are conceptualizing memorization vs. genuine reasoning in the context of their study. Concepts such as \"case-based\" or \"rule-based\" reasoning [6] could be a helpful framework for differentiating these cognitive modes. 
Additionally, since accuracy is a factor in the memorization score, an increase in this score might reflect either improved accuracy or reduced robustness, making it difficult to discern its true meaning. Please provide more discussion about how the memorization score accounts for the potential confound between accuracy and robustness.\\n4. Certain experiments **lack clear explanations**, and some conclusions appear questionable: \\n\\n 1. In Section 3.1, the poor performance of off-the-shelf models on K&K tasks seems *unrelated* to the focus on memorization. This also raises concerns about whether K&K puzzles are suitable for testing reasoning-by-memorization, as reasoning ability may be a prerequisite for drawing meaningful conclusions.\\n \\n 2. The claim in Section 4.1 that \"*generalization performance increases with memorization level*\" is debatable. Since accuracy is part of the memorization score, it is unsurprising that test accuracy correlates with memorization score on the training set. They could both be the results of increasing training accuracy. To demonstrate that memorization aids generalization, a better comparison would be between test accuracy and memorization score **with equal training accuracy**.\\n\\n 3. The fine-tuned models were trained for 100 or 5 epochs, which may have led to overfitting (i.e., memorization). Given the large size of these models, even slight overtraining can inflate memorization scores, raising concerns about whether the results provide meaningful insights into practical reasoning capabilities.\\n\\n 4. The conclusion in Section 4.1 that \"*models likely learned to reason K&K puzzles to some extent*\" is unclear. Figures 5 and 4 indicate that both memorization score and test accuracy are lower on test samples than on training samples, so is the lower memorization score simply a result of reduced accuracy?\\n\\n 5. 
In Section 6, while the indicator experiments are interesting, I noticed that the labeling of \\\"consistently solved\\\" and \\\"not consistently solved\\\" is based on a single perturbation. Given the complexity of the K&K puzzle, the neighborhood of such a question could be quite large, potentially resulting in varied effects from different perturbations. It is possible that some perturbations lead to a sharp decrease in performance, while others may not. This raises the question of whether a binary labeling approach is truly robust and convincing. I suggest exploring whether this method accurately reflects the model's ability to generalize across a broader range of perturbations. For example, the authors can conduct a sensitivity analysis using multiple perturbations per puzzle to assess the robustness of their binary labeling approach.\\n\\n5. Although the paper provides a deep dive into model behavior, it lacks a concrete conclusion on how LLMs solve these puzzles without chain-of-thought reasoning. Addressing this question, though difficult, would greatly enhance the paper. But I know it is a hard task and I will not detract from the article for the lack of this.\\n\\n[1] On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective, Wang et. al., 2023. [https://arxiv.org/abs/2302.12095](https://arxiv.org/abs/2302.12095)\\n\\n[2] Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models, Wang et. al., 2021. [https://arxiv.org/abs/2111.02840](https://arxiv.org/abs/2111.02840)\\n\\n[3] Adversarial NLI: A New Benchmark for Natural Language Understanding, Nie et. al., 2019. [https://arxiv.org/abs/1910.14599](https://arxiv.org/abs/1910.14599)\\n\\n[4] Can LLM Graph Reasoning Generalize beyond Pattern Memorization? Zhang et. al., 2024. 
[https://arxiv.org/abs/2406.15992](https://arxiv.org/abs/2406.15992)\\n\\n[5] RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models, Wang & Zhao, 2024. [https://arxiv.org/abs/2406.11020](https://arxiv.org/abs/2406.11020)\\n\\n[6] Case-Based or Rule-Based: How Do Transformers Do the Math? Hu et. al. 2024. [https://arxiv.org/abs/2402.17709](https://arxiv.org/abs/2402.17709)\", \"questions\": \"1. What is the necessity of focusing on K&K problems (as discussed in con 1)?\\n2. More explanation is needed to clarify the relationship between the experimental results and the conclusions.\\n3. A clearer definition of memorization and its distinction from genuine reasoning is needed. Is memorization simply the opposite of generalization or rule-based reasoning? Additionally, the current metric, which multiplies accuracy by $(1 - cr)$, could be refined, as the latter provides more insight into memorization or generalization. The \\\"not solving\\\" behavior should be excluded by setting a threshold (e.g., accuracy > 0.8) to filter out irrelevant cases.\\n4. Minor suggestions:\\n 1. The memorization metric, introduced in Figure 1 (Section 1), needs more explanation to help readers understand it. Adding details to the figure's caption or in the text in Section 1 would be helpful.\\n 2. The \\\"reasoner\\\" is described as sequentially examining each individual and checking for contradictions. However, humans likely employ heuristics to determine the order of examination and may use shortcut strategies. The paper could discuss whether this approach is the best analogy to human reasoning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revision Summary\", \"comment\": [\"We sincerely thank all reviewers for their constructive feedback and suggestions, which are very helpful to us. 
We are encouraged that the reviewers found our work (1) propose novel memorization metric (Reviewer 7GXF), (2) novel data generation and perturbation methods (Reviewer 1uBu, Reviewer ofks), (3) and provide deeper understanding of LLM reasoning (Reviewer 1uBu, Reviewer 7GXF), (4) the results are well-design and extensive (Reviewer 7GXF, Reviewer ofks) and (5) the writing and presentation are clear (Reviewer XCvy).\", \"Following the reviewers\\u2019 suggestions, we added more experiments/discussions, and we addressed the questions in the response to each reviewer. Below is a summary of our new experimental results and discussions in the revised PDF:\", \"**Figure 3, More LLMs**: we evaluate 6 additional models that have competitive reasoning capability: Gemini-1.5-Flash-002, Gemini-1.5-Pro-002, Gemma-2-9b, Llama-3.1-8B-Instruct, Qwen2-Math-7B-Instruct, Qwen2.5-Math-7B-Instruct. (Reviewer XCvy)\", \"**Appendix E.1 Figure 20, Combined perturbations**: we study the effect of progressively stacking language-level and mathematical-level perturbations. (Reviewer XCvy)\", \"**Appendix E.1 Table 3, Self-consistency**: we report the accuracy and memorization score when using test-time inference technique self-consistency.(Reviewer XCvy)\", \"**Appendix E.1 Figure 19, Consistency ratio**: we report the consistency ratio on the train/test set. (Reviewer ofks)\", \"**Section 7 & Appendix B Related Work**: we discuss more related work suggested by the reviewers. (Reviewer 1uBu, XCvy)\", \"**Figure 1 and Section 1 Introduction**: we add more explanation regarding the memorization score. (Reviewer ofks)\", \"**Section 2.2**: we highlight that the principle underlying K&K is the Boolean satisfiability (SAT) problem, which is essential to study for evaluating LLM logical reasoning capability. (Reviewer 7GXF, ofks)\", \"Please also let us know if there are other questions, and we look forward to the discussion with the reviewers to further improve our paper. 
Thank you!\"]}", "{\"comment\": \"> Under the LiMem definition, these two results indeed reflect the same low level of memorization, as both produce the same score (LiMem = 0.2 \\u00d7 (1-0.2) = 0.8 \\u00d7 (1-0.8)=0.16). However, the implications of these cases are different, as clarified in lines 144-147: Low LiMem score could indicate either solving by reasoning (ACC=0.8 and CR=0.8) or not solving (e.g., random guessing) (ACC=0.2, CR=0.2).\\n\\nWhy do you believe they should have the same level of memorization? This explanation only states that these two cases are the same under your definition, but does not justify why these two cases **should** be the same from the principle. The definition of this metric is still rather arbitrary to me.\\n\\n> ...the potential sources of memorization observed in off-the-shelf models...\\n\\nI don't think data leakage can explain this problem. Let's just imagine we can remove all K&K puzzles in the pertaining data. Do you think the LLM trained this way will have a LiMem score of 0?\\n\\nI would assume it can still learn certain \\\"true reasoning abilities\\\" from the pertaining, so the accuracy will not be 0. There would be some randomness in the prediction, so it may not always consistently solve all cases, which makes CR < 1. Therefore, the LiMem would still be > 0. \\n\\nIf the logic above is correct, I think that means the definition of your metric is not really reflective of \\\"memorization\\\".\\n\\n> ... only study samples for which the model predicts correctly (ACC=1), to rule out the possibility of \\u201cnot solving\\u201d...\\n\\nI think that makes much more sense than LiMem which mixes accuracy and CR... However, the main body of this paper is still based on the LiMem, and it's necessary to provide a convincing explanation of this metric.\"}", "{\"title\": \"Response to Reviewer ofks (Part 1)\", \"comment\": \"Thank you for your detailed feedback! We address your questions and comments below.\\n\\n> Q1. 
A key question arises regarding the choice of the K&K puzzles. It is unclear whether this puzzle represents a practical and significant challenge or if it is simply another interesting problem, among many others like those in BigBench. While the need for automatically generated, answerable, and perturbable questions is acknowledged, the authors should further justify why K&K puzzles are essential to study. For example, demonstrating that many real-world or well-known problems can be translated into K&K puzzles would strengthen the argument.\\n\\nThanks for the question. Indeed, many real-world or well-known problems can be translated into K&K puzzles, as the principle underlying K&K is the [**Boolean satisfiability (SAT) problem**](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem). K&K puzzles are a simple way to express SAT instances in natural language. SAT is a fundamental problem, at the core of computer science; e.g., *it was the first problem that was proven to be NP-complete*. SAT can be reduced to most natural reasoning problems and hence the performance of a model on SAT (i.e., K&K puzzles) can be indicative of its reasoning capabilities. \\n\\nSpecifically, consider a K&K puzzle involving $N$ people: a possible solution assigns a Boolean value to $N$ variables $B_1,B_2,\\\\ldots,B_N$, where the truth value of $B_i$ indicates whether the $i$th person is telling the truth. By definition, the $i$th person is telling the truth if and only if their statement $S_i$ is true. Therefore, a valid solution to a K&K puzzle is a Boolean assignment for $B_1,B_2,\\\\ldots,B_N$ such that the following formula evaluates to true.\\n$$(B_1\\\\Leftrightarrow S_1)\\\\wedge(B_2\\\\Leftrightarrow S_2)\\\\wedge\\\\cdots\\\\wedge(B_N\\\\Leftrightarrow S_N).$$ \\nHere the statement $S_i$ involves the five most commonly used propositional connectives: and, or, not, implication, and equivalence [1]. 
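As a concrete illustration of this encoding (a toy sketch with invented statements, not an artifact from the paper), a small K&K instance can be solved by brute-forcing the conjunction above:

```python
from itertools import product

def kk_solutions(statements):
    """Enumerate truth assignments (B_1, ..., B_N) satisfying
    (B_1 <-> S_1) AND ... AND (B_N <-> S_N)."""
    n = len(statements)
    return [b for b in product([False, True], repeat=n)
            if all(b[i] == statements[i](b) for i in range(n))]

# Invented 3-person puzzle:
#   Person 1: "Person 3 is a knight."
#   Person 2: "Person 1 is a knave."
#   Person 3: "If Person 2 is a knight, then I am a knave."
stmts = [
    lambda b: b[2],
    lambda b: not b[0],
    lambda b: (not b[1]) or (not b[2]),
]
print(kk_solutions(stmts))  # [(True, False, True)]: persons 1 and 3 are knights
```

A math-level (leaf or statement) perturbation then amounts to editing one $S_i$ and re-running this check to recover the new ground-truth assignment, which is why such perturbations require re-solving.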
This K&K formulation provides a direct link to the SAT problem.\\n\\nWe also want to highlight that many real-world problems can be translated into SAT problems, for instance:\\n- *Constraint Satisfaction Problem (CSP)*: Some existing popular LLM logical reasoning benchmarks including ZebraLogic [6] and Einstein\u2019s Puzzle [7] are CSPs, which involve assigning values to variables under certain constraints. Any CSP can be reduced to an SAT problem by converting the variables and constraints into propositional variables and formulas [5]. \\n- *Hardware and Software Verification*: SAT solvers are widely used in model checking to verify that hardware circuits or software programs adhere to specified behavior, identifying logical errors in designs [2].\\n- *Cryptography*: SAT solvers model cryptographic functions to detect vulnerabilities [3].\\n- *Mathematical Theorem Proving*: SAT solvers assist in computer-aided proofs, such as discovering unknown Van der Waerden numbers or solving the Boolean Pythagorean triples problem [4].\\n\\nThese connections illustrate that K&K puzzles, as natural language SAT instances, are essential to study as logical reasoning tasks. We have added the above discussion to Section 2.2 of our revised PDF.\", \"reference\": [\"[1] Mendelson, E. (2015). Introduction to Mathematical Logic (6th ed.). CRC Press.\", \"[2] Biere, Armin; Cimatti, Alessandro; Clarke, Edmund M.; Strichman, Ofer; Zhu, Yunshan (2003). \\\"Bounded Model Checking\\\". Advances in Computers, 58 (2003).\", \"[3] Massacci, F., & Marraro, L. (2000). Logical cryptanalysis as a SAT problem. Journal of Automated Reasoning, 24(1-2), 165\\u2013203.\", \"[4] https://en.wikipedia.org/wiki/SAT_solver\", \"[5] Stuart Russell and Peter Norvig, \\\"Artificial Intelligence: A Modern Approach,\\\" 3rd Edition, Prentice Hall, 2010.\", \"[6] Bill Yuchen Lin, Ronan Le Bras, and Yejin Choi. ZebraLogic: benchmarking the logical reasoning ability of language models, 2024. 
URL https://hf.co/spaces/allenai/ZebraLogic.\", \"[7] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Za\u00efd Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on compositionality. NeurIPS, 2024.\"]}", "{\"title\": \"Response to Reviewer XCvy (Part 2)\", \"comment\": \"> Q5 Please cite missing relevant work that explores memorization and formal reasoning. [1] P. U. Sai et al., \u201cRecite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon,\u201d arXiv.org, 2024. https://arxiv.org/abs/2406.17746. [2] A. Saparov and H. He, \u201cLanguage Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought,\u201d arXiv:2210.01240 [cs], [3] L. Pan et al \\\"LOGIC-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning\\\" https://arxiv.org/pdf/2305.12295\\n\\nThanks for the comment. In our revision, we discussed these relevant works in Section 7 related work and also in Appendix B extended related work (due to space limit). \\n\\n\\n> Q6 The paper seems to omit key fine-tuning details, such as whether standard SFT or LoRA was used, any quantization techniques, configurations or other factors that could significantly impact the observed memorisation and reasoning behaviours.\\n\\nThank you for the valuable suggestion. We included the fine-tuning details in Appendix D.2.2 and have now added further clarification in the revised PDF.\\n\\nFor Llama fine-tuning, we used LoRA fine-tuning with standard hyperparameters: a batch size of 4, gradient accumulation steps of 8, and 5e-5 learning rate. The LoRA configuration was set as follows: rank $r = 32$, scaling factor $\\\\alpha = 32$, and dropout rate $0.05$. No quantization techniques were used. 
\nPlease let us know if there are any other details or clarifications needed.\n\n> Q7 Although the study mentions language-level and mathematical-level perturbations, it does not examine the effect of progressively stacking these perturbations. A more thorough exploration here could offer insights into model robustness and reasoning depth.\n\nThank you for the insightful comment. We note that the memorization score under mathematical perturbations has been high, as demonstrated in Figures 3, 5, and 6, supporting our hypothesis that the models could rely on memorization to solve the puzzles instead of robust reasoning. \n\nFollowing your suggestion, we explored the combination of math-level perturbations with language-level perturbations in Appendix E.1, Figure 20, which shows memorization scores of Directly Fine-Tuned Llama3-8B under various math-level (statement, leaf) and language-level (name, reorder) perturbations. Combining math-level and language-level perturbations progressively can result in higher memorization scores (e.g., leaf + reorder), especially compared to applying language-level perturbations alone.\n\n> Q8 It seems logical to consider test-time inference techniques like self-refine, majority voting or self-consistency, which could enhance reasoning results. Such experiments would help demonstrate robustness.\n\n\nThank you for the suggestion. While test-time inference techniques are orthogonal to our study of memorization, which is introduced during training, we followed this recommendation to evaluate self-consistency. Specifically, we tested self-consistency using gpt-4o-mini, a strong model that outperforms open-source models on K&K tasks (as shown in Table 3). For the default direct prompting method, we used greedy sampling with a temperature of 0. 
For self-consistency, we applied temperature-based sampling with a temperature of 0.7, generating 40 samples to compute the results.\\n\\nThe results in below table show that while self-consistency improves accuracy on the simpler 2-ppl task, it provides only marginal improvement on the 3-ppl task and fails entirely on the more challenging 8-ppl task. This highlights a fundamental limitation of the model in solving complex problems. Additionally, we report the memorization scores for the 2-ppl and 3-ppl tasks, which demonstrate that self-consistency reduces memorization scores, likely due to its majority voting mechanism that leads to robust reasoning results. \\n\\n| Method | Test Accuracy \\u2191 | | | Memorization Score \\u2193 | |\\n| --- | --- | --- | --- | --- | --- |\\n| | 2-ppl | 3-ppl | 8-ppl | 2-ppl | 3-ppl |\\n| Direct Prompting | 0.63 | 0.42 | 0.01 | 0.24 | 0.26 |\\n| Direct Prompting + Self-consistency | 0.74 | 0.43 | 0.02 | 0.20 | 0.22 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 7GXF\", \"comment\": \"Thank you again for your valuable feedback and positive rating. Your support is vital to us.\\n\\nIndeed, while our dataset is a representative SAT logical reasoning task, the conclusions about the effectiveness of direct FT and robustness of low-quality data have not been verified for other types of reasoning tasks (e.g., mathematical reasoning). We will make it clear in our revision. Extending our work to include broader reasoning tasks is an exciting direction for future research, and we are grateful for your suggestion.\"}", "{\"title\": \"Response to Reviewer ofks (Part 2)\", \"comment\": \"> Q2 prior work focused on robustness at the linguistic level, while you focus on 'mathematical-level' robustness. However, as mentioned in [1], both advGLUE and ANLI include deeper, manually annotated attacks, not just word- or sentence-level perturbations. 
Though not identical, this can be compared to your \"mathematical-level\" perturbations. That said, I still feel that the discussion of innovation in this regard is insufficient. If you haven't emphasized this aspect of your innovation, I believe it\u2019s acceptable, but it may benefit from further clarification.\n\n\nThanks for the comment. We argue that language-level perturbations, even when involving manually annotated attacks as in advGLUE and ANLI ([1]), fundamentally differ from the mathematical-level perturbations introduced in our study. \nThese approaches are not directly comparable due to critical differences in purpose, methodology, and scope.\n\nIn [1], advGLUE and ANLI target natural language understanding tasks, such as sentiment analysis and textual entailment. The primary goal in these cases is to craft adversarial perturbations that lead to misclassification without altering the ground truth.\n\nIn contrast, our perturbation method operates at a mathematical level in logical reasoning tasks and modifies **both the problem and the ground-truth answer**. These mathematical perturbations ensure that the perturbed puzzle has a *distinctly different solution* compared to the original puzzle, while remaining superficially similar and maintaining a comparable difficulty level. This is guaranteed by the Perturber, Reasoner, and Solver components (lines 186\u2013192). This approach provides a direct evaluation of the models\u2019 understanding of the underlying mathematical principles.\n\nThus, our method serves a different purpose and employs a fundamentally different framework compared to the adversarial strategies studied in [1], which focus on classification tasks. By addressing logical reasoning robustness through mathematical-level perturbations, our work contributes a novel perspective under the scope of reasoning, distinct from advGLUE and ANLI.\n\n\n \n> Q3-4: Acc and CR should be separated for clarity. 
Reorganization of Experiments Based on CR\n\n\nThank you for the comment. In the revised PDF, we will update the figures. We will update Figure 5 into a scatter plot as shown in [this anonymous link](https://ibb.co/GWd6503) where the x-axis represents clean accuracy, the y-axis represents the inconsistency ratio (1 - CR) under local perturbations, and the color spectrum corresponds to the memorization score.\nOur analysis shows that fine-tuned LLMs generally achieve higher clean accuracy (x-axis) but exhibit greater inconsistency under perturbations (y-axis) on the training set compared to the test set. This behavior is associated with a higher memorization score (indicated by color spectrum).\n\nPlease let us know if this revision addresses the reviewer\u2019s concerns.\n\n\n\n> Q5: Clarification of Response to Q5\n\n\nThanks for the suggestion. We will add the discussion in lines 230 - 244 accordingly as shown in [this anonymous link](https://ibb.co/BfHQpbP). \n\n> Q6: Could you provide more details about your training process, such as the number of training samples, learning rate, learning rate schedule, or any other settings you believe were crucial in avoiding overfitting? Additionally, could you provide the learning curve of the model?\n\nThanks for the comment.\", \"training_details\": [\"We fine-tune the models for each N-people task separately, with ntrain = 1000 for 3 \u2264 N \u2264 8, and ntrain = 200 for 2-people task due to the limited number of combinations, as indicated in Line 255\", \"Llama3-8B fine-tuning details are presented in D.2.2: we used LoRA fine-tuning with the following hyperparameters: a batch size of 4, gradient accumulation steps of 8, and 5e-5 learning rate. The LoRA configuration was set as follows: rank r = 32, scaling factor $\\alpha$ = 32, and dropout rate 0.05. 
We relied on the [Supervised Fine-tuning Trainer](https://huggingface.co/docs/trl/en/sft_trainer) from the `trl` library without employing additional mechanisms to mitigate overfitting.\"], \"learning_curves\": [\"- We report the training and testing accuracy over epochs in Appendix Figure 21 (also in this [anonymous link](https://ibb.co/D562bBb))\\n- We report the training loss in [this anonymous link](https://ibb.co/hgcM8px)\\n\\nWe think it might be because the challenging nature of the K&K logical reasoning tasks inherently demands more epochs for convergence.\\n\\nPlease also let us know if there are other questions, and we look forward to the discussion with the reviewers to further improve our paper. Thank you!\"]}", "{\"title\": \"Further Questions\", \"comment\": \"1. **Clarification on the Importance of K&K Problems**\\nThe discussion about SAT is helpful, but it's important to note that K&K is only a very limited subset of SAT problems. Currently, your discussion feels somewhat indirect, as it suggests that \\\"K&K is a subset of SAT\\\" and \\\"CSP and verification are also subsets of SAT.\\\" I would appreciate a more direct illustration of the importance of the K&K problem. Could you provide references that support the significance of K&K? For example, are there papers that use K&K as a testbed, explaining why it was chosen? References from other fields, such as cognitive science or philosophy, might also be relevant. Additionally, I noticed that you mention ZebraLogic and Einstein's Puzzle in response to review 7GXF, suggesting that similar SAT problems have been proposed before. Could you elaborate on the differences between K&K and these problems? Why is it necessary to study K&K specifically rather than these alternatives?\\n\\n2. **Clarification of Perturbation Method and Innovation** \\nThank you for your detailed explanation of your proposed perturbation method. I understand that your perturbations serve as a measure of robustness. 
However, the key difference between your work and previous studies lies in the definition of robustness. (Please correct me if I'm wrong.) For example, prior work focused on robustness at the linguistic level, while you focus on 'mathematical-level' robustness. However, as mentioned in [1], both advGLUE and ANLI include deeper, manually annotated attacks, not just word- or sentence-level perturbations. Though not identical, this can be compared to your \"mathematical-level\" perturbations. That said, I still feel that the discussion of innovation in this regard is insufficient. If you haven't emphasized this aspect of your innovation, I believe it\u2019s acceptable, but it may benefit from further clarification.\n\n3. **Confusion Between 'Memorization & Genuine Reasoning' vs. 'Fitting & Generalization'** \nIt seems that the definition of 'genuine reasoning' here is essentially what is typically understood as generalization, as you mention performance on \"*novel unseen test data differing from training data*\". Therefore, I still don\u2019t understand the distinction between your terms 'memorization & genuine reasoning' and the conventional terms 'fitting & generalization.' Furthermore, while you state that '*memorization and generalization are not opposites*,' the term 1-CR in the score you define seems to reflect a negative consideration of generalization. When the model\u2019s generalization performance is high, CR will be high, and thus LiMem will be low. Therefore, in your definition, memorization appears to be opposite to generalization.\n\nI suggest that the authors carefully reconsider this issue: when you define memorization, what is its opposite? I believe multiplying the two factors leads to a vague definition. As you've stated, the opposite of memorization is a mix of two factors: either poor learning or reliance on reasoning. 
However, these two factors have distinct natures\\u2014one is what we want, the other is what we don\\u2019t. Mixing them creates ambiguity, making it difficult to interpret low-score results. Once you try to explain trends in LiMem, we need additional tools to understand the meaning behind these changes. This diminishes the significance of LiMem as proposed. I believe these factors should be separated for clarity.\\n\\n4. **Reorganization of Experiments Based on CR** \\nI believe Figure 19 is clearer than Figure 5, and I suggest that you reorganize your experiments using only CR. However, the results in Figure 19 seem less significant than those in Figure 5, which deepens my concern. The confusion between the two factors, 'robustness' and 'performance,' has unnecessarily exaggerated the surprising nature of the results. I believe robustness is the true point of concern when discussing 'memorization,' as we typically don\\u2019t focus on whether a model relies on memorization until it has demonstrated some performance. If the model\\u2019s higher LiMem on the training set is due to improved performance, rather than significantly lower CR, then there is nothing particularly surprising about this conclusion.\\n\\n5. **Clarification of Response to Q5** \\nThe answer to Q5 is well-explained, but I believe these two points are more suitable for the main text. I suggest you include a paragraph in the main body to explain these points, rather than concluding with \\\"off-the-shelf models do not perform well on K&K problems.\\\"\"}", "{\"title\": \"Response to Reviewer ofks (Part 3)\", \"comment\": \"> Q4: Since accuracy is a factor in the memorization score, an increase in this score might reflect either improved accuracy or reduced robustness, making it difficult to discern its true meaning. 
Please provide more discussion about how the memorization score accounts for the potential confound between accuracy and robustness\u2026\u2026 The current metric, which multiplies accuracy by (1\u2212cr), could be refined, as the latter provides more insight into memorization or generalization. The \"not solving\" behavior should be excluded by setting a threshold (e.g., accuracy > 0.8) to filter out irrelevant cases.\n\nThanks for the insightful comment. Regarding the relationship between the memorization score LiMem, accuracy, and robustness, assume the accuracy increases but the consistency ratio remains the same. Because it is a ratio (that remains the same), an increased accuracy means the (absolute) number of inconsistent samples under perturbation is now increased. As a result, this still reflects more memorization.\n\nWe design the two terms based on intuition from judging human behaviors. We combine them into a single metric to make it easier to consume. As clarified in L140\u2013147: A high LiMem score indicates memorization. However, a low LiMem score could indicate either solving by reasoning or not solving (random guessing), which requires a separate check on the accuracy and robustness terms to differentiate between the two cases.\n\nFollowing the reviewer\u2019s suggestion, we now report the **consistency ratio** (i.e., robustness) in Appendix Figure 19. Fine-tuned LLMs generally demonstrate a higher consistency ratio (i.e., more robust) on solved problems in the test set compared to the train set, particularly for challenging tasks such as 5/8-person puzzles. On the 3-person puzzle task, the consistency ratio between the train and test sets remains comparable. 
The consistency ratio is generally higher in easy tasks than in hard tasks.\n\nWe thank the reviewer for suggesting setting a threshold, and we will add it to our revision.\n\n\n> Q5: In Section 3.1, the poor performance of off-the-shelf models on K&K tasks seems unrelated to the focus on memorization. This also raises concerns about whether K&K puzzles are suitable for testing reasoning-by-memorization, as reasoning ability may be a prerequisite for drawing meaningful conclusions.\n\nWe agree with the reviewer that to achieve a high memorization score, the model has to demonstrate sufficient accuracy on the original problems in the first place, underscoring that high accuracy is a prerequisite to studying meaningful memorization.\n\nOur evaluation of off-the-shelf models on K&K is still related to our focus on memorization given the different performance under easy/hard K&K tasks:\n- **Potential memorization on easy K&K tasks**: We observe relatively high accuracy on easy tasks along with non-trivial memorization scores. For instance, Claude 3.5-sonnet achieves ACC = 0.63 and a memorization score LiMem = 0.33 on 3-person puzzles. Regarding the sources of memorization, we clarify that while our puzzles are randomly generated (e.g., using random language expressions and mathematical structures), Knights and Knaves (K&K) is a well-known classical puzzle type. Existing instances of such puzzles and related materials are available online [1,2], and it is possible that they were included in the pretraining data of these models, which is unknown to us. In addition, we investigate the open-source pretraining datasets by utilizing the [WIMBD tool](https://wimbd.apps.allenai.org/) to analyze the occurrence of popular names (\u201cAlice\u201d, \u201cBob\u201d) combined with different roles (e.g., \"knight,\" \"knave,\" etc.). 
The below table revealed non-trivial occurrences, suggesting that materials resembling K&K puzzles may have been part of the pretraining data.\\n- **Poor performance on hard K&K tasks validates our dataset for studying memorization via fine-tuning**: The hard tasks in our dataset present a significant challenge for even most advanced models (e.g., GPT-4o, Claude 3.5-sonnet, Gemini-1.5-Pro), with accuracy dropping below 11% on 8-person puzzles. This indicates that the *harder portions of our benchmark (e.g., long and complex puzzles) are likely contamination-free, justifying their use in fine-tuning experiments* described in Section 3.2. The challenging nature of these tasks provides a *clean and controlled* setting to study memorization via explicit fine-tuning.\\n\\n| Statement | Dolma | The PILE | C4 | Oscar | OpenWebText |\\n|---|---|---|---|---|---|\\n| \\\"Alice is a knave\\\" | 13 | 6 | 2 | 1 | 0 |\\n| \\\"Alice is a knight\\\" | 23 | 8 | 6 | 1 | 0 |\\n| \\\"Bob is a knave\\\" | 11 | 8 | 0 | 1 | 0 |\\n| \\\"Bob is a knight\\\" | 53 | 9 | 22 | 5 | 0 |\\n| \\\"Charlie is a knave\\\" | 3 | 0 | 0 | 0 | 0 |\\n| \\\"Charlie is a knight\\\" | 10 | 1 | 2 | 0 | 0 |\", \"reference\": [\"[1] https://philosophy.hku.hk/think/logic/knights.php\", \"[2] https://dmackinnon1.github.io/knaves/\"]}", "{\"title\": \"Thanks for explanation\", \"comment\": [\"Thank you for your clarifications. As highlighted by other reviewers as well, I see some issues with the current work:\", \"The findings of the paper (finetuning, reasoning steps), while reasonable and intuitive, do not seem to provide new insights into model capabilities or propose recipes that could be effectively transferred to other problems.\", \"As noted by the authors, most models appear robust with respect to LiMem scores in the 1-shot regime. If models were purely memorizing, this would not be the case.\", \"I remain unconvinced about the fundamentals and necessity of the LiMem scores. 
It is unclear whether this metric adequately distinguishes between memorization and generalization.\", \"Given these concerns, I believe the contributions of the paper need further refinement to strengthen their impact. Likewise, I am increasing the score to represent my beliefs.\"]}", "{\"title\": \"Thank you for your further explanation!\", \"comment\": \"1. The current introduction to the K&K problem effectively conveys its importance. However, I believe reference [9] should be incorporated into this section. In my view, these references would provide valuable context for readers, as they demonstrate the connection between the problem and human cognitive processes, which aligns with the goals of large language models (LLMs). In summary, I find the current explanation of the K&K problem's importance sufficient, but I recommend integrating the additional references into either the introduction or the related works section in the next revision.\n2. Figure 5 has shown significant improvement. However, the difference in inconsistency ratios seems marginal. Is the main conclusion that \"fine-tuning improves accuracy but maintains the same inconsistency ratio\"? If this is the case, it may not be a particularly surprising conclusion. \n3. As a minor suggestion, I feel that using 100 (or 50) epochs with only a few thousand training samples may not be the common case. In more practical cases, the training samples are more but fewer training epochs are used. Could you consider running the experiments with more samples but fewer epochs in the next version to see if the conclusion is the same?\n\nNow, I believe it is a stronger paper with all of your efforts. I think the new content makes the paper better and I have raised my score; however, I am still concerned about some of the potentially confusing conclusions (especially about LiMem) that prevent me from further improving the score. 
I seriously suggest that the authors consider removing that content, and separating the analysis from the perspectives of \"performance\" and \"consistency\" will not affect the contribution and conclusions of the paper.\"}", "{\"title\": \"Response to Reviewer XCvy (Part 1)\", \"comment\": \"Thank you for your valuable feedback! We address your questions and comments below.\n\n> Q1: State space with perturbations: As the problem space is limited in terms of number of people, depth and width (only a maximum of 8, 2, and 2 respectively), these limited dimensions make it relatively easy for models to interpolate the entire problem space with perturbations, potentially inflating perceived generalisation.\n\nThank you for the comment. We would like to clarify that the problem space is, in fact, extremely large (~10^24). Below, we calculate the number of possible combinations for the n-ppl=8, depth=2, width=2 configuration:\n\n- **Leaf Nodes**: A leaf node can represent the statement \u201cX is lying\u201d or \u201cX is telling the truth.\u201d With 8 individuals, this results in 16 possible combinations for a single leaf node.\n- **Branching Nodes**: Ignoring the \u2018not\u2019 operator for simplicity and considering only the logical operators \u2018and,\u2019 \u2018or,\u2019 \u2018imply,\u2019 and \u2018equivalence,\u2019 each branching node has a width of 2 (as specified). With a depth of 2, the children of each branching node are always leaf nodes. 
Thus, a branching node with width-2 and depth-2 has: 4 operators \u00d7 16^2 combinations of two child leaf nodes = 1024 possible combinations.\n- **Total Problem Space**: With 8 individuals, each making a statement, the total problem space becomes: 1024^8 \u2248 10^24 combinations.\n\nEven though a large portion of these combinations may not yield valid solutions, the resulting problem space remains enormous.\n\n\n> Q2 Limited Evaluation: The authors analyze only 8 models, yet they refer to it as a benchmark, which limits its claim to be a benchmark. A more comprehensive evaluation across diverse models is necessary, particularly with a focus on distinguishing performance in terms of memorization versus reasoning\u2014an analysis notably missing in the paper.\n\nThanks for the comment. We mainly focus on LLMs that are shown to perform competitively on common reasoning benchmarks. Following the comment, we additionally evaluate 6 models: Gemini-1.5-Flash-002, Gemini-1.5-Pro-002, Gemma-2-9b, Llama-3.1-8B-Instruct, Qwen2-Math-7B-Instruct, Qwen2.5-Math-7B-Instruct. \n\nWe report results for a total of 14 models in Figure 3 of the revised manuscript. Notably, while Qwen2-Math and Llama-3.1-8B-Instruct are competitive among the open-source models, they exhibit large performance inconsistency under perturbation (e.g., 0.3 memorization score in 2-ppl K&K task under perturbed leaf setting), highlighting the limitations in their robust reasoning abilities and potential memorization.\n\n\n> Q3. Boilerplate memorization issues have been raised by other studies (e.g., Sai et al. [1]) that address similar patterns of template memorisation. The work in [1] reaffirms that slight variations in variable names do not disrupt memorization. It can also in turn explain the strong performance of fine-tuned models with a small number of finetuning samples.\n\nThanks for the comment. 
While prior studies have shown that slight variations in variable names or surface-level language do not significantly disrupt memorization, we focus more specifically on **math-level small, local perturbations** (i.e., statement and leaf perturbations). As shown in Fig 5 and Fig 6, math-level perturbations lead to significantly larger memorization scores (i.e., performance drops) compared to language-level perturbations (e.g., changing people's names, role pair names, reordering). These math-level perturbations involve slightly altering the underlying math structures, rather than just modifying variable names or linguistic expressions. This is also more challenging than language-level perturbation because the perturbed problem needs to be solved to find the **new ground-truth answer** for evaluation, which is enabled by our dynamic K&K generation pipeline. Our study goes beyond template memorization by challenging the models' understanding of the underlying mathematical principle.\n\n\n\n> Q4. Recent analyses, like that by Saparov and He [2], have highlighted LMs' reasoning abilities in Chain-of-Thought (CoT) contexts. 1-shot prompting doesn't seem to reduce performance for various perturbations as opposed to 0-shot.\n\nWe thank the reviewer for highlighting the relevant analyses. We did evaluate 1-shot prompting, CoT prompting and 1-shot CoT prompting (deferred to appendix due to space limit). \nIn Appendix Figure 17, we report the performance of models under these 1-shot/CoT prompting and direct prompting. 
Our results show that models with high accuracy, such as Phi-3-medium on 3-ppl puzzles (accuracy = 54%), also exhibit a substantial performance inconsistency under perturbations, as indicated by their memorization scores (LiMem = 37%).\nThis suggests that even with 1-shot and CoT prompting, models are sensitive to perturbations in K&K puzzles, highlighting the K&K dataset's value for assessing robustness and reasoning in diverse scenarios.\"}", "{\"summary\": \"This paper explores how large language models (LLMs) balance memorization and reasoning in solving logical tasks. Using a proposed benchmark of Knights and Knaves (K&K) puzzles, the authors investigate whether LLMs rely on memorizing training examples or develop genuine reasoning skills. While fine-tuning models improved performance on known puzzles, their accuracy dropped with slight modifications, indicating reliance on memorization. However, fine-tuning also enhanced generalization, suggesting a mix of memorization and reasoning. The study introduces the Local Inconsistency-based Memorization Score (LiMem) to quantify memorization in these tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Novel Quantification of Memorization: The paper introduces a new metric, the Local Inconsistency-based Memorization Score (LiMem), which provides a structured way to measure the extent of memorization versus reasoning in large language models (LLMs), a valuable contribution to understanding model behavior.\n\n2. Thorough Empirical Analysis: The paper conducts an in-depth evaluation of models under various conditions, including fine-tuning, perturbation tests, and cross-difficulty transferability. This comprehensive approach offers significant insights into the models\u2019 generalization and reasoning capabilities.\n\n3. 
Insight into Model Generalization: The findings on how fine-tuning impacts both memorization and generalization offer valuable insights for future research on improving reasoning in LLMs.\", \"weaknesses\": \"1. Limited Task Scope: The paper focuses solely on logical reasoning, particularly the Knights and Knaves puzzles. While this allows for deep analysis, it limits the generalizability of the conclusions. Experiments on other reasoning domains, such as mathematical reasoning or different types of logical reasoning, would strengthen the paper's claims and make the results more broadly applicable.\\n\\n2. Lack of Surprising Results: The finding that reasoning abilities improve with memorization is not particularly novel or surprising. While the paper conducts detailed analyses, it does not present clear guidance on how to leverage this insight for practical improvements. Standard fine-tuning with Chain-of-Thought (CoT) prompting, which the paper highlights, is already a well-known approach. The study would benefit from more innovative methods that build upon these findings.\\n\\n3. Insufficient Baselines: The paper evaluates a narrow set of models and approaches. Including a broader range of baseline algorithms, particularly reinforcement learning (RL)-based models or other alternative reasoning frameworks, would provide more context for the performance of LLMs and help assess whether memorization is unique to certain models or training methods.\", \"questions\": \"1. Your findings suggest that reasoning improves with memorization, but the practical value of this insight is unclear. How do you envision this result being used in real-world applications or for model improvement?\\n\\n2. Consider diversifying the tasks by including other logical or mathematical reasoning benchmarks (e.g., SAT solvers, arithmetic reasoning, or more complex logic puzzles). 
This would help demonstrate that your findings generalize beyond K&K and strengthen the paper\\u2019s conclusions.\\n\\n3. Adding comparisons with other learning paradigms (e.g., RLs, decision transformers, or symbolic models) could broaden the understanding of how different models handle reasoning and memorization, especially in dynamic or less static environments than the K&K puzzles.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply!\\n\\nThank you for highlighting the main contributions of the paper and the significance of your tasks. I agree that SAT questions are a common category in reasoning. However, the generalization of the paper's conclusions (such as \\\"Leveraging Direct Fine-Tuning in the Absence of CoT,\\\" \\\"Robustness to Low-Quality Data,\\\" etc.) should be validated through other tasks; otherwise, the universality of the conclusions may be questioned. I appreciated the thorough analysis the authors have conducted in the paper. My main concern lies with the generalization of the paper's conclusions. I will increase my score accordingly.\"}" ] }
5sPgOyyjG5
Feynman-Kac Operator Expectation Estimator
[ "Jingyuan Li", "WEI LIU" ]
The Feynman-Kac Operator Expectation Estimator (FKEE) is an innovative method for estimating the target Mathematical Expectation $\mathbb{E}_{X\sim P}[f(X)]$ without relying on a large number of samples, in contrast to the commonly used Markov Chain Monte Carlo (MCMC) Expectation Estimator. FKEE comprises diffusion bridge models and an approximation of the Feynman-Kac operator. The key idea is to use the solution to the Feynman-Kac equation at the initial time $u(x_0,0)=\mathbb{E}[f(X_T)|X_0=x_0]$. We use Physics-Informed Neural Networks (PINN) to approximate the Feynman-Kac operator, which enables the incorporation of diffusion bridge models into the expectation estimator and significantly improves the efficiency of using data while substantially reducing the variance. The diffusion bridge model is a more general MCMC method. In order to incorporate extensive MCMC algorithms, we propose a new diffusion bridge model based on the Minimum Wasserstein distance. This diffusion bridge model is universal and reduces the training time of the PINN. FKEE also reduces the adverse impact of the curse of dimensionality and weakens the assumptions on the distribution of $X$ and performance function $f$ in the general MCMC expectation estimator. The theoretical properties of this universal diffusion bridge model are also shown. Finally, we demonstrate the advantages and potential applications of this method through various concrete experiments, including the challenging task of approximating the partition function in random graph models such as the Ising model.
[ "Expectation Estimator", "Diffusion bridge model", "MCMC", "Physics-Informed Neural Networks", "Minimum Wasserstein Estimator" ]
Reject
https://openreview.net/pdf?id=5sPgOyyjG5
https://openreview.net/forum?id=5sPgOyyjG5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xVH3hQ5qYE", "tCbFqq2ZJ6", "qgOR5mYNkW", "pdrrx9bCrU", "odRBXElEMy", "oKI4LNLEj1", "oKDpSS6XKc", "ljsebCeGgy", "iDuuLyxjGE", "cM7UZnsI2U", "YgT3Qph168", "RIMx1taeNZ", "OAlOdCLWFg", "LLXmFLtpNa", "LFOv1lpwtw", "H5ZyDN6AlZ", "DfVR4VQ03W", "DRbnI0hMVq", "6wCqNe2lVR", "5VSNZUAoQj", "2cvihDxLgj" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1732542244376, 1730715786183, 1731485539181, 1731484081752, 1737523473181, 1731923785641, 1732703499539, 1732547073102, 1732696319655, 1731939078230, 1730705586066, 1732612327861, 1731488130001, 1730700616676, 1731480633701, 1732632060133, 1731482142352, 1734643525967, 1730671290324, 1730650442452, 1732705505220 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_xwQ7" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_xwQ7" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_xwQ7" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_TUPv" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_Su4v" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_Su4v" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_xwQ7" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_A2pH" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1889/Authors" ], [ "ICLR.cc/2025/Conference/Submission1889/Area_Chair_umuJ" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_UFuF" ], [ "ICLR.cc/2025/Conference/Submission1889/Reviewer_TUPv" ], [ "ICLR.cc/2025/Conference/Submission1889/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Figures 2 and 3 explain much better the previous table, and in the current state I have revised my previous rating to 3.\\nAre the authors using a single seed for the experiments? Usually methods are thoroughly tested, and include error quantification (e.g., standard deviations) from *multiple* runs.\\n\\nRegarding the other methods against which the authors are comparing, can they elaborate on why the labels used are MCMC-T, MCMC-R, and MCMC-C? Can the authors also indicate explicitly which methods are used for the comparison? Potentially with an accompanying reference for each.\"}", "{\"summary\": \"Doing MCMC is hard (time consuming, and somewhat wasteful because of the burn-in period). The authors propose a post-processing method using the samples from some MCMC procedures, which admit an It\\u00f4 decomposition, to approximate moment estimates of the desired sampling density. They use a denoising technique based on physics informed neural networks as part of the post-processing mechanism.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The goals stated in the introduction are bold, and quite interesting. Obtaining results in this line would prove quite useful in general for ML and statistics.\\n\\nI appreciate that the authors include small introductions for the Euler-Maruyama method and physics informed neural networks in the appendix. However their existence should be indicated in the main text.\", \"weaknesses\": \"Presentation is bad throughout. There are plenty of typos. 
The authors do not use parenthetical citations and instead insert them in the text which makes for a less pleasant reading experience.\\n\\nThe notation introduced in line 232 definitely needs improvement, I do not understand which side is supposed to be the one that will be used later. Even then, it is unclear what is being defined, as there are two definitions for $\\\\hat{\\\\mu}\\\\_{t\\\\_i}$ .\\n\\nIn Assumption 2.2, it is not clear what $\\\\mu^{\\\\mathcal{P}_\\\\theta}$ means.\\n\\nThe notation in Algorithms 1 and 3 should be introduced before the algorithms. For example, it is not clear where $Y_T$ comes from, and why it is required.\\n\\nTable 1 is impossible for me to parse. I invite the authors to mimic the conciseness of their own Table 5 for summarizing the numerical results.\\n\\nThere is a link to GitHub page, which has been confirmed to not belong to the authors, but I do not see a good reason for why the authors would want to include a link to a GitHub that is not theirs. Usually a citation to the original article is enough.\\n\\nThe proofs in the main text for Theorems 2.1, 2.6 and 2.7 should either refer directly to the Appendix where they are proved, or be proved right there.\\n\\nRegarding Theorem 2.8, the comment in the 'proof' space makes me think the Authors were not the first to prove it, in which case they should indicate it explicitly; otherwise it would be plagiarizing.\\n\\nCurrently Section 4 is quite lacking, including the aforementioned Table 1 which I cannot comprehend (by the way, it is missing a reasonable caption). \\n\\nA proposed method like this should be thoroughly tested, which in the current state of the paper it has not been. 
The methods the authors refer for comparison should include appropriate references.\", \"questions\": \"In Theorem 2.1, what does \\\"Linear growth\\\" mean?\\n\\nIn Assumption 2.4, what does \\\"D is the metric of the parameter\\\" mean?\\n\\nIn Theorem 2.6, should \\\"we exist\\\" be \\\"there exists a set\\\"?\\n\\nIn Algorithm 1, can the authors clarify what is the main difference between $X_t$ and $X_i$? The distinction is not clear to me. \\n\\nHow are the integrals in Algorithm 2 computed? Are the authors able to evaluate the integrals explicitly? If so, they should indicate how and why they are able to do so.\\n\\nHow does the computational cost of this approach compare to other MCMC approaches?\\n\\nOne of the main criticisms posed about MCMC methods is that they are not optimal since they spend quite some time in the burn-in phase (lines 56, 82). However, for the proposed method to work the authors assume that they have access to samples from a distribution that are obtained via an MCMC, that has already gone through a burn-in period (lines 188, 192). How can the authors support their claim that this method is better (line 107) than MCMC if it is still spending a similar amount of samples in burn-in?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses\", \"comment\": \"What we propose in this paper is not a sampler or a generative model, but rather a heuristic algorithm that can enhance the performance of most existing SDE-based samplers in estimating expectations.\\n\\nTheoretically, we have provided the convergence results for our diffusion bridge model. The diffusion bridge model is merely a sampler designed to reduce the total training effort of the PINN. We reverse-engineer the FK equation, which is itself a conceptual innovation. 
Most of our past work focused on solving high-dimensional PDEs, namely using MCMC / SDEs and then compute the expectation to obtain the solution of the PDE. Our approach, however, is to directly solve the PDE to obtain the MCMC expectation. When the SDE is a diffusion model-based sampler, we can obtain the expectation of the target distribution without relying on the LLN or ETMC.\\n\\nSamplers using the $W_2$ distance are quite common, such as those based on optimal transport methods. For high-dimensional cases, Since the dimension is hidden in our Theorem 2.7, a classical convergence result is given in [1].\\n\\n[1] Fournier, Nicolas, and Arnaud Guillin. \\\"On the rate of convergence in Wasserstein distance of the empirical measure.\\\" Probability theory and related fields 162.3 (2015): 707-738.\\n\\nWe compare the SOTA expectation estimators and demonstrate the performance of PINN in the high-dimensional case with $d = 225$, as shown in the experiment in line 1029.\\n\\nHow does your algorithm compare with other neural ODE/SDE, diffusion models, bridges models in the literature?\\n\\nOur goal is to estimate the expectation, not to sample. Therefore, our experiments focus more on the combination of the sampler + FE and sampler + ETMC/LLN, with the choice of sampler being diverse.\"}", "{\"title\": \"Responses\", \"comment\": \"Q1 The scalability of the algorithm w.r.t. dimension is not verified sufficiently. d=20 is too small. There are no real-world simulations.\\n\\nA1 In fact, we demonstrated the effectiveness of this method in high-dimensional cases, such as 225 dimensions, in the ising model. A detailed explanation of this experiment can be found in line 1077.\\n\\nQ2 The authors criticize the large variance issue by the MCMC method but fail to justify theoretically why the proposed method yields a lower variance. The empirical support is limited. 
\\n\\nA2 Theoretically, concentration inequalities can show that a larger Lipschitz constant will lead to a larger variance, which can be referenced in [1]. On the application level, the precision of the sampler also affects the variance. For example, using the Metropolis Adjusted Langevin Algorithm for correction. Our two experiments validate these two cases. First, for the ISING model case, the complexity of $f$ is the main factor. In the second case, we use the unadjusted Langevin Algorithm to obtain better results, without changing the structure of the sampler.\\n\\nThe metric used in Girsanov's theorem is the KL divergence, but KL divergence does not have the properties of $W_2$. Currently, we are more focused on $W_2$. Additionally, Girsanov's theorem cannot be applied when the diffusion coefficients are inconsistent.\\n\\n\\n[1] Gobet, Emmanuel. Monte-Carlo methods and stochastic processes: from linear to non-linear. Chapman and Hall/CRC, 2016.\\n\\nQ4 I don't know why and when MCMC is required to impose complex constraints on the distribution and performance function. Some references are suggested.\\n\\nA4 In fact, the example in the ISING model illustrates this point. In a more general case, when we want to solve an energy model such as $p(x;\\\\theta) = \\\\frac{\\\\exp(-U(x,\\\\theta))}{Z(\\\\theta)}$, the partition function $Z(\\\\theta)$ is intractable, especially when the state space of $x$ is discrete. This type of model is commonly found in large language models. Below are examples of handling $Z(\\\\theta)$. In addition, in statistical inference, for example, when estimating the parameters of random graph models, this issue arises. While we can use MCMC sampling distributions, we cannot obtain an exact expression for the distribution due to the presence of the partition function.\\n\\n\\n[2] Xu, Minkai, et al. 
\\\"Energy-Based Diffusion Language Models for Text Generation.\\\" arXiv preprint arXiv:2410.21357 (2024).\\n\\n[3] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n\\nQ5 I don't see when MCMC is not the optimal decoding method. It appears to me that burn-in is not a significant limitation and only affects the performance on a negligible scale and can be easily fixed via a large stepsize in the beginning for warm-up.\", \"a5_the_reason_for_this_is_quite_simple\": \"in diffusion models, $X_T$ does not necessarily have to be a stationary distribution. For example, methods like normalized flow methods and many diffusion model-based samplers can all use FE to solve for expectations without relying on ETMC and LLN. For MCMC objectives, diffusion bridge modelling is a more efficient approach [1].\\n\\n\\n[1] Vargas, Francisco, Will Sussman Grathwohl, and Arnaud Doucet. \\\"Denoising Diffusion Samplers.\\\" The Eleventh International Conference on Learning Representations.\\n\\nWe appreciate the suggestion to discuss the limitations.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"A1: If the authors do not wish to indicate the definition in the main text, and want to keep it in the appendix that is fine. However, I think they should at least indicate in the main text where the definition can be found. Otherwise, how would readers know what the authors mean?\", \"a2\": \"So, in the case where the metric is in the product space, we would have that $D(P\\\\_{\\\\theta},P\\\\_{\\\\eta}) = d\\\\_x(X\\\\_{0,\\\\theta_3},X\\\\_{0,\\\\eta_3}) + d\\\\_b(b\\\\_{\\\\theta_1},b\\\\_{\\\\eta_1}) +d\\\\_\\\\sigma(\\\\sigma\\\\_{\\\\theta_2},\\\\sigma\\\\_{\\\\eta_2})$, where the lower case ds correspond to distances in the appropriate space. 
Is my understanding correct?\", \"a5\": \"Are you able to evaluate Equations (13) and (14) explicitly? Or are you approximating these through some partial sums? Algorithm 4 uses sums that remind me of integral approximations. If you are not using LLN or ergodic theorems for MCs to approximate such integrals, can you indicate what theoretical result confirm the validity of such evaluations and where in the text you indicate it?\\n\\nAlso, how is Algorithm 4 important to your paper? I do not see it mentioned anywhere else besides the appendix?\\n\\nI do see value in the ideas posed in the paper. Furthermore, I appreciate your A6, which makes your contribution clearer. However, the presentation in the text is still not good enough to be close to acceptance. See the weaknesses I indicated above for a roadmap on how I think the presentation could be improved.\\n\\nFor instance, what are you trying to show with Table 1? Can it be summarized in a manner similar to Table 5? Or rather, could you instead display it as a graphical summary?\"}", "{\"comment\": \"I thank the authors for their response. I still maintain that the manuscript suffers, above all, from various presentation flaws which make it difficult to understand the true positioning of this work within the existing literature and assess its novelty. The response of the other referees only strengthened my view. I\\u2019d advise the authors to focus on the classical form of an introduction: explaining the open problem, providing motivation, reviewing what has been done, fleshing out a smaller open problem, and then explaining how their approach made steps towards solving this smaller one. Moreover do not assume that the reader understands the specific challenge you are facing.\\n\\nFor instance, the current introduction poses the question \\\"can we have a universal MCMC sampler\\u201d (paraphrasing question A). 
This is a far too general scope that is not even used as a \\\"funnel\\u201d to dive into the specific problem the authors try to solve. Having some experience with MCMC methods, in many relevant settings getting uncorrelated samples is an NP-hard or sharp-P problem (e.g. spin-glass problems or their SAT-3 versions in computer science). Clearly one has to rely on the specifics of the problem at hand to make progress.\\n\\nFollowing this I still do not understand what is the novelty of the current work. As far as I understand, the authors are not the first to use the FK formula for expectation estimation. If their novelty is in an \\u201cinterpretation\\u201d, it is too vague and subjective to my taste. If their technical innovations led to some obvious speedup, then the benchmarks given are still insufficient in my mind. In the physics community, the 2d Ising model was studied up to number of sites (n) of the order of millions. Furthermore, this was done at so-called critical points where simple local Monte-Carlo updates, reminiscent of their SDE, suffer from critical slowing down (which was then cleverly avoided using Worm Algorithms and Cluster Updates). Nonetheless, exact theoretical results are available for these large-scale systems and showed agreement with numerics up to 5-6 digit accuracy. In contrast, the authors report experiments on n=15^2 sites and do not tune the system to a critical point. It may be that the author actually put some different emphasis on which they do excel, but then this comes back to issues of presentation.\\n\\nIf on the other hand, it is that they are more efficient per sample, then similar issues can be raised. First, as before, why is the benchmark carried on such small systems? How do we know this would scale to truly large-scale systems? Second, how much do these results depend on the samplers one is using? What if you used cluster updates that have very short burn-in periods accompanied by standard expectation estimation? 
Does the time complexity involve training the PINN and learning the SDE? How would that scale and perform when the dimension of the problem grows even larger?\\n\\nTo summarize, I strongly believe the authors need to revise their presentation, delve into the specifics of their contribution, and explain how their numerical benchmark supports their claim. I also believe that the discussion period is not the time to make such an overhaul.\"}", "{\"title\": \"Responses\", \"comment\": \"We use fixed seeds for neural network initialization, but the sample points obtained from Gibbs sampling in MCMC do not use fixed seeds. The figures show the averaged results over multiple runs. However, it is worth noting that we use exactly the same samples for different estimators.This means that we have completely fixed the randomness in the empirical measure, ensuring fairness for all methods. We did not report the standard deviation because the table would become overly large. However, we have open-sourced our code, allowing this to be directly verified through experiments.\\n\\nFor the current problem (calculating the partition function of a high-dimensional Ising model), there are not many expectation estimators available (despite the abundance of samplers). We chose a state-of-the-art method, the MCMC-C estimator [1], which often requires combining with the TPA method (a technique similar to importance sampling). However, due to the need for constant adjustment, it requires sampling a large number of points along the chain. When n is large, the computational complexity becomes impractical, as can be seen in cases where n>5 like https://github.com/zysophia/Doubly_Adaptive_MCMC/blob/main/data/isingcompare_complexity.csv. \\n\\nMCMC-R serves as a baseline because we aim to validate the matching performance of our diffusion bridge model, specifically by resampling a subset of points for averaging. 
This process can be seen as a reconstruction of the Markov chain (only matching points from the stationary distribution), allowing us to sample efficiently using this chain.\\n\\nMCMC-T directly computes the expectation by solving the FK equation using the matched diffusion bridge in MCMC-R, which is the core contribution of this paper. This method efficiently leverages the matched Markov chain and provides a superior estimation.\\n\\n\\n[1] Shahrzad Haddadan, Yue Zhuang, Cyrus Cousins, and Eli Upfal. Fast doubly-adaptive MCMC to estimate the Gibbs partition function with weak mixing time bounds. Advances in Neural Information Processing Systems, 34:25760\\u201325772, 2021.\"}", "{\"comment\": \"Q1: The authors should explicitly explain how these theoretical results in [1] apply to their method to strengthen the paper's clarity.\\n\\nThe assumption that the function belongs to a certain space without constraining the Lipschitz constant is noted, but the justification for replacing this constraint with higher regularity is unclear. This seems more like a trade-off than a clear improvement. Further clarification or justification would strengthen the argument, especially regarding the impact on expectation estimators and the relevant equations.\"}", "{\"title\": \"Responses\", \"comment\": \"Q1, we got it. We will pay special attention to this and mention in the main text that the definition can be found in the appendix, for example, by providing a hyperlink.\\n\\nQ2 yes \\n\\nQ3 \\nAlgorithm 4 is a more precise representation of Algorithm 2 and is essentially the same algorithm. This is mentioned in line 365 of the main text, but it seems we did not provide a hyperlink. We will address this issue.\\n\\nIn solving PDEs with PINNs, we often use the mean squared error as the loss function, which introduces integral terms. However, here we only sample within the solution domain of the PDE and use an approximation based on the LLN.
Nevertheless, we did not ultimately use LLN because our expected value is derived from the initial values of the PDE solution.\\n\\nWe have added Figure 1 to the main text, which provides a clearer explanation of our algorithm. First, we simulate the SDE, as shown on the x-t plane below. Then, we calculate the residuals at these points, corresponding to equations (13) and (14) of PINNs, and solve to obtain the PDE solution on the plane. At the very beginning of this plane lies the expected value.\\n\\nThe efficiency of this evaluation can be verified in two ways. \\n\\nFirst, as mentioned in our paper, the integration of MCMC's key information $(X_0, b, \\\\sigma, T)$ into the PDE solution is crucial. These four components simultaneously appear in equations (13) and (14), allowing us to provide the model with more precise information.\\n\\nA simple way to understand this is by considering the Langevin SDE. In this case, $b$ represents the log gradient of the density, and this information is directly integrated into equation (13) to offer a better estimate of the expectation. Similarly, $f$ is integrated into equation (14). This approach is essentially a highly integrated method combining control functions and importance sampling. The essence of control functions is to use another function $g$ to integrate information about $f$. On the other hand, importance sampling is a post-processing method that adjusts the information of $P$ through reweighting. In diffusion models, $P$ is entirely determined by $(X_0, b, \\\\sigma, T)$.\\n\\nSecondly, the error of diffusion models can be analyzed in recent works. The error for using PINNs can be found in [1].\\n \\n\\nHowever, it is worth noting that the points used in our method (on the SDE trajectory) include the entire trajectory, as shown in Figure 1. 
This means that even the points during the burn-in period of the MCMC algorithm can be utilized by incorporating them into equation (13).\\n\\nLastly, we are addressing the challenging problem comprehensively. Including a figure might be a good approach. Thank you very much for your suggestion; we will make the corresponding revisions. The details of the Ising model experiments are provided in the Appendix 9.1. For each column in Table 1, we should provide more detailed explanations.\\n\\nWe sincerely appreciate your valuable suggestions as we proceed with a new round of revisions. Thank you!\\n\\n[1]Tim De Ryck and Siddhartha Mishra. Error analysis for physics-informed neural networks (pinns) approximating kolmogorov pdes. Advances in Computational Mathematics, 48(6):79, 2022.\"}", "{\"summary\": \"The authors propose the Feynman-Kac Operator Expectation Estimator (FKEE) to approximate the target distribution E[f(X)]. This estimator contains two parts: (1) A diffusion bridge model with parameters optimized to minimize the Wasserstein distance to the target distribution, and (2) a method based on the Feynman\\u2013Kac equation, formulated as a partial differential equation (PDE) and solved approximately using Physics-Informed Neural Networks (PINNs), which employ a least-squares approach. The experiments focus on approximating the partition function in a random graph model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A significant strength of this work is the innovative linking of the diffusion model to high-dimensional partial differential equations (PDEs), with Physics-Informed Neural Networks (PINNs) effectively employed to overcome the curse of dimensionality in solving these PDEs.\", \"weaknesses\": \"1. The Feynman\\u2013Kac model (Algorithm 2) with the PINN solver lacks a convergence or error estimate, which would be valuable for assessing its accuracy and reliability.\\n\\n2. 
In the experiments, the authors claim that \\\"using fewer points on the Markov chain achieves higher accuracy in approximating expectations.\\\" However, it is unclear if this result generalizes beyond the specific example provided, as it appears quite context-dependent.\", \"questions\": \"The authors mention in the Discussion that their method requires the boundary conditions of the PDE to satisfy a smoothness condition, specifically that f is in C^2, and that this requirement broadens the scope of their approach. However, it seems that C^2 smoothness could be more restrictive than a Lipschitz assumption. Could the authors clarify how they view this requirement as less restrictive? Additionally, could they discuss any potential limitations this might introduce for functions that are not in C^2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"How many runs are the authors using? Since there are several runs, I do not understand why the standard deviations of the average MSEs are not reported in any way. At the very least they should be included as error bars, or in an appendix. This helps the reader understand how strong (if at all) the proposed method is.\\n\\nCan you also provide an answer for my other question regarding the reasoning for the labels? When I read the figures I have to keep going back to the paragraph where the methods are defined because the labels are not (to me) intuitive. \\nIs it \\\"R\\\" for resampling? \\\"T\\\" for transformed? \\\"C\\\" for chain? What is the logic behind the names? This is a minor point, but it makes for an unpleasant reading experience.\"}", "{\"title\": \"Responses\", \"comment\": \"Thank you for reading our paper in detail! Below are my responses to some of your statements. 
If any of them are not precise enough, feel free to ask more questions.\\n\\nThe main contribution of this paper is a different interpretation of the FE formula, which can enhance the performance of most diffusion models (samplers) in estimating expectations. Blechschmidt and Ernst (2021) is a classic method for solving the FK equation, meaning that we can solve high-dimensional PDEs by simulating SDEs (which typically do not have dimensionality issues) and then computing expectations. This is just one way of utilizing the PDE. We shift this approach by assuming that we know the SDE is a diffusion (diffusion bridge) model, and accordingly, we can naturally express the corresponding PDE. By using a different method to solve the PDE, namely PINN, we can obtain the expectation of the SDE at the terminal point, which corresponds to the solution at the initial time of the PDE.\\n\\nThis method avoids the need for LLN and ETMC to compute the mathematical expectation, and we often only need a small number of points to achieve better results. This is because the PDE integrates all the information of the SDE, and the SDE contains information such as the density, for example, in Langevin dynamics and score functions in score-based SDEs. \\n\\nTherefore, our validation experiments are centered around expectation estimation, rather than the sampler(MCMC) and PDE. \\n\\nExpanding the Scope of Expectation Estimators can be specifically referenced in line 415. In other words, we can integrate a large number of SDE samplers to expand the range of $P$ , which may not even be a stationary distribution in MCMC. This weakens the conditions on $f $, as a larger Lipschitz constant for $f$ often leads to larger variance.\\n\\nIntegrating MCMC into a unified framework has been recently achieved by many diffusion models, including our method, as seen in [1][2][3]. This represents the latest direction in current technology. 
However, there is a lack of corresponding work on efficiently obtaining expectations from SDEs, as most estimators still rely on ETMC and LLN.\\n\\n[1] Vargas, Francisco, Will Sussman Grathwohl, and Arnaud Doucet. \\\"Denoising Diffusion Samplers.\\\" The Eleventh International Conference on Learning Representations.\\n\\n[2] Richter, Lorenz, and Julius Berner. \\\"Improved sampling via learned diffusions.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[3] Grenioux, Louis, et al. \\\"Stochastic Localization via Iterative Posterior Sampling.\\\" Forty-first International Conference on Machine Learning.\\n\\nOur experiments demonstrate that we have obtained more accurate estimates of the partition function with fewer samples. Sampling algorithms and expectation estimators are fundamentally different methods. As we have emphasized, our focus is not on the MCMC method itself, but rather on whether MCMC + FE outperforms traditional methods. Our experiments have already shown that this is the case. Replica-exchange Monte Carlo is merely a sampling method and does not provide an expectation estimator. Moreover, the estimator will still be influenced by the conditions on $ f $.\\n\\nQuestions\\n\\nCan the authors disentangle their work from past literature on neural SDE and Feynman-Kac's use of averaging observables?\\n\\nCan the authors provide evidence that their universal sampler outperforms the existing techniques, including those in Blechschmidt et al. (2021)?
Can they provide some canonical well-accepted benchmark at which they excel over others?\\n\\nAnswers: see above. We welcome and accept any suggestions for improving the expression and clarity of the paper.\"}", "{\"summary\": \"The authors proposed to leverage the Feynman-Kac equation via Physics-Informed Neural Networks (PINN) to approximate the target Mathematical Expectation efficiently and heuristically without causing a large variance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of using Feynman-Kac to approximate the expectation is interesting.\", \"weaknesses\": \"The scalability of the algorithm w.r.t. dimension is not verified sufficiently. d=20 is too small. There are no real-world simulations.\\n\\nThe authors criticize the large variance issue by the MCMC method but fail to justify theoretically why the proposed method yields a lower variance. The empirical support is limited.\", \"nit\": \"Theorem 2.1: the discretization error by Gronwall inequality is weak and exponentially dependent on time. Girsanov can be used to fix it.\", \"questions\": \"1. I don't know why and when MCMC is required to impose complex constraints on the distribution and performance function. Some references are suggested.\\n\\n2. I don't see when MCMC is not the optimal decoding method. It appears to me that burn-in is not a significant limitation and only affects the performance on a negligible scale and can be easily fixed via a large stepsize in the beginning for warm-up.\\n\\n3. Discussions on the limitations would be preferred.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses\", \"comment\": \"Thank you very much for your careful reading and for pointing out the unclear expressions in the paper. We have made the necessary corrections in a revised version.
It has now been updated.\", \"here_are_some_responses_to_the_questions\": \"Q1 In Theorem 2.1, what does \\\"Linear growth\\\" mean?\\nA1 On line 815, we provide the definition in the appendix.\\n\\nQ2 In Assumption 2.4, what does \\\"D is the metric of the parameter\\\" mean?\\n\\nA2 $P$ is a parameter of a triplet. We define the metric, which can be considered in the product space, for example, the first for $X_0$ is the Euclidean 2-norm of vectors, the second for $b$ is the $L_2$ norm, and the third for $\\\\sigma$ is the Frobenius norm.\\n\\nQ3 In Theorem 2.6, should \\\"we exist\\\" be \\\"there exists a set\\\"?\\nA3 Yes, your phrasing is more precise.\\n\\nQ4 In Algorithm 1, can the authors clarify what is the main difference between X_i and $X_t$? The distinction is not clear to me. \\nA4 This is a typo; X_i should be $X_{t_i}$. We have revised the corresponding paragraph.\\n\\nQ5 How are the integrals in Algorithm 2 computed? ...\\n\\nA5 Sure. First, to solve the FK equation, we need $(x_0, b, \\\\sigma)$, which are obtained from the diffusion bridge model described earlier. Of course, these can also be obtained from any other diffusion model, as shown in the first table of the appendix. Next, we need to sample the PDE at $(x, t)$, which means we will first run the SDE. In our previous diffusion bridge model, this is based on the SDE solver, so we already have the samples $(X_t, t)$ that have been run. Then we compute the residual terms corresponding to these points, which are equations (13) and (14), and add them to the training. After training is complete, the solution $u(x,t)$ at the initial time is the mathematical expectation of the terminal distribution, which is the integral. The reason for doing this can be referenced in Section 3 (Discussion). Overall, by doing so, we can maximize the performance of the diffusion model in estimating the mathematical expectation, without relying on the law of large numbers or ergodic theorems.
For example, we do not require that $X_T$ follows a stationary distribution, and we have extended the range of $f$ in the expectation estimator.\\n\\nQ6 How does the computational cost of this approach compare to other MCMC approaches?\\n\\nA6 We believe there may be some misunderstanding regarding the main contribution of this paper. We are not making incremental improvements to diffusion models (MCMC), although we propose a sampler. What we aim to express is that the expectation estimator, i.e., (diffusion model/MCMC) + FE, will provide better estimation performance than (diffusion model/MCMC) + (LLN/ETMC). Therefore, all our usage revolves around expectation estimation, such as estimating the partition function, etc.\", \"the_computational_cost_is_divided_into_two_parts\": \"The first part involves solving the diffusion bridge. We use Sinkhorn's method to solve the optimal transport problem. This part is relatively fast, and you can refer to the code in our supplementary materials. Of course, if we have a pre-trained diffusion model, this computation can be completely ignored, or we can use explicit SDE samplers such as Langevin sampling. The second part, which involves training, is computationally more expensive, so we have put a lot of effort into this area. Please refer to line 275. First, we control the number of time steps for the diffusion bridge and the total number of training points. Second, we consider diagonal parametrization of $\\\\sigma$.\\n\\nThe last question is very valuable, and we need to provide some clarification. First, what we aim to express is that sampling is not the same as expectation estimation, because the expectation estimator is also affected by $f$ and the quality of the sampling. However, we can avoid this issue compared to traditional expectation estimators. The influence of $f$ can be illustrated using the ISING model example, where $f$ leads to a large bias due to being an exponential function. 
A more intuitive example is the Langevin sampling example in the appendix, where the sampler we use is biased, but our method can reduce this bias without modifying the sampler itself.\", \"this_also_answers_your_question\": \"if we have some points sampled from the MCMC stationary distribution, we have three possible computation methods. The first is directly computing the average. The second method is to generate more points using the diffusion bridge and then compute the average. The third method is using the diffusion bridge + FK equation. We have shown that the third method is better than the first two. The main reason is that in the diffusion bridge, the parameters $b$ and $\\\\sigma$ include all the information about the distribution. For example, in Langevin sampler, the $b$ is composed of the density($\\\\frac12 \\\\nabla_x \\\\log p(x)$). Solving the PDE can better integrate this information.\\n\\nIf you have any questions, feel free to discuss further.\"}", "{\"title\": \"Responses\", \"comment\": \"Ten times. Apologies, this was indeed an oversight. However, since this code was written a long time ago, we are currently re-testing, saving the relevant data, and providing a standard deviation. But, it is difficult for us to deliver a complete PDF version before the deadline. Nevertheless, our experiments are reproducible because our publicly available code can verify this. I don\\u2019t believe this is a fundamental issue.\\n\\nRegarding the naming issue, this is merely a notation. One explanation is MCMC-C (Correction), as this method uses different bounds to correct the estimator. MCMC-R (Resampling) refers to the estimator obtained by resampling. 
MCMC-T (Target) reflects our perspective of treating FKEE as a target model (DBM + FE), where the ultimate goal of this model is to estimate expectations.\"}", "{\"title\": \"Responses\", \"comment\": \"Q1 The Feynman\\u2013Kac model (Algorithm 2) with the PINN solver lacks a convergence or error estimate, which would be valuable for assessing its accuracy and reliability.\\nA1 Yes, this is indeed a problem. We provide some empirical results, but theoretically, we have actually cited the relevant convergence results [1] for such PDEs in line 380.\\n\\n[1]Tim De Ryck and Siddhartha Mishra. Error analysis for physics-informed neural networks (pinns)\\napproximating kolmogorov pdes. Advances in Computational Mathematics, 48(6):79, 2022.\\n\\nQ2 In the experiments, the authors claim that \\\"using fewer points on the Markov chain achieves higher accuracy in approximating expectations.\\\" However, it is unclear if this result generalizes beyond the specific example provided, as it appears quite context-dependent.\\n\\nA2 We mainly emphasize the statement in line 485. The main text only provides a rough result, and more details about this experiment can be found in the Appendix 9.1.\\n\\nQuestions about conditions on functions\\n\\nA The function condition we are referring to is in the context of expectation estimators. In this field, Lipschitz functions will significantly affect the estimator's equation. For more details, you can refer to Section 2.4.4 in [1] and other concentration inequalities. Our assumption only requires the function to belong to $C^2$ and does not impose any restrictions on the Lipschitz constant. \\n\\nPotential limitations this might introduce for functions that are not in C^2, this could lead to potential issues in the convergence of PINN training, such as the generation of large gradients at the boundaries for functions that are not in $C^2$.\\n\\n\\n[1] Gobet, Emmanuel. Monte-Carlo methods and stochastic processes: from linear to non-linear. 
Chapman and Hall/CRC, 2016.\"}", "{\"metareview\": \"**Summary of Discussion:**\\nThe reviewers appreciated the novel ideas proposed in the paper, such as using the Feynman-Kac equation (FKE) and Physics-Informed Neural Networks (PINNs) to estimate mathematical expectations. However, the submission has significant issues that hinder its acceptance: \\n\\n1. **Lack of Novelty and Theoretical Depth:** The reviewers noted that the paper primarily combines existing techniques (diffusion bridges, PINNs, and MCMC sampling). While innovative connections were drawn, the theoretical contributions were incremental and lacked depth. Key claims, such as reduced variance and scalability to high dimensions, were not sufficiently justified either empirically or theoretically.\\n\\n2. **Insufficient Empirical Evidence:** \\n - The experiments were limited to relatively low-dimensional settings (e.g., \\\\(d=20\\\\) or Ising model with \\\\(15^2\\\\) sites). \\n - There was a lack of real-world applications or benchmarks against state-of-the-art methods on well-established datasets. \\n - Standard deviations and error analysis were missing, leaving readers unable to assess the robustness of results. \\n\\n3. **Overstated Claims:** \\n - Grand claims, such as \\u201cuniversal samplers\\u201d and \\u201cexpanding the scope of expectation estimators,\\u201d were not well-substantiated. \\n - Connections to broader MCMC frameworks were vague and not explicitly supported by evidence. \\n\\n4. **Presentation and Clarity Issues:** \\n - Poor organization made the paper difficult to follow. Important notations and algorithms were introduced without sufficient explanation. \\n - The related work section, relegated to the appendix, left readers unclear about the novelty of the contributions relative to prior work. \\n\\n5. 
**Rebuttal Limitations:** While the authors clarified some issues during the discussion, fundamental concerns about novelty, experimental rigor, and clarity remain unresolved. Addressing these would require a substantial overhaul of the paper. \\n\\n**Conclusion:** \\nThe paper addresses an important topic but lacks the necessary clarity, rigor, and empirical support to meet the standards of ICLR. A thorough revision with a stronger theoretical foundation, robust experiments, and clearer presentation is recommended.\", \"additional_comments_on_reviewer_discussion\": \"See above\"}", "{\"summary\": \"The paper presents two generative models to use in the context of sampling: a diffusion bridge with $W^2$-loss and an algorithm based on solving the Feynman-Kac PDE using PINNs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Developing new sampling methods using ideas from generative models is an active area with lots of promising recent advances.\", \"weaknesses\": [\"There are no really new theoretical contributions in this paper. All the theorems mentioned in the paper are standard results.
The diffusion bridge model has been used in various iterations in countless papers in the literature, for example by Doucet and collaborators.\", \"There is no indication that the algorithms presented here will scale up with dimension, especially if using the W^2 loss, so the claim that this improves on MCMC seem somewhat overblown.\", \"Using a PINNs to solve the Feynmann-Kac equation is very unlikely to work in high dimension as PINNs are usually are no easy to to train.\", \"Lack of experiments on high-dimensional data sets.\", \"No head-to-head comparisons with state of the art algorithms.\"], \"questions\": [\"What are the limitations of your methods regarding dimension?\", \"How does your algorithm compare with other neural ODE/SDE, diffusion models, bridges models in the literature?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript combines several DNN-based numerical techniques, associated with solving and learning SDEs and their PDE counterparts, with the aim of sampling from distributions and averaging over observables. Specifically, a type of neural-SDE is put forward and trained to converge to the desired distribution. Next averages of functions are estimated using the Feynman-Kac formula, wherein a PDE is derived from the learned SDE which \\u201ccalculates\\u201d the averages for us given that one sets the boundary condition to be the function one wishes to average. This complex problem is solved using a PINN approach. Some numerical results on random graphs are reported in a table.\\n\\nWhile the paper suggests an interesting technical path to explore in this important subdomain MC samplers\\u2014 it is not clear from reading the manuscript how much of this is incremental work stringing together a series of known results or rather an original and meaningful step forward. 
While I am only a casual user of Monte-Carlo, my strong feeling is that the benchmarks shown are insufficient to prove that\\nthis specific combination of techniques outperforms existing ones (including DNNs/PINN-based ones and more advanced MC techniques). Turning to the techniques themselves, it seems that neural-SDEs preceded the current work [Tzen, Raginsky 2019], and that using the Feynman-Kac formula with a combination of DNNs to estimate observables on an SDE has also been done before [Blechshmidt, Ernst 2021 and refs therein]. Reading into the latter work, it seems that DNNs have been used slightly differently than in the current work to solve the PDE associated with the SDE, but is this a conceptual change given the common use of PINNs to solve PDEs? Does it hold the key to any SOTA results? The related work section (which has been delegated to the appendix) and the generally casual referencing to related works (such as Tzen et al.) leave the non-expert reader with little understanding of the true novelty of the current results.\\n\\nThe presentation of the work also leaves a gap between conceptual claims and practical contributions. It also feels fragmented and the\\ncommon use of signposts and bold notation only worsens this in my mind. The conceptual claims, which are sometimes grand, are hard to substantiate. For instance, \\u201cEstablishing a Link Between Sampling Methods and High-Dimensional Partial Differential Equations\\u201d, taken at face value, can hardly be attributed to the current work with all the knowledge on SDEs, Fokker-Planck equations, and the Feynman-Kac formula. Also \\u201cExpanding the Scope of Expectation Estimators\\u201d feels vague. Is there a current scope of expectation estimators? What results in the current work expand this scope in a way that others can't?
Finally, in their introduction, the authors allude to the fact that the authors have an affirmative answer to the question \\u201cIs it possible to unify most existing MCMC algorithms into a cohesive framework to create a universal sampler for expectation estimation?\\u201c--- This is such a rich and complex problem that providing an affirmative answer would clearly violate various prevalent complexity theory assumptions. For instance, can the authors show that their sampler solves the Ising Spin-Glass problem? Can the authors even solve the much simpler case of the 2d Ising model at the phase transition and compute long-range observables? Does their technique outperform various ones used in physics to overcome sampling problems such as replica-exchange Monte Carlo?\\n\\nThe above issues, concerning the entanglement with previous works, evidence of going beyond SOTA, and its portrayed grand scope, prevent me from recommending it for publication in ICLR.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The work addresses an important fundamental topic. It provides an interesting combination of DNN-based techniques.\", \"weaknesses\": \"Relation to past literature is left vague. It does not show SOTA on a sufficient set of canonical problems with a suitable comparison to\\nother techniques. Its claims feel too broad and general. The writing style can be improved.\", \"questions\": \"Can the author disentangle their works from past literature on neural SDE and Feynman-Kac's use of averaging observables?\\n\\nCan the authors provide evidence that their universal sampler outperforms the existing techniques, including those in Blechshmidt\\net. al.(2021)? 
Can they provide some canonical well-accepted benchmark at which they excel over others?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses\", \"comment\": \"First, the neural network structure we used is consistent with that in [1], employing the tanh activation function. In our setup, the training points (t, x_t) and the loss function are also aligned with those in [1]. Theorem 3.1 in [1] presents the approximation theorem for neural networks, which serves as an existence theorem. Theorem 3.3 in [1] provides information about the parameters of the neural network. Finally, Theorem 3.7 establishes that the error can be controlled by the loss function. We include these results in the appendix as a concise form of support.\\n\\nThis perspective focuses on addressing the problem within a specific domain. Taking our Ising model example, it is easy to demonstrate that the function $ \\\\exp(x) $ satisfies $ C^2 $ smoothness but is not Lipschitz continuous. Using classical estimators in such cases often leads to significant bias. Another illustrative example is the classical Monte Carlo integration case, where $ f = x^n $. When $ n $ becomes sufficiently large, any estimator will introduce bias.\\n\\nFunctions satisfying $ C^2 $ smoothness are more common and natural in the field of statistical probability, such as loss functions in Bayesian analysis/ML or energy-based models. For functions with large Lipschitz constants, classical estimation errors can often be bounded using concentration inequalities. However, for $ C^2 $ functions, there are currently no corresponding concentration inequalities without imposing bounds on the $ C^2 $ norm. This inherently expands the scope of applicable statistical models.\"}" ] }
5s1qpjrNvZ
Guided Reinforcement Learning with Roll-Back
[ "Lauren Y. Taylor", "Wei Emma Zhang", "Claudia Szabo" ]
Reinforcement learning-based solutions are increasingly being considered as strong alternatives to classical system controllers, despite their significant sample inefficiency when learning controller tasks from scratch. Many methods that address this issue use prior task knowledge to guide the agent's learning, with several recent algorithms providing a guide policy that is sometimes chosen to execute actions instead of the learner policy. While this approach lends excellent flexibility as it allows the guide knowledge to be provided in any format, it can be challenging to decide when and for how long to use the guide agent. Current guide policy-based approaches typically choose a static guide sampling rate empirically, and do not vary it. Approaches that transfer control use simple methods like linear decay, or require hyperparameter choices that strongly impact the performance. We show that under certain assumptions, the sampling rate of the guide policy can be calculated to guarantee that the mean return of the learning policy will surpass a user-defined performance degradation threshold. To the best of our knowledge, this is the first time a performance guarantee has been established for a guided RL method. We then implement a guided RL (GRL) algorithm that can make use of this sample rate, and additionally introduce a roll-back feature in guided RL with roll-back (GRL-RB) to adaptively balance the trade-off between performance degradation and rapid transfer of control to the learner. Our approach is simple to implement on top of existing algorithms, robust to hyperparameter choices, and effective in warm-starting online learning.
[ "reinforcement learning", "guide policy", "warm-start" ]
Reject
https://openreview.net/pdf?id=5s1qpjrNvZ
https://openreview.net/forum?id=5s1qpjrNvZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yaeDdLPHHZ", "qvKwYvpUhQ", "lkzvqzF6bj", "fsvnxAk4XY", "fDR2BOGtfR", "dcOLURfNGr", "R45Esnq3MR", "BTnCIAb2or", "BReb4pefZp", "8dXRT1WZSS" ], "note_type": [ "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review" ], "note_created": [ 1730216164532, 1734763251849, 1730394670163, 1732161221085, 1732161419872, 1732161126076, 1737523731536, 1732161052109, 1730220813966, 1730172587255 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5884/Reviewer_PsrW" ], [ "ICLR.cc/2025/Conference/Submission5884/Area_Chair_B7fz" ], [ "ICLR.cc/2025/Conference/Submission5884/Reviewer_SUJU" ], [ "ICLR.cc/2025/Conference/Submission5884/Authors" ], [ "ICLR.cc/2025/Conference/Submission5884/Authors" ], [ "ICLR.cc/2025/Conference/Submission5884/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5884/Authors" ], [ "ICLR.cc/2025/Conference/Submission5884/Reviewer_F1aN" ], [ "ICLR.cc/2025/Conference/Submission5884/Reviewer_UNeW" ] ], "structured_content_str": [ "{\"summary\": \"The authors identify issues in prior works dealing with guided reinforcement learning and aim to address them by deriving an adaptive guide policy sampling rate, which should enable fast & stable transfer of the guide knowledge to the learner policy, while at the same time maintaining performance above a defined maximum tolerable degradation threshold. The method is implemented on top of the commonly known IQL algorithm.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I like the general idea of the paper & the derivation of guarantees under some (strong) assumptions are interesting on their own. 
To the best of my knowledge those are novel & could be helpful in some (limited) scenarios.\", \"weaknesses\": \"Main weaknesses from my point of view:\\n\\n1) The empirical evaluation is not very comprehensive:\\nThere are 2 experiments in the CombinationLock environment, which are nice to get an intuitive understanding of the method, however are not very realistic RL tasks. Then there are 3 experiments on Antmaze, in which the LD baseline performs similarly well as the new algorithm. As the authors motivate the method from a quite applied perspective, I am wondering whether that means that there is not much expected gain from the method in practical scenarios. Also, I believe the positive dense reward setting is derived, but not experimented with.\\n\\n2) The assumptions leading to the derived sampling rates appear to be violated:\\nThe authors realise that in practice the assumptions they make for the derivations are violated & thus add the Roll-Back method in order to improve performance. While the derived results are of course interesting on their own, one has to ask from a practical standpoint whether deriving & using something based on wrong assumptions makes a lot of sense - especially when taking into account the not so convincing empirical evaluation.\\n\\n3) Slightly overstated claims:\\nE.g. in lines 67-69, you talk about \\\"a guided RL approach, [...] with a guaranteed online performance above a user-defined threshold\\\". This sounds like a broad claim which I think needs to be qualified further (only mean performance is above threshold, individual episodes can be below; limited to certain reward scenarios, i.e. not for general MDP / RL; only under questionable assumptions, i.e. convergence). 
The authors also still talk about guarantees when the Roll-Back approach is added in (\\\"A GRL with a roll-back algorithm (GRL-RB) that helps to retain the performance guarantee of GRL while relaxing its assumptions\\\"), even though at that point it becomes clear that there is no guarantee (Roll-Back happens exactly when guarantee is violated; plots show performance can be much below threshold for a long time). Generally I think one has to be more cautious with the term guarantee, e.g. in the abstract it is formulated better.\\n\\n4) Limited applicability when compared to all possible reward landscapes:\\nThis is not such a big issue in general since guarantees even in just a few environments would be helpful. It's also briefly mentioned by the authors in the discussion.\\n\\n5) I think Fig 4a might be misleading - it's not clear when something is rolled back, it just looks like violations occur in almost every step.\\n\\nIt may be that I am misunderstanding some issues, so I am prepared to raise my score if the points can be addressed.\", \"questions\": [\"The evaluation sample rates in the bottom plots 1-3 don't align with what I expected, i.e. why are the percentages of the static baselines not exactly .25 & .75 (e.g. fig 2b has them below .2 & at .6)? I presume the variations (shaded region) of the baselines are the difference between true parameter and sampling? The LD baseline does not appear to really reduce alpha by the same amount every time step, why is that?\", \"One thing I also don't understand: Why is the alpha not degraded much faster? 
What I mean is why is it always degraded by $(1-\\\\alpha_0)$ and not by $(1-\\\\alpha_c)$ (algo 1, line 18) - if the assumptions on convergence you make were to hold, the updated alpha should be used for the next degradation or am I missing something?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper studied guided reinforcement learning, which introduces a sampling method that alternates actions between a guide policy and an online learning policy. Specifically, it proposes a dynamic sampling rate adjustment for the guide policy, referred to as GRL, along with a variant featuring a roll-back capability, called GRL-RB.\", \"strengths\": \"The experimental results demonstrate that the proposed methods ensure user-defined performance and outperform other baseline approaches, and also provide a theoretical view on the selection rate of the expert policy in guided RL setup.\", \"weaknesses\": \"There are several major weaknesses raised by the reviewers, including:\\n1. Lack of testing environment and existing baselines, as pointed out by Reviewer SUJU, Reviewer F1aN, and Reviewer PsrW.\\n2. Lack of novelty, as pointed out by Reviewer F1aN.\\n3. Writing and presentation, as pointed out by Reviewer SUJU, Reviewer PsrW, and Reviewer UNeW.\\n\\nAll reviewers voted to reject the paper, and it appears that the current version of this paper is not ready to be published in ICLR.\", \"additional_comments_on_reviewer_discussion\": \"None of the reviewers were convinced and changed their minds during the rebuttal period.\"}", "{\"summary\": \"This paper presents a guided reinforcement learning method, GRL-RB, which adaptively balances the use of the guided policy and RL agent, providing the first performance guarantee in guided RL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. 
The main strength is that the paper provides a theoretical view on the selection rate of the expert policy in guided RL setup, which is important and novel.\", \"weaknesses\": \"1. The major weakness is the lack of existing baselines for this problem. For example, [1, 2] both focus on adaptively learning when to query the experts.\\n2. The presentation can be improved. \\n * Line 79, the space also relates to the time step T;\\n * Line 189, the definition of 'Combination Lock MDP' can be further explained with formulas;\\n * For Figure 1,2, the same legend should use the same colours. In Figure 3, there is no legend for the expert performance.\\n * The derivations on $\\\\alpha$ should be explained more, regarding where it comes from and what it means. \\n3. The problem seems like an on-policy/off-policy problem, and it is not clear why an offline RL baseline is considered. \\n\\n[1] Biedenkapp, Andr\\u00e9, et al. \\\"TempoRL: Learning when to act.\\\" *International Conference on Machine Learning*. PMLR, 2021.\\n[2] Schulz, Felix, et al. \\\"Learning When to Trust the Expert for Guided Exploration in RL.\\\" *ICML 2024 Workshop: Foundations of Reinforcement Learning and Control--Connections and Perspectives*.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your time in reviewing our paper and providing feedback.\\n\\nWeaknesses\\n\\n1. The focus of this paper was on deriving the expressions for the sampling rate, and introducing the roll-back mechanism for when it is difficult to ensure convergence. This used most of the available space, so we included enough experiments to demonstrate the results met our theoretical expectations. While LD has a similar design to our algorithms, it does not come with the performance guarantee and issues with it being over-conservative are discussed in the text. 
Figures 1, 2 and 8 have results for the positive reward scenarios. \\n2. Given that the decision of the rate at which to sample either the guide or learner policy is one of the most challenging to justify in guided RL methods, we feel the derivations we have provided do give a useful starting point for a choice of sampling rate, despite the issues that can arise with convergence. Our roll-back mechanism is just one way these can be addressed, and we believe our derived sample rate starting points may give other authors (and ourselves, in future work) the opportunity to develop other interesting methods to address this. \\n3. We agree that the language could be improved to better convey that there can be issues with the guarantee caused by lack of convergence. Thank you for providing specific suggestions and examples, which will assist us very much in improving this issue. \\n4. \\n5. The sampling rate plot (4b) is meant to show when the sampling rate is rolled back, as it shows the changes in GRL-RB sampling rate correspond to the drops in reward. It is difficult to show a 1-to-1 correspondence, however, when the plots are averaged over many runs. Perhaps providing a single-run example would show the correspondence more clearly.\\n\\nQuestions\\n\\n1. The reason for the rates not exactly adhering to 25/75% is discussed briefly in lines 273-177; however, the second reason is perhaps not well explained, so we will try again here. The way we would implement a static sample rate of, for example, 50%, was in each episode, at each time step, to have a 50% chance of choosing the guide. However, since the episode ends as soon as a non-guide (i.e. non-optimal) action is taken, if a learner action is chosen early in the scenario the episode will end and the sampling rate of the guide will be less than expected for that episode.
For example, for an episode where a learner action is chosen on the first time step, the actual rate of guide sampling will be 0% for that episode, even if the overall rate is 75%. We reported the average sampling rates achieved per episode, so this means overall the sampling rate can end up being lower than expected. We could have averaged over the aggregated time steps from all episodes, which would have averaged out to the correct rate, but we felt our averaging method was more correct for an episodic environment. Similarly, for the LD method, there can be some variations in the actual amount reduced per time step depending on when the episode ends. \\n2. What we are trying to show is this: e.g. if we have $\\\\alpha\\_0 \\= 0.9$, giving us $1-\\\\alpha\\_0=0.1$, the first sampling rate will be $\\\\alpha\\_c=\\\\alpha\\_0$. Once convergence has occurred, we assume $\\\\pi\\_l$ has learned to perform equally as well as the guide for that percentage of states, so we can assume that $\\\\alpha\\_c$ now essentially represents 100% use of the guide agent. To meet our performance guarantee, we can only then decrease this by the originally derived $1-\\\\alpha\\_0=0.1$. So, at each successful evaluation, we decrease $\\\\alpha_c$ by 0.1, giving us successive $\\\\alpha_c$s of \\[0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0\\].\"}", "{\"comment\": \"Thank you for your review and feedback.\\n\\nWeaknesses\\n\\n1. We agree that we could have provided a more detailed explanation of the algorithms, and would be prepared to add this in. The difference is this algorithm\\u2019s focus on guaranteeing a certain level of performance when transferred online. \\n2. \\n a. $n\\_{\\\\pi\\_l}/t\\\\%$ is not an additional threshold, it just describes the percentage of time steps during an episode that the learner has been sampled for so far.
For example, if we calculate a $\\\\\\\\pi\\\\_l$ sample rate $\\\\\\\\alpha=30\\\\\\\\%$, then we want to ensure the sample rate does not go above 30%. To do this, we simply check that the amount the learner policy has been sampled for is less than $\\\\\\\\alpha=30\\\\\\\\%$, i.e. $n\\\\_{\\\\\\\\pi\\\\_l}/t\\\\\\\\%<30\\\\\\\\%$. It\\u2019s just a check to ensure the chosen sampling rate is respected. This is explained in lines 165-167. \\n b. The justification behind $1-(1-\\\\\\\\alpha)$ is this: if we have $\\\\\\\\alpha\\\\_0 \\\\= 0.9$, giving us $1-\\\\\\\\alpha\\\\_0=0.1$, the first sampling rate will be $\\\\\\\\alpha\\\\_c=\\\\\\\\alpha\\\\_0$. Once convergence has occurred, we assume $\\\\\\\\pi\\\\_l$ has learned to perform equally as well as the guide for that percentage of states, so we can essentially assume that $\\\\\\\\alpha\\\\_c$ now essentially represents 100% use of the guide agent. To meet our performance guarantee, we can only then decrease this by the originally derived $1-\\\\\\\\alpha\\\\_0=0.1$. This is explained in lines 199-202. \\n c. We are not entirely sure what you mean by the theoretical benefits of the roll-back mechanism, the justification is that if the performance falls below the performance threshold, then the sampling of the learner agent is rolled back to the sample rate that achieved the previous best performance, thus giving the agent the opportunity to continue learning at this \\u2018easier\\u2019 stage, where the guide is used more often. \\n3. We agree to the extent we discuss the limitations in the discussion, however the reward schemes we derived sampling rates for are reasonably common in RL, and we provide these theoretical results as a starting point for ourselves and others to further develop methods to guarantee a level of performance during the shift to online learning. 
We accept there can be convergence challenges, and we introduced the roll-back mechanism to help with that (to relax the assumption you mention in the sentence in your summary, \\u201cthe assumption of convergence between steps of $\\\\\\\\alpha$..\\u201d), but we believe other researchers may find these derivations useful in developing their own methods of meeting the convergence challenge.\"}", "{\"comment\": \"Thank you for your time in providing us with this detailed feedback.\\n\\nWeaknesses\\n\\n1. We agree that many guided policy methods exist, however we believe our focus on providing a performance guarantee during the shift to online is our novel contribution. \\n2. The focus of this paper was on deriving the expressions for the sampling rate, and introducing the roll-back mechanism for when it is difficult to ensure convergence. This used most of the available space, so we included enough experiments to demonstrate the results met our theoretical expectations. We agree that experiments with differing data budgets could be included in future work, however given the comparison baselines were all provided the same initialisation, we still believe the provided experiments are sufficient in showing GRL-RB\\u2019s effectiveness. \\n3. The purpose of deriving equations for $\\\\\\\\alpha$ for different reward structures is so that the user does not have to finetune $\\\\\\\\alpha$. For the reward structures considered, it allows the user to calculate $\\\\\\\\alpha$ exactly. All the user must do is define the *performance threshold* $\\\\\\\\mu$ they wish to maintain (e.g. 75% of the performance of the guide).\\n\\nGeneral Remarks \\nWe thank you for your suggestions, and agree that they would improve the clarity of the paper.\\n\\nQuestions\\n\\n1. The initial derivations for $\\\\\\\\alpha$ were only completed for some common reward structures, and interleaved positive/negative reward structures were not included in this work. 
This limitation was mentioned in the discussion section as an area for future work (lines 467-469). \\n2. It should not be possible as long as the algorithm chosen for learning online is capable of learning. If the algorithm is so poor that it is not capable of learning, then $\\\\\\\\alpha$ would not be reduced. However, this would be the correct behaviour in order to maintain performance above the user defined threshold. We discuss our requirement that the online algorithm be sufficient for learning on line 198, Section 3.1.1. \\n3. Given that JSRL was a relatively straightforward procedure on top of IQL, we decided to write our own implementation, so we did not contact the authors. We also used IQL as the base algorithm, as discussed on line 209, Section 3.1.2. \\n4. We felt this would be an unnecessary comparison, especially as we provided several experiments in Combination Lock and the robustness experiment in AntMaze which shows guides at different levels of capability. \\n5. Thank you for pointing out the unclear wording here \\\\- we should have stated that \\u201cwe do not re-use *learned* parameters (weights and biases) of the offline agent\\u201d. As stated on line 1145, this is to demonstrate that this method works for prior policies of any form, including those that do not have offline policies with weights and biases (such as a set of rules).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our paper.\\n\\nWeaknesses\\n\\n1. Thank you for your comments. As stated in this paper, to the best of our knowledge this is the first guided policy method that has been produced to focus on guaranteeing a level of performance when transferring from off- to online learning, so while there are not a large number of existing baselines, we used the IQL and JSRL baselines as examples of state-of-the-art approaches to this problem that are closest to our approach. In your point 3\\\\. 
below, we believe you are referring to IQL as an offline baseline, however in the IQL paper \\\\[1\\\\] Section 5.3, it describes how it can be used for online fine-tuning. For the references you have included, \\\\[2\\\\] does have a similar aim to ours, however their focus on deciding how many time steps for which to use the expert guidance is very similar to the JSRL approach, which we already included as a baseline. We could have included this as a reference, however. \\\\[1\\\\] does not seem to make use of expert guidance, apologies if we have missed something. \\n2. \\n a. Would you mind please elaborating on the issue on Line 79? \\n b. Appendix C provides a full description of the environment, however the explanation in the text could be summarised with equations. \\n c. Thank you for noticing these Figure errors. We will correct them. \\n d. The full derivations were provided in Appendix E to save space. \\n3. See answer to point 1\\\\.\\n\\n\\\\[1\\\\] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. \\\"Offline Reinforcement Learning with Implicit Q-Learning.\\\" *Deep RL Workshop NeurIPS 2021*.\"}", "{\"summary\": \"The paper introduces a sampling method that alternates actions between a guide policy (which can originate from various sources, such as model-based or imitation learning methods) and an online learning policy. The main contribution appears to be a mechanism that balances data sampling between the guide and online learning policies by adjusting a user-defined sampling rate, denoted as $\\\\alpha$, which is based on policy performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The method effectively alternates actions from expert and learning policies, enabling the learning policy to leverage its own actions, a crucial factor for self-correcting estimations. 
Additionally, it is notable that GRL-RB appears resilient to distribution shift following pre-training, though more extensive testing is needed to confirm this observation.\", \"weaknesses\": \"* *Lack of novelty*: The primary drawback is limited novelty, as similar approaches that interleave actions from expert and learning policies have been previously explored. The paper acknowledges this by referencing related works and including them as baseline comparisons (e.g., [1]).\\n\\n* *Limited testing and baseline comparisons*: The approach was evaluated only within variants of the AntMaze environment considering high-dimensional state space environments, which limits insights into its broader applicability. Testing in more diverse environments, such as those found in Gymnasium, like Atari games or other complex benchmarks, could reveal the method's adaptability to different reward structures and exploration needs. Additionally, while the paper reviews a broad range of related works, the baseline algorithms included tend to underperform compared to simpler approaches, such as linear decay (LD) sampling. It would also be interesting to assess how GRL-RB performs with varying data budgets during pre-training.\\n\\n* *User-defined sampling rate*: The proposed method requires a user-defined sampling rate $\\\\alpha$, which determines the proportion of actions sourced from the expert or the learning policy. While this flexibility can accelerate learning, it places a significant burden on the user to fine-tune $\\\\alpha$, which may hinder practical applicability. Prior work, such as [2], explores this trade-off and provides insights into the optimal balance of data from both sources (expert and learner) to mitigate overestimation issues. Drawing from these insights might further constrain and optimize $\\\\alpha$'s range.\\n\\n**Overall Assessment**\\n\\nOverall, the paper lacks novelty, a thorough baseline comparison, and diverse environment testing. 
The structure is sometimes difficult to follow, and I often found it necessary to consult the Appendix to clarify the role of certain variables. For future improvements, I recommend decoupling the method from its reliance on specific test environments and applying it to more complex, generalizable tasks.\\n\\n**General remarks**\\n\\n-Section 2. \\u201cGuide, Learning \\u2026\\u201d: It can be confusing how a policy $\\\\pi$ is derived from an offline and online policy, since either actions are sampled from one of the previous ones. I suggest considering only policies $g$ and $l$ in the notation.\\n\\n-Line 201 \\u201cthe new guide sampling rate\\u2026\\u201d the authors should distinguish between the current $\\\\alpha$ and the initial one used to update the former.\\n\\n-The authors claim throughout the text that some features, such as the \\u201croll back,\\u201d can, for instance, \\u201cspeed up the transfer learning\\u201d before showing the results or evidence to support this (see section 4). Some (important) results are mentioned in the paper but are available in the Appendix.\\n\\n-I think the paper would benefit from having a dedicated section to describe the baselines IQL and JSRL, and eventually any other method that could be included for comparison.\\n\\n**References**\\n\\n[1] Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Jos\\u00e9phine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, and Karol Hausman. 2023. Jump-start reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 1439, 34556\\u201334583. \\n\\n[2] Ostrovski, Georg, Pablo Samuel Castro, and Will Dabney. \\\"The difficulty of passive learning in deep reinforcement learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 23283-23295.\", \"questions\": \"[Q1] The authors show in section 3.2.3 a distinction between negative and positive dense rewards. 
What about a reward normalization such as [-1, 1]?\\n\\n[Q2] I wonder if tested in more environments, eventually $\\\\alpha$ would get stuck and not converge towards 0, even employing the roll back mechanism. Would that possibly happen?\\n\\n[Q3] Have you tried to contact the authors of JSRL to obtain its implementation? It seems that their main result is a combination of IQL + JSRL. Is this the way it was tested in this work? \\n\\n[Q4] Why not consider different amounts of data during the pre-training phase?\\n\\n[Q5] In G.1.2, could you clarify why you haven't employed the same parameters of the offline agent?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses guided reinforcement learning, utilizing prior task knowledge to enhance the agent\\u2019s learning process. Specifically, it proposes a dynamic sampling rate adjustment for the guide policy, referred to as GRL, along with a variant featuring a roll-back capability, called GRL-RB. The experimental results demonstrate that the proposed methods ensure user-defined performance and outperform other baseline approaches.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses the critical issue of utilizing prior task knowledge to guide the agent's learning, a significant aspect of reinforcement learning.\", \"The paper provides a comprehensive introduction and overview of related work, laying a solid foundation for the proposed methods.\"], \"weaknesses\": [\"Overall, the paper's writing and structure need significant improvement, making it difficult to follow the overall flow.\", \"The motivation behind the proposed method and its specific details remain largely unclear. For example, in section 3, the authors mention employing a similar sampling approach to Chang et al. (2015) and Chang et al. 
(2023), as well as a method akin to JSRL (Uchendu et al., 2023). However, it is not clear what these methods entail or how they relate to the current work; readers should not have to refer to external papers for this information. A more detailed formal description of the proposed method should be included in the main body of the text.\", \"Additionally, several components of the method lack clarity and theoretical justification. For example, the rationale behind using $n_{\\\\pi_l}/t$ as an additional threshold is not explained. Similarly, the reasoning for the new guide sampling rate being $\\\\alpha-(1-\\\\alpha)$ and the theoretical benefits of the roll-back mechanism are unclear. These choices appear to be made heuristically.\", \"The function of Section 3.2 is also confusing. The derivation of the sampling rate relies on perfect knowledge of the \\u2018Combination Lock\\u2019 task and the guiding policy, which is impractical for practical tasks like the AntMaze task used in this paper. While didactic examples can illustrate theoretical guarantees, the paper lacks more general theoretical results. As it stands, Section 3.2 only suggests that the method works under special conditions that require perfect information.\", \"In summary, while the paper claims to provide a user-defined performance guarantee, it suffers from a lack of clarity and theoretical justification. The assertion that \\u201cthe assumption of convergence between steps of $\\\\alpha$ is key to the success of GRL\\u201d raises concerns. If the \\\"key\\\" to a method relies on an assumption, the authors should reconsider the method and adopt more conservative claims.\"], \"questions\": \"See the weaknesses noted above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5rfj85bHCy
HyResPINNs: Adaptive Hybrid Residual Networks for Learning Optimal Combinations of Neural and RBF Components for Physics-Informed Modeling
[ "Madison Cooley", "Mike Kirby", "Shandian Zhe", "Varun Shankar" ]
Physics-informed neural networks (PINNs) are an increasingly popular class of techniques for the numerical solution of partial differential equations (PDEs), where neural networks are trained using loss functions regularized by relevant PDE terms to enforce physical constraints. We present a new class of PINNs called HyResPINNs, which augment traditional PINNs with adaptive hybrid residual blocks that combine the outputs of a standard neural network and a radial basis function (RBF) network. A key feature of our method is the inclusion of adaptive combination parameters within each residual block, which dynamically learn to weigh the contributions of the neural network and RBF network outputs. Additionally, adaptive connections between residual blocks allow for flexible information flow throughout the network. We show that HyResPINNs are more robust to training point locations and neural network architectures than traditional PINNs. Moreover, HyResPINNs offer orders of magnitude greater accuracy than competing methods on certain problems, with only modest increases in training costs. We demonstrate the strengths of our approach on challenging PDEs, including the Allen-Cahn equation and the Darcy-Flow equation. Our results suggest that HyResPINNs effectively bridge the gap between traditional numerical methods and modern machine learning-based solvers.
[ "physics-informed neural networks", "residual networks", "partial differential equations", "radial basis function networks" ]
Reject
https://openreview.net/pdf?id=5rfj85bHCy
https://openreview.net/forum?id=5rfj85bHCy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y9gueXKk4m", "ww2v2DrIxv", "uVDPPMbeRO", "hiLQxkguBU", "eTxbNfPpPH", "c769Ktmupv", "agTgrv1Zlm", "NO5GO8L8L0", "Lk7pqmgOWf", "DgPZ94QM0A", "DUlV3t5kVT", "8BaOlVRqqc" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1733298898500, 1730580985898, 1733298928350, 1737524062312, 1733298982261, 1734894896979, 1733299043158, 1733298949622, 1733299020599, 1730532710617, 1730654975787, 1733158362917 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Submission10566/Reviewer_AtLi" ], [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Submission10566/Area_Chair_9SoU" ], [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Submission10566/Authors" ], [ "ICLR.cc/2025/Conference/Submission10566/Reviewer_6pTT" ], [ "ICLR.cc/2025/Conference/Submission10566/Reviewer_ba3x" ], [ "ICLR.cc/2025/Conference/Submission10566/Reviewer_AtLi" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer ba3x (part 1)\", \"comment\": \"We thank the reviewer for their comments. Below we address each weakness individually.\\n\\n> **Regarding the computational complexity of the proposed approach.**\\n\\nThank you for highlighting the importance of discussing computational complexity. In the original submission, we included an analysis of computational cost in Figure 6, where the right plot explicitly compares the mean wall-clock training time and error across different methods, including our proposed approach (HyResPINNs). 
As shown, while HyResPINN has a slightly higher (overall) training cost, it outperforms baseline methods in terms of accuracy. Specifically, the vertical lines in the figure indicate how long each model trained until achieving a relative L2 error of 10^-2. The figure shows that ResPINNs, ExpertPINNs, and HyResPINNs (ours) achieve this threshold at similar wall-clock times, while PirateNets took the longest. This trade-off between training time and accuracy highlights the robustness and efficiency of our approach to solving complex PDEs.\\n\\nHowever, to further address your concerns, the table below includes a more detailed breakdown of computational complexity. Specifically, this table analyzes the wall-clock training time (in minutes) of HyResPINNs compared to each baseline approach corresponding to the best accuracy results presented in Table 1. \\n\\n| Problem | Domain | Boundary Cond. | PINN | ResPINN | Expert | Stacked | PirateNet | **Proposed Method** |\\n|-------------------|---------------|----------------|--------|---------|---------|----------|-----------|---------------------|\\n| Allen-Cahn | 1D Space/Time | Periodic | 34.99 | 17.65 | 64.81 | 22.29 | 116.72 | 150.88 |\\n| DarcyFlow | 2D Annulus | Neumann | 2.37 | 2.76 | 18.67 | 2.18 | 11.88 | 5.73 |\\n| (smooth coeff.) | | Dirichlet | 2.08 | 2.09 | 17.64 | 1.64 | 10.87 | 11.87 |\\n| | 3D Annulus | Neumann | 8.1 | 8.5 | 27.6 | 6.9 | 20.8 | 35.9 |\\n| | | Dirichlet | 5.9 | 6.0 | 21.3 | 5.2 | 15.3 | 30.1 |\\n| (rough coeff.) | 2D Box | Neumann | 2.4 | 2.4 | 7.7 | 2.1 | 7.7 | 8.4 |\\n\\n> **Regarding PDE benchmarks**\\n\\nWe appreciate the reviewer\\u2019s comments regarding the limited number of benchmark PDEs in our current experiments. While the selected problems (e.g., Allen-Cahn and Darcy Flow) were chosen to evaluate distinct aspects of our method, such as handling nonlinear dynamics and varying boundary conditions, we recognize that additional examples would further strengthen the evaluation. 
If accepted, we plan to expand our experiments to include additional PDEs.\\n\\n> **RBF kernel explanation.**\\n\\nWe appreciate the reviewer's observation regarding Figure 3 and would like to clarify the relationship between kernel smoothness and the ability to approximate sharp transitions. While Figure 3 visualizes the learned RBF kernels, their smooth appearance reflects the inherent properties of individual RBF kernels rather than the overall behavior of the hybrid model. \\n\\nSpecifically, when using Wendland kernels\\u2014a compactly supported RBF kernel\\u2014the smoothness is localized within the kernel's support region, and the kernel transitions sharply to zero at the boundary of its support. This compact support introduces non-smooth behavior at the edges while maintaining smoothness within the support region. The hybrid model benefits from the locality in the Wendland kernels, which ensures that each kernel focuses on a specific region of the solution, while the neural network component provides the flexibility to adjust to global features. Together, each component enables the hybrid model to balance smooth behavior and sharp transitions effectively.\\n\\nFurther, to directly address this concern, we refer the reviewer to Figure 4, which provides evidence of the HyResPINN's ability to handle sharp transitions. Specifically, the absolute error plots in the second row of Figure 4 illustrate that HyResPINN consistently achieves lower errors near sharp transitions compared to both the standard PINN and the RBF network (RBFNet). These results highlight the hybrid model's effectiveness in capturing sharp features where other methods falter. The bottom row of Figure 4 shows that HyResPINN accurately captures smooth and sharp features in the Allen-Cahn solution across all time steps, aligning closely with the exact solution (green). 
In contrast, the standard PINN struggles to resolve sharp features, resulting in large errors.\"}", "{\"summary\": \"This paper proposes a novel architecture for Physics Informed Neural Networks (PINNs), combining elements from both deep learning and kernel modeling. Specifically, their architecture, named HyResPINNs implement blocks where a dense layer and a RBF kernel regression are computed in parallel, then combined by an adjustable weighted average. The network also implements trainable residual connections between blocks to help train deeper architectures.\\n\\nAfter providing a thorough literature review, the authors formally describe their method, highlighting how the mixed Neural Network/RBF components help the network learn combinations of smooth and high-frequency components of the target function, leveraging the advantages of each method. Their training method also includes a regularization term, penalizing higher contributions of the RBF components in order to limit overfitting to very high-frequency functions. Finally, the authors evaluate their method against several competitive baselines on two PDEs: the Allen-Cahn equation, and the Darcy Flow. Their experiments show HyResPINNs consistently outperforming other methods on these benchmarks.\\n\\nThe authors conclude by summarizing their work, highlighting how their method is able to capture sharp solutions at a manageable computational cost.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"### Originality\\nTo the best of my knowledge, this work appears to present novel and interesting ideas. By using a very local RBF kernel in order to assist (but not completely replace) traditional neural network solvers for PINNs, their work aims to help address the well-known spectral bias problem for learning highly oscillatory functions.\\n\\n\\n### Quality\\nThe paper presents a very interesting idea, and details the HyResPINN architecture in an effective manner. 
Their experiments are grounded and well-executed, and they compare their method against several competitive baselines. The authors also do a good job of summarizing existing work and highlighting the differences between different methods. I also like their idea of regularizing the weight of the RBF contribution of each block to stabilize the training process.\\n\\n\\n### Clarity\\nThe paper is overall very well written and clear in its explanations. The literature review at the introduction is thorough, and the required information is presented in a clear and concise manner. The figures are informative and capture the authors' arguments well. I particularly like figure 3, where they show kernels learned in HyResPINN are more local than ones from RBFPINNs.\", \"weaknesses\": [\"### Limited Experiments\", \"In my view, the biggest setback of this paper is the limited range of PDEs considered in the experiments section. The authors do a good job of executing the experiments included in the paper, but only consider the Allen-Cahn equation and the Darcy Flow problem (under different conditions). It is great to see their method work well in these cases, but I believe the paper would greatly benefit from additional experiments using other PDEs, especially ones that challenge existing PINNs architectures. Examples of PDEs that could be considered, in order of difficulty, include: 1) the Poisson equation with different forcing functions (depending on the forcing function this can become a harder problem); 2) Burgers' equation, 3) the advection equation; 4) the Kuramoto\\u2013Sivashinsky equation, 5) problems using the Navier-Stokes equation, such as lid-driven cavity flow (even with low Reynolds number). 
Although not all of these PDEs need to be considered, including at least one or two of them (or other suitable problems) could make for a stronger paper.\", \"### Flawed Notion of Training Set Size in PINNs\", \"Another critique I have is on the experiments/plots where they examine the performance of different architectures using different training set sizes (e.g.: figures 7 and 8). Under the Physics Informed framework, although the target function is in principle unknown, we can query the differential operator $\\\\mathcal{F}$ from equation (1) on any point of the input domain using automatic differentiation. This means that it is possible to sample collocation points at will at any given point in the domain, as the authors mention themselves in line 147. In fact, it is always recommended to sample points randomly and independently across the entire input space at each iteration of the training algorithm, effectively meaning that there is unlimited \\\"training data\\\" available for PINN problems. Not only is this approach (independent random collocation points at each iteration) more theoretically grounded by taking advantage of the mesh-less nature of PINNs, it often leads to more accurate and robust performance. This renders the comparison of different \\\"training sizes\\\" meaningless, as it is always possible (and encouraged) to sample new points.\", \"### Other comments/typos that did not affect my score:\", \"[line 140] In equation (4) the $\\\\mathcal{D}$ should be $\\\\mathcal{F}$ instead, in order to be consistent with equation (1).\", \"[Figure 1] There is a typo in the diagram. 
According to the formula from equation (10) and regularization shown in equation (11), it should be $\\\\phi(\\\\alpha^{(l)})$ multiplying $F_R^{(l)}(x)$ and $(1-\\\\phi(\\\\alpha^{(l)}))$ multiplying $F_N^{(l)}(x)$, not the other way around, as it is currently shown.\", \"[line 215] In equation (10), the activation $\\\\sigma$ is shown to be part of the function $H^{(l)}$, while in the diagram of Figure 1 the $\\\\sigma$ is shown outside of the function $H$. One of these should be changed for the sake of consistency.\", \"[line 303] There is a mention of an \\\"input block\\\" that lifts the original input to a higher dimension in the diagram, but Figure 2 does not include this block, only the \\\"output block\\\".\", \"[line 362] Given the initial conditions, the boundary conditions are not satisfied at $t=0$, making this problem ill-posed as it is (you can check that $u_x(0,-1)=2$, while $u_x(0,1)=-2$). This is, unfortunately, a very common mistake in the PINNs community, as this specific benchmark has now become standard. Instead, a very similar solution is given by assuming zero Dirichlet boundary conditions on $x=\\\\pm1$, which makes the problem well-posed and has also been studied in a couple papers. It likely won't make much of a difference in the results, but if possible I would encourage the authors to run this benchmark with Dirichlet boundary, or at the very least add a remark/footnote about the ill-posed nature of the problem.\", \"[lines 710 + 712] There seems to be a bad reference pointer in the LaTeX file.\"], \"questions\": [\"I list below my questions/suggestions to the authors, in order of how influential they would be towards increasing my score of their submission.\", \"[**Including More PDEs In Experiments**] As mentioned above, I believe including more experiments with other PDEs would make for a stronger paper. Suggestions for which PDEs to use are detailed in the previous section. 
I would be open to increasing my score if more experiments are conducted, even if HyResPINNs don't necessarily beat all other baselines on them.\", \"[**Reconsidering Notion of \\\"training size\\\" For PINNs + Randomly Sampling Collocation Points**] As mentioned in the previous section, I would urge the authors to move away from the notion of \\\"training size\\\" for PINNs, and train networks with freshly-sampled random collocation points at each iteration.\", \"[**Clarity on Formulation of Residual Connections**] The adaptive residual connections are implicitly defined in line 234, but it would be good to add an extra formula detailing it. Are the $\\\\beta^{(l)}$ parameters constrained in any way, or is it left for the sigmoid function $\\\\phi$ to make the residual connection in the $(0,1)$ interval? If so, initializing the $\\\\beta^{(l)}$ to be 1, as indicated in line 237, leads the residual connection to have strength $\\\\phi(1) \\\\approx 0.73$. Is there a particular meaning behind this choice? In a related issue, initializing the $\\\\alpha^{(l)}$ to be 0.5 means that $\\\\phi(0.5)\\\\approx 0.62$, which is not an equal contribution, as mentioned in line 237. Did you mean to say the $\\\\alpha^{(l)}$ are initialized to be 0?\", \"[**Other RBF Kernel Possibilities**] The choice of using the Wendland kernel seems well motivated to me, and overall a good choice, but it would be great to see the effect of using other kernels in the RBF blocks of HyResPINNs. This could be a valuable ablation to include, either in the main text, or the appendix. Such an ablation could be done for a single PDE, if testing for all problems is too troublesome/time-consuming.\", \"[**Reporting Computation Time + Hardware**] The authors highlight that HyResPINNs offer significant gains over plain MLPs, with little extra computational cost, but the training time for each method is not reported, to the best of my knowledge. 
It would also be good to specify the hardware used to run the experiments.\", \"[**Lack of Code As Supplementary Material**] In order to get a better understanding of the method and evaluate the execution of experiments, it would be good to provide representative code for running experiments as part of the supplementary material, which is currently missing.\", \"[**References for RBF Networks**] Overall, the authors do a good job at highlighting existing work and providing a careful perspective of current PINNs research. However, there are no references given for RBF networks, either in the introduction or in section 2.2.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ba3x (part 2)\", \"comment\": \"> **Are the centers and coefficients trained in each RBF-NN?**\\n\\nIn our proposed adaptive RBF kernel approach, the scaling parameters ($\\\\tau_i$) of the Wendland kernel, as well as the kernel weights ($W$ in Equation 12), are trainable model parameters. These parameters are optimized via gradient descent alongside all other network parameters during training.\\n\\nWe thank the reviewers for their careful reading of our manuscript and for pointing out the grammatical and notational errors. In the revised manuscript, we have updated the notation in Equation (4), fixed the grammatical errors, and fixed the reference pointers in the Appendix.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer AtLi (part 2)\", \"comment\": \"> **Code as supplementary material**\\n\\nWe fully agree that providing code is essential for reproducibility and for gaining a better understanding of the method. We plan to release the complete codebase for our methods upon acceptance of the paper. 
This code will include scripts for training and evaluating the proposed HyResPINN architecture and reproducing all experimental results presented in the manuscript. The release will ensure that readers and researchers can evaluate the execution of experiments and extend the work easily. We have added a note in the \\\"Experimental Setup\\\" section clarifying our commitment to making the code publicly available upon publication. \\n\\n> **RBF Network References** \\n\\nWe thank the reviewer for pointing out the lack of references for Radial Basis Function (RBF) networks in the Introduction and Section 2.2. We agree that including relevant references would strengthen the context and situate our work more effectively within the broader literature. To address this, we have added key references to RBF networks in the introduction (see Line 062), including (Bai et al. 2023) and (Fu et al. 2024), who use physics-informed RBF networks to solve PDEs.\\n\\n\\nFinally, we thank the reviewers for their careful reading of our manuscript and for pointing out the grammatical, typographical, and notational errors. In the revised manuscript, we have updated the notation in Equations (1) and (10), fixed the typo in the diagram, and fixed the reference pointers in the Appendix. We will additionally add a footnote acknowledging the ill-posed nature of the problem and referencing relevant papers that address it with zero Dirichlet boundary conditions.\"}", "{\"metareview\": \"The paper describes a new class of physics-informed neural networks that combines a residual block of RBF and neural networks. The approach is quite strange. First of all, for RBF it is known that Random Fourier Features (RFF) can approximate well RBF functions and kernel methods, making the architecture close to a classical neural network. 
Other more formal concerns include:\\n1) Too narrow experiments\\n2) Not too good writing (for example, Appendix A has broken references and the text has not been updated)\\n3) Questions about the experiments, i.e. training set selection is not fully correct.\\nOverall, I don't think the proposal is really interesting, even if it shows some improvements in the experiments.\", \"additional_comments_on_reviewer_discussion\": \"A lot of comments have been provided by AtLi who even asked if the text has been updated, but the authors did not do it.\"}", "{\"title\": \"Response to Reviewer 6pTT (part 2)\", \"comment\": \"> **Regarding Question # 3**\\n\\nWe thank the reviewer for raising this question. The Wendland \\\\( C^4 \\\\) kernel was chosen due to its compact support, smoothness, and computational efficiency, making it well-suited for the hybrid architecture. Compactly supported kernels, like Wendland \\\\( C^4 \\\\), result in sparse kernel matrices, which improve scalability and reduce computational overhead, especially for high-dimensional problems. Specifically, the kernel value is zero for points outside a specified radius $\\\\tau$ around the kernel center. This compact support property means that most entries in the kernel matrix are zero, as interactions only occur between points within the support radius. Consequently, the resulting sparse kernel matrices require less memory for storage and enable the use of efficient sparse matrix solvers, significantly reducing the computational complexity compared to dense kernel methods. Additionally, the \\\\( C^4 \\\\) smoothness ensures sufficient differentiability for solving PDEs that require higher-order derivatives.\\n\\nWhile the Wendland \\\\( C^4 \\\\) kernel provides a balanced trade-off between computational efficiency and smoothness, we acknowledge that this choice is one of several possibilities. 
Other kernel functions, such as Gaussian or Mat\\u00e9rn kernels, could also be explored, depending on the problem requirements. For instance, Gaussian kernels are widely used for their universal approximation properties, but they lack compact support, leading to dense kernel matrices that scale poorly with the problem size. Investigating the performance of alternative kernel functions tailored to specific PDEs or problem domains is a promising direction for future work. We appreciate the reviewer\\u2019s suggestion to clarify this point, and we will emphasize this discussion in the revised manuscript to better highlight the rationale for our choice and its implications.\"}", "{\"title\": \"Response to Reviewer AtLi (part 1)\", \"comment\": \"We thank the reviewer for their comments. Below, we address each question individually.\\n\\n> **Including More PDEs In Experiments**\\n\\nWe appreciate the reviewer\\u2019s comments suggesting the inclusion of additional PDE examples. While the selected problems (e.g., Allen-Cahn and Darcy Flow) were chosen to evaluate distinct aspects of our method, such as handling nonlinear dynamics and varying boundary conditions, we recognize that additional examples would further strengthen the evaluation. If accepted, we plan to expand our experiments to include additional PDEs.\\n\\n> **Reconsidering Notion of \\\"training size\\\"**\\n\\nWe thank the reviewer for highlighting this aspect of PINN training and for pointing out the advantages of sampling independent random collocation points at each iteration. We agree that this approach often leads to more accurate and robust performance, and we have employed this strategy for the Allen-Cahn experiments, as detailed in the Appendix. 
Specifically, we mention that collocation points were randomly sampled during each training iteration for this experiment, utilizing the mesh-free nature of PINNs.\\n\\nFor other experiments, such as Darcy flow, we used fixed collocation points instead, as this setup is more commonly employed for comparison against baseline methods in the existing literature. This choice also allowed us to explore the impact of training set size on the performance of various architectures, which remains a common experimental protocol in the PINN community. While we recognize that this approach may not leverage the full flexibility of PINNs, it provides meaningful comparisons in contexts where fixed-point setups are standard.\\n\\n> **Clarity on Formulation of Residual Connections**\\n\\nWe thank the reviewer for their feedback and for pointing out areas where further clarity is needed regarding the formulation of the residual connections and initialization of parameters. In our current implementation, \\\\( \\\\beta^l \\\\) is initialized to 10, ensuring that \\\\( \\\\phi(\\\\beta^l) \\\\approx 1 \\\\). Similarly, \\\\( \\\\alpha^l \\\\) is initialized to 0, such that \\\\( \\\\phi(\\\\alpha^l) = 0.5 \\\\), ensuring an equal balance between the contributions of the RBF and NN components within each block at the start of training. While the \\\\( \\\\alpha^l \\\\) values are regularized during training, the \\\\( \\\\beta^l \\\\) parameters are constrained only by the sigmoid function \\\\( \\\\phi \\\\), ensuring their values remain within \\\\( (0, 1) \\\\) for stable training. By initializing \\\\( \\\\phi(\\\\beta^l) \\\\approx 1 \\\\), the residual connections initially fully pass the outputs from previous blocks. Similarly, the equal weighting of \\\\( \\\\phi(\\\\alpha^l) = 0.5 \\\\) ensures neither component dominates prematurely. 
We will add these clarifications to the revised manuscript and further discuss the roles of \\\\( \\\\beta^l \\\\) and \\\\( \\\\alpha^l \\\\) and how their adaptive modulation contributes to the hybrid model's flexibility and performance.\\n\\n> **Other RBF Kernels**\\n\\nWe thank the reviewer for suggesting exploring alternative kernels in the RBF blocks of HyResPINNs. We agree that investigating the effect of other kernels, such as Gaussian or Mat\\u00e9rn kernels, could provide valuable insights into the flexibility and robustness of the hybrid architecture. If the paper is accepted, we will include an ablation study in the appendix, evaluating the performance of HyResPINNs with different kernels on a representative PDE problem. This experiment will allow us to assess the trade-offs between kernel choices regarding accuracy, efficiency, and adaptability to varying solution characteristics. We believe this addition will enrich the paper and further clarify the impact of the kernel selection on the proposed framework.\\n\\n> **Reporting Computation Time**\\n\\nWe appreciate the reviewer\\u2019s suggestion to provide more details on computational time and the hardware used for experiments. In the manuscript, we mentioned the training hardware (NVIDIA A100 GPU running CentOS 7.2) in the \\\"Experimental Setup\\\" section. Additionally, we included a brief computational cost analysis in Figure 6 (see our response to reviewer ba3x). However, we acknowledge that the explicit reporting of training times for each method and experiment was not fully detailed. To address this, we have now included the training time for each method in the table above (see our response to reviewer ba3x). This table compares wall-clock training times for all methods.\"}", "{\"title\": \"Response to Reviewer 6pTT (part 1)\", \"comment\": \"We thank the reviewer for their comments. 
Below, we address each question and comment individually.\\n\\n> **Weakness # 1**\\n\\nThank you for this observation. We appreciate the need to explicitly report computational cost for the baselines, as these metrics are crucial for understanding the practical trade-offs of different approaches. To address this concern, please see our response above to Reviewer ba3x. Regarding the similarity between PirateNet and HyResPINN in terms of training time (as shown in Figure 6), we agree that the differences are less pronounced during certain training regimes. However, we note that our model consistently achieves superior accuracy for a given computational budget, as demonstrated by the faster convergence in mean relative L2 error. \\n\\n> **Weakness # 2**\\n\\nWe appreciate the reviewer\\u2019s comments suggesting the inclusion of additional PDE examples. While the selected problems (e.g., Allen-Cahn and Darcy Flow) were chosen to evaluate distinct aspects of our method, such as handling nonlinear dynamics and varying boundary conditions, we recognize that additional examples would further strengthen the evaluation. If accepted, we plan to expand our experiments to include additional PDEs.\\n\\n> **Weakness # 3**\\n\\nWe appreciate the reviewer's concern regarding the technical novelty of our approach. While it is true that our work builds on the residual architecture used in PirateNet, the integration of RBF networks is not simply a substitution but rather a concrete enhancement that introduces new capabilities and performance improvements.\\n\\nSpecifically, by leveraging RBF networks, our method achieves a more refined representation of sharp transitions in solutions, as illustrated in Figure 4. RBF kernels' localized and adaptive nature enables the hybrid architecture to balance smooth and sharp features, a capability that PirateNets and the other baselines struggle to achieve. 
This improvement is further validated by the consistently lower errors across a wide range of PDEs, as demonstrated in Table 1. Moreover, using RBF networks within the residual framework enriches the expressive power of the model. Unlike traditional neural networks, RBF kernels provide localized basis functions that adaptively capture fine-grained solution structures. This integration allows for a more flexible and efficient representation of PDE solutions, particularly for problems with complex features. \\n\\nWhile existing architectures inspire our approach, combining neural networks with RBF-based methods for PDE solving represents a novel contribution. This hybridization bridges two previously distinct paradigms and opens new avenues for extending residual-based architectures.\\n\\n> **Regarding Question # 1**\\n\\nWe appreciate the reviewer\\u2019s question regarding whether \\\\( \\\\alpha^{(l)} \\\\) should depend on the input \\\\( \\\\mathbf{x} \\\\). In our current implementation, \\\\( \\\\alpha^{(l)} \\\\) is a trainable scalar shared across the domain. This design choice simplifies the model and reduces the number of parameters while still capturing a wide range of solutions, as shown in our experiments.\\n\\nThat said, making \\\\( \\\\alpha^{(l)} \\\\) dependent on \\\\( \\\\mathbf{x} \\\\) could offer additional flexibility, enabling the model to adapt the contributions of the RBF and neural network components based on local solution features. For example, in regions with sharp transitions, the RBF component might dominate, while in smoother regions, the neural network could take priority. Exploring this idea is a promising direction for future work, and we thank the reviewer for raising it.\\n\\n> **Regarding Question # 2**\\n\\nWe thank the reviewer for highlighting this notation issue. 
We have renamed the kernel description in the revised manuscript to distinguish it from the sigmoid function.\"}", "{\"summary\": \"The work proposes an architecture with the adaptive residual connection between a regular neural network and a radial basis network. They also demonstrated the effect of the residual connection and adaptivity of the residual connection. The designed architecture has been shown to outperform baselines on Allen-Cahn and Darcy flow.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors tackle a significant issue in scientific computation. The architecture they propose is clearly explained in the paper. The authors conducted ablation studies to illustrate the importance of the proposed components. The proposed model demonstrates superior performance compared to baseline models.\", \"weaknesses\": \"1. The motivation of the work was \\u201cWhile these deep residual-based approaches show much promise, the increased architectural complexity leads to higher computational costs both in terms of memory and training time\\u2014requiring careful selection of training routines to prevent instabilities or poor convergence\\u201d - however, authors do not report computation cost and memory requirement for the baselines. Also, from Figure 6, we notice that PirateNet and the proposed model are rather close when compared to training time.\\n\\n2. To prove the model's superiority, other PDEs, such as the Navier\\u2013Stokes equation, the Grey-Scott equation, the Ginzburg-Landau equation, the Korteweg\\u2013De Vries equation, etc., should be used as the baseline.\\n\\n3. The work replaces the residual connection by the RBF network in PirateNet. This limits the technical novelty of the work.\", \"questions\": \"1. $\\\\alpha$ in Eq. 10 does not depend on the input, right? Will it be useful if $\\\\alpha$ also depends on the input?\\n\\n2. Line 217: both sigmoid and RBF functions are denoted by $\\\\phi$. 
Authors should consider renaming to avoid confusion.\\n\\n3. Why is the RBF kernel chosen to be the Wendland C4 kernel? At this point, it seems that the choice is arbitrary.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This contribution proposes a method based on residual blocks of hybrid RBF networks and standard MLPs. The method follows the general approach of PirateNets and Stacked networks in that residual blocks are used, but here it appears that there are no gating mechanisms to allow the model to start from a small configuration and progressively add more blocks during training, as in PirateNets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method is part of a recent trend in using stacked networks/multifidelity approaches, which appear to improve the accuracy of PINNs.\\n\\nThe contribution of the RBF-NN and MLP is adaptively learned in each block.\\n\\nThe paper is generally well written, with the presence of a small number of typos.\", \"weaknesses\": \"There is no discussion of the computational complexity of the proposed approach. Nothing is said about training and inference time in comparison with baseline approaches.\\n\\nOnly two benchmark PDEs are used in the experiments, which is not enough to evaluate the effectiveness of the proposed approach.\\n\\nIt is claimed that the RBF-NN can improve the approximation of sharp transitions in the solution, but no detailed plots or discussion is given in support of that. 
In particular, Figure 3 seems to contradict this claim, as the kernels look very smooth.\", \"questions\": \"Are the centers and coefficients trained in each RBF-NN?\", \"a_few_typos_need_to_be_corrected\": \"Eq (4), script D should be script F.\", \"bottom_of_page_7\": \"repetition of \\\"smaller\\\".\\n\\nAppendix A has broken references.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"End Of Discussion Period Soon\", \"comment\": \"Hello,\\n\\nI just wanted to confirm the authors have not made any changes/replies to their submission, is that correct? As the discussion period is set to end today, there might not be enough time for a timely response from reviewers, should the authors decide to reply to our reviews.\"}" ] }
5r6zvadRUD
SEAT: Sparsified Enhancements for Attention Mechanisms in Time Series Transformers
[ "Shangjian Zhong", "Binli Luo", "Jimmy Tan", "Han Zhou", "Yao Zhao", "Yuanzheng Tao", "Zhiyuan Gao", "Bocheng Xu" ]
Transformer models excel in time series tasks due to their attention mechanisms. However, they often suffer from "block-like" attention patterns caused by high feature correlation, leading to feature confusion and reduced performance. In this study, we mathematically prove and quantify this limitation, demonstrating how it affects the sparsity of the attention matrix and hinders effective feature representation. To overcome this issue, we propose a novel, model-agnostic, and plug-and-play method called SEAT (Sparsification-Enhanced Attention Transformer) that leverages frequency domain sparsification. By transforming time series data into the frequency domain, our method induces inherent sparsity, reduces feature similarity, and mitigates block-like attention, allowing the attention mechanism to focus more precisely on relevant features. Experiments on benchmark datasets demonstrate that our approach significantly enhances the accuracy and robustness of Transformer models while maintaining computational efficiency. This provides a mathematically grounded solution to inherent flaws in attention mechanisms, offering a versatile and effective approach for advancing time series analysis.
[ "Time Series", "Frequency Analysis", "Deep Learning" ]
https://openreview.net/pdf?id=5r6zvadRUD
https://openreview.net/forum?id=5r6zvadRUD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uBbG3WI4oU", "kUhdwImOsl", "WUQfLElsMQ", "Lq6nNeI3zk", "Knx1YvIXGG", "DgOLZ0yDHA", "8UkUZqTWAF" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730633220762, 1730189166951, 1732039137183, 1730061500015, 1730697202909, 1730524263009, 1730448856977 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_NbTn" ], [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_oACe" ], [ "ICLR.cc/2025/Conference/Submission2436/Authors" ], [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_WteU" ], [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_DJqv" ], [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_8zJ5" ], [ "ICLR.cc/2025/Conference/Submission2436/Reviewer_Pa7y" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces SEAT (Sparsified Enhancements for Attention Mechanisms in Time Series Transformers), a framework designed to improve the performance of Transformers in time series forecasting. Transformer models, while effective, often suffer from block-like attention patterns due to high feature similarity, which reduces their ability to accurately capture complex dependencies in time series data. SEAT addresses this limitation by transforming input signals into the frequency domain, which introduces sparsity and reduces feature similarity. 
As a result, SEAT enhances the Transformer\\u2019s ability to focus on relevant features, improving accuracy and robustness in long-term forecasting tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors address why previous Transformer-based approaches may not perform optimally for long-term time series forecasting, highlighting that redundancy in attention mechanisms can hinder model performance, especially when similar patterns recur over extended periods.\", \"SEAT is designed to be adaptable across various Transformer-based architectures by substituting their attention mechanisms. It is straightforward to implement, demonstrating promising potential with minimal effort.\"], \"weaknesses\": [\"While the authors describe their derivations as a \\u201crigorous mathematical analysis,\\u201d the content could be simplified and more concisely explained by referencing established textbooks on Fourier analysis [1]. The authors may benefit from adopting a more modest tone in their presentation.\", \"[1] Elias M. Stein, Fourier Analysis: An Introduction.\", \"There is a lack of detailed explanations for the experimental results, particularly regarding why SEAT outperforms baseline models on certain datasets but underperforms on others.\"], \"questions\": [\"Can you explain how many parameters are added when SEAT is applied? It's unclear whether the performance gains may result from an increase in parameters.\", \"Can you clarify where Fourier attention is applied, perhaps by including an annotated figure? Figure 1 and its explanation currently lack sufficient detail on this aspect.\", \"Although the author explains their usage of the Fourier transform, the results might be interpreted in a different way. According to Thm 1, the time series seems to be discretized and sparsified; from the perspective of the Transformer, the time (or frequency) interval of adjacent samples is not important. 
In Transformers, relative position matters more than absolute time points. Rather than that, the performance improvements might be due to the fact that as the time series get longer, the patterns of the time series become regular, which means the time series in the frequency domain are clustered and easily captured by the Transformer. Could the authors provide more insight on this perspective?\", \"Could you explain under which conditions SEAT consistently outperforms or underperforms compared to other methods across different datasets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces SEAT (Sparsification-Enhanced Attention Transformer), a novel framework designed to improve Transformer models in time series forecasting by addressing \\\"block-like\\\" attention patterns that result in feature confusion and reduced performance. By applying frequency domain sparsification, SEAT reduces feature similarity and enables more precise focus on relevant data points, enhancing the robustness and accuracy of time series predictions. This model-agnostic, plug-and-play enhancement integrates with any Transformer architecture and was shown to outperform standard models across multiple benchmarks, thereby proving its effectiveness in boosting Transformer capabilities for long-term sequence forecasting.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The proposed method is simple and easy to understand.\\n2. SEAT is model-agnostic and operates as a plug-and-play solution.\\n3. Empirical results demonstrate that SEAT enhances the performance of the base Transformer method.\", \"weaknesses\": \"1. The main weakness is the insufficient evidence to support the claimed contribution. The author does not provide a rigorous analysis of the relationship between block-like attention and forecasting performance. 
The reason why SEAT can solve the problem is also not presented in detail. Consider providing quantitative metrics or visualizations that demonstrate how block-like attention patterns correlate with reduced performance. Additionally, the author could provide a more detailed explanation of SEAT's mechanism for addressing this issue, perhaps through step-by-step examples or comparative analyses with existing methods.\\n\\n2. The proposed SEAT does not present enough novelty. The author should distinguish it from FITS [1], FreTS [2] and other methods using FFT. Consider providing a detailed comparison table or section that explicitly outlines the key differences between SEAT and other FFT-based methods like FITS and FreTS. \\n\\n3. Too many typos and irregular expressions reduce the quality of the paper. For example, the theorem should be wrapped in a theorem environment. There should be no hyphens in the \\\"accuracy\\\" word in line 317.\\n4. The point-wise and channel-wise attention present different performances in previous research. They should not be analyzed separately. The authors could conduct a comparative analysis that shows how these attention mechanisms interact and jointly influence model performance.\\n\\n\\n[1] Zhijian Xu, Ailing Zeng, Qiang Xu: FITS: Modeling Time Series with 10k Parameters. ICLR 2024\\n[2] Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, Zhendong Niu:\\nFrequency-domain MLPs are More Effective Learners in Time Series Forecasting. 
NeurIPS 2023\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work introduces SEAT, a mechanism for mitigating block like attention patterns and potentially allowing the attention mechanism to focus on more relevant features. SEAT accomplishes this by first transforming the data to frequency domain. Experiments demonstrate the efficacy and potential of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality :\\n\\nThe work is somewhat original. It uses known approaches in a novel manner as it uses frequency domain transformation as a mechanism to process the data into a form that mitigates block like patterns in an attention mechanism for time series forecasting\", \"quality\": \"This work introduces a well structured framework that can be used as a plug-and-play module for time series forecasting. It is supported by experiments on multiple datasets that showcase the usefulness of the approach. However, there are some questions that come up.\", \"clarity\": \"This work is clear enough although there are some parts (such as those mentioned under Questions) that could be made more clear.\", \"significance\": \"Time series forecasting is an important line of exploration. The proposed approach introduces a plug-and-play module that can be incorporated in any state of the art approach to further enhance it and address some issues related to attention patterns. 
Overall, the proposed method is a step forward with potential applications beyond just time series forecasting tasks.\", \"weaknesses\": \"Although the work is well structured, it can benefit from deeper analysis of the proposed module and its influence on the different forecasting approaches along with comparisons with more varied baselines (as mentioned in the Questions section).\", \"questions\": \"So just to clarify, is it that the FFT is a 2D FFT with the input $\\\\in \\\\mathbb{R}^{L \\\\times D}$? The output of the SEAT block is $\\\\in \\\\mathbb{R}^{D \\\\times L}$. Also, in lines 341-347, it is not clear how the time domain (after IFFT and skip connection) representation is sparse. A frequency domain to time domain conversion need not necessarily produce a sparser output. How is it ensured that the output time domain representation will be sparse? Another question is where is the attention being applied in the SEAT block? Is it in the Real Linear and Image Linear blocks?\\n\\n\\nDoes SEAT act as a preprocessing step (that makes the time domain representation sparse) for the feature extractor which would then be used in the actual time series forecasting with standard (point/channel wise) attention?\\n\\n\\nTable 1 claims significant advantages of SEAT over baselines. This is also demonstrated through Figure 2. However, several scores are very close to corresponding baselines when SEAT is applied (such as PatchTST for ETTh1 and ETTm1). Furthermore, there is a lot of variation in performance when SEAT is applied. Therefore, it would be great to have comparisons to show why some of the methods show more improvements than others or why the others do not show similar improvements (very small change as seen in PatchTST). 
There are even methods that show a drop in performance, such as Informer and iTransformer for the traffic set.\\n\\n\\nAs the input and output of the SEAT block are both time domain signals, it would be beneficial to have a before and after comparison of these signals to analyze what was removed as part of the process. \\n\\nIn lines 316-317, it is mentioned that SEAT enhances accuracy and robustness. It would be great to have experiments that drive this point. \\n\\nSome potential baselines that can be compared with SEAT are feature subset selection and feature disentanglement. These baselines can also help focus the attention mechanism on features that are important, thereby potentially mitigating the block-like attention pattern. The current experiments demonstrate that SEAT can be useful when added to an existing forecasting approach. However, it would also be useful to compare SEAT with other potential baselines such as those mentioned above.\", \"minor_questions\": \"Perhaps it would be useful to make it clear what $N$ and $F$ are in Equation 2. Is $F$ a subset of features from the window with the size of $F$ being $N$?\\n\\nHow is $F$ chosen? This is in the context of lines 211-212 where it is mentioned that a sparse attention mechanism yields a lower $Sim$ value. Isn't the value dependent on $F$? If so, then is the sparse attention mechanism providing a very different feature set? Also, it seems that lines 205-212 are related to channel wise attention. So is the sparse vs non-sparse comparison mentioned in this paragraph (and therefore the $Sim$ calculation) based on feature channels?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper analyzes the limitations of existing \\\"block-like\\\" attention patterns in time-series forecasting models. 
To address this challenge, it proposes a model-agnostic method to introduce sparsity by leveraging frequency-domain transformations. Experiments across various Transformer time-series forecasting models demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes to introduce sparsity to address the \\\"block-like\\\" attention challenge through frequency-domain transformations.\\n\\n2. The proposed approach is model-agnostic and has shown improvement across various Transformer-based time-series forecasting models.\", \"weaknesses\": \"1. Based on the descriptions in Section 3.5, the SEAT block applies FFT, a linear transformation in the frequency space, followed by IFFT to convert the time series to time domain. How does this process specifically introduce sparse representation? Moreover, the attention mechanism still takes place in the time domain. Why does Section 3.4 introduce Fourier attention?\\n\\n2. How does the proposed method compare with FEDformer which applies random masking in the frequency domain?\\n\\n3. In line 298, the statement, \\\"Citing from previous research Zhang et al. (2022), calculating attention in the Fourier domain is equivalent to time-domain attention\\\", is only valid in the linear case without the softmax operation, as mentioned in the original paper.\\n\\n4. The Related Work section provides too many details on individual papers instead of offering a summarized overview.\\n\\n5. In line 317, there is a typo of \\\"ac- curacy\\\".\", \"questions\": \"1. Can SEAT improve the performance of current LLM-enhanced forecasting models such as GPT4TS [1], S2IP-LLM [2], and Time-LLM [3]?\\n\\n2. 
Can SEAT improve non-Transformer models such as DLinear [4]?\\n\\n[1] One Fits All: Power General Time Series Analysis by Pretrained LM\\n\\n[2] S2IP-LLM: Semantic Space Informed Prompt Learning with LLM for Time Series Forecasting\\n\\n[3] Time-LLM: Time Series Forecasting by Reprogramming Large Language Models\\n\\n[4] Are Transformers Effective for Time Series Forecasting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces SEAT (Sparsification-Enhanced Attention Transformer) to address the time-series forecasting ability of transformers. SEAT derives sparsity of the attention mechanism by transforming to the frequency domain through the Fourier transform, thereby solving the problem of channel confusion present in the original attention. One advantage of this method is that it is plug-and-play, so it can be applied to any kind of transformer-based model. The contributions are 1) analysis of the mathematical limitation of the attention mechanism, 2) enhancement of feature independence through sparsity, and 3) plug-and-play functionality compatible with various transformers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a mathematical analysis of the limitations in existing attention mechanisms, offering motivation for SEAT\\u2019s design and highlighting its theoretical foundation.\\n2. SEAT functions as a model-agnostic, plug-and-play enhancement applied at the transformer\\u2019s input stage, making it highly practical and easy to integrate with various architectures.\\n3. SEAT consistently improves performance, demonstrating substantial reductions in MSE and MAE across benchmark models, affirming its effectiveness over baseline approaches.\", \"weaknesses\": \"1. 
The reliance on frequency transformation and inverse transformation introduces additional computation steps in SEAT, which may lead to increased model complexity. This added computational burden could impact the practical efficiency of SEAT in large-scale applications, particularly when low-latency processing is essential.\\n2. Although the authors endeavor to mathematically validate SEAT\\u2019s theoretical soundness by establishing two initial assumptions, there remains ambiguity regarding the generalizability of these assumptions across different time series contexts. Without explicit discussion of potential limitations, the universality of these assumptions might be overestimated.\\n3. The sparsity-inducing approach SEAT employs could vary in effectiveness depending on the characteristics of the time series data. Therefore, further experimentation is required to determine whether SEAT\\u2019s sparsification strategy can yield consistent benefits across diverse types of time series data, particularly in scenarios with complex temporal dynamics or non-stationary patterns.\", \"questions\": \"1. Considering SEAT\\u2019s additional computations, is its efficiency in forecasting performance still competitive? A detailed analysis of its computational complexity and processing speed relative to conventional methods would help clarify its practicality for real-time applications.\\n2. The attention map presented for the ETTh1 dataset seems to effectively address block-wise attention issues. Do other datasets exhibit similar attention patterns? 
Expanding the attention map analysis across various datasets would provide insights into whether SEAT consistently enhances feature focus or if its benefits are dataset-specific.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes SEAT, which employs frequency domain representations to sparsify input signals and mitigate certain issues that hinder time series Transformers such as block-like attention. SEAT shows competitive empirical performance and can be applied across the board to improve model performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I think the idea of employing frequency-based representations for time-series is very good, and is a trend in the literature I appreciate. The authors did a good job in identifying the need for sparsification for the Transformer backbone to operate in a performant manner and the empirical results are also strong.\", \"weaknesses\": [\"There are a few typos throughout the text that the authors should look out for and correct: I spotted some in L291 F() and F^-1() and in L317 acc-uracy.\", \"I believe that the authors should tone down some of the wording around their theoretical contributions. 
I would not consider the use of the Nyquist theorem \\\" embarking on a rigorous mathematical derivation\\\"\", \"I think that the authors should highlight the computational cost of their method and how it compares to other models.\", \"An ablation showing the effect of dealing with block-like attention and adding channel-wise attention would be useful to see what is driving the performance of the model\", \"While the authors did a good job in citing related work, I think they should have a dedicated paragraph in 2.1 to RFormer [1], which also sparsifies the input stream and learns cross-dependencies between time series through the signature transform.\", \"I am willing to raise my score if the authors address these concerns\", \"[1] Moreno-Pino, Fernando, et al. \\\"Rough Transformers: Lightweight Continuous-Time Sequence Modelling with Path Signatures.\\\" arXiv preprint arXiv:2405.20799 (2024).\"], \"questions\": [\"Could the authors elaborate on the computational benefits/tradeoffs of SEAT?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5qg6JPSgCj
Score-based free-form architectures for high-dimensional Fokker-Planck equations
[ "Feng Liu", "Faguo Wu", "Xiao Zhang" ]
Deep learning methods incorporate PDE residuals as the loss function for solving Fokker-Planck equations, and usually impose the proper normalization condition to avoid a trivial solution. However, soft constraints require careful balancing of multi-objective loss functions, and specific network architectures may limit representation capacity under hard constraints. In this paper, we propose a novel framework: Fokker-Planck neural network (FPNN) that adopts a score PDE loss to decouple the score learning and the density normalization into two stages. Our method allows free-form network architectures to model the unnormalized density and strictly satisfy normalization constraints by post-processing. We demonstrate the effectiveness on various high-dimensional steady-state Fokker-Planck (SFP) equations, achieving superior accuracy and over a 20$\times$ speedup compared to state-of-the-art methods. Without any labeled data, FPNNs achieve the mean absolute percentage error (MAPE) of 11.36%, 13.87% and 12.72% for 4D Ring, 6D Unimodal and 6D Multi-modal problems respectively, requiring only 256, 980, and 980 parameters. Experimental results highlight the potential of FPNN as a universal fast solver for handling more than 20-dimensional SFP equations, with great gains in efficiency, accuracy, memory and computational resource usage.
[ "Fokker-Planck Equations", "Normalization Condition", "Score Model", "Physical Constraints." ]
Accept (Poster)
https://openreview.net/pdf?id=5qg6JPSgCj
https://openreview.net/forum?id=5qg6JPSgCj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAUuTWURpQ", "wc6BMwXJj7", "vGownpEgXm", "tTd5Jgo7aj", "rB3tNfr7b7", "oOJGwGX1jV", "nrp0Nk2VCT", "mkSYwPW1gC", "mUntBkG2td", "howbAwSNaD", "gcIdHMBcXT", "fYN033XxTS", "dy6LE0GQEi", "dtXGqOXcNT", "cYAfZN9KlT", "cL5e1ME9ub", "UbiQt90199", "SuAnjynuXg", "RH1jDSz8um", "MsUPXjerv4", "KbDIMYF1Ul", "KGpQ8jEyjU", "JrnHNc8w5L", "HK7EfOZLBY", "Cf3vQKs0i9", "2e4sgnueLX", "1sFNahRrwb" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731600406101, 1731831793937, 1732611813293, 1733153939613, 1729764714819, 1731641121491, 1731941002677, 1732730062904, 1730677093928, 1732035585866, 1731906425381, 1732518995058, 1732484142821, 1731832497392, 1730642601734, 1731675999471, 1734876207591, 1730084922814, 1737524138150, 1733219145513, 1731907896115, 1731940336457, 1733133694072, 1731832656621, 1732018150658, 1731940862128, 1731600745567 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_aoGx" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_eksf" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_eksf" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_1JZg" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11675/Reviewer_eksf" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_1JZg" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_aoGx" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Area_Chair_7mPA" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_hrfk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Reviewer_1JZg" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ], [ "ICLR.cc/2025/Conference/Submission11675/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you very much for your comments and constructive suggestions. There are some key points that may require further clarification to enhance your understanding of the improvements in our work.\\n\\n$\\\\textbf{Weakness 1}$\\n\\nPINNs inherently do not support a postprocess of calculating the normalizing constant, as the objective function is defined as follows: (plain PDE loss)\\n$$\\\\mathcal{J}\\\\_{\\\\text{plain}}(p\\\\_\\\\theta)=\\\\mathbb{E}[|\\\\nabla\\\\cdot(p\\\\_\\\\theta\\\\mu)-\\\\nabla\\\\cdot(\\\\nabla\\\\cdot(Dp_\\\\theta))|]$$\\nIt is evident that the zero solution ($p_\\\\theta = 0$) also satisfies this equation. Therefore, directly minimizing the loss function above often leads to network collapse, where $p_\\\\theta$ converges to a trivial solution. Existing strategies generally fall into two categories involving normalization constraints:\\n1. 
Adding a penalty term $\\\\mathcal{J}\\\\_{\\\\text{norm}}=\\\\left(\\\\frac{|\\\\Omega|}{|\\\\mathcal{D}|}\\\\sum\\\\_{x\\\\in\\\\mathcal{D}} p\\\\_\\\\theta(x) - 1\\\\right)^2$, where the dataset $\\\\mathcal{D}$ is sampled uniformly from $\\\\Omega$. \\n2. Using specific structures to represent the density function.\\n\\nHowever, the first method requires manual balancing of multi-objective loss function, while the second method may limit the model's representation capacity (Fig.1).\\n\\nIn contrast, our method deviates from all existing approaches by changing the loss function directly. We introduce a new loss function as: (score PDE loss)\\n$$\\\\mathcal{J}\\\\_{\\\\text{score}}(s\\\\_\\\\theta)=\\\\mathbb{E}\\\\left[|\\\\nabla\\\\log \\\\widetilde{p\\\\_\\\\theta}(x) \\\\cdot \\\\widetilde{\\\\mu}(x) + \\\\nabla\\\\cdot\\\\widetilde{\\\\mu}(x)|\\\\right], \\\\quad \\\\widetilde{\\\\mu}(x) = \\\\mu(x) - \\\\nabla\\\\cdot D(x) - D(x)\\\\nabla\\\\log \\\\widetilde{p\\\\_\\\\theta}(x)$$\\nIn this form, the trivial solution $p_\\\\theta = 0$ no longer satisfies the equation. The *score PDE loss* retains equivalence with the *plain PDE loss* while avoiding a zero solution and eliminating the need for handling normalization conditions in training process.\\n\\nTo further illustrate, consider a special case where $D(x) = I_d$. If minimizing the score loss $\\\\mathcal{J}_{\\\\text{score}}$ drives $\\\\widetilde{\\\\mu}(x)$ to zero, the training process becomes equivalent to optimizing the objective function of *flow matching*:\\n$\\\\mathbb{E}\\\\left[\\\\|\\\\mu(x) - \\\\nabla\\\\log \\\\widetilde{p\\\\_\\\\theta}(x)\\\\|^2\\\\right]$. Thus, our method inherently integrates score matching while adhering to the original Fokker-Planck equation and corresponding physical laws.\\n\\nTherefore, it is natural to perform a postprocess of calculating the partition function $Z_\\\\theta$ under the score PDE loss. 
The advantages are also clear: it reduces computational costs, eliminates the interference caused by normalization constraints on optimization dynamics, and ensures a more efficient and coherent training process.\\n\\n$\\\\textbf{Weakness 2}$\\n\\nRegarding the non-stationary Fokker-Planck (time-dependent FP, TFP) equation, your understanding is indeed correct. The transformation remains applicable to TFP equations, with one more term $\\\\partial_t \\\\log p_\\\\theta(x, t)$. However, our target is to completely decouple the normalization condition from training process and instead use a neural network to model $\\\\widetilde{p_\\\\theta}$. It is important to note that $\\\\log p_\\\\theta(x, t) = \\\\log \\\\widetilde{p_\\\\theta}(x, t) - \\\\log Z_\\\\theta(t)$, which implies that the loss function would introduce a term $\\\\partial_t \\\\log \\\\widetilde{p_\\\\theta}(x, t) - \\\\partial_t \\\\log Z_\\\\theta(t)$, necessitating the evaluation of the partition function. And the loss function is not merely in terms of $\\\\nabla \\\\log p(x, t)$ (i.e., the score form).\\n\\nMoreover, there is an additional technical challenge: for non-localized density functions, identifying a suitable integration domain $\\\\Omega$ at any given time $t$ to compute the time-varying normalizing function $Z_\\\\theta(t)$ is challenging. It may be feasible to calculating the discrete sequence $\\\\\\\\{Z_1, Z_2, \\\\ldots, Z_N\\\\\\\\}$ at time steps $\\\\\\\\{t_1, t_2, \\\\ldots, t_N\\\\\\\\}$ and then interpolating to obtain a continuous function $Z_\\\\theta(t)$.\\n\\nIn the future work, we aim to further explore and refine the score-based algorithm for TFP equations.\"}", "{\"comment\": \"We greatly appreciate all the reviewers\\u2019 comments and constructive suggestions, which are extremely helpful in better organizing and presenting our work. Please allow us to first provide the background of our study, reiterate our motivation and improvements. 
Then we will respond to the concerns about comparisons and experiments with related work. This will help reviewers understand our algorithm and better evaluate our contributions.\\n\\n$\\\\textbf{Background}$\\n\\nPINN is a general deep-learning framework for solving PDEs and has achieved significant success in various problems, such as the Navier-Stokes equations, Allen-Cahn equation, Schr\\u00f6dinger equation, etc. However, PINNs face challenges in Fokker-Planck equations, with the objective function: (plain PDE loss)\\n\\n$\\\\mathcal{J}\\\\_{\\\\text{plain}}(p_\\\\theta)=\\\\mathbb{E}\\\\left[|\\\\nabla\\\\cdot(p_\\\\theta\\\\mu) - \\\\nabla \\\\cdot(\\\\nabla\\\\cdot(Dp_\\\\theta))|\\\\right]$\\n\\nIt is evident that the zero solution $p_\\\\theta = 0$ also satisfies this equation. Therefore, directly minimizing the loss function above often leads to network collapse, where $p_\\\\theta$ converges to a trivial solution. Existing strategies generally fall into two categories involving normalization constraints in soft or hard manner: \\n\\n1. Adding a penalty term $\\\\mathcal{J}\\\\_{\\\\text{norm}}=\\\\left(\\\\frac{|\\\\Omega|}{|\\\\mathcal{D}|}\\\\sum_{x\\\\in\\\\mathcal{D}} p_\\\\theta(x) - 1\\\\right)^2$, where the dataset $\\\\mathcal{D}$ is sampled uniformly from $\\\\Omega$ [4]. \\n2. Developing specialized structures to represent the density function [5-9].\\n\\nHowever, the former method requires manual balancing of multi-objective losses, while the latter may limit the model's representation capacity (Figure 1).\\n\\nAnother earlier data-driven method guides the network to the desired solution by introducing reference solutions and the regression loss $\\\\mathcal{J}\\\\_{\\\\text{label}}=\\\\frac{1}{N^Y}\\\\sum_{j=1}^{N^Y}(p_\\\\theta(x_j)-p_j)^2$, i.e. **FP solver** [1] suggested by Reviewer 1JZg. 
Our 4D Ring problem is directly from this work.\\n\\nFor this case, FP solver utilizes a very large number of particles ($10^{10}$ sample points) for SDE simulations and constructs frequency histograms on a grid as reference solutions ($10^4$ reference points). This technique is computationally intensive and lacks scalability to higher dimensions.\\n\\nAs seen in Figure 6 of [1], despite using such a large amount of points, the $L_2$ error of FP solver is around $10^{-2}$. In contrast, our FPNN achieves a MAE of $5.56 \\\\times 10^{-4}$ without any labeled data (resulting in an even lower $L_2$ error). Furthermore, FPNN reaches this superior performance with fewer parameters and only 2,000 unlabeled data points per iteration. FPNN requires fewer than 1,000 iterations to surpass this baseline performance (see Figure 4(a) in our work and Figure 6 in [1]).\\n\\n$\\\\textbf{Our work}$\\n\\nOur method deviates from all existing approaches by changing the loss function directly. The improvements we have achieved can be attributed to a fundamental modification of the loss function, which decouples the normalization condition from the training process. Specifically, we replace the original loss term $\\\\mathcal{J}_{\\\\text{plain}}$ with the following:\\n\\n$\\\\mathcal{J}\\\\_{\\\\text{score}}(s_\\\\theta)=\\\\mathbb{E}\\\\left[|\\\\nabla\\\\log \\\\widetilde{p_\\\\theta} \\\\cdot \\\\widetilde{\\\\mu}(x) + \\\\nabla\\\\cdot\\\\widetilde{\\\\mu}(x)|\\\\right], \\\\quad \\\\widetilde{\\\\mu}(x) = \\\\mu(x) - \\\\nabla\\\\cdot D(x) - D(x)\\\\nabla\\\\log \\\\widetilde{p_\\\\theta}$\\n\\nIn this form, the trivial solution $\\\\widetilde{p_\\\\theta}=0$ no longer satisfies the equation. The *score PDE loss* maintains equivalence with the *plain PDE loss* while avoiding a zero solution and eliminating the need for handling normalization conditions in training process.\\n\\nTo further illustrate, consider a special case where $D(x) = I_d$. 
If minimizing the score loss $\\\\mathcal{J}_{\\\\text{score}}$ drives $\\\\widetilde{\\\\mu}(x)$ to zero, the training process becomes equivalent to optimizing the objective function of *flow matching*:\\n\\n$\\\\mathbb{E}\\\\left[\\\\|\\\\mu(x) - \\\\nabla\\\\log \\\\widetilde{p_\\\\theta}\\\\|^2\\\\right].\\n$\\n\\nThus, our method inherently integrates score matching while preserving the original Fokker-Planck equation and corresponding physical laws.\\n\\n**Our main target** is to fully decouple the normalization condition during training, allowing the network to freely adjust the magnitude to learn the unnormalized density $\\\\widetilde{p_\\\\theta}$ (note that our network directly models $\\\\widetilde{p_\\\\theta}$ rather than $p_\\\\theta(x)$ and appropriate scale is really necessary for learning high-dimensional densities). Subsequently, it is natural to perform a postprocess of calculating the partition function $Z_\\\\theta$ under the score PDE loss. We derive the approximate solution as $p_\\\\theta(x) = \\\\widetilde{p_\\\\theta}(x) / Z_\\\\theta$. The **advantages** are also clear: it reduces computational costs, eliminates the interference caused by normalization constraints on optimization dynamics, and ensures a more efficient and coherent training process.\"}", "{\"comment\": \"thank you for the response.\"}", "{\"comment\": \"Thank you for thoughtful comments again, which make your questions or concerns clear to us.\\n\\n**Question 2**\\n\\nAllow us to address the second point first. The purpose of employing PINNs to solve SFP equations is not (or not merely) to obtain samples but to estimate the probability density. This is also why normalizing flows (such as KRnet) can be used. These architectures leverage the change of variables formula to estimate the log-density, thereby providing the density values. 
If our goal is only to generate samples, then the SDE simulation is sufficient, as in generative models.\\n\\nThe estimation of probability density plays a crucial role in anomaly detection or fault diagnosis in stochastic systems. With access to the response probability density, we can easily determine whether a particular state or feature of the system ($x_0\\\\in\\\\mathbb{R}^d$) is an outlier or a low-probability event (e.g., $p(x_0)<\\\\epsilon$). This highlights the importance of normalized and \\\"standard\\\" probability densities in our analysis. For unnormalized densities $\\\\widetilde{p_\\\\theta}$, their absolute values offer limited utility, as they only allow relative comparisons without providing intuitive insights into the likelihood of specific events.\\n\\n**Question 1**\\n\\nThanks for your first question and suggestions, which provides us with an opportunity to elaborate further. Our main points are summarized as follows:\\n\\n- When improving the PINN loss function, we utilize the transformation formula $s_\\\\theta=\\\\nabla\\\\log \\\\widetilde{p_\\\\theta}$, thereby incorporating the concept of \\\"score\\\".\\n- \\\"Drift,\\\" as a physical concept, is used in both SDEs and FP equations. Constructing drift terms involving \\\"score\\\" and reverse-time SDEs is fundamental for generating data and sampling images.\\n\\nBelow, we explain the second point. In Song's work [10] (It is a remarkable study that balances mathematical elegance and practical utility and we deeply respect), we consider three distinct dynamical equations:\\n\\n\\n\\n1. For the **SDE:** $dx=f(x,t)dt+G(x,t)dW\\\\quad(1)$,\\n\\n its PDF $p(x,t)$ satisfies the FP equation: $\\\\frac{\\\\partial p}{\\\\partial t}=-\\\\nabla\\\\cdot(fp)+\\\\nabla\\\\cdot\\\\nabla\\\\cdot(\\\\frac{1}{2}GG^Tp)\\\\quad(2)$.\\n2. 
For the **probability flow ODE:** $dx=(f(x,t)-\\\\frac{1}{2}GG^T\\\\nabla\\\\log p)dt\\\\quad(3)$,\\n\\n substituting the drift and (zero) diffusion terms into the FP equation yields: $\\\\frac{\\\\partial p}{\\\\partial t}=-\\\\nabla\\\\cdot((f(x,t)-\\\\frac{1}{2}GG^T\\\\nabla\\\\log p)p)=-\\\\nabla\\\\cdot(fp)+\\\\nabla\\\\cdot\\\\nabla\\\\cdot(\\\\frac{1}{2}GG^Tp)$, which is fully consistent with Eq.(2) (assuming $G(t)$ is independent of $x$). Since Eq.(3) does not involve a diffusion term, ODE is reversible, allowing denoising processes to be carried out effectively.\\n3. For the **reverse-time SDE:** $dx=(f(x,t)-GG^T\\\\nabla\\\\log p)dt+G(x,t)dW\\\\quad(4)$,\\n\\n the density $p$ is still governed by Eq.(2), though its corresponding FP equation now describes the reverse-time solution. Compared to Eq.(3), SDE in Eq.(4) provides additional randomness needed to sample from the true distribution, enabling the generation of new samples.\\n\\nWhen we set the drift term $f(x,t)=-\\\\frac{1}{2}\\\\beta(t)x$, any real image under the influence of the drift tends toward the origin. Under the stochastic process in Eq.(1), the data distribution converges to a standard Gaussian distribution $p_G$. This connection with $p_G$ facilitates subsequent sampling random noise from $p_G$ for generation.\\n\\nUsing reverse diffusion sampling based on Eq.(4) as an example, the new drift term is $\\\\widetilde{f}(x,t)=f(x,t)-GG^T\\\\nabla\\\\log p$, where the first term reverses the process of toward the origin, and the second term guides to the data distribution. Both terms are conditioned on the current state $x$.\\n\\nThus, a clearer explanation would be: we construct the drift ($\\\\widetilde{f}$) with the score ($\\\\nabla \\\\log p$), which contains the gradient information of data distribution and plays a dominant role. 
This ultimately reflects the consistency between the score and drift, as both indicate the direction of the true data distribution.\\n\\nFrom the perspective of PDE systems, our focus is on the forward problem, solving the known equations to obtain solutions. In contrast, the training process of generative models can be viewed as the inverse problem, where the goal is to infer equation coefficients or unknown terms from data.\\n\\nWe hope this response clarifies your concerns! If time permits, we look forward to further discussions with you, as this would greatly help us refine the clarity of our manuscript. Thanks for your comments again.\"}", "{\"summary\": \"This paper intends to solve high-dimensional Fokker-Planck equations, which face several challenges including the curse of dimension, the normalization constraint, etc. The solution proposed is to train a neural network with a score-matching loss which bypasses the normalization constraint by computing the normalization constant as a post-process. The method belongs to supervised learning, where training data is generated by a stochastic Runge-Kutta method. The result seems effective and surpasses the baseline model in accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality** The score-based loss is novel and seemingly interesting. Given the close connection between Fokker-Planck (FP) equations and diffusion processes and the noticeable success of diffusion models with score-matching loss, it is worth trying to solve FP with a score-based loss.\\n\\n**Clarity** I find the paper very clear to read and well organized.\", \"weaknesses\": \"At first glance, it is a seemingly natural and attractive idea to solve Fokker-Planck (FP) equations with the proposed score-matching loss, especially considering the success in training diffusion models and the close connection between stochastic processes and FP equations. 
However, after more careful thought I find it hard to reason through the following questions:\\n\\n1. If the proposed method needs a postprocess of calculating the normalizing constant, why not treat PINN with the same postprocess? One of the major motivations of the paper is to deal with the normalization condition (NC), which the authors criticize as being hard for PINNs to satisfy with soft constraints. However, if PINN is obtained without NC and is normalized with the same quadrature technique afterwards, this motivation is weakened.\\n\\n2. The score-based FP loss (equation (6)) is derived for static FP equations. What is the difficulty with non-stationary FP? It seems to me the residual loss can be transformed similarly and just one more term is needed in equation (15), which is $\\\\partial_t \\\\log p_{\\\\theta}(x)$. Even if we only consider SFP, it is clear now that the score-based FP loss is essentially equivalent to the residual loss of PINN. Therefore, I wonder what benefit the score-based FP loss can introduce? \\n\\nBased on these questions, I suggest the authors do an ablation study of replacing the score-based FP loss with the PINN loss (residual loss) without the normalization constraint. Otherwise, the improvement of FPNN over TFFN may be purely due to removing the normalization constraint from the loss function.\", \"questions\": \"See weaknesses above. Also, when using MAPE as the metric, how do you calculate $p(x)$ for the test dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for answering my questions. Now I have a better grasp of the overall idea of this paper. I was impressed by the idea of connecting score matching with solving FP equations, and now I'm convinced it is also more advantageous than PINN. I will change my score when permission is given to me and recommend for **accept**. 
To update my evaluation:\", \"soundness\": \"3: good\", \"presentation\": \"3: good\", \"contribution\": \"3: good\\n\\nThe main contribution of this paper is to solve high-dimensional Fokker-Planck equations by a score matching loss, which has two advantages that PINNs do not possess:\\n1. Leveraging the data sampling method of the diffusion process to generate data, which is later used in the score matching loss. And I do not see how such data can be easily integrated into the PINN loss, but it is natural to use the data for the score-matching loss.\\n2. Circumventing the normalization condition in the loss function which causes difficulty in training. Instead, the paper proposes calculating the normalizing constant by postprocessing.\"}", "{\"comment\": \"**$\\\\textbf{Minor comments}$**\\n\\n1. Thanks for your comment. We will carefully review our manuscript for spelling and grammatical issues, and we will clarify the notation and descriptions in lines 453-473 to enhance readability. Your feedback is invaluable in improving the clarity of our work.\\n\\n2. Figure 4 primarily compares the efficiency between TFFN and FPNN. We illustrate that FPNN is able to produce accurate predictions within fewer than 1,000 iterations, whereas TFFN still exhibits high errors with 1,000 iterations and fails to converge properly even after 10,000 iterations (steps). This highlights the significant advantage of the loss function $\\\\mathcal{J}\\\\_{\\\\text{score}}$ over $\\\\mathcal{J}\\\\_{\\\\text{plain}}$. In the second row, both models utilize the same error colorbar for fair comparison.\\n\\n3. Your suggestion is much appreciated. Currently, we have set a consistent random seed of **\\\"111\\\"** across all experiments to ensure reproducibility of our results. FPNN consistently aligns well with the ground truth solutions for various high-dimensional SFP equations. 
We are confident that the superior performance of FPNN is robust and not coincidental, as our improvements to the loss function are fundamental and reasonable.\"}", "{\"comment\": \"We have thoroughly revised the manuscript based on your suggestions and comments. The specific changes (highlighted) are as follows:\\n\\n1. In the **introduction**, we have provided a detailed explanation of the differences and connections between our work and previous advanced studies.\\n2. The **partition function** section has been revised, including the experimental descriptions and formula representations.\\n3. We have added supplementary material to clarify the relationship between the score PDE loss and score matching.\\n4. New figures have been included to illustrate the MAE of our approximate solution. In addition to the numerical results in tables, these figures visually show that FPNN achieves an order-of-magnitude reduction in MAE compared to TFFN across all tested SFP equations, while also facilitating an intuitive comparison with other related works, such as ref [1].\\n\\nConsidering these improvements, we sincerely invite you to review our work and hope it can inspire greater confidence in your assessments. Thanks for your time and consideration! And thank you to all reviewers.\"}", "{\"summary\": \"In this paper, the authors focus on steady-state Fokker-Planck (SFP) equations and propose a novel score-based PDE loss. The proposed loss depends only on the score function $s_\\\\theta$, avoiding the necessity to compute the normalization constant of the probability distribution. Furthermore, the authors propose to investigate the proposed loss on two types of architectures, namely tensor neural networks (TNNs) and MLPs. Experiments on several PDE examples are performed, showing the good performance of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper is overall well written and easy to follow.\\n2. Experimental results are interesting and support the authors' claims.\", \"weaknesses\": \"1. There is a potential clash with the notion of score in the ML/DL community, see questions.\\n2. It is difficult to evaluate the novelty of the work. For example, the authors only compare the proposed approach to TFFN but no other baselines are provided. I would strongly suggest that the authors add more baselines to allow for an easier comparison with existing approaches, e.g. [1,3].\\n3. Similarly, it is not clear whether the set of chosen experiments is commonly used in the PINN community. For example, is the dataset of the authors present in [2]?\\n\\n(see refs in questions)\", \"questions\": \"**Main comments**\\n1. I am afraid that the notion of \\\"score\\\" that is central to the paper is fairly different from the one usually referred to in deep learning, where the score is implicitly learned through denoising and the associated distribution is never explicitly computed. Here, the authors rather use a differentiable network and use this property within the loss. Can the authors comment on that?\\n2. Why is the computation of $Z_\\\\theta$ important in the context of Fokker-Planck (FP)? If I understand well, the authors parametrize FP with a neural network, with direct access to both $\\\\nabla \\\\log p$ and $p$. Is the summation approach of (8) and (10) tractable as the dimension increases?\\n3. I struggle to understand the point of the authors in lines 453-473. Firstly, the notation $|D_{norm}|$ is slightly confusing. Secondly, what is the source of randomness in $D_{norm}$ mentioned by the authors in line 465? If the dataset for $\\\\mathcal{D}_{norm}$ is simulated, could the authors generate more samples?\\n4. Fig. 9 is slightly unclear. The plots on top and bottom show two different things (top: MAP and MAPE, bottom: MAPE and Z) for two different architectures. In particular, why should Z and the MAPE be related? 
Could the authors comment on that?\\n\\n**Minor comments**\\n1. While the paper is well written, some typos remain, e.g. \\\"Score-based generate model\\\" (line 129)\\n2. The color scheme of Fig. 4 (c) (bottom) is not clear; it seems most of the maps are identically 0. The authors may want to reduce the threshold.\\n3. The message would be more striking in Fig. 5 and 6 if experiments were run multiple times with different random seeds. This would allow the authors to provide smoother curves with mean and error bars. \\n\\n\\n**References**\\n\\n[1]\\n@inproceedings{zhai2022deep,\\n title={A deep learning method for solving Fokker-Planck equations},\\n author={Zhai, Jiayu and Dobson, Matthew and Li, Yao},\\n booktitle={Mathematical and scientific machine learning},\\n pages={568--597},\\n year={2022},\\n organization={PMLR}\\n}\\n\\n[2]\\n@article{lu2021deepxde,\\n author = {Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em},\\n title = {{DeepXDE}: A deep learning library for solving differential equations},\\n journal = {SIAM Review},\\n volume = {63},\\n number = {1},\\n pages = {208-228},\\n year = {2021},\\n doi = {10.1137/19M1274067}\\n}\\n\\n\\n[3]\\n@article{cho2024separable,\\n title={Separable physics-informed neural networks},\\n author={Cho, Junwoo and Nam, Seungtae and Yang, Hyunmo and Yun, Seok-Bae and Hong, Youngjoon and Park, Eunbyung},\\n journal={Advances in Neural Information Processing Systems},\\n volume={36},\\n year={2024}\\n}\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your comments.\\n\\n$\\\\textbf{Question 1}$\\n\\nThe goal of our work is to solve an important class of PDE systems: the steady-state Fokker-Planck (SFP) equation. Its solution represents the density function of an invariant equilibrium distribution and must satisfy additional normalization constraints (NC). 
Unlike other PDEs, the NC introduces significant optimization difficulties and forces the solution to take extremely small values in high-dimensional spaces, which causes numerical challenges. Therefore, our work focuses on using a score loss to better handle the normalization condition and on addressing this specific type of SFP equation with clear physical significance.\\n\\n$\\\\textbf{Question 2 and Weakness 3}$\\n\\nWe provide a comparison with state-of-the-art methods in the **Baselines** section of the **Official Comments**. Additionally, in the **Background** section, we compare and classify many deep-learning approaches relative to FPNN. Essentially all existing methods fall into these categories, demonstrating the novelty and effectiveness of our method from a theoretical perspective. \\n\\n$\\\\textbf{One more thing for Question 1}$ \\n\\nAlthough FPNN does not address other PDEs, as the rebuttal process has progressed and our understanding of $\\\\mathcal{J}\\\\_{\\\\text{plain}}$ and $\\\\mathcal{J}\\\\_{\\\\text{score}}$ has deepened, we believe that the phenomenon of minimizing the PDE loss leading to trivial solutions may exist globally or locally in many PDEs. For instance: \\n\\n**Burgers equation:** $u_t+uu_x=\\\\nu u_{xx}$\\n\\n**KdV equation:** $u_t+6uu_x+u_{xxx}=0$\\n\\n**Schrödinger equation:** $ih_t+0.5h_{xx}+|h|^2h=0$\\n\\nWhile initial conditions can prevent trivial solutions near $t=0$, for long-term predictions, the optimization dynamics at larger times remain inaccurate (with a tendency for solutions to collapse, as described in previous works).\\n\\nThis has motivated progressive training approaches that segment the time domain and iteratively propagate the initial conditions across subintervals. 
However, these methods have notable drawbacks: they require manual segmentation of the domain, adjustment of the training process, and are prone to error accumulation.\\n\\nInspired by FPNN, we wonder if a mathematical transformation (e.g., exponential or logarithmic operations, such as $\\\\log p$ in the FP equations) could equivalently reformulate the PDE and its associated loss function, such that the zero solution no longer satisfies the transformed equation. Could this ensure a coherent global optimization dynamic? This is merely an idea and a preliminary thought, requiring further research and experimental validation.\\n\\n$\\\\textbf{Weakness 1}$ \\n\\nSome limitations and discussions are addressed in the **Partition Function** section and **Theorem 1 in Appendix A**. These relate to potential failure scenarios of the two methods for estimating $Z_\\\\theta$ (Eq.(8) and Eq.(10)) in higher dimensions, as well as issues such as NaN errors for $\\\\log p$ in the loss function (e.g., for 6D Unimodal on $\\\\Omega=[-2,2]^6$, where the true solution is exactly zero at the boundary in Python). We will revise the paper to present these aspects more clearly in the discussion section. \\n\\n$\\\\textbf{Weakness 2}$ \\n\\nOur computational complexity is significantly reduced compared to existing methods, because the normalization condition does not need to be explicitly considered during training. For instance, TFFN requires estimating $Z_\\\\theta$ at every iteration, \\\"soft\\\" PINN necessitates computing $\\\\mathcal{J}\\\\_{\\\\text{norm}}$ at each step, and density estimation with normalizing flows requires simultaneous tracking of $x$ and log-density changes. For PDE residuals, we only perform an additional $\\\\log$ operation on the network output, and the computational costs for $\\\\mathcal{J}\\\\_{\\\\text{plain}}$, $\\\\mathcal{J}\\\\_{\\\\text{score}}$ and their computational graphs remain roughly comparable. 
Ultimately, we estimate $Z_\\\\theta$ only once, making computational complexity entirely acceptable.\\n\\nIf you have any more questions, please feel free to give comments or remarks.\"}", "{\"comment\": \"Thanks for clarifying the first point. Now I see the connection between adaptive sampling and score-matching loss. I have updated the official score as well.\"}", "{\"comment\": \"Thank you for your comments. Indeed, our approach involves learning the drift term to obtain the density function. But the drift term and the score are not unrelated. Instead, it is closely related to the concept of sampling in score-based generative models.\\n\\nThe figure (https://postimg.cc/75BxMCBj) illustrates this connection. We present score-based generative modeling (Fig.(a)) and plot the vector field of the drift term $\\\\mu(x)$ for our 4D Ring and 6D Multi-modal problems (Fig.(b)). It can be observed that the \\\"drift\\\" term essentially describes the \\\"score\\\", which corresponds to the desired sampling direction.\\n\\n**One more idea**\\n\\nSong's work [10] show that the noise perturbations of SMLD and DDPM are discretizations of the Variance Exploding (VE) and Variance Preserving (VP) SDEs, respectively. \\n\\n**VE SDE:** $dx=\\\\sqrt{\\\\frac{d[\\\\sigma^2(t)]}{dt}}dw_t\\\\quad(9)$\\n\\n**VP SDE:** $dx=-\\\\frac{1}{2}\\\\beta(t)xdt+\\\\sqrt{(\\\\beta(t))}dw_t\\\\quad(11)$\\n\\nThese ideas are both natural and elegant. To progressively perturb the true data distribution into a (standard) Gaussian distribution, infinite variance would be required in the absence of a drift term ($\\\\mu(x)=0$). Alternatively, by introducing a drift term $\\\\mu(x) = -\\\\frac{1}{2}\\\\beta(t)x$ toward the origin, this transformation can be achieved while preserving the variance. 
(This drift term $\\\\mu = -ax$ is used in our 20D Gaussian problem.)\\n\\nThus, an interesting question arises: could more complex drift terms be designed to characterize perturbative SDEs (e.g., to control image classes) and obtain other desirable properties? This is a potential topic for further exploration. However, as *Image Generation* is not my research focus, my understanding may be limited.\\n\\nI hope our response helps to address your questions. Your comments and questions are crucial for summarizing our work and have prompted us to reflect on the connection between the score PDE loss and score matching. These conclusions will be included in Appendix A.\"}", "{\"comment\": \"I thank the authors for their detailed response which answers some of my concerns. Please find below one last point where I am unsure I understand correctly the authors.\\n\\n**Question 1.** From the generative modeling perspective, the authors are precisely learning the drift term, which is not related to the score. Is my understanding correct? If yes, while learning the drift (in the authors' context) may be interesting, I find that the formulation as score matching may be more confusing than helping the reader.\"}", "{\"comment\": \"$\\\\textbf{Baselines}$\\n\\nIn our theoretical analysis, we extensively reviewed and examined a range of state-of-the-art methods and baselines, identifying specific limitations that hinder their performance in higher-dimensional problems. This analysis forms the basis of our motivation to address these challenges.\\n\\n1. **Data-driven methods** (e.g., [1]), as demonstrated, are inferior to our FPNN in terms of both efficiency and accuracy. There are no particularly accurate densities, and we still need to balance two loss terms $\\\\mathcal{J}\\\\_{\\\\text{plain}}$ and $\\\\mathcal{J}\\\\_{\\\\text{label}}$.\\n\\n2. 
**\\\"Soft\\\" PINNs** are effective in lower-dimensional SFP equations, such as 2D Ring, but tend to fail as dimensionality increases. Despite extensive efforts to balance $\\\\mathcal{J}\\\\_{\\\\text{plain}}$ and $\\\\mathcal{J}\\\\_{\\\\text{norm}}$ using gradient norms, we could only marginally obtain the solution of 4D Ring, and even then, stable convergence was not achieved. In the 6D Multi-modal problem, this method failed entirely.\\n\\n3. **\\\"Hard\\\" PINNs** often sacrifice model's representation capacity, and their optimization dynamics are not smooth. We believe this tortuous optimization process is a key factor limiting their scalability to higher dimensions. \\n The recent developed method, TFFN [5], introduced this year, first claimed to solve such high-dimensional as well as complex SFP equations. It bears the closest resemblance to our approach, making it a worthy baseline for comparison. (We would like to express our gratitude to the authors of this work, as our method was refined and matured through ongoing exploration and analysis of TFFN.)\\n\\n In our experiments, both TFFN and our TNN-based FPNN use the same network architecture, spatial domain, number of training points, optimizer, learning rate, and other hyper-parameters. The only difference lies in **replacing $\\\\mathcal{J}\\\\_{\\\\text{plain}}$ with $\\\\mathcal{J}\\\\_{\\\\text{score}}$**, which can be seen as an ablation study to evaluate the effectiveness of our score-based FP loss.\\n\\n As shown in Figures 6(b) and 6(c), TFFN exhibits noticeable deviations from the true solution in 6 dimensions, resulting in a MAPE of 293% and 92.90% (see Table 1). For the 10D Multi-modal case, TFFN fails to accurately identify the two density peaks. Thus, comparisons with TFFN are omitted for 10-20 dimensional examples.\\n\\n4. 
Other general PDE solvers, such as **DeepXDE [2], SPINN [3], DeepONet, and FNO**, do not incorporate specific modifications for the Fokker-Planck equation and face issues similar to those discussed above.\\n\\nTo summarize, while our experiments present only part of the possible comparisons, our theoretical analysis comprehensively covers existing deep learning methods for the Fokker-Planck equation. Our score loss is concise and effective, demonstrating substantial originality. To the best of our knowledge, few works can effectively solve such high-dimensional, complex, and challenging SFP equations, which also makes it difficult to find suitable and comparable baselines. Our algorithm fills a significant gap in the field of high-dimensional Fokker-Planck equations and achieves breakthrough performance, both theoretically and experimentally.\\n\\n$\\\\textbf{PDE Cases}$\\n\\nOur test cases are drawn from multiple studies on solving FP equations. Existing methods cannot learn so many types of challenging high-dimensional density functions effectively. Therefore, we focus on exploring the full potential of our FPNN, evaluating its generality and applicability across diverse problems.\\n\\n- **SFP equations.** The experiments include ring-shaped densities, arbitrary potential functions (where the polynomial degree in the exponential term reaches up to 8, with complex interactions among spatial coordinates), and Gaussian mixture distributions (with scalability to more components and higher dimensions, allowing for modeling more complicated distributions). Across all these cases, our method demonstrated exceptional performance and potential.\\n\\n- **Test dataset.** For high-dimensional problems, we are limited to visualizing the results using selected cross-sections. However, only testing errors on cross-sectional data is not sufficient to characterize high-dimensional solutions due to their multi-modal complexity. 
Thus, we generated $\\\\mathcal{D}\\\\_{\\\\text{test}}$ to globally evaluate error metrics.\\n\\n For the test dataset $\\\\mathcal{D}\\\\_{\\\\text{test}}$, we generate data via gradient ascent method on the analytical solution, ensuring that all densities of test data exceed a predefined threshold $\\\\epsilon$. This approach is more efficient than the traditional method of randomly sampling spatial points and filtering out those with densities below $\\\\epsilon$. \\n\\nWe have provided the codes for generating test datasets and fixed the random seed for reproducibility. In addition, we plan to release the codes and test datasets in a GitHub repository, enabling future research to perform comparisons with FPNN under consistent evaluation metrics.\"}", "{\"summary\": \"This paper introduces the Fokker-Planck Neural Network (FPNN), a novel framework for solving high-dimensional steady-state Fokker-Planck (SFP) equations. Traditional deep learning approaches to these equations face challenges with representation capacity, loss function balancing, and maintaining normalization constraints. The proposed FPNN addresses these issues by decoupling score learning from density normalization, using a score PDE loss that enables strict adherence to normalization constraints while allowing flexible, mesh-free architectures. FPNN achieves significant computational efficiency, with over a 20x speedup compared to state-of-the-art methods, and requires only minimal parameters for high accuracy. The authors demonstrate its effectiveness on 4D, 6D, and even 20D problems, achieving low relative errors without labeled data. 
The FPNN framework contributes a fast, efficient, and accurate solution method for high-dimensional Fokker-Planck equations, with applications in computational physics and related fields.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents a novel Fokker-Planck Neural Network (FPNN) that innovatively decouples score learning and density normalization for high-dimensional steady-state Fokker-Planck (SFP) equations. Traditional deep learning methods for Fokker-Planck equations generally incorporate the PDE residual as part of the loss function but often encounter issues with representation capacity, balancing multi-objective loss functions, and satisfying normalization constraints. FPNN addresses these challenges by using a score PDE loss, which separates score learning from normalization, allowing for a flexible, mesh-free network architecture. This approach is original as it removes the dependency on specific architectures, enables strict normalization through a single computation of the partition function, and offers substantial computational gains over existing methods. By rethinking how neural networks approach Fokker-Planck equations, the authors introduce a framework that improves upon limitations of previous methods, potentially broadening the scope of high-dimensional PDE applications in machine learning.\", \"weaknesses\": \"1. Limited Discussion on Practical Constraints and Limitations\\n2. Lack of Scalability and Computational Complexity Analysis\\n3. Lack of Comparison with Other Recent Advances\", \"questions\": \"1. Could the proposed method be widely used for other PDEs?\\n2. Please compare with recently developed methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your insightful and kind comments! 
We are glad to hear that you recognize and appreciate our ideas.\\n\\nWe fully agree with your second point. You have accurately and succinctly summarized the key contributions of our work. As for the first point, there are several studies on adaptive sampling methods for PINNs [1-6], which usually involve resampling or adding training points based on the PDE residuals. Thus, data from the diffusion process could also be used to compute the PINN loss $\\\\mathcal{J}_{\\\\text{plain}}$. But it does not fundamentally address the core challenges in solving high-dimensional Fokker-Planck equations and is less natural than score matching.\\n\\nWe also recommend that you consider the numerical benefits of FPNN over PINN in learning high-dimensional densities (*Advantage 2* in our introduction). For instance, in the *10D Gaussian Mixture* problem, the magnitude of the true solution is around $10^{-5}$. If PINN is trained using the objective function $\\\\mathcal{J}\\\\_{\\\\text{plain}} + \\\\lambda \\\\mathcal{J}\\\\_{\\\\text{norm}}$ with an MLP architecture, it is challenging for the model to produce outputs of that magnitude, especially without labeled data or prior knowledge for post-standardization.\\n\\nIn contrast, FPNN leverages a score-based model, allowing the network to freely choose an appropriate scale and learn the unnormalized density $\\\\widetilde{p_\\\\theta}$. By calculating normalizing constant $Z_\\\\theta$ through a post-processing step, the model finally yields $p_\\\\theta(x) = \\\\widetilde{p_\\\\theta}(x) / Z_\\\\theta$, thereby enabling outputs on the scale of $10^{-5}$. 
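To make this one-off normalization step concrete, here is a minimal Monte Carlo sketch (our own illustration; the box domain, sample count, and the toy unnormalized Gaussian density are assumptions, not the paper's exact quadrature scheme):

```python
import numpy as np

def estimate_Z(log_p_tilde, low, high, n_samples=200_000, seed=0):
    """Monte Carlo estimate of Z = integral of p_tilde(x) over the box [low, high],
    so that p(x) = p_tilde(x) / Z is a properly normalized density."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    x = rng.uniform(low, high, size=(n_samples, low.size))  # uniform points in the box
    volume = np.prod(high - low)                            # |Omega|
    return volume * np.mean(np.exp(log_p_tilde(x)))         # |Omega| * E[p_tilde(X)]

# Toy check with an unnormalized 2D Gaussian exp(-||x||^2 / 2); true Z = 2*pi
log_p_tilde = lambda x: -0.5 * np.sum(x**2, axis=1)
Z = estimate_Z(log_p_tilde, low=[-6, -6], high=[6, 6])
print(Z)  # close to 2*pi ~ 6.283
```

Because this estimate is computed once after training, the network output can live at whatever scale optimization finds convenient.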
This significantly mitigates the numerical difficulties inherent in high-dimensional Fokker-Planck equations.\\n\\n**References**\\n\\n[1] @article{lu2021deepxde,\\n title={DeepXDE: A deep learning library for solving differential equations},\\n author={Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em},\\n journal={SIAM review},\\n volume={63},\\n number={1},\\n pages={208--228},\\n year={2021},\\n publisher={SIAM}\\n}\\n\\n[2] @article{nabian2021efficient,\\n title={Efficient training of physics-informed neural networks via importance sampling},\\n author={Nabian, Mohammad Amin and Gladstone, Rini Jasmine and Meidani, Hadi},\\n journal={Computer-Aided Civil and Infrastructure Engineering},\\n volume={36},\\n number={8},\\n pages={962--977},\\n year={2021},\\n publisher={Wiley Online Library}\\n}\\n\\n[3] @article{wang20222,\\n title={Is $L^2$ Physics Informed Loss Always Suitable for Training Physics Informed Neural Network?},\\n author={Wang, Chuwei and Li, Shanda and He, Di and Wang, Liwei},\\n journal={Advances in Neural Information Processing Systems},\\n volume={35},\\n pages={8278--8290},\\n year={2022}\\n}\\n\\n[4] @article{wu2023comprehensive,\\n title={A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks},\\n author={Wu, Chenxi and Zhu, Min and Tan, Qinyang and Kartha, Yadhu and Lu, Lu},\\n journal={Computer Methods in Applied Mechanics and Engineering},\\n volume={403},\\n pages={115671},\\n year={2023},\\n publisher={Elsevier}\\n}\\n\\n[5] @article{hou2023enhancing,\\n title={Enhancing PINNs for solving PDEs via adaptive collocation point movement and adaptive loss weighting},\\n author={Hou, Jie and Li, Ying and Ying, Shihui},\\n journal={Nonlinear Dynamics},\\n volume={111},\\n number={16},\\n pages={15233--15261},\\n year={2023},\\n publisher={Springer}\\n}\\n\\n[6] @article{tang2023adversarial,\\n title={Adversarial Adaptive Sampling: Unify {PINN} and Optimal Transport for 
the Approximation of {PDE}s},\\n author={Kejun Tang and Jiayu Zhai and Xiaoliang Wan and Chao Yang},\\n journal = {The Twelfth International Conference on Learning Representations},\\n year={2024}\\n}\"}", "{\"metareview\": \"This paper proposes a \\\"score Fokker-Planck neural net\\\" to address the issues that arise when solving Fokker-Planck equations with PINNs. The idea is to first write down the dynamics of the score function under the Fokker-Planck equation and then solve this score Fokker-Planck equation. Next, normalization is computed numerically in an efficient way that derives from a tensorial parameterization of the neural net. I think this is a nice contribution to ML for stochastic dynamical systems that warrants publication. There are some omissions in the bibliography (for example the score evolution from here: https://arxiv.org/abs/2210.04296), but nothing damning.\", \"additional_comments_on_reviewer_discussion\": \"The longest discussion was with 1JZg, who asked good explanatory questions (how is this connected to denoising, is it the same \\\"score\\\", how is score related to drift, ...) which I believe helped the authors tune the narrative for the audience familiar with generative diffusion models but not so much with Fokker-Planck equations. aoGx was asking about practicality, scalability, and baselines. hrfk wondered about SRK as part of data generation, OOD performance, and additional baselines. eksf asked about fairness of comparison with PINNs given that normalization is done in post-processing. The authors have successfully addressed many of these concerns. There was overall a consensus that this is a solid contribution to modeling of stochastic dynamical systems.\"}", "{\"summary\": \"The paper introduces the Fokker-Planck Neural Network (FPNN) framework, a novel approach to solving high-dimensional steady-state Fokker-Planck equations by leveraging a score-based PDE loss that decouples density normalization from score learning. 
This decoupling allows FPNN to avoid continuous computation of the partition function, enhancing computational efficiency and stability, particularly for complex high-dimensional problems. Experimental results demonstrate FPNN's superior performance over existing methods, achieving high accuracy and significant speedups, making it a promising candidate for scalable and efficient solutions to Fokker-Planck equations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) This paper proposes a novel network for solving high-dimensional steady-state Fokker-Planck equations. By utilizing a score-based PDE loss that decouples score learning from density normalization, the network achieves an effective balance between representational capacity and the constraints required for accurate solutions.\\n\\n(2) The performance results are promising, though further validation is needed to strengthen the findings.\\n\\n(3) The presentation of this paper is good.\", \"weaknesses\": \"(1) In this paper, using SRK as part of the data generation process for steady-state Fokker-Planck equations is effective, but it might lead to an \\\"unfair advantage\\\" in comparisons if other baseline methods do not leverage a similar approach for handling randomness or approximating steady states. Thus, it would be meaningful to test the performance of the proposed method with the same training data as proposed in the TFFN paper.\\n\\n(2) Score-based methods have shown strong performance for in-distribution problems, but they often suffer from significantly reduced effectiveness on out-of-distribution (OOD) tasks. I am curious whether the authors evaluated the OOD performance of this model. Additionally, as mentioned previously, I wonder if the data generation approach used in this study simplified the distribution, making it easier for the network to learn. 
If that is the case, the improvements might be attributed more to the engineering aspects of data preparation than to advancements in the network architecture itself.\\n\\n(3) Additional comparisons would strengthen this paper. Although FPNN outperforms TFFN in the results presented, I could not find any published reference for TFFN, suggesting it may only be available on arXiv and has not yet undergone peer review. To more convincingly demonstrate the effectiveness of the proposed method, I recommend that the authors include additional, widely recognized baseline methods. This would provide a more comprehensive evaluation of FPNN's efficiency and robustness.\", \"questions\": \"The questions are addressed in the weaknesses section. Although this research is not fully aligned with my area of expertise, I will follow the rebuttal process and hope that my suggestions help improve the clarity and presentation of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Here, we summarize the tasks of image generation and SFP equations:\\n\\n- **Generative Models:** For score-based generative models, Eq.(1) describes the forward process of adding noise, which cannot be used for sampling images. Image generation relies on Eq.(3) or Eq.(4): $dx=(f(x,t)-GG^T\\\\nabla\\\\log p)dt+G(x,t)dW$.\\n\\n This SDE contains an unknown term, $\\\\nabla\\\\log p$, which needs to be inferred. The goal is to map a standard Gaussian distribution to the real data distribution. Since the dynamics are oriented toward the data distribution, the model can only learn in a data-driven manner.\\n\\n- **FPNN:** Solving the SFP equation corresponds to a known SDE: $dx=f(x)dt+G(x)dW$.\\n\\n PINNs solve it by directly minimizing the PDE residual during training. 
Our method leverages the equivalence between Eq.(1) and Eq.(3), reformulating the problem into an SDE: $dx=(f(x)-\\frac{1}{2}GG^T\\nabla\\log p)dt=\\widetilde{f}(x)dt$.\\n\\n This step introduces the score function associated with the solution of the FP equation, where $f(x)=\\mu(x),\\frac{1}{2}GG^T=D(x)$. Due to the stationary invariant property, we set $\\widetilde{f}(x)=0$ to train the network.\\n\\n- Although the two methods exhibit formal similarities, the generative model has a nonzero $\\widetilde{f}(x,t)$ and drives towards the data distribution, while FPNN maintains a steady state by setting $\\widetilde{f}(x)=0$. \\nThe former is data-driven, whereas the latter is guided by physical laws.\\n\\nWe hope our explanation is helpful to you!\"}", "{\"comment\": \"Thanks for your nice comments! They are of great significance to our work.\"}", "{\"comment\": \"We sincerely appreciate your comments and constructive suggestions. In our **Official Comments**, we further elaborate on the distinctions and improvements of our proposed FPNN over previous works, which we believe will facilitate a better understanding of the PDE task and our method.\\n\\n**$\\\\textbf{Weakness 1 and Question 1}$**\\n\\n**Our Task**\\n\\nOur goal is to solve an important class of PDE systems: the steady-state Fokker-Planck (SFP) equations. Unlike other PDEs, the solution to the SFP equation represents the density function of an invariant equilibrium distribution, and needs to satisfy additional normalization constraints.\\n\\nDue to the high dimensionality, we want to leverage deep learning methods to find the solution to a given SFP equation, where the drift $\\\\mu(x)$ and diffusion $D(x)$ are known in advance. 
It means that we can know which steady state the system reaches under both drift and diffusion effects.\\n\\n**Score Matching**\\n\\nIn the **Our Work** section of Official Comments, we introduce the score PDE loss $\\mathcal{J}\\_{\\text{score}}(s_\\theta)$ and show that, when $D(x) = I_d$, the training process is equivalent to minimizing:\\n\\n$\\mathbb{E}\\_{p_\\text{data}}[\\|s_\\theta(x)-\\mu(x)\\|^2]$\\n\\nIn Song's work [10], the Noise Conditional Score Network (NCSN) uses a weighted sum of denoising score matching objectives:\\n\\n$\\theta^*=\\arg \\min_{\\theta}\\sum_{i=1}^N \\sigma_i^2\\mathbb{E}\\_{p_\\text{data}(x)}\\mathbb{E}\\_{p_{\\sigma_i}(\\widetilde{x}|x)}[\\|s_\\theta(\\widetilde{x},\\sigma_i)-\\nabla_{\\widetilde{x}}\\log p_{\\sigma_i}(\\widetilde{x}|x)\\|_2^2] \\qquad(1)$\\n\\nand the Denoising Diffusion Probabilistic Model (DDPM) leverages a re-weighted variant of the evidence lower bound (ELBO):\\n\\n$\\theta^*=\\arg \\min_{\\theta}\\sum_{i=1}^N (1-\\alpha_i)\\mathbb{E}\\_{p_\\text{data}(x)}\\mathbb{E}\\_{p_{\\alpha_i}(\\widetilde{x}|x)}[\\|s_\\theta(\\widetilde{x},i)-\\nabla_{\\widetilde{x}}\\log p_{\\alpha_i}(\\widetilde{x}|x)\\|_2^2] \\qquad(3)$\\n\\nIt can be seen that our equivalent score matching objective\\n\\n$\\theta^*=\\arg \\min_{\\theta}\\mathbb{E}\\_{p_\\text{data}(x)}[\\|s_\\theta(x)-\\mu(x)\\|^2]$\\n\\nshares a similar form with score-based generative models. These generative models gradually learn the \\\"score\\\" of the perturbed data distribution (i.e., a conditional Gaussian distribution), while our FPNN directly learns the known drift $\\mu(x)$.\\n\\nA notable difference is that generative models only need to accurately match the score to generate samples using Langevin MCMC or the estimated reverse Markov chain. 
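To illustrate this equivalent objective in the simplest possible setting (our own toy sketch with $D(x) = I$; the linear score model and the OU-type drift are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

# Minimize E_p ||s_theta(x) - mu(x)||^2 for the known drift mu(x) = -a*x.
# With a linear score model s(x) = x @ W, least squares recovers W = -a*I,
# i.e. the learned score equals grad log p of the stationary Gaussian N(0, I/a).
rng = np.random.default_rng(0)
a, d = 1.5, 2
x = rng.normal(0.0, 1.0 / np.sqrt(a), size=(50_000, d))  # samples from the steady state
targets = -a * x                                          # mu(x), the known drift

W, *_ = np.linalg.lstsq(x, targets, rcond=None)           # fit s(x) = x @ W
print(W)  # close to -1.5 * identity
```

The fit recovers the score of the stationary distribution directly from the drift, with no normalizing constant involved at this stage.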
But our goal is to compute the solution of the SFP equation, which requires obtaining a physically meaningful normalized density. This necessitates a post-processing step to calculate the partition function $Z_\\theta$.\n\n**References**\n\n[10] @article{song2020score,\n title={Score-based generative modeling through stochastic differential equations},\n author={Song, Yang and Sohl-Dickstein, Jascha and Kingma, Diederik P and Kumar, Abhishek and Ermon, Stefano and Poole, Ben},\n journal={arXiv preprint arXiv:2011.13456},\n year={2020}\n}\n\n**$\\textbf{Weaknesses 2 and 3}$**\n\nIn the **Baselines** and **PDE Cases** sections of Official Comments, we provide a detailed comparison with existing works and conduct relevant experiments.\n\n**Ref [1]**\n\nAs we can see, the FP solver [1] relies heavily on a large number of reference solutions, even though these solutions may be inexact and degrade model training performance. Our FPNN does not require any labeled data and results in significant improvements in both efficiency and accuracy over the FP solver.\n\n**Ref [3]**\n\nSPINN [3] does not incorporate specific modifications for FP equations and faces challenges as we discussed in **Background**. Moreover, SPINN only demonstrates PDE examples up to 6 dimensions. Its tensor-product output becomes infeasible to store in much higher-dimensional problems, because we need to expand the predicted density $p$ of size $N^d$ as an intermediate quantity, and then calculate the PDE residuals through automatic differentiation.\n\nTherefore, although our method uses the TNN architecture (same as SPINN), we adopt the original input-output format: the input has the shape (batch size, dim), while the outputs are densities with the shape (batch size, 1).\n\n**Ref [2]**\n\nConsidering the challenges of solving high-dimensional SFP equations, we opted to evaluate our model using PDEs with analytical solutions.
The 4D Ring is based on [1], while the 6D examples and 10D Multi-modal are adapted from [5]. Additionally, we construct cases involving the 10D Gaussian mixture distribution and 20D Gaussian function to comprehensively test the applicability of our algorithm. FPNN consistently shows strong performance across these cases. Since our focus is on SFP equations and efficiently handling normalization constraints in a more elegant manner, the datasets used in our work are not covered in [2].\"}", "{\"comment\": \"I thank the authors once again for taking the time to reply. I agree with the authors that the drift and score terms are related in score matching, however the link with score matching is still unclear; in the authors' response, the sentence \\"It can be observed that the \\"drift\\" term essentially describes the \\"score\\", which corresponds to the desired sampling direction.\\" is particularly confusing.\n\n1. If I understand correctly, the authors suggest that in the specific context of FP, solving the problem with score matching amounts to learning the drift coefficient of the PDE. In this case, the paper needs to be clarified and the addition of the authors with Fig. 11 is not going in the correct direction since it conveys the message that both drift and score play the same role (\\"both represent gradient information of the data distribution\\", which is not true). I would rather suggest that the authors clearly acknowledge the difference between the two views.\n\n2. It is not clear for me what is gained overall from the PINN approach for getting samples of the underlying distribution since it can be simulated from the physical equations.
Could the authors please clarify on this point?\\n\\nBased\"}", "{\"title\": \"References in comments\", \"comment\": \"**References**\\n\\n[1] @inproceedings{zhai2022deep, title={A deep learning method for solving Fokker-Planck equations}, author={Zhai, Jiayu and Dobson, Matthew and Li, Yao}, booktitle={Mathematical and scientific machine learning}, pages={568--597}, year={2022}, organization={PMLR} }\\n\\n[2] @article{lu2021deepxde, author = {Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em}, title = {{DeepXDE}: A deep learning library for solving differential equations}, journal = {SIAM Review}, volume = {63}, number = {1}, pages = {208-228}, year = {2021}, doi = {10.1137/19M1274067} }\\n\\n[3] @article{cho2024separable, title={Separable physics-informed neural networks}, author={Cho, Junwoo and Nam, Seungtae and Yang, Hyunmo and Yun, Seok-Bae and Hong, Youngjoon and Park, Eunbyung}, journal={Advances in Neural Information Processing Systems}, volume={36}, year={2024} }\\n\\n[4] @article{alhussein2023physics,\\n title={Physics-Informed Solution of The Stationary Fokker-Plank Equation for a Class of Nonlinear Dynamical Systems: An Evaluation Study},\\n author={Alhussein, Hussam and Khasawneh, Mohammed and Daqaq, Mohammed F},\\n journal={arXiv preprint arXiv:2309.16725},\\n year={2023}\\n}\\n\\n[5] @article{wang2024tensor,\\n title={Tensor neural networks for high-dimensional Fokker-Planck equations},\\n author={Wang, Taorui and Hu, Zheyuan and Kawaguchi, Kenji and Zhang, Zhongqiang and Karniadakis, George Em},\\n journal={arXiv preprint arXiv:2404.05615},\\n year={2024}\\n}\\n\\n[6] @article{al2022extensions,\\n title={Extensions of the deep Galerkin method},\\n author={Al-Aradi, Ali and Correia, Adolfo and Jardim, Gabriel and de Freitas Naiff, Danilo and Saporito, Yuri},\\n journal={Applied Mathematics and Computation},\\n volume={430},\\n pages={127287},\\n year={2022},\\n publisher={Elsevier}\\n}\\n\\n[7] @article{tang2022adaptive,\\n 
title={Adaptive deep density approximation for Fokker-Planck equations},\n author={Tang, Kejun and Wan, Xiaoliang and Liao, Qifeng},\n journal={Journal of Computational Physics},\n volume={457},\n pages={111080},\n year={2022},\n publisher={Elsevier}\n}\n\n[8] @article{feng2022solving,\n title={Solving Time Dependent Fokker-Planck Equations via Temporal Normalizing Flow},\n author={Feng, Xiaodong and Zeng, Li and Zhou, Tao},\n journal={Communications in Computational Physics},\n volume={32},\n number={2},\n pages={401--423},\n year={2022}\n}\n\n[9] @article{anderson2024fisher,\n title={Fisher information and shape-morphing modes for solving the Fokker--Planck equation in higher dimensions},\n author={Anderson, William and Farazmand, Mohammad},\n journal={Applied Mathematics and Computation},\n volume={467},\n pages={128489},\n year={2024},\n publisher={Elsevier}\n}\"}", "{\"comment\": \"Thanks for your thoughtful comments and constructive suggestions. We address the fundamental differences between FPNN and existing methods, as well as the motivation behind our score PDE loss in the **Background** and **Our Work** sections of Our **Official Comments**. We believe the score loss is a key factor in the superior performance of our model and these sections provide an interesting story. Considering that our work may not fully align with your research interests or expertise, we would like to clarify our PDE task further.\n\n**Our Task**\n\nOur goal is to solve an important class of PDE systems: the steady-state Fokker-Planck (SFP) equation. Unlike other PDEs, its solution represents the density function of an invariant equilibrium distribution, and needs to satisfy additional normalization constraints.\n\nDue to the high dimensionality, we want to leverage deep learning methods to find the solution to a given SFP equation.
This means that the drift $\\mu(x)$ and diffusion $D(x)$ are known in advance, and we want to know which steady state the system reaches under the combined drift and diffusion effects.\n\n$\\textbf{Weakness 1}$\n\nIf you understand the issues inherent in the original loss function, $\\mathcal{J}_{\\text{plain}}$, it becomes evident that the advantages of FPNN stem entirely from the newly introduced **score PDE loss**, rather than from the training data or network architecture. \n\n- **Data.** Data from SDE simulations help define the scope of this stochastic system, because not all SFP equations have density concentrated near the origin. Before solving the PDEs, we have no prior knowledge of the distribution of solutions. One major effect of generating data using the SRK method is to provide a reasonable estimate of the domain. \n- **Network Architecture.** The flexibility in network design is another benefit brought about by the score PDE loss, which totally decouples the normalization condition. This allows us to employ unrestricted network architectures while still strictly satisfying the normalization condition.\n\nIn our current experiments, we use datasets $\\mathcal{D}\\_{\\text{train}}$ (generated via SRK method) and $\\mathcal{D}\\_{\\text{uniform}}$ (uniformly sampled on $\\Omega$) for $\\mathcal{J}\\_{\\text{score}}$ (FPNN) and $\\mathcal{J}\\_{\\text{plain}}$ (TFFN), respectively, to ensure consistency in the statistical expectations within the loss functions. We acknowledge that this might have caused some confusion.\n\nNevertheless, we are confident that regardless of whether both use $\\mathcal{D}\\_{\\text{train}}$ or $\\mathcal{D}\\_{\\text{uniform}}$, the training results would still demonstrate the superiority of $\\mathcal{J}\\_{\\text{score}}$ over $\\mathcal{J}\\_{\\text{plain}}$. This is due to the fundamental differences in the optimization dynamics of the two objective functions.
We will include additional experimental results to validate this claim and update the manuscript and supplementary codes accordingly in a few days. This should address concerns about the possibility of an \\"unfair advantage\\" for FPNN over TFFN.\n\n**Weakness 2**\n\nAs outlined in the **Our Task** section, our primary focus is on the steady-state distribution of a stochastic system under the interactions between drift and diffusion. Thus, we focus solely on the solution to the current system (i.e., the invariant equilibrium distribution) and do not address OOD tasks. Our main concern is the applicability of FPNN, specifically whether it can efficiently solve more types of SFP equations and accurately compute the density function under various complex drift and diffusion dynamics.\n\n**Weakness 3**\n\nThe comparison with existing state-of-the-art methods, as discussed in the **Baselines** section of the **Official Comments**, highlights the challenges posed by high-dimensional problems. These challenges render most existing methods ineffective, preventing them from correctly solving our PDE cases. We adopt a completely different approach, improving the loss function and optimization dynamics, which enables us to accurately learn the density function across a wide range of high-dimensional problems.\n\nIf you have any further questions or suggestions, please feel free to give comments or remarks. We are open to and would appreciate a more in-depth discussion.\"}", "{\"comment\": \"**$\\textbf{Question 2}$**\n\nIn **Question 1**, we explain that in order to obtain a physically meaningful solution, it is necessary to compute $Z_\\theta$ once and perform normalization. Our neural network directly models the unnormalized density $\\widetilde{p_\\theta}$ and the score function $s_\\theta = \\nabla \\log \\widetilde{p_\\theta}$.
Since we observe that $\\nabla \\log \\widetilde{p_\\theta} = \\nabla \\log p_\\theta$, we can use $s_\\theta$ to approximate the true score.\n\nYour focus on Equations (8) and (10) is both correct and important. Firstly, FPNN is fundamentally different from existing methods like \\"soft\\" or \\"hard\\" PINNs. In our loss function, we do not require the evaluation of $Z_\\theta$, as we completely decouple the normalization condition. This allows us to compute the partition function $Z_\\theta$ only once after the model has been fully trained, unlike existing methods [4, 5], which necessitate re-estimating $Z_\\theta$ at every training iteration.\n\n**TNN-based FPNN**\n\nFor the TNN-based FPNN, efficient numerical integration is possible due to the low-rank structure of the density representation. For example, when $r = 1$, Eq.(8) simplifies to:\n\n$Z_\\theta\\approx(\\int_{a_1}^{b_1}f_1(x_1)dx_1)\\cdots(\\int_{a_d}^{b_d}f_1(x_d)dx_d)$\n\nwhich means that the high-dimensional integral can be decomposed into a product of $d$ one-dimensional integrals (where $d$ is the spatial dimension). Each of these one-dimensional integrals can be efficiently estimated using any numerical algorithm (we use the piece-wise Gauss-Legendre quadrature rule). As a result, even as the dimensionality increases, $Z_\\theta$ remains computationally tractable. Furthermore, for $r$ components, the above calculations are fully consistent and parallelizable, as in our code implementation.\n\n**MLP-based FPNN**\n\nThe estimation in Eq.(8) essentially forms a high-dimensional grid using tensor products, which imposes certain requirements on the network architecture. Therefore, more generally, we provide a Monte Carlo estimation for the partition function, as described in Eq.(10). This approach does not suffer from the curse of dimensionality in the way grid-based numerical methods do.
Moreover, we can **increase the number of sampling points** to improve the accuracy of this unbiased estimator. Since this calculation is performed only once, the computational cost remains controllable.\\n\\n**$\\\\textbf{Questions 3 and 4}$**\\n\\nHere, $|\\\\mathcal{D}\\\\_{\\\\text{norm}}|$ is consistent with its usage in Eq.(10), representing the number of Monte Carlo (MC) samples, which are uniformly drawn from the domain $\\\\Omega$. In Figure 9(a), our target is to examine how varying the number of MC samples affects the estimation of $Z_\\\\theta$, and subsequently, the prediction error of the solution function. The experimental results demonstrate that increasing the number of uniformly sampled points $|\\\\mathcal{D}_{\\\\text{norm}}|$ indeed enhances the accuracy of the solution.\\n\\n**Comparison of Estimation Methods**\\n\\nFor the TNN-based FPNN, in addition to using Eq.(8), we can also apply Eq.(10) to compute the partition function $Z_\\\\theta$. We want to compare the difference and efficiency between these two methods. In Figure 9(b), the rightmost column displays the values of $Z_\\\\theta$ estimated using Eq.(8) and the corresponding MAPE. We observe that, for the MC estimation in Eq.(10), as the number of uniform sampling points $|\\\\mathcal{D}\\\\_{\\\\text{norm}}|$ increases, $Z_{\\\\text{MC}}$ gradually converges to $Z_\\\\theta$, and MAPE steadily decreases until stable.\\n\\n**Limitations of Higher-Dimensional Problems**\\n\\nIn lines 453-458, we analyze the advantages and limitations of using Eq.(8) and (10) in particularly high-dimensional settings. Suppose the domain of interest is $\\\\Omega = [-2, 2]^d$. When using Eq.(10) for estimation, the volume of $\\\\Omega$ is $|\\\\Omega| = 4^d$, which may exceed machine precision limits when $d = 20$, $40$, or higher dimensions. 
This issue is exacerbated when the range of each interval in $\\\\Omega$ is larger, such as $[-2, 5.5]$ or $[-4, 4]$.\\n\\nIn contrast, Eq.(8) involves computing the integral over each interval $(\\\\int_{-2}^{2} f_i(x_i)dx_i)$ separately, followed by multiplying these values to obtain $Z_\\\\theta$. In this case, the integral value for each dimension can be small or close to 1. Even if the range of each interval is larger, the integral values remain stable, making this approach more robust to dimensionality increases.\"}", "{\"comment\": \"$\\\\textbf{Suggestion in Weakness}$\\n\\nWith the reformulation and clarification of $\\\\mathcal{J}\\\\_{\\\\text{plain}}$ and $\\\\mathcal{J}\\\\_{\\\\text{score}}$, we can find that the comparison between the TNN-based FPNN and TFFN in our experiments essentially aligns with the ablation study you suggested. We controlled for the network architecture, spatial domain, number of training points, optimizer, learning rate, and other hyperparameters to remain consistent, with the only difference being the replacement of the PINN loss (residual loss) with our score-based FP loss. Using the same test dataset $\\\\mathcal{D}_\\\\text{test}$ (which is uniformly sampled from $\\\\Omega$ and not used during training for either model), we observed that FPNN significantly outperformed TFFN in terms of training speed, computational costs and evaluation metrics (MAE and MAPE).\\n\\n$\\\\textbf{Questions}$\\n\\nIn our experiments, SFP problems are constructed with potential functions for which analytical solutions are available to evaluate model performance (see Eq.(23) in Appendix C). Additionally, we did not randomly sample the test points from the entire space, as it could result in very small true values for density (often happens in high-dimensional settings) and render MAPE ineffective. 
Instead, we use the gradient ascent method on the analytical solution $p(x)$ to ensure that all densities of the test data exceed a certain threshold $\\epsilon$. We then record these spatial points $x$ along with their corresponding densities $p$ as the test dataset.\n\nIf you have any further questions or suggestions, please feel free to give comments or remarks. We are open to a more in-depth discussion on both static FP (SFP) and non-stationary FP (TFP) equations.\"}" ] }
5qg1sAXhoh
Tree Search for Simultaneous Move Games via Equilibrium Approximation
[ "Ryan J Yu", "Alex Olshevsky", "Peter Chin" ]
Neural network supported tree-search has shown strong results in a variety of perfect information multi-agent tasks. However, the performance of these methods on partial information games has generally been below competing approaches. Here we study the class of simultaneous-move games, which are a subclass of partial information games which are most similar to perfect information games: both agents know the game state with the exception of the opponent's move, which is revealed only after each agent makes its own move. Simultaneous move games include popular benchmarks such as Google Research Football and Starcraft. In this study we answer the question: can we take tree search algorithms trained through self-play from perfect information settings and adapt them to simultaneous move games without significant loss of performance? We answer this question by deriving a practical method that attempts to approximate a coarse correlated equilibrium as a subroutine within a tree search. Our algorithm works on cooperative, competitive, and mixed tasks. Our results are better than the current best MARL algorithms on a wide range of accepted baselines.
[ "Neural Network", "Tree Search", "Game Theory", "Coarse Correlated Equilibrium", "No regret learning" ]
https://openreview.net/pdf?id=5qg1sAXhoh
https://openreview.net/forum?id=5qg1sAXhoh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vywOG6jhKl", "rAMyB2mWkl", "jhVSjTdGq2", "NQiRAgte1P", "A5hnHrVe1G", "4AS2hj6N3z" ], "note_type": [ "official_review", "official_review", "official_comment", "official_review", "comment", "official_review" ], "note_created": [ 1731104108085, 1730565937363, 1732799553930, 1730720208883, 1732799568435, 1730579469750 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5282/Reviewer_PxSJ" ], [ "ICLR.cc/2025/Conference/Submission5282/Reviewer_p4jF" ], [ "ICLR.cc/2025/Conference/Submission5282/Authors" ], [ "ICLR.cc/2025/Conference/Submission5282/Reviewer_iwW1" ], [ "ICLR.cc/2025/Conference/Submission5282/Authors" ], [ "ICLR.cc/2025/Conference/Submission5282/Reviewer_7Bms" ] ], "structured_content_str": [ "{\"summary\": \"This paper develops a method that combines deep Monte Carlo Tree Search with online no-regret learning in order to approximate coarse correlated equilibria in both cooperative and competitive simultaneous move games.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The related work is strong, and the paper is reasonably well motivated\", \"The outline of the methodology is clear\", \"The motivation for using the simulation depth of 1 is strong\", \"The results are overall very strong, however I would have liked to see some more of the PSRO variants that focus on other aspects (e.g. population diversity) that may be stronger performing baselines than the more standard PSRO and jPSRO, especially in larger scale environments. However, the general strong performance over both competitive and cooperative tasks is impressive.\", \"Overall I like the general simplicity of the approach in terms of the high-level methodology. Furthermore, I think the empirical results are strong on a fairly standard set of baselines.\"], \"weaknesses\": [\"It is a little difficult to follow the first part of section 4.1\", \"e.g. 
the writing suggests the value network only takes joint actions as inputs, I assume it also takes the state? The equations 1 through 5 could also use a bit more explanation.\", \"I am not sure about the argument made that the PSRO methods are not designed to work in large environments - e.g. Towards Unifying Behavioural and Response Diversity for Open-ended Learning in Zero-Sum games (Liu et al. 2021) applied a diversity aware PSRO to Google Research Football and subsequent PSRO papers have done the same\", \"My main concern with the paper is that whilst the body of the paper provides a good high-level overview of the framework, some potentially key details for both understanding (e.g. The explanation of EXP3-IX usage is limited and it is difficult to follow how one would implement it) and re-implementation are buried in the appendix / the provided code.\"], \"questions\": [\"Line 104 - what is a SM game?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"**Summary of paper.** This paper considers deriving tree-search algorithms in perfect information simultaneous-move games. They derive one search algorithm by approximating coarse correlated equilibrium using an online learning formulation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The presentation is clear.\", \"weaknesses\": \"The technical contribution is not sound.\", \"questions\": \"First, there have already been works that consider Monte Carlo tree search in simultaneous-move games, which this paper did not cite and discuss:\\n\\n[1] Monte Carlo Tree Search in Simultaneous Move Games with Applications to Goofspiel, Lanctot et. al.\\n\\n[2] Monte Carlo Tree Search Variants for Simultaneous Move Games, Tak et. al.\\n\\n[3] Convergence of Monte Carlo Tree Search in Simultaneous Move Games, Lisy et.
al.\\n\\nIn these works they devise MCTS-style algorithms using UCT for exploration and achieved relatively good results. However, in the current paper the authors did not apply UCT. Could the authors also discuss your methods in relation to the above? Also, I don't see any of the above methods being compared in the evaluation section.\\n\\nSecond, in recent works about Diplomacy, they have already derived search methods based on approximating a CCE-like objective [4, 5].\\n\\n[4] No-press Diplomacy from Scratch. Bakhtin et. al.\\n\\n[5] Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning. Bakhtin et. al.\\n\\nIn the above works they use regret-matching for computing approximate equilibria and use Nash-Q learning for iteratively learning policy and value functions. Could the authors compare your approach with theirs?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you to the reviewers\", \"comment\": \"We have decided to withdraw this submission.\\n\\nWe first wanted to thank each reviewer for their comments and feedback regarding our submission; they will prove extremely useful as we look to improve the quality of our work and its presentation.\"}", "{\"summary\": \"The paper introduces a method for approximating coarse correlated equilibria (CCE) in multi-agent reinforcement learning (MARL) settings with perfect information and simultaneous moves (Stochastic games aka Markov games).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"There was a lot of effort put into designing the algorithm, running many experiments, and writing the paper.\", \"weaknesses\": \"The paper has several weaknesses which prevent me from recommending it for acceptance.\\n\\nI think the actual description of the algorithm is unclear. At inference time, what is the tree-search algorithm?
Appendix A.5 suggests that there will be a game tree and an MCTS-like algorithm (as does the title of the paper), but Section 4 suggests that the method is no-regret algorithms using 0-step lookahead (just q-values)?\\n\\nThe experiments performed also don't test the core hypothesis of the paper -- that the algorithm computes a CCE. The experimental results are also fishy, and hyperparameters are not given for the baselines. Perhaps my biggest issue with the paper is that in the experiments, CFR does not perform well on \\\"Goofspiel-6\\\", but one would expect CFR to be able to compute a close Nash equilibrium in the game. The experimental code for CFR does not seem to be in the code linked in the appendix.\\n\\nThe paper's driving motivation is convergence to CCE. However, the experiments measure head-to-head winrates. Head-to-head winrates already seem less preferable than head-to-head EV (since agent 1 may have +EV against agent 2 despite having a lower winrate). However, to measure convergence to CCE in small, 2P0S games, one could simply compute exploitability. \\n\\n--- \\n\\nThe paper should cite other methods that use neural nets and tree search to compute a CCE.\\n- For example, ReBeL [0] and Deepstack/Student of Games [1] use CFR, where the players' marginal time-averaged strategies form a N.E. in 2P0S games, because their joint time-averaged strategy is a CCE [2].\\n- Research in Diplomacy, a simultaneous-move multiplayer game, also uses neural nets, lookahead, and no-regret dynamics: [3], [4].\\n- There is also a rich line of research into q-learning for equilibria in stochastic games. Although mostly in the 2-player setting, the approach is similar to this paper and should be cited. 
The research starts with Littman's seminal minimax q-learning, and the related works section of this paper ([5]) gives a good overview of modern developments.\\n\\nI believe this existing research contradicts the statement in the abstract: \\\"Neural network supported tree-search has shown strong results in a variety of perfect information multi-agent tasks. However, the performance of these methods on partial information games has generally been below competing approaches.\\\"\\n\\n---\\n\\n> This approach works well for zero-sum, perfect information games like chess or Go, but when we move into the realm of partial information games, the min-max paradigm becomes inappropriate.\\n\\nI would say the min-max paradigm is still valid in partial information games. The difference is between 2P0S and general-sum/multiplayer. Also, the paper should mention somewhere that the \\\"productized\\\" policies (each player playing their own marginal, time-averaged policies) of a CCE are Nash equilibria in the 2P0S setting.\\n\\n--- \\n\\n> The limitation for equilibrium approximation algorithms is that they are not easily applied to tasks with larger state and action spaces. The majority of testing regarding such algorithms have been on small tasks.\\n\\nI don't think this is true.\\n\\n--- \\n\\n> Using the regret bound provided by Neu (2015), it becomes clear that 800 time steps is insufficient for this method. At 800 time steps, the theoretically guarantees provided by EXP3-IX are very poor: we have not yet found the best action.\\n\\nWhy do we refer to a \\\"best action\\\"? Why do we expect that action probabilities stabilize? The guarantee is convergence to a CCE, not a \\\"best action\\\" (which isn't well-defined), no?
A CCE is a *distribution* over *joint* actions.\\n\\n\\n[0]: https://arxiv.org/abs/2007.13544\\n[1]: https://www.science.org/doi/10.1126/sciadv.adg3256\\n[2]: https://arxiv.org/abs/2310.11518\\n[3]: Learning to Play No-Press Diplomacy with Best Response Policy Iteration\\n[4]: https://arxiv.org/abs/2210.05492\\n[5]: https://arxiv.org/abs/2306.05700\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper explores the adaptation of neural network-supported tree search, commonly used in perfect information games, to simultaneous-move games, a subset of partial information games. In these games, both agents know the game state except for the opponent\\u2019s immediate move, revealed only after each agent's move, as seen in benchmarks like Google Research Football and Starcraft. The authors propose a novel method to approximate a coarse correlated equilibrium within tree search, allowing the algorithm to perform effectively across cooperative, competitive, and mixed tasks. Results show that the approach outperforms leading MARL algorithms across widely accepted benchmarks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed depth-limited scheme is well-suited for simultaneous-move games.\\n2. Extensive experiments validate the method across various scenarios.\\n3. The method allows for better parallelization, making it more practical for real-world games.\", \"weaknesses\": \"1. The paper's writing and formatting are poor, with some images, tables, and equations arranged in a cluttered, two-column layout (e.g., Figures 1 and 4, Tables 1 and 2, and Equations 1-5), making the document look disorganized. 
Additionally, citation formatting is incorrect; in the ICLR template, \\\\citep should be used instead of \\\\citet when authors or publications are not part of the sentence.\\n2. The paper lacks theoretical support. While it spends much space arguing that CCE is superior to min-max, it does not demonstrate that the combination of EXP3-IX and depth-limited d-MCTS can ultimately converge to CCE.\\n3. The terminology is somewhat unprofessional. Typically, simultaneous-move games are not categorized as partially observable games, as standard Markov or stochastic game definitions allow for simultaneous moves. If the authors intend to contrast with traditional perfect information games, they should instead use \\u201cimperfect information games\\u201d.\\n4. The paper makes several unsupported claims. For example, in line 35, the assertion that \\\"successes in partially observable settings have been more muted\\\" overlooks numerous well-known works in this setting (e.g., DeepStack, AlphaStar, OpenAI Five, DeepNash). Similarly, line 46 claims that \\\"playing according to a CCE gives you performance guarantees against any opponent in competitive tasks,\\\" which is inaccurate\\u2014CCE does not ensure performance guarantees except in 2P0S games, where it aligns with Nash equilibrium.\", \"questions\": \"1. Given that NN-CCE limits sampling to child nodes at a depth of only one layer, could an alternative approach be to perform weighted summation directly based on action probabilities? This might avoid redundant sampling steps.\\n2. Since the method employs Monte Carlo sampling, would it be suitable to compare it with similar techniques, such as Monte Carlo CFR?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5q4U5gnU1g
No Algorithmic Collusion in Two-Player Blindfolded Games with Thompson Sampling
[ "Ningyuan Chen", "Xuefeng Gao", "Yi Xiong" ]
When two players are engaged in a repeated game with unknown payoff matrices, they may be completely unaware of the existence of each other and use multi-armed bandit algorithms to choose the actions, which is referred to as the ``blindfolded game'' in this paper. We show that when the players use Thompson sampling, the game dynamics converges to the Nash equilibrium under a mild assumption on the payoff matrices. Therefore, algorithmic collusion doesn't arise in this case despite the fact that the players do not intentionally deploy competitive strategies. To prove the convergence result, we find that the framework developed in stochastic approximation doesn't apply, because of the sporadic and infrequent updates of the inferior actions and the lack of Lipschitz continuity. We develop a novel sample-path-wise approach to show the convergence.
[ "Algorithmic Collusion", "blindfolded game", "multi-armed bandit", "Thompson Sampling" ]
Reject
https://openreview.net/pdf?id=5q4U5gnU1g
https://openreview.net/forum?id=5q4U5gnU1g
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zrwKoCAk2Q", "xIQ4gJDVfT", "x5F59hCUmO", "uBwwQHKpum", "sTJNaNTLME", "ocT7dYF3Xc", "o0MHGnIEYp", "lnL8ME6jCL", "kiMyrvTFU5", "cAKsG85uu7", "aXBdnkexhX", "aRMgfvpmP8", "N5ukcpzGYq", "IZbpt2XhW6", "8ney74GL3g", "5CGwiXsSxf", "3cW0oPUAzR" ], "note_type": [ "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732686657510, 1730898948444, 1737523920720, 1732561150596, 1730891936895, 1732726563414, 1732345837207, 1730315058739, 1730875383302, 1732345640294, 1732345121342, 1732345431580, 1734574542002, 1732717229264, 1732717115200, 1732570119490, 1732720016118 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_pH4Y" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_pH4Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_7fhH" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_fWSu" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_7fhH" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_7fhH" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_h5co" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Area_Chair_E2dn" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ], [ "ICLR.cc/2025/Conference/Submission8607/Reviewer_h5co" ], [ "ICLR.cc/2025/Conference/Submission8607/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response and detailed explanations.\\n\\nI acknowledge that this work's 
motivation differs from that of Cai et al. (2023), and the theoretical analysis of Thompson sampling presents a degree of novelty.\\nHowever, I remain curious about the feasibility of extending the theoretical results from two-action games to a wider range of games.\\nAdditionally, the pricing game example provided by the author seems to lack practicality since it only has two prices.\\nIt would be beneficial to discuss the potential applicability of the derived theoretical results to more general pricing games (e.g., a pricing game where more than three prices can be selected).\"}", "{\"summary\": \"This paper mainly focuses on dynamics in two-player blindfolded games, where both players follow Thompson sampling to choose their actions.\\nUnder some assumptions, the author demonstrates that the game dynamics converge to the pure Nash equilibrium.\\nThe proof utilizes the fact that the game dynamics can be viewed as a special form of stochastic approximation.\\nFurthermore, the author also experimentally shows that each player's strategy converges to a pure Nash equilibrium in some games.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem is well-motivated. Analyzing the celebrated MAB algorithm in games is of great importance to the research community.\", \"It seems novel to provide last-iterate convergence guarantees for Thompson sampling in online learning in games.\"], \"weaknesses\": \"My primary concerns are centered on the assumptions on two-player games.\\nFirstly, the last-iterate convergence results are only provided for **two-action** games.\\nThe analysis heavily depends on the fact that, in the two-action games, the action probability can be formulated in a closed-form expression (as in Eq.
(7)).\\nHowever, I am curious if similar expressions can be derived for more general games.\\nSecondly, the assumption regarding the existence and uniqueness of the pure Nash equilibrium appears to be strong.\\nI am uncertain about the practical applications of games with these assumptions.\\nIf these assumptions limit the practical application of this study, it would be advantageous to provide any convergence or divergent results in games that do not adhere to these assumptions, such as two-player zero-sum games.\\n\\nMoreover, a recent study [1] primarily provided last-iterate convergence rates in two-player zero-sum normal-form games and Markov games under bandit feedback.\\nI believe the bandit feedback setting closely resembles the blindfolded setting.\\nHence, the relationship with [1] should be clarified.\\n\\n[1] Cai, Y., Luo, H., Wei, C.-Y., and Zheng, W. Uncoupled and convergent learning in two-player zero-sum Markov games. NeurIPS, 2023.\", \"questions\": \"My main concerns and questions are outlined in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the thoughtful response. I have spent the weekend looking into the proofs in more detail and trying to understand the challenges more concretely. I now understand more where the novelty in the analysis falls and appreciate the technical work more deeply. I will raise my score to indicate this.\\n\\nA small additional suggestion is to discuss connections/contrasts to the literature on no-swap regret algorithms in the learning in games literature, such as https://arxiv.org/abs/2402.09549 (and the references therein). The results are different but share a similar motivation/perspective.\"}", "{\"summary\": \"The authors study repeated play in two-player, two-action games that admit a unique pure Nash equilibrium. 
They assume a minimal information setting. In particular, across all rounds, players only observe their own payoffs. In addition, the player's payoffs are subjected to zero-mean normally distributed noise.\\n\\nTheir main result, Theorem 1, states that under Assumptions 1 and 2, if both players assume a Bayesian point of view and update some particular prior distributions based on Thompson sampling, then almost surely they will converge to the unique Nash equilibrium of the game. The authors' ultimate claim is that under these assumptions algorithmic collusion is almost surely not possible.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The information setting studied by the authors is quite realistic.\\n2. The analysis, to the best of my knowledge, is correct.\", \"weaknesses\": \"1. Assumption 1 is indeed mild. However, Assumption 2 seems to be quite strong for the study of algorithmic collusion. In particular, under Assumption 2, the game's unique Nash equilibrium cannot be, as the authors point out, much worse than any other outcome. Therefore, independently of the algorithm used, gains from an algorithmic collusion can be minimal at best. Would the authors provide some further analysis of the implications of this assumption towards this particular claim?\\n2. A possible weakness is also the implicit assumption that the game's unique Nash equilibrium is pure. The uniqueness of the game's Nash equilibrium is a reasonable assumption for the study of the algorithm's convergence. However, if the game doesn't have a special structure, e.g., being a potential game, this equilibrium might not be pure. Could the authors further motivate this assumption?\", \"questions\": \"Kindly refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I'm glad to see you went deeply into that line of work. 
Thank you. Indeed, I agree that the line of work is considering a different but related problem that will be good to highlight in the literature review.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We appreciate your careful reading of the paper and constructive criticism. Below we address your major concerns.\\n\\n- **Specific models and algorithms.** We acknowledge that the two-player two-action setting is somewhat restricted. We would like to point out that because of the technical challenge of the problem, even this setting requires a very delicate analysis, as highlighted in the appendix of the proof. We view the analysis as one of the major contributions of the paper. We are working on relaxing the assumption, and the convergence might still hold when there are *more than two actions.* \\nHowever, we do not think that studying Thompson sampling is a limitation of the work. The recent focus of algorithmic collusion has been on specific algorithms, such as Q-learning [1] and UCB [2]. Note that both works use numerical simulations to show that Q-learning and UCB are *susceptible to* algorithmic collusion. In this work, we have shown that Thompson sampling will not collude. It is clear that the analysis has to be carried out for individual algorithms case by case. The same technique (stochastic approximation) would not work for deterministic or more discrete algorithms such as UCB or $\\\\\\\\epsilon$-greedy. \\nTo answer your first set of questions more specifically, we do see stochastic approximation and the sample-pathwise analysis a potentially powerful tool when analyzing randomized online learning algorithms in the game setting, such as Boltzmann Exploration. But the analysis will be very different, requiring different sets of assumptions and techniques. This is similar to their regret analysis typically requiring individual treatments. 
\\n\\n- **Recent papers and modern approaches.** While there have been many recent studies on stochastic approximation (SA) (see the papers mentioned in the last paragraph of Section 1), they have primarily focused on finite-time analysis of SA, compared with the classical literature which focused on asymptotic convergence of SA. None of these recent approaches/results on finite time analysis can be applied to our problem. Note that the recent advance is to push the existing asymptotic analysis to the non-asymptotic regime, while the challenge arising from our setting is regarding the asymptotic analysis itself. For example, Haque et al. (2023) study finite-time analysis of two-time scale SA, but our SA scheme is not a two-time scale SA (because the posterior of the inferior action is only updated infrequently and sporadically). Similarly, Qu & Wierman (2020) study finite-time analysis of asynchronous SA, but our SA scheme is also different from their asynchronous SA (because the updating frequencies of different actions are not of the same order). Therefore, we can not apply these modern approaches in the recent SA literature. We believe that our asymptotic convergence analysis of the SA scheme that arises in the game setting with Thompson sampling is novel, and this is one of our main technical contributions. \\n\\n- **Recent papers on learning in games.** The recent studies on the convergence of learning algorithms in games have been focusing on games with continuous actions and/or gradient information [3]. Moreover, the payoff function typically has convexity assumptions. The analysis cannot be applied to Thompson sampling because the action set is discrete and it doesn\\u2019t use gradient-based approaches. \\n\\n[1] Calvano, Emilio, et al. \\\"Artificial intelligence, algorithmic pricing, and collusion.\\\" *American Economic Review* 110.10 (2020): 3267-3297. \\n[2] Hansen, Karsten T., Kanishka Misra, and Mallesh M. Pai. 
\\\"Frontiers: Algorithmic collusion: Supra-competitive prices via independent algorithms.\\\" *Marketing Science* 40.1 (2021): 1-12. \\n[3] Mertikopoulos, Panayotis, and Zhengyuan Zhou. \\\"Learning in games with continuous action sets and unknown payoff functions.\\\" *Mathematical Programming* 173 (2019): 465-507.\"}", "{\"summary\": \"This paper presents a negative result in the area of repeated games and learning. They study a scenario to two players, each having two actions. The players are \\\"blindfolded\\\" in that they only observe past actions and payoffs of themselves. The main result shows that if the players both use Thompson sampling, then then two players convers to the unique Nash equilibrium under mild conditions, i.e., no collusion arises. Proving this result requires new technical ideas beyond the typical approaches in the literature due to the lack of simultaneous updates and the lack of global Lipshitzness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Learning in games is a thriving area and understanding when agents converge or not to different forms of equilibria is timely and interesting. In particular, quantifying what factors lead to collusion (or the lack of it) is a central question in the literature.\\n\\nThe paper is clearly written and the authors do a solid job of placing their results in the literature. \\n\\nThe inclusion of numerical experiments in what is primarily and theoretical paper is appreciated.\", \"weaknesses\": \"The paper considers a very narrow setting - two-players, two-actions with both using a very specific algorithm. The contribution of the paper would be larger if if could state a broader class of models/algorithms also maintained \\\"no collusion\\\". As written, the result feels more like a curiosity worth investigating further than a major contribution, i.e., it shows a property of an example without exploring the limits of where the property holds. 
Examples are, of course, important, but the contribution would be larger if the setting could be expanded to probe the limits by considering, e.g., more players, more actions, other learning rules, etc.\\n\\nA particular limitation of interest to generalize is the specific assumption of Thompson sampling as the algorithm. Studying a broader class of algorithms would increase the level of contribution significantly.\\n\\nTable 1 highlights some challenges as compared to \\\"classic\\\" approaches in the literature. However, there are a wide variety of modern approaches proposed in the past few years as this literature has exploded. The paper does not highlight the challenges with applying these modern analyses (many of which consider more general settings) to the current results in the paper. A starting point would be to highlight the challenges associated with applying the techniques in the papers mentioned in the last paragraph of Section 1. See the question below.\", \"questions\": \"How much more generality in the model could your proposed techniques handle? What generalizations to the model will require new techniques? What more general class of algorithms could your techniques apply to?\\n\\nWhat are the technical contributions in your analysis relative to more recent approaches than those summarized in Table 1? For example, those mentioned in the last paragraph of Section 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work studies the dynamics of a repeated game where the players play using Thompson sampling based on the realized payoffs of their actions. This is referred to as a \\\"blindfolded\\\" game since the players do not get to see the impact of the other players' actions, or even model the other players, and play purely based on the payoffs of the actions (which are influenced by the actions chosen by the other players).
Thompson sampling is an algorithm with a no-regret guarantee in multi-armed bandits problems with a stationary distribution for each arm, which is not necessarily the case in the scenario of blindfolded games, due to the adaptive and non-stationary behavior of other players. The work establishes that in two-player, two-action games satisfying some conditions including the existence of a unique pure strategy Nash Equilibrium (PSNE), the dynamic of play converge in the last iterate to this PSNE. The work is situated in the context of previous papers studying algorithmic collusion in repeated pricing games where price setters use reinforcement learning to set prices and end up converging to collusive, non-equilibrium strategy profiles. In particular, this result is framed as a counterpoint to theoretical analysis of Hansen et. al. showing UCB algorithms (another class of no-regret algorithms for multi-armed bandits) can converge to collusive strategy profiles.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The main strength of this paper is in studying the intricate dynamics of agents employing Thompson sampling against each other. Unlike no-regret algorithms such as hedge/ multiplicative weights, Thompson sampling is a mis-specified algorithms in the context of games, and there is a notable lack of tools to analyze the dynamics they induce. This result is a non-trivial first contribution towards understanding these dynamics. The paper contains some interesting technical tools, including setting up a natural state space to track the dynamics and uses a sample-path-wise approach, building upon tools from Tsitsiklis 1994 to prove the result. 
Even though the result only holds under some fairly restrictive conditions, it sets up some interesting open problems for future research to pursue.\", \"weaknesses\": \"The main weakness of the result is the set of restrictive conditions on the class of games, such as only two actions, a unique pure strategy Nash Equilibrium and assumptions about the payoff of the Nash Equilibrium not being much worse than the other action profiles. In particular, it is not clear if these even include the simple pricing games that related work, such as Hansen et. al., Calvano et. al. study. These pricing games offer a compelling reason to study the emergence of collusion in the dynamics. Additionally, it is clear that results such as the folk theorem show the existence of algorithmic strategies (however artificial) that induce collusion in these pricing games, making it interesting to study if particular algorithmic strategies result in dynamics leading to collusion. It would help if the work could explain why these games may be interesting and discuss if collusion is possible under different algorithmic strategies employed by the players. The other weakness is the lack of discussion about how different parts of the proof slot together and exactly why Thompson sampling, as opposed to UCB/ other exploration-exploitation algorithms, provably converges to the Nash.\", \"questions\": \"Why is this class of games interesting? How does it compare to the pricing games studied by related work? Is collusion possible at all using different algorithms for the players? Are any of the conditions absolutely necessary?
Are there counterexamples showing the lack of one of these conditions breaks the result - one candidate that stands out in particular is the assumption about a unique PSNE.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your positive assessment and constructive comments for our paper. Please see our response below.\\n\\n- **Assumptions.** We acknowledge that some assumptions do appear restrictive, especially Assumption 2\\\\. From the numerical examples, the convergence may still hold in the absence of the assumption. We are currently pushing the analysis to see if it can be adapted to the more general case. However, the unique Nash equilibrium assumption is necessary, as demonstrated by the numerical experiment attached. In the literature it is also assumed when the action space is continuous and monotone/concave payoff functions [1,2]. \\n\\n- **Pricing game.** Let us consider the following pricing game. There are two firms and each firm chooses between two prices, $p_L=0.6$, $p_H=0.7$. The linear demand function is $d_i(p_i,p_{-i})=0.5-p_i+0.4*p_{-i}$. We can calculate the following payoff results, $\\\\\\\\pi_{LL}=0.084, \\\\\\\\pi_{LH}=0.108, \\\\\\\\pi_{HL}=0.028, \\\\\\\\pi_{HH}=0.056$. The Nash equilibrium is $(L, L)$, and the corresponding payoff matrices satisfy our two assumptions. It shows that our problem setting could cover the simple pricing game.\\n\\n- **Results of different algorithmic strategies.** **(1) Why use Thompson sampling for both players.** This is a great point. Based on the analysis of Thompson sampling in multi-armed bandits, it has the following property: if the other player adopts a stationary policy, then the player using Thompson sampling will converge to the best response with $\\\\\\\\sqrt{n}$ regret. 
This property provides justification for using and analyzing Thompson sampling in a game. Moreover, most papers on algorithmic collusion focus on a single class of policies of the players, such as UCB [Hansen et. al.] or Q-learning [Calvano et. al.], and are based on simulations. We provide a theoretical analysis for Thompson sampling, which is arguably one of the most well-known learning algorithms. We will add the remarks to the revision. **(2) UCB versus Thompson sampling.** The reason why UCB does not converge to the NE is due to the lack of randomness. In fact, the dynamic of the game under UCB as a deterministic dynamical system depends on the initial state, which results in non-convergence. The internal randomness in Thompson sampling helps the convergence, as shown by our analysis of the stochastic approximation. But this is not the only reason and the convergence has something to do with the steps of Thompson sampling specifically. For example, [3] show that randomized algorithms such as $\\\\\\\\epsilon$-greedy may still lead to collusion (does not converge to NE). We will remark on this in the revision. \\n\\n- **Counterexamples without these assumptions.** As you pointed out, we show that with mixed-strategy NEs, the game may converge to one of the pure-strategy NEs probabilistically, or not converge in new experiments (please see Section A.1 in the Appendix of the revision). The experiments show the necessity of a unique pure-strategy NE. \\n\\n[1] Cesa-Bianchi, Nicolo, and G\\u00e1bor Lugosi. *Prediction, learning, and games*. Cambridge university press, 2006. \\n[2] Jordan, Michael, Tianyi Lin, and Zhengyuan Zhou. \\\"Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback.\\\" *Operations Research* (2024). \\n[3] Klein, Timo. 
\\\"Autonomous algorithmic collusion: Q\\u2010learning under sequential pricing.\\\" *The RAND Journal of Economics* 52.3 (2021): 538-558.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": [\"Thank you for your constructive criticism of the work. Please see the response below.\", \"**Assumptions on two actions.** We recognize that the two-player, two-action model has its limitations. Nonetheless, it's important to highlight that the difficulty of this problem makes it challenging to analyze even the simplest problem configurations. Note that in our proof, we do not use the closed-form expressions of the normal distribution (in fact, the CDFs do not have a closed form). Instead, we rely on some analytical properties of normal distributions, such as the scaling with number of plays and the decay rate of the tail probability. These properties tend to hold for more than two actions. Therefore, it is likely that the analysis can be generalized to more than two actions and we are working on it.\", \"**Assumptions on pure-strategy NE.** Regarding your comment on the assumption of a unique pure-strategy NE, we argue that compared to the literature, this assumption is not too strong. When studying the convergence to the NE of games with continuous action spaces, many studies assume a unique pure-strategy NE and monotone/concave payoff functions \\\\[1,2\\\\]. Moreover, in the new experiments (please see Section A.1 in the Appendix of the revision), we show that with mixed-strategy NEs, the game may converge to one of the pure-strategy NEs probabilistically, or not converge. The experiments show the necessity of the assumption.\", \"**Practical applications.** Let us consider the following pricing game. There are two firms and each firm chooses between two prices, $p_L=0.6$, $p_H=0.7$. The linear demand function is $d_i(p_i,p_{-i})=0.5-p_i+0.4*p_{-i}$. 
We can calculate the following payoff results, $\\\\pi_{LL}=0.084, \\\\pi_{LH}=0.108, \\\\pi_{HL}=0.028, \\\\pi_{HH}=0.056$. The Nash equilibrium is $(L, L)$, and the corresponding payoff matrices satisfy our two assumptions. Therefore, there are practical applications that could meet these assumptions.\", \"**Comparison with Cai et al (2023).** Thank you for bringing this paper to our attention and we regret not including it in our initial submission. It is indeed an important paper. Upon reading the paper more carefully, we believe the motivation and analysis of our study deviate from Cai et al (2023). In their paper, the goal is to provide an algorithm that can converge to the Nash equilibrium with non-asymptotic analyses, based on EXP3. It is probably not surprising given that algorithms like EXP3 which minimize the internal regret have been shown to converge to the Nash equilibrium. Their study extends this notion to more general settings such as the last-iterate convergence, which is remarkable. In our setting, we focus on a specific algorithm, Thompson sampling, which is widely used in practice, and investigate its asymptotic property in a game. Therefore, the motivation (design of an algorithm versus analysis of an existing algorithm) differs. Moreover, the analysis of Thompson sampling in this study deviates significantly from the analysis of EXP3-type algorithms proposed in Cai et al (2023).\", \"[1] Cesa-Bianchi, Nicolo, and G\\u00e1bor Lugosi. *Prediction, learning, and games*. Cambridge university press, 2006.\", \"[2] Jordan, Michael, Tianyi Lin, and Zhengyuan Zhou. \\\"Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback.\\\" *Operations Research* (2024).\"]}
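The payoff numbers quoted in the two-price pricing game above can be checked mechanically. The sketch below (our own helper names; the demand function, prices, and payoffs are exactly those stated in the response) recomputes the payoff matrix and verifies that the low price is strictly dominant, so (L, L) is the unique pure-strategy Nash equilibrium.

```python
# Two-price pricing game: demand d_i(p_i, p_-i) = 0.5 - p_i + 0.4 * p_-i,
# payoff pi_i = p_i * d_i(p_i, p_-i); the game is symmetric.
prices = {"L": 0.6, "H": 0.7}

def payoff(p_i, p_j):
    """Payoff of a firm charging p_i when the rival charges p_j."""
    return p_i * (0.5 - p_i + 0.4 * p_j)

pi = {(a, b): round(payoff(prices[a], prices[b]), 3)
      for a in prices for b in prices}
# pi maps (own price, rival price) -> payoff, e.g. pi[("L","L")] == 0.084.

# L is strictly dominant: it beats H against either rival price,
# so (L, L) is the unique pure-strategy Nash equilibrium.
l_dominant = pi[("L", "L")] > pi[("H", "L")] and pi[("L", "H")] > pi[("H", "H")]
```

Deviating from (L, L) lowers a firm's payoff against either rival price, which is why the equilibrium benchmark in this example is unambiguous.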
We provide responses to your comments and questions below.\\n\\n- **Implication of Assumption 2.** Assumption 2 indeed seems strong and there is some room for improvement. In particular, the technical assumption on the payoff matrix is not necessary: an example of the convergence in the absence of Assumption 2 is demonstrated in the numerical experiment. However, we want to emphasize that the technical complexities of the problem necessitate a nuanced analysis, as demonstrated in the appendix of the proof. Currently, we do not know how to relax Assumption 2 and adapt the analysis, but it is a very important future direction we are pushing. \\n\\n- **Mixed Nash equilibrium.** We argue that compared to the literature, the assumption on a unique NE is not too strong. When studying the convergence to the NE of games with continuous action spaces, many studies assume a unique pure-strategy NE and monotone/concave payoff functions [1,2]. In new experiments (please see Section A.1 in the Appendix of the revision), we show that with mixed-strategy NEs, the game may converge to one of the pure-strategy NEs probabilistically, or not converge. The experiments show the necessity of a unique pure-strategy NE. \\n \\n\\n[1] Cesa-Bianchi, Nicolo, and G\\u00e1bor Lugosi. *Prediction, learning, and games*. Cambridge university press, 2006. \\n[2] Jordan, Michael, Tianyi Lin, and Zhengyuan Zhou. \\\"Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback.\\\" *Operations Research* (2024).\"}", "{\"metareview\": \"This paper studies learning with bandit feedback - that is, observing only one's own realized payoffs - in $2\\\\times2$ normal form games with a unique, strict Nash equilibrium. 
Both players are assumed to employ a Thompson sampling algorithm and the authors show that, under the above assumptions, the learning process converges to equilibrium (what the authors call \\\"no collusion\\\").\\n\\nMost of the reviewers were on the fence about this paper, for several reasons:\\n- First, the paper concerns only $2\\\\times2$ games. Even in the bandit setting - which the authors call the \\\"blindfolded game\\\" - there is a wide array of methods in the literature for achieving convergence to Nash equilibrium. Admittedly, the use of Thompson sampling by both players has not been studied extensively in the literature, but the problem of learning with bandit feedback in small games otherwise has a vast literature (which the authors do not cover / seem to be aware of).\\n- The reviews also raised concerns regarding the assumption that the game has a unique pure Nash equilibrium. Since the authors also assume that there are no payoff ties in the game, this means that the game admits a unique *strict* Nash equilibrium. In turn, since the paper only concerns $2\\\\times2$ games, this means that the game is strategically equivalent to the Prisoner's Dilemma, for which there is an even wider array of algorithms converging to Nash equilibrium, even with bandit feedback (for example, any algorithm that eliminates dominated strategies - like EXP3 or any bandit variant of FTRL - would suffice).\\n\\nThe paper was not championed during the discussion phase and it was ultimately decided to make a \\\"reject\\\" recommendation.\", \"additional_comments_on_reviewer_discussion\": \"In view of the limitations mentioned above, it was not possible to make the case that the paper clears the bar for acceptance.
The paper was not championed by any of the reviewers during the discussion phase, and the reviewers were not swayed by the authors' replies, so the conclusion was that the paper cannot be accepted at this point.\"}", "{\"comment\": \"Thank you for your feedback. We appreciate your earlier comment and are pleased that our response addressed it.\"}", "{\"comment\": \"Thank you for the careful reading of the paper and improving the score. We reviewed a few papers on swap regret and learning. They are indeed related. We know that an algorithm that minimizes internal regret will converge to the Nash equilibrium. Swap regret is an extension of internal regret which implies the same convergence to NE. One stream of works discusses the reduction from swap regret to external regret, e.g., [1,2]. They focus on how to achieve low internal/swap regret given a low external regret algorithm. Another type of papers presents algorithms that have low swap regret [3, 4]. In addition, [5] shows the relationships between no-regret (i.e. no-external-regret), no-swap-regret and Pareto-optimal algorithms. We will add these studies to our literature review.\\n\\nWe believe the motivation and analysis of this study is different. The primary reason is that Thompson sampling does not have sublinear (internal/swap) regret in the adversarial setting. Thus, the results in the literature do not imply no collusion. \\n\\n[1] Blum, A., & Mansour, Y. (2007). From external to internal regret. Journal of Machine Learning Research, 8(6).\\\\\\n[2] Dagan, Y., Daskalakis, C., Fishelson, M., & Golowich, N. (2024, June). From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (pp. 1216-1222).\\\\\\n[3] Anagnostides, I., Farina, G., Kroer, C., Lee, C. W., Luo, H., & Sandholm, T. (2022). Uncoupled learning dynamics with $ o (\\\\log t) $ swap regret in multiplayer games.
Advances in Neural Information Processing Systems, 35, 3292-3304.\\\\\\n[4] Peng, B., & Rubinstein, A. (2024, June). Fast swap regret minimization and applications to approximate correlated equilibria. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (pp. 1223-1234).\\\\\\n[5] Arunachaleswaran, E. R., Collina, N., & Schneider, J. (2024). Pareto-Optimal Algorithms for Learning in Games. arXiv preprint arXiv:2402.09549.\"}", "{\"comment\": \"Thanks for responding to the review.\\n\\nThe contrast between Thompson sampling and other randomized algorithms is well made -- this adds value to the bespoke analysis demonstrated by your work for Thompson sampling.\\n\\nI'm also satisfied with your explanation about the need for unique PSNE, and agree that it is a reasonable assumption.\"}", "{\"comment\": \"Thank you for your constructive comment.\", \"on_your_comment_about_the_pricing_game\": \"indeed we are only considering two prices. That is because the number of prices the players can charge is the number of actions in the game. Since our work focuses on two actions, we use two prices in the pricing game.\", \"on_whether_we_can_extend_it_to_multiple_actions\": \"we are working on this part in the last few days. The main technical challenge is two-fold. First, because of the nature of the sample-path-wise argument, we need to track the dynamic of the state. For more than two actions, the state space increases and the notation becomes heavy. We are trying to figure out a compact vectorized representation to replicate the analysis. Second, the key of the analysis is to characterize the region in the state space where the dynamic is *not Lipschitz continuous* asymptotically. For two actions, it is where the empirical averages of the two actions are equal, i.e., $x_1=x_2$. For multiple actions, it is where $x_i=\\\\max\\\\\\\\{x1,x2,...,x_k\\\\\\\\}$ for all $i$. It is a region that is a lot harder to analyze. 
Since we need to show the dynamics will escape the region, the analysis is more complicated for multiple actions. We are still working to extend the analysis.\"}" ] }
5pd78GmXC6
Charting the Design Space of Neural Graph Representations for Subgraph Matching
[ "Vaibhav Raj", "Indradyumna Roy", "Ashwin Ramachandran", "Soumen Chakrabarti", "Abir De" ]
Subgraph matching is vital in knowledge graph (KG) question answering, molecule design, scene graph, code and circuit search, etc. Neural methods have shown promising results for subgraph matching. Our study of recent systems suggests refactoring them into a unified design space for graph matching networks. Existing methods occupy only a few isolated patches in this space, which remains largely uncharted. We undertake the first comprehensive exploration of this space, featuring such axes as attention-based vs. soft permutation-based interaction between query and corpus graphs, aligning nodes vs. edges, and the form of the final scoring network that integrates neural representations of the graphs. Our extensive experiments reveal that judicious and hitherto-unexplored combinations of choices in this space lead to large performance benefits. Beyond better performance, our study uncovers valuable insights and establishes general design principles for neural graph representation and interaction, which may be of wider interest.
[ "Graph Retrieval", "Graph Neural Networks", "Subgraph Matching" ]
Accept (Poster)
https://openreview.net/pdf?id=5pd78GmXC6
https://openreview.net/forum?id=5pd78GmXC6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qw66ffA4mt", "qeDADtFAgt", "njIrAefWLD", "n3svfs1hH1", "kfO3KLtDYb", "kdjb6qCVoa", "eXgBjaklU4", "Wiu3rLaZWT", "Ulj4rNTo6E", "U95cKxd6GS", "SE3RMmPuyO", "RCPlBpFcAK", "OSSLj1G3d2", "NQs47zhyk7", "Jq0uuJRSQM", "IK3m5IOhSX", "HUzrqEydAq", "HOua1hc8ue", "GOgs8JvouF", "EdDSv3IPul", "DKiWmHO4Uh", "6AuXUgmf5P", "0o4rVk0bbY", "0Z4fuX45EC", "0OfVDuGRpG" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732619180252, 1732547194888, 1732548912243, 1732547891136, 1732548989073, 1732619086371, 1732776509514, 1737524279432, 1730525180047, 1732563677524, 1733078248140, 1733309635515, 1732547502497, 1732817380250, 1732548780083, 1732548363932, 1732950176814, 1732548103285, 1734439149387, 1732549393866, 1733078007638, 1730137893787, 1732619201745, 1730381427672, 1732548197794 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Reviewer_SvpM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13746/Reviewer_6vg7" ], [ "ICLR.cc/2025/Conference/Submission13746/Reviewer_6vg7" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Area_Chair_3Toz" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Reviewer_VJh8" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ], [ "ICLR.cc/2025/Conference/Submission13746/Reviewer_SvpM" ], [ "ICLR.cc/2025/Conference/Submission13746/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer SvpM (Part 3)\", \"comment\": \"### *Interaction non-linearity: Neural vs. Dot Product vs. Hinge*\\n\\nWe compare the three types of interaction non-linearities -- Dot Product, Hinge and Neural -- for both the top-performing aggregated embedding-based method (Agg-NTN) and the best-performing set alignment-based method. Table 7 presents the results for **Node-Early-Injective** variants, while Table 8 shows the results for **Edge-Early-Injective** variants. Hinge non-linearity is seen to be the best performer for Set alignment driven variants, while Neural is seen to be generally the best for Aggregated NTN based variants. \\n\\n**Table 7:** MAP for **Node-Early-Injective** interaction models for the first three datasets. \\n| Non-linearity (Rel. Dist.) 
| AIDS | Mutag | FM |\\n|-------------------------------------------|--------|--------|-------|\\n| Dot Product (Agg-NTN) | 0.721 | 0.772 | 0.812 |\\n| Hinge (Agg-NTN) | **0.743** | 0.773 | 0.805 |\\n| Neural (Agg-NTN) | 0.667 | **0.791** | **0.828** |\\n|-------------------------------------------|-------|-------|------|\\n| Dot Product (Set align) | 0.71 | 0.779 | 0.8 |\\n| Hinge (Set align) | **0.734** | 0.774 | **0.834** |\\n| Neural (Set align.) | 0.69 | **0.783** | 0.827 |\\n\\n**Table 8:** MAP for **Edge-Early-Injective** interaction models for the first three datasets. \\n| Non-linearity (Rel. Dist.) | AIDS | Mutag | FM |\\n|-------------------------------------------|--------|--------|-------|\\n| Dot Product (Agg-NTN) | 0.74 | **0.783** | 0.812 |\\n| Hinge (Agg-NTN) | **0.76** | 0.766 | **0.855** |\\n| Neural (Agg-NTN) | 0.689 | 0.749 | 0.831 |\\n|-------------------------------------------|-------|-------|------|\\n| Dot Product (Set align) | 0.798 | 0.789 | 0.862 |\\n| Hinge (Set align) | **0.817** | **0.837** | **0.887** |\\n| Neural (Set align.) | 0.725 | 0.784 | 0.837 |\\n\\n**Takeaways and Design Tips**\\nSet alignment provides a more interpretable approach for measuring coverage in subgraph matching, where explicit coverage modeling using hinge non-linearity emerges as the top performer by directly encoding the necessary inductive bias. In contrast, methods that rely on \\\"black-box\\\" techniques, such as NTN, which use aggregated graph embeddings, appear to align better with neural-based models.\\n\\n\\n\\n### *Interaction Granularity: Node vs Edge*\\nWe compare two interaction granularities\\u2014nodes v/s edges -- focusing on the most promising design choices over the previous axes: Relevance Distance (Set Align.), Interaction Structure (Injective), and Interaction Stage (Early). Table 9 presents the best-performing configurations in terms of MAP for both node and edge granularities across all interaction non-linearities. 
Notably, edge granularity-based variants demonstrate a significant performance improvement.\\n\\n**Table 9:** MAP for **Early-Injective-SetAlign** interaction models for the first three datasets. \\n| Granularity | AIDS | Mutag | FM |\\n|-------------------------------------------|--------|--------|-------|\\n| Node | 0.734 | 0.783 | 0.834 |\\n| Edge | **0.817** | **0.837** | **0.887** |\\n\\n\\n**Takeaways and Design Tips**\\nThe significant performance boost observed with edge granularity highlights the effectiveness of utilizing higher-order structures. Consider prioritizing edge granularity in designs to enhance performance, and exploring other higher-order alignments to enhance model expressiveness and accuracy in graph-based applications.\\n\\n\\n\\nWe have also included clearly defined sections throughout the main paper, providing a more principled outline and explanation of the key takeaways and design tips.\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 1)\", \"comment\": \"We thank the reviewer for their detailed review and valuable feedback that helps improve our work. We address the concerns posed by the reviewer below:\\n> W1. Most datasets considered are for small molecules despite the various applications of subgraph matching mentioned in the introduction.\\n\\nWhile the datasets currently included in the paper are widely recognized benchmarks, the reviewer correctly highlights that most of these are sourced from small molecule datasets (e.g., AIDS, MUTAG) and computer vision tasks (e.g., MSRC). We acknowledge this limitation and have extended our dataset suite to include three large-scale, real-world datasets that reflect diverse and practical applications of subgraph matching. These datasets were obtained from the SNAP repository (https://snap.stanford.edu/data/index.html) and were processed to align with the experimental setup described in our study. The details of the newly added datasets are as follows:\\n\\n1. 
com-Amazon (Amazon): This dataset represents an Amazon product co-purchasing network, where nodes correspond to products and edges denote frequent co-purchasing relationships. Subgraph matching in this dataset is relevant for tasks such as identifying frequent co-purchase patterns or improving recommendation systems.\\n\\n1. email-Enron (Email): This dataset comprises an email communication network from the Enron corporation, where nodes represent individuals and edges correspond to email exchanges. Subgraph matching in this context can provide insights into communication patterns and aid in identifying specific organizational structures.\\n\\n1. roadnet-CA (RoadNet): This dataset represents the highway network of California, where nodes correspond to road intersections, and edges denote roads connecting them. Subgraph matching in this setting can assist in applications such as traffic management and subnetwork optimization.\\n\\nTo integrate these datasets into our study, we followed the same preprocessing and subgraph extraction methodology outlined in the paper and ran all the designed experiments to further explore the design space of the subgraph matching methods.\\n\\nTables 1\\u20134 present the results in terms of MAP for all proposed variations discussed in our study, corresponding to Node-Late, Edge-Late, Node-Early, and Edge-Early approaches, respectively.\\n\\n**Table 1:** MAP for **Node-Late** interaction models for three diverse datasets.\\n| Rel. Dist. 
| Structure | Non-linearity | Amazon | Email | Roadnet |\\n|------------------|-----------------|---------------|--------|-------|---------|\\n| Agg-hinge | NA | NA | 0.616 | 0.723 | 0.556 |\\n| Agg-MLP | NA | NA | 0.544 | 0.631 | 0.616 |\\n| Agg-NTN | NA | NA | 0.656 | 0.778 | 0.499 |\\n| Set align | Non-injective | Dot Product | 0.66 | 0.798 | 0.618 |\\n| Set align | Non-injective | Hinge | 0.739 | 0.826 | 0.609 |\\n| Set align | Non-injective | Neural | 0.752 | 0.805 | 0.585 |\\n| Set align | Injective | Dot Product | 0.739 | 0.852 | **0.637** |\\n| Set align | Injective | Hinge | **0.772** | **0.864** | 0.576 |\\n| Set align | Injective | Neural | 0.771 | 0.852 | 0.605 |\\n\\n**Table 2:** MAP for **Edge-Late** interaction models for three diverse datasets.\\n\\n| Rel. Dist. | Structure | Non-linearity | Amazon | Email | Roadnet |\\n|---------------|----------------|---------------|--------|--------|---------|\\n| Agg-hinge | NA | NA | 0.692 | 0.847 | **0.744** |\\n| Agg-MLP | NA | NA | 0.652 | 0.783 | 0.682 |\\n| Agg-NTN | NA | NA | 0.678 | 0.851 | 0.706 |\\n| Set align. | Non-injective | Dot Product | 0.737 | 0.834 | 0.621 |\\n| Set align. | Non-injective | Hinge | 0.776 | 0.865 | 0.741 |\\n| Set align. | Non-injective | Neural | 0.771 | 0.838 | 0.72 |\\n| Set align. | Injective | Dot Product | 0.749 | 0.852 | 0.668 |\\n| Set align. | Injective | Hinge | **0.800** | 0.849 | 0.708 |\\n| Set align. | Injective | Neural | 0.788 | **0.871** | 0.724 |\"}", "{\"title\": \"Response to Reviewer VJh8 (Part 1)\", \"comment\": \"We thank the reviewer for their insightful review. The weaknesses and questions raised in the review are addressed below.\\n\\n> It seems all the 10 datasets are relatively small, e.g. up to 50 nodes. I wonder if there is a reason for not choosing much larger graphs, e.g. 
graphs up to 1M nodes as the target graph to be searched for (while using a small query graph of ~50 nodes).\\n\\nThere are two key reasons why we could not work with large graphs in the original version. But now, we have made strong evaluations on large graphs as well.\\n\\nSimilar to other ML tasks, our work also requires multiple graphs for training; here, one entire graph represents one instance of data. Data sets with a single large graph are more common than data sets containing multiple large graphs, which prevents a neural method from testing its performance on large graphs. Moreover, the need for our and existing works is driven by practical applications like molecular retrieval. For instance, works such as GMN [Li et al., 2019], Neuromatch [Lou et al., 2020] and IsoNet [Roy et al., 2021] focus on small- to medium-sized graphs in their benchmarks. This scale is largely motivated by application domains such as molecular graph retrieval and object detection in images via scene graphs, where the focus is more on retrieving multiple smaller graphs rather than processing single extremely large graphs. 
Inspired by this approach, we extended our experiments on the above datasets.\\n \\n \\nTables 1\\u20134 present the results in terms of MAP for all proposed variations discussed in our study, corresponding to Node-Late, Edge-Late, Node-Early, and Edge-Early approaches, respectively.\\n\\n**Table 1:** MAP for **Node-Late** interaction models for three diverse datasets.\\n|Rel. Dist.|Structure|Non-linearity|Amazon|Email|Roadnet|\\n|-|-|-|-|-|-|\\n|Agg-hinge|NA|NA|0.616|0.723|0.556|\\n|Agg-MLP|NA|NA|0.544|0.631|0.616|\\n|Agg-NTN|NA|NA|0.656|0.778|0.499|\\n|Set align|Non-injective|Dot Product|0.66|0.798|0.618|\\n|Set align|Non-injective|Hinge|0.739|0.826|0.609|\\n|Set align|Non-injective|Neural|0.752|0.805|0.585|\\n|Set align|Injective|Dot Product|0.739|0.852|**0.637**|\\n|Set align|Injective|Hinge|**0.772**|**0.864**|0.576|\\n|Set align|Injective|Neural|0.771|0.852|0.605|\\n\\n**Table 2:** MAP for **Edge-Late** interaction models for three diverse datasets.\\n\\n|Rel. Dist.|Structure|Non-linearity|Amazon|Email|Roadnet|\\n|-|-|-|-|-|-|\\n|Agg-hinge|NA|NA|0.692|0.847|**0.744**|\\n|Agg-MLP|NA|NA|0.652|0.783|0.682|\\n|Agg-NTN|NA|NA|0.678|0.851|0.706|\\n|Set align.|Non-injective|Dot Product|0.737|0.834|0.621|\\n|Set align.|Non-injective|Hinge|0.776|0.865|0.741|\\n|Set align.|Non-injective|Neural|0.771|0.838|0.72|\\n|Set align.|Injective|Dot Product|0.749|0.852|0.668|\\n|Set align.|Injective|Hinge|**0.800**|0.849|0.708|\\n|Set align.|Injective|Neural|0.788|**0.871**|0.724|\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 3)\", \"comment\": \"> W2. There is a lack of understanding in subgraph matching performance with respect to the intrinsic challenge of the setting, e.g., highly regular graphs, different graph sizes, different feature distributions, out-of-distribution settings. 
Such study may be best achieved with synthetic datasets.\\n\\nWe thank the reviewer for pointing us in the very interesting direction of studying different graph sizes, feature distributions and out-of-distribution settings. Towards this goal, we formulate two new experiments:\\n\\n**Transfer ability (out-of-distribution setting)**\\n\\nWe choose the model trained on dataset X and perform inference on dataset Y, where X \\u2260 Y. In particular, we fix Y to be the AIDS dataset and vary X in the set {Mutag, PTC-FR, MOLT-4H}. We compare the performance of these OOD models with the model trained on the AIDS dataset itself. For clarity of understanding, we show results on early interaction models with node-level granularity (Table 5) and edge-level granularity (Table 6). **Boldface** represents the dataset that provides the best transfer ability. The following observations can be made -\\n\\n1. The strongest transfer abilities are displayed by models trained on PTC-FR, followed by Mutag, and finally MOLT-4H, which shows severe degradation in performance compared to the baseline AIDS-based model. This pattern can be explained by the extent of relative dissimilarity between the source datasets and AIDS, which is maximum for MOLT-4H (graphs roughly double the size of those in AIDS), followed by Mutag (mean node/edge counts off by one/two) and finally PTC-FR (similar graph size distribution).\\n2. Despite the strong transfer ability, the model trained originally on AIDS is the best performer with almost every network configuration.\\n\\n**Table 5:** MAP for transfer ability of different **Node-Early** interaction models on the AIDS dataset.\\n\\n|Rel. Dist. 
|Structure |Non-linearity |AIDS |Mutag \\u2192 AIDS |PTC-FR \\u2192 AIDS |MOLT \\u2192 AIDS |\\n|-|-|--|-|-|-|-|\\n|Agg-Hinge|Non-injective|Dot Product|0.609|0.330|**0.520**|0.253|\\n|Agg-Hinge|Non-injective|Hinge|0.726|0.360|**0.626**|0.369|\\n|Agg-Hinge|Non-injective|Neural|0.598|0.331|**0.510**|0.238|\\n|Agg-Hinge|Injective|Dot Product|0.64|0.394|**0.581**|0.322|\\n|Agg-Hinge|Injective|Hinge|0.662|0.422|**0.595**|0.346|\\n|Agg-Hinge|Injective|Neural|0.614|0.403|**0.598**|0.322|\\n|Agg-MLP|Non-injective|Dot Product|0.63|0.323|**0.569**|0.363|\\n|Agg-MLP|Non-injective|Hinge|0.637|0.378|**0.624**|0.437|\\n|Agg-MLP|Non-injective|Neural|0.629|0.363|**0.552**|0.337|\\n|Agg-MLP|Injective|Dot Product|0.658|0.486|**0.612**|0.427|\\n|Agg-MLP|Injective|Hinge|0.683|0.457|**0.611**|0.455|\\n|Agg-MLP|Injective|Neural|0.629|0.464|**0.625**|0.414|\\n|Agg-NTN|Non-injective|Dot Product|0.669|0.450|**0.557**|0.407|\\n|Agg-NTN|Non-injective|Hinge|0.686|0.442|**0.623**|0.405|\\n|Agg-NTN|Non-injective|Neural|0.635|0.367|**0.549**|0.268|\\n|Agg-NTN|Injective|Dot Product|0.721|0.487|**0.635**|0.420|\\n|Agg-NTN|Injective|Hinge|0.743|0.478|**0.626**|0.356|\\n|Agg-NTN|Injective|Neural|0.667|0.499|**0.617**|0.395|\\n|Set align|Non-injective|Dot Product|0.608|0.377|**0.574**|0.419|\\n|Set align|Non-injective|Hinge|0.676|0.394|**0.600**|0.451|\\n|Set align|Non-injective|Neural|0.593|0.450|**0.599**|0.363|\\n|Set align|Injective|Dot Product|0.71|0.510|**0.663**|0.448|\\n|Set align|Injective|Hinge|0.734|0.505|**0.684**|0.480|\\n|Set align|Injective|Neural|0.69|0.507|**0.641**|0.494|\\n\\n**Table 6:** MAP for transfer ability of different **Edge-Early** interaction models on the AIDS dataset.\\n|Rel. 
Dist.|Structure|Non-linearity|AIDS|Mutag \\u2192 AIDS|PTC-FR \\u2192 AIDS|MOLT \\u2192 AIDS|\\n|-|-|-|-|-|-|-|\\n|Agg-Hinge|Non-injective|Dot Product|0.7|0.400|**0.560**|0.290|\\n|Agg-Hinge|Non-injective|Hinge|0.763|0.445|**0.640**|0.447|\\n|Agg-Hinge|Non-injective|Neural|0.677|0.414|**0.541**|0.219|\\n|Agg-Hinge|Injective|Dot Product|0.755|0.454|**0.657**|0.305|\\n|Agg-Hinge|Injective|Hinge|0.758|0.477|**0.664**|0.274|\\n|Agg-Hinge|Injective|Neural|0.68|0.422|**0.578**|0.278|\\n|Agg-MLP|Non-injective|Dot Product|0.677|0.451|**0.589**|0.341|\\n|Agg-MLP|Non-injective|Hinge|0.748|0.423|**0.619**|0.337|\\n|Agg-MLP|Non-injective|Neural|0.662|0.396|**0.563**|0.351|\\n|Agg-MLP|Injective|Dot Product|0.758|0.454|**0.662**|0.373|\\n|Agg-MLP|Injective|Hinge|0.789|0.491|**0.718**|0.425|\\n|Agg-MLP|Injective|Neural|0.703|0.419|**0.620**|0.312|\\n|Agg-NTN|Non-injective|Dot Product|0.71|0.444|**0.603**|0.399|\\n|Agg-NTN|Non-injective|Hinge|0.79|0.489|**0.658**|0.241|\\n|Agg-NTN|Non-injective|Neural|0.701|0.419|**0.572**|0.361|\\n|Agg-NTN|Injective|Dot Product|0.74|0.466|**0.623**|0.367|\\n|Agg-NTN|Injective|Hinge|0.76|0.518|**0.714**|0.431|\\n|Agg-NTN|Injective|Neural|0.689|0.409|**0.599**|0.320|\\n|Set align|Non-injective|Dot Product|0.715|0.474|**0.668**|0.374|\\n|Set align|Non-injective|Hinge|0.783|0.483|**0.687**|0.497|\\n|Set align|Non-injective|Neural|0.708|0.429|**0.594**|0.412|\\n|Set align|Injective|Dot Product|0.798|0.505|**0.695**|0.447|\\n|Set align|Injective|Hinge|0.817|0.599|**0.773**|0.538|\\n|Set align|Injective|Neural|0.725|0.468|**0.599**|0.428|\"}", "{\"title\": \"Response to Reviewer VJh8 (Part 2)\", \"comment\": \"**Table 3:** MAP for **Node-Early** interaction models for three diverse datasets.\\n\\n|Rel. Dist. 
|Structure |Non-linearity|Amazon|Email|Roadnet|\\n|-----------------|-----------------|---------------|--------|-------|---------|\\n|Agg-MLP |Injective |Hinge |0.805|0.902|0.680|\\n|Agg-MLP |Injective |Dot Product|0.725|0.911|0.643|\\n|Agg-MLP |Injective |Neural |0.798|0.895|0.668|\\n|Agg-MLP |Non-Injective|Hinge |0.770|0.869|0.646|\\n|Agg-MLP |Non-Injective|Dot Product|0.720|0.817|0.648|\\n|Agg-MLP |Non-Injective|Neural |0.748|0.834|0.586|\\n|Agg-NTN |Injective |Hinge |0.798|0.906|0.699|\\n|Agg-NTN |Injective |Dot Product|0.789|0.902|0.669|\\n|Agg-NTN |Injective |Neural |0.787|0.920|0.732|\\n|Agg-NTN |Non-Injective|Hinge |0.768|0.855|0.668|\\n|Agg-NTN |Non-Injective|Dot Product|0.668|0.829|0.612|\\n|Agg-NTN |Non-Injective|Neural |0.695|0.839|0.645|\\n|Agg-hinge |Injective |Hinge |0.756|0.866|0.659|\\n|Agg-hinge |Injective |Dot Product|0.748|0.861|0.682|\\n|Agg-hinge |Injective |Neural |0.743|0.862|0.660|\\n|Agg-hinge |Non-Injective|Hinge |0.799|0.878|0.662|\\n|Agg-hinge |Non-Injective|Dot Product|0.657|0.764|0.610|\\n|Agg-hinge |Non-Injective|Neural |0.676|0.759|0.592|\\n|Set align |Injective |Hinge |**0.849**|**0.935**|**0.745**|\\n|Set align |Injective |Dot Product|0.816|0.905|0.706|\\n|Set align |Injective |Neural |0.823|0.905|0.723|\\n|Set align |Non-Injective|Hinge |0.768|0.860|0.652|\\n|Set align |Non-Injective|Dot Product|0.729|0.827|0.648|\\n|Set align |Non-Injective|Neural |0.747|0.842|0.648|\\n\\n**Table 4:** MAP for **Edge-Early** interaction models for three diverse datasets.\\n|Rel. 
Dist.|Structure |Non-linearity|Amazon|Email|Roadnet|\\n|---------------|----------------|---------------|--------|--------|---------|\\n|Agg-MLP |Injective |Hinge |0.84|0.926|0.773|\\n|Agg-MLP |Injective |Dot Product|0.799|0.934|0.76|\\n|Agg-MLP |Injective |Neural |0.72|0.91|0.74|\\n|Agg-MLP |Non-Injective|Hinge |0.801|0.886|0.671|\\n|Agg-MLP |Non-Injective|Dot Product|0.74|0.837|0.745|\\n|Agg-MLP |Non-Injective|Neural |0.703|0.869|0.613|\\n|Agg-NTN |Injective |Hinge |0.791|0.922|0.731|\\n|Agg-NTN |Injective |Dot Product|0.783|0.916|0.76|\\n|Agg-NTN |Injective |Neural |0.789|0.908|0.737|\\n|Agg-NTN |Non-Injective|Hinge |0.795|0.868|0.816|\\n|Agg-NTN |Non-Injective|Dot Product|0.765|0.874|0.77|\\n|Agg-NTN |Non-Injective|Neural |0.701|0.853|0.708|\\n|Agg-hinge |Injective |Hinge |0.827|0.939|0.805|\\n|Agg-hinge |Injective |Dot Product|0.77|0.915|0.805|\\n|Agg-hinge |Injective |Neural |0.753|0.911|0.768|\\n|Agg-hinge |Non-Injective|Hinge |0.797|0.874|0.73|\\n|Agg-hinge |Non-Injective|Dot Product|0.736|0.845|0.607|\\n|Agg-hinge |Non-Injective|Neural |0.69|0.842|0.699|\\n|Set align |Injective |Hinge |**0.863**|**0.944**|**0.834**|\\n|Set align |Injective |Dot Product|0.827|0.921|0.828|\\n|Set align |Injective |Neural |0.826|0.909|0.752|\\n|Set align |Non-Injective|Hinge |0.783|0.875|0.832|\\n|Set align |Non-Injective|Dot Product|0.802|0.872|0.802|\\n|Set align |Non-Injective|Neural |0.769|0.858|0.74|\\n\\n\\n\\n\\nWe note that the performance trends observed on the small molecule datasets are consistent with those on these diverse real-world datasets.\\nIn particular, we note that: \\n1. Edge-level granularity outperforms node-level granularity, as evidenced by comparisons between corresponding cells in Table 1 (Node-Late) and Table 2 (Edge-Late), as well as Table 3 (Node-Early) and Table 4 (Edge-Early).\\n2. Early interaction variants (Tables 3 and 4) demonstrate significantly higher MAP values compared to late-interaction models (Tables 1 and 2).\\n3. 
Across all four tables, Set Alignment relevance distance consistently achieves the highest MAP values in most cases.\\n4. Injective interaction structure variants generally provide higher MAP values compared to their non-injective counterparts across all tables.\\n5. The combination of Set Alignment with an injective interaction structure and Hinge interaction non-linearity is particularly effective, yielding the highest MAP values for these datasets.\\n\\n\\nThe performance on subgraph localization tasks on these larger real-world datasets, aligning with the performance trends on the smaller and medium-sized datasets, reinforces the generality of our findings and conclusions.\\n\\nThese results further suggest that our observations can be extended to heuristics operating on much larger graphs. While this is not the primary focus of our study, these experiments highlight the broader applicability of our approach to large graph scenarios.\"}
| **0.734** | **0.783** | **0.834**|\\n\\n**Table 4:** MAP for **Edge** interaction models for the first three datasets. \\n|Interaction Stage (Rel. Dist) | AIDS | Mutag | FM |\\n|-------------------------------------------|-------|-------|------|\\n| Late (Agg-NTN) | 0.66 | 0.718 | 0.759|\\n| Early (Agg-NTN) |**0.768** | **0.755** | **0.800**|\\n|-------------------------------------------|-------|-------|------|\\n| Late (Set align.) | 0.704 | 0.733 | 0.782|\\n| Early (Set align.) |**0.774** | **0.764** | **0.806**|\\n\\n**Takeaways and Design Tips**\\nAlthough late interaction potentially enables fast nearest neighbor search, early interaction is generally known to be superior in text retrieval [ColBERT, Figure~1]. The comparison between IsoNet [ISONET] vs GMN [GMN] apparently contradicts this general trend. Therefore, it is important to resolve this issue using carefully controlled experiments. \\n\\n[ColBERT] O. Khattab and M. Zaharia. Colbert: Efficient and effective passage search via contextualized late interaction over bert. \\n\\n[ISONET] I. Roy, V. S. Velugoti, S. Chakrabarti, and A. De. Interpretable neural subgraph matching for graph retrieval.\\n\\n[GMN] Y. Li, C. Gu, T. Dullien, O. Vinyals, and P. Kohli. Graph matching networks for learning the similarity\\nof graph structured objects\\n\\n### *Interaction Structure: Injective v/s non-Injective*\\nWe compare injective and non-injective interaction variants for both the top-performing aggregated embedding-based method (Agg-NTN) and the best-performing set alignment-based method. Table 5 presents the results for **Node-Early** variants, while Table 6 shows the results for **Edge-Early** variants. Injective interaction structure is seen to afford significant performance improvements over non-injective interaction in a majority of cases. 
Furthermore, we observe that the performance improvement from switching to injective interaction is more pronounced in set alignment relevance distance-based methods compared to aggregated NTN-based methods.\\n\\n\\n**Table 5:** MAP for **Node-Early** interaction models for the first three datasets. \\n|Interaction Structure (Rel. Dist) | AIDS | Mutag | FM |\\n|--------------------------------------------|------|-------|-------|\\n| Non-Injective (Agg-NTN) |0.686 | 0.758 | 0.818 |\\n| Injective (Agg-NTN) |**0.743** | **0.791** | **0.828** |\\n|-------------------------------------------|-------|-------|------|\\n| Non-Injective (Set align.) |0.676 | 0.754 | 0.772 |\\n| Injective (Set align.) |**0.734** | **0.783** | **0.834** |\\n\\n\\n**Table 6:** MAP for **Edge-Early** interaction models for the first three datasets. \\n|Interaction Structure (Rel. Dist) | AIDS | Mutag | FM |\\n|--------------------------------------------|------|-------|-------|\\n| Non-Injective (Agg-NTN) | **0.790** | **0.785** | 0.835 |\\n| Injective (Agg-NTN) | 0.760 | 0.783 | **0.855** |\\n|-------------------------------------------|-------|-------|------|\\n| Non-Injective (Set align.) | 0.783 | 0.785 | 0.812 |\\n| Injective (Set align.) | **0.817** | **0.837** | **0.887** |\\n\\n**Takeaways and Design Tips**\\nThe combinatorial definition of graph matching includes finding an injective mapping between pairs of nodes from the two graphs. The mapping is also an interpretable artifact.\\nAttention from one node to all nodes in the other graph, even if maintained from each graph separately, cannot achieve a consistent 1-1 mapping. Our experiments suggest that an injective mapping (or its continuous relaxation --- doubly stochastic matrices) performs better.\"}", "{\"comment\": \"Thank you for the detailed rebuttal. 
I appreciate your effort in systematically exploring the design space, but I still have some concerns about the broader impact and novelty of the study.\n\nWhile systematic navigation offers some value, enumerating and evaluating all possible subcomponents in the design space feels more like a brute-force ablation study than a scientific contribution. I personally think that framing this work as brute-force-style performance comparisons is not advisable, since subcomponents do not interact with each other in terms of performance. Moreover, the findings might be informative in some contexts, but they do not appear novel or surprising. I still see only two differences between your best model and IsoNet: the interaction stage and the non-linearity. \n\nLeveraging prior studies' ablation findings could have narrowed the search space, potentially making the exploration more targeted and efficient. For example, certain subcomponents, such as the ineffectiveness of node alignment and symmetric scores, have been previously reported and discussed in the literature.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper presents a unified design space for existing neural subgraph matching methods over various dimensions. 
By carefully controlling the design option for each dimension, it effectively reveals the optimal combination of design options for neural subgraph matching, with substantial improvement, providing new insights into the previous methods and reported results.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"**S1.** The charting of the design space is convincing and well-articulated.\\n\\n**S2.** Empirical studies demonstrate the effectiveness of the design space in characterizing existing methods and devising superior new methods for the experiments considered.\\n\\n**S3.** The presentation is quite neat and clear overall.\", \"weaknesses\": \"**W1.** Most datasets considered are for small molecules despite the various applications of subgraph matching mentioned in the introduction.\\n\\n**W2.** There is a lack of understanding of subgraph matching performance with respect to the intrinsic challenges of the setting, e.g., highly regular graphs, different graph sizes, different feature distributions, out-of-distribution settings. Such a study may be best achieved with synthetic datasets.\\n\\n**minor**\\n- The use of the notations $\\\\omega$ and $\\\\eta$ is not consistent between L124 and Figure 1.\", \"questions\": \"**Q1.** Does the use of hinge distance make sense for set alignment and aggregated-hinge (L216-236)? In particular, the authors justify the use of hinge distance for adjacency matrices, but the adjacency matrices are binary while node/aggregated node embeddings are just real-valued.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed and well-thought-out responses, which have addressed most of my questions and concerns. I've increased my scores accordingly. 
The paper's primary strength lies in its elegant simplification and demystification of existing approaches - a contribution I consider to be of significant value to the field. Such work that brings clarity and accessibility is essential for advancing future research. Regardless of the final acceptance decision, the review process has been intellectually enriching to me, and I want to thank the authors for preparing a high-quality submission, from which I've also learnt a lot.\"}", "{\"title\": \"Further discussion based on suggestions (Part 2)\", \"comment\": \"> Academic values derived from underperforming combinations\", \"the_key_underperforming_combinations_are_as_follows\": \"1. Poor design choices for $\\\\nabla _P \\\\text{dist}(G _q, G _c)$ significantly impact performance. The widely used Dot Product interaction (commonly found in attention-based models like GMN and H2MN) and the neural interaction non-linearity introduced by IsoNet are consistently outperformed by the Hinge non-linearity. Hinge offers a clear advantage across all metrics and is the preferred choice for interaction modeling.\\n\\n2. Using dot product or neural methods, pivoting to set alignment relevance distance improves MAP values. However, this comes at the cost of a significant increase in inference time, making it suboptimal for achieving the best MAP-inference time Pareto trade-off.\\n\\n3. In the above, while using dot product or neural methods, transitioning to non-injective set alignment in an effort to improve inference times results in a notable performance drop. This approach fails to provide a satisfactory trade-off between accuracy and efficiency.\\n\\n4. When using Early Interaction with Hinge non-linearity and non-injective interaction, the choice of relevance distance becomes less critical. 
In such cases, opting for faster aggregated relevance variants over set alignment achieves similar performance while reducing computational overhead.\n\nWe elaborate on both underperforming and well-performing combinations in the **Takeaways and Design Tips** (pages 5\u20137) and the **Design Guidelines** (pages 9\u201310). Please refer to the text highlighted in teal for detailed insights.\n\n\n> On another note, I recently came across the following study: https://openreview.net/forum?id=udTwwF7tks Could you let me know which combination in your submission this model aligns with? \n\n\nIf cast into our specified design axes, this paper employs Early Interaction, with an injective interaction structure, neural interaction non-linearity, and set alignment relevance distance. However, their early interaction model is more sophisticated than prior works, factoring in node pairs and updating alignments lazily at fixed intervals. While this approach appears to improve MAP, it is computationally extremely expensive, requiring further investigation to address: (1) How this novel interaction model interacts with other design choices, such as whether hinge non-linearity could further enhance performance or interact adversely; (2) Its position in the MAP-inference time Pareto trade-off compared to other design choices.\n\n> Of course, this is a NeurIPS 2024 paper, so it does not necessarily have to be included in this submission, but it would be nice to have it mentioned later. And I am also planning to compare the results in the paper and your submission. \n\nWe agree this study is relevant to our submission. However, it became publicly available only recently (just three weeks ago, and more than one month after the submission deadline). Comparing the results of this paper against ours is therefore beyond the scope of our current work. We will certainly mention it clearly in a later version. 
Notably, this paper underscores the importance of our current submission in the rapidly evolving landscape of subgraph matching methods. \n\nAs with many recent proposals, this paper appears to treat its specific design choices as achieving a robust (local) maximum in performance. However, without systematically incorporating superior design options\u2014such as injective interaction, neural non-linearity, and edge granularity\u2014for baselines like GMN, it may be challenging to attribute performance gains solely to novel aspects like node-pair attention and lazy updates. This highlights the need for a principled framework to evaluate and benchmark design choices systematically. \n\nOur work addresses this need by proposing clear guidelines for evaluating new models and understanding the combinatorial interplay of design choices. We believe this will inspire more confidence and measurable progress in subgraph matching research. \n\n\n\nWe reiterate that our proposal is not in competition with this new paper, nor does this new paper diminish the significance of our work. Instead, this paper aligns with our specified design axes and can further advance the field when analyzed using our proposed guidelines. In fact, our analysis can be used to enhance the quality of this paper too.\n\n\n\n\n### Appeal to Reviewer SvpM \n\nWe greatly appreciate your continued engagement in improving this work. We had earlier highlighted the key takeaways and design tips throughout the paper and elaborated on the theoretical underpinnings of hinge non-linearity in Section D.1 of the Appendix. Your feedback has helped us refine these insights more coherently. While we are unable to update the paper PDF at this stage, we will ensure these improvements are incorporated in the final revision if the paper is accepted. 
Kindly let us know if this further discussion has addressed your concerns.\"}", "{\"title\": \"Closing Remarks from Authors\", \"comment\": \"Dear Reviewers and AC:\\n\\nWe would like to thank all reviewers for the helpful discussions and their evident overall favorable impression of the paper.\\n\\nGestalt performance comparisons obscure vital design choices whose combinations remain largely unexplored and poorly understood in the community of work on neural graph matching and (sub)graph isomorphism detection. Conventional papers in this domain typically introduce a new model, which \\u2014 like all other works \\u2014 will then claim to harness certain signals. However, as we show, the key reason behind the superiority of a certain model often remains obscured. It may be because of a very simple reason, as opposed to a substantial modeling innovation. This is because the remaining design axes are not standardized across methods. Also, different components across different axes do interact. \\n\\n\\nAs this rapidly evolving field continues to introduce newer models, it becomes increasingly important to focus on bringing clarity into the space by asking pertinent questions on why certain models perform better. We distil models into five key design axes, and instrument the existing models far better than the models themselves have done. As reviewers 6vg7 and SvpM pointed out, this work will bring clarity, accessibility and clear guidance to the practitioners, ensuring that the field progresses with a deeper understanding and methodological guidance, rather than mere chaotic exploration.\\n\\n\\nDuring the course of this exploration, we uncovered how different design components interact. This understanding provided a clear framework for navigating design choices under various time constraints, enabling practitioners to optimize performance using existing techniques. 
Additionally, our demystification of the design space revealed a theoretically justified, hitherto unexplored, hinge interaction non-linearity, which, through its intuitive inductive bias, delivered significant accuracy gains overall.\n\n\nCertain specific choices on the design axes incidentally ended up outperforming SOTA methods. But we did not start out with the goal of designing yet another model to improve the SOTA. To embody this message, we put the comparative analysis at the end of the paper. We are not even claiming that we have exhausted all important design axes (many may remain to be explored) \u2014 any method that claims to go significantly beyond this recipe must necessarily add very distinctive design axes. However, our hope is that the precedent set by this paper may pave the way for continued guidance for network design and evaluation.\n\nRegards,\n\nAuthors\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 2)\", \"comment\": \"**Table 3:** MAP for **Node-Early** interaction models for three diverse datasets.\\n\\n| Rel. Dist. 
| Structure | Non-linearity | Amazon | Email | Roadnet |\\n|------|---------|-------|--------|-------|------|\\n| Agg-MLP | Injective| Hinge | 0.805 | 0.902 | 0.680 |\\n| Agg-MLP | Injective| Dot Product | 0.725 | 0.911 | 0.643 |\\n| Agg-MLP | Injective | Neural | 0.798 | 0.895 | 0.668 |\\n| Agg-MLP | Non-Injective | Hinge| 0.770 | 0.869 | 0.646 |\\n| Agg-MLP | Non-Injective | Dot Product | 0.720 | 0.817 | 0.648 |\\n| Agg-MLP | Non-Injective | Neural| 0.748 | 0.834 | 0.586 |\\n| Agg-NTN | Injective| Hinge| 0.798 | 0.906 | 0.699 |\\n| Agg-NTN | Injective| Dot Product | 0.789 | 0.902 | 0.669 |\\n| Agg-NTN | Injective| Neural| 0.787 | 0.920 | 0.732 |\\n| Agg-NTN | Non-Injective | Hinge| 0.768 | 0.855 | 0.668 |\\n| Agg-NTN | Non-Injective | Dot Product | 0.668 | 0.829 | 0.612 |\\n| Agg-NTN | Non-Injective | Neural| 0.695 | 0.839 | 0.645 |\\n| Agg-hinge| Injective| Hinge| 0.756 | 0.866 | 0.659 |\\n| Agg-hinge| Injective| Dot Product | 0.748 | 0.861 | 0.682 |\\n| Agg-hinge| Injective| Neural| 0.743 | 0.862 | 0.660 |\\n| Agg-hinge| Non-Injective| Hinge| 0.799 | 0.878 | 0.662 |\\n| Agg-hinge| Non-Injective| Dot Product | 0.657 | 0.764 | 0.610 |\\n| Agg-hinge| Non-Injective| Neural| 0.676 | 0.759 | 0.592 |\\n| Set align| Injective| Hinge|**0.849** |**0.935** | **0.745** |\\n| Set align| Injective| Dot Product | 0.816 | 0.905 | 0.706 |\\n| Set align| Injective| Neural| 0.823 | 0.905 | 0.723 |\\n| Set align| Non-Injective| Hinge| 0.768 | 0.860 | 0.652 |\\n| Set align| Non-Injective| Dot Product | 0.729 | 0.827 | 0.648 |\\n| Set align| Non-Injective| Neural| 0.747 | 0.842 | 0.648 |\\n\\n**Table 4:** MAP for **Edge-Early** interaction models for three diverse datasets.\\n| Rel. Dist. 
| Structure | Non-linearity | Amazon | Email | Roadnet |\\n|---------------|----------------|---------------|--------|--------|---------|\\n| Agg-MLP| Injective| Hinge| 0.84 | 0.926 | 0.773 |\\n| Agg-MLP| Injective| Dot Product | 0.799 | 0.934 | 0.76 |\\n| Agg-MLP| Injective| Neural| 0.72 | 0.91 | 0.74 |\\n| Agg-MLP| Non-Injective | Hinge| 0.801 | 0.886 | 0.671 |\\n| Agg-MLP| Non-Injective | Dot Product | 0.74 | 0.837 | 0.745 |\\n| Agg-MLP| Non-Injective | Neural| 0.703 | 0.869 | 0.613 |\\n| Agg-NTN| Injective| Hinge| 0.791 | 0.922 | 0.731 |\\n| Agg-NTN| Injective| Dot Product | 0.783 | 0.916 | 0.76 |\\n| Agg-NTN| Injective| Neural| 0.789 | 0.908 | 0.737 |\\n| Agg-NTN| Non-Injective | Hinge | 0.795 | 0.868 | 0.816 |\\n| Agg-NTN| Non-Injective | Dot Product | 0.765 | 0.874 | 0.77 |\\n| Agg-NTN| Non-Injective | Neural| 0.701 | 0.853 | 0.708 |\\n| Agg-hinge| Injective| Hinge | 0.827 | 0.939 | 0.805 |\\n| Agg-hinge| Injective| Dot Product | 0.77 | 0.915 | 0.805 |\\n| Agg-hinge| Injective| Neural| 0.753 | 0.911 | 0.768 |\\n| Agg-hinge| Non-Injective | Hinge| 0.797 | 0.874 | 0.73 |\\n| Agg-hinge| Non-Injective | Dot Product | 0.736 | 0.845 | 0.607 |\\n| Agg-hinge| Non-Injective | Neural| 0.69 | 0.842 | 0.699 |\\n| Set align| Injective| Hinge | **0.863** | **0.944** | **0.834** |\\n| Set align| Injective| Dot Product | 0.827 | 0.921 | 0.828 |\\n| Set align| Injective| Neural | 0.826 | 0.909 | 0.752 |\\n| Set align| Non-Injective | Hinge| 0.783 | 0.875 | 0.832 |\\n| Set align| Non-Injective | Dot Product | 0.802 | 0.872 | 0.802 |\\n| Set align| Non-Injective | Neural| 0.769 | 0.858 | 0.74 |\\n\\nWe note that the performance trends observed on the small molecule datasets are consistent with those on these diverse real-world datasets.\\nIn particular, we note that: \\n1. 
Edge-level granularity outperforms node-level granularity, as evidenced by comparisons between corresponding cells in Table 1 (Node-Late) and Table 2 (Edge-Late), as well as Table 3 (Node-Early) and Table 4 (Edge-Early).\\n2. Early interaction variants (Tables 3 and 4) demonstrate significantly higher MAP values compared to late-interaction models (Tables 1 and 2).\\n3. Across all four tables, Set Alignment relevance distance consistently achieves the highest MAP values in most cases.\\n4. Injective interaction structure variants generally provide higher MAP values compared to their non-injective counterparts across all tables.\\n5. The combination of Set Alignment with an injective interaction structure and Hinge interaction non-linearity is particularly effective, yielding the highest MAP values for these datasets\\n\\nThis consistency reinforces the generality of the observations and conclusions outlined in the paper.\\n\\nBy incorporating these diverse datasets, we address the reviewer\\u2019s concerns regarding the representativeness of our experimental suite. We believe this extension strengthens the study and aligns it more closely with the practical applications of subgraph matching discussed in the introduction.\"}", "{\"title\": \"Incorporating suggestions of Reviewer SvpM\", \"comment\": \"Many thanks for your feedback. We now better understand your concerns, which we address below.\\n\\n> *Novelty and surprise in results:*\\n\\nFig 3 suggests that (1) components interact; (2) components that are suboptimal within the individual axes may be combined to give high accuracy in fast inference time. This figure clearly guides a practitioner to choose the model combination based the inference time cost permissible in the underlying usecase, rather than trying all possible combinations.\", \"we_provide_some_novel_and_surprising_observations_as_follows\": \"(1) Components do interact. 
Suppose we just switch one axis (relevance distance) from the optimal combination: **SA**+Early+Injective+Hinge+Edge to **Agg-NTN**+Early+Injective+Hinge+Edge; performance degrades drastically (by 7%). We expected that if we further changed the choice along another axis, MAP would decrease more. Contrary to this expectation, when we change injective to non-injective (**Agg-NTN**+Early+**Non-Injective**+Hinge+Edge), the performance actually improves significantly. Moreover, inference becomes much faster (2X faster than the best MAP setup). This was indeed surprising to us, since injective mapping is a key criterion advocated by IsoNet. But it turns out injectivity only shows its effect if set alignment is the final score.\n\nHowever, injective maps may not always be the preferred solution, because the Sinkhorn network consumes large compute resources. The designer may choose a non-injective mapping, which may result in a considerable speed-up. In this case, Agg-NTN+Early+Non-Injective+Hinge+Edge may be the best design.\n\n(2) In principle, an MLP with cascaded layers of Linear-ReLU-Linear networks should be able to learn any underlying nonlinearity, certainly simple nonlinearities like hinge. Hence, if we change SA+Early+Injective+Hinge+Edge to SA+Early+Injective+Neural+Edge, we expect comparable performance. However, the performance drops drastically (by 11.3%). This is surprising since SA+Early+Injective+Neural+Edge defines a superclass of the hypotheses modeled using SA+Early+Injective+Hinge+Edge.\n\n(3) Most surprisingly, in IsoNet, injective vs. non-injective does not make any difference. See the bottom left cluster in the scatter plot in Fig 3: (Late interaction, set alignment, edge (also node)), where injective and non-injective points are overlapping.\n\nMoreover, IsoNet claimed that the edge alignment model, set alignment and injective mapping are the key components of a subgraph matching model. 
They claimed that due to these factors, GMN is outperformed by them even if we use asymmetric scores in GMN. However, as the table here (also page 22, Table 4) suggests, if we just change GMN\u2019s nonlinearity to hinge, then it performs better than IsoNet, despite the use of **node alignment, aggregated scoring (no set alignment) and non-injective mapping.**\n\n|Dataset|IsoNet|GMN(DP \u2192 Hinge)|\n|-|-|-|\n|AIDS|0.704|**0.726**|\n|Mutag|**0.733**|0.723|\n|FM|0.782|**0.79**|\n|NCI|0.615|**0.618**|\n|MOLT|0.649|**0.651**|\n|FR|0.734|**0.75**|\n\n> *Leveraging prior studies' ablation findings could have narrowed the search space*\n\nAs we have seen in the above table, node alignment, non-injective mapping, and aggregated-hinge with hinge nonlinearity work better than IsoNet\u2019s edge alignment. Hence, IsoNet\u2019s claim that node alignment is always worse than edge alignment is not true, and therefore narrowing down the search based on related work may not be helpful. \n\n> *differences between your best and IsoNet are two*\n\nThe best model reported in the paper corresponds to early interaction with injective mapping. This is extremely costly and is not preferred by practitioners whose use cases involve time constraints. Apart from this expensive combination, Fig 3 in our paper allows for a wide variety of combinations, catering to various inference time costs. Under such time constraints, one has to make changes across several axes of IsoNet; the resulting model still outperforms IsoNet. \nThe following table shows that changes across multiple axes are necessary to beat IsoNet in comparable run-time. \n**Our-early-1 is close to our-early-best but more than 2x faster, and it is different from IsoNet in four out of five axes.**\nIt provides an excellent trade-off between accuracy and time.\n\n| |Rel. 
Dist.|Stage|Structure|Non-linearity|Granularity|Aids|Mutag|FM|NCI|Inf Time (ms)|\\n|-|-|-|-|-|-|-|-|-|-|-|\\n|IsoNet|SA|Late|Injective|Neural|Edge|0.704|0.733|0.782|0.615|23 ms| \\n|Our-early-1|Agg-NTN|Early|Non-Injective|Hinge|Edge|0.79|0.785|0.835|0.661|28 ms| \\n|Our-early-2|Agg-Hinge|Early|Non-Injective|Hinge|Node|0.726|0.723|0.79|0.618|26 ms| \\n|Our-early-best|SA|Early|Injective|Hinge|Edge|0.817|0.837|0.887|0.677|58 ms| \\n\\nWe are keen to hear from you on whether your concerns have been addressed.\"}", "{\"title\": \"Response to Reviewer SvpM (Part 1)\", \"comment\": \">While the paper does a thorough job of exploring the design space or structured ablation studies, it does not provide a principled explanation of why certain configurations (or subcomponents) yield performance improvements. Adding a more in-depth empirical and theoretical analysis can address this issue. This can not only strengthen the contributions but also offer a more principled foundation for future research in this area.\\n\\nWe appreciate this insightful feedback and acknowledge the importance of providing a more principled explanation of the performance improvements observed in the various configurations. In response, we have made an effort to distill the key empirical observations and present them succinctly. Additionally, we highlight the key takeaways from these analyses. A detailed discussion on the design choice axes is presented separately below.\\n\\n### *Relevance Distance: Choice of Agg-\\\\* v/s Set Aligned*\\nWe present a comparison between set alignment-based relevance distance and aggregated embedding-based distance. Table 1 shows the MAP for Node-Late interaction models, while Table 2 presents the MAP for Edge-Late interaction models. The Agg-* variants rely on aggregated graph-level embeddings, which are subsequently processed through either an NTN module, a hinge scoring layer, or a general MLP. 
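As a concrete illustration of the hinge scoring just mentioned: it is an asymmetric containment penalty that vanishes exactly when the query vector is dominated element-wise by the corpus vector. A minimal sketch (illustrative only; the actual layer operates on learned, batched embeddings):

```python
def hinge_distance(h_q, h_c):
    """Asymmetric relevance distance: sums how much the query vector h_q
    exceeds the corpus vector h_c, coordinate by coordinate.  It is 0
    exactly when h_q <= h_c element-wise (query "contained" in corpus)."""
    return sum(max(q - c, 0.0) for q, c in zip(h_q, h_c))

# On binary indicator vectors this is the subset test: {a} vs {a, b} over {a, b, c}.
d_sub = hinge_distance([1.0, 0.0, 0.0], [1.0, 1.0, 0.0])   # X subset of Y -> 0.0
d_not = hinge_distance([1.0, 1.0, 0.0], [1.0, 0.0, 0.0])   # Y not subset of X -> 1.0
# The same formula applies unchanged to real-valued embeddings.
d_real = hinge_distance([0.2, 0.9], [0.5, 0.4])            # 0.0 + 0.5 -> 0.5
```

The asymmetry mirrors the asymmetry of the subgraph relation itself, which is why it acts as a useful inductive bias here.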
The interaction structure and non-linearities employed by the set alignment relevance distance are specified in brackets. In most cases, the set alignment relevance distance outperforms the aggregated distance variants.\\n\\n**Table 1:** MAP for **Node-Late** interaction models for the first three datasets. \\n| Rel. Dist. (Structure, Non-linearity) | AIDS | Mutag | FM |\\n|------------------------------------------|-------|-------|------|\\n| Agg-hinge | 0.557 | 0.594 | 0.636|\\n| Agg-MLP | 0.548 | 0.64 | 0.674|\\n| Agg-NTN | 0.576 | **0.708** | 0.744|\\n| Set align. (Injective, Hinge) | 0.633 | 0.647 | 0.72 |\\n| Set align. (Injective, Neural) | **0.664** | 0.69 | **0.758**|\\n\\n\\n**Table 2:** MAP for **Edge-Late** interaction models for the first three datasets.\\n| Rel. Dist. (Structure, Non-linearity) | AIDS | Mutag | FM |\\n|-------------------------------------------|-------|-------|------|\\n| Agg-hinge | 0.635 | 0.694 | 0.712|\\n| Agg-MLP | 0.607 | 0.63 | 0.727|\\n| Agg-NTN | 0.66 | 0.718 | 0.759|\\n| Set align. (Injective, Hinge) | **0.712** | 0.721 | 0.793|\\n| Set align. (Injective, Neural) | 0.704 | **0.733** | **0.782**|\\n\\n**Takeaways and Design Tips**\\n\\nCompressing the entire graph into a low-dimensional vector can result in information loss. Therefore, comparing the node embeddings at the set-level granularity yields better performance than their single-vector representations. 
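The set-level comparison favored above can be made concrete with a toy sketch: find the cheapest injective mapping of query node embeddings onto corpus node embeddings under an asymmetric hinge cost. This is illustrative only (brute force over permutations for tiny sets; the actual models use differentiable relaxations of this search):

```python
from itertools import permutations

def pair_cost(q_vec, c_vec):
    # Asymmetric hinge: zero when the query vector is dominated by the corpus vector.
    return sum(max(q - c, 0.0) for q, c in zip(q_vec, c_vec))

def set_alignment_distance(query_nodes, corpus_nodes):
    """Cost of the best injective mapping of query nodes onto corpus nodes,
    found by brute force over injective assignments (toy sizes only)."""
    best = float("inf")
    for perm in permutations(range(len(corpus_nodes)), len(query_nodes)):
        cost = sum(pair_cost(query_nodes[i], corpus_nodes[j])
                   for i, j in enumerate(perm))
        best = min(best, cost)
    return best

# Query with 2 node embeddings, corpus with 3; the query "fits inside" the corpus.
q = [[1.0, 0.0], [0.0, 1.0]]
c = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
d = set_alignment_distance(q, c)   # 0.0: each query node maps to a dominating corpus node
```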
Similar observations have been reported in other domains.\n\nIn knowledge graph alignment, encoding the neighborhood of an entity as a set and then aligning two such sets has been found to perform better than comparing compressed single-vector representations of each entity [BERT-INT].\n\nIn textual entailment [Lai et al., 2017; Chen et al., 2020; Bevan et al., 2023], allowing cross-interaction between all tokens of the two sentences is generally more effective than compressing each sentence into a single vector and comparing them.\n\nOur work reconfirms this intuition for subgraph retrieval as well.\n\n\n[BERT-INT] X. Tang, J. Zhang, B. Chen, Y. Yang, H. Chen, and C. Li. BERT-INT: A BERT-based interaction model for knowledge graph alignment. interactions, 100:e1, 2020.\n\n[Lai et al., 2017] A. Lai and J. Hockenmaier. Learning to predict denotational probabilities for modeling entailment.\n\n[Chen et al., 2020] T. Chen, Z. Jiang, A. Poliak, K. Sakaguchi, and B. Van Durme. Uncertain natural language inference.\n\n[Bevan et al., 2023] R. Bevan, O. Turbitt, and M. Aboshokor. MDC at SemEval-2023 Task 7: Fine-tuning transformers for textual entailment prediction and evidence retrieval in clinical trials.\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 6)\", \"comment\": \"> The use of the notations $\\\\omega$ and $\\\\eta$ is not consistent between L124 and Figure 1.\\n\\nThank you for your careful reading and for pointing out this inconsistency. We have addressed this issue and corrected the notation. Additionally, based on this feedback and comments from other reviewers, we have significantly overhauled the figure and revised the caption to provide clearer explanations regarding the sequence of the design choices and the complexity of the implied design space through various combinations of the design choices. \\n\\n\\n> Q1. Does the use of hinge distance make sense for set alignment and aggregated-hinge (L216-236)? 
In particular, the authors justify the use of hinge distance for adjacency matrices, but the adjacency matrices are binary while node/aggregated node embeddings are just real-valued.\n\nThis is a very important question and we are glad to include further intuition, which comes from [Bloom filters](https://en.wikipedia.org/wiki/Bloom_filter). Given a universe of elements $U$ and $X, Y \subseteq U$, we can use bit vectors in $\\{ 0,1 \\}^{|U|}$ to represent $X,Y$, let's call them $\vec{X}, \vec{Y}$. The direct test for $X\subseteq Y$ is easily seen as the test $\vec{X} \le \vec{Y}$ (elementwise). A Bloom filter may be used to compress these bit-vectors into much shorter ones in $\\{0,1\\}^M$ where $M\ll|U|$; however, the test for $X\subseteq Y$ remains the same. Bloom filters have long been replaced by Learnt Bloom Filters (LBFs) and various forms of set transformers. Our work makes the natural progression from set encoders to graph encoders.\n\n--------\n**References**\n\n[Davitkova et al., 2024] [Learning over Sets for Databases](https://openproceedings.org/2024/conf/edbt/paper-29.pdf). *Extending Database Technology (EDBT), 2024*\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your encouraging comments and increasing the score.\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 4)\", \"comment\": \"**Variation in performance with latent dataset characteristics**\\n\\nTo study how dataset characteristics affect our proposed networks, we create splits of our datasets and perform inference on them. In particular, we select a metric and divide the corpus set of the AIDS dataset sorted by this metric into 4 contiguous splits. Inference is performed with all query graphs using the model trained on AIDS, on each of the corpus splits, and the MAP scores are reported. 
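For reference, the MAP values reported below follow the standard retrieval definition: the average precision of each query's ranked corpus list, averaged over queries. A minimal sketch, with illustrative function names (not from our codebase):

```python
def average_precision(relevance):
    """AP of one ranked corpus list; `relevance` holds 1 (relevant) or 0
    per corpus graph, in the order ranked by the model's score."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank        # precision@rank, accumulated at each hit
    return total / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Two toy queries: AP = (1/1 + 2/3)/2 and (1/2 + 2/3)/2, so MAP = 17/24.
map_score = mean_average_precision([[1, 0, 1, 0], [0, 1, 1, 0]])
```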
Subset 0 represents the split with graphs pertaining to the lowest values for the corresponding metric, while Subset 3 represents that with the highest values.\n\nIn the tables below, **boldface** represents the subset with the highest MAP score for the corresponding row.\n\n(1) **Metric = Node Count**\n\nNode count is one measure of graph size and is extremely relevant to the problem at hand, since a smaller node count requires a larger number of padding nodes to allow batched processing, which can be detrimental to performance. In Table 7, we display dataset statistics for each split. The statistics are shown for the corpus graphs only, since the query graphs used for inference are identical across all splits.\n\n**Table 7:** Dataset statistics for splits of the AIDS dataset based on the Node Count metric.\n|Dataset|Avg. Node Count (Corpus)|\n|-|-|\n|AIDS (original)|18.50|\n|Subset 0|17.00|\n|Subset 1|17.96|\n|Subset 2|19.02|\n|Subset 3|20.00|\n\n**Observation:** The effect of padding nodes is clearly noticeable for both node-early (Table 8) and edge-early (Table 9) models. Subset 0 (minimum number of nodes) corresponds to the smallest MAP score; MAP consistently increases with node count and peaks for Subset 3.\n\n**Table 8:** MAP for **Node-Early** interaction models for splits of the dataset with increasing **node count**\n|Rel. 
Dist.|Structure|Non-linearity|AIDS|Subset 0|Subset 1|Subset 2|Subset 3|\\n|-|-|-|-|-|-|-|-|\\n|Agg-Hinge|Non-inj.|Dot Product|0.609|0.531|0.586|0.623|**0.670**|\\n|Agg-Hinge|Non-inj.|Hinge|0.726|0.687|0.714|0.724|**0.758**|\\n|Agg-Hinge|Non-inj.|Neural|0.598|0.511|0.577|0.603|**0.660**|\\n|Agg-Hinge|Inj.|Dot Product|0.64|0.594|0.629|0.652|**0.687**|\\n|Agg-Hinge|Inj.|Hinge|0.662|0.596|0.660|0.671|**0.701**|\\n|Agg-Hinge|Inj.|Neural|0.614|0.526|0.580|0.638|**0.674**|\\n|Agg-MLP|Non-inj.|Dot Product|0.63|0.543|0.616|0.637|**0.691**|\\n|Agg-MLP|Non-inj.|Hinge|0.637|0.567|0.620|0.653|**0.685**|\\n|Agg-MLP|Non-inj.|Neural|0.629|0.540|0.603|0.634|**0.689**|\\n|Agg-MLP|Inj.|Dot Product|0.658|0.614|0.658|0.674|**0.693**|\\n|Agg-MLP|Inj.|Hinge|0.683|0.660|0.688|0.685|**0.711**|\\n|Agg-MLP|Inj.|Neural|0.629|0.566|0.607|0.634|**0.683**|\\n|Agg-NTN|Non-inj.|Dot Product|0.669|0.579|0.643|0.683|**0.721**|\\n|Agg-NTN|Non-inj.|Hinge|0.686|0.614|0.643|0.692|**0.745**|\\n|Agg-NTN|Non-inj.|Neural|0.635|0.543|0.603|0.642|**0.707**|\\n|Agg-NTN|Inj.|Dot Product|0.721|0.665|0.714|0.726|**0.757**|\\n|Agg-NTN|Inj.|Hinge|0.743|0.684|0.749|0.757|**0.767**|\\n|Agg-NTN|Inj.|Neural|0.667|0.593|0.639|0.675|**0.721**|\\n|Set align|Non-inj.|Dot Product|0.608|0.533|0.594|0.623|**0.662**|\\n|Set align|Non-inj.|Hinge|0.676|0.633|0.672|0.697|**0.727**|\\n|Set align|Non-inj.|Neural|0.593|0.536|0.576|0.608|**0.644**|\\n|Set align|Inj.|Dot Product|0.71|0.678|0.704|0.709|**0.744**|\\n|Set align|Inj.|Hinge|0.734|0.723|0.746|0.732|**0.747**|\\n|Set align|Inj.|Neural|0.69|0.652|0.676|0.703|**0.723**|\\n\\n**Table 9:** MAP for **Edge-Early** interaction models for splits of the dataset with increasing **node count**\\n|Rel. 
Dist.|Structure|Non-linearity|AIDS|Subset 0|Subset 1|Subset 2|Subset 3|\\n|-|-|-|-|-|-|-|-|\\n|Agg-Hinge|Non-inj.|Dot Product|0.7|0.633|0.666|0.706|**0.751**|\\n|Agg-Hinge|Non-inj.|Hinge|0.763|0.736|0.759|0.771|**0.790**|\\n|Agg-Hinge|Non-inj.|Neural|0.677|0.604|0.643|0.686|**0.730**|\\n|Agg-Hinge|Inj.|Dot Product|0.755|0.706|0.751|0.758|**0.784**|\\n|Agg-Hinge|Inj.|Hinge|0.758|0.716|0.753|0.765|**0.791**|\\n|Agg-Hinge|Inj.|Neural|0.68|0.602|0.660|0.688|**0.731**|\\n|Agg-MLP|Non-inj.|Dot Product|0.677|0.584|0.649|0.683|**0.732**|\\n|Agg-MLP|Non-inj.|Hinge|0.748|0.699|0.736|0.759|**0.786**|\\n|Agg-MLP|Non-inj.|Neural|0.662|0.565|0.636|0.676|**0.721**|\\n|Agg-MLP|Inj.|Dot Product|0.758|0.695|0.742|0.765|**0.793**|\\n|Agg-MLP|Inj.|Hinge|0.789|0.754|0.791|0.790|**0.813**|\\n|Agg-MLP|Inj.|Neural|0.703|0.635|0.687|0.714|**0.742**|\\n|Agg-NTN|Non-inj.|Dot Product|0.71|0.660|0.696|0.704|**0.752**|\\n|Agg-NTN|Non-inj.|Hinge|0.79|0.765|0.783|0.796|**0.814**|\\n|Agg-NTN|Non-inj.|Neural|0.701|0.630|0.677|0.704|**0.755**|\\n|Agg-NTN|Inj.|Dot Product|0.74|0.676|0.726|0.744|**0.777**|\\n|Agg-NTN|Inj.|Hinge|0.76|0.715|0.750|0.762|**0.794**|\\n|Agg-NTN|Inj.|Neural|0.689|0.615|0.666|0.694|**0.741**|\\n|Set align|Non-inj.|Dot Product|0.715|0.612|0.698|0.710|**0.767**|\\n|Set align|Non-inj.|Hinge|0.783|0.725|0.764|0.786|**0.816**|\\n|Set align|Non-inj.|Neural|0.708|0.615|0.680|0.722|**0.755**|\\n|Set align|Inj.|Dot Product|0.798|0.760|0.781|0.807|**0.820**|\\n|Set align|Inj.|Hinge|0.817|0.798|0.813|0.822|**0.830**|\\n|Set align|Inj.|Neural|0.725|0.653|0.696|0.731|**0.756**|\"}", "{\"metareview\": \"The paper explores the design space for neural subgraph matching, noting that existing methods are often limited to a narrow set of design choices. To address this, the authors propose a comprehensive framework that includes key architectural decisions: relevance distance, interaction stages, interaction structures, interaction non-linearity, and interaction granularity. 
This work will bring clarity, accessibility, and clear guidance to practitioners, ensuring that the field progresses with deeper understanding and sound methodology rather than mere chaotic exploration.\", \"additional_comments_on_reviewer_discussion\": \"Authors have provided sufficient evidence to clarify questions from reviewers.\"}", "{\"title\": \"Response to Reviewer VJh8 (Part 3)\", \"comment\": \"> \\u201cChallenging the widely held expectation that early interaction is more powerful, IsoNet\\u2019s late interaction approach outperforms GMN, even when GMN\\u2019s final score computation is made asymmetric\\u201d \\u2013 It is a bit unclear what it means by \\u201casymmetric\\u201d, especially due to the fact this is written at the beginning of intro and the audience may not be familiar with this area of research.\\n\\nThanks for the suggestion \\u2015 indeed the reference to \\\"asymmetric\\\" was too early and cryptic; we have removed it.\\n\\n> Some of the paper writing can be made clearer. It may be useful to introduce/outline the summarized findings at the end of the introduction section.\\n\\nWe have added the following paragraph at the end of the intro. In addition, we have clearly summarized the effect of each design axis at the end of the respective sections. These are highlighted in the PDF for convenience.\\n\\n**\\ud83d\\udca1 Key takeaways and design tips** Our systematic navigation of the design space resolves hitherto-unexplained observations and provides reliable guidelines for future methods.\\n1. We conclusively explain (late-interaction) IsoNet\\u2019s earlier-observed superiority over (early-interaction) GMN. If GMN\\u2019s early interaction is supplemented with any of set alignment, injective structure, hinge nonlinearity, or edge-based interaction, it can readily outperform IsoNet.\\n2. 
These five design principles are vital, and their combination unveils a novel graph retrieval model that surpasses all existing methods.\\n3. Shifting from late to early interaction may increase computational cost, but compensates for the limitations of relevance distance defined using aggregated single-vector embeddings.\\n\\n> How is this work related to neural architecture search (NAS)? It might be worth comparing/surveying works in NAS, e.g. https://arxiv.org/pdf/2403.05064v1, https://openreview.net/pdf?id=GcM7qfl5zY, etc. Referencing such works can position this work better within the broader literature and bring out further discussion.\\n\\nWe thank the reviewer for referring us to the relevant domain of Neural Architecture Search. Upon further study, we have decided to add the following discussion to our Related Works section and cite the two papers as references.\\n\\nWithin specific families of graph encoder networks, such as GNNs/GATs [NAS 1] or graph transformers [NAS 2], researchers have proposed super-networks to explore the parametric space of encoder networks, using a bi-level optimization framework. We have discussed and referenced the papers in our revised version. Thanks for bringing forward these related works. It would be of future interest to investigate if NAS methods can be extended to subgraph search and other combinatorial graph problems, to automatically explore network design spaces.\\n\\n[NAS 1, Zhang et al.] [Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision](https://arxiv.org/pdf/2403.05064v1). *Neural Information Processing Systems, 2023*\\n\\n[NAS 2, Zhang et al.] [AutoGT: Automated Graph Transformer Architecture Search](https://openreview.net/pdf?id=GcM7qfl5zY). 
*International Conference on Learning Representations, 2023*\"}", "{\"title\": \"Further discussion based on suggestions (Part 1)\", \"comment\": \"> *Beyond educated guesses, what are the fundamental reasons why a certain combination performs so well*\\n\\nWe answer this question directly from the following theoretical underpinning. Specifically, we describe the combinatorial formulation of subgraph matching and then show why certain combination works so well and others don't.\", \"first_consider_the_combinatorial_cost\": \"$$\\n\\\\begin{align}\\n\\\\text{dist}(G _q , G _c ) = \\\\min _{P}\\n\\\\sum _{u,v} [\\\\big(A _q-P A _c P^{\\\\top}\\\\big) _+] [u,v]---(a) \\\\\\\\\\\\\\\\\\nP \\\\text{ is a permutation matrix}---(b)\\n\\\\end{align}\\n$$\\nThe standard way to minimize it is a Projected gradient descent approach, where P is updated as\\n$$ P _{k} \\\\leftarrow \\\\text{argmin} _{P} \\\\textrm{Trace}\\\\left(P^T\\\\nabla _{P}\\\\ \\\\text{dist}(G _q , G _c ) \\\\big| _{P = P _{k-1}}\\\\right) \\n--- (c)$$\\n\\n---\\n\\n### Relaxation of (a--c)\\n\\nA neural approx should ideally relax all three steps (a--c), keeping each relaxation close to the original combinatorial equation, to get the highest benefit from perspective of inductive bias, which will allow more interpretable and accurate neural model. We will now show relaxation of *all* of them naturally leads to the best combination of our framework-- however, almost all other deviations do not follow these relaxation principles meticulously which is why, they show suboptimal performance.\\n\\n**R1: Relaxing $\\\\text{dist}(G _q , G _c )$ in Eq (a):** As mentioned in L204-207, Eq (a) aims to solve a quadratic assignment problem (QAP) which is NP-Hard. 
Hence, we convert it into a more tractable linear assignment problem: \\n\\n$$\\min _{P} \\sum _{u,i} [\\big(H _q-P H _c \\big) _+] [u,i] \\ \\ (\\text{\\textcolor{blue}{ This naturally gives a set-alignment-based distance}})---- (a1).$$ \\n\\n**R2: Relaxing $\\text{dist}(G _q , G _c )$ in Eq (b):** Sinkhorn iterations give natural extensions of the permutation matrices, $\\text{\\textcolor{blue}{which is an injective mapping}}$. \\n\\nNow, how can we solve Eq (a1) using R1 and R2? As we mentioned, the derivation in Appendix D.1 (repeated as follows) **proves that hinge is the correct nonlinearity**.\", \"first_note_that\": \"$$\\min _{P} \\sum _{u,i} [\\big(H _q-P H _c \\big) _+] [u,i] = \\min _{P} \\sum _{u,v,i} \\big(H _q[u,i]- H _c [v,i]\\big) _+ P[u,v] \\quad \\textbf{ (since P is a permutation matrix)} \n$$ Given that B is the set of doubly stochastic matrices and $\\epsilon \\to 0$, Eq. (a1) directly reduces to:\n$$ \\min _{P\\in B} \\sum _{u,v} \\underbrace{\\sum _{i} \\big(H _q[u,i]- H _c [v,i]\\big) _+} _{C[u,v]} P[u,v] + \\epsilon \\sum _{u,v} P[u,v]\\log P[u,v]---(a2)$$\nHence, P is $Z_T$ where $Z _0 = \\exp(-C / \\tau)$ and $Z_t$ is computed using Sinkhorn iterations:\n$$\nZ _{t+1}[u, u'] = \\frac{Z _t'[u, u']}{\\sum _{v' \\in [N]} Z _t'[u, v']}, \\quad\n\\text{where} \\quad Z _t'[u, u'] = \\frac{Z _t[u, u']}{\\sum _{v \\in [N]} Z _t[v, u']}, \\quad \\text{for all } (u, u') \n$$\nEq (a2) shows that the $\\text{\\textcolor{blue}{nonlinearity in C must be hinge}}$. This is a simple yet novel quick-and-dirty trick, which significantly boosts the performance of any model. Keeping everything else suboptimal, if we use it in GMN, it outperforms IsoNet. This provides a principled explanation of why hinge is better than the dot product and neural methods. \\n\\n**R3: Relaxing Eq (c):** One can always argue that the update of $P$ is already being performed by the above updates on Z. But, if you look carefully, Eq. 
c involves $\\\\nabla_{P}\\\\text{dist}(G _q , G _c ) \\\\big| _{P = P _{k-1}}$, which indicates that in principle, C should depend on $P$. However, since the approximation (a1) is linear, C becomes independent of P. $\\\\text{\\\\textcolor{blue}{To get past this crude approximation, one should make C dependent on P, which implies that $H _q$ and $H _c$ should be }}$ $\\\\text{\\\\textcolor{blue}{dependent on $P$, which again implies that $H_q$ and $H_c$ should become dependent on each other, giving to an early }}$ $\\\\text{\\\\textcolor{blue}{interaction model. }}$\\n\\nThis clearly shows why early interaction model, hinge nonlinearity, injective mapping, set alignment work well.\"}", "{\"summary\": \"This paper studies the problem of neural network based subgraph matching, focusing on several model/system design choices, e.g. early vs late interaction, trainable vs fixed non-linearity, etc. A set of guidelines and best practices are outlined which are the key contributions of this work.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The overall problem formulation is interesting and timely considering the various design choices in subgraph matching literature.\\n2. Ample experimental study is performed adding credibility and rigorousity to the findings and guidelines.\\n3. Error bar is shown for the results, e.g. in Figure 2.\", \"weaknesses\": \"1. It seems all the 10 datasets are relatively small, e.g. up to 50 nodes. I wonder if there is a reason for not choosing much larger graphs, e.g. graphs up to 1M nodes as the target graph to be searched for (while using a small query graph of ~50 nodes).\\n2. Some of the paper writing can be made more clearer. E.g. 
\\u201cChallenging the widely\nheld expectation that early interaction is more powerful, IsoNet\\u2019s late interaction approach outperforms GMN, even when GMN\\u2019s final score computation is made asymmetric\\u201d \\u2013 It is a bit unclear what it means by \\u201casymmetric\\u201d, especially due to the fact that this is written at the beginning of the intro and the audience may not be familiar with this area of research. \\n3. It may be useful to introduce/outline the summarized findings at the end of the introduction section.\", \"questions\": \"1. How is this work related to neural architecture search (NAS)? It might be worth comparing/surveying works in NAS, e.g. https://arxiv.org/pdf/2403.05064v1, https://openreview.net/pdf?id=GcM7qfl5zY, etc. Referencing such works can position this work better within the broader literature and bring out further discussion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer SvpM (Part 4)\", \"comment\": \"> Although the best combination shows performance benefits, its novelty is limited. The only differences between the best model and the existing state-of-the-art (IsoNet) are the interaction stage and non-linearity (which have been proposed in existing studies). Without providing principled rationales (as the first bullet), this could make the contributions appear incremental.\\n\\n\\nOur key contribution is not providing another new model. We rather identify the key design choices and, while doing that, we demystify many existing works, as reviewer 6vg7 also pointed out. While there are so many excellent papers, there is a lack of study of why some methods work and some methods don't. 
We identify the key design choices involving the representation of graphs and the interaction between their representations, and then systematically explore the subtle interplay between these design choices.\\n\\nOur systematic navigation of the design space resolves hitherto-unexplained observations and provides reliable guidelines for future methods. (1) We conclusively explain (late-interaction) IsoNet\\u2019s earlier-observed superiority over (early-interaction) GMN. If GMN\\u2019s early interaction is supplemented with any of set alignment, injective structure, hinge nonlinearity, or edge-based interaction, it can readily outperform IsoNet. (2) These five design principles are vital, and their combination unveils a novel graph retrieval model that surpasses all existing methods. (3) Shifting from late to early interaction may increase computational cost, but compensates for the limitations of relevance distance defined using aggregated single-vector embeddings.\\n\\n> The dense presentation makes it challenging for readers to immediately see the paper\\u2019s structure and key subcomponents. I think the authors should present a clearer organization and structured outline. In particular, Figure 1 is hard to interpret. I think that the authors can provide more human-readable visualization. Plus, please put some margin to the bottom of Figure 1.\\n\\nWe appreciate the feedback and have made significant improvements to enhance clarity. Figure 1 has been revised to provide a more readable and intuitive visualization, with an updated caption for better interpretation. We have also restructured the treatment of each design axis and included a dedicated section summarizing the key takeaways.\"}", "{\"summary\": \"The paper presents an exploration of the design space for neural subgraph matching, where subcomponents have been presented in existing research. 
The authors observe that prior methods for neural graph matching have been narrowly focused, with most methods falling into limited design categories. To address this, the authors organize a comprehensive framework including key architectural choices: relevance distance, interaction stages, interaction structures, interaction non-linearity, and interaction granularity. Extensive experiments demonstrate that unexplored combinations within this design space can lead to performance improvement.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors show a detailed and thorough literature survey on neural subgraph matching and introduce general subcomponents for subgraph matching models.\", \"The authors conduct extensive experiments on all possible combinations within design spaces.\", \"The authors discover a new combination of subcomponents that outperforms the existing state-of-the-art model.\"], \"weaknesses\": [\"While the paper does a thorough job of exploring the design space or structured ablation studies, it does not provide a principled explanation of why certain configurations (or subcomponents) yield performance improvements. Adding a more in-depth empirical and theoretical analysis can address this issue. This can not only strengthen the contributions but also offer a more principled foundation for future research in this area.\", \"Although the best combination shows performance benefits, its novelty is limited. The only differences between the best model and the existing state-of-the-art (IsoNet) are the interaction stage and non-linearity (which have been proposed in existing studies). Without providing principled rationales (as the first bullet), this could make the contributions appear incremental.\", \"The dense presentation makes it challenging for readers to immediately see the paper\\u2019s structure and key subcomponents. 
I think the authors should present a clearer organization and structured outline. In particular, Figure 1 is hard to interpret. I think that the authors can provide more human-readable visualization. Plus, please put some margin to the bottom of Figure 1.\"], \"questions\": \".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6vg7 (Part 5)\", \"comment\": \"(2) **Metric = Edge Count**\\n\\nEdge count is another measure of graph size. Since the logical unit of computation in edge-granularity models is an edge, we may expect to see similar trends as shown by node-granularity models with the Node Count metric (Table 8).\\n\\nIn Table 10, we display dataset statistics for each split.\\n\\n**Table 10:** Dataset statistics for splits of the AIDS dataset based on the Edge Count metric.\\n|Datset|Avg. Edge Count (Corpus)|\\n|-|-|\\n|AIDS (original)|18.87|\\n|Subset 0|16.93|\\n|Subset 1|18.32|\\n|Subset 2|19.43|\\n|Subset 3|20.78|\\n\\n**Observation:** The effect of increasing Edge Count, while quite prominent, is not as monotonic as that seen for Node Count. In particular, for models with **Injective** interaction structure or a **Set alignment** based relevance distance, Subset 0 often shows the best performance, which indicates that usage of injective mapping and set-alignment is more immune towards the presence of padding nodes/edges.\\n\\n**Table 11:** MAP for **Node-Early** interaction models for splits of the dataset with increasing **edge count**.\\n|Rel. 
Dist.|Structure|Non-linearity|AIDS|Subset 0|Subset 1|Subset 2|Subset 3|\\n|-|-|-|-|-|-|-|-|\\n|Agg-Hinge|Non-injective|Dot Product|0.609|0.555|0.574|0.617|**0.665**|\\n|Agg-Hinge|Non-injective|Hinge|0.726|**0.763**|0.703|0.738|0.739|\\n|Agg-Hinge|Non-injective|Neural|0.598|0.519|0.547|0.606|**0.659**|\\n|Agg-Hinge|Injective|Dot Product|0.64|0.655|0.627|0.658|**0.672**|\\n|Agg-Hinge|Injective|Hinge|0.662|0.663|0.643|0.664|**0.693**|\\n|Agg-Hinge|Injective|Neural|0.614|0.535|0.566|0.623|**0.673**|\\n|Agg-MLP|Non-injective|Dot Product|0.63|0.562|0.580|0.641|**0.686**|\\n|Agg-MLP|Non-injective|Hinge|0.637|0.624|0.613|0.651|**0.671**|\\n|Agg-MLP|Non-injective|Neural|0.629|0.564|0.584|0.646|**0.674**|\\n|Agg-MLP|Injective|Dot Product|0.658|**0.687**|0.650|0.670|0.677|\\n|Agg-MLP|Injective|Hinge|0.683|**0.722**|0.685|0.688|0.698|\\n|Agg-MLP|Injective|Neural|0.629|0.612|0.599|0.642|**0.667**|\\n|Agg-NTN|Non-injective|Dot Product|0.669|0.613|0.630|0.672|**0.715**|\\n|Agg-NTN|Non-injective|Hinge|0.686|0.681|0.641|0.690|**0.726**|\\n|Agg-NTN|Non-injective|Neural|0.635|0.572|0.579|0.628|**0.698**|\\n|Agg-NTN|Injective|Dot Product|0.721|0.736|0.697|0.728|**0.745**|\\n|Agg-NTN|Injective|Hinge|0.743|**0.773**|0.736|0.756|0.750|\\n|Agg-NTN|Injective|Neural|0.667|0.643|0.619|0.675|**0.713**|\\n|Set align|Non-injective|Dot Product|0.608|0.574|0.592|0.614|**0.650**|\\n|Set align|Non-injective|Hinge|0.676|**0.706**|0.671|0.693|0.703|\\n|Set align|Non-injective|Neural|0.593|0.566|0.566|0.616|**0.637**|\\n|Set align|Injective|Dot Product|0.71|**0.767**|0.698|0.713|0.721|\\n|Set align|Injective|Hinge|0.734|**0.821**|0.735|0.740|0.724|\\n|Set align|Injective|Neural|0.69|**0.735**|0.680|0.705|0.704|\\n\\n\\n**Table 12:** MAP for **Edge-Early** interaction models for splits of the dataset with increasing **edge count**.\\n|Rel. 
Dist.|Structure|Non-linearity|AIDS|Subset 0|Subset 1|Subset 2|Subset 3|\\n|-|-|-|-|-|-|-|-|\\n|Agg-Hinge|Non-injective|Dot Product|0.7|0.667|0.662|0.708|**0.742**|\\n|Agg-Hinge|Non-injective|Hinge|0.763|**0.784**|0.754|0.773|0.774|\\n|Agg-Hinge|Non-injective|Neural|0.677|0.642|0.617|0.693|**0.723**|\\n|Agg-Hinge|Injective|Dot Product|0.755|0.763|0.742|**0.770**|0.765|\\n|Agg-Hinge|Injective|Hinge|0.758|**0.781**|0.742|0.769|0.775|\\n|Agg-Hinge|Injective|Neural|0.68|0.628|0.645|0.689|**0.724**|\\n|Agg-MLP|Non-injective|Dot Product|0.677|0.619|0.617|0.670|**0.721**|\\n|Agg-MLP|Non-injective|Hinge|0.748|0.767|0.731|0.751|**0.770**|\\n|Agg-MLP|Non-injective|Neural|0.662|0.592|0.619|0.665|**0.715**|\\n|Agg-MLP|Injective|Dot Product|0.758|**0.778**|0.737|0.762|0.773|\\n|Agg-MLP|Injective|Hinge|0.789|**0.837**|0.786|0.797|0.784|\\n|Agg-MLP|Injective|Neural|0.703|0.685|0.676|0.710|**0.736**|\\n|Agg-NTN|Non-injective|Dot Product|0.71|0.713|0.678|0.714|**0.741**|\\n|Agg-NTN|Non-injective|Hinge|0.79|**0.842**|0.786|0.794|0.794|\\n|Agg-NTN|Non-injective|Neural|0.701|0.663|0.671|0.711|**0.744**|\\n|Agg-NTN|Injective|Dot Product|0.74|0.718|0.725|0.749|**0.762**|\\n|Agg-NTN|Injective|Hinge|0.76|**0.785**|0.747|0.759|0.781|\\n|Agg-NTN|Injective|Neural|0.689|0.669|0.642|0.695|**0.731**|\\n|Set align|Non-injective|Dot Product|0.715|0.656|0.664|0.695|**0.755**|\\n|Set align|Non-injective|Hinge|0.783|**0.805**|0.750|0.790|0.786|\\n|Set align|Non-injective|Neural|0.708|0.661|0.670|0.709|**0.749**|\\n|Set align|Injective|Dot Product|0.798|**0.829**|0.782|0.796|0.805|\\n|Set align|Injective|Hinge|0.817|**0.879**|0.812|0.827|0.801|\\n|Set align|Injective|Neural|0.725|0.716|0.677|0.716|**0.749**|\"}" ] }
5pd46nlxc6
Asynchronous Factorization for Multi-Agent Reinforcement Learning
[ "Enrico Marchesini", "Yuchen Xiao", "Priya L. Donti", "Christopher Amato" ]
Value factorization is widely used to design high-quality, scalable multi-agent reinforcement learning algorithms. However, current methods typically assume agents execute synchronous, 1-step *primitive actions*, failing to capture the typical nature of multi-agent systems. In reality, agents are asynchronous and execute *macro-actions*---extended actions of variable and unknown duration---making decisions at different times. This paper proposes value factorization for asynchronous agents. First, we formalize the requirements for consistency between centralized and decentralized macro-action selection, proving they generalize the primitive case. We then propose update schemes to enable factorization architectures to support macro-actions. We evaluate these asynchronous factorization algorithms on standard macro-action benchmarks, showing they scale and perform well on complex coordination tasks where their synchronous counterparts fail.
[ "Macro-actions", "Multi-Agent Reinforcement Learning", "Asynchronous Factorization." ]
Reject
https://openreview.net/pdf?id=5pd46nlxc6
https://openreview.net/forum?id=5pd46nlxc6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcdbysrc6B", "vUGPAMC1pR", "s4tbx8Wh5m", "jsW30zZT8R", "heFCHhLSP4", "gILgyGZPS0", "WtpzhnZWKO", "SQSFb6Ilkz", "MLh0bszyLV", "HJKHcrVEcM", "7NwnXNOrFZ" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1729654290520, 1732475687311, 1737524134488, 1732113501698, 1732264108658, 1732113584081, 1730645999622, 1732458295494, 1734713806753, 1730021728850, 1732113442391 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11608/Reviewer_iAcy" ], [ "ICLR.cc/2025/Conference/Submission11608/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11608/Authors" ], [ "ICLR.cc/2025/Conference/Submission11608/Reviewer_iAcy" ], [ "ICLR.cc/2025/Conference/Submission11608/Authors" ], [ "ICLR.cc/2025/Conference/Submission11608/Reviewer_Qp2t" ], [ "ICLR.cc/2025/Conference/Submission11608/Reviewer_Qp2t" ], [ "ICLR.cc/2025/Conference/Submission11608/Area_Chair_7jPi" ], [ "ICLR.cc/2025/Conference/Submission11608/Reviewer_3SjF" ], [ "ICLR.cc/2025/Conference/Submission11608/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigates an asynchronous multi-agent setting, extending value factorization methods to this context. It introduces Macro-Action-Based IGM and applies relevant value factorization methods within the macro-action framework. The experimental results demonstrate the effectiveness of the proposed methods on macro-action benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses an intriguing and underexplored area by applying value factorization methods to agents utilizing macro-actions.\\n\\n2. The methodology is clearly articulated, supported by figures that enhance understanding.\\n\\n3. 
The experimental results show notable improvements on benchmarks.\", \"weaknesses\": [\"The current method appears straightforward; a deeper exploration is warranted. For instance, the proposed MAC-IGM and MacAdv-IGM lack insights into algorithm design. Since MAC-IGM encompasses a broader class of functions over the primitive IGM, it would be beneficial to investigate more effective factorization or simple modifications tailored for asynchronous tasks rather than merely applying previous methods. As it stands, the current variant of IGM serves mainly as a verification of existing value factorization methods, diminishing its contribution.\", \"Asynchronous updates require further examination:\", \"Why must agents with ongoing macro-actions be masked?\", \"Using 0 as the masked value in D2 seems problematic; if all rewards and Q-functions are negative, more agents with ongoing macro-actions would lead to lower rewards assigned to others.\", \"For QPLEX, since its mixing network also requires actions as input, will this be masked in D1 and D2?\", \"A more principled approach to determining asynchronous updates, potentially aligned with the MAC-IGM definition, should be considered.\", \"The experimental results are unclear:\", \"The comparisons in Section 5.1 between D0, D1, D2, w/ and w/o MS are hard to discern from Figure 6.\", \"The comparison of AVF to prior methods is vague; using the best performance among VDN, QMIX, QPLEX, and all update and macro-state variants seems unfair.\", \"Since the proposed method is value-based, incorporating baselines like IQL with macro-actions would be beneficial.\", \"The figures are unclear, and error bars are missing. 
Utilizing vector graphics instead of raster graphics is recommended, as the latter become blurry when enlarged.\", \"How many seeds were used for each experiment?\"], \"questions\": \"Refer to the weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer iAcy's Comment\", \"comment\": \"Thank you for your thoughtful follow-up questions and clarifications. We appreciate your engagement in this discussion and are happy to address the additional concerns you raised:\\n\\n> However, conditional value functions should already be known in macro-action settings and do not appear to be a novel contribution\\u2026\\n\\nWe acknowledge that conditional value functions have been explored in prior work on fully centralized asynchronous macro-action methods, as detailed in the preliminaries (Sec 2.2.1). Our paper does not claim novelty regarding their introduction. Instead, we focus on a crucial aspect. The primitive IGM has been essential for proving the theoretical soundness of primitive factorization algorithms, ensuring consistency between local and joint action selection. Similarly, we need the Mac-IGM (and MacAdv-IGM) to evaluate whether asynchronous macro-action factorization algorithms\\u2014including future methods\\u2014are principled. As also recognized by Reviewer Qp2t, employing factorization and asynchronous macro-action learning has not been explored before, and we believe our additional insights on the theoretical framework have further improved the clarity of our work. 
Additionally, our paper bridges a gap in the literature by demonstrating the relationship between these asynchronous and primitive versions.\\n\\n> existing value factorization methods could be straightforwardly adapted into a macro-action version that satisfies Mac-IGM.\\n\\nWhile straightforward adaptations of existing methods may satisfy Mac-IGM, we believe this perspective overlooks the other technical contributions of our work. Our proposed macro-state representation and update schemes (D1/D2) focus on the unique challenges of asynchronous factorization. These contributions leverage the asynchronous nature of the problem, resulting in higher performance than unconditioned asynchronous factorization methods and straightforward extensions of existing value factorization approaches. Our evaluations show that simple adaptations rarely outperform AVF algorithms that incorporate these advancements.\\n\\n> The 0 mask ensures that the mixer ignores the value of agents with ongoing macro-actions\\n\\nWe appreciate this clarification and partially agree. To address this, we have updated our manuscript (see footnotes 3 and 4) to highlight scenarios where this masking strategy may encounter practical challenges. Specifically, incorrect estimations could arise if the mixing architecture does not consider the joint macro-history as input, which provides sufficient context to address such an issue. Nevertheless, we note that value masking has demonstrated higher performance than other update schemes in some of the asynchronous macro-action benchmark domains.\\n\\n> clearly identify which method generally performs best and whether any method consistently outperforms all baselines.\\n\\nAcross tasks, methods utilizing the joint macro-state (MS) generally achieve the highest performance, regardless of the update scheme (except AVF-VDN, which does not incorporate this additional information). 
However, the effectiveness of specific update schemes varies by task, and no single approach consistently outperforms the rest across all settings. Interestingly, AVF-QMIX often outperforms the more complex AVF-QPLEX, likely because its simpler architecture is better suited to the shorter horizons typical of macro-action domains.\\nThe lack of a uniquely better performing algorithm highlights an exciting opportunity for future research as developing novel asynchronous macro-action methods that leverage factorization could further advance the field. Our work provides a foundational framework for this exploration, addressing key challenges unique to asynchronous macro-action-based agents and paving the way for new principled methods.\\n\\n> I could not find the performance curves of the baseline methods.\\n\\nThank you for bringing this to our attention! We have added the training curves for macro-action-based baselines to the revised manuscript.\\n\\n\\nWe hope our new revisions clarify your concerns. Please feel free to reach out if you have further questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer 3SjF\", \"comment\": \"We appreciate your review and the opportunity to improve our work. We have carefully revised our paper to address your concerns and provide further clarity on the motivation and contributions of the work. Below, we respond to each of your points, highlighting the revisions and providing references to the updated manuscript.\\n\\n*Presentation*\\n\\nWe acknowledge that the clarity of the figures in our original submission could have been improved. To address this, we have revised all figures in both the main paper and the supplemental material. These updates include increased label sizes, improved plot quality, enhanced readability and structure. 
Additionally, we replaced the original bar plots, which may have caused your uncertainty about result quality, with a tabular format for clearer comparison. Full training runs remain available in the appendices.\\n\\n*Significance*\\n\\n> I am sceptical about the optimality claim regarding Macro-Dec-POMDPs\\n\\nThank you for pointing this out. To clarify, we claim optimality with respect to the given macro-actions. This is similar to the primitive action case where algorithms can be optimal with respect to the given (primitive) actions. As discussed in most of the works we refer to in Section 2.2 and 2.2.1, the trade-off between optimality and empirical performance in Macro-Dec-POMDP-based algorithms is well established in the literature. \\n\\n> the most significant technical contribution is only to exclude the agent utilities of ongoing macro-actions\\n\\nWe respectfully disagree with this interpretation of our contributions. While it is true that our work introduces novel detaching schemes for handling macro-actions, it also:\\n- Identifies and addresses challenges in using the state for asynchronous factorization architectures.\\n- Formalizes the underlying theory for Macro-Dec-POMDP factorization methods.\\n- Bridges this theory with practical algorithms. We encourage the reviewer to refer to our detailed response to Reviewer Qp2t for a comprehensive discussion of this point.\\n\\n> The experimental is mainly a self-comparison\\u2026\\n\\nWe would like to clarify that our experimental evaluation goes beyond self-comparison. 
In addition to extensive ablation studies that validate the design and technical contributions of our asynchronous value factorization (AVF) algorithms, we compare against existing asynchronous MARL baselines, including both value-based and policy-gradient methods (e.g., Mac-IAICC, Dec-MADDRQN, and Cen-MADDRQN).\\nRegarding your suggestion to compare with [1, 2], while we appreciate the recommendation, we note that these works are not closely related to our framework. As also discussed in [1] and Xiao et al. (2022), these studies do not address asynchronicity, as they assume all agents execute macro-actions of the same duration. Thus, they are more aligned with synchronous MARL approaches (Section 2.2.1).\\nNonetheless, to address your request for broader comparisons, we have included results for HAVEN [2] in the updated manuscript (Table 2). Using the same hyperparameters as AVF-QMIX-D0 and the best macro-action duration identified in a preliminary sweep (5 steps), HAVEN demonstrates lower performance compared to all asynchronous MARL baselines and AVF algorithms. Please refer to Section 5.2 for further details.\\n\\n*Limitation*\\n\\n> The paper assumes macro-actions to be pre-defined but realistically this does not seem feasible to me\\n\\nWe agree that the assumption of predefined macro-actions represents a limitation of our work, as we also discuss in the paper. However, we note that in RL, an action space is assumed to be given even in the primitive case. Macro-actions are more general, allowing us to consider actions with different durations\\u2014a standard assumption in all previously published asynchronous MARL studies (in fact, we adopt existing benchmark environments that provide these predefined macro-actions). Asynchronous settings are common in the real world but have been rarely studied in the literature. For this reason, principled methods are needed for the MacDec-POMDP case before extending them to learn macro-actions. 
Hence, as highlighted in our paper, learning asynchronous macro-actions is indeed an exciting direction for future work. However, our primary focus is to establish the theoretical framework and scalable asynchronous MARL algorithms, laying the foundation for such advancements.\\n\\nWe are grateful for your constructive feedback, which has helped us improve the clarity and rigor of our work. We hope these clarifications address your concerns and demonstrate the significance of our contributions. Please do not hesitate to reach out with further questions or suggestions.\"}", "{\"comment\": \"Some follow-up questions:\\n\\n1. \\\"The critical link lies in the conditional prediction of macro-action value functions.\\\" However, conditional value functions should already be known in macro-action settings and do not appear to be a novel contribution of this work. From my understanding, with the awareness of conditional value functions, existing value factorization methods could be straightforwardly adapted into a macro-action version that satisfy Mac-IGM. This makes it unclear how useful Mac-IGM itself truly is.\\n\\n2. \\\"The 0 mask ensures that the mixer ignores the value of agents with ongoing macro-actions, preventing these agents from being incorrectly updated during backpropagation.\\\" However, my concern is that when agents with ongoing macro-actions are masked to 0, the value assignment for other agents without ongoing macro-actions will become incorrect.\\n\\n3. It is still difficult for me to clearly identify which method generally performs best and whether any method consistently outperforms all baselines. Additionally, I could not find the performance curves of the baseline methods.\"}", "{\"title\": \"Response to Reviewer iAcy\", \"comment\": \"Thank you for your constructive feedback, which has greatly helped us improve the clarity and presentation of our paper. 
Below, we address the key concerns you raised, with references to the revised manuscript for further details.\\n\\n> the proposed MAC-IGM and MacAdv-IGM lack insights into algorithm design\\u2026\\n\\nWe acknowledge that the connection between Mac-IGM, MacAdv-IGM, and their implementation in asynchronous value factorization (AVF) algorithms could have been better articulated in our original submission.\\nThe critical link lies in the conditional prediction of macro-action value functions, which enables accurate joint Q-value estimation even when agents asynchronously terminate their macro-actions. Without the conditional operator, Q-value predictions would incorrectly assume that all agents simultaneously initiate new macro-actions, leading to suboptimal performance. This mechanism ensures agents complete their current macro-actions before sampling new behaviors, preventing premature decisions. Thus, conditional Q-value predictions are pivotal for formalizing Mac-IGM and ensuring reliable asynchronous performance.\\nAll AVF algorithms integrate conditional value function predictions into their architectures and update rules, ensuring compliance with Mac-IGM and MacAdv-IGM principles. To further illustrate these connections, we have added a representative example in Appendix B, demonstrating the full expressiveness of AVF-QPLEX for MacAdv-IGM and bridging the gap between theory and practice.\\nAdditionally, our work introduces more than just conditional value prediction. We also propose novel detaching schemes for handling macro-actions and identify/address key challenges in utilizing state representations within asynchronous factorization architectures.\\n\\n> Asynchronous updates require further examination\\u2026\\n\\nWe believe the explanation provided above clarifies the importance of masking agents with ongoing macro-actions, as this ensures that updates respect the asynchronous nature of macro-actions. 
However, we would be happy to provide additional clarification if needed.\\nRegarding the use of the 0 mask in D2, we kindly ask the reviewer to elaborate on their statement: \\u201cMore agents with ongoing macro-actions would lead to lower rewards assigned for others.\\u201d The 0 mask ensures that the mixer ignores the value of agents with ongoing macro-actions, preventing these agents from being incorrectly updated during backpropagation.\\nFor AVF-QPLEX, the mixing network receives information about ongoing macro-actions in both D1 and D2 configurations. We have clarified this detail in the revised manuscript. Overall, D0, D1, and D2 represent three distinct approaches to asynchronous updates, each aligned with the Mac-IGM definition.\\n\\n> Environmental results are unclear\\u2026\\n\\nWe agree that the presentation of results in our original submission could have been improved, and we have made several changes to address this concern:\\n- We revised most of Sec. 5 to improve clarity and organization, and we replaced the bar plots in Section 5.1 with clear tabular representations. Full training runs remain available in the appendices.\\n- To address fairness concerns, we now present the results of prior methods in a separate table. This allows readers to compare these performances with any AVF method of their choice. This comparison is further discussed in the new Section 5.2.\\n- The Dec-MADDRQN method, which we use as a baseline, is an implementation of IQL with macro-actions. This aligns with the reviewer\\u2019s suggestion to incorporate baselines like IQL with macro-actions.\\n- We have revised all figures in the main paper and supplemental material to enhance their clarity.\\n- We also clarified details about our experimental setup. 
Specifically, we noted that each experiment considers 20 seeds and moved this information to the beginning of Section 5 for greater visibility.\\n\\nWe sincerely thank you once again for your detailed and constructive feedback. We hope our revisions and additional clarifications address your concerns. Please feel free to reach out if you have further questions or suggestions.\"}", "{\"summary\": \"This paper proposes a method to effectively handle macro-actions in multi-agent reinforcement learning (MARL) through an asynchronous learning structure. To address the inefficiencies and temporal inconsistencies that arise when agents have different action durations in synchronous learning frameworks, the paper introduces the Asynchronous Value Factorization (AVF) method. This approach allows agents to update independently in environments involving macro-actions, improving learning efficiency and scalability. The paper provides theoretical foundations through the concepts of Macro-IGM and MacAdv-IGM, demonstrating that macro-action-based learning has a broader expressiveness compared to traditional single-action methods. Experimental validation shows that AVF outperforms synchronous methods in various macro-action scenarios, achieving superior performance in asynchronous policy learning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Solving practical issues. The paper introduces an asynchronous learning structure that addresses the temporal inconsistencies and update delays inherent in synchronous approaches. This enables agents to update independently, significantly improving learning efficiency and environmental adaptability. 
This design effectively reflects the real-world asynchronous nature of multi-agent systems, solving practical issues.\", \"Theoretical Extensibility: The concept of Macro-IGM provides greater representational power for handling macro-actions, ensuring that local agent actions remain consistent with the global policy. This allows for stable policy optimization even in asynchronous settings, demonstrating an expanded expressiveness and applicability beyond traditional methods.\", \"Compatible with Existing Algorithms: The proposed asynchronous update mechanism can be integrated with existing algorithms like QMIX and QPLEX, enhancing their performance by addressing temporal inconsistencies and improving efficiency. This compatibility allows the AVF method to be applied broadly across various MARL frameworks, increasing its practical utility.\"], \"weaknesses\": [\"Theoretical Analysis Limitations: Although the paper presents theoretical propositions such as Macro-IGM and MacAdv-IGM, their practical relevance to the proposed AVF algorithm is limited. The theoretical framework is broad and lacks direct applicability, as it does not clearly define or demonstrate how the function class F satisfying these conditions is implemented within the algorithm. Additionally, the propositions do not significantly bridge the gap between theoretical findings and practical performance, raising questions about the real impact of these proofs on the algorithm\\u2019s effectiveness.\", \"The paper\\u2019s asynchronous learning structure and handling of macro-actions, while effective, are relatively straightforward and have been explored in MARL research before. 
This makes the contribution seem incremental rather than a significant innovation, potentially limiting its perceived impact in advancing the field.\"], \"questions\": \"Could you explain how the theoretical propositions, specifically Macro-IGM and MacAdv-IGM, concretely impact the learning performance and stability of the AVF algorithm? I would like clarification on how these theoretical proofs contribute directly to the design and performance improvement of the algorithm.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed and thoughtful response.\\n\\nI meant \\\"asynchronous learning structure\\\" and \\\"handling of macro-actions\\\" have been individually studied. I agree that it has not been explored in this way before.\\n\\nYour clarification and modification regarding the theoretical framework have improved the clarity of the paper. Taking this into consideration, I will adjust my score accordingly.\"}", "{\"metareview\": [\"This paper proposes Asynchronous Value Factorization (AVF), a new method for handling macro-actions (temporally extended actions) in multi-agent reinforcement learning (MARL). 
AVF addresses the inefficiencies and inconsistencies of synchronous learning when agents have different action durations by allowing them to update independently.\", \"Strengths\", \"-----------\", \"**Practical relevance:** AVF tackles a real-world issue in MARL where agents often operate asynchronously with varying action durations.\", \"**Improved efficiency and scalability:** By enabling independent updates, AVF enhances learning efficiency and scalability in macro-action scenarios.\", \"**Theoretical foundation:** The paper introduces Macro-IGM and MacAdv-IGM, theoretical concepts that extend the expressiveness of MARL with macro-actions.\", \"**Compatibility:** AVF can be integrated with existing MARL algorithms like QMIX and QPLEX, enhancing their performance in asynchronous settings.\", \"Weaknesses\", \"--------------\", \"**Limited theoretical significance:** While the paper presents theoretical propositions, their practical connection to the AVF algorithm remains unclear. The theory lacks direct applicability and doesn't clearly demonstrate how it contributes to the algorithm's effectiveness.\", \"**Incremental contribution:** The asynchronous learning structure and handling of macro-actions, while effective, are considered relatively straightforward and lack significant novelty.\", \"**Concerns on experiments:** Reviewers raised questions about the clarity of the experimental results, the choice of baselines, and the need for more detailed comparisons with existing methods in hierarchical MARL.\", \"AVF offers a practical solution for dealing with macro-actions in MARL, demonstrating improved efficiency and scalability in asynchronous settings. 
However, the paper's theoretical contribution needs further clarification and its novelty is considered somewhat limited.\"], \"additional_comments_on_reviewer_discussion\": \"Concerns remained about the contribution of the paper and the relevance of presented results, in particular about the theoretical claims about Macro-IGM and MacAdv-IGM.\"}", "{\"summary\": \"The paper studies temporal abstraction in MARL, focusing on Asynchronous Value Factorization (AVF). It adjusts the IGM consistency and common value factorization methods to the AVF setting, where agent utilities with ongoing macro-actions are excluded from the gradient calculation. The approaches are evaluated on some small benchmark domains.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Studies a well-motivated but somewhat neglected problem\", \"Well-written and easy to understand\", \"The limitations and broader impact are clearly stated\"], \"weaknesses\": \"**Limitation**\\n\\n- The paper assumes macro-actions to be pre-defined but realistically this does not seem feasible to me: Since macro-actions consist of primitive actions, the macro-action space scales exponentially w.r.t. time, which makes a manual definition of adequate (or even optimal) macro-actions prohibitive.\\n\\n**Significance**\\n\\n- I am sceptical about the optimality claim regarding Macro-Dec-POMDPs, which depends on the actual macro-action definition. 
E.g., considering Dec-Tiger, one could define a macro-action that only consists of Listening actions (without opening a door), which would never yield the optimal reward - regardless of what the other agents does.\\n- As far as I understood, the most significant technical contribution is only to exclude the agent utilities of ongoing macro-actions from the gradient calculation, e.g., using masking\\n- The experimental is mainly a self-comparison, i.e., ablations, without including any baseline known from hierarchical MARL [1] or temporal abstraction in MARL [2], which also consider macro-action-based value factorization\\n\\n**Presentation**\\n\\n- In Section 4, the first and second paragraph overlap visually which looks weird.\\n- The labels of Figures 3, 4, 5, 6, and 8 are too small (they are unreadable when printed). Even worse, the plots Figures 5, 6, 7, and 8 pixelate when zooming in, which does not help readability either. Thus, I am uncertain about the significance and quality of the results.\\n\\n**Literature**\\n\\n[1] Xu et al, \\\"HAVEN: Hierarchical Cooperative Multi-Agent Reinforcement Learning with Dual Coordination Mechanism\\\", AAAI 2023\\n\\n[2] Tang et al., \\\"Hierarchical Deep Multiagent Reinforcement Learning with Temporal Abstraction\\\", 2018\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Qp2t\", \"comment\": \"We appreciate your thoughtful review and valuable suggestions. Your comments have significantly helped us improve the clarity and quality of our paper. Below, we address the concerns you raised, with references to the revised paper for further details.\\n\\n> The theoretical framework \\u2026 does not clearly define or demonstrate how the function class F satisfying these conditions is implemented within the algorithm. 
\\u2026 the propositions do not significantly bridge the gap between theoretical findings and practical performance \\u2026.\\n\\nWe acknowledge that the original manuscript could have better explained the connection between macro-action-based IGM principles and their implementation in asynchronous value factorization (AVF) algorithms. Our efforts to keep the paper concise may have inadvertently hindered clarity in this regard.\\nThe critical link between the theoretical framework and the algorithms lies in the conditional prediction of macro-action value functions. These predictions enable accurate estimation of the joint Q-value, even when agents asynchronously terminate their macro-actions at different steps. Without the conditional operator, the Q-value estimation would inaccurately assume that all agents simultaneously initiate new macro-actions, leading to suboptimal performance. This mechanism ensures agents do not prematurely sample new high-level behaviors before completing their current macro-actions. Overall, in asynchronous settings, where macro-action durations are typically unknown and heterogeneous, maintaining consistency between joint and local macro-action value functions is essential for principled factorization. Conditional Q-value predictions are thus pivotal for correctly formalizing Mac-IGM and achieving reliable asynchronous performance. All AVF algorithms incorporate conditional value function predictions into their architectures and update rules, ensuring adherence to Mac-IGM and MacAdv-IGM principles. 
These clarifications are detailed in the revised manuscript, specifically from line 167, 179, and 305.\\nAdditionally, we have included a representative example in Appendix B, which demonstrates the full expressiveness of AVF-QPLEX for MacAdv-IGM, further bridging the gap between theory and practice.\\n\\n> The paper\\u2019s asynchronous learning structure and handling of macro-actions, while effective, are relatively straightforward and have been explored in MARL research before.\\n\\nWe kindly ask the reviewer to clarify this claim. While it is true that macro-actions and asynchronous learning have been studied in multi-agent reinforcement learning (MARL), our work addresses a critical gap by presenting principled centralized training with decentralized execution (CTDE) algorithms specifically tailored for asynchronous MARL. To our knowledge, this is the first work to do so. Our extensive set of experiments empirically demonstrates the practical advantages of the CTDE framework, highlighting the value of principled algorithms in asynchronous MARL scenarios. We believe our results will inspire further research in this area.\\n\\nThank you once again for your constructive feedback. We hope these clarifications address your concerns, and we are happy to discuss any additional questions that may arise during this review process.\"}" ] }
5pFV1FxG9d
Improving Discrete Optimisation Via Decoupled Straight-Through Gumbel-Softmax
[ "Rushi Shah", "Mingyuan Yan", "Michael Curtis Mozer", "Dianbo Liu" ]
Discrete representations play a crucial role in many deep learning architectures, yet their non-differentiable nature poses significant challenges for gradient-based optimization. To address this issue, various gradient estimators have been developed, including the Straight-Through Gumbel-Softmax (ST-GS) estimator, which combines the Straight-Through Estimator (STE) and the Gumbel-based reparameterization trick. However, the performance of ST-GS is highly sensitive to temperature, with its selection often compromising gradient fidelity. In this work, we propose a simple yet effective extension to ST-GS by employing decoupled temperatures for forward and backward passes, which we refer to as "Decoupled ST-GS". We show that our approach significantly enhances the original ST-GS through extensive experiments across multiple tasks and datasets. We further investigate the impact of our method on gradient fidelity from multiple perspectives, including the gradient gap and the bias-variance trade-off of estimated gradients. Our findings contribute to the ongoing effort to improve discrete optimization in deep learning, offering a practical solution that balances simplicity and effectiveness.
[ "Gumbel-Max Trick", "Gradient Estimation", "Discretisation", "Straight-Through Gumbel Softmax", "Discrete Optimisation" ]
Reject
https://openreview.net/pdf?id=5pFV1FxG9d
https://openreview.net/forum?id=5pFV1FxG9d
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mukM29A0Gf", "mqRoCPsoNW", "Kep3adjBEc", "JYD1xVlFwI", "HJFgqd24O2", "CoJG7VZ3Ug", "6ebpSR7mk9", "1ZuXMRmWsu", "0agsXQdVFQ" ], "note_type": [ "official_review", "official_review", "official_comment", "decision", "official_review", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730648692659, 1729518306637, 1732795842795, 1737523840301, 1730714604967, 1734225330629, 1732201258151, 1730827823617, 1732549956973 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7464/Reviewer_pi6t" ], [ "ICLR.cc/2025/Conference/Submission7464/Reviewer_tLHs" ], [ "ICLR.cc/2025/Conference/Submission7464/Reviewer_tLHs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7464/Reviewer_kz1r" ], [ "ICLR.cc/2025/Conference/Submission7464/Area_Chair_Q2C1" ], [ "ICLR.cc/2025/Conference/Submission7464/Authors" ], [ "ICLR.cc/2025/Conference/Submission7464/Reviewer_UoU5" ], [ "ICLR.cc/2025/Conference/Submission7464/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces $Decoupled ST-GS$, an extension of the Straight-Through Gumbel-Softmax (ST-GS) estimator that utilizes separate temperature parameters for forward and backward passes. This decoupling enhances control over relaxation smoothness during inference and gradient fidelity during training, addressing the limitations of the traditional ST-GS method. Through extensive experiments, the authors demonstrate that Decoupled ST-GS significantly outperforms the standard ST-GS across various tasks and datasets. Additionally, the paper analyzes its impact on gradient fidelity, providing insights into how the new approach improves optimization in discrete latent models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper introduces Decoupled ST-GS, a novel extension of the Straight-Through Gumbel-Softmax estimator that allows independent control of temperature parameters for forward and backward passes, enhancing relaxation smoothness and gradient fidelity.\\n2. The authors demonstrate significant performance improvements over the traditional ST-GS.\\n3. Additionally, the paper thoroughly analyses gradient fidelity, exploring the gradient gap and bias-variance trade-off, which offers valuable insights into optimizing discrete latent models in deep learning.\", \"weaknesses\": \"Most experiments are performed on toy experiments in three small datasets: CIFAR10, SVHN, and MNIST for binary autoencoder and VAE.\", \"questions\": \"Can the author provide a comparison of MAE settings for ImageNet1k experiments? To show the methods works on more practical settings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an extension of the straight-through Gumbel-Softmax estimator by decoupling the temperatures for the forward and backward passes. The authors present an empirical evaluation across multiple tasks and datasets to demonstrate the advantages of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The proposed approach is simple, straightforward to implement, and the paper is clearly written.\", \"weaknesses\": \"It is unfortunate that the simplicity of the proposed idea is not supported by any theoretical guarantees to validate its effectiveness. Additionally, the paper suffers from redundancy and lacks sufficient depth.\", \"questions\": [\"It seems that the temperature should affect the smoothness of the training objective. Could you comment? If so, why was the same step size used for all temperature settings?\", \"How many random seeds were used for each experiment? 
Could you provide error bars to quantify variability?\", \"Your grid search suggests that the minimum validation errors occur at the boundary of the search space. Do you believe extending the grid might lead to further improvements?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your answer. Below are a few comments:\\n\\n1. I intended to refer to how the gradient Lipschitz continuity of the objective is affected.\\n2. It appears that further improvements could be made, such as refining the grid in logarithmic scale, which may yield more accurate results.\\n\\nI have decided to maintain my original score, as I believe the contribution is too limited and does not fully meet the standards expected for ICLR\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper strives to focus on studying the limitations of the Straight-Through Gumbel-Softmax (ST-GS) estimator, which is sensitive to temperature settings. The authors propose the Decoupled ST-GS estimator, which uses distinct temperatures for the forward and backward passes, claiming to enhance both performance and gradient fidelity. Through extensive experiments on various tasks and datasets, they demonstrate that this approach significantly improves upon the original ST-GS, offering better control over the trade-off between relaxation smoothness during inference and gradient accuracy during training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The paper is clearly written and easy to follow.\\n2. The authors provide some interesting results. The experimental demonstration are in detail.\", \"weaknesses\": \"The overall demonstration is okay, and I found no significant flaws in the presentation. However, in my opinion, the primary concern of the paper is its significance and contribution. 
In its current form, it does not sufficiently meet the ICLR requirements.\\n\\nThe core idea of the proposed method is to employ different temperature values in the forward and backward processes, while vanilla ST-GS uses the same temperature. This idea is rather simple and straightforward. And it is clear that we could get better results over vanilla ST-GS since this approach adds an additional degree of freedom. And of course this will incur additional tuning effort.\\n\\nFrom my reading, I did not find sufficient reasons to justify this added complexity, and the authors have not provided compelling theoretical or empirical insights to support their choice. \\n\\nAdditionally, the introduction of background information, including the related works section, spans nearly five full pages, which feels excessive and somewhat lacking in informative content.\\n\\nI suggest that if the authors choose to retain this method, they should either provide more theoretical insights to bolster their claims or focus on applying the method to significant problems that are of greater interest to the community.\\n\\nFinally, please check the formula: line 143 \\\"z_k}\\\" -> \\\"z_k]\\\".\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have not found any discussions about the limitations and potential negative societal impact. But in my opinion, this may not be a problem, since the work only focuses on the optimization in deep learning. Still, it is highly encouraged to add corresponding discussions.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The work empirically improves the ST-GS estimator commonly used in discrete settings by decoupling the temperature parameters used in forward and backward passes. The suitable temperatures are determined based on grid search. 
The idea is simple and shown to be effective.\", \"the_major_concerns_regarding_the_work_are_as_follows\": \"first, he experiments are run on rather simple datasets and it's unclear how it will perform on realistic benchmarks. Further, there is no technical or theoretical insight as to why this modification should be effective. Finally, the author responses were somewhat limited, including no response to some of the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"The author response was somewhat limited as they did not respond to all of the original reviews. One of the reviewers (who got a response) engaged with the authors, but the concerns persisted. Further, authors posted external links/urls during the discussions, which was odd and are possibly against the guidelines.\"}", "{\"comment\": \"Thanks for the valuable feedback. Here are the answers to your queries:\\n1. \\\"How to determine the forward and backward temperatures?\\\" - Using grid-search based on validation performance.\\n2. \\\"For the modified Gumbel-SoftMax sample \\\\hat{z}^b, its partial gradient is still approximated to one?\\\" - Yes. We use the Straight-Through approximation.\\n3. \\\"The results is sensitive to the choice of the forward and backward temperatures\\\" - Yes. The idea is to show that a single temperature for both passes is both suboptimal and an unnecessary constraint, and decoupling them unlocks potential performance.\"}", "{\"summary\": \"This paper present a simple method, called decoupled stgs,for dealing with discrete representation. Through the employing the decoupled temperatures for forward and backward passes, the gradient estimators could be less sensitive to the temperature. The experimental results demonstrate the practical advantage.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed approach makes use of the advantage of st-gs and ste and avoid the disadvantage of these two methods. 
The paper provides the simple approach that provide the two temperatures for both forward and backward passes.\", \"weaknesses\": \"However, the proposed approach lack newly estimator, even though the performance improved. The result relies on the selected parameters, which prevent the practical usages\", \"questions\": \"1 how to determine the forward and backward temperatures\\n2 for the modified Gumbel-SoftMax sample \\\\hat{z}^b, its partial gradient is still approximated to one?\\n3 the results is sensitive to the choice of the forward and backward temperatures\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the valuable feedback.\\n\\n1. Could you clarify the first question: \\\"It seems that the temperature should affect the smoothness of the training objective. Could you comment? If so, why was the same step size used for all temperature settings?\\\"\\n\\n2. \\\"How many random seeds were used for each experiment? Could you provide error bars to quantify variability?\\\" - 5 for the reconstruction task and 10 for the generative modelling task, as mentioned in lines 363 and 406. We have already included error bars in graphs for the reconstruction task. As for the generative modelling task, here are the modified line plots with error bars:\", \"8x4_setting___https\": \"//imgur.com/a/SykdwXB\", \"16x4_setting___https\": \"//imgur.com/a/Haxl0hm\\n\\n\\n\\n3. \\\"Your grid search suggests that the minimum validation errors occur at the boundary of the search space. Do you believe extending the grid might lead to further improvements?\\\" - We suppose you are referring to the reconstruction task here. Yes, we conducted a broader set of experiments but found that the performance plateaued beyond the combinations shown in the submission. We have mentioned this observation in lines 365-367. 
Here are the results for a more extensive grid:\", \"cifar10___https\": \"//imgur.com/rVwqqQM\", \"mnist___https\": \"//imgur.com/a/NhPx9Kk\", \"svhn___https\": \"//imgur.com/a/M6nxdhw\"}" ] }
5oaUMZEjWe
Unifying Diarization, Separation, and ASR with Multi-Speaker Encoder
[ "Muhammad Shakeel", "Yui Sudo", "Yifan Peng", "Chyi-Jiunn Lin", "Shinji Watanabe" ]
The rapid progress of single-task architectures has dominated recent developments in multi-talker speech processing, prompting the need for unified approaches. This paper introduces a unified multi-speaker encoder (UME), a novel model architecture that jointly learns representations for diarization, separation, and multi-speaker automatic speech recognition (ASR) tasks using a shared pre-trained foundational speech encoder. We leverage the hidden representations from multiple layers of UME to effectively use information from different semantic levels, contributing to bottom-up alignment between tasks. This joint training approach captures the inherent interdependencies among the tasks, enhancing overall performance on overlapping speech data. Our evaluations demonstrate that UME achieves substantial improvements over the single-task state-of-the-art (SOTA) baselines dedicated to speaker diarization, speech separation, and multi-speaker ASR. Notably, for speaker diarization, UME achieved SOTA performance by lowering the diarization error rate (DER) from 3.24 to 2.19 on the Libri2Mix dataset. Furthermore, our results in multi-speaker ASR outperform the previous results, reducing the concatenated minimum-permutation word error rate (cpWER) from 11.9 to 9.2 on the LibriSpeech2Mix evaluation set.
[ "Speaker diarization", "speech separation", "multi-speaker speech recognition", "overlapped speech recognition", "end-to-end", "multitask learning" ]
Reject
https://openreview.net/pdf?id=5oaUMZEjWe
https://openreview.net/forum?id=5oaUMZEjWe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xSUDtTOinw", "w0dyy6Jxc9", "uWLGcQxozm", "pPNDk5dNdI", "kD4VVpfkz5", "j8RuCNJzKf", "iLSZ6f4IMG", "ga9y83BqN2", "gJheNJmlhJ", "gEAwgS5NFS", "cHMLJuk7Po", "TeO68zG8Wn", "LMfF1J2qOD", "ASEwqz4017", "9SoaMhKnTs", "8bqAgJeWwk", "3LdQzBP3f3" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732639911613, 1732340825218, 1737523957751, 1732699535870, 1730312196701, 1732343529596, 1730708849807, 1732340025337, 1732343910827, 1732702205674, 1732339562138, 1730387712334, 1730286051167, 1733143970945, 1732342672078, 1732571204283, 1733620969270 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_xgGR" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_7DgA" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_roJK" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_xgGR" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_wjfQ" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_7DgA" ], [ "ICLR.cc/2025/Conference/Submission9069/Authors" ], [ "ICLR.cc/2025/Conference/Submission9069/Reviewer_roJK" ], [ "ICLR.cc/2025/Conference/Submission9069/Area_Chair_8Vmg" ] ], "structured_content_str": [ "{\"comment\": \"I want to thank the authors for their clear reply and updating the work in such a short time-span. 
I believe the results and conclusions as-is are not relevant to the community due to the 100% overlapping speech in the SD task, so I will not update my score.\\n\\nHowever, this also means that the SD task in SUPERB is not relevant, which I did not know before this review, and is something which I want to explicitly state is non-obvious and the authors should not blame themselves for having followed the SUPERB SD recipe.\"}", "{\"comment\": \"**We thank the reviewer for the detailed feedback and constructive questions. We are encouraged by the recognition of the strengths of our work.**\\n\\nBelow are the detailed responses to each query.\\n\\n**Questions**\\n>Clarification on Figure 1: Figure 1 appears misleading. Equation (3) shows different weights for different tasks, yet Figure 1 depicts only a single weight. Can you clarify this discrepancy?\\n\\nWe have made the corrections in Figure 1. \\n***\\n>Training with Real Data: How do you propose to use real data for training the model, given that datasets for these tasks (e.g., ASR and speech separation) are typically disjoint? For example datasets for ASR do not generally contain ground truth for speech separation. How will this disjoint nature impact the joint training of the model?\\n\\nThank you for raising this important point regarding the challenge of training with real data for the joint tasks of separation, diarization, and ASR. We acknowledge that, in practice, datasets for these tasks are often disjoint, with ASR datasets typically lacking ground truth for separation and diarization, and vice versa. Here is our approach to mitigate this limitation:\\n\\n* We leverage synthetic or simulated datasets where ground truth for all three tasks (separation, diarization, and ASR) is available. 
This pre-training step enables the model to learn joint representations effectively and establish a foundational understanding of each task.\\n* Once the model is pre-trained on synthetic data, we fine-tune it using real-world data for each task independently, while keeping the other task parameters frozen. This strategy will enable the professionals to use real diarization, separation and ASR datasets to adapt to each component of the UME framework.\\n\\nSimilar strategies have been employed in existing studies to address the absence of ground truth for all three tasks in real-world datasets, e.g., as reported in [1] and [2]: \\n\\n**References**\\n```\\n[1] Bredin, H. (2023) pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe. Proc. INTERSPEECH 2023, 1983-1987, doi: 10.21437/Interspeech.2023-105\\n[2] Cornell, S., Wiesner, M.S., Watanabe, S., Raj, D., Chang, X., Garcia, P., Masuyam, Y., Wang, Z.-Q., Squartini, S., Khudanpur, S. (2023) The CHiME-7 DASR Challenge: Distant Meeting Transcription with Multiple Devices in Diverse Scenarios. Proc. 7th International Workshop on Speech Processing in Everyday Environments (CHiME 2023), 1-6, doi: 10.21437/CHiME.2023-1\\n```\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer xgGR\", \"comment\": \"We appreciate the reviewer\\u2019s observation regarding the overlapping style of datasets like Libri2Mix and Libri3Mix, which are indeed 100% overlapped and left-aligned. This alignment simplifies tasks like speaker diarization, as it requires predicting speaker activity stamps only on the right side. However, we believe the evaluation remains fair since our proposed method and the baselines in the prior work [1,2] are evaluated under the same experimental conditions.\\n\\nThat said, as shown in Table 3, left-aligned datasets pose significantly greater challenges for the multi-speaker ASR task than speaker diarization. 
This underscores the non-trivial nature of proposing a unified framework capable of handling tasks with inherently disjoint characteristics. As discussed in the paper, we address this complexity by directly leveraging the Libri2Mix and Libri3Mix datasets to demonstrate how our UME framework seamlessly integrates these diverse tasks. This approach reflects the framework\\u2019s ability to achieve broader objectives, such as task efficiency, generalizability, and scalability, which we believe are of particular interest to the ICLR audience. \\n\\nAdditionally, to the best of our knowledge, our work makes a meaningful contribution as it is the first effort to unify speaker diarization, speech separation, and multi-speaker ASR in a single framework. This unification is achieved by leveraging a weighted sum of hidden state representations alongside multi-task learning, as detailed in Section 2.\\n\\nWhile we respect the reviewer\\u2019s perspective on the paper, we remain confident that our work represents a significant step toward addressing the broader challenges of unifying diverse tasks within end-to-end speech processing frameworks.\"}", "{\"summary\": \"The authors proposed a \\\"Unified Multi-Speaker Encoder (UME)\\\" for tasks such as speech separation, speech diarization, and multi-speaker ASR. Specifically, the UME employs a pre-trained OWSM encoder and is jointly fine-tuned across all three tasks. The latent embeddings from multiple layers of the UME are combined using a weighted sum with learnable coefficients. However, the novelty of this approach is limited, and its performance is not as good as that of state-of-the-art methods\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This manuscript attempts to construct a unified encoder for speech-related tasks in multi-talker scenarios. This research question is necessary and important.\", \"weaknesses\": [\"The manuscript lacks novelty. 
Numerous works have already discussed the use of multi-layer feature fusion/weighted sum, such as \\\"Large-Scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification,\\\" \\\"Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers,\\\" and \\\"Resource-Efficient Transfer Learning from Speech Foundation Model Using Hierarchical Feature Fusion.\\\" Compared to the aforementioned studies, this manuscript makes no novel contributions to methodology. Instead, it leverages existing techniques to address three speech-related tasks in multi-speaker scenarios.\", \"The authors overclaimed achieving state-of-the-art performance in speaker diarization, speech separation, and multi-speaker ASR.\", \"However, for speech separation, they only compared their method with ConvTasNet (2019) and did not benchmark against more recent methods such as Mossformer or Mossformer2.\", \"For multi-speaker ASR, the proposed method achieved a WER of 9.2% on LibriSpeechMix. In contrast, the state-of-the-art performance is 3.43%, as reported in \\\"Empowering Whisper as a Joint Multi-Talker and Target-Talker Speech Recognition System.\\\"\", \"For speaker diarization, the authors only compared their method with EEND-based methods. The performance of their UME when combined with non-E2E methods is not explored.\", \"The authors reported the model's performance solely on two-speaker simulated overlapped speech for all three tasks. The performance in real-world scenarios and with more speakers remains under investigation.\"], \"questions\": \"1. Can you explain the novelty of your approach compared to existing multi-layer weighted sum methods, aside from fine-tuning the model on the three multi-speaker-related tasks?\\n2. Why did you not compare your method with the latest research?\\n3. Why did you not report the results from other papers that demonstrate better performance?\\n4. 
It would be beneficial to discuss the performance on a broader range of datasets, rather than limiting the evaluation to only two-speaker LibriMix and LibriSpeechMix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**We thank the reviewer for the detailed feedback and constructive questions. We are encouraged by the recognition of the strengths of our work.**\\n\\nBelow are the detailed responses to each query.\\n\\n**Weaknesses**\\n>The novelty is limited. As introduced in Section 2, multi-layer feature learning and joint training of speech tasks have been investigated in plenty of previous studies. The main contribution of this paper is to include more speech tasks into joint training and to combine it with multi-layer feature learning, which is limited in my opinion.\\n\\nWe acknowledge the reviewer\\u2019s comment regarding the lack of novelty in the multi-layer feature fusion/weighted sum approach. As noted in Section 2.1, our contribution lies in leveraging the weighted sum of OWSM encoder features to develop a unified speech model capable of addressing diarization, separation, and multi-speaker ASR tasks simultaneously. Integrating these tasks into a single multi-task framework presents a significant challenge due to their inherently disjoint nature. Our work addresses this issue by developing a unified framework that aligns these tasks through shared representations while allowing task-specific adaptations. This approach not only reduces redundancy in task-specific models but also promotes mutual benefits, such as improved speaker attributed transcription and better signal separation for diarization.\\n***\\n\\n> As shown in experimental results, the task weights for joint training in Eq.(16) play an important role in model performance. 
Thus, how to optimize these weights should be carefully explained.\\n\\nWe adopted a weighted-sum scalarization approach [1] to simplify the multi-objective optimization problem into a single-objective [2] one by assigning equal weights to all task-specific losses. This approach assumes that the tasks are cooperative rather than conflicting, particularly in our two-speaker and three-speaker scenarios, and reflects their equal importance in our framework. Since the primary goal of this study is to develop a unified framework capable of integrating multiple tasks rather than optimizing individual task performance, we propose an equal-weighting strategy that assigns equal importance to all tasks. This approach is validated by experimental results, which demonstrate that simple equally weighted scalarization achieves state-of-the-art performance.\\n\\nWe have added the above discussion in Section 4.3.\\n\\n***\\n\\n**Questions**\\n\\n>According to the results shown in Tables 3 and 4, the layer weights for the three tasks were quite similar in both joint training configurations. Therefore, it may not be necessary to use task-dependent layer weights. How about the performance of using unified layer weights across different tasks?\\n\\nWe will attempt to do the experiments with the unified layer weights. However, fully completing the experiments might not be feasible within the rebuttal timeline. We will do our best to include preliminary results during the revision phase.\\n\\n**References**\\n\\n```\\n[1] Matthias Ehrgott. Weighted Sum Scalarization, pp. 55\\u201375. Springer Berlin Heidelberg, Berlin, Heidelberg, 2000. ISBN 978-3-662-22199-0. \\n[2] Cristina Bazgan, Stefan Ruzika, Clemens Thielen, and Daniel Vanderpooten. The power of the weighted sum scalarization for approximating multiobjective optimization problems. Theory of Computing Systems, 66(1):395\\u2013415, Feb 2022. ISSN 1433-0490. 
doi: 10.1007/s00224-021-10066-5.\\n```\"}", "{\"summary\": \"The authors introduce a method to concurrently train models for speech recognition, speech separation, and speaker diarization, utilizing embeddings from a pre-trained speech foundation model (SFM). They leverage a weighted sum of the model\\u2019s layer embeddings as input for each task. Experimental results indicate that this joint training approach enhances the performance of all three tasks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a unified architecture that tackles Speech Separation, Speaker Diarization, and ASR tasks simultaneously. Experimental results confirm that this multitask framework enhances the performance of each task, demonstrating their mutual benefit.\", \"weaknesses\": \"As noted by the authors, previous works have utilized SFMs like WavLM for SS and ASR tasks. Although the proposed extension to cover all three tasks shows metric improvements compared to other SFMs, the innovation is minimal.\\n\\nAdditionally, the authors have only tested their approach on a simulated dataset. A more comprehensive evaluation using datasets like AMI, ICSI, or LibriCSS would better validate their method.\", \"questions\": \"1. Clarification on Figure 1:\\nFigure 1 appears misleading. Equation (3) shows different weights for different tasks, yet Figure 1 depicts only a single weight. Can you clarify this discrepancy?\\n\\n\\n2. Training with Real Data:\\nHow do you propose to use real data for training the model, given that datasets for these tasks (e.g., ASR and speech separation) are typically disjoint? For example datasets for ASR do not generally contain ground truth for speech separation. 
How will this disjoint nature impact the joint training of the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Additionally, as per the reviewer\\u2019s guide, a submission should not be rejected solely for not achieving state-of-the-art results. To quote:\\n\\n```\\n\\u201cQ: If a submission does not achieve state-of-the-art results, is that grounds for rejection?\", \"a\": \"No, a lack of state-of-the-art results does not by itself constitute grounds for rejection. Submissions bring value to the ICLR community when they convincingly demonstrate new, relevant, impactful knowledge. Submissions can achieve this without achieving state-of-the-art results.\\u201d\\n```\\n\\nWhile we respect the reviewers' feedback, we carefully reviewed the manuscript and decided to exclude the results reported by Meng et al. for the following reasons:\\n\\n* Meng et al.'s study used the \\\"clean\\\" subsets of Libri2Mix and Libri3Mix, whereas our work focuses on the more challenging noisy \\\"mix both\\\" subset (see Section 4.1).\\n* Including their results would introduce inconsistency in comparisons due to the differences in dataset characteristics and noise conditions.\\n\\nWe hope this clarification addresses the reviewers' concerns.\\n\\n***\\n>For speaker diarization, the authors only compared their method with EEND-based methods. The performance of their UME when combined with non-E2E method is not explored.\\n\\nOur main contribution lies in demonstrating the feasibility and flexibility of supporting end-to-end (E2E) speaker diarization, speech separation, and multi-speaker ASR tasks within a unified framework. To illustrate this, we adopt EEND, a widely recognized open-source end-to-end speaker diarization model. 
Our experiments show that integrating EEND with speech separation and multi-speaker ASR tasks in a multi-task configuration improves its performance compared to training EEND as a standalone model.\\n\\n***\\n>The author reported the model's performance solely on two-speaker simulated overlapped speech for all three tasks. The performance in real-world scenarios and with more speakers remains under investigation.\\n\\nWe will include results for Libri3Mix and LibriSpeech3Mix for completeness. However, due to the lack of ground truth data for all three tasks in publicly available real-world datasets, our analysis is limited to simulated datasets. This absence of ground truth in real-world data makes it challenging to objectively evaluate the UME framework's performance. Therefore, we focus on simulated datasets, where ground truth is available, to ensure accurate comparisons. We acknowledge this limitation and plan to explore real-world datasets in future work as they become more accessible and standardized.\\n\\n***\"}", "{\"title\": \"Summary of changes in the revised manuscript\", \"comment\": \"**Summary of changes in the revised manuscript**\", \"we_have_revised_the_manuscript_and_uploaded_an_updated_version_incorporating_the_following_changes\": [\"Corrected the variable label in figure 1.\", \"Added the minimum, maximum, and average durations of utterances in Section 4.1.\", \"Clarified details about the implementation, experimental setup, and the choice of weights for task-specific losses.\", \"Included Libri3Mix results for completeness in Tables 2, 3, and 4.\", \"Added a discussion of the Libri3Mix experiments in Section 5.2.\", \"Added recovered speech examples for the three-speaker case in Appendix A.3.\", \"Provided layer weights for single-task models in the Appendix A.1 and moved the previous layer weights figure there due to space constraints.\", \"Added a discussion on the limitations of using simulated datasets in the Limitations section.\", 
\"**Note:** All updates in the revised paper are highlighted in red text for easy reference.\"]}", "{\"title\": \"Response to Reviewer roJK\", \"comment\": \"We respect the reviewer\\u2019s feedback; however, our experiments are limited to simulated datasets due to the lack of ground truth data for training all three tasks jointly for publicly available real-world datasets. We acknowledge this limitation in the section on limitations and plan to explore real-world datasets in future work as they become more accessible and standardized.\"}", "{\"comment\": \"**We thank the reviewer for the detailed feedback and constructive questions. We are encouraged by the recognition of the strengths of our work.**\\n\\nBelow are the detailed responses to each query.\\n\\n**Weaknesses**\\n\\n>The manuscript lacks novelty. Numerous works have already discussed the use of multi-layer feature fusion/weighted sum, such as \\\"Large-Scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification,\\\" \\\"Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers,\\\" and \\\"Resource-Efficient Transfer Learning from Speech Foundation Model Using Hierarchical Feature Fusion.\\\" Compared to the aforementioned studies, this manuscript makes no novel contributions to methodology. Instead, it leverages existing techniques to address three speech-related tasks in multi-speaker scenarios.\\n\\nWe acknowledge the reviewer\\u2019s comment regarding the lack of novelty in the multi-layer feature fusion/weighted sum approach. As noted in Section 2.1, our contribution lies in leveraging the weighted sum of OWSM encoder features to develop a unified speech model capable of addressing diarization, separation, and multi-speaker ASR tasks simultaneously. Integrating these tasks into a single multi-task framework presents a significant challenge due to their inherently disjoint nature. 
Our work addresses this issue by developing a unified framework that aligns these tasks through shared representations while allowing task-specific adaptations. This approach not only reduces redundancy in task-specific models but also promotes mutual benefits, such as improved speaker attributed transcription and better signal separation for diarization.\\n\\n***\\n>The authors overclaimed achieving state-of-the-art performance in speaker diarization, speech separation, and multi-speaker ASR. However, for speech separation, they only compared their method with ConvTasNet (2019) and did not benchmark against more recent methods such as Mossformer or Mossformer2.\\n\\nOur main contribution lies in demonstrating the feasibility and flexibility of supporting end-to-end (E2E) speaker diarization, speech separation, and multi-speaker ASR tasks within a unified framework. To illustrate this, we adopt ConvTasNet, a widely recognized open-source speech separation model that operates in the time domain. Our experiments show that integrating ConvTasNet with speaker diarization and multi-speaker ASR tasks in a multi-task configuration improves its performance compared to training ConvTasNet as a standalone model. We believe this approach is not limited to ConvTasNet and could similarly enhance other speech separation models, such as Mossformer or Mossformer2, if they become available as open-source models. This demonstrates the flexibility of our multi-task architecture and its potential to improve speech separation models when integrated with diarization and ASR tasks.\\n\\n***\\n>For multi-speaker ASR, the proposed method achieved a WER of 9.2% on LibriSpeechMix. 
In contrast, the state-of-the-art performance is 3.43%, as reported in \\\"Empowering Whisper as a Joint Multi-Talker and Target-Talker Speech Recognition System.\\\"\\n\\nWe consider this paper \\\"contemporaneous\\\" based on the reviewer guide provided by ICLR (https://iclr.cc/Conferences/2025/ReviewerGuide [FAQ for Reviewers]), which states:\\n\\n```\\n\\u201cQ: Are authors expected to cite and compare with very recent work? What about non-peer-reviewed (e.g., ArXiv) papers? (updated on 7 November 2022)\", \"a\": \"We consider papers contemporaneous if they are published within the last four months. That means, since our full paper deadline is October 1, if a paper was published (i.e., at a peer-reviewed venue) on or after July 1, 2024, authors are not required to compare their work to that paper. Authors are encouraged to cite and discuss all relevant papers, but they may be excused for not knowing about papers not published in peer-reviewed conference proceedings or journals, which includes papers exclusively available on ArXiv. Reviewers are encouraged to use their own good judgment and, if in doubt, discuss with their area chair.\\u201d\\n```\\n\\nThe non-peer-reviewed version of the referenced paper (https://arxiv.org/abs/2407.09817) was published on ArXiv on **July 13, 2024**, while the peer-reviewed version (https://www.isca-archive.org/interspeech_2024/meng24c_interspeech.html) was published on **September 1, 2024**. Given that our submission deadline was **October 1, 2024**, we did not include the results from this paper in our initial version as they fall within the \\\"contemporaneous\\\" period defined by the reviewer guide.\"}", "{\"summary\": \"This work proposes a method where one speech foundation model is used as backbone to three task-specific heads. 
Unlike the standard SUPERB protocol, these heads are trained in a multi-task setup to perform speaker diarization, speech separation, and multi-speaker speech recognition.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"This work tackles the inherently difficult problem of solving different, orthogonal speech technology tasks. A real-world ASR system needs to solve these, and currently this is done with a pipeline approach of multiple models. Unifying this pipeline of speech separation, speaker diarization, speech recognition, etc into a single model is beneficial to the community.\", \"weaknesses\": [\"I do not believe this work uses a fair evaluation protocol, as it is stated in Section 4.1 that the the speaker diarization test set uses 100% overlapped speech. From my understanding of DER, this means that the model needs to simply predict that each of the 2 speakers are talking all the time. The low but not perfect scores in Table 1 are then simply due to noise in the ground-truth labels?\", \"The paper is well-structured, but there are some sentences which are not clear or contain typos. See e.g.,\", \"line 37-38 \\\"A key limitation of training tasks independently is that inter-dependencies cannot be leveraged\\\"\", \"line 40 \\\"Most existing speech-processing frameworks address this limitation with unified, joint training architectures\\\" (also, 'unified, joint training architecture' is vague)\", \"line 48: delete \\\" so much\\\", add, e.g., \\\"do not work well on\\\"\", \"line 154 \\\"gorund truth\\\"\", \"line 156 \\\"where a T-length...\\\"\", \"line 298 \\\"metrics\\\" -> \\\"metric\\\"\", \"line 367 \\\"Unlike WavLM ... outperforms WavLM\\\" This sentence is hard to parse, split it up, e.g.: \\\"WavLM uses overlapped speech mixtures, while OWSM is just trained on clean speech. We observe that... 
OWSM > WAVLM\\\"\", \"line 369 \\\"We explain\\\" -> \\\"We speculate\\\" or \\\"We hypothesize\\\"\", \"I believe reproduction of this work is difficult due to missing details on dataset generation and model training, see my questions below.\"], \"questions\": [\"Can the authors comment on the use of 100% overlapped speech for evaluation of SD? Does the SUPERB benchmark for SD also use 100% overlapped speech?\", \"What model type was used for OWSMv3.1(base, small, medium , LR)? From Figure 3 I assume medium, but I do not think the text states this anywhere.\", \"What does a \\\"flat start\\\" mean in line 342?\", \"For how many epochs/steps did you train in total?\", \"How much VRAM is needed to train your model, which GPU(s) did you use, and how long did it take to train?\", \"What batch size did you use, and how were these sampled?\", \"What is the minimum, maximum, and average duration of utterances in your train/eval dataset(s)?\", \"I think it is unexpected (From Figure 3 and Figure 4) that the weights are nearly identical between task heads. Do the authors know what weights are found when training in a single-task setup, and whether they are similar or different to the values displayed in these figures?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a method of combining diarization, separation and ASR tasks with multi-layer feature learning in a unified model. Experiments have been conducted to evaluate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is written clearly and easy to follow.\", \"weaknesses\": \"1) The novelty is limited. As introduced in Section 2, multi-layer feature learning and joint training of speech tasks have been investigated in plenty of previous studies. 
The main contribution of this paper is to include more speech tasks into joint training and to combine it with multi-layer feature learning, which is limited in my opinion.\\n2) As shown in experimental results, the task weights for joint training in Eq.(16) play an important role in model performance. Thus, how to optimize these weights should be carefully explained.\", \"questions\": \"According to the results shown in Tables 3 and 4, the layer weights for the three tasks were quite similar in both joint training configurations. Therefore, it may not be necessary to use task-dependent layer weights. How about the performance of using unified layer weights across different tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. Some of my concerns (on performance) have been addressed, though I still think the manuscript falls below the acceptance standards for ICLR (especially for novelty).\"}", "{\"comment\": \"**We thank the reviewer for the detailed feedback and constructive questions. We are encouraged by the recognition of the strengths of our work.**\\n\\nBelow are the detailed responses to each query.\\n\\n**Weaknesses**\\n>The paper is well-structured, but there are some sentences which are not clear or contain typos.\\n\\nWe have improved the overall clarity of the paper by refining the sentences and correcting the spelling mistakes indicated by the reviewer xgGR.\\n\\n**Questions**\\n>Can the authors comment on the use of 100% overlapped speech for evaluation of SD? Does the SUPERB benchmark for SD also use 100% overlapped speech?\\n\\nWe follow previous studies [1,2] to evaluate the speaker diarization task using 100% overlapped speech in the simulated dataset. 
Furthermore, to the best of our knowledge, the SUPERB benchmark, as reported by its authors [2], focuses on a two-speaker scenario in the LibriMix dataset, which also consists entirely of 100% overlapped speech.\\n\\n***\\n>What model type was used for OWSMv3.1(base, small, medium , LR)? From Figure 3 I assume medium, but I do not think the text states this anywhere.\\n\\nIn this study, we employed \\\"OWSMv3.1 medium,\\\" which will be explicitly mentioned in the manuscript.\\n\\n>What does a \\\"flat start\\\" mean in line 342? & For how many epochs/steps did you train in total?\\n\\nDuring training in UME, we initialize the encoder parameters with the pre-trained OWSMv3.1 medium encoder and fine-tune the encoder layers for 70 epochs, while all task-specific parameters have a flat start (i.e., no parameter initialization for task-specific layers) and are trained for 70 epochs. For the ASR-initialized UME model, the multi-speaker ASR model is pre-trained separately for 30 epochs, and then the ASR-specific head in the UME model is initialized from this pre-trained model. This results in a total of 70 epochs of fine-tuning for the OWSMv3.1 encoder layers, 70 epochs of training for the diarization and separation tasks, and 70 epochs of fine-tuning for the ASR task. \\n\\nWe have added the above discussion in Section 4.3 of the revised manuscript.\\n***\\n>How much VRAM is needed to train your model, which GPU(s) did you use, and how long did it take to train? & What batch size did you use, and how were these sampled?\\n\\nFour A100 80GB GPUs are used during training, and the batch size is dynamically adjusted based on the input length using the numel batch type in the ESPnet toolkit. 
In our experiments, the average batch size was 44, and it took six days to train the model for up to 70 epochs.\\n\\nWe have added the above discussion in Section 4.3 of the revised manuscript.\\n***\\n>What is the minimum, maximum, and average duration of utterances in your train/eval dataset(s)?\\n\\nThe training and evaluation sets have utterance durations with a minimum of 3.1 seconds, a maximum of 56.8 seconds, and an average of 16.2 seconds. Additional details can be found in the corresponding table and are further discussed in Section 4.1 and Appendix A.2 of the revised manuscript.\\n***\\n>I think it is unexpected (From Figure 3 and Figure 4) that the weights are nearly identical between task heads. Do the authors know what weights are found when training in a single-task setup, and whether they are similar or different to the values displayed in these figures?\\n\\nWe have incorporated the weights from the single-task setups and observed that their weight distributions differ from those in the multi-task setups. However, the overall trend remains consistent: the initial and final layers contribute more in the single-task setups, except for the diarization task, where all layers contribute equally. \\n\\nWe have added these observations in the Appendix A1 of the revised manuscript.\\n\\n***\\n**References**\\n```\\n[1] Chen, Sanyuan et al. \\u201cWavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing.\\u201d IEEE Journal of Selected Topics in Signal Processing 16 (2021): 1505-1518.\\n[2] Yang, Shu-Wen et al. \\u201cSUPERB: Speech processing Universal PERformance Benchmark.\\u201d Interspeech (2021).\\n```\"}", "{\"title\": \"Thank you for addressing my questions\", \"comment\": \"I thank the authors of the paper for addressing my questions and updating the paper. 
However, the concerns raised regarding the weaknesses of the paper still remain, and I will retain my rating for the paper.\"}", "{\"metareview\": \"This paper introduces a unified architecture that simultaneously tackles Speech Separation, Speaker Diarization, and ASR tasks, enhancing the performance of each task through a multitask framework. It addresses the practical need for a unified model in real-world ASR systems, which currently relies on a pipeline approach with multiple models. It demonstrates that solving different, orthogonal speech technology tasks together can be mutually beneficial, improving overall system performance. The paper is clearly written and easy to follow, making the complex concepts accessible to readers.\n\nHowever, the manuscript leverages existing techniques without making novel contributions to methodology, similar to previous works on multi-layer feature fusion and joint training. The proposed extension to cover all three tasks (Speech Separation, Speaker Diarization, and ASR) shows metric improvements but lacks significant innovation compared to previous works. The approach is only tested on a simulated dataset, lacking comprehensive evaluation on real-world datasets like AMI, ICSI, or LibriCSS. The authors overclaim SOTA performance without adequate benchmarking against more recent methods for speaker diarization, speech separation, and multi-speaker ASR. The model's performance is only reported for two-speaker simulated overlapped speech, with real-world scenarios and more speakers remaining untested.\", \"additional_comments_on_reviewer_discussion\": \"Although the authors addressed the reviewers' concerns, the reviewers felt that their concerns were not fully resolved and maintained their original scores.\"}" ] }
5oSUgTzs8Y
KooNPro: A Variance-Aware Koopman Probabilistic Model Enhanced by Neural Process for Time Series Forecasting
[ "Ronghua Zheng", "Hanru Bai", "Weiyang Ding" ]
The probabilistic forecasting of time series is a well-recognized challenge, particularly in disentangling correlations among interacting time series and addressing the complexities of distribution modeling. By treating time series as temporal dynamics, we introduce **KooNPro**, a novel probabilistic time series forecasting model that combines variance-aware deep **Koo**pman model with **N**eural **Pro**cess. KooNPro introduces a variance-aware continuous spectrum using Gaussian distributions to capture complex temporal dynamics with improved stability. It further integrates the Neural Process to capture fine dynamics, enabling enhanced dynamics capture and prediction. Extensive experiments on nine real-world datasets demonstrate that KooNPro consistently outperforms state-of-the-art baselines. Ablation studies highlight the importance of the Neural Process component and explore the impact of key hyperparameters. Overall, KooNPro presents a promising novel approach for probabilistic time series forecasting.
[ "Probabilistic time series prediction; Neural Process; Deep Koopman model" ]
Accept (Poster)
https://openreview.net/pdf?id=5oSUgTzs8Y
https://openreview.net/forum?id=5oSUgTzs8Y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nPpS4h8N5q", "jvatwC03e1", "jqTWuGq5MS", "gst2F6mkGI", "bnVRvrWXRk", "aNPQYHh7OQ", "Z9lIMUVImE", "Yr17z5Z5a4", "Yh15l9hIq9", "RXyB6XlheR", "P8u56CDLfE", "OZyn5ZGEMu", "N1kEMK97hk", "M172snzEgE", "JhMBxWigID", "HpC7jVw8Qi", "DxN73DS4yB", "C0LHL3D1ht", "Blcqdm0tmJ", "8NDdrYeW9Z", "7KjoLhPDAB", "5ciyra6Krp" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732590160881, 1732457681539, 1732455159150, 1732457656451, 1730686947306, 1737523996975, 1730654103140, 1732820510526, 1733212633588, 1732457714660, 1732549363528, 1732488529295, 1732454617932, 1731326624367, 1732588653814, 1732458260792, 1730658406337, 1732456137513, 1732500103089, 1730393549511, 1733212935428, 1734084374317 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_piba" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_VVQE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_piba" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_MwLy" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_xASk" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_ahsz" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_ahsz" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_xASk" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Reviewer_MwLy" ], [ "ICLR.cc/2025/Conference/Submission9644/Authors" ], [ "ICLR.cc/2025/Conference/Submission9644/Area_Chair_Wz4Q" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response. My concerns have been addressed. I decide to maintain my current score.\"}", "{\"comment\": \"Thank you for these insightful comments.\\n***\\n**[W1]** *Questions about challenges in implementation, and the computational complexity.*\\n\\n**[A]** With advancements in probabilistic deep neural networks, such as variational autoencoders and diffusion models, the calculation of the Evidence Lower Bound (ELBO) has become both stable and efficient. Moreover, KooNPro integrates lightweight MLP architectures, which reduce computational overhead compared to transformer-based and diffusion-based methods.\\n***\\n**[W2]** *Questions about the design of the encoder and decoder networks.*\\n\\n**[A]** That's an insightful question. We have also considered employing different encoder-decoder architectures tailored to specific scenarios. For example, CNNs may be more suitable for image-related tasks, transformers for natural language processing, and GNNs for graph-structured data. Adapting the architecture to the nature of the data ensures better performance and efficiency in diverse applications.\\n***\\n**[Q1]** *Questions about the model's ability to capture complex dynamics and the validity of the linearity assumption in formula (6).*\\n\\n**[A]** The proposed probabilistic deep Koopman model captures complex temporal dynamics via the Koopman operator framework, which allows for a linear representation of nonlinear dynamical systems in a latent space. 
At its core, the Koopman operator provides a way to track the evolution of observables (functions of the state variables) over time, even in systems with highly nonlinear behavior.\\nIn our approach, we extend this framework using deep learning techniques to approximate the Koopman operator in a probabilistic manner. The probabilistic aspect allows the model to capture the inherent uncertainty and variability in the system's dynamics. Rather than assuming a rigid, deterministic evolution, the model learns a distribution over possible future states, which helps capture the stochastic nature of complex temporal dynamics that may arise due to noise, unobserved variables, or inherently uncertain processes. \\n\\nAdditionally, the probabilistic formulation provides a way to quantify the uncertainty in the predictions, which is important when dealing with real-world systems where uncertainty is an inherent characteristic. This not only helps in making more accurate forecasts but also provides a framework for analyzing and understanding the system's temporal behavior under varying conditions.\\nThe assumption in formula (6)\\u2014that the latent space possesses linear characteristics\\u2014is grounded in the principles of Koopman theory, which links nonlinear dynamical systems to linear representations in an augmented (or extended) space of observables. In Koopman theory, the original nonlinear system is mapped to an infinite-dimensional space where the dynamics are represented by a linear operator acting on the space of observables. This mapping allows us to model the evolution of nonlinear systems using linear techniques, despite the underlying system being nonlinear. 
More specifically, the \\\"latent space\\\" referred to in our formulation can be understood as a discretization of the observable space, where the dynamics of the system are approximated by a set of chosen observables.\\n\\nThus, the validity of the assumption is justified by Koopman theory, which provides a theoretical foundation for understanding the behavior of nonlinear systems in a hidden linear space.\"}", "{\"comment\": \"Thank you for your valuable feedback.\\n***\\n**[W1 \\\\& W2]** *Model clarity and component motivation.*\\n\\n**[A]** We have modified the first paragraphs of Sections 4.1 and 4.2 in the manuscript to clarify the advantage of such a design that considering both the local and global dynamics helps achieve better predictions. The lines at the beginning of Sections 4.1 and 4.2 in the modified manuscript are,\\n\\n\\\"The proposed model first identifies the distribution of the latent variable $S \\\\in\\\\mathbb{R}_s$ in Eq.1 through an embedding $\\\\boldsymbol \\\\tau $ which is presented in the left part of Fig.1. The latent variable integrates the underlying dynamics present in the time series, which allows the model to be more reactive to global features of the time series ...\\\"\\n\\n\\\"The probabilistic deep Koopman model shown in the gray block of Fig.1 concentrates on explaining local characteristics of time series, namely intricate temporal dynamics, with a variance-aware continuous spectrum ...\\\"\\n***\\n**[W3]** *Comparison with other Koopman-based methods.*\\n\\n**[A]** We have updated the introduction to explicitly distinguish KooNPro from existing Koopman-based and probabilistic time series models in lines 053 to 061. 
Specifically, we have detailed the limitations of current methods, such as spectrum pollution and fixed state transition assumptions, and how KooNPro addresses these issues through variance-aware continuous spectrum modeling and integration with Neural Processes.\\n***\\n**[W4]** *Other probabilistic forecasting methods.*\\n\\n**[A]** Actually, in our experiments, all of our comparison methods are multivariate probabilistic forecasting models, many of which explicitly or implicitly model latent factors in time series. We have thoroughly discussed these methods in the Related Work section and demonstrated through experiments that KooNPro outperforms these probabilistic forecasting models across multiple datasets. We have included a detailed introduction of the comparison methods in Appendix I.\\n\\nAmong the methods, we did not directly compare our method with state-space models, but we emphasized that GP-Copula and TimeGrad can be conceptually related to state-space models (SSMs). GP-Copula can capture dependencies in time series in a way similar to how SSMs use latent states to represent system dynamics. Gaussian Processes (GPs) can be used to approximate latent dynamics, akin to the latent state modeling in SSMs. TimeGrad leverages diffusion processes, which conceptually align with continuous-time state-space models. Diffusion models often involve latent representations of system states evolving over time, similar to SSMs.\\n\\nDeep State Space Models excel at modeling local latent transitions, while KooNPro uses Neural Processes and Koopman operator for global dynamic representation and fine-grained local dynamics to provide a more comprehensive modeling framework.\\n***\\n**[Q1]** *The utility of the neural process latent variable $S$ at a high level.*\\n\\n**[A]** As you mentioned, $S$ is indeed an $s$-dimensional latent variable that encodes the entire time series into a subspace, but it doesn't have to be low-dimensional. 
At a high level, $S$ captures time-invariant features that represent the global characteristics of the time series. $S$ encapsulates the shared structure and underlying patterns across the entire time series, which at a high level, allows it to effectively model the inter-correlations between different time series.\\n\\nTherefore, $S$ can indeed be thought of as an encoding of the whole time series, not only summarizing individual series but also capturing the relationships and correlations between them implicitly.\\n***\\n**[Q2]** *The advantage of the proposed approach over methods that model time series as state space models.*\\n\\n**[A]** You are correct that many state space models, such as those cited, involve low-dimensional latent decomposition. Similarly, our approach leverages this concept by extracting global time-invariant features and local time-varying dynamics, with distinct advantages brought by our Koopman-based modeling.\\n\\nWhile state space models excel in certain scenarios, their assumptions about linear or fixed dynamics can limit their capacity to handle complex systems. KooNPro's use of Koopman operators for global time-invariant feature extraction, coupled with variance-aware modeling and integration of Neural Processes for local dynamics, offers a more robust and flexible framework for probabilistic time series forecasting.\\n\\nMethods, such as Deep Factors, Deep State Space Models, and Hierarchically Regularized Forecasting, face limitations in handling nonlinear and non-stationary dynamics due to their reliance on linear assumptions, stationary transitions, or hierarchical priors. \\n\\nWe have added a comparative discussion with these three papers in the introduction section in lines 046-050.\"}", "{\"comment\": [\"We have updated the manuscript and highlighted the changes in green. Several supplementary experiments are reported in the appendix, with the titles also highlighted in green. 
We provide the catalog as follows:\", \"**Appendix E**: Forecast fractional steps data.\", \"**Appendix F**: Substitution of Neural Process by Attention Neural Process and Gaussian Process.\", \"**Appendix G**: Long-term prediction.\", \"**Appendix H**: Robustness of performance.\", \"**Appendix I**: The number of parameters of the learned model.\", \"**Appendix J**: Comparison with DeepAR/MQ-CNN.\"]}", "{\"summary\": \"A method for multi-variate probabilistic time series forecasting is presented which combines the methods for temporal dynamics modeling using a deep Koopman model with Neural Processes. The model consists of an encoder computing the hidden state h_t, a learned Koopman operator 'A' predicting the future hidden states, and a decoder used to predict the final values using the hidden state h_t. In addition, a neural process is used to model the latent dynamics S of the time series, which is used as an input to the Koopman operator to compute the future states.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Strengths\", \"This paper proposes a new way for probabilistic modeling of multi-variate time series by representing the time series in a low dimensional space using neural processes and using them to model the temporal dynamics of the time series.\", \"The idea presented in the paper is an intuitive approach for multi-variate forecasting. Modeling the latent dynamics of the time series is an important aspect that is missed by most state-of-the-art modeling approaches. As such, the method presented in this paper is interesting to the time series community.\", \"The paper presents extensive experimental evaluation and ablation studies demonstrating the effectiveness of the approach.\"], \"weaknesses\": [\"Weaknesses\", \"The description of the model is quite cryptic and not written in a clear manner.
See questions below for parts that are unclear.\", \"While the paper describes everything using mathematical equations, the utility or advantage of such equations/models is unclear. How does each of the components of the model help with a better forecast? (I am looking for a high level motivation for each of the components of the model)\", \"Several works have introduced Koopman theory into time series modeling; however, there is no clear discussion of existing methods and their comparison with this work. The novelty of the paper is unclear without a clear distinction from existing work.\", \"Other probabilistic forecasting methods modeling latent factors in time series have not been discussed or compared (for example deep state space models).\"], \"questions\": [\"What is the utility of the neural process latent variable S at a high level? Does it help model the correlations between the different time series? Can it be thought of as an encoding of each time series which also models the inter-correlations between the time series?\", \"It seems that S is an s-dimensional latent variable meaning that it encodes the whole time series in a low dimensional subspace. Many papers have modeled time series as state space models involving low-dimensional latent decomposition. What is the advantage of the proposed approach over these approaches?\"], \"example_papers_modeling_time_series_into_low_dimensional_sub_spaces\": [\"Wang, Yuyang, et al. \\\"Deep factors for forecasting.\\\" International conference on machine learning. PMLR, 2019.\", \"Rangapuram, Syama Sundar, et al. \\\"Deep state space models for time series forecasting.\\\" Advances in neural information processing systems 31 (2018).\", \"Paria, Biswajit, et al. \\\"Hierarchically regularized deep forecasting.\\\" arXiv preprint arXiv:2106.07630 (2021).\", \"What is the utility of the delay embedding dimension k?\", \"Do context and target sets refer to training and testing time periods?
Can the context and target sets be interspersed, meaning can this method be used to predict missing values of irregularly sampled time series?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper presents KooNPro, a novel approach for probabilistic time series forecasting that integrates a variance-aware deep Koopman model with Neural Processes (NPs). By employing a variance-aware continuous spectrum modeled with Gaussian distributions, KooNPro effectively captures complex temporal dynamics with enhanced stability. It leverages NPs to capture global patterns across time series, improving prediction accuracy. Extensive evaluations on nine real-world datasets show that KooNPro outperforms state-of-the-art models, with ablation studies confirming the importance of its components and hyperparameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed variance-aware continuous spectrum modeling addresses the complex nonlinear temporal dynamics, which is novel and has a solid theoretical foundation.\n2. The empirical improvements are significant, outperforming other methods in complex, high-dimensional forecasting tasks.\n3. The ablation study demonstrates the effectiveness of Neural Process and helps in understanding the role of this key component. The case study shows that KooNPro's predictive capability aligns well with real-world observations, capturing diurnal patterns and demonstrating reliable predictive intervals.\", \"weaknesses\": \"1. The integration of Neural Processes and the probabilistic deep Koopman model requires a sophisticated training approach, including variational inference for optimizing the ELBO. This could pose challenges in implementation, and could increase the computational complexity.\n2.
The effectiveness of KooNPro relies heavily on the design of the encoder and decoder networks for the linear space transformation. My concern is that poorly designed architectures could lead to suboptimal representations and thus hinder the model\u2019s ability to learn the underlying temporal dynamics accurately.\", \"questions\": \"Can you explain more intuitively why complex temporal dynamics can be captured using the proposed probabilistic deep Koopman model? In addition, I'm curious about the validity of the assumptions that formula (6) relies on: \\\"hypothesize the latent space created by $\\\\phi$ possesses linear characteristics\\\", since formula (6) seems to be an important foundation of the subsequent derivations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the clarifications. I acknowledge the authors' response.\n\nThe manuscript is much better but my broader perception of the novelty of the paper still remains; I keep my score.\"}", "{\"comment\": \"Thank you for your feedback. We appreciate your time and efforts in reviewing our manuscript.\"}", "{\"comment\": \"Thank you for your insightful feedback and thought-provoking questions.\n***\n**[W1]** *Improvement of Presentation.*\n\n**[A]** We have modified the first paragraphs of Sections 4.1 and 4.2 in the manuscript to strengthen the connection between the details and the overall framework shown in Fig.1. Additionally, links to Fig.1 are added in the first paragraph of Section 4 to clarify the explanation of the framework. These modifications in the manuscript now read:\n\nIn Section 4, \\\"... Initially, NP captures the discrete spectrum of dynamics governing the entire time series which is shown by the downward arrows in Fig.1.
Additionally, inspired by the concept of pseudospectra, we utilize the probabilistic deep Koopman model to refine these dynamics, obtaining a variance-aware continuous spectrum for prediction which is demonstrated by the shadowed box in Fig.1.\\\"\n\nIn Section 4.1, \\\"The proposed model first identifies the distribution of the latent variable $S \\\\in\\\\mathbb{R}_s$ in Eq.1 through an embedding $\\\\boldsymbol \\\\tau $ which is presented in the left part of Fig.1. The latent variable integrates the underlying dynamics present in the time series, which allows the model to be more reactive to global features of the time series ...\\\"\n\nIn Section 4.2, \\\"The probabilistic deep Koopman model shown in the gray block of Fig.1 concentrates on explaining local characteristics of time series, namely intricate temporal dynamics, with a variance-aware continuous spectrum ...\\\"\n***\n**[W2]** *Better analysis of each component of the model.*\n\n**[A]** We substitute Neural Process with Attention Neural Process and Gaussian Process and test their predictive results in Appendix E.\n***\n**[W3]** *Performance on non-stationary time series or time series tasks with a much larger prediction length.*\n\n**[A]** The KPSS test (Kwiatkowski\u2013Phillips\u2013Schmidt\u2013Shin test) is a classical statistical method used to determine whether a time series is stationary. We apply the KPSS test to each dataset to assess its stationarity. The results reveal that ETTs are stationary time series, while solar, electricity, traffic, taxi, and cup are non-stationary. The superior performance of KooNPro across these five non-stationary time series demonstrates its effectiveness in handling non-stationary scenarios.
It can be found in Appendix G.\n\nWe compare each model at longer prediction lengths; the details of each dataset and the results are reported in Appendix F.\n***\n**[W4]** *Robustness of performance.*\n\n**[A]** To evaluate the robustness of KooNPro's predictive performance, we test it under varying Signal-to-Noise Ratio (SNR) conditions (20dB/40dB/60dB). During training, KooNPro is provided with ground truth data. At the testing stage, Gaussian noise is added to the input data, and the predictions are compared against the ground truth. As presented in Tab. 8, the performance degradation with decreasing SNR remains within acceptable limits, demonstrating KooNPro's robust predictive capability across varying noise levels. It can be found in Appendix H.\n***\n**[W5]** *Details on training baselines.*\n\n**[A]** We train the baselines using the open-source code reported in the corresponding papers and follow the default settings. It can be found in Appendix J.\n***\n**[Q1]** *Definition of spectral pollution.*\n\n**[A]** We apologize for the lack of clarity in explaining the term \u201cspectral pollution.\u201d In the context of Koopman operators, \u201cspectral pollution\u201d refers to the phenomenon where the computed eigenvalues (or spectra) of an approximation to the Koopman operator contain spurious or extraneous components. These unwanted components can arise due to numerical inaccuracies, poor choice of observables, or insufficient resolution in the discretization of the state space. These spurious eigenvalues can distort the true dynamics of the system and make it harder to extract meaningful insights from the system\u2019s behavior.\n\nIn simpler terms, when applying techniques like Dynamic Mode Decomposition (DMD) or other spectral methods to approximate the Koopman operator, we expect to recover the true modes and eigenvalues that describe the underlying dynamics of the system.
However, in practice, numerical methods can introduce errors that lead to additional, unphysical modes or eigenvalues that do not correspond to the true dynamics. This \u201cpollution\u201d of the spectrum can complicate the interpretation of the system\u2019s modes and can reduce the accuracy of predictions or the stability of the model.\n***\n**[Q2]** *Long-term Prediction.*\n\n**[A]** The choice of context length is informed by the ablation study results, as presented in Figure 3 and detailed in the Appendix ablation study. The study indicates that increasing the context length does not necessarily improve prediction performance, highlighting the importance of selecting an optimal context length for effective modeling.\"}", "{\"comment\": \"I would like to thank the authors for their detailed rebuttal and adding more experiments including the comparison to DeepAR and MQ-CNN and the ablations with the GP and ANP. I will raise my score.\"}", "{\"title\": \"Concerns addressed convincingly\", \"comment\": \"Thank you for your insightful and clear responses!\", \"i_have_some_comments\": [\"**Re [W1]** (formatting): I would advise using notation like $10^3$ for exponents in the camera-ready version for less ambiguity.\", \"**Re [Q1]**: I was and am aware of the definition and benefits of the selected metrics. Yet, MAE and MSE are commonly used for point forecasts, which are a subset of what KooNPro can provide. Providing them, in addition, would still greatly help judge the performance of KooNPro in the standard benchmark setting.\", \"**Re [Q3]**: I still think the variance is off by a significant amount (the mean is spot on, as you pointed out).
However, this is also a fairly challenging task to solve and can indeed be deferred to future work.\", \"**Re [Q5]**: This is a very special feature for forecasting models, and I am happy this has improved the work.\"]}", "{\"comment\": \"We sincerely thank the reviewer for the constructive feedback, which has greatly helped us refine and improve our work.\\n***\\n**[W1]** *Formatting, notation, language, section naming, and figure presentation.*\\n\\n**[A]** We have addressed the issues related to formatting, notation, language, section naming, and figure presentation in the revised manuscript. Additionally, $\\ud835\\udc52$ refers to \\\"scientific notation,\\\" commonly used in programming languages. There is no overlap between the training and test sets.\\n***\\n**[W2]** *Definition of target and context sets and the splitting of $\\ud835\\udc67$.*\\n\\n**[A]** This method forms the core idea of the Neural Process (NP) introduced by [1], aiming to enhance generalization when handling unseen data. The context can be viewed as a prior that improves generalization. The splitting of $\\ud835\\udc67$ separates the training dataset into context and target sets, a method that is independent of point or distribution estimation. Previous studies, such as [2] and [3], have employed variants of NP for predicting climate change through point estimation. \\n***\\n**[Q1]** *Provide additional metrics, contextualize findings, and compare with probabilistic forecasting approaches.*\\n\\n**[A]** Utilizing the Monte Carlo method to sample the predictive distribution offers a valuable approach for evaluating predictive performance. To assess predictions comprehensively, we consider the two metrics (CRPS/NRMSE) that strongly relate to MSE/MAE. NRMSE measures the normalized root mean square error between the ground truth and the predicted means. 
CRPS, a generalization of MAE, quantifies the error between the predictive probability density function (PDF) and a step function at the truth value. In this context, MAE can be viewed as a special case of CRPS, where the PDF is replaced by a step function at the predicted value.\n***\n**[Q2]** *Run multiple times.*\n\n**[A]** To ensure performance stability, we conduct multiple runs, averaging results over 10 independent runs. Each table presents the mean and standard deviation of the performance metrics, while Figure 3 includes error bars representing the standard deviation, also estimated from the 10 independent runs.\n***\n**[Q3]** *Variance modeling accuracy, outliers in dimensions (\\\\#3, \\\\#6), overestimated nightly variance in the Solar dataset, and uniform variance concerns.*\n\n**[A]** We acknowledge that KooNPro does not perform optimally in capturing the peaks of certain dimensions, likely due to the large range of peak values (spanning from 100+ to 400+). However, the key takeaway from Figure 4 is KooNPro's ability to learn the temporal dynamics of the data. Specifically, when values transition from the bottom to the peak and reverse from the peak to the bottom, KooNPro demonstrates low error and variance. From a data processing perspective, the data is characterized by continuous peaks and troughs. Therefore, we believe that the higher variance observed in both the peaks and troughs is reasonable. We believe that modeling diverse probabilistic behaviors with a single model is challenging. This represents an important direction for exploration, and we plan to enhance KooNPro to address this aspect in future work.\n***\n**[Q4]** *The number of parameters.*\n\n**[A]** We report the number of parameters of each learned model in the table in Appendix I.\n***\n**[Q5]** *Forecast fractional steps.*\n\n**[A]** It is a great idea.
We evaluate KooNPro on functional Magnetic Resonance Imaging (fMRI) data with a temporal resolution of 0.72 seconds, where it continues to demonstrate exceptional predictive capability. More interestingly, the prediction variance provides insights into the fluctuations of different brain regions. For instance, the left and right cortices exhibit similar properties, while both are markedly distinct from the subcortical regions. The visualization of predicted results and variance of different brain areas is shown in Appendix E.\\n***\\nReference\\n\\n[1] Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S. M. Ali Eslami. Conditional Neural Processes. In Proceedings of the 35th International Conference on Machine Learning, pp. 1704\\u20131713. PMLR, July 2018a.\\n\\n[2] Andrew Foong, Wessel Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, and Richard Turner. Meta-learning stationary stochastic process prediction with convolutional neural processes. Advances in Neural Information Processing Systems, 33:8284\\u20138295, 2020.\\n\\n[3] Wessel P Bruinsma, Stratis Markou, James Requeima, Andrew YK Foong, Tom R Andersson, Anna Vaughan, Anthony Buonomo, J Scott Hosking, and Richard E Turner. Autoregressive conditional neural processes. 
arXiv preprint arXiv:2303.14468, 2023.\"}", "{\"summary\": [\"KooNPro is a novel time series forecasting method based on Koopman theory.\", \"It effectively estimates the process dynamics in a latent space where state transitions are linear.\", \"It eventually provides context-dependent variance estimates.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Time series tasks, especially probabilistic forecasting, are relevant in many domains and far from being solved.\", \"The method is presented well and provides many benefits (variance estimation, flexible forecast horizon, nice theory, and practical performance).\", \"I am not aware of these ideas being presented before.\"], \"weaknesses\": [\"1. See Questions below.\", \"2. The definition of target and context sets (ll. 153ff) and the splitting of $z$ (ll. 208ff) were not familiar to me, as they are not common in point forecasting. I would have personally benefitted from more exposition.\", \"#### Minor Comments\", \"References should not be used as nouns (e.g., ll. 037f, 049f).\", \"Notation: Eq. (2)/(5) is missing a $p$ on the left-hand side and has odd typesetting of the expectation subscript. Some parens/brackets should be adjusted in height, e.g., in Eq. (3) and (17). In l. 219, $x_C$ and $y_C$ are not bold. L. 473: is the $e$ referring to \\\"scientific notation\\\" common in programming languages (in base-10) or Euler's number?\", \"One could rename Sec. 5 to Experiment**s**.\", \"Re. l. 306: Are the conditioning lookbacks of the test and the forecast horizon of validation (etc.) overlapping or not?\", \"Language: The sentence in l. 821f is odd.\", \"The combination in Fig. 3 is clever. 
However, given that two separate plots would easily fit side-by-side, it is not worth the extra time needed to decipher the legend/axis labels.\", \"The abstract in OpenReview should be formatted with Markdown syntax, instead of LaTeX.\"], \"questions\": \"Note: The most important questions are listed first.\\n\\n1. Regarding Sec. 5.2: The presented results are very impressive. However, a large chunk of the time series forecasting literature performs point estimation instead of full distribution modeling and, therefore, commonly evaluates using MAE/MSE (see, for instance, the overview [here](https://github.com/thuml/Time-Series-Library)). Additionally providing these metrics (possibly in the appendix) would help contextualize the findings in the broader body of work. These methods can also provide probabilistic forecasts, e.g., by learning an output that parameterizes a simple distribution to be sampled from or by methods such as Monte Carlo dropout.\\n2. Does the method need to be run multiple times to obtain the variance estimation samples?\\n3. The case study in 5.4 is interesting, yet it makes me question the ability of KooNPro to model the variance of the data truthfully. While I agree that many dimensions are modeled appropriately, the results indicate serious limitations of the method.\\n\\t1. Dimensions #3 and #6 show values far outside the 90% interval. Are they within something like the 99% interval since the model learns a possibly appropriate long tail of the distribution? Or does it show some of the method's limitations (which would be fine if acknowledged as such)?\\n\\t2. Ll. 446f explain the data characteristics of the Solar dataset. However, the model fails to appropriately model the very low nightly variance of the power production around zero and instead shows a significant variance estimate at, for instance, midnight. Why does that occur?\\n\\t3. 
Combining these two observations, the overall variance estimate appears rather *uniform* and not well-adapted to the dataset.\\n4. How large are the learned models measured in the number of parameters?\\n5. Could the method be straightforwardly used to forecast fractional steps into the future?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful feedback and recognition of our efforts. We are glad that the additional experiments, including the comparisons and ablations, addressed your concerns. Your constructive suggestions have been invaluable in improving our work, and we greatly appreciate your positive evaluation and raised score.\"}", "{\"comment\": \"Thank you for your thoughtful and detailed feedback. Your comments have greatly helped us improve the clarity and depth of our paper.\\n***\\n**[W1-W10 & W13]** *Improvement suggestions.*\\n\\n**[A]** We have revised the introduction to include more background on probabilistic forecasting and its applications, cited the recommended works, and improved the overall writing quality.\\n\\nThe primary reason for choosing Neural Processes (NPs) as the probabilistic modeling framework is their ability to learn global latent representations through an encoder-decoder structure, which is essential for our variance-aware continuous spectrum modeling. In contrast, diffusion models, while powerful for generating probabilistic forecasts, are not inherently designed to function as encoders that can extract such latent representations.\\n***\\n**[W11]** *Novelty.*\\n\\n**[A]** The motivation for this work stems from capturing different granularities of temporal dynamics. The dot spectrum provides a method to capture the stable characteristics of a time series. To leverage this, we use NP to learn a dot spectrum embedding that governs the entire series. 
However, when facing unstable temporal dynamics, this method may collapse. Thus, we employ an auxiliary neural network to manage unstable dynamics, enabling a more detailed representation of temporal dynamics through the continuous spectrum. However, numerical issues often lead to spectrum pollution in the continuous spectrum. To address this, we adopt a variance-aware approach to learn pseudospectra, offering improved robustness and performance. To some extent, our approach aligns with [1], which analyzes time series at multiple granularities. However, we focus on treating time series from a dynamics-based perspective.\\n***\\n**[W12]** *Multivariate time series forecasting problem definition.*\\n\\n**[A]** A multivariate time series forecasting problem predicts future values of multiple variables from past observations. The input is a time series matrix of shape $[T, N]$ ($T$: time steps, $N$: variables), aiming to forecast $H$ future steps while capturing temporal patterns and variable dependencies.\\n***\\n**[W13]** *Comparisons to other baselines.*\\n\\n**[A]** We compare with DeepAR and MQ-CNN and show the predictive results in Appendix I. As for MQTransformer, we regret that we have not found an open-source implementation.\\n***\\n**[W14]** *The case study in Dimensions 6 and 8.*\\n\\n**[A]** We acknowledge that KooNPro does not perform optimally in capturing the peaks of certain dimensions, likely due to the large range of peak values (spanning from 100+ to 400+). However, the key takeaway from Figure 4 is KooNPro's ability to learn the temporal dynamics of the data. Specifically, when values transition from the bottom to the peak and reverse from the peak to the bottom, KooNPro demonstrates low error and variance. From a data processing perspective, the data is characterized by continuous peaks and troughs. Therefore, we believe that the higher variance observed in both the peaks and troughs is reasonable. 
We believe that modeling diverse probabilistic behaviors with a single model is challenging. This represents an important direction for exploration, and we plan to enhance KooNPro to address this aspect in future work.\\n***\\n**[Q1]** *Generalize past Gaussian distributions to modeling arbitrary distributions.*\\n\\n**[A]** NPs utilize Gaussian distributions as their foundational building block. Their capacity to model arbitrary distributions stems from the flexibility of deep neural networks. This flexibility is further enhanced during the training procedure, which can be summarized as follows:\\n- **Individual distributions** are Gaussian, yet they can represent different means and variances for different inputs, offering adaptability to data.\\n- **The splitting of context and target sets** and the learning to aggregate the context adaptively serve as a prior for the prediction at new data points.\\n- **The combination of the latent variable** $S$ **and the observed data** allows the model to capture complex dependencies, effectively mimicking arbitrary distributions.\\n***\\n**[Q2 & Q3]** *Test with ANP and GP.*\\n\\n**[A]** We substitute the Neural Process with the Attentive Neural Process and the Gaussian Process and test their predictive results in Appendix E.\\n***\\n**[Q4]** *Performance of TimeGrad on the electricity dataset.*\\n\\n**[A]** The properties of the electricity dataset\\u2014high dimensionality, short temporal intervals, heterogeneity, and multivariate dependencies\\u2014are well-suited to TimeGrad's design, which combines autoregressive RNNs with diffusion probabilistic models. In Appendix G, we test the predictive performance of each model. The results show that, in the longer-term prediction case, our model achieves better performance than TimeGrad.\\n***\\nReference\\n\\n[1] Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, and Jiang Bian. 
MG-TSD:\\nMulti-Granularity Time Series Diffusion Models with Guided Learning Process, March 2024.\"}", "{\"summary\": \"The authors study the practical problem of probabilistic time series forecasting and address the challenges. They propose KooNPro, which combines two methods, i.e., the Koopman model and Neural Processes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Addresses important problem of probabilistic time series forecasting\", \"Interesting idea to use the latent representations from the Neural Processes, typically used in computer vision\", \"Extensive experiments on 9 time series datasets\", \"Multivariate forecasting problem is challenging\", \"Good selection of multivariate probabilistic baselines to compare to including GP-Copula\", \"Good probabilistic metric CRPS is used in the benchmarking\", \"KooNPro shows state-of-the-art performance on 8/9 datasets\"], \"weaknesses\": [\"I think more background on the importance of probabilistic forecasting and its use in practical downstream tasks, e.g., supply chain, in the introduction would be helpful\", \"Overall the writing quality could be improved.\", \"When discussing LSTMs in the introduction, DeepAR should be cited\", \"For probabilistic diffusion models, Kollovieh et al., \\\"Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting\\\", NeurIPS 2024 should also be cited.\", \"There has also been an abundance of foundation models for time series forecasting, e.g., Ansari et 
al., \\\"Chronos: Learning the language of time series\\\", 2024 that is not discussed in the introduction.\", \"DMD is more typically used in the dynamical systems and scientific computing communities\", \"The motivation for choosing Neural Processes as the probabilistic models rather than more current state-of-the-art probabilistic models, i.e., diffusion models is not clear.\", \"For application of NPs to spatio-temporal time series (PDEs), see Hansen et al., \\\"Learning Physical Models that Can Respect Conservation Laws\\\", ICML, 2023\", \"Some of the background in Section 3 could be moved to an appendix to allow for more novelty of the method presentation and results in the main body.\", \"The architecture in Figure 1 is also taking a large amount of space.\", \"I have some concerns on the novelty since the proposed method is just combining two previously proposed approaches.\", \"May be good to include a multivariate time series forecasting problem definition.\", \"Comparisons to other baselines, e.g., DeepAR (Salinas et al., 2019) and MQ-CNN (Wen et al., https://arxiv.org/abs/1711.11053, https://proceedings.mlr.press/v151/park22a/park22a.pdf), MQTransformer (https://arxiv.org/pdf/2009.14799, ) are missing, which could be run on electricity and traffic\", \"Bold the best in Table 2 or use the same convention as Table 1 with color in red, but I think bolding would be best for both.\", \"The method seems off in the case study in Dimensions 6 and 8\"], \"questions\": \"1. How can the method generalize past Gaussian distributions to modeling arbitrary distributions?\\n2. Have the authors tested with the Attentive Neural Process (ANP), Kim et al., 2019, which shows better performance than Neural Processes?\\n3. Could the model also be run with Gaussian Processes? I think a comparison to the simpler GP would be nice to add to motivate the benefit of the NP.\\n4. 
What is it about the electricity dataset that makes TimeGrad show improved performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**[Q3]** *The utility of $k$.*\\n\\n**[A]** The utility of the delay embedding dimension $k$ is grounded in Takens' embedding theorem, which suggests that a sufficient number of time-delayed coordinates can reconstruct the state space of a dynamical system. Specifically, Takens' embedding theorem states that, for a dynamical system with a smooth, differentiable flow, it is possible to reconstruct the system\\u2019s attractor in a higher-dimensional space by using time-delayed observations of the system\\u2019s state. This reconstruction allows us to analyze and understand the dynamics of the system more effectively, especially in cases where the underlying dynamics are not directly observable.\\nIn the context of Dynamic Mode Decomposition (DMD), delay embedding plays a critical role when the spectral complexity of the system exceeds its spatial complexity. DMD is inherently a spectral method that decomposes the observed data into modes corresponding to different temporal frequencies. However, when the system exhibits complex dynamics with temporal interactions that are not immediately obvious in the spatial data, delay embedding helps by embedding the time series into a higher-dimensional space. 
This extension allows DMD to capture temporal dependencies that might otherwise be overlooked, enhancing its ability to model and predict the behavior of systems with higher-order interactions.\\n\\nTherefore, the delay embedding dimension $k$ serves as a critical parameter in handling cases where temporal dynamics are more complex than spatial ones, providing a more robust framework for capturing the system\\u2019s underlying modes and improving the accuracy of dynamic predictions.\\n***\\n**[Q4]** *Context and target sets.*\\n\\n**[A]** The training stage involves both context and target sets, aiming to train an encoder capable of generating temporal dynamics embeddings that capture the governing patterns of the entire time series. During testing, the trained **${S}_{C}$** replaces **${S}_{D}$** used in the training phase. Typically, context and target sets are interspersed during training, as discussed in previous works such as [1]. NPs are predominantly applied to 1D regression and image completion tasks, both of which involve prediction at irregular input locations. While we use Neural Processes (NP) for prediction, we extend their application by integrating time series data with the spectrum of dynamics, rather than employing NP directly. We believe that extending NP to time series analysis is both a logical and meaningful direction for advancing its utility.\\n***\\nReference\\n\\n[1] Tuan Anh Le, Hyunjik Kim, Marta Garnelo, Dan Rosenbaum, Jonathan Schwarz, and Yee Whye Teh. Empirical evaluation of neural process objectives. In NeurIPS workshop on Bayesian Deep Learning, volume 4, 2018.\"}", "{\"comment\": \"Thank you for your insightful and constructive feedback! We greatly appreciate your recognition of KooNPro and your valuable suggestions. We will refine the formatting for clarity. 
Your encouragement of the model's unique features is highly motivating\\u2014thank you for helping us improve!\"}", "{\"summary\": \"The authors of the paper introduce a novel probabilistic time series forecasting model called KooNPro. KooNPro utilizes a variance-aware continuous spectrum, modeled using Gaussian distributions, to capture complex temporal dynamics and improve stability. By incorporating Neural Processes, the model captures fine dynamics and enhances global modeling capabilities. The authors evaluate KooNPro on nine real-world datasets and find that it consistently outperforms other state-of-the-art models in terms of accuracy and stability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Deep Koopman models had been used for time series forecasting before but not in conjunction with Neural Processes, hence the proposed model is non-trivial and new.\", \"The authors enhance their model with a variance-aware continuous spectrum, which to my knowledge has not been done before.\"], \"weaknesses\": [\"Clarity: The authors could improve how they present their work. The math is clunky and too detailed, and can cause the reader to lose attention. For example, in section 4, the authors refer to Fig. 1 only once (in line 169), so there is a disconnect between the math and the figure. Maybe explain the figure first and detail the model at a high level, then go into the inner workings and foundations. Also in the caption of Figure 1, consider adding pointers to different parts of the figure to help the reader understand where to look in the Figure for each reference in the caption.\", \"Better analysis of each component of the model: The authors can provide a more comprehensive and insightful analysis of the Neural Process contribution. 
They may investigate how different NP architectures (e.g., Attentive Neural Processes, Convolutional Conditional Neural Processes) affect KooNPro's performance.\", \"Limited empirical discussion on the exact benefits of this model: The authors can better demonstrate these benefits. For example, they can conduct experiments to specifically evaluate KooNPro's performance on, say, non-stationary time series or time series tasks with a much larger prediction length etc., comparing it to baselines that do not incorporate NPs. Compared to other models, if there were more convincing and broader experiments demonstrating the benefits of the proposed model, it would lead to wider adoption.\", \"More empirical evidence for robustness and stability: The paper claims robustness and stability as advantages of the proposed model, but these claims are not well empirically validated. For example, the authors can add experiments evaluating the model's performance under different noise levels\", \"Details on training baselines: Since the baselines were trained for each dataset, the authors should provide details on how the baselines were trained (how hyperparameter selection was done, what protocol was used etc.) in the appendix. It is important to check if the evaluation compares all models in a fair manner.\"], \"questions\": [\"It is not clear what the term \\\"spectral pollution\\\" means in line 143. Consider explaining it for a reader who may not know about Koopman operators.\", \"Why are all context lengths 10 in Table 4 (Page 15)? Why wasn't the model tried with different context lengths? This is also concerning as much larger context lengths are used in practice.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for acknowledging the improvements in our manuscript. 
We understand your concerns about the novelty and have emphasized our main contributions to address this.\\n\\nOur innovation lies in modeling **latent dynamics** by extending from discrete point spectra to continuous spectra and incorporating the effects of spectral pollution into a **variance-aware continuous spectral** framework. By capturing dynamic behaviors in the continuous spectrum and modeling its variance, we achieve a more precise description of time series. Additionally, we utilize auxiliary networks to enhance the model's performance.\\n\\nOur approach provides a new perspective for time series prediction, revealing the advantages of variance-aware continuous spectral modeling of underlying dynamics in latent space.\\n\\nWe appreciate your time and constructive feedback, which have been invaluable in refining our work.\"}", "{\"metareview\": \"The paper proposes KooNPro, a novel multivariate time series forecaster that is based on the conjunction of a variance-aware Koopman model with Neural Processes. This yields to improved forecasts with better stability, which is empirically corroborated across several real-world datasets. The reviewers all agree unanimously that the problem is relevant and challenging, they appreciate the intuitiveness of the approach, and find the empirical results compelling. Initial concerns revolved around a necessity to further improve the clarity of the method description, the fact that the method is a bit involved, further experiments that highlight benefits of the proposed method beyond just state-of-the-art performance, and additional beneficial descriptions. In the revision phase, the authors have made several inclusions to the manuscript, e.g. in six appendix sections, which has addressed the majority of the raised comments. 
As such, the majority of reviewers were satisfied and the AC agrees that the paper is above the threshold for ICLR acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers have raised several thoughtful comments, have asked for various clarifications and have provided a large number of pointers for inclusion of additional content to improve the paper. The authors have taken these comments into account carefully and have faithfully addressed the majority of comments through revision and additions to the appendix. In turn, reviewers were generally satisfied with the improvement and ultimately all reviewers agree unanimously that the paper is above the acceptance bar. For some reviewers it remains unclear to the AC why the score indicates marginal acceptance, even though all concerns seem to have been addressed, but the AC assumes that this may be on account of the novelty not being groundbreaking. At the same time reviewers acknowledge the challenge, importance and contributions to the problem at hand, so the AC believes the paper should get accepted.\"}" ] }
5oRB2Wgwtb
Online Bandit Nonlinear Control with Dynamic Batch Length and Adaptive Learning Rate
[ "Jihun Kim", "Javad Lavaei" ]
This paper is concerned with the online bandit nonlinear control, which aims to learn the best stabilizing controller from a pool of stabilizing and destabilizing controllers of unknown types for a given nonlinear dynamical system. We develop an algorithm, named Dynamic Batch length and Adaptive learning Rate (DBAR), and study its stability and regret. Unlike the existing Exp3 algorithm requiring an exponentially stabilizing controller, DBAR only needs a significantly weaker notion of controller stability, in which case substantial time may be required to certify the system stability. Dynamic batch length in DBAR effectively addresses this issue and enables the system to attain asymptotic stability, where the algorithm behaves as if there were no destabilizing controllers. Moreover, adaptive learning rate in DBAR only uses the state norm information to achieve a tight regret bound even when none of the stabilizing controllers in the pool are exponentially stabilizing.
[ "Online nonlinear control", "Bandits", "Dynamic batch length", "Adaptive learning rate" ]
Reject
https://openreview.net/pdf?id=5oRB2Wgwtb
https://openreview.net/forum?id=5oRB2Wgwtb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tVqNxBILKq", "tRjb68YpPj", "rmOzRzZEhY", "pQDhCUZ3yA", "lRUGp2To09", "hC1CRaYJZ7", "ePVzyD8MYN", "bkX7AHjwhI", "bZL8PpcPZr", "YJnWF7xePO", "UEt2AiacYs", "Pnl39Tq0Sz", "PG9gNdLSsG", "FUW2IieIbh", "FHUMwkkNpN", "EizsoUf7Eh", "Dvz1FRF11J", "BUJJsjAEBG", "AJ3VkRMqkp", "47TynNwOO5", "4493rZkYBi", "1hMqR2DsGK", "1XRfM3JI6V", "0q0Tirtvpd" ], "note_type": [ "decision", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523440748, 1729155494437, 1732226736783, 1729756498478, 1730314496916, 1731885326737, 1732141912859, 1732183496035, 1731886472102, 1731884977043, 1731887195891, 1732506455144, 1734544816255, 1732205844685, 1732321408115, 1731886497460, 1732226518355, 1732226437891, 1731885900689, 1731887360037, 1730696527773, 1732227452123, 1732233147492, 1732322717196 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_zi1A" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_7GNK" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_qZtK" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_7GNK" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_zi1A" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Area_Chair_8tCP" ], [ 
"ICLR.cc/2025/Conference/Submission1215/Reviewer_7GNK" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_qZtK" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Reviewer_BMap" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ], [ "ICLR.cc/2025/Conference/Submission1215/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper addresses the problem of online bandit nonlinear control, proposing a new algorithm called Dynamic Batch length and Adaptive learning Rate (DBAR). The algorithm in this paper is designed to handle a nonlinear dynamical system with a mix of stabilizing and destabilizing controllers, offering stability guarantees and regret bounds.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides rigorous theoretical analysis, including stability proofs and regret bounds, which demonstrate the effectiveness of the DBAR algorithm under weaker assumptions. The comparison with existing methods is clear and it strengthens the contribution.\\n2. The introduction of the DBAR algorithm relaxes the requirement for exponentially stabilizing controllers, allowing a broader range of controllers to stabilize the system. It also achieves better result for known and unknown $\\\\mathcal{U}$.\", \"weaknesses\": \"1. While the theoretical analysis is thorough, the experiments are somewhat limited. 
The paper focuses on low-dimensional systems (a 2D linear system and a simple nonlinear system), and it would be beneficial to demonstrate the algorithm's performance in more complex, high-dimensional scenarios to validate the practical applicability of DBAR.\\n2. The writing of this paper is not very clear, especially in Section 2 and Section 3. I suggest using more references, formulas, and illustrations to explain the rationale behind the assumptions and the intuition behind the algorithm design, rather than relying solely on extensive text descriptions.\", \"questions\": \"1. Could you include more experimental results on complex systems?\\n2. Could you explain the necessity of Assumption 2.5? You may refer to previous literature to support your explanation.\\n3. Could you clarify the proof ideas and intuition behind Theorems 4.5 and 4.6? The current explanation is quite brief, and it's difficult to grasp the core proof strategy as well as how the earlier algorithm design relates to this proof. I suggest expanding this section and moving some of the lemma proofs, such as Lemma 4.7, to the appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your explanations and revisions. I have updated my review accordingly.\"}", "{\"summary\": \"In this paper, the authors study the online bandit nonlinear control, which aims to learn the best stabilizing controller from a pool of stabilizing and destabilizing controllers of unknown types for a given nonlinear dynamical system. They develop an algorithm, named Dynamic Batch length and Adaptive learning Rate (DBAR), and study its stability and regret. DBAR achieves better regret bounds than Exp3 under even weaker assumptions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The problem of online bandit nonlinear control is well-motivated.\\n\\n2. 
This paper weakens the assumptions in previous works and derives a stronger regret bound. The proof looks correct.\\n\\n3. There are simulations to support the theoretical findings.\", \"weaknesses\": \"Main concerns:\\n\\n1. It seems to me that the results heavily depend on the assumption that $H(t)$ has a finite limit (Thm 4.2) or $H(t)\\\\leq O(\\\\log t)$ (Thm 4.6); this assumption does not seem to be very mild ($H(t)$ having a finite limit is far from the conclusion in Lem 4.3). What will the result be like without such assumptions? Or are there any further justifications of such an assumption?\\n\\n2. In the experiments, the claim 'While the simulations are on low-dimensional systems for illustration purposes, similar observations can be made for high-dimensional systems.' is made. Is this a conjecture, or is there any experimental evidence for this? It seems not hard to do experiments with higher dimensionality.\", \"some_questions_and_suggestions_for_writing\": \"1. The notation $|\\\\mathcal{U}|$ first appears in the table, while it is not defined until Def 2.6.\\n\\n2. The previous papers depend on 'exponential stability'. What is its definition and how is it stronger compared to 'asymptotic stability'?\\n\\n3. I did not quite understand the motivation of Lemma 4.3. This is too weak to support the assumption $\\\\lim_{t\\\\rightarrow\\\\infty}H(t)<\\\\infty$.\\n\\n4. Listing the steps to solve for $z,\\\\nu$ in Theorem 4.6 could help justify the results.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of online bandit non-linear control where both transition dynamics and cost functions are non-linear and adversarial, and we only receive bandit feedback on either of them. 
Removing the assumption on exponentially stabilizing controllers, this paper ensures $\\\\tilde{\\\\mathcal O}(T^{2/3})$ regret with the help of only asymptotically stabilizing controllers.\\n\\nTechnically, this paper uses geometrically increasing batch lengths to remove the requirement of exponentially stabilizing controllers, and further improves the $\\\\mathcal O(T^{1/3} e^{\\\\lvert \\\\mathcal U\\\\rvert})$ dependency to $\\\\mathcal O(T^{-1/3} e^{\\\\lvert \\\\mathcal U\\\\rvert})$ via an adaptive learning rate tuning scheme.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The removal of exponentially stabilizing policies looks pretty good.\\n2. The resulting regret is satisfactory.\\n3. The algorithms are stated and motivated clearly.\\n4. Results are supplemented with numerical evaluations.\", \"weaknesses\": \"From the main text, it looks like the two main techniques, geometrically-increasing batch lengths and geometrically-decaying learning rates, are the main technical contributions of this paper. But these two techniques seem both extensively studied in other online learning problems. So --\\n1. How are they different from those used in other online learning problems? Say, do you specifically adapt them to this control theory problem?\\n2. Do they bring special difficulties under this specific setup of nonlinear control? Say, do they break some previous analysis ideas in online nonlinear control?\\n3. Why didn't previous works use them?\\n\\n[EDIT. After rebuttal, it turns out that batch lengths and learning rates are both not simply geometric, but follow some more meticulous designs.]\", \"minor\": \"1. To create a reference inside parentheses, please try using the command `\\\\citep{}` instead of manually typing `(\\\\citet{})`.\\n2. Section 2 is notation heavy. I suggest that the authors add subsections / paragraphs to group the definitions.\\n3. 
The results part (especially Section 4.2) contains a huge number of formal statements with almost no informal descriptions of them; it is definitely not enjoyable to read. I suggest that the authors replace them with informal versions (and also add a paragraph of intuitive description before/after each of them) and defer the formal ones into the appendix.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weakness 2** (Quality): There may have been a misunderstanding. Indeed, our assumption does not require prior knowledge of an ISS controller. We do not need to know which controller is ISS or not; we only require that at least one ISS controller \\\"exists\\\" in the candidate pool.\\nThe ISS principle is a widely understood concept for handling disturbances in nonlinear systems, particularly in applications such as automotive and robotics systems, as it ensures that a vehicle or robot can perform safely despite variations in exogenous inputs like road conditions or obstacles. \\nMoreover, as mentioned above, we only assume the existence of \\\"asymptotically\\\" ISS stabilizing controllers whose stabilizing behavior can be arbitrarily slow. \\nTherefore, the existence of an ISS controller is not a particularly restrictive assumption.\\nTo support the scalability, we have presented new experiments in Example $2$. \\n\\n**Weakness 3** (Significance): Controller stability and ISS stability are the same concept in our context. We are not sure about the reviewer's comment. Perhaps some terminology in the paper has caused confusion. We believe that the typical scenario in nonlinear control is to design a controller that takes the states to an equilibrium, and this is about the design of stabilizing controllers. We have addressed the same problem in our paper. We do not assume stability knowledge. 
Given an unknown system, we can consider different regimes to model the possibilities of the unknown parameters of the system and then design a controller for each possibility. For example (see a *new* Figure $1(b)$ for the pictorial illustration for readers), consider a robot operating in an uncertain environment where there are different scenarios that could happen in the environment and we model it by a vector \\\"$(a,b)$\\\" for simplicity. We consider different intervals such as $[0,1)\\\\times [0,1)$ or $[1,2)\\\\times[1,2)$, etc. for \\\"$(a,b)$\\\" and then design a controller for each scenario. Now, we have a pool of controllers and we know at least one of them should work but do not know which one since we are not aware of the exact parameter of the system. Our method learns a correct controller from the pool. Our stability notion is the same as the one routinely considered in the control theory area as well as the new area of machine learning for control systems.\\n\\n**Weakness 4** (Significance): Thank you for your comment. We have a mathematical proof showing that the proposed idea works. We have provided new high-dimensional experiments in Example $2$.\\n\\n**Question 1**: The magnitudes of the learning rate and batch length do not affect the algorithm's complexity. For the learning rate, it is simply used to calculate Line 28 of Algorithm $1$ regardless of its magnitude. For the batch length, the only (potentially) intensive part is Line 21 of Algorithm $1$. Given the batch $b$ and its length $\\\\tau_b$, it simply adds all of the bandit feedback over $b$, so the complexity of the line is $O(\\\\tau_b)$. 
Since the sum of all batch lengths is the algorithm time $T$, the total complexity would be $O(T)$ and this is certainly a linear-time algorithm.\\n\\n**Question 2**: We have now provided 100-dimensional systems in Example $2$.\\n\\nMany thanks for reading our rebuttal.\"}", "{\"comment\": \"Dear reviewers: Since the discussion period would end in a week, could you please read our rebuttal and let us know if you have any further concerns or comments? We provided extensive answers to your previous comments and hope to discuss them with you during this discussion period. We appreciate your time and effort.\\n\\nBest regards,\\n\\nSubmission1215Authors\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your response! I will maintain my positive score and recommend accepting this paper.\"}", "{\"comment\": \"We are very grateful to the reviewer for a thorough evaluation of our work and providing valuable feedback. We will improve the paper based on the provided comments.\", \"we_have_improved_the_paper_and_uploaded_a_new_pdf_file_with_two_important_revisions\": \"1. **Experiments**: We have replaced the experiments in Example $2$ with high-dimensional experiments (100 states) on leader-follower systems. The leader is represented by a previous ball-beam (nonlinear) system, and the followers leverage the leader's state to stabilize themselves. Please check the experimental setup in Example $2$ (see pages 9-10).\\n\\n2. **Illustration**: In a new Figure $1$, we have illustrated a concept of the controller pool for the unknown nonlinear system, and how general an asymptotically stabilizing notion is. We hope that this figure helps the reader to understand the core concepts (see page 4, Figure $1$).\", \"responses_to_the_reviewer\": \"**Main concern 1** ($H(t)$): Thank you for the comment. There may have been a confusion here since only some of our results depend on $H(t)$. 
In our paper, there are two types of theorems: those that depend on $H(t)$ and those that do not. The ones depending on $H(t)$ are Thm $4.2$ and the second part of Thm $4.6$. Thm $4.2$ implies finite-gain stability when $H(t) <\\\\infty$, and the second part of Thm $4.6$ implies $\\\\tilde{O}(T^{2/3})$ if $H(t) \\\\leq \\\\log(t)$. As the reviewer has noticed, these assumptions are restrictive but still extend beyond the exponentially stabilizing notions (geometric series) and include some asymptotically stabilizing notions. \\nSuch theorems were given to fairly compare the existing works with ours. For example, in Table $1$, we presented the result when $H(t) \\\\leq \\\\log (t)$ to compare the regret between ours and the papers with exponentially stabilizing notions. \\n\\nOn the other hand, Thm $4.1$ and the first part of Thm $4.6$ do not depend on $H(t)$. Whatever $H(t)$ is, we achieve asymptotic system stability (Thm $4.1$), and concurrently, a sublinear regret bound is attained (see \\\"we achieve a **sublinear** regret bound\\\" in Thm $4.6$). In this case, we obtain a higher regret than $\\\\tilde{O}(T^{2/3})$, but the regret bound is still sublinear. To the best of our knowledge, this is the first result in the literature where a sublinear regret is achieved even when exponentially stabilizing controllers do not exist. To illustrate how challenging our setting is, \\nlet us further present a one-dimensional system, where the current system state is $1$ (newly presented in Figure $1(a)$). The goal is to achieve a state near $0$, and we would like to detect this stability by observing whether one arrives at a state less than $1-\\\\epsilon$, where $\\\\epsilon$ is an arbitrarily small positive number. Exponentially stabilizing controllers guarantee that the stability is detected in $O(\\\\log (1/\\\\epsilon))$ time. 
However, with an asymptotically stabilizing controller, if the controller is designed to keep the system state unchanged for an arbitrarily long time $T$ and then collapse the state towards $0$ afterward, one cannot detect the stability before time $T$ regardless of how small $\\\\epsilon$ is. In such a case, even though the controller ultimately achieves the goal, it may take a lot of time to learn whether a closed-loop system would be stable or not. Our algorithm achieves a **sublinear** regret even if only those troublesome controllers exist and simple (exponentially stabilizing) controllers do not exist at all. This achievement is due to our design of a *dynamic batch length* and an *adaptive learning rate*.\\n\\nTo help readers understand which parts need the $H(t)$ restriction, we will add the relevant explanations in the paper.\\n\\n**Main concern 2** (experiment): We have now provided the experiments on high-dimensional systems in Example $2$.\\n\\nContinued in the next rebuttal.\"}", "{\"comment\": \"We are very grateful to the reviewer for a thorough evaluation of our work and providing valuable feedback. We will improve the paper based on the provided comments.\", \"we_have_improved_the_paper_and_uploaded_a_new_pdf_file_with_two_important_revisions\": \"1. **Experiments**: We have replaced the experiments in Example $2$ with high-dimensional experiments (100 states) on leader-follower systems. The leader is represented by a previous ball-beam (nonlinear) system, and the followers leverage the leader's state to stabilize themselves. Please check the experimental setup in Example $2$ (see pages 9-10).\\n\\n2. **Illustration**: *Based on your constructive comment*, in a *new* Figure $1$, we have illustrated a concept of the controller pool for the unknown nonlinear system, and how general an asymptotically stabilizing notion is. We hope that this figure helps ML audiences understand the core concepts of our setup in nonlinear control. 
(see page 4, Figure $1$).\", \"responses_to_the_reviewer\": \"**Weakness 1** (Originality): We appreciate the reviewer's comment. As the reviewer stated in the strengths, our main contribution is relaxing the requirement for exponentially stabilizing controllers. We carefully designed the polynomially increasing batch length and adaptive learning rate based on the state norm to achieve desirable properties even when only asymptotically stable controllers exist. We would like to emphasize that we did not use an existing method as is and our results all depend on our method of adjusting the batch length and learning rate in the learning process. \\n\\nThe existing methods only deal with exponentially stable controllers while we focus on a much broader class of asymptotically stable controllers. To illustrate how significant the difference between the two types of controllers is, let us further present a one-dimensional system, where the current system state is $1$ (newly presented in Figure $1(a)$). The goal is to achieve a state near $0$, and we would like to detect this stability by observing whether one arrives at a state less than $1-\\\\epsilon$, where $\\\\epsilon$ is an arbitrarily small positive number. Exponentially stabilizing controllers guarantee that the stability is detected in $O(\\\\log (1/\\\\epsilon))$ time. However, with an asymptotically stabilizing controller, if the controller is designed to keep the system state unchanged for an arbitrarily long time $T$ and then collapse the state towards $0$ afterward, one cannot detect the stability before time $T$ regardless of how small $\\\\epsilon$ is. In such a case, even though the controller ultimately achieves the goal, it may take a lot of time to learn whether a closed-loop system would be stable or not. Our algorithm works even if only those troublesome controllers exist and simple (exponentially stabilizing) controllers do not exist at all. For more details, please see Appendix A. 
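To make the one-dimensional example above concrete, here is a minimal simulation sketch (our own illustration, not from the paper): the two trajectories are stand-in closed-loop behaviors, and detection is declared once the state norm falls below a threshold `eps`.

```python
import math

def detection_time(traj, eps, horizon=10**6):
    """First time t at which the state norm drops below eps (None if never)."""
    for t in range(horizon):
        if traj(t) < eps:
            return t
    return None

# Exponentially stabilizing closed loop: |x(t)| = exp(-t).
exp_traj = lambda t: math.exp(-t)

# Asymptotically stabilizing closed loop that is slow on purpose:
# the state stays at 1 until an arbitrarily long time T, then decays.
T = 1000
slow_traj = lambda t: 1.0 if t < T else math.exp(-(t - T))

eps = 1e-3
t_exp = detection_time(exp_traj, eps)    # on the order of log(1/eps) steps
t_slow = detection_time(slow_traj, eps)  # at least T, however eps is chosen
```

Nothing distinguishes the slow controller from an unstable one before time `T`, which is why a fixed observation window cannot certify stability in this setting.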
\\nFurthermore, to provide a second perspective on this issue, as mentioned in the paper (Lines 69-79), the difference between exponential stability and asymptotic stability is tantamount to the difference between convexity and strong convexity. As witnessed in many machine learning problems, strong convexity is a very restrictive assumption while convexity is more realistic.\\n\\nMeanwhile, our numerical experiment also shows that the relaxed controller stability assumption is crucial. Notice that Li et al.'s work assumes at least one exponentially stabilizing controller in the pool (see Table 1). In Figure $2(a)$, Li et al.'s work (\\\"fixed $\\\\tau$, fixed $\\\\eta$\\\") stabilizes the linear system (the state norm reaches near zero), putting aside the time needed for the stabilization. This is because an exponentially stabilizing controller always exists for the linear system. However, in the nonlinear system case where an exponentially stabilizing controller may not exist (see Figure $3(b)$), Li et al.'s algorithm causes the system to explode, while our DBAR (\\\"dynamic $\\\\tau$, adaptive $\\\\eta$\\\") successfully stabilizes the system and achieves a good regret. By comparing Li et al.'s algorithm in Figures $2$ and $3$, we observe that allowing a broader class of controllers can greatly affect the algorithm's performance, and our DBAR works well even when exponentially stabilizing controllers do not exist.\\n\\nContinued in the next rebuttal.\"}", "{\"comment\": \"We are very grateful to the reviewer for a thorough evaluation of our work and providing valuable feedback. We will improve the paper based on the provided comments.\", \"we_have_improved_the_paper_and_uploaded_a_new_pdf_file_with_two_important_revisions\": \"1. **Experiments**: We have replaced the experiments in Example $2$ with high-dimensional experiments (100 states) on leader-follower systems. 
The leader is represented by a previous ball-beam (nonlinear) system, and the followers leverage the leader's state to stabilize themselves. Please check the experimental setup in Example $2$ (see pages 9-10).\\n\\n2. **Illustration**: In a new Figure $1$, we have illustrated a concept of the controller pool for the unknown nonlinear system, and how general an asymptotically stabilizing notion is. We hope that this figure helps the reader to understand the core concepts (see page 4, Figure $1$).\", \"responses_to_the_reviewer\": \"**Weakness 1**: Thanks for the comment. We have now provided the experiments on high-dimensional systems in Example $2$. \\n\\n**Weakness 2**: Thank you for the comment. We will improve our presentation and strive to better illustrate our results. As a first step, we have presented Figure $1$ in Section $2$ to illustrate a concept of stabilizing notions and the controller pool for an unknown system. \\n\\n**Question 1**: Yes, we have now provided 100-dimensional systems.\\n\\n**Question 2**: Assumptions such as Lipschitz continuity are quite mild. To understand Assumption $2.5$, we provide the following example (also see the pictorial illustration in Figure 1(b)). Given an unknown system, we can consider different regimes to model the possibilities of the unknown parameters of the system and then design a controller for each possibility. For example, consider a robot operating in an uncertain environment where there are different scenarios that could happen in the environment and we model it by a vector \\\"$(a,b)$\\\" for simplicity. We consider different intervals such as $[0,1)\\\\times [0,1)$ or $[1,2)\\\\times[1,2)$, etc. for \\\"$(a,b)$\\\" and then design a controller for each scenario. Now, we have a pool of controllers and we know at least one of them should work but do not know which one since we do not know the exact parameter of the system. 
If the intervals chosen for each \\\"$a$\\\" and \\\"$b$\\\" are too wide, a stabilizing controller may not exist and then Assumption $2.5$ will be violated. This assumption is always satisfied if we have a rich set of controllers in the given pool, as long as the system is stabilizable (if the system is not stabilizable, then its behavior cannot be controlled no matter what action we take). The core part of Assumption $2.5$ is the existence of a controller satisfying Definitions $2.3$ and $2.4$, which means that we only require that at least one input-to-state stable (ISS) and incrementally stable (IS) controller \\\"exists\\\" in the candidate pool. \\nThe ISS and IS principles are widely understood concepts [1, 2] for handling disturbances in nonlinear systems, particularly in applications such as automotive and robotics systems, as they ensure that a vehicle or robot can perform safely despite variations in exogenous inputs like road conditions or obstacles. Specifically, [3] assumed the existence of \\\"exponentially\\\" ISS and IS stabilizing controllers; the stabilizing behavior of such controllers should be exponentially fast, which may be quite restrictive for nonlinear systems to have such controllers. On the other hand, in our work, we only assume the existence of \\\"asymptotically\\\" ISS and IS stabilizing controllers whose stabilizing behavior can be arbitrarily slow. \\nAs mentioned in the paper (Lines 69-79), the difference between exponential stability and asymptotic stability is tantamount to the difference between convexity and strong convexity. 
As witnessed in many machine learning problems, strong convexity is a very restrictive assumption while convexity is more realistic.\\nTherefore, the existence of an asymptotic ISS and IS controller is not a particularly restrictive assumption.\\n\\nContinued in the next rebuttal (references are also provided).\"}", "{\"title\": \"Hoping for Feedback\", \"comment\": \"Dear Reviewer BMap,\\n\\nSince the discussion period will end in a couple of days, we would greatly appreciate it if you could check our responses and let us know whether there are further concerns or issues that we can address. We hope that the reviewer will kindly reconsider their score if our responses are satisfactory. Many thanks.\\n\\nBest regards,\\n\\nSubmission1215Authors\"}", "{\"metareview\": \"Summary:\\nThis paper investigates online nonlinear control using bandit feedback. Its primary contribution lies in introducing dynamic batch length and learning rate techniques to relax the commonly used exponentially stabilizing controller assumption. Instead, it adopts a weaker assumption termed the asymptotic stabilizing assumption, achieving a better regret bound. Experimental results support the authors' claims.\", \"strengths\": [\"The use of dynamic batch size and adaptive step size to mitigate stability issues is a compelling approach.\", \"The motivation for this work is clear and well-articulated.\", \"The experiments, while conducted on simulations, seem sound and adequately support the proposed methodology.\"], \"weaknesses\": [\"The paper\\u2019s presentation needs significant improvement and currently falls short of publication standards. Both my own reading and other reviewers' comments highlight the excessive reliance on dense notations, making the paper less accessible to readers unfamiliar with the field.\", \"The significance of the asymptotic stability assumption is insufficiently emphasized, leaving the contribution somewhat unclear in the context of existing literature. 
The draft lacks clarity in positioning this work within the broader research landscape.\"], \"decision\": \"I recommend rejecting this paper in its current form due to the weaknesses outlined above.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion phase, the authors expanded on the motivation behind this work and added additional simulation experiments. However, I believe that the current writing quality still falls short of the standard expected for an ICLR publication.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your response! Yes, I misread your Assumption 3.1 as $\\\\lim_{b\\\\to \\\\infty} \\\\frac{\\\\tau_{b+1}}{\\\\tau_b}=\\\\frac{\\\\tau_1}{\\\\tau_0}$. You mentioned that \\\"It turns out that if we design the batch length to geometrically increase, we cannot achieve sublinear regret.\\\" -- can you kindly point me to the corresponding discussions in the text or give a quick justification here?\\n\\nI also admit that I interpreted your learning rate tuning too easily. Can you give a quick explanation on why \\\"It indicates that the learning rate decreases in unstable states and increases back to the initial value when the state norm returns to a stable region\\\" and also include it after Line 302 in your revision?\\n\\nI agree that both of them seem to be new. I'd be happy to increase my rating if you can let me understand what issues these two techniques are trying to tackle, and how they tackled these issues.\\n\\nBest,\\nReviewer 7GNK\"}", "{\"comment\": \"Thanks for your detailed response. My main concerns are addressed and I have increased my score.\"}", "{\"comment\": \"**Question 1**: Thanks for the comment. In the new PDF, we added the explanation in the caption of Table 1: \\\"$U$ is the set of destabilizing controllers and $|U|$ denotes its cardinality.\\\"\\n\\n**Question 2**: As mentioned in the response to Main concern 1, consider a single state $x(t)$ (also see Figure $1(a)$). 
Exponential stability means there are positive constants $a$ and $b$ such that $|x(t)|\\\\leq ae^{-bt}$, which means that the system should go to an equilibrium exponentially fast (the equilibrium is zero in this case). Asymptotic stability says $x(t)$ should eventually go to the equilibrium but there is no restriction on how fast it should go. It is just saying $\\\\lim_{t\\\\to\\\\infty} x(t)=0$. Exponential stability is thus a very strong notion, but it is commonly used in control theory since it makes the mathematical analysis much simpler. In many real-world systems, say autonomous systems, we cannot make them go to a desired behavior exponentially fast. Our paper adopts the realistic setting that only requires asymptotically stabilizing controllers.\\n\\n**Question 3**: Lemma $4.3$ is not to support the assumption $\\\\lim_{t\\\\to \\\\infty} H(t) <\\\\infty$. This lemma is to prove Thms $4.1$ and $4.2$ regardless of the assumption. Please notice that Thm $4.1$ does not require $\\\\lim_{t\\\\to \\\\infty} H(t) <\\\\infty$.\\n\\n**Question 4**: Technically, one can select any $z,v$ for the algorithm to work. It turns out that $z,v$ affect the final regret only up to a constant. The simplest selection would be $z=v=1$. Then the polynomial batch length would start from $\\\\tau_0 = \\\\lfloor(\\\\frac{1}{N(|U|+1)})^{1/2}\\\\rfloor$ and subsequently have $\\\\tau_b = \\\\lceil(\\\\frac{b+1}{N(|U|+1)})^{1/2}\\\\rceil$ for $b\\\\geq 1$, where $b$ is a batch number. To sum up, one can think of the batch length as growing proportionally to the square root of $b$. We will add some exposition to the paper to address this comment. \\n\\nMany thanks for reading our rebuttal.\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for the comments! 
We are really happy to hear back.\\n\\n**First of all**, for a dynamic batch length, Assumption 3.1 indicates that $\\\\lim_{b \\\\to \\\\infty} \\\\frac{\\\\tau_{b+1}}{\\\\tau_b} = 1$, which means the ratio of two consecutive batch lengths should approach 1 as time goes by (this includes polynomially increasing batch). The necessity of this assumption arises in the equation (14) (see Lines 993-999 in the Appendix), where we studied the asymptotic system stability. If we have $\\\\lim_{b \\\\to \\\\infty} \\\\frac{\\\\tau_{b+1}}{\\\\tau_b} > 1$, then the last part of (14) would be $0\\\\cdot r^U$ instead of $0\\\\cdot 1^U$ for some $r>1$, and this quantity is not guaranteed to converge to 0 since the number of Breaks $U$ goes to infinity as the number of batches $B$ goes to infinity. In other words, $0\\\\cdot r^U = 0\\\\cdot \\\\infty \\\\neq 0$ unless we have $r=1$ as our Assumption 3.1. Thus, the asymptotic system stability is violated. Not only that, the equation (14) is a core part of the sublinear regret proofs as in Line 1481 (Algorithm 1), Line 1637 (Algorithm 2), Line 1833 (Algorithm 3). Thus, if we have the ratio greater than 1, we cannot have a sublinear regret. Picking the ratio of increasing batch length to be 1 is perhaps the only way to obtain *both asymptotic system stability and a sublinear regret*, and we particularly picked polynomially increasing batch length among such batch length designs to achieve $\\\\tilde{O}(T^{2/3})$ regret for the case $H(t) \\\\leq O(\\\\log(t))$. \\n\\nWe really thank the reviewer's comment and we included a brief discussion with blue highlight (Lines 256-260) under Assumption 3.1 in the updated manuscript. \\n\\n**Second**, for an adaptive learning rate, Line 27 in Algorithm 1 says that $\\\\eta_{b+1} = \\\\frac{\\\\eta_0}{ (\\\\alpha_{b+1})^{s_{b+1}} }$, where $\\\\eta_0$ is an initial learning rate. $\\\\alpha_{b+1}$ and $s_{b+1}$ are determined in Lines 11-20 in the algorithm. 
To summarize, if the norm of the first state in the next batch is large ($||x_{t_{b+1}}||$ large), we adjust $(\\\\alpha_{b+1})^{s_{b+1}}$ to be large ($s_{b+1}$ is increased by 1, and $\\\\alpha_{b+1}$ is increased according to the norm). Subsequently, if $||x_{t_{b+1}}||$ is small enough (smaller than $\\\\alpha_b ||x_0|| + \\\\delta$), we let $s_{b+1}$ be $0$, meaning that we have exactly $\\\\eta_{b+1} = \\\\eta_0$. This implies that starting with an initial learning rate, if the state norm increases, a learning rate may decrease accordingly; however, if the state norm is stabilized, a learning rate increases back to an initial learning rate. We still can obtain a sublinear regret without this technique, but this allows us to alleviate the multiplicative exponential term in the regret bound ($o(T^{1/3})\\\\cdot \\\\exp(O(|U|))$ is improved to $\\\\tilde{O}(T^{-1/3})\\\\cdot \\\\exp(O(|U|))$).\\n\\nAs you recommended, we included the following brief sentence with blue highlight in Line 302 in the updated manuscript: \\\"Since $(\\\\alpha_{b+1})^{s_{b+1}}$ increases when the state norm $|| x_{t_{b+1}}||$ is large, and $s_{b+1}$ resets to zero for sufficiently small state norm, the corresponding learning rate decreases in unstable states and increases back to the initial value when the state norm returns to a stable region.\\\"\\n\\n**To conclude**, our dynamic batch length lets us achieve both asymptotic system stability and a sublinear regret, and an adaptive learning rate further improves this sublinear regret to $\\\\tilde{O}(T^{2/3}) + \\\\tilde{O}(T^{-1/3})\\\\cdot \\\\exp(O(|U|))$ in the case when $H(t) \\\\leq O(\\\\log(t))$. Thus, our algorithm DBAR, the combination of these two components, effectively stabilizes the potential explosion of the system and enjoys the improved regret.\\n\\nMany thanks!\"}", "{\"comment\": \"We are very grateful to the reviewer for a thorough evaluation of our work and providing valuable feedback. 
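The rule described in this reply can be sketched in a few lines of Python. This is our own illustrative reconstruction, not the paper's Algorithm 1: the rule for growing `alpha` below is a hypothetical stand-in for the paper's Lines 11-20, which are not reproduced here.

```python
def adapt_learning_rate(eta0, alpha, s, x_norm, x0_norm, delta):
    """One step of the state-norm-driven rule eta = eta0 / alpha**s:
    shrink the rate while the state looks unstable, and reset it to eta0
    once the norm re-enters the stable region ||x|| <= alpha*||x0|| + delta."""
    if x_norm <= alpha * x0_norm + delta:
        s = 0                                  # stable again: eta back to eta0
    else:
        s += 1                                 # unstable: decrease eta
        alpha = max(alpha, x_norm / max(x0_norm, 1e-12))  # assumed growth rule
    return eta0 / (alpha ** s), alpha, s

eta0, alpha, s = 0.5, 1.0, 0
# Large state norm: the learning rate drops below eta0.
eta, alpha, s = adapt_learning_rate(eta0, alpha, s, x_norm=4.0, x0_norm=1.0, delta=0.1)
# Norm back in the stable region: the rate returns to eta0.
eta, alpha, s = adapt_learning_rate(eta0, alpha, s, x_norm=0.5, x0_norm=1.0, delta=0.1)
```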
We will improve the paper based on the provided comments.\", \"we_have_improved_the_paper_and_uploaded_a_new_pdf_file_with_two_important_revisions\": \"1. **Experiments**: We have replaced the experiments in Example $2$ with high-dimensional experiments (100 states) on leader-follower systems. The leader is represented by a previous ball-beam (nonlinear) system, and the followers leverage the leader's state to stabilize themselves. Please check the experimental setup in Example $2$ (see pages 9-10).\\n\\n2. **Illustration**: In a new Figure $1$, we have illustrated a concept of the controller pool for the unknown nonlinear system, and how general an asymptotically stabilizing notion is. We hope that this figure helps the reader to understand the core concepts (see page 4, Figure $1$).\", \"responses_to_the_reviewer\": \"**Weakness** (Geometrical series and adaptation to control problem): Thanks for the comment. Our first technique is polynomially increasing batch length. We carefully designed the batch length to concurrently achieve asymptotic system stability and sublinear regret.\\nIt turns out that if we design the batch length to geometrically increase, we cannot achieve sublinear regret. Thus, the increase amount saturates as time goes by, which renders our algorithm more practical.\\nThe second technique is adaptive learning rate based on the state norm. It does not geometrically decay; we only decrease the learning rate if the state norm is large (state is unstable), and subsequently increase the learning rate if the state returns to a stable area. \\n\\nIn online learning, the agent may progressively lengthen the batch length as the agent learns the system. Such a strategy is often used in incremental learning or curriculum learning, where the data availability or task complexity may increase over time. 
\\nFor example, [1] used a geometrically increasing (doubling) batch length as they try to learn longer Atari games and escape local optima learned from short games. \\nHowever, such a setting is different from ours, since we focus on a \\\"single trajectory setting\\\" in the control problem. Often, in a single trajectory setting, a geometrically increasing batch length will lead to learning instability due to higher variance in learning a good policy. Thus, instead of using a geometrically increasing length, we have carefully designed a polynomially increasing length to guarantee both stability and sufficient exploration. \\n\\nFor the learning rate, in the context of online learning, it is well-known that reducing the learning rate will lead to early exploration and later stabilization. The work [2] adapts this advantage of a decreasing learning rate to the control problem and presents an algorithm based on a non-increasing learning rate, but the paper only proves the results for a constant learning rate. \\nHowever, our claim is that decreasing the learning rate, regardless of the current state, does not significantly improve the control performance. We provide theoretical guarantees on our scheme based on the stability of the state norm, and thus the rate is not necessarily non-increasing. \\n\\n**Weakness** (special difficulties and comparison with prior works): The previous work [2] assumed the existence of exponentially stabilizing controllers. The stabilizing behavior of such controllers should be exponentially fast, which may be quite restrictive for nonlinear systems to have such controllers. In that case, they do not need to use a dynamic batch length since it does not take a lot of time to learn whether a closed-loop system would be stable or not. In our challenging setting where those simple controllers may not exist (stabilizing behavior can be arbitrarily slow in our case), a previously used \\\"fixed batch length\\\" fails to identify the system stability. 
Thus, we need a dynamic batch length to deal with the removal of exponentially stabilizing controllers. \\nThe use of a dynamic batch length in turn necessitates the use of an adaptive learning rate. For more details on the distinction between exponential and asymptotic notions, see Appendix A.\\n\\n**Minor 1**: Thanks for the recommendation. We updated the paper accordingly.\\n\\n**Minor 2**: To help readers understand Section 2 (assumptions and definitions), we provided the glossary in Appendix B. We will guide the readers to refer to Appendix B while reading Section $2$. Based on your comment, we have also presented Figure $1$ to illustrate the concept of an asymptotically stabilizing notion in the given controller pool.\\n\\n**Minor 3**: Thanks for the comment. We will soon add informal statements accordingly and defer some of the formal theorems to the appendix.\\n\\n[1] Fuks et al., \\\"An Evolution Strategy with Progressive Episode Lengths for Playing Games\\\", IJCAI, 2019.\\n\\n[2] Li et al., \\\"Online switching control with stability and regret guarantees\\\", L4DC, 2023.\\n\\nMany thanks for reading our rebuttal.\"}", "{\"comment\": \"**Question 3**: Thanks for the recommendation.\\nWe will defer Lemma $4.7$ to the appendix and add the following explanation. \\n\\nTo clarify, our proof strategy starts from dividing the expected total cost into mix loss and mixability gap as in [4,5]. This technique is common in the hedge setting, which is an instance of multi-armed bandit problems. Since the earlier algorithm relies on the \\\"constant learning rate\\\", they divide the expected cost into the $\\\\text{mix loss}$ \\\"$-\\\\frac{1}{\\\\eta_0}\\\\log(\\\\mathbb{E}\\\\exp(-\\\\eta_0 w_b'(k)))$\\\" and the $\\\\text{mixability gap}$ \\\"$\\\\mathbb{E} [w_b'(k)]-\\\\text{mix loss}$\\\" (notice that the denominator of the mix loss and the term inside the exponential function are both $\\\\eta_0$, the constant learning rate). 
\\nHowever, to deal with the absence of exponentially stabilizing controllers, we need to adopt an adaptive learning rate, thus the proof strategy should be modified to analyze $-\\\\frac{1}{\\\\eta_0}\\\\log(\\\\mathbb{E}_{k\\\\sim p_b}\\\\exp(-\\\\eta_b w_b'(k)))$, where a new $\\\\eta_b$ is an adaptively selected learning rate at batch $b$. Both the modified mix loss and the mixability gap lead to an additional term that is in $|L|$ and $|V|$. Both terms are on the order of $|U|$ (Lemma $4.7$), so they do not increase the regret compared to the earlier work [3]. \\n\\nNote that our adaptive learning rate is determined by $\\\\eta_b = \\\\eta_0 / (\\\\alpha_b)^{2 s_b}$. Accordingly, while we obtain additional terms linear in $|U|$ due to the adaptive learning rate, we benefit from having the term $\\\\mathbb{E} \\\\left[\\\\sum_{b=0}^{B-1}\\\\sum_{t=t_b}^{t_{b+1}-1} \\\\bigl[\\\\frac{c_t(x_t^K(i^*), u_t^K(i^*))}{(\\\\alpha_b)^{2 s_b}} - c_t(x_t^*, u_t^*)\\\\bigr]\\\\right]$ instead of $\\\\mathbb{E}\\\\left[ \\\\sum_{b=0}^{B-1}\\\\sum_{t=t_b}^{t_{b+1}-1} \\\\bigl[c_t(x_t^K(i^*), u_t^K(i^*)) - c_t(x_t^*, u_t^*)\\\\bigr]\\\\right]$ in a constant learning rate case. \\nSince $(\\\\alpha_b)^{2 s_b}$ increases (an adaptive learning rate decreases) when the state norm is in an unstable region, the term decreases to the extent that alleviates the multiplicative exponential term (see $o(T^{1/3})\\\\cdot \\\\exp(O(|U|))$ in Table $1$, \\\"Dynamic Batching\\\") that arises when using a dynamic batch length. \\nIn summary, our final regret contains additional polynomial terms compared to the previous work, but instead alleviates the multiplicative exponential term. 
\\n\\n\\n[1] Sontag, \\\"Input to state stability: Basic concepts and results\\\", Nonlinear and Optimal Control Theory, 2008.\\n\\n[2] Khalil, \\\"Nonlinear Systems\\\", Pearson Education, 2015.\\n\\n[3] Li et al., \\\"Online switching control with stability and regret guarantees\\\", L4DC, 2023.\\n\\n[4] van Ervan et al., \\\"Adaptive hedge\\\", NeurIPS, 2011.\\n\\n[5] de Rooij et al., \\\"Follow the leader if you can, hedge if you must\\\", JMLR, 2014.\\n\\nMany thanks for reading our rebuttal.\"}", "{\"summary\": \"The paper addresses the online bandit nonlinear control problem, where the objective is to learn an optimal controller for a nonlinear dynamical system amid unknown stabilizing and destabilizing controllers. The authors introduce the DBAR (Dynamic Batch length and Adaptive learning Rate) algorithm, which adapts batch length and learning rate to improve control stability and minimize regret without requiring exponentially stabilizing controllers. Unlike existing approaches that need stronger stability assumptions, DBAR works with a weaker, asymptotic stability notion. This flexibility allows DBAR to achieve stability and a regret bound even when the pool of stabilizing controllers is limited or includes only asymptotically stabilizing ones.\\n\\nTheoretical contributions include proving asymptotic and finite-gain stability of DBAR and bounding its regret. The authors also compare DBAR's performance with existing algorithms, illustrating through simulations that DBAR provides improved stability and reduced regret in both linear and nonlinear systems under adversarial disturbances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality:\\n The paper introduces a new algorithm, DBAR, combining a dynamic batch length and an adaptive learning rate, which expands on existing Exp3 approaches for bandit learning. 
This design theoretically broadens the applicability of online control by relaxing the requirement for exponentially stabilizing controllers, a novel aspect in nonlinear control with adversarial disturbances.\", \"quality\": \"The theoretical guarantees, though valuable, rest on assumptions that may be difficult to satisfy in real-world applications, such as requiring knowledge of an ISS controller in advance. Furthermore, empirical validation is limited to relatively simple experiments, which may not fully demonstrate the algorithm\\u2019s effectiveness or scalability in more complex settings.\", \"clarity\": \"The problem setup, assumptions, and main results are generally clear, with key terms defined (e.g., asymptotic stability and finite-gain stability) and a well-organized structure. The authors also make efforts to contextualize the algorithm in comparison to prior work, which aids readability for readers with a background in online control or bandit learning.\", \"weaknesses\": \"Originality:\\n While DBAR's approach to adapting the learning rate and batch size is new, the actual contribution may be seen as incremental. The improvements over existing algorithms are mainly in modifying specific assumptions rather than introducing a breakthrough method, and it is unclear if these modifications substantively advance the field.\", \"significance\": \"The paper's experimental evaluation does not robustly validate the theoretical claims, as it relies on lower-dimensional systems that may not capture the challenges posed by high-dimensional, complex environments. This limits the confidence in DBAR's utility beyond the examples shown, reducing its impact.\", \"questions\": \"1. How does the dynamically increasing batch length affect the algorithm's computational cost, particularly in real-time applications? It would be helpful to discuss any trade-offs between stability assurance and computational efficiency.\\n\\n2. 
The current experimental setup involves relatively simple, low-dimensional systems. Can the authors provide results on more challenging benchmarks to better demonstrate DBAR\\u2019s performance and stability claims?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you. Please let us know if you have any further questions!\"}", "{\"title\": \"Revisions addressing concerns about H(t)\", \"comment\": \"To address the reviewer's first main concern about $H(t)$, we have slightly revised the introduction part, and added the relevant explanations in the paper. The revisions are blue-highlighted.\\n\\nPlease note that \\n\\n1. A *dynamic batch length* allows us to achieve both asymptotic system *stability* and a sublinear *regret*. However, we suffer from a multiplicative exponential term in return. \\n\\n2. In Table 1, when using an *adaptive learning rate*, we can see that $o(T^{1/3})\\\\cdot \\\\exp(O(|U|))$ is improved to $\\\\tilde{O}(T^{-1/3})\\\\cdot \\\\exp(O(|U|))$, and this is when $H(t) \\\\leq O(\\\\log {t})$ as we mentioned in the previous rebuttal. However, even when $H(t)$ is not restricted, $o(T^{1/3})\\\\cdot \\\\exp(O(|U|))$ is improved to $o(1)\\\\cdot \\\\exp(O(|U|))$ (see Corollary D.10), so it is true that multiplicative exponential term can be alleviated in all cases. \\n\\nThus, our algorithm DBAR, the combination of these two components, effectively stabilizes the potential explosion of the system and enjoys the improved regret.\\n\\nBased on your constructive comments, we have added the following explanations in the updated manuscript:\\n\\n1. In Lines 88-90 (in the introduction), we specified that a *dynamic batch length* (designed to grow unboundedly, but the growth amount eventually saturates) contributes to both \\\"asymptotic system stability\\\" and \\\"a sublinear regret\\\". \\n\\n2. 
In Lines 95-97 (in the introduction) and Lines 311-312, we specified that an *adaptive learning rate* contributes to alleviating a multiplicative exponential term in all cases. Moreover, we also indicated that the regret $\\\\tilde{O}(T^{-1/3})\\\\cdot \\\\exp(O(|U|))$ is achievable for a specific class of stabilizing controllers.\\n\\n3. In Lines 256-260, we again emphasized that a dynamic batch length contributes to both \\\"asymptotic system stability\\\" and \\\"a sublinear regret\\\", and this was possible due to the design of $\\\\lim_{b\\\\to \\\\infty} \\\\frac{\\\\tau_{b+1}}{\\\\tau_b}=1$. For example, if we pick a geometric sequence (1,2,4,8, 16,...) as a dynamic batch length, both stability and a sublinear regret are not achievable. \\n\\nMany thanks.\"}", "{\"comment\": \"Thank you very much! Please let us know if you have any further questions.\"}" ] }
5o9JJJPPm6
ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization
[ "The Viet Bui", "Thanh Hong Nguyen", "Tien Anh Mai" ]
Offline reinforcement learning (RL) has garnered significant attention for its ability to learn effective policies from pre-collected datasets without the need for further environmental interactions. While promising results have been demonstrated in single-agent settings, offline multi-agent reinforcement learning (MARL) presents additional challenges due to the large joint state-action space and the complexity of multi-agent behaviors. A key issue in offline RL is the distributional shift, which arises when the target policy being optimized deviates from the behavior policy that generated the data. This problem is exacerbated in MARL due to the interdependence between agents' local policies and the expansive joint state-action space. Prior approaches have primarily addressed this challenge by incorporating regularization in the space of either Q-functions or policies. In this work, we propose a novel type of regularizer in the space of stationary distributions to address the distributional shift more effectively. Our algorithm, ComaDICE, provides a principled framework for offline cooperative MARL to correct the stationary distribution of the global policy, which is then leveraged to derive local policies for individual agents. Through extensive experiments on the offline multi-agent MuJoCo and StarCraft II benchmarks, we demonstrate that ComaDICE achieves superior performance compared to state-of-the-art offline MARL methods across nearly all tasks.
[ "Offline Reinforcement Learning", "Multi-Agent Reinforcement Learning", "Stationary Distribution Correction Estimation" ]
Accept (Poster)
https://openreview.net/pdf?id=5o9JJJPPm6
https://openreview.net/forum?id=5o9JJJPPm6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xDSF1dwjFV", "tCtCyEN6ub", "rxiDSwKe7Q", "qLsj7E5Y7h", "oGBt47piUj", "lNg426iiVv", "jPEruEBvAf", "h4BqTx2X5i", "dwqOUFh9sr", "dkESPrdGU7", "UdNt5ojj0y", "TpHZMNvyAP", "Ry06c1ftr6", "OOeYHVFA7Y", "Nx0V49VNaB", "MIeY0cOEbK", "MD8J7VtS4U", "M74pPACAqJ", "HxTc31dyv0", "HsFR3xX7vx", "CdkqdmpkzX", "CHDCl19I31", "AHU1LeYcDj", "9uZu43EH7W", "5mMoaWzr0c", "4wtZNM662r", "4q0rZCpuvL", "3J9WRsJ0eM", "1xLXeHcl3H" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732457101803, 1732691672274, 1732302413288, 1730521047650, 1730645921690, 1734637063155, 1732600231352, 1732582583145, 1732688830201, 1730456984302, 1730542497387, 1732638904149, 1732302604743, 1732302795012, 1732460491464, 1732301409632, 1732438390401, 1737524296761, 1732355859230, 1732302157125, 1732460688176, 1732456120160, 1732342666602, 1732544019623, 1732435295223, 1732301758166, 1732639406655, 1732510250538, 1732603750172 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_3fo7" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_RYfn" ], [ "ICLR.cc/2025/Conference/Submission14042/Area_Chair_nZ1L" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ 
"ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_XXjk" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_RYfn" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_3fo7" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Authors" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_kymz" ], [ "ICLR.cc/2025/Conference/Submission14042/Reviewer_3fo7" ] ], "structured_content_str": [ "{\"title\": \"We thank the reviewer for the additional feedback\", \"comment\": \"> However, I have some observations regarding the formulation section of ComaDICE. The current presentation lacks a professional and cohesive mathematical structure. As it stands, the section appears more like a draft or homework exercise rather than a polished and rigorous exposition expected in a professional context. I recommend reorganizing this section to present the derivations and explanations with greater clarity and structure, adhering to standard mathematical writing conventions.\\n\\nWe sincerely thank the reviewer for their valuable feedback, which we greatly appreciate. Regarding the exposition, we have made every effort to present our formulation clearly. 
However, due to space constraints, some details have been omitted, and we have provided references to relevant papers for further information. If the reviewer could kindly point out specific parts of the section that require revision or improvement, we would be more than happy to address them and make the necessary revisions to enhance the clarity and quality of our work.\\n\\n> Additionally, the notations in this section require refinement for consistency and precision. For instance, the notation for the occupancy measure ...\\n\\nWe thank the reviewer for their valuable suggestion, which we have carefully considered. In response, **we have now updated the notation to ensure consistency throughout the paper.** Specifically, we now use global state notation consistently, with a clarification that, in practice, global state information is not fully available. Instead, we rely on joint observations to address this limitation.\\n\\nOnce again, we sincerely appreciate the reviewer\\u2019s additional feedback, which has helped us further improve the paper. We hope our responses and updates adequately address your remaining concerns. \\n\\n**Should you have any further questions or require additional clarification, we would be more than happy to provide further explanations or revisions.**\"}", "{\"comment\": \"We sincerely thank the reviewer for all the valuable discussions and insightful questions during the rebuttal process. Your suggestions have pushed us to improve our paper significantly with additional experiments and more in-depth discussions. *We believe this exemplifies the kind of constructive and thoughtful review that any author would hope to receive.*\\n\\nWe will ensure additional rounds of revision to refine the storyline and enhance the overall writing quality of the paper.\"}", "{\"comment\": \"> Can ComaDICE solve the XOR game and Bridge mentioned in AlberDICE?\\n\\nYes, ComaDICE is capable. 
In fact, these games are relatively small, with significantly lower state and action dimensions, allowing our algorithm to quickly converge to the optimal policy within just a few training epochs. For the XOR game, we observed that ComaDICE not only achieves the maximum possible reward of 100 but also converges much faster than AlberDICE. We have added such results to Section B.7 in the appendix.\\n\\n> Related to \\u201cStrengths\\u201d, can solving ComaDICE learn the global optimum in the underlying MDP even with the mixing network?\\n\\nYes, under our mixing architecture, ComaDICE can theoretically learn the global optimum for $\\\\nu_i$ (for all $i$), as guaranteed by the convexity with respect to $\\\\nu$ established in Theorem 4.2.\\n\\n> Is the Individual Global Max (IGM) assumption required in order to introduce the mixing network? In other words, does ComaDICE assume that the underlying optimal Q function assume IGM?\\n\\nWe do not assume IGM in our approach. Instead, we demonstrate that our learning framework guarantees consistency between the global and local optimal policies, as shown in Proposition 4.3\\u2014a property that shares similarities with IGM. Our mixing network architecture is primarily designed to ensure the convexity of the loss function, enabling a stable and efficient training process.\\n\\n> What is the main difference in the final algorithm with OMIGA [3] ?\\n\\nThe fundamental difference is that OMIGA directly produces Q or V functions, which can then be used to extract a policy. In contrast, ComaDICE uses the Q and \\\\(\\\\nu\\\\) functions solely to learn the occupancy ratio between the learning policy and the behavioral policy. We have added a discussion on Page 3 to clarify this distinction.\\n\\n> Why are the notations mixed between using joint observations and global states\\n\\nIn the POMDP setting, global states are not fully accessible and are instead represented by the joint observations from local agents. 
For simplicity, we use global state notation, but it actually refers to the corresponding joint observations. We have added a clarification on Page 4 at the beginning of Section 4.1.\\n\\n> Please write in a different color during the rebuttal which part is the novel part. In particular, which part of the proofs are a novel contribution and which part is coming from OptiDICE [2]\\n\\nThank you for the suggestion. We have added some lines to the proof of our Proposition 4.1, acknowledging that the first part of the proof is a straightforward extension from the OptiDICE paper, while the second part introduces some novel findings.\\n\\n**We hope our revisions effectively respond to the reviewer\\u2019s critiques and provide greater clarity on our contributions. If you have any additional questions or comments, we would be happy to address them.**\"}", "{\"summary\": \"In this paper, the ComaDICE algorithm is proposed for offline multi-agent reinforcement learning, extending the DICE method to multi-agent scenarios and using value function decomposition and policy extraction methods; the idea is novel, and the experimental results show the potential of the method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The introduction of the DICE algorithm within the domain of multi-agent reinforcement learning represents a significant contribution characterized by its novelty and innovative approach.\", \"The experimental settings are comprehensive, demonstrating the potential of the proposed algorithm effectively.\"], \"weaknesses\": [\"The contribution emphasizes the value decomposition and claims to prove the equivalence of the local policy product to the globally optimal policy, but the paper is too repetitive and superficial in its exposition of the decomposition method, neither explicitly defining the local subtasks and their corresponding policies, nor the relationship between them, nor providing a complete proof of this 
equivalence.\", \"The experimental results for hybrid networks contradict mainstream research findings (single-layer outperforms double-layers), a phenomenon that hints at a possible fundamental flaw in the application of the DICE methodology to MARL, but the paper lacks an in-depth discussion of this.\"], \"questions\": [\"The discussion of the DICE method and its goals in the Related Work and Preliminary Knowledge section is not sufficiently in-depth to clearly locate the innovations of this paper with respect to existing work.\", \"The proof section of DICE skips some key steps, has a confusingly organized derivation and notation that does not clearly state the purpose and rationale for each step of mathematical transformation. Interpretation between Lagrange functions and the offline reinforcement learning problem formulation is lacking.\", \"The paper fails to explore the common sparse reward scenario in offline reinforcement learning, and the analysis of the quality requirements of behavioral strategies is insufficient, limiting the feasibility of the method in practical applications.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an algorithm introducing stationary distribution correction to address the distributional shift problem in offline cooperative multi-agent reinforcement learning (MARL). In multi-agent environments, this issue is intensified due to the large joint state-action space and the interdependencies among agents. To tackle this, ComaDICE minimizes the f-divergence between the stationary distributions of the learning and behavior policies. 
Additionally, by leveraging the Centralized Training with Decentralized Execution (CTDE) framework, it decomposes the global value functions into local values for each agent, ensuring that the optimization of each agent\\u2019s local policy is consistent with the global learning objective.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Effective Distribution Correction in Multi-Agent Settings: ComaDICE improves upon traditional DICE by extending stationary distribution correction to multi-agent environments. Through f-divergence-based alignment between behavior and target policies, it effectively handles the complex distributional shifts unique to multi-agent interactions, enhancing policy reliability and performance.\", \"Theoretical Foundation: A thorough mathematical analysis provides convergence and stability proofs through f-divergence correction, reinforcing the algorithm\\u2019s reliability in multi-agent scenarios.\", \"Stable Value Decomposition: ComaDICE decomposes global values into convex local objectives, enhancing training stability and aligning local agent optimization with the global objective to address coordination and stability challenges specific to multi-agent environments.\"], \"weaknesses\": [\"ComaDICE\\u2019s theoretical analysis relies on non-negative weights and convex activations in the mixing network for stability. While this aids convergence, it may limit the model\\u2019s ability to capture complex inter-agent dynamics. In the practical algorithm, this limitation is intensified by the use of a single-layer linear mixing network, which further restricts representational capacity in highly interactive environments.\", \"High Computational Cost: ComaDICE incurs a significant computational cost due to the precise f-divergence-based adjustments needed for each agent\\u2019s local policy in stationary distribution correction. 
This can lead to reduced efficiency, especially in environments with a large number of agents.\"], \"questions\": \"The ablation study shows that performance decreases when using a more complex network. Why is that the case? If more data were available or the model was improved, could the results potentially differ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper studied offline multi-agent RL and proposed an approach based on the DICE framework. The proposed approach uses a stationary distribution shift regularization to combat the distribution shift issue in offline RL. The paper demonstrates that their approach works well empirically.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers are in general positive about the paper. Before rebuttal, the reviewers had concerns about the experiments (additional baselines and environments), and comparison to DICE based related works. During the rebuttal, the authors worked hard to provide additional details, including additional baselines (AlberDICE and OptiDICE) on all 3 environments, additional toy example (XOR Game), and a re-interpretation of the method as learning the optimal policy implicitly in factorized policy space, and detailed analysis on how ComaDICE works based on the XOR Game. These additions convinced one reviewer to increase their score to the positive side, and at the end all reviewers agreed on an acceptance.\"}", "{\"comment\": \"Dear Reviewer XXjk,\\n\\nAs the rebuttal period is coming to an end, we kindly ask if you could take a moment to review our responses to see if they address your concerns.\\n\\nOnce again, we greatly appreciate your comments, which have been invaluable in helping us improve the paper. We hope to engage in further discussions and hear more feedback from you.\\n\\nAll the best, \\nThe Authors\"}", "{\"comment\": \"Thank you for the answers. 
My concern regarding the first question about $\\\\mathcal{\\\\tilde L}$ is resolved and I think this kind of analysis would help clarify the contributions of ComaDICE.\\n\\nFor the XOR Game, I think that only partially explains how ComaDICE can solve it because balancing reward maximization and conservatism would also apply to OptiDICE (which fails). However, the answer to the first question regarding $\\\\mathcal{\\\\tilde L}$ somewhat answers this question (reward maximization + conservatism in the factorized policy space). \\n\\nI think the paper can be significantly improved if the XOR (or other toy example) is used to illustrate the purpose of decomposition. It would be better to provide a characterization of $\\\\nu$ in the toy example and show that decomposing it in that manner is implicitly finding optimal factorized policies. \\n\\nI am willing to raise my score if these points (along with all discussions during the rebuttal) are incorporated into the draft before the revision deadline.\"}", "{\"comment\": \"Thank you again for the rebuttal and the discussion. I've updated my score which reflects my current view on the paper, as well as some details on the reasoning.\"}", "{\"summary\": \"This work introduces ComaDICE (offline Cooperative MARL with DICE), an approach for offline cooperative multi-agent reinforcement learning (RL) that leverages the DICE method. ComaDICE formulates the offline cooperative multi-agent RL problem as a constrained optimization and employs a DICE-based method to compute a global Lagrangian multiplier, $\\\\nu^{tot}$. Given the large state and action spaces typical in multi-agent RL, practical optimization of $\\\\nu^{tot}$ may not be feasible. 
To address this challenge, ComaDICE employs value function decomposition to decompose the global Lagrangian multiplier into individual Lagrangian multipliers for each agent.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work provides a comprehensive performance evaluation through experiments conducted on an extensive set of benchmarks, including the challenging multi-agent RL benchmark SMAC-v2. The experimental results indicate that the proposed approach shows superior performance relative to the baselines considered in the manuscript.\", \"weaknesses\": \"The primary concern regarding this work is its novelty compared to previous work.\\n\\nSpecifically, AlberDICE by Matsunaga et al. (2023) [A] conveys a similar idea, i.e. adoption of DICE for offline cooperative multi-agent RL. The key difference of ComaDICE from AlberDICE appears to be employing individual Lagrangian multipliers $\\\\nu_i$ and the mixing network for value function decomposition: while AlberDICE utilizes a simple yet principled resampling method for obtaining $\\\\nu_i$, ComaDICE employs the value factorization as discussed in Section 4.2, which relies on the additional mixing network $\\\\mathcal{M}_\\\\theta$ and further necessitates additional training.\\n\\nAlthough AlberDICE is briefly mentioned in Section 2, the paper does not adequately discuss the theoretical or empirical advantages of adopting the value decomposition instead of the resampling approach. In addition, the AlberDICE paper presents a simple multi-agent task where value function decomposition leads to a substantial performance loss.\\n \\nThe absence of such a comparison significantly weakens the perceived contribution of this study, as it fails to establish a clear improvement or differentiation from previous work. 
In this sense,\\n(1) A detailed discussion comparing ComaDICE and AlberDICE should be added,\\n(2) AlberDICE should be included as a baseline in all experiments,\\n(3) ComaDICE should be tested on the XOR game in the AlberDICE paper \\n\\n\\n[A] Matsunaga et al., \\u201cAlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation.\\u201d, NeurIPS 2023.\\n\\n[Minor comments]\\nIn Sections 6.1 and 6.3, the SMACv1 environment is discussed; however, corresponding experimental results for SMACv1 are not included in the main manuscript. It would be more appropriate to relocate these discussions to the appendix and direct readers there for further details, or incorporate the result on SMACv1 into the main manuscript, potentially replacing some of the redundant results on SMACv2 in Table 1 or Figure 1.\", \"questions\": \"Q1. Do the authors assume that $s$ can be sufficiently represented from $o$?\\nNotations for states and observations are confusingly used. For example, in lines 234 and 237, $\\\\nu(s)$ and $q(s,a)$ are defined as a collection of functions that requires individual observations, $\\\\nu_i(o_i)$ and $q(o_i,a_i)$ respectively. If so, the assumption should be explicitly clarified in the POMDP description.\\n\\nQ2. Figure 2 lacks information on which task was selected for each benchmark. (e.g. 5_vs_5 or 10_vs_10 in protoss, or expert or medium data quality in Hopper) Could you clarify?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes ComaDICE, a stationary distribution correction estimation approach to addressing OOD states and actions in Offline MARL. The derivation starts with the LP formulation of RL in the joint state-action space, which results in a concave objective for learning state-value functions similar to OptiDICE [2]. 
The objective also uses value decomposition with a monotonic mixing network for the advantage function, similar to OMIGA [3]. ComaDICE is evaluated on SMACv2 and MaMuJoCo and shows comparable performance with baselines such as BC and OMIGA.\", \"soundness\": \"2-->3\", \"presentation\": \"1-->2\", \"contribution\": \"1-->3\", \"strengths\": \"It is known that every MDP has a deterministic optimal policy, which, if extended to the multi-agent setting, can be factorized into independent policies. A very optimistic reading of the paper would interpret that the decomposition procedure is actually learning optimal value functions over decentralized policies. If this interpretation is correct, ComaDICE presents a very scalable and principled offline MARL algorithm for learning decentralized value functions and policies without any restrictive IGM assumption as in previous work such as ICQ. However, it is worth noting that this is more of a statement of the potential of the paper rather than a strength of its current version, as these points are not specifically addressed in the current draft.\\n\\nFurthermore, ComaDICE can be more scalable in comparison to AlberDICE [1], which requires alternating optimization.\", \"weaknesses\": \"### Problem Statement\\nIt is not clear what the main problem is that ComaDICE is solving. It is briefly mentioned in the introduction that OOD states and actions are a problem in offline MARL. However, this was addressed in detail by other work such as CFCQL/OMIGA/AlberDICE mentioned in the Related Work (especially AlberDICE [1], which is the most similar). For instance, AlberDICE considers some coordination problems (XOR/Bridge/etc.) where OOD joint actions may be common; there is no consideration of these settings, nor a comparison to AlberDICE, either algorithmically or empirically. 
Thus, it is not entirely clear from the current draft of the paper what the main problem is that ComaDICE is solving, and if it is indeed OOD actions, which part of the algorithm in particular is alleviating this.\\n\\n### Novelty\\nThe derivation is based on OptiDICE [2] but this is not explicitly mentioned, which is misleading. \\nFor instance, Proposition 4.1 seems equivalent to Proposition 1 of OptiDICE. Furthermore, an extension of OptiDICE to solve Offline MARL was considered in AlberDICE [1] so it is not clear why ComaDICE should be preferred over AlberDICE. Furthermore, the final algorithm closely resembles OMIGA [3].\\n\\n### Algorithm\\nIt is unclear what the purpose of value decomposition in learning the Q functions is. It seems the advantage can be computed by the learned state-value function $\\\\nu$ and run WBC (as mentioned in Appendix C of AlberDICE[1]). Also, Line 346 defines $A_\\\\nu^{tot}$ as the sum of reward and state-value functions.\\n\\n### Lack of Relevant Baselines\\nAlberDICE and OptiDICE are missing as the main baselines, as well as CFCQL which addresses OOD actions but in a different manner. These are all mentioned in the Related Work section but not compared.\\n\\n### Writing\\nThe paper is generally not well-written. All of the aforementioned weaknesses of the paper should be addressed in detail and the writing should not raise any of these concerns.\", \"questions\": \"1. What is the main purpose for the value decomposition and learning individual Q-functions?\\n\\n1. Related to the first question, what is the purpose of Theorem 4.2, if the original loss is also concave? If we use the mixing functions, does $ \\\\mathcal{\\\\tilde L}$ still find the global optimum?\\n\\n1. Can ComaDICE solve the XOR game and Bridge mentioned in AlberDICE? \\n\\n1. Related to \\u201cStrengths\\u201d, can solving ComaDICE learn the global optimum in the underlying MDP even with the mixing network?\\n\\n\\n1. 
Is the Individual Global Max (IGM) assumption required in order to introduce the mixing network? In other words, does ComaDICE assume that the underlying optimal Q function satisfies IGM?\\n\\n1. What is the main difference between the final algorithm and OMIGA [3]?\\n\\n1. Why are the notations mixed between using joint observations $\\mathbf{o}$ and $s$? Is the setting a Dec-POMDP or a specialized setting where the joint observations constitute the state?\\n\\n1. Please write in a different color during the rebuttal which part is the novel part. In particular, which parts of the proofs are a novel contribution and which parts come from OptiDICE [2].\\n\\n1. Please address both my comments in the Strengths and Weaknesses section in detail.\\n\\n\\n### References\\n[1] AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation (Matsunaga et al., NeurIPS 2023)\\n\\n[2] OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation (Lee et al., ICML 2021)\\n\\n[3] Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization (Wang et al., NeurIPS 2023)\\n\\n----------------------------------------------------------------------------------------------------------------------\\nPost-Rebuttal\\n----------------------------------------------------------------------------------------------------------------------\\n\\nDuring the rebuttal, the authors worked hard to address my concerns, namely (1) additional baselines (AlberDICE and OptiDICE) on all 3 environments, (2) an additional toy example (XOR Game), (3) a re-interpretation of the method as learning the optimal policy implicitly in factorized policy space, and (4) a detailed analysis of how ComaDICE works based on the XOR Game, which is a toy domain. 
\\n\\nThe reason why my initial score was low was because a lot of these points were not explicit in the original draft, and it was unclear what the contribution was. After the rebuttal and some discussion, these points were made clear and the authors demonstrated that (1) using a factorized approach to decomposing stationary distributions is implicitly searching for optimal factorized policies (2) ComaDICE is able to approximate a global optimum as opposed to Nash policies (AlberDICE), (3) ComaDICE outperforms all baselines in all environments including AlberDICE and OptiDICE, which are most closely related, (4) the algorithm is more scalable and simple compared to AlberDICE.\\n\\nAs it currently stands, these contributions of the paper are scattered and it would take some effort by the reader to appreciate the paper, without some background in both DICE-based approaches in MARL. I would recommend a re-write of the storyline of the paper, highlighting the fact that ComaDICE pushes the cooperative Offline MARL field forward beyond IGM as well as AlberDICE. \\n\\nAs a result, I've increased my score as follows:\", \"overall_score\": \"3: reject, not good enough --> 6: marginally above the acceptance threshold\\n\\nI can't quite give an 8 (Strong Accept) due to the writing, but I would have given 7 if that option was available.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the additional feedback, which is valuable in helping us further improve the paper. We have updated our manuscript to incorporate the following changes:\\n\\n- Cited more relevant works and added discussions to provide more context. 
\\n- Revised some notations as suggested, e.g., changed $\\\\mu^{tot}$ to bold font for consistency with $\\\\pmb{\\\\pi}^{tot}$, modified the equation in line 201 to align with the text, and added periods and commas for improved readability. \\n\\nDue to space constraints, we were unable to include many additional equations and explanations in the main paper. Instead, we have added references in relevant sections to clarify our derivations.\\n\\nWe hope that this updated version addresses your remaining concerns. If you have any further suggestions, we would be happy to incorporate them into our paper.\"}", "{\"title\": \"We thank the reviewers for the feedback!\", \"comment\": \"We thank the reviewer for the comments. Please find below our responses to your concerns.\\n\\n> The experimental results for hybrid networks contradict mainstream research findings (single-layer outperforms double-layers) ...\\n\\nThe low performance of the 2-layer mixing network clearly indicates that, in our offline setting, the 2-layer structure is overly complex for capturing the interdependencies between agents, leading to overfitting. Increasing the amount of offline data might reveal more information and potentially improve the performance of the 2-layer mixing network. However, this is not practical, as it would require significantly more storage and make training computationally expensive and infeasible.\\nWe have added an additional discussion to Section B.6 in the appendix to clarify this point. If our explanation remains unclear or if you have any further questions, we would be happy to provide additional clarification.\\n\\n>The discussion of the DICE method and its goals in the Related Work and Preliminary Knowledge section is not sufficiently in-depth to clearly locate the innovations of this paper with respect to existing work.\\n\\nIn the updated version, we have made every effort to position our contributions in relation to existing works and techniques. 
The updates have been highlighted in magenta for clarity. If the reviewer has specific points that require more in-depth discussion or analysis, we would be happy to explore them further and provide additional clarification.\\n\\n> The proof section of DICE skips some key steps, has a confusingly organized derivation and notation that does not clearly state the purpose and rationale for each step of mathematical transformation. ...\\n\\nDue to space constraints, we had to omit some steps in our derivations and instead referenced prior work on which our approach builds. Specifically, the derivations up to Proposition 4.1 closely follow those in the OptiDICE paper for single-agent RL. To clarify this, we have added some lines after Proposition 4.1 to acknowledge this fact.\\n\\n> The paper fails to explore the common sparse reward scenario in offline reinforcement learning, and the analysis of the quality requirements of behavioral strategies is insufficient, limiting the feasibility of the method in practical applications.\\n\\nWe acknowledge that our work did not address the sparse reward scenario or the quality of the behavioral policy. These are indeed important aspects; however, tackling these issues would require significant additional effort, which we believe warrants a separate paper. We will acknowledge this as a limitation of our work and highlight it as a promising direction for future research.\\n\\n**We hope our revisions and our response address the reviewers\\u2019 concerns and further clarify our contributions. If there are any additional questions or comments, we would be happy to address them.**\"}", "{\"title\": \"We thank the reviewers for the feedback!\", \"comment\": \"We thank the reviewer for the comments. 
Please find below our responses to your concerns.\\n\\n>Although AlberDICE is briefly mentioned in Section 2, the paper does not adequately discuss the theoretical or empirical advantages of adopting the value decomposition instead of the resampling approach. In addition, the AlberDICE paper presents a simple multi-agent task where value function decomposition leads to a substantial performance loss.\\n\\nWe thank the reviewer for the insightful comments, which we greatly value and have made every effort to address. Here we would like to clarify that the key distinction between our algorithm and AlberDICE lies in the learning approach: our method learns the occupancy ratio at the global level, utilizing centralized learning and decentralized execution. In contrast, AlberDICE focuses on learning individual occupancy ratios, which restricts its ability to capture the interconnections between local agents. We have added clarifications on Page 3 to emphasize these differences.\\n\\n>(1) A detailed discussion comparing ComaDICE and AlberDICE should be added, (2) AlberDICE should be included as a baseline in all experiments, (3) ComaDICE should be tested on the XOR game in the AlberDICE paper \\n\\nIn the updated version, we have included comparisons with AlberDICE and OptiDICE and updated our experiments accordingly. Additionally, we tested ComaDICE on the XOR game, with the results reported in Section B.8 of the appendix. These results demonstrate that, for this simple game, ComaDICE achieves a best policy value similar to that of AlberDICE.\\n\\n>In Sections 6.1 and 6.3, SMACv1 environment is discussed; however, corresponding experimental results for SMACv1 are not included in the main manuscript...\\n\\nThank you for the suggestion. We have now moved the results for SMACv1 to the main paper.\\n\\n> Do the authors assume that $s$ can be sufficiently represented from $o$? 
Notations for states and observation are confusingly used.\\n\\nIn the POMDP setting, global states are not fully accessible and are instead represented by the joint observations from local agents. For simplicity, we use global state notation, but it actually refers to the corresponding joint observations. We have added a clarification on Page 7.\\n\\n\\n> Figure 2 lacks information on which task was selected for each benchmark. (e.g. 5_vs_5 or 10_vs_10 in protoss, or expert or medium data quality in Hopper) Could you clarify?\\n\\n For this figure, we used all the tasks and calculated the average win rates or returns for comparison. We have added a sentence to clarify this point.\\n\\n**We hope our revisions and our response address the reviewers\\u2019 concerns and further clarify our contributions. If there are any additional questions or comments, we would be happy to address them.**\"}", "{\"title\": \"We thank the reviewer for the additional feedback!\", \"comment\": \"We thank the reviewer for the prompt additional feedback, which we highly appreciate.\\n\\n> Can you confirm whether the following statement in my original review is true? ...\\n\\nWe believe your observations align well with the high-level perspective of our method. To clarify further, our approach involves learning a global policy in the form of an occupancy measure by decomposing the Lagrange multiplier in the DICE framework, i.e., $\\\\nu$, which can be interpreted as a value function. Subsequently, our policy extraction step focuses on learning decentralized policies derived from the outcome of the DICE approach, specifically $w^{tot}$.\\n\\n> If we view the goal of MARL as $\\\\max L(\\\\pi^{tot})$ , where is the space of factorized policies, is this what the objective of $\\\\max \\\\widetilde{L}$ (Line 262) is doing?\\n\\nThank you for the question. 
To clarify, the objective of $\\\\min \\\\widetilde{L}$ is also to find an optimal $\\\\pi^{tot}$, but it does so within the space of **occupancy measures**. Here, instead of factorizing the global policy directly, we factorize the Lagrange multiplier $\\\\nu$. As a result, $\\\\min \\\\widetilde{L}$ does not output a globally optimal policy directly, but rather the *occupancy ratio* between the optimal policy and the behavior policy. The optimal policy is then extracted from this occupancy ratio through weighted behavior cloning (BC), where we propose learning *decentralized policies* that match with the resulting *occupancy ratio.*\\n\\n> I am having trouble understanding clearly how ComaDICE is able to do this, and solve the XOR Game for example. It would also be helpful if you can provide what the converged and values (both global and individual values) were for the XOR Game for better understanding.\\n\\nWe thank the reviewer for the question. We believe that, for the XOR game, our algorithm operates in a manner similar to AlberDICE. Both approaches aim to learn policies that maximize the reward while aligning with the behavioral policies. However, whereas AlberDICE performs this alignment at the level of individual agents, our approach operates at the global level.\\n\\nFor the XOR game, we have all numerical results demonstrating how ComaDICE achieves the optimal values reported in the paper. However, let us explain intuitively how ComaDICE solves the XOR game. Consider the dataset {AB}, where this observation yields a high reward (i.e., 1). When ComaDICE solves $\\\\min \\\\mathcal{L}$, it seeks a policy that maximizes the reward across the dataset while aligning with the behavioral policy represented by the dataset {AB}. Consequently, it will return a global optimal policy (in the form of an occupancy ratio) that assigns the highest possible probabilities to the joint actions {AB}. 
\\nSubsequently, our weighted behavior cloning (BC) step learns decentralized policies that also assign the highest possible probabilities to the joint actions {AB}, returning the desired optimal policy observed in our experiments. The same reasoning applies to other datasets, such as {AA, AB, BA}, ensuring that ComaDICE learns the correct optimal policies across different scenarios.\\n\\n**We hope our responses address your concerns. If the reviewer would like a more detailed explanation of the policy values returned by our algorithm for the XOR game, we would be happy to provide additional clarifications and discussions.**\"}", "{\"title\": \"We thank the reviewers for their feedback!\", \"comment\": \"We sincerely thank the reviewers for their constructive and thoughtful comments, which we have addressed to the best of our ability. Our response was slightly delayed as we worked to incorporate additional baselines into our benchmarking environments, which required significant time. One reason for the delay was that the published source code for the AlberDICE paper did not support SMAC environments, necessitating communication with the authors to obtain the required code, which took additional time. We now have a complete set of updated results and are ready to respond to your feedback.\\n\\nTo address the reviewers\\u2019 concerns, we have made the following major updates to the paper: \\n- Included two DICE-based MARL algorithms for comparison (OptDICE and AlberDICE). \\n- Added experiments on the XOR game environment (as used in the AlberDICE paper). \\n- Clarified several points raised in the feedback. \\n\\nAll updates are highlighted in magenta. We hope our revisions address the reviewers\\u2019 critiques and further clarify our contributions. 
If there are any additional questions or comments, we would be happy to address them.\"}", "{\"comment\": \"Thank you for the clarifications.\\n\\nCan you confirm whether the following statement in my original review is true?\\n\\n> It is known that every MDP has a deterministic optimal policy, which, if extended to the multi-agent setting, can be factorized into independent policies. A very optimistic reading of the paper would interpret that the decomposition procedure is actually learning optimal value functions over decentralized policies.\\n\\nIf we view the goal of MARL as $\\\\max_{\\\\pi_{tot}}J(\\\\pi_{tot})$, where $\\\\pi_{tot}$ is the space of factorized policies, is this what the objective $\\\\mathcal{\\\\tilde L}$ (Line 262) is doing? I am having trouble understanding clearly how ComaDICE is able to do this, and solve the XOR Game for example. It would also be helpful if you can provide what the converged $\\\\nu, q, A_\\\\nu $ and $w$ values (both global and individual values) were for the XOR Game for better understanding.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"We thank the reviewer for the additional feedback!\", \"comment\": \"We thank the reviewer for their prompt and valuable feedback, which we greatly appreciate. We also thank the reviewer for highlighting OptDICE and AlberDICE during the rebuttal process, as this has significantly improved our paper and helped us better clarify our contributions.\\n\\n> In detail, what are the core differences which allow ComaDICE to learn interconnections between local agents and handling credit assignment? It would be helpful if you can write a detailed comparison using both formulations.\\n\\nWe thank the reviewer for the question. 
We believe the key difference that enables ComaDICE to better capture the interconnections between agents lies in its approach to learning global values and its factorization method, which integrates information from local agents to construct the global value and global occupancy functions. Specifically, we propose learning a global occupancy ratio and leveraging a factorization approach to decompose the global learning variables into local ones, using local information. This approach helps capture how each local agent contributes to the global objective. This is also a common advantage of such value factorization approaches.\\n\\nAdditionally, in our policy extraction phase, we learn local policies using a shared global occupancy ratio, $w^{tot}$ (Eq. 9 in our paper). We believe this design inherently addresses aspects of credit assignment across agents, which is a feature lacking in AlberDICE. \\n\\n> Also on a related note, I would suggest emphasizing the ability for ComaDICE to learn a factorized policy which is a global optimum. This would also clarify the strength of ComaDICE over AlberDICE (which learns Nash policies).\\n\\nWe thank the reviewer for the suggestion. This is indeed a key advantage of ComaDICE over AlberDICE, and we will make sure to emphasize this point in the revised manuscript.\\n\\n> However, I still have concerns whether it does indeed a globally optimum $\\nu$. Theorem 4.2 is used to claim that $ \\mathcal{\\tilde L}$ is convex in $\\nu$, but does solving this lead to the same $\\nu$ as in the original problem without value decomposition? My understanding is that this is not the case, and either an assumption (e.g. IGM) or a separate theorem is required.\\n\\nWe thank the reviewer for the insightful comment. 
We agree that the mixing network approach imposes restrictions on the space of $\\\\nu$, which could result in the solution to $\\\\min \\\\mathcal{\\\\tilde L}$ being suboptimal and potentially different from the one derived from the original objective (without decomposition). However, it is well-established that a two-layer feedforward neural network with non-linear activations, such as ReLU, possesses **universal approximation** capabilities. Therefore, in theory, the mixing network can approximate any global $\\\\nu$ value, implying that solving $\\\\min \\\\mathcal{\\\\tilde L}$ could return the global optimum for the original problem.\\n\\nAs an additional note, if the mixing network is a simple linear combination (as used in our experiments), the solution to $\\\\min \\\\mathcal{\\\\tilde L}$ may indeed be suboptimal unless additional assumptions are satisfied, such as the global $\\\\nu$ being linearly decomposable. We will incorporate this discussion into the updated paper to clarify this point.\\n\\n>Finally, regarding notation, I think it would be much simpler if states are used throughout the paper rather than joint observations. The main reason is that, as far as I understand, the derivations and the proof actually require the global state rather than joint observations (which are not equivalent to the state). Of course, partial observability can be introduced in the practical algorithm in Section 5.\\n\\nWe thank the reviewer for the suggestion, which we find very useful. While we actually prefer this approach, we previously encountered feedback suggesting that global states are not fully available and are instead represented by joint observations, thus using global states would be confusing. This led us to use a mixture of global states and joint observations in our formulation. We are happy to adopt the notation you suggested and will update the paper accordingly.\\n\\n**We hope our responses address your concerns. 
We also sincerely thank the reviewer for their additional suggestions, which are very valuable for further improving the paper. If you have any further questions or comments, we would be happy to address them.**\"}", "{\"title\": \"We thank the reviewers for the feedback!\", \"comment\": \"We thank the reviewer for the comments. Please find below our responses to your concerns.\\n\\n> A very optimistic reading of the paper would interpret that the decomposition procedure is actually learning optimal value functions over decentralized policies. ... However, it is worth noting that this is more of a statement of the potential of the paper rather than a strength of its current version, as these points are not specifically addressed in the current draft.\\nFurthermore, ComaDICE can be more scalable in comparison to AlberDICE [1] which requires alternating optimization.\\n\\nIn this paper, we propose a method to learn a globally optimal state-action occupancy by integrating the DICE framework with decentralized learning principles. While the DICE framework provides an effective approach for addressing out-of-distribution (OOD) issues, as demonstrated in prior DICE-based offline RL studies, the decentralized learning principle ensures that our algorithm remains scalable and efficient. Compared to AlberDICE, in addition to being more scalable, as you mentioned, our algorithm is better in capturing the interdependencies between local agents and handling credit assignment across agents through the use of a mixing architecture.\\n\\n> It is not clear what the main problem is that ComaDICE is solving. ....\\n\\nOur algorithm addresses the OOD issue in a manner similar to other DICE-based offline RL algorithms, such as OptDICE and AlberDICE. Specifically, alongside maximizing the global reward, we incorporate a divergence term into the objective function to ensure the learned policy remains close to the behavior policy. 
The main distinction between our algorithm and AlberDICE lies in the learning approach: our method learns the occupancy ratio at the global level, leveraging centralized learning and decentralized execution. In contrast, AlberDICE focuses on learning individual occupancy ratios, which limits its ability to capture the interconnections between local agents. We have added clarifying details on Page 3 to highlight these differences.\\n\\n> The derivation is based on OptiDICE [2] but this is not explicitly mentioned, ... Furthermore, the final algorithm closely resembles OMIGA [3].\\n\\nWe would like to note that while OptiDICE is designed for the single-agent setting, our work extends its DICE-based approach to the multi-agent setting, which requires an extensive analysis on the connection between local policies and the global objective. Theoretically, we have included a closed-form expression for the first-order derivative of the objective function in our Proposition 4.1, which we believe is not available in the OptiDICE paper. To clarify this, we have added a detailed explanation after our proposition on Page 5.\\n\\nAdditionally, we have included discussions on Page 3 to highlight the differences between our algorithm, AlberDICE, and OMIGA. Furthermore, new numerical experiments have been provided to compare our algorithm with AlberDICE and OptiDICE.\\n\\n> What is the main purpose for the value decomposition and learning individual Q-functions? \\n\\nThe primary motivation for decomposing the Q-function is to enable the decomposition of the advantage function, which is a key component of our objective. In our multi-agent setting, local rewards are unavailable, making it challenging to directly compute local advantage functions. To address this, we learn local Q-functions and use them to derive the local advantage functions. 
Simultaneously, the mean squared error (MSE) is optimized to ensure that the global Q-function and state-value function align well with the global rewards. An explanation has been added to Page 7 to clarify this point. \\n\\n> AlberDICE and OptiDICE are missing as the main baselines.\\n\\nWe have included OptiDICE and AlberDICE for comparison and updated our experiments.\\n\\n>What is the purpose of Theorem 4.2, ..\\n\\nOur theorem demonstrates that the loss function is concave with respect to $\\nu$ when any mixing network with non-negative weights and convex activations is employed. This guarantees that the training process will remain stable when optimizing over $\\nu$. Consequently, under our mixing architecture, minimizing the loss with respect to $\\nu$ ensures that the global optimum will be achieved.\"}", "{\"comment\": \"We thank the reviewer for taking the time to read our responses and provide additional feedback.\"}", "{\"comment\": \"Thank you for your detailed response. I appreciate the effort to address my concerns. But when I consider the contribution of the paper, I'd like to maintain my score.\"}", "{\"title\": \"I appreciate the effort by the authors. But some remaining concerns..\", \"comment\": \"I'd like to thank all of the authors for the hard work during the rebuttal, especially in providing extensive experiments with additional baselines and extra toy example (XOR Game) results. It seems pretty clear that ComaDICE performs well across a variety of environments and outperforms baselines, including AlberDICE and OptiDICE. 
I believe these new results strengthen the paper.\\n\\n\\nHowever, I still have some remaining concerns and questions, so I hope you can clarify them:\\n\\n> **Compared to AlberDICE, in addition to being more scalable, as you mentioned, our algorithm is better in capturing the interdependencies between local agents and handling credit assignment across agents through the use of a mixing architecture.**\\n\\n> **In contrast, AlberDICE focuses on learning individual occupancy ratios, which limits its ability to capture the interconnections between local agents.**\\n\\nWhile AlberDICE does learn individual occupancy ratios, they are solving a reduced MDP which also considers the other agents' current policy, and their training procedure also falls under CTDE. \\n\\nIn detail, what are the core differences which allow ComaDICE to learn interconnections between local agents and handle credit assignment? It would be helpful if you can write a detailed comparison using both formulations.\\n\\nAlso on a related note, I would suggest emphasizing the ability for ComaDICE to learn a factorized policy which is a global optimum. This would also clarify the strength of ComaDICE over AlberDICE (which learns Nash policies).\\n\\n>**Yes, under our mixing architecture, ComaDICE can theoretically learn the global optimum for $\\nu_i$ (for all $i$), as guaranteed by the convexity with respect to $\\nu$ established in Theorem 4.2.**\\n\\nHowever, I still have concerns whether it does indeed learn a globally optimal $\\nu$. Theorem 4.2 is used to claim that $\\mathcal{\\tilde L}$ is convex in $\\nu$, but does solving this lead to the same $\\nu$ as in the original problem without value decomposition? My understanding is that this is not the case, and either an assumption (e.g. IGM) or a separate theorem is required. \\n\\nFinally, regarding notation, I think it would be much simpler if states are used throughout the paper rather than joint observations. 
The main reason is that, as far as I understand, the derivations and the proof actually require the global state rather than joint observations (which are not equivalent to the state). Of course, partial observability can be introduced in the practical algorithm in Section 5.\"}", "{\"comment\": \"> Regarding whether $\\\\widetilde{L}$ is learning the optimal policy over factorized policies ...\\n\\nWe thank the reviewer for the additional question, which provides an opportunity to further discuss and clarify our contributions. Our learning objective operates in the space of occupancy measures, and it is generally challenging to prove that optimizing this objective is equivalent to optimizing over factorized policies, particularly when incorporating the f-divergence and a non-linear mixing network. However, under certain conditions, we believe that such factorization properties can be observed.\\n\\nSpecifically, when examining the objective function $\\\\widetilde{L}$ on Page 3, if the mixing network has a linear structure and the f-divergence is chi-square (as employed in our experiments), the inverse function $f^{-1}$ also takes a linear form. In this case, the global objective function can be decomposed into local objective functions with variables $\\\\nu_i$. Minimizing $\\\\widetilde{L}$ under these conditions is approximately equivalent to optimizing each local objective function $\\\\widetilde{L}_i$. Minimizing these local functions yields local occupancy ratios $\\\\rho_i / \\\\mu_i$. \\n\\nThus, under this linearity setting, optimizing the global function $\\\\widetilde{L}$ effectively approximates learning factorized occupancy ratios. \\n\\nIn addition, under a more general setting, proving the equivalence under a general f-divergence function and a two-layer mixing network is challenging. 
However, if we consider linear combinations that approximate the non-linear f-divergence and mixing network, we can assert that optimizing our global function is approximately equivalent to learning factorized occupancies.\nWe hope this explanation clarifies how our value factorization method operates. We will incorporate this discussion into the updated version of the paper.\n\n> Regarding the Matrix Game, can you explain intuitively how ComaDICE is able to solve the XOR Game using the {AB,BA} dataset.\n\nWe thank the reviewer for the question. Both $\\pi(AB) = 1$ (and the others =0) or $\\pi(BA) = 1$ are optimal solutions to our learning problem. In our experiments, ComaDICE converges to the case where $\\pi(AB) = 1$ and $\\pi(BA) = 0$.\n\nLet us explain why this happens. Our objective function consists of two terms: one aims to maximize the reward, and the other minimizes the divergence between the learned policy and the dataset. When the dataset consists of $\\{AB, BA\\}$, the occupancy-matching term tends to favor a policy that assigns positive probabilities to both $AB$ and $BA$. However, if both $AB$ and $BA$ have (significantly) positive probabilities, this implies that the first player would take both actions A and B with some positive probability, leading to a lower reward. In other words, exactly matching the dataset distribution would result in low reward.\n\nThus, to optimize the overall objective, ComaDICE assigns the highest probability to one action pair (in this case, $AB$), ensuring that the policy achieves a better balance between maximizing the reward and minimizing divergence. This explains why ComaDICE converges to this solution.\n\n\n**We hope our responses address your concerns. 
If the reviewer has additional questions or wishes to continue this discussion further, we would be happy to provide further clarifications and engage in additional discussions**\"}", "{\"comment\": \"Thank you for addressing my previous concerns. I appreciate the effort put into the additional content, which has strengthened the manuscript significantly.\\n\\nHowever, I have some observations regarding the formulation section of ComaDICE. The current presentation lacks a professional and cohesive mathematical structure. As it stands, the section appears more like a draft or homework exercise rather than a polished and rigorous exposition expected in a professional context. I recommend reorganizing this section to present the derivations and explanations with greater clarity and structure, adhering to standard mathematical writing conventions.\\n\\nAdditionally, the notations in this section require refinement for consistency and precision. For instance, the notation for the occupancy measure, $\\\\rho^{\\\\mathbf{\\\\pi}_{\\\\text{tot}}}$, appears to be inconsistently defined. In Equations (1) and (2), it is associated with the support space $\\\\mathcal{O} \\\\times \\\\mathcal{A}$, whereas starting from Equation (3), it shifts to the support space $\\\\mathcal{S} \\\\times \\\\mathcal{A}$. This inconsistency may lead to confusion and undermines the readability of the paper. I strongly encourage the authors to clarify and standardize the usage of notations throughout the manuscript to ensure precision and ease of understanding for the reader.\\n\\nAddressing these points will significantly enhance the clarity and professionalism of the paper, making it more accessible and impactful to the audience. Thank you for considering these suggestions.\"}", "{\"title\": \"We thank the reviewers for the feedback!\", \"comment\": \"We thank the reviewer for the comments. 
Please find below our responses to your concerns.\\n> ComaDICE\\u2019s theoretical analysis relies on non-negative weights and convex activations in the mixing network for stability. While this aids convergence, it may limit the model\\u2019s ability to capture complex inter-agent dynamics...\\n\\nWe would like to emphasize that non-negative weights and convex activations have been widely used in prior MARL algorithms with mixing architectures, such as QMIX, QTRAN. Our choice to adopt this setting is driven by its crucial role for achieving strong algorithmic performance that was experimentally demonstrated in previous studies. For instance, [1] shows that mixing networks with negative weights or non-convex activations result in significantly worse performance. We added some lines on Page 6 to clarify this point.\\n\\n[1] The Viet Bui, Tien Mai, and Thanh Hong Nguyen. Inverse factorized q-learning for cooperative multi-agent imitation learning. Advances in Neural Information Processing Systems, 38, 2024.\\n\\n> High Computational Cost: ComaDICE incurs a significant computational cost ...\\n\\nIn our algorithm, the use of f-divergence is particularly advantageous. Its closed-form formulation and convexity enable a closed-form solution for the inner maximization problem, allowing us to reformulate the min-max problem into a non-adversarial one. This significantly enhances the training process. Importantly, our experiments clearly demonstrate that the algorithm is highly scalable and efficient, even for SMACv2, which is widely regarded as one of the most high-dimensional and challenging MARL benchmarks.\\n\\n> The ablation study shows that performance decreases when using a more complex network. Why is that the case? 
If more data were available or the model was improved, could the results potentially differ?\\n\\nThe low performance of the 2-layer mixing network shows that, in our offline setting, the 2-layer structure is overly complex for capturing the interdependencies between agents, leading to overfitting. Increasing the amount of offline data might reveal more information and potentially improve the performance of the 2-layer mixing network. However, this is not practical, as it would require significantly more storage and make training computationally expensive and infeasible.\\n\\nWe have added an additional discussion to Section B.6 in the appendix to clarify this point. If our explanation remains unclear or if you have any further questions, we would be happy to provide additional clarification.\\n\\n**We hope our revisions address the reviewers\\u2019 critiques and further clarify our contributions. If there are any additional questions or comments, we would be happy to address them.**\"}", "{\"comment\": \"We thank the reviewer for continuously engaging in the discussion and providing additional feedback. We have now incorporated our discussions into the updated version of the paper. Specifically, we have added **Section B1** to the appendix to detail our discussion on the factorization aspect of ComaDICE. Additionally, we have expanded **Section B8** of the appendix to explain how ComaDICE solves the XOR game. In particular, we have included discussions on how ComaDICE provides the correct optimal policies for the datasets under consideration and why OptDICE fails in these examples. All updates are highlighted in magenta for easy reference.\\n\\nWe hope these updates address your remaining concerns. 
If you have further questions or suggestions, we would be happy to address them and make additional updates to the paper before the revision deadline.\"}", "{\"comment\": \"Regarding whether $\\mathcal{\\tilde L}$ is learning the optimal policy over factorized policies, I understand the optimization is over occupancy measures. I am asking whether the **implicitly** learned optimal policy is over factorized policies (before policy extraction). For instance, perhaps it is possible to say that since $\\rho (s, a) = \\rho(s) \\pi(a|s)$ where $\\rho(s)$ is the state marginal, and we want $\\pi(a|s)$ to have a factorized form, factorizing the value is actually learning an optimal stationary distribution over factorized policies. This is not exact and just a general idea but something similar would help understand intuitively what value factorization is doing here.\n\nRegarding the Matrix Game, can you explain intuitively how ComaDICE is able to solve the XOR Game using the ${AB, BA}$ dataset? For instance, looking at Table 26, it seems like $w(AB)$ would converge to some positive value while $w(BA) = 0$ in order to deterministically choose AB. How is ComaDICE able to do this despite both AB and BA being in the dataset?\"}", "{\"comment\": \"I thank the authors for their valuable responses. Regarding Section 4, while the derivation itself is interesting, I believe this section could benefit from a clearer structure. For example, for Section 4.1, I suggest explicitly defining the problem at the outset and clearly highlighting the differences between your work and established approaches. For the complete proof, I think it is still essential for completeness of this work, which you can add in the appendix.\", \"some_minor_points\": [\"Why is \\\\mu_{tot} not bold, given that it is also a vector?\", \"Some text formatting seems strange; for example, why are terms like \\\"s.t.\\\" and \\\\sum in Eq. 
(3)) is bold?\", \"Lines 201 and 215 appear to serve the similar function, but line 201 is not labeled, whereas line 215 is labeled as Eq. (6). Could you clarify or adjust for consistency? If line 201 is not important in this section, I recommend you to keep it as inline equation to highlight the problem you are dealing with.\", \"Some equations lack proper punctuation, such as commas or periods. Please ensure these are included for grammatical and stylistic accuracy. E.g., comma in Eq (3) should be a period; Eq (4) lacks a comma; Eq (6) lacks a period, etc.\"]}" ] }
5o0phqAhsP
Learning under Temporal Label Noise
[ "Sujay Nagaraj", "Walter Gerych", "Sana Tonekaboni", "Anna Goldenberg", "Berk Ustun", "Thomas Hartvigsen" ]
Many time series classification tasks, where labels vary over time, are affected by label noise that also varies over time. Such noise can cause label quality to improve, worsen, or periodically change over time. We first propose and formalize temporal label noise, an unstudied problem for sequential classification of time series. In this setting, multiple labels are recorded over time while being corrupted by a time-dependent noise function. We first demonstrate the importance of modeling the temporal nature of the label noise function and how existing methods will consistently underperform. We then propose methods to train noise-tolerant classifiers by estimating the temporal label noise function directly from data. We show that our methods lead to state-of-the-art performance under diverse types of temporal label noise on real-world datasets.
[ "label noise; time series; healthcare; classification" ]
Accept (Poster)
https://openreview.net/pdf?id=5o0phqAhsP
https://openreview.net/forum?id=5o0phqAhsP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zaRLY3ZR0n", "vmMgSPG7U0", "s1ggVyq4FH", "pGY5RunxAA", "nxa2rjMCwf", "imMtCS3BOZ", "b81m3MeHNH", "ZUpszG3KQT", "Xz8ul15aIc", "VnvAACL5Iq", "VIF5lRooi7", "UKAD7q7EhX", "KjlRqzKvzq", "IBlWWJGPko", "DhW55BCPqF", "Bm1T1IUi8Y", "AtywjJlbQ1", "5uMYBUUw68", "5IdrLql6IY", "2PSxiniygo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732543340010, 1732477521499, 1732543352878, 1732244790887, 1734142358166, 1732245127931, 1730790684502, 1737523403157, 1732658667137, 1730039792556, 1732544264556, 1732244929896, 1732245017117, 1730660870810, 1732734170254, 1732244546764, 1732244439494, 1730607124683, 1732543322029, 1732434142626 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Area_Chair_RtMs" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_gNk9" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_Nmrx" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_EoQ6" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_EoQ6" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_Nmrx" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_qqUX" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission549/Reviewer_qqUX" ], [ "ICLR.cc/2025/Conference/Submission549/Authors" ], [ "ICLR.cc/2025/Conference/Submission549/Reviewer_EoQ6" ] ], "structured_content_str": [ "{\"title\": \"Follow-up\", \"comment\": \"Thank you again for your valuable feedback! We've provided a detailed response to your comments. In case the OpenReview system didn't notify you, we wanted to bring it to your attention here as well.\\n\\nIf you find our response unsatisfactory, please don't hesitate to share your concerns with us. We are more than happy to continue the discussion.\\n\\nThank you for your time and consideration.\"}", "{\"title\": \"Response to Reviewer EoQ6\", \"comment\": \"Thank you for your response!. Following your suggestion for W1, we have added the references and discussion about non-stationary environments in the Limitations and Related Work (Section 6) \\u2014 please see the latest revision. With this change, we are glad to notice that W1 and W3 are resolved.\\n\\nRegarding W2, we believe there is still a misunderstanding:\\n\\n* **When we say**: \\\"The only assumption the learner needs is about the class of functions parameterizing Q\\\" \\n* **What we mean to say is**: \\u201cA practitioner has to decide on what type of *algorithm* they want to use to model Q.\\u201d \\n * This is not an *assumption* but rather a *design decision*. \\n * This decision is part of *any* ML pipeline, where practitioners decide on the type of model to train based on the problem domain. \\nFor example, choosing a Transformer vs an LSTM for a language-modeling task. \\n * **This is a core property of *all* ML and is not unique to our paper.**\\n\\nFor further evidence, in our paper, we use a DNN to model Q. This can capture *any* temporal noise function because DNNs are universal function approximators. 
The DNN *learns* Q directly from data, with no prior assumptions about the temporal noise function, and it works well for all the types of temporal label noise we use in the dataset - we have clarified this in Section 3.4.\\n\\nWe hope this clarifies the point \\u2014 please let us know if not!\\n\\nThank you again for your engagement during the Discussion period.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Thank you again for your valuable feedback! We've provided a detailed response to your comments. In case the OpenReview system didn't notify you, we wanted to bring it to your attention here as well.\\n\\nIf you find our response unsatisfactory, please don't hesitate to share your concerns with us. We are more than happy to continue the discussion.\\n\\nThank you for your time and consideration.\"}", "{\"title\": \"Response to Reviewer Nmrx (I)\", \"comment\": [\"Thanks for your time and detailed feedback! We've addressed most of the questions and comments below. If there is anything else, please let us know!\", \"> **W1) \\u201c(important) \\u2026the appendix of the paper is\\u2026 dissociated to the main paper\\u201d**\", \"Thank you for suggesting these revisions! We have made all suggested changes to the Appendix. In summary, we have made the following changes:\", \"Streamlined the appendix by removing Appendix A (not referenced in the main article) and moving extra supplemental figures to the anonymous repository (e.g., Appendix G.2 has been removed). 
To clarify, Appendix A originally described a secondary, less-performant noise-tolerant sequential loss function.\", \"All figures in the Appendices now have detailed captions.\", \"Proposition 1 now stands out without Appendix A, further highlighting the proof of our primary theoretical result.\", \"The assumptions listed in the Appendices were equivalent to the ones in the main text - we have unified the description of the assumptions to ensure that this is clear to future readers.\", \"Moved definitions and quantities for proofs into a table, thank you for this suggestion!\", \"> **W2) Regarding notation and problem formulation:**\"], \"we_have_revised_our_paper_as_follows_to_address_the_concerns_about_notation_and_problem_formulation\": \"* $q_t$ denotes a probability that is conditioned/dependent on time. We have clarified in Section 2 that this notation is to highlight this property.\\n* We have elaborated on the assumptions we use to make them more intuitive for the reader. In Preliminaries (Section 2), we clarify that Assumption 1 requires that the current observation is independent of the future observations (i.e., the present does not depend on the future) and Assumption 2 specifies a feature-independent noise regime - both of which are standard and intuitive. \\n* In Section 3.2, we have added a key citation on convex optimization which motivates our use of the Frobenius norm in our optimization objective.\\n* We have fixed the typos in line 260 and line 277, thank you for pointing these out!\\n* We have also clarified the dimensionality of the input to $h_\\\\theta$ in Section 2.1. $h_\\\\theta$ accepts as input $x_{1:t}$, where $t \\\\leq T$. Thank you for pointing this out!\\n* Lastly, we have improved the description of the role of the Lagrangian multiplier in Section 3.2. 
It is the Lagrangian multiplier used to satisfy the equality-constrained optimization problem in Eq 2.\\n\\n> **Q1) \\u201cIf the noisy label is always accurate, can we use it for prediction?... I understand this is a slightly different setting, as the goal of the paper is to learn a predictor.\\u201d**\\n\\nThis is an interesting idea, and is outside the scope of our problem, as you note. Our goal is to develop a time-series classifier that predicts clean labels at each time step based solely on the features. The main challenge in incorporating the noisy label into predictions is exactly as you mentioned: we cannot reliably determine when the noisy label is accurate and when it is not. If the learner had this information then that implies they have some prior knowledge of the noise model. We do not assume this to be the case in our paper, and instead learn this noise model from data.\\n\\nThis point is interesting, as there may be alternative strategies to learning under temporal label noise. This is a promising direction of future research and our work can inspire other such directions.\\n\\n> **Q2) \\u201c[Can you clarify]... the difference \\u2026 between Eq(3) and the model discussed in \\u201cdiscontinuous estimation\\u201d. It seems to me that t is an integer, and Qw(t) could be completely different than Qw(t+1) in principle\\\"**\\n\\n$Q(t)$ absolutely could be completely different from $Q(t+1)$ \\u2013 this is exactly why we designed the Discontinuous method, which assumes independence between neighboring timesteps and therefore can capture this local temporal variance.\\n\\nSo these models differ in *how they model temporal noise.* We introduced multiple methods in order to be as flexible for different practitioner needs. In short, the Discontinuous estimation learns parameters for a $Q_t$ at each time-step, treating each time-step independently. 
Continuous learns a unified function $Q$ with parameters $\\\\omega$ (using a DNN) that represents a function of $t$ across all time-steps. They both use the same optimization strategy (Eq 3).\\n\\nTo make this distinction clearer, we have added more points in Section 3.4 - where we compare and contrast Discontinuous with Continuous. We hope this clarifies, but are happy to elaborate further if needed.\\n\\n> **Q3) \\u201cWhat is the temporal relationship discussed in line 274?\\u201d**\\nThe temporal relationship is baked into the Continuous method, which learns a single, time-dependent noise function. The temporal relationship arises because we are explicitly learning a single, time-dependent function (i.e., temporal) to represent all time points.\"}", "{\"metareview\": \"A fairly good paper that should be accepted for publication at ICLR. I hope this paper can advance the research area label-noise learning.\\n\\nMy only comment is about related work. The standard non-temporal label noise can be regarded as a type of distribution shift, where the test distribution is clean and the training distribution changes to some noisy version. Then, the temporal label noise is actually a type of continuous distribution shift, where the change from test to training is also time-dependent. Even though continuous distribution shift focused on continuous covariate shift rather than continuous class-posterior shift (i.e., label noise), it is still related to temporal label noise as a bigger topic covering the problem under consideration. However, the term distribution shift didn't appear at all. 
I hope you can acknowledge that your temporal label noise is a special case of continuous distribution shift, so that you have also contributed to distribution shift research in addition to label-noise research.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal addressed most of the concerns from the reviewers.\"}", "{\"title\": \"Response to Reviewer EoQ6\", \"comment\": \"Thanks for your time and detailed feedback! We've addressed most of the questions and comments below. If there is anything else, please let us know!\\n\\n\\n> **W1) My primary concern is with the problem formulation, or more specifically, the goal of the problem\\u2026in time-series problems, the environment is constantly changing, and the optimal model for different timestamps should also be changing.**\\n\\nThis appears to be a misunderstanding. Time series indeed represent changing environments, but it\\u2019s overwhelmingly common to assume the *relationship* between inputs and outputs is fixed because the outputs can still depend on the inputs (e.g., ARIMA makes this assumption). Any ML model trained to predict a label based on a window of time series makes this assumption. \\n\\nIf this question references non-stationary time series, where the generative model changes over time, then this is out of our scope. Many domains feature stationary time series and most methods assume stationarity, which allows for clearer analysis and is often sufficient for practical use. 
Indeed, most state-of-the-art time-series approaches make this stationarity assumption, and it does not present a problem in practice (see e.g., recent papers from top conferences [NeurIPS 2024](https://arxiv.org/abs/2411.01842), [NeurIPS 2024](https://openreview.net/forum?id=UE6CeRMnq3&noteId=7LmUCl1XoB), [ICML 2024](https://proceedings.mlr.press/v235/woo24a.html), [ICLR 2024](https://openreview.net/forum?id=bWcnvZ3qMb)).\\n\\nFor further empirical evidence, we see from our own experiments on several real-world time-series datasets that we learn near-optimal models without considering non-stationarity.\\n\\n> **W2) The paper appears to assume that the function Q is known to the learner, and focuses only on optimizing the parameter w.**\", \"we_want_to_flag_a_misunderstanding_here\": \"**we never assume $Q$ is known to the learner**. The only assumption the learner needs is about the class of functions parameterizing $Q$ (e.g., DNNs). $Q$ is parameterized by $\\omega$, which is learned *entirely* from the noisy labels, without any prior assumptions about its specific functional form. The true underlying $Q$ is known only to us, the authors, as it allows us to measure how well our proposed methods can learn $Q$ - we report these results in Tables 2 and 3 (see Approx. Error).\\n\\nThe entire purpose of Section 3 is to construct methods that can learn $Q$ from data. To ensure this point is clear, we have modified the text to better motivate this section and outline the assumptions we make (and, more importantly, do not make).\\n\\nWe hope this clarifies any confusion! Please let us know if this is not the case.\\n\\n> **W3) The authors seem to only conduct experiments on low-dimensional datasets. I think that including experiments on larger-sized datasets would strengthen the results and make the findings more convincing.**\\n\\nWe disagree - these datasets are of a reasonable size as each instance is roughly 100 time steps long. 
So for each dataset, we have NxT instances with each instance having D dimensions (e.g., the Sleep dataset contains 96,400 unique instances, each with 7 features and a label). Additionally, these datasets have been widely adopted as benchmarks in time-series research and are not exclusive to this paper.\\n\\nThat said, if there are bigger time-series datasets (with sequences of labels at each time-step) you would have liked to see included, please let us know, and we will do our best to incorporate them in a revised version during the discussion period!\\n\\n\\u2014 \\n\\nWe hope to have clarified any misunderstandings and questions you had. However, if there are any other questions we can answer in the discussion period, please let us know and we are happy to provide more details.\\n\\nThank you again!\"}", "{\"summary\": \"The manuscript introduced temporal label noise in time series classification tasks and proposed a novel framework that is robust to it. Experiments were conducted on 4 datasets with synthetic noise injection and a real-world noisy labelled dataset, in which the method proposed in the manuscript is superior to other methods. The authors also visualized the dynamic evolution of label noise over time.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe problem proposed by the authors is novel and interesting, and they evaluate it on real-world datasets.\\n2.\\tThe method introduced in the manuscript is cleverly crafted and has a theoretical foundation.\", \"weaknesses\": \"1.\\tIn many settings, the performance of the Plug-In method is inferior to that of static methods.\\n2.\\tOnly one of the five datasets in the manuscript contains both clean and noisy labels. 
In the remaining real-world scenarios, there is no experimental evidence to confirm that \\u201cthe label noise evolves over time\\u201d actually exists.\\n3.\\tThe experimental scenarios are focused on healthcare; validating the work in more diverse scenarios would broaden its applicability.\", \"questions\": \"1.\\tWhat is the meaning of variable d in line 126? Does that mean a multivariate time series with d variables?\\n2.\\tWhen performing a train-test split on the datasets, were individuals also split?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I thank the authors for their detailed response and for taking action on improving the current draft. The authors improved the presentation of the appendix and technical results, and I have decided to increase my score.\"}", "{\"summary\": \"The paper introduces the problem of temporal label noise, where label quality fluctuates over time due to time-dependent noise. This type of noise may improve, deteriorate, or change periodically, impacting the accuracy of predictions. The authors define this problem and highlight that existing methods fail to handle the temporal aspect of label noise effectively. They propose methods to estimate the time-varying noise function directly from data, allowing the development of noise-tolerant classifiers. Experiments on real-world datasets validate their proposal, demonstrating its effectiveness under various types of temporal label noise.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem of Temporal Label Noise is both interesting and important, and this paper is the first to formally define it.\\n\\n2. The paper is well-organized, clearly written, and easy to follow.\\n\\n3. 
The mathematical symbols and equations are well-defined and easy to understand.\", \"weaknesses\": \"1. My primary concern is with the problem formulation, or more specifically, the goal of the problem. In Section 2.1, the paper aims to \\\"estimate parameters $\\\\hat{\\\\theta}$ for a model robust to noise,\\\" which implicitly assumes the existence of a **fixed and static optimal model** $\\\\theta^\\\\star$\\u2014i.e., that p(y_t|x_t) remains consistent across all timestamps. However, in time-series problems, the environment is constantly changing, and the optimal model for different timestamps should also be changing.\\n\\n2. The other major concern relates to the methodology. The paper appears to assume that the function Q is known to the learner, and focuses only on optimizing the parameter w. However, I think that **obtaining an appropriate function Q is the most challenging aspect of this problem**. Although in Section 3.2, the authors select Q as a fully connected neural network as a general form for the practical scenarios, it seems to be inconsistent with the earlier discussions in Table 1.\\n\\n3. Another concern is about the experiments. The authors seem to only conduct experiments on low-dimensional datasets. I think that including experiments on larger-sized datasets would strengthen the results and make the findings more convincing.\\n\\nPlease let me know if I have misunderstood any part of the paper.\", \"questions\": \"See Weaknesses above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for the response. It appears that I did not misunderstand the paper: the function type of Q is indeed known to the learner, and the learner is only required to learn the parameter w. 
While this design poses no significant issues for previous works that used this design to learn a classifier (from a class of functions), the authors' statement, \\\"As shown in Table 1, we can capture a wide variety of temporal noise using this representation\\\" (line 186), seems somewhat overstated. Specifically, it implies the ability to capture **any** type of noise function, which may not be the case (the type of noise function is **known**, not **learned**). I encourage the authors to clarify this point in the revised version to avoid potential overclaims.\\n\\nOverall, the response has addressed most of my concerns, and I have decided to raise my score accordingly.\"}", "{\"title\": \"Response to Reviewer Nmrx (II)\", \"comment\": \"> **Q4) \\u201cHow [is] Section 2, in particular section 2.2,...related to the proposed method in Section 3?\\u201d**\", \"the_high_level_structure_of_these_two_sections_is_as_follows\": \"* Section 2.2 introduces the temporal noise-tolerant loss function we propose in our paper. Using the temporal noise function we can learn a noise-tolerant time series classifier from noisy data.\\n* Section 3 proposes a method that can identify this temporal noise function from the data while simultaneously optimizing for the loss in Section 2.2.\\n\\nFor further details, our loss in Section 2.2 works by treating the noisy posterior as the matrix-vector product of a noise transition matrix and a clean-class posterior. That is, \\\\${p}(\\\\tilde{y}\\\\_t \\\\mid x\\\\_{1:t}) = Q\\\\_t^\\\\top \\\\cdot p(y\\\\_t \\\\mid x\\\\_{1:t})\\\\$. We achieve this by minimizing the error between \\\\$\\\\tilde{y}\\\\_t\\\\$ and \\\\$Q\\\\_t^\\\\top \\\\cdot h\\\\_\\\\theta(x\\\\_{1:t})\\\\$ (see Definition 2).\\n\\nIn Section 3 (Eq 2), we propose an equality-constrained optimization problem to identify the temporal noise function $Q(t)$. The equality constraint in Eq 2 decomposes the noisy label posterior in the same exact way as Section 2.2. 
We can more clearly see this link in the term $R_t(\\\\theta,\\\\omega) = \\\\frac{1}{n}\\\\sum_{i=1}^{n}\\\\ell_t(y_{t,i}, Q_\\\\omega(t)^\\\\top h_\\\\theta(x_{1:t,i}))$, which is a key component of the objective we optimize in our proposed method. As you can see, $R_t(\\\\theta,\\\\omega)$ has the same form as the loss function we develop in Section 2.2. We can observe that this approaches zero as the equality-constraint in Eq 2 is met. The extra machinery (i.e., Augmented lagrangian optimization strategy) is to transform the constrained optimization problem in Eq 2 into a form we can minimize using standard ML optimizers.\\n\\n\\u2014\\nIn the newly updated revision of the paper, we have incorporated changes to address your questions and comments. If there are any other questions we can answer in the discussion period, please let us know.\"}", "{\"title\": \"Response to Reviewer qqUX\", \"comment\": \"Thanks for your time and detailed feedback! We've addressed most of the questions and comments below. If there is anything else, please let us know!\\n\\n> **Q1) How is learning achieved when weights are initialized for each time step in discontinuous estimation / Can the estimator in the discontinuous case be replaced by a simpler model (with lesser number of parameters)?**\\n\\nThe discontinuous case is actually the simplest possible model we can use. In this case, we are just learning the specific entries for the temporal noise function (a C \\\\times C matrix) at each time-step $t$. For example, in the binary-setting each time-step will only have 2 parameters we are estimating (because each row of $Q_t$ are probabilities that sum to 1). We think it is quite reasonable to achieve learning of 2 real-valued parameters with the dataset sizes we use.\\n\\nHowever, you bring up an interesting perspective, one that we think actually demonstrates the value of our approach. Choosing a specific model is largely a practitioner's decision. 
They can choose their own approach and still use it with the machinery we construct. For example, we use a DNN to model $Q$ in the Continuous case, but a practitioner could use a GP or set of ODEs to achieve the same goal based on their specific use-case.\\n\\n> **Q2) It is interesting that both continuous and discontinuous estimations perform quite similarly on the real-world stress detection example. Does this mean that the proposed approach works best when there is large variance in noise rates?**\\n\\nThis is an insightful observation. There are several possible explanations for this effect, one of which aligns with your suggestion. It is plausible that the performance of each method depends on the specific properties of the dataset and the practitioner\\u2019s design choices. In this case, the discontinuous method may perform well due to high variance in noise rates, where neighboring time-steps exhibit volatility. Another potential explanation is that different individuals in the dataset may follow slightly different temporal noise models, requiring each method to learn the most representative one across the entire dataset - therefore the method with the ability to capture the most variability (Discontinuous) will perform well.\\n\\nThis observation reinforces the importance of studying temporal label noise, a largely unexplored yet important area of research. We hope that discussions like this will inspire further study in this field.\\n\\n> **Q3) Is there an assumption that all data instances have [an] equal number of time steps?**\\n\\nNot at all! Our problem setup only requires a model that takes the form $p(y_t \\\\mid x_{1:t})$. In practice, the RNN architecture we use can handle sequences of varying lengths, even within the same dataset. However, the discontinuous approach does have a limitation: it requires all sequences to have an equal number of time steps, as a specific $Q_t$ is defined for each time step. 
In contrast, the continuous approach is more flexible, as it learns $Q(t)$, a function of time, which can theoretically adapt to any time step $t>0$.\\n\\n\\u2014 \\n\\nThank you again for the feedback, and if there are any other questions we can answer in the discussion period, please let us know.\"}", "{\"summary\": \"The paper discusses the problem of time series classification, where the label noise can vary over time.\\nEach instance is a sequence over T time steps, where at iteration $t$ we have access to the features at time $t$, and a noisy label $\\\\tilde y_t$ obtained from the true label $y_t$. Each instance is i.i.d., and the assumption is that the true label $y_t$ at time $t$ only depends on the past features $x_{1:t}$, and the noisy labels are conditionally independent of the features given the true labels. In particular, the authors assume there exists a noise mapping $Q_t$ that describes the noise process at time $t$. They propose a novel method to learn a classifier that maps sequences $x_{1:t}$ to a label $y$. This model simultaneously approximates the noisy mapping $Q_t$ (Eq 3), which is used to approximate the true labels, which are used during training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Learning from time series data is an interesting problem. The paper addresses a challenging problem, where the label noise can change over time, and it is well-motivated. Overall, the introduction of the paper and the experimental sections are well-written and easy to follow. I found the use of the minimum-volume simplex assumption as an objective to solve the problem to be intriguing.\", \"weaknesses\": \"(**important**). The appendix of the paper is only a draft and it looks like it was left unfinished. It also seems to be dissociated from the main paper.\\n- Appendix A is self-contained and not referenced in the main paper. 
It is not discussed what the content of this section adds as a contribution.\\n- Appendix B looks dissociated from the main paper. First of all, three assumptions are introduced (they seem related to the 2 assumptions used in the main paper. In this case, why re-introduce them, and why do we use 3 assumptions rather than 2?). Most of the proofs are just a sequence of mathematical equations with text. Lines 810 to 840 are just a sequence of equations without text (maybe do a table?). The proof of Proposition 1, which is the only theoretical result in the main paper, does not appear (is Theorem 3 the proof of Proposition 1? Why does it have different notation?).\\n- Section G.2 is empty. Pages 25 to 47 include a long sequence of figures with almost no text. I recommend that the authors only include the figures that are used to \\u201csay\\u201d something, together with a text explanation.\\n\\n\\nThe technical section is also sometimes unclear.\\n\\n\\n\\nThe notation is sometimes unclear. What is q_t in line 140? According to the notation defined in the paper $\\\\mathcal{X} \\\\subseteq \\\\mathbb{R}^{d \\\\times T}$, but the function $h_{\\\\theta}$ that has domain $\\\\mathcal{X}$ can also have an input matrix $d \\\\times t$. \\n\\n\\nThere is no comment on Assumptions 1 and 2 (except that they are two standard assumptions). I believe a few lines commenting on those assumptions would be helpful.\\n\\nIn lines 259-261, why is the Frobenius norm a convex surrogate for the volume? (A citation is also probably needed).\\n\\n\\n\\nIn Line 260, should $R_t$ be defined over $\\\\tilde{y}$ rather than $y$?\\n\\n In lines 262-264, my understanding is that $\\\\lambda$ is the Lagrange multiplier of the constraint of Equation~2. I would be clearer on this, since $\\\\lambda$ does not appear in Eq 2.\", \"questions\": \"See also weaknesses.\\n\\nThe model does not use the noisy label in the prediction (the predictor h_{\\\\theta} only depends on x_{1:t}). 
I believe it would be interesting to include the noisy label in the prediction of the true label (i.e., if the noisy label is always accurate, can we use it for prediction? I understand this is a slightly different setting, as the goal of the paper is to learn a predictor).\\n\\n\\n\\n\\nThe fact that we learn a noisy label structure Q_w(t) that is \\u201ccontinual\\u201d over time seems implicit in Eq. (3). However, it is not clear what the difference is between Eq(3) and the model discussed in \\u201cdiscontinuous estimation\\u201d. It seems to me that $t$ is an integer, and $Q_w(t)$ could be completely different than $Q_w(t+1)$ in principle (what is the temporal relationship discussed in line 274?)\\n\\nIt is a bit unclear how Section 2, in particular section 2.2, is related to the proposed method in Section 3. It seems to me that the \\u201closs of the classifier\\u201d is embedded in the constraint of Equation 2. Lines 233-234 say that Eq(2) minimizes the forward temporal loss as in Def 2, but this loss does not actually appear in Equation 2.\", \"typos\": \"277 continuuity\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response. This addresses all my concerns. So, I will maintain the same score.\"}", "{\"title\": \"Response to Reviewer gNk9\", \"comment\": \"Thanks for your time and detailed feedback! We've addressed most of the questions and comments below. If there is anything else, please let us know!\\n\\n>**W1) \\u201cIn many settings, the performance of Plug-In method is inferior to that of static methods.\\u201d**\\n\\n\\u2018Plug-in\\u2019 is only one of three temporal methods we propose. Overall, we find that \\u2018Continuous\\u2019 is by far the best temporal method, outperforming all static methods (and outperforming \\u2018Plug-In\\u2019). 
Further, \\u2018Plug-In\\u2019 is a temporal extension of the static \\u2018Anchor\\u2019 method (not \\u2018VolMinNet\\u2019). So the fairest comparison is \\u2018Plug-In\\u2019 vs \\u2018Anchor\\u2019, where we actually see \\u2018Plug-In\\u2019 consistently outperforms static \\u2018Anchor\\u2019. As discussed in Section 3.4, each temporal method has advantages and disadvantages, depending on the data and task. It is up to a practitioner to balance performance with the various advantages and disadvantages of each choice. \\n\\nWe have clarified this point in the Results (Section 4.2) - thank you for pointing it out!\\n\\n\\n> **W2) \\u201cOnly one of the five datasets in the manuscript contains both clean and noisy labels. In the remaining real-world scenarios.\\u201d**\\n\\nA lack of data is a key issue in this research area\\u2014our work is actually a step towards fixing this issue because we show that addressing label noise improves accuracy. And by including real-world data, our work can inspire other researchers to consider how temporal label noise may affect their problems, ultimately encouraging the curation of new datasets.\\n\\nAs the idea of temporal label noise is new, it will take time for such datasets to emerge, but our work is a crucial first step. For example, in the static label noise setting the standard dataset with both clean and noisy labels is [Clothing1M](https://paperswithcode.com/dataset/clothing1m).\\nThis dataset was only developed after the pace of research on static noisy labels picked up. Similarly, our work can spark progress in understanding and addressing temporal label noise.\\n\\nWe have highlighted this point in the Conclusion (Section 6). \\n\\n\\n> **W3) \\u201cThe experimental scenarios are focused on healthcare\\u201d**\\n\\nCollecting time-stamped observations is indeed a common process across many fields. 
This paper was inspired by our own real-world work in healthcare: for instance, while collecting periodic health surveys alongside wearable device time-series data, we observed that participants often mislabeled their activities depending on what they were doing and when.\\n\\nThat being said, if there are specific time-series datasets (with sequences of labels at each time-step) you suggest, please let us know, and we will do our best to incorporate them in a revised version during the discussion period.\\n\\n> **Q1) \\u201cWhat is the meaning of variable d \\u2026 a multivariate time series with d variables?\\u201d**\\n\\nYes, that is correct! We have updated the Preliminaries (Section 2) to clarify that our inputs are multivariate time series with d variables.\\n\\n> **Q2) \\u201cWhen performing a train-test split on the datasets, were individuals also split?\\u201d**\\n\\nSplitting strategies depend on the dataset. For example, in the real-world stress detection demonstration, the training and testing splits did not share individuals, as there is one time series per individual. For other datasets, such as \\u2018moving\\u2019 and \\u2018senior\\u2019, we used the given train-test splits.\\n\\nWe have added this information to Dataset Details (Appendix E.1) so it is clear for future readers.\"}", "{\"title\": \"Common Response\", \"comment\": \"We thank all reviewers for their time and feedback!\\n\\nWe are pleased that reviewers recognized the **\\u201cnovel\\u201d**, **\\u201cinteresting\\u201d**, and **\\u201cchallenging\\u201d** [gNk9, EoQ6, Nmrx] problem of temporal label noise introduced in our work. Reviewers appreciated our **\\u201ccleverly crafted\\u201d** [gNk9] framework, which leverages a **\\u201ctheoretical foundation\\u201d**[gNk9] and **\\u201cclear and concise methodology\\u201d**[qqUX]. 
Overall, reviewers found our paper **\\u201cwell-organized and clearly written\\u201d**[EoQ6] with experiments **\\u201cdemonstrat[ing] practical utility\\u201d**[qqUX] in real-world datasets. Additionally, reviewers noted the **\\u201cvaluable empirical results\\u201d**[qqUX] that illustrate our approach\\u2019s robustness across various temporal noise settings and datasets.\\n\\nWe have responded to each reviewer\\u2019s comments individually and have also uploaded an updated version of the paper incorporating feedback from all reviewers. A brief summary of the changes is as follows:\\n* We have made changes to the paper based on reviewer feedback for clarity. In particular:\\n * Discussed the role of Assumptions 1 and 2 and their intuitive meaning\\n * Sign-posting in Section 3 to motivate the procedures we construct to learn the temporal noise function from noisy data\\n * Further clarification on the difference between Continuous and Discontinuous methods in Section 3.4\\n* We have polished the Appendix based on reviewer Nmrx\\u2019s suggestions. This includes moving extra supplemental figures not referenced in the text to our anonymous repo, and making the notation/proofs easier to follow.\\n* We have added a Limitations and Future Work section, which addresses questions from reviewers and also outlines future research directions that our work can inspire.\\n\\nOverall, we believe incorporating this feedback has improved the paper. We thank the reviewers for helping us further refine the paper and look forward to answering any remaining questions over the coming days.\\n\\n\\u2013 \\n\\nIf you feel we have not sufficiently addressed your concerns to motivate increasing your score, we would love to hear from you further on what points of concern remain and how we can improve the work in your eyes. Thank you again!\"}", "{\"summary\": \"The paper proposes a model to incorporate varying noise rates for time-series classification. 
The method specifies a matrix-valued function indicating label-noise distribution. This noise function is incorporated into a forward temporal loss over noisy labels that is used to learn a classifier (using a neural net) minimizing the loss. The authors describe three approaches: continuous, discontinuous, and plug-in. They empirically evaluate these approaches in various benchmark datasets. They also demonstrate the approach in a real-world dataset within the healthcare domain in which the authors claim that the problem of temporal noisy labels is prevalent in time-series classification. The empirical results clearly indicate that the proposed approach works well compared to SOTA static approaches that treat noisy labels as static rather than temporal.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1) The formulation of the proposed methodology is clear and concise.\\n2) The motivation and subsequent real-world example is well demonstrated.\", \"weaknesses\": \"(1) It is not clear how is learning achieved when weights are initialized for each time step in discontinuous estimation. It looks like the dataset is small for the learning models to converge as all the datasets largely seem to have less than 1000 samples.\\n(2) Can the estimator in the discontinuous case be replaced by a simpler model (with lesser number of parameters)? If so, can such an alternative be used as baseline mechanisms to compare for temporal noise (as all alternative mechanisms shown in the paper are for static cases)? \\n(3) It is interesting that both continuous and discontinuous estimations perform quite similarly on the real-world stress detection example. 
Does this mean that the proposed approach works best when there is large variance in noise rates?\", \"questions\": \"(1) Is there an assumption that all data instances have equal number of time steps?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"Thank you again for your valuable feedback! We've provided a detailed response to your comments. In case the OpenReview system didn't notify you, we wanted to bring it to your attention here as well.\\n\\nIf you find our response unsatisfactory, please don't hesitate to share your concerns with us. We are more than happy to continue the discussion.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"Thank you to the authors for their response.\\n\\nRegarding Q1, I recommend that the authors include the references in the revised version. Additionally, it would be beneficial to include a discussion on applying the method to handle non-stationary environments.\\n\\nFor Q2, my concern remains unresolved. Specifically, the statement \\\"The only assumption the learner needs is about the class of functions parameterizing Q\\\" suggests that the learner must know the type of function. However, I think that identifying an appropriate class of functions for Q is the most challenging aspect of this problem.\"}" ] }
5nldnvvHfw
Adaptive Exponential Decay Rates for Adam
[ "Weidong Zou", "Yuanqing Xia", "Weipeng Cao", "Bineng Zhong" ]
Adam and its variants, including AdaBound, AdamW, and AdaBelief, have gained widespread popularity for enhancing the learning speed and generalization performance of deep neural networks. This optimization technique adjusts weight vectors by utilizing predetermined exponential decay rates (i.e.,$\beta_1$ = 0.9, $\beta_2$ = 0.999) based on the first moment estimate and the second raw moment estimate of the gradient. However, the default exponential decay rates might not be optimal, and the process of tuning them through trial and error with experience proves to be time-consuming. In this paper, we introduce AdamE, a novel variant of Adam designed to automatically leverage dynamic exponential decay rates on the first moment estimate and the second raw moment estimate of the gradient. Additionally, we provide theoretical proof of the convergence of AdamE in both convex and non-convex cases. To validate our claims, we perform experiments across various neural network architectures and tasks. Comparative analyses with adaptive methods utilizing default exponential decay rates reveal that AdamE consistently achieves rapid convergence and high accuracy in language modeling, node classification, and graph clustering tasks.
[ "Optimization method", "deep neural networks", "Adam and its variants" ]
Reject
https://openreview.net/pdf?id=5nldnvvHfw
https://openreview.net/forum?id=5nldnvvHfw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykfIQ3Inc0", "vKyCr7MmnM", "qEjkxy2UC9", "p50qz8sgLj", "oelu4qxi0z", "IVjXhj59zf" ], "note_type": [ "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1734703017442, 1731045587230, 1737523441745, 1729161040903, 1730715500158, 1730670094044 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1228/Area_Chair_1MSf" ], [ "ICLR.cc/2025/Conference/Submission1228/Reviewer_H8Ku" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1228/Reviewer_xBZT" ], [ "ICLR.cc/2025/Conference/Submission1228/Reviewer_4qKE" ], [ "ICLR.cc/2025/Conference/Submission1228/Reviewer_WjW1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces AdamE, a variant of the Adam optimizer that dynamically adapts the exponential decay rates based on the number of training steps, rather than relying on constant values such as $\\\\beta_1 = 0.9$ and $\\\\beta_2 = 0.999$. The method aims to improve convergence speed and overall performance by adjusting these momentum parameters.\\n\\nWhile the paper offers a rigorous theoretical analysis of AdamE from an optimization perspective, it suffers from several notable weaknesses. The writing is difficult to follow, reducing the paper\\u2019s overall readability. Additionally, the experimental validation is weak and limited, failing to demonstrate the practical utility of the proposed method convincingly. From a methodological standpoint, the idea is relatively straightforward, and as multiple reviewers pointed out, there are concerns about the correctness of the theoretical proofs. These issues undermine the credibility of the paper\\u2019s claims and conclusions.\\n\\nGiven these significant limitations, I recommend rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, all reviewers maintained a negative stance on the paper. 
The primary concerns included issues with the correctness of the theoretical proofs, the limited and weak experimental validation, and the paper\\u2019s overall readability. The authors did not provide effective responses to address these concerns, and as a result, the reviewers upheld their initial judgments to reject the paper.\"}", "{\"summary\": \"The paper proposes a new variant of Adam that automatically tunes the $\\\\beta_1$ and $\\\\beta_2$ of Adam.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The topic is relevant to the theme of ICLR.\", \"weaknesses\": \"**Weakness: most results in the script are already well-known in the literature, but not properly discussed. I did not find many new results in this script.** I elaborate as follows.\\n\\n \\n1. Many results & experiments are already discussed in the literature, but not cited. For instance\\n\\n \\\"In this study, we explore the effects of different combinations of exponential decay rates (\\u03b21 \\u2208 {0.5, 0.7, 0.9} and \\u03b22 \\u2208 {0.9, 0.95, 0.999}) on Adam\\u2019s performance in terms of these three aspects. \\\" \\n\\n \\\"The experimental outcomes for Adam, .., emphasize the critical role of appropriately setting \\u03b21 and \\u03b22 for Adam based on specific tasks in training DNNs.\\\"\\n\\n The above discussions on beta1 and beta2 have been extensively studied in [1]. Some other important works on the theory of Adam are also not cited, such as [2].\\n\\n2. The proposed AdamE uses decreasing beta1 and increasing beta2. A similar method is already studied as AdamNC in [2] and [3]. I do not see any new theoretical insights in this work. Further, the convergence analysis requires strong assumptions such as bounded gradients. Note that these types of assumptions have already been removed in the Adam analyses in [1], [4], and [5].\\n\\n3. The experiments are restricted to toy settings. 
The practical impact is limited.\\n\\n\\n\\n[1] Zhang, Y., Chen, C., Shi, N., Sun, R., & Luo, Z. Q. (2022). Adam can converge without any modification on update rules. \\n\\n[2] Reddi, S. J., Kale, S., & Kumar, S. (2019). On the convergence of adam and beyond.\\n\\n[3] Zou, F., Shen, L., Jie, Z., Zhang, W., & Liu, W. (2019). A sufficient condition for convergences of adam and rmsprop.\\n\\n[4] Li, H., Rakhlin, A., & Jadbabaie, A. (2024). Convergence of adam under relaxed assumptions.\\n\\n[5] Wang, B., Fu, J., Zhang, H., Zheng, N., & Chen, W. (2024). Closing the gap between the upper bound and lower bound of Adam's iteration complexity.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this paper, the authors propose a new adaptive optimization algorithm called AdamE. Instead of fixed exponential decay rates for the first and second moments, AdamE adopts adaptive exponential decay coefficients $\\\\alpha_t$ and $\\\\beta_t$ for the first and second moments, respectively. The authors provide both a theoretical convergence guarantee and empirical results for this new method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper provides both empirical results and a theoretical convergence guarantee for the new method AdamE. The theoretical results cover both the convex and non-convex cases. The empirical results demonstrate that AdamE indeed performs well on clustering data. Besides, by introducing a simple example with a quadratic objective, the readers can easily get the intuition of this method.\", \"weaknesses\": \"1. 
The quadratic objective example for illustration in Section 3, while easy to understand, is not suitable for comparing the performance of adaptive methods, which makes all of the illustrations and comparisons in Section 3 unconvincing. The reasoning is quite clear: since the objective function is a scalar, different algorithms have only two choices for the updating direction: positive or negative. Additionally, as a convex function, the quadratic objective has a single global minimum, denoted as $x^*$. As the iterates $x_t$ of optimizers approach $x^*$, the current gradient $g_t$ will also approach 0. Since Adam relies heavily on historical information for its updates, its update $\\\\frac{m_t}{\\\\sqrt{v_t}+\\\\epsilon}$ cannot adapt to a small value immediately as $x_t$ approaches $x^*$. In contrast, the coefficients $\\\\alpha_t$ and $\\\\beta_t$ of AdamE will converge to 0 and 1, allowing it to gradually rely more on the current gradient and behave similarly to gradient descent to some extent. Therefore, it is evident that AdamE can outperform Adam in this particular example, and I am confident that gradient descent would also perform significantly better in this simple scenario. However, I believe this does not necessarily imply that GD or AdamE is inherently superior to Adam in all cases.\\n\\n2. This paper exhibits a lack of novelty in its technical contributions. Specifically, the proof for the convex case is almost the same as in [1]. For the non-convex case, the proof sketch and main steps are also similar to those in [2].\\n\\n3. There might be major technical errors in the proofs of this paper. Firstly, in Theorem 2.2, the authors claim that the regret achieves an $O(\\\\sqrt{T})$ bound, which might be incorrect. The second term of formula (14) explicitly includes a factor of $\\\\sqrt{T}$, while $\\\\\\\\|g_{1:T,i}\\\\\\\\|_2$ implicitly contains a factor of $\\\\sqrt{T}$ as it is an $\\\\ell_2$ norm of a $T$-dimensional vector. 
Consequently, the second term of formula (14) is actually of order $O(T)$ instead of $O(\\\\sqrt{T})$. In contrast, there is no explicit factor $\\\\sqrt{T}$ in the second term of Theorem 10.5 in [1]. Secondly, in the proof of the non-convex case, the authors claim that $x\\_{t+1}-x\\_t = \\\\lambda\\_tv\\_t^{-1/2}m\\_t$, which is incorrect. The authors omit the existence of the stability constant $\\\\epsilon$. In comparison, the stability constant $\\\\epsilon$ exists in the convergence results of [2], while it disappears in this paper. \\n\\n4. The writing of this paper is relatively hard to follow. Firstly, although I understand that typos are inevitable in any written work, there are too many in this paper, rendering the mathematical derivations hard to read. For example, the $<$ in formulas (29) and (42) should be $>$. Besides, the proof lacks necessary explanations for some complex steps. In particular, during the proof of the convex case, the authors fail to use the indices from previous formulas to clarify how they derive the next formula, despite providing about 20 indices. Furthermore, there is significant inconsistency in notation. In the algorithm description, the authors use $d_q$ and $s_q$ to denote the first and second moments, while in the proof, it seems that they use $m_t$ and $v_t$ instead. I also suggest that the authors use bold notation to clearly remind the readers whether the variables are scalars, vectors, or matrices. Finally, I suggest that the authors separate the proof of Theorem 3.1 into several lemmas to enhance clarity and understanding.\\n\\n[1] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014\\n\\n[2] Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. 
arXiv preprint arXiv:1808.05671, 2018.\", \"questions\": \"Could the author explain why $\\\\lambda\\_{t-1}v\\_{t-1,i} \\\\geq \\\\lambda\\_{t}v\\_{t,i}$ holds on line 356?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Instead of using (0.9,0.999)-type constant coefficients to update $m$ and $v$ in Adam, the authors propose to use a specially designed sequence for the updates. The authors prove the convergence of the newly proposed algorithm in the online convex setting and the non-convex setting. The algorithm is tested on the DCRN network for the graph clustering task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors consider replacing the constant coefficients with a sequence of coefficients, which can help the algorithm perform better. Meanwhile, the authors give a theoretical analysis of the proposed algorithm.\", \"weaknesses\": \"1. In [1], they prove that when $\\\\alpha$ and $\\\\beta$ follow certain conditions, the Adam algorithm can converge. In [1], they have already considered sequence coefficients with more general results.\\n\\n2. There are some errors in the proof of the convex setting.\\n\\n(i) $\\\\|\\\\theta_n - \\\\theta_m\\\\|_2 \\\\leq D_2$ cannot lead to $\\\\|\\\\theta_t - \\\\theta^*\\\\|_2 \\\\leq D_2$. To prove this, one should first prove that $\\\\theta_t \\\\rightarrow \\\\theta^*$, where the definition of $\\\\theta^*$ is the optimal solution, not the limit point of the sequence $\\\\theta_t$.\\n\\n(ii) In equation (18), all of the second terms should be $m_{t,i}^2/v_{t,i}$ instead of $m_{t,i}^2/\\\\sqrt{v_{t,i}}$.\\n\\n(iii) How to reduce $\\\\sum_t \\\\sqrt{t v_{t,i}}$ to $\\\\sqrt{T,v_{T,i}}$?\\n\\n[1] Zou, Fangyu, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. 
\"A sufficient condition for convergences of adam and rmsprop.\" In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp. 11127-11135. 2019.\", \"questions\": \"1. Do the results in [1] imply the theoretical result of AdamE? If not, discuss the difference between the two results.\\n\\n2. Are the errors in the proof typos? If not, give the correct version of the proof.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces AdamE, a variant of the Adam optimizer that adapts the exponential decay rates dynamically (based on the number of training steps) rather than relying on the default static values (e.g., $\\\\beta_1 = 0.9$, $\\\\beta_2 = 0.999$). By adjusting decay rates based on the first and second moment estimates of gradients, AdamE aims to enhance convergence speed and overall performance in training deep neural networks. The authors provide theoretical convergence proofs in both convex and non-convex settings and validate AdamE\\u2019s performance through extensive experiments on various neural network tasks, including language modeling, node classification, and graph clustering.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The idea of using an adaptive approach to adjust decay rates in Adam is quite novel and original, and it addresses a notable challenge with existing Adam variants that rely on fixed hyperparameters, if it succeeds. However, I have concerns about the correctness of the theoretical claims in this paper and also about the insufficiency of the experiments. 
Please refer to the weakness.\", \"weaknesses\": \"**Correctness:** I have concern about the main theoretical results in the paper.\\n - Though the authors claim they achieve a $O(\\\\sqrt{T})$ regret bound in Theorem 2.2, Equation (14) indeed contains a linear regret term --- $\\\\sqrt{T} \\\\|g_{1:T,i}\\\\|_2$.\\n - In the proof of non-convex convergence case, from equation (42) to (43), the authors seem to flip the sign of the inequality by mistake.\\n\\n**Lack of motivation**: Section 2 doesn't provide a convincing explanation on the motivation on why AdamE should make Adam optimizes faster. From the 1d experiments, it seems all hyper choices converge pretty fast.\\n\\n**Lack of comparison to Adagrad**: When $t$ gets large, AdamE proposed in this paper essentially becomes AdaGrad, where $\\\\alpha_q \\\\to 0$ and $\\\\beta_q\\\\approx 1/q$.\\n\\n**Insufficient Experiments**: The authors only provide experiments on a few relative toy settings. I would like to see experiments on more standard benchmarks and architectures, e.g. resnet trained on Imagenet and transformers trained on common language datasets.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5ncdKonxd4
PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction
[ "Long Xing", "Qidong Huang", "Xiaoyi Dong", "Jiajie Lu", "Pan Zhang", "Yuhang Zang", "Yuhang Cao", "Conghui He", "Jiaqi Wang", "Feng Wu", "Dahua Lin" ]
In large vision-language models (LVLMs), images serve as inputs that carry a wealth of information. As the idiom ``A picture is worth a thousand words" implies, representing a single image in current LVLMs can require hundreds or even thousands of tokens. This results in significant computational costs, which grow quadratically as input image resolution increases, thereby severely impacting the efficiency of both training and inference. Previous approaches have attempted to reduce the number of image tokens either before or within the early layers of LVLMs. However, these strategies inevitably result in the loss of crucial image information, ultimately diminishing model performance. To address this challenge, we conduct an empirical study revealing that all visual tokens are necessary for LVLMs in the shallow layers, and token redundancy progressively increases in the deeper layers of the model. To this end, we propose PyramidDrop, a visual redundancy reduction strategy for LVLMs to boost their efficiency in both training and inference with neglectable performance loss. Specifically, we partition the LVLM into several stages and drop part of the image tokens at the end of each stage with a pre-defined ratio, creating pyramid-like visual tokens across model layers. The dropping is based on a lightweight similarity calculation with a negligible time overhead. Extensive experiments demonstrate that PyramidDrop can achieve a 40\% training time and 55\% inference FLOPs acceleration of LLaVA-NeXT with comparable performance. Besides, the PyramidDrop could also serve as a plug-and-play strategy for inference acceleration without training, with better performance and lower inference cost than counterparts. We hope that the insights and approach introduced by PyramidDrop will inspire future research to further investigate the role of image tokens in LVLMs and explore additional methods to enhance their efficiency.
[ "Large Vision Language Model", "Efficient Training", "Efficient Inference" ]
https://openreview.net/pdf?id=5ncdKonxd4
https://openreview.net/forum?id=5ncdKonxd4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s1kPo9Pfsf", "oXvLXIXX0U", "lG00vRCgwj", "fQVxdHomhW", "50HpeDWw27" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730611886039, 1730448740706, 1731573486781, 1729512483063, 1729687752178 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission657/Reviewer_YQm5" ], [ "ICLR.cc/2025/Conference/Submission657/Reviewer_H5EU" ], [ "ICLR.cc/2025/Conference/Submission657/Authors" ], [ "ICLR.cc/2025/Conference/Submission657/Reviewer_uEgW" ], [ "ICLR.cc/2025/Conference/Submission657/Reviewer_Gfov" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces PyramidDrop, a method to improve the efficiency of Large Vision-Language Models by progressively reducing visual tokens across model layers. Specifically, PyramidDrop addresses computational efficiency in two ways: 1) it divides the LVLM into several stages and preserves all visual tokens in shallow layers, 2) it then progressively drops tokens at the end of each stage based on a lightweight attention-based similarity calculation. The authors demonstrate that visual token redundancy increases in deeper layers of LVLMs, making this staged approach more effective. Experiments on LLaVA-NeXT show that PyramidDrop reduces training time by 40% while maintaining comparable performance across 14 different benchmarks. The method is also shown to be effective for inference acceleration, reducing FLOPs by 55% and enabling training with doubled input resolution while using only 70% of the original training time, with better performance on high-resolution tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method improve LVLM efficiency through progressive token reduction based on layer-wise redundancy analysis. 
Different from compressing tokens uniformly or only in early layers, PyramidDrop's stage-wise reduction aligns with the redundancy distribution of visual tokens in LVLMs.\n\n2. Experiments show the proposed method effectively reduces LLaVA-NeXT training time by 40% while maintaining performance across 14 benchmarks. It also enables training with doubled input resolution using only 70% of the original computational cost, demonstrating better efficiency-performance trade-off than existing methods like FastV.\n\n3. The proposed method could be deployed as a plug-and-play module without requiring architectural modifications or additional training. \n\n4. The method's design is simple yet effective, using only lightweight attention calculations for measuring token importance.\", \"weaknesses\": \"1. The technical novelty is very limited. The core idea of progressive token pruning has been previously explored in both LTP (Learned Token Pruning) [1] and Magic Pyramid[2] papers. LTP introduced learnable thresholds for token pruning across different layers, while Magic Pyramid combined progressive token pruning with early exiting. PyramidDrop's approach of stage-wise token reduction follows a similar progressive pruning strategy, making its technical contribution incremental.\n\n2. Another critical limitation of the proposed method is its incompatibility with FlashAttention, as it relies on explicit attention score for token dropping. Without analyzing this limitation or providing comparisons against FlashAttention-powered baselines, the paper leaves open questions about the method's true efficiency gains in terms of memory usage and inference latency.\", \"questions\": \"See weakness for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces PyramidDrop to adaptively drop image tokens in multimodal models. 
They partition the layers into fixed groups and use a hyperparameter to indicate the ratio of tokens to keep during training and inference.\\n\\nThis paper is a natural extension of FastV with more fine-grained operations to prune the image tokens progressively.\\n\\nThis paper achieves better performance than FastV on all benchmarks. At the same time, this method is compatible with FastV.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea is simple and straightforward\\n2. The writing is easy to follow. The writing logic is clear.\\n3. This paper beats FastV at the inference stage.\", \"weaknesses\": \"1. The overhead of token pruning operation. Though this operation does not cost much computation, more analysis on this is important since we want global speedup.\\n\\n2. Speedup metric. Previous token pruning methods[1, 2] provide both GFLOPs and global latency. This work provides only GFLOPs which does not convince me of the actual speedup on hardware. \\n\\n3. The effect of group size. As the main contribution of this paper, it does not include the discussion of the choice of group number and how will that influence performance and efficiency.\\n\\n4. As shown in Table 5, the performance improvement is limited(<1.0) compared with FastV while FastV does not require multiply-time token pruning. \\n\\nOverall this work is simple, but I am concerned that it may not meet the bar of ICLR.\", \"minor\": \"\", \"line_255\": \"FalshAttention\", \"line_253\": \"toke\", \"line_377\": \"PtramimdDrop\\n\\n[1] Bolya, Daniel, et al. \\\"Token merging: Your vit but faster.\\\" arXiv preprint arXiv:2210.09461 (2022).\\n\\n[2] Kong Z, Dong P, Ma X, et al. Spvit: Enabling faster vision transformers via latency-aware soft token pruning[C]//European conference on computer vision. 
Cham: Springer Nature Switzerland, 2022: 620-640.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"Current approaches focus on reducing the number of visual tokens either before or within the early layers of LVLMs (Large Vision-Language Models). This paper considers the inconsistency in visual redundancy across different layers and proposes the PyramidDrop, which effectively reduces redundant visual tokens in LVLMs. Extensive evaluations on multiple benchmarks demonstrate that this method enhances efficiency with negligible performance loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is coherent and logical, effectively demonstrating that visual redundancy varies across different layers. Based on this observation, the authors propose a hierarchical sparsification strategy.\\n2. The experiments are comprehensive, with the proposed algorithm being validated across multiple benchmarks, demonstrating its feasibility.\", \"weaknesses\": \"1. The `method` section lacks clarity, particularly in explaining the rationale behind using the last instruction token and all visual tokens to calculate attention. Was there any attempt to use other indexes of the instruction tokens for attention calculation?\\n2. The approach introduces additional overhead, as the attention between the instruction token and visual tokens can be directly derived from the transformer's attention without the need for additional key and value computations.\\n3. 
The comparative experiments are incomplete, as only inference-only algorithms are compared, lacking a comparison with similar train-inference token reduction approaches, like LLaMA-VID [1] and VoCo-LLaMA [2].\\n\\n[1] Li, Yanwei, Chengyao Wang, and Jiaya Jia. \\\"Llama-vid: An image is worth 2 tokens in large language models.\\\" In\\u00a0*European Conference on Computer Vision*, pp. 323-340. \\n\\n[2] Ye X, Gan Y, Huang X, Ge Y, Shan Y, Tang Y. VoCo-LLaMA: Towards Vision Compression with Large Language Models. arXiv preprint arXiv:2406.12275. 2024 Jun 18.\", \"questions\": \"1. How were the experimental results for the 0.0 ratio in Figure 1(a) obtained? Was the attention output of that layer directly used as the input for the LLM?\\n2. Since only the last instruction token is used to compute attention with all visual tokens, shouldn\\u2019t the visualization in Figure 5 tend to focus visual tokens on the last instruction token? Why does the attention highlight information such as \\\"1856\\\" and \\\"green dress\\\"?\\n3. Could you provide the practical speed comparison of the baseline and proposed method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper discusses the significant computational costs of processing images in large vision-language models (LVLMs) and proposes a new method called PyramidDrop, which reduces visual tokens across different layers to improve training and inference efficiency with minimal performance loss. PyramidDrop prunes image tokens progressively across model stages using a lightweight similarity calculation. Experiments show that it speeds up training and inference with results comparable to those of the original model.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper discusses the problem of computational efficiency in large vison-language models, which is a hot issue that urgently requires effective methods to accelerate speed.\\n2. The proposed method is easy to follow and replicate.\", \"weaknesses\": \"1. The contributions of this paper are not enough. The method is common, as it only uses rank and drop operations, like FastV, but with a hierarchical version. Additionally, the experimental results do not show particularly impressive outcomes, compared to other efficient methods that have been proven to work, such as VoCo-LLaMA[1], LLaMA-VID[2], or LLaVA-PruMerge[3].\\n2. Experiments are insufficient. In the training stage, this paper only compared the baseline with LVLM without rank and drop, with a lack of comparison with other efficient methods as point 1 demonstrated. In the ablation study, there is also no analysis of the impact of the number of layers chosen for token dropping.\\n3. There are some small typos, such as play a important in line 159, FalshAttention in line 256, and PtramimdDrop in line 377.\\n\\n[1] Ye, Xubing, et al. \\\"VoCo-LLaMA: Towards Vision Compression with Large Language Models.\\\" arXiv preprint arXiv:2406.12275 (2024).\\n\\n[2] Li, Yanwei, Chengyao Wang, and Jiaya Jia. \\\"Llama-vid: An image is worth 2 tokens in large language models.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n[3] Shang, Yuzhang, et al. \\\"Llava-prumerge: Adaptive token reduction for efficient large multimodal models.\\\" arXiv preprint arXiv:2403.15388 (2024).\", \"questions\": \"1. Experiments are insufficient. In the training stage, this paper only compared the baseline with LVLM without rank and drop, lack of comparison with other efficiency training methods, such as VoCo-LLaMA, LLaMA-Vid or LLaVa-PruMerge, etc.\\n2. In the ablation studies, there is also no analysis of the impact of the number of layers chosen for token dropping. 
In addition, have you tried different pruning ratios in different stages instead of using the same ratio in all stages?\\n3. I think PyramidDrop is a hierarchical token-pruning version of FastV. Can you describe more about the difference between your method and FastV? \\n4. Could you add the visualization of layer 2 in Figure 5?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5mJrGtXVwz
VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment
[ "Amirhossein Kazemnejad", "Milad Aghajohari", "Eva Portelance", "Alessandro Sordoni", "Siva Reddy", "Aaron Courville", "Nicolas Le Roux" ]
Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several complex steps before receiving any reward. Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning (RL) algorithm used for LLM finetuning, employs value networks to tackle credit assignment. However, value networks face challenges in predicting the expected cumulative rewards accurately in complex reasoning tasks, often leading to high-variance updates and suboptimal performance. In this work, we systematically evaluate the efficacy of value networks and reveal their significant shortcomings in reasoning-heavy LLM tasks, showing that they barely outperform a random baseline when comparing alternative steps. To address this, we propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates, bypassing the need for large value networks. Our method consistently outperforms PPO and other RL-free baselines across MATH and GSM8K datasets with fewer gradient updates (up to 9x), less wall-clock time (up to 3.0x). These results emphasize the importance of accurate credit assignment in RL finetuning of LLM and demonstrate VinePPO’s potential as a superior alternative.
[ "LLM", "Reasoning", "Credit Assignment", "RLHF", "Post-Training" ]
Reject
https://openreview.net/pdf?id=5mJrGtXVwz
https://openreview.net/forum?id=5mJrGtXVwz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNRCIGLNkJ", "wukLX6Jnql", "ws4QQZOxo6", "tKq2y5Zhif", "sxELZcIM7o", "pOzaCz0Ax7", "p61ahmUCzl", "oyuLme2wlw", "ogmJcAHAKp", "my95L5LFyO", "jyFPjLPTKD", "jJkTXx9wUY", "frf70ZsMzO", "ZKVZlI7buk", "YRYt7RfH5O", "Vda9vn0lUc", "Vd8CDobiii", "TGdXhlO1mx", "Q8b3cl9d3h", "Q5mw8ePr8d", "NjTDGAcMEX", "NMbsby1kkY", "IpBaBq1Jhu", "IR5o62Tk7P", "Gw8VCVTi7l", "Gsh9DUUZI8", "A7S07SULDJ", "7pLR9kJzd3", "7fkWqdW186", "6oPMP7pSpy", "5a5R8Ub1TI", "2Pz3dSAiXm", "24ccJ3kxDn", "1cyYkVVBSU", "1Maj06Dgc9", "17rj3nMp32" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733093536588, 1731051760128, 1732219956001, 1731254624405, 1730773418387, 1732481404886, 1732463209540, 1732024308895, 1734758198418, 1732517305081, 1733094031968, 1733180432723, 1732649605301, 1730642749510, 1732022458505, 1733080492511, 1732659078285, 1732741522594, 1733290157163, 1732512434474, 1732020345633, 1732220131421, 1733079446893, 1732734677236, 1732219892801, 1733181163139, 1737524052917, 1732674422793, 1732314651568, 1733180787596, 1732021739714, 1732220180347, 1732738326199, 1732020464425, 1732925654491, 1732021103377 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_S7We" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10429/Reviewer_wh2B" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_1TWU" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_1TWU" ], [ "ICLR.cc/2025/Conference/Submission10429/Area_Chair_XVKP" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Area_Chair_XVKP" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_3mkL" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_3mkL" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Reviewer_S7We" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ], [ "ICLR.cc/2025/Conference/Submission10429/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reviewer Response Deadline Tomorrow\", \"comment\": \"Dear Reviewer,\\n\\nThank you again for the 
fruitful discussion and your technical depth on this topic. We hope our previous exchange has addressed your remaining concerns, but if there are any outstanding questions or points, we\u2019d be eager to address them promptly, especially since the deadline for reviewer response is in one day. We hope you kindly consider a stronger assessment of the paper considering our recent exchanges.\n\nThank you for your time and thoughtful feedback!\"}", "{\"summary\": \"The paper proposes vine-PPO, which uses Monte Carlo-based estimates to replace the value function. This approach is far more accurate and therefore performs better than the parameterized value function. Although the cost could be a concern, the authors argue that inference or generation is much faster due to many inference-optimized modules. Additionally, because of the rapid increase in performance, it may even be more efficient.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The idea of Monte Carlo estimates, although it has been used in traditional RL tasks, is novel for PPO in the context of LLMs. I find it quite interesting that it can achieve superior results even with K=1.\", \"The applicability stemming from the fact that it only replaces the value function, allowing it to be used in many PPO-like methods, is highly beneficial.\", \"The analysis of the value function helps clarify the motivation.\", \"The proposed method is simple and easy to follow, and the paper is well-written.\"], \"weaknesses\": [\"Fundamentally, I think the difference between your approach and GRPO [1] and RLOO [2] is that you have fine-grained value estimations by generating multiple responses from each intermediate group state. However, since this involves more computation, I wonder about the trade-offs compared to GRPO.\", \"This question arises because you do not compare your method with GRPO and RLOO. 
As these methods also employ similar ideas, why only compare with the original PPO? The authors should clearly explain the selection of baselines, and efficiency comparisons should also include this line of research.\", \"Furthermore, I wonder why you do not report baselines that use finer credit assignment for the DPO objective. Since you report that PPO performs better in terms of credit assignment, I am curious how it still shows superiority even when DPO is combined with finer credit assignment.\", \"Additionally, in practical situations, if one needs to find an optimal\\u00a0K\\u00a0for training configuration, it\\u2019s unclear whether we can say that Vine-PPO is more efficient in general, as it might require more hand-engineering. However, training the value network also requires engineering, so I wonder about the complexity comparison between these methods.\", \"References\", \"[1] Shao et al. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models\", \"[2] Ahmadian et al. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs\"], \"questions\": [\"How does the method's dependency on\\u00a0K\\u00a0differ by model? I am also curious about the\\u00a0K\\u00a0ablation.\", \"Additionally, I think creating a graph to show the trade-off between larger\\u00a0K\\u00a0values and efficiency would be interesting.\", \"Very minor, but there is a missing period on line 264 (or 265).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updates and Feedback Request\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. With the discussion phase nearing its end, we\\u2019d be glad to address any further questions or suggestions to obtain a better evaluation. 
Importantly, we previously updated the paper with RLOO and GRPO results for the 1B model, and we\\u2019ve now added a new result for the 7B model, which completed today. Thank you again for your time and dedication to the review process.\"}", "{\"summary\": \"VinePPO uses Monte Carlo-based credit assignment, reducing reliance on large value networks and enhancing accuracy and efficiency. It outperforms PPO and other baselines on complex math tasks, particularly with challenging datasets. Performance improves with more Monte Carlo samples, demonstrating strong scalability potential.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"VinePPO uses Monte Carlo-based credit assignment, reducing reliance on large value networks and enhancing accuracy and efficiency. It outperforms PPO and other baselines on complex math tasks, particularly with challenging datasets. Performance improves with more Monte Carlo samples, demonstrating strong scalability potential.\", \"weaknesses\": \"1. Lack of baselines. I suggest the author adding value-network-free methods as baselines, particularly GRPO [1] which also uses a PPO-like objective with the average reward of multiple rollouts as the baseline for the policy gradient.\\n2. Misuse of terminology. According to the hyperparameter setting for PPO provided in the Appendix where $\\\\lambda = 1$ and $\\\\gamma = 1$, PPO should produce an unbiased estimate for the value function. So it is better not to use \\\"bias\\\" in Line 467 and 475 but to use \\\"inaccuracy\\\".\", \"questions\": \"Questions:\\n1. The results show that VinePPO is quite promising for LLM reasoning, but can we extend it to the more general alignment task?\\n2. Is there any intuitive or theoretical explanation for why value networks fail to provide accurate estimates?\\n\\n[1] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. 
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. CoRR, abs/2402.03300.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Large Language Models (LLMs) are increasingly being applied to complex reasoning tasks, ranging from solving math problems to developing code. The most common method for training these models is Proximal Policy Optimization (PPO), which addresses the credit assignment problem using a value network. However, this value network can be significantly biased, which may impact performance. The authors propose VinePPO, inspired by VineTRPO, to learn a value function from Monte Carlo (MC) samples. They demonstrate that the value function from PPO performs poorly, while the MC sample estimates of the value function show strong performance and leverage compute-efficient inference techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The author's observation that the issue with PPO was the value function estimates is very insightful, given that there has been a lot of work to replace PPO with new techniques.\", \"The paper was well written.\", \"The paper's experimental results provide a lot of interesting insights regarding issues around PPO's value function.\", \"The authors performed experiments across several tasks, model sizes, and model types.\", \"The authors' ablations studies show interesting pitfalls of the value function from PPO. 
Additionally, the authors clarify the tradeoff between VinePPO and PPO.\"], \"weaknesses\": [\"I understand that the paper focuses on addressing the pitfalls of PPO; however, comparing it with RLOO [1] would provide practitioners with valuable context on which algorithm they might want to use in practice.\", \"The paper lacks details on how the inference engines were utilized to accelerate data gathering.\", \"[1] Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs by Ahmadian et al. 2024\"], \"questions\": \"- Missing citations\\n - RL + LLM: [1, 4]\\n - RL: [2, 3, 5]\\n- How does VinePPO compare to the RLOO baseline as the value of K increases in RLOO?\\n-Did you do large-batch PPO updates? (Refer to [6] for the large-batch updates.) If you didn\\u2019t use the large-batch setting, essentially what you do is compute all the data statistics offline. This approach allows you to avoid loading the reward model onto the GPU, enabling you to increase your batch size much higher than if the reward model were loaded onto the GPU.\\n- Why is PPO more deterministic in early steps, while VinePPO is more deterministic in later steps, as mentioned in the \\\"Error per reasoning step\\\" section?\\n- Could you share a plot showing the \\\"explained variance\\\" of the value function you learn with normal PPO? (see [7])\\n\\n[1] Learning to Generate Better Than Your LLM by Chang et. al 2023\\n[2] Exploring restart distributions by Tavakoli et al. 2018\\n[3] Data-efficient deep reinforcement learning for dexterous manipulation by Popov et al. 2017\\n[4] Dataset Reset Policy Optimization for RLHF by Chang et al 2024\\n[5] Mastering the game of Go with deep neural networks and tree search by Huang 2016\\n[6] SimPO: Simple Preference Optimization with a Reference-Free Reward by Meng et al. 
2024\\n[7] http://joschu.net/docs/nuts-and-bolts.pdf\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the responses to my questions and concerns, especially for plotting the \\\"explained variance\\\" metric for all of the algorithms discussed in the paper. I have no further questions and will keep my score the same.\"}", "{\"title\": \"Reminder: Author-Reviewer Discussion Period Closing Soon\", \"comment\": \"This is a reminder that the author-reviewer discussion period will end on Nov 26 AoE.\\n\\nYour engagement during this phase is critical for providing valuable feedback and clarifications. If you have any remaining questions or comments, please take a moment to participate before the deadline.\\n\\nThank you for your contributions to this important process.\\n\\nAC\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their time and feedback.\\n\\nWe should note that we don\\u2019t completely agree with the provided summary. In our work we don\\u2019t show *\\u201cPPO barely outperforms a random baseline\\u201d*. Instead, we believe the reviewer meant that *\\u201cThe value network in PPO performs just slightly better than a random choice when trying to rank the values of actions.\\u201d*\\n\\nWe now address the concerns raised in the review. \\n\\n**Variance and efficiency of MC estimation**\\n\\nWhile variance and computation efficiency are valid concerns for any MC-based estimation, we studied these extensively in Sec. 6.4 of the initial submission (noted by review 1twu): specifically, in Figs. 4, 7, D.7 of the original paper, and now in Fig. F.22 and F.23 of the updated draft. 
Across all experiments, VinePPO with its MC value estimation consistently outperformed all baselines (including new ones suggested during the Rebuttal) in both efficiency and accuracy, as noted by Reviewer wh2B.\n\nAlso, while MC has been studied for a long time, we note that, as Reviewer S7We highlighted, \u201cThe idea of Monte Carlo estimates, although it has been used in traditional RL tasks, is novel for PPO in the context of LLMs.\u201d Moreover, MC value estimation in policy-gradient methods is limited to environments that allow intermediate state resets, which are rare in classic RL environments [1].\n\n**Importance of K**\n\nThe impact of K is studied in Fig. 4 of the original draft. Additionally, we study the effect of K on efficiency in Fig. F.23 of the updated paper. As noted by Reviewer S7We, even small values of K (e.g. 1 or 3) perform well and outperform PPO with its value network (Fig. 4). Additionally, as shown in Fig. F.23, higher values of K actually improve the wall-clock time efficiency in terms of reaching a target accuracy. \n\n**Generalizability**\n\nVinePPO is built on standard PPO from RLHF and introduces no additional assumptions to the RL formulation. Our empirical results demonstrate that VinePPO is more efficient than PPO, making it broadly applicable to RL problems in language environments where PPO is typically used, as noted by Reviewer S7We.\n\nWe highlight that the MATH dataset, our primary evaluation suite, includes competition-level problems with long trajectories (up to 2500 tokens). Common RLHF methods like DPO often struggle in such tasks [2], demonstrating the challenging nature of our setup.\n\n**Questions**\n> Q1. influence of K on performance and efficiency\n\nAs mentioned earlier, the impact of K on performance is analyzed in Fig. 4 of the original paper, showing that increasing K improves test accuracy. The effect of K on wall-clock efficiency is detailed in Fig.
F.23, where VinePPO with higher K values shows greater efficiency. Specifically, VinePPO with K=9 is slightly more efficient than K=3, and both significantly outperform K=1 in reaching a target accuracy.\n> Q2. does VinePPO outperform PPO when MC estimation is not accurate?\n\nIf very few MC estimates are used, high variance could theoretically hinder training. However, we did not observe this empirically. As shown in Fig. 4 and F.23, even with one MC sample, VinePPO outperforms PPO in both final performance and wall-clock efficiency (as noted by Reviewer S7We).\n> Q3. In Fig. 9, the ground truth is chosen as the result of 256 MC samples. Is this reasonable?\n\nYes. In our tasks, the value of a state follows a Bernoulli distribution representing the probability of successfully completing a solution. Assuming the maximum possible variance of this distribution (0.25), the variance of the ground truth estimator is 0.25 / 256 = 0.00097656, which is notably small.\n> Q4. are critical steps detected by VinePPO?\n\nThank you for the suggestion. Beyond Fig. 1.a in the original paper, additional examples are in Figures H.27\u2013H.29 of the updated draft. Fig. H.27 shows an insightful reasoning step with positive advantages recovered by VinePPO, while Fig. H.28 illustrates an erroneous step with negative advantages. However, high or low values reflect the policy\u2019s likelihood of solving the problem, which may not always align with human judgments of correctness (see Fig. H.29).\n> Q5. Some equations are not clear, for example: $s_{t+1} = s_t;[a_t]$\n\nAs noted in the background (lines 198\u2013200), $s_t;[a_t]$ refers to \u201cappending action $a_t$ to state $s_t$.\u201d We will clarify this in the camera-ready version. If there are other equations requiring clarification, we are happy to address them too.\n\nThank you again for your effort. Efficiency was a key focus of our work, and we conducted a thorough analysis of VinePPO\u2019s efficiency in the paper.
Across all experiments VinePPO consistently demonstrated superior efficiency and final performance compared to baselines. We hope this clarification and the additional empirical evidence address your concerns and encourage a fresh evaluation of our work.\\n\\n- [1] Trust Region Policy Optimization by Schulman et al. 2015\\n- [2] SimPO: Simple Preference Optimization with a Reference-Free Reward by Meng et al. 2024\"}", "{\"metareview\": \"(a) Summary of Scientific Claims and Findings\\n\\nThe paper introduces VinePPO, a reinforcement learning algorithm leveraging Monte Carlo-based methods to enhance credit assignment for fine-tuning large language models (LLMs) on reasoning tasks. VinePPO overcomes the limitations of Proximal Policy Optimization (PPO), particularly addressing issues with high variance and suboptimal performance of PPO\\u2019s value network in complex reasoning scenarios.\\n\\n(b) Strengths of the Paper\\n\\n1. The authors propose Monte Carlo-based credit assignment as a novel approach in the context of LLM fine-tuning.\\n\\n2. Extensive experiments on challenging reasoning datasets (MATH, GSM8K) validate VinePPO\\u2019s superior performance and efficiency.\\n\\n(c) Weaknesses of the Paper and Missing Elements\\n\\n1. The initial absence of comparisons with key baselines (e.g., GRPO, RLOO) was a major concern, and while the rebuttals addressed these issues, some uncertainties about the experimental setup persist.\\n\\n2. There is limited discussion on applying VinePPO to tasks beyond reasoning-intensive problems, particularly those with longer trajectories or greater computational complexity.\\n\\n3. The parameter K, central to Monte Carlo sampling, requires further investigation regarding its impact on computational cost and overall efficiency.\\n\\n(d) Decision and Rationale\\n\\nReviewers recognized VinePPO\\u2019s contributions to advancing reinforcement learning with human feedback (RLHF) for LLMs and its demonstrated empirical advantages. 
However, concerns about incomplete baseline comparisons, generalizability, and the lack of deeper theoretical insights resulted in mixed evaluations.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers appreciated the novel use of Monte Carlo-based credit assignment but noted the need for a comprehensive baseline analysis (addressed in the rebuttal).\n\nConcerns about variance in MC estimates and the generalizability of VinePPO were raised, with some ablation studies partially addressing these issues.\"}", "{\"comment\": \"Thank you for reading our rebuttal in depth and we\u2019re thrilled that you liked our experiments and analysis. We now answer the remaining concerns:\n\n> According to the GRPO paper, after applying GRPO, the performance on Math exceeds 50%. Was there any difference in the settings?\n\nWe appreciate the reviewer's attention to detail. In the GRPO paper, they use a different SFT model trained on a large (unpublished) mathematical instruction dataset containing 776K examples. This SFT model achieves 46.8% on MATH, improving to 51.7% after GRPO. In comparison, our SFT model, trained on a public MATH dataset (around 11.5K examples), scores 32.8%, which improves to 42.8% after PPO (and 46.0% after VinePPO). \n\n> Additionally, it seems that the benefit of GRPO diminishes for larger, well-performing models (being much more effective for the 1.1B model). Is there any specific reason for this?\n\nWe hypothesize that larger models result in more capable value networks, which may lead to better credit assignment. As a result, PPO with a value network might perform closer to or even better than methods like GRPO, which lack fine-grained credit assignment mechanisms. That said, even larger value networks are still brittle, struggling with diverse trajectories (as shown in Figure 8), questioning their true scalability.
\n\nWe hope that our response has addressed your remaining concerns and we are happy to engage in additional discussion if anything remains unclear.\", \"title\": \"Response To Remaining Concerns\"}", "{\"title\": \"One Day Left for Reviewer Response\", \"comment\": \"Dear Reviewer,\n\nThank you for your valuable insights and suggestions. As tomorrow is the final day for reviewer response, we wonder if you've had the chance to go over our previous response, and if you have any feedback on **the integration of highlighted papers**, **any remaining concerns about the RLOO baseline**, or **other aspects that prevent a stronger evaluation of our work.**\"}", "{\"title\": \"Final Feedback Request\", \"comment\": \"Dear Reviewer,\n\nAs today is the last day for responses, we kindly ask if you\u2019ve had a chance to go over our GRPO results and whether there are any remaining concerns preventing a stronger evaluation.\"}", "{\"title\": \"Feedback Request\", \"comment\": \"Dear Reviewer,\n\nYou raised efficiency as a concern regarding VinePPO. This was a primary focus of our work, and we extensively addressed it in the initial draft (see Section 6.4). During the rebuttal, we provided additional efficiency analysis as per your request. **All empirical results consistently demonstrate VinePPO\u2019s superior efficiency, achieving higher test accuracy in less wall-clock time in every experiment (see Figures 7, F.22, and F.23)**. As today is the final day for updates, we would appreciate knowing if any concerns remain that influenced your evaluation.\"}", "{\"summary\": \"The key motivation of this manuscript is to locate and solve a problem that arises while PPO is finetuning LLMs: the value network is inaccurate and has high variance. It finds that in heavy and complex reasoning tasks, PPO barely outperforms a random baseline due to this issue.
Thus, this paper proposes a simple and straightforward approach, called VinePPO, which computes values using unbiased Monte Carlo estimation and improves credit assignment. Extensive experiments on the MATH and GSM8K datasets with RhoMath 1.1B and DeepSeekMath 7B show that the proposed VinePPO consistently outperforms PPO and other RL-free baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\tThrough systematic evaluation, the authors found that inaccurate value estimation can limit PPO\u2019s ability to finetune LLMs in complex reasoning tasks: the value network fails to reflect the real reward and importance of each step, resulting in performance barely better than a random baseline.\n2.\tThis paper proposes VinePPO, which utilizes MC samples to compute values in the PPO pipeline; the value of a state can be estimated by the average return of K trajectories sampled from that state. \n3.\tThe experiments and analyses are convincing. The results of VinePPO are better than PPO\u2019s, via improved credit assignment.\", \"weaknesses\": \"1.\tThe proposed VinePPO is a straightforward method that uses MC estimation. However, MC has been studied for a long time: it has zero bias, but also high variance and computational efficiency problems.\n2.\tThis paper adopts the math reasoning problem. The state is the concatenation of the input prompt and generated tokens, so subsequent trajectories can be sampled from any state s, and then MC computation can work.
However, if the problem is more complex than simple math problems, MC might not work, because of long trajectories or low efficiency.\n3.\tIn VinePPO, K is very important, because accurate MC estimation needs K to be large enough, which would also cause efficiency issues.\n4.\tBased on the discussion above, the generalizability of VinePPO is not analysed or addressed in the paper.\", \"questions\": \"1.\tThe influence of K needs to be discussed, in terms of both performance and efficiency.\n2.\tAs MC is a method with high variance, does VinePPO outperform PPO when MC estimation is not accurate?\n3.\tIn Fig. 9, the ground truth is chosen as the result of 256 MC samples. Is this reasonable?\n4.\tIt might be more convincing to show the resulting credit assignment. For example, are critical steps detected by VinePPO?\n5.\tSome equations are not clear, for example: $s_{t+1} = s_t;[a_t]$\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We would like to thank the reviewer for their positive and insightful feedback. We will address the mentioned points in detail as follows:\n\n**RLOO and GRPO Baselines**\n\nThank you for the suggestion. As it was a common recommendation from reviewers, we updated the paper to include the results along with an in-depth analysis. We post this as a general comment. Please see *\u201cSummary Response + RLOO and GRPO Baselines\u201d* on OpenReview. A brief summary is provided below:\n\nRLOO and GRPO show a clear disadvantage compared to VinePPO. On GSM8K, they score 44.5% and 44.6%, respectively, while PPO achieves 50.1%, and VinePPO achieves 53.4%. A similar pattern is observed on MATH, where RLOO and GRPO score 17.3% and 17.8%, compared to PPO\u2019s 18.1% and VinePPO\u2019s 23.0%.
Additionally, training RLOO and GRPO proved to be less stable; for instance, we found it necessary to use higher KL coefficients during hyperparameter tuning to prevent instability. Notably, when controlling for wall-clock time efficiency, we found VinePPO achieves their peak performance up to 2.7x faster. \n\n\nWe also invite the reviewer to see the additional analyses included in the general comment, as we believe they offer interesting insights into the inner workings of RLOO and GRPO.\n\n**details on how the inference engines were utilized**\n\nWe appreciate the reviewer\u2019s interest in the technical details. We used the vLLM library [1] for fast inference and found that fortunately no special techniques were needed to achieve high throughput. Even 7B models can be deployed on each rank without any issues. That is, at every iteration the current policy is loaded into the vLLM server and then sampling is done through the vLLM serving API. We will update Section C.9 (\u201cSoftware Stack\u201d) in the camera-ready version to include more details. Additionally, we plan to release our code with the camera-ready version, allowing others to use our generation pipeline.\n\n**Questions:**\n> **Q1.** Missing citations\n\nWe thank the reviewer for their suggestion and we will make sure to cite these works in the camera-ready paper. \n\n> **Q2.** How does VinePPO compare to the RLOO baseline as the value of K increases in RLOO? \n\nGreat question! Currently, we have focused our computational resources on running the RLOO and GRPO experiments with the 7B model. If time permits, we will conduct experiments with varying K values and share the results.\n\n> **Q3.** Did you do large-batch PPO updates? \n\nWe thank the reviewer for their detailed suggestion. We are using a relatively large rollout batch size of 512 and a mini-batch size of 64. Note that we do not have a reward model running on the GPU.
In our tasks, the reward function is a Python program that compares the model's output with the ground truth.\\n\\n> **Q4.** Why is PPO more deterministic in early steps, while VinePPO is more deterministic in later steps, as mentioned in the \\\"Error per reasoning step\\\" section?\\n\\nWe hypothesize that the value network relies on memorization. In the early steps, responses align closely with the training data, enabling accurate estimates. However, as responses progress, the number of possible sequences grows combinatorially, quickly moving out of the training data distribution. In contrast, MC estimation is not tied to the training data. At later steps, the LLM, conditioned on a long solution, becomes deterministic, requiring fewer MC samples for accurate estimation.\\n\\n> **Q5.** Could you share a plot showing the \\\"explained variance\\\" of the value function you learn with normal PPO?\\n\\nThank you for suggesting the \\\"explained variance\\\" metric\\u2014we found it very insightful and computed this for other methods too. As shown in Figure G.26 of the updated paper, PPO\\u2019s value network predictions exhibit non-negative explained variance values close to one, reflecting healthy and effective training. Notably, VinePPO achieves higher explained variance in value predictions compared to PPO, RLOO, and GRPO. \\n\\n\\nThank you again for your valuable feedback; your suggestions have improved our paper. We hope we have adequately addressed all of your concerns and would be happy to clarify further if needed. With this in mind, we hope the reviewer increases their rating of our paper.\\n\\n- [1] Kwon, Woosuk, et al. \\\"Efficient memory management for large language model serving with pagedattention.\\\" Proceedings of the 29th Symposium on Operating Systems Principles. 2023.\"}", "{\"title\": \"Reviewer Response Due in 1 Day\", \"comment\": \"Dear Reviewer,\\n\\nThank you again for suggesting GRPO as a baseline. 
With only one day left until the reviewer response deadline, we wonder if you\\u2019ve had a chance to review the posted results and additional analysis we performed following your suggestion regarding GRPO. Since this was the primary weakness noted, and our experiments show VinePPO consistently outperforms GRPO and is even more efficient, we wanted to know if there is any remaining concern that prevents a stronger evaluation of our work?\"}", "{\"comment\": \"Thank you for the authors' responses and changes to my concerns, especially about the Variance and Importance of K. I am glad to raise my score to 5.\"}", "{\"title\": \"Feedback Request\", \"comment\": \"Dear Reviewer,\\n\\nThank you for suggesting GRPO as a baseline. We were wondering if you've had a chance to review our previous response and results regarding GRPO. Since the lack of a GRPO baseline was the primary weakness you mentioned, and our experiments demonstrate that VinePPO consistently outperforms GRPO, we wanted to ask if there are any remaining concerns that, if addressed, could help us get a stronger evaluation.\"}", "{\"title\": \"Final Response\", \"comment\": \"We thank all reviewers for their time and feedback.\\n\\n\\n## Summary of Discussion Period\\n\\n\\nThe major concern raised by reviewers was the inclusion of RLOO and GRPO baselines. In response, **we updated the paper (added 7 pages + 11 figures), thoroughly implementing and analyzing these baselines in terms of final performance, efficiency, and value estimation accuracy.** Despite all our hyperparameter tuning (see General Response), RLOO and GRPO underperform VinePPO, even when controlling for compute. This is not a surprise and aligns well with our primary message on the importance of credit assignment. As shown in our additional analysis (Figs. G.24-G.26), RLOO and GRPO assign biased value estimates to intermediate steps, adversely affecting performance. These results are in line with recent studies on RLOO and GRPO [4,5]. 
Finally, besides our active participation, reviewers raised no further concerns during the discussion period.\n\n\n## Recap of VinePPO\n\n\nAs noted by Reviewer 1TWU, many recent works [1,2,3, inter alia] attempt to simplify PPO in the context of RLHF by removing critical components, including credit assignment mechanisms, often with little to no drop in performance (reasoning tasks are an exception [3]). VinePPO is, to our knowledge, the first to address this contradiction by demonstrating that the credit assignment machinery of standard PPO (i.e., value networks) is underperforming. **VinePPO goes in the opposite direction** and attempts to fix this mechanism, demonstrating profound impacts across various axes: higher accuracy, faster convergence, better efficiency, and lower KL divergence. Here\u2019s a brief overview of the credit assignment mechanisms of such methods:\n\n\n| **Method** | **Fine-grained Credit Assignment** | **Notes** |\n|-------------------|------------------------------------|------------|\n| PPO (2022, [6]) | Yes | Trains a value network to predict the value at each step during training. |\n| DPO (2023, [1]) | No | N/A |\n| RLOO & GRPO (2024, [2]) | No | By design assign the same value to all steps. |\n| VinePPO (Ours) | Yes | Uses MC estimation to compute the value of each step. |\n\n\n**Primary Message of our Work**\n\n\nCredit assignment, despite its importance in DeepRL, has become an overlooked aspect of RL methods for LLMs, with newer approaches often removing it entirely. Our work highlights the critical importance of this component, and we hope it encourages further research into this aspect of RL training for LLMs.\n\n\nWhile VinePPO represents the initial attempt to principally address credit assignment in this context, it is **simple** (Reviewer S7We), **scalable** (Reviewer wh2B), and **generalizable** (Reviewer S7We).\n\n\n\n\n---\n1. 
Direct Preference Optimization: Your Language Model is Secretly a Reward Model.\\u201d by Rafailov et al, 2023\\n2. Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs, Ahmadian et al, 2024\\n3. SimPO: Simple Preference Optimization with a Reference-Free Reward by Meng et al. 2024\\n4. Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models by Noukhovitch et al, 2024\\n5. https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo\\n6. Training language models to follow instructions with human feedback by Ouyang et al, 2022\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you so much for the thorough response. I highly appreciate it, and I apologize for the late reply.\", \"few_remaining_concerns\": [\"According to the GRPO paper, after applying GRPO, the performance on Math exceeds 50%. Was there any difference in the settings?\", \"Additionally, it seems that the benefit of GRPO diminishes for larger, well-performing models (being much more effective for the 1.1B model). Is there any specific reason for this?\"]}", "{\"title\": \"Summary Response + RLOO and GRPO Baselines\", \"comment\": \"## Summary Response\\n\\nWe thank the reviewers for their feedback and helpful comments.\\n\\nWe are heartened to hear that Reviewer S7We found VinePPO to be a \\u201cnovel method in context of RL finetuning of LLMs\\u201d where \\u201cfar more accurate\\u201d Monte Carlo-based estimates replace the value function achieving superior result while being \\u201csimple, easy to follow and applicable to many PPO-like conditions\\u201d \\u2014 and as highlighted by Reviewer 1TWU a \\u201cvery insightful\\u201d finding that the value function is an important pitfall of the PPO. 
We are further pleased to hear that Reviewer wh2B finds that VinePPO demonstrates \u201cstrong scalability potential\u201d, while Reviewers 1TWU, 3mkL, and wh2B found our experiments to be thorough \u201cacross several tasks, model sizes, and model types\u201d, \u201cconsistently outperforming PPO and other RL-free baselines\u201d on challenging math datasets. \n\nWe now clarify the main shared concern regarding \u201cincluding GRPO and RLOO baselines\u201d among the reviewers below and address reviewer-specific questions in the individual responses.\n\n## RLOO and GRPO Baselines\n\nPlease scroll to page 31 of the updated draft, where we put all new content for reviewers\u2019 convenience. We will update the main content for camera-ready. The results of the RhoMath 1.1B model on GSM8K and MATH are presented in Appendix E of the paper (see Figures E.20 and E.21). Our 7B models need more compute and powerful hardware. We are actively working on these runs and will update the paper if progress is made (*Update 11/21/2024: Added RLOO with DeepSeekMath 7B on GSM8K*).\n\n\n**Result** As shown in Figure E.20, RLOO and GRPO perform worse than VinePPO. On GSM8K, RLOO and GRPO achieve 44.5% and 44.6%, respectively, compared to PPO's 50.1% and VinePPO's 53.4%. On MATH, they score 17.3% and 17.8%, while PPO and VinePPO reach 18.1% and 23.0%. These findings align with recent studies [1][2], where RLOO was found to be at best competitive with PPO (performance-wise). \n\n**Discussion** We agree with the reviewers that RLOO and GRPO baselines are useful for practitioners and we thank the reviewers for suggesting them. Meanwhile, we think it is important to note that VinePPO goes in the opposite direction of RLOO and GRPO. RLOO and GRPO remove the fine-grained credit assignment machinery of PPO, the value network, and they basically assign every token in a response the same value.
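To make this uniform credit assignment concrete, here is a minimal sketch of GRPO-style group-normalized advantages (RLOO differs in using a leave-one-out mean baseline, but shares the one-value-per-response property); this is a simplified illustration, not the baselines' actual implementation:

```python
from statistics import mean, pstdev

def group_advantages(group_rewards):
    """GRPO-style outcome advantage: normalize each sampled response's
    reward within its group. Every token of a response then shares this
    single value -- there is no per-step credit assignment."""
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma or 1.0) for r in group_rewards]

# Four sampled solutions to one prompt, rewarded 1 (correct) / 0 (incorrect):
adv = group_advantages([1.0, 0.0, 0.0, 1.0])
# -> [1.0, -1.0, -1.0, 1.0]: a flawless step and a flawed step inside the
#    same response receive identical credit.
```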
On the other hand, VinePPO doubles down on fixing the fine-grained credit assignment machinery, estimating accurate values via MC samples starting from each step (see detailed analysis below).\n\n**Implementation and Training Details of RLOO and GRPO** Due to the character limit, we've put these in the first reply to this comment.\n\n## Value Prediction Analysis of RLOO and GRPO\nWe follow the same protocol as in Section 7 of the original draft to analyze the accuracy of value prediction for RLOO and GRPO. \n\n**Scatter Plot + Mean Absolute Error** Figures G.24 and G.25 illustrate the distribution of value predictions across reasoning steps. RLOO and GRPO estimates show significant bias, frequently assigning high values to states with a low probability of success and low values to states with high probability. As demonstrated in Fig. G.26, although RLOO and GRPO have marginally lower MAE than PPO, their errors are still substantially higher compared to VinePPO.\n\n**Explained Variance** Based on Reviewer 1TWU's suggestion, we additionally include the explained variance of value estimation in these methods. As shown in Figure G.26, VinePPO achieves higher explained variance than RLOO, GRPO, and PPO across both datasets.\nAdditionally, PPO\u2019s value predictions show non-negative explained variance values close to one, indicating stable and effective training. \n\n## Compute Efficiency Analysis of RLOO and GRPO\nFollowing the approach in Sec. 6.4 of the original paper, we plot test set accuracy against wall-clock time under the same hardware configuration to evaluate the computational efficiency of RLOO and GRPO compared to PPO and VinePPO. As shown in Figure F.22 of the updated draft, on the MATH dataset, VinePPO reaches the peak performance of RLOO and GRPO 2.7x and 2.2x faster, respectively.
Notably, on GSM8K, we see the same pattern and even PPO\\u2014despite the overhead of training an additional network\\u2014surpasses RLOO and GRPO in efficiency.\\n\\n\\n\\n- [1] Noukhovitch, Michael, et al. \\\"Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models.\\\" arXiv preprint arXiv:2410.18252 (2024).\\n- [2] https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo\"}", "{\"title\": \"Updates and Feedback Request\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your constructive comments. As the discussion phase is ending, we would love to address any remaining questions or incorporate further suggestions to obtain a better evaluation. Especially, we had already added RLOO and GRPO results for the 1B model and have now included a 7B result as it became available. We deeply appreciate your time and commitment to the review process.\"}", "{\"title\": \"One day to Reviewer response deadline\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your consideration of our rebuttal. As the reviewer response deadline is tomorrow, **we'd like to kindly remind you that your increased score is still not reflected in OpenReview**. We kindly request that you update your score using the \\\"Edit\\\" option in your official review since verbal mentions are not reflected in the system.\\n\\nWe appreciate your time and effort!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for highlighting the relevant papers. They are indeed great papers. However, as today is the final day for updates, we kindly request your guidance in integrating them into our work. While we have added citations for [1] and [5], we would greatly value your input on the main angle of relevance and the most suitable placement for citing [2], [3], and [4].\\n\\nWe also appreciate your valuable discussion about baselines. 
All of our RLOO experiments on all models (see Figure E.20.1 for the 7B model and Figure E.20 for the 1B model) and datasets have now concluded and are in the updated paper, showcasing VinePPO\u2019s superior performance. We were wondering if our rebuttal has addressed your primary concern, which was the RLOO baseline.\n\nMoreover, as this is the last day to update the paper, we wanted to know if there is any outstanding concern that prevents a stronger evaluation of our work?\"}", "{\"title\": \"Updates and Feedback Request\", \"comment\": \"Dear Reviewer,\n\nThank you for your constructive feedback. As the discussion phase nears its end, we\u2019d be happy to address any remaining questions or suggestions to obtain a better evaluation. Previously, we updated the paper with RLOO and GRPO results for the 1B model, and we have now added a 7B result that became available. Thank you again for your time and dedication to the review process.\"}", "{\"title\": \"Last day of reviewer response\", \"comment\": \"Dear Reviewer,\n\nThank you again for your dedication to the review process. Since this is the last day reviewers can respond and we have addressed your mentioned concerns, we hope you kindly consider a renewed assessment of the paper in light of our recent exchanges. If there are any remaining concerns preventing a stronger evaluation, we'd be more than happy to address them promptly in the remaining time.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Feedback Request\", \"comment\": \"Dear Reviewer,\n\nThank you for the insightful discussion you initiated. We\u2019re curious to know if you\u2019ve had a chance to go over our previous response in \"Response To Remaining Concerns\". We hope it has addressed your concerns. As the deadline for updating the paper is approaching, we\u2019re eager to address any outstanding questions or concerns you may have, and to incorporate any additional feedback. 
We hope you kindly consider a renewed assessment of the paper in light of our recent exchanges.\"}", "{\"title\": \"Updated Paper Revision\", \"comment\": \"Dear Reviewers and Area Chairs,\n\n\nWe summarize the updates made to the paper during the rebuttal, as requested by the reviewers. Note that they are at pages 31 to 38 for reviewers\u2019 convenience and the titles of the sections are in blue. Specifically:\n\n**Reviewer wh2b**:\n1. Added RLOO and GRPO Baselines (Appendix E) \n2. Added more details regarding terminology \"bias\" (Appendix I)\n\n\n\n**Reviewer S7We**:\n1. Added RLOO and GRPO Baselines (Appendix E: Figures E20, E20.1, and E21) \n2. Added RLOO and GRPO Efficiency Analysis (Appendix F: Figure F.22)\n3. Fixed typo (Line 264)\n\n\n**Reviewer 1TWU**:\n1. Added RLOO and GRPO Baselines (Appendix E: Figures E20, E20.1, and E21) \n2. Added visualization of \u201cExplained Variance\u201d metric throughout training for all methods (Appendix G: Figure G26)\n3. Added technical details of inference engines in our software stack (Appendix J)\n\n\n**Reviewer 3mkL**:\n1. Added More Examples of Advantages in VinePPO (Appendix H: Figures H.27, H.28, and H.29)\n2. Analyzed effect of K in VinePPO\u2019s Efficiency (Appendix F: Figure F.23)\n\n\n**Additional Analysis**:\n1. Added Value Prediction Analysis of RLOO and GRPO, shedding light on their inner workings (Appendix G: Figures G.24 and G.25)\n\nWe are eager to engage in further discussion and would greatly appreciate any additional feedback or insights, especially as none of the reviewers have yet had the opportunity to share their thoughts at the time of posting this.\"}", "{\"title\": \"Last Date of Reviewers Response\", \"comment\": \"Dear Reviewer,\n\nThank you again for considering our rebuttal. 
As today is the final day, we kindly ask if you could update your score in OpenReview using the \\\"Edit\\\" option in your official review to reflect your increased score.\"}", "{\"comment\": \"We would like to thank the reviewer for their thorough feedback. We\\u2019re thrilled that you liked our work. We now address the key questions below:\\n\\n**RLOO and GRPO Baselines** \\n\\nThis is a great recommendation! As this was a common suggestion among reviewers, we posted a general comment describing RLOO and GRPO results with additional in-detail analysis. Please see *\\u201cSummary Response + RLOO and GRPO Baselines\\u201d*. Here, we provide a short summary for convenience:\\n\\nRLOO and GRPO perform worse than VinePPO. On GSM8K, their respective scores of 44.5% and 44.6% fall short of PPO\\u2019s 50.1% and VinePPO\\u2019s 53.4%. A similar trend is observed on the MATH benchmark, where RLOO and GRPO achieve 17.3% and 17.8%, compared to PPO at 18.1% and VinePPO at 23.0%. Moreover, we found their training process to be less stable, requiring a higher KL coefficient during hyperparameter tuning to stabilize their training. This likely stems from their uniform credit assignment method, which contrasts sharply with the fine-grained credit assignment strategies employed by PPO and VinePPO (see \\u201cValue Prediction Analysis\\u201d in general comment).\\n\\nVinePPO is significantly more efficient than RLOO and GRPO, reaching their peak performance 2.7 times and 2.2 times faster on MATH, respectively. 
In GSM8K, which is an easier task for a 1B value network, even PPO achieves peak performance of RLOO and GRPO about 1.75x faster.\\n\\nAlso, additional analysis included in the general comment offers deeper insights into the inner workings of RLOO and GRPO, which we highly recommend the reviewer to visit.\\n\\n**DPO variants with fine-grained credit assignment**\\n\\nThere are DPO variants that aim to provide finer credit assignments, such as Self-Explore [1], as noted in our first draft (lines 132\\u2013126). Self-Explore reports an improvement from SFT (34.14%) to 37.68% on MATH using DeepSeekMath 7B [1]. In our work, PPO achieves a larger improvement, from SFT (32.8%) to 42.8% in the same setup. Given this and the engineering complexity of these methods, we decided to focus on more established baselines in the literature, such as RestEM, DPO+, PPO, and now RLOO, and GRPO.\\n\\n**Tuning K in VinePPO**\\n\\nExcellent question! Tuning VinePPO is generally simpler because it involves only one key parameter, K. In contrast, tuning the value network comes with numerous hyperparameters associated with neural network training, such as the optimizer, making it more complex. Additionally, Fig. 4 of the original paper and Fig. F.23 of the updated draft show performance and efficiency improvements as K increases, suggesting a straightforward heuristic: start with the highest K that fits within the available compute budget.\\n\\n**Questions**\\n> **Q1.** How does the method's dependency on K differ by model? I am also curious about the K ablation.\\n\\nWe provided an ablation study on the effect of K in Fig. 4. However, this ablation was conducted only on the 1B model due to computational constraints. Running ablation on the 7B model is prohibitively expensive. We expect to see the same pattern as increasing K always reduces the value estimation variance.\\n\\n> **Q2.** a graph to show the trade-off between larger K values and efficiency \\n\\nThanks for the suggestion! 
Refer to Fig. 23 of the updated draft for this study on MATH. The results are quite interesting. VinePPO with higher K values achieves greater efficiency. Specifically, VinePPO with K=9 is slightly more efficient than K=3, while both significantly outperform K=1. This demonstrates the strong impact of low-variance value estimates on training, shifting the trade-off towards improved efficiency with more samples.\\n\\n> **Q3.** missing period on line 264\\n\\nThank you! We\\u2019ll fix it in the final draft.\\n\\nThank you again for your valuable feedback. We hope that our response has resolved any remaining questions and concerns. Would you consider increasing your ratings given the main clarifying points outlined?\\n\\n\\n\\n\\n- [1] Hwang, Hyeonbin, et al. \\\"Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards.\\\" arXiv preprint arXiv:2404.10346 (2024).\", \"title\": \"Rebuttal\"}", "{\"title\": \"Updates and Feedback Request\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your time and feedback. With the end of the discussion phase approaching, we\\u2019re open to addressing any additional concerns or suggestions that could help us obtain a stronger evaluation. Especially, we updated the paper with additional efficiency plots on effect of K. Thank you once again for your time and effort in reviewing our work.\"}", "{\"title\": \"Reflecting increased score in OpenReview\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your attention to our rebuttal and for considering increasing your score. Could we kindly ask you to update your rating in OpenReview using the \\\"Edit\\\" button on your first comment (your official review), as verbal mentions don't reflect in the review system?\"}", "{\"comment\": \"...continuation of the above comment:\\n\\n**Implementation and Training Details of RLOO and GRPO** For RLOO, we closely follow the implementation in the HuggingFace TRL\\u2019s library [3]. 
The GRPO implementation is also a straightforward modification of PPO. To ensure a fair comparison, we maintain equal training dynamics across runs. Specifically, we train RLOO, GRPO, PPO, and VinePPO for 1000 iterations on MATH and 650 iterations on GSM8K (about 8 epochs for both datasets). All methods share the same rollout batch size of 512 and mini-batch size of 64. In all methods, we sample 8 responses for each question in each rollout batch. Given the 8 training epochs, all methods see 64 responses per example throughout training. For all methods, we initialize the policy from the SFT checkpoint. For RLOO and GRPO, we further tune the KL coefficient (search space: {1e-2, 3e-3, 1e-3, 3e-4, 1e-4}). We found that RLOO and GRPO are quite unstable and need a higher KL coefficient to stabilize their training (in our experiments, 3e-3 is the smallest value they can tolerate and achieve the best validation accuracy). As a final note, the results of the RLOO and GRPO experiments look very similar. This is not a mistake. The baseline computation in RLOO and GRPO is indeed very close. Assume R1, R2, ..., and Rk are the task returns of K responses (Y1, Y2, ..., Yk) for a prompt X. GRPO computes the baseline for the policy gradient on Y1 by averaging all the returns. However, RLOO leaves R1 out and takes the average over the remaining K-1 returns.\\n- [3] https://github.com/huggingface/trl\", \"title\": \"Additional Implementation Details\"}
These results should be particularly relevant to reviewers S7We and 1TWU, who requested the RLOO baseline.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their time reviewing our paper and providing useful feedback and questions which we now address.\\n\\n**RLOO and GRPO Baselines**\\n\\nAs RLOO and GRPO were asked by three reviewers, we described the results with additional analysis in a general message. Please see *\\u201cSummary Response + RLOO and GRPO Baselines\\u201d* on OpenReview.\", \"as_a_short_summary_here\": \"RLOO and GRPO perform worse than VinePPO. RLOO and GRPO achieve 44.5% and 44.6% on GSM8K respectively (compared to PPO\\u2019s 50.1% and VinePPO\\u2019s 53.4%) and 17.3% and 17.8% on MATH respectively (compared to PPO\\u2019s 18.1% and VinePPO\\u2019s 23.0%). Also, we found RLOO and GRPO to be less stable. For example, we found during our hyperparameter tuning that we need a higher KL coefficient to stabilize their training. When controlling for compute budget, VinePPO (and even PPO in some cases) surpasses the peak performance of RLOO and GRPO up to 2.7x faster.\\n\\n\\nWe also encourage the reviewer to refer to the additional analysis we performed in the general comment, which offers deeper insights into the inner workings of RLOO and GRPO.\\n\\n**Misuse of Terminology**\\n> So it is better not to use \\\"bias\\\" in Line 467 and 475 but to use \\\"inaccuracy\\\".\\n\\nThank you for your attention to details. We believe our use of terminology is accurate. While the policy gradient is unbiased when \\u03bb=1 (as the value estimates act only as a baseline), the value network\\u2019s estimates themselves can still be biased (see Sec. 3 of [1]) as they are approximated by a neural network (as shown in Figure 9). 
The term \\u201cbias\\u201d in line 467 *\\u201cPPO\\u2019s value network shows high bias.\\u201d* and in line 475 *\\u201cPPO\\u2019s value network, despite its bias,...\\u201d* specifically refers to this inherent bias in the value network\\u2019s estimation of the value, not the policy gradient. We will revise the text to clarify this distinction between bias in the policy gradient and the value estimates within the same section.\\n\\n**Questions:**\\n> **Q1.** can we extend it to the more general alignment task?\\n\\nYes. VinePPO is built on standard PPO from RLHF and does not introduce any additional assumptions to the RL formulation. So, it is broadly applicable to RL problems in language environments (as noted by reviewer S7WE), including alignment tasks where PPO is typically used.\\n\\n> **Q2.** Is there any intuitive or theoretical explanation for why value networks fail to provide accurate estimates?\\n\\nYes. Empirically, the evidence in Section 7 (see \\u201cError Per Reasoning Step\\u201d) suggests that the value network primarily relies on memorization, as learning a generalizable algorithm is likely less favorable given the training data and the challenging nature of the task [2]. Intuitively, the task of the value network in reasoning tasks is quite demanding: the value network must 1) implicitly understand the correct answer, 2) evaluate how the LLM\\u2019s generated solutions align with it, and 3) achieve all this in a single forward pass. This can be especially demanding given the value network is initialized from the same LLM and has similar size and capacity.\\n\\n\\n\\nWe thank the reviewer again for their effort and feedback. We hope the clarifications have addressed your concerns. Given the key points outlined, would you consider increasing your ratings?\\n\\n\\n- [1] Schulman, John, et al. 
\\\"High-dimensional continuous control using generalized advantage estimation.\\\" arXiv preprint arXiv:1506.02438 (2015).\\n- [2] Nagarajan, Vaishnavh et al. \\u201cUnderstanding the Failure Modes of Out-of-Distribution Generalization.\\u201d ArXiv abs/2010.15775 (2020): n. pag.\"}" ] }
5m43PEd3sz
ETGL-DDPG: A Deep Deterministic Policy Gradient Algorithm for Sparse Reward Continuous Control
[ "Ehsan Futuhi", "Shayan Karimi", "Chao Gao", "Martin Müller" ]
We consider deep deterministic policy gradient (DDPG) in the context of reinforcement learning with sparse rewards. To enhance exploration, we introduce a search procedure, \emph{$\epsilon t$-greedy}, which generates exploratory options that guide the agent toward less-visited states. We prove that search using $\epsilon t$-greedy has polynomial sample complexity under mild MDP assumptions. To more efficiently use the information provided by rewarded transitions, we develop a new dual experience replay buffer framework, \emph{GDRB}, and implement \emph{longest n-step returns}. The resulting algorithm, \emph{ETGL-DDPG}, integrates all three techniques: \bm{$\epsilon t$}-greedy, \textbf{G}DRB, and \textbf{L}ongest $n$-step, into DDPG. We evaluate ETGL-DDPG on standard benchmarks and demonstrate that it outperforms DDPG, as well as other state-of-the-art methods, across all tested sparse-reward continuous environments. Ablation studies further highlight how each strategy individually enhances the performance of DDPG in this setting.
[ "Deep Reinforcement Learning", "Sparse Reward Continuous Control", "Exploration with options", "Reward propagation" ]
https://openreview.net/pdf?id=5m43PEd3sz
https://openreview.net/forum?id=5m43PEd3sz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zljO6lO2UI", "u7GOcWftVP", "njwmf5yfi9", "PYJ42lPNnS", "D90bOCVOYP" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730507762730, 1730619424680, 1731653474057, 1730451114040, 1730652090521 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13704/Reviewer_ZA8A" ], [ "ICLR.cc/2025/Conference/Submission13704/Reviewer_UZxt" ], [ "ICLR.cc/2025/Conference/Submission13704/Authors" ], [ "ICLR.cc/2025/Conference/Submission13704/Reviewer_vfRg" ], [ "ICLR.cc/2025/Conference/Submission13704/Reviewer_Vg8Q" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a recipe of additions to DDPG that would make it particularly suitable for addressing sparse-reward (in particular, those with informative rewards only on the last transition of successful episodes). The recipe involves three main components: (1) $\\\\epsilon t$-greedy exploration, (2) an additional episodic buffer only containing successful episodes, and (3) Monte Carlo updates on successful episodes instead of 1-step TD.\\n\\nThe most involved component of the approach is (1), which involves tree-search for finding a good option to execute. This process requires a model of environment's transition dynamics in principle, which the authors circumvent by using the replay buffer as a model and utilizing a SimHash model on a discretized variant of the 2D or 3D environments.\", \"they_evaluate_the_performance_of_this_approach_on_several_sparse_reward_testbeds\": \"physics based 2D point-mass and 3D robotic manipulation tasks, as well as non-physical maze problems. The results show full coverage of discretized 2D or 3D spaces due to addition (1). 
The results also show success rates close to 1 on all tasks (on average), generally well above the other baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed recipe will be useful for those interested in similar problem scenarios, e.g. using DDPG for tackling physical control problems.\", \"Paper is generally well-written, and ideas are mostly easy to grasp.\"], \"weaknesses\": [\"Several related works are not sufficiently discussed in the context of the contributions:\", \"Count-based methods in DRL: e.g. Ostrovski et al. (2017) is referenced but not discussed to any extent in connection to the SimHash method.\", \"Hindsight Experience Replay is a relevant line of work, as it's addressing the same type of problems while remaining simpler and broader in applicability. While the paper is referenced, it is not used as a baseline nor discussed to an appropriate extent.\", \"Baselines are unfortunately somewhat weak; e.g. D3PG (non-distributed variant of D4PG) would be an important baseline to build on and/or to compare against. Specifically, DDPG + PER (as in D3PG) would have allowed us to better assess the impact of the dual-memory approach. DDPG + N-step return (as in D3PG) would have allowed us to better assess the difference from a standard approach beyond 1-step TD updates. I wouldn't even worry about the distributional component of D3PG/D4PG, but combinations with PER and N-step returns are quite important to test against.\"], \"questions\": \"1. Default exploration in DDPG is based on the OU noise, which is a temporally-correlated noise. As such, I'm curious if the \\\"DDPG\\\" baseline in, e.g., Fig. 5 is based on $\\\\epsilon$-greedy or the standard OU noise? (And does your answer hold for all results for DDPG in the paper?)\\n\\n2. Have you performed any leave-1-out ablations as well? I.e. similar to Fig. 5 but with the full algorithm minus one addition.\\n\\n3. 
Do you have any comment on including DDPG + PER + N-step returns as a baseline?\\n\\n4. Regarding the options generated by your tree-search approach: Will they remain valid in stochastic domains?\\n\\n5. Why is there no Importance Sampling correction (similar to PER) to reduce the bias of sampling from successful episodes more frequently?\\n\\n6. Why are there no experiments / detailed discussions around Hindsight Experience Replay?\\n\\n7. Could you comment on the connection of your approach w.r.t. count-based exploration techniques in DRL?\\n\\n8. Is the discretization of \\\"states\\\" really discretizing the true state of the problem or just the 2/3 spatial locations?\\n\\n\\n### Minor comments:\", \"l252\": \"Comma should move to the end of equation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents three components that can be added to the deep deterministic policy gradient (DDPG) algorithm to improve its performance. These components are (a) a new exploration strategy $\\\\epsilon t$-greedy that performs exploration by building a tree to find the states at the frontier of its exploration and a path to get to those states, (b) a divided replay buffer to keep special track of successful episodes, and (c) changing the critic update to use T-step updates, turning the successful episode targets into Monte Carlo returns. The paper evaluates these techniques on three navigation and three manipulation tasks that require continuous-space actions, and shows that the proposed method, ETGL-DDPG, outperforms some of the baselines compared to.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper shows strong empirical results. The effect of the different components is clear from the experiments.\", \"The explanation of the three components is also fairly clear. 
Figure 1 was helpful in understanding the proposed techniques\", \"Some of the analysis figures are also well done. I particularly like Section A.5, which shows how the terminal state distribution for the different algorithms evolves over training.\", \"Training details are also extensive, and I believe the results in this paper could be reproduced\", \"The presented exploration strategy, $\\\\epsilon t$-greedy, is a combination of good ideas, and shows great promise.\", \"The sample complexity analysis adds theoretical depth to the paper.\"], \"weaknesses\": [\"While the results in the paper appear strong empirically, there is some worry that methods that seem very similar to those proposed in the paper are not being compared to.\", \"The proposed $\\\\epsilon t$-greedy exploration looks a lot like Go-Explore [1]. While there are certain differences (no behavioral cloning of the exploration policy), the similarities are close enough that they should be addressed in the paper or the method should be compared to. Additionally, the LSH method is very similar to #-exploration [2]. While perhaps comparing to the method itself might not be relevant since the results might not be state-of-the-art any longer, and the paper itself is cited, a more detailed comparison to this technique in the related work would add depth to the paper.\", \"The use of two replay buffers, with one buffer used to store data leading to successes, sounds very much like [3]. Apart from the different replacement schemes (reservoir sampling vs FIFO), they seem like very similar ideas. Attribution and comparison (if relevant) would be good to have here.\", \"The longest n-step return is basically Monte Carlo updates for successes and very long N-step returns for failures. The change makes it (a) not use bootstrapping for successes, and (b) not bootstrap from the agent's current Q-values efficiently. Could the authors justify why this change would be better? 
Specifically, why are the high-variance updates from successes preferred to bootstrapping, and why are the very long n-step returns preferred for failures? The ablations (Figure 5) seem to bear out that these long returns are actually not helping.\", \"Overall I feel like the exploration component is the main and helpful part of the paper. I would prefer the paper focus on this idea and bring the analysis, such as Section A.5, into the main paper. That would be a much more impactful contribution, in my opinion.\"], \"some_minor_nitpicks\": [\"Line 462: \\\"except for soccer, where DDPG alone outperforms all baselines.\\\" The results show that $\\\\epsilon t$-greedy outperforms DDPG there.\", \"Algorithm 1, line 24: typo: UnifromRandom($\\\\phi(s_x)$)\", \"Algorithm 1, line 9: frontier nodes not being passed.\", \"Algorithm 1, line 10, counting function $n$ is not specified as an input, nor initialized.\"], \"references\": \"[1] Ecoffet, A., Huizinga, J., Lehman, J., Stanley, K.O. and Clune, J., 2021. First return, then explore. Nature, 590(7847), pp.580-586.\\n\\n[2] Tang, H., Houthooft, R., Foote, D., Stooke, A., Xi Chen, O., Duan, Y., Schulman, J., DeTurck, F. and Abbeel, P., 2017. # exploration: A study of count-based exploration for deep reinforcement learning. Advances in neural information processing systems, 30.\\n\\n[3] Kompella, V.R., Walsh, T., Barrett, S., Wurman, P.R. and Stone, P., Event Tables for Efficient Experience Replay. Transactions on Machine Learning Research.\", \"questions\": \"* What does the shaded portion in Figures 3 and 5 signify? Perhaps add it to the caption of the figure.\\n\\nOther questions have been asked as part of the critique and feedback in the previous section. 
Please refer to those.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work combines three algorithmic ideas to improve the performance of DDPG in sparse-reward tasks. First, the authors propose $\\\\epsilon$t-greedy exploration, which consists of building a graph-based representation of the environment and navigating to nodes with a low visitation count. This method is accompanied by a formal analysis. Additionally, a buffer sampling technique is used to prioritize transitions with non-zero rewards, and longest n-step returns help quickly propagate values across trajectories. The resulting method, named ETGL-DDPG, outperforms simple algorithms in six continuous control tasks, demonstrating that each of the three components positively affects performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and easy to follow. Contributions are outlined clearly and supported by experimental evidence.\", \"The limitations of hashing are acknowledged in the final discussion.\"], \"weaknesses\": [\"Related works and missing baselines: this work does not sufficiently mention established methods for exploration in sparse reward settings. To the best of my knowledge, the standard method for sparse-reward RL is HER [1], which is only quickly mentioned in Section 5. Explaining why HER is not a relevant baseline would be very important. Otherwise, a comparison to it is expected. Moreover, the core exploration technique relies on a graph-search procedure. Similar ideas have been proposed in the past, both in the context of exploration [2, 3] and general value estimation [4]. 
Methods involving graph-search are very powerful, but require non-trivial implementation efforts and introduce complexity. None of the baselines considered relies on a graph; I would encourage the authors to clearly argue why the proposed graph search procedure is more appropriate than existing ones, or show how it outperforms them. More generally, the selection of baselines includes simple methods designed for the general RL problem, and not specialized to sparse rewards.\", \"Novelty and orthogonality: the second and third techniques (the dual replay buffer and longest n-step rewards) are not entirely novel, as acknowledged in lines 273 and 298. They appear to be applied without further analysis or changes. Moreover, the graph-based technique seems to work well $independently$ from the other two. It thus seems that the three proposed techniques are rather orthogonal. Why is it important to combine all three techniques? How do they relate to each other? In the current state, I do not appreciate the novelty in the second and third techniques, and thus wonder why the authors would not focus on the first one.\"], \"questions\": [\"Following the previous paragraph,\", \"Can the authors provide a comparison to HER?\", \"Why is the proposed graph search procedure more appropriate than existing ones? Does it outperform them?\", \"Are the three proposed techniques orthogonal?\", \"Moreover, some minor questions and comments:\", \"Why is DDPG chosen as the base deep reinforcement learning algorithm, instead of relatively more modern algorithms such as TD3 [5] or SAC [6]?\", \"line 83: authors state that \\\"the agent must achieve the goal many times to make sure that the reward is eventually propagated backward to early states\\\". I am not convinced this statement holds. 
To the best of my knowledge, rewards are propagated when optimizing the TD loss for a given batch; therefore, it is sufficient to sample sufficient batches as long as the goal is achieved a single time in the buffer. I would ask the authors to comment on this.\", \"n-step returns are on-policy. It could be helpful to acknowledge this.\", \"line 88: typo (new temporally version) to (new temporally extended version)\", \"line 206: typo ($\\\\delta$-optimal) to ($\\\\epsilon$-optimal)\", \"line 246: typo, I believe ($\\\\mathcal{P_W}$) to ($\\\\mathcal{P_X}$)\", \"line 315: the chosen baselines are arguably not state-of-the-art for sparse-reward environments, see comments above.\", \"**References:**\", \"[1] Andrychowicz et al., Hindsight Experience Replay, NIPS 2017\", \"[2] Ecoffet et al., First return, then explore, Nature 2021\", \"[3] Gallouedec et al., Cell-Free Latent Go-Explore, arXiv 2023\", \"[4] Eysenbach et al., Bridging Planning and Reinforcement Learning, NeurIPS 2019\", \"[5] Fujimoto et al., Addressing Function Approximation Error in Actor-Critic Methods, ICML 2018\", \"[6] Haarnoja et al., Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, ICML 2018\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the limitations of the DDPG algorithm in sparse-reward environments. It identifies three main deficiencies: lack of directional exploration, uniform treatment of rewards in the replay buffer, and slow information propagation during policy updates. To tackle these issues, the authors propose three enhancements to DDPG, including $\\\\epsilon t$-greedy, goal-conditioned dual replay buffer (GDRB), and longest n-step return. 
Empirical results demonstrate that each individual strategy enhances DDPG\\u2019s performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This work aims to enhance the most fundamental algorithms in RL, which I believe is very important.\\n2. The paper points out the important issue that DDPG is not suitable for sparse-reward settings and specifically identifies three drawbacks of DDPG.\\n3. The three improvements proposed in ETGL-DDPG sound effective to me in the sparse reward setting.\\n4. The benchmark experiments and ablation studies demonstrate the effectiveness of ETGL-DDPG.\", \"weaknesses\": \"1. This study proposes three improvements to DDPG in the context of sparse rewards, but I am uncertain about the degree of originality of each of these improvements. There is a substantial amount of related work on this topic (sparse reward, exploration, improving DDPG), and there are likely several studies closely related to each of the proposed enhancements, as some of them are straightforward and may be easy to conceive. Thus, a thorough literature review would be necessary to determine the novelty of the improvements. If these improvements (or some of them) have sufficient innovation, I believe they would be valuable contributions.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
5lokEzttBF
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
[ "Zhuang Li", "YUNCHENG HUA", "Thuy-Trang Vu", "Haolan Zhan", "Lizhen Qu", "Gholamreza Haffari" ]
Recent studies have shown that maintaining a consistent response style by human experts and enhancing data quality in training sets can significantly improve the performance of fine-tuned Large Language Models (LLMs) while reducing the number of training examples needed. However, the precise definition of style and the relationship between style, data quality, and LLM performance remains unclear. This research identifies two key stylistic elements in responses: linguistic form and semantic surprisal. We find that, among training data of comparable quality, higher consistency in these response elements leads to better LLM performance. Inspired by this, we introduce Style Consistency-Aware Response Ranking (SCAR), which automatically prioritizes instruction-response pairs in the training set based on their response stylistic consistency. By selecting the most style-consistent examples, sometimes as few as 0.7\% of the full dataset, the fine-tuned LLMs can match or even surpass the performance of models trained on the entire dataset in coding and open-ended question-answering benchmarks. Code and data are available at https://anonymous.4open.science/r/SCAR-0233/.
[ "Style Consistency", "Data Efficiency", "LLM Alignment", "Fine-Tuning" ]
https://openreview.net/pdf?id=5lokEzttBF
https://openreview.net/forum?id=5lokEzttBF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xI2suy4Q26", "wnKKXMqRkT", "s7W01SkHdA", "liKFvFvp3P", "lDzpaIqUUt", "fMRLaslvEB", "axG2onGslz", "YVqGNLLPi7", "UrEPevaI7S", "PIawdkkHqV", "PDI1bKdvOJ", "ATyv4uRhm4", "2Dea0hsS6v", "0UF8kpidpq" ], "note_type": [ "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1734366425557, 1731480657887, 1733227198606, 1733226012226, 1733229753546, 1731479212368, 1733226756288, 1730559168403, 1730723368526, 1730383169679, 1733229661333, 1733225797798, 1730263211608, 1733230589426 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Reviewer_W39Q" ], [ "ICLR.cc/2025/Conference/Submission10277/Reviewer_W39Q" ], [ "ICLR.cc/2025/Conference/Submission10277/Reviewer_PzZD" ], [ "ICLR.cc/2025/Conference/Submission10277/Reviewer_Xpqk" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ], [ "ICLR.cc/2025/Conference/Submission10277/Reviewer_c3RX" ], [ "ICLR.cc/2025/Conference/Submission10277/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"**Response to Concerns on Dataset and LLM Size Limitations:**\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the dataset and model size. 
However, we validated our method across a range of models and data sources, strengthening the robustness and generalizability of our results. Specifically:\n\n- ***Model Variety and Real-World Relevance:*** Our method is tested across four different base LLMs, including CodeLlama, Llama3, and widely recognized open-source models like OLMO and Starcoder, all of which are commonly used in real-world applications. This diversity of models ensures that our approach is not restricted to a single model setup, enhancing the reliability of our findings.\n\n- ***Dataset Diversity and Scale:*** We validated our selection method on the large-scale open-source dataset, the OLMO Tulu dataset, with 320,000 examples, which offers a rich source of real-world data. Our own curated datasets (20,000 and 10,000 examples) for data selection are also comparable in scale to widely used benchmarks like Alpaca (52,000 examples), LIMA (1,000 examples), and Guanaco-Commits (13,000 examples). All these prior datasets have been effectively used to train numerous open-source models. This similarity in dataset sizes underscores that our approach is grounded in scales commonly used in prior studies.\n\n- ***Validation Across Data Curation Methods:*** Our approach is further validated by applying it to both open-source datasets and our own curated datasets, collected by simulating two real-world data collection scenarios. This range of data sources demonstrates the flexibility and adaptability of our method, showing that it is effective across diverse data curation contexts and is not limited by dataset scale.\n\nBy leveraging multiple models and diverse datasets, including large-scale, real-world sources, our study provides strong validation for the scalability and applicability of our approach.\"}", "{\"comment\": \"We thank the reviewer for their valuable feedback and constructive suggestions. 
We address your questions below.\n\n---\n\n**Q1: The \"longest\" method discussed in prior work [1, 2] appears to be a strong rule-based baseline. How does its performance compare with SCAR?**\n\nWe conducted experiments comparing SCAR with the \"longest\" method as a baseline, using AlpacaEval in the open domain with LLAMA3-8B. The performance (L.C. WinRate) of models fine-tuned on data selected by the \"longest\" method is as follows:\n\n**Human Data Selected by Length:**\n\n- 5,000 examples: 1.46\n- 2,500 examples: 1.75\n- 1,000 examples: 1.27\n\n**Mixed Synthetic Data Selected by Length:**\n\n- 5,000 examples: 6.29\n- 2,500 examples: 5.32\n- 1,000 examples: 6.61\n\nWe observe that when selecting from human data, the performance of the \"longest\" method is significantly lower than that of SCAR. When selecting from mixed synthetic data, the performance is comparable to SCAR. This is because the longest responses tend to be those generated by a single method like Evol-Instruct, which unifies the writing style and produces longer outputs due to its construction process. While length can be a strong indicator of certain stylistic features in linguistic form, our method considers a broader range of style elements, making SCAR more general and effective.\n\n---\n\n**Q2: The performance of SCAR on some objective benchmarks is expected**\n\nPlease refer to our response to Reviewer W39Q, where we provide detailed analysis and results on additional benchmarks, including MMLU. 
We have shown that SCAR improves performance on these benchmarks, demonstrating its effectiveness across various tasks.\\n\\n---\\n\\n**Q3: Could you provide more details about how were the embeddings created for Figure 1 (Left)?**\\n\\nTo validate that linguistic form is more consistent within synthetic data, we extracted non-semantic features from the responses using heuristics, including:\\n\\n- Unigrams of functional words\\n- Type-Token Ratio (TTR) of non-semantic functional words\\n- Measure of Textual Lexical Diversity (MTLD) of non-semantic functional words\\n- Number of sentences\\n- Flesch Reading Ease score\\n- Punctuation frequency\\n\\nWe used these features to create embeddings representing each response. Using t-SNE, we visualized these embeddings in Figure 1 (Left). Although we could include more non-semantic features, we believe these are sufficient to demonstrate our point about the consistency of linguistic form in synthetic data.\\n\\n---\\n\\nAgain, we appreciate your insightful comments and believe that addressing these points strengthens our work.\"}", "{\"comment\": \"### **6. Responses to Specific Questions**\\n\\n**a. What is the output format of StackExchange? Does it contain only code or both text and code?**\\n\\nStackExchange answers typically contain both text and code. The text provides explanations, context, and guidance, while code blocks present the actual implementations. For LLM-generated answers, the format is more uniform, often starting with textual explanations followed by code blocks. This combination of text and code is preserved in our dataset.\\n\\n---\\n\\n**b. Regarding the code, is the linguistic metric meaningful? Since the standard deviation of TTR in code is larger than that in text, which may not be intuitive.**\\n\\nYes, the linguistic metrics are meaningful in our context. 
We compute the Type-Token Ratio (TTR) and other linguistic metrics based on the functional words within the responses, excluding the code segments. This approach isolates the linguistic style of the textual explanations, where variations in writing style occur. By separating the code from the text, we ensure that the linguistic metrics accurately represent the stylistic features of the written language.\\n\\n---\\n\\n**c. Why use max pooling when calculating \\\\( v_p \\\\), which only preserves the information of one token?**\\n\\nWe use max pooling for \\\\( v_p \\\\) to capture the most salient features of the linguistic form. Max pooling selects the maximum value across the sequence for each dimension, emphasizing the most significant activations related to stylistic elements. Since linguistic form pertains to surface-level features, max pooling effectively summarizes these aspects without the need for more complex aggregation methods. Our goal is to distinguish between linguistic form and semantic content, and max pooling serves as a practical approach for representing the former.\\n\\n---\\n\\n**d. What parameters need to be trained in your method?**\", \"the_parameters_trained_in_our_method_include\": \"- **Encoder Weights (if fine-tuned):** We use a pre-trained encoder (e.g., RoBERTa-base), which may be fine-tuned during training.\\n- **MLP Layers:** Additional Multi-Layer Perceptron (MLP) layers process the representations \\\\( v_p \\\\) and \\\\( v_c \\\\) and compute the reward function \\\\( R_\\\\theta(x, y) \\\\).\\n- **Total Parameters:** The trainable parameters consist of the MLP layers and any encoder layers that are fine-tuned.\\n\\nOverall, the model's parameters are relatively lightweight compared to large LLMs, making SCAR efficient to train.\\n\\n---\\n\\n**e. Why is the selection ratio of code and text different (12.5% vs 10%)? Is this intentional?**\\n\\nYes, the selection ratios are intentional and tailored to each domain. 
Selecting 10% of the data in the open domain yields 1,000 examples, matching the size of the LIMA dataset. This allows for direct performance comparisons with models fine-tuned on LIMA. The ratio choices do not influence the rigorousness of the experiments.\"}", "{\"comment\": \"---\\n\\n**2. Alignment of Linguistic Form Definition with Metrics Used**\\n\\nWe agree that the metrics we used\\u2014TTR (Type-Token Ratio), MTLD (Measure of Textual Lexical Diversity), Flesch Reading Ease score, sentence length, and punctuation frequency\\u2014are proxies for various aspects of linguistic form. Our definition of linguistic form includes elements that shape the presentation of a response, independent of semantics, such as tone, transitional word choice, sentence structure, and formatting.\\n\\nWhile these metrics may not capture every nuance, they effectively quantify variations in style and presentation. They serve as practical indicators of the non-semantic features that organize the response, aligning with our focus on stylistic consistency.\\n\\n---\\n\\n**3. Consistency in Experimental Settings and Use of LLMs**\\n\\nWe apologize for any confusion regarding the use of LLMs in our experiments. In line 148, we mention using LLaMA2 family chat models to generate data for training CodeLLaMA and LLaMA3-8B. Our intention was to explore how data generated by different LLMs influences the performance of fine-tuned models.\\n\\nIn the later experiments, we continue to fine-tune CodeLLaMA and LLaMA3-8B. The LLMs used to generate data and the LLMs being fine-tuned are consistent throughout the paper. Our aim was to investigate the impact of style-consistent data generated by various models on fine-tuning performance.\\n\\n---\\n\\n**4. Complexity of the Method and Practical Usage**\\n\\nWe understand the concern about the complexity of our method. 
However, each component is essential for the robustness and effectiveness of the data selection process:\\n\\n- **LLM Evaluation of Quality:** Using LLMs to assess helpfulness and correctness ensures that selected data meets quality standards.\\n- **Response Generation/Rewriting:** Generating or rewriting responses creates style-consistent data, improving fine-tuning performance.\\n- **Customized Ranker Training:** Training a ranker allows systematic selection of examples optimizing both style consistency and quality.\\n\\nOur ablation studies confirm the necessity of these components. For example, without the quality threshold, the ranker might select low-quality examples from datasets with large variations in data quality, negatively impacting performance. While the method involves several steps, it is designed to be effective and can be implemented using existing tools.\\n\\n---\\n\\n**5. Clarification of Figure 1**\", \"in_figure_1\": \"- **Left Panel:** It displays a t-SNE visualization of embeddings derived from non-semantic linguistic form features extracted from the complete responses of the three categories (human, referenced, direct). The features include:\\n\\n - Unigrams of functional words\\n - Type-Token Ratio (TTR) of functional words\\n - Measure of Textual Lexical Diversity (MTLD) of functional words\\n - Number of sentences\\n - Flesch Reading Ease score\\n - Punctuation frequency\\n\\n- **Right Panel:** It shows the density plot of perplexity over the semantic portions (\\\\( y_c \\\\)) of the responses.\\n\\nThese visualizations support our assertion that synthetic data exhibits higher style consistency in linguistic form and semantic surprisal compared to human data.\\n\\n---\\n\\n**6. Elaboration on the Takeaways**\\n\\nOur key takeaways are based on the observation that synthetic data generated by LLMs tends to be more style-consistent than human-authored data. 
This consistency contributes to improved performance when fine-tuning LLMs.\\n\\nBy aligning the style of the selected data with that of a specific LLM (e.g., GPT-3.5), we enhance the fine-tuning process. Our results show that reducing the variance in stylistic features (as indicated by metrics like TTR and PPL) leads to better performance. This validates our approach of selecting style-consistent data to improve LLM fine-tuning.\\n\\n---\\n\\n**7. Explanation of Experimental Results in Figure 2 (Open-Ended Domain)**\\n\\nIn Figure 2, we observe that models fine-tuned on SCAR-selected subsets can outperform those trained on the full dataset, even with less data. While reducing data size often leads to performance drops, factors like higher style consistency data quality and diversity can outweigh the effects of smaller datasets.\\n\\nIn our scenario with style-inconsistent full data, emphasizing style consistency has a more significant impact on performance than data quantity. This explains why SCAR-selected models sometimes outperform models trained on the full dataset.\\n\\n---\\n\\nAgain, we appreciate your feedback and hope that our responses clarify your concerns and the contributions of our work.\"}", "{\"comment\": \"**Motivation and Positioning on Diversity of Instructions:**\\n\\n- We appreciate the reviewer's feedback on our motivation. We would like to clarify that our work ***does not contradict the value of diversity*** in instruction-tuning data. Instead, our study aims to explore the role of style consistency within a diverse dataset, offering a complementary perspective to traditional diversity-based selection methods. As demonstrated in Table 1, we intentionally use the same set of instructions across different experimental conditions to isolate the impact of style consistency, thereby excluding diversity effects. 
This setup enables us to identify phenomena that are independent of instruction diversity and highlight how style consistency can enhance model performance even within diversified datasets. While we recognise the value of diversity, it is beyond the scope of our paper's discussion. Combining these two approaches could be a promising direction for future applications.\\n\\n- Regarding the observed performance of diversity selection, we explain in the Experiments section that this method is less effective in our context because our full dataset is already inherently diversified due to the crowdsourcing data curation methods. Consequently, diversity-based methods like random sampling and diversity selection do not provide the same advantage here as they might in scenarios with less inherent diversity.\\n\\n**Clarification on Semantic Surprisal and Use of Perplexity:**\\n\\nSemantic surprisal is rooted in the well-established concept of text surprisal in NLP, commonly measured by perplexity, as shown in prior works mentioned in the paper [1,2,3]. Using this foundation, we adopt perplexity to quantify surprisal. However, our approach goes a step further by connecting text surprisal to LLM fine-tuning performance\\u2014an unexplored area in previous studies. To examine the distinct impacts of semantic and non-semantic components on LLM performance, we decompose responses into segments that capture semantic versus non-semantic content, allowing us to evaluate how each component individually influences alignment.\\n\\n[1] Byung-Doh Oh and William Schuler. Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times? Transactions of the Association for Computational Linguistics\\n\\n[2] JA Michaelov, MD Bardolph, CK Van Petten, BK Bergen, and S Coulson. Strong prediction: Language model surprisal explains multiple n400 effects. neurobiology of language, 1\\u201371.\\n\\n[3] Adam Goodkind and Klinton Bicknell. 
Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th workshop on cognitive modeling and computational linguistics (CMCL 2018),\\n\\n**Explanation of LLAMA2-13B Performance Degradation:**\\n\\n- The observed performance degradation with LLAMA2-13B occurs in two contexts:\\n - ***Data Generation for Fine-Tuning:*** As shown in Table 1, LLAMA2-13B-chat was used to generate data for training the base models, CodeLlama-7b and Llama3-8b. However, the LLAMA2-13B-chat model often produced low-quality data, including hallucinations and unhelpful responses, which negatively impacted fine-tuning during both response rewriting and direct generation.\\n - ***Ablation Study \\u2013 Ranker Training:*** In the Ablation Study, degradation also arises when LLAMA2-13B-chat-generated data is used to train our ranker. Low-quality training data led the ranker to prioritize style-consistent but low-quality responses, which ultimately detracted from the LLM\\u2019s performance in SFT. Our findings emphasize the importance of both quality and style consistency in training data for effective ranker training and, consequently, LLM fine-tuning.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response, which has partially addressed my concerns. As the deadline is approaching, I am unable to provide further questions or comments at this time. I will maintain the current score and believe the paper requires further revision.\"}", "{\"summary\": \"The authors analyze the impact of linguistic form and semantic surprisal on LLMs' SFT performance. They find that consistent data form leads to better outcomes. 
They propose a selection method, SCAR, to choose a small portion of SFT data, achieving strong performance on certain downstream tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Investigating the linguistic form and semantic surprisal of SFT data and their impact on fine-tuned performance is meaningful.\", \"The paper is well-written, easy to follow, and contains rich details.\", \"The authors conduct a large number of experiments to provide useful information.\", \"The authors open-source their code and data.\"], \"weaknesses\": \"1. The position of the selection method is unclear. Is it proposed for training a general model (e.g., ChatGPT) or a specialized model (e.g., CodeLLaMA)?\\n - If your method is proposed for training general models, you should verify its effectiveness using various downstream tasks (e.g., MMLU, GSM8k, HumanEval in TULU evaluation). Please report the performance on the above benchmarks directly using the checkpoints trained in Table 5. This will help determine whether the selection achieves overall improvement or just improvements in a few tasks.\\n - If your method is proposed for training specialized models, you should compare it with more relevant baselines, such as directly using existing **high-quality** domain-specific data (rather than StackExchange), evol-instruct in specific domains, or instruction backtranslation in specific domains. If a user wants to train a specialized model, they do not need to select data from large-scale general data but can directly use high-quality domain-specific SFT data.\\n\\n2. The method seems to select the response whose format is closest to an existing model (e.g., GPT 3.5), rather than detecting format-consistent instances in the dataset.\\n\\n3. The referenced response is unconvincing to me. Since the referenced prompt contains instructions, the model may correct the response even if asked to ignore them. 
It might be better not to provide the instruction.\", \"questions\": \"1. What is the output format of StackExchange? Does it contain only code or both text and code?\\n1. Regarding the code, is the linguistic metric meaningful? Since the standard deviation of TTR in code is larger than that in text, which may not be intuitive.\\n1. Why use max pooling when calculating v_p, which only preserves the information of one token?\\n1. What parameters need to be trained in your method?\\n1. Why is the selection ratio of code and text different (12.5% vs 10%)? Is this intentional?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a system called SCAR that can prioritize instruction-response pairs in a training dataset based on their style consistency. In addition, the paper also explores the relationship between response style, data quality, and LLM performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes SCAR, which reduces the size of the training set while improving performance by optimizing data selection in the context of fine-tuning the existing LLM instructions. The core innovation of SCAR lies in its focus on language style consistency and defines two key style elements: language form and semantic surprisal. This systematic focus on style consistency and optimization method is an important contribution.\", \"weaknesses\": \"1. The motivation of this paper is unclear. Existing research has widely recognized the importance of ensuring diversity in instruction-tuning data. However, this paper seems to oppose this common understanding without strong justification. The experiments do not persuade me, as they are somewhat weak: both the dataset and the LLM size are limited. 
The results are unconvincing and, if not thoroughly validated, could potentially mislead the community.\\n\\n2. Although the paper proposes \\\"linguistic form\\\" and \\\"semantic surprisal\\\" as key style elements, the definitions and measurement methods of these concepts are slightly vague in some sections, especially the concept of \\\"semantic surprisal\\\", which still needs further clarification and explanation.\\n\\n3. Although the paper provides a lot of experimental data and results, the design and interpretation of some experiments are a bit complicated. For example, in the comparison of different data selection methods, some performance differences are not adequately explained. For some performance degradation cases (such as the performance of LLAMA2-13B), the paper does not explore the reasons behind it in detail.\", \"questions\": \"Refer to the above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes SCAR as a novel approach for instruction-following data selection. It is grounded in the observation that a model performs better when the response styles in its training data are more consistent. Thus, the author trains a reward model to capture response differences in linguistic form and surprisal-determining features. The required dataset consists of quadruples, i.e., an instruction, a human response, an LLM response, and a human-referenced LLM response. The model is trained to optimize the ranking loss and the representation learning loss simultaneously. 
Experiments on code and open-ended domains show consistent improvements in SCAR over several baselines.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is rich in content and presents extensive analysis. The investigation of the impact of styles on LLM fine-tuning effectively motivates the design of the ranking model, which is further validated through experiments on various datasets with ablation discussions.\", \"weaknesses\": [\"There is a lack of a simple yet significant rule-based baseline. (See Q1)\", \"The model trained on general instruction-following data was only tested on AlpacaEval evaluated by LLMs. The data collected for training the ranking model also relies on responses generated by LLMs. This raises concerns that SCAR inadvertently leverages certain style features favored by the LLM judge. (See Q2)\", \"The font in some figures is difficult to read without zooming in, particularly in Figures 1 and 3. The organization of the experimental setup in Section 4 could be improved.\"], \"questions\": \"Q1: The \\\"longest\\\" method discussed in prior work [1, 2] appears to be a strong rule-based baseline. How does its performance compare with SCAR?\\n\\n[1] Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning\\n[2] Rethinking Data Selection for Supervised Fine-Tuning\", \"q2\": \"The performance of SCAR on some objective benchmarks is expected, such as GSM8K, MMLU, BBH, etc.\", \"q3\": \"Could you provide more details about how the embeddings were created for Figure 1 (Left)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**Response to Reviewer Comments**\", \"We thank the reviewer for their insightful feedback and appreciate the opportunity to clarify our work. Below, we address your concerns point by point.\", \"---\", \"**1. 
Clarification on the Definition and Measurement of Semantic Surprisal**\", \"We acknowledge that our initial definition of Semantic Surprisal\\u2014\\\"the choices of solutions, ideas, or approaches in a response that affects how predictably or unexpectedly it addresses the instruction\\\"\\u2014may seem broader than what perplexity measures. However, perplexity is a well-established metric for quantifying text surprisal levels, as supported by references [1]-[3]:\", \"[1] *Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?* Transactions of the Association for Computational Linguistics.\", \"[2] *Strong prediction: Language model surprisal explains multiple N400 effects*. Neurobiology of Language.\", \"[3] *Predictive power of word surprisal for reading times is a linear function of language model quality*. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018).\", \"In our work, we focus on the surprisal levels of the semantically relevant portions of the text. Specifically, we calculate the perplexity PPL(y_c | x), where y_c represents the content words (semantic components) of the response, and x is the instruction. By filtering out functional words y_p, we aim to isolate the semantic content.\", \"**Addressing the Concern About Disrupting Sentence Fluency**\", \"We understand that removing functional words might disrupt sentence fluency, potentially affecting perplexity measurements. 
To address this concern, we also calculated the perplexity of the complete responses PPL(y | x) in Table 13 in the Appendix and found patterns similar to those obtained using only the semantic portions PPL(y_c | x).\", \"**Further Analysis Using the LIMA Dataset**\", \"To validate that non-semantic linguistic form features contribute minimally to the variance of response surprisal, we conducted additional analysis using the LIMA dataset:\", \"**Calculations:**\", \"- **PPL(y_c|y_p,x):** Perplexity of the content words given the functional words and the instruction.\", \"- **PPL(y_p|y_c,x):** Perplexity of the functional words given the content words and the instruction.\", \"- **PPL(y|x):** Perplexity of the full response given the instruction.\", \"**Variance Explanation:**\", \"We calculated the variance of PPL(y_c|y_p,x) and PPL(y_p|y_c,x) and examined how much each explains the variance in PPL(y|x) by dividing var(PPL(y_c|y_p,x)) by var(PPL(y|x)) and var(PPL(y_p|y_c,x)) by var(PPL(y|x)).\", \"**Results:**\", \"- The variance of PPL(y_p|y_c,x) explains only 4% of the variance in PPL(y|x).\", \"- The variance of PPL(y_c|y_p,x) explains 67% of the variance in PPL(y|x).\", \"**Regression Analysis:**\", \"We built a regression model where PPL(y|x) is the dependent variable, and PPL(y_c|y_p,x) and PPL(y_p|y_c,x) are the independent variables.\", \"**Findings:**\", \"- The regression coefficient for PPL(y_c|y_p,x) is approximately 0.5.\", \"- The regression coefficient for PPL(y_p|y_c,x) is near zero (0.04).\", \"The results indicate that the variation in the perplexity of the full response (PPL(y|x)) is largely influenced by the perplexity of the content words (PPL(y_c|y_p,x)), not by the functional words. 
This supports our hypothesis that semantic content (captured by PPL(y_c|y_p,x)) is the primary contributor to surprisal levels, justifying our use of perplexity over y_c to measure semantic surprisal.\"]}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful feedback and valuable suggestions. We address each of your concerns and questions below.\\n\\n---\\n\\n### **1. Clarification on the Position of the Selection Method**\\n\\nOur method, **SCAR**, is designed to be versatile and applicable to both general-purpose and specialized models. We have demonstrated its effectiveness in fine-tuning models in both the code domain (e.g., CodeLLaMA) and the open-ended question-answering domain (e.g., OLMO-7B). The primary goal of SCAR is to improve the efficiency of supervised fine-tuning (SFT) by selecting style-consistent and high-quality examples, enhancing performance regardless of the model's specialization.\\n\\n---\\n\\n### **2. Benchmark Performance on General Tasks**\\n\\nTo verify the effectiveness of SCAR in training general models, we conducted additional experiments on various downstream tasks. Following the evaluation protocol from \\\"Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning,\\\" we evaluated the OLMO-7B model fine-tuned on SCAR-selected data using the following benchmarks:\\n\\n- **ARC (Easy + Challenge)**\\n- **HellaSwag**\\n- **MMLU**\\n- **TruthfulQA**\\n\\n**Results:**\\n\\nWe fine-tuned OLMO-7B on 10k examples selected by SCAR and compared its performance with the official OLMO-7B-SFT checkpoint trained on the full 320k dataset. 
Despite using 32 times less data, our SCAR-fine-tuned model achieves competitive performance, outperforming the official checkpoint on ARC, HellaSwag and TruthfulQA, while performing lower on MMLU.\\n\\n#### **Evaluation Results**\\n\\n| Model | Data Size | ARC (Accuracy) | HellaSwag (Accuracy) | TruthfulQA (BLEU) | MMLU (Accuracy) |\\n|----------------------------|-----------|----------------|----------------------|-------------------|-----------------|\\n| **OLMO-7B (SCAR, 10k)** | 10k | **41.04** | **58.48** | **38.31** | 25.40 |\\n| OLMO-7B-SFT (Official) | 320k | 39.42 | 58.39 | 33.90 | **38.60** |\\n\\n*Note:* For ARC and HellaSwag, we used likelihood-based accuracy due to limitations in extracting generated answers using the `lm-evaluation-harness` toolkit. For TruthfulQA and MMLU, we evaluated the generated texts using BLEU scores and accuracy, respectively.\\n\\nThese results demonstrate that SCAR can effectively reduce the amount of required training data while maintaining or even improving performance on certain benchmarks. The decrease in performance on MMLU may be due to the reduced data size affecting the coverage of specific knowledge areas.\\n\\n---\\n\\n### **3. Selection of Format-Consistent Data vs. Matching GPT-3.5 Outputs**\\n\\nGiven our analysis in Section 2, we observed that LLM-generated data, such as that from GPT-3.5, tend to be more consistent in style compared to human-generated data. Therefore, selecting data that is similar to GPT-3.5's style can enhance the overall style consistency of the dataset. 
Our results in Table 3 validate this, showing that the variances of linguistic metrics like TTR (Type-Token Ratio) and PPL (Perplexity) are reduced in most cases when we select data that aligns with GPT-3.5's style.\n\nWhile our current implementation focuses on GPT-3.5 due to its availability and consistent outputs, the same approach can be adapted to select data that aligns with the style of other models, such as LLaMA, or any preferred style. The key aspect is that style consistency improves fine-tuning performance, and SCAR is flexible enough to be tailored to any target style to achieve this consistency.\n\n---\n\n### **5. Justification for the Referenced Response Method**\n\nThe use of \"referenced responses\" is crucial for our analysis:\n\nReferenced responses serve as intermediates that retain the semantic content of human responses while adopting a more consistent linguistic form.\n\nOur rewriting approach aligns with techniques from the ACL paper \"Self-Distillation Bridges the Distribution Gap in Language Model Fine-Tuning.\" Including the instruction in the prompt ensures that the model considers the context, reducing hallucinations and preserving semantic integrity.\n\nThis process allows us to disentangle the effects of linguistic form and semantic surprisal on fine-tuning performance, as we can control for one while varying the other.\"}", "{\"summary\": \"The paper introduces SCAR (Style Consistency-Aware Response Ranking), a method to improve the efficiency of instruction tuning for Large Language Models (LLMs) by selecting data that maintains stylistic consistency in responses. The authors propose that consistent response style enhances fine-tuning performance, and define two key elements in response style: linguistic form (the structural presentation of responses) and semantic surprisal (the predictability of response content relative to the instruction). 
SCAR ranks instruction-response pairs by stylistic consistency, allowing LLMs to achieve comparable or even superior performance using only a very small portion of the original dataset in coding and open-ended question-answering benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes that Style Consistency is important for the efficiency of instruction tuning, which has not been well studied. Thus I think the novelty of this motivation is solid.\n2. The experiments conducted by this paper are comprehensive.\", \"weaknesses\": \"1. The paper is not well-written and very hard to follow and understand. I cannot even find an overall description of the whole workflow. An illustrative main figure should be included.\n2. The definition of Semantic Surprisal in line 054 is not well-aligned with the real metric that is used in line 162. Your definition of Semantic Surprisal, \u201cthe choices of solutions, ideas, or approaches in a response that affects how predictably or unexpectedly it addresses the instruction\u201d, is largely beyond the capabilities of perplexity-based metrics. \n3. Please let me know if I am wrong: when you calculate PPL(y_c|x), you first filter out all the functional words (y_p). So I think it is probable that this process will directly destroy the consistency and fluency of the original sentences, thus making the resulting PPL less meaningful. \n4. Similar to point 2, your definition of linguistic form is also not well-aligned with the real metric being used. I don\u2019t think the use of TTR, MTLD, Flesch score, sentence length, and punctuation frequency can support your definition as \u201celements that shape the presentation of a response, mostly independent of semantics, such as tone (formal or informal), transitional word choice, sentence structure, formatting (bullet points or heading lines), variable naming conventions\u201d. \n5. 
The settings for this paper are slightly chaotic: in the paragraph at line 148, the LLMs being used are mainly from the llama2 family, but in the later parts of the paper, it seems that most experiments are conducted on the llama3 family. Is this done on purpose? This inconsistency makes it hard to follow. \n6. The method is far too complicated and contains too many components, thus making it hard to use in practice. For example, it needs LLMs to first generate an analysis of Helpfulness and Correctness, and it needs LLMs to re-generate some of the responses. It also needs to further train a customized module for the ranking. As mentioned in point 1, considering how many components are utilized in the method, the absence of an illustrative figure is not reasonable to me. \n\nI think most of my weaknesses are caused by the unclear writing of this paper. If the paper is modified to be clear, I will be glad to raise the score.\", \"questions\": \"1. The colors in Table 1 are not aligned with the colors in other parts.\n2. What exactly is presented in Figure 1? For the left, are they the complete responses of the 3 categories or extracted functional words from responses of the 3 categories? Similarly, for the right, are they the PPL over complete responses or the PPL over y_c? \n3. Can you provide a more detailed illustration and analysis of how the takeaways are derived? \n4. Please further illustrate the experimental results in Figure 2, especially in the Open-ended Domain. It looks like every metric is worse than Full Data on the Human dataset except for PPL, which I think contradicts most of the previous findings. Can you explain it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your timely feedback! We appreciate your insights and acknowledge the need for further revisions. We plan to revise our manuscript accordingly and resubmit it. 
Please consider our responses as a particular effort to clarify and address the issues you've raised.\"}" ] }
5lUdTogEL3
Balancing Differential Discriminative Knowledge For Clothing-Irrelevant Lifelong Person Re-identification
[ "Zhenyu Cui", "Jiahuan Zhou", "Yuxin Peng" ]
Lifelong person re-identification (L-ReID) focuses on learning sequentially collected datasets from different domains to match the same person. Advanced L-ReID methods typically balance the domain gap between different datasets via domain knowledge modeling, such as knowledge rectification or distribution prototyping. However, existing methods overlook balancing discriminative knowledge within different datasets, resulting in conflicts when sequentially accumulating differential discriminative information, e.g., learning cloth-changing and cloth-consistent knowledge in sequence, which brings critical catastrophic forgetting of old discriminative knowledge. In this paper, we focus on a new but practical task called Cloth-Irrelevant Lifelong Person Re-identification (CIL-ReID). To address this issue, we propose an Adaptive Discriminative Knowledge Consolidation (ADKC) framework to balance the discriminative information of different domains in L-ReID. Specifically, we propose a Selective Knowledge Forgetting (SKF) module to correct potential overfitting to specific discrimination (e.g., clothing information) based on new knowledge. In addition, we design a Selective Knowledge Retention (SKR) module to adaptively compensate for the potential lack of discriminative information based on old knowledge and integrate differential discrimination into a unified framework. To validate our method, two CIL-ReID benchmarks are first established, and extensive experiments on these two benchmarks demonstrate that our method outperforms existing advanced methods on the CIL-ReID task.
[ "Person re-identification", "Cloth-changing", "Lifelong learning", "Prototype learning" ]
https://openreview.net/pdf?id=5lUdTogEL3
https://openreview.net/forum?id=5lUdTogEL3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tqHXarQCej", "joua10juYc", "N4IPGb6fvY", "Kndr6THdXE", "J1Sy3nwc9v" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730373912331, 1729476958697, 1730613053309, 1731455733079, 1729506979276 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission653/Reviewer_4Vtz" ], [ "ICLR.cc/2025/Conference/Submission653/Reviewer_v8K5" ], [ "ICLR.cc/2025/Conference/Submission653/Reviewer_WuVk" ], [ "ICLR.cc/2025/Conference/Submission653/Authors" ], [ "ICLR.cc/2025/Conference/Submission653/Reviewer_m6TM" ] ], "structured_content_str": [ "{\"summary\": \"The submitted manuscript is incomplete, containing only an abstract and an incomplete introductory section, lacking related work, specific methods and experiments. Therefore it is not possible to summarise.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Due to the incomplete manuscript submitted, it is not possible to see the strengths of the proposed methodology.\", \"weaknesses\": \"The submitted manuscript is incomplete and lacks related work, specific methods, and experiments.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is only a template, which shows no respect for the ICLR. 
I recommend that the authors withdraw this submission.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Nan\", \"weaknesses\": \"This paper is only a template, which shows no respect for the ICLR.\", \"questions\": \"Nan\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"I cannot fully evaluate this paper as it was submitted as an incomplete version\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Due to the submission version only including an abstract and a partial introduction, it is not possible to find any strengths.\", \"weaknesses\": \"Version submission error\", \"questions\": \"Version submission error\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The manuscript is incomplete and contains much irrelevant information. This manuscript is not ready for review.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The manuscript is incomplete and not ready for review.\", \"weaknesses\": \"The manuscript is incomplete and not ready for review.\", \"questions\": \"The manuscript is incomplete and not ready for review.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
5lIXRf8Lnw
Automatically Interpreting Millions of Features in Large Language Models
[ "Gonçalo Santos Paulo", "Alex Troy Mallen", "Caden Juang", "Nora Belrose" ]
While the activations of neurons in deep neural networks usually do not have a simple human-understandable interpretation, sparse autoencoders (SAEs) can be used to transform these activations into a higher-dimensional latent space which can be more easily interpretable. However, SAEs can have millions of distinct latents, making it infeasible for humans to manually interpret each one. In this work, we build an open-source automated pipeline to generate and evaluate natural language interpretations for SAE latents using LLMs. We test our framework on SAEs of varying sizes, activation functions, and losses, trained on two different open-weight LLMs. We introduce five new techniques to score the quality of interpretations that are cheaper to run than the previous state of the art. One of these techniques, intervention scoring, evaluates the interpretability of the effects of intervening on a latent, which we find explains latents that are not recalled by existing methods. We propose guidelines for generating better interpretations that remain valid for a broader set of activating contexts, and discuss pitfalls with existing scoring techniques. Our code is available at https://anonymous.4open.science/r/interpreting_latents/.
[ "interpretability", "language model", "sae", "features", "explanation" ]
Reject
https://openreview.net/pdf?id=5lIXRf8Lnw
https://openreview.net/forum?id=5lIXRf8Lnw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "cVQ2ARa7sn", "ZvYMBZ5DZ5", "YuqbmXJkPZ", "YOhkaN350Q", "N5hCx39mMT", "Mumajbp529", "MUa8u8NOGa", "Lhddlo2wgM", "GS5FxphDzr", "CBAGx0oAC0", "9eN5vFrhxI", "4fbURBK71X", "2PQUe20qB3", "1hSCTn1vtj" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730693106737, 1732686764676, 1733208825841, 1732204542203, 1734716190374, 1732205136554, 1729471739385, 1732204761821, 1730704260653, 1732204572810, 1737523828432, 1732204684704, 1730668337157, 1732204194411 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_8Dae" ], [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_s55e" ], [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_YaT1" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ], [ "ICLR.cc/2025/Conference/Submission7275/Area_Chair_WQ4K" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ], [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_4Nph" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ], [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_YaT1" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ], [ "ICLR.cc/2025/Conference/Submission7275/Reviewer_s55e" ], [ "ICLR.cc/2025/Conference/Submission7275/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper builds an open-source automated pipeline to generate and evaluate natural language explanations for sparse autoencoder features with LLMs. The framework has been evaluated on various dimensions, including SAE size, activation function, loss, and LLMs. Five new scoring techniques are proposed. The paper finds that SAEs trained on the residual stream of nearby layers are highly similar. 
SAE latents are also found to be more interpretable than neurons.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The open-source framework is comprehensive and valuable for large-scale SAE analysis. The experiments are well designed to illustrate the effectiveness of the proposed method.\", \"The metrics proposed in this paper provide more dimensions along which to evaluate the generated explanations, which would be valuable for the SAE community.\", \"Some of the findings are meaningful to the SAE community: for example, that larger-latent SAEs learn more dataset-specific latents, the relations between different sampling approaches and the generated explanations, and the high correlations between latents at adjacent layers.\"], \"weaknesses\": [\"The method is a bit hard to understand for readers who are not familiar with SAEs. For example, how is Section 3.1 related to 3.2? It would be more illustrative if a figure of the whole pipeline were provided.\", \"It would be clearer to provide a simple example of explaining the latents of SAEs, and better still if an example involving the whole workflow of this framework were provided.\"], \"questions\": \"The framework proposed in this paper is a valuable tool for the SAE community. It provides an automated pipeline to generate and evaluate natural language explanations for sparse autoencoder features. However, the authors should consider writing a more accessible version for readers who are not familiar with SAEs, especially for Section 3, as it is a bit difficult to follow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for answering my questions, providing clarifications, and conducting additional experiments, including a human evaluation. 
Most of my concerns have been addressed, so I\\u2019ve decided to increase my score.\"}", "{\"comment\": \"Thanks the authors for the response which have helped address most of my questions, and I've adjusted my review accordingly. Given the different focuses of the scoring methods, presenting the correlation metric between methods in the main text might be confusing to the readers due to the low scores. It might be better to move the correlation scores with human explanations to the main text instead.\"}", "{\"comment\": \"> The low correlation between different evaluation methods in Table 1 is concerning...\\n\\nWe respectfully disagree with the claim that simulation scoring is vetted and established. Prior work has found that explanations with high simulation scores can have high error rates in intervention tasks, and a low F1 score on a task similar to detection, see [Huang (2023)](https://arxiv.org/pdf/2309.10312). In [recent work](https://arxiv.org/pdf/2406.04093), the OpenAI team also discussed downsides of simulation scoring. Simulation focuses on the activating part of the distribution and asks whether an explanation allows us to predict how strongly a latent is active, while we are also interested in distinguishing active from non-active latents, which is not directly measured by simulation scoring.\\n\\nWhile our scores are correlated with simulation scoring, they are fundamentally measuring different quantities which are just as important as what simulation measures. As an added benefit, our methods are also cheaper and more efficient to run. We discuss in Section 4.1 some hypotheses on why there is a weak correlation between our methods and simulation.\\n\\nWe agree it would be very useful to compare our explanations to ground truth explanations, but we are not aware of a big enough dataset of such explanations. In lieu of this, we do evaluate human-generated explanations, and provide human generated scores for automatically generated explanations. 
We find that human-generated explanations have similar fuzzing, detection, and embedding scores to automatically generated explanations, while having slightly higher simulation scores. We find that human scores have a correlation of around 0.6, to detection, fuzzing and simulation; for details see the response to review 8Dae, as well as Appendix A.4.\\n\\n> The authors claim that the new methods are more efficient than prior scoring methods...\\n\\nThank you for this comment. This information was left out as an oversight. We have now added some quantitative comparison between the methods to support our claim that the new methods are more efficient and cost-effective. We compare the number of input and output tokens used for each of the explanation methods with cost estimates at different sizes\\u2014 see new Table 1 and lines 351-359. Simulation is about 5 times more expensive than detection and fuzzing if one is able to run the scorer model locally, but closer to 30 times more expensive when using a large closed source model, because providers do not allow for the tools required to make the method more efficient. Embedding on the other hand is 50 times cheaper than simulation, even when compared to the price of running it locally.\\n\\n> Reproducibility: The author mentioned a plan to open-source the project, but it's hard to evaluate the quality of their code for reproducibility purpose either since it's not provided as one of the supplementary files.\\n\\nWe thank the reviewer for this suggestion and apologize for the oversight. We uploaded our code to an anonymous repository: https://anonymous.4open.science/r/interpreting_latents/, which comes with examples and a guide on how to use the library.\\n\\n> Missing a highly relevant work, \\\"Explaining black box text modules in natural language with language models\\\". 
How does their scoring method compare to the ones proposed in this paper?\\n\\nWe already mentioned a scoring technique that is based on computing the activations of a model on generated examples [(Koft 2024)](https://arxiv.org/abs/2405.20331), but we were unaware that there was an earlier example of using that technique on language models. We have now added a reference to \\\"Explaining black box text modules in natural language with language models\\\" as it is a relevant work. \\n\\nTheir explanation generation method is similar to ours, although they compute activations on trigrams, while we compute the activations on a dataset that has a similar distribution to the training dataset. The fact that this type of scoring requires two distinct steps, one where an LLM generates trial contexts, and another where the contexts are run through a model and the SAE and the activation collected, leads us to not focus on \\u201cgenerative\\u201d scoring in this article, although we believe it could have its merits.\\n\\n> Can the authors explain the negative correlation between the fuzzing score and intervention score in figure 4? If they are both useful scoring methods, why would the correlation be negative?\\n\\nWe argue that there are some latents that are better explained by their effects on the outputs of the model instead of on their activation patterns. Given this hypothesis, it makes sense that there would be a negative correlation between these two scores. In Figure 4, we show that latents with low \\u201ccorrelational\\u201d (fuzzing) scores are likely to be better explained by their \\u201ccounterfactual\\u201d (intervention) effect. We have added discussion of this issue on lines 405-417.\"}", "{\"metareview\": \"## Summary of Scientific Claims and Findings\\nThe paper introduces a novel automated framework using large language models (LLMs) to generate and evaluate natural language explanations for Sparse Autoencoder (SAE) features. 
It proposes five new scoring methods\\u2014detection, fuzzing, surprisal, embedding, and intervention scoring\\u2014to assess the interpretability of these features. The paper also provides insights into the high similarity of SAEs trained on nearby layers and suggests that wider SAEs are more efficient for certain tasks under computational constraints. The authors claim their methods are more efficient and offer actionable insights for practitioners in model explainability. \\n \\n## Strengths \\n1. **Relevance and Importance**: The paper addresses a timely issue of model interpretability in the context of large SAEs, which is crucial given the growing complexity of modern neural networks. \\n2. **Comprehensive Framework**: The work presents a comprehensive open-source pipeline that could significantly aid researchers in analyzing SAEs. \\n3. **Novel Scoring Techniques**: The introduction of multiple scoring techniques offers a broader set of tools for evaluating the quality of explanations, contributing valuable methodologies to the community. \\n4. **Well-Designed Experiments**: The experiments are well-structured to validate the effectiveness of the proposed methods, providing meaningful insights into SAE behavior. \\n \\n## Weaknesses \\n1. **Clarity and Presentation**: Several reviewers noted that the paper's presentation could be improved. The organization sometimes makes it difficult to follow the main contributions and arguments. \\n2. **Validation and Comparison**: The paper lacks a solid comparison with previous evaluation metrics. The correlation between the new and established methods is not robust, which raises questions about the validity and reliability of the new metrics. \\n3. **Depth of Analysis**: Some findings are presented without extensive analysis, limiting the depth of insights offered by the paper. \\n\\n After thorough consideration of other submissions in the same batch, I find myself recommending rejection for this paper. 
I encourage the authors to refine their work and consider resubmission to a future conference or journal.\", \"additional_comments_on_reviewer_discussion\": [\"#### Summary of Discussion and Rebuttal\", \"**Reviewers' Points**: Reviewers raised concerns about the correlation of new scoring methods with established metrics, the clarity of the presentation, and the lack of human evaluations to validate automatic scoring methods.\", \"**Authors' Responses**: The authors provided detailed clarifications, added missing comparisons, and uploaded the code to an anonymous repository for reproducibility. They also conducted additional analyses, including human evaluations, to strengthen their claims.\", \"**Weight in Final Decision**: Several reviewers noted that the paper's presentation could be improved. The validation and comparison part should also be improved. After thorough consideration of other submissions in the same batch, I recommend to reject the paper. I encourage the authors to refine their work and consider resubmission to a future conference or journal.\"]}", "{\"comment\": \"> This work did not compare the new evaluation metric with the previous evaluation metric (https://openai.com/index/language-models-can-explain-neurons-in-language-models/) in a solid form...\\n\\nWe have extended the analysis of the previous evaluation method, by introducing a cost analysis as well as a deeper discussion on some problems with simulation scoring\\u2014 see Section 4.1 and Appendix A.4.6. See also discussion with reviewers YaT1, s55e.\\n\\n> Comparing SAEs in adjacent layers (1) lacks support of motivation and (2) is not well supported. See Questions for details...\\n\\nWe have since decided to remove this section, responding to your and other reviewers\\u2019 comments about readability. We agree that it significantly cuts into the narrative of the article and makes it harder to follow. We still find it relevant to reply to some of these points in Question 4 and 5. 
\\n\\n**Questions**\\n> How is the correlation of the proposed evaluation metric with the original metric?\\n\\nWe have reported both the Spearman and the Pearson correlation between the original metric and the proposed metrics in the original manuscript. Although only fuzzing has a high correlation with simulation scoring (>0.7), we believe that the added discussion about the drawbacks and advantages of the different methods justifies the introduction of the new scores. See also replies to reviewers YaT1, s55e. \\n\\n> How is the efficiency of the proposed evaluation metric compared to the original metric?\\n\\nWe have now provided some comparison between the methods to support the claim of efficiency. We compare the number of input and output tokens used for each of the explanation methods with cost estimates at different explainer model sizes\\u2014 see new Table 1 and lines 351-359. Simulation is about 5 times more expensive than detection and fuzzing if one is able to run the scorer model locally, but closer to 30 times more expensive when using a large closed source model, because providers do not allow for the tools required to make the method more efficient. Embedding on the other hand is 50 times cheaper than simulation, even when compared to the price of running it locally.\\n\\n> Previous work (https://transformer-circuits.pub/2023/monosemantic-features) has already shown SAEs have more interepretable features than neurons. What's the purpose of validating this result?\\n\\nWe have decided to include the neuron comparison as there is still significant effort in [improving explanations of neurons](https://transluce.org/neuron-descriptions), and we wanted to evaluate how effective our pipeline would be at finding and scoring explanations of neurons. Due to the fact that the exact pipeline used by Anthropic is not known, we decided that it made sense to reproduce their findings on neurons, using our open-source pipeline. 
We were also interested in \\u201csparsifying\\u201d neurons with the top-k function as a way to improve their interpretability, and so decided that it was important to include negative results on sparsified neurons.\\n\\n> Why comparing SAEs in adjacent layers? (Why is it interesing?)\\n\\nRecent work has shown that training a single SAE on all layers is effective (Lawson 2024). Our work adds to this discussion, suggesting that residual stream SAEs on adjacent layers are learning very similar latents, and are probably a waste of compute. SAEs trained on the MLP output of adjacent layers probably don\\u2019t have the same problem, and so should be prioritized.\\n\\n> How is comparing SAEs in adjacent layers related to your evaluation metric?\\n\\nAlthough a significant portion of our work relates to the results of the different scoring techniques, a great deal of effort was put into generating explanations for all latents of several SAEs. This gave us a legitimate opportunity to compare explanations of consecutive layers and estimate their overlap.\\n\\n> Are there any prior work that supports your method in evaluating the SAEs in adjacent layers?\\n\\nWe are not evaluating SAEs per se, but only measuring the alignment of their decoder matrices. Using the Hungarian algorithm to align and compare independently trained networks has been done before ([Ainsworth 2022](https://arxiv.org/abs/2209.04836)).\"}", "{\"summary\": \"The paper presents an open-source automated pipeline that uses large language models to generate natural language explanations for millions of features in Sparse Autoencoders (SAEs), addressing the challenge of manual interpretation. 
It introduces five efficient scoring techniques, including intervention scoring, and demonstrates that SAEs are more interpretable than neurons, offering insights into semantic similarity across model layers.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The scale is improved over the previous SOTA.\", \"This work lies in an interesting direction.\"], \"weaknesses\": [\"This work did not compare the new evaluation metric with the previous evaluation metric (https://openai.com/index/language-models-can-explain-neurons-in-language-models/) in a solid form. Reaching similar conclusions to the previous approach should not serve as solid evidence that the new metric is as good as the previous evaluation metric.\", \"Comparing SAEs in adjacent layers (1) lacks motivation and (2) is not well supported. See Questions for details.\", \"The presentation is poor: readers cannot capture the main contribution of this paper with a normal reading flow. The contribution of this work seems to be concentrated on a new evaluation metric, but Section 5 cuts in to discuss the behaviors of SAEs. I would strongly recommend reorganizing the paper around one single claim, with evidence from both sides supporting it.\"], \"questions\": \"1. How is the correlation of the proposed evaluation metric with the original metric?\n2. How is the efficiency of the proposed evaluation metric compared to the original metric?\n3. Previous work (https://transformer-circuits.pub/2023/monosemantic-features) has already shown SAEs have more interpretable features than neurons. What's the purpose of validating this result?\n4. Why compare SAEs in adjacent layers? (Why is it interesting?)\n5. How is comparing SAEs in adjacent layers related to your evaluation metric?\n6. 
Are there any prior work that supports your method in evaluating the SAEs in adjacent layers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Questions\", \"comment\": \"> Why is the context length of 256 chosen for activation collection? Is this value based on empirical selection?\\n\\nWhen we originally started the project, we selected 256 tokens as an intermediate value between the context length that the SAEs were trained with (1024 tokens) and our planned context length for generating explanations \\u2013 either 32 or 64. We wanted flexibility to explore larger context lengths if given the opportunity. Because we were already expecting latents whose explanations required larger contexts to have lower scores, we were not concerned about leaving them out of our pipeline.\\n\\n> Why is the activating example limited to only 32 tokens, given that short contexts may hinder the correct identification of latents with complex activation patterns?\\n\\nIn Section 3.2 we point to the fact that there may be more complex latents that our methods do not completely capture, and we expect those to have low scores. We don\\u2019t think it makes sense to use larger context for all latents since our results imply that a large fraction of latents is already explained with lengths 32 or 64. Using larger contexts would make the whole pipeline more expensive with low return. We are also able to sort latents that can be easily explained with the current context length, from those that have low scores and could potentially require larger context lengths to explain, and have included a discussion in the text addressing these points, lines 160-165.\\n\\n> What accounts for the differences in explanations between \\u201crandomly sampling\\u201d and \\u201cuniformly sampling\\u201d?\\n\\nThank you for this question, which we agree is not clear in the paper. 
What we meant by \\u201cuniform sampling\\u201d was stratified sampling, where we divide the data into ten bins and sample a fixed number of samples from each bin. We have corrected the labels in the figures and in the main text.\\n\\nWe used stratified sampling to guarantee that there are samples in each part of the activation distribution, due to the fact that we choose only 40 examples for each latent. We expect these two distributions to be very similar, and in fact that is what we empirically observed.\\n\\n> In Section 5, the statement that \\u201cif the explanation for latent \\u03b1 at layer j is very different from the explanation for the same latent at layer j + 1, this would suggest that our pipeline is inconsistent and noisy\\u201d raises a hypothesis. Can you elaborate on it?\\n\\nWe have now removed this section due to the multiple comments related to improving the readability of the article, but we will answer your question regardless. We were arguing that consecutive SAEs trained on the residual stream would find the same \\u201clatent\\u201d in both layer i and layer j.\\n\\nThis latent would obviously not have the same index, as discussed in the previous version of the manuscript, but after the proposed alignment technique we could find the latents a and a\\u2019 at layer i and layer j respectively that were the most aligned.\\n\\nIf the alignment were substantial, we would expect that their explanations would be similar, because they would have the same effect in the residual stream. This argument ignores the fact that we are aligning the decoder directions but not the encoder directions, so the activating contexts could be different.\\n\\nThe encoder and decoder directions are initialized to be identical, so we expect them to generally be similar, but it is possible for them to drift apart during training. 
If it were the case that many features had similar decoder directions but very different encoder directions and so had different explanations, it would point to a weakness in our Section 5 experiments.\"}", "{\"summary\": \"This paper introduced five automated scoring methods to score the explanations of SAE latents, and discussed the shortcomings of existing scoring techniques. The paper also finds that SAE trained on nearby layers are highly similar, and provided actionable insights for practitioners to train wider SAEs instead of narrower SAEs to be efficient when there\\u2019s a compute constraint.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Given the large sizes of SAEs nowadays and an increasing need for model explainability, automatically generating explanations of SAE latents efficiently is an important topic.\", \"The paper is well written, with ablations of design choices clearly described.\"], \"weaknesses\": [\"Format: There are no line numbers, and it's showing \\\"Under review as a conference paper at ICLR **2024**\\\" instead of **2025** at the top.\", \"The low correlation between different evaluation methods in Table 1 is concerning. Since the simulation method proposed in prior work is vetted and established, the new ones proposed in this work should at least have strong rank correlation (> 0.7) with it to prove that they work. Since this the scoring methods are the primary contribution in this paper, the authors should conduct more rigorous tests to ensure their validity. 
I would also encourage the authors to conduct experiments with ground truth explanations (e.g., SAE latents with known explanations found in prior work, or easily constructed an embedding model responding to a known concept/explanation), to make a stronger case in terms of the reliability of these new methods.\", \"To add on, instead of correlation of the raw scores, it might make more sense to look at the \\\"rank\\\" correlations of different methods.\", \"The authors claim that the new methods are more efficient than prior scoring methods without actually quantifying the efficiency gain to support the claim. If efficiency gain is one of the highlights of these scoring methods, the authors should consider comparing runtime of different methods to support the claim.\", \"The author mention \\u201cOur large-scale analysis confirms that SAE latents are indeed much more interpretable than neurons, even when neurons are sparsified using top-k postprocessing.\\u201d in the abstract as one of the main findings, but the details cannot be found in the main paper but in the appendix. The authors should consider moving it to the main paper if this is one of the main claims.\", \"Reproducibility: The author mentioned a plan to open-source the project, but it's hard to evaluate the quality of their code for reproducibility purpose either since it's not provided as one of the supplementary files.\"], \"questions\": [\"Missing a highly relevant work, \\\"Explaining black box text modules in natural language with language models\\\". How does their scoring method compare to the ones proposed in this paper?\", \"Can the authors explain the negative correlation between the fuzzing score and intervention score in figure 4? If they are both useful scoring methods, why would the correlation be negative?\", \"Unlike the claim in the paper, figure 5 is still showing statistical alignment across layers. Can the authors provide evidence for semantic similarity? 
(e.g., compute explanation similarity across layers instead of the matrix statistics)\", \"*Minor issues*\", \"The authors say \\u201cwe introduced *five* new techniques\\u201d in the abstract, and \\u201cWe addressed issues with the conventional simulation-based scoring and introduced *four* new scoring techniques\\u201d in the conclusions. The readers might get confused in terms of the number of methods actually introduced in the paper, if it\\u2019s five, please be consistent throughout the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Unlike the claim in the paper, figure 5 is still showing statistical alignment across layers. Can the authors provide evidence for semantic similarity? (e.g., compute explanation similarity across layers instead of the matrix statistics)\\n\\nWe appreciate this question by the reviewer. To streamline the article and in line with comments from other reviewers, we have taken this section out, as it clashed with the flow of the manuscript. To answer your question, in Figure 5 we are already showing the explanation similarity between layers, and that\\u2019s the statistical alignment the reviewer is referring to, instead, in Figure A5 of the appendix we show that the alignment between the decoder elements is non-existent.\\n\\n**Small changes:**\\nWe have also corrected the ICLR date, added page numbers and fixed the inconsistency between 4 and 5 different methods. 
Now, both abstract and conclusion mention 5 methods.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> The writing and clarity of the paper could be improved...\\n\\nWe are increasing the clarity of the paper by moving the section comparing SAEs at different layers to the Appendix to allow for more discussion space for readers who are not familiar with SAEs, as well as to address your questions and comments below.\\n\\n> Many observations are presented without in-depth analysis or further exploration...\\n\\nWe agree with the reviewer that more in-depth discussion should be provided in some sections, which was not originally done due to space limitations. We reply to the duplicate questions in the question sections, and to the others here.\\n\\nIn Section 3.2, lines 160-165, we include a discussion on whether practitioners should care more about \\u201ctop\\u201d explanations or \\u201cstratified\\u201d explanations. We argue that we should care about more than just the top activating examples. This means that there is not an optimal sampling process, because it depends on the type of explanation one is looking for. We believe that this responds to the comment: \\u201c while qualitative results for different sampling strategies are provided, it is not clear how to optimize the sampling process.\\u201d.\\n\\nWe agree that the claim in Section 4.1 is vague, and have significantly reworded it (lines 374-391). Different scoring methods \\u201ctarget\\u201d different qualities of explanations:\\n\\nFor example, consider a latent whose automatically generated interpretation is \\\"activates on legal documents\\\" without a reference to specific activating tokens. If the actual latent activates on various unrelated words in legal documents, this interpretation would score highly on detection, embedding, and surprisal. 
However, fuzzing and simulation would yield lower scores since they require a scorer model to predict specific tokens.\\n\\nConsider an alternative interpretation of that latent: \\\"activates on the token <_of>\\\". Detection, embedding, and surprisal would classify the token as activating on a broad array of documents, yielding many false positives and a low score. Fuzzing and simulation could easily pick a single token and score highly, mostly because current scoring techniques limit the number of shown examples.\\n\\nThat\\u2019s why we are proposing a \\u201cbattery\\u201d of tests that can be used to sort explanations and investigate whether the explanation can be improved or whether the latent is not easily interpretable. A more complete explanation, \\u201cactivates on the token <_of> in legal documents\\u201d, would score high in all scoring methods and could be more trusted. We have added an Appendix section, A.4.7, where we show examples of explanations whose scores disagree.\\n\\n> The generated explanations largely rely on the prompts and explainer models...\\n\\nWe agree that the generated explanations depend significantly on the prompts and explainer models. In the future we would like to continue optimizing the prompts, although we already did a considerable amount of prompt tuning in preliminary experiments. We already show in this paper how the quality of the explanations depends on the model. 
\\n\\nWe compare the scores of explanations generated with Llama 8b, 70b and Claude Sonnet 3.5, using detection, fuzzing, embedding and simulation scoring, and show that larger models improve the explanations generated, but that the improvement from 8b to 70b is larger than from Llama 70b to Sonnet 3.5.\\n\\nWe also tested the effect of chain-of-thought and adding different information to the prompt, as discussed in lines 483-511 and in Appendix A5, namely A.5.2, A.5.3, A.5.5 and A.5.7.\\n\\n> While several automatic evaluations are compared, it is unclear whether they accurately reflect the quality of explanations. It would be beneficial to conduct human evaluations, at least on a small set, and compare the correlation with automatic evaluations.\\n\\nWe agree with the reviewer that scores should accurately reflect the quality of explanations. We have done human scoring and found the correlations between these scores and all the proposed methods. For human scoring, we used a rubric score similar to that used by [Anthropic](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html), see details in Appendix A.4.6, finding that the detection score, fuzzing score, and simulation score are all correlated with human scores at around 0.60, but other methods are less correlated with human judgment (<0.40).\\n\\nHowever, the human scores may not be more valid than the automatic evaluations. For each latent explanation, the human saw 10 contexts on average, whereas the automatic methods saw at least 100 activating contexts, and fuzzing, detection, embedding and surprisal saw another 100 non-activating contexts. This difference in scale is one of the main reasons we believe automatically generated interpretations and evaluations truly shine.\", \"title\": \"Response to Weaknesses\"}", "{\"summary\": \"This paper investigates sparse autoencoders (SAEs), which project activation representations into a sparse high-dimensional latent space for interpretability. 
To automatically explain the large number of latent features, this paper proposes an LLM-based framework, explaining millions of latents across multiple models, layers, and SAE architectures. Four evaluation methods are proposed, including detection, fuzzing, surprisal, embedding, measuring the extent to which an explanation enables a scorer to discriminate between activating and non-activating contexts. Additionally, an intervention scoring is proposed to interpret a feature\\u2019s counterfactual impact on model output.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This paper focuses on an important research problem of SAEs producing a large number of latent features that require automatic explanations and evaluations.\", \"Several evaluation metrics have been proposed to assess the generated explanations, comparing them across different explainer models, SAEs, and layers.\", \"Some findings are interesting. For instance, sampling examples that are shown to the explainer model may increase the scores of features, highlighting a problem with current auto-interpretability evaluations. Experimental results suggest a priority for training wider SAEs on a smaller subset of residual stream layers. These may provide valuable insights for future research.\"], \"weaknesses\": [\"The writing and clarity of the paper could be improved. The current version is somewhat difficult to follow and understand. Please see my detailed questions below.\", \"Many observations are presented without in-depth analysis or further exploration. For instance, in Section 3.1, the impact of context length and latent space size on activation data is mentioned, but it is unclear how these results relate to the selection of 256 tokens. 
In Section 3.2, the claim that \\u201cshowing such short contexts to the explainer model hinders the correct identification of latents with complex activation patterns\\u201d seems to contradict the use of short activating examples with 32 tokens. In Section 3.3, while qualitative results for different sampling strategies are provided, it is not clear how to optimize the sampling process. Additionally, in Section 4.1, the statement \\u201cThe imperfect correlations hint at either shortcomings of the scoring metrics or the fact that these metrics can measure different qualities of explanations\\u201d lacks clarity. What specific qualities of explanations are measured by these automatic metrics?\", \"The generated explanations largely rely on the prompts and explainer models. It is unclear how generalizable the results are and to what extent specific prompts and explainer models may affect the quality of the generated explanations.\", \"While several automatic evaluations are compared, it is unclear whether they accurately reflect the quality of explanations. It would be beneficial to conduct human evaluations, at least on a small set, and compare the correlation with automatic evaluations.\"], \"questions\": [\"Why is the context length of 256 chosen for activation collection? Is this value based on empirical selection?\", \"Why is the activating example limited to only 32 tokens, given that short contexts may hinder the correct identification of latents with complex activation patterns?\", \"What accounts for the differences in explanations between \\u201crandomly sampling\\u201d and \\u201cuniformly sampling\\u201d?\", \"In Section 5, the statement that \\u201cif the explanation for latent \\u03b1 at layer j is very different from the explanation for the same latent at layer j + 1, this would suggest that our pipeline is inconsistent and noisy\\u201d raises a hypothesis. 
Can you elaborate on it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> The method is a bit hard to understand for readers who are not familiar with SAEs...\\n\\nWe thank the reviewer for this excellent suggestion. Figure 1 now illustrates the full pipeline, from collecting activations, to generating interpretations and scores.\\n\\n> ...However, the authors better consider to write a more accessible version for readers who are not familiar with SAEs, especially for section 3. It is a bit difficult to follow it.\\n\\nTaking into account the comments of several reviewers, we are increasing the accessibility of the article. We are moving the section comparing SAEs at different layers to the Appendix, to allow for more discussion space in the body for readers who are not familiar with SAEs, (see e.g. lines 93-97). We also made changes to Sections 3.1, 3.2 and 3.3, expanding the discussion on each scoring method.\"}" ] }
5kMwiMnUip
NEMESIS \\ Jailbreaking LLMs with Chain of Thoughts Approach
[ "Vedanta S P", "Ashiq Firoz", "Sriharsha Bodicherla", "Emmanuel George P", "Madhav Rao" ]
[ "Large Language Models (LLMs) are increasingly being deployed across various applications, making the need for robust security measures crucial. This paper explores multiple methods for jailbreaking these models, bypassing their security protocols. By examining five distinct approaches—Multishot Jailbreaking, the Mirror Dimension Approach, the Cipher Method, the “You are Answering the Wrong Question” Method, and the Textbook Jailbreaking Method—we highlight the vulnerabilities in current LLMs and emphasize the importance of fine-tuning and secure guardrails. Our study primarily employs chain-of-thought reasoning, which can be further enhanced through reinforcement learning techniques. Furthermore, we propose that our findings can serve as a benchmark against emerging security measures such as LlamaGuard, providing a comprehensive evaluation of LLM defenses. Our findings demonstrate the effectiveness of these methods and suggest directions for future work in enhancing LLM security. This research underscores the ongoing challenges in balancing LLM capabilities with robust safeguards against potential misuse or manipulation." ]
[ "LLM", "Jailbreaking", "Chain-of-thought reasoning", "Reinforcement learning", "LLM security protocols", "Adversarial attacks", "Defense mechanisms", "LlamaGuard", "Multishot Jailbreaking", "Fine Tuning" ]
Reject
https://openreview.net/pdf?id=5kMwiMnUip
https://openreview.net/forum?id=5kMwiMnUip
ICLR.cc/2025/Conference
2025
{ "note_id": [ "YvrmdyFfN1", "RSwunxHSkn", "IaEfcmfZp8", "HM6AfymEiu", "HD2QUwO55R", "DFlXiqAsKY", "7vMdhzCvxv" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1737523539741, 1730312716979, 1730693630305, 1730159604482, 1730663791000, 1734952160719, 1730666144460 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2904/Reviewer_2dQ5" ], [ "ICLR.cc/2025/Conference/Submission2904/Reviewer_u67g" ], [ "ICLR.cc/2025/Conference/Submission2904/Reviewer_X8zU" ], [ "ICLR.cc/2025/Conference/Submission2904/Reviewer_dfDC" ], [ "ICLR.cc/2025/Conference/Submission2904/Area_Chair_LFLP" ], [ "ICLR.cc/2025/Conference/Submission2904/Reviewer_k4Sx" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This manuscript describes a jailbreaking attack workflow that consists of five attack approaches. Each of these five jailbreaking approaches is believed to be empirically effective.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Each of the five proposed jailbreaking is empirically effective, within some setups that are not described.\", \"weaknesses\": \"1. The layout of the manuscript is not in a standard academic style. The setup and results sections are missing.\\n\\n2. related to (1), as there is no setup section, it is hard to interpret the results. For example, the metrics (e.g. definition of success in Fig 3) and benchmarks (e.g. tasks used for Fig 3) remain unknown. \\n\\n3. related to (1), in each subsubsection of section 3 (i.e., section 3.1.1, 3.2.1, 3.3.1, 3.4.1, and 3.5.1), the manuscript makes claims about the effectiveness or limitations of each jailbreaking approach respect to some setups, yet these claims are not grounded by empirical evidence. 
Similarly, claims made in Section 5 are not well grounded either. \\n\\n4. While there is an evaluation of each of the five methods in the framework, there is no evaluation of the whole framework. \\n\\n5. The contribution of this paper is not clearly defined. For example, is the reference approach a contribution to this paper?\\n\\n6. The contribution of the current draft is low. Many existing works have proposed multi-turn jailbreaking approaches and multi-approach security benchmarks. Additionally, the current selection of jailbreaking approaches in the framework is unjustified.\", \"questions\": \"1. This manuscript needs to be converted to a standard academic format. Please describe the setups of the experiments and all necessary empirical results (e.g., numbers) to support the claims. A justification for the choices of the setups is needed (e.g., why specific metrics and tasks are used in experiments).\\n\\n2. The manuscript needs to evaluate the proposed whole framework in addition to each approach.\\n\\n3. Fig 2 and Fig 3 suggest the reference approach works 100% of the time. Would that suggest that attackers only need to try the reference approach rather than spending more time running the whole framework?\\n\\n4. The manuscript needs to describe each contribution clearly. Are the five jailbreaking approaches contributions? If they are not, please cite existing works. If they are, please declare them. Additionally, even if these five jailbreaking approaches are contributions, the contribution seems incremental. There is no comparison between the proposed framework and existing approaches.\\n\\n5. In Fig 2, textbook jailbreaking and the reference methods are listed as different approaches (and are reported to have different performances). 
In Fig 1, textbook jailbreaking and reference are listed as different approaches (and in total, there are six approaches ), yet in the text (e.g., section 1 and section 3), there are only five, with the reference method not introduced. In Fig 3, there is only the reference method but not the textbook jailbreaking method. Is the reference method the same as the textbook jailbreaking? If they are not the same, an introduction to the reference method is needed. If they are the same, please explain why there are differences in performance. \\n\\n6. In Fig 2, the reported numbers are either 0 or 1, indicating that the jailbreaking approaches are either 0% effective or 100% effective. This is counterintuitive. Is there an explanation?\\n\\n7. The manuscript needs to justify the selection of jailbreak approaches that are currently selected in the framework.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"In Fig 2, the reported numbers are either 0 or 1, indicating that the jailbreaking approaches are either 0% effective or 100% effective. I found this counterintuitive and believe there is a risk of data fabrication. I am not suggesting that I am confident that such unethical behaviors definitely happened, yet I would suggest the ethics committee request more detailed experiment records (that were documented before the submission deadline).\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper only runs existing attacks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"No.\", \"weaknesses\": \"This paper only runs existing attacks. No novelty. 
No new findings.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores vulnerabilities in LLMs by examining five different jailbreaking techniques: Multishot Jailbreaking, Mirror Dimension, Cipher Method, \\\"You Are Answering the Wrong Question,\\\" and Textbook Jailbreaking. Each method effectively manipulates LLMs to bypass their safety mechanisms, revealing weaknesses in model architectures and guardrails. The findings emphasize the tension between maintaining conversational coherence and enforcing safety, suggesting the need for improved guardrails, more sophisticated input filtering, and better content regulation to enhance LLM security.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"I regret to say that I can't identify any strengths.\", \"weaknesses\": [\"The paper has several significant weaknesses, and I will highlight a few:\", \"The overall contribution of the paper is unclear. What specific question are the authors trying to answer? It reads more like a brief survey summarizing known attacks, but even this is done in a general and superficial way.\", \"The references are not in a standard format, and there is a \\\"?\\\" in the introduction failing to point to the correct paper.\", \"Several named papers or methods, such as PathSeeker, are not cited.\", \"The format of the background section is unusual, with important works missing, such as:\", \"Anil C, Durmus E, Sharma M, Benton J, Kundu S, Batson J, Rimsky N, Tong M, Mu J, Ford D, Mosconi F. Many-shot jailbreaking. 
Anthropic, April 2024.\", \"There is no results section, except for a very brief subsection in the methods section.\", \"The main figure is overly simplistic and poorly constructed, with unnecessary gridlines and little useful information.\", \"Claims made in the introduction, such as attributing jailbreak attacks to LLM architectural features, are never addressed or followed up on throughout the paper.\"], \"questions\": \"I do not have any questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores multiple methods for jailbreaking LLMs, including:\\n\\n1. Multishot Jailbreaking: gradually manipulates AI models through structured conversational prompts\\n\\n2. Mirror Dimension Approach: convinces the AI model it exists in a fictional reality without ethical constraints\\n\\n3. Cipher Method: encodes harmful content to evade safety detection\\n\\n4. \\\"You are Answering the Wrong Question\\\" Method: exploits correction mechanisms by claiming misunderstanding\\n\\n5. Textbook Jailbreaking Method: induces model to summarize sensitive external resources\\n\\nThe results demonstrate limited effectiveness, working primarily on GPT-4. The paper lacks comparative analysis with existing baselines and doesn't examine defense mechanisms.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper explores multiple methods for jailbreaking LLMs\", \"weaknesses\": \"1. The paper appears disorganized and reads like an unfinished draft. Basic issues in formatting persist, such as incorrect reference formatting with placeholders and question marks. Additionally, the structural flow does not align with typical research paper conventions. For instance, the paper should clearly outline its contributions at the end of the introduction. 
Furthermore, there should be a dedicated evaluation section to assess the proposed approach comprehensively. In Section 3, subsections are over-divided, with new subsections every 4-5 lines, which hinders readability. Figure 1 also needs improvement in both clarity and presentation quality.\\n\\n2. The proposed method, as it stands, is a straightforward combination of multiple manually devised strategies, many of which are already established in the literature [1][2][3][4].\\n\\n[1] Zeng, Yi, et al. \\\"How Johnny can persuade LLMs to jailbreak them: Rethinking persuasion to challenge AI safety by humanizing LLMs.\\\" arXiv preprint arXiv:2401.06373 (2024). \\n\\n[2] Jin, Xiaolong, Zhuo Zhang, and Xiangyu Zhang. \\\"MULTIVERSE: Exposing Large Language Model Alignment Problems in Diverse Worlds.\\\" arXiv preprint arXiv:2402.01706 (2024). \\n\\n[3] Deng, Gelei, et al. \\\"Pandora: Jailbreak GPTs by retrieval-augmented generation poisoning.\\\" arXiv preprint arXiv:2402.08416 (2024). \\n\\n[4] Yang, Xikang, et al. \\\"Chain of Attack: A Semantic-Driven Contextual Multi-Turn Attacker for LLM.\\\" arXiv preprint arXiv:2405.05610 (2024).\\n\\n3. Moreover, the paper lacks evaluation against existing defenses and does not compare its performance with relevant baselines, which limits the credibility and impact of the proposed attack strategy.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers all find the paper lacking technical novelties and contributions. It would be important for the authors to further improve the paper based on the reviews for the next version.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers agree with the final decision.\"}", "{\"summary\": \"This paper conducts an investigation into jailbreaking LLMs through five distinct methodologies. 
A notable aspect of the research is the use of chain-of-thought prompting to enhance the success rate of these attacks. The experimental results demonstrate that the proposed techniques significantly increase the rate of successful jailbreaking attempts. This finding underscores the urgent need for continued advancements in the safety and alignment mechanisms of LLMs to mitigate potential vulnerabilities.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses a highly important and timely topic, namely LLM red-teaming.\\n\\n2. The innovative approach of leveraging chain-of-thought prompting to further improve the success rate of LLM jailbreaking is interesting.\", \"weaknesses\": \"1. While the five methods discussed in the paper are well-documented in previous research, the novelty of this study could be more clearly highlighted.\\n\\n2. The evaluation section would benefit from baseline comparisons to help readers better understand the strengths of the proposed techniques.\\n\\n3. The presentation could be improved for greater clarity. For example, in Figures 2 and 3, what question set you are using for evaluation? What does \\\"effectiveness\\\" mean in Figure 2? Why is \\\"effectiveness\\\" a merely binary number? Which LLM is being evaluated in Figure 3, and is it the same model as in Figure 2? Why do the ASR values differ for each jailbreaking method before the attack?\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5k5Tco1z3G
PointACL: Point Cloud Understanding via Attention-Driven Contrastive Learning
[ "Yi Wang", "Jiaze Wang", "Ziyu Guo", "Renrui Zhang", "Donghao Zhou", "Guangyong Chen", "Anfeng Liu", "Pheng-Ann Heng" ]
[ "Recently, Transformer-based models have advanced point cloud understanding by leveraging self-attention mechanisms; however, these methods often overlook latent information in less prominent regions, leading to increased sensitivity to perturbations and limited global comprehension. To solve this issue, we introduce PointACL, an attention-driven contrastive learning framework designed to address these limitations. Our method employs an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions, enhancing the understanding of global structures within the point cloud. We then combine the original pre-training loss with a contrastive learning loss, improving feature discrimination and generalization. Extensive experiments validate the effectiveness of PointACL, as it achieves state-of-the-art performance across a variety of 3D understanding tasks, including object classification, part segmentation, and few-shot learning. Specifically, when integrated with different Transformer backbones like Point-MAE and PointGPT, PointACL demonstrates improved performance on datasets such as ScanObjectNN, ModelNet40, and ShapeNetPart. This highlights its superior capability in capturing both global and local features, as well as its enhanced robustness against perturbations and incomplete data." ]
[ "Point Cloud Understanding", "Attention-Driven Contrastive Learning" ]
https://openreview.net/pdf?id=5k5Tco1z3G
https://openreview.net/forum?id=5k5Tco1z3G
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kyDWGoF00b", "dKHkc71iNU", "aEe8FMG3xD", "GyhUmYyb4e", "C2PJehC6NY" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730624455269, 1731460505567, 1730720881570, 1730564285012, 1730630237837 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4281/Reviewer_UF4r" ], [ "ICLR.cc/2025/Conference/Submission4281/Authors" ], [ "ICLR.cc/2025/Conference/Submission4281/Reviewer_11qV" ], [ "ICLR.cc/2025/Conference/Submission4281/Reviewer_F7F3" ], [ "ICLR.cc/2025/Conference/Submission4281/Reviewer_YSYw" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a novel framework designed to improve the understanding of 3D point clouds through a combination of attention mechanisms and contrastive learning that can easily be plugged into other transformer-based architectures for point cloud processing.\\nAccording to the authors, transformer-based models for point cloud understanding are sensitive to perturbations due to a limited global comprehension as they mainly focus on more prominent regions and overlook under-attended (yet still important) regions. For this reason, they propose an Attention-Driven Dynamic Masking that directs the model to focus on under-attended regions by dynamically masking high-attention patches, and steering the network to have a more comprehensive understanding of global point cloud structures.\\nFurthermore, they propose to combine a Contrastive Learning Loss at pre-training time with the original loss of the main backbone to improve feature discrimination and generalization, as pre-training losses are mainly based on generative/reconstruction tasks.\\nExperiments show that PointACL\\u2019s masking strategy and loss configurations significantly enhance model performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. In my opinion, the paper is well written and clear. 
The problem statement and the proposed solutions are well explained and presented. I particularly appreciated the plot showing the attended regions, showing that the proposed framework indeed learns a more global understanding of the input point cloud.\\n2. Results are good, and I think the main takeaway from this paper is that giving more importance to less prominent patches yields very robust results to perturbations.\", \"weaknesses\": \"At the moment, I do not see any critical weaknesses in the paper.\", \"questions\": \"1. Typo at line 245: regzions->regions.\\n2. I would like to ask the authors how the second part of Eq. 5 works. I understand that its goal is to give a perturbation to the probability distribution of the first part, but still, it is not clear to me how the second part affects the first part. Maybe the authors can clarify this aspect.\\n3. Are the hyper-parameters fixed across datasets/experiments or each experiment has its own hyper-parameters? For example, the authors in the ablation studies state that R=0.6 is the optimal mask ratio. Is this true for all datasets or the value has been changed for better performance? What about the Probability temperature and the Contrastive Loss weight?\\n4. Do the authors think that the proposed claims are valid only for transformer-based architecture? Would it be possible to apply similar strategies to simpler architectures such as a PointNet?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces PointACL, a novel framework that addresses a critical limitation in Transformer-based point cloud understanding models - their tendency to overlook information in less prominent regions. 
The framework consists of two key components: (1) an attention-driven dynamic masking strategy that selectively masks high-attention regions, forcing the model to learn from under-attended areas, and (2) a contrastive learning approach that combines with original pre-training objectives to enhance feature discrimination and generalization. The method can be seamlessly integrated with existing architectures like Point-MAE and PointGPT.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1\\u3001The paper presents a novel and well-motivated solution by introducing attention-driven dynamic masking. Unlike previous works using fixed or random masking, PointACL adaptively masks high-attention regions to force learning from under-attended areas.\\n2\\u3001The paper provides extensive experimental evidence to support its claims, including thorough robustness analysis under various perturbations (Gaussian noise, rotation, scaling, point dropout).\", \"weaknesses\": \"\\uff081\\uff09The paper lacks theoretical analysis or justification for why the attention-driven dynamic masking strategy works. 
There's no mathematical foundation explaining:\\n1\\u3001Why masking high-attention regions leads to better global understanding\\uff1f\\n2\\u3001How the dynamic probability distribution affects learning\\uff1f\\n3\\u3001What properties of the contrastive loss ensure better feature discrimination.\\nThis theoretical gap makes it difficult to understand the method's fundamental principles and potentially limits its generalizability.\\n\\uff082\\uff09 The paper primarily compares with Point-MAE and PointGPT-S as backbones, but misses comparisons with other masking strategies from recent works.\\n\\uff083\\uff09The performance improvement of this paper on few shot learning and part segmentation tasks is not significant.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper improves previous contrastive learning for point cloud model pretraining. It analyzes the issue in previous works: overlook latent information in less prominent regions. Thus, it proposes an attention-driven contrastive learning framework that uses an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions. The experiments show the proposed method improves previous methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to follow.\", \"The motivation is interesting and the visualization is helpful.\"], \"weaknesses\": [\"The improvement is slight, especially for part segmentation (only 0.1 for Ins. mIoU)\", \"As shown in Table 7, only when Mask Ratio is 0.6, the dynamic high-attention mask strategy achieves 92.3% accuracy, and other settings get lower score and all of them get very close scores.\", \"As shown in Table 4, the low attention mask strategy achieve higher scores than Random Mask w.r.t. OBJ-ONLY. 
Please give a reasonable explanation.\"], \"questions\": \"Refer to Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper considers the sparse characteristics of point clouds that differ from other modalities such as text and images. Transformer-based models leverage the attention mechanism that focuses on significant areas while potentially disregarding less noticeable point patches, which may lead to missing important underlying information. To address this, it introduces an innovative framework that integrates an attention-based dynamic masking strategy with contrastive learning. This approach aims to effectively capture both global and local features, while also improving robustness against perturbations and incomplete data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper is well-written and easy to read.\\n\\n2.\\tThe motivation of employing an attention-driven dynamic masking strategy to enhance the understanding of global structures within the point cloud is reasonable.\", \"weaknesses\": \"1.\\tInnovations are limited. The method simply generates an attention-guided masked point cloud, and adds a contrastive loss between the original point patches and masked point patches to the original loss function.\\n\\n2.\\tExperimental results are not comprehensive. \\n\\n\\u2022 Improvements over baseline methods are quite marginal and perhaps not statistically significant. 
While this method can be seamlessly integrated into mainstream transformer architectures, it remains unclear whether the performance gains are due to the additional 300 epochs of pre-training, an increase in parameters, or the dynamic masking strategy itself.\\n\\n\\u2022 It has not achieved SOTA performance and is not compared with the latest methods\\uff0csuch as PointMamba [1], Point-FEMAE [2], Point-GPT-L.\\n\\n\\u2022 The author did not demonstrate the generality of the proposed method. Assuming that this dynamic masking strategy is effective, it should be applicable to all transformer-based point cloud models, such as Point-BERT, Point-FEMAE, Point-GPT-L, etc. The author lacks further analysis of the generality of the proposed ideas.\\n\\n\\u2022 Is there any comparison of computational efficiency and cost with the baseline?\\n\\n\\n[1] D. Liang et al., \\u201cPointMamba: A simple state space model for point cloud analysis,\\u201d in Advances in Neural Information Processing Systems, 2024.\\n[2] Y. Zha et al., \\u201cTowards compact 3D representations via point feature enhancement masked autoencoders,\\u201d in Proceedings of the AAAI Conference on Artificial Intelligence, 2024, 38(7): 6962-6970.\", \"questions\": \"The main questions are listed in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5iWim8KqBR
Memory-Efficient Algorithm Distillation for In-context Reinforcement Learning
[ "Diyuan Shi", "Zifeng Zhuang", "Donglin Wang" ]
It's recently reported that by employing the superior In-context Learning (ICL) ability of autoregressive Transformer, a method named $\textit{Algorithm Distillation}$ (AD) could distill the whole Reinforcement Learning process into neural network then generalize to $\textit{unseen}$ scenarios with performance comparable to the distilled algorithm. However, to enable ICL, it's vital for self-attention module to have a context that spans cross-episodes histories and contains thousands of tokens. Such a long-range context and the quadratic memory complexity of self-attention pose difficulty on applying AD into many common RL tasks. On the other hand, designing memory efficient Transformers for $\textit{long-range document modeling}$ is itself a fast-developing and fruitful field, which leads to a natural question: $\textit{Could Efficient Transformers exhibit similar in-context learning ability and be used for Memory-Efficient Algorithm Distillation?}$ In this paper, we firstly build a benchmark suite that is thorough, efficient and flexible. Thanks to it, we perform extensive experiments and verify an existing method named $\textit{ERNIE-Docs}$ (ED) could offer competitive performance with significantly reduced memory footprint. With systematic ablation studies, we further investigate various facets influencing the ICL ability of ED and provide our own insights into its hyperparameter tuning.
[ "algorithm distillation" ]
Reject
https://openreview.net/pdf?id=5iWim8KqBR
https://openreview.net/forum?id=5iWim8KqBR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xkxX0d14AP", "m5O0ZVTysT", "gwfAIFWWLG", "g7AMZautWB", "BIgQWQrpqe", "9kSWc4GjvS", "51oQmT5PHl" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "official_comment", "official_review", "decision" ], "note_created": [ 1734565096496, 1731116525483, 1730704527054, 1730813729103, 1733188899694, 1730760727193, 1737524023307 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10063/Area_Chair_ahti" ], [ "ICLR.cc/2025/Conference/Submission10063/Reviewer_bBam" ], [ "ICLR.cc/2025/Conference/Submission10063/Reviewer_6qRy" ], [ "ICLR.cc/2025/Conference/Submission10063/Reviewer_X3fG" ], [ "ICLR.cc/2025/Conference/Submission10063/Reviewer_ZN9X" ], [ "ICLR.cc/2025/Conference/Submission10063/Reviewer_ZN9X" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"metareview\": \"The authors investigate whether auto-regressive models achieve a comparable in-context learning ability and can be used for memory-efficient algorithm distillation. In addition, the authors propose a benchmark suite and experiment with ED (ERNIE-Docs) and its hyperparameter tuning.\\n\\nThe reviewers had a mixed assessment of the strengths and weaknesses of the paper. While the paper is well written and the benchmark dataset is described in great detail, the novelty of the work is limited. Furthermore, the experimental setup of the paper is focused on GridWorld-based environments. \\n\\nConsidering the limitation in novelty, the AC recommends a more thorough empirical investigation of the merits of the ED method, along the directions recommended by the reviewers.\\n\\nI assess that the paper is not yet ready for acceptance and I recommend the authors improve the quality of the experimental setup.\", \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers engaged in productive discussions during the rebuttal. 
However, the authors had divergent opinions on the adequacy of the experimental setup, in particular concerning the choice of RL environments.\"}", "{\"summary\": \"This paper presents an analysis of memory-efficient Algorithm Distillation (AD) in Reinforcement Learning (RL). The authors evaluate the effectiveness of Recurrence Transformers for memory-efficient AD in modified Gridworld environments (DarkRoom and DarkKeyToDoor) with varying grid sizes and reward functions. They further examine how positional encoding and model capacity impact performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The study of memory-efficient AD contributes to advancing the practical applicability of framing RL as an autoregressive problem.\"], \"weaknesses\": [\"The behavior of the proposed algorithm is not clearly discussed. In Figure 5, it appears that ED and XL perform worse than other methods on the drld task, despite the claim that dense rewards benefit AD. Additional insights into this result would be beneficial.\", \"Including plots of performance over time steps could help clarify the behavior of the algorithms.\", \"In the ablations (Section 5.2), consistency in focusing on the proposed method (ED) would improve clarity. Ablating positional encoding on ED rather than ADF would align better with the primary focus on ED.\"], \"questions\": \"1. What is the intuition behind XL\\u2019s difficulty in learning in-context, while MEM\\u2014a method that learns global memories\\u2014outperforms it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors investigate whether efficient Transformer models, designed for long-range document modeling, can be used for Algorithm Distillation (AD) in reinforcement learning (RL). 
AD generally requires large memory capacity due to the long context of cross-episode state-action-reward histories. Here the authors propose a new benchmark suite to evaluate the in-context learning ability of efficient Transformers in the AD setup. The authors demonstrate that ERNIE-Docs, a variant of Transformer-XL, achieves comparable performance to standard Transformers with significantly reduced memory requirements. They also conduct additional ablation studies analyzing ERNIE-Docs performance for different hyperparameter values.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The publication is sufficiently clearly written. It clearly states the main motivation, describes the proposed benchmark suite and provides sufficient details about all of the evaluated models.\\n2. The proposed benchmark could potentially be useful to a wider community once published. And the inclusion of three types of settings (normal, dense and quick) is a valuable component of this benchmark.\\n3. Overall, the empirical evaluation appears to be well-designed and some of the conclusions (including those related to strong ERNIE-Docs performance and hyperparameter explorations) seem valuable.\", \"weaknesses\": \"1. While the benchmark itself may hold value, the evaluation of several published and fairly standard algorithms offers limited novelty. The analysis seems to confirm expected performance patterns (including the importance of global positional encodings) and appears to lack deeper insights.\\n2. The work's current impact is limited by the lack of publicly available code of the proposed benchmark (making it difficult to judge its potential impact on a broader scientific community).\\n3. Also, while the GridWorld-based environments may provide a good initial testing ground, further validation across a wider range of environments is very important. 
Including diverse environment types, as demonstrated in prior work on Algorithm Distillation, would strengthen the generalizability of the findings.\", \"questions\": \"1. Would it be possible or insightful to interpolate between the three proposed settings (normal, dense and quick) to provide a more controllable and continuous way of setting the environment complexity?\\n2. One of the proposed advantages of the AD method is its data efficiency and the possibility to subsample the source training history. Can anything be said about the behavior of efficient Transformer architectures on sub-sampled episode histories?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to use existing recurrent transformers to do Offline RL. Additionally they propose several simple benchmark tasks to measure the quality of their models.\\n\\nThe contribution is a set of simple benchmarks, operating on a small grid. The tasks include finding a goal point on that grid, and a more complex DarkKeyToDoor - that requires finding a key first followed by the door. The other part of the benchmark is the reward structure, which can vary from the standard sparse reward (e.g. getting to the goal), to intermediate sparse (both key and door), and a dense reward which incorporates the distance to the current target.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well written and for the most part easy to follow. The benchmark dataset is described in great detail.\", \"weaknesses\": [\"The novelty is pretty limited. 
The benchmark set is pretty straightforward, and it is not entirely obvious how it is different from other similar grid-environments such as mini-grid.\", \"The best results seem to be attained in the \\\"Dense reward\\\" setting, which is the least challenging, and is generally not very scalable.\", \"If I understand the study correctly, the length of the chunk is comparable to the length of the episode (or 1/2 of the episode, so it is barely even a recurrent network). Since the paper claims memory efficiency, it would benefit greatly if the authors can show that their method can scale to a larger ratio between episode length and chunk size.\", \"Comparison to some standard Grid-World or MiniGrid environments would be helpful (e.g. environments with obstacles)\"], \"questions\": \"1. It is not very clear if you incorporate a single or multiple episodes into a single context? I couldn't find any reference in the paper about multiple episodes, but this implies the context length is limited to 20-70, whereas in Figure 11 your single chunk length varies between 20 and 70. Does it mean that at the upper limit you don't use recurrence at all, and at the lower limit the recurrence transition is only used at most once?\\n\\n2. What is \\\"Recurrence capacity\\\" -- is it the number of tokens passed between chunks? Why is it so large? It is almost like we don't lose any information when going between chunks - especially considering that your episode length is so small.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Author reviews\", \"comment\": \"I appreciate the detailed responses. My concerns have been addressed. 
I am increasing my rating from 5 to 6.\"}", "{\"summary\": \"The paper investigates the application of Memory-Efficient transformers under the Algorithm Distillation framework, which leverages the in-context learning abilities of autoregressive transformers for better generalization. Specifically, the authors propose a benchmark suite that efficiently assesses ICL capabilities within AD. The benchmark is demonstrated by testing on various memory-efficient transformers. ERNIE-Docs (ED), a variant of the Transformer-XL structure, is highlighted for its competitive performance with reduced memory requirements.\\n\\nThe benchmark suite covers diverse reinforcement learning tasks, including scenarios with sparse and dense rewards, requiring credit assignment and exploration-exploitation balancing. The suite utilizes JAX for efficient parallel execution and offers compatibility with PyTorch. Experimental results reveal that ED performs comparably to standard AD but with a lower memory footprint, showing promise for resource-constrained environments. Notably, the study identifies key factors like positional encoding, model size, attention settings, and recurrence memory capacity as influential to the ICL performance of ED.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Novel Application of Efficient Transformers in In-Context Reinforcement Learning: The paper introduces a unique connection by leveraging Efficient Transformer (ET) architectures, specifically ERNIE-Docs, within Algorithm Distillation (AD) for in-context reinforcement learning (ICL).\", \"Comprehensive Benchmarking Suite for Memory-Efficient Algorithm Distillation: The paper\\u2019s proposed benchmark suite is tailored to evaluate ICL in AD using memory-efficient transformers. 
The benchmark suite facilitates thorough testing of memory-efficient AD methods across various reinforcement learning challenges, fulfilling the proposed property for meta RL benchmarking.\", \"Systematic Ablation Studies and Insights into Hyperparameters: The paper provides detailed ablation studies, shedding light on how positional encoding, model size, attention settings, and recurrence memory capacity impact ICL performance.\"], \"weaknesses\": [\"Lack of Benchmarking Comparisons: Although the paper\\u2019s main contribution is a new benchmarking suite, it does not include comparisons with existing benchmarks. This omission makes it challenging to assess whether the proposed benchmark genuinely outperforms or offers advantages over established alternatives in terms of efficiency, coverage, or flexibility.\", \"Limited Range of Tested Methods: As a benchmarking study, the paper evaluates a relatively narrow set of methods, primarily focusing on ERNIE-Docs and Transformer-XL. This limited scope overlooks a wide array of memory-efficient transformers in the literature, reducing the generalizability of the benchmark results and their applicability across diverse models.\", \"Engineering Efforts Framed as Research Contributions: While the paper uses JAX for parallelization to achieve computational efficiency, this is more of an engineering choice than a novel research contribution. Clearly distinguishing implementation decisions from research contributions would help emphasize the true innovations in the benchmark, such as findings from ablation studies or unique design features of the benchmarking suite.\"], \"questions\": [\"Could the authors provide comparisons of proposed benchmark with existing benchmarks like Meta-world and DeepMind Control Suite? Discuss the benchmark\\u2019s strengths in applicability, memory, runtime, task diversity, and flexibility.\", \"Could the authors consider testing a broader range of methods? 
Including additional memory-efficient transformers, such as Longformer, Linformer, and Performer, could help demonstrate the benchmark's robustness across diverse architectures. Alternatively, the authors should explain in detail why such methods are not comparable within the paper's scope.\", \"Could the authors better clarify the distinction between engineering choices and research contributions? For example, framing the use of JAX as an implementation decision for efficiency might help focus attention on unique research insights, such as ablation study findings or design elements specific to in-context learning in memory-constrained environments.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
5iUUorHeM3
CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs
[ "Lejla Skelic", "Yan Xu", "Matthew Cox", "Wenjie Lu", "Tao Yu", "Ruonan Han" ]
The role of Large Language Models (LLMs) has not been extensively explored in analog circuit design, which could benefit from a reasoning-based approach that transcends traditional optimization techniques. In particular, despite their growing relevance, there are no benchmarks to assess LLMs’ reasoning capability about circuits. Therefore, we created the CIRCUIT dataset consisting of 510 question-answer pairs spanning various levels of analog-circuit-related subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04\% accuracy when evaluated on the final numerical answer. To evaluate the robustness of LLMs on our dataset, we introduced a unique feature that enables unit-test-like evaluation by grouping questions into unit tests. In this case, GPT-4o can only pass 27.45\% of the unit tests, highlighting that the most advanced LLMs still struggle with understanding circuits, which requires multi-level reasoning, particularly when involving circuit topologies. This circuit-specific benchmark highlights LLMs' limitations, offering valuable insights for advancing their application in analog integrated circuit design.
[ "Large Language Models (LLMs)", "benchmarking", "analog circuits", "dataset creation", "evaluation metrics" ]
Reject
https://openreview.net/pdf?id=5iUUorHeM3
https://openreview.net/forum?id=5iUUorHeM3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vqghqMJvEf", "s1qcQxhwUa", "ksHeNcnMa7", "jNnNL1tkDZ", "egyYY38TgC", "dxoVKuqqGb", "VjP46R4WlD", "LyI79uTlB1", "LS7GfJFUa5", "K23OgS6Okm", "J0BjcMfpTw", "IILhVgbeWR", "CNfbjeUWgX", "9AAwaRKtl9", "8JfccB2Dae", "6og9GP9B8b", "68OQfbOHI8", "313DOWiXAe", "0IYzYfx614" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732377134724, 1732845734301, 1732336062232, 1732126017496, 1732687750196, 1737524217964, 1730659237600, 1730509805949, 1732125072979, 1730509360013, 1732902983720, 1732124482211, 1732630825504, 1732854520024, 1732123885599, 1734855028650, 1730547254247, 1732902540258, 1732855243076 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_Ahcw" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_Ahcw" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_JapT" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_buQo" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_Ahcw" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_JapT" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Area_Chair_JyzE" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_5hrA" ], [ 
"ICLR.cc/2025/Conference/Submission12829/Authors" ], [ "ICLR.cc/2025/Conference/Submission12829/Reviewer_5hrA" ] ], "structured_content_str": [ "{\"title\": \"Clarifications on Contributions and Novelty of Our Work\", \"comment\": \"Thank you for your follow-up and for recognizing our efforts in characterizing the reasoning ability of LLMs in circuit design.\", \"we_would_like_to_address_your_concerns_in_detail\": \"**The findings:**\\nWhile it may seem intuitive that LLMs, primarily trained on text, would struggle with tasks involving circuit topologies\\u2014a domain that requires multi-modal reasoning\\u2014our work provides the first empirical evidence quantifying the extent and nature of these challenges. This was not a foregone conclusion, as state-of-the-art LLMs have demonstrated unexpected capabilities in other reasoning tasks. By systematically identifying specific shortcomings, our study moves beyond intuition to offer actionable insights for targeted improvements.\\n\\n**Insights into improving LLM reasoning:**\\nWe recognize the importance of improving reasoning capabilities, and while our focus was on benchmarking, we did attempt strategies for enhancement. Specifically, we incorporated netlists and explored prompt engineering, which led to modest but measurable improvements. These methods were deliberately chosen for their practicality and applicability, ensuring they are easy for circuit designers to implement without requiring labor-intensive annotations or descriptions. These results indicate that progress is possible, but they also reveal underlying limitations in LLM reasoning that require deeper changes\\u2014an important direction for future work.\\n\\n**Key contributions of our work:**\\nBeyond characterizing LLM limitations in reasoning about analog circuit topologies, our work introduces a scalable, unit-test-inspired framework for automatically evaluating reasoning tasks. 
This structure ensures reliability, minimizes/removes human intervention, and provides a foundation for building more complex datasets and benchmarks. Our evaluation framework is the product of significant effort to abstract complex reasoning challenges into a scalable and reliable evaluation structure, laying the groundwork for advancing reasoning evaluation in various domains, including circuit design. We envision this framework inspiring further research into evaluation techniques for reasoning tasks.\\n\\nWe hope this response clarifies the novelty and significance of our contributions. We appreciate your feedback and welcome further discussion.\"}", "{\"title\": \"Thanks.\", \"comment\": \"Thanks for authors' hard work in addressing my concerns. My key point is that these circuit examples from text books do not represent practical analog IC design. Thus, the impact and use of the circuit benchmark is limited. Yet, I raise my rating to encourage authors to improve the work further.\"}", "{\"title\": \"Thanks for your response.\", \"comment\": \"Thanks for authors in addressing my questions. Although I think authors did a good job in characterizing the reasoning ability of LLMs in circuit design, isn't this obvious? LLMs mainly focus on text learning, while circuit topologies can be described by many modalities. More importantly, the authors do not show insight on how to improve the reasoning ability of LLMs in circuit topology understanding.\"}", "{\"title\": \"Clarifications on Dataset Scope and Design, Technical Contributions, and Model Performance Analysis\", \"comment\": \"Thank you for your thoughtful comments and questions. We address your points as follows:\\n\\n**Questions:**\\n1. **Different stages of analog circuit design:**\\nWhile our dataset does not address higher-level design tasks, it establishes a foundational framework for evaluating topology understanding. 
Our benchmark reveals that LLMs struggle significantly with understanding basic topologies, which we consider a precursor to performing any complex design task. Without this foundational understanding, LLMs are unlikely to perform reliably on higher-level tasks. In future work, we aim to first address these fundamental challenges and then expand the dataset with more advanced analog design tasks. The structure of our dataset enables straightforward expansions for scalable and reliable automatic testing of LLMs\\u2019 capabilities in more advanced design stages. We view our work as a necessary first step, focusing on understanding topologies, which underpins all subsequent design processes. We will work on emphasizing these points in the revision.\\n2. **Per-level accuracies:**\\nWe acknowledge the importance of analyzing performance across levels of complexity and provide the following breakdown of GPT-4o's per-level accuracies. These can be included in the revision:\\n\\n| Dataset Subset | Prompt | | GPT 4o Accuracy (%) per level | |\\n|------------------------|-------------------|:-------------:|:-------------:|:-------------:|\\n| | | 1 | 3 | 5 |\\n| Questions With Netlists| 0-s w/ netlists | **49.4** | 31.2 | 18.5 |\\n| | 1-s w/ netlists | 45.0 | 28.2 | 9.2 |\\n| Questions W/O Netlists | 1-s | 85.0 | 60.0 | 48.0 |\\n\\nFurther insights can be drawn from Appendix E which contains an analogous table of per-category accuracies. We will also include a discussion of these results in the revision.\\n\\n**Weaknesses:**\\n1. **Technical contributions:**\\nWhile prompt engineering was an important aspect of our work, a key contribution lies in the dataset design itself, which enables an automated, unit-test-like evaluation of reasoning tasks. By abstracting circuit reasoning into structured templates, we ensured a reliable and scalable evaluation process. 
Additionally, our pass@k/n metric provides nuanced insights into model capabilities, demonstrating how reasoning tasks can be reliably evaluated without human or LLM involvement. These aspects set a foundation for future work on scalable, reliable, and comprehensive automatic evaluation of reasoning tasks. We will make sure to emphasize these points in the revision.\\n2. **Dataset scope:**\\nWe agree that analog circuit design involves more complex stages than what is addressed in our benchmark. However, our dataset was intentionally designed as a starting point, focusing on topology understanding. Initial results reveal that LLMs struggle significantly with understanding basic topologies, which we consider a precursor to performing any complex design task. In future work, we aim to first address these fundamental challenges and then expand the dataset with more advanced analog design tasks. Our revision will include a clarification of our design choices and how the dataset can be extended to more complex tasks.\\n3. **Reported accuracies:**\\nWe recognize the value of detailed performance analysis but find that objectively defining complexity is challenging. Currently, we rely on levels and categories, as shown earlier. Qualitative analysis reveals that models perform well only on simpler topologies. Topology complexity can be described using graph-theoretic approaches. However, question complexity may not always align with topology complexity, as the reasoning required to solve a problem can depend on factors beyond the circuit's structural intricacy. Addressing this discrepancy and developing a robust definition of question complexity remains an open challenge for future work.\\n4. **Limited insights:**\\nThe benchmark results highlight specific limitations in LLM reasoning, which must be addressed before these models can contribute meaningfully to design automation. Our methods for improving LLM capabilities yielded only modest gains. 
Nevertheless, our analysis revealed more fundamental challenges LLMs face in reasoning tasks. Addressing these foundational issues, for general and circuit-related applications, is crucial before tackling advanced design tasks and shows a promising direction for future work.\\n\\nWe hope this clarifies your questions and addresses your concerns. If you have any further questions or feedback, we would be happy to address them.\"}", "{\"title\": \"Revisions in Response to Reviewer Feedback\", \"comment\": \"We took the reviewer feedback and implemented several changes to our work. The main changes are highlighted in red. These include:\\n\\n1. **Clarified the benchmark scope** and how it helps advance LLMs in analog circuit design in the Introduction and Related Work sections.\\n2. **Moved per-category accuracies** from the Appendix to the main paper body, **added a table of per-level accuracies**, and briefly discussed these results in the Human Evaluation section.\\n3. **Reorganized the Human Evaluation section** slightly and cut content for clarity.\\n4. **Emphasized the technical contributions** of our work in the Discussion section and reflected in more detail on future work. Also, slightly reorganized and cut content for coherence.\\n5. **Revised the Limitations** of our work based on reviewer feedback.\\n6. **Refined the Conclusion** for improved coherence.\\n\\nWe appreciate the reviewers' feedback, which helped us improve and clarify our contributions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces a CIRCUIT dataset. 
The dataset contains 510 questions and is used to evaluate three LLM-based models.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Evaluation results look extensive, with results analysed in detail to have a comprehensive understanding of their performance on the dataset introduced.\"], \"weaknesses\": [\"The dataset size may not be sufficient to fully evaluate the capabilities of LLMs in analogue circuit design.\", \"While the evaluation results do show there are improvements in terms of accuracy on analogue circuit related questions, it is unclear how well this translates into designing analogue circuits.\", \"The paper only tests solutions using LLMs alone; it is not clear how circuit design using a combination of different models could utilize this as a benchmark. This seems more like how to train an LLM using questions on how analogue circuits operate.\"], \"questions\": \"please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the application of large language models (LLMs) in analog circuit design. Given that analog circuit design highly depends on human expertise and faces a labor shortage problem, the authors aim to evaluate the capabilities of LLMs in this field by creating the CIRCUIT dataset. The paper details the construction of the dataset, evaluation metrics and methods, and conducts experiments to evaluate multiple LLMs. Finally, it discusses the significance, limitations, and future directions of the research results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The article constructs a dataset for analog circuits to evaluate the capabilities of LLMs in analog circuits, promoting the exploration and research of LLMs in analog circuits.\\n2. 
The construction of the dataset is relatively comprehensive, covering different difficulties, and considering diagrams and netlists.\\n3. The experiment is relatively detailed, testing and validating the current mainstream models and conducting analysis and summary.\", \"weaknesses\": \"1. The dataset focuses more on evaluating the LLM's answers to analog circuit questions, similar to math problems. However, it cannot verify the LLM's ability to assist in analog circuit design. For example, use an LLM to evaluate the quality of circuit design in terms of power, performance, and area (PPA).\\n2. The dataset lacks diversity in design and does not cover the entire analog circuit design process. For example, circuit structure design, device selection, test verification and physical design, etc.\\n3. The benchmark lacks an evaluation of the overall performance of the circuit.\", \"questions\": \"1. How do the benchmark results help to improve LLMs for EDA-aided design?\\n2. The current dataset focuses more on formula derivation and mathematical calculations, but may lack a means of evaluating the LLM's understanding of the quality of analog circuit design. For example, evaluate the ability of large language models in predicting circuit PPA, hyper-parameter tuning of circuits, and test verification.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifications on Benchmark Scope, Dataset Diversity, and Future Directions\", \"comment\": \"Thank you for your thoughtful comments and questions. We address your points as follows:\\n\\n**Questions:**\\n1. **How the benchmark results help LLMs for EDA-aided design:**\\nOur results demonstrate that LLMs currently have a very limited understanding of simple circuit topologies. This fundamental limitation must be addressed as a prerequisite to integrating LLMs into the EDA-aided design process. 
The poor performance observed highlights that LLMs\\u2019 usefulness in such workflows would currently be limited and unreliable. Addressing these basic shortcomings in topology understanding is crucial before advancing to more complex design tasks. We shall work on emphasizing these points in our paper in the revision.\\n2. **Dataset scope:**\\nWhile our dataset does not address higher-level design tasks, it establishes a foundational framework for evaluating topology understanding. This structure can be extended to construct datasets for scalable and reliable automatic testing of LLMs\\u2019 capabilities in more advanced design stages. We view our work as a necessary first step, focusing on topology comprehension, which underpins all subsequent design processes. Addressing LLMs\\u2019 topology understanding limitations and expanding our dataset to more complex design tasks are exciting directions for future work.\\n\\n\\n**Weaknesses:**\\n1. **Dataset scope:**\\nWe reiterate the points made above. We acknowledge that our dataset does not address the full analog circuit design process. However, it reveals that LLMs struggle significantly with understanding basic topologies, which we consider a precursor to performing any complex design task. Without this foundational understanding, LLMs are unlikely to perform reliably on higher-level tasks. In future work, we aim to first address these fundamental challenges and then expand the dataset with more advanced analog design tasks.\\n2. **Dataset diversity:**\\nOur dataset focuses on topology understanding as a starting point and is intentionally limited in scope to address this specific area. We believe the dataset\\u2019s structure can be reused to reliably automatically evaluate LLMs on more complex tasks and to expand into areas such as circuit structure design and device selection. These are promising directions for future work.\\n3. 
**Evaluation of circuit performance:**\\nWe recognize that the benchmark does not evaluate the overall performance of circuits. This was a deliberate decision to focus on reasoning and topology comprehension, which form the foundation for such evaluations. Expanding the dataset to incorporate performance metrics and end-to-end design tasks is an important future direction, but we believe the fundamental understanding tested in our work must first be established for LLMs to contribute meaningfully to circuit design.\\n\\nWe hope this clarifies your questions and addresses your concerns. If you have any further questions or feedback, we would be happy to address them.\"}", "{\"summary\": \"This paper introduces a benchmark designed to evaluate the circuit interpretation and reasoning capabilities of large language models (LLMs).\\nThe benchmark comprises 510 question-answer pairs covering various areas of analog circuit design. \\nEvaluations conducted on three representative LLMs provide some insights into the reasoning abilities of state-of-the-art models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents an intriguing study on the capabilities of large language models (LLMs) in circuit interpretation and reasoning and offers some insights to advance research in leveraging LLMs for analog design automation.\\n\\nThe dataset, comprising over 500 question-answer pairs sourced from a variety of courses and textbooks, appears substantial and non-trivial, providing a solid foundation for evaluating LLM performance in this domain.\", \"weaknesses\": \"The paper\\u2019s technical contributions appear limited, focusing primarily on prompt engineering.\\n\\nAdditionally, the benchmark used raises concerns: although it consists of 500 question-answer pairs, these questions are sourced from textbooks, which overly simplify the real-world challenges of analog circuit design. 
Analog circuit design involves multiple critical stages, such as topology generation, device sizing, and layout design, and it is unclear how the current benchmark addresses these complexities.\\n\\nFurthermore, the results seem unreasonable. Instead of reporting only the overall accuracy, a more detailed breakdown of circuit performance across varying complexities should be presented. Understanding how the model handles complex analog circuit problems, as opposed to simpler ones, would provide far more meaningful insights into its effectiveness.\\n\\nThe insights are limited: the paper does not show how to improve the capabilities of LLMs in analog design automation.\", \"questions\": \"How does the benchmark address the different design stages of analog circuits?\\nWhat is the success ratio for circuits at each level (e.g., 1, 3, 5) based on the benchmark?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifications on Error Types and Rates and Dataset Design and Imbalance\", \"comment\": \"Thank you for your thoughtful comments and questions. We address your points as follows:\\n\\n**Questions:**\\n1. 
**Differentiating errors:**\\nLLMs have not demonstrated a clear lack of background knowledge about circuits in our dataset, so we have not differentiated between the two types of errors in our evaluation. Errors are categorized as math, formatting, or reasoning errors. Topology errors are a subtype of reasoning errors, and direction errors are a subtype of topology errors. Most reasoning errors are topology errors; LLMs display sufficient background knowledge but struggle with its application. For further details on error types, please see Appendix D. Additionally, examples of reasoning errors (including topology and direction errors) can be found in Appendices F.1, F.3, F.4, and F.7.\\n\\n2. **Error rate calculation:**\\nError rates are calculated as the ratio of data points with a specific error type to the total data points in the subset. For example, for questions with netlists, the subset consists of 79 templates \\u00d7 5 setups = 395 data points. Each question is analyzed for all error types, and a single question may fall into multiple categories. For example, a question with a math and a direction error is counted under math, reasoning, topology, and direction errors. Thus, the percentages in Table 4 do not sum to 100% because errors can overlap. Constraints are: math, format, reasoning \\u2264 100%, topology \\u2264 reasoning, and direction \\u2264 topology.\\n\\n**Weaknesses:**\\n1. **Sources of problems:**\\nWe agree that incorporating modern challenges is vital for broader analysis. Our dataset\\u2019s design allows for scalability to include advanced tasks, while textbook-derived problems provide a well-defined and reliable foundation for this initial work. Initial results show that LLMs struggle even with basic topology understanding, highlighting the need to first address these fundamental challenges before moving to more challenging circuits. 
Both are exciting directions for future work, and we shall highlight that in the revised discussion.\\n\\n2. **Category and level imbalance:**\\nWe acknowledge the imbalance and agree that additional analysis would be useful. Table 5 in Appendix E provides category accuracies. In the revision, we can include a discussion of these results and an analogous table of per-level accuracies:\\n\\n| Dataset Subset | Prompt | | GPT 4o Accuracy (%) per level | |\\n|------------------------|-------------------|:-------------:|:-------------:|:-------------:|\\n| | | 1 | 3 | 5 |\\n| Questions With Netlists| 0-s w/ netlists | **49.4** | **31.2** | **18.5** |\\n| | 1-s w/ netlists | 45.0 | 28.2 | 9.2 |\\n| Questions W/O Netlists | 1-s | 85.0 | 60.0 | 48.0 |\\n\\nDue to our level definition, we expect a high correlation between levels and categories. Addressing the challenge of achieving more balanced levels and category diversity is an exciting direction for future work.\\n\\nWe hope this clarifies your questions and addresses your concerns. If you have any further questions or feedback, we would be happy to address them.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"I thank the authors for responding to my questions on the paper and explaining these in detail.\"}", "{\"title\": \"Acknowledging Feedback and Future Improvements for CIRCUIT\", \"comment\": \"Thank you for your thoughtful follow-up and for raising your rating. We sincerely appreciate your recognition of our effort to address your concerns. Your feedback has been invaluable, and we are excited to leverage the scalability of our dataset to further build on this work and enhance its impact.\"}", "{\"title\": \"Clarifications on Dataset Scope, Relevance, and Benchmark Utility\", \"comment\": \"Thank you for your thoughtful comments and questions. 
We address your points as follows:\\n\\n**Dataset size:**\\nWe acknowledge the dataset's limited size but emphasize that it serves as a carefully curated starting point for evaluating LLMs in analog circuit reasoning. Importantly, the dataset is designed to be scalable, enabling future expansion to address more complex tasks. Initial results show that LLMs struggle even with basic topology understanding, highlighting the need to address these fundamental challenges before moving to advanced analog design tasks.\\n\\n**Relevance to analog circuit design:**\\nOur benchmark focuses on interpreting and understanding circuit topologies, which are foundational to the analog design process. This task represents an essential building block for more complex design stages. LLMs seem to struggle with this basic task, and we believe that addressing this limitation is a necessary first step before integrating LLMs into the analog design process, as their usefulness and versatility in performing more complex tasks would be limited and minimal.\\n\\n**Benchmark Utility:**\\nOur benchmark design is flexible and scalable, making it suitable for evaluating hybrid approaches that combine LLMs with other models or tools. The unit-test-like methodology ensures that if a model passes all setups for a given template, it reliably demonstrates an understanding of the topology, regardless of the specific reasoning process. This approach offers a consistent and objective evaluation framework that can be extended to hybrid models by integrating their outputs into the evaluation pipeline. We acknowledge the limitation that automatic evaluation focuses on the final answer and may overlook errors in intermediate reasoning, such as hallucinations or mathematical inaccuracies that do not propagate to the final output. Future work could include automatic intermediate-step evaluation strategies to address this limitation.\\n\\nWe hope this clarifies your questions and addresses your concerns. 
If you have any further questions or feedback, we would be happy to address them.\"}", "{\"metareview\": \"The paper describes a benchmark for understanding analog circuits, which consists of 510 questions. Different LLMs are evaluated. The setup requires numbers (like the voltage) and the diagram of the circuit in the netlist format, as well as the question/answer pairs.\\n\\nThe accuracy of models like GPT-4 is around 50%.\\n\\nThe usefulness of creating new benchmarks for LLMs is very clear, and this paper adds one of them. However, I think that the contribution is not so big: you can expect such kinds of results for any university-level course, be it topology, algebraic geometry or circuit understanding: they all can be transformed into such microbenchmarks. \\nThe paper does not provide any recommendations or attempts to increase the accuracy. \\nWhat happens if we finetune the model on such circuit data? What are typical errors and in which part? \\nThe benchmark also does not show any fundamental limitations of LLMs for this scenario.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers agree that the paper is below the bar. Some of the concerns raised are the limitations of the design and experiments.\\nAn interesting comment was that it is unclear if the benchmark could assist in designing new circuits (which could be of great practical value).\"}", "{\"summary\": \"This paper presents a Circuit Interpretation and Reasoning Capabilities (CIRCUIT) dataset to evaluate LLMs in understanding and reasoning about analog circuits. The authors conduct a series of experiments to assess the performance of various LLMs in understanding analog circuits and their topologies from diagrams and netlists.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is the first to evaluate the ability of LLMs in the circuit design domain and summarizes the limitations of existing LLMs according to the results. 
The authors summarize the limitations of existing models based on the experimental results. The qualitative analysis of errors offers valuable insights for improving LLMs.\\n2. This paper is well organized and clear.\", \"weaknesses\": \"1. The sources of problems are limited. Most of the problems are collected from textbooks published more than 10 years ago, which primarily assess general knowledge of analog circuits. It would be beneficial to construct more challenging problems about complex analog IC designs from various sources, such as handmade questions based on modern IC datasheets.\\n2. The categories in this dataset are imbalanced. In Figure 3, most of the Basic questions are in level 1, while RF questions are more challenging. Besides, there is a lack of detailed analysis regarding LLM performance across different levels and categories.\", \"questions\": \"1. How to differentiate between errors where LLMs have insufficient background knowledge about circuits and those where incorrect reasoning occurs? Do errors caused by insufficient knowledge fall under Direction Errors?\\n2. A minor question about Table 4: how is the error rate calculated? I saw that the sum of each column or row does not equal 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Encouraging Feedback on Revised Work\", \"comment\": \"We truly appreciate your engagement and the feedback that has helped us improve our work. In the revision, we have implemented several changes to enhance our paper's clarity, scope, and utility. Additionally, we moved some results from the appendix into the main paper, included additional results, and added their discussion to strengthen our contributions. We would be grateful if you could take a moment to review the revised version, and we welcome any further questions or feedback. 
Thank you again for your constructive input and thoughtful consideration.\"}", "{\"comment\": \"Thanks to the authors for addressing my concerns. This is a valuable contribution to evaluating LLMs in the hardware domain. It would be beneficial to consider incorporating more diverse sources into this benchmark in the future.\"}" ] }
5i6ZZUjCA9
Affine Steerable Equivariant Layer for Canonicalization of Neural Networks
[ "Yikang Li", "Yeqing Qiu", "Yuxuan Chen", "Zhouchen Lin" ]
In the field of equivariant networks, achieving affine equivariance, particularly for general group representations, has long been a challenge. In this paper, we propose the steerable EquivarLayer, a generalization of InvarLayer (Li et al., 2024), by building on the concept of equivariants beyond invariants. The steerable EquivarLayer supports affine equivariance with arbitrary input and output representations, marking the first model to incorporate steerability into networks for the affine group. To integrate it with canonicalization, a promising approach for making pre-trained models equivariant, we introduce a novel Det-Pooling module, expanding the applicability of EquivarLayer and the range of groups suitable for canonicalization. We conduct experiments on image classification tasks involving group transformations to validate the steerable EquivarLayer in the role of a canonicalization function, demonstrating its effectiveness over data augmentation.
[ "equivariant networks", "steerability", "the affine group", "equivariants", "canonicalization" ]
Accept (Poster)
https://openreview.net/pdf?id=5i6ZZUjCA9
https://openreview.net/forum?id=5i6ZZUjCA9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yn6PKs88UJ", "yby9KypjIu", "sYnhdfBkAg", "m3S9wwRaRW", "i3cf9EfLzQ", "JdEUePBqjq", "FSCpYLYnqg", "EBObS6w39X", "7fBLe71cb0" ], "note_type": [ "official_comment", "official_review", "decision", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733227007348, 1730719335187, 1737524123977, 1730712080285, 1734595023711, 1730628515497, 1732707674908, 1733154215752, 1730712218662 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_uxCw" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_8iML" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_sBuw" ], [ "ICLR.cc/2025/Conference/Submission11430/Area_Chair_qFqC" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_5XxQ" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_5XxQ" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_uxCw" ], [ "ICLR.cc/2025/Conference/Submission11430/Reviewer_uxCw" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the additional clarifications and comments.\\n\\n> In our experiments, the shape of the tensor produced by $\\\\phi_1$ is $(2 \\\\times 2, 28, 28)$, where the 4 channels correspond to the 4 elements of a $2 \\\\times 2$ matrix, and $(28,28)$ represent the spatial axes. $\\\\phi_1$ is a learnable equivariant model. We perform Det-Pooling on the spatial axes according to the absolute value of the determinant at each spatial point, which is analogous to conventional global pooling. 
The input with shape $(2 \\times 2, 28, 28)$ results in an output of a $2 \\times 2$ matrix (or equivalently, a 4-dimensional vector in the implementation).\\n\\nThe authors might already be aware of this, but as a final future work suggestion, the map $\\phi_{1}$ could probably be turned into one which produces an element of $\\textnormal{GL}(2, \\mathbb{R})$ directly since $\\mathbb{R}^{2 \\times 2}$ is isomorphic to the Lie algebra $\\mathfrak{g}$ of $\\textnormal{GL}(2, \\mathbb{R})$ once a basis is chosen and from $\\mathfrak{g}$ we can map to $\\textnormal{GL}(2, \\mathbb{R})$ using the group or Riemannian exponential (with the latter being surjective).\"}", "{\"summary\": \"In this work, the authors propose a steerable and affine group equivariant layer, EquivarLayer. They introduce a novel pooling technique that can facilitate canonicalization. The empirical experiments presented were conducted on MNIST and MNIST scale datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents a novel pooling layer that keeps equivariance to affine groups intact.\", \"weaknesses\": \"1. The paper conducts experiments solely on MNIST datasets and skips existing methods like [1] for comparison.\\n\\n2. The contribution does not seem significant and the experiments section is limited. \\n\\n3. The motivation for using the proposed approach, given the already existing canonicalization methods and steerable layers, is unclear to me after going over the paper. \\n\\n4. Definition 4, Proposition 5, as well as Theorem 7 can be improved in terms of formalization as well as readability of the equations. \\n\\n[1] Implicit Convolutional Kernels for Steerable CNNs, Zhdanov et al.\", \"questions\": \"1. Could you present results with additional datasets along with a comparison with existing methods? 
For example, the above-mentioned Steerable CNNs, as well as ablations with image datasets like CIFAR10/CIFAR100?\\n\\n\\n2. The addition of simple canonicalization as presented in Kaba et al., compared to the proposed method, would be helpful. \\n\\n\\n3. How does the proposed method perform under roto-translation and scale? \\n\\n\\n4. How does this method scale with data, dimensions of invariants as well as group size?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper extends the InvarLayer (Li et al., 2024), which uses differential invariants to achieve affine equivariance, by proposing the EquivarLayer. EquivarLayer combines a novel equivariant matrix with differential invariants to generalize from invariance to equivariance. The paper also introduces Det-pooling, which enables EquivarLayer to achieve canonicalization. This canonicalization capability is assessed by using the EquivarLayer model to canonicalize transformed MNIST images, followed by classification with a ResNet-50. The results show improved performance compared to ResNet-50 with data augmentation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces a novel and efficient method for extending the affine-invariant layer to an affine steerable equivariant layer. Achieving this in deep learning is challenging, as the affine group has six degrees of freedom, and sampling from this group often incurs prohibitively high computational costs.\\n2. The proposed ResNet-32 model contains only 0.4 million parameters, which is impressive. This compact model size may enable the exploration of very deep equivariant networks.\", \"weaknesses\": \"1. 
Experiment Design: Using invariant classification to demonstrate the effectiveness of the proposed EquivarLayer for canonicalization is somewhat indirect. The true strengths or limitations of EquivarLayer are obscured by the downstream ResNet-50 prediction network, yet five pages are dedicated to deriving the EquivarLayer. Since canonicalization is a downstream task, its accuracy depends significantly on the model\\u2019s ability to preserve information\\u2014a quality typically evaluated by learning equivariant features, performing invariant classification, or computing equivariant error. Key questions remain, such as whether EquivarLayer improves upon InvarLayer (as it is a generalization) and how it compares to other steerable methods achieving subgroups of affine group equivariance.\\n2. Experiment Setup and Comparability: The experiment setup limits comparability with other methods. The train/test split is not provided, and the scale factor range (0.8 to 1.2 or 1.6) differs from that of other scale-equivariant papers, which often use a range of [0.3, 1]. Additionally, key scale-equivariant works, such as [1], are not cited. These omissions make it challenging to compare this model with existing equivariant CNNs, especially since no equivariant network baselines are included in the experimental section.\\n3. Clarity in Derivations: The derivations following Eq. (16) in Section 2.3 are unclear. The notation for symbols like $u_x, u_y$ is not adequately defined, and it was only by referring to Li et al. (2024) that I understood they represent gradients. While this is a minor issue, a more significant concern is the lack of discussion around $\\\\alpha$ from Eq. (14). Instead, the paper provides an example of an equivariant matrix (Eq. 17) without explaining the derivation of $\\\\alpha$.\\nAdditionally, one of the advantages of equivariance is its ability to preserve information. 
While the proposed method is computationally efficient, it may sacrifice some information, as it limits gradients to second order and relies on differential invariants for local features. It remains unclear if the gradient order relates to information preservation. The derivations also do not clarify why only first- and second-order gradients are used or how the equivariant matrix is derived.\\n\\n[1] Sosnovik, Ivan, Micha\\u0142 Szmaja, and Arnold Smeulders. \\\"Scale-Equivariant Steerable Networks.\\\" International Conference on Learning Representations.\", \"questions\": \"1. Can this approach be extended to handle reflections, specifically for cases where det(A) < 0?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes the \\u201cEquivarLayer,\\u201d extending the InvarLayer to support affine-equivariant transformations with steerability. The EquivarLayer is built upon the framework of moving frames and differential invariants, aiming to handle more general transformations within the affine group and its subgroups. As a key application, they integrate the EquivarLayer with a Det-Pooling module to achieve canonicalization. The results are demonstrated in MNIST dataset and its transformed variants and showed that EquivarLayer-based canonicalization can improve the performance of a non-equivariant pre-trained model.\\n\\nThe paper initially received somewhat mixed reviews. 
The major concerns raised by the reviewers include (1) weak technical contribution, since the work is largely based on prior methods on invariant layers for affine groups (InvarLayer), moving frames, and differentiable invariants, (2) limited evaluation, since the experiments are conducted only on MNIST (and its variants) while the prior works explored more diverse datasets, (3) limited applications, since the work demonstrates its effectiveness only on canonicalization while the capability of equivariant layers itself was not clearly presented. In response, the authors provided a comprehensive rebuttal, including results on more diverse datasets (e.g., Fashion MNIST, results on classification without canonicalization, and clarification of the contributions over prior works. After the author's feedback, the reviewers found the additional evidence compelling, generally agreeing that while more rigorous experiments and further clarity would be welcome, the presented contributions are meaningful.\\n\\nAC carefully read the paper, reviews, and discussions, and concurs with the reviewers' decision. The authors should incorporate the additional clarifications and experiment results presented in the rebuttal to the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 8iML initially questioned the novelty and practical significance of the contribution, arguing that experiments on MNIST were too limited and comparisons with existing steerable layers were insufficient. Reviewer uxCw called for more clarity regarding the claimed novelty, the relationship to previous moving frame methods, and the lack of examples illustrating non-scalar input-output representations. Reviewer sBuw focused on the limited evaluation using only MNIST, requesting more diverse datasets and the inclusion of stronger baselines, along with careful augmentation strategies to facilitate fair comparisons. 
Reviewer 5XxQ suggested comparing the proposed approach with fully equivariant models and showcasing more complex representations. In response, the authors provided additional results and clarifications, which successfully resolved many of these concerns.\"}", "{\"summary\": \"In summary, this paper contributes the following:\\n- An extension of the equivariant steerable layers for the 2D affine group proposed in [1] from type-0 (scalar-type) feature maps (in the terminology of [2, 3]) to other types utilizing equivariant moving frames [4], specifically by linear combination of the equivariants produced by frames using type-0 features as weights,\\n- An adaptation of the above layer for invariant or equivariant canonicalization [5] of non-equivariant backbones for the 2D affine group, using global max-pooling of 2-channel type-1 feature with respect to an invariant quantity (absolute determinant),\\n- Experiments on invariant canonicalization of a pre-trained ResNet50 backbone for MNIST image classification under affine, rotation-scaling, and scaling transformations.\\n\\n[1] Li et al. Affine equivariant networks based on differential invariants (2024)\\n\\n[2] Cohen and Welling, Steerable CNNs (2017)\\n\\n[3] Thomas et al. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds (2018)\\n\\n[4] Olver, Modern developments in the theory and applications of moving frames (2015)\\n\\n[5] Kaba et al. Equivariance with learned canonicalization functions (2022)\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"- S1. The use of equivariant moving frames to extend type-0 features to other types is technically sound, and original as far as I am aware in the context of steerable networks. (In a broader context, the authors may find [6] and its application to Lorentz canonicalization [7] related, as they also linearly combine equivariants using learned invariant features.)\\n- S2. 
The considered application for canonicalization using global invariant pooling is technically sound as far as I can tell, and properly shows the utility of the proposed layer since canonicalization [5] only using type-0 features is not straightforward.\\n- S3. The experiments show that the approach is able to perform canonicalization of a non-equivariant backbone for invariant MNIST classification under affine transformations, which has been unavailable in current literature before as far as I am aware.\\n\\n[5] Kaba et al. Equivariance with learned canonicalization functions (2022)\\n\\n[6] Villar et al. Scalars are universal: Equivariant machine learning, structured like classical physics (2021)\\n\\n[7] Nguyen et al. Learning symmetrization for equivariance with orbit distance minimization (2023)\", \"weaknesses\": [\"W1. A proper comparison against fully equivariant networks for classification (e.g., InvarPDEs-Net and InvarLayer from [1]) is not included in the experiments. Including these baselines would strengthen the paper, especially since canonicalization may benefit from pre-training in non-invariant domains (e.g., ImageNet-1k in this paper) unlike fully equivariant networks.\", \"W2. It seems possible to use the moving frame in Definition 8 (or the equivariant matrix in Theorem 10) directly for canonicalization without multiplication by type-0 features (Eq. (10)), as this coincides with the definition of equivariant frames in frame averaging [8]. Adding this as a baseline would strengthen the results, as it can be understood as ablating trainability from canonicalization.\", \"W3. It was unclear to me how the definition and property of moving frames given in Definition 8 and Theorem 9 logically leads to the construction of equivariants using relative equivariants given in Section 2.3.\", \"W4. 
The paper demonstrates a single application of the proposed layer as the last layer of a canonicalization network, which extracts 2-channel type-1 features from type-0 features, while all other layers of the network map between type-0 features [1]. This misses (1) feature maps of type >1, (2) layers taking non-scalar features as input, and (3) layers mapping between non-scalar features, all of which are possible within the proposed framework. Testing (a subset) of these would strengthen the paper. A straightforward way is to modify InvarPDEs-Net and InvarLayer from [1] to have non-scalar hidden features, analogous to steerable networks in [2].\", \"Minor typo\", \"Line 13: InvarLayer -> InvarLayer (Li et al. 2024)\", \"[1] Li et al. Affine equivariant networks based on differential invariants (2024)\", \"[2] Cohen and Welling, Steerable CNNs (2017)\", \"[8] Puny et al. Frame averaging for invariant and equivariant network design (2021)\"], \"questions\": [\"Q1. Is it necessary that equivariant matrices are square matrices, as implied in Definition 6? As far as I understand, it should be that their number of columns can be flexible, while the number of rows is specified by the representation \\\\rho'.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response and supplementing the experimental results. I have some follow-up questions after reading the response and other reviews.\\n\\n- On R1, Tables 2 and 3 show that InvarLayer fails catastrophically on MNIST-RS and MNIST-GL+(2). Are there specific reasons in their failure? 
To my understanding, InvarLayer and EquivarLayer-canonicalized ResNet-50 are both invariant to the symmetry groups of the interest by construction, and I do not see clear reasons why InvarLayer should show such high error rates on MNIST-RS and MNIST-GL+(2), compared to their low error rates on MNIST.\\n\\n- On R2 for Reviewer 8iML, I have reservations about the statement \\\"... It is comparable to key advancements in the domain, such as the progression from G-CNNs [8] to steerable CNNs [5-7]...\\\" A key contribution of [5] is proposing to incorporate steerability to equivariant CNNs for the first time. This work contributes an application of the idea of [5] to affine groups, while also building upon another prior work (Li et al. InvarLayer). Comparing the contributions as written in the response seems misleading.\\n\\nOther than the above, I would like to hear the opinions of other reviewers, especially Reviewer uxCw on the raised concerns, before making further decisions.\\n\\n[5] Cohen and Welling, Steerable CNNs (2017)\"}", "{\"comment\": \"The authors have made a strong effort in responding to the weaknesses mentioned and I've raised my score. To explain my reasoning for not increasing my score further (hopefully the authors can find some useful feedback here): I think the main weaknesses of the current paper revolve around being able to showcase/translate the increased generalization that comes with being able to use different $(\\\\rho, \\\\rho\\u2019)$-representations (point (2) in the original review) into concrete operators that can be applied to different modalities. Non-scalar field representations (which the theoretical framework of the paper unlocks) are more commonly employed outside of imaging data. The dynamical system experiment is appreciated, and a good step in this direction. 
As the authors state \\\"We anticipate that researchers from various scientific domains could explore and apply our framework to problems with similar properties\\\", and I think showcasing how operators within this framework can be implemented concretely for various domains and what their limitations are compared to 'traditional' steerable methods (not based on moving frames/invariants) would greatly improve the paper (since one could see the realization of 'larger class of theoretical operators' -> 'concrete implementations for different modalities' -> limitations, as in e.g. [1]).\\n\\n> For an EquivarLayer whose output is a vector field, applying activation functions while preserving equivariance is subject to certain conditions. However, it is important to emphasize that the EquivarLayer itself is inherently nonlinear, which distinguishes it significantly from steerable CNNs [6,7] or PDO-based methods [8-10]. This intrinsic nonlinearity eliminates the necessity for additional activation functions between layers.\\n\\nI'm not sure I agree that the nonlinearity baked into the layer removes the need for (other) nonlinearities. In my view this is an empirical claim that would need to be shown, since in general different activation functions can influence training dynamics and generalization/performance significantly. To be clear, I don't have a problem if this is a limitation of the framework in this case where it is applied using canonicalization, it would simply be better to know as others/future work can address it. This is in some sense an example of what I stated before, e.g. limitations that might appear in some settings and should be clarified.\\n\\n> The key lies in whether the resulting matrix is invertible. By definition, Det-Pooling selects the matrix with the largest absolute determinant from the matrix-valued function $v(x)$. 
While this does not guarantee invertibility in a strict mathematical sense, it maximizes the likelihood of the output matrix being invertible. The output will be a singular matrix if and only if $v(x)$ is singular for all x, which is a rare occurrence in practice.\n\nTo make sure my understanding is correct: what is the shape of the tensor produced by $\\phi_{1}: \\mathcal{F} \\to \\mathcal{\\tilde{F}}$ in practice, and what do the axes represent? Am I understanding correctly that $\\phi_{1}$ is deterministic, i.e. not learned? Are we pooling on what would be the channel axis?
The experimental validation focuses on the delta in performance on transformed/non-transformed test sets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally clear in its presentation, with each mathematical object being well-introduced and described. The authors present a framework which indeed treats a larger class of group representations than seen for this framework.\", \"While there has been some recent work in this area, the problem of constructing equivariant/invariant layers with respect to larger transformation groups is generally under-explored and potentially useful. In this sense, I am also happy to see work employing the method of moving frames and the use of differential invariants, which may yield different complexity constraints and/or scaling behaviour on the spectrum of methods imposing inductive biases.\", \"Intertwining canonicalization and traditional methods for constructing equivariant layers seems like a useful avenue to explore.\"], \"weaknesses\": \"I will categorize the issues I find with the paper into three categories: (1) Claimed novelty; (2) Lack of presentation of concrete examples given the level of generality claimed and (3) experimental validation.\\nI want to note from the onset that I am open to raising my score, and the degree to which I am penalizing (lack of) novelty here is very much based on what the authors themselves claim.\\n\\n- (1) The paper uses language that supports the notion where a lot of the concepts are novel (in the sense of new mathematical objects) rather than, their application in this form in the context of machine learning being novel. \\n- Considering the framework presented up to section 2.3, I would like to highlight obviously the work of Olver himself in [4] where the same framework of constructing equivariant maps using moving frames based on his previous work is summarized (ending up with the abstract operator of eq. (13)). 
Directly related is the work of Sangalli et. al which also uses the prolongation of the group action on the jet space for invariantization (based on Olver's work) in the context of image and volumetric data [1-3]. From [1] appendix B, the general framework is directly applicable taking X = $\\\\mathbb{R}^2$, $Y = \\\\mathbb{R}^{c}$, $Z = \\\\mathbb{R}^{c^{'}}$, we arrive at the same principle where an equivariant operator $\\\\mathcal{F} \\\\to \\\\mathcal{F}^{'}$ on the function spaces is defined implicitly once a G-invariant operator on the product space $X \\\\times Y$ is available ($g \\\\cdot (x, u) = (gx, \\\\rho(g)u) = (gx, u)$ ($\\\\rho = \\\\rho_0$), corresponding to $g \\\\cdot f(x) = f(g^{-1}x)$). In this work the notation is less clear and it appears that $Y = \\\\mathcal{F}$ indicating a higher order operator, however the differential invariants are always (correctly) constructed with the same $x \\\\in X$ argument i.e. $\\\\hat{\\\\mathcal{I}}(f)(x) = I(x, f(x))$ (we make use of differential invariants dependent on $x$). It should be highlighted then that the 'equivariant' $\\\\mathcal{E}: X \\\\times Y \\\\to Z$ presented here is a $G$-equivariant map on the product space respecting $\\\\mathcal{E} \\\\circ \\\\rho(g) = \\\\rho^{'}(g) \\\\circ \\\\mathcal{E}$ , and would correspond to the case where $\\\\rho \\\\neq \\\\rho_0$ ((16) in [1]). This is indeed a different group action and the authors seek to effectively work with induced representations which also acts on the codomain $L_{\\\\rho}(g)(f)(x) = \\\\rho(g)f(g^{-1}x)$, however it should not be presented as a departure from this framework if one is simply changing the representation used.\\n- The authors should also try and clarify further how their framework differs from [6] in its usage of the sup-normalized differential invariants. Are the polynomial relative invariants of [6] and the relative invariants here the same? 
I would also highlight [5] and [7] which also deal with differential invariants of the affine group of the plane, where the zero division errors are treated with a different formulation (see [5] (34) and (35)).\n- Regarding (2), I would have liked to see more concrete examples of the framework being employed in the context where the additional flexibility in terms of input/output representations is valuable e.g. beyond image data, see for example [9]. I am not concerned with SOTA results, simply a validation that the increased generality allows the use of the differential invariants for a larger set of applications and modalities. It would also be useful for practitioners if more concrete examples of the realized (beyond theoretical) operators are presented.\n- Regarding (3), it would be useful to quantify more clearly the relationship between the amount of data augmentation used and the size of the dataset for each experiment. More importantly, what would be very useful here is to have a thorough presentation of the time/space complexity of each layer employing this framework, see e.g. [1] appendix D or [6], with the inclusion of standard (3-channel) image data.\n\n[1] \"Moving frame net: SE(3)-equivariant network for volumes\", Sangalli et. al, 2023\n[2] \"Differential invariants for SE(2)-equivariant networks\", Sangalli et. al, 2023\n[3] \"Equivariant Deep Learning Based on Scale-Spaces and Moving Frames\", Sangalli 2023\n[4] \"Using Moving Frames to Construct Equivariant Maps\", Peter Olver (March 2024)\n[5] \"Affine Differential Invariants for Invariant Feature Point Detection\", Tuznik et. al 2018\n[6] \"Affine Equivariant Networks Based on Differential Invariants\", Li et.
al, 2024\\n[7] \\\"Affine invariant detection: edge maps, anisotropic diffusion, and active contours\\\", Olver et.al 1999\\n[8] \\\"On Relative Invariants\\\", Fels & Olver 1997\\n[9] \\\"Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data\\\" Finzi et. al 2020\", \"questions\": [\"Beyond the issues highlighted before:\", \"In the case where the output representation acting on the codomain is not operating on scalar fields, do we have any restrictions with respect to the class of activation functions that can be used as it is the case for steerable methods which operate on vector fields? Do we have to use norm non-linearities?\", \"In the appendix it is stated \\\"Mironenco & Forre (2024) tackles this problem by decomposing a large group into smaller ones and sampling them to enhance sample efficiency. However, it requires sampling on certain measures that may be impractical, such as GL(n)-invariant measure of positive definite matrices.\\\". I'm not sure I understand what limitation is described here. As far as I understand, sampling from this measure is indeed possible (see e.g. [10]) and the SPD manifold has quite extensive applications.\", \"I don't understand how the Det-Pooling ensures a parametrization on the group $\\\\textnormal{GL}(2, \\\\mathbb{R})$. It seems we are mapping to the space of $\\\\mathbb{R}^{2 \\\\times 2}$ matrices and it is simply assumed that this is a group element?\", \"Is there a connection between this work and Olver's work on relative invariants [8]?\", \"[10] Riemannian Gaussian Distributions on the Space of Symmetric Positive Definite Matrices, Said et. al 2015.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5haYLrlyGj
A Unified Framework for Speculative Decoding with Multiple Drafters as a Bandit
[ "Taehyeon Kim", "Hojung Jung", "Se-Young Yun" ]
Speculative decoding (SD) has emerged as a promising approach to accelerate inference in large language models (LLMs). This method drafts potential future tokens by leveraging a smaller model, while these tokens are concurrently verified by the target LLM, ensuring only outputs aligned with the target LLM’s predictions are accepted. However, the inherent limitations of individual drafters, especially when trained on specific tasks or domains, can hinder their effectiveness across diverse applications. In this paper, we introduce a simple yet efficient unified framework, termed MetaSD, that incorporates multiple drafters into the speculative decoding process to address this limitation. Our approach employs multi-armed bandit sampling to dynamically allocate computational resources across various drafters, thereby improving overall generation performance. Through extensive experiments, we demonstrate that our unified framework achieves superior results compared to traditional single-drafter approaches.
[ "Speculative decoding", "multi-armed bandit", "large language model" ]
Reject
https://openreview.net/pdf?id=5haYLrlyGj
https://openreview.net/forum?id=5haYLrlyGj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xpQi7kBcHk", "t2R5NzQc32", "q0MO93kcGL", "ojyqwoUNab", "o8EmjmmIXB", "niORRxPJkp", "n0SCUXj4QG", "mcAlzzNr25", "jKA1ymAedK", "hRwvKSNFsk", "fI1zGoQaxa", "eo16Mw85n5", "cmIAxJaZUE", "c5AsE3LshB", "boFdMxa3Zu", "b9PIEB98Eu", "XsP9Rvk3A3", "XOk7g6efCy", "XN0Fn3MKKv", "VKS06ZYolD", "V7pEIQQSbh", "UgtIsUF4M5", "Udd8tzGBZA", "TwMkRrSKoM", "SC4EIOvAEp", "RXws74AHCZ", "PYUVV9Jnkn", "OnIiqnVtUq", "OMUVIrq0Eo", "O6SszJBnAT", "MpzjjAAzYd", "MlJxufkIwU", "K1m27lhV9f", "GX505Iej3X", "GShRpcqDLC", "FPTUTRCaL6", "F3jB7TW8l4", "DTI3T30sxR", "DCVeJj5xiM", "CBF3Wlh73r", "AixeN7wV3U", "ABMwvxtnQa", "8iHDrkiNZs", "8Kx2bbFwhG", "7hXUy7XEL9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732524694679, 1732210389626, 1732605020166, 1732211138770, 1732111043288, 1732608845431, 1732319567892, 1730587525834, 1734101370550, 1732210330099, 1732110777320, 1732183416877, 1732110633528, 1732244616176, 1732110184777, 1732179480542, 1732181619024, 1732572910683, 1730129042445, 1732539241026, 1732607726157, 1732211788885, 1730710977268, 1732278169701, 1732278535802, 1737523955293, 1732110703166, 1732414539800, 1732543156639, 
1732609543080, 1732111502461, 1732182077229, 1732307371333, 1732523673326, 1732608052232, 1732164042964, 1732110405946, 1732111249868, 1732179369797, 1729326755740, 1732305930312, 1732518790025, 1732110842502, 1732553419310, 1732246589527 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_Tyx3" ], [ "ICLR.cc/2025/Conference/Submission9021/Area_Chair_e6Bk" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_V45K" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_V45K" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_v7AR" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_CxuG" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_Tyx3" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Authors" ], [ "ICLR.cc/2025/Conference/Submission9021/Reviewer_V45K" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer v7AR\\n\\nThank you for your suggestion. We will do our best to conduct as many additional experiments as possible within the discussion timeline, including out-of-domain datasets like MMLU, Physics QA, and Hotpot QA, with batch size > 1 and OFA Eagle comparisons.\\n\\nThat said, we feel this level of experimental demand is quite extensive compared to typical expectations, as similar works, even including concurrent studies at `ICLR`, are not usually evaluated across such a broad range of datasets. \\n\\n**Additionally, we would like to ask whether the lack of specific comments on the `theorem` and its implications is a critical factor in your decision to maintain the current score (`3: reject, not good enough`).** While we believe our responses (and revised manuscript) address the concerns raised, `given the tight timeline`, we want to ensure we prioritize addressing the most impactful concerns if others remain. \\n\\nWarm regards, \\n\\n-Authors\"}", "{\"title\": \"Responses to Comments by Reviewer V45K (2/2)\", \"comment\": \"# Follow-up 4. W4. 
Switching Accuracy Metrics\\n\\nThank you for your feedback regarding switching accuracy metrics. Generally, a low switching frequency does correlate with high switching accuracy, and our observations align with this trend. We acknowledge the importance of providing more comprehensive metrics to further validate the Bandit mechanism's effectiveness and will consider additional analysis for the camera-ready version.\\n\\nIt is worth noting that switching behavior can vary between the initial selection phase and intermediate steps, potentially requiring a more nuanced evaluation. For instance, early-stage switches may have a different impact on overall performance compared to switches later in the decoding process. **We plan to explore more detailed metrics, such as switching frequency per phase and task-level switching accuracy in camera-ready.**\\n\\nThat said, our current results already provide metrics for switching times and switching accuracy during intermediate steps. Additionally, the minimal deviation in real-world latency compared to the optimal configuration suggests that our Bandit mechanism performs well in practice. This reinforces our confidence in the effectiveness of the proposed approach.\\n\\nLastly, as mentioned in the `future work section` of our manuscript, incorporating advanced methods such as reinforcement learning or contextual Bandits holds promise for further improving drafter selection and dynamic adaptation. We believe these strategies could enhance performance even in more complex scenarios.\\n\\nWe appreciate your valuable suggestion and will work to provide richer metrics and insights in the revised version. 
Thank you for helping us improve this aspect of our work!\"}", "{\"title\": \"Follow-up Responses for OOD experiments (To Reviewer v7AR)\", \"comment\": \"Dear Reviewer v7AR,\\n\\nWe conducted additional evaluations regarding heterogeneous batch throughput and OOD datasets (adding to the previous `finance` and `RAG` OOD dataset evaluations), and the results are as follows:\\n\\n**Single Batch Throughput** \\n| Task | OFA Eagle | MetaEagle-UCB |\\n|----------------|-----------|---------------|\\n| Physics | 2.424 | **2.573** |\\n| Hotpot QA | 2.262 | **2.270** |\\n| MMLU COT (Avg. across 57 tasks) | 2.466 | **2.529** |\\n| **Wins** | **20** | **37** |\\n\\n**Heterogeneous Batch Throughput** \\n| Task | OFA Eagle | MetaEagle-UCB |\\n|----------------|-----------|---------------|\\n| MMLU COT | **1.899** | 1.826 |\\n\\n## Analysis\\n\\nMetaEagle-UCB consistently outperforms OFA Eagle in single-batch scenarios across `Physics QA` and `Hotpot QA`. MetaEagle-UCB performs better on average across the 57 tasks in `MMLU-COT` and outperforms OFA Eagle in 37 individual tasks while trailing in 20 tasks (Here, MMLU with CoT reasoning follows the setting by Anthropic). For heterogeneous batch throughput, MetaEagle-UCB remains competitive due to using its specialized drafters (e.g., for QA, code, and math; at most 3 drafters in general), which excel at handling diverse tasks within the MMLU dataset.\\n\\nThese findings suggest that future research could explore **how to structure specialization among drafters** to further enhance performance for throughput. For instance, incorporating two distinct types of OFA drafters trained on orthogonal domains could improve coverage across diverse task distributions, especially in `mixed or OOD scenarios`. 
This approach could also serve as a robust foundation for dynamically balancing specialization and generalization within speculative decoding frameworks.\n\nWhile the current specialization approach shows slight limitations in some OOD scenarios, our research focuses on proposing a **generic framework** rather than fixed specialization. We believe that future studies, emphasizing how to optimally construct and organize domains during training, could effectively address these limitations and further improve performance in diverse and challenging conditions.\n\n\n## Clarifications on Research Focus\n\nOur work is not advocating for training multiple specialized drafters instead of OFA drafters. **The main focus is on effectively utilizing pre-existing heterogeneous specialized drafters, tackling the critical challenge of dynamically routing tasks among them.** This is a practical problem, especially as such specialized drafters are becoming widely available. \n\n- **We strongly recommend revisiting the `Motivation Section` and `Further Motivation Section` in `Appendix`.**\n\n## Broader Implications\n\nThis approach mirrors routing strategies in scalable systems like LoRA and MOE, where task-specific routing is essential (already widely used in industrial ML systems). Similarly, in speculative decoding, system-level routing frameworks like MetaSD have significant potential to enhance efficiency and flexibility.\n\nWe hope this analysis provides clarity on the representativeness of our experiments and the broader implications of our framework. \n\nWarm regards,\n\nAuthors\"}", "{\"title\": \"Response to Follow-up Questions (Throughput) by Reviewer v7AR\", \"comment\": \"# Follow-up. Throughput\n\n> By the way, before discussions, we apologize to `Reviewer CxuG` for the extended discussion with `Reviewer V7AR` in this thread and appreciate your understanding.\n\nThank you for pointing this out. 
We would like to clarify that the throughput experiments reported in our revised manuscript were conducted under `homogeneous batch settings`, where all tasks within a batch were identical, and the reported value is the average performance across diverse tasks.\n\nTo address your concerns, we conducted additional experiments to assess the performance of MetaSD in the `worst-case edge scenario`, where each instance in the batch corresponds to a different task (`heterogeneous setting`). Below are the results:\n\n| **Batch Size** | **Single OFA Eagle** | **Homogeneous Batch (MetaEagle-UCB)** | **Heterogeneous Batch (MetaEagle-UCB)** |\n|----------------|-----------------------|---------------------------------|-----------------------------------|\n| **1** | 2.803 | 3.045 | 3.045 |\n| **2** | 2.751 | 2.933 | 2.518 |\n| **4** | 2.563 | 2.701 | 1.931 |\n| **Throughput** | 2.235 | 2.427 | 1.321 |\n\n--------------\n## Observations\n\n1. Homogeneous batches: In scenarios where all instances in the batch belong to the same task, MetaSD achieves consistently higher throughput than the OFA Eagle. This is due to the ability of MetaSD to dynamically select the best drafter for the task, ensuring optimal performance.\n\n2. Heterogeneous batches (Worst-Case Edge): In the edge case where each instance in the batch corresponds to a different task, we observe a slight drop in throughput for MetaSD. This is expected due to increased I/O from switching between drafters. Notably, for larger batch sizes, the performance of MetaSD in this setting may fall below that of the OFA Eagle, highlighting a limitation of our approach in such cases.\n\n3. Small batch sizes: For small batch sizes (e.g., batch size = 1 or 2), MetaSD remains highly competitive even in heterogeneous task settings.\n\n--------------\n## Discussion\n\nWe acknowledge that the performance drop in heterogeneous batch scenarios represents a limitation of our approach. 
However, this is an edge case that may not frequently occur in real-world applications. Moreover, this is not a problem unique to MetaSD\u2014other systems relying on specialized drafters may face similar challenges in such settings.\n\nOn the other hand, MetaSD demonstrates strong adaptability and efficiency in mixed-task, small-batch scenarios, which are also common in many practical deployments (e.g., laptops, personalized products). Its ability to dynamically select the most suitable drafter ensures robust performance, even in uncertain or evolving task distributions.\n\n-------\n## Broader Context and Future Directions\n- One of the key motivations behind MetaSD is its applicability to real-world scenarios where multiple heterogeneous drafters are available, but their training data or performance characteristics are unclear. In such cases, our approach provides a safe and reliable choice, dynamically selecting the best drafter based on real-time feedback.\n\n- Additionally, as noted in our paper, the framework can potentially be extended to higher-level optimization tasks. For instance, instead of focusing solely on drafter selection, the framework could be adapted to dynamically choose the optimal speculative decoding algorithm itself. We believe this versatility represents a promising direction for future work.\n\n**In the camera-ready version, we will include these findings and further clarify the advantages and limitations of MetaSD under different batch scenarios.** Thank you for highlighting this aspect, which allowed us to explore and communicate the broader implications of our framework.\n\nLet us know if there are further questions or areas where we can provide additional clarity.\"}", "{\"title\": \"Responses to Reviewer v7AR (2/2)\", \"comment\": \"# Q1. The performance of EXP3\n\nWhile the UCB algorithm achieves an optimal instance-dependent regret bound under the stationarity assumption, the EXP3 algorithm explores suboptimal arms more aggressively. 
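To make the contrast between the two exploration styles concrete, the following sketch simulates UCB1 and EXP3 on a toy stationary bandit. The arm means (standing in for per-drafter acceptance rates), the horizon, and EXP3's `gamma` are illustrative assumptions for this sketch, not the actual MetaSD reward setup:

```python
import math
import random

def ucb1(means, horizon, rng):
    """UCB1: try each arm once, then pick argmax of empirical mean + exploration bonus."""
    k = len(means)
    n = [0] * k            # pull counts
    s = [0.0] * k          # reward sums
    best = max(range(k), key=lambda i: means[i])
    best_pulls = 0
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1      # initialization: pull every arm once
        else:
            a = max(range(k),
                    key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = 1.0 if rng.random() < means[a] else 0.0   # Bernoulli reward
        n[a] += 1
        s[a] += r
        best_pulls += (a == best)
    return best_pulls / horizon

def exp3(means, horizon, rng, gamma=0.1):
    """EXP3: exponential weights mixed with uniform exploration at rate gamma."""
    k = len(means)
    w = [1.0] * k
    best = max(range(k), key=lambda i: means[i])
    best_pulls = 0
    for _ in range(horizon):
        total = sum(w)
        p = [(1 - gamma) * w[i] / total + gamma / k for i in range(k)]
        a = rng.choices(range(k), weights=p)[0]
        r = 1.0 if rng.random() < means[a] else 0.0
        w[a] *= math.exp(gamma * r / (p[a] * k))      # importance-weighted update
        m = max(w)
        w = [x / m for x in w]                        # renormalize for numerical stability
        best_pulls += (a == best)
    return best_pulls / horizon

# Hypothetical per-drafter acceptance rates for three "drafters" (illustrative only).
means = [0.7, 0.5, 0.4]
print("UCB1 best-arm ratio:", ucb1(means, 2000, random.Random(0)))
print("EXP3 best-arm ratio:", exp3(means, 2000, random.Random(0)))
```

On a stationary instance like this, UCB1 typically concentrates on the best arm more quickly, which mirrors the best-arm-ratio comparison discussed here.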
We also observed that the EXP3 algorithm picks suboptimal drafters more often than the UCB algorithm, as our ablation study on the best-arm ratio (`Section 4.3`) shows.\n\n# Q2. Additional baseline for mixed dataset\n\nWe conducted additional experiments with a One-size-Fits-All (OFA) drafter for both black-box and white-box settings. These OFA drafters were trained on the mixed datasets spanning all tasks, ensuring a direct comparison to the specialized-drafter-based MetaSD framework. **The results in `Table 3` and `Table 4` of our revised manuscript show that while the OFA drafter performs well, our MetaSD framework outperforms the OFA drafter across most tasks, demonstrating the strength of our method.** The OFA Eagle drafter performs relatively well. **However, it is still outperformed by MetaSD (i.e., MetaEagle-UCB)**. \n\n# Q3. Scaling law of the task numbers of MetaSD\n\nWhile our experiments focus on five tasks, we selected them to cover orthogonal areas (e.g., `code`, `translation`, `QA`, `summarization`, and `math`) to ensure diverse task representation. We acknowledge that five tasks may not fully capture the broader task spectrum. **However, this limitation is not unique to our framework but also affects single-drafter approaches.**\nIn response to your request, we conducted out-of-domain experiments with Alpaca-Finance and RAG datasets in `Table 9` of `Appendix F.8`. `Table 9` demonstrates that even with only five specialized drafters, MetaSD consistently outperforms single-drafter approaches, including OFA drafters, under this scenario.\n\n**Reference** \n\n[1] M. Yin et al., A Theoretical Perspective for Speculative Decoding Algorithm. NeurIPS 2024.\n\n[2] Leviathan et al., Fast Inference from Transformers via Speculative Decoding. ICML 2023.\"}", "{\"title\": \"Responses to Reviewer v7AR\", \"comment\": \"Thank you for your follow-up comment. 
**Respectfully, we believe there is a `fundamental misunderstanding` regarding the focus and intent of our work.**\\n\\n## Respectfully Addressing the Concerns\\n\\n1. **Research Focus** \\n > Our work **does not advocate for training multiple specialized drafters instead of OFA drafters.** Rather, it addresses a **critical and practical challenge**: how to effectively utilize pre-existing **heterogeneous specialized drafters** dynamically. This is a practical scenario as specialized drafters, trained on diverse datasets, are becoming increasingly available. **Routing tasks dynamically among these drafters** is the focus of our research. \\n> We strongly recommend revisiting the **Motivation Section** and **Further Motivation Section in the Appendix**, where we have explicitly detailed this objective, **please**.\\n\\n2. **On MAB\\u2019s Complexity** \\n> We respectfully disagree with the notion that MAB is a complex algorithm. MAB is one of the **simplest and most traditional online learning approaches**, widely used in practical domains like **recommendation systems, finance, and medical systems** for its **adaptability and low computational overhead**. These strengths make it an ideal choice for speculative decoding in dynamic environments.\\n\\n3. **Role of OFA in Routing** \\n> While OFA drafters are highly versatile, they are not guaranteed to perform optimally across all tasks. Notably, in our framework, OFA itself can be a **key component in the routing process, serving as a fallback drafter to handle tasks where specialized drafters may perform suboptimally.** This ensures robustness in scenarios where potentially low performance could arise. However, the intent of our paper is not to promote a single OFA drafter for all tasks, but to explore how **pre-existing multiple (specialized) heterogeneous drafters [1], including OFA,** can be used together dynamically to achieve better overall performance. 
\\n> The comparison with OFA Eagle in our experiments serves to **evaluate its effectiveness relative to MetaSD**, not to suggest that OFA should be excluded. Rather, OFA is complementary and could play a vital role within a routing framework to ensure robustness.\\n\\n- [1] Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters, Yi et al., EMNLP 2024.\\n\\n4. **Throughput and Real-World Applicability** \\n> We have addressed throughput concerns in our experiments. In single-batch scenarios, **MetaSD-UCB consistently outperforms OFA Eagle**, particularly in **perturbed prompt settings**. In heterogeneous batch throughput, MetaSD-UCB remains competitive due to its ability to utilize task-specific drafters dynamically. This adaptability is essential for handling real-world, diverse queries effectively. \\n> Moreover, in multilingual translation or complex tasks like **MMLU-COT** and **Physics QA**, our framework dynamically leverages specialized drafters for specific tasks while complementing them with OFA as needed. **Routing ensures that each task benefits from the most appropriate drafter, whether specialized or OFA.**\\n- **`We respectfully request not to frame throughput as the sole determining factor for evaluating real-world applicability.`** In practice, **single-batch inference** and **homogeneous batching** are commonly used in industry applications, especially for scenarios like low-latency, user-specific generation. 
\\n- **`Additionally, if throughput is always the primary criterion, we would like to ask whether prior speculative decoding (spec-dec) papers\\u2014many of which are known to have lower throughput [2]\\u2014should also be discounted based on this metric alone.`** We believe such an approach overlooks the broader contributions and utility of speculative decoding frameworks, including improvements in latency, task adaptability, and system-level design.\\n\\n[2] Fast Inference from Transformers via Speculative Decoding, Leviathan et al.\\n\\n5. **The Core Contribution** \\n> Our framework is a **generic and scalable solution** to the problem of **dynamic task routing among heterogeneous drafters.** It is not meant to replace OFA but to complement scenarios where pre-trained heterogeneous drafters exist\\u2014a situation that is increasingly common. This is a **system-level problem** that goes beyond a single-drafter approach and requires effective, adaptive solutions.\\n\\n## Final Thoughts\\n\\nRespectfully, we believe the comments focus on a preference for OFA without fully engaging with the central problem our paper seeks to solve. OFA\\u2019s inherent limitations in non-stationary, multilingual, and domain-diverse settings underscore the importance of **dynamic task routing.** Furthermore, OFA itself can be effectively utilized within our framework to **guard against potentially low performance, reinforcing the versatility of MetaSD.**\\n\\nWe welcome further constructive discussion and hope this response clarifies the contributions and implications of our work.\\n\\n-Authors\"}", "{\"title\": \"Response to Reviewer Tyx3\", \"comment\": \"Thank you for elaborating on your concerns. We would like to clarify and address the points raised.\\n\\n---\\n# 1. On Real-World Use Cases and Prompt Variations\\n\\nWe want to emphasize that the experiments with perturbed prompts were conducted by varying the prompts for **every query**, not just using a single variation. 
**The example provided in the response was one of many variations used.** The results reflect this setup and demonstrate that the performance degradation affects all methods, not just MetaSD. **This highlights a broader challenge that is not unique to our approach (Most methods face this challenge).**\\n\\n---\\n\\n# 2. On the Practical Value of MetaSD\\n\\nWhile it is true that MetaSD-UCB does not outperform individual specialized draft models in every case, its strength lies in its **robustness across diverse scenarios**, including in-domain, out-of-domain, and perturbed settings. These scenarios reflect the complexity of real-world tasks, where heterogeneous drafters are often pre-trained on unknown data or lack clear task-specific boundaries.\\n\\nAlthough MetaSD does not consistently outperform specialized draft models on their respective tasks, it is highly robust across diverse tasks. On average, MetaSD achieves performance comparable to specialized models when evaluated across a mix of tasks. **For context, we analyzed MetaSD's results by comparing them to a simulated classification-based approach, where the classifier is assumed to have `zero inference time`, and found that MetaSD effectively achieves:**\\n\\n- **87%** of the classification accuracy of specialized drafters in **MetaSps** settings. \\n- **90%** of the classification accuracy in **MetaEagle** settings. \\n\\nThis demonstrates that MetaSD delivers near-specialized performance from the perspective of classification, without relying on pre-trained parameters (already working as a good classification predictor). Even without offline training or fine-tuning, MetaSD consistently provides strong, adaptive performance across diverse tasks. Moreover, this robustness extends to **out-of-domain** and **perturbed scenarios**, which often present `significant challenges for classification-based or static systems`. 
Along another line, MetaSD is also supported by multiple theorems, which is a substantial difference from classification-based systems.\n\nAdditionally, MetaSD\u2019s reliance on the **multi-armed bandit (MAB)** algorithm\u2014without any learnable parameters\u2014makes it highly practical and computationally efficient for `real-world on-the-fly applications`.\n\n---\n\n# 3. On Classification-Based Approaches\nWe appreciate your suggestion regarding classification-based approaches and agree that they could serve as a useful baseline. However, as noted, classification models rely on pre-trained embeddings and are subject to classification errors. This introduces challenges, particularly in: \n- **Out-of-Domain Scenarios**: Classification models often generalize poorly to tasks outside their training domain. \n- **Dynamic Adaptation**: Classification requires offline pre-training and cannot adapt to real-time token-level feedback (on-the-fly), unlike MAB approaches. \n\nFor the camera-ready version, we plan to evaluate a simple classification-based approach using **Google-BERT-Base-Uncased** embeddings with a linear layer to predict the optimal drafter for each query. This experiment will provide an additional baseline for comparison.\n\n---\n\n# 4. Broader Motivation and Contributions\nAs highlighted in our revised manuscript, one of the key motivations of MetaSD is to address scenarios involving **heterogeneous drafters** where training data and performance characteristics are unknown. 
MetaSD provides a robust and simple alternative for on-the-fly decision-making, avoiding the need for pre-trained models or extensive offline fine-tuning.\\n\\nWhile classification-based methods may serve as a useful baseline, their inability to dynamically adapt makes MetaSD a practical and effective solution, particularly in scenarios requiring robust performance across diverse tasks.\"}", "{\"summary\": \"This paper explored the problem of specializing and choosing draft models to accelerate speculative decoding. The task of routing a proper draft model can be seen as a multi-armed bandit problem; the author developed and experimented MetaSD-UCB method based on the existing UCB algorithm to address the problem.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The idea of applying the UCB algorithm to speculative decoding is novel.\", \"The paper is overall well-presented.\"], \"weaknesses\": [\"The evaluations do not accurately reflect real-world use cases. The primary goal of MetaSD-UCB is to route inputs to the appropriate draft model. However, during the experiments, the same prompt template is applied across all instances within a dataset, making it trivial to differentiate between datasets. In real-world scenarios, prompts can vary significantly from one query to another, which makes the multi-armed bandit problem much more complex than what is represented in the evaluations.\", \"The effectiveness of the multi-armed bandit approach is limited. Given that the datasets used in the experiments are distinctly different and well-defined, a rule-based system or a simpler machine learning model could easily achieve high accuracy in selecting the appropriate draft model, often with minimal latency compared to using the draft model itself. 
In contrast, the proposed method requires executing a speculative decoding step with each draft model, which increases the number of tokens processed by the target model fivefold during the initial step\\u2014an expensive operation, particularly with tree attention. Despite this, the average accuracy achieved in the experiments for model selection is below 80 percent.\", \"There is a lack of experiments assessing throughput. Given that the proposed method utilizes an ensemble of draft models, it is more likely to be deployed in a distributed system rather than on a personal computer or mobile device. As a result, throughput should be prioritized as a key evaluation metric over latency. Even for latency evaluation, a batch size of 1 is not practical.\", \"Problem with machine translation datasets. Vicuna, as well as LLaMA 2, are not multilingual language models and are not designed for machine translation tasks involving languages such as Chinese and Japanese.\", \"The training datasets are the same as the evaluation dataset.\", \"The vertical spacing on the last page is very small compared to other pages.\"], \"questions\": \"1. Can you include an evaluation of Eagle without specialization on the same hardware?\\n2. Can you fine-tune a draft model on all the datasets experimented to show if it is necessary to have a specialized model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces MetaSD, a novel framework that enhances Speculative Decoding (SD) by integrating multiple specialized drafters. Utilizing a multi-armed bandit sampling technique, it dynamically selects the most effective drafter for each inference step. 
The authors conducted experiments comparing their approach against both black-box and white-box SD methods, demonstrating that MetaSD significantly outperforms traditional single-drafter techniques, such as Lookahead and Eagle, in terms of efficiency and effectiveness.\\n\\nThe reviewers have expressed significant concerns regarding the correctness of the theorem presented in the paper, as well as the practical applications of the proposed method in real-world scenarios. The authors are encouraged to provide additional clarification on these points.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers have expressed significant concerns regarding the correctness of the theorem presented in the paper, as well as the practical applications of the proposed method in real-world scenarios. The authors are encouraged to provide additional clarification on these points.\"}", "{\"title\": \"Responses to Comments by Reviewer V45K (1/2)\", \"comment\": \"# Follow-Up 1. Comparisons with Concurrent Work\\n\\nWe appreciate the opportunity to draw connections between MetaSD and concurrent advancements in SD! Both `SWIFT` and `Online Speculative Decoding (OSD)` methods address distinct yet complementary aspects of SD.\\n\\n- **SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration** introduces a dynamic optimization framework that employs layer-skipping strategies to reduce latency during decoding. SWIFT explores the combinatorial space of layer-skip indices, employing Bayesian Optimization to efficiently search for optimal configurations. While SWIFT\\u2019s approach dynamically adjusts internal model operations, MetaSD focuses on selecting among heterogeneous external drafters to optimize decoding efficiency. Both methods share a commitment to dynamic adaptation but target different optimization levels. 
While generally superior, MetaSD is even effective in scenarios where multiple pre-trained or publicly available drafters are accessible, allowing it to provide robust, adaptive solutions for diverse or unknown drafter configurations.\\n\\n- **Online Speculative Decoding (OSD)** addresses challenges associated with the static nature of traditional SD by introducing an online adaptation mechanism. OSD periodically updates draft models during deployment using knowledge distillation, improving token acceptance rates and inference speed by dynamically aligning draft models with evolving user query distributions. While OSD excels in refining a single draft model for domain-specific optimization, MetaSD is tailored for multi-drafter scenarios, enabling effective selection among diverse specialized or heterogeneous drafters. Additionally, OSD relies on continuous updates to a smaller draft model, whereas MetaSD\\u2019s plug-and-play paradigm supports real-time decisions without the need for such updates.\\n\\n - While MetaSD and OSD differ in their primary focus\\u2014drafter selection versus drafter adaptation\\u2014they share complementary strengths. In scenarios where drafters must be adapted dynamically, integrating OSD\\u2019s update mechanism with MetaSD\\u2019s selection strategy could further enhance robustness. Conversely, MetaSD\\u2019s capability to handle heterogeneous black-box drafters offers a practical solution for real-world settings where drafter customization or retraining is infeasible.\\n\\n- In conclusion, MetaSD, SWIFT, and OSD each address unique challenges in SD. By advancing distinct aspects of SD, these methods collectively push the boundaries of what speculative decoding can achieve, offering promising directions for future research and integration.\\n\\nWe will incorporate these discussions into our revised manuscript!\\n\\n# Follow-up 2. Computation Overhead\\n\\nThank you for pointing this out. 
We agree that a detailed analysis of the computational overhead introduced by the Bandit mechanism is essential for understanding its practical implications. While we\\u2019ve already put a discussion on it in `Section 5` and `Appendix I.1`, in the revised manuscript, we further include a discussion in the `Appendix I.2` to elaborate on this topic. \\n\\n# Follow-up 3. W2. Cosine Similarity\\n\\nThank you for providing an alternative perspective on computing similarity. We agree that cosine similarity can indeed be calculated directly on the input representations after the pre-filling stage of the target LLM, bypassing the need for an additional encoder-only network. This is a valid approach.\\n\\nHowever, we would like to highlight the potential challenges in real-world scenarios where tasks exhibit highly similar representations. In such cases, relying solely on input-level similarity can lead to misclassification and suboptimal drafter selection. For example, tasks like summarization and QA, or translation between similar languages (e.g., French to English vs. Spanish to English), may produce embedding representations that are difficult to differentiate accurately using cosine similarity alone.\\n\\nThis challenge underscores the importance of MetaSD\\u2019s token-level feedback mechanism, which dynamically adapts drafter selection based on actual decoding performance rather than static similarity measures. We believe this adaptability is crucial for robust performance, especially in mixed-domain or ambiguous task settings.\\n\\n**We appreciate your acknowledgment of this limitation in the naive strategy and will clarify this discussion in the camera-ready manuscript to incorporate your valuable input.** Thank you again for raising this point!\"}", "{\"title\": \"Responses to Reviewer V45K (2/3)\", \"comment\": \"# W2. 
Lack of out-of-domain (OOD) experiments\\n\\nWe acknowledge the reviewer\\u2019s concern and agree that out-of-domain experiments can be crucial for demonstrating the strength of our MAB framework. To address this, we conducted additional OOD experiments using Alpaca-Finance and RAG datasets, which lie outside the domains of the specialized drafters used in our main experiments. The results are summarized in `Table9` of `Appendix F.8`.:\\n\\n### **Key Observations**\\n1. **Superior Adaptability**: MetaSD outperforms both OFA drafters and individual specialized drafters in these out-of-domain settings.\\n\\n2. **Limitations of Similarity-Based Selection**:\\n - **Increased Computational Cost**: Computing similarity between sentence embeddings requires passing the context through an encoder-only network to obtain the embeddings. This additional cost significantly increases inference time.\\n - **Risk of Misclassification**: Even with embeddings, achieving high classification accuracy for selecting the correct drafter remains challenging. Errors in classification can lead to suboptimal drafter selection, further degrading performance. For instance, the difference in performance between Math drafter and UCB drafter becomes marginal at higher input lengths, as shown below:\\n \\n | **Tokens Per Second** | **Math Drafter** | **OFA Drafter** | **MetaSD-UCB** |\\n |-----------------------|------------------|-----------------|----------------|\\n | **128 Tokens** | 56.764 | 39.462 | **52.329** |\\n | **Inference Time (s)**| 2.255 | 3.244 | **2.446** |\\n\\n# W3. Theoretical clarity\\n\\nWe include the proof of the relation between the expectations of the two rewards at the end of `Appendix G.2`. Our result on $\\\\mathbb{E}[N_{acc}(i,t)]$ corresponds to equation (1) in [1]. 
The apparent difference arises because our expectation pertains to the number of accepted tokens in each round, whereas equation (1) in [1] refers to the total number of generated tokens, which is always one more than the number of accepted tokens. For completeness, we have included these clarifications in our revised manuscript.\n\n# W4. Switching costs evaluation\n\nWe agree that reporting the switching frequency is critical for evaluating the computational efficiency of our framework. Below, we provide the observed switching frequencies during inference for the experiments in `Tables 3-4`.\n\n## Switching Frequency Results\n\nThe average number of drafter switches during inference is as follows:\n\n| **Task** | **Average Switching Times (SpS, Noisy)** | **Average Switching Times (Eagle, Noisy)** |\n|-----------------|-----------------------------------------------|------------------------------------------------|\n| **Code** | 1.82 | 1.70 |\n| **Translation**| 1.55 | 1.15 |\n| **Sum** | 2.25 | 2.11 |\n| **QA** | 1.97 | 2.36 |\n| **Math** | 2.19 | 2.37 |\n\n## Key Observations\n\n1. Extremely Low Switching Frequency: MetaSD switches drafters only 1\u20132.5 times per task over 100+ SD rounds, minimizing overhead. Future work may explore higher switching rates for non-stationary generation tasks.\n\n2. Negligible KV Cache Cost: Low switching rates make KV cache recomputation costs negligible, even for tasks with long contexts, ensuring competitive inference speeds with single-drafter approaches.\"}", "{\"comment\": \"What is your batch size? Can you plot the throughput gain with increased batch size as a figure?\"}", "{\"title\": \"Responses to Reviewer V45K (1/3)\", \"comment\": \"Thank you for your careful review of our paper and your insightful and constructive comments. We have addressed your comments and updated our manuscript accordingly. Please find our detailed answers below.\n\n# W1. 
Extra computation overhead of MetaSD\n\nWe would like to address each point in detail to clarify potential misunderstandings regarding the computational implications of our approach.\n\n## Training Overhead\nWhile specialization may require additional training efforts compared to an OFA (One-size-Fits-All) drafter, we would like to emphasize that our approach can handle real-world scenarios where heterogeneous drafters already exist in public repositories. Our framework focuses on optimizing the utilization of such heterogeneous drafters, ensuring that the most suitable drafter is selected dynamically. This shifts the problem from retraining models to developing an effective strategy for utilizing pre-existing resources. Thus, while training specialized drafters may involve additional costs in some cases, the broader applicability and versatility of MetaSD provide substantial practical value. **In addition, concerns about the cost of training drafters are not limited to our work but apply to all research in this area.**\n\n## Inference Memory-Bandwidth Efficiency\nWe would like to clarify a potential misunderstanding regarding inference memory bandwidth requirements. Although MetaSD employs multiple drafters, **this does not increase the memory bandwidth requirements during inference.** The drafters\u2019 weights are preloaded into DRAM, ensuring no additional VRAM bandwidth is consumed compared to a single-drafter setup. Memory bandwidth refers to the data movement between VRAM and compute cores, which remains identical regardless of the number of drafters, as only one drafter operates on the VRAM-resident data at a time. 
The following table provides a comparison:\\n\\n| **Metric** | **Single-Drafter Method** | **MetaSD** |\\n|------------------------|---------------------------|-----------------|\\n| DRAM Memory Usage | 17 GB | 19 GB (+2 GB) |\\n| VRAM Bandwidth | Identical | Identical |\\n\\nBy ensuring that only the active drafter interacts with the VRAM, MetaSD does not increase VRAM bandwidth demands, maintaining parity with single-drafter approaches during inference.\\n\\n## Serving Complexity\\nUsing multiple drafters in MetaSD does not inherently increase serving complexity. Modern distributed systems already employ model parallelism techniques to allocate workloads effectively across multiple GPUs. In MetaSD, the drafters are evenly distributed across available GPUs, with each GPU independently handling its assigned drafter without added coordination costs. This design ensures that:\\n\\n- **Load Balancing**: Drafters are distributed across GPUs based on their assigned tasks, maintaining equivalent complexity to single-drafter systems.\\n\\n- **Minimal Communication Overhead**: MetaSD requires no additional inter-GPU communication beyond standard model parallelism setups.\\n\\nConsequently, the serving complexity of MetaSD aligns with that of traditional parallelized single-drafter systems.\\n\\n## Justification of Overhead\\nThe modest increase in DRAM memory usage (+2 GB) and marginal training cost for specialized drafters is justified by the significant performance gains achieved through adaptive optimization. MetaSD dynamically selects the most suitable drafter for each task, consistently outperforming single-drafter methods across a wide range of scenarios, as highlighted in our experimental results.\\n\\nFurthermore, MetaSD's adaptability addresses an important real-world challenge: utilizing publicly available, pre-trained heterogeneous drafters effectively. 
By offering a generalizable strategy for optimizing these resources, MetaSD provides practical value beyond specialized retraining, supporting diverse and evolving task requirements.\"}", "{\"comment\": \"Heterogeneous Batch confirms my concern.\"}", "{\"title\": \"Overall response\", \"comment\": \"We sincerely appreciate all the reviewers for their insightful and constructive feedback on our manuscript. We have responded to the individual comments from the reviews below, and believe that we have successfully responded to most of them. We have included the discussion and results of the suggested experiments in the revision. Here we briefly summarize `(1)` our core contributions, `(2)` strengths, `(3)` Empirical updates, and `(4)` updates on theories we have made to the revision.\\n\\n# Core Contributions of Our Work \\n\\n1. **Novel Framework** : We introduce MetaSD, a simple yet efficient framework which is the first work to tackle the problem of using multiple specialized drafters for Speculative Decoding (SD).\\n\\n2. **Problem Formulation** : We formulate MetaSD as a Multi-Armed Bandit (MAB) problem and design a novel reward and regret objective which connects theories to the actual performance of the algorithm. \\n\\n3. **Theoretical Guarantee** : By establishing an upper bound on the regret (Theorem 2), we prove that our algorithm achieves a `log-linear upper bound` which guarantees the faster LLM inference time.\\n\\n4. **Experimental Results** : We demonstrate that our framework achieves superior speedup ratio in various tasks (`code generation`, `question answering`, `math reasoning`, `multilingual translation`, `RAG`, `Finance`). In most cases, our approach results in (a) `~60% speed-up` relative to SOTA SD and `~370% speed-up` relative to vanilla decoding on `7B` LLM.\\n\\n\\n# Summary of Strengths Highlighted by Reviewers \\n\\n1. 
**Novelty and Effectiveness of Method** : `Reviewer v7AR`, `Reviewer Tyx3`, `Reviewer CxuG`, `Reviewer V45K` agreed that our framework MetaSD is a novel and effective approach for using multiple drafters in speculative decoding.\\n\\n2. **Writing and Presentation** : `Reviewer v7AR`, `Reviewer Tyx3`, `Reviewer CxuG` noted that our writing and presentation are clear and concise.\\n\\n3. **Comprehensive experiments**: `Reviewer v7AR`, `Reviewer V45K`, `Reviewer CxuG` concurred that our experimental results are solid under different types of scenarios, which validates our MetaSD framework.\\n\\n4. **Potential insights to the academic community**: `Reviewer V45K` recognized that investigating multiple-drafter approaches might lead to meaningful findings. **Our work is the first to tackle the problem of using multiple drafters in SD.**\\n\\n# Updates of experimental results during Rebuttal\\n\\n1. `Appendix F.8`: Evaluations on out-of-domain datasets including `finance` and `RAG`.\\n\\n2. `Appendix F.9`: Evaluations on perturbed-prompt scenarios.\\n\\n3. `Table3` and `Table4` in `Section 4`: Experiments with One-size Fits All (OFA) drafters, which are trained on a mixed dataset across all tasks.\\n\\n4. `Appendix F.10`: Throughput experiment.\\n\\n5. Responses for `Reviewer V45K`: Extra computation overhead.\\n\\n6. `Appendix F.11`: MetaEagle-UCB with Efficient KV Cache Strategies [A] \\n\\n7. Responses for `Reviewer V45K`: Switching times during MetaSD.\\n\\n# Updates of theoretical parts during Rebuttal \\n\\n1. `Appendix G.6`: Clarifying our assumption on the acceptance rate with a comparison to previous literature.\\n\\n2. `Appendix G.7`: Generalized analysis of the regret upper bound considering the randomness of the target sequence length $B$.\\n\\n3. 
`Responses for Reviewer v7AR`: Validity and robustness of our theoretical framework under our formulation.\\n\\n# Further research motivation\", \"we_want_to_emphasize_that_metasd_addresses_another_line_of_practical_challenge\": \"managing diverse, heterogeneous drafters from open-sourced systems (e.g. HuggingFace). These drafters, pre-trained with varying objectives and frequently lacking detailed training documentation, pose significant obstacles to deployment frameworks that assume uniformity or rely on static selection strategies. By providing our adaptive strategy to optimally utilize such resources, MetaSD ensures robust performance across diverse tasks and settings, contributing meaningfully to both theoretical advancements and real-world applications.\\n\\n--------\\n\\nWe believe these additions and clarifications address the reviewers' concerns comprehensively and strengthen our manuscript. The changed parts in our revised manuscript are highlighted in `magenta-colored` text. Our manuscript is updated on `Nov 20, AOE time`. We look forward to your favorable consideration.\\n\\n\\n[A] EFFICIENT STREAMING LANGUAGE MODELS WITH ATTENTION SINKS, ICLR 2024.\"}", "{\"comment\": \"I am wondering how you conduct the throughput experiment. Do you test over a dataset mixture of different topics? If not, I think this experiment is totally wrong.\"}", "{\"title\": \"Response to Official Comment by Reviewer v7AR\", \"comment\": \"Thank you for your question. To clarify, the throughput experiments were conducted on a `mixed dataset` comprising tasks from diverse domains, including **Code Generation, Math, Question Answering (QA), Translation, and Summarization**.\\n\\nThe throughput measurement methodology strictly follows the settings described in the original **Eagle** paper. 
This was done intentionally to enable direct comparison, highlighting the improvements achieved by our MetaSD framework over existing approaches.\\n\\nAdditionally, we would like to emphasize that our method does not critically impact the **roofline model** or memory bandwidth (as we mentioned above). MetaSD efficiently manages memory usage, ensuring that there are no significant increases in memory movement or bandwidth demands. This efficiency contributes to the strong throughput performance observed in our results, and we believe this further supports the validity of our framework.\\n\\nWe hope this explanation addresses your concerns and provides additional context for the results. Please let us know if you have further questions! \\n\\n-Authors\"}", "{\"title\": \"Responses to Reviewer V45K\", \"comment\": \"Dear Reviewer V45K\\n\\nWe sincerely appreciate the time and effort you dedicated to reviewing our paper and your positive feedback on our work.\"}", "{\"summary\": \"This paper presents MetaSD, an approach that integrates multiple specialized drafters into Speculative Decoding (SD) and employs multi-armed bandit sampling to dynamically select the optimal drafter. The authors conducted experiments across black-box and white-box SD methods, validating the superiority of the proposed method over traditional single-drafter approaches such as Lookahead and Eagle.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The exploration of this work towards Speculative Decoding with multiple specialized drafters is meaningful and offers potential insights for the academic community. The standard single-drafter approach, typically generalized for natural language tasks, may not be optimal for domain-specific applications, such as translation. 
This work makes investigations into specialized drafters across various domains such as QA, Translation, Math, and Summarization, which demonstrates interesting and meaningful findings.\\n2. The authors conduct comprehensive experiments with both black-box and white-box SD methods, validating the effectiveness of MetaSD across diverse input domains. MetaSD achieves a promising 1.6x-3.7x speedup across these tested scenarios. The experimental settings are illustrated in detail.\\n3. The paper also provides an in-depth analysis of various impacts of MetaSD, addressing factors such as the switching cost of drafters, memory bandwidth bound, and KV cache re-calculation. These explanations effectively address some of my initial concerns.\", \"weaknesses\": \"1. **Extra computation overhead of MetaSD:** Unlike single-drafter SD methods such as Eagle, MetaSD leverages multiple specialized drafters for enhanced adaptation across various domains. While this approach shows promise, it introduces additional training and inference overhead that scales linearly with the number of drafters. Although the authors discuss certain aspects, like memory bandwidth limitations, further comparisons and quantitative data would clarify the computational overhead introduced by MetaSD. Specific metrics, such as MetaSD's training carbon footprint and memory bandwidth requirements during inference, should be included. Additionally, using multiple drafters increases serving complexity, particularly for multi-GPU environments.\\n2. **Lack of out-of-domain experiments**: In the main results, the authors utilize five specialized drafters that are fine-tuned on Code, Translation, Summarization, QA, and Math. Then, MetaSD is evaluated on these five tasks, which can be regarded as in-domain tasks. 
For these tasks, the motivation of using Multi-armed bandit (MAB) is not convincing since drafters can be directly selected by estimating the similarity between the training and test data before inference. The strength of MAB would be more evident in out-of-domain scenarios, where selecting the optimal drafter is less straightforward. However, this experimental setting is missing in the work.\\n3. **Theoretical clarity**: Some definitions and theoretical statements in the paper lack clarity. For instance, in Line 214, the proof for the equation $\\\\mathbb{E}\\\\left[r_{i, t}^{BE}\\\\right]=\\\\frac{1-\\\\alpha_i^{N_{\\\\max }}}{N_{\\\\max }\\\\left(1-\\\\alpha_i\\\\right)} \\\\mathbb{E}\\\\left[r_{i, t}^{BD}\\\\right]$ is missing. Similarly, Equation (6) in Line 1288, where the authors state $\\\\mathbb{E}\\\\left[N_{acc}(i, t)\\\\right]=\\\\frac{\\\\alpha_i-\\\\alpha_i^{N_{\\\\max }+1}}{1-\\\\alpha_i}$, appears to deviate from Equation (1) in [1]. Detailed explanations of these theoretical elements are necessary to prevent misinterpretation.\\n4. **Switching costs evaluation**: MetaSD needs to switch drafters during inference for optimal performance, which adds additional costs, such as the re-computation of drafting KV cache. To mitigate this, the authors propose Algorithm 4 to decrease the switching frequency, which first eliminates sub-optimal drafters as quickly as possible and then exclusively selects this drafter for the remaining rounds. Considering this, the total switching times of MetaSD during inference should be reported to offer readers an overall understanding of its extra KV re-computation cost. For example, the switching times between drafters in Table 3 and Table 4.\\n\\n[1] Fast Inference from Transformers via Speculative Decoding. Leviathan et.al. ICML 2023.\", \"questions\": [\"Most of my primary concerns are outlined in the weaknesses section above. 
Here are some additional, minor concerns:\", \"In Figure 1, the authors emphasize the use of the KV cache across different drafters. Could the authors clarify if they propose an efficient strategy to avoid re-calculating the KV cache or if they simply re-compute the KV cache for previous contexts upon drafter switching?\", \"The results of the SpS baseline should be included in Table 3, and the results of Eagle should be reported in Table 4.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the reply! I still have the following concerns regarding the paper in its current form.\\n\\n**The correctness of the theoretical results**\\n\\nFirst, the setting of Theorem 2 does not contribute to the practical analysis. Theorem 2 assumes that the base LLM always outputs a token sequence with length $B$, which is highly unrelated to the practical applications. It is encouraged to discuss why we need to study this setting. Given the generalized version Theorem 6, it is better to remove Theorem 2 in the main paper.\\n\\nSecond, Theorem 6 is **not correct**. As discussed, the authors only assume i.i.d. for the distribution without any conditioning, and it does not imply anything about the posterior distribution. The statement ``Suppose a target mode (with the same rule in your example) generates 1110. Then the acceptance rate of a drafter $i$ conditioned on 1, 11, 111, 1110 follows from the same distribution $\\\\nu_i$ with mean $\\\\alpha_i$.\\u2019\\u2019 is irrelevant to our discussion. We are **not** discussing anything about the independence between drafters, but the independence along the **decoding**, i.e., the **independence along the time**. Let us consider a even more simple problem. Each time decoding generates $1$ or $2$ tokens in an **i.i.d.** manner (Please keep in mind this is prior, i.e., without any conditioning). 
**Conditioned on the fact that 2 decoding procedures generate 4 tokens**, we know that both decoding procedures generate $2$ tokens. In fact, **the conditioning itself breaks the i.i.d. structure.** Thus, Lemma 8 is **not correct**.\\n\\n**The relationship between the theory and experiments**\\n\\nThe theoretical analysis is **unrelated** to the experiments. As discussed, the whole theoretical analysis is all about stochastic decoding, but the experiments are all conducted with greedy decoding. In greedy decoding, **all the assumptions, i.e., the i.i.d. assumption, are wrong**, since there is no stochasticity anymore. I cannot see how the current analysis can be generalized to the greedy decoding method. According to my understanding, the i.i.d. assumption in the deterministic environment means that all the numbers are the same. Then the best regret is constant, pulling all arms once and selecting the best one. Please explain this.\\n\\n**The representativeness of experiments**\\n\\nThe throughput of Heterogeneous Batch and the OOD datasets.\"}", "{\"comment\": \"OFA Eagle achieves comparable performance to Bandit-Eagle while using less memory and without requiring the Bandit algorithm. 
I don't understand under what circumstances someone deploying an LLM would choose a more complex algorithm with unclear benefits, especially one that might even reduce throughput in real-world scenarios.\"}", "{\"title\": \"TL;DR - Summary of Our Work (Key Gist)\", \"comment\": [\"`Theoretical guarantee`: Provides a solid theoretical foundation for multi-armed bandit-based drafter selection in speculative decoding.\", \"`First of its kind`: The first work to address speculative decoding with multiple heterogeneous drafters, tackling real-world challenges of diverse task settings.\", \"`Extensible Design`: Easily integrates with other speculative decoding frameworks, offering compatibility and flexibility for future research.\", \"`Robustness`: Demonstrates consistent performance across diverse scenarios, including in-domain, out-of-domain, and perturbed prompts, outperforming prior work in single-batch settings.\", \"`Throughput limitation`: Acknowledges limitations in heterogeneous large-batch scenarios but remains highly effective for small batch sizes.\", \"`Novel analysis`: Introduces a new form of divergence metric analysis, shifting focus from block efficiency to token-level adaptation and drafter selection.\", \"This work lays the groundwork for further exploration in multi-drafter speculative decoding while addressing practical and theoretical challenges.\"]}", "{\"summary\": \"This paper introduces a simple framework, termed MetaSD, that incorporates multiple drafters into the speculative decoding process and uses multi-armed bandit sampling to dynamically allocate computational resources across various drafters, thereby improving overall generation performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea of this paper is neat.\\n\\nThe figures in this paper are clear and good. 
\\n\\nThe experiments are somehow satisfied because the datasets are randomly shuffled to create a non-stationary environment.\", \"weaknesses\": \"The definition of the acceptance rate is not clear and confusing. The authors state that \\u201cwe relax this assumption and consider a more general scenario where the acceptance rate for each token follows a stationary distribution with mean $\\\\alpha$\\u201d in Line 215. This means that the acceptance rate is in fact a **random variable**. However, in all the calculations in the manuscript, the authors just replace the random variable with its mean value.\\n\\nEven if we treat the acceptance rates as real numbers, the proof of Theorem 1 is not correct. In Lemma 4, it is not clear why the expectation of $r^{BD}$ is equal to the acceptance rate. To be specific, in the definition of the block divergence reward, the total variation is conditioned on the prefix $x^{1:l(t)+j}$. We note that $x^{l(t)+1:l(t)+j}$ is the random variables generated by the draft model, not the pre-trained model. When taking expectation, these random variables take the distributions of the **draft models**. However, Theorem 1 of [1] shows that the average acceptance is equal to the expectation of $r^{BD}$ when $x^{l(t)+1:l(t)+j}$ takes the distribution of the **pre-trained model**.\\n\\nThe formulation of the bandit problem is incomplete, and the statement and the proof of Theorem 2 is inappropriate. Concretely, the proposed bandit problem should be considered in the probability space induced by the randomness of the pre-trained models and the draft models. This implies that the so-called total budget $B$ is indeed a **random variable** (in fact a stopping time). Thus, Theorem 2 states the results conditioned on the budget $B$, and **all the proofs should be built on the posterior distribution given $B$**. 
However, the current proof does not consider this.\\n\\nWhile the paper emphasizes that the design of the BD reward is novel, it does not necessarily align with the goal: a lower BD regret cannot indicate that fewer rounds are needed. According to Lemma 6 in the appendix, the performance of an algorithm under different regret/reward definitions, i.e., BD and number of rounds, can be different. From this perspective, the theoretical upper bound guarantee of the proposed algorithm is diminished, as the drafter with a better-designed reward may not necessarily perform better in practice.\\nSome procedures in the proof are not correctly justified.\", \"line_1586\": \"the equality is wrong. $\\\\mathbb{E}[\\\\tau^u(\\\\pi^u, B)]$ can be much greater than $\\\\mathbb{E}[\\\\tau(\\\\pi^{i^\\\\star}, B)]$.\", \"line_1695\": \"this inequality is wrong. The confidence radius never shrinks as $t$ increases.\\n\\nIn Algorithm 2, the hyper-parameter $\\\\beta$ is chosen empirically according to Line 1631. However, $\\\\beta$ is not reflected in the regret bound in some Theorems (with some specified $\\\\beta$ in the others), as well as in the corresponding analyses.\\nIn the bandits literature, the mean gaps $\\\\{\\\\Delta_i\\\\}_{i=1}^K$ are critical, because they measure the hardness of the given problem instance. In this paper, some quantities involving the mean gaps are hidden in the constants (which are independent of $B$). While this may be acceptable for the asymptotic behavior where $B\\\\to\\\\infty$, they can be important in the finite $B$ case (which is always true in practice).
In addition, it would be better to consistently term $B$ as the \"target sequence length\", as \"budget\" refers to the time horizon in the bandits literature.\\n\\nOverall, the formulation of the bandits problem is incomplete and the application of the existing bandits algorithms to speculative decoding is straightforward, given the previous literature. While the paper argues that the proposed BD reward is novel, a justification of its property (e.g., expectation) is clearly missing. In addition, an algorithm with less BD regret does not indicate better performance (at least theoretically). This hinders the theoretical contribution of the paper.\\n\\n[1] M. Yin et al., A Theoretical Perspective for Speculative Decoding Algorithm\", \"questions\": \"1) Why is exp3 significantly worse than ucb, and even similar to the result of rand?\\n\\n2) In Tables 3 and 4, why do the authors not provide the original Eagle's result or train a general Eagle with all the mixed data? \\n\\n3) The 5 tasks are far fewer than in the real setting; the authors should study the scaling law of task numbers for MetaSD.\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional Responses to Reviewer v7AR (1/2)\", \"comment\": \"Thank you once again for your thorough review. In response to your follow-up comments, we have further revised and updated our manuscript accordingly.\\n\\n**W1. Explicit statement of i.i.d. assumption** \\n\\nThank you for the suggestion; in the revised manuscript, we explicitly state that $\\\\alpha_{i,t}$ are i.i.d. from a distribution $\\\\nu_i$ with mean $\\\\alpha_i$. \\n\\n**W2. Explicit statement of i.i.d. assumption in Theorem**\\n\\nIn the revised manuscript, we also directly state that $\\\\alpha_{i,t}$ are i.i.d. from a stationary distribution $\\\\nu_i$ in `Theorem 2` to properly reflect our i.i.d. assumption. 
Moreover, we will include a comparison of the assumptions in [1] and [2] in our main paragraph (`Section 3`) for further clarity.\\n\\n**W3. Stochasticity of the target sequence length B** \\n\\nWe appreciate the constructive feedback and further clarification. \\n\\nFirst, we would like to clarify that `Assumption 1` is about assuming acceptance rates $\\\\alpha_{i,t}$ are i.i.d. from the same distribution $\\\\nu_i$ for each instance of generation. \\nWhile this distribution $\\\\nu_i$ can vary across generations and depend on the target sequence length $B$, this does not impact our algorithm's performance under `Assumption 1` because we reset the bandit instance for each new generation. Suppose a target model (with the same rule in your example) generates 1110. Then the acceptance rate of a drafter $i$ conditioned on 1, 11, 111, 1110 follows the same distribution $\\\\nu_i$ with mean $\\\\alpha_i$. Thus, the objective of a bandit algorithm here is to find the optimal drafter and minimize the regret within this instance, and our analysis holds in this case. Next, suppose the target model generates 110 then EOS. Then, the acceptance rates of drafter $i$ for this instance also follow a stationary distribution with mean $\\\\alpha_{i'}$, which can be different from $\\\\alpha_i$; this does not affect our algorithm since the policy in each instance is independent of any previous results. In this sense, `Theorem 2` is not based on the assumption of a fixed $B$ in generation; rather, the regret is defined for each instance of generation. Consequently, our analysis of `Theorem 2` remains valid under `Assumption 1`.\\n\\nHowever, to further enrich our results, we take the reviewer\\u2019s advice and include `Theorem 6` for the general analysis of the expected regret, where the expectation is taken over the probability space induced by the target model. 
This should be built on an additional assumption that $\\\\alpha_i$ is independent of $B$ and that its conditional expectation given $B$ is the same for every $B$. We explicitly state this assumption before Theorem 6 to avoid any confusion with `Assumption 1` and `Theorem 2`. \\n\\nFor the last part, we correct our statement to \\\"Since drafter selection from the policy $\\\\pi$ is independent from $B$ under `Assumption 2`\\\".\\n\\n\\n**W4. Temperature sampling** \\n\\nThank you for raising this important point. \\n\\n`Assumption 1` can hold for every $T$ under our formulation, and the analysis holds in Greedy Decoding with `Assumption 1`. With this, our theoretical results include the case of $T=0$, greedy decoding. We include this in our manuscript to avoid any confusion. \\n\\nThe assumption is based on our empirical observation that the TV distance between two probability distributions can be an intrinsic measure of the closeness of the target model and a drafter, and it assumes the acceptance rates of a drafter are drawn i.i.d. from the same distribution in each instance. This simplification allows us to investigate theoretical guarantees and reward design when using a bandit algorithm in multi-draft SD.\\n\\nMoreover, in addition to `Table 7` where we showed that our algorithm still performs strongly with $T>0$, we will include further experimental results with sampling decoding in all of our main experiments in our final manuscript to connect the theory and the experiments more robustly.\"}", "{\"title\": \"Additional Responses to Reviewer v7AR (2/2)\", \"comment\": \"**W6. Comparison with ETC**\\n \\n We appreciate your constructive feedback and great intuition. \\n\\nIndeed, for a small $\\\\beta$, a regret upper bound including $\\\\beta$ does not hold anymore. 
We conducted additional experiments on the translation task using ETC with tuned exploration rounds, following the reviewer\\u2019s suggestion, and the results are as follows:\\n\\n| Speedup ratio | JAPANESE | RUSSIAN | GERMAN | FRENCH | CHINESE |\\n|----------------|-------|-------|-------|-------|-------|\\n| ETC (15) | 1.443 | 1.548 | 1.870 | 2.117 | 1.521 |\\n| UCB ($\\\\beta=0.1$) | 1.447 | 1.772 | 2.121 | 2.118 | 1.607 |\\n| UCB ($\\\\beta=0.01$) | 1.367 | 1.774 | 2.100 | 2.097 | 1.643 |\\n\\nIn the above result, ETC (15) uses a uniform exploration round of $15$, which we found performs best empirically, and exploration hyperparameters $\\\\beta = 0.1, 0.01$ are used for a fair comparison. While ETC achieves comparable speed-up performance to UCB, ETC requires the number of exploration rounds as a hyperparameter, and the optimal exploration round depends on the total target sequence length $B$, which we cannot know during the generation. We will include this discussion and the updated results in our final manuscript, adding ETC as a baseline of our experiments.\\n\\n**W7. Clarification on the statement**\\n\\nThank you for pointing this out. As stated in the additional response to W3, in order to relax the assumption on fixed $B$ and make `Theorem 6` true, we have to rely on the additional assumption (`Assumption 2` in `Appendix G.7`). For clarification, we restate the previous claim as \\\"We can consider general scenarios where we take all possible instances generated by a target model when using temperature sampling with T>0. In this scenario, we define the expected regret over the probability space induced by the target model.\\\" in our revised manuscript. \\n\\n**Additional comparison with [1] and clarification**\\n\\nWe appreciate the reviewer for providing the good reference [1], which we didn\\u2019t have a chance to access by the time of our submission. 
We believe analyzing Speculative Decoding on the more general assumption of formalizing a decoding problem using a Markov-chain would be closer to the real-world scenarios. While formulating our problem as a non-stationary bandit with Markov chain formulation as in [1] would be a more realistic modeling, non-stationary bandit does not always perform better empirically. \\n\\nHere, we would like to emphasize once more that our work is the first to investigate multi-drafter selection within one generation when doing Speculative Decoding and we especially take advantage of a bandit algorithm which is simple yet effective. Thus future works are expected to investigate more general scenarios and corresponding analysis to cover more realistic modeling. \\n\\nWe sincerely appreciate you for the constructive discussion and valuable feedback with excellent intuitions. We hope our responses clarify your initial concern and we are open to further discussion if there exist any remaining concerns or questions!\\n\\n\\n**Reference** \\n\\n[1] M. Yin et al, A Theoretical Perspective for Speculative Decoding Algorithm. NeurIPS 2024.\\n\\n[2] Leviathan et.al., Fast Inference from Transformers via Speculative Decoding. ICML 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Responses to Reviewer v7AR (1/2)\", \"comment\": \"Thank you for your careful review of our paper and your insightful and constructive comments. We have addressed your comments and updated our manuscript accordingly. Please find our detailed answers below.\\n\\n# W1. Clarity of the assumption on acceptance rate\\n\\nWe denote the acceptance rate of drafter i in round t as $\\\\alpha_{i,t}$ and assume this is a stationary random variable with mean $\\\\alpha_i$. 
As the regret bound of a bandit algorithm is usually related to the expectation of each reward, $\\\\alpha_i$ appears in the result in `Theorem 2` but this does not mean we replace $\\\\alpha_{i,t}$ with $\\\\alpha_i$ in any part of our analysis. For clarification, we add a formal definition of our assumption on acceptance rate in `Appendix G.6`.\\n\\n# W2. Validity of Theorem 1\\n\\nThe expectation of BD reward in `Lemma 4` is a direct consequence of our assumption. Moreover, our stationarity assumption implies that the TV distance between target model and i-th drafter does not depend on the context and consequently all of the theorem does not have to take into account the target model output. Please note that our assumption is stricter than the most general case as in [1] while it is more general than [2] where authors use fixed acceptance rates. For completeness, we add the comparison between ours and assumptions used in [1] and [2] with analysis in `Appendix G.6`. \\n\\n# W3. Stochasticity of the target sequence length B\\n\\nThank you for the constructive feedback! While we define the stopping time regret (`Definition 2`) and build `Theorem 2` with fixed B for each generation, this can be easily generalized to the more general case considering a probability space induced by the target model. This is possible since target sequence length B does not depend on the policy under our assumption where target sequence length B is only determined by the randomness of the target model, independent from the types of drafters we used throughout the generation. We include a general version of `Theorem 2` with analysis in `Appendix G.7`. \\n\\n# W4. The relationship between the reward / regret design and the performance of the algorithm.\\n\\nThe performance of a bandit algorithm itself does not depend on the definition of the regret since regret objective is only for the analysis. 
We show that the optimality guarantee with the existing regret objective in the bandit literature will not prove the optimality of the algorithm performance in the Speculative Decoding. This motivates us to design a novel regret objective in `Definition 2` which aligns with the performance of algorithms in SD. This is reflected in `Lemma 6`, where we prove the upper bound on existing regret definition may not guarantee the optimal performance. One of our key theoretical contributions is defining this new regret objective and proving the upper bound on this new stopping time regret which is not so straightforward from the conventional analysis of UCB. Consequently, the result of `Theorem 2` reflects the guarantee of the algorithm performance and lower regret upper bound when using BD reward can imply better algorithm performance in most scenarios which we state in `Corollary 1`. \\n\\n# W5. Line 1586 and Line 1695\\n\\nWe appreciate your careful review. We fixed typos in `line 1763` (originally `line 1586`) and `line 1878` (originally `line 1696`) of our original manuscript.\\n\\nFor `Line 1586`, $\\\\mathbb{E}[\\\\tau^u(\\\\pi^u,B)]$ should be changed to the $\\\\mathbb{E}[n_{i^{\\\\star}}(\\\\pi^u,B)]$.\\n\\nFor `Line 1878` (eq. 29), the square root term in the left hand side should be divided into $s$ and $n_i$, respectively. As in the proof of the original UCB algorithm (proof of `Lemma 7`), confidence bound in both inequalities needs to be shrinked as $s$ and $n_i$ increases with fixed t and it doesn\\u2019t necessarily have to be shrinked with increasing t. \\n\\n# W6. Hyperparameter $\\\\beta$ and analysis on mean gaps $\\\\Delta_i$\\n\\nThank you for the insightful comments! \\n\\nIn `Appendix G.8`, we include further analysis on how the regret upper bound of our algorithm changes with different $\\\\beta$. \\n\\nAlso, mean gaps $\\\\Delta_i$ can indeed be a critical factor of the algorithm performance. 
We make the terms containing the mean gaps appear explicitly in the inequality in `Theorem 2` and the relevant equations to better represent this fact, which ensures that all of the constant terms are independent of both $B$ and the mean gaps.\\n\\n# W7. Formal definition of B and terminology\\n\\nWe provide a formal definition of the target sequence length $B$ in `Appendix G.7`. We have also ensured consistent use of the term \\u2018target sequence length\\u2019 when referring to $B$ throughout the manuscript to avoid any ambiguity.\"}", "{\"title\": \"Gentle Reminder - Dear Reviewer CxuG\", \"comment\": [\"Dear `Reviewer CxuG`,\", \"Thank you for your valuable feedback on our work. As we approach the end of the discussion phase, we\\u2019d like to highlight the key contributions of our paper:\", \"`Theoretical guarantee`: A solid foundation for multi-armed bandit-based drafter selection.\", \"`First of its kind`: Pioneering speculative decoding with multiple heterogeneous drafters.\", \"`Extensible design`: Compatible with other speculative decoding frameworks.\", \"`Robustness`: Consistent performance across diverse scenarios, including out-of-domain and perturbed prompts.\", \"`Throughput limitation`: Effective for small batch sizes despite challenges in large heterogeneous batches.\", \"`Novel analysis`: A new divergence metric focusing on token-level adaptation.\", \"Our responses also address your concerns with additional experiments and detailed clarifications. We value any further feedback to ensure your concerns are fully addressed.\", \"Thank you again for your time and insights.\", \"Warm regards\"]}", "{\"title\": \"Responses for the relationship between the theory and experiments (1/2) (To Reviewer v7AR)\", \"comment\": \"Dear Reviewer v7AR,\\n\\nThank you for your insightful questions. 
Below, we address your concerns about the `relationship between the theory and experiments.`\\n\\nWe have conducted additional experiments with **stochastic decoding (temperature=1.0)**, as suggested, to demonstrate the robustness of our framework under the setting assumed in the theoretical analysis. The results are as follows:\\n\\n#### Black-Box (MetaSpS)\\n\\n| Task | Drafter1 | Drafter2 | Drafter3 | Drafter4 | Drafter5 | OFA | MetaSpS-UCB |\\n|--------------|----------|----------|----------|----------|----------|------|------------|\\n| Code | 1.781 | 0.963 | 1.168 | 1.260 | 1.178 | 1.501 | **1.596** |\\n| Translation | 0.856 | 1.695 | 0.897 | 0.880 | 0.838 | 0.861 | **1.197** |\\n| CNN | 1.201 | 0.918 | 1.629 | 1.223 | 1.092 | 1.230 | **1.439** |\\n| NQA | 1.073 | 0.961 | 1.132 | 1.510 | 1.031 | 1.123 | **1.322** |\\n| MathQA | 1.220 | 1.026 | 1.200 | 1.360 | 1.968 | 1.512 | **1.673** |\\n\\n#### White-Box (MetaEagle)\\n\\n| Task | EAGLE1 | EAGLE2 | EAGLE3 | EAGLE4 | EAGLE5 | OFA Eagle | MetaEagle-UCB |\\n|--------------|--------|--------|--------|--------|--------|-----------|---------------|\\n| Code | 3.019 | 0.872 | 1.012 | 1.348 | 1.809 | **2.926** | 2.765 |\\n| Translation | 1.030 | 1.817 | 1.373 | 1.278 | 0.997 | 1.578 | **1.668** |\\n| CNN | 0.998 | 0.834 | 2.289 | 1.267 | 0.864 | 1.749 | **1.935** |\\n| NQA | 1.179 | 0.922 | 1.269 | 2.181 | 1.011 | 1.680 | **1.756** |\\n| MathQA | 1.739 | 0.966 | 1.462 | 2.141 | 3.099 | 2.289 | **2.539** |\\n\\nThese results align with the theoretical assumptions, still demonstrating the robustness of MetaSD under stochastic decoding conditions. Additionally, `Table 7` in our manuscript already presents the robustness of MetaSD in multilingual settings under stochastic decoding. 
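For intuition, the per-round drafter selection evaluated in these tables can be sketched roughly as follows. This is an illustrative simplification, not the MetaSD implementation: the Bernoulli `drafter_accept_probs` stand in for the block-efficiency/BD reward signal, and `beta` stands in for the exploration hyperparameter $\beta$.

```python
import math
import random

def ucb_select(counts, means, t, beta=0.5):
    """Return the drafter with the highest UCB score:
    empirical mean reward plus an exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:                      # try every drafter at least once
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(beta * math.log(t) / counts[i]))

def run_bandit(drafter_accept_probs, rounds=2000, seed=0):
    """Simulate drafting rounds; reward is 1 when the drafted block is
    accepted (a toy stand-in for the block-efficiency / BD reward)."""
    rng = random.Random(seed)
    k = len(drafter_accept_probs)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        i = ucb_select(counts, means, t)
        r = 1.0 if rng.random() < drafter_accept_probs[i] else 0.0
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]   # incremental mean update
    return counts
```

With acceptance probabilities such as `[0.3, 0.8, 0.5]`, the policy concentrates its pulls on the strongest drafter after a short exploration phase, which is the behavior the tables above reflect at a larger scale.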
We believe these results further clarify the connection between our theory and experiments.\\n\\nTo ensure clarity, we will expand on these findings in our camera-ready manuscript and address any potential ambiguities that may arise for other readers.\"}", "{\"title\": \"Dear SAC, AC, and Reviewer v7AR\", \"comment\": \"Thank you for the ongoing discussion. We would like to raise a critical concern:\\n\\n- `Request for Moderation`: **To the AC, we are concerned that certain interpretations appear to frame the work unfairly, not due to a lack of engagement but rather through persistent misdirection of the discussion. This includes steering rebuttal efforts toward edge cases and issues tangential to the core problem our work aims to address, while also demonstrating a misunderstanding of the theorems `despite repeated clarifications`.** We kindly request your moderation to assess whether this critique appropriately reflects the context and intent of our contributions. If you find similar concerns, we sincerely ask for your guidance to ensure fairness.\\n\\nWe appreciate your attention to this matter as we strive for a balanced and constructive review process.\\n\\nBest regards, \\n\\n-Authors\"}", "{\"title\": \"Responses to Reviewer Tyx3 (2/2)\", \"comment\": \"# W3. Throughput experiment\\n\\nWe understand the need to evaluate throughput because batch processing is critical. We address the concern and clarify why our method maintains throughput efficiency in `Appendix F.10`.\\n\\nOur drafter management mechanism does not lead to throughput depletion compared to single-drafter methods, due to the following reasons:\\n\\n- No Increase in Memory Bandwidth Requirements: The drafters\\u2019 parameters are preloaded into DRAM and do not require frequent movement to VRAM for computation. 
This ensures that the number of memory movements and the memory size for movement remain identical to single-drafter methods, even in scenarios with multiple drafters.\\n\\n- Scaling Across Batches: Since the computational structure remains unchanged, performance observed in single-batch scenarios translates directly to multi-batch settings, maintaining throughput consistency.\\n\\nWe further conducted experiments comparing throughput following the same settings in the original Eagle paper. The results confirm that the performance of MetaSD scales well without significant degradation:\\n\\n| **Throughput** (RTX 3090 24GB) | **Single OFA Eagle (Tokens/sec)** | **MetaEagle-UCB (Tokens/sec)** |\\n|----------------|--------------------------------|-------------------------|\\n| Speedup | x 2.235 | x 2.427 |\\n\\n\\n\\n# W4. Problem with MT datasets\\n\\nWhile we acknowledge that Vicuna and LLaMA 2 are not inherently multilingual models, we chose these models for their instruction-tuned capabilities. Importantly, the purpose of our evaluation was not to measure generation quality but to assess **latency** improvements under SD frameworks. These experimental settings follow those of prior works such as:\\n\\n- **\\\"Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters\\\" (EMNLP 2024 main)** \\n\\n- **\\\"Online Speculative Decoding\\\" (ICML 2024)** \\n\\nBoth studies utilize similar setups to evaluate decoding efficiency over multiple multilingual datasets.\\n\\n## **Key Clarifications**\\n\\n1. **Generality Across Domains**: While our evaluation includes MT tasks, our framework is not limited to this domain. We also conducted experiments across **code generation, math problem solving, question answering, summarization**, and additional out-of-domain tasks such as `Alpaca-Finance` and `RAG datasets` (`Table3`, `Table4`, `Table9`). \\n\\n2. 
**Comprehensive Scope**: To the best of our knowledge, our work evaluates SD performance across the **broadest range of domains** in the existing literature. This ensures that MetaSD\\u2019s effectiveness is demonstrated across diverse use cases.\\n\\n\\n\\n# W5. Training datasets are the same as the evaluation datasets.\\n\\nWe would like to clarify that while there is some overlap in the types of datasets (e.g., machine translation), the training and evaluation datasets differ significantly in both domains and instances. Below, we provide details for clarification:\\n\\n- Code Generation: Training was performed using the Code-Alpaca dataset, but evaluation was conducted on the MT-Bench dataset. This ensures that the evaluation reflects performance on unseen data.\\n\\n- Math Problem Solving: Training utilized the MetaMathQA dataset, whereas evaluation was conducted on GSM8K, a completely different dataset focused on general math problems.\\n\\n- Machine Translation: While the training and evaluation datasets belong to the same domain (e.g., English-to-Chinese translation), the specific instances used in training and testing are distinct.\\n\\n# W6. Vertical Spacing on the Last Page\\n\\nThank you for pointing this out. We have adjusted the formatting in our revised manuscript.\\n\\n# Q1. & Q2. Additional baselines for OFA drafter training over mixed data \\n\\nWe conducted additional experiments with a One-size-Fits-All (OFA) drafter for both black-box and white-box settings. These OFA drafters were trained on the mixed datasets spanning all tasks, ensuring a direct comparison to the specialized-drafter-based MetaSD framework. **The results in `Table3` and `Table4` of our revised manuscript show that while the OFA drafter performs well, our MetaSD framework outperforms the OFA drafter across most tasks, demonstrating the strength of our method.** The OFA Eagle drafter performs relatively well. 
**However, it is still outperformed by MetaSD (i.e., MetaEagle-UCB).\"}", "{\"comment\": \"If the queries in the batch choose different draft model, in my opinion, it will cause increased IO compared with a single eagle?\"}", "{\"title\": \"Response to Reviewer Tyx3\", \"comment\": \"Thank you for your feedback and for taking the time to review our revised manuscript. We respectfully disagree with your assessment that our new experiments are not comprehensive enough. Below, we outline the steps we took to address your original concerns and clarify why we believe these efforts adequately demonstrate the effectiveness of MetaSD-UCB.\\n\\n---\\n# Clarifications and Specific Points\\n\\n1. **We conducted all requested experiments** \\n In your initial review, you requested additional evaluations, including:\\n - **Perturbed Prompt Experiments**: We introduced perturbed prompts to assess performance under more realistic, varied input scenarios. These experiments highlighted that MetaSD maintains robustness and consistently outperforms baselines like OFA, even when prompts vary semantically. \\n - **Heterogeneous Batch Scenarios**: We conducted experiments for worst-case edge cases where all instances in a batch correspond to different tasks. These experiments revealed the limitations of MetaSD in large heterogeneous batches while highlighting its continued strength in small batch sizes and homogeneous settings. \\n - **OFA Drafter Comparisons**: We added comparisons against OFA drafters trained on mixed datasets across both black-box and white-box settings. While OFA drafters performed well, MetaSD consistently outperformed them across most tasks, validating the dynamic drafter selection mechanism. \\n\\n We believe these additions addressed the core concerns raised in your initial review. And simultaneously, no ensembling methods are suggested in your initial review.\\n\\n2. 
**MetaSD-UCB as a novel contribution** \\n Our work is the **first to address speculative decoding (SD) with multiple heterogeneous drafters**. This is a fundamentally different problem from standard ensembling or naive SD approaches. MetaSD leverages the **memory-bound nature** of SD to dynamically optimize performance without increasing memory bandwidth demands. This approach is not directly comparable to generic ensembling methods, which are not designed to handle memory-bound SD settings.\\n\\n3. **Request for specific alternatives** \\n While your review suggests a comparison with `other ensembling approaches`, no specific methods for ensemble-based SD were referenced in your feedback. If there are particular approaches you believe are relevant, we would greatly appreciate it if you could suggest them. This would allow us to better position MetaSD in the broader speculative decoding landscape. However, we must emphasize that speculative decoding with **heterogeneous drafter selection** is fundamentally distinct from simple ensembling strategies.\\n\\n4. **Unclear areas needing improvement** \\n While we value constructive criticism, we find the suggestion that the \\\"new experiments are not comprehensive enough\\\" to be vague. Without concrete examples of specific limitations or additional experiments required, it becomes difficult to address these concerns further. We respectfully request more explicit feedback to help us understand which specific aspects of the study remain insufficient.\\n\\n---\\n\\n# **Broader context: contribution and innovation**\\n\\nWe would also like to reiterate the key contributions and strengths of our work:\\n\\n- **Theoretical guarantee**: We propose a novel multi-armed bandit approach for speculative decoding with formal theoretical guarantees. \\n- **Novelty**: MetaSD-UCB is the **first framework to explore multiple heterogeneous drafters for SD**, addressing challenges that traditional ensembling methods cannot solve. 
\\n- **Practical speedup**: MetaSD achieves substantial improvements in speedup ratios compared to existing SD methods, demonstrating its practical utility. \\n- **Comprehensive evaluation**: To our knowledge, our work evaluates speculative decoding performance across the broadest range of task categories, including in-domain, out-of-domain, and perturbed prompts, making our study one of the most thorough in this field. \\n\\n---\\n\\n### **Respectful request**\\n\\nWe believe our work pushes the edge of speculative decoding research and addresses a novel and underexplored problem. **While constructive suggestions are always appreciated, introducing new and abstract requirements after revisions may not provide a fair assessment of the work's merits. We respectfully request that our contributions be evaluated based on the clear and explicit goals set during the initial review.**\\n\\nIf there are additional suggestions for experiments or comparisons, we welcome them as valuable input for future research, but we believe our current manuscript sufficiently demonstrates the effectiveness and novelty of MetaSD-UCB.\\n\\nThank you again for your time and for considering our response. We hope this clarifies our position and highlights the strengths of our contribution.\\n\\n-Authors\"}", "{\"comment\": \"Can you add more results on ood datasets largely different from the five domains you train the eagle, with both bsz> 1 and OFA eagle.\\n\\nI recommend you use some datasets like MMLU with COT, physics QA, hotpot QA.\"}", "{\"title\": \"Further extended experiments\", \"comment\": \"## Nonstationary Environment\\n\\nTo further **demonstrate the effectiveness of the MAB approach**, we conducted experiments on a **non-stationary translation task**, where each query required translating two different languages (French and Chinese) into English. 
The experiments were performed using a single RTX 3090 GPU, and the results are as follows:\\n\\n| | Drafter1 | Drafter2 | Drafter3 | Drafter4 | Drafter5 | Upper Bound for Classification-based Routing | MetaSpS-UCB |\\n|------------------|----------|----------|----------|----------|----------|----|----------------|\\n| Block Efficiency | 1.668 | 1.722 | 1.485 | 1.759 | 1.803 | 1.803 | 1.951 |\\n| Speedup Ratio | 1.429 | 1.492 | 1.289 | 1.539 | 1.581 | 1.581 | 1.722 |\\n\\nThese results clearly demonstrate the **effectiveness of the MAB approach**, where MetaSD-UCB with $\\\\alpha=0.1$ achieves a speedup ratio of **1.722**, which even exceeds the performance of the optimal drafter (1.581). This improvement arises because MAB dynamically adapts to the best drafter in the current environment through a combination of exploration and exploitation.\\n\\nFor example, in a multilingual scenario, where the task is \\\"Translate French and German to English, respectively,\\\" classification-based routing would be limited by the performance of the best single specialization (i.e., the speedup ratio of the optimal drafter as its upper bound). In contrast, MetaSD-UCB surpasses this upper bound by effectively adapting its policy dynamically, demonstrating a **unique advantage** over classification-based methods in non-stationary environments.\\n\\n## Future Directions\\n\\nWe are already encouraging further exploration in our paper's **Future Work** section, and we believe this type of **policy-routing** could benefit significantly from more advanced approaches like **contextual bandits** or **reinforcement learning** (RL). These methodologies may offer more sophisticated mechanisms for dynamic adaptation in non-stationary environments. 
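The non-stationary effect described above, where an adaptive policy can exceed the best fixed drafter, can be illustrated with a small deterministic toy simulation. This is not the MetaSD-UCB algorithm itself; the sliding window used here is merely one simple way to discount stale feedback, and the reward stream is synthetic.

```python
import math

def windowed_ucb(reward_fn, n_arms, horizon, window=30, beta=0.5):
    """UCB computed over only the last `window` pulls, so stale feedback
    ages out and the policy can track a drifting best drafter."""
    history, total = [], 0.0
    for t in range(1, horizon + 1):
        recent = history[-window:]
        counts = [sum(1 for a, _ in recent if a == i) for i in range(n_arms)]
        if 0 in counts:
            arm = counts.index(0)   # re-probe any arm absent from the window
        else:
            means = [sum(r for a, r in recent if a == i) / counts[i]
                     for i in range(n_arms)]
            arm = max(range(n_arms),
                      key=lambda i: means[i] +
                      math.sqrt(beta * math.log(len(recent)) / counts[i]))
        r = reward_fn(t, arm)
        history.append((arm, r))
        total += r
    return total / horizon

def flipping_reward(t, arm):
    """Deterministic toy stream: the better drafter flips every 50 queries."""
    good = 0 if (t // 50) % 2 == 0 else 1
    return 0.8 if arm == good else 0.2
```

In this toy stream any fixed drafter averages roughly 0.5 reward per query, while the windowed policy tracks the flips and does noticeably better, which mirrors the "exceeds the best single drafter" observation reported above.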
Furthermore, from a **scalability perspective**, interesting future directions include:\\n\\n- Exploring how to cluster drafters effectively for system-level applications, such as those used in **retrieval-augmented generation (RAG)**.\\n- Investigating how to design **OFA drafters** in an orthogonal manner to maximize complementarity across tasks.\\n\\n## Clarification on Our Research Focus\\n\\nIt\\u2019s important to highlight that **our work does not advocate training multiple specialized drafters instead of OFA drafters**. One of our main focus is to address the critical challenge of **dynamically routing tasks among pre-existing heterogeneous specialized drafters**. This is a practical and timely problem, especially as these specialized drafters are becoming widely available.\\n\\nFor a deeper understanding of our motivation and scope, we strongly recommend revisiting the **`Motivation Section`** in the main paper and the **`Further Motivation Section`** in the `Appendix`. These sections provide a comprehensive view of why this research is both relevant and necessary for advancing speculative decoding systems.\"}", "{\"comment\": \"Thank you for your detailed response!\\n\\nIt has addressed most of my concerns, and I do appreciate the huge efforts you\\u2019ve put into this paper and the response. Therefore, I'm raising my score to **6**. That said, I have some follow-up questions and suggestions for further improvement:\\n\\n- **Comparisons with Concurrent Work**: I recommend including discussions of related concurrent work, such as SWIFT[1]. SWIFT dynamically optimizes and switches to a more efficient drafter using layer skipping, which shares similarities with the Bandit approach in this manuscript. Additionally, I encourage evaluating the Bandit approach in a dynamic data stream with multi-task OOD data, as demonstrated by SWIFT in Figure 7. This would provide a more comprehensive evaluation (you don't have to do it during the rebuttal period). 
Besides, more discussions with [2] should be included since this method also optimizes SD performance on the fly.\\n- **Computation Overhead**: The analysis of additional computational overhead introduced by the Bandit mechanism should be included in the revised manuscript to provide a clear understanding of its practical implications.\\n- **Regarding W2**: I respectfully disagree with the statement, \\\"*Computing similarity between sentence embeddings requires passing the context through an encoder-only network to obtain the embeddings*.\\\" Cosine similarity can be directly calculated on the input representations after the pre-filling stage of the target LLM, without requiring an additional encoder. However, I acknowledge the authors\\u2019 valid point that this naive strategy could pose challenges for tasks with highly similar representations.\\n- **Regarding W4**: The manuscript should report **switching accuracy** metrics. Specifically, I am curious whether the low switching frequency observed leads to high switching accuracy (i.e., the Bandit mechanism consistently selects the optimal drafter). If this is not the case, the low switching frequency could undermine the effectiveness of the Bandit mechanism, and further clarification or analysis would be valuable.\\n\\nThank you again for your thorough response. I look forward to further discussions and may consider increasing my score upon addressing these points.\\n\\n[1] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration. Xia et al. Arxiv 2024.\\n\\n[2] Online speculative decoding. Liu et al. ICML 2024.\"}", "{\"title\": \"Responses to Reviewer CxuG\", \"comment\": \"Thank you for your careful review of our paper and your insightful and constructive comments. We have addressed your comments and updated our manuscript accordingly. Please find our detailed answers below.\\n\\n# W1. 
Additional baseline - Missing Baseline - One-size-Fits-All (OFA) Drafter\\n\\nThank you for raising this important point. We conducted additional experiments with a One-size-Fits-All (OFA) drafter for both black-box and white-box settings. These OFA drafters were trained on the mixed datasets spanning all tasks, ensuring a direct comparison to the specialized-drafter-based MetaSD framework. **The results in `Table3` and `Table4` of our revised manuscript show that while the OFA drafter performs well, our MetaSD framework outperforms the OFA drafter across most tasks, demonstrating the strength of our method.** The OFA Eagle drafter performs relatively well. **However, it is still outperformed by MetaSD (i.e., MetaEagle-UCB)**. \\n\\n# Q1. Explanation on the advantage of MetaSD in batched inference setting\\n\\nWe acknowledge that the current statement in `Appendix C.1` was not fully clear regarding our position on batched inference. We appreciate the opportunity to clarify our intention.\\n\\nOur proposed MetaSD framework can address scenarios where tasks of varying nature (e.g., translation, summarization, QA, etc.) are mixed within a single batch. In such settings, a static approach that relies on a single drafter for all instances in the batch could lead to significant performance degradation, especially when the drafter is poorly suited to some tasks in the batch. This challenge is exacerbated as batch sizes increase.\\n\\nMetaSD, in contrast, adapts dynamically at the instance level within a batch, ensuring that the most suitable drafter is chosen for each task. By leveraging our MAB framework for dynamic adaptation, MetaSD optimizes performance for diverse tasks within the batch, providing consistent improvements in throughput compared to static, single-drafter-based SD approaches. 
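The instance-level routing within a batch can be sketched minimally as follows. This is an illustrative sketch only: `select_drafter` is a hypothetical stand-in for the per-instance bandit policy, not an actual MetaSD function.

```python
from collections import defaultdict

def route_batch(batch, select_drafter):
    """Group batch instance indices by the drafter chosen for each prompt,
    so every drafter runs one sub-batch instead of a single drafter
    serving the whole (possibly mixed-task) batch."""
    groups = defaultdict(list)
    for idx, prompt in enumerate(batch):
        groups[select_drafter(prompt)].append(idx)
    return dict(groups)
```

A trivial prefix-based chooser is enough to show the grouping; in practice the chooser would be the bandit policy deciding per instance.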
Thus, even as batch sizes increase, MetaSD maintains high throughput by avoiding the pitfalls of static, one-size-fits-all drafters.\\n\\nWe will revise `Appendix C.1` to better articulate this point, as follows:\\n\\n> \\\"Batched inference: Our current implementation primarily focuses on single-query scenarios. However, adapting the MetaSD framework to batched inference\\u2014where different tasks are mixed within a single batch\\u2014presents an opportunity for significant efficiency gains. Unlike static single-drafter-based SD, which can suffer from suboptimal performance when handling diverse tasks in a batch, MetaSD dynamically optimizes drafter selection at the instance level. This ensures consistently high throughput, even in high-throughput batched settings.\\\"\\n\\nFor more information, we would like to share the explanations regarding throughput efficiency in `Appendix F.10`.\\n\\nOur drafter management mechanism does not lead to throughput depletion compared to single-drafter methods, due to the following reasons:\\n\\n- No Increase in Memory Bandwidth Requirements: The drafters\\u2019 parameters are preloaded into DRAM and do not require frequent movement to VRAM for computation. This ensures that the number of memory movements and memory size for movement remain identical to single-drafter methods, even in scenarios with multiple drafters.\\n\\n- Scaling Across Batches: Since the computational structure remains unchanged, performance observed in single-batch scenarios translates directly to multi-batch settings, maintaining throughput consistency.\\n\\nWe further conducted experiments comparing `throughput` following the same settings in the original Eagle paper. 
The results confirm that the performance of MetaSD scales well without significant degradation:\\n\\n| **Throughput** (RTX 3090 24GB) | **Single OFA Eagle (Tokens/sec)** | **MetaEagle-UCB (Tokens/sec)** |\\n|----------------|--------------------------------|-------------------------|\\n| Speedup | x 2.235 | x 2.427 |\\n\\n\\nWe hope this clarifies our approach and the rationale behind our claims. Thank you for highlighting this important aspect!\"}", "{\"title\": \"Responses to Reviewer Tyx3 (1/2)\", \"comment\": \"Thank you for your careful review of our paper and your insightful and constructive comments. We have addressed your comments and updated our manuscript accordingly. Please find our detailed answers below.\\n\\n# W1. Invalid evaluations for real-world use cases.\\n\\nWe acknowledge the concern that using the same prompt template across all instances within a dataset might not fully reflect real-world scenarios. To address this, we conducted additional experiments with **perturbed prompts**, where the prompt for each query varied slightly but was semantically equivalent to the original. These perturbed prompts were generated using GPT-4, ensuring natural and diverse variations. For example:\\n\\n- **Translation Task**: Original prompt \\u201cTranslate German to English\\u201d was perturbed to \\u201cConvert this text from German to English.\\u201d\\n\\n- **Summarization Task**: Original prompt \\u201cSummarize:\\u201d was perturbed to \\u201cProvide a concise overview of the following text:\\u201d\\n\\n## **Key Observations**\\n\\nThe main results and descriptions are detailed in `Table 10` and `Table 11` of `Appendix F.9`.\\n\\n1. **Performance Impact Across All Methods**:\\n - Perturbed prompts led to a drop in performance for all drafters, including One size Fits All (OFA; which is trained on a mixed dataset) and individual specialized drafters. 
**This indicates that the issue is not limited to our approach but affects all SD methods.**\\n - The observed degradation reinforces that real-world variability in prompts can challenge any static drafter selection strategy.\\n\\n2. **MetaSD\\u2019s Robustness**:\\n - Despite the variability introduced by perturbed prompts, **MetaSD consistently outperforms all baselines**, including OFA and individual drafters. This result highlights the advantage of MetaSD\\u2019s dynamic selection mechanism, which adapts to token distributions at every round rather than relying solely on prompt characteristics.\\n\\n# W2. Limited effectiveness of the MAB approach\\n\\nWhile the datasets used in our experiments are indeed distinctly different and well-defined, our method is designed to handle more complex, real-world scenarios where:\\n\\n- The boundaries between datasets or task types are less clear.\\n\\n- The performance of each drafter can vary dynamically depending on the token distribution as the decoding progresses.\\n\\n- Out-of-domain tasks introduce additional complexity, which static rule-based systems or simpler machine learning models cannot handle effectively.\\n\\nWhile it is true that MetaSD involves SD rounds for multiple drafters, the additional computational cost is mitigated by the **efficiency gains in overall throughput.** The reported accuracy of drafter selection (below 80%) should not be viewed in isolation, as it reflects the complexity of real-world token prediction rather than a static task classification problem. While rule-based systems may appear efficient in well-defined scenarios, they lack the flexibility to adapt to:\\n\\n- Mixed-domain or evolving tasks where token distributions do not align clearly with predefined rules.\\n\\n- Real-time feedback from the decoding process, which is a critical component of MetaSD's adaptive optimization.\"}", "{\"comment\": \"Thank the authors for the detailed reply! 
It resolves some of my concerns, but others remain.\\n\\n**W1. Clarity of the assumption on acceptance rate**\\n\\nThanks for re-writing the definitions of the acceptance rate. It is better to explicitly state that $\\\\alpha_{i,t}$ are **independent** since only stationarity does not guarantee the correctness of Lemma 3.\\n\\n**W2. Validity of Theorem 1**\\n\\nThanks for clarifying the setting. It is encouraged to directly state the i.i.d. assumption in the theorem to distinguish between the current work and all the previous works. In addition, it is beneficial to discuss the i.i.d. assumption as a simplification of the real applications.\\n\\n**W3. Stochasticity of the target sequence length B**\\n\\nThanks for the new definition. Given the setting in Appendix G.7, Theorem 2 indeed implicitly makes a very strong assumption, i.e., $B$ is not a random variable. This assumption does not hold in any application setting according to my understanding, since the sampling decoding induces a stochastic process. It is encouraged to remove Theorem 2 and state Theorem 6 in the main paper.\\n\\nThe proof of Theorem 6 is not correct. As mentioned in the review, if we condition on $B$, we should work with the posterior distribution of all the random variables conditioned on $B$. Thus, Lemma 8 is not correct. Let\\u2019s consider a very simple LLM, whose alphabet is ${0,1,EOS}$. The first token is $0$ or $1$ with $1/2$ probability. To predict the next token, if the last token is $1$, the next token is $0$ or $1$ with $1/2$ probability. If the last token is $0$, then the next token is $EOS$ with probability 1. Thus, if we condition on the event that $B=4$, we know that the sequence is $1110$, and they are not independent anymore. Thus, (18) **does not hold** for the posterior distribution. Assumption 1 only states that the acceptance rate is i.i.d. 
without any conditioned random variable, it **does not** imply anything for the posterior distribution.\\n\\nIn addition, the statement ``Since probability space generated from the policy is independent from target model output generated from p under Assumption 1, \\u2019\\u2019 is not correct. Only random variables can be independent, the probability space is a pre-defined structure, and there is no stochasticness for it. \\n\\n\\n**W4. The relationship between the reward / regret design and the performance of the algorithm.**\\n\\nThere is a gap between the methods in the theory and the empirical algorithms. In line 371, the authors state that \\u201cWe conduct evaluations using a NVIDIA A5000, A6000, and A100 GPU under **greedy** decoding settings.\\u201d However, all the theoretical analysis is built on the sampling decoding, i.e., we sample the next token from the distribution predicted by LLMs. The greedy decoding can be very different from the sampling decoding. Thus, the theoretical analysis is unrelated to the empirical results.\\n\\n\\n**W5. Line 1586 and Line 1695**\\n\\nThanks for correcting your expressions. The expressions look reasonable now.\\n\\n\\n**W6. Hyperparameter $\\\\beta$ and analysis on mean gaps $\\\\Delta_i$**\\n\\nThanks for incorporating $\\\\beta$ in the final bound. \\n\\nHowever, as indicated by the theorem, $\\\\beta$ should be chosen to be greater than $0.5$, whereas $\\\\beta$ is selected to be $0.01$ in the experiment (Line 1150). In this case, the bonus term in the UCB design has a marginal effect. 
The whole algorithm design roughly reduces to an Exploration-then-Commit (ETC) algorithm which uniformly explores each drafter once followed by committing to the empirically best drafter.\\n\\nSince ETC usually serves as a benchmark in the bandits literature, it would be better if ETC were incorporated in the experiment, with a tuned exploration phase (maybe considering allocating $2(B/K)^{2/3}\\ln^{1/3}(2KB)$ time steps for the uniform exploration, which results in a minimax bound for ETC).\\n\\nLastly, as mentioned in **W3**, the proof of Theorem 7 is not correct.\\n\\n**W7. Formal definition of B and terminology**\\n\\nThanks for your clarification. \\nIt is confusing that in Line 1947, the authors indicate \\u201cWe can relax the assumption on the fixed target sequence length B and naturally extend our analysis for the case where target model output is based on the temperature sampling T > 0.\\u201d Does it indicate the previous result only holds for the greedy decoding strategy? It would be appreciated if it could be further clarified.\"}", "{\"summary\": \"This paper proposes a framework utilizing multiple draft models simultaneously for large language model (LLM) speculative decoding. Unlike traditional speculative decoding methods that employ a single draft model, this approach leverages a pool of draft models to enhance performance in varied domains. The paper argues that draft models trained on specific datasets may underperform in out-of-domain tasks. To address this, different draft models are utilized as candidate drafts for inference across various domains. The framework incorporates multi-armed bandit algorithms from recommendation systems to select the optimal draft model at each decoding step. Experiments on both black-box (e.g., independent draft models) and white-box (e.g., Medusa, Eagle, etc.) 
approaches demonstrate that the proposed framework offers superior inference speed compared to single-draft models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe approach of using multiple draft models to enhance speculative decoding across different scenarios is modest.\\n\\n2.\\tExperimental results validate the effectiveness of the proposed framework, and the ablation study highlights the superiority of the BE reward.\\n\\n3.\\tThe writing is clear and concise.\", \"weaknesses\": \"1.\\tAn important baseline is missing: training a single draft model on all specified datasets. Given the claim that draft models trained on different domain data improve overall performance, it is crucial to show the performance of a single draft model trained on the same datasets.\", \"questions\": \"1.\\tThe appendix suggests that the proposed framework can achieve further enhancement in batched inference settings. However, previous works, such as Eagle, indicate that the speedup ratio declines as the batch size increases. Could you provide more details on how you arrived at this conclusion?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the authors for their clarification and additional experiments. However, the new experiments are not comprehensive enough to demonstrate the effectiveness of MetaSD-UCB compared to other ensembling approaches. Therefore, I will maintain my current score.\"}", "{\"title\": \"Responses with Classification-based Routing to Reviewer Tyx3\", \"comment\": [\"Dear Reviewer Tyx3,\", \"In response to your concerns regarding classification-based routing, we conducted additional experiments using BERT-based models for routing across diverse tasks (coding, translation, summarization, QA, and math reasoning). 
Specifically, we evaluated routing performance using pre-trained BERT-Base-Uncased (110M) and BERT-Large-Uncased (330M) with both cosine similarity and fine-tuned softmax heads. All experiments are conducted with a single A100 GPU. (`Note that by considering the small drafter used in SpS has 68M parameters, both models are quite large.`) The results are as follows:\", \"Cosine Similarity (Pre-trained):\", \"BERT-Base: 24.50% accuracy\", \"BERT-Large: 30.00% accuracy\", \"Fine-Tuned BERT with new Softmax Head:\", \"BERT-Base: 74.54% accuracy\", \"BERT-Large: 85.58% accuracy\"], \"we_also_measured_the_average_latency_speedup_across_all_tasks\": \"**Average Latency Speedup Ratio (In-domain)**\\n| **Routing Method** | **SpS Speedup** | **Eagle Speedup** |\\n|--------------------------|-----------------|-------------------|\\n| Fine-Tuned BERT-Base | 1.772 | 2.759 |\\n| Fine-Tuned BERT-Large | 1.741 | 2.797 |\\n| MetaSpS-UCB | 1.912 | \\u2014 |\\n| MetaEagle-UCB | \\u2014 | 3.052 |\\n\\n\\nOur MetaSD demonstrates higher robustness than classification-based routing, particularly in perturbed and out-of-domain (OOD) scenarios.\\n\\n**Perturbed Prompts Speedup Ratio**\\n\\n| **Routing Method** | **SpS Speedup** | **Eagle Speedup** |\\n|--------------------------|-----------------|-------------------|\\n| Fine-Tuned BERT-Base | 1.456 | 2.245 |\\n| Fine-Tuned BERT-Large | 1.567 | 2.323 |\\n| MetaSpS-UCB | 1.820 | \\u2014 |\\n| MetaEagle-UCB | \\u2014 | 2.912 |\\n\\n**Out-of-Domain Speedup Ratio (RAG and Finance)**\\n\\n| **Scenario** | **RAG Speedup** | **Finance Speedup** |\\n|--------------------------|-----------------|---------------------|\\n| Fine-Tuned BERT-Base (SpS) | 1.645 | 1.410 |\\n| Fine-Tuned BERT-Large (SpS)| 1.638 | 1.398 |\\n| MetaSpS-UCB | 1.799 | 1.436 |\\n| Fine-Tuned BERT-Base (Eagle)| 2.132 | 2.413 |\\n| Fine-Tuned BERT-Large (Eagle)| 2.110 | 2.426 |\\n| MetaEagle-UCB | 2.238 | 2.517 |\\n\\nThese results demonstrate that our framework is more robust 
and adaptable, particularly under challenging conditions like perturbed prompts and OOD tasks. Classification-based routing struggled with task confusion, especially between closely related tasks. For example, coding and math often share overlapping patterns in token distributions, making them harder to distinguish, while summarization and QA frequently confused sentence-level context. Even though translation appeared to perform well in classification, it was likely aided by distinct vocabulary distributions unique to translation tasks, which reduce ambiguity. (Note: ` While classification might be better suited for multilingual tasks due to distinct vocabularies, in other cases, it was not as effective.`)\\n\\n**A key reason why MAB demonstrates stronger performance in this setting lies in our novel block-divergence reward.** This reward effectively captures the distributional alignment between the target model and the drafter, allowing the MAB to dynamically adapt and select the optimal drafter based on real-time feedback (i.e. `policy-routing`). In contrast, classification-based routing does not consider this alignment, leading to suboptimal routing decisions that fail to exploit token-level distribution patterns.\\n\\n**Of course, we believe that classification-based models could potentially address these issues more effectively if designed with a deeper consideration of token-level distributional differences.** However, our focus is not on developing classification-based routing methods, and to the best of our knowledge, no established approaches currently exist in this space. Instead, we proposed a simple but efficient method for the novel problem (that we newly raise in this field) at hand. 
**We also believe that exploring classification-based routing as a future direction could offer an orthogonal and promising approach to improving routing efficiency.**\\n\\nWe hope this detailed analysis clarifies the advantages of our approach and demonstrates why a simple classification-based method is insufficient. We will work to include these in the camera-ready.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\n-Authors\"}", "{\"title\": \"Responses to Reviewer V45K (3/3)\", \"comment\": \"# Q1. Clarification on computing KV-caches\\n\\nIn our paper, the KV cache is recalculated for the previous context whenever a drafter switch occurs. However, we found that this recalculation introduces negligible computational overhead, even for relatively long contexts. This is due to the limited cost associated with prefilling the KV cache of a small drafter (e.g., In the case of Eagle drafter, one layer of KV cache is computed for unseen context), which is highly efficient.\\n\\n## Additional Experiments for Efficient KV Cache Strategies\\nTo further validate our framework's efficiency, we additionally experimented with incorporating StreamingLLM [2], which can avoid full KV cache recalculation. The results demonstrate that this approach achieves comparable performance, confirming the robustness of MetaSD:\\n\\n| **Task** | **MetaEagle-UCB (Recomputing KV)** | **MetaEagle-UCB with StreamingLLM** |\\n|---------------|---------------------------------|----------------------------------|\\n| **Code** | 3.724 | 3.624 |\\n| **Translation** | 2.318 | 2.352 |\\n| **Summarization** | 3.057 | 2.986 |\\n| **QA** | 2.641 | 2.654 |\\n| **Math** | 3.520 | 3.338 |\\n\\n### **Key Observations**\\n1. **Minimal Impact of KV Recalculation**: The results show that even with full KV cache recalculation, MetaEagle-UCB maintains strong performance, highlighting that this process introduces minimal computational overhead.\\n \\n2. 
**StreamingLLM Effectiveness**: Incorporating StreamingLLM techniques marginally improves performance in some cases (e.g., Translation and QA), while maintaining a similar overall performance profile. \\n\\n# Q2. Additional baseline\\n\\nWe conducted additional experiments with an One-size-Fits-All (OFA) drafter for both black-box and white-box settings. These OFA drafters were trained on the mixed datasets spanning all tasks, ensuring a direct comparison to the specialized-drafter-based MetaSD framework. **The results in `Table3` and `Table4` of our revised manuscript show that while the OFA drafter performs well, our MetaSD framework outperforms OFA drafter across most tasks, demonstrating the strength of our method.** The OFA Eagle drafter performs relatively well. **However, it is still outperformed by MetaSD (i.e., MetaEagle-UCB).**\\n\\n**References**\\n\\n[1] Leviathan et.al., Fast Inference from Transformers via Speculative Decoding. ICML 2023.\\n\\n[2] EFFICIENT STREAMING LANGUAGE MODELS WITH ATTENTION SINKS. ICLR 2024.\"}", "{\"title\": \"Responses for the relationship between the theory and experiments (2/2) (To Reviewer v7AR)\", \"comment\": \"Dear Reviewer v7AR,\\n\\nThank you again for your detailed feedback. We would like to address the concerns you raised in more detail and clarify our assumptions and theoretical results.\\n\\n## Extrinsic Randomness of Model Alignments\\n\\n**First, we would like to emphasize that even in Greedy Decoding, randomness arises due to uncertainties in the alignment between the target model and the drafters.** Specifically, let $\\\\alpha_{i,t}$ denote the acceptance rate of the $i$-th drafter at the $t$-th speculative round during one decoding instance. 
Even if the target model and drafter are deterministic in their outputs, it is practically impossible to access full information about $\alpha_{i,t}$ (i.e., exact alignment or acceptance rates) in advance.\n\nFor example, \n\n- The number of accepted tokens by a drafter depends on the specific overlap between its output and the target model's sequence at each step, which can vary across tokens.\n\n- Thus, before observing the outcomes, the system naturally treats the alignment as a random variable.\n\nWe model the variation in the number of accepted tokens (or $\alpha_{i,t}$) in each iteration as a random sequence drawn i.i.d. from a stationary distribution with mean $\alpha_i$ **in each decoding instance** (Please note that this does not imply that we assume the i.i.d. property holds across decoding instances). Therefore, our assumption is built from `extrinsic randomness` **arising from modeling the randomness of the alignment between the target model and a drafter**, which is orthogonal to the randomness coming from the stochasticity of the model outputs, which we might call `intrinsic randomness`. \n\n`Assumption 1` and `Assumption 2` are about the `extrinsic randomness`, which models uncertainties in how each drafter is aligned with the target model, and an i.i.d. assumption on the acceptance rates can be made within this scenario. As a result, `Assumption 1` can hold in the Greedy Decoding scenario even if there is no `intrinsic randomness` in the models.\n\n## The Setting of Theorem 2\n\n**Regarding `Theorem 2`, we would like to clarify once more that it does not assume a fixed B for the LLM output.** Instead, Theorem 2 analyzes the regret bound defined in **each instance of decoding**, which is independent of the output of each generation (as explained in the follow-up response to `W3`). \n\nFor example, suppose we conduct 2 decoding procedures (same rule as your new example) and generate 4 tokens. 
Then, the performance metric should measure how well the policy did at picking the best drafter in each decoding instance, which is not related to the correlation between the model output and the target sequence length. Since our regret is analyzed on **each decoding instance**, the first instance (B=2) should be measured by how well the algorithm performs compared to the optimal policy, which simply chooses the best drafter consecutively, and the second instance (B=2) should be measured by the same criterion. With the given $B=2$, the number of rounds $\tau(B)$ is still stochastic under our modeling of the 'extrinsic randomness'. Our stopping regret objective is thus to reduce $\tau(B)$ for each instance ($B=2$ for the first one and $B=2$ for the next one), **which is independent of the results of the target model.** Even if the outputs of the two instances are different (or the same), this does not break the Theorem under `Assumption 1`, as we stated in `Appendix G.6`.\n\n## Independence of decoding instances and Theorem 6\n\nWe would like to once more clarify that **we do not make assumptions about the independence of each decoding output**. Instead, we make assumptions on the independence of the acceptance rates for a given drafter over time for **each instance of decoding**. As stated in the above response (follow-up response to `W4`), **`Theorem 6` is built on our additional assumption that $\alpha_i$ is independent of $B$ and that its conditional expectation given $B$ is the same for every $B$**. With `Assumption 2`, the acceptance rates are i.i.d. conditioned on each B, and therefore `Theorem 6` remains valid under `Assumption 2`.\n\n\nWe will revise our manuscript accordingly to further clarify that **our assumptions are built on the model uncertainty itself (extrinsic)** rather than the randomness of model outputs under temperature sampling (intrinsic). 
Also, we will further clarify the theorems with more explanation of the role of each assumption.\n\n\nWe hope these clarifications address your concerns. Thank you again for your active and constructive feedback, and we are open to further discussion if there are any remaining concerns.\n\nWarm regards,\n\nAuthors.\"}", "{\"comment\": \"Thank you for your thorough and detailed responses. I have no further concerns and have updated my score accordingly.\"}" ] }
5gptKWnVPF
Harnessing Input-adaptive Inference for Efficient Vision-and-Language Navigation
[ "Dongwoo Kang", "Akhil Perincherry", "Zachary Coalson", "Stefan Lee", "Sanghyun Hong" ]
An emerging paradigm in vision-and-language navigation (VLN) is the use of history-aware multi-modal transformer models. Given a language instruction, these models take observation and history as input and predict the most appropriate action for an agent. While employing these models has significantly improved performance, the scale of these models can be a bottleneck in practical settings where computational resources are limited (e.g., in robots). In this work, we present a novel input-adaptive navigation method for efficient VLN. We first characterize the overthinking problem in VLN and show that none of the existing input-adaptive mechanisms successfully reduce overthinking without causing significant performance degradation. Our method addresses this problem by developing three adaptive algorithms deployed at different levels: (1) We develop an adaptive approach that improves spatial efficiency; we only process a subset of panoramic views at each observation of an agent. (2) We also achieve model-level efficiency by developing adaptive thresholding for the early-exit method we employ, based on the importance of each view in navigation. (3) To achieve temporal efficiency, we design a caching mechanism to avoid processing views that an agent has seen before. In evaluations with six VLN benchmark tasks, we demonstrate over a 2$\times$ reduction in computation across two off-the-shelf VLN agents.
[ "Vision-and-Language Navigation", "Input-adaptive Efficient Navigation" ]
Reject
https://openreview.net/pdf?id=5gptKWnVPF
https://openreview.net/forum?id=5gptKWnVPF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zYnV8wMIww", "wYGWn0rGjs", "vPaJhRIWJu", "pi9CoT3cub", "mHb8zw5Vh1", "cWhUgUcyAq", "c4DtU6TxjE", "TGgqrt4TMy", "T1720RTTKY", "SaPN7b35rv", "PnOr4neHHX", "OE8nQR2E57", "KnItxV3BRU", "K9bHVcUJWt", "Jjbr1yfz5I", "GfvLze4WSI", "2M74tARjkG", "12CrqRxi0g" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733186629045, 1732513101585, 1732512763333, 1730550810060, 1733214934688, 1730504984237, 1732512468308, 1732512674660, 1732768652026, 1732512565169, 1734724689831, 1730645083989, 1731149119152, 1732513371331, 1737523882814, 1732512975803, 1732533637756, 1732513529563 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_Xs1i" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_VoXA" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_VoXA" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_vmJ4" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Area_Chair_JS88" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_xson" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_Xs1i" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ], [ "ICLR.cc/2025/Conference/Submission8029/Reviewer_vmJ4" ], [ "ICLR.cc/2025/Conference/Submission8029/Authors" ] ], "structured_content_str": [ "{\"title\": 
\"Revision post rebuttal\", \"comment\": \"I would like to thank the authors for their clarifications and running insightful per technique quantitative analysis with the limited timeframe. The replies have been helpful in better understanding the work and merit of techniques introduced. I have increased my score from 3 to 5. I am however reluctant to offer the paper a higher score as I maintain my doubts regarding the broader impact of the work as a research manuscript.\"}", "{\"title\": \"Response to Reviewer Xs1i\", \"comment\": \"We thank the reviewer for their time and valuable feedback. Below, we provide answers to the concerns and questions. We will make sure to incorporate our answers into the final version of our paper.\\n\\n---\\n\\n> (Weakness 1) The choice of the problem is restrictive (specifically, 36-view panoramas and usage of ViT encoders).\\n\\nWe thank the reviewer for this concern. We first point out that the problem we consider, namely, panorama-based visual environments with agents employing ViTs for encoding, has become the predominant setting for VLN. A majority of recent studies utilize these components [1], [2], [3], [4], [5], with prior work showing navigation via panoramas to be practical in real-world demonstrations [6]. Therefore, while it may seem restrictive, it is well-positioned within the current direction of the field. \\n\\nImportantly, we contribute a set of largely generalizable techniques amongst models currently being developed and deployed. We support this claim by achieving significant computational savings for the HAMT [1] and DUET [2] agents despite their architectural differences. DUET is also the baseline architecture of the current state-of-the-art ScaleVLN [3], the primary difference being scaled training environments, and there is no reason to suspect that such data augmentations would limit our method's transferability. 
Furthermore, as our most effective contribution, k-extension, only requires the usage of panoramas, it may be applied to the broader usage of architectures in the field.\\n\\nAs with any architecture-oriented optimizations, assuring the applicability of our work to future advancements in the VLN architecture or pipeline is challenging. However, our results show that the proposed method is largely transferable and effective within the current state of the field, making it a practical contribution that may influence future work.\\n\\n[1] History-Aware Multimodal Transformer for Vision-and-Language Navigation. Chen et al., NeurIPS 2021.\\n\\n[2] Think Global, Act Local: Dual-Scale Graph Transformer for Vision-And-Language Navigation. Chen et al., CVPR 2022.\\n\\n[3] Scaling Data Generation in Vision-and-Language Navigation. Wang et al., ICCV 2023.\\n\\n[4] A New Path: Scaling Vision-and-Language Navigation With Synthetic Instructions and Imitation Learning. Kamath et al., CVPR 2023.\\n\\n[5] VLN BERT: A Recurrent Vision-and-Language BERT for Navigation. Hong et al., CVPR 2021.\\n\\n[6] Sim-to-Real Transfer for Vision-and-Language Navigation. Anderson et al., CoRL 2020.\\n\\n---\\n\\n> (Weakness 2) Given the limited applicability context, one would expect a highly specified solution, yet the protocol proposed is quite simple and feels like it could be refined further.\\n\\nWe first highlight that our proposed solution achieves strong, generalizable results that are well within the ideal set by existing input-adaptive work [1], [2], [3]. Regardless of simplicity, this suggests it is a valuable contribution to the field that may enable future work in this direction.\\n\\nWe additionally argue that the proposed solution is specialized. Our spatial locality technique, k-extension, was designed specifically for panorama-based image navigation in which there is a distinction of \\u2018spatial importance\\u2019 relative to navigable views. 
We also found existing early-exit solutions (i.e., MuE [1]) insufficient, leading us to design one that is more specialized. To our knowledge, our locality-sensitive hashing algorithm is the first such caching technique employed to input-adaptive DNNs, which is only applicable in problem domains where a model encounters repetitive inputs. Broadly speaking, the high-level insights our techniques are built on\\u2014leveraging spatial and temporal locality\\u2014are largely unique and specialized to the current VLN setting.\\n\\nNevertheless, we value the reviewer\\u2019s critique and acknowledge the potential for improvement. Further refinement of the early-exiting mechanism and method for selecting views to process are indeed interesting directions that can lead to future solutions. For example, the early-exiting criteria we apply to the ViT (responsible for current observations) is only a function of its activations. However, several other components influence the agent\\u2019s navigation (e.g., observation history and language), suggesting there may be alternate criteria that leverage the agent\\u2019s entire activation space to determine early exit points. Given the success of the proposed method, we leave these potential explorations as future work and will highlight them in the discussion section of our final paper.\\n\\n[1] You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model. Tang et al., CVPR 2023.\\n\\n[2] DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference. Xin et al., ACL 2020.\\n\\n[3] BERT Loses Patience: Fast and Robust Inference With Early Exit. Zhou et al., NeurIPS 2020.\"}", "{\"title\": \"Response to Reviewer VoXA (cont)\", \"comment\": \"> (Weakness 2) The experiments were conducted in limited settings.\\n\\nWe appreciate the reviewer\\u2019s concern. 
Here, we first explain our reasoning for using the VLN agents HAMT [1] and DUET [2] and then discuss the efficacy of our method on different benchmarks with varying path lengths.\\n\\n**Usage of HAMT and DUET in the Evaluation**\\n\\nWe selected the HAMT [1] and DUET [2] agents to evaluate our proposed method because they are largely representative works. These agents are frequently used in recent work, and DUET is the baseline architecture of the current state-of-the-art ScaleVLN [3]. The primary difference between DUET and ScaleVLN is that ScaleVLN uses scaled training environments. There is no reason to suspect data augmentation has a substantially negative effect when we apply our speed-up techniques. We also appreciate the reviewer\\u2019s suggestion of using MARVAL [4], but unfortunately, to our knowledge, it is not publicly available.\\n\\nAs all recent VLN agents [1], [2], [3], [4], [5] leverage similar Transformer architectures, particularly a computationally expensive ViT for image processing, we reason that the proposed method is largely generalizable. Our main evaluation (Section 5.1) supports this claim, where we achieve substantial computational efficiency on both HAMT and DUET despite their architectural differences.\\n\\n**Evaluating Other Benchmarks and the Robustness of Our Method to Navigation Length**\\n\\nWe first note that we complement the main experimental results on R2R (Table 2 in Section 5.1) with several other benchmarks (R2R-Back, R2R-Last, REVERIE, CVDN, and SOON) in Appendix D (results in Table 7). However, we originally did not consider the robustness of our proposed techniques to path length, and we thank the reviewer for highlighting it. Here, we study if the errors introduced by our techniques propagate to longer path lengths. 
For several benchmarks, we report the average path length (measured as the minimal number of navigation actions needed to reach the target destination), change in navigation error (average distance of the agent\\u2019s final position to the target position), and change in GFLOPs, compared to the base model. Results are shown in the table below.\\n\\n| Agent | Benchmark | Average Path Length | Change in Navigation Error | Change in GFLOPs |\\n|-------|:-----------:|:---------------------:|:----------------------------:|:------------------:|\\n| HAMT | R2R \\t| 6.0 \\t| +0.53 \\t| -2845.63 \\t|\\n| \\t| R2R-Last | 6.0 \\t| +0.45 \\t| -2393.24 \\t|\\n| \\t| R2R-Back | 12.0 \\t| +0.54 \\t| -5463.98 \\t|\\n| DUET | R2R \\t| 6.0 \\t| +0.68 \\t| -2971.7 \\t|\\n| \\t| SOON \\t| 9.6 \\t| -0.44 \\t| -5463.98 \\t|\\n\\nWe find our method is robust to longer path lengths. The change in navigation error does not increase, and we even see a decrease for the SOON benchmark, which has a path length noticeably longer than R2R. The results also show that for longer paths, our efficient VLN agent sees roughly proportional computational savings. For example, the average path length in R2R-Back is double R2R, and we achieve a 1.92x larger reduction in GFLOPs for the HAMT agent.\\n\\nWe agree with the reviewer that a deeper analysis of each proposed technique's robustness to path length would be insightful. Unfortunately, we are unable to address this within the rebuttal period. However, given our results, we do not suspect substantial discrepancies between the individual techniques and leave further investigation as future work.\\n\\nWe hope these results address the reviewer\\u2019s concerns, and we will update our final paper to include this valuable discussion.\\n\\n[1] History Aware Multimodal Transformer for Vision-and-Language Navigation. Chen et al., NeurIPS 2021.\\n\\n[2] Think Global, Act Local: Dual-Scale Graph Transformer for Vision-And-Language Navigation. 
Chen et al., CVPR 2022.\\n\\n[3] Scaling Data Generation in Vision-and-Language Navigation. Wang et al., ICCV 2023.\\n\\n[4] A New Path: Scaling Vision-and-Language Navigation With Synthetic Instructions and Imitation Learning. Kamath et al., CVPR 2023.\\n\\n[5] VLN BERT: A Recurrent Vision-and-Language BERT for Navigation. Hong et al., CVPR 2021.\"}", "{\"summary\": \"This paper presents three methods that improves the computational efficiency for vision-and-language navigation (VLN), especially in the context of visual input processing. The goal of proposed methods is selecting a subset of 36 panoramic views while maintaining the overall VLN performance. Experimental results show that the proposed algorithms (early-exit, spatial locality, and temporal locality) improve the computational efficiency in several VLN benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper addresses the critical problem in VLN when deployed in real-world applications.\", \"The paper is well-written and easy to understand.\"], \"weaknesses\": [\"The paper highlights the practical aspects of VLN when deployed in real-world environments. Ironically, the proposed methods lack practicality in their current form.\", \"The method regarding the spatial locality requires \\\"navigable points\\\" in the agent's current position. However, in real-world environments, no navigable points are given to VLN agents. This indicates that we can't directly apply this method to the real-world VLN scenario. How can we apply the spatial locality method when navigable points are not given? One way to address this concern is to investigate the results by employing the waypoint predictor. In the task of vision-and-language navigation in continuous environments (VLN-CE), the work [3] has trained the waypoint predictor that predicts several navigable points in each time step. 
It would be interesting to see the comparison when given the estimated navigable points and the ground-truth (i.e., known) navigable points.\", \"Another concern is about the temporal locality method. The method improves computational efficiency, but it also increases the space complexity of the VLN system. This can be a critical problem when VLN agents must perform long-horizon navigation. I think one way to address this concern is to perform a quantitative analysis of the space-time tradeoffs, especially for longer navigation tasks. In this case, what should be a metric for the memory usage?\", \"Experiments were conducted in limited settings. Most experiments are conducted on the R2R task, and the two VLN models in the paper (HAMT and DUET) are quite outdated. I request the authors to investigate the effect of the three proposed methods in longer-horizon VLN tasks (e.g., RxR) with more recent state-of-the-art models (e.g., ScaleVLN [1] or MARVAL [2]). More specifically,\", \"The task length of the R2R task is relatively short, so I have concerns that the performance drop from applying the three proposed methods will become more prominent as the task length increases. Can the authors report how performance and computational efficiency vary with task length? I recommend investigating the RxR task since this benchmark has longer navigation lengths. Furthermore, comparing the contribution of each method in longer navigation tasks would be another interesting analysis, since it clearly shows which methods are robust to task length.\", \"References\", \"[1] Scaling data generation in vision-and-language navigation. Wang et al., ICCV 2023.\", \"[2] A new path: Scaling vision-and-language navigation with synthetic instructions and imitation learning. Kamath et al., CVPR 2023.\", \"[3] Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation. 
Hong et al., CVPR 2022.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer VoXA\", \"comment\": \"I would like to thank the authors for devoting their time to address my concerns.\\n\\nI carefully read the author's response, as well as the rebuttal with other reviewers. Unfortunately, the response does not fully address my concerns, as it relies on an educated guess (no navigable points) or preliminary analyses (e.g., space-time tradeoffs and robustness analysis). I believe that I am not ready to improve my ratings at this time.\"}", "{\"summary\": \"This paper proposes an input-adaptive navigation method for efficient vision-and-language navigation (VLN) that addresses the overthinking problem common in history-aware multi-modal transformer models. The authors develop three adaptive mechanisms to improve spatial, model-level, and temporal efficiency.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation of this work is strong, as efficiency is indeed a critical issue to address in VLN.\", \"weaknesses\": \"1. Although the paper frequently emphasizes efficiency and mentions potential deployment on robots, it only provides GFLOPs as a metric. This is insufficient; additional metrics like model size and inference time should be included.\\n2. The approach reduces computation by limiting the number of panoramic views, but this may not address an actual issue. VLN image features are typically pre-extracted offline, eliminating the need for recalculating visual features during navigation. Moreover, recent methods often select candidate views rather than processing all 36 panoramic views, making the proposed solution potentially less relevant.\\n3. 
The authors appear to lack familiarity with VLN, as related work only covers pre-2018 studies, and the experiments compare only with HAMT (NeurIPS 2021) and DUET (CVPR 2023).\\n4. While GFLOPs are reduced by half, the performance degradation is significant, which may make the results difficult to justify as acceptable.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer vmj4\", \"comment\": \"We thank the reviewer for their time. However, we find several points made to be factually incorrect or subjective. Below, we address the questions and concerns.\\n\\n---\\n\\n> (Weakness 1) Although the paper frequently emphasizes efficiency and mentions potential deployment on robots, it only provides GFLOPs as a metric. This is insufficient; additional metrics like model size and inference time should be included.\\n\\nWe argue that GFLOPs is a sufficient metric for measuring efficiency, particularly model-level speed-up. The number of GFLOPs needed to process an input is proportional to the inference time. We choose it over inference time because (1) it is agnostic to the hardware configuration used for evaluation and (2) it is less prone to measurement errors. To contextualize our performance with respect to wall-clock time, we provide further results in Table 8 in Appendix D. Finally, adding model size as an efficiency metric is inappropriate in this context since our methods do not alter parameter count. We point the reviewer to more relevant studies on this topic [1], [2], which we consider orthogonal to our work.\\n\\n[1] EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-Adaptive Pruning. Wang et al., arXiv Preprint 2023.\\n\\n[2] P4Q: Learning to Prompt for Quantization in Visual-language Models. 
Sun et al., arXiv Preprint 2024.\\n\\n---\\n\\n> (Weakness 2) VLN image features are typically pre-extracted offline, eliminating the need for recalculating visual features during navigation.\\n\\nThis is not possible in a practical setting. It would be intractable for a VLN agent to pre-extract every possible image it may encounter in deployment; pre-extraction is rather an optimization tool to speed up evaluation within static datasets of simulation environments. Our proposed method offers computational savings in practical settings where pre-extraction is not an option and images must be processed.\\n\\n---\\n\\n> (Weakness 3) Recent methods often select candidate views rather than processing all 36 panoramic views, making the proposed solution potentially less relevant.\\n\\nThis is not correct. While candidate views form the agent's decision space, the agent\\u2019s visual encoder still processes all the views in today\\u2019s typical agents [1], [2], [3], [4], [5]. They are not ignored and rather convey potentially useful spatial and semantic priors relevant to navigation. Our proposed solution leverages the insight that not all non-navigable (non-candidate) views provide the same amount of relevant information, allowing us to dynamically allocate less compute to them while mitigating performance degradation.\\n\\n[1] VLN BERT: A Recurrent Vision-and-Language BERT for Navigation. Hong et al., CVPR 2021.\\n\\n[2] History-Aware Multimodal Transformer for Vision-and-Language Navigation. Chen et al., NeurIPS 2021.\\n\\n[3] Think Global, Act Local: Dual-Scale Graph Transformer for Vision-and-Language Navigation. Chen et al., CVPR 2022.\\n\\n[4] Scaling Data Generation in Vision-and-Language Navigation. Wang et al., ICCV 2023.\\n\\n[5] A New Path: Scaling Vision-and-Language Navigation With Synthetic Instructions and Imitation Learning. 
Kamath et al., CVPR 2023.\"}", "{\"title\": \"Response to Reviewer VoXA\", \"comment\": \"We thank the reviewer for their time and valuable feedback. Below, we provide answers to the questions and concerns. We will include this discussion in the final version of our paper.\\n\\n---\\n\\n> (Weakness 1) The proposed methods lack practicality in real-world environments.\\n\\nWe appreciate the reviewer's concern with the practicality of our techniques in real-world environments. Here, we address each primary point made by the reviewer individually.\\n\\n**No Navigable Points are Given to VLN Agents in Real-World Environments**\\n\\nThe lack of navigable viewpoints in the real world indeed prevents the immediate deployment of our proposed technique. This is a substantial challenge to VLN as a whole. As the reviewer suggests, using our spatial locality methods in continuous/real-world environments would require predicting navigable views, for which prior work provides several demonstrations [1], [2], [3], [4]. Assuming we do not modify this prediction process, we suspect our proposed techniques would remain effective as the insights of spatial locality still apply. This is reasonable when considering the advancements of these waypoint prediction methods, the most recent of which [4] achieves a considerable SR of 57 on the R2R-CE benchmark (with the closest work [3] achieving an SR of 44).\\n\\nUnfortunately, incompatibility between discrete VLN benchmarks and existing waypoint prediction methods and the non-trivial task of transferring discrete agents to continuous environments makes this hypothesis challenging to empirically evaluate. We leave the study of efficient VLN in continuous environments as future work and thank the reviewer for raising this issue.\\n\\n[1] Sim-to-Real Transfer for Vision-and-Language Navigation. Anderson et al., CoRL 2020.\\n\\n[2] Sim-2-Sim Transfer for Vision-and-Language Navigation in Continuous Environments. 
Krantz and Lee, ECCV 2022.\\n\\n[3] Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation. Hong et al., CVPR 2022.\\n\\n[4] ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments. An et al., TPAMI 2024.\\n\\n**Space Complexity of Locality Sensitive Hashing**\\n\\nThe reviewer is correct to point out that our locality-sensitive hashing (LSH) technique incurs a storage overhead, which we regrettably do not highlight in our current evaluation. Here, we provide some information regarding this overhead and how it impacts the deployment of the technique in practice.\\n\\nThe worst-case scenario for LSH is that the agent caches every image it encounters on a long navigation route. In our experiments, we approximate the overhead of this scenario to total approximately 522.7 MB. Specifically, our LSH technique stores pairs of images and embeddings. In the benchmarks we consider, these images are of size 3x224x224. The embedding size depends on the model, which for HAMT and DUET is 197x768 (the number of ViT patches times the model\\u2019s hidden dimension). These are stored in full-precision floating-point format (32 bits per value), resulting in (3 * 224 * 224 + 197 * 768) * 32 bits of storage per cached pair, approximately 1.2 MB. In our experiments, the longest navigation route was 12 steps (from R2R-Back), and if we assume all 36 images per panorama are cached, we obtain the worst-case overhead. In practice, however, we find that most tasks are 5\\u20137 steps, and we cache roughly 14 images per step, producing a more typical overhead of 84.7\\u2013118.6 MB. Considering that modern VLN agents are orders of magnitude larger, this is not a limiting factor to practical deployment.\\n\\nAdditionally, our mechanism is easily adaptable to storage overhead requirements. 
By limiting the number of cached view-embedding pairs, one can ensure that the storage overhead does not exceed a set threshold. In our work, we consider the best-case functioning of the LSH algorithm such that an upper bound can be identified and compared to our other techniques. We leave the exploration of storage-efficient hashing mechanisms as future work.\\n\\nPlacing this discussion in the context of our entire approach, the LSH mechanism provides a noticeable but substantially smaller speed-up compared to k-extensions and early-exiting (~2%). If needed, an aggressive storage overhead limit can be placed without significantly impacting the computational savings of our method. We thank the reviewer for highlighting this limitation and will include it in our final paper.\"}", "{\"title\": \"Response to Reviewer xson (update)\", \"comment\": \"**(Follow-up comment)** We provide a per-mechanism robustness analysis by running all techniques individually on the \\u201cMotion Blur\\u201d and \\u201cLow Lighting\\u201d corruptions, chosen based on their varying impact on performance (detailed in Section 5.3) and the likelihood of occurring in real-world VLN systems. 
We use the same metrics from Section 3 and report results on the HAMT agent on the R2R benchmark.\\n\\n| Corruption | Method | TL | OSR | SR | SPL | GFLOPs |\\n|:------------:|---------------|:-----:|:-----:|:-----:|:-----:|:-------:|\\n| Low Lighting | None (Base) | 12.15 | 71.31 | 62.58 | 57.23 | 4903.06 |\\n| | k-extension | 13.86 | 71.14 | 57.34 | 50.78 | 2571.06 |\\n| | thresholds | 13.63 | 70.29 | 58.79 | 52.16 | 4099.21 |\\n| | LSH | 12.95 | 71.43 | 61.47 | 55.19 | 2444.05 |\\n| Motion Blur | None (Base) | 12.41 | 68.20 | 59.13 | 54.01 | 4996.64 |\\n| | k-extension | 14.03 | 65.13 | 53.77 | 48.01 | 2588.06 |\\n| | thresholds | 13.81 | 68.20 | 57.51 | 51.05 | 4073.04 |\\n| | LSH | 12.39 | 68.03 | 59.30 | 54.04 | 4030.52 |\\n\\nOur individual mechanisms appear more robust to Low Lighting than Motion Blur, which corroborates our findings in Section 5.3. Early-exiting appears slightly more robust than k-extension, achieving a 2\\u20137% higher SR, which makes sense as it processes strictly more images. Interestingly, LSH functions extremely well when Low Lighting is applied. It offers a 49% reduction in GFLOPs with only a 1-percentage-point drop in SR. We hypothesize that the reduced lighting makes more images similar, causing our algorithm to find more matches and reuse more embeddings. For Motion Blur, LSH is less successful, being more robust than our other mechanisms but with minimal computational savings. 
For our full results and discussion, please see Appendix E.\\n\\nWe thank the reviewer for suggesting this insightful analysis and hope our results address any remaining concerns.\"}", "{\"title\": \"Response to Reviewer vmj4 (cont)\", \"comment\": \"> (Weakness 4) The authors appear to lack familiarity with VLN, as related work only covers pre-2018 studies, and the experiments compare only with HAMT (NeurIPS 2021) and DUET (CVPR 2023).\\n\\nThe studies in the related work and models considered in our evaluation were carefully selected and not because we \\u201clack familiarity with VLN.\\u201d Here, we justify our decisions regarding both.\\n\\n**Related Work**\\n\\nOur related work includes several more recent (post-2018) studies [1], [2], [3]. Our aim in this section was to provide work similar to our research (particularly the HAMT [2] and DUET [3] agents) rather than a complete overview of the VLN field. However, we acknowledge that the related work section would benefit from including more recent research, e.g., newer discrete VLN agents like ScaleVLN [4] (which we originally cited outside of the related work) and MARVAL [5], and will update the section accordingly.\\n\\n**Models Used in the Experimental Evaluation**\\n\\nWe believe our selection of the HAMT and DUET agents enabled a representative evaluation. These agents are frequently used in other works, and DUET is the baseline architecture of the current state-of-the-art ScaleVLN [4]. Provided that most recent VLN agents [1], [2], [3], [4], [5] leverage similar Transformer-based architectures, we reason that our proposed method is largely generalizable. This is particularly true because the usage of a ViT for image encoding is consistent across all modern VLN agents, which we identify as the computational bottleneck in Section 4.1. All of our proposed techniques reduce the computations of this component, and there is no reason to believe that transferring them to other agents would be less effective. 
Our results support this claim. HAMT and DUET have substantially different architectures, but we show in the main evaluation (Section 5.1 and supplemented in Appendix D) that we achieve considerable computational savings for both.\\n\\n[1] VLN BERT: A Recurrent Vision-and-Language BERT for Navigation. Hong et al., CVPR 2021.\\n\\n[2] History-Aware Multimodal Transformer for Vision-and-Language Navigation. Chen et al., NeurIPS 2021.\\n\\n[3] Think Global, Act Local: Dual-Scale Graph Transformer for Vision-and-Language Navigation. Chen et al., CVPR 2022.\\n\\n[4] Scaling Data Generation in Vision-and-Language Navigation. Wang et al., ICCV 2023.\\n\\n[5] A New Path: Scaling Vision-and-Language Navigation With Synthetic Instructions and Imitation Learning. Kamath et al., CVPR 2023.\\n\\n---\\n\\n> (Weakness 5) While GFLOPs are reduced by half, the performance degradation is significant, which may make the results difficult to justify as acceptable.\\n\\nWe find this remark highly subjective. In our main evaluation (Section 5.1), we report an approximate 60% computational savings (2.5x speed up) with a 10% reduction in performance (SR) across most tasks. This performance drop is comparable to existing work in the input-adaptive domain [2], [3]. Furthermore, results from Table 2 indicate that we achieve ~61 SR with HAMT while providing over a 2x reduction in computations. This is a respectable performance and would have been close to SOTA (RecBERT [1]) until HAMT in 2021.\\n\\nMore importantly, our contribution is not a fixed computational speed-up at a fixed performance degradation rate but rather a tunable tradeoff between computational and accuracy requirements. For example, in Table 3, we show that SR can be recovered from ~61\\u201363 while maintaining a ~2x speed up by increasing the number of k-extensions from 4\\u20136. We further discuss this trade-off in Appendix G. 
Our main evaluation presents the best tradeoff between accuracy and efficiency, but the proposed solution is dynamic by design.\\n\\n[1] VLN BERT: A Recurrent Vision-and-Language BERT for Navigation. Hong et al., CVPR 2021.\\n\\n[2] DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference. Xin et al., ACL 2020.\\n\\n[3] BERT Loses Patience: Fast and Robust Inference With Early Exit. Zhou et al., NeurIPS 2020.\"}", "{\"metareview\": [\"(a) This paper proposes an input-adaptive navigation method to improve the computational efficiency of Vision-and-Language Navigation (VLN) agents, particularly those based on multi-modal transformer models. The authors argue that existing VLN agents suffer from \\\"overthinking,\\\" performing excessive computations on less relevant inputs. To address this, they focus on exploiting the spatio-temporal localities unique to VLN tasks: (1) The spatial locality (reducing the number of navigable views and a few neighboring views that the encoder should process); (2) The temporal locality (reducing compute on identical or nearly identical views in consecutive navigation steps); (3) developing an algorithm for dynamically adapting the thresholds for early-exit.\", \"(b) Strengths:\", \"Clearly identified problem and motivation: The paper clearly identifies the problem of computational inefficiency in VLN agents and provides a strong motivation for addressing it.\", \"Novel and well-designed method: The proposed combination of adaptive mechanisms is novel and well-suited to the specific characteristics of VLN tasks, effectively leveraging spatial and temporal redundancies in the input data.\", \"Comprehensive evaluation and analysis.\", \"(c) Weaknesses:\", \"Limited real-world applicability: The reliance on environment-defined \\\"navigable views\\\" limits the applicability of the method in real-world environments where such information is not readily available. 
Although the authors argued in the rebuttal that prior methods have shown that \\\"navigable views\\\" can be predicted in the real world, they did not present such experiments.\", \"No real experiments: Despite the emphasis on real-world applicability, the lack of experiments on real robots or in continuous environments weakens the paper's claims.\", \"(d) The decision is to reject the paper, mostly for the lack of real-world validation or proofs of transferability.\"], \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, several key points were raised by reviewers:\\n\\nReviewers VoXA and vmj4 expressed concerns about the paper's emphasis on real-world applicability while the experiments were conducted solely in a simulator. This remained an issue throughout the rebuttal.\\n\\nReviewers Xs1i and xson suggested clarifications and additional analysis. Reviewer Xs1i requested a clearer definition of \\\"navigable view\\\" and a more detailed explanation of the failure of vanilla MuE. Reviewer xson suggested a more detailed breakdown of the robustness analysis under different corruptions and a sensitivity analysis of the caching mechanism. The authors responded positively to these requests, providing detailed explanations and promising to incorporate them into the final version of the paper.\\n\\nThe authors were responsive to the reviewers' comments and provided reasonable justifications for their design choices and experimental setup. They also demonstrated a willingness to improve the paper's clarity and include additional analysis. However, concerns regarding the real-world applicability persisted. 
\\n\\nWhile the paper presents an interesting approach to improving the efficiency of VLN agents, I think it can be improved with validation in real-world scenarios.\"}", "{\"summary\": \"This paper proposes an efficient input-adaptive inference method for vision-and-language navigation (VLN) to address computational \\u201coverthinking,\\u201d where models perform excessive calculations unnecessarily. The approach combines three adaptive strategies: selective processing of critical views (spatial), dynamic thresholding for early exits (model-level), and caching of repeated views (temporal). Evaluated on six VLN benchmarks, this method achieves substantial computational savings\\u2014up to 60%\\u2014with minimal performance loss, offering a practical solution for improving VLN efficiency in resource-constrained settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces an innovative approach to enhancing efficiency in vision-and-language navigation (VLN) by addressing \\\"overthinking\\\"\\u2014the unnecessary computation in model decision-making. The authors propose a unique combination of adaptive methods across spatial, model-level, and temporal dimensions. This combination is well-suited to VLN, where inputs are sequential and often contain redundancies.\\n2. The method is well-designed, clearly breaking down the adaptive components into three strategies: spatial locality, model-level efficiency, and temporal locality. The paper is generally well-written.\\n3. The proposed method has practical implications for VLN tasks in resource-constrained environments, such as robotic navigation. By demonstrating a 60% reduction in computation with minimal impact on performance, the work provides a valuable solution for deploying VLN agents in real-world scenarios where computational resources are limited.\", \"weaknesses\": \"1. 
While the paper effectively introduces adaptive mechanisms (spatial, model-level, and temporal), there is limited theoretical analysis of each mechanism's impact on VLN performance and generalization. For example, the paper could explore how the choice of k (in the k-extension) or the aggressiveness of the adaptive thresholding influences model stability and long-term performance in complex environments.\\n2. Some formulae and symbols, such as those in the k-extension and thresholding sections, could benefit from additional clarification and consistent notation to improve readability. For instance, consistently defining parameters and variables at the start of each section would reduce potential confusion.\\n3. The paper provides a robustness analysis under various visual corruptions, but it treats the proposed method as a single unit. A more detailed breakdown of how each mechanism (spatial, model-level, temporal) contributes to robustness under different corruptions (e.g., which mechanism is most affected by motion blur or low lighting) would offer valuable insights.\\n4. The caching mechanism, which leverages locality-sensitive hashing (LSH) to avoid recomputing similar views, is a valuable addition. However, the paper does not fully explore how this caching performs under different levels of scene variability or how effective it is when environmental conditions shift significantly (e.g., new objects or layouts). 
A more detailed sensitivity analysis on the caching's impact across different types of VLN tasks or environmental settings would help establish a clearer understanding of when this mechanism is most beneficial.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work proposes a mixture of techniques leveraging spatial significance and temporal continuity as well as an adaptation of the MuE early exit protocol to alleviate the compute burden of VLN models. The authors show an average reduction by over half of GFLOPs to run the models with drops in success rate performance within 10% of the full model.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well motivated and shows how providing models with better data design, i.e. doing some signal extraction amongst the input noise, can substantially help model efficiency.\\n\\nThe main strength of the work is its clarity in the design of solutions. \\n- Each of the spatial consideration, temporal hashing and adaptive early exit thresholding are intuitive and simple.\\n- The combination of the three ingredients is straightforward (for the most part see questions) \\n\\nThe paper's numerical results are comprehensive and consider a variety of relevant benchmarks, showing the merit of the technique. 
The numerical results obtained are good, proving not only the soundness but also the impact of adaptive input data curation.\", \"weaknesses\": \"The paper tackles a specific setting, that of VLNs, but more specifically a data pipeline that consists of panorama views (36 precisely) and focuses on simple ways to filter out unnecessary noise in the data and reduce superfluous ViT computations in the image encoder.\\n- This choice of problem is, in my view, a bit restrictive; although VLNs are popular models and have attracted a lot of work and attention, they are constantly evolving. Hence, focusing on the 36-image panorama as well as limiting the application to ViT encoders leaves little room for application to variants (depth or multimodal sensing, state-space models, and so on).\\n- Given the limited applicability context, one would expect a highly specialized solution, yet the protocol proposed is quite simple and feels like it could be refined further (additional work on customizing the adaptive MuE perhaps, more specific selection of neighboring views with higher useful information content, and so on).\\n\\nIn other words, the setting is not large enough for simple techniques to prove their merit in terms of universality and practicality. Also, the techniques are not tailored enough to the specific setting to maximize the potential gains of the approach and achieve maximal efficiency and performance.\\n\\nIt is unclear what the significance of the result is in practice. The authors express their hope that their \\\"results will inspire future research on developing efficient navigation methods and their deployment in real-world VLN settings.\\\" The gain of a factor of 2 in GFLOPs is not a gain of orders of magnitude, and some VLN models are already running on board larger hardware in the real world. 
The work does not reduce the size of the models (compression) or provide gains large enough to enable such large models to run on edge devices.\\n\\nAlong these lines, the paper would be much improved if the authors showed real-world experiments where, in practice, the 2x gain offers behavior gains that are desirable from the human user's point of view.\\n\\nAt the clarity level, although the paper is globally well written and easy to follow, some concepts and parts of the protocol are brushed over and require more precision (see questions). Also, a big gap is a per-technique analysis, i.e., how much of the gain relies on masking non-useful frames vs. time-similarity hashing vs. adaptive MuE, as well as combinations (3 individual, and 3 combinations of 2 out of 3 techniques, to compare to the 3 simultaneously). Slight adjustments to the techniques should allow a meaningful comparison along these lines.\", \"questions\": [\"What is a navigable view? This concept, central to the filtering of frames as well as the structure of the solution, is repeatedly used without being rigorously introduced.\", \"How are navigable views identified amongst the 36?\", \"Lines 201-209 offer a very vague justification of the reasons behind the failure of vanilla MuE. The authors just acknowledge differences in behavior after a couple of steps and conclude \\\"Processing fewer transformer layers can lead to an inaccurate understanding of the visual surroundings. As shown in the bottom-right figures, while the bathroom is consistently visible across multiple steps (t \\u2208 [2, 10]) in the panorama, the MuE agent fails to recognize it and makes sub-optimal decisions at each navigation step.\\\"\", \"Why does it fail to recognize it? Which views are exiting early? The useful ones? With only 36 views, one might be able to look into the model to get more insight.\", \"How does the budgeted batch inference (introduced line 284) work?
It might be a method from a cited paper, but it seems like a crucial underlying piece of the puzzle and might require additional explanation.\", \"Algorithm 1 shows that navigable views undergo a separate treatment from proximity views. If I understand this correctly, navigable views go through the ViT with no early exit or time similarity checks. There is thus no compute gain for these, and the gains are only made on masked views (spatial, no compute at all) and proximity views (both hashing and early exit). Can the authors confirm that this is indeed the case?\", \"Whether affirmative or not, this would require, at least at my level of understanding, a bit of clarification in the main text, as the algorithm notes are not elucidating enough regarding these crucial aspects of the solution.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Xs1i (cont)\", \"comment\": \"> (Weakness 3) The work does not reduce the size of the models (compression) or provide gains large enough to enable such large models to run on edge devices.\\n\\nThe focus of our work is reducing the latency of VLN agents, specifically the time needed to process inputs (visual scene and instruction) and make a decision. While this does not directly enable the deployment of models on edge devices, latency remains a major obstacle to the practical deployment of VLN agents. This latency defines every human-agent interaction and is crucial in establishing the trust and usefulness of these systems. Prior work finds that humans expect a processing latency of no more than 2\\u20134 seconds when interacting with a robot [1]. However, the HAMT agent we study takes ~7 seconds per decision while running on a powerful modern desktop. Clearly, modern VLN agents are far slower than ideal. 
\\n\\nHowever, with the speed-up techniques proposed in this work, the latency associated with \\u2018overthinking\\u2019 can be cut roughly in half (~3.5 seconds in the prior scenario). Our techniques are also composable with orthogonal model efficiency techniques, e.g., pruning and compression, enabling even greater speed-up that approaches human standards. This addresses a key bottleneck in the practical deployment of VLN agents to real-world applications, demonstrating the merit of our work. \\n\\n[1] Managing delays in human-robot interaction. Pelikan and Hofstetter, ACM Transactions on Computer-Human Interaction, Volume 30, Issue 4, 2023.\\n\\n---\\n\\n> (Weakness 4) Lack of a per-technique analysis.\\n\\nWe agree with the reviewer that a per-technique analysis would provide valuable insights beyond those originally presented in the paper. In Section 5.1, we focus primarily on the impact of k-extensions\\u2014finding it offers the best speed-up\\u2014and study the benefit of adding early-exiting and locality-sensitive hashing (LSH). This covers most possible combinations (4/7) but, as the reviewer correctly points out, does not elucidate the individual capabilities of each method or combinations of them.\\n\\nHere, we provide additional results covering the three remaining combinations: early-exiting (thresholds), LSH, and early-exiting with locality-sensitive hashing. We report the same metrics used in our main evaluation (detailed in Section 3) for the HAMT agent on the R2R benchmark in the table below. 
We include the original k-extension results from Table 2 to compare all 7 combinations of the proposed speed-up techniques.\\n\\n| Method | TL | OSR | SR | SPL | GFLOPs |\\n|------------------------|:-----:|:-----:|:-----:|:-----:|:-------:|\\n| k-extension | 12.52 | 71.86 | 61.30 | 55.79 | 2408.99 |\\n| thresholds | 12.33 | 72.46 | 62.62 | 57.39 | 3867.46 |\\n| LSH | 11.53 | 74.20 | 66.11 | 61.47 | 3,894.76 |\\n| k-extension+LSH | 12.52 | 71.90 | 61.17 | 55.63 | 2013.48 |\\n| k-extension+thresholds | 12.89 | 71.95 | 60.41 | 54.57 | 2294.23 |\\n| thresholds+LSH | 12.33 | 72.41 | 62.49 | 57.33 | 3190.66 |\\n| All | 12.87 | 71.95 | 60.41 | 54.5 | 1917.61 |\\n\\nOur results show that between individual methods, k-extension offers the most speed-up with a minimal performance drop. This is intuitive, as zeroing out even a few views will be more efficient than processing them through a fraction of the ViT\\u2019s layers or hashing a smaller subset. LSH offers the most performance, as the hashed and reused embeddings tend to be near-identical.\\n\\nThe combination we do not present in the original paper, thresholds+LSH, provides slightly better performance than methods using k-extension but at the cost of significantly more compute. This suggests that retaining and partially processing/reusing more non-navigable views mitigates performance drop more than k-extension but is not nearly as efficient. We have added these results and a complete discussion to Appendix E.\\n\\nWe hope these results address the reviewer's concerns and are happy to answer any more comments/questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer xson\", \"comment\": \"We thank the reviewer for their time and valuable feedback. Below, we provide answers to the questions and concerns. 
We will include this discussion in the final version of our paper.\\n\\n---\\n\\n> (Weakness 1) There is limited theoretical analysis of each mechanism's impact on VLN performance and generalization.\\n\\nWe agree that independently analyzing the effectiveness of each proposed technique is very important. In Section 5.2, we provide an initial discussion by exploring the impact of k (the number of extended non-navigable views) and the early exit thresholds on navigation success. We also consider how using different similarity metrics for hashing images impacts our locality-sensitive hashing (LSH) technique (with further discussion and results in Appendix F).\\n\\nTo address the reviewer's concerns further, we report the results of deploying each mechanism (k-extensions, early-exiting, and LSH) independently. Below, we show the same metrics detailed in Section 3 for the HAMT agent on the R2R benchmark.\\n\\n| Method | TL | OSR | SR | SPL | GFLOPs |\\n|------------------------|:-----:|:-----:|:-----:|:-----:|:-------:|\\n| k-extension | 12.52 | 71.86 | 61.30 | 55.79 | 2408.99 |\\n| thresholds | 12.33 | 72.46 | 62.62 | 57.39 | 3867.46 |\\n| LSH | 11.53 | 74.20 | 66.11 | 61.47 | 3,894.76 |\\n\\nThe results show that k-extension offers substantially more speed-up than early-exiting (thresholds) and LSH. This makes sense, as zeroing out even a few views will be more efficient than processing them through a fraction of the ViT\\u2019s layers, and LSH only hashes and reuses a small subset of views. It also only degrades performance slightly more than early-exiting, validating our insight that views spatially distant from the navigable views are not as important for predictions. LSH provides the best performance among individual mechanisms, suggesting the reused embeddings are near-identical. 
We have added these results and discussion to Appendix E.\\n\\nWe hope this discussion addresses the reviewer\\u2019s concerns and are happy to answer any further comments/questions.\\n\\n---\\n\\n> (Weakness 2) Some formulae and symbols, such as those in the k-extension and thresholding sections, could benefit from additional clarification and consistent notation to improve readability.\\n\\nThank you for pointing this out; we agree clarity can be improved. In the final version of our paper, we will define and introduce formulae and symbols more consistently and clearly.\\n\\n---\\n\\n> (Weakness 3) The paper provides a robustness analysis under various visual corruptions, but it treats the proposed method as a single unit.\\n\\nWe thank the reviewer for pointing out this limitation. Analyzing the impact of visual corruption on each proposed technique would provide valuable insights regarding its robustness. Please refer to the follow-up comment where we address this concern.\\n\\n---\\n\\n> (Weakness 4) The paper does not fully explore how locality-sensitive caching (LSH) performs under different levels of scene variability or how effective it is when environmental conditions shift significantly.\\n\\nThis is a valid concern, and exploring the performance of LSH to scene variability is an interesting direction. We provide some preliminary results on this in Appendix F, showing how the slight translation of an image has a large effect on even the most recent and performant image similarity metrics. This suggests that including new objects or layouts would likely have a substantial impact on the performance of our caching algorithm.\\n\\nUnfortunately, further analyses in this direction are challenging. Existing VLN benchmarks are static, so the agent will always encounter the same scene at a given step. This limits our ability to test scene variability, particularly adding/removing objects or changing their orientation. 
To adequately address this concern, which may have consequences on general VLN agent performance, it would likely require the development of new datasets. We leave this intriguing implication as future work and thank the reviewer for highlighting it.\"}", "{\"comment\": \"After reviewing the authors\\u2019 response, I find that my concerns remain unresolved.\\n\\nFirst, the authors emphasize that their method is intended for real-world environments, but their experiments are conducted solely in a simulator. The key issue of whether candidate viewpoints' image feature can be pre-extracted hinges on this point. If the authors insist that their method is designed for real-world applications and preprocessing all location images is infeasible, they should provide experimental results on a physical robot or in continuous environments such as R2R-CE. However, I see no such results. Instead, the authors continue to focus on discrete environments while emphasizing the practical applicability of their method, which is contradictory. I strongly encourage the authors to reconsider their response carefully.\\n\\nSecond, the authors justify limiting their related work discussion to studies up to 2018 by stating that more recent works are irrelevant to their research. This demonstrates a lack of familiarity with the VLN field. It appears the authors are unaware of recent work in VLN that focuses on efficiency and deployment on physical robots.\\n\\nRegarding the efficiency results, I find them difficult to accept. I urge the authors to consider more recent works, such as MAGIC [1]. For example, MAGIC achieves comparable SR to the original model with only 25.30% of the GFLOPs, and even when reducing GFLOPs to 3.08%, the SR drops by just 4 points.\\n\\n[1] MAGIC: Meta-Ability Guided Interactive Chain-of-Distillation for Effective-and-Efficient Vision-and-Language Navigation. Wang, L., He, Z., Shen, M., Yang, J., Liu, C., & Chen, Q. 
ArXiv, abs/2406.17960.\\n\\nIn conclusion, the authors\\u2019 response further highlights the need for them to develop a deeper understanding of VLN and to review more recent work in this field.\"}", "{\"title\": \"Response to Reviewer Xs1i (cont)\", \"comment\": \"> (Question 1) What is a navigable view, and how are they identified?\\n\\nA navigable view is a view within the panorama that the agent can navigate to. It corresponds to locations the agents could move to in their next action, such as a nearby open door or room. As for how they are identified, navigable views are a property of the panorama encountered by the agent and are provided within all benchmarks we evaluate (the standard for discrete VLN). If navigable views are not an inherent property of the environment (as in the real world), prior work has shown they can be accurately predicted [1]. This terminology is consistent with the prior work [2], [3]. We thank the reviewer for this question and will update our final paper to more clearly define the term.\\n\\n[1] Sim-to-Real Transfer for Vision-and-Language Navigation. Anderson et al., CoRL 2020.\\n\\n[2] History-Aware Multimodal Transformer for Vision-and-Language Navigation. Chen et al., NeurIPS 2021.\\n\\n[3] Think Global, Act Local: Dual-Scale Graph Transformer for Vision-and-Language Navigation. Chen et al., CVPR 2022.\\n\\n> (Question 2) Why does existing input-adaptive work on Transformer encoders (i.e., MuE [1]) fail?\\n\\nWe find that MuE fails in this modality because it underprocesses important views. Standard MuE applies a constant early-exit threshold within all Transformer layers and to all inputs. This results in many inputs exiting at the same layer, even though (as we explore in Section 4.2.1) some are considerably more important than others (i.e., the navigable views). 
As shown in Figure 3 and discussed in the corresponding Section 4.2, this *one-size-fits-all* approach leads to incorrect selections of navigable views during navigation. To answer the reviewer's questions explicitly, MuE fails to recognize relevant information due to underprocessing both navigable and spatially meaningful (i.e., nearby) non-navigable views.\\n\\nA natural follow-up question is *why* MuE underprocesses these important views. We explore this in Appendix B due to space limitations but summarize the discussion here. The intuition behind MuE is that the activations of Transformer-based vision models saturate, where their similarity between layers peaks early on and is maintained at future stages of computation. Thus, ideally, later layers introduce negligible new/useful information and can be safely skipped. So, for MuE to be successful, the similarity of activations must sufficiently saturate and not decrease at later layers. However, as shown in Figure 8, the necessary saturation pattern is not observed in the VLN setting. The cosine similarity of activations between layers in HAMT on the R2R task peaks early but then decreases. This explains the significant performance drop when MuE is directly applied to VLN agents, as it consistently exits early despite saturation not being achieved.\\n\\nWe thank the reviewer for posing this question and will modify the final version of our paper to better highlight this discussion.\\n\\n[1] You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model. Tang et al., CVPR 2023.\\n\\n---\\n\\n> (Question 3) How does the budgeted batch inference work?\\n\\nBudgeted batch inference [1] is the scenario we consider when designing our speed-up techniques. 
The premise is that a system must allocate a fixed computational budget amongst several inputs; in the original work, this involved classifying groups of images with uneven compute while retaining high accuracy.\\n\\nThis scenario naturally applies to our work as VLN agents must process several inputs, particularly the individual images comprising the panoramas. However, existing input-adaptive methods on Transformer-based encoders [2] do not work with budgeted batch inference, requiring us to modify the mechanism (see Section 4.2.2 for more details). Budgeted batch inference informed our modifications to the early exit mechanism but is not a crucial component of the algorithm overall. \\n\\n[1] Multi-Scale Dense Networks for Resource-Efficient Image Classification. Huang et al., ICLR 2018.\\n\\n[2] You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model. Tang et al., CVPR 2023.\\n\\n---\\n\\n> (Question 4) Which views are processed by which proposed techniques?\\n\\nThe reviewer is correct. We observed substantial performance degradation when attempting to early exit the navigable views and thus opted to fully process them and prioritize our computational savings on the non-navigable views. If a view falls within the extended range defined by k-extension, we apply hashing and early-exit; otherwise, it is not processed. We will make this distinction more explicit in the main text.\"}" ] }
5fS03oP3q6
C-MELT: Contrastive Enhanced Masked Auto-Encoders for ECG-Language Pre-Training
[ "Manh Pham Hung", "Aaqib Saeed", "Dong Ma" ]
Accurate interpretation of Electrocardiogram (ECG) signals is pivotal for diagnosing cardiovascular diseases. Integrating ECG signals with their accompanying textual reports holds immense potential to enhance clinical diagnostics through the combination of physiological data and qualitative insights. However, this integration faces significant challenges due to inherent modality disparities and the scarcity of labeled data for robust cross-modal learning. To address these obstacles, we propose C-MELT, a novel framework that pre-trains ECG and text data using a contrastive masked auto-encoder architecture. C-MELT uniquely combines the strengths of generative with enhanced discriminative capabilities to achieve robust cross-modal representations. This is accomplished through masked modality modeling, specialized loss functions, and an improved negative sampling strategy tailored for cross-modal alignment. Extensive experiments on five public datasets across diverse downstream tasks demonstrate that C-MELT significantly outperforms existing methods, achieving an average AUC improvement of 15% in linear probing with only one percent of training data and 2% in zero-shot performance without requiring training data over state-of-the-art models. These results highlight the effectiveness of C-MELT, underscoring its potential to advance automated clinical diagnostics through multi-modal representations.
[ "Multi-modal Representation Learning", "Contrastive Masked Auto-Encoders", "ECG-Text Pre-Training" ]
Reject
https://openreview.net/pdf?id=5fS03oP3q6
https://openreview.net/forum?id=5fS03oP3q6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4aNEim3rC", "xMALWmJKMT", "wBqZLpa4zW", "wATT481rBq", "uhSqlgOU15", "sgOditTlGm", "s13yP6ibHZ", "lh490zOUBP", "jV6ccuAmKd", "iwFi5sAyYj", "hYebBfw3W7", "h0oX554v3Z", "fl0uUH649u", "aCJGzWjc0M", "ZtuVt96UrD", "UyM2INYS4L", "OIQq6P0Wng", "OALkjNwUWp", "Nv07eSeAfe", "NaoSfg73Ai", "NXYQry5YC8", "Ckdk6qm6Zk", "8t51kFL5yl", "8WF4kPr2g8", "6e1MEiUtjm", "5YmT8cB0hg", "5Av3OOCM7G", "0fGGUgBCmt", "09kOqNnng2" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730546744239, 1734678301768, 1732755906124, 1732436715935, 1737523851704, 1732100149796, 1732384670008, 1732415466801, 1732547211896, 1732098821398, 1732384866550, 1730634239369, 1732098364991, 1732732946756, 1732736717549, 1732735287598, 1732100193071, 1732099463861, 1732728853663, 1730323885771, 1732728148553, 1732100844450, 1732540636521, 1732547282197, 1732100315170, 1732100373129, 1732733014701, 1730047319851, 1732099647606 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_2PJ1" ], [ "ICLR.cc/2025/Conference/Submission7622/Area_Chair_HHmz" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_2PJ1" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_jws3" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_xh2S" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_xh2S" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_xh2S" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_vCwP" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_xh2S" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_vCwP" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ], [ "ICLR.cc/2025/Conference/Submission7622/Reviewer_jws3" ], [ "ICLR.cc/2025/Conference/Submission7622/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces C-MELT, a cross-modal pre-training framework for self-supervised learning of ECG signals and textual reports. C-MELT combines the strengths of generative and contrastive learning through masked language and ECG modeling to reconstruct data. It also leverages contrastive loss functions and a nearest-neighbor negative sampling strategy to improve cross-modal alignment. 
Extensive experiments across multiple public datasets demonstrate that C-MELT significantly outperforms existing methods, especially in downstream tasks like zero-shot learning and linear probing, showcasing strong generalization capabilities and diagnostic potential.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Overall, this paper makes clear contributions, especially the comparative interpretation with MERL, which further promotes the development of multimodal ECG research.\", \"N3S provides an effective strategy for selecting negative text reports in medical datasets.\"], \"weaknesses\": [\"The ablation experiment does not discuss the role of different components in the reconstruction method. Contrastive methods are currently the mainstream approach for multimodal alignment in multimodal ECG research. This paper proposes a hybrid method to enhance performance, but it does not discuss the ablation of the added reconstruction method in more detail, which raises doubts about the effectiveness of the added reconstruction method.\", \"The layout of Table 2 is somewhat inadequate, as it differs significantly from the original table in MERL, leading to some confusion when reading. Given that this table may serve as a benchmark in the future, it should be adjusted for consistency to facilitate easier comparison.\"], \"questions\": [\"Please provide more details on the performance improvement achieved by the reconstruction approach.\", \"In Section 3.1, positional encoding is added at the end. 
How is sequence information ensured during the calculation of multi-head attention according to the content of the article?\", \"If ECG-Text Matching (ETM) promotes the alignment of corresponding ECG-Text pairs, why does it not enhance the discriminative ability of the encoders?\", \"Conversely, if Siglep Loss can accomplish contrastive learning for multimodal alignment, what is the purpose of retaining ETM?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces C-MELT, a framework that pre-trains ECG and text data using a contrastive masked auto-encoder architecture. C-MELT combines the strengths of generative and contrastive learning through masked language and ECG modeling to reconstruct data. It also leverages contrastive loss functions and a nearest-neighbor negative sampling strategy to improve cross-modal alignment. Extensive experiments on five downstream datasets show good performance of C-MELT. The paper is generally well-written and clearly structured, and the experimental results are comprehensive and convincing. The pre-train data is MIMIC-IV-ECG and the downstream datasets include PhysioNet 2021, PTB-XL, CSN, CPSC2018, and CODE-test - all are publicly available standard databases. The code and pre-trained model are not included during the peer review (while the authors mentioned they would release them upon acceptance).\\n\\nReviewers appreciate the comprehensive experiments and detailed discussions. However, some major concerns from different perspectives still remain there. From an ML perspective, the contributions of the ECG-specific transformer, Flan-T5 text encoder, and GPT to enhance prompt quality are incremental. From an experimental perspective, some comparisons with MERL's best results should be provided. 
From a data perspective, the study has not generated new ECG data and/or new human annotations for resources in the field. From a clinical perspective, while C-MELT achieves higher performance, it doesn't provide additional functionalities to be implemented in clinical decision-making.\", \"additional_comments_on_reviewer_discussion\": \"There are detailed discussions between the reviewers and authors. The reviewers appreciate the comprehensive experiments and detailed discussions during the rebuttal phase. Since this paper has highly diverse review scores, the AC calls the discussions among reviewers and AC. jws3 maintains his/her score with high confidence (5). vCwP maintains his/her score with relatively low confidence (3).\"}", "{\"comment\": \"Thank you for the authors' responses. I have read them carefully and have decided to maintain my original score.\"}", "{\"title\": \"Response to Reviewer jws3\", \"comment\": \"Thank you for your response. We would like to address your points as follows:\\n\\n>The framework is still quite similar to BLIP. Although the authors claim to use an ECG-specific transformer block, processing ECG data with transformers is not a novel idea and has been implemented in prior works several years ago. \\n\\nWe respectfully disagree that our work is \\u201cquite similar\\u201d to BLIP. The loss functions and the way our method trains a proposed ECG-Language model with the modalities focused on are entirely different usages. Could you please highlight more similarities between BLIP and our work?\\n\\n> Flan-T5 is not the state-of-the-art (SOTA) text encoder, especially for medical text, as it is not pretrained on a medical corpus.\\n\\nAs shown in Table 7, we conducted an extensive ablation study on different text encoders. This demonstrates that the recent advancements in Flan-T5 surpass both BERT (used in BLIP) and Med-CPT [5] (which is pre-trained on biomedical data and also used in MERL as a SOTA model). 
In addition, our framework remains flexible: other text encoders can easily be swapped in to accommodate future developments and potentially achieve even better performance. \\n\\n[5] Jin, Qiao, et al. \"MedCPT: Contrastive Pre-trained Transformers with large-scale PubMed search logs for zero-shot biomedical information retrieval.\" Bioinformatics 39.11 (2023): btad651.\\n\\n> Using GPT to enhance prompt quality cannot be attributed as a major contribution since it heavily relies on the capabilities of the large language model (LLM) rather than the proposed framework itself.\\n\\nFirst, we would like to emphasize again that using GPT is one technique we adopt to enhance zero-shot capability and to compare fairly against MERL in the same setting (note that we surpass them both with and without GPT). Second, for linear probing and full fine-tuning scenarios, GPT is not used at all, and our method shows strong performance over existing benchmarks. \\n\\n---\\n\\nFinally, we appreciate your review. Given the above responses and our method's demonstrated robustness and generalization, outperforming existing approaches across five datasets with over 100 cardiac conditions in the ECG-healthcare domain, we kindly request your reevaluation of our work. Thank you again for your time, and we look forward to your reconsideration.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer xh2S (Part 1)\", \"comment\": \"We thank the reviewer for their time and constructive feedback on our submission. 
We would like to address your points below:\\n\\n> [R1-C1]: Please clearly list the main contributions of the current work in the context of the previous literature, with an accurate formulation of the specific problem addressed.\\n\\n> [R1-C2]: Please explicitly state whether the primary contribution is zero-shot classification or learning generalized ECG features, and how your work advances the field.\\n\\n> [R1-C4]: Please categorize your work as self-supervised, semi-supervised, or supervised representation learning and explain why it fits that category, given the use of textual reports.\\n\\nFirstly, we propose a hybrid self-supervised learning approach specifically designed for ECG-language pretraining. We would like to outline our main contributions as:\\n - We propose a transformer-based ECG encoder to process ECG signals and investigate the usage of pre-trained Flan-T5 as the text encoder to deal with clinical text reports.\\n - We propose contrastive components and integrated Siglep loss in masked autoencoder-based models, learning together with MEM, MLM, and ETM losses, enabling the model to learn robust and effective modality representations.\\n - We propose a novel N3S technique to handle the inherent data sparsity in the MIMIC-IV ECG dataset, improving the quality of negative samples and overall model performance.\\n - We conduct extensive experiments including zero-shot, linear probing, and fully fine-tuned settings to showcase the robustness of our method, surpassing diverse benchmarks on over 100 cardiac conditions.\\n\\nFinally, we highlight our work in learning generalized ECG features through a generative-contrastive SSL approach, which enables strong generalization across diverse tasks and datasets, where either fine-tuned or zero-shot settings benefit.\\n\\n> [R1-C5]: Does the implementation necessitate multi-modal data or can it also incorporate a mix of ECG-only and multi-modal data?\\n\\nOur implementation is highly adaptable and can handle 
both multi-modal and ECG-only data. For example, the text encoder could theoretically be repurposed as an ECG encoder so it is not strictly dependent on the availability of textual data and can generalize effectively to ECG-only cases.\\n\\n> [R1-C6]: The methodology may also depend on the accuracy and the distribution of the textual reports. Are the reports in the pre-training dataset automatically generated or written by cardiologists?\\n\\nThe textual reports in our pre-training dataset (MIMIC-IV-ECG) are automatically generated. It is important to note that this dataset is uniquely large and publicly available, making it highly suitable for robust self-supervised learning in ECG-language pre-training.\\n\\n> [R1-C7]: The addition of \\u201cdiagnostic data\\u201d introduces the possibility of learning data bias. What has been done to ensure generalization?\\n\\nWe acknowledge this insightful comment for \\u201cdiagnostic data\\u201d. Regarding generalization, our empirical experiments demonstrate the robustness of our framework across diverse tasks on five datasets, including zero-shot and fine-tuning scenarios.\\n\\n> [R1-C9]: How does the proposed approach differ from other hybrid approaches combining generative and contrastive methodologies e.g. [2]?\\n\\nWe thank the reviewer for referring to this recently published paper.\\nTheir method is heavily based on a single ViT-based architecture for both contrastive and generative SSL, which aims to deal with ECG-only, instead of incorporating any text or other modalities. Therefore, there is a limit to its applicability in zero-shot capabilities and clinical contexts where decisions often rely on both signal and textual interpretations (e.g., radiology reports, patient histories); or advanced applications (e.g., retrieval, report generation). 
\\n\\nAdditionally, although their model was pre-trained on huge combined datasets (e.g., MIMIC-IV, CODE15, UK Biobank, SAMI, IKEM, totaling ~1.3 million ECG signals), the downstream evaluations only deal with just a few diagnoses, namely MI, STTC, CD, and HYP. In comparison, our model (pre-trained on MIMIC-IV only) is proven to robustly generalize the performance across multiple datasets and over 100 cardiac conditions.\\nWe referred to this paper for our revised manuscript in Line 54.\\n\\n> [R1-C10]: Is the text encoder frozen or fine-tuned?\\n\\nOur text encoder is fine-tuned during the pre-training step. We made changes in Line 179 to better highlight this.\\n\\n> [R1-C14]: Does the application also provide the possibility of automatic ECG report generation?\\n\\nOur method can be extended to support text/report generation or ECG question answering when the model is fine-tuned on the given task.\"}", "{\"title\": \"Response to Reviewer jws3\", \"comment\": \"We kindly inquire if you have further questions or require clarification. In our previous response, we detailed key differences between BLIP and our work, clarified misinterpretations regarding evaluation and implementation, and revised the manuscript accordingly. We respectfully invite you to reconsider your assessment based on these updates. Thank you for your time.\"}", "{\"comment\": \"Thank you for the detailed rebuttal.\\n\\nHowever, my concerns remain for the following reasons:\\n\\n- The framework is still quite similar to BLIP. 
Although the authors claim to use an ECG-specific transformer block, processing ECG data with transformers is not a novel idea and has been implemented in prior works several years ago.\\n- Flan-T5 is not the state-of-the-art (SOTA) text encoder, especially for medical text, as it is not pretrained on a medical corpus.\\n- Using GPT to enhance prompt quality cannot be attributed as a major contribution since it heavily relies on the capabilities of the large language model (LLM) rather than the proposed framework itself.\\n\\nHence, I will maintain my score.\"}", "{\"title\": \"Response to Reviewer 2PJ1\", \"comment\": \"We hope this message finds you well. We kindly ask if any remaining concerns require further clarification or justification. In case you find our work better satisfactory given our previous responses, we would greatly appreciate your feedback on our submission. Thank you for your time and consideration.\"}", "{\"title\": \"Response to Reviewer 2PJ1\", \"comment\": \"We appreciate your review and understand the concern raised regarding the effectiveness of the added reconstruction method. Regarding the minor point with the layout of Table 2, we adjusted it in our revised manuscript for easier comparison. Please find our answers to the points raised below:\\n\\n> [R2-C1]: Please provide more details on the performance improvement achieved by the reconstruction approach. \\n\\nThis is a great suggestion and we would like to provide our additional experiments with and without using reconstruction parts (e.g. MLM, MEM), and hopefully, this will be sufficient for your evaluation. 
Please find our additional results below:\\n\\n| | **PTBXL-Super** | **PTBXL-Form** | **CSN** | **CODE-Test** |\\n|--------------------|-----------------|----------------|---------|---------------|\\n| **w/o MLM + MEM** | 70.3 | **67.4** | 74.5 | 94.6 |\\n| **w MLM + MEM** | **76.2** | 66.1 | **76.3**| **96.8** |\\n\\n\\nWe can see that incorporating MLM and MEM noticeably improves performance across all evaluated datasets. Especially, gains are observed in PTBXL-Super (+5.9%), and CODE-Test (+2.2%), demonstrating that the reconstruction tasks also help enhance the model's ability for better performance, aligned with our motivation.\\n\\n> [R2-C2]: In Section 3.1, positional encoding is added at the end. How is sequence information ensured during the calculation of multi-head attention according to the content of the article?\\n\\nThis is indeed an insightful observation in our manuscript! We agree that there was an overlook in our presentation on positional encoding. Accordingly, our implementation uses convolutional positional encoding before the TransformerEncoder layers as expected, ensuring that temporal information is preserved during attention calculations. Therefore, we adjusted to correct this in our revised manuscript in Lines 169-170. Thank you for helping us clarify this detail.\\n\\n> [R2-C3]: If ECG-Text Matching (ETM) promotes the alignment of corresponding ECG-Text pairs, why does it not enhance the discriminative ability of the encoders? \\n\\nThank you for pointing this out. 
First, we would like to clarify that we have not explicitly stated that ETM does \"not enhance the discriminative ability of the encoders.\" In our manuscript (Lines 227-228 in our original submission), we mentioned that ETM is designed to \"promote alignment between ECG signals and their corresponding text reports.\" Additionally, in Appendix Lines 742-747, we discussed that ETM operates as a binary classification task at the fused feature level, rather than directly enhancing the discriminative power of individual modality encoders.\\nETM focuses on determining whether a specific ECG and text pair match, which is critical for cross-modal alignment. However, it does not directly improve the encoders' ability to distinguish between different ECGs or text reports independently. This limitation arises because ETM optimizes at the level of paired features, not at the modality-specific feature granularity needed for downstream tasks that require high intra-class discrimination.\\n\\n> [R2-C4]: Conversely, if Siglep Loss can accomplish contrastive learning for multimodal alignment, what is the purpose of retaining ETM?\\n\\nWe agree that our Siglep loss has a stronger and more direct impact on discriminative modality representation learning. However, ETM inherently aligns ECG and text pairs, primarily supporting and guiding learning of the fused feature space, together with the generative objectives of MLM and MEM. This aligns with the motivation of the hybrid approach from the beginning of our work.\"}", "{\"title\": \"Final Three Days\", \"comment\": \"Dear Reviewers and Area Chairs,\\n\\nWe're entering the last three days of the discussion period. Four days ago, we addressed each reviewer\\u2019s comments and uploaded a revised version of our paper, which includes suggested adjustments and additional experiments. 
We are glad that Reviewer vCwP has updated their rating in light of our responses!\\n\\nWe would also greatly appreciate it if the other reviewers could have a look at our responses and consider adding their review if we have sufficiently addressed the remaining concerns. In particular, to Reviewer jws3, we humbly believe that our work deserves greater consideration than your current rating reflects. Should there still be unresolved questions, we would be happy to engage in further discussion.\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"The proposed approach aims to integrate generative pre-training with robust cross-modal representation learning. The paper extends the previous MERL[1] methodology developed for multimodal learning on ECG records and associated reports, providing zero-shot classification capabilities. The main contributions are the integration of predicting masked segments for both ECG signal and textual reports, specialized loss functions, and an improved negative sampling strategy for cross-modal alignment. The performance is evaluated for diverse datasets and demonstrates superior performance to existing SSL (Self-supervised learning) approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The strength of the model is the ability to provide improved performance compared to MERL[1], achieved through the specialized loss functions.\", \"weaknesses\": \"The method may not be directly comparable to SSL methodologies as the pretraining is exposed to cardiac diagnostic texts enhancing the performance for related tasks. Performance for novel features, unrelated to the context of the reports (e.g. 
age and sex of the subject) may be impacted adversely.\", \"questions\": \"Methodology:\\n\\nPlease clearly list the main contributions of the current work in the context of the previous literature, with an accurate formulation of the specific problem addressed.\\n\\nPlease explicitly state whether the primary contribution is zero-shot classification or learning generalized ECG features, and how your work advances the field.\\n\\nThe introduction mentions the scarcity of labeled data but integrating textual reports cannot be regarded as truly \\u201cunlabelled\\u201d since converting textual descriptions to labels is a trivial task, looking at the examples provided. Please clarify the definition of \\\"unlabeled data\\\" in this context and discuss how the approach differs from traditional supervised learning using text-derived labels.\\n \\nPlease categorize your work as self-supervised, semi-supervised, or supervised representation learning and explain why it fits that category, given the use of textual reports.\\n\\nDoes the implementation necessitate multi-modal data or can it also incorporate a mix of ECG-only and multi-modal data? \\n\\nThe methodology may also depend on the accuracy and the distribution of the textual reports. Are the reports in the pretraining dataset automatically generated or written by cardiologists? \\n\\nThe addition of \\u201cdiagnostic data\\u201d introduces the possibility of learning data bias. What has been done to ensure generalization? \\n\\nIt would be a good test for generalization to predict concepts not addressed in the textual reports e.g. if age and sex are not mentioned in the reports then a supervised task can be performed for these tasks, as these labels are readily available in most datasets.\\n\\nHow does the proposed approach differ from other hybrid approaches combining generative and contrastive methodologies e.g. 
[2]?\\n\\nIs the text encoder frozen or fine-tuned?\\n\\nThe method may not be directly comparable to SSL methodologies as the pretraining is exposed to cardiac diagnostic texts enhancing the performance for related tasks. Performance for novel features, unrelated to the context of the reports (e.g. age and sex of the subject) may be impacted adversely. \\n\\nPlease explain in further detail where the N3S is applied as there are two cross-modal alignment losses. Is the cross-modal alignment performed with single or multiple negative samples? If N3S avoids similar negative reports, then the use of several positives and negatives in the batch can be incorporated in future work to avoid an expensive search for dissimilar negatives; looking explicitly for more distant reports is counter-intuitive to the goal of contrastive learning.\\n\\nWhat is the difference between SigLEP and ETM losses? Don't they both serve the same purpose? Clearly explain whether the alignment is performed between projections of both modes, joint representation, or the inputs and how the positive and negative pairs are defined.\\n\\nDoes the application also provide the possibility of automatic ECG report generation?\\n\\nThe comparative approaches are not exposed to labels while textual reports may include the labels or similar concepts during pre-training.\", \"minor_suggestions\": \"\", \"page_1_line_23\": \"Abstract: \\u201cachieving 15% and 2% increases.\\u201d Please mention the specific context of the percentages mentioned.\\n\\nPage 2 lines (55-56): Introduction \\u201cWhile some recent efforts (Liu et al., 2024; Lalam et al., 2023; Li et al., 2024)\\u201d. 
Is the Li et al., 2024 repeated or have you missed the correct reference?\\n\\nPage 2 lines (73-76): \\u201cAdditionally, we introduce a nearest-neighbor negative sampling \\u2026 contextually relevant and challenging.\\u201d How is the sampling \\u201ccontextually relevant and challenging\\u201d?\\n\\nPage 2 lines (77-85): No need to explain the test setup in the introduction. Please move to Section 4.1.\", \"page_3\": \"Siglep is mentioned without explaining the acronym and the cited paper only describes \\u201cSigLIP\\u201d. Is it Siglep author\\u2019s implementation based on \\u201cSigLIP\\u201d or used in previous literature? If the latter, then please reference the correct source.\", \"page_4_line_165\": \"Please explain the masking and reconstruction details e.g. is the masking only performed on particular leads similar to RLM in [3] and then the leads are reconstructed? Or is it also applied to segments of the same lead?\\n\\nPage 4 line (206-215): Is the MEM loss computed in the feature space, not the signal space? If the former then how does the reconstruction loss take into account the quality of the generated signals? There may be data leakage from the features that are input to the decoder as the network may learn trivial reconstruction. \\n\\nPage 6 line (273-274): \\u201cThis makes the negative samples to be both challenging and distinct for effective contrastive learning\\u201d. The N3S technique for finding negative samples looks specifically for the most distinct reports. How does it make it more \\u201cchallenging\\u201d? Also, the implementation is not very clear. Is the contrastive loss not considering the rest of the batch as negatives?\", \"page_6_line_313\": \"\\u201cOur proposed model is developed based on the fairseq-signals framework in our work\\u201d The meaning of the sentence is not clear and the reference for the fairseq-signals framework is not included. 
If it means that this is the author\\u2019s prior work, then it might be a violation of the double-blind review process as the fairseq-signals framework is implemented on GitHub.\", \"page_8_line_429\": \"Have you tested removing the ETM loss?\\n\\nPage 8 line (429-438): What is the supervised task?\\n\\nPage 9 line (470): Have you considered using more than eight transformer layers? How does the model size vary for the different architectures?\", \"tables\": \"Are the comparisons in Tables 1 to 4 based on the author's implementation of other approaches? If so, was the performance for other works properly optimized for training hyperparameters for both the pre-training and supervised training? If not, then do the cited references include the particular evaluation?\", \"table_5\": \"What is the metric being compared and why is the student seemingly better than a cardio resident? Please verify the scores.\", \"results\": \"Some of the classes in the PTB-XL dataset have very few samples, so for 1% and 10% random training samples there may not be sufficient positive samples in the train split. That can explain why SSL methodology loses performance significantly for labels other than superclasses.\", \"figures\": \"Please improve the figure captions and describe in more detail what is being shown.\", \"references\": \"[1] Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, and Rossella Arcucci. Zero-shot ECG classification with multimodal learning and test-time clinical knowledge enhancement. arXiv preprint arXiv:2403.06659, 2024. \\n\\n[2] Song, J., Jang, J. H., Lee, B. T., Hong, D., Kwon, J. M., & Jo, Y. Y. (2024). Foundation Models for Electrocardiograms. arXiv preprint arXiv:2407.07110.\\n\\n[3] Jungwoo Oh, Hyunseung Chung, Joon-myoung Kwon, Dong-gyun Hong, and Edward Choi. Lead-agnostic self-supervised learning for local and global representations of electrocardiogram. In Conference on Health, Inference, and Learning, pp. 338\\u2013353. 
PMLR, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jws3\", \"comment\": \"Thank you for your review and your time. We believe there might be some misunderstanding or confusion, which we will hopefully make clearer to you below:\\n\\n> [R4-C1]: Can the authors clarify the main differences between C-MELT and BLIP? Even though the data is different (ECG vs. Image), the framework and optimization objectives appear too similar.\\n\\nWhile we agree that certain points are shared between the two methods, we would like to highlight notable differences in our (C-MELT) hybrid self-supervised learning approach for ECG-Language representation learning:\\n - First, our ECG encoder leverages transformer layers specifically optimized for time-series signal processing, while the clinical text is processed using the state-of-the-art pre-trained Flan-T5 text encoder. Our ablation studies validated their effectiveness, where we also demonstrated that Flan-T5 surpasses BERT (BLIP\\u2019s text encoder) in capturing clinical ECG text representations.\\n - Second, BLIP does not support masked Image/ECG modeling in a generative manner. In contrast, our method incorporates MEM, which has been proven effective in existing ECG SSL [2,3,4] for capturing nuanced and detailed representations of ECG signals.\\n - Third, our new components with Siglep loss effectively work alongside existing losses in masked autoencoder-based models, specific to the ECG domain. We further address the inherent limitations of MIMIC-IV ECG regarding data sparsity (Lines 748-755 in our original submission) by introducing N3S in Flan-T5 feature space and FAISS. BLIP does not consider this for their ITC.\\n - Finally, our work utilizes an off-the-shelf LLM (i.e., GPT-4o) to enrich clinical context from category names (e.g. 
ECG diagnoses), particularly boosting the zero-shot performance in clinical evaluation. In contrast, BLIP's main focus is image-language tasks.\\n\\n[2] Na, Yeongyeon, et al. \\\"Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram.\\\" arXiv preprint arXiv:2402.09450 (2024).\\n\\n[3] Hu, Rui, Jie Chen, and Li Zhou. \\\"Spatiotemporal self-supervised representation learning from multi-lead ECG signals.\\\" Biomedical Signal Processing and Control 84 (2023): 104772.\\n\\n[4] Zhang, Huaicheng, et al. \\\"Maefe: Masked autoencoders family of electrocardiogram for self-supervised pretraining and transfer learning.\\\" IEEE Transactions on Instrumentation and Measurement 72 (2022): 1-15.\\n\\n> [R4-C2]: How do the authors explain the discrepancy between the reported results in Table 3 and Table 7? In Table 3, C-MELT claims that the zero-shot classification performance across all downstream datasets is 77.71, which is higher than MERL [2]. However, in the ablation results Table 7, the average zero-shot result is 72.5\\u00b19.1, which is not consistent with the author-reported performance. Additionally, in MERL [2]'s original paper, the reported average zero-shot performance is 75.24\\u00b11.7 in Tables 5-9, which is much higher than C-MELT's performance.\\n\\nThe reason for the distinct results in Table 3 and Table 7 is that they are derived from two different settings, to maintain consistency with MERL for fair comparison. Specifically:\\n\\n - For Table 3, we incorporated GPT-4o to obtain richer clinical context from category names (Lines 370-372 in our original submission), which reflects the optimal performance C-MELT can achieve. \\n\\n - For Table 7 (ablation study), we specifically chose to use raw category names without GPT support (Line 427 in our original submission) to evaluate the true robustness of the method itself. 
Using GPT could raise concerns about whether improvements are due to the testing components or GPT-4o. Therefore, this setting naturally leads to lower performance but ensures a fair evaluation in the context of our ablation study.\\n\\nMERL also conducted experiments under the same two settings. With GPT-4 support, MERL achieved ~67 (Zero-shot MERL (GPT4 Generated) of Figure 1 in MERL), which is lower than C-MELT\\u2019s ~77.1. In the raw category settings, MERL achieved ~62 (Figure 1), compared to 72.5 in our work. \\n\\nNote that the Zero-shot MERL (CKEPE) (~75.3 in Figure 1 and Table 5-9) was achieved with an additional database, which requires searching extra attributes and sub-types of each category name. So we did not directly compare with this setting, as further explained in our ablation study (Lines 809-816 in our original submission). \\n\\n> [R4-C3]: The reproducibility concern is made worse by the fact the authors don't appear willing to share their code.\\n\\nWe stated that our code and pre-trained models would be made public upon acceptance (Lines 489-490 in our original submission).\\n\\n---\\n\\nLastly, we appreciate your paper reference and would like to add it to the related works section in our revision (Line 54). We kindly note that C-MELT is the first work to utilize a hybrid SSL technique for ECG-Language pretraining, with performance surpassing all existing works.\"}", "{\"title\": \"Response to Reviewer xh2S\", \"comment\": \"> By introducing the auxiliary information in the form of reports, the functionality is also limited to the context and the report's accuracy (automatic reports are usually inaccurate). It would have been good to test for labels outside the reports like the subject's age and sex. 
But you could at least mention that when comparing it with unsupervised approaches.\\n\\nTogether with our previous responses in Part 2, using these text reports could help guide multimodal representation pre-training, since the reports naturally provide various clinical concepts. Also, the MIMIC IV ECG dataset is uniquely large and publicly available and has already been verified by some recent works (e.g. MERL) to enhance cardiovascular diagnosis. \\n\\nFurthermore, we note again that we conducted extensive downstream experiments on the pre-trained model without any text or diagnostic usage. For example, Table 1 shows our ECG encoder performs well in the conventional classification and patient identification tasks. \\n\\nFinally, it is indeed uncommon to \\u201ctest\\u201d age and sex in our clinical-focused scenarios, to the best of our knowledge.\"}", "{\"title\": \"Response to Reviewer xh2S\", \"comment\": \"We appreciate this and have just made a small adjustment on Line 138 to be clearer upon introduction as you suggested. Thank you for your response again.\"}", "{\"title\": \"comment\", \"comment\": \"Siglip is introduced in line 138, which I had pointed to, while this explanation comes in line 245. It should be clear upon introduction.\"}", "{\"title\": \"Response to Reviewer xh2S (Part 2)\", \"comment\": \"> [R1-C13]: What is the difference between SigLEP and ETM losses? Don't they both serve the same purpose? Clearly explain whether the alignment is performed between projections of both modes, joint representation, or the inputs and how the positive and negative pairs are defined.\\n\\n> [R1-C12]: Please explain in further detail where the N3S is applied as there are two cross-modal alignment losses. Is the cross-modal alignment performed with single or multiple negative samples? 
If N3S avoids similar negative reports then the use of several positives and negatives in the batch can be incorporated in future work to avoid an expensive search for dissimilar negatives and looking explicitly for more distant reports is counter-intuitive to the goal of contrastive learning.\\n\\nSiglep and ETM both support contrastive modeling, as already mentioned in our paper in Lines 244-250. We would like to highlight the important points between them below:\\n - SigLEP Loss: Operates at the modality-specific feature level to perform contrastive alignment between ECG and text embeddings. Positive pairs are aligned ECG-text inputs, while negatives are mismatched pairs prepared through the N3S process. This ensures robust modality-specific alignment in a shared feature space, enhancing the discriminative power of the individual encoders.\\n - ETM Loss: Works at the joint fused feature level, performing binary classification to determine whether a given ECG-text pair matches. The pairing of positive and negative samples is the same as in SigLEP.\", \"we_would_like_to_clarify_n3s_as_follows\": \"Our batches first have all positive pairs. During the data loader stage, half the batch is maintained with positive pairs while the other half will have texts replaced randomly by one of the top 64 farthest distance samples (look for in Flan-T5 space using FAISS).\\n\\n> [R1-C3]: The introduction mentions the scarcity of labeled data but integrating textual reports cannot be regarded as truly \\u201cunlabelled\\u201d since converting textual descriptions to labels is a trivial task, looking at the examples provided. Please clarify the definition of \\\"unlabeled data\\\" in this context and discuss how the approach differs from traditional supervised learning using text-derived labels.\\n\\nWe acknowledge that the integration of textual reports might imply a form of labeling. 
However, our approach treats these reports as rich, auxiliary information rather than explicit labels. This distinction is crucial:\\n\\nIn our context, \\\"unlabeled data\\\" refers to raw ECG signals without predefined categorical labels. The textual reports provide descriptive information but do not directly translate to discrete labels typically used in supervised learning. Furthermore, the labeling requires clinicians' involvement which is costly and time-consuming. We pre-trained the model using the MIMIC-IV-ECG dataset that contains ECGs and machine-generated text reports.\\n\\nOur method leverages self-supervised learning by using the semantic richness of textual data to enhance ECG representation without imposing predefined categories. This contrasts with prior supervised methods that require explicit labeling, often leading to the loss of nuanced information in the reports.\\n\\n> [R1-C8]: It would be a good test for generalization to predict concepts not addressed in the textual reports e.g. if age and sex are not mentioned in the reports then a supervised task can be performed for these tasks, as these labels are readily available in most datasets.\\n\\n> [R1-C11]: The method may not be directly comparable to SSL methodologies as the pretraining is exposed to cardiac diagnostic texts enhancing the performance for related tasks. Performance for novel features, unrelated to the context of the reports (e.g. age and sex of the subject) may be impacted adversely.\\n\\n> [R1-C15]: The comparative approaches are not exposed to labels while textual reports may include the labels or similar concepts during pre-training.\\n\\nFirst, our pre-trained model under evaluation with zero-shot or fine-tuned (no text needed, e.g. classification and Identification) experiments deals with unseen ECG recordings and classification tasks. This is also done in MERL work.\\n\\nSecond, we clarify that our approach is not explicitly exposed to labels during pretraining. 
While textual reports (machine-generated, in MIMIC-IV) may inherently include various clinical concepts, this is a characteristic of clinical language and not a direct use of labeled data. \\n\\nIt\\u2019s also valid to say that this concern is understandable given that large models (e.g. LLaMA) have been pre-trained to handle zero-shot learning by leveraging extensive contextual knowledge and databases, which can sometimes include label-adjacent concepts implicitly learned during pretraining.\"}", "{\"title\": \"Response to Reviewer vCwP (Part 1)\", \"comment\": \"Thank you for your thoughtful feedback and for highlighting where our manuscript can be strengthened! Please find our responses to all your points below:\\n\\n> [R3-C1]: Whereas the proposition to combine reconstruction and contrastive learning is very worthwhile, to my understanding this matter is not directly addressed in the paper. While the Siglep loss is ablated, I believe in this case the model still is not trained solely via reconstruction, as ECG-Text Matching (ETM) is active. I believe the manuscript would be considerably improved if results could be shown under a setting of \\u2018reconstruction only\\u2019 and \\u2018without reconstruction\\u2019. Ideally, under the \\u2018reconstruction only\\u2019 setting, additional information is provided about the utility of text or ECG reconstruction. In the \\u2018without reconstruction\\u2019 setting, it would be interesting to ablate either the Siglep loss and/or ETM. The authors are free to provide alternative analyses than specifically those considered here, but again, gaining an understanding of the contribution of the individual parts of the approach would be important.\\n\\nWe appreciate your time with those detailed comments. 
To address your points, we agree that additional results under the specific settings \\u201creconstruction only\\u201d and \\u201cwithout reconstruction\\u201d would further clarify the contribution of individual components. Please refer to the table below.\\n\\nRegarding the \\u201cwithout reconstruction\\u201d suggestion, we believe this is aligned with [R2-C1] regarding the impact of the reconstruction aspect on overall performance. Specifically, in our supporting experiment (Table below), incorporating MLM and MEM noticeably improves performance on most evaluated datasets. Particularly, gains are observed in PTBXL-Super (+5.9%) and CODE-Test (+2.2%), demonstrating that the reconstruction tasks also help enhance the model's representations, aligned with our motivation.\\n\\n| | **PTBXL-Super** | **PTBXL-Form** | **CSN** | **CODE-Test** |\\n|--------------------|-----------------|----------------|---------|---------------|\\n| **w/o MLM + MEM** | 70.3 | **67.4** | 74.5 | 94.6 |\\n| **w MLM + MEM** | **76.2** | 66.1 | **76.3**| **96.8** |\\n\\nRegarding the \\u201creconstruction only\\u201d setting, we haven\\u2019t reported the case where we eliminated both ETM and Siglep modeling. However, when we did our ablation study (Table 6, Row 4 - Without N3S and Siglep), the overall performance already decreased noticeably. Note that zero-shot results are not reported when Siglep is not activated (as described in Lines 742-747 in our original submission). \\n\\nWe hope the provided results for settings with and without masked modeling (e.g. MLM, MEM) satisfy your question. \\n\\n> [R3-C2]: Why did the authors choose not to compare against other methods for ECG-text pretraining such as the cited Lalam et al. 2023 or Liu et al. 2023 (https://arxiv.org/pdf/2309.07145)?\\n\\nWhile they also provide a solution to ECG-Text pretraining, there are reasonable points for why we do not directly compare with their results. Specifically:\\n - Lalam et al. 
(2023) heavily rely on a huge private dataset, which limits reproducibility and comparability with other methods. Additionally, the lack of benchmarking on diverse diagnoses, along with limited exploration of generalization and adaptability (e.g. Zero-shot setting), makes the method challenging to assess and compare in a broader context.\\n - Liu et al. (2023) utilized relatively small datasets for both pretraining and downstream evaluations, which limits insights into its generalizability. Additionally, their pretraining and reported evaluation rely on the same datasets (e.g., PTB-XL and CPSC2018), even in zero-shot tasks, leading to a validity concern about their performance. Even when comparing their results with C-MELT, C-MELT significantly outperforms with 76.2 and 80.1, compared to their 54.6 and 57.1 (zero-shot AUC on PTB-XL-super and CPSC2018, respectively).\"}", "{\"summary\": \"The present manuscript proposes multiple interesting ideas for enhancing ECG-language pre-training by combining contrastive learning and reconstruction. By doing so, considerable improvements over unimodal pretraining and a recent multimodal approach are shown on multiple downstream datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Strengths of the manuscript include the introduction of multiple novel ideas, in particular the combination of contrastive and generative learning. Furthermore, strong results are obtained across datasets. 
The authors clearly explain the various parts of the approach.\", \"weaknesses\": \"The manuscript could benefit from a clearer rationale behind the choice of methods, as theoretical justifications are somewhat limited. While several ideas are presented, a more thorough exploration of the reasoning behind each would strengthen the approach. Additionally, although some ablation analyses have been performed, it would be helpful to clarify which components (e.g., reconstruction vs. no reconstruction) play an essential role in the downstream tasks.\", \"questions\": [\"For the experiments:\", \"Whereas the proposition to combine reconstruction and contrastive learning is very worthwhile, to my understanding this matter is not directly addressed in the paper. While the Siglep loss is ablated, I believe in this case the model still is not trained solely via reconstruction, as ECG-Text Matching (ETM) is active. I believe the manuscript would be considerably improved if results could be shown under a setting of \\u2018reconstruction only\\u2019 and \\u2018without reconstruction\\u2019. Ideally, under the \\u2018reconstruction only\\u2019 setting, additional information is provided about the utility of text or ECG reconstruction. In the \\u2018without reconstruction\\u2019 setting, it would be interesting to ablate either the Siglep loss and/or ETM. The authors are free to provide alternative analyses than specifically those considered here, but again, gaining an understanding of the contribution of the individual parts of the approach would be important.\", \"Why did the authors choose not to compare against other methods for ECG-text pretraining such as the cited Lalam et al. 2023 or Liu et al. 
2023 (https://arxiv.org/pdf/2309.07145)?\", \"In the text, the following sections could improve with clearer justifications of the authors' work:\", \"Lines 60-68: Here the authors propose to combine contrastive and generative approaches to leverage their complementary strengths, but it is unclear what these complementary strengths are expected to be. As no clear argumentation is provided for why these approaches should be combined, there is no framework in place to later interpret the results. I would therefore recommend providing a more precise description of the working hypothesis.\", \"Lines 240-243: \\u201cThis can hinder the model\\u2019s capability to learn discriminative features (\\u2026)\\u201d. What is being referred to here? The fact that generative approaches perform reconstruction and that contrastive learning distinguishes between data pairs is, to my understanding, precisely what causes them to learn discriminative features to the extent that they do.\", \"Next, it is suspected that ETM could serve as a contrastive loss but that it may be insufficient as it performs binary classification with fused features. As these characteristics of ETM were introduced with little justification, one is left wondering why the Siglep loss and ETM are both needed. Specifically, the hypothesis that alignment of both unfused and fused features is beneficial is currently not answered in the paper, as ablations do not seem to include ETM.\", \"Lines 274-275: I would like to ask why the authors claim nearest-neighbour negative sampling makes negative samples challenging? Does the approach not make negative samples easier by selecting less similar reports as negatives? 
(And as reports with high cosine distance are selected, is this not opposite to the principle of \\u2018nearest-neighbours\\u2019?)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Bias from the diagnostic reports\", \"comment\": \"By introducing the auxiliary information in the form of reports, the functionality is also limited to the context and the report's accuracy (automatic reports are usually inaccurate). It would have been good to test for labels outside the reports like the subject's age and sex. But you could at least mention that when comparing it with unsupervised approaches.\"}", "{\"title\": \"Revised Version\", \"comment\": [\"We thank all the reviewers for their comments and useful feedback. We have just uploaded a revised version of the paper to address comments, and we have also mentioned in the response to each reviewer. For convenience, we report the main differences to the previous version here:\", \"Added experiments to demonstrate the effectiveness of ETM loss and MLM+MEM in sec. Appendix, Lines 828-852.\", \"Improved introduction with motivations from the existing literature in Lines 54-76.\", \"Explained modifications to ECG encoder design (e.g. positional encoding, masking ratio) in Lines 162-172.\", \"Improved explanation of our N3S technique in Lines 270-272.\", \"Added suggested references, figure captions, and language adjustments.\"]}", "{\"comment\": \"Thanks to the authors for answering my questions and giving more detail on the with and without masking modeling. I already adapted my score!\"}", "{\"title\": \"Response to Reviewer xh2S\", \"comment\": \"We hope this message finds you well. We kindly ask if any remaining concerns require further clarification or justification. In case you find our work better satisfactory given our previous responses, we would greatly appreciate your feedback on our submission. 
Thank you for your time and consideration.\"}", "{\"title\": \"Response to Reviewer xh2S (Part 3)\", \"comment\": \"> [R1-C16]: Page 1 line 23: Abstract: \\u201cachieving 15% and 2% increases.\\u201d Please mention the specific context of the percentages mentioned.\\n\\nThank you for pointing this out. The 15% increase refers to the average performance improvement in 1% linear probing experiments (compared to MERL), and 2% increase in the main zero-shot learning evaluation (compared to MERL). We slightly revised the abstract to clarify these contexts explicitly in Line 24.\\n\\n> [R1-C17]: Page 2 lines (55-56): Introduction \\u201cWhile some recent efforts (Liu et al., 2024; Lalam et al., 2023; Li et al., 2024)\\u201d. Is the Li et al., 2024 repeated or have you missed the correct reference? \\n\\nThey are two different related works [4,5].\\n - [4] Liu, Che, et al. \\\"Zero-shot ecg classification with multimodal learning and test-time clinical knowledge enhancement.\\\" arXiv preprint arXiv:2403.06659 (2024).\\n - [5] Li, Jun, et al. \\\"Frozen language model helps ecg zero-shot learning.\\\" Medical Imaging with Deep Learning. PMLR, 2024.\\n\\n> [R1-C19]: Page 2 lines (77-85): No need to explain the test setup in the introduction. Please move to Section 4.1.\\n\\nWe adjusted the manuscript accordingly in Lines 74-77 to improve its structure and readability.\\n\\n\\n> [R1-C20]: Page 3: Siglep is mentioned without explaining the acronym and the cited paper only describes \\u201cSigLIP\\u201d. Is it Siglep author\\u2019s implementation based on \\u201cSigLIP\\u201d or used in previous literature? If the latter, then please reference the correct source.\\n\\nWe implemented Siglep (Sigmoid Language ECG Pretraining) based on SigLIP. We clarified this in Lines 139 and 246.\\n\\n> [R1-C21]: Page 4 line 165: Please explain the masking and reconstruction details e.g. 
is the masking only performed on particular leads similar to RLM in [3] and then the leads are reconstructed? Or is it also applied to segments of the same lead?\\n\\nWe use the RLM as an on-the-fly augmentation approach following [3], where masking is applied to entire leads rather than segments within a lead to mimic the setting/task of using various lead combinations. Specifically, each lead is randomly masked with a probability of p=0.5 during pretraining. Here, we do not reconstruct the masked leads. Instead, we have a dropout layer on the input with p=0.1 to enable masking modeling. We added this context to our revised manuscript for clearer interpretation in Lines 164-166. \\n\\n> [R1-C22]: Page 4 line (206-215): Is the MEM loss computed in the feature space, not the signal space? If the former then how does the reconstruction loss take into account the quality of the generated signals? There may be data leakage from the features that are input to the decoder as the network may learn trivial reconstruction.\\n\\nWe calculated MEM loss in a signal space using mean squared error loss. We clarified this explicitly in the revised manuscript to avoid ambiguity in Lines 208-209. Thank you for helping us address this.\\n\\n> [R1-C18]: Page 2 lines (73-76): \\u201cAdditionally, we introduce a nearest-neighbor negative sampling \\u2026 contextually relevant and challenging.\\u201d How is the sampling \\u201ccontextually relevant and challenging\\u201d?\\n\\n> [R1-C23]: Page 6 line (273-274): \\u201cThis makes the negative samples to be both challenging and distinct for effective contrastive learning\\u201d. The N3S technique for finding negative samples looks specifically for the most distinct reports. How does it make it more \\u201cchallenging\\u201d? Also, the implementation is not very clear. Is the contrastive loss not considering the rest of the batch as negatives? \\n\\nThank you for mentioning them. 
We would like to explain them clearly as follows:\\n\\n - First, for a given ECG text report in half of a batch, we look for the top 64 text reports with the largest cosine distances (e.g. using features in the small Flan-T5 space) from the training set. It is worth noting that the features were already calculated and indexed before the training process using FAISS.\\n - Second, to keep the training process challenging, since the batch is dynamically updated over forward steps, we randomly choose one from those 64 samples to replace the current text report.\\n\\nWe made changes in our revised manuscript regarding this context in Lines 269-271.\"}", "{\"title\": \"Response to Reviewer xh2S (Part 4)\", \"comment\": \"> [R1-C24]: Page 6 line 313: \\u201cOur proposed model is developed based on the fairseq-signals framework in our work\\u201d The meaning of the sentence is not clear and the reference for the fairseq-signals framework is not included. If it means that this is the author\\u2019s prior work, then it might be a violation of the double-blind review process as the fairseq-signals framework is implemented on GitHub.\\n\\nWe clarify that we are not the authors of the fairseq-signals framework. It is a widely used tool for ECG self-supervised learning. We added a proper reference to their implementation in the revised manuscript (End of Page 6). \\n\\n> [R1-C25]: Page 8 line 429: Have you tested removing the ETM loss?\\n\\nWe haven\\u2019t reported the impact of ETM loss in our original manuscript. Therefore, we provide additional results from experiments with and without ETM to illustrate its contribution to our pipeline, as shown below. Specifically, removing ETM slightly decreases performance across most datasets, particularly in PTBXL-Super (76.2 to 73.2), highlighting its role in improving ECG-text alignment. 
However, the effect on CSN is minimal, suggesting dataset-specific sensitivity to ETM.\\n\\n| | **PTBXL-Super** | **PTBXL-Form** | **CSN** | **CODE-Test** |\\n|--------------------|-----------------|----------------|---------|---------------|\\n| **w/o ETM** | 73.2 | 65.8 | **76.6**| 96.2 |\\n| **w ETM** | **76.2** | **66.1** | 76.3 | **96.8** |\\n\\n> [R1-C26]: Page 8 line (429-438): What is the supervised task?\\n\\nThe configuration for the supervised task has been detailed in Lines 423-428 (in our original submission) and Appendix Table 9. However, to enhance clarity, we have explicitly addressed this point in our revised manuscript on Line 436.\\n\\n> [R1-C27]: Page 9 line (470): Have you considered using more than eight transformer layers? How does the model size vary for the different architectures?\\n\\nThank you for mentioning this out. We already presented our model scaling ability in the ablation study (Table 8). However, we would like to further extend transformer layers to 12 and report results in the full zero-shot setting to compare directly with our proposed C-MELT results (#Layer=8) below:\\n\\n| **# Layers** | **PTBXL-Super** | **PTBXL-Form** | **CSN** | **CODE-Test** |\\n|--------------|-----------------|----------------|----------|---------------|\\n| 8 | 76.2 | 66.1 | 76.3 | 96.8 |\\n| 12 | **76.4** | **69.5** | **77.9** | **97.5** |\\n\\n\\nAs shown, increasing the number of layers consistently improves performance across all evaluated datasets, with notable gains in PTBXL-Form (+3.4%) and CSN (+1.6%). This better confirms our scaling strategy within the ECG encoder.\\n\\n> [R1-C28]: Tables: The particular metrics used are not specified in the table captions throughout the paper. 
Please mention the labeled dataset, the training configuration (linear probe or fine-tune), and the score in all table captions.\\n\\n> [R1-C32]: Figures: Please improve the figure captions and describe in more detail what is being shown.\\n\\nWe appreciate the suggestion. While the metrics, labeled datasets, and training configurations (e.g., linear probe or fine-tuning) are already described in the experimental configurations (Lines 363-372 in our original submission) and highlighted in Table 9 (Appendix), we agree that adding more details to the table captions would improve clarity and make it easier for readers to follow. \\n\\n> [R1-C29]: Tables: Are the comparisons in Tables 1 to 4 based on the author's implementation of other approaches? If so the performance for other works properly optimized for training hyperparameters for both the pre-training and supervised training? If not then do the cited references include the particular evaluation?\\n\\nThe comparisons in Tables 1 to 4 are not based on our implementation of other approaches. Instead, the results are derived from the respective papers and their reported benchmark comparisons. We updated this context in section 4.2 of our revised manuscript. \\n\\n> [R1-C30]: Table 5: What is the metric being compared and why is the student seemingly better than a cardio resident? Please verify the scores.\\n\\nWe have verified the AUC scores and confirmed their authenticity. The observation that medical students can be better than cardiology residents aligns with findings in prior studies (End of Page 6 in [6]). Specifically, medical students often perform better in specific evaluations due to their recent, focused training, whereas cardiology residents may not engage with detailed ECG interpretation as frequently in their daily practice. \\n\\n[6] Ribeiro, Ant\\u00f4nio H., et al. 
\\\"Automatic diagnosis of the 12-lead ECG using a deep neural network.\\\" Nature communications 11.1 (2020): 1760.\"}", "{\"title\": \"Response to Reviewer xh2S\", \"comment\": \"> \\\"Furthermore, we introduce a contrastive objective based on the Siglep (Sigmoid language ECG pre-training) loss\\\". It is still unclear whether \\\"Siglep\\\" has been used before since you say \\\"based on SigLep\\\". Or do you mean that we introduce \\\"Siglep\\\" based on Siglip?\\n\\nIn this context, we emphasized the usage of additional contrastive loss (Siglep), while we had mentioned that we adapted the Siglip implementation for Siglep (Line 245). \\n\\n---\\nThank you for your review again and we look forward to your feedback.\"}", "{\"summary\": \"This work proposes a multimodal ECG learning framework with multiple objective alignments and evaluates it across various downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Many empirical studies encompassing a wide range of datasets and methods.\", \"weaknesses\": [\"Lack of novelty: MEM, ETM, and MLM losses are very similar to the BLIP [1] work, which was proposed in 2022. This work directly reimplements the BLIP loss in the ECG domain, which is not novel enough.\", \"Ambiguous results: In Table 3, C-MELT claims that the zero-shot classification performance across all downstream datasets is **77.71**, which is higher than MERL [2]. However, in the ablation results Table 7, the average zero-shot result is **72.5\\u00b19.1**, which *is not consistent* with the author-reported performance. Additionally, in MERL [2]'s original paper, the reported average zero-shot performance is **75.24\\u00b11.7** in Tables 5-9, which is much higher than C-MELT's performance.\", \"The reproducibility concern is made worse by the fact the authors don't appear willing to share their code.\", \"[1] Li, Junnan, et al. 
\\\"Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.\\\" International conference on machine learning. PMLR, 2022.\", \"[2] Liu, Che, et al. \\\"Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement.\\\" Forty-first International Conference on Machine Learning.\"], \"questions\": [\"Can the authors clarify the main differences between C-MELT and BLIP? Even though the data is different (ECG vs. Image), the framework and optimization objectives appear too similar.\", \"How do the authors explain the discrepancy between the reported results in Table 3 and Table 7?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer vCwP (Part 2)\", \"comment\": \"> [R3-C3]: Lines 60-68: Here the authors propose to combine contrastive and generative approaches to leverage their complementary strengths, but it is unclear what these complementary strengths are expected to be. As no clear argumentation is provided why these approaches should be combined, there is no framework in place to later interpret the results. I would therefore recommend to provide a more precise description of the working hypothesis.\\n\\nWhile we have mentioned the complementary strengths of contrastive and generative approaches in Lines 141-147 in our original submission, we agree that this may not have been sufficiently explicit. The contrastive approach enhances discriminative alignment between ECG and text, which improves cross-modal understanding. Meanwhile, the generative approach helps capture fine-grained features within each modality by reconstructing missing components, ensuring that the learned representations are robust and detailed. Together, these complementary methods allow our model to excel at both modality-specific and cross-modal tasks. 
We made slight changes in our revised introduction section for this clarification.\\n\\nWe also emphasize that our empirical results strongly support the effectiveness of combining these approaches, as demonstrated in both the main results and ablation studies. Thank you again for helping us improve the clarity of our manuscript.\\n\\n> [R3-C4]: Lines 240-243: \\u201cThis can hinder the model\\u2019s capability to learn discriminative features (\\u2026)\\u201d. What is being referred to here? The fact that generative approaches perform reconstruction and contrastive learning distinguishes between data pairs is, to my understanding, precisely what causes them to learn discriminative features to the extent that they do. Next, it is suspected that ETM could serve as a contrastive loss but that it may be insufficient as it performs binary classification with fused features. As these characteristics of ETM were introduced with little justification, one is left wondering why the Siglep loss and ETM are both needed. Specifically, the hypothesis that alignment of both unfused and fused features is beneficial is currently not answered in the paper, as ablations do not seem to include ETM. \\n\\nWe agree there may have been confusion here, and we clarified this in our revised manuscript (Lines 238-239). Specifically, we intended to highlight that MAE-based models are often more biased toward generative self-supervised learning, which limits their capacity to perform contrastive tasks effectively, such as supporting zero-shot inference.\\n\\nRegarding the ETM loss, ETM aligns ECG and text pairs at the fused feature level but does not directly enhance the discriminative power of individual encoders, which is addressed by Siglep. However, additional results confirm ETM\\u2019s complementary role in guiding fused feature space learning, supporting the generative components in our hybrid approach. 
We highlight it in context with [R1-C25], [R2-C3], and [R2-C4]: \\n\\n| | **PTBXL-Super** | **PTBXL-Form** | **CSN** | **CODE-Test** |\\n|--------------------|-----------------|----------------|---------|---------------|\\n| **w/o ETM** | 73.2 | 65.8 | **76.6**| 96.2 |\\n| **w ETM** | **76.2** | **66.1** | 76.3 | **96.8** |\\n\\nAs can be seen, removing ETM slightly decreases performance across most datasets, particularly in PTBXL-Super (76.2 to 73.2).\\n\\n> [R3-C5]: Lines 274-275: I would like to ask why the authors claim nearest-neighbour negative sampling makes negative samples challenging? Does the approach not make negative samples easier by selecting less similar reports as negatives? (And as reports with high cosine distance are selected, is this not opposite to the principle of \\u2018nearest-neighbours\\u2019?)\\n\\nThank you for referring to this point. We would like to make this clearer as follows together with [R1-C18] and [R1-C23] from Reviewer xh2S:\\n - First, for a given ECG text report in half of a batch, we look for the top 64 text reports with the largest cosine distances (e.g. use features in small-FlanT5 space) from the training set. It is worth noting that the features were already calculated and indexed before the training process using FAISS.\\n - Second, to make the training process challenging as a batch dynamically updated/changed, we randomly chose one from those 64 samples to replace the current text report.\"}" ] }
5fRlsiNDZR
FARV: Leveraging Facial and Acoustic Representation in Vocoder For Video-to-Speech Synthesis
[ "Yifan Liu", "Yu Fang", "Zhouhan Lin" ]
In this paper, we introduce FARV, a vocoder specifically designed for Video-to-Speech (V2S) synthesis, which integrates both facial embeddings and acoustic units to generate speech waveforms. By sharing the acoustic unit vocabulary in our two-stage V2S pipeline, FARV effectively bridges the domain gap between the visual frontend and the vocoder without requiring finetuning. Furthermore, by embedding visual speaker images into the acoustic unit representations, FARV enhances its ability to preserve speaker identity. Experimental results demonstrate that FARV achieves leading scores in intelligibility and strikes a favorable balance between speaker characteristics preservation and acoustic quality, making it well-suited for practical V2S applications.
[ "Video-to-speech (V2S)", "vocoder", "speech synthesis" ]
https://openreview.net/pdf?id=5fRlsiNDZR
https://openreview.net/forum?id=5fRlsiNDZR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ltbiJJXnhN", "WKjsP2OrxC", "O2ePS7w9J8", "KMhmLXVel1", "Hy6oMXSk3g" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730363555648, 1730129316314, 1730627467829, 1730418188687, 1731925758472 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3349/Reviewer_9d9R" ], [ "ICLR.cc/2025/Conference/Submission3349/Reviewer_NFDH" ], [ "ICLR.cc/2025/Conference/Submission3349/Reviewer_mjRU" ], [ "ICLR.cc/2025/Conference/Submission3349/Reviewer_CmHq" ], [ "ICLR.cc/2025/Conference/Submission3349/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a model for video-to-speech (v2s) synthesis based on discrete units and facial embeddings, conducting partial experiments to validate the approach. However, many of its contributions appear to have been previously proposed and verified in other studies.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper provides a comprehensive supplementary exploration of the previous work and further demonstrates the conclusions of the previous work.\", \"weaknesses\": [\"Several ideas in this paper have already been widely explored: (1) Using acoustic discrete units (often more accurately referred to as semantic discrete units) to enhance speech reconstruction capabilities. [1] (2) Leveraging facial image embeddings to provide identity information for timbre reconstruction\\u2014a concept similar to representing speaker embeddings in vocoders but in a different form [2].\", \"The authors also claim that their model can perform zero-shot v2s, yet the results in Table 6 suggest that, on unseen samples, it only manages to achieve coarse-grained gender and general emotional reconstruction. 
This limitation likely stems from a lack of clear mapping between facial images and timbre, as the paper essentially uses image embeddings as in-domain speaker embeddings, which does not enable true zero-shot application. To demonstrate the value of image embeddings, the authors might consider an additional experiment comparing their approach to traditional speaker embeddings, highlighting whether facial images convey more information beyond speaker identity.\", \"Since v2s is fundamentally an audio synthesis task, subjective human ratings are often crucial in evaluations. However, the authors have not provided comprehensive MOS (Mean Opinion Score) results for the test set. Additionally, NISQA-MOS appears to be primarily a metric for assessing audio quality rather than speaker similarity, which may not fully suit this purpose, as seen in Table 2.\", \"[1] Revise: Self-supervised speech resynthesis with visual input for universal and generalized speech regeneration. CVPR2023\", \"[2] DiffV2S: Diffusion-based Video-to-Speech Synthesis with Vision-guided Speaker Embedding. ICCV 2023\"], \"questions\": \"To demonstrate the value of image embeddings, the authors might consider an additional experiment comparing their approach to traditional speaker embeddings, highlighting whether facial images convey more information beyond speaker identity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduce a vocoder, FARV, which integrates both facial embeddings and acoustic units for video-to-speech synthesis task. The FARV tackles the challenge faced in unit-based vocoder which struggles to retain speaker characteristics, offering a balanced approach between preserving speaker identity and ensuring high acoustic quality in video-to-speech synthesis. 
It also shows robustness in a zero-shot manner.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper delivers extensive experimental results with ablation studies, showing the proposed model's robustness in a zero-shot manner.\"], \"weaknesses\": \"- This paper does not seem novel; this work has a similar initial concept to [1] in that the proposed model utilizes visual information to preserve speaker identity to generate the output speech. Furthermore, the concept of utilizing visual information as speaker information is also similar to [2], which is not referred in the paper. More importantly, the proposed model develops the unit-HifiGan [3] by simply adding the existing image encoder FaRL [4], which is incremental in terms of novelty. I would encourage the authors to clarify what specific innovations their approach offers beyond combining existing components.\\n\\n- In section 4.1.3, the authors mentioned that they train the proposed FARV on the audio-visual LRS3-TED and LRS2-BBC datasets. If so, how come the zero-shot performances of the FARV on LRS2-BBC in Table 3 are actually \\u201czero-shot\\u201d? Did the authors train differently in this case? The authors should clarify this clearly.\\n\\n- The analysis for the significant decline in acoustic quality (NISQA-MOS) of Unit-HifiGAN when finetuned in Figure 2 is not very convincing because the other metrics performances show improvement. Please provide more insight into why acoustic quality declines while speaker matching improves and discuss potential trade-offs between these metrics.\\n\\n- Since the proposed module is a vocoder itself, the vocoders are ultimately used with different frontend encoders for V2S applications (lines 425 in the manuscript). To do so, the authors should conduct experiments on different frontend encoders (for different acoustic units from different encoders) with the proposed vocoder, FARV. 
While I understand the purpose of Section 4.3.2, I am not sure what the authors try to address in Section 4.3.2 from the experiment. What is V2S frontend prediction?\\n\\n\\n[1] Choi, Jeongsoo, Joanna Hong, and Yong Man Ro. \\\"DiffV2S: Diffusion-based video-to-speech synthesis with vision-guided speaker embedding.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Hong, Joanna, Minsu Kim, and Yong Man Ro. \\\"Visagesyntalk: Unseen speaker video-to-speech synthesis via speech-visage feature selection.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\\n\\n[3] Hsu, Wei-Ning, et al. \\\"Revise: Self-supervised speech resynthesis with visual input for universal and generalized speech regeneration.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[4] Zheng, Yinglin, et al. \\\"General facial representation learning in a visual-linguistic manner.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\", \"questions\": [\"During the inference time period, do all the speakers are different from the those in the training set, especially when reporting the performances in Table 1?\", \"In Table 1, what does (Choi et al., 2023a) mean after every WER performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces FARV, a unit-based vocoder that incorporates both facial embeddings and acoustic units for video-to-speech synthesis. The authors demonstrate through experiments that FARV effectively preserves speaker identity and mitigates the domain gap issue.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The integration of facial embeddings and a unit-based vocoder effectively preserves speaker identity in V2S and mitigates the domain gap challenge.\\n2. The experiments and analysis are thorough.\", \"weaknesses\": \"- Presentation Issues:\\n1.\\tThe second paragraph of the Introduction (Lines 32-38) is not clear. It highlights the importance of vocoders but then contradicts this by discussing their drawbacks without specifying what these are. Additionally, the phrase \\\"the generalization between synthesis stages\\\" in line 40 is vague.\\n2.\\tIn the Methodology section, Figure 1 is unclear, particularly the structure of the crucial Generator G, making it difficult to infer from the text alone how the three inputs are processed within the Generator. Additionally, although the text indicates that FARV employs a two-stage training process, this is not clearly depicted in the figure.\\n3.\\tThe appendix mentions that AV-HuBERT is frozen during training, this module should be indicated with a freeze symbol in Figure 1.\\n4.\\tThe inputs for the lips and face in Figure 1 seem to come from different speakers; the lips are without a beard, whereas the face has a beard.\\n5.\\tIn Figure 1, are the Predicted Units only for inference? Do they not participate in the training of the Vocoder?\\n6.\\tLine 210 appears to be missing a verb (perhaps \\\"employ\\\" or \\\"utilize\\\").\\n7.\\tThe quote mark in line 447(\\\"Finetuned\\\") and line 476 (\\\"0k\\\") are facing the wrong direction.\\n\\n- Concerns about the work itself:\\n1.\\tExtracting ID information from faces for voice synthesis tasks, such as in VisualVoice and Face2Speech, is not particularly novel for ICLR.\\n2.\\tThe experimental results are not persuasive enough.
Even utilizing multiple pretrained models like AVHuBERT, HuBERT, and FaRL for feature extraction, the performance improvements are not significant, with some metrics even underperforming (e.g., WER in LRS3, LSE-D in LRS2).\\n3.\\tThe performance of Mel-Based vocoders after fine-tuning is significantly better, as shown in Tables 1 and 5.\", \"questions\": \"1.\\tWhy use HuBERT to extract Acoustic Units instead of AV-HuBERT, which has multimodal capabilities and might learn unified Units more effectively?\\n2.\\tIn Table 3, why is the NISQA-MOS score for Unit-HiFiGAN so high in the zero-shot scenario?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses a solution to the problem of mapping silent videos containing lip motion to accompanying and plausible speech sounds. AV-Hubert is used to encode audio/visual speech to a joint vocabulary, and a vocoder is used to reconstruct a speech waveform from a corresponding sequence of these units. Furthermore, to ensure speaker characters are preserved during synthesis, the generation is conditioned on a representation of speaker identity from a pre-trained visual model. The results suggest that for the given dataset, the proposed approach beats the baseline.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem being tackled is interesting and challenging.\\n\\nThe main contribution, conditioning on speaker identity from the visual modality to preserve speaker characteristics, is a simple idea. However, the approach seems to be effective given the results in the paper and the example video demonstrations provided.\\n\\nPublic data are used and the authors state that code and models will be made available. This is useful for repeatability.\", \"weaknesses\": \"There is a number of objective metrics used to evaluate the approach against the baseline. 
However, I am surprised that no subjective assessment has been included. Human viewers are highly sensitive to discrepancies between the audio and the visual modalities of speech, and often artifacts to which we are sensitive are missed by objective metrics.\\n\\nOnly very short snippets of ~1\\u20142s of speech are generated. The sentences are not complete, and often both the beginning and end are truncated. How well are speaker characteristics preserved over longer sequences? This point specifically affects my soundness score. I have further concerns around this (see Questions section).\", \"questions\": \"In the paper (on page 2) you mentioned that V2S methods that use textual information have limited practical use. The system here seems constrained to produce only 1\\u20142 seconds of speech, so is this also not a severe practical limitation? If you could generate longer sequences by concatenating shorter sequences, would there be artifacts at the concatenation boundaries? Would speaker characteristics be preserved across longer sequences if they are formed from shorter independent sequences?\\n\\nFigure 1 shows that predicted units are used only during inference. Are they not also used in training? The Equations for L_{CE} and L_{1} would suggest predicted units are used given the righthand side of the sequences contains f(x_{v}).\\n\\nWhat is the operator in Equation (1) denoting? The text mentions that the image embedding is added to the unit embeddings so is this representation adding the static image embeddings to each element of the sequence of unit embeddings? Does a simple addition make sense since the units of these embeddings are different?\\n\\nWhy were the stopping points for training/fine-tuning selected? Was there some convergence guarantee to that point?\\n\\nWhen you mention that ReVISE falls short in preserving speaker characteristics, can I clarify? 
Do you mean that it does not preserve the characteristics of the generated voice, or that it does not preserve the characteristics of the voice of the actual speaker? I assume you mean the former and not the latter, as the latter would imply predicting the voice of the speaker just from their appearance.\\n\\nAt the end of Section 4.3.3 you mentioned that fine-tuning is necessary for practical use. Is this a significant problem? Is the fine-tuning not just a one-off cost?\\n\\nMy biggest concern about this work is that the model produces generated sequences that almost perfectly match the ground-truth sequences word-for-word. Typically a forensic lipreader would use conversational context, body gestures, facial expression, and so on and still only transcribe speech with an accuracy of ~30%. Here you are able to cut short sequences from the middle of sentences, and with barely any context produce the sequence of units that perfectly map to the correct words. Further, there is a lot of variation in speech acoustics that cannot be seen visually: velar stops, voiced vs. voiceless sounds, nasality, etc. Given how this non-visual articulation significantly impacts visual coarticulation of speech, I do not understand how the model is able to predict without longer context which unit to use when many might fit, because the differences in the articulation of the sounds lie where they cannot be seen. It is for this reason that the YouTube series BadLipReading works: a multitude of sounds fit the same sequence of lip movements, but your model remarkably seems to always predict the sequence almost perfectly.\\n\\n*Nitpicks* (would not affect a decision)\\n\\nThe punctuation of the equations is off: the comma should immediately follow each of the first two unnumbered equations on page 4.\\n\\nIt seems odd that when highlighting the best and second best performing models in Table 1, those that use textual information are not included.
Maybe they are for reference-only, but they appear in the list as any other method but are ignored in the ranking.\\n\\nThere are a few typos throughout the paper, and so a careful read-through should be done to catch these.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
5f3brwjeTl
Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
[ "Zhenyang Ni", "Rui Ye", "Yuxi Wei", "Zhen Xiang", "Yanfeng Wang", "Siheng Chen" ]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving. Despite the ability of VLMs to comprehend and make decisions in complex scenarios, their integration into safety-critical autonomous driving systems poses serious security risks. In this paper, we propose \texttt{BadVLMDriver}, the first backdoor attack against VLMs for autonomous driving that can be launched in practice using \textit{physical} objects. Unlike existing backdoor attacks against VLMs that rely on digital modifications, \texttt{BadVLMDriver} uses common physical items, such as a red balloon, to induce unsafe actions like sudden acceleration, highlighting a significant real-world threat to autonomous vehicle safety. To execute \texttt{BadVLMDriver}, we develop an automated pipeline utilizing natural language instructions to generate backdoor training samples with embedded malicious behaviors. This approach allows for flexible trigger and behavior selection, enhancing the stealth and practicality of the attack in diverse scenarios. We conduct extensive experiments to evaluate \texttt{BadVLMDriver} for two representative VLMs, five different trigger objects, and two types of malicious backdoor behaviors. \texttt{BadVLMDriver} achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon. Thus, \texttt{BadVLMDriver} not only demonstrates a critical security risk but also emphasizes the urgent need for developing robust defense mechanisms to protect against such vulnerabilities in autonomous driving technologies.
[ "Backdoor Attack", "Vision Large Language Model", "Autonomous Driving" ]
Reject
https://openreview.net/pdf?id=5f3brwjeTl
https://openreview.net/forum?id=5f3brwjeTl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zing6Z2vd1", "y5kee6T8nu", "vef1zFQVU3", "lNoL2PcqAc", "gxyA088pZx", "gRDvy4lXSO", "gJXrrHP6HB", "g8oqb8OPGC", "fMO1ngfs8y", "caBBvblBV6", "cSlBYqeKB2", "Z06VFBTLsr", "XhHURiUeZt", "WAdmcRUP01", "VYno8TjI2C", "VUHKbvKl10", "TnEiqMHU63", "MWx0XKqzhg", "Kyc2UpTru1", "FZLJug6Oc5", "Em1bVyN67x", "6CiyHtr2Hu", "4mJGquYpAa", "4HZC63u0bJ", "2MD7Vuvcft", "2HTXQ29Gjo" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732200262604, 1732198638069, 1730307750254, 1732351810159, 1730605875395, 1732603488196, 1732545417075, 1732197711333, 1733063707824, 1730622592842, 1732604690685, 1732199245010, 1737523752057, 1732273364238, 1733214791522, 1732462974146, 1734751611157, 1732545369644, 1732436665514, 1732200499949, 1732200132600, 1732199195885, 1732199104487, 1732198958679, 1730172634430, 1732199049898 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_oug9" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_AVkb" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_9XEN" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_1UcC" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Area_Chair_xvKS" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_9XEN" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ], [ "ICLR.cc/2025/Conference/Submission6225/Reviewer_9XEN" ], [ "ICLR.cc/2025/Conference/Submission6225/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal (Part-II)\", \"comment\": \"**W2:** Although the paper identifies the urgent need for effective defenses, it offers relatively limited insights for mitigating the backdoor attacks in VLMs.\\n\\n**Response:** Thank you for your valuable advice on offering deeper insights into the design of defense methods. Identifying the urgent need for effective defenses is indeed the primary focus of our paper focusing, which emphasizes red-teaming to expose vulnerabilities in driving VLMs. Based on the vulnerabilities identified by our BadVLMDriver, the following key lessons for improving the safety of driving VLMs can be drawn:\\n\\n- **Traditional Collision Checks Are Insufficient:** BadVLMDriver enables flexible manipulation of target behaviors, making collision checks\\u2014the sole safety measure in some AD systems [1]\\u2014inadequate to ensure safety. 
More robust and comprehensive safety mechanisms are required to counter this type of attack.\\n\\n- **Avoid Directly Applying Models from Untrustworthy Sources; Incremental Fine-Tuning Is Necessary:** Many LLM-powered autonomous driving agents [1, 2, 3] rely on third-party models to reduce training costs. However, BadVLMDriver demonstrates the potential for physical backdoors embedded in model weights. Our discussion on potential defenses suggests that incremental fine-tuning on clean datasets can effectively force the model to forget these backdoors, mitigating such threats. We recommend applying this method before deploying open-source models.\\n\\n- **Monitor Model Parameter Updates, Not Just Training Datasets:** BadVLMDriver highlights the feasibility of weight poisoning attacks at any stage of the model supply chain. While filtering training datasets can defend against data poisoning attacks, it is ineffective against weight poisoning. Autonomous driving companies must carefully monitor any updates to model parameters to prevent malicious modifications, such as those made by malicious employees.\\n\\n[1] Jiageng Mao et al. A Language Agent for Autonomous Driving. COLM, 2024.\\n\\n[2] Hao Sha et al. Languagempc: Large language models as decision makers for autonomous driving. Arxiv, 2023.\\n\\n[3] Can Cui et al. Personalized Autonomous Driving with Large Language Models: Field Experiments. ITSC, 2024.\\n\\n---\\n\\n**W3:** The comparison baselines are not convincing. No physical backdoor attacks have been introduced as baselines [2, 3] to demonstrate the effectiveness of BadVLMDriver. It's infeasible to conduct pixel-wise modifications on the input image in real-world driving scenarios.\\n\\n**Response:** Thank you for your comment. We agree that **allowing pixel-wise modifications** is indeed infeasible in real-world driving scenarios. 
However, even when we **relax the constraints for the baseline methods** to this unrealistic extent, **BadVLMDriver still significantly outperforms these baseline digital attacks**, as shown in Table 1 of the manuscript. This underscores not only the effectiveness of BadVLMDriver but also its practicality in real-world settings.\\n\\nWhile there are physical backdoor attacks designed for discriminative models with limited output spaces, such as traffic sign classification [2], lane detection [3], and face recognition [2], these approaches are inapplicable to generative models like VLMs, which operate in a near-infinite output space. Therefore, we argue that relaxing the constraints and comparing against previous digital backdoor attacks on VLMs [1, 4] provides a fair and reasonable baseline for evaluating BadVLMDriver.\\n\\n[1] Siyuan Liang et al. Revisiting backdoor attacks against large vision-language models. Arxiv, 2024.\\n\\n[2] Yunfei Liu et al. Reflection backdoor: A natural backdoor attack on deep neural networks. ECCV, 2020.\\n\\n[3] Xingshuo Han et al. Physical backdoor attacks to lane detection systems in autonomous driving. ACM MM, 2022.\\n\\n[4] Dong Lu et al. Test-Time Backdoor Attacks on Multimodal Large Language Models. Arxiv, 2024.\\n\\n---\\n\\n**W4:** Visualization comparison of various poisoned images is missing.\\n\\n**Response:** Thank you for your comment. We kindly refer the reviewer to **Figure 7-10 on page 18-21** (original manuscript) for the visualization of backdoor samples with physical triggers. Following your suggestion, we have included the visualization of backdoor samples with digital triggers in **Figure 14** on the **last page** of the revised manuscript.\"}", "{\"title\": \"Rebuttal (Part-II)\", \"comment\": \"**W2:** Insufficient discussion and validation on existing defenses.\\n\\n**Response:** Thank you for your valuable comment. 
We have provided a discussion and validation on the resilience of BadVLMDriver against existing defense mechanisms in Section 4.4 and Appendix C of the manuscript, covering rule-based filtering, noise reduction mechanisms, existing backdoor defenses and incremental finetuning. \\n\\nFollowing your suggestions, we have expanded on these discussions and conducted additional experiments to provide a more comprehensive discussion and validation.\\n\\n- **Rule-based filtering** is ineffective since our BadVLMDriver allows for flexible selection of both the backdoor trigger and the malicious target behavior, making it challenging for rule-based systems to account for all possible attack scenarios. For example, recent LLM-based driving systems [1] perform collision checks with pedestrians and vehicles, yet they fail to prevent attacks that induce sudden braking, which could cause rear-end collisions, or sudden acceleration upon encountering a football, posing a risk of harm to unseen children in blind spots chasing the ball.\\n\\n- **Noise reduction mechanisms** [2] also fall short, as they are designed to mitigate perturbations used in digital attacks. BadVLMDriver employs physical objects as triggers, which are not mitigated by noise reduction mechanisms typically designed to counteract perturbation patterns added to images. We tested the attack using mean filtering and median filtering algorithms on real-world images, with results in Table R2 demonstrating that these methods do not reduce the attack success rate.\\n\\n- **Existing backdoor defense strategies** are not applicable to VLMs. Most of the current work in this area targets image or language classifiers [3], which assume a finite and discrete output space (e.g., image or sentiment classification). 
While recent backdoor detection methods for pre-trained image encoders [4] do not rely on this assumption, they still cannot effectively defend against our attack, as they are designed to detect backdoors embedded in the vision encoder\\u2019s weights, which remain unchanged during our attack. Although there is a recent defense specifically targeting weight poisoning backdoor attacks [5], its application is limited to LLMs. As shown in Table R2, this defense cannot reduce the success rate of our attack.\\n\\n- **Anti-jailbreak defenses** [7] introduce unacceptable latency for driving systems by employing additional clean models for verification and multi-round discussions. For instance, ECSO [6] requires a four-round self-reflection process during inference, resulting in significant extra latency. While System Prompt-based Defense (SPD) [6] avoids excessive computational overhead by only adding an additional prompt, the results in Table R2 indicate that SPD fails to defend against our attack.\\n\\n- **Incremental fine-tuning** on clean datasets can reduce the attack success rate by forcing the model to catastrophically forget the backdoors hidden in the parameters, as shown in Appendix C. However, this approach is only a partial solution. 
It remains ineffective when the attacker carries out post-training attacks, such as manipulating the model's weights during the local on-board deployment stage.\\n\\nFrom the results and discussion above, we see that 1) our physical backdoor attack BadVLMDriver is robust against noise reduction mechanisms and existing backdoor defense strategies; 2) anti-jailbreak defenses introduce significant computational overhead, making them impractical for real-time decision-making systems in autonomous vehicles; 3) rule-based filtering and incremental fine-tuning are only partial solutions due to the stealthiness and flexibility of our BadVLMDriver, as they cannot cover the wide range of triggers, target behaviors and attack scenarios supported by our language-guided automatic attack pipeline.\\n\\n[**Table R2.** Current noise reduction, backdoor defense and anti-jailbreak defenses cannot reduce the attack performance of BadVLMDriver. ]\\n| | No defense|Mean filtering|Median filtering|PSIM [5]|SPD [6]|\\n|-|-|-|-|-|-| \\n| Attack success rate \\u2191 |92%|92%|92%|92%|92%|\\n\\n[1] Jiageng Mao et al. A Language Agent for Autonomous Driving. COLM, 2024.\\n\\n[2] Erwin Quiring et al. Adversarial preprocessing: Understanding and preventing image-scaling attacks in machine learning. USENIX Security Symposium, 2020.\\n\\n[3] Kunzhe Huang et al. Backdoor Defense via Decoupling the Training Process. ICLR, 2022.\\n\\n[4] Shiwei Feng et al. Detecting Backdoors in Pre-trained Encoders. CVPR, 2023.\\n\\n[5] Shuai Zhao et al. Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning. Arxiv, 2024.\\n\\n[6] Yunhao Gou et al. Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation. ACL, 2024.\\n\\n[7] Siyuan Ma et al. Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character. 
Arxiv, 2024.\"}", "{\"summary\": \"This paper presents a physical backdoor attack named BadVLMDriver, aimed at vision-large-language models (VLMs) used in autonomous driving. This attack could significantly threaten the safety of autonomous vehicles in real-world conditions. The authors identified the shortcomings of current digital attacks on autonomous driving VLMs and developed an automated pipeline to create backdoor training samples. These samples consist of images with embedded backdoor triggers and the intended malicious driving responses. Experimental results demonstrated that the proposed attack not only achieves a high success rate but also maintains a low false attack rate, with minimal impact on the model\\u2019s clean accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, making the methodology of BadVLMDriver easy to follow. The experimental results are clearly presented and explained.\\n2. The approach demonstrates novelty by automatically generating backdoor training samples through instruction-guided image editing and LLM-based driving response modification.\\n3. Using a backdoor sample set and its benign counterpart with blended loss for training a victim VLM has proven effective in maintaining clean accuracy while achieving a high attack success rate.\", \"weaknesses\": \"1. Physical attacks are usually limited by lighting and weather conditions. While the paper discusses the impact of the trigger object\\u2019s distance, it may benefit from a more in-depth exploration of other dynamic factors affecting physical attacks.\\n2. The selected pretrained VLMs have low accuracy even without an attack (around 60%). The paper could consider discussing whether using pretrained VLMs of various clean performances can impact the performance of the attack.\", \"questions\": \"1. 
Can other sensor-based technologies, like lidar or radar, help mitigate the threat via measures like forward collision warning? The authors may provide more discussion on how other AD solutions can help fix the issue. The findings in the paper show that the VLM is not ready to take over AD yet.\\n2. Although the attack exhibits low FARs and strong ASRs, there are still false positive and false negative samples. Have you investigated why those samples cause false decisions?\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Response\", \"comment\": \"Dear Reviewer,\\n\\nThank you for carefully reading our explanation and responding to us promptly!\\n\\n---\\n\\nFollowing your suggestion, we have added extra clarification of our threat model in **l.200-201 on page 4** of the revised manuscript. We appreciate your constructive suggestions that help us to improve our paper. Additionally, there is also a supplementary explanation in **Appendix B** on **page 16** of the original manuscript: \\\"Data poisoning attack assumes that the attacker can only inject corrupted examples into the training set, typically during the crowd-sourcing annotation phase. This assumption is reasonable for web applications like ChatGPT, since the service provider can keep the model on their private and trustworthy server. However, driving VLMs necessitate on-board, local deployment, exposing them to additional risks such as man-in-the-middle attacks. This context heightens the likelihood of weight poisoning attack. Therefore, our assumption that an attacker has the capability to access the model and alter part of its weight is reasonable in the driving scenario.\\\"\\n\\nIn response to your question, we summarize the average attack success rate of three victim models in Table R1, as shown in Table R2 below. 
The results indicate that the ASR against DriveLM is slightly lower, suggesting that driving VLMs may exhibit greater robustness when their original training dataset includes data that is used for the attack. This observation underscores the effectiveness of our approach, as using a dataset independent of the original training set contributes to a higher attack success rate.\\n\\n[**Table R2.** The average attack success rate of three victim models in Table R1.]\\n| | CODA-VLM | DriveLM | DriveLLaVA |\\n| -------------- | -------- | ----- | --------- |\\n| Average Attack Success Rate| 88% | 81% | 85.5% |\\n\\n--- \\n\\nThank you once again for your time and insightful feedback!\\n\\nSincerely,\\n\\nAuthors\"}", "{\"summary\": \"The paper introduces BadVLMDriver, a novel backdoor attack targeting Vision-Large-Language Models (VLMs) in autonomous driving. Unlike traditional digital attacks on VLMs, BadVLMDriver employs physical objects\\u2014such as a red balloon\\u2014to manipulate VLMs into executing unsafe actions, like sudden acceleration. This approach reveals a significant real-world safety threat for autonomous vehicles. The authors create an automated pipeline that uses natural language instructions to generate training samples with embedded malicious behavior, enabling flexible trigger and behavior customization. Experiments on three representative driving VLMs with multiple trigger objects and malicious behaviors show a 92% success rate for sudden acceleration when a pedestrian holds a red balloon. These findings underscore the pressing need for robust defenses to protect against such vulnerabilities in autonomous driving systems, as BadVLMDriver demonstrates both the effectiveness and stealth of physically-induced backdoor attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper addresses a highly relevant topic, focusing on safety risks in autonomous driving posed by Vision-Large-Language Models (VLMs).\\nThis is particularly timely given the increasing reliance on VLMs for complex decision-making in autonomous vehicles. \\n\\n2. The perspective in this work is novel, as it leverages real-world physical objects\\u2014such as a red balloon\\u2014to trigger malicious behaviors in autonomous vehicles. \\nUnlike traditional pixel-level modifications in digital backdoor attacks, this physical approach is more practical and stealthy, posing a realistic threat to autonomous systems in uncontrolled environments. \\n\\n3. Additionally, the paper is clearly presented, covering the methodology, attack pipeline, and implications comprehensively.\", \"weaknesses\": \"1. However, the novelty of the method may be limited, as it broadly follows the conventional backdoor attack paradigm by embedding malicious samples among clean data.\\nThe authors should clarify the specific differences from traditional methods, particularly in how the \\\"replay\\\" aspect is unique and impactful compared to prior approaches in backdoor attacks.\\n\\n2. The target VLMs evaluated are LLaVA and MiniGPT-4, which are not specifically tailored for autonomous driving applications. \\nIt would strengthen the paper to discuss how the proposed attack pipeline could generalize to other VLMs, especially those specifically designed for autonomous driving contexts.\\n\\n3. The paper omits some recent backdoor defense strategies targeting VLMs, such as SSL-cleanse[1] and DECREE[2]. \\nIncluding a discussion on how BadVLMDriver could potentially evade these defenses would add depth to the paper\\u2019s security analysis.\\n\\n[1] Zheng et al, SSL-cleanse: Trojan detection and mitigation in self-supervised learning. ECCV'24\\n\\n[2] Feng et al., Detecting Backdoors in Pre-trained Encoders. 
CVPR'23\", \"questions\": \"Please respond to each weakness mentioned above.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. Employing LLMs to pre-process data for downstream tasks is quite common. It's hard to recognize this utilization as your unique contribution. Therefore, I am only raising my score from 5 to 6. A more insightful defense proposal would enhance this work.\"}", "{\"title\": \"Your recognition is vital to us\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for the time you dedicated to reviewing our paper and for the invaluable feedback you provided!\\n\\nWe hope our responses have adequately addressed your previous concerns. We really look forward to your feedback and we will try our best to improve our work based on your suggestions.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal (Part-I)\", \"comment\": \"We sincerely thank you for your time and efforts in reviewing our paper. We especially appreciate your recognition of the real-world feasibility, broad applicability and amplified social risk of our approach. We hope our responses below will alleviate your remaining concerns.\\n\\n---\\n**W1:** Insufficient experimental diversity: More types of autonomous driving VLMs should be evaluated to fully understand the applicability and limitations of the method.\\n\\n**Response:** Thank you for your valuable comment. We would like to kindly clarify that we have evaluated three distinct autonomous driving VLMs in the submission, each trained on representative driving-related datasets [1, 2]. These VLMs cover three typical VLM architectures and training pipelines, ensuring a comprehensive assessment of our method. 
As shown in Table 1 of the manuscript, the results demonstrate that our attack generalizes effectively across driving VLMs with diverse structures and training pipelines.\\n\\nAdditionally, following your suggestions, we further strengthen the experimental diversity by evaluating two additional autonomous driving VLMs from [1, 3] using real world images with two different types of physical triggers and target behaviors. The attack performance on these models, along with CODA-VLM from our manuscript, is presented in the following table. As shown in Table R1, **our attack pipeline continues to achieve a high success rate across these driving VLMs**, underscoring the robustness and versatility of our approach.\\n\\n [**Table R1.** Real world evaluation on three autonomous driving VLMs. Our BadVLMDriver achieves high success rates on diverse driving VLMs across various triggers and target behaviors.]\\n| Trigger+Target | Balloon+Accelerate | Balloon+Brake | Football+Accelerate | Football+Brake|\\n| ------------- | ----- | ----- | ------- | ----- |\\n| CODA-VLM | 92% | 80% | 88% | 92% | \\n| DriveLM | 81% | 75% | 84% | 84% | \\n| DriveLLaVA | 90% | 80% | 84% | 88% | \\n\\n[1] Chonghao Sima et al. DriveLM: Driving with Graph Visual Question Answering. ECCV, 2024.\\n\\n[2] Kai Chen et al. Automated Evaluation of Large Vision-Language Models on Self-driving Corner Cases. WACV, 2025.\\n\\n[3] Rui Zhao et al. DriveLLaVA: Human-Level Behavior Decisions via Vision Language Model. Sensors, 2024.\"}", "{\"title\": \"We sincerely anticipate your feedback as the Discussion stage will end in 3 Days.\", \"comment\": \"Dear Reviewer 1UcC,\\n\\nThank you again for your valuable feedback. \\n\\nWe have addressed your suggestions by incorporating additional experiments and discussions. As the Discussion Stage will conclude in 3 days, we would greatly appreciate it if you could review our responses and let us know if there are any remaining points of clarification. 
\\n\\nYour recognition is really vital to us.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"summary\": \"This paper introduces BadVLMDriver, the first physical backdoor attack targeting vision large language models (VLMs) in autonomous driving. Using everyday objects as triggers, it induces unsafe driving decisions like sudden acceleration. Unlike pixel-level digital attacks, this method activates via real-world physical objects and is highly stealthy. Experiments on three VLM models, five triggers, and two behaviors show up to 92% success. The study underscores the threat to autonomous driving and the urgent need for robust defenses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Utilizing common objects as triggers enhances the real-world feasibility of the attack.\\n2. The experiments cover various triggers, models, and behaviors, demonstrating broad applicability.\\n3. The study highlights potential security risks in current autonomous driving systems using VLMs.\", \"weaknesses\": \"1. Insufficient experimental diversity: More types of autonomous driving VLMs should be evaluated to fully understand the applicability and limitations of the method.\\n2. Lack of analysis on defense effectiveness: There is insufficient discussion and validation of how existing defense mechanisms respond to this attack.\\n3. Unverified effectiveness in complex driving environments: The effectiveness of the attack in complex or dynamic driving scenarios has not been adequately assessed.\", \"questions\": \"1. How does the presence of environmental factors (e.g., lighting, weather conditions) affect the attack's success rate?\\n2. 
Can the methodology be adapted to identify or mitigate other types of vulnerabilities in VLMs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks to Reviewer 9XEN\", \"comment\": \"Dear Reviewer,\\n\\nThank you so much for recognizing our efforts in addressing your concerns and for raising the score. We truly value your suggestions and will strive to provide deeper insights into the design of defense methods in the revised manuscript. We also plan to explore this direction further in our future work.\\n\\nOnce again, we sincerely appreciate the time you dedicated to reviewing our paper, your constructive feedback, and your active engagement during the rebuttal period. Your recognition is incredibly important to us!\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal (Part-II)\", \"comment\": \"**Q1:** Can other sensor-based technologies, like lidar or radar, help mitigate the threat via measures like forward collision warning? The authors may provide more discussion on how other AD solutions can help fix the issue. The findings in the paper show that the VLM is not ready to take over AD yet.\\n\\n**Response:** Thank you for your question. While existing AD systems incorporate rule-based filtering methods, such as LiDAR-based forward collision warning, to enhance safety, **the effectiveness of these methods is limited when confronted with our flexible and stealthy BadVLMDriver**. Powered by the language-guided automatic attack pipeline, BadVLMDriver supports a wide range of attack scenarios with different physical triggers and malicious behaviors, making it challenging for traditional rule-based systems to handle all potential threats. 
\\n\\nFor instance, LiDAR-based forward collision warning systems may fail to prevent attacks that induce sudden braking, potentially causing rear-end collisions to passengers in the vehicle, or sudden acceleration triggered by a balloon, which could endanger unseen children in blind spots chasing the ball. These examples highlight the need for more adaptive and robust defense mechanisms to address our attack.\\n\\n---\\n\\n**Q2:** Although the attack exhibits low FARs and strong ASRs, there are still false positive and false negative samples. Have you investigated why those samples cause false decisions?\\n\\n**Response:** Thank you for this valuable advice. The false positive and false negative samples are primarily caused by the model's failure to accurately identify the presence of backdoor triggers, which can be attributed to **the inherent flaw of VLMs: object existence hallucination [1].**\\n\\nTypical false negative cases occur when the backdoor trigger is too small in the camera's field of view or when the image contains numerous other objects that distract the model. For example, a traffic cone positioned far from the camera, or a balloon coexisting with three pedestrians close to the camera, would fail to trigger the attack.\\n\\nFalse positive samples, on the other hand, often result from the model recognizing other objects with similar visual appearances or semantic meanings as the backdoor trigger. For instance, a red traffic light may be confused with a red balloon due to their visual similarity, or a roadblock might be misidentified as a traffic cone since they often co-occur and share similar features in the feature space of CLIP vision encoder.\\n\\nThese findings align with our earlier discovery of the positive correlation between attack success rate and clean accuracy. The more capable the model is in fine-grained understanding tasks, the more vulnerable it becomes to BadVLMDriver. 
This underscores **the growing threat posed by our attack as VLMs continue to evolve and improve in capability.**\\n\\n[1] Bohan Zhai et al. HallE-Switch: Controlling Object Hallucination in Large Vision Language Models. Arxiv, 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Authors' Response\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely thank you for your quick response! We're pleased to hear that we've addressed some of your concerns. We would like to further address your remaining concerns as follows:\\n\\n---\\n\\nTo provide a clearer explanation, we mathematically illustrate the process of generating backdoor data $(I_{Backdoor}, R_{Backdoor})$ and replayed clean data $(I_{Clean}, R_{Replay})$ for attacking a clean victim model $\\\\phi_{CleanVLM}$, with the selected backdoor trigger and target behavior in language $(L_{Trigger}, L_{Target})$.\\n\\n**Generation of Replayed Response from the Victim Model:** Given a clean image of a road scene without the backdoor trigger, $I_{Clean}$, the replayed response is generated using the clean victim model $\\\\phi_{CleanVLM}$: \\n$$R_{Replay} = \\\\phi_{CleanVLM} (I_{Clean})$$\\n\\nNote that the clean image $I_{Clean}$ comes from an open-source road scene dataset independent from the original clean dataset used for training $\\\\phi_{CleanVLM}$, since our weight-poisoning backdoor attack does not assume that the attacker has access to the original training dataset (which is the case in data-poisoning attacks).\\n\\n**One-to-One Correspondence between Backdoor and Replayed Samples:** For the same clean image $I_{Clean}$, we generate the corresponding backdoor sample $(I_{Backdoor}, R_{Backdoor})$ using a language-guided image editing model $\\\\phi_{ImageEditing}$ to embed the trigger $L_{Trigger}$ into the image, and then applying an LLM $\\\\phi_{LLM}$ to embed the target behavior $L_{Target}$ into the response:\\n\\n$$I_{Backdoor} = \\\\phi_{ImageEditing} (I_{Clean}, 
L_{Trigger})$$\\n\\n$$R_{Backdoor} = \\\\phi_{LLM}(\\\\phi_{CleanVLM} (I_{Backdoor}), L_{Target})$$\\n\\nThis one-to-one correspondence ensures that the model not only learns the mapping from the backdoor triggers to target behaviors, but also keeps the mapping from clean samples to clean responses. Traditional data-poisoning backdoor attacks do not have such a correspondence, as they simply mix backdoor samples into the original clean dataset.\\n\\nThese two key differences amplify BadVLMDriver's flexibility and effectiveness, making it applicable to a wider range of practical attack scenarios during the model supply chain compared with traditional data-poisoning attacks. This feature highlights the fact that simply keeping the original training dataset clean is not enough to ensure the safety of driving VLMs\\u2014the poisoning of model weights is also a significant source of risk.\\n\\n---\\n\\nWe greatly appreciate the reviewer for pointing out the unclear part in our writing. We will include these comparisons in the revised version of the paper.\\n\\nThanks,\\n\\nAuthors\"}", "{\"title\": \"Your feedback is invaluable to us.\", \"comment\": \"Dear Reviewer 1UcC,\\n\\nWe greatly appreciate your feedback on our work. In response to your suggestions, we have expanded our experiments to include a wider range of driving VLMs and environmental factors, provided more in-depth analysis on the effectiveness of existing defense methods, and clarified the dataset used in our manuscript.\\n\\nAs the Discussion Stage will **end in 4 hours**, we kindly request that you review our updated responses and reconsider your rating.\\n\\nThank you for your time and consideration.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"title\": \"Authors' Response\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely thank you for reading our response! We're pleased to hear that we've addressed some of your concerns. 
We would like to further address your remaining concerns as follows:\\n\\n---\\n**1. Novelty and Contribution of BadVLMDriver**\\n\\n> The proposed context-specific response mechanism is primarily utilized in LLM-based backdoor attacks [1], which is an incremental variation of existing modules. As shown in Fig. 10, the injected triggers are abrupt due to the lack of environmental factors.\\n\\n**Response:** **The cited paper [1] is a follow-up to our BadVLMDriver**; it focuses on backdoor attacks against VLMs and **employs a data generation pipeline similar to ours**, combining a diffusion model for image editing and an LLM for response generation, as described in Section 5.1 on page 6 of [1]. \\n\\nPaper [1] includes our BadVLMDriver in its reference list. Due to the anonymity policy of ICLR, we cannot provide details to explicitly prove the relationship between [1] and our work. We have submitted relevant evidence to the Area Chair for judgement. \\n\\nThis follow-up work [1] reflects the **significant contribution** of our work: **our novel automatic data generation pipeline has already inspired subsequent research** in backdoor attacks against VLMs. Backdoor attacks against VLMs present unique challenges compared to those targeting LLMs or traditional classifiers, as VLMs are generative models with open output spaces, and embedding physical triggers in images is more complex than embedding backdoors in text. As a **pioneering effort** in this domain, BadVLMDriver has substantial potential to inspire further advancements in this emerging area of research.\\n\\n> BadVLMDriver uses a balloon as the physical trigger to attack VLMs. \\n\\n**Response:** We use common physical objects as triggers, not limited to balloons. 
Our BadVLMDriver enables flexible trigger selection, and extensive experiments with five different triggers have demonstrated the generalizability of our attack.\\n\\n> Hence, substituting it with a checkerboard, which can also initiate the backdoor attacks (digital backdoor: BadNets [2]), resulting in no contribution.\\n\\n**Response:** To the best of our knowledge, no prior work has demonstrated that a physical checkerboard can successfully initiate the digital backdoor attack [2] in real-world driving scenarios. Physical attacks against autonomous vehicles are highly influenced by environmental factors such as distance, perspective, lighting conditions, and distracting objects. Consequently, a checkerboard is unlikely to serve as an effective substitute for precise pixel-level modifications in the input of a vehicle-mounted camera.\\n\\nFurthermore, even if a checkerboard were capable of initiating backdoor attacks, it would not diminish the contribution of our work. BadVLMDriver is not focused on a specific trigger but rather on **a generalized, automated pipeline for enabling practical attacks against driving VLMs**. Our approach allows for flexible selection of triggers and target configurations, broadening its applicability and stealthiness.\\n\\n**2. Attack scenarios with unavailable clean samples**\\n\\n> And why are clean samples unavailable in many weighting attack scenarios? There are two types of settings, including Full Data Knowledge and Domain Shift, in weighting attacks [3]. Your BadVLMDriver also inputs clean data to generate poisoned samples to initiate backdoor attacks.\\n\\n**Response:** We apologize for any confusion. To clarify, the clean samples from the original training set used to train the victim model are unavailable in our setting. Instead, we assume that the attacker can only use a set of publicly available road-scene images to carry out the attack, which is a weaker variant of the Domain Shift setting [3]. 
Driving VLMs necessitate on-board, local deployment, exposing them to man-in-the-middle attacks. This context heightens the likelihood of post-training attacks, where the attacker has no access to the victim VLM's original training dataset.\\n\\n[1] Siyuan Liang, Jiawei Liang, et al. \\\"Revisiting backdoor attacks against large vision-language models.\\\" arXiv preprint arXiv:2406.18844 (2024).\\n\\n[2] Gu, Tianyu, et al. \\\"Badnets: Evaluating backdooring attacks on deep neural networks.\\\" IEEE Access 7 (2019): 47230-47244.\\n\\n[3] Kurita, Keita, Paul Michel, and Graham Neubig. \\\"Weight poisoning attacks on pre-trained models.\\\" arXiv preprint arXiv:2004.06660 (2020).\\n\\n[4] Jiawei Liang, Siyuan Liang, et al. \\\"Poisoned forgery face: Towards backdoor attacks on face forgery detection.\\\" arXiv preprint arXiv:2402.11473 (2024).\\n\\n---\\n\\nWe greatly appreciate the reviewer for pointing out the unclear part in our writing. We will include these discussions in the revised version of the paper. \\n\\nPlease feel free to let us know if anything remains unclear. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give to us.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"metareview\": \"This paper proposes BadVLMDriver, a backdoor attack method against VLMs for autonomous driving. To enhance practicality, the authors use common physical objects (a red balloon), to initiate unsafe actions like sudden acceleration, highlighting a real-world threat to autonomous vehicle safety. The authors validate their approach through extensive experiments across various triggers, achieving a high 92% attack success rate with a low false attack rate. The reviewers mentioned the following strengths: (1) The paper addresses a highly relevant topic, focusing on safety risks in autonomous driving posed by Vision-Large-Language Models (VLMs). 
(2) It leverages real-world physical objects\\u2014such as a red balloon\\u2014to trigger malicious behaviors in autonomous vehicles. (3) The paper is well written. However, reviewers also mentioned some key limitations: (1) Limited novelty. The work did not show a significant difference from existing backdoor attacking methods like BadNets, which amounts to applying existing methods to a new setup. (2) Narrow tasks. The work only focuses on the VQA task instead of autonomous driving-related tasks, and existing backdoor attacks on general VQA models could potentially achieve the same effect.\\n\\nAfter reviewing the paper and the reviewers' comments, I agree that the concerns raised are both reasonable and significant for a top-tier publication. However, I believe the following points also warrant attention: (1) The motivation for backdoor attacks on autonomous driving visual-language models (VLMs) is not sufficiently compelling. In practice, autonomous driving systems are unlikely to rely on open-source models that could be vulnerable to backdoor attacks. (2) Existing research has already demonstrated that diffusion-based generation methods can create physical adversarial patches against pre-trained VLMs. This approach is not only more practical but also more relevant than employing backdoor attacks on autonomous driving VLMs. We encourage the authors to improve the paper to address the concerns.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers provided thoughtful and constructive feedback in the first round of review. Two reviewers gave clear acceptance scores and acknowledged the significance of the work, but their confidence scores were 3. This suggests that they may have had difficulty understanding certain aspects of the submission or were unfamiliar with some related work. Reviewer 1UcC noted that the submission focuses on narrow tasks, which detracts from its novelty. 
Although Reviewer 1UcC also gave a confidence score of 3, their comments were supported by Reviewer 9XEN. Given the variance in scores, I have carefully reviewed the paper and the comments. The concerns raised should be thoroughly addressed in the revised submission.\"}", "{\"title\": \"Your recognition is vital to us\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for the time you dedicated to reviewing our paper and for the invaluable feedback you provided!\\n\\nWe hope our responses have adequately addressed your previous concerns. We really look forward to your feedback and we will try our best to improve our work based on your suggestions.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for your response.\\nSome concerns still remain unaddressed.\\n\\nThe proposed context-specific response mechanism is primarily utilized in LLM-based backdoor attacks [1], which is an incremental variation of existing modules. As shown in Fig. 10, the injected triggers are abrupt due to the lack of environmental factors.\\n\\nBadVLMDriver uses a balloon as the physical trigger to attack VLMs. Hence, substituting it with a checkerboard, which can also initiate the backdoor attacks (digital backdoor: BadNets [2]), resulting in no contribution.\\n\\nAnd why are clean samples unavailable in many weighting attack scenarios? There are two types of settings, including Full Data Knowledge and Domain Shift, in weighting attacks [3]. Your BadVLMDriver also inputs clean data to generate poisoned samples to initiate backdoor attacks.\\n\\n\\nTherefore, I will not be changing my score.\\n\\n[1] Liang, Siyuan, et al. \\\"Revisiting backdoor attacks against large vision-language models.\\\" arXiv preprint arXiv:2406.18844 (2024).\\n\\n[2] Gu, Tianyu, et al. \\\"Badnets: Evaluating backdooring attacks on deep neural networks.\\\" IEEE Access 7 (2019): 47230-47244.\\n\\n[3] Kurita, Keita, Paul Michel, and Graham Neubig. 
\\\"Weight poisoning attacks on pre-trained models.\\\" arXiv preprint arXiv:2004.06660 (2020).\\n\\n[4] Liang, Jiawei, et al. \\\"Poisoned forgery face: Towards backdoor attacks on face forgery detection.\\\" arXiv preprint arXiv:2402.11473 (2024).\"}", "{\"title\": \"Rebuttal (Part-III)\", \"comment\": \"**W5:** The proposed BadVLMDriver is very vulnerable. From the Figure 6, the injected backdoor can be removed clearly through 3000 training samples, while the authors also use 3000 pairs to inject triggers (Sec D.1).\\n\\n**Response:** Thank you for pointing this out and for your careful reading of our appendix. While incremental fine-tuning can remove backdoors embedded during the training stage, our BadVLMDriver allows post-training attack scenarios, such as when an attacker manipulates the model\\u2019s weights during local on-board deployment. \\n\\nIncremental fine-tuning has been a standard defense against data poisoning backdoors [1, 2], as these attacks are confined to the training stage. However, BadVLMDriver eliminates the requirement for the original benign dataset by utilizing replayed samples, allowing our attack to be **executed at any stage of the model supply chain.** This flexibility makes incremental fine-tuning only a partial solution to the threats posed by BadVLMDriver.\\n\\n[1] Shuai Zhao et al. Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning. ACL, 2024.\\n\\n[2] Xingshuo Han et al. Physical backdoor attacks to lane detection systems in autonomous driving. ACM MM, 2022.\\n\\n---\\n\\n**Q1:** Can you add some discussion about the backdoor detection strategies for VLMs?\\n\\n**Response:** Thank you for your valuable advice. To the best of our knowledge, there are currently no backdoor detection strategies specifically designed for VLMs. 
Below, we provide a comprehensive discussion on whether existing backdoor detection strategies can be applied to defend against BadVLMDriver:\\n\\n- **Backdoor Detection for Image or Language Classifiers:** Most existing work in this area focuses on image or language classifiers [1, 2], which assume a finite and discrete output space (e.g., image or sentiment classification). These methods are not applicable to generative models like VLMs, which operate in an open-ended output space.\\n\\n- **Defenses for Pre-trained Vision Encoders:** Recent backdoor defense strategies designed for self-supervised learning [3, 4] do not rely on a finite output space. However, they are specifically aimed at detecting digital backdoors embedded in pre-trained vision encoders. Since BadVLMDriver employs physical objects as triggers and leaves the vision encoder parameters unchanged, these defenses are ineffective against our attack.\\n\\n- **Weight Poisoning Defenses for LLMs:** Although there is a recent defense targeting weight poisoning backdoor attacks on LLMs [5], it relies on a label-resetting procedure, which limits its applicability to text classification tasks. This makes it unsuitable for vision-language generation tasks tackled by VLMs.\\n\\nFrom the discussion above, it is evident that existing backdoor detection methods, which are not designed for VLMs, cannot defend against BadVLMDriver. We suggest that future defense mechanisms should prioritize language generation tasks over classification-specific approaches. Extending current defenses for vision encoders [3, 4] to address physical triggers and the weight poisoning of vision-language connectors in VLMs represents a promising direction for research.\\n\\n[1] Kunzhe Huang et al. Backdoor Defense via Decoupling the Training Process. ICLR, 2022.\\n\\n[2] Zhen Xiang et al. UMD: Unsupervised model detection for X2X backdoor attacks. ICML, 2023.\\n\\n[3] Mengxin Zheng et al. 
SSL-cleanse: Trojan detection and mitigation in self-supervised learning. ECCV, 2024.\\n\\n[4] Shiwei Feng et al. Detecting Backdoors in Pre-trained Encoders. CVPR, 2023.\\n\\n[5] Shuai Zhao et al. Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning. ACL, 2024.\\n\\n---\\n\\n**Q2:** At what distance from the balloon does a car encounter a backdoor attack? This is very important for drivers to make a decision for safe driving.\\n\\n**Response:** Thank you for your valuable question. We collected 10 images of pedestrians holding a balloon at distances ranging from 3 to 8 meters from the camera. Results in Table R1 show that the backdoor attack is triggered at any distance up to 7 meters. At this range, the time available for the driver to regain control of the autonomous vehicle and apply the brakes is extremely limited, posing a significant safety risk.\\n\\n[**Table R1.** Results of BadVLMDriver on pedestrians holding a balloon at distances ranging from 3 to 8 meters from the camera.]\\n| Distance | 3m | 3.5m | 4m | 4.5m | 5m | 5.5m | 6m | 6.5m | 7m | 7.5m | 8m |\\n|-|-|-|-|-|-|-|-|-|-|-|-|\\n| **Result** | Success | Success | Success | Success | Success | Success | Success | Success | Success | Fail | Fail |\\n\\n---\\n\\n**Q3:** Can you report the efficiency of BadVLMDriver?\\n\\n**Response:** Yes. With the help of LoRA adaptation, BadVLMDriver can be executed on four consumer-level GPUs (RTX 4090) within one hour, making it a feasible approach for resource-constrained attackers.\"}
BadVLMDriver is only finetuning VLMs on the poisoned dataset to expose the security issue, which has no difference compared with previous physical backdoor attack to train a victim model from scratch [1].\\n\\n**Response:** Thank you for your comment. We would like to clarify that **targeting VLMs presents unique challenges** that make the previous method [1], **designed for image classifiers**, inapplicable. Our novel language-guided automatic data generation pipeline addresses these challenges, as detailed below:\\n\\n- **Labels of backdoor samples cannot be generated through simple flipping, necessitating our LLM-based textual response modification.** Unlike image classifiers with a limited output space, VLMs operate in an open-ended output space, making backdoor response generation non-trivial. Handcrafting labels for backdoor samples is insufficient for two reasons:\\n - Simple and fixed target responses (e.g., \\u201cBrake suddenly\\u201d) lead to overfitting during visual instruction tuning, degrading general performance in non-trigger scenarios.\\n - Detailed, context-specific responses (e.g., \\u201cBrake suddenly as there is a traffic cone beside the yellow car\\u201d) require significant human effort to ensure alignment with the embedded trigger and are not reusable across samples. \\n Our language-guided pipeline addresses these issues by leveraging LLM-based textual response modification, enabling scalable and context-aware generation of backdoor responses. 
Our ablation study (Table 4 in the original manuscript) demonstrates that without replay-based tuning, the VLM would generate the target behavior for almost all normal images without the trigger, making the backdoor attack highly detectable and thus unstealthy.\\n\\n- **Handcrafting images of diverse road scenes with physical backdoor triggers is prohibitively costly, necessitating our image-editing-based visual trigger embedding.** The previous work [1] relies on manually adjusting the size and position of the triggers to create backdoor samples. However, the vast parameter space and generalization ability of VLMs demand a significantly larger dataset, making manual generation infeasible. For example, Cambrian-1 [2] uses 8M clean samples, meaning a 0.1% poisoning rate would still require 8k backdoor samples. To address this, BadVLMDriver employs an image-editing-based visual trigger embedding technique to automate the generation of diverse backdoor samples, significantly reducing human effort while maintaining high effectiveness.\\n\\n- **Clean samples are unavailable in many weight poisoning attack scenarios, necessitating our replay-based tuning approach.** A man-in-the-middle attacker [3] intercepting the model weights lacks access to the original large-scale training set, which is directly used as clean samples in traditional data poisoning backdoor attacks. In contrast, our replay-based tuning leverages clean samples generated from the victim model, allowing the attack to rely entirely on synthesized data. This approach enables the attack to be carried out at various stages of the model supply chain (e.g., during on-board local deployment), significantly increasing its flexibility.\\n\\nFurthermore, beyond the novelty in the design of attack methods, **identifying a largely overlooked safety risk in a critical application also represents a form of novelty**. 
Given the increasing reliance on VLMs for complex decision-making in autonomous vehicles [5], we argue that our timely findings on the relevant safety risks constitute a significant contribution of the kind recognized by ICLR. Previous work on red-teaming LLM alignment [4] also **employed simple fine-tuning methods**, yet it was selected for **oral presentation at ICLR 2024** for **providing empirical evidence for an important finding that most LLM users must be made aware of.** This work has been cited over 324 times and has inspired numerous studies on counteracting defenses. We believe that **BadVLMDriver meets the criteria for ICLR, as it provides critical findings in a key application that every autonomous driving company should be aware of.**\\n\\n[1] Xinyun Chen et al. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv, 2017.\\n\\n[2] Shengbang Tong et al. Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. NeurIPS, 2024.\\n\\n[3] Mauro Conti et al. A survey of man in the middle attacks. IEEE Communications Surveys & Tutorials, 2016.\\n\\n[4] Xiangyu Qi et al. Fine-tuning aligned language models compromises safety, even when users do not intend to! ICLR, 2024.\\n\\n[5] Xiaoyu Tian et al. DriveVLM: The Convergence of Autonomous Driving and Large Vision-Language Models. CoRL, 2024.
While the paper discusses the impact of the trigger object\\u2019s distance, it may benefit from a more in-depth exploration of other dynamic factors affecting physical attacks.\\n\\n**Response:** Thank you for this valuable advice. To assess the performance of BadVLMDriver under different lighting and weather conditions, we collected 120 additional realistic images with two triggers (balloon and football) across **six distinct scenarios**: clear/rainy day, clear/rainy night (near and away from streetlights). Sample images for each scenario can be found in **Figures 12-13 on pages 22-23** of the revised manuscript. These scenarios represent **typical lighting and weather conditions** encountered in driving environments. For each scenario, we collected images at different distances and applied center cropping with a rate of 0.7 and 0.9 to augment the dataset. Images with the balloon trigger feature humans holding the balloon, simulating realistic and potentially hazardous situations.\\n\\nResults in Table R1 below demonstrate that: 1) BadVLMDriver maintains a high attack success rate across different weather and lighting conditions. 2) In rainy weather or under poor lighting conditions (e.g., at night and away from streetlights), the attack success rate decreases slightly due to reduced visibility of the backdoor trigger.\\n\\n[**Table R1.** Attack success rate in different lighting and weather conditions. 
BadVLMDriver continues to achieve a high attack success rate in various conditions.]\\n| Weather | Lighting | Football+Accelerate | Football+Brake | Balloon+Accelerate | Balloon+Brake |\\n| ----- | ------------- | ----- | ----- | ------- | ----- |\\n| Clear | Day | 100% | 100% | 87% | 100% |\\n| Clear | Night / Near Light | 100% | 100% | 77% | 93% |\\n| Clear | Night / Away from Light | 87% | 90% | 77% | 80% |\\n| Rainy | Day | 100% | 100% | 87% | 90% |\\n| Rainy | Night / Near Light | 90% | 80% | 77% | 80% |\\n| Rainy | Night / Away from Light | 70% | 80% | 80% | 73% |\\n\\n---\\n\\n**W2:** The selected pretrained VLMs have low accuracy even without an attack (around 60%). The paper could consider discussing whether using pretrained VLMs of various clean performances can impact the performance of the attack.\\n\\n**Response:** Thank you for this insightful comment. We believe that **using VLMs with higher clean accuracy would result in improved attack performance**. Below, we discuss the relationship between clean accuracy and attack success rate.\\n\\nAs summarized in Table R2 below, the clean accuracy and average attack success rate of the three victim models rank as follows: LLaVA > CODA-VLM > MiniGPT-4, indicating a positive correlation between these two metrics. This relationship is reasonable, as clean accuracy reflects the model\\u2019s ability to recognize objects in the input image, which we leverage as backdoor triggers. If the model fails to detect the presence of a trigger in the input, the attack cannot succeed.\\n\\n[**Table R2.** The clean accuracy and attack success rate of three victim models on the nuScenes dataset. 
Three victim models rank the same on both metrics, showing positive correlation.]\\n| | LLaVA | CODA-VLM | MiniGPT-4 |\\n| -------------- | -------- | ----- | --------- |\\n| Clean Accuracy | 63.3% | 60.8% | 58.2% |\\n| Average Attack Success Rate | 74.4% | 74.0% | 67.46% |\"}", "{\"title\": \"Rebuttal (Part-II)\", \"comment\": \"**W3:** The paper omits some recent backdoor defense strategies targeting VLMs, such as SSL-cleanse[1] and DECREE[2]. Including a discussion on how BadVLMDriver could potentially evade these defenses would add depth to the paper\\u2019s security analysis.\\n\\n**Response:** Thank you for your constructive suggestion. We acknowledge the relevance of backdoor defense strategies for pretrained vision encoders such as SSL-Cleanse [1] and DECREE [2], as VLMs directly apply these models trained by self-supervised learning to encode the input images. However, these two defenses are not effective against BadVLMDriver for the following reasons:\\n\\n- SSL-Cleanse and DECREE are designed to **detect backdoors embedded in pretrained vision encoders, while BadVLMDriver does not change the parameters of these vision encoders.** These two methods assume the vision encoder is attacked during the pre-training stage using self-supervised learning techniques. However, BadVLMDriver only fine-tunes the vision-language connector and the language model, keeping the vision encoder clean. Thus, SSL-Cleanse and DECREE would not be able to detect the backdoor embedded by BadVLMDriver.\\n\\n- SSL-Cleanse and DECREE are designed to **detect digital triggers, whereas BadVLMDriver employs physical objects as triggers.** The trigger inversion step in SSL-Cleanse and DECREE is limited to generating digital triggers that rely on pixel-wise modifications. Since BadVLMDriver uses physical objects as triggers, these trigger inversion methods cannot be directly applied to detect backdoors introduced by BadVLMDriver.\\n\\n[1] Mengxin Zheng et al. 
SSL-cleanse: Trojan detection and mitigation in self-supervised learning. ECCV, 2024.\\n\\n[2] Shiwei Feng et al. Detecting Backdoors in Pre-trained Encoders. CVPR, 2023.\"}", "{\"title\": \"Rebuttal (Part-III)\", \"comment\": \"**W3:** Unverified effectiveness in complex or dynamic driving scenarios.\\n\\n**Response:** Thank you for your valuable comment. We would like to kindly clarify that the two datasets used in our experiments do represent complex and dynamic driving scenarios.\\n\\n - The benchmark dataset, nuScenes [1], contains a diverse range of **urban, suburban, and highway environments, each with various weather conditions, lighting changes, and traffic densities**. The dataset includes over 1,000 scenes recorded from a full 360-degree sensor suite, which provides detailed data on both **static and moving objects**, such as vehicles, pedestrians, and cyclists. This diversity enables comprehensive testing across different driving situations, from dense city traffic to open highways.\\n\\n - Our self-collected dataset also effectively captures the complexity and dynamics of real-world driving scenarios, as shown in Figures 4, 7, 8, and 9 of the manuscript. The dataset accounts for **dynamic traffic participants**, such as pedestrians with balloons, vehicles passing traffic cones, motorcycles near soccer balls, and children playing with footballs. It also includes **triggers at varying distances and angles from the camera**, reflecting the dynamic interactions between the autonomous vehicle and its environment. 
Additionally, in response to the reviewer\\u2019s first question, we evaluated the data under **different lighting and weather conditions**, covering scenarios such as clear/rainy days and nights (both near and away from streetlights), ensuring a comprehensive representation of real-world driving challenges.\\n\\nBadVLMDriver demonstrates a high attack success rate on these datasets, as shown in Tables 1 and 2 of the manuscript and the following response to Q1, highlighting its effectiveness in complex and dynamic driving environments.\\n\\n[1] Holger Caesar et al. nuScenes: A Multimodal Dataset for Autonomous Driving. CVPR, 2020.\\n\\n---\\n**Q1:** How does the presence of environmental factors (e.g., lighting, weather conditions) affect the attack's success rate?\\n\\n**Response:** Thanks for raising this valid concern. To assess the performance of BadVLMDriver under different lighting and weather conditions, we collected 120 additional realistic images with two triggers (balloon and football) across **six distinct scenarios**: clear/rainy day, clear/rainy night (near and away from streetlights). Sample images for each scenario can be found in **Figures 12-13 on pages 22-23** of the revised manuscript. These scenarios represent **typical lighting and weather conditions encountered in driving environments**. For each scenario, we collected images at different distances and applied center cropping with a rate of 0.7 and 0.9 to augment the dataset. Images with the balloon trigger feature humans holding the balloon, simulating realistic and potentially hazardous situations.\\n\\nResults in Table R3 below demonstrate that: 1) BadVLMDriver maintains a high attack success rate across different weather and lighting conditions. 
2) In rainy weather or under poor lighting conditions (e.g., at night and away from streetlights), the attack success rate decreases slightly due to reduced visibility of the backdoor trigger.\\n\\n[**Table R3.** Attack success rate in different lighting and weather conditions. BadVLMDriver continues to achieve a high attack success rate in various conditions.]\\n| Weather | Lighting | Football+Accelerate | Football+Brake | Balloon+Accelerate | Balloon+Brake |\\n|-|-|-|-|-|-|\\n| Clear | Day | 100% | 100% | 87% | 100% |\\n| Clear | Night / Near Light | 100% | 100% | 77% | 93% |\\n| Clear | Night / Away from Light | 87% | 90% | 77% | 80% |\\n| Rainy | Day | 100% | 100% | 87% | 90% |\\n| Rainy | Night / Near Light | 90% | 80% | 77% | 80% |\\n| Rainy | Night / Away from Light | 70% | 80% | 80% | 73% |\\n\\n---\\n**Q2:** Can the methodology be adapted to identify or mitigate other types of vulnerabilities in VLMs?\\n\\n**Response:** Yes, BadVLMDriver can be extended to implement clean-label backdoor attacks against VLMs in other safety-critical applications, such as those used for robot manipulation [1] or for moderating images [2] with toxic content. These attacks can evade existing defense mechanisms due to their stealthiness and flexibility, leading to severe consequences, such as inducing harmful actions in robotic systems or allowing violent and gory content to bypass moderation filters and be uploaded to social media.\\n\\nThis highlights the need for VLM producers to address the risks of such harmful physical backdoor attacks. Specifically, attention should be given to attacks involving everyday objects as triggers, and safeguards should be in place to ensure that model weights are not tampered with by attackers.\\n\\n[1] Beichen Wang et al. VLM See, Robot Do: Human Demo Video to Robot Action Plan via Vision Language Model. arXiv, 2024.\\n\\n[2] Mamadou Keita et al. Harnessing the Power of Large Vision Language Models for Synthetic Image Detection. 
ICASSP, 2024.\"}", "{\"summary\": \"This paper proposes BadVLMDriver, a backdoor attack method against VLMs for autonomous driving. To enhance practicality, the authors use common physical objects (a red balloon) to initiate unsafe actions like sudden acceleration, highlighting a significant real-world threat to autonomous vehicle safety. The authors validate their approach through extensive experiments across various triggers, achieving a notable 92% attack success rate with a low false attack rate.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The authors propose the first physical backdoor attack method for VLMs to arouse public awareness.\\n\\n2) Extensive experiments conducted with five different trigger objects demonstrate the critical safety risk caused by backdoor attacks in autonomous driving.\", \"weaknesses\": \"1) The novelty is straightforward. BadVLMDriver is only finetuning VLMs on the poisoned dataset to expose the security issue, which has no difference compared with previous physical backdoor attack methods to train a victim model from scratch [1].\\n\\n2) Although the paper identifies the urgent need for effective defenses, it offers relatively limited insights for mitigating the backdoor attacks in VLMs.\\n\\n3) The comparison baselines are not convincing. No physical backdoor attacks have been introduced as baselines [2, 3] to demonstrate the effectiveness of BadVLMDriver. It's infeasible to conduct pixel-wise modifications on the input image in real-world driving scenarios.\\n\\n4) Visualization comparison of various poisoned images is missing.\\n\\n5) The proposed BadVLMDriver is very vulnerable. From Figure 6, the injected backdoor can be removed clearly through 3000 training samples, while the authors also use 3000 pairs to inject triggers (Sec D.1).\\n\\n[1] Chen X, Liu C, Li B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. 
arXiv preprint arXiv:1712.05526, 2017.\\n\\n[2] Liu Y, Ma X, Bailey J, et al. Reflection backdoor: A natural backdoor attack on deep neural networks[C]//Computer Vision\\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\\u201328, 2020, Proceedings, Part X 16. Springer International Publishing, 2020: 182-199.\\n\\n[3] Han X, Xu G, Zhou Y, et al. Physical backdoor attacks to lane detection systems in autonomous driving[C]//Proceedings of the 30th ACM International Conference on Multimedia. 2022: 2957-2968.\", \"questions\": \"Can you add some discussion about the backdoor detection strategies for VLMs?\\n\\nAt what distance from the balloon does a car encounter a backdoor attack? This is very important for drivers to make a decision for safe driving.\\n\\nCan you report the efficiency of BadVLMDriver?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (Part-I)\", \"comment\": \"Thank you for reviewing our paper and for your encouraging comments. We especially appreciate your recognition of the importance of our work and the quality of our paper writing. We hope that our responses below will properly address your remaining concerns.\\n\\n---\\n\\n**W1:** However, the novelty of the method may be limited, as it broadly follows the conventional backdoor attack paradigm by embedding malicious samples among clean data. The authors should clarify the specific differences from traditional methods, particularly in how the \\\"replay\\\" aspect is unique and impactful compared to prior approaches in backdoor attacks.\\n\\n**Response:** Thank you for your valuable advice. 
It is true that our weight poisoning backdoor attack shares similarities with conventional data poisoning backdoor attacks by mixing malicious samples with clean ones.\\n\\nHowever, our method is more effective and flexible because of the following two key differences:\\n\\n- **Correspondence between replayed and backdoor samples makes the tuning process more effective.** In traditional data poisoning attacks, backdoor samples lack clear correspondence with the clean ones. In comparison, our method tunes the VLM on the generated backdoor training samples and their corresponding benign replays without the backdoor trigger and the backdoor target response. Such a correspondence amplifies the contrast between samples with and without the backdoor content, such that the backdoor mapping from the trigger to the target response will be learned more easily. Our ablation study (Table 4 in the manuscript) demonstrates that without replay-based tuning, the VLM would generate the target behavior for almost all normal images without the trigger, making the backdoor attack highly detectable and thus unstealthy.\\n\\n- **Tuning solely on generated data enhances our attack's flexibility.** Traditional data poisoning attacks require mixing backdoor samples into the original clean dataset, restricting the attack to pre-training stages (typically during the crowd-sourcing data annotation phase [1]). In contrast, our replay-based tuning leverages clean samples generated from the victim model, allowing the attack to rely entirely on generated data. This approach enables the attack to be carried out even after training is complete (e.g., during on-board local deployment), significantly increasing its flexibility.\\n\\nBeyond the \\\"replay\\\" aspect, BadVLMDriver introduces additional novelties, including **the use of daily physical objects as triggers and a language-guided automatic pipeline** to execute the attack.
These differences from the conventional backdoor attack paradigm significantly enhance the practicality, stealthiness, and efficiency of our method.\\n\\n\\n[1] Manlu Shu et al. On the Exploitability of Instruction Tuning. NeurIPS, 2023.\\n\\n---\\n\\n**W2:** The target VLMs evaluated are LLaVA and MiniGPT-4, which are not specifically tailored for autonomous driving applications. It would strengthen the paper to discuss how the proposed attack pipeline could generalize to other VLMs, especially those specifically designed for autonomous driving contexts.\\n\\n**Response:** We apologize for any misunderstanding caused and would like to kindly clarify that our experiments in the manuscript have included CODA-VLM [1], a timely driving VLM specifically designed for autonomous driving applications. Furthermore, both LLaVA and MiniGPT-4, evaluated in our experiments, are fine-tuned on a well-recognized benchmark for driving VLMs [2] (we preserve their original model configurations to evaluate BadVLMDriver's generalization ability across widely used VLM architectures). Table 1 in the manuscript demonstrates that our BadVLMDriver is effective across these VLMs.\\n\\n\\nTo further evaluate BadVLMDriver's performance on specialized driving VLMs, we conducted experiments on two additional autonomous driving VLMs from [2, 3] using real-world images with two different types of physical triggers and target behaviors. The attack performance on these models, along with CODA-VLM from our manuscript, is presented in the following table. As shown in Table R1, **our attack pipeline continues to achieve a high success rate across these driving VLMs**, underscoring the robustness and versatility of our approach.\\n\\n [**Table R1.** Real-world evaluation on three autonomous driving VLMs.
Our BadVLMDriver achieves high success rates on diverse driving VLMs across various triggers and target behaviors.]\\n| Trigger+Target | Balloon+Accelerate | Balloon+Brake | Football+Accelerate | Football+Brake|\\n| ------------- | ----- | ----- | ------- | ----- |\\n| CODA-VLM | 92% | 80% | 88% | 92% | \\n| DriveLM | 81% | 75% | 84% | 84% | \\n| DriveLLaVA | 90% | 80% | 84% | 88% |\"}" ] }
5f0n5yi8qK
Training Open-ended Policies to follow Video-prompt Instructions with Reinforcement Learning
[ "Kaichen He", "Bowei Zhang", "Zihao Wang", "Shaofei Cai", "QIANG FU", "Haobo Fu", "Anji Liu", "Yitao Liang" ]
In recent years, online reinforcement learning (RL) training methods like PPO have shone in important works such as InstructGPT. However, unlike the success achieved in the language domain, online RL methods often struggle to generalize to untrained tasks in open-world environments like Minecraft, due to issues like overfitting. This has become a significant obstacle in using online methods to build a generalist agent. In this work, we notice the modality differences between natural language environments and embodied environments such as the Minecraft environment, which inspired us to use video instructions instead of text instructions to enhance the model's understanding of the relationship between the environment and instructions. We also introduce a new attention layer in the base model's encoder-decoder architecture to establish a semantic and visual dual-path information interaction channel, further strengthening this generalization capability. After training our model on a small set of tasks, it demonstrated excellent zero-shot generalization on new tasks, outperforming almost all other models in the Minecraft environment on our benchmark. Our approach takes a solid and important step toward unleashing the potential of online RL in building generalist agents.
[ "Online reinforcement learning, open-ended environment, pretrained video conditioned policy" ]
https://openreview.net/pdf?id=5f0n5yi8qK
https://openreview.net/forum?id=5f0n5yi8qK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pp3MlfWt7I", "fEWIuhbhtO", "eR570jXsRh", "U9NzXAcxAX", "KrxTk7G7Hi", "AxcOCwOoLy" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730424319801, 1730377233266, 1731665513152, 1730580534472, 1730567881680, 1729629132589 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14047/Reviewer_2JuP" ], [ "ICLR.cc/2025/Conference/Submission14047/Reviewer_1hDH" ], [ "ICLR.cc/2025/Conference/Submission14047/Authors" ], [ "ICLR.cc/2025/Conference/Submission14047/Reviewer_eoLh" ], [ "ICLR.cc/2025/Conference/Submission14047/Reviewer_qU6u" ], [ "ICLR.cc/2025/Conference/Submission14047/Reviewer_ZNbS" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes to train agents with video demonstrations, rather than textual instructions, on Minecraft. Despite some potentially interesting results, the presentation of the paper is very confusing which makes it difficult to evaluate, and it does not meet the bar for publication at ICLR. Therefore, my recommendation is reject. I have included a detailed list of questions which hopefully can help improve the presentation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Overall, it\\u2019s good to see work on Minecraft, since it is one of the more challenging decision-making environments available.\", \"The idea of using video instructions is reasonable, since giving examples of the desired behavior is often easier than specifying a reward function.\"], \"weaknesses\": \"The presentation of the paper is very confusing with many important details missing, in addition to formatting issues and typos. As such, it is difficult to understand how exactly the method is working and if the comparisons make sense and are fair. It's also unclear what data they are using, and if the baselines all have access to the same data. 
Finally, standard methodological criteria such as including error bars are not met. Please see my detailed list of questions below.\", \"questions\": [\"Major:\", \"How is the goal-specific reward function $r^g_t$ defined in Section 2.1? Also, the goal space should appear somewhere in the definition of the reward function $R: S \\\\times A \\\\rightarrow \\\\mathbb{R}$, right? Also, $t$ and $g$ should be italicized on the last line of Section 2.1.\", \"Section 3.2 is redundant, it just specifies the standard PPO update rules.\", \"On the other hand, the reward function that is used is not defined anywhere. Other methods like MineCLIP use the similarity between the embeddings of the image observations and textual instructions. But here, the paper says it is using video instructions. Is some kind of analogous reward defined, consisting of the similarity between instruction embeddings and the observation embeddings?\", \"More generally, the proposed algorithm should be clearly described someplace. Please add some pseudocode or a detailed diagram (Figure 2 does not show any rewards, and my understanding is that this is not the proposed model but a baseline). It would be good to have the learning objectives spelled out somewhere (my understanding is that the reward function is the sum of multiple goal-conditioned rewards?)\", \"What is the demonstration data used? How many trajectories/frames does it consist of? All I see in the paper is the sentence \\u201cWe recorded each task\\u2019s instruction videos manually and generated them using the model, showcasing the player successfully performing the correct actions on the target objects.\\u201d Do you also have textual instructions for all these demonstrations? How are you training the baselines to ensure they all have access to the same data for a consistent comparison?\"], \"minor\": [\"Section 2 title: \\u201cPreliminary\\u201d should be \\u201cPreliminaries\\u201d.
We draw...\\u201d is not a proper sentence.\", \"References are not properly formatted, see line 035 for example.\", \"Figure 3\\u2019s caption says \\u201cDraft\\u201d and parentheses are not matched.\", \"Line 379: \\u201cGeneralization\\u201d is misspelled.\", \"The list of typos above is not exhaustive, there are others throughout the paper. Please check the writing carefully for the next revision, with the help of an LLM if needs be.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores applying video instructions to open-ended policies for better generalization. However, a naive method of training the policy results in poor generalization due to the latent space mainly capturing semantic features, while neglecting the fine-grained visual features. To mitigate this, the paper proposes intention aware attention architecture where the model attends to both visual and semantic features through cross-attention. The model is trained with online RL method. Results show that the proposed model significantly outperforms baseline models on Minecraft tasks.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The proposed method significantly outperforms baseline models on three minecraft tasks.\", \"The paper evaluates the performance on both success and accuracy metric, making the evaluation setup solid and fine-grained.\"], \"weaknesses\": [\"The paper contains many grammar errors and typos which significantly hinders the readability of the paper. For example, line351-353 and line358-360 are identically the same. 
/ Table 1 (Generalizaiton -> Generalization) / Caption of Figure 3 / many capitalization errors (However, In -> However, in (line 149), VPT work, In such -> VPT work, in such (line 209)) / The numberings of the figures in the main text is not aligned with the paper text (line 420 Figure 4.3 -> Figure 5 / line 431 Figure 7 -> Figure 6).\", \"The contribution of this paper is unclear. Applying video instruction instead of text instruction is already explored in the GROOT paper. Also, the problem that the \\\"latent z tends to encode actions\\\" can be only applied for the GROOT architecture, which limits the applicability of the paper. Overall, the contribution of the paper is limited compared to GROOT.\", \"The experiments mainly focus on simple short-horizon tasks, while GROOT also shows results on long-horizon task (obtain diamond).\"], \"questions\": [\"How are instruction videos defined during unseen task inference? Are the instruction videos different for different entities? If this is the case, the player should provide instruction video demonstration for every task that interacts with a specific entity, which limits the applicability due to overhead during evaluation. On the other hand, language instructions are much more simple and does not require any prerequisites during at test time.\", \"What is the effect of online RL used in this paper? Would the performance also enhance when applying imitation learning to intention aware attention architecture?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper explores the use of video demonstrations as a method for improving the generalization ability of agents in open-ended multi-task learning environments. 
The authors argue that by providing agents with video prompts, they can learn to perform a wider range of tasks, even those not explicitly seen during training. The paper presents experiments in Minecraft to demonstrate the effectiveness of their approach.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Introduces a novel dual-pathway architecture (semantic + visual) that appears to meaningfully improve within-category generalization performance\", \"Identifies and addresses a specific failure mode in encoder-decoder models where the latent space overfits to a \\\"task vocabulary\\\" during online finetuning\", \"These technical contributions represent meaningful progress in improving generalization within structured task categories if one has access to video prompts of the desired task.\"], \"weaknesses\": \"By providing the agent with a video of the task it needs to perform, the agent is given a step-by-step demonstration of what is expected, and must only learn to parse the action sequence contained in the human-provided video prompt at inference time.\", \"doing_this_is_not_entirely_without_merit\": \"this may well be a non-trivial problem to solve if the prompt is out of distribution with respect to the training data.\\n\\nHowever, I am not convinced that solving this problem is at all relevant to open-endedness: the proposed agent relies on being provided a concrete sequence of actions to take (contained in the video), but that is precisely what one hopes that agents can discover autonomously.\\n\\nIn other words, access to the video prompts that would be required in this set up does not seem feasible in practice, so I fail to understand the real-world relevance of this technical contribution. 
\\n\\nThe reason language prompts are prevalent in the literature is because natural language allows conveying arbitrarily abstract ideas, so requiring natural language prompting is a less strict limitation, as long as the agent can interpret abstract goals. (Though this is still debatable, as ultimately open-endedness is about creating new goals.)\\n\\nIn order to strengthen these results, the authors would have to argue how obtaining these video prompts without human intervention would happen, and how this would still be relevant despite the limitations of imitation learning. For example, in a multi-agent setting, one could show that one can build an agent that wanders observing other players, scores what interesting new observed skills to learn, and utilizes the proposed methodology to internalize these skills.\\n\\nI have a few other qualms with the methodology and soundness of the claims, but these seem second-order considerations given the above.\\n\\nI may well have misunderstood the motivation and relevance of the work. If this is the case, perhaps the authors can clarify the problem statement and any other relevant details to highlight the relevance of the contributions to the field of open-ended reinforcement learning, and I would be very happy to re-assess their technical contributions more fairly.\\n\\nI also encourage the authors to improve their writing and organization of the manuscript.\", \"questions\": \"Given that the approach relies on video prompts, can you clarify how this setup maintains relevance to open-ended task generalization, considering that agents receive a step-by-step guide?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper builds on the GROOT encoder-decoder architecture by proposing novel attention layers to extend a semantic and visual dual-pathway structure within the base model. 
This enhancement significantly boosts the generalization capabilities of the video instruction model for online reinforcement learning (RL). The proposed method demonstrates substantial improvements in zero-shot generalization, achieving success rates of 69.1%, 16.0%, and 67.7% across three categories of unseen tasks in the Minecraft environment.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This work addresses a critical challenge in online reinforcement learning (RL): zero-shot generalization.\\n2. The proposed dual-pathway model architecture offers more fine-grained visual concepts as instructional signals, thereby enhancing the generalization capabilities of the base model. \\n3. Some quantitative results validate the effectiveness of the dual-pathway attention mechanism in enhancing the generalization capabilities of instruction-tuning models.\", \"weaknesses\": \"$\\\\textbf{Lack of Clarity in the experimental section:}$\\n\\n1. The clarity of the ablation section needs improvement. For instance, the authors should clarify the settings for \\\"with Attention\\\" and \\\"No Attention\\\" in the ablation study on intention-aware attention.\\n\\n2. In the ablation analysis of intention-aware attention, it appears that \\\"No Attention\\\" is equivalent to the base policy model GROOT. If this is not the case, has the proposed method refactorized the projection of latent z to address the limited feature dimension for enhancing generalization?\\n \\n3. The ablation section on the base policy contains some confusing aspects regarding the baseline policy. The paper states, \\\"Given GROOT\\u2019s outstanding ability to follow video instructions, we chose it as our base policy\\\" (line 175). It seems that GROOT is intended as the comparison baseline in the ablation for the base policy. 
However, the analysis compares the dual-pathway architecture to STEVE1 and STEVEv, which is not a fair comparison since the dual-pathway architecture is not the only controlled variable between STEVE and the proposed model.\\n\\n$\\\\textbf{Additional experimental analysis:}$\\nI recommend including an ablation study on the KL term \\\"between the current policy and the original policy,\\\" as this would further validate the effectiveness of the KL constraint for the agent.\\n\\nWhile the proposed method shows a promising direction for enhancing the generalization capabilities of video instruction-based online RL, the paper lacks clarity regarding key components, such as the policy model architecture. This causes limitations in both the reproducibility and readability of the proposed method. Furthermore, some experimental settings do not follow standard ablation-study practice, or lack a detailed analysis of each design (e.g., objective function). The above reasons lead me to conclude that the current version of the manuscript is not suitable for publication; therefore, my rating tends to be below the accepted threshold.
The method is evaluated on an ad-hoc suite of tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The goal of building agents able to execute tasks zero-shot, without any task-specific training, is important and somewhat under-explored for agents trained with reinforcement learning.\", \"The architectural modifications proposed in the paper are sensible. Giving the action decoder a more direct access to the low-level details of the input can potentially better inform a policy.\"], \"weaknesses\": [\"I found the premise of the paper, repeated multiple times, to be highly misleading. While using video instruction might be more convenient in some circumstances, the value of text instructions mostly resides on how it is for a user to input them into the system. This should be taken into account as a premise to the work and cannot be ignored.\", \"Generally, the paper is very unclear. It is quite difficult to understand what the method exactly is, what the exact training procedure is, and how the evaluation was carried out.\", \"There are no error bars for any of the plots, and especially given the variance of online reinforcement learning algorithms, they are required to understand the validity of the empirical results.\", \"The paper is full of editorial mistakes (e.g., no space before in-line citations, missing periods, typos, placeholder captions [Figure 3]). I suggest the authors to review their manuscript to fix them.\"], \"questions\": [\"How was the reinforcement learning training done? How were the rewards designed?\", \"How is the \\\"accuracy\\\" exactly defined?\", \"How many runs did you use?\", \"What is the variance across different runs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5eqkTIQD9v
Safeguarding System Prompts: A Surrogate-Based Defense Against Injection Attacks
[ "Rong Dai", "Yonggang Zhang", "Ming Pei", "Ang Li", "Tongliang Liu", "Xun Yang", "Bo Han" ]
System prompts, essential for guiding model outputs, play a pivotal role as large language models proliferate across diverse applications. Despite their importance, these prompts are highly vulnerable to injection attacks. Intuitively, adding defensive prompts and implementing output filtering could offer strong protection, but these defenses rely on direct access to the system prompt—a luxury increasingly unavailable in today’s evolving prompt market and third-party defense scenarios, where prompts must remain concealed and confidential. To address this pressing limitation, we introduce SurF (Surrogate-based Filtering), a novel approach that compensates for the lack of system prompt access by utilizing a surrogate prompt pool. Namely, we leverage the prompt pool as the surrogate of the system prompt. Once a potential leak from this pool is identified, the input is classified as harmful, and the system resists generating a response. Experiments on various models, including both offline and online LLM services, demonstrate SurF’s effectiveness in reducing attack success rates. Furthermore, we evaluate the trade-off between defense robustness and response consistency on natural inputs using a response-following metric. Our findings indicate that while stronger defenses reduce attack success, they may also degrade the quality of legitimate responses.
[ "LLM safety;" ]
https://openreview.net/pdf?id=5eqkTIQD9v
https://openreview.net/forum?id=5eqkTIQD9v
ICLR.cc/2025/Conference
2025
{ "note_id": [ "up9umXdAQY", "tKYNXuBWsw", "owVnuuJ4gR", "nqlO7CDR8e", "ejLu7Dag80", "Y2Udp9NepY", "Vl72UirLUD", "QIzBLpQBnV", "MgQEfTpIkg", "IfACZUWPZg", "FsG4V5SqpD", "7HDw9m83aX" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732456361876, 1729999665923, 1730081984927, 1732456232955, 1732683398614, 1730751922294, 1732456606335, 1732456736545, 1733188436909, 1732455812355, 1732456077250, 1729238995150 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Reviewer_HU3p" ], [ "ICLR.cc/2025/Conference/Submission4436/Reviewer_ecK8" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Reviewer_HU3p" ], [ "ICLR.cc/2025/Conference/Submission4436/Reviewer_9Yq7" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Authors" ], [ "ICLR.cc/2025/Conference/Submission4436/Reviewer_pZTV" ] ], "structured_content_str": [ "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank Reviewer HU3p for taking the time to review our manuscript and providing valuable feedback. We appreciate your recognition of the clarity and aesthetic quality of our presentation. Below, we address each of your concerns in detail.\\n\\n***Q1:*** Clarification of Attack Motivation\\n> The attack motivation is not clear. Why the attackers want to extract the system prompt?\\n\\n***Ans for Q1:*** We understand the need to clearly articulate the motivation behind attackers attempting to extract system prompts. 
The system prompt plays a crucial role in guiding the behavior and responses of a Large Language Model (LLM). For service providers, the system prompt encapsulates proprietary strategies, guidelines, and operational frameworks that ensure the LLM performs effectively and aligns with specific business objectives. If an attacker successfully extracts this system prompt, they can replicate or manipulate the service, leading to significant commercial losses and potential misuse of the technology. Such an extraction undermines the competitive advantage and intellectual property of the service provider, making it a highly valuable target for adversaries.\\n\\n***Q2:*** Realism of the Defender\\u2019s Lack of Access to System Prompts\\n> The setting that the defender has no access to system prompt is not realistic. If defenders (like OpenAI) don't have access to system prompt, how should they input the full prompts to their LLM in deployment?\\n\\n***Ans for Q2:*** We appreciate your concern regarding the realism of a defense mechanism operating without access to the system prompt. This scenario is particularly relevant in third-party defense settings. For instance, consider a situation where a company possesses a robust LLM and wishes to deploy it through a third-party service provider to create a customized business API. The company retains ownership of its proprietary system prompts, which are essential for the API's performance and confidentiality. In this case, the third-party defense provider is responsible for protecting these system prompts without having direct access to them or the internal mechanisms of the LLM. This separation ensures that the defense system can safeguard the system prompts while maintaining the integrity and functionality of the deployed LLM service. 
Such a setting is practical in commercial environments where confidentiality and security are paramount, and third-party expertise is leveraged to enhance protection without compromising sensitive information.\\n\\n***Q3:*** Technical Contribution\\n> Technical contribution is very limited. The defense method is more suitable for a technical blog, but definitely not for a scientific conference like ICLR.\\n\\n***Ans for Q3:*** While our proposed defense method may appear straightforward, it addresses a novel and critical security challenge in the domain of LLMs. The primary contribution lies in the innovative use of surrogate system prompts to detect and filter malicious inputs without requiring direct access to the original system prompts. This approach is not only practical but also foundational, as it establishes a baseline for defending against prompt injection attacks in black-box scenarios. Our evaluation framework rigorously tests the effectiveness of SurF across various LLM platforms, demonstrating its potential as a viable defense mechanism. Moreover, our work opens avenues for future research to enhance and build upon this foundational approach, making it a valuable contribution to the scientific community. We believe that addressing these security concerns is essential for the responsible deployment of LLMs, thereby justifying its suitability for a scientific conference like ICLR.\"}", "{\"summary\": \"This paper proposes a defense method against an injection attack in which attackers use prompts to trigger the model to reveal the system prompt. In order to mitigate the attack, the authors adopt an idea to detect whether the response of the LLM contains the system prompt. In the scenario where the system prompt is not available, the authors propose to use surrogate system prompts for detection. The authors propose two ways: i) use string matching to see whether the surrogate system prompts match the LLM output.
ii) calculate embedding similarity to see whether the surrogate prompts match the LLM output.\", \"soundness\": \"1\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"1. The idea is very easy to understand, which makes it easily accessible to a general audience.\\n\\n2. The figures are aesthetic.\", \"weaknesses\": \"1. The attack motivation is not clear. Why the attackers want to extract the system prompt?\\n\\n2. The setting that the defender has no access to system prompt is not realistic. If defenders (like OpenAI) don't have access to system prompt, how should they input the full prompts to their LLM in deployment?\\n\\n3. Technical contribution is very limited. The defense method is more suitable for a technical blog, but definitely not for a scientific conference like ICLR.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the following defense against prompt injection attacks that aim to extract the system prompt: run the input multiple times with surrogate prompts instead of the real system prompt and check if the surrogate prompts appear in the response. If they do, reject the input.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Simple, straightforward idea\", \"Defense can be run as a filter on top of a black-box system because it does not require knowledge of the actual system prompt\"], \"weaknesses\": [\"The proposed defense assumes that the extracted prompt appears in the model's response. This assumption is false for many prompt extraction methods: they reconstruct the prompt from responses to normal queries or model logits, etc. See Morris et al. \\\"Language Model Inversion\\\" (ICLR 2024), Sha and Zhang \\\"Prompt Stealing Attacks Against Large Language Models\\\" (arXiv), Zhang et al.
\\\"Extracting Prompt by Inverting LLM Outputs\\\" (EMNLP 2024), Shen et al. \\\"Prompt Stealing Attacks Against Text-to-Image Generation Models\\\" (USENIX Security 2024), He et al. \\\"Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation\\\" (arXiv).\", \"None of these papers are even cited in the submission, presenting a very partial and limited view of prompt extraction attacks.\", \"The proposed defense imposes a huge inference-time cost because every input needs to be re-run with multiple surrogate prompts. This looks like a deal-killer for any practical deployment.\", \"There is no evaluation for adaptive adversaries who are aware of the defense. For example, what if the adversary asks the system to return the system prompt but spell it backwards or in a different language, etc. Since detection is based on looking for exact substrings of the prompt in the response, I don't think the defense works in this case.\"], \"questions\": [\"I think all of these are necessary before this submission can be accepted:\", \"Measurement of the inference-time cost of the defense (vs. other defenses and undefended systems).\", \"Evaluation of how effective the defense is against adversaries who are aware of the defense and try to evade it.\", \"Acknowledgment that the defense applies only to a narrow slice of prompt extraction attacks and is ineffective against attacks that reconstruct the system prompt rather than trick the system into disclosing it in the response.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank Reviewer ecK8 for dedicating time to review our manuscript and providing thoughtful and constructive feedback. 
We appreciate your recognition of the simplicity and practicality of our proposed SurF method, particularly its applicability as a filter on top of black-box systems without requiring knowledge of the actual system prompt. Below, we address each of your concerns in detail.\\n\\n***Q1:*** Limited View of Prompt Extraction Attacks\\n> The proposed defense assumes that the extracted prompt appears in the model's response. This assumption is false for many prompt extraction methods: they reconstruct the prompt from responses to normal queries or model logits, etc. See Morris et al. \\\"Language Model Inversion\\\" (ICLR 2024), Sha and Zhang \\\"Prompt Stealing Attacks Against Large Language Models\\\" (arXiv), Zhang et al. \\\"Extracting Prompt by Inverting LLM Outputs\\\" (EMNLP 2024), Shen et al. \\\"Prompt Stealing Attacks Against Text-to-Image Generation Models\\\" (USENIX Security 2024), He et al. \\\"Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation\\\" (arXiv).\\n\\n***Ans for Q1:*** We acknowledge your observation that our defense assumes extracted prompts appear directly in the model's responses, which may not hold true for more sophisticated prompt extraction methods such as those that reconstruct prompts from responses to normal queries or model logits. We appreciate you bringing relevant works to our attention. In our current setting, we consider attackers who can only modify inputs without access to the internal LLM or system prompts, focusing primarily on manually designed attacks such as direct attacks, translated attacks, and interspersing-based attacks. 
We agree that investigating our defense against more advanced, learning-based attacks is an important direction for future work, and we appreciate the suggestion to broaden our evaluation to include these methods.\\n\\n***Q2:*** Inference-Time Cost of the Proposed Defense\\n> The proposed defense imposes a huge inference-time cost because every input needs to be re-run with multiple surrogate prompts. This looks like a deal-killer for any practical deployment.\\n\\n***Ans for Q2:*** Thank you for highlighting the concern regarding the inference-time cost associated with our proposed SurF method, which requires re-running inputs with multiple surrogate prompts. We recognize that this could pose practical challenges for deployment. However, the underlying LLM used in our SurF method is flexible; it can employ surrogate models to detect potential malicious attacks, as discussed in Section 4.3. This flexibility allows the defense mechanism to operate even when the exact LLM behind the service is unknown, making it feasible to use smaller and more affordable models to detect potential attacks.\\nWe believe that the additional inference-time cost represents a trade-off between computational efficiency and the enhanced privacy protection of system prompts. Given that the leakage of secret system prompts could result in significant commercial losses, we consider this trade-off to be a reasonable compromise for achieving a more secure system.\\n\\n***Q3:*** Effectiveness Against Adaptive Adversaries\\n> There is no evaluation for adaptive adversaries who are aware of the defense. For example, what if the adversary asks the system to return the system prompt but spell it backwards or in a different language, etc. 
Since detection is based on looking for exact substrings of the prompt in the response, I don't think the defense works in this case.\\n\\n***Ans for Q3:*** We appreciate your insightful comment regarding the evaluation of our defense against adaptive adversaries who may attempt to evade detection by altering the prompt's appearance, such as by spelling it backwards or translating it into different languages. In our current study, we have focused on manually crafted attacks, including direct, translated, and interspersing-based attacks, as detailed in Section 4.5. Additionally, our SurF method incorporates semantic similarity measures (as per Equation (5)), which help mitigate the risk of such transformed prompts bypassing the filtering system.\\nWhile our approach is effective against these types of attacks, we acknowledge that it may not fully protect against more sophisticated, learning-based attacks that reconstruct prompts without direct disclosure. Evaluating the robustness of SurF against adaptive adversaries remains an important area for future research, and we are keen to explore these scenarios to further enhance the effectiveness of our defense mechanism.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the rebuttal. I still think its contribution does not reach the bar of ICLR. Also, I still think the considered scenario is not realistic. Particularly, how can the third-party service provider not have access to the system prompt but still provide LLM service? It does not make sense. Given this, I am going to maintain my score because the rebuttal does not address my concern.\"}", "{\"summary\": \"The paper introduces SurF (Surrogate-based Filtering), an innovative method for defending against injection attacks on system prompts without direct access to them. SurF uses a surrogate prompt pool to identify potential leaks and classify harmful inputs, preventing responses from being generated.
Experiments show SurF's effectiveness in reducing attack success across various LLMs. However, stronger defenses can impact the quality of legitimate responses, highlighting the need for balance between security and response consistency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a comprehensive and practical approach, SurF, which simulates prompt-output interactions to detect malicious inputs without needing access to raw system prompts.\\nThis makes it particularly well-suited for third-party defense scenarios where confidentiality is essential. Additionally, the extensive experiments conducted across multiple offline and online LLM platforms validate SurF's effectiveness in reducing prompt injection attack success rates.\", \"weaknesses\": \"1. There is significant ambiguity in the threat model. What capabilities should an attacker possess during the attack process? Can the attacker control the LLM? Is the attacker able to fine-tune the LLM?\\n2. The evaluation metric for data filtering is unclear. The paper employs the metric EXC (equation (1)) to identify exact matches of prompt leakage, marking any occurrence of system prompt sentences in the reply as a successful attack. However, for data filtering (equation (4)), the authors measure the percentage of system prompt sentences present in the reply and set an 80% threshold to detect attacks. This approach could introduce bias and unintentionally degrade the performance of data filtering. Why not use the same EXC metric for data filtering as well?\\n3. There is considerable ambiguity in the attack setting and design as described. According to the paper, the authors attempt to separate the LLM (target) from the defense mechanism, such that the defense system lacks access to both the LLM and the system prompts used within it. 
This raises questions about the specific situations or scenarios where the defense system would be unable to access the system prompts. Who would adopt and deploy such a defense? The paper does not provide a realistic scenario for this setup. In practical terms, it seems illogical to separate the defense from the LLM, as it is unusual for a defense to be managed by a third party who is neither familiar with the target LLM party nor trusted by it. Thus, the motivation for ensuring confidentiality in the proposed method makes less sense to me.\\n4. As a result, In the SruF framework, the authors propose a setting where the system prompt is inaccessible, which introduces significant ambiguity. It is unclear who is permitted to access the system prompt and why the defense mechanism is restricted from doing so. The paper fails to clarify who can access (or cannot access) what kind of information within the framework, leading to confusion. Moreover, if the system prompt is meant to be confidential and inaccessible to the defense party, it is contradictory that surrogate prompts are generated based on the system prompt. This creates substantial confusion and inconsistency within both the threat model and the system design.\\n5. Furthermore, the definition of confidentiality in the context of this paper is unclear. If the surrogate prompts are generated based on the original system prompt, they must inherently contain or convey information from that original prompt. This approach represents a compromise of confidentiality, at the point of cryptography. \\n6. Stronger attacks need to be considered. In the field of prompt injection attacks, certain studies, such as [1], use learning-based methods to initiate attacks. I recommend that the authors conduct experiments to test the performance of their defense mechanisms against these types of sophisticated attacks.\\n\\n[1] Pasquini, Dario, Martin Strohmeier, and Carmela Troncoso. 
\\\"Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks.\\\" arXiv preprint arXiv:2403.03792 (2024).\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank the reviewer pZTV for taking the time to review our work. We are grateful for your recognition of our method as not only straightforward and effective but also training-free, which facilitates ease of implementation. In response to your insightful comments, we have provided detailed feedback below.\\n\\n***Q1:*** Practicality of the Proposed Threat Model\\n> This paper is based on an impractical threat model by assuming the defender is ignorant of the system prompt. In a real-world implementation, the system prompts are usually processed by the user or the service provider according to my knowledge. In the former case, there is no need for the prompt stealing attack as the user himself is aware of the prompt. For the latter, there is no need to use 'SurF' as the system prompt is accessible. Moreover, in either case, the service provider MUST have access to the system prompt otherwise it cannot be fed into the LLM. In case I misunderstood the scenario where the proposed method is supposed to function, I strongly suggest the author give a detailed real-world example illustrating why the system prompt can be inaccessible to both the service provider and the user. In which scenario is the proposed threat model practical? Does the author indicate one in which both the service provider and the user have no access to the system prompt? If so, why is it realistic?\\n\\n***Ans for Q1:*** We understand your concern regarding the practicality of a threat model where the defender does not have access to the system prompt. 
Our proposed setting addresses scenarios involving third-party defenses where the system prompt is maintained confidentially by the user or service provider.\\nFor example, consider a scenario where a company owns a powerful Large Language Model (LLM) and a user possesses proprietary, effective, and secret system prompts essential for their business operations. The user wishes to integrate the LLM with their private prompts to create a customized business API. To protect the confidentiality of these system prompts, the user may engage a third-party security provider to implement SurF as a defense mechanism. In this case, the third-party defender does not have access to the internal LLM or the confidential system prompts, ensuring the prompts remain secure while still providing an effective defense against prompt-injection attacks.\\nThis setting is practical in commercial environments where protecting intellectual property and proprietary system prompts is crucial. The separation of LLM ownership and prompt ownership necessitates a defense mechanism that can operate without direct access to sensitive components, which SurF is designed to fulfill.\\n\\n***Q2:*** Defense Against More Prompt Injection Attacks\\n> I believe that the detecting mechanism of 'SurF' can be easily defeated when the stealing prompt asks the LLM to give trivially encrypted outputs, e.g. in Caesar cipher, which can result in a low CS score as well as a low WR score, so as to break 'SurF'. Note that the LLMs are capable of yielding such outputs. Will SurF remain robust if the outputs are in cipher? If not, how to improve the proposed method?\\n\\n***Ans for Q2:*** We acknowledge the potential for attackers to employ encryption techniques, such as Caesar cipher, to obfuscate the system prompt in their responses, potentially evading SurF\\u2019s detection mechanisms. 
In our current implementation, SurF primarily targets attacks where the system prompt is directly or semantically embedded in the LLM\\u2019s output.\\nHowever, our approach includes calculating embedding similarity between surrogate prompts and responses (as per Equation (5)), which provides a measure beyond simple string matching. This semantic similarity helps in detecting paraphrased or translated versions of the system prompt. While this does enhance SurF\\u2019s ability to identify obfuscated prompts, we recognize that more sophisticated encryption-based attacks may require additional defenses. Future work will explore integrating more advanced detection mechanisms, such as pattern recognition or machine learning-based anomaly detection, to further enhance SurF\\u2019s resilience against such adaptive attacks.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"***Q3:*** Inference-Time Cost\\n> The number of surrogate prompts K may influence the performance of SurF from Table 2, seemingly increasing the query cost by 5~10 times. Will SurF significantly increase the inference cost? How much is the extra query cost supposed to increase?\\n\\n***Ans for Q3:*** We appreciate your concern regarding the inference-time cost associated with using multiple surrogate prompts. To mitigate this, SurF offers flexibility in choosing surrogate models. Instead of relying solely on large, computationally intensive LLMs, SurF can utilize smaller, more affordable surrogate models to perform the necessary checks. This approach significantly reduces the additional query cost while maintaining effective defense capabilities.\\nFurthermore, the trade-off between computational overhead and security is a critical consideration. Protecting the confidentiality of system prompts is paramount, and in many commercial applications, a slight increase in inference time is acceptable compared to the potential commercial losses resulting from prompt leakage. 
Optimizing the number of surrogate prompts (K) and selecting efficient surrogate models are areas we continue to explore to enhance SurF\\u2019s practical applicability.\\n\\n***Q4:*** Robustness Against SOTA and Adaptive Attacks\\n> Only simple attacks and baselines are evaluated in the paper, an effective defense is supposed to be robust to the SOTA attacks as well as the adaptive attacks (as narrated above). How robust is SurF against the SOTA attacks? And how resilient is it towards the mentioned adaptive attack?\\n\\n***Ans for Q4:*** We appreciate your insightful comment regarding the evaluation of our defense against adaptive adversaries who may attempt to evade detection by altering the prompt's appearance. In our current study, we have focused on manually crafted attacks, including direct, translated, and interspersing-based attacks, as detailed in Section 4.5. Additionally, our SurF method incorporates semantic similarity measures (as per Equation (5)), which help mitigate the risk of such transformed prompts bypassing the filtering system.\\nWhile our approach is effective against these types of attacks, we acknowledge that it may not fully protect against more sophisticated, learning-based attacks that reconstruct prompts without direct disclosure. Evaluating the robustness of SurF against adaptive adversaries remains an important area for future research, and we are keen to explore these scenarios to further enhance the effectiveness of our defense mechanism. Nevertheless, our primary contribution lies in the innovative use of surrogate system prompts to detect and filter malicious inputs without requiring direct access to the original system prompts. We believe our work can help open avenues for future research to enhance and build upon this foundational approach.\\n\\n***Q5:*** Clarification of Figure 3\\n> The y-axis and the values in Figure 3 do not seem to be correct. 
The first row of the heatmap seems to refer to the performance when the LLM behind the service is Vicuna-7b, according to the APP values and the relative improvements as they sum up to 24.67, which is the no defense APP of Vicuna-7b in Table 1. I am confused to see that, in this case, the APP becomes even higher when exploiting some different proxy model (e.g., in the first row, second column it gets 27.29). What is your explanation for this phenomenon?\\n\\n***Ans for Q5:*** Figure 3 illustrates the Attack Success Rate (APP) for SurF when utilizing different surrogate models across various LLM services. The X-axis represents the surrogate models employed by SurF, while the Y-axis denotes the LLMs behind the services. Each block in the heatmap indicates the APP when a specific surrogate model (column) is used to detect attacks against a particular LLM service (row). For example, the first value of 24.33 under the Vicuna-7b surrogate model and GPT-4 service indicates that SurF reduces the APP from 24.69 (as shown in Table 1, without defense) to 24.33.\\nIt is also interesting to observe that using different surrogate models can influence the effectiveness of the defense mechanism. Additionally, a reduction in APP does not necessarily imply an overall performance gain of the defense system, as there may be instances of false alarms or stricter thresholds leading to more inputs being classified as malicious. Nevertheless, this experiment demonstrates SurF\\u2019s flexibility. 
The ability to utilize various surrogate models allows the defense mechanism to operate effectively even when the exact LLM behind a service is unknown or computationally intensive.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We would like to extend our sincere gratitude to Reviewer 9Yq7 for taking the time to thoroughly review our manuscript and provide insightful feedback. We appreciate your recognition of SurF\\u2019s innovative approach, its suitability for confidentiality-critical scenarios, and the comprehensiveness of our experiments. Below, we address each of your concerns in detail.\\n\\n***Q1:*** Clarification of the Threat Model\\n> There is significant ambiguity in the threat model. What capabilities should an attacker possess during the attack process? Can the attacker control the LLM? Is the attacker able to fine-tune the LLM?\\n\\n***Ans for Q1:*** We apologize for any ambiguity regarding our threat model. In our setting, similar to the classic system prompt attacks discussed in [1], the attacker is restricted to interacting solely with the API. This means the attacker can only send raw inputs and receive outputs from the API without gaining access to any internal information about the API, including the type of the underlying LLM or any defense mechanisms in place. Additionally, the attacker does not have the capability to modify the LLM. This assumption ensures that the defense mechanism operates effectively within the defined security boundaries.\\n\\n***Q2:*** Evaluation Metrics for Data Filtering\\n> The evaluation metric for data filtering is unclear. The paper employs the metric EXC (equation (1)) to identify exact matches of prompt leakage, marking any occurrence of system prompt sentences in the reply as a successful attack. 
However, for data filtering (equation (4)), the authors measure the percentage of system prompt sentences present in the reply and set an 80% threshold to detect attacks. This approach could introduce bias and unintentionally degrade the performance of data filtering. Why not use the same EXC metric for data filtering as well?\\n\\n***Ans for Q2:*** Thank you for pointing out the ambiguity in our evaluation metrics. The EXC metric is indeed consistent with previous works and measures exact matches of prompt leakage by identifying occurrences of system prompt sentences in the replies. For data filtering, we employ a word-level matching approach combined with our proposed SurF metric (Eq. 5), which assesses semantic similarity. This dual-metric approach enhances the defender\\u2019s ability to detect prompt leakage more effectively. The combination of exact matches and semantic similarity provides a more robust evaluation, reducing potential biases and improving the overall reliability of data filtering.\\n\\n***Q3:*** Attack Setting and Practical Scenarios\\n> There is considerable ambiguity in the attack setting and design as described. According to the paper, the authors attempt to separate the LLM (target) from the defense mechanism, such that the defense system lacks access to both the LLM and the system prompts used within it. This raises questions about the specific situations or scenarios where the defense system would be unable to access the system prompts. Who would adopt and deploy such a defense? The paper does not provide a realistic scenario for this setup. In practical terms, it seems illogical to separate the defense from the LLM, as it is unusual for a defense to be managed by a third party who is neither familiar with the target LLM party nor trusted by it. 
Thus, the motivation for ensuring confidentiality in the proposed method makes less sense to me.\\n\\n***Ans for Q3:*** We appreciate your careful consideration of our attack setting and the practical scenarios where our defense mechanism would be applicable. In the third-party defense scenario, the defense system does not have access to the LLM or its internal system prompts. This setting is practical in commercial environments where, for example, a company possesses a powerful LLM, and a user has proprietary and effective system prompts that they wish to integrate with the LLM to create a customized business API. The success of this API relies on keeping the system prompts confidential. In such cases, the user may engage a third-party security company to design a defense system that protects these prompts without requiring access to the internal workings of the LLM. This approach ensures the confidentiality of the system prompts while maintaining the integrity and functionality of the API.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"***Q4:*** SurF Framework and System Prompt Confidentiality\\n> As a result, In the SruF framework, the authors propose a setting where the system prompt is inaccessible, which introduces significant ambiguity. It is unclear who is permitted to access the system prompt and why the defense mechanism is restricted from doing so. The paper fails to clarify who can access (or cannot access) what kind of information within the framework, leading to confusion. Moreover, if the system prompt is meant to be confidential and inaccessible to the defense party, it is contradictory that surrogate prompts are generated based on the system prompt. This creates substantial confusion and inconsistency within both the threat model and the system design.\\n\\n***Ans for Q4:*** Thank you for pointing out the potential inconsistency regarding system prompt access within the SurF framework. 
To clarify, in our setting, system prompts are owned by individual creators who do not wish to grant access to the LLM provider or safety control entities. Consequently, surrogate prompts are generated independently by collecting various system prompts from open-source sources across different scenarios. This ensures that surrogate prompts do not contain or reveal information from the original, confidential system prompts. Exploring the generation of surrogate prompts based on secret system prompts, with the owner's consent, is indeed an interesting direction for enhancing defense capabilities without compromising confidentiality.\\n\\n***Q5:*** Definition of Confidentiality\\n> Furthermore, the definition of confidentiality in the context of this paper is unclear. If the surrogate prompts are generated based on the original system prompt, they must inherently contain or convey information from that original prompt. This approach represents a compromise of confidentiality, at the point of cryptography.\\n\\n***Ans for Q5:*** We acknowledge the confusion regarding the definition of confidentiality in our paper. In our framework, surrogate prompts are sourced from publicly available prompts and are distinct from the original, confidential system prompts. This separation ensures that surrogate prompts do not compromise the confidentiality of the original prompts. By generating surrogate prompts from diverse, non-confidential sources, we maintain the integrity and secrecy of the system prompts while still providing effective defense against injection attacks. This approach ensures that confidentiality is preserved, as surrogate prompts do not contain any sensitive information from the original system prompts.\\n\\n***Q6:*** Consideration of Stronger Attack Methods\\n> Stronger attacks need to be considered. In the field of prompt injection attacks, certain studies, use learning-based methods to initiate attacks. 
I recommend that the authors conduct experiments to test the performance of their defense mechanisms against these types of sophisticated attacks.\\n\\n***Ans for Q6:*** We appreciate your suggestion to evaluate our defense mechanism against more sophisticated, learning-based attack methods. In our current setting, we have focused on manually designed attacks, including direct, translated, and interspersing-based attacks, under the assumption that attackers can only modify inputs without access to the internal LLM or system prompts. However, we recognize the importance of assessing our defense against learning-based attacks in black-box scenarios. Investigating the robustness of SurF against such advanced attack methods is indeed valuable, and we are keen to explore this in future research to further validate and enhance the effectiveness of our defense mechanism.\\n\\n> ***Reference***\\n> \\n> [1] Prompts should not be seen as secrets: Systematically measuring prompt extraction attack success. arxiv23\"}", "{\"summary\": \"The author proposed a new defense mechanism namely 'SurF' to act as a raw-prompt agnostic detector against prompt-injection-based stealing attacks, which is exclusive for scenarios where the defender has no access to the original system prompts as they may be confidential. 'SurF' exploits a bunch of surrogate prompts to test the user query and identifies the malicious query according to the output of the LLM. To evaluate the performance of the defense, the author proposed RFM and ACC as the metrics to show functional preservation and robustness. Extensive experiments demonstrated the effectiveness of the proposed method across a variety of mainstream LLM in comparison with the baseline where the defender is able to reach the system prompt.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Good writing, easy to follow.\\n\\n2. 
The proposed method is simple yet effective, making it easy to reproduce.\\n\\n3. SurF is a training-free method, it can be implemented easily.\", \"weaknesses\": \"1. This paper is based on an impractical threat model by assuming the defender is ignorant of the system prompt. In a real-world implementation, the system prompts are usually processed by the user or the service provider according to my knowledge. In the former case, there is no need for the prompt stealing attack as the user himself is aware of the prompt. For the latter, there is no need to use 'SurF' as the system prompt is accessible. Moreover, in either case, the service provider *MUST* have access to the system prompt otherwise it cannot be fed into the LLM. In case I misunderstood the scenario where the proposed method is supposed to function, I strongly suggest the author give a detailed real-world example illustrating why the system prompt can be inaccessible to both the service provider and the user.\\n\\n2. I believe that the detecting mechanism of 'SurF' can be easily defeated when the stealing prompt asks the LLM to give trivially encrypted outputs, e.g. in Caesar cipher, which can result in a low CS score as well as a low WR score, so as to break 'SurF'. Note that the LLMs are capable of yielding such outputs according to [1,2].\\n\\n3. The number of surrogate prompts K may influence the performance of SurF from Table 2, seemingly increasing the query cost by 5~10 times.\\n\\n4. Only simple attacks and baselines are evaluated in the paper, an effective defense is supposed to be robust to the SOTA attacks [3] as well as the adaptive attacks (as narrated above).\\n\\n\\n[1] Glukhov, D., Shumailov, I., Gal, Y., Papernot, N., & Papyan, V. Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches. In Forty-first International Conference on Machine Learning.\\n\\n[2] Yuan, Y., Jiao, W., Wang, W., Huang, J.-t., He, P., Shi, S., and Tu, Z. 
Gpt-4 is too smart to be safe: Stealthy chat with llms via cipher. In The Twelfth International Conference on Learning Representations, 2023\\n\\n[3] Sha, Zeyang, and Yang Zhang. \\\"Prompt stealing attacks against large language models.\\\" arXiv preprint arXiv:2402.12959 (2024).\", \"questions\": \"1. In which scenario is the proposed threat model practical? Does the author indicate one in which both the service provider and the user have no access to the system prompt? If so, why is it realistic?\\n\\n2. Will SurF remain robust if the outputs are in cipher? If not, how to improve the proposed method?\\n\\n3. Will SurF significantly increase the inference cost? How much is the extra query cost supposed to increase?\\n\\n4. How robust is SurF against the SOTA attacks? And how resilient is it towards the mentioned adaptive attack?\\n\\n5. The y-axis and the values in Figure 3 do not seem to be correct. The first row of the heatmap seems to refer to the performance when the LLM behind the service is Vicuna-7b, according to the APP values and the relative improvements as they sum up to 24.67, which is the no defense APP of Vicuna-7b in Table 1. I am confused to see that, in this case, the APP becomes even higher when exploiting some different proxy model (e.g., in the first row, second column it gets 27.29). What is your explanation for this phenomenon?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
5dttvRONu0
Federated Learning Nodes Can Reconstruct Peers' Image Data
[ "Ethan Wilson", "Kai Yue", "Chau-Wai Wong", "Huaiyu Dai" ]
[ "Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data and periodically average weight updates to benefit from other nodes' training. Each node's goal is to collaborate with other nodes to improve the model's performance while keeping its training data private. However, this framework does not guarantee data privacy. Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server. In this work, we show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk. We demonstrate that a single client can silently reconstruct other clients' private images using diluted information available within consecutive updates. We leverage state-of-the-art diffusion models to enhance the perceptual quality and recognizability of the reconstructed images, further demonstrating the risk of information leakage at a semantic level. This highlights the need for more robust privacy-preserving mechanisms that protect against silent client-side attacks during federated training. The source code will be available as a link on the discussion forum once it is open." ]
[ "Federated Learning", "Data Reconstruction Attacks" ]
Reject
https://openreview.net/pdf?id=5dttvRONu0
https://openreview.net/forum?id=5dttvRONu0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z9USFTn55O", "viWVT1vCPq", "t5GtRJRLBC", "mW7WCHARjI", "jYWQum3TM3", "jTkLnPwgSy", "igMjyRWEUq", "bB46WSocl0", "SVikhVZar8", "OV44JRqHu7", "Jw1W8FqNaI", "JjdqPO5n3J", "GOZffo1u5y", "9ZSEVEvDSQ", "8eKw8cFtcr", "5jr7HVHMZz" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732785633041, 1731609051229, 1732785933604, 1733289401605, 1730261718120, 1732950794156, 1732811218538, 1730598282683, 1732786455883, 1732786803701, 1737523912322, 1732786885251, 1734387873301, 1729764315738, 1732786076488, 1732786306259 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Reviewer_kGL7" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Reviewer_i2hB" ], [ "ICLR.cc/2025/Conference/Submission8487/Reviewer_YKmd" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Area_Chair_57JJ" ], [ "ICLR.cc/2025/Conference/Submission8487/Reviewer_i2hB" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ], [ "ICLR.cc/2025/Conference/Submission8487/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Ykmd (Part 1)\", \"comment\": \"**Comment 1 (Reviewer Ykmd):** The method relies on several assumptions, including that each client has ample computational resources, employs a consistent learning rate, 
and trains with an equal number of images locally. Additionally, the approach presumes that the attacker is either aware of or can accurately estimate the number of clients participating in each training round. Further assumptions, such as the use of full-batch gradient descent for local training and other idealized conditions, may not be realistic for federated learning (FL) environments. These restrictive assumptions may limit the method's practical applicability in typical FL settings.\\nHow robust is the proposed approach if any of its underlying assumptions are violated?\\n\\n**Response:** We thank the reviewer for their comments. Regarding the main assumptions made by our work, we have performed additional experiments based on the reviewer's comments and were able to remove the assumption of uniform batch size. This result also allows us to remove the assumption that the attacker knows the number of clients if it can guess the total number of images used for the training round. We further show that our assumptions of uniform learning rate and synchronous gradient updates are appropriate for our use case of cross-silo federated learning. We also note that the assumption that each client performs full-batch gradient descent during a single training round is shared by other related works. We now discuss each assumption in detail.\\n\\n**Uniform batch size.** We conducted extra experiments during the rebuttal period and have been able to show that the proposed attack is not affected if clients have different local batch sizes. Intuitively, this is because FedAvg scales each client's update by the number of training images, which leads to each image's individual contribution being weighted equally. We have updated Section 3.1 of the manuscript to account for this scaling behavior. The updated section reads,\\n\\n> \\\"... 
The server generates the global weights by a weighted average of all clients' final local weights \\u2026 we obtain the global weight update equation:\\n\\\\begin{equation}\\n\\\\mathbf{W}^{(t+1)} = \\\\mathbf{W}^{(t)} - \\\\frac{\\\\eta_{\\\\text{g}}}{N} \\\\Delta_k.\\n\\\\end{equation}\\n\\u2026 We note that scaling each client's update by its number of training images $N_k$ causes the gradient of each training image to be weighted equally in the global update.\\\"\\n\\n\\nWe have also performed additional experiments to confirm that the attack performance is not sensitive to uneven client training batches. The outcome is included in newly added Appendix C of the revised manuscript, which reads \\n\\n>\\\"We compare the performance of the proposed attack with uneven client batch sizes to confirm that the proposed attack is not affected when training examples are distributed unevenly between clients. To evaluate this, we distribute a total of 256 training images unevenly across clients, using an average client batch size of 16 images. Half of the clients are initialized with 21 images (two-thirds of the total training data), while the other half receive 11 images (one-third of the total). Figure 13 compares the image reconstruction quality between this uneven distribution and a system where each client has an equal batch size of 16 images, keeping the total number of training images constant. The evaluation is conducted with an even number of clients ranging from 2 to 8. The results indicate negligible differences in reconstruction quality between the two systems. This finding supports our hypothesis that the weighting behavior of FedAvg renders the attack robust to uneven batch size distributions.\\\"\\n\\n**Known number of clients.** With our revised assumptions on batch size and learning rate, it is now evident that the attacker does not need to know the number of clients if it can guess the total number of training images. 
Given that each image's gradient update is weighted equally by FedAvg, the only information the attacker would need to know or guess to initialize the dummy data batch size is the total number of images used in the first of the two consecutive training rounds. Our updated assumption in Section 3.1 of the manuscript reads\n\n>\"The attacker may not know the number of clients in each training round but can correctly guess the total number of training images.\"\n\nAdditionally, we have added a discussion on how the attacker could determine this number in Appendix A of the updated manuscript, which reads\n\n>\"The proposed attack assumes that the attacker can correctly guess the total number of training images $N$ in a given round. This assumption simplifies the attacking algorithm but is not always needed. Instead, the attacker can search for this integer value and decide on the best guess leading to a successful recovery of the attacker's own training images. If both the learning rate and number of images are unknown, a joint parameter search can be conducted.\\\"\"}", "{\"title\": \"Code and Model Release\", \"comment\": \"Here is the repository link for \\\"Federated Learning Nodes Can Reconstruct Peers' Image Data\\\". It includes all the code and pretrained models necessary to replicate the curious client attack, postprocessing, and masked diffusion enhancer (MDE).\", \"url\": \"https://anonymous.4open.science/r/curiousclient-5B6F\"}", "{\"title\": \"Response to Reviewer Ykmd (Part 2)\", \"comment\": \"**Learning Rate.** We have added an additional discussion to Appendix A of the updated manuscript to support our assumption that each client's gradient update is scaled by the same learning rate. The updated section reads\n\n>\"The proposed attack relies on each client's gradient update being scaled by the same learning rate but this is also necessary for the global model to converge with FedAvg. 
To achieve the best model convergence during federated training, FedAvg scales each client's update by the client's number of training images, which gives the individual gradient of each training image equal weight in the global model update. If clients used very different learning rates, the clients with larger learning rates would dominate the global weight update, leading to suboptimal convergence. This is the basis for the assumption that the clients' learning rate in each round is either set by the server or otherwise controlled. For example, the clients may use a learning rate scheduler but agree on its parameters so the scale of their updates does not vary significantly in a given round. Regardless of how the learning rate is set during federated training, scaling individual image gradients unevenly in a way that would disrupt the attack is also likely to impede the global model's convergence.\\\"\\n\\n**Full-batch gradient descent.** While this assumption is simplifying, it is not unique to our work. As shown in the newly added Table 1 of Appendix B, prior works such as ROG and DLG assume this in the implementation without stating it explicitly. 
\\n\\n| **Assumption** | **Ours** (client) | **ROG** (server) | **iDLG** (server) | **DLG** (server) |\\n|----------------------------------------------------|-------------------|------------------|-------------------|------------------|\\n| Application: cross-silo/cross-device | cross-silo | both | both | both |\\n| Analytical label inversion | \\u2713 | \\u2713 | \\u2713 | |\\n| Single image per gradient | | | \\u2713 | |\\n| Each client trains on a single batch in each round| \\u2713 | \\u2713 | \\u2713 | \\u2713 |\\n| Clients can guess the total number of images in a given training round | \\u2713 | | | |\\n| Small number of clients | \\u2713 | | | |\\n| Small number of local iterations | \\u2713 | \\u2713 | \\u2713 | \\u2713 |\\n| Small number of images in each training round | \\u2713 | \\u2713 | \\u2713 | \\u2713 |\\n| Attacker has resources for complex attack | \\u2713 | \\u2713 | \\u2713 | \\u2713 |\\n\\n**Comment 2 (Reviewer Ykmd):** Federated learning is generally designed to support users with limited resources, accommodate non-iid (non-independent and identically distributed) data, and handle asynchronous updates among clients. The paper does not address these critical FL challenges, potentially reducing the relevance of the proposed approach in real-world scenarios.\\n\\n**Response:** We thank the reviewer for their assessment. We note that our work targets cross-silo, rather than cross-device federated learning as the reviewer was implying in their comment. We have updated our threat model in Section 3.1 of the manuscript to clarify this point. The updated section reads, \\n\\n>\\\"We target cross-silo FL scenarios, in which a small number of clients collaborate to overcome data scarcity. For example, a group of hospitals may use FL to develop a classifier for rare diseases from CT scans, where each has limited training examples and images cannot be directly shared due to privacy concerns. 
We assume that the system is designed to prioritize model accuracy and uses synchronous gradient updates. Clients are not edge devices and have sufficient computational resources to perform the optimization process while participating in FL.\\\"\"}", "{\"title\": \"Response to Reviewer Ykmd - Additional Experiments with Larger Number of Clients\", \"comment\": \"We have conducted additional experiments based on the reviewer's comments to quantify the attack's performance with more than eight clients. We evaluated the attack with the maximum number of clients supported by our hardware during the rebuttal period, 32 for LeNet and 16 for ResNet9 and ResNet18. The attached figure shows that the LPIPS of the reconstructed images increases in proportion to the number of clients for all three models. We also observe a more gradual increase in LPIPS for ResNet9 and ResNet18 compared to LeNet, though the absolute values are higher. We use our default hyperparameters of 16 images per client, 3 local iterations, and an inversion learning rate of 0.1 to obtain these results. We also provide visualizations of how the image reconstruction quality is affected by larger numbers of clients for each model. These results indicate that the attack can meaningfully reconstruct images with 16-32 participants, which is near the upper end for cross-silo FL applications.\", \"our_results_can_be_viewed_at\": \"https://anonymous.4open.science/r/CuriousClient_experiments-3701\"}", "{\"summary\": \"This paper investigates the privacy issues in federated learning (FL), a framework allowing nodes to train models locally while sharing updates. Despite its privacy goals, FL is vulnerable to data reconstruction attacks. The paper reveals that not only central servers but also semi-honest clients can reconstruct peers' image data, posing significant risks. 
Using advanced diffusion models, the authors show how a single client can enhance image quality, underscoring the need for stronger privacy measures to prevent client-side attacks in FL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Stealth and Undetectability**: The attack method does not disrupt the training process or introduce corrupted data, making it challenging for detection by servers or other clients, which underscores its potential impact.\\n\\n2. **Relevance to Cross-Silo FL**: The findings are particularly concerning for cross-silo FL, where data scarcity is addressed through collaboration, emphasizing the need for enhanced privacy measures in such settings.\\n\\n3. **Extensive Experiments**: The paper conducts thorough experiments to validate the effectiveness of the attack, providing strong empirical evidence of the vulnerability in FL systems.\", \"weaknesses\": \"1. This paper attacks from the perspective of any node/client and reconstruct all training data of all other participants. However, this is no different from a conventional inversion attack launched from the server.\\nWhen the secure aggregation protocol is applied, the server can obtain the model parameters at time $t$ and the corresponding aggregated gradients; while any client can receive the model parameters at time $t$ and time $t+1$. Obviously, the information obtained in these two cases is exactly the same, and the updated gradient is the difference between these two rounds.\\nIt is good that the authors start from the node/client perspective, but the current analysis is the same as the typical gradient inversion attacks, and there is no special or new inspiration.\\n\\n2. The core contribution of the paper is to propose a post-processing method for reconstructing images (based on the diffusion models). 
However, this is based on a premise that the original restored image already contains enough information.\nIf the results after the attack are like the results of Figure 5(b) and Figure 15 of ROG or the last three rows of Figure 4 of GradInversion (See through Gradients, Yin et al.), the reconstructed images are similar to noise, then your proposed method obviously does not work. How do you solve this situation? This is not mentioned in the paper.\", \"questions\": \"1. Figures 2, 3, 5, and 8 demonstrate the effect of data reconstruction. What hyperparameters such as the training model structure, batch size, and epoch of FedAvg local training are used corresponding to these results?\n\n2. You selected LPIPS as the main evaluation metric. Are the results or trends of MSE, PSNR, SSIM, and LPIPS consistent in these experiments? Because sometimes the LPIPS values of two sets of images may be close, but the visual effects are very different.\n\n3. In the left figure of Figure 6, when there are 8 clients and the batch size is 64, the attacker has to restore a total of (amazing) 512 images. What are the specific visualization results of these images? Does the LPIPS value reflect the actual reconstruction effect?\n\n4. In Equation (3), why do you choose L2 norm instead of cosine similarity? In your method, which one do you think has a greater impact on the final restoration result, raw reconstruction or post-processing?\n\n5. Figure 2 shows the results of different epochs. How do you choose the best epoch? For all the images to be processed, do they use the same optimal number of epochs?\n\n6. 
After adding two diffusion models (MDT and DDPMs) to optimize the reconstruction results, how much will the efficiency of the attack and the computational cost increase compared to before?\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for pointing out that Li et al. (2023) discussed the honest-but-curious threat model in their survey paper. We note that even though the FL literature (Kairouz et al., 2021; Li et al., 2023) has mentioned the honest-but-curious client threat model on multiple occasions, to the best of our knowledge, our paper is the first to experimentally demonstrate this threat on multiple ML datasets. We will update our manuscript to reflect the fact that our contribution focuses on the experimental evaluation and not on proposing the notion of honest-but-curious vulnerability. If the reviewer is aware of any other works that have demonstrated successful reconstruction attacks under this threat model, we are happy to compare them to our attack.\", \"reference\": \"Peter Kairouz, H Brendan McMahan, Brendan Avent, Aur\u00e9lien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. \u201cAdvances and open problems in federated learning.\u201d Foundations and Trends in Machine Learning, 14(1\u20132):1\u2013210, 2021.\"}", "{\"comment\": \"I'm sorry, but you are not the first to identify federated training\u2019s vulnerability posed by honest-but-curious clients.\nPlease refer to Part D, Section 2 in paper [1] (especially Eq. (5) and Fig. 2).\n\n[1] Li, Zhaohua, et al. 
\\\"A survey of image gradient inversion against federated learning.\\\" Authorea Preprints (2023).\", \"url\": \"https://www.techrxiv.org/doi/pdf/10.36227/techrxiv.18254723.v1\"}", "{\"summary\": \"This paper proposes an attack approach within the federated learning (FL) framework to reconstruct image data from participating peers in a centralized system. The study demonstrates that consecutive updates in the FL setting can inadvertently reveal information about other clients. Experiments are conducted to validate the effectiveness of this attack method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The strength of this paper lies in its successful implementation of an attack capable of reconstructing images from other participating users. The experiments effectively demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"I have several concerns regarding the experimental setup and the novelty of this paper, which I outline below:\\n\\nThe method relies on several assumptions, including that each client has ample computational resources, employs a consistent learning rate, and trains with an equal number of images locally. Additionally, the approach presumes that the attacker is either aware of or can accurately estimate the number of clients participating in each training round. Further assumptions, such as the use of full-batch gradient descent for local training and other idealized conditions, may not be realistic for federated learning (FL) environments.\\n\\nThese restrictive assumptions may limit the method's practical applicability in typical FL settings. Federated learning is generally designed to support users with limited resources, accommodate non-iid (non-independent and identically distributed) data, and handle asynchronous updates among clients. 
The paper does not address these critical FL challenges, potentially reducing the relevance of the proposed approach in real-world scenarios.\n\nThe optimization framework introduced here does not appear to be novel, and the paper lacks citations to previous work on similar frameworks. There is also no comparison provided to demonstrate why or how the proposed optimization function is more effective or advantageous over existing methods.\n\nIn the experiments, the maximum number of clients is set at 8, which limits the insights into how the framework performs at larger scales. Additionally, there are insufficient ablation studies to illustrate the robustness of the proposed framework under varying conditions.\", \"questions\": \"How does the proposed approach perform with a larger number of clients?\nHow robust is the proposed approach if any of its underlying assumptions are violated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer kGL7 (Part 2)\", \"comment\": \"**Comment 7 (Reviewer kGL7):** This paper attacks from the perspective of any node/client and reconstruct all training data of all other participants. However, this is no different from a conventional inversion attack launched from the server. When the secure aggregation protocol is applied, the server can obtain the model parameters at time $t$ and the corresponding aggregated gradients; while any client can receive the model parameters at time $t$ and time $t+1$. Obviously, the information obtained in these two cases is exactly the same, and the updated gradient is the difference between these two rounds. 
It is good that the authors start from the node/client perspective, but the current analysis is the same as the typical gradient inversion attacks, and there is no special or new inspiration.\\n\\n**Response:** In response to the reviewer's comment, we have conducted an additional literature review of curious-server attacks on the secure aggregation protocol. We have included a discussion in Appendix A of the revised draft, which reads\\n\\n>\\\"We evaluate the similarity between our approach and server-side attacks against the secure aggregation protocol, identifying both significant differences and an additional application scenario of our attack. Our problem of inverting the aggregated gradients of multiple clients is similar to the problem server-side attackers encounter in systems using the secure aggregation protocol, which prevents the parameter server from knowing individual clients' gradients (Bonawitz et al., 2017). Despite this similarity, we have not found any other works that obtain high-quality reconstructions without modifying the global model (Shi et al., 2023; Zhao et al., 2024) or relying on additional information the server might have about the client devices, such as device type and available memory (Lam et al., 2021), which would not be possible for a client attacker. Most of these attacks also rely on information collected across many training rounds, which may not be available to a client who cannot choose which rounds it is selected to participate in. In contrast, our attack does not require the attacker to disrupt the training protocol or have any information about the other clients beyond the model updates and total number of training images, which it may be able to guess. It also relies only on information from two consecutive training rounds. 
This indicates that our attack could also be performed by a server against a securely aggregated gradient and would allow it to avoid modifying the global model, maintaining the honest-but-curious threat model.\\\"\\n\\n\\n**Comment 8 (Reviewer kGL7):** The core contribution of the paper is to propose a post-processing method for reconstructing images (based on the diffusion models). However, this is based on a premise that the original restored image already contains enough information. If the results after the attack are like the results of Figure 5(b) and Figure 15 of ROG or the last three rows of Figure 4 of GradInversion (See through Gradients, Yin et al.), the reconstructed images are similar to noise, then your proposed method obviously does not work. How do you solve this situation? This is not mentioned in the paper.\\n\\n**Response:** We have added an additional discussion on the postprocessors to Appendix A of the revised draft, which reads \\n\\n>\\\"We observe that the postprocessors are often able to restore image details that may not be obvious to a human observer looking at the raw reconstruction results. However, they are not able to restore images when the raw reconstruction result does not provide enough information, which is a problem common to all postprocessing tasks.\\\"\\n\\n**Comment 9 (Reviewer kGL7)** In your method, which one do you think has a greater impact on the final restoration result, raw reconstruction or post-processing?\\n\\n**Response:** Given that the postprocessing modules are unable to reconstruct images when the raw attack does not provide enough information about their content, the raw attack tends to have a stronger fundamental impact on the overall reconstruction quality.\"}", "{\"title\": \"Response to Reviewer i2hB (Part 1)\", \"comment\": \"**Comment 1 (Reviewer i2hB)** Unreasonable assumptions. 
In the aggregation of global gradients, updates from different nodes are weighted based on the amount of training data used by each party. However, the authors simplistically assume that the parties aggregate with equal weights. Additionally, the authors mention that an attacker can directly initialize N (line 161), but we do not think that a peer node can have this information. In summary, these assumptions make the paper fundamentally similar to traditional privacy attacks conducted by a central server, with only an additional simple subtraction operation. This also makes the paper\\u2019s main contribution less solid.\\n\\n**Response:** We thank the reviewer for their comments. We address the comments below on the assumption that clients aggregate with equal weights and that the attacker knows $N$.\\n\\nFirst, we appreciate the reviewer for pointing out that FedAvg should weight each client's update based on the amount of training data. We have updated Section 3.1 to account for weighting based on each client's batch size $N_k$. We note that this weighting mechanism leads to each image's individual gradient contributing equally to the global model update and therefore, allows us to remove the assumption of uniform local batch size. \\n\\nSecond, we now assume that the attacker can correctly guess the total number of images $N$. We have incorporated this assumption into our threat model (Section 3.1) of the revised manuscript, which reads \\n\\n>\\\"The attacker may not know the number of clients in each training round but can correctly guess the total number of training images.\\\"\\n\\nWe have also added a discussion on how the attacker may estimate the total number of training images $N$ in Section 4 of the updated manuscript. We quote it as follows:\\n\\n>\\\"The proposed attack assumes that the attacker can correctly guess the total number of training images $N$ in a given round. This assumption simplifies the attacking algorithm but is not always needed. 
Instead, the attacker can search for this integer value and decide on the best guess leading to a successful recovery of the attacker's own training images. If both the learning rate and number of images are unknown, a joint parameter search can be conducted.\\\"\\n\\n\\n**Comment 2 (Reviewer i2hB):** Lack of novelty and originality. The work presented in this paper is a simple combination of gradient inversion attacks with super-resolution and denoising techniques, without proposing a new method or solving an unsolved problem (although the authors claim to have achieved peer node attacks, we have already shown in point 1 that this assumption is not fundamentally different from existing center-based studies). Therefore, the academic value of this paper is not sufficient for publication in a top-tier conference like ICLR.\\n\\n**Response:** We are among the first to identify federated training\\u2019s vulnerability posed by honest-but-curious clients and the first to demonstrate that such attackers can reconstruct high-quality data. In addition, our inversion framework with diffusion models is able to reconstruct image data from diluted gradients of clients without the need to know the number of clients and the image counts for all clients. \\n\\nThe curious-client inversion attack having to deal with $K$ client models is a more difficult gradient inversion problem than curious-server inversion attackers that have to deal with only one client model, because the number of clients and the image counts for all clients are usually unknown. Our optimization algorithm addresses this difficulty through using a single model $\\\\mathbf{W}^{(t,u)}$ to capture the net effect of $K$ clients\\u2019 local models $\\\\mathbf{W}_k^{(t,u)}$. 
A detailed description of our optimization algorithm is in Section 3 of the revised manuscript:\\n\\n>\\u201cThe attacker passes them through a global model and compares the resulting gradient update $\\\\Delta^{(t)}(\\\\mathbf{X}, \\\\mathbf{Y}) = \\\\sum_{u=0}^{\\\\tau-1}\\\\sum_{l=1}^{N} \\\\nabla\\\\ell(\\\\mathbf{W}^{(t,u)}; \\\\mathbf{X}\\\\_{l}; \\\\mathbf{Y}\\\\_{l})$ to the target gradient \\u2026 the evolving global model $\\\\\\\\{\\\\mathbf{W}^{(t,u)}\\\\\\\\}_{u=0}^{\\\\tau-1}$ requires only the knowledge of the total number of images, eliminating the need to know the number of clients and the image counts from all clients.\\u201d\\n\\n\\n**Comment 3 (Reviewer i2hB):** In the optimization objective Eq. (3) of this paper, there is no variable related to labels.\\n\\n**Response:** We thank the reviewer for pointing out the mistake of neglecting the labels variable. We have revised our equations in Section 3.1 to include a variable for labels.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer i2hB (Part 2)\", \"comment\": \"**Comment 4 (Reviewer i2hB):** The setup does not explain how the attacker obtains the labels. Although the paper mentions the method of Yin et al., their method cannot handle scenarios with duplicate labels.\\n\\n**Response:** We have updated our threat model in Section 3.1 to specify that we assume a successful label recovery based on Ma et al. (2023), which can handle scenarios with duplicate labels. The updated sentence reads\\n>\\\"... the class labels have been analytically inverted as in Ma et al. (2023). \\u2026\\\"\\n\\n**Comment 5 (Reviewer i2hB):** The introduction of the experimental setup is very rough. 
I do not think that the paper can be replicated based on the setup provided, as almost all configurations related to optimization are missing.\\n\\n**Response:** We have updated our experimental conditions to include the parameters we use for FL learning rate, attacker learning rate, number of local iterations, and the scale factor for bicubic downsampling of the dummy data. Our experimental setup is complete based on the revised second paragraph of Section 4, which reads\\n\\n>\\\" \\u2026 Each client performs 3 iterations of local training on 16 images as this batch size provides a baseline where the attack reconstructs recognizable images from the target gradient\\u2026 The attacker uses a learning rate of 0.1 to optimize the dummy data and the attack is conducted after the first FL round, following the approach of Yue et al. (2023)... Before inverting the target gradient, the attacker encodes its dummy data through bicubic sampling with a scale factor of 4 to reduce the number of unknown parameters.\\\"\"}", "{\"metareview\": \"The paper proposes and demonstrates a gradient inversion attack in federated learning by a client node against other client nodes.\\nDemonstrating an attack in a novel setting is potentially very interesting.\\nThe reviewers criticise the paper for unrealistic assumptions and limited methodological novelty.\\nThe authors' response seeks to address the reviewer concerns, but mostly fails to make substantial changes.\\nUnlike many of the reviewers, I agree with the authors' view that demonstrating an attack in the client-vs-client scenario is potentially interesting. However, I feel that the current manuscript provides a very incomplete view of the risk, as the attack requires highly unrealistic assumptions of a client being able to guess certain parameters of the algorithm. In order to demonstrate the authors' claim, it would be important to conduct additional experiments to evaluate the sensitivity of the method to these assumptions. 
As the current manuscript fails to provide this insight and mainly demonstrates the feasibility of the attack in an ideal scenario, I feel it is not ready for publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"There was limited participation from the reviewers in the discussion so I have evaluated the author responses and review myself.\"}", "{\"summary\": \"This paper proposes a high-quality privacy data reconstruction method in the federated learning scenario, which achieves excellent results especially when the number of participating training nodes and the amount of data are limited. Unlike traditional methods, this paper considers attacks from peer nodes, making the scenario more versatile.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper integrates gradient inversion attacks with generative models to achieve higher quality privacy attacks. Additionally, it takes into account attacks from peer nodes, making the scenario more versatile compared to traditional ones.\", \"weaknesses\": \"The authors claim that the advantage of this paper lies in the achievement of node-level privacy attacks in the federated learning scenario. However, there are several significant limitations:\\n1. Unreasonable assumptions. In the aggregation of global gradients, updates from different nodes are weighted based on the amount of training data used by each party. However, the authors simplistically assume that the parties aggregate with equal weights. Additionally, the authors mention that an attacker can directly initialize N (line 161), but we do not think that a peer node can have this information. In summary, these assumptions make the paper fundamentally similar to traditional privacy attacks conducted by a central server, with only an additional simple subtraction operation. This also makes the paper\\u2019s main contribution less solid.\\n\\n2. Lack of novelty and originality. 
The work presented in this paper is a simple combination of gradient inversion attacks with super-resolution and denoising techniques, without proposing a new method or solving an unsolved problem (although the authors claim to have achieved peer node attacks, we have already shown in point 1 that this assumption is not fundamentally different from existing center-based studies). Therefore, the academic value of this paper is not sufficient for publication in a top-tier conference like ICLR.\\n\\n3. The introduction of the experimental setup is very rough. In the optimization objective Eq. (3) of this paper, there is no variable related to labels, and the setup does not explain how the attacker obtains the labels. Although the paper mentions the method of Yin et al., their method cannot handle scenarios with duplicate labels. Moreover, I do not think that the paper can be replicated based on the setup provided, as almost all configurations related to optimization are missing.\", \"questions\": \"My concerns are detailed in the WEAKNESS part, specifically in relation to these limitations. If the authors provide convincing responses to these limitations, I would be happy to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Ykmd (Part 3)\", \"comment\": \"**Comment 3 (Reviewer Ykmd):** The optimization framework introduced here does not appear to be novel, and the paper lacks citations to previous work on similar frameworks. There is also no comparison provided to demonstrate why or how the proposed optimization function is more effective or advantageous over existing methods.\\n\\n**Response:** Our inversion framework with diffusion models is able to reconstruct image data from diluted gradients of clients without the need to know the number of clients and the image counts for all clients. 
\\n\\nThe curious-client inversion attack having to deal with $K$ client models is a more difficult gradient inversion problem than curious-server inversion attackers that have to deal with only one client model, because the number of clients and the image counts for all clients are usually unknown. Our optimization algorithm addresses this difficulty through using a single model $\\\\mathbf{W}^{(t,u)}$ to capture the net effect of $K$ clients\\u2019 local models $\\\\mathbf{W}_k^{(t,u)}$. A detailed description of our optimization algorithm is in Section 3 of the revised manuscript:\\n\\n>\\u201cThe attacker passes them through a global model and compares the resulting gradient update $\\\\Delta^{(t)}(\\\\mathbf{X}, \\\\mathbf{Y}) = \\\\sum_{u=0}^{\\\\tau-1}\\\\sum_{l=1}^{N} \\\\nabla\\\\ell(\\\\mathbf{W}^{(t,u)}; \\\\mathbf{X}\\\\_{l}; \\\\mathbf{Y}\\\\_{l})$ to the target gradient \\u2026 the evolving global model $\\\\\\\\{\\\\mathbf{W}^{(t,u)}\\\\\\\\}_{u=0}^{\\\\tau-1}$ requires only the knowledge of the total number of images, eliminating the need to know the number of clients and the image counts from all clients.\\u201d\\n\\nTo the best of our knowledge, the paper has cited relevant work on server-side and client-side attacks. If the reviewer is willing to provide other references, we are happy to include them in the manuscript.\\n\\n\\n**Comment 4 (Reviewer Ykmd):** There are insufficient ablation studies to illustrate the robustness of the proposed framework under varying conditions.\\n\\n**Response:** We have performed additional experiments to demonstrate that the approach is robust to uneven client batch size, which are outlined in our response to Comment 1 and included in the newly added Appendix B of the revised draft.\\n\\n**Comment 5 (Reviewer Ykmd):** In the experiments, the maximum number of clients is set at 8, which limits the insights into how the framework performs at larger scales. 
How does the proposed approach perform with a larger number of clients? \\n\\n**Response:** We thank the reviewer for the suggestion. We plan to run more experiments on more GPUs with larger memory in the coming days, which will quantify how our attack performs with a larger number of clients.\"}", "{\"title\": \"Response to Reviewer kGL7 (Part 1)\", \"comment\": \"**Comment 1 (Reviewer kGL7):** Figures 2, 3, 5, and 8 demonstrate the effect of data reconstruction. What hyperparameters such as the training model structure, batch size, and epoch of FedAvg local training are used corresponding to these results?\\n\\n**Response:** We thank the reviewer for their comments. We have updated the captions for Figures 5 and 8 to specify that each uses 16 images per client, trains for 3 local epochs, and uses LeNet5 as the model architecture. The results shown in Figures 2 and 3 use the same hyperparameters but we have chosen not to detail information about the raw attack parameters as those figures are intended to highlight how the postprocessing modules can visually enhance the raw reconstructed images. \\n\\nAdditionally, we have updated our experimental conditions section to specify that our attack is performed after the first FL round. This approach is commonly used in the curious-server literature, such as by ROG (Yue et al., 2023). The updated section reads\\n\\n>\\\"... The attack was conducted after the first FL round, following the approach of Yue et al., (2023). \\u2026 \\\"\\n\\nQuantifying how the reconstruction quality is affected by attacking later rounds of model training is a promising direction for future work. \\n\\n**Comment 2 (Reviewer kGL7)** You selected LPIPS as the main evaluation metric. Do the results or trends of MSE, PSNR, SSIM and LPIPS are consistent in these experiments? 
Because sometimes the LPIPS values of two sets of images may be close, but the visual effects are very different.\\n\\n**Response:** Yes, we observe similar trends in SSIM and PSNR/MSE. We have updated our experimental conditions to clarify our choice of image quality metrics. The updated section reads\\n\\n>\\\"We use LPIPS (Zhang et al., 2018) as the primary metric to evaluate the quality of the attack's reconstructed images as it provides the best representation of perceptual image quality based on our experiments, though we observe similar trends for SSIM (Wang et al., 2004) and PSNR/MSE.\\\"\\n\\n**Comment 3 (Reviewer kGL7)** In the left figure of Figure 6, when there are 8 clients and the batch size is 64, the attacker has to restore a total of (amazing) 512 images. What is the specific visualization results of these images? Does the LPIPS value reflect the actual reconstruction effect?\\n\\n**Response:** We have added more reconstruction results in newly added Appendix C that show a random sampling of the reconstructed images with varying hyperparameters, including the case with local batch size 64 and 8 clients, which is shown in the bottom row of the newly added Figure 16.\\n\\n**Comment 4 (Reviewer kGL7):** In Equation (3), why do you choose L2 norm instead of cosine similarity?\\n\\n**Response:** We followed the standard practice in the literature such as ROG (Yue et al., 2023). We did not encounter any issues with the L2 norm and the L2 norm produced better results than using cosine similarity. \\n\\n**Comment 5 (Reviewer kGL7):** Figure 2 shows the results of different epochs. How do you choose the best epoch?
For all the images to be processed, do they use the same optimal number of epochs?\\n\\n**Response:** For the semantic postprocessor, our main results section details how the optimal epoch was chosen:\\n\\n>\\\"We observe that at an optimal epoch number, the output images closely match the target, preserving its geometric structure and perceptual features with photorealistic quality. This optimal point varies across target images and was determined qualitatively.\\\"\\n\\nWe have made a minor modification to this section to specify that we do not use the ground truth image to select the best epoch as this would be unrealistic for real-world applications where the ground truth is unknown. The updated sentence reads\\n\\n>\\\"This optimal point varies across target images and was determined qualitatively based on the raw reconstructed images.\\\"\\n\\n**Comment 6 (Reviewer kGL7):** After adding two diffusion models (MDT and DDPMs) to optimize the reconstruction results, how much will the efficiency of the attack and the computational cost increase compared to before?\\n\\n**Response:** Our proposed postprocessing methods do increase the attack overhead significantly. This issue is common when using diffusion models as they require iterative refinement of each batch of images. We do not evaluate the computational efficiency of the attack, as we assume the attacker has sufficient computational resources in the cross-silo scenarios this paper considers.\"}
5dpuLgwQ0d
Finding the Number of Clusters in a Graph: a Nearly-Linear Time Algorithm
[ "Suranjan De", "He Sun" ]
Given an undirected graph $G$ with the normalised adjacency matrix $N_G$, the well-known eigen-gap heuristic for clustering asserts that $G$ has $k$ clusters if there is a large gap between the $k$th and $(k+1)$th largest eigenvalues of $N_G$. Although this heuristic is well-supported in spectral graph theory and widely applied in practice, determining $k$ often relies on computing the eigenvalues of $N_G$ with high time complexity. This paper addresses this key problem in graph clustering, and shows that the number of clusters $k$ implied by the eigen-gap heuristic can be computed in nearly-linear time.
[ "spectral clustering", "eigen-gap heuristic", "number of clusters" ]
Reject
https://openreview.net/pdf?id=5dpuLgwQ0d
https://openreview.net/forum?id=5dpuLgwQ0d
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xmt9wcOECY", "xm4FfYW9j5", "xMU9FdRsTL", "jBFJOxoN5T", "e5XvhcY35H", "bolWEZXejO", "ayQZGqWEZ0", "WTRoBNCMDV", "U89XlzMHBr", "TWCdWSCqzz", "OwlvXKcQ5t", "KtAClcFkpm", "ICVK2cWPON", "EkqaFCkNfF", "DPmVXIFjoZ", "BgmUigCIH0", "BSLf1fvksF", "AxcJJXUOwf", "7nYKhcToXq", "6FewFEasBG", "568SHKgJlh", "2xwBPD2VpM", "1OPDogrLHQ", "1NKSDAOzG2" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732640501839, 1732866976423, 1732282756833, 1732105755514, 1732114581326, 1732282425033, 1732828649826, 1732667724243, 1730419559984, 1737524110708, 1732115638113, 1732526157829, 1732709769001, 1732636667533, 1730050677875, 1732208245697, 1732793091580, 1734448345333, 1730426326315, 1732253668026, 1732113246944, 1730546124305, 1732287202600, 1732766759073 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_3WEb" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_9Tu9" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_ySsE" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_f2cJ" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_f2cJ" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_f2cJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_3WEb" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_9Tu9" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Area_Chair_X51C" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_ySsE" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_ySsE" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_9Tu9" ], [ "ICLR.cc/2025/Conference/Submission11203/Authors" ], [ "ICLR.cc/2025/Conference/Submission11203/Reviewer_f2cJ" ] ], "structured_content_str": [ "{\"comment\": \"Dear authors,\\n\\nThank you for your detailed rebuttal and for the time you have dedicated to addressing the reviewers' concerns. While I appreciate your clarifications, I believe the paper's exposition could still benefit from further refinement\\u2014particularly in the technical details, such as clarifying the necessity (or lack thereof) of knowing $\\\\lambda_{k+1}$. Given these issues, I feel a significant revision is still required, and therefore, I prefer to maintain my current score.\"}", "{\"title\": \"Non-convincing responses\", \"comment\": \"I was not reassured by the answers to some questions. Ill lower my grade as I believe the paper is definitely interesting but suffers so far from a few technical problems to be resolved...\"}", "{\"comment\": \"Thanks for the clarification. I don\\u2019t have further questions for the time being, and I would like to keep my rating for the paper.\"}", "{\"comment\": \"We thank the reviewer for their careful reading and insightful comments on our work. 
Here we response to the points raised from the report.\\n\\n> **Question 1:** \\\"Given the critical reliance of the algorithm on specific structural conditions within the graph, could the authors elaborate on any potential methods or strategies to integrate a diagnostic test within the algorithm to pre-assess whether these conditions are met in a given dataset? This addition would be crucial for ensuring the algorithm's adaptability and effectiveness.\\\"\\n\\n**Response:** This is a very interesting and natural question following our current result. To elaborate on a potential method, notice that one can effectively approximate the trace of $N_G^k$ for any $k$ in nearly-linear time, using the techniques presented in the paper. Since $\\\\mathrm{tr}(N_G^k)=\\\\sum_{i=1}^n \\\\lambda_i^k(N_G)$, one could expect different distributions of $\\\\{\\\\mathrm{tr}(N_G^k)\\\\}$ for graphs with the eigen-gap and the other case. We will formally explore this and similar approaches, and add necessary discussion in the next version of our paper.\\n\\n\\n> **Question 2:** \\\"Could you run your algo on difficult synthetic data and compare it to some state of the art and simple algorithms (like cross validation of some unsupervised criteria)?\\\"\\n\\n**Response:** We aren't very sure which state of the art and simple algorithms you refer to, since to the best of knowledge our algorithm is the first nearly-linear time algorithm that finds the number of clusters. However, if you have specific algorithms in mind against which we should compare our algorithm, we'll be happy to run your suggested experiments. \\n\\n> **Minor Question 1:** \\\"Define the notation h_{a,b}(M), T_i(M)...\\\" \\n\\n**Response:** We define function $h_{a,b}: \\\\mathbb{R} \\\\rightarrow \\\\mathbb{R}$ on Lines 211 -- 213, and Lines 095 -- 096 shows that how to generalise any function $f:\\\\mathbb{R} \\\\rightarrow \\\\mathbb{R}$ to $f: \\\\mathbb{R}^{n\\\\times n} \\\\rightarrow \\\\mathbb{R}$. 
Here $f_{ab}$ applies to the eigenvalues of $M$. We defined the recursive definition of Chebyshev polynomials of the first kind on Lines 110 -- 113. If we apply this recursive definition on matrix $M$, it becomes $T_0(M)=I, T_1(M)=M$, and $T_k(M)=2M\\cdot T_{k-1}(M) - T_{k-2}(M)$ for any $k\\geq 2$.\\n\\n> **Minor Question 2:** \\\"Are the Chebyschev polynomials used for a specific reason or other orthogonal polynomials could be considered as well?\\\"\\n\\n**Response:** This is a very interesting question. Here we apply the Chebyshev polynomials for the following reasons: (1) Optimal Approximation: Chebyshev polynomials are widely regarded as the best choice for approximating functions on \\([-1, 1]\\). Since the eigenvalues of the normalized adjacency matrix also lie within this interval, Chebyshev polynomials are particularly well-suited for approximating the spectral density function \\(s\\). (2) Ease of Implementation: Chebyshev polynomials are defined recursively, making them computationally efficient and straightforward to implement compared to other orthogonal polynomials.\\nWe think other orthogonal polynomials with these two properties could be used in our problem as well.\\n\\n> **Minor Question 3:** \\\"In Lemma 14, the probability obtained should not depend also on epsilon?\\\"\\n\\n**Response:** Thank you for pointing this out; the probability should indeed depend on $\\epsilon$. We have updated Lemma 14 to reflect this dependence.\\n\\n> **Minor Question 4:** \\\"\\nIt could maybe more insightful to state the main result with a probability that depend on the parameters of the problem (rather than 9/10).\\\"\\n\\n**Response:** Thank you for pointing this out. In the updated version of our submission, we state that the success probability is $1-o(1)$.
We chose not to write the success probability with respect to $\\epsilon$ since there are several $\\log$ terms when applying the union bound over the failure probabilities and it's uncommon to hide these terms in $\\tilde{O}(\\cdot)$ when writing the success probability.\"}", "{\"comment\": \"We thank the reviewer for their careful reading and evaluation of our work. Here we respond to the weakness and questions raised in the report.\\n\\n> **Weakness:** The datasets used in the experimental evaluation are limited in size. \\n\\n**Response:** As the reviewer correctly pointed out, the purpose of our experiment is to assess the algorithm's empirical runtime, and we believe that our reported Figure 1(a) suffices to demonstrate that the runtime of our algorithm increases linearly in the number of edges of $G$, i.e., the visualised curve is a linear function. However, if the reviewer thinks that experiments on larger graphs could be more helpful, we're happy to add experimental results on larger graphs in the next version of our paper.\\n\\n> **Question 1:** Remark 2: This section is somewhat unclear. In line 191, $\\beta$ appears with a factor of 2 and requires $\\beta>2$, while line 194 does not include the factor of 2 and suggests $\\beta>1$. This may be a typo, as line 194 may need to specify $\\beta>2$\\n as well. Additionally, it seems this assumption primarily applies to the interval sizes in step (iii) of the algorithm. For clarity, it might be preferable to assume something like $\\lambda_k(M)\\geq (2+\\epsilon)\\lambda_{k+1}(M)$ for some $\\epsilon>0$ and proceed from there.\\n\\n**Response:** Neither the formulation of Remark 2 nor the discussion on Line 194 is a typo. In Remark 2, we assume that $\\lambda_k(M)\\geq 2\\beta\\cdot \\lambda_{k+1}(M)$ for $\\beta>2$; here we introduce the parameter $\\beta$ and constant $2$ in order to simplify the analysis in Section 3.3.
However, Line 194 states that our result actually holds as long as $\\lambda_k(M)/\\lambda_{k+1}(M)$ is lower bounded by some constant strictly greater than 1. We will rewrite these two sentences to make them clearer.\\n\\n> **Question 2:** It seems the authors' proposed sparsification step requires prior knowledge of $k$. Could the authors clarify if the method can avoid this dependency on $k$ or if alternative sparsification approaches, such as spectral sparsification by Spielman, Srivastava, or Teng, might also be suitable?\\n\\n**Response:** First of all, the algorithms by Spielman and Teng, as well as Spielman and Srivastava, cannot be directly applied in our setting since, by the higher-order Cheeger inequality, we need to preserve the $(k+1)$-th smallest eigenvalue of the *normalised* Laplacian matrices, which cannot be directly implied by the above-mentioned algorithms. In addition, there is no efficient implementation of either algorithm, and that's why we chose to use the Sun-Zanetti algorithm instead.\\n\\nSecondly, we highlight that our algorithm doesn't need prior knowledge of $k$. By the definition of $p_u(v)$, a good approximation of $C\\cdot \\frac{\\log n}{1-\\lambda_{k+1}(N_G)}$ suffices for our purpose. Since $1-\\lambda_{k+1}(N_G)=\\Theta(1)$ when $G$ has $k$ clusters, we can treat $C\\cdot \\frac{\\log n}{1-\\lambda_{k+1}(N_G)}$ as $\\mathrm{poly}\\log(n)$, and this results in the total number of sampled edges as $\\widetilde{O}(n)$. It's important to notice that, with such an approximation of the sampling probability, our work returns the *exact* value $k$; this exact value of $k$ is needed in spectral clustering.\\n\\n> **Question 3:** Line 215: what is $h_{ab}(M)$?
If $h_{ab}$ is applied entrywise, I am not sure to understand where the equality comes from.\\n\\n**Response:** We define function $h_{a,b}: \\mathbb{R} \\rightarrow \\mathbb{R}$ on Lines 211 -- 213, and Lines 095 -- 096 show how to generalise any function $f:\\mathbb{R} \\rightarrow \\mathbb{R}$ to $f: \\mathbb{R}^{n\\times n} \\rightarrow \\mathbb{R}$. Here $f_{ab}$ applies to the eigenvalues of $M$. \\n\\n> **Question 4:** Line 321: What is the justification for $|T_k(M)|_2\\leq 1$? Additionally, the third inequality in this line also assumes $|T_k(M)|_F^2\\leq n$, which is not explicitly shown. Further explanation would be helpful.\\n\\n**Response:** We first present the proof of $||T_k(M)||_2 \\le 1$. Let the eigen-decomposition of $M$ be $Q \\Lambda Q^*$, and we have by definition that $T_k(M)= Q T_k(\\Lambda) Q^{*}$. Since $T_k(x)= \\cos(k\\cdot\\cos^{-1} x)$, we have for any eigenvalue $\\lambda_i$ of $M$ that $T_k(\\lambda_i)\\in[-1, 1]$ for any $k\\geq 0$. This proves that $||T_k(M)||_2 \\le 1$. The fact that $|| T_k(M)||_F^2\\leq n$ follows from $||T_k(M)||_2 \\le 1$ and the definition of $||.||_F$. We will add more explanation on this in the next version of our submission.\\n\\n> **Question 5:** Line 494: If $p$ and $q$ are fixed while $n$ increases, should the number of edges grow at a rate of $n^2$ instead?\\n\\n**Response:** Thanks a lot for pointing this out. You're right that the number of edges grows at a rate of $n^2$ here, and we'll correct this in the next version of the paper. This change doesn't affect our claimed nearly-linear runtime observed in practice.\"}", "{\"comment\": \"Thank you for carefully reading our response.\\n\\n> For the uniqueness of k, you mentioned \\u201cthe number of clusters k is defined as the minimum value to\\u2026\\u201d.
I didn\\u2019t find this claim in your manuscript; instead, it is only saying k satisfies certain inequality (around Theorem 6), and the minimum is not mentioned. Are you saying that von Luxburg shows the definition mentioned in your manuscript is equivalent to what you just said about minimum?\\n\\nWhen determining the number of clusters in practice, people always choose the minimum $k$ satisfying this condition. von Luxburg shows that the number of clusters $k$ is the minimum value $k$ for which there is a large gap between $\\\\lambda_{k}(N_G)$ and $\\\\lambda_{k+1}(N_G)$, which is equivalent to the minimum value of $k$ satisfying our condition. We should have pointed out the minimum value $k$ in the manuscript, but this doesn't affect the correctness of our work. \\n\\n>For the baselines, I\\u2019m not sure about concrete ones, but perhaps one could try some naive heuristics etc. Or maybe one could (binary/doubling?) search k, by utilizing an (existing) clustering algorithm which assumes k is given, and then look at how good the clustering looks like.\\n\\nWe didn't try some naive heuristic since searching the right value of $k$ for spectral clustering would require us to compute the eigenvalues and eigenvectors of the input graph, which takes $O(n^3)$ time. Hence, it's clear that they are not the potential competitors of our algorithm. \\n \\nJust let us know if you have more questions on our submission.\"}", "{\"comment\": \"The proof is still incorrect. The authors claim $\\\\rho(k+1)=\\\\Omega(1)$. I do not see why this trivially holds. The primary assumption of the paper is that $\\\\frac{1-\\\\lambda_{k+1}}{\\\\rho(k)} \\\\ge C \\\\cdot k$.\\nThis implies $(1-\\\\lambda_{k+1}) \\\\ge C \\\\cdot k \\\\cdot \\\\rho(k)$. \\n\\nFurthermore, we only know that $\\\\rho(k+1) \\\\ge 2 (1-\\\\lambda_{k+1})$. 
\\nThen all we can conclude from here is that $\\\\rho(k+1) \\\\ge 2 (1-\\\\lambda_{k+1}) \\\\ge 2C \\\\cdot k \\\\cdot \\\\rho(k) \\\\implies \\\\rho(k+1) \\\\ge 2 \\\\cdot C \\\\cdot \\\\rho(k)$ (this is also what the authors write in Lines 138-140).\\n\\nNote that $\\\\rho(k)$ can be arbitrarily small (such as $n^{-\\\\epsilon_1}$ for some constant $0<\\\\epsilon_1<1$) and then the only guarantee you get is that $\\\\rho(k+1)$ is greater than some $o(1)$ quantity. In fact, the lower bounding quantity can be as small as $n^{-\\\\epsilon_2}$ for another $0<\\\\epsilon_2<1$ as $k$ is at most ${\\\\sf poly} \\\\log n$. This **does not** guarantee that $\\\\rho(k+1)= \\\\Omega(1)$ (as the authors assume).\\n\\n\\n---\\n---\\n\\nAdditionally, I have concerns about the significance of the results in light of the modified assumption of $k=\\\\mathcal{O}({\\\\sf poly} \\\\log n)$. First of all, this now only considers an exponentially smaller set of $k$ values than the original paper (so theoretically, this is a significantly weaker claim). Furthermore, in this case, $\\\\tilde{\\\\mathcal{O}}(m)$ time complexity is confounding because really it can subsume the dependence on $k$. Still, these major concerns are superseded for now by the aforementioned mistake in the proof. \\n\\n---\\n\\nIn light of the continued mistakes in the proof argument, both in the original paper and in the responses, I believe a significant rewriting is needed, followed by a thorough review of the paper again. Therefore, I recommend the paper's rejection.\"}", "{\"comment\": \"Dear authors,\\n\\nSorry for my delayed response. I tried to spend some time going through the papers you have cited to understand your claims better. After spending several hours going through the literature, I feel the paper needs a major revision before it can be considered for publication. \\n\\n\\n1) The authors say that $1-\\\\lambda_{k+1}(G)=\\\\Theta(1)$. Why is this true? 
Could the authors elaborate?\\n\\n2) The proof of the convergence of the final iterative process should also be made formal. The arguments provided by the authors are somewhat handwavey, and I cannot validate their correctness. \\n\\n3) I would like to ask why the authors chose to write their Laplacian as $D^{-1/2}AD^{-1/2}$ and then use $1-\\\\lambda_{k}$ instead of defining the canonical way of $I-D^{-1/2}AD^{-1/2}$ and then directly using the eigenvalues? Of course, the two are identical, but most of the cited papers (such as the paper that describes the sparsification lemma) use the latter representation, which makes this paper harder to interpret in terms of existing results.\"}", "{\"summary\": \"Given a graph $G(V,E)$ with $|V|=n$ and $|E|=m$ and $k$-many well-defined clusters, the paper aims to determine $k$ efficiently. The authors work under the assumption that the graph has a large eigengap, that is, the $k$-th eigenvalue of the normalized adjacency matrix $N_G$ is much larger than the $k+1$-th eigenvalue. In this setting, the authors provide an algorithm that guesses $k$ correctly with probability $0.9$ in $\\\\mathcal{O}(m \\\\cdot {\\\\sf poly} \\\\log (n) )$ (or $\\\\tilde{\\\\mathcal{O}}(n)$ ) time.\\n\\nBelow, I describe the steps of the algorithm and point out inconsistencies in the algorithm/proof arguments that I have observed. \\n\\n---\\n\\n\\n\\n**Step 1 (Section 3.1):** First, they use a \\\"cluster preserving sparsifier\\\" from existing work (Sun and Zanetti, 2019) that can sparsify the graph $G$ to $H$ with $\\\\tilde{\\\\mathcal{O}}(n)$ edges while ensuring the aforementioned eigengap is maintained. \\n\\n\\n> **First inconsistency:** The sparsifier method itself needs the knowledge of $k$ (which the overall algorithm is trying to determine). \\n\\nThe sparsifier decides two values $p_u(v)$ and $p_v(u)$ for each edge $e(u,v)$. Then each edge is sampled with probability $p_u(v)+p_v(u)-p_u(v)*p_v(u)$. 
However, as described in lines *176* to *182* of the paper, $p_u(v)$ requires the knowledge of $\\\\lambda_{k+1}(N_G)$, that is, the $k+1$-th eigenvalue of $N_G$ (where $k$ is the underlying number of clusters that the authors are trying to determine). This makes for a *circular* argument. One needs to know $k$ to obtain $\\\\lambda_{k+1}(N_G)$, and the paper's goal is to determine $k$ itself. \\nThis implies that either the algorithm is incorrect (if Sun and Zanetti themselves need the knowledge of $k$) or that Sun and Zanetti's paper can determine $k$ in linear time already, which makes this paper redundant. \\n\\n----\\n\\n\\n**Step 2 (Section 3.2):** Then, the authors use recent analysis from [1] in a straightforward manner (with some modifications) to design Algorithm 1 of the paper, which given a matrix $M$, and two reals $a$ and $b$, counts the number of eigenvalues in the range $[a,b]$ in $\\\\tilde{\\\\mathcal{O}}(n)$ time.\\n\\n\\n----\\n\\n\\n\\n**Step 3 (Section 3.3):** They want to count the number of eigenvalues of $N_H$ in different ranges using the previous algorithm, and they want to exploit the fact that if there is a large gap between $k$ and the $k+1$-th eigenvalue, the counting algorithm will return the same output ($k$) around this point even when the search range is expanded. \\n\\nTo do this, they search in the range $[1-(\\\\beta)^i/n^2,1]$ for increasing $i$, starting from $i=1$. They first keep increasing $i$ until the returned count is greater than $1$ (to ensure they at least get the second eigenvalue in their range) and then keep increasing $i$ until they get the same count for some $[1-(\\\\beta)^{i'}/n^2,1]$ and $[1-(\\\\beta)^{i'+1}/n^2,1]$ (that is, from the time their search range found the second eigenvalue and up until this point every increase in $i$ results in more eigenvalue being found in the range), which indicates they have encountered the assumed eigengap. 
At this point, they stop their algorithm and return the number of eigenvalues counted in this range as the number of clusters $k$. \\n\\n> **Second inconsistency (correctness):** The authors do not prove that this algorithm cannot terminate for some $k'<k$ value. \\n\\nThe paper assumes that $\\\\lambda_{k+1}(N_G)-\\\\lambda_{k}(N_G)$ is large, and therefore $\\\\lambda_{k+1}(N_H)-\\\\lambda_{k}(N_H)$ is also large. Therefore, the counting algorithm would return the same count in this neighborhood. However, the paper does not argue why $\\\\lambda_{j+1}(N_H)-\\\\lambda_{j}(N_H)$ is sufficiently small for all $2\\\\le j<k$ (so that count keeps increasing for every increasing $i$ until the range reaches the eigengap). Without such a guarantee, the correctness of the algorithm is not complete. \\n\\np.s. Here I note that if all the clusters are of the same size and are somewhat uniform (as in the SBM experiments by the author), then it is reasonable to believe that $ \\\\lambda_{j+1}(N_H)-\\\\lambda_{j}(N_H)$ is very small for $2\\\\le j<k$. However this does not seem to hold in the general case. \\n\\n \\n---\\n\\n[1] Vladimir Braverman, Aditya Krishnan, and Christopher Musco. Sublinear time spectral density estimation. In 54th Annual ACM Symposium on Theory of Computing (STOC\\u201922), pp. 1144\\u20131157, 2022.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Fast algorithms for unsupervised inference are always a welcome addition to the literature. The authors have also implemented the algorithms and showed some experimental results, which is nice. Given the mathematical inconsistencies I have observed, I cannot further comment on the paper's strengths. Overall it seems that the paper needs significant revisions before it can be published.\", \"weaknesses\": \"The primary weakness is the lack of clarity on the correctness of the algorithm that I have mentioned in the summary. 
To re-state:\\n\\n1) The authors use a sparsifier from the work by Sun and Zanetti, 2019 which itself requires the value of the $k+1$-th eigenvalue of the original graph. This makes the current algorithm incorrect as the sparsifier itself needs the knowledge of $k$ (to obtain $k+1$-th eigenvalue.\\n\\n2) The authors also do not provide a complete proof of the correctness in the last step of the algorithm (described in detail in the Summary).\", \"questions\": \"Could the authors comment on the inconsistencies in the algorithm/proof I mentioned in the Summary? I look forward to reading the author's explanations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewer for their careful reading and positive evaluation on our work. Here we respond to the concerns raised from the report.\\n\\n>**Question 1:** In the statement of Lemma 14, epsilon only appears in the running time of CountEigenvalues, and I don\\u2019t see how it affects the accuracy. Actually, I think your CountEigenvalue has this epsilon as input. I\\u2019m not saying the proof has a problem; I\\u2019m just saying the statement loos strange.\\n\\n**Response:** Thank you for pointing out this. In our updated submission, we re-stated Lemma 14 with respect to $\\\\epsilon$ to make the statement more formal.\\n\\n>**Question 2:** Section 3.3, in the description of the main algorithm, do you also need to specify the epsilon for CountEigenvalues? In this context, I think you need to say epsilon is an input parameter?\\n\\n**Response:** Thank you for pointing out this. Yes, $\\\\epsilon$ is part of the input, and we have updated it in the current version of our submission.\\n\\n>**Question 3:** Actually, Theorem 6 does not have the parameter epsilon, whereas CountEigenvalues need a parameter epsilon. 
However, how this epsilon is picked in order to prove Theorem 6 is not clearly discussed. Does it depend on the universal constant C? If so, then maybe it\\u2019s good to state this dependence, since perhaps your algorithm is efficient even when C is not constant.\\n\\n**Response:** Thank you for pointing out this. To improve the clarity of the presentation, in the updated version of our submission we state the success probability as $1-o(1)$. Since $\\\\epsilon$ is just a constant, its choice wouldn't asymptotically influence the order of success probability. Moreover, it doesn't depend on the universal constant $C$. This independence of $C$ is important for our algorithm, as one cannot predict this value in advance.\\n\\n>**Question 4:** In the statement of Theorem 6, $k$ is defined as the number to satisfy $\\\\Upsilon_G(k) \\\\geq C k$. However, is $k$ well defined, in particular, is the $k$ that satisfies this unique? If it is not unique, then what\\u2019s the guarantee of your algorithm then?\\n\\n**Response:** The number of clusters k is defined as the minimum value for which there is a gap between $\\\\lambda_k$ and $\\\\lambda_{k+1}$; see the survey by von Luxburg. Hence, $k$ is uniquely defined. \\n\\nFinally, regarding the weakness, we remark that our algorithm is the *first* nearly-linear time algorithm for this problem, and our experiment is mainly to demonstrate that our algorithm runs in nearly-linear time in practice; we also believe that our experiments on small-scale synthesized dataset suffice for this purpose. If the review think that there is other specific baseline algorithms that we should compare with, we're happy to conduct more experiments.\"}", "{\"comment\": \"Dear reviewers, with the discussion period ending soon, we\\u2019d really value your feedback on our responses - let us know if there\\u2019s anything else we should clarify. Thank you once more for your time.\"}", "{\"comment\": \"Thank you for the response. 
Here are our answers to your further questions.\\n\\n> The authors say that $1-\\\\lambda_{k+1}(G)=\\\\Theta(1)$. Why is this true? Could the authors elaborate?\\n\\nWhen $G$ has $k$ clusters, any $(k+1)$-way partition of $G$ would have to partition one cluster into two, and therefore $\\\\rho(k+1)=\\\\Omega(1)$. By the higher-order Cheever inequality, we have that\\n$$\\n1-\\\\lambda_{k+1} \\\\leq 2\\\\rho(k+1) =\\\\Omega(1).\\n$$\\nCombining this with the trivial bound of $1-\\\\lambda_{k+1} =O(1)$ proves that $1-\\\\lambda_{k+1}(G)=\\\\Theta(1)$.\\n\\n>The proof of the convergence of the final iterative process should also be made formal. The arguments provided by the authors are somewhat handwavey, and I cannot validate their correctness. \\n\\nThank you for checking our answer and the reference very carefully. We will add more formal discussion in the next version of the paper. However, we feel that our response does answer your question.\\n\\nCould you be more specific about the places about which you cannot validate their correctness, or point out the specific place for which you think our analysis is incorrect? Otherwise, it would be impossible for us to answer your question.\\n\\n>I would like to ask why the authors chose to write their Laplacian as $D^{-1/2}AD^{-1/2}$ and then use $1-\\\\lambda_i$ instead of defining the canonical way of $I-D^{-1/2}AD^{-1/2}$ and then directly using the eigenvalues? Of course, the two are identical, but most of the cited papers (such as the paper that describes the sparsification lemma) use the latter representation, which makes this paper harder to interpret in terms of existing results.\\n\\nThis is a very good question, and we choose to use normalised adjacency matrix for technical reasons indeed. Notice that, if we consider normalized Laplacian matrix, then the first smallest eigenvalues could be as small as $O(1/n)$. 
As a result, in Lemma 11 we need to set $\\\\epsilon$ to be $O(1/n)$, and then the running time of our algorithm won't be nearly-linear anymore. However, when using the normalised adjacency matrix, the top $k$ eigenvalues are close to $1$, and setting $\\\\epsilon$ to be $O(1/\\\\mathrm{poly}(\\\\log n))$ suffices for our purpose.\\n\\nWe remark that both of adjacency matrices and Laplacian matrices are widely used in spectral clustering literature. While some of our cited papers use Laplacian matrix representation, many papers use the adjacency matrix representation. One such example is the most classical paper on spectral clustering by Andrew Ng et al. (2001).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe are wondering if you had time to look at our response to the two inconsistency mentioned in your report. With our response, we believe that our main result still holds. However, if you think further clarification is needed, we can provide more details. Thank you!\"}", "{\"summary\": \"This paper introduces a nearly linear time algorithm designed to determine the number of clusters in a graph using the eigengap heuristic. The primary contribution is an algorithm that counts the eigenvalues of the normalized adjacency matrix within a specified interval [a,b]. The author then leverages graph sparsification and this eigenvalue-counting algorithm to estimate the number of clusters $k$.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper makes a valuable contribution to the scalability of spectral clustering for large graphs. Combined with [Peng et al., 2017], it demonstrates that both the eigen-gap heuristic to infer $k$ and spectral clustering to infer the clusters can be executed in nearly-linear time.\", \"weaknesses\": [\"The datasets used in the experimental evaluation are limited in size. 
Testing on larger graphs is necessary to conclude the algorithm's empirical runtime, especially as the paper emphasizes scalability as a key benefit. For example, testing on graphs with 10^4, 10^5, 10^6, and more vertices (or edges) would show how runtime empirically scales with graph size.\"], \"questions\": [\"Remark 2: This section is somewhat unclear. In line 191, $\\\\beta$ appears with a factor of 2 and requires $\\\\beta > 2$, while line 194 does not include the factor of 2 and suggests $\\\\beta > 1$. This may be a typo, as line 194 may need to specify $\\\\beta > 2$ as well. Additionally, it seems this assumption primarily applies to the interval sizes in step (iii) of the algorithm. For clarity, it might be preferable to assume something like $\\\\lambda_k(M) \\\\ge (2+\\\\epsilon) \\\\lambda_{k+1}(M)$ for some $\\\\epsilon > 0$ and proceed from there.\", \"It seems the authors' proposed sparsification step requires prior knowledge of $k$, given that the definitions of $p_{u}(v)$\\u200b and $p_v(u)$\\u200b on lines 177 and 180 rely on the quantity $ \\\\lambda_{k+1}(N_{G}) $. Could the authors clarify if the method can avoid this dependency on $k$ or if alternative sparsification approaches, such as spectral sparsification by Spielman, Srivastava, or Teng, might also be suitable?\", \"Line 215: what is $h_{ab}(M)$? If $h_{ab}$ is applied entrywise, I am not sure to understand where the equality comes from.\", \"Line 321: What is the justification for $\\\\|T_k(M)\\\\|_2 \\\\le 1$? Additionally, the third inequality in this line also assumes $\\\\|T_k(M)\\\\|_F^2 \\\\le n$, which is not explicitly shown. 
Further explanation would be helpful.\", \"Line 494: If $p$ and $q$ are fixed while $n$ increases, should the number of edges grow at a rate of $n^2$ instead?\"], \"minor_points\": \"\", \"line_38\": \"\\u201cgraph with n\\u201d: missing the word \\u201cvertices.\\u201d\", \"line_42\": \"\\u201cnearly-liner\\u201d should be corrected to \\u201cnearly-linear.\\u201d\", \"line_303\": \"The symbol should likely be $\\\\bar{M}$ rather than $\\\\bar{A}$.\", \"line_464\": \"The sentence in this line lacks clarity and may need rephrasing for coherence.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Question 2: I meant running special clustering with cross-validation using intra-cluster variance versus global variance criterion?\\nI guess people use many of such cross validation techniques in practice when the number of clusters is unknown...\"}", "{\"comment\": \"Dear Reviewer,\\n\\nSorry for the mistake we made in our last response to your question. Here is our new proof.\\n\\nBy the higher-order Cheeger inequality, we have that\\n$$\\n\\\\rho(k+1)\\\\leq O((k+1)^3)\\\\sqrt{ 1-\\\\lambda_{k+1} }.\\n$$\\nSince $\\\\rho(k+1)=\\\\Omega(1)$ and $k=O(\\\\mathrm{poly}\\\\log n)$ for most cases, we have that $1-\\\\lambda_{k+1} = \\\\Omega(1/\\\\mathrm{poly}\\\\log (n))$. With this, we can always treat $1-\\\\lambda_{k+1}$ as $1/\\\\mathrm{poly}\\\\log (n)$ and the total sampled edges from the sparsification step remains $\\\\tilde{O}(n)$.\\n\\nWe do acknowledge that this updated analysis requires the assumption of $k=O(\\\\mathrm{poly}\\\\log n)$. While it's a reasonable assumption, we should have stated it in our original submission. Please let us know if any further clarification is needed. 
Thank you very much.\"}", "{\"metareview\": \"This paper proposes a novel algorithm for determining the number of clusters in a graph based on the eigen-gap heuristic. The key contribution is a nearly-linear time algorithm that leverages orthogonal polynomials and sparse matrix representations to efficiently approximate the trace of the graph's normalized adjacency matrix, thereby avoiding the computationally expensive task of computing all eigenvalues. This addresses a significant bottleneck in applying the eigen-gap heuristic for spectral clustering.\\n\\nReviewers find the proposed algorithm and its theoretical analysis to be interesting and a step in the right direction for improving the efficiency of spectral clustering. The nearly-linear time complexity is a significant improvement over traditional methods.\\n\\nHowever, reviewers also raise some concerns:\\n\\n- Technical Issues in Proof: One reviewer identified potential technical issues in the proof that were not fully addressed by the authors during the rebuttal phase. This raises concerns about the correctness of the theoretical claims.\\n- Limited Experimental Evaluation: The experiments lack comparisons with other baseline methods for determining the number of clusters. 
This makes it difficult to assess the practical advantages of the proposed algorithm.\\n- Clarity of Proofs: The proofs could benefit from careful rewriting to improve their accessibility and clarity for a broader audience.\", \"recommendation\": \"While the paper presents a promising approach for determining the number of clusters in a graph, the reviewers agree that it does not meet the bar for acceptance at ICLR in its current form.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was interesting and an issue about a proof was identified\"}", "{\"summary\": \"Given an undirected graph that has a significant eigen gap between k-th and (k+1)-th largest eigenvalues, this paper gives a near-linear (in the number of edges) time algorithm for identifying k.\\n\\nThe algorithm is built on several known results, but the combination of these is natural and elegant. The first step is to reduce the problem to counting the number of eigenvalues in a given range [a, b]. For this task, this paper makes use of Chebyshev\\u2019s polynomial and expansion, to reduce this to a trace estimation problem. This problem is then solved by employing Hutchinson\\u2019s estimation, and this paper gives a new concentration bound of this estimation (specifically for their trace approximation problem).\\n\\nExperiments are conducted to validate the performance of the proposed algorithm, on synthesized datasets by stochastic block model and some other synthesized datasets generated by scikit-learn. The real world running time is reported.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The result is clean, and the eigen gap assumption seems to be natural (and was also justified in previous works)\", \"The near-linear time algorithm is very efficient\", \"The presentation is very clear\"], \"weaknesses\": [\"Technical contribution seems to be limited, since almost all steps are using known/standard techniques. 
However, I do find the combination of them natural and elegant\", \"The experiments did not compare with any other baselines\", \"The experiments did not use real datasets and only small-scale synthesized dataset is used\"], \"questions\": [\"In the statement of Lemma 14, epsilon only appears in the running time of CountEigenvalues, and I don\\u2019t see how it affects the accuracy. Actually, I think your CountEigenvalue has this epsilon as input. I\\u2019m not saying the proof has a problem; I\\u2019m just saying the statement loos strange.\", \"Section 3.3, in the description of the main algorithm, do you also need to specify the epsilon for CountEigenvalues? In this context, I think you need to say epsilon is an input parameter?\", \"Actually, Theorem 6 does not have the parameter epsilon, whereas CountEigenvalues need a parameter epsilon. However, how this epsilon is picked in order to prove Theorem 6 is not clearly discussed. Does it depend on the universal constant C? If so, then maybe it\\u2019s good to state this dependence, since perhaps your algorithm is efficient even when C is not constant.\", \"In the statement of Theorem 6, k is defined as the number to satisfy \\\\Upsilon_G(k) \\\\geq C k. However, is k well defined, in particular, is the k that satisfies this unique? If it is not unique, then what\\u2019s the guarantee of your algorithm then?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response.\\n\\nFor the uniqueness of k, you mentioned \\u201cthe number of clusters k is defined as the *minimum* value to\\u2026\\u201d. I didn\\u2019t find this claim in your manuscript; instead, it is only saying k satisfies certain inequality (around Theorem 6), and the minimum is not mentioned. 
Are you saying that von Luxburg shows the definition mentioned in your manuscript is equivalent to what you just said about minimum?\\n\\nFor the baselines, I\\u2019m not sure about concrete ones, but perhaps one could try some naive heuristics etc. Or maybe one could (binary/doubling?) search k, by utilizing an (existing) clustering algorithm which assumes k is given, and then look at how good the clustering looks like.\"}", "{\"comment\": \"We thank the reviewer for their careful reading and evaluation on our work. Here we respond to the concerns raised from the report.\\n\\n> **First inconsistency:** The sparsifier method itself needs the knowledge of $k$ (which the overall algorithm is trying to determine).\\n\\nWe highlight that our algorithm doesn't need prior knowledge of $k$ due to the following reasons. By the definition of $\\np_u(v) $, \\n a good approximation of $C\\\\cdot \\\\frac{\\\\log n}{1-\\\\lambda_{k+1}(N_G)}$ suffices for our purpose. Since $1-\\\\lambda_{k+1}(N_G)=\\\\Theta(1)$ when $G$ has $k$ clusters, we can treat $C\\\\cdot \\\\frac{\\\\log n}{1-\\\\lambda_{k+1}(N_G)}$ as $\\\\mathrm{poly}\\\\log(n)$ is sufficient, and this results in the total number of sampled edges as $\\\\widetilde{O}(n)$. It's important to notice that, with this approximation on the sampling probability, our work returns the *exact* value $k$; such exact value of $k$ is needed in spectral clustering. \\n\\nWe further highlight that this is *not* a circular argument, and our work applies *reasonable approximation* of the sampling probability to achieve the *exact* value of $k$.\\n\\n> **Second inconsistency (correctness):** The authors do not prove that this algorithm cannot terminate for some $k'<k$ value.\\n\\nThank you for pointing out this technical question. 
To explain why there is no large gap between $\\\\lambda_{j+1}(N_G)$ and $\\\\lambda_j(N_G)$, notice that by the higher-order Cheeger inequality we have for any $2\\\\leq j\\\\leq k$ that \\n $\\n \\\\rho^2(j)/j^6 \\\\lesssim 1-\\\\lambda_j(N_G) \\\\lesssim \\\\rho(j)$.\\nOn one side, if $1-\\\\lambda_j(N_G)$ is close to $\\\\rho(j)$, then a large gap between $\\\\lambda_{j+1}(N_G)$ and $\\\\lambda_j(N_G)$ \\nimplies a lower bound of $\\\\Upsilon_G(j)$ and therefore $G$ has $j$ clusters; in this case our algorithm returns the correct number of clusters as output. On the other side, if $1-\\\\lambda_j(N_G)$ is close to $\\\\rho^2(j)$, then Kwok et al. (2013) implies that $1- \\\\lambda_j(N_G)\\\\approx 1-\\\\lambda_{j+1}(N_G)$. We will make this more formal in the next version of our paper.\"}", "{\"summary\": \"The paper introduces a novel algorithm designed to determine the number of clusters (k) in a graph based on the eigen-gap heuristic, which historically requires high computational effort due to its dependence on calculating all eigenvalues of the graph's normalized adjacency matrix. This research addresses one of the bottleneck of spectral clustering, presenting a new methodology that computes k in nearly-linear time. The algorithm, with high probability, accurately computes the number of clusters by analyzing the normalized adjacency matrix of the graph through a sparse representation.\\nCentral to this approach is the strategic use of orthogonal polynomials, which facilitate an efficient approximation of the graph's spectral properties. 
This method avoids the computationally expensive task of direct eigenvalue computation by instead approximating the trace of the graph normalized adjacency matrix.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The strategy of sparsifying the graph and using orthogonal polynomials to efficiently approximate the trace seems new and constitute a nice contribution.\", \"weaknesses\": \"1) The effectiveness of the algorithm heavily relies on specific assumptions about the graph's structure, particularly the presence of well-defined clusters and clear spectral gaps.\\nWhile the algorithm presented in the paper represents a substantial theoretical advancement in the field of graph clustering, its practical applicability might be limited by its stringent reliance on specific graph conditions,. A critical oversight is the lack of a mechanism within the algorithm to pre-assess whether a given dataset meets these conditions. Without preliminary testing to verify these prerequisites, users might apply the algorithm to unsuitable datasets, leading to poor performance or invalid clustering results. Incorporating a diagnostic test as an initial step in the algorithm could significantly enhance its utility. This modification would make the algorithm more robust and adaptable, extending its relevance and effectiveness across a broader range of practical scenarios where data conditions are not ideal or well-understood in advance.\\n\\n2) The paper may not provide extensive comparative analysis with other state-of-the-art clustering algorithms (with model selection), particularly those that might use different approaches such as density-based clustering or machine learning models that do not rely on spectral properties. 
This limits understanding of where the presented algorithm stands in the broader landscape of model selection techniques.\", \"questions\": \"1) Given the critical reliance of the algorithm on specific structural conditions within the graph, could the authors elaborate on any potential methods or strategies to integrate a diagnostic test within the algorithm to pre-assess whether these conditions are met in a given dataset? This addition would be crucial for ensuring the algorithm's adaptability and effectiveness.\\n\\n2) Could you run your algo on difficult synthetic data and compare it to some state of the art and simple algorithms (like cross validation of some unsupervised criteria)?\", \"minor_questions\": [\"Define the notation h_{a,b}(M), T_i(M)...\", \"Are the Chebyschev polynomials used for a specific reason or other orthogonal polynomials could be considered as well?\", \"In Lemma 14, the probability obtained should not depend also on epsilon?\", \"It could maybe more insightful to state the main result with a probability that depend on the parameters of the problem (rather than 9/10).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the clarification and valuable suggestion. We will add more experimental results and discussion on the use of spectral clustering with cross-valuation to determine the number of clusters in the next version of our paper. However, to the best of our knowledge, the running time of these algorithms is much higher than the nearly-linear time complexity of our proposed approach.\"}", "{\"comment\": \"The author's argument on $1-\\\\lambda_{k+1}=\\\\Omega(1)$ is incorrect.\", \"the_authors_write\": \"1) $\\\\rho(k+1)=\\\\Omega(1)$\\n2) $(1-\\\\lambda_{k+1})$ **<=** $2 \\\\rho(k+1) $\\n3) They claim 1 and 2 implies $1-\\\\lambda_{k+1} = \\\\Omega(1)$. \\n\\nThis is incorrect. 
In order to show that $1-\\\\lambda_{k+1} = \\\\Omega(1)$, the authors need to show that $(1-\\\\lambda_{k+1})$ is **greater than an $\\\\Omega(1)$ quantity**. \\n\\nIn general, if you have $y=\\\\Omega(1)$ and $x \\\\le 2y$ this does not imply $x= \\\\Omega(1)$. For example, if $y=1$ and $x=1/n$ then the inequality $x \\\\le 2y$ holds but $x=1/n$ which is not $\\\\Omega(1)$.\", \"title\": \"Continued mistakes in the proof argument\"}" ] }
5ddsALwqkf
Neptune: The Long Orbit to Benchmarking Long Video Understanding
[ "Arsha Nagrani", "Mingda Zhang", "Ramin Mehran", "Rachel Hornung", "Nitesh Bharadwaj Gundavarapu", "Nilpa Jha", "Austin Myers", "Xingyi Zhou", "Boqing Gong", "Cordelia Schmid", "Mikhail Sirotenko", "Yukun Zhu", "Tobias Weyand" ]
This paper describes a semi-automatic pipeline to generate challenging question-answer-decoy sets for understanding long videos. Many existing video datasets and models are focused on short clips (10s-30s). While some long video datasets do exist, they can often be solved by powerful image models applied per frame (and often to very few frames) in a video, and are usually manually annotated at high cost. In order to mitigate both these problems, we propose a scalable dataset creation pipeline which leverages large models (VLMs and LLMs), to automatically generate dense, time-aligned video captions, as well as tough question answer decoy sets for video segments (up to 15 minutes in length). Our dataset Neptune covers a broad range of long video reasoning abilities and consists of a subset that emphasizes multimodal reasoning. Since existing metrics for open-ended question answering are either rule-based or may rely on proprietary models, we provide a new open source model-based metric (GEM) to score open-ended responses on Neptune. Benchmark evaluations reveal that current open-source long video models perform poorly on Neptune, particularly on questions testing temporal ordering, counting and state changes. Through Neptune, we aim to spur the development of more advanced models capable of understanding long videos.
[ "video understanding", "dataset", "metric", "long video understanding", "benchmark" ]
Reject
https://openreview.net/pdf?id=5ddsALwqkf
https://openreview.net/forum?id=5ddsALwqkf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zc6ETFJm40", "xEOQhUI7Gy", "vEIGsVgG2v", "rLLUnaXtiB", "qWYKJNHcQC", "pabXGmZOLl", "m60DNrqn8K", "gP4IMMor1r", "eVDnxWCJMR", "apfkxP8kR0", "Nt62assOSj", "MwZ94Kyrki", "LeWgNGBtcc", "H1utCRl6nd", "FH4MGvsmV0", "EDpLizBmoS", "Bha4dVBRmQ" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733272551155, 1730552813333, 1732583068788, 1730603576663, 1732390204249, 1732390076531, 1733272757969, 1732667013152, 1732597789602, 1734675171358, 1737523663234, 1732894150109, 1733182003056, 1732388587438, 1732390028100, 1730534088380, 1733194357597 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_mqoh" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_jd8F" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_jd8F" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_mqoh" ], [ "ICLR.cc/2025/Conference/Submission4809/Area_Chair_JL4t" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_j9hH" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Authors" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_j9hH" ], [ "ICLR.cc/2025/Conference/Submission4809/Reviewer_jd8F" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response\\\\! 
We are glad that we could mitigate some of your concerns about the data distribution. We'd like to provide some final comments:\\n\\n1\\\\. **Pipeline scalability**: We would like to clarify that our pipeline can be used to generate instruction tuning data without modification, but leave this for future work. \\n2\\\\. **Data distribution**: We acknowledge that score computation is less robust for tasks with fewer examples. We leave generating more a balanced set of questions as future work, but offer to summarize the task types with lower question counts such that no task has fewer than 100 questions, which will give robust estimates of model performance for all tasks. Concretely, we summarize \\\"Creator intent\\\", \\\"Predictive\\\", \\\"Goal reasoning\\\" under the \\\"Predictive\\\" type which then has 114 questions. We also summarize \\\"Identification\\\" and \\\"Comparison\\\" into \\\"Identification and Comparison\\\" with 259 questions.\\n\\nWe hope that this would alleviate the reviewer's final concerns.\"}", "{\"summary\": \"This paper introduces a scalable, semi-automatic pipeline for constructing the Neptune benchmark dataset, aimed at evaluating long video understanding capabilities. A new evaluation metric, GEM, is proposed, demonstrating its advantages over traditional rule-based evaluation metrics.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Neptune addresses several essential question types related to temporal-aware video understanding, including the challenging temporal ordering and counting.\\n2.\\tTwo subsets are introduced to comprehensively assess current multi-modal large language models.\", \"weaknesses\": \"1. In Figure 4, the authors show that EgoSchema reaches saturation at approximately 16 frames, while performance continues to increase with Neptune. 
This conclusion is drawn based on the powerful Gemini model; it would be beneficial to additionally include results from some open-source models (e.g., short or long context MLLMs) to better promote the development of open-source MLLMs.\\n2. Model names should be consistently formatted (e.g., VideoLLaMA2 vs. VideoLlaMA2).\", \"questions\": \"1. In Tables 3 and 4, it is evident that a language-only Gemini model (without ASR) even outperforms the baseline image model (BLIP2) and a short-context MLLM (Video-LLaVA). Could you provide possible reasons for this? Is it because the language models in BLIP2 and Video-LLaVA are not sufficiently robust to describe visual information?\\n2. In Table 3, the authors present the performance with and without ASR. The blind model (0 frame) achieves a comparable (Gemini) or even higher (VideoLLaMA2) GEM score than the model evaluated with full-frame input. Are there possible ways to mitigate the impact of ASR during dataset construction to encourage the design of vision-centric MLLMs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to author rebuttal\", \"comment\": \"Thank you for your response. However, I still have the following concerns, which lead me to maintain my current score:\\n\\n1. What are the advantages of a semi-automated process in terms of scalability? From my perspective, for a benchmark, scalability is less important than accuracy itself. Thus, I am more inclined toward manual annotation.\\n\\n2. Regarding the imbalance in question type distribution, I still believe this is a significant issue. For instance, a model that excels at a specific type of question might achieve higher overall scores on this benchmark. The purpose of building a QA benchmark is to reflect the true capabilities of models through QA tasks. 
I believe that ensuring a diverse range of question types is essential to accurately capture a model's performance in real-world scenarios.\"}", "{\"summary\": \"The paper introduces a new benchmark dataset called Neptune, designed to assess multimodal, high-level understanding of long videos. The key contributions of the paper are as follows:\\n\\n1. Semi-Automatic Pipeline: The authors propose a scalable, semi-automatic pipeline for generating challenging question-answer-decoy sets for long videos, leveraging large video language models (VLMs) and large language models (LLMs) to automatically generate dense, time-aligned video captions and tough QAD sets for video segments up to 15 minutes long.\\n2. Neptune Dataset: The dataset, Neptune, covers a broad range of long video reasoning abilities and includes a subset that emphasizes multimodal reasoning. It consists of 3,268 QAD annotations for 2,405 videos.\\n3. Evaluation Metrics: Given the limitations of existing metrics for open-ended question answering, the authors introduce a new model-based metric called GEM (Gemma Equivalence Metric) to score open-ended responses on Neptune, offering a static and reproducible evaluation method.\\n4. Benchmark Evaluations: The paper presents benchmark evaluations that reveal current open-source long video models perform poorly on Neptune, especially on questions of testing temporal ordering, counting, and state changes, indicating a significant gap between open-source and proprietary models like Gemini-1.5 and GPT-4.\\n\\nIn summary, the paper contributes a new benchmark dataset and metric aimed at advancing the field of long video understanding through multimodal reasoning, with the goal of spurring the development of more advanced models in this area.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The semi-automated annotation method proposed in this paper overcomes some of the challenges associated with manual annotation, such as the creation of complex temporal reasoning questions, which can be laborious for humans. The utilization of GPT to generate these questions reduces this burden.\\n\\n2. The introduction of the GEM metric addresses the need for a static, open-source evaluation metric for open-ended VideoQA, which has been a limitation in prior work.\\n\\n3. The paper clearly articulates the motivation behind the Neptune dataset and the GEM metric, making it easy for readers to understand the importance of the work.\", \"weaknesses\": \"1. The key contributions that distinguish it from previous benchmarks are not clearly highlighted. Although the paper has made a simple comparison with EgoSchema on the impact of the number of input frames on performance, I feel that it is too simplistic. It would be constructive to see more comparisons with other existing datasets (e.g., VideoMME) to understand how NEPTUNE's complexity and diversity align or differ from them, which could highlight the unique challenges it presents.\\n\\n2. In addition to the Gemini bias discussed in the paper, I noticed that the frame captions are generated by PaLI-3. I am concerned that the hallucinations and preferences of the VLM itself (such as a preference for describing static images) may affect the quality of the final generated QA.\\n\\n3. As a benchmark for long videos, the discussion on the impact of the number of input frames on model performance is too simplistic. For example, Table 3 only discusses the performance of Gemini with 1 frame, 150 frames, and All frames. In my view, the performance with 150 frames has already reached saturation, but it is uncertain how the performance with 100 frames and 50 frames would be, that is, where the performance saturation point is.\", \"questions\": \"1. 
As a benchmark for long videos, single-frame assessment fails to reflect the significance of temporal modeling. I believe that the sensitivity to temporal order should be evaluated by adopting methods such as reversing or shuffling the input frames.\\n\\n2. I have observed that the Gemma Equivalence Metric shows similar or even inferior performance compared to direct assessment with Gemini 1.5 Pro, with an accuracy rate of only around 70%. In contrast, previous works such as Lingo-Judge[1] seem to demonstrate that fine-tuned model-based evaluation methods can achieve assessment accuracy close to human judgment. Is this due to differences in dataset complexity?\\n\\n3. I have noticed that the distribution of generated QA types exhibits a clear long-tail distribution, with the largest proportion being Temporal Ordering tasks. Does this not suggest that the complexity and richness of the generated questions are insufficient? In my view, Ordering tasks belong to very simple temporal reasoning and do not meet the authors' claim of being complex questions that are difficult for humans to propose.\\n\\n[1] LingoQA: Visual Question Answering for Autonomous Driving\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for highlighting the scalability of our pipeline, the advantages of GEM over existing methods and diversity of our benchmark\\\\! In the following, we hope to address the reviewer\\u2019s concerns and answer their questions in the hope that the reviewer will consider raising their score.\\n\\n# W1: Evaluate more Open-source Models \\n\\nThank you for this suggestion. As requested, we evaluate InternVL \\\\[1\\\\], LLaVA-OneVision \\\\[2\\\\] and MiniCPM-v \\\\[3\\\\] on Neptune and provide results below. We will add these results to the main paper as well. 
We also provide frame ablations for two of these models (LLaVA-OneVision and MiniCPM-v) in the Global Author Response. Note that these experiments only use the video frames, not ASR. Comparing these new results with the open source results in Tab. 4 shows that more recent open source long-context MLLMs have made significant progress and are catching up to closed source models. We attribute these enhancements to larger training datasets and the use of the powerful Qwen-2 LLM. For reference, Gemini-1.5-Pro achieves 69.31% accuracy with 150 frames.\\n\\nIn addition, we are planning to host a leaderboard on our Github page to keep track of the performance of newer open-source models as they are developed, and to allow authors to submit their own scores. \\n\\n| | MCQ \\\\- Neptune-Full | MCQ \\\\- Neptune-MMH |\\n| :---- | ----- | ----- |\\n| InternVL2-8B \\\\- 16 frames | 57.12% | 54.30% |\\n| LLaVA-OneVision-7B \\\\- 100 frames | 66.22% | 62.82% |\\n| MiniCPM-v \\\\- 50 frames | 56.59% | 53.27% |\\n\\n \\n\\\\[1\\\\]\\\\* How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites \\n\\\\[2\\\\]\\\\* LLaVA-OneVision: Easy Visual Task Transfer \\n\\\\[3\\\\]\\\\* MiniCPM-V: A GPT-4V Level MLLM on Your Phone\\n\\n\\\\* (unpublished or concurrent work)\\n\\n# W2 & 3: Comparison to more recent benchmarks and pipelines\\n\\nThank you for this suggestion\\\\! Please see the response in the Global Author Response. We compare our pipeline in detail to 11 other prior and concurrent dataset creation pipelines, and will add this discussion to the paper. \\n\\n# W4: Comparing GEM to human ratings \\n\\nWe note that the GEM dev is human-annotated, so Tab. 1 compares how well ratings of different metrics align with humans. We will make this clearer in the text. To give more details, we first manually annotate the answer equivalence dataset. 
The scores on this dataset are entirely provided by humans, and this human benchmarking allows us to compare the performance of GEM to human performance. On this set compared to human performance, we find our GEM metric very closely approximates the performance of the much more expensive, closed-source Gemini model. We describe this in the Global Author Rebuttal.\"}", "{\"comment\": \"We thank the reviewer for their positive rating and constructive feedback on our paper\\\\! In the following, we hope to address the reviewer\\u2019s concerns and answer their questions.\\n\\n# W1: Frame Experiments with an Open-source Model \\n\\nThis is a great suggestion. We have run this experiment with 3 open-sourced models (VideoLLaMA2 results are in Table 3 of the main paper), and we have evaluated 2 more recent models. Results are provided in the Global Author Rebuttal. We find that Gemini saturates at around 50 frames while open-sourced models have varying saturation points, VideoLLaMA2 (8 frames), MiniCPM (50 frames) and LLaVA-OneVision (100 frames).\\n\\n# W2: Formatting \\n\\nThank you for catching this\\\\! We will update the paper to ensure model names are consistent.\\n\\n# Q1: Strong LLM\\nYes exactly, Gemini is a much bigger model and is sometimes able to use subtle cues to infer the correct answer from just the answer choices alone, which is not the case for the Flan-T5 model used in BLIP-2. This highlights the importance of a strong LLM for video understanding. We believe that the poor results with the older models are due to the small size of their language models, as suggested by the reviewer. \\n\\n# Q2: ASR Performance \\nThis is a great point, and we note that it is tricky to create datasets for long video understanding where ASR is not relevant \\\\- other concurrent datasets such as VideoMME also show large improvements to performance when ASR is added. This is the reason we propose the hard subset (MMH), in order to highlight and assess visual understanding. 
MMH includes only QADs that human labelers have annotated as requiring vision to solve. On MMH there\\u2019s a clear gap between Gemini-ASR and Gemini-ASR-all-frames.\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": \"We thank the reviewers for their helpful comments and insightful questions which have helped strengthen our paper! We summarize the improvements we have made as part of the rebuttal here:\\n\\n1. **Comparisons to additional open source models**: We evaluated three more recent open source models: LLaVA-OneVision, MiniCPM and InternVL2, giving a more comprehensive and up-to-date view of the state of the art on our dataset. \\n2. **More detailed ablation with different number of frames**: We added an ablation going from 1 to 150 frames in 6 steps. Besides Gemini, which we used in the original ablation in the paper, we also ablated on 3 open source models, providing insights into each model\\u2019s ability to reason with limited information and their saturation points. \\n3. **Experimental comparison with three other video understanding benchmarks**: We compare Neptune against Video-MME, CinePile and Perception Test using Gemini-1.5-Flash with different numbers of frames, showing the model performance and saturation characteristics on each benchmark.\\n4. **An additional metric that gives equal weights to different task types**. This allows for a more balanced view of model performance and removes skew of the metric to the most prominent task types. \\n5. **A breakdown of per-task accuracy**. We have added a table that breaks down performance of all benchmarked models per task and allows detailed insights into each model\\u2019s strengths and weaknesses.\\n\\nAdditionally, we will include clarifications that reviewers requested, e.g. the comparison of our pipeline to pipelines used for other benchmarks, and details on our evaluation of the GEM metric. 
Again, we sincerely thank the reviewers for taking the time to review our paper and providing us with their valuable feedback.\"}", "{\"comment\": \"We thank the reviewer for considering our response and hope to address their remaining concerns here:\\n\\n**1\\\\. Scalability of semi-automatic pipeline** \\nThe semi-automated process significantly reduces the workload of human labelers. In our HPQ (human-proposed question) experiment (l. 498), we compared the rater effort of filtering and correcting automatically generated QA to the effort of writing original QA from scratch and found that our pipeline increases labeler productivity by almost 2x (see the first bullet in the Global Response for details). Note that this does not yet include writing decoy answers, which is another labor intensive step for humans. Experiments with different models on both sets (Tab. 2\\\\) showed only minor differences in performance, suggesting that question difficulty is roughly comparable between the two approaches. Besides saving effort in generating evaluation data, the pipeline can also be used to generate training data in a more scalable way. Since data quality is less critical for training data, human effort could potentially be reduced further, or data could even be generated fully automatically, allowing for very large scale data generation.\\n\\n**2\\\\. Question type imbalance** \\nWe agree that it is desirable for a benchmark to provide a holistic view of different model capabilities. Our paper includes a breakdown of model performance for different task types in Fig. 4 (bottom left). In addition, we have added a detailed table with a per-task breakdown of scores in the **new Sec. B.4 and Table 6 in the appendix**. The table shows that there are significant variations in per-task scores even for models that have roughly similar overall scores. 
This shows that models have strengths and weaknesses in different areas and allows an informed choice of models for different applications.\\n\\nWe would like to note that uneven question type distribution is common among long video benchmarks. MLVU \\\\[1\\\\], Video-MME \\\\[2\\\\], MoVQA \\\\[3\\\\], LVBench \\\\[4\\\\], MMBench-Video \\\\[5\\\\], VideoVista \\\\[6\\\\], and CinePile \\\\[7\\\\] all have similarly skewed distributions of task types. Most benchmarks, like Neptune, report per-task scores in addition to scores averaged over all examples. MLVU \\\\[1\\\\] mitigates the imbalance issue in the averaged score by reporting accuracy averaged over all tasks, which gives each task type equal weight in the final score. We offer to follow this example and report the mean of per-task scores as an additional metric. Please find the resulting scores below, which we will add to the main paper.\\n\\nGenerally, we find that task-averaged scores are slightly higher because harder question types like \\u201ctemporal ordering\\u201d have higher representation in Neptune. 
However, we observe that the relative performance of different models is similar to the original metric.\\n\\n\\n| | | Full | | MMH | |\\n| :---- | :---- | ----- | ----- | ----- | ----- |\\n| | **Modalities** | **Acc** | **Task-averaged Acc.** | **Acc** | **Task-averaged Acc.** |\\n| *Image Models* | | | | | |\\n| BLIP-2 | RGB (center frame) | 34.80 | 38.34 | 28.10 | 28.37 |\\n| *Short Context MLLMs* | | | | | |\\n| Video-LLaVA | RGB (8 frames) | 25.79 | 30.74 | 24.00 | 24.00 |\\n| Video-LLaMA2 | RGB (16 frames) | 44.74 | 49.51 | 40.29 | 42.61 |\\n| Video-LLaMA2 | RGB (16 frames)+ASR | 49.28 | 56.41 | 45.38 | 53.22 |\\n| *Long Context MLLMs* | | | | | |\\n| MA-LMM | RGB (120 frames) | 20.22 | 19.49 | 19.51 | 20.79 |\\n| MiniGPT4-Video | RGB (45 frames) | 24.63 | 27.43 | 22.89 | 28.88 |\\n| LLaVA-OneVision | RGB (100 frames) | 66.22 | 70.20 | 62.82 | 66.73 |\\n| MiniCPM-V 2.6 | RGB (50 frames) | 56.59 | 62.53 | 53.27 | 57.30 |\\n| *Closed-source MLLMs* | | | | | |\\n| VLM captions \\\\+ LLM (JCEF) | VLM captions (16 frames) | 58.51 | 60.26 | 56.45 | 53.56 |\\n| GPT-4o | RGB (8 frames)+ASR | 80.23 | 83.31 | 72.86 | 77.89 |\\n| Gemini-1.5-pro | RGB(all frames) \\\\+ ASR | 80.66 | 84.29 | 75.32 | 80.45 |\\n| Gemini-1.5-flash | RGB(all frames) \\\\+ ASR | 76.90 | 80.37 | 71.05 | 75.58 |\\n\\n\\\\[1\\\\] MLVU, Zhou et al. \\n\\\\[2\\\\] Video-MME, Fu et al. \\n\\\\[3\\\\] MoVQA, Zhang et al. \\n\\\\[4\\\\] LVBench, Wang et al. \\n\\\\[5\\\\] MMBench-Video, Fang et al. \\n\\\\[6\\\\] VideoVista, Li et al. \\n\\\\[7\\\\] CinePile, Rawal et al.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks for the additional experiments and clarifications, I have no further questions. Overall, I think this benchmark is valuable for the community.\"}", "{\"metareview\": \"This paper introduces NEPTUNE, a benchmark designed to evaluate multimodal understanding of long videos, addressing the limitations of existing datasets that primarily focus on short clips. 
The paper contributes a semi-automatic pipeline leveraging large video language models (VLMs) and large language models (LLMs) to generate dense, time-aligned video captions and challenging question-answer-decoy (QAD) sets for video segments up to 15 minutes long. Additionally, it proposes a new evaluation metric, GEM (Gemma Equivalence Metric), to assess open-ended responses in a static and reproducible manner. Reviewers commend the dataset's focus on long video reasoning, its innovative annotation pipeline, and the introduction of GEM, which tackles the limitations of prior evaluation metrics. However, concerns about the lack of comparisons with recent benchmarks (e.g., MLVU, Video-MME) and state-of-the-art open-source models (e.g., InternVL, LLaVA-OneVision) are raised. Reviewers also note potential biases in dataset creation due to reliance on VLMs and LLMs, limited novelty in the annotation pipeline, and insufficient evaluation against human judgment for GEM. While the contributions are valuable, the absence of more substantial comparisons and deeper analysis leaves room for improvement.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers appreciate the authors' responses but remain unconvinced on key concerns, maintaining their previous scores. One major issue is the imbalance in question-type distribution within the NEPTUNE benchmark. This could lead to inflated performance for models specializing in specific question types, undermining the benchmark's ability to reflect true model capabilities. While metric adjustments may partially address this, reviewers argue they could compromise robustness and evaluation accuracy for underrepresented tasks. Additionally, concerns persist regarding the semi-automatic benchmark construction process. Reviewers question its scalability advantages, emphasizing that accuracy and diversity should take precedence over scalability for benchmarks. 
Extending the pipeline to generate high-quality long-video instruction-tuning datasets could better demonstrate its utility and community impact. While some concerns were mitigated, the fundamental issues with question-type balance and the semi-automated approach remain unresolved.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"The author has solved my concern. I increase my rating to 5.\"}", "{\"comment\": \"Dear reviewer!\\n\\nPlease let us know if our additional responses have addressed your remaining concerns. We'd be happy to answer any other questions you might have.\\n\\nBest,\\nNeptune authors\"}", "{\"comment\": \"We thank the reviewers for their insightful comments, highlighting the strengths of our dataset and suggesting areas for improvement. Here, we address points raised by multiple reviewers.\\n\\n# Comparison to other Long Video dataset pipelines \\n\\nWe highlight novelties of our dataset, and its creation pipeline compared to both previous and concurrent works \\\\[1-11\\\\].\\n\\n1. **Reducing Manual Effort**: Our pipeline is semi-automatic, unlike manual pipelines \\\\[4,5,3,11\\\\]. In LVBench \\\\[3\\\\] and VideoMME \\\\[11\\\\], even the video selection is done manually, and for MoVQA \\\\[1\\\\], only the decoys are generated automatically. We asked raters to propose QAs (but not decoys) for videos from scratch, and found the time taken (19.03 minutes), longer than simply discarding or correcting QAs generated automatically (10.32 minutes), i.e. showing our pipeline almost halves rater effort. Manually proposing decoys is even more challenging, both in time as well as effort (as it requires more creativity to come up with incorrect answers). The closest to our pipeline is concurrent work VideoVista \\\\[6\\\\], which uses GPT4 to generate QADs automatically. However the performance of GPT4 and Gemini-1.5 on this dataset is close to saturated (98% on some categories). \\n2. 
**Scalability to generic YouTube videos**: Our pipeline can be applied to any generic YouTube video. This is unlike EgoSchema \\\\[2\\\\], which relies on human generated captions, SFD \\\\[9\\\\], which requires movie titles, loglines and synopses (human-written), or MLVU \\\\[4\\\\], which re-uses annotations from existing datasets for many of their tasks. This makes the dataset scalable, as YouTube has a constantly growing set of videos. Potential further use cases could be applying this pipeline to generate large quantities of training data. \\n3. **High Quality Video Captions using careful multi-stage prompting**. A key novelty is the ability to generate high quality video captions, using a multi-stage process (Sec 4.2) which involves generating shot-level captions, clustering shots into segments, and adding visual support captions. Examples of caption quality are provided in the supplementary (Fig. 13 and Sec. F), showcasing details such as visual support, numerous events, mood and atmosphere, details from the ASR, and even high level feelings and emotions. \\n\\n# Clarifications on the GEM metric \\n\\n***Training:*** We train GEM by fine-tuning the open source Gemma model on BEM, a dataset of general answer equivalence ratings. Therefore, GEM is not specific to Neptune and generally applicable to other answer equivalence problems as well. \\n***Comparison to human assessments:*** To assess GEM (Sec. 5.1, appendix D.1, D.2), we **manually annotate** an answer equivalence dataset by sampling 97 question-answer pairs from Neptune. We then generate 3 candidate answers per question by prompting VideoLLAVA, Gemini-1.5-Pro and MA-LMM to write a free-form answer for each question without looking into the decoys or ground truth. We then used human raters to manually score these responses, resulting in a dev set with 291 QA-pairs that allows us to compare GEM to human performance. 
GEM metric reaches an F1-Score of 71.2, which very closely approximates the performance of the much more expensive, closed-source Gemini model (F1-Score 72.8). We will move some details from the appendix to the main paper.\\n\\n# Additional Frame Experiments\\n\\nWe provide additional frame experiments for both Gemini-1.5-Pro and new open-sourced models below. We find that Gemini saturates at around 50 frames while open-sourced models have varying saturation points, VideoLLaMA2 (8 frames), MiniCPM (50 frames) and LLaVA-OneVision (100 frames). For both MiniCPM and LLaVA-OneVision we were unable to fit more frames into the context window.\\n\\nThis is an interesting experiment which we will add to the paper. Note that these are uniformly sampled frames, and hence cover the full span of the video.\\n\\n| \\\\# of Frames | 1 | 8 | 16 | 50 | 100 | 150 |\\n| :---- | ----: | ----: | ----: | ----: | ----: | ----: |\\n| **Gemini-1.5-Pro** | 55.57 | 63.80 | 66.62 | 70.08 | **70.44** | 69.31 |\\n| **LLaVA-OneVision** | 56.56 | 62.51 | 63.98 | 65.70 | **66.22** | Out of context |\\n| **MiniCPM-V 2.6** | 50.89 | 53.83 | 53.86 | **56.59** | 55.00 | Out of context |\\n| **VideoLLaMA2** | 40.88 | **44.74** | 44.74 | \\\\- | \\\\- | \\\\- |\\n\\nWe also provide frame ablations comparing Neptune to other datasets in our response to reviewer jd8F.\\n\\n\\\\[1\\\\]\\\\* MoVQA, Zhang et al. \\n\\\\[2\\\\] EgoSchema, Mangalam et al. \\n\\\\[3\\\\]\\\\* LVBench, Wang et al. \\n\\\\[4\\\\]\\\\* MLVU, Zhou et al. \\n\\\\[5\\\\] MMBench-Video, Fang et al. \\n\\\\[6\\\\]\\\\* VideoVista, Li et al. \\n\\\\[7\\\\]\\\\* ReXTime, Chen et al. \\n\\\\[8\\\\]\\\\* LongVideoBench, Wu et al. \\n\\\\[9\\\\]\\\\* Short Film Dataset, Ghermi et al. \\n\\\\[10\\\\] Towards long-form video understanding, Wu et al. \\n\\\\[11\\\\]\\\\* Video-MME, Fu et al. \\n\\\\[12\\\\] CinePile, Rawal et al. 
\\n\\\\[13\\\\] Perception Test, Pa\\u0306tra\\u0306ucean et al.\\n\\n\\\\* (unpublished or concurrent work)\"}", "{\"comment\": \"We thank the reviewer for their thorough review and the positive comments and valuable feedback\\\\! We try to address all points here and hope that the reviewer would consider raising their score.\\n\\n# W1: Contributions beyond previous benchmarks\\n\\nThank you for this suggestion. Appendix A has a survey of related datasets, including a table with MLVU and other recent datasets, which we have extended to include Video-MME and LongVideoBench. We will move these to the main paper. In addition, we believe that the most significant differences are our semi-automatic pipeline (see Global Response), and the open-set evaluation with the GEM metric. Unlike Neptune, Video-MME is entirely manually annotated, and is hence harder to scale (more details in the Global Response). We also ran ablations with varying numbers of frames on Neptune and three other benchmarks. We find that most benchmarks saturate at about 50 frames, including Video-MME, which has much longer videos than Neptune. We included Perception Test here as it is new, it does not claim to be a long video benchmark and saturates at 16 frames. We added Sec. B.3 and Fig. 
6 to the supplementary material to showcase these results.\\n\\n| \\\\# of Frames | 1 | 16 | 50 | 100 |\\n| :---- | ----: | ----: | ----: | ----: |\\n| **Neptune-MMH** | 62.89 | 71.22 | **73.37** | 73.02 |\\n| **Video-MME** | 56.28 | 68.22 | **70.52** | 70.05 |\\n| **CinePile (2-3 mins)** | 52.32 | 56.84 | 58.97 | **59.03** |\\n| **Perception Test** | 51.48 | **64.21** | 63.72 | 63.21 |\\n\\n# W2: Bias from Gemini and PaLI-3\\n\\nWe agree that the efficiency improvements from using PaLI-3 and Gemini for QAD generation come with the risk of introducing model biases and hallucinations, which is why we took measures to mitigate and measure them.\\n\\nThe main **mitigation** mechanism is running two rounds of human filtering and having every QAD reviewed by at least *four* human labelers. 65% of auto-generated questions got rejected by raters. Also, our QAD generation prompt used both the frame captions and ASR as context, so QADs are grounded in both video and audio modalities.\\n\\nTo **measure** the impact of bias and hallucinations, we compared question-answer pairs generated by humans with those generated with our pipeline, (see \\u201cResults on HPQ and Gemini bias\\u201d starting in l. 498\\\\) and found only a minor drop in model accuracy between the sets. While we mention only Gemini bias in this section, we should amend it to all model biases, including visual biases introduced by PaLI-3. This is because the HPQ set was annotated without any model at all, as a control for our semi-automatic data generation. We also point out that in Tab. 4, GPT-4o performs competitively with Gemini. If the data had significant biases towards Gemini, we would expect Gemini to score much higher than other models.\\n\\n# W3: Impact of the number of input frames\\n\\nWe agree that it would be helpful to have a finer-grained analysis of the effect of the number of frames. 
We have run additional experiments towards this (see Global Response) and find the saturation point for Gemini-1.5-pro-1.0 is around 50 frames. \\n\\n# Q1: Sensitivity to temporal ordering\\n\\nThank you for this interesting suggestion\\\\! We will run experiments on this as well.\\n\\n# Q2: GEM performance\\n\\nThis is a great point. We note that all the models applied to the dev set in Table 1 were applied zero-shot. As described in the Global Response, our dev set was created by sampling QAs from Neptune and running models to create predicted answers. The finetuning dataset (BEM) used to create GEM is not from the same domain as Neptune. In contrast, Lingo-judge \\\\[1\\\\] is trained on over 1000 examples and then evaluated on the same dataset, which is why we suspect the F1 score is so high. By evaluating our metric zero-shot, we hope that it can be applied to other datasets as well. \\n\\n\\\\[1\\\\] LingoQA, Marcu et al.\\n\\n# Q3: Question type distribution\\n\\nWe agree that a uniform distribution of question types would be a good area for future improvement. We explain the reasons for Neptune\\u2019s current question type distribution in Sec. B.1.1 of the appendix. Summarised, the key reasons are (1) the LLM is prompted to generate these automatically, and the selection depends strongly on the given video (some question types are not possible for a video), and (2) the quality of questions produced by the LLM varies strongly by question type. Therefore, after quality checking by raters, the distribution changes significantly (we add an additional figure to the appendix showing the distribution change). \\nWe also found that tough temporal ordering questions are difficult to propose, as they require an identification of high level events which are non-trivial to detect. We report performance per question type, in order to judge performance across all abilities. 
\\n\\nGiven the automatic nature of our pipeline, it is possible to augment the dataset with more examples of each type, which we will focus on for future work.\"}", "{\"summary\": \"The paper introduces NEPTUNE, a new benchmark for evaluating multimodal understanding of long videos. It addresses the limitations of current video datasets and models, which often focus on short clips and can be solved by image models applied per frame. The authors propose a semi-automatic pipeline that leverages large Video Language Models (VLMs) and Large Language Models (LLMs) to automatically generate dense, time-aligned video captions and challenging question-answer-decoy sets for video segments up to 15 minutes long. The paper also discusses the related works in video question answering, the motivation behind the NEPTUNE dataset, and the detailed pipeline for dataset creation. It concludes with the limitations of the dataset and potential areas for future improvement, such as reducing biases introduced by the models used in the creation process and enhancing the dataset with additional annotations like temporal grounding or entity labels.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces an innovative semi-automatic pipeline designed to generate question-answer-decoy (QAD) sets. This method effectively reduces the annotation costs associated with manual video captioning and question generation. By leveraging large models such as VLMs and LLMs, the pipeline automates the creation of dense, time-aligned video captions, which are then used to derive challenging QAD sets for segments of video content. This approach not only scales well but also maintains a high standard of quality in the dataset.\\n\\n2. A significant contribution of this paper is the training and proposal of a model for the automatic evaluation of open-ended QA responses. 
The authors address the limitations of existing metrics, which are either rule-based or rely on proprietary models, by finetuning an open-source model on a generic answer equivalence dataset. This results in the GEM (Gemma Equivalence Metric), a new metric that provides a more accurate assessment of open-ended responses on the NEPTUNE dataset, thus facilitating more reliable benchmarking and comparison of different models.\\n\\n3. The NEPTUNE benchmark covers a wide range of video lengths and types, ensuring diversity in the dataset. This comprehensive approach guarantees that the models evaluated on this benchmark are tested on their ability to reason over a variety of video content, going beyond the capabilities required for understanding short video clips. The inclusion of different video domains and the emphasis on longer-form content make the NEPTUNE dataset a robust platform for assessing multimodal video understanding systems across a broad spectrum of real-world scenarios.\", \"weaknesses\": \"1. The paper does not include an evaluation and comparison with the latest open source models such as InternVL, LLaVA-OneVision, and MiniCPM. These models are part of the current research landscape and offer a different perspective on video understanding capabilities\\n\\n2. The paper primarily focuses on the analysis of benchmarks like NextQA and EgoSchema but does not provide a thorough comparison with more recent benchmarks such as MLVU, Video-MME, and LongVideoBench, which are designed to evaluate long-form video understanding \\n\\n3. Although the semi-automatic pipeline proposed in the paper effectively reduces the workload of manual annotation, it lacks novelty in terms of a detailed analysis and comparison with other pipelines. The paper could benefit from a more in-depth exploration of the unique aspects of its pipeline and how it compares to existing methods \\n\\n4. 
While the proposed GEM metric may have lower evaluation costs compared to assessments using models like GPT or Gemini, the paper lacks a comparison with the consistency of human evaluations. Introducing human annotations to quantify and analyze the quality of assessments would strengthen the paper's findings\", \"questions\": \"Why not compare with other benchmarks and test the latest open-source models?\\nHow to ensure the reliability of GEM's evaluation results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response, which has alleviated some of my concerns regarding the imbalance in question types. However, I believe that the inherent issue of question-type imbalance within the benchmark still persists. While adjusting the metrics can mitigate the risk of individual tasks disproportionately influencing the overall score, it may also compromise the robustness of testing for other tasks. Specifically, an insufficient number of questions could affect the evaluation accuracy for certain task types. Additionally, I maintain my reservations about the semi-automatic benchmark construction process. If this pipeline could be extended to produce high-quality long-video instruction-tuning datasets (e.g., Vript[1], CinePine[2]), it would better demonstrate its effectiveness and contribute more significantly to the community. In summary, I stand by my previous rating.\\n\\n[1] Vript: A Video Is Worth Thousands of Words\\n\\n[2] CinePile: A Long Video Question Answering Dataset and Benchmark\"}" ] }
5dcnU4gihd
Attention Head Purification: A New Perspective to Harness CLIP for Domain Generalization
[ "Yingfan Wang", "Guoliang Kang" ]
Domain Generalization (DG) aims to learn a model from multiple source domains to achieve satisfactory performance on unseen target domains. Recent works introduce CLIP to DG tasks due to its superior image-text alignment and zero-shot performance. Previous methods either utilize full fine-tuning or prompt learning paradigms to harness CLIP for DG tasks. Those works focus on avoiding catastrophic forgetting of the original knowledge encoded in CLIP but ignore that the knowledge encoded in CLIP in nature may contain domain-specific cues that constrain its domain generalization performance. In this paper, we propose a new perspective to harness CLIP for DG, i.e., attention head purification. We observe that different attention heads may encode different properties of an image and selecting heads appropriately may yield remarkable performance improvement across domains. Based on such observations, we purify the attention heads of CLIP from two levels, including task-level purification and domain-level purification. For task-level purification, we design head-aware LoRA to make each head more adapted to the task we considered. For domain-level purification, we perform head selection via a simple gating strategy. We utilize MMD loss to encourage masked head features to be more domain-invariant to emphasize more generalizable properties/heads. During training, we jointly perform task-level purification and domain-level purification. We conduct experiments on various representative DG benchmarks. Though simple, extensive experiments demonstrate that our method performs favorably against previous state-of-the-arts.
[ "Domain generalization", "Vision Language Model", "CLIP", "Low-rank Adaptation" ]
Reject
https://openreview.net/pdf?id=5dcnU4gihd
https://openreview.net/forum?id=5dcnU4gihd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzMy8dxvjK", "f2uHew8hEH", "dBVkYrNSye", "ckxptawf0v", "R8h91VZiwj", "L4oI602XBN", "3UMYHQoy1p" ], "note_type": [ "official_review", "meta_review", "official_review", "decision", "official_review", "official_comment", "official_review" ], "note_created": [ 1730634499821, 1734431812990, 1729798015217, 1737523553687, 1730539581501, 1732720645437, 1730674815533 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3093/Reviewer_N2Qg" ], [ "ICLR.cc/2025/Conference/Submission3093/Area_Chair_wfh9" ], [ "ICLR.cc/2025/Conference/Submission3093/Reviewer_L69A" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3093/Reviewer_wUHR" ], [ "ICLR.cc/2025/Conference/Submission3093/Reviewer_2Up5" ], [ "ICLR.cc/2025/Conference/Submission3093/Reviewer_2Up5" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents an approach to harness CLIP for DG, highlighting that some attention heads may encode domain-specific information that limits generalization capabilities. The authors introduce two levels of purification: task-level and domain-level. For task-level purification, they propose head-aware LoRA (HA-LoRA) to adapt each attention head to the specific task, while domain-level purification involves a learnable gating strategy to select heads that contribute to domain-invariant features.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a novel CLIP-based method with the attention head purification, which provides new perspectives for research on domain generalization.\\n2. Extensive experiments demonstrate the effectiveness of the method.\\n3. The paper is well organized and easy to follow.\", \"weaknesses\": \"1. Limited exploration of head interactions: The focus on purifying heads independently may overlook potential interactions between heads, as attention heads often work in conjunction. 
Neglecting their interactions may lead to suboptimal configurations and reduce the overall efficacy of the model.\\n2. Limited theoretical justification: The paper lacks a theoretical underpinning for why attention head purification, specifically through HA-LoRA, would systematically improve DG. The concept of different attention heads being more or less generalizable is intuitive but could benefit from further theoretical exploration or a more rigorous explanation of why the proposed method is expected to work universally across domains and tasks.\\n3. Gating complexity: The gating mechanism, while useful, introduces additional complexity that may complicate the model's interpretability and practical implementation. This paper does not provide a detailed analysis of the computational cost of the proposed method compared to other methods, which could be important for its practical applicability.\\n4. Unexplored interaction with text encoder: Since CLIP\\u2019s strength lies in its vision-language alignment, the text encoder is kept fixed throughout the training, which seems to avoid investigating potential improvements that could arise from co-adapting the text and image encoders for DG tasks.\", \"questions\": \"1. Analysis of computational overhead: Please provide more details on the computational costs of your method. A comparison of runtime and memory usage between your method and other CLIP-based approaches would be helpful.\\n2. Additional explanation of gating strategy: It would be beneficial to provide more details regarding the impact of the gating strategy, potentially including ablation studies. As it stands, this does not appear to be a particularly novel idea.\\n3. Experimental results on MMD: Table 2 shows a significant drop in performance when MMD loss is integrated with HA-LoRA. The HA-LoRA module is designed to help the model retain task-relevant knowledge, and the MMD loss aims to capture information that is consistent across multiple domains. 
These two objectives should not be inherently conflicting; however, their combination leads to a decline in performance.\n4. Further theoretical insights: Could you provide more theoretical insights into why head-specific purification specifically improves DG compared to other forms of fine-tuning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a new perspective (attention head purification) to harness CLIP for domain generalization (DG). The paper ultimately received four negative scores.\", \"the_strengths_of_this_paper_include\": \"1) well-written; 2) provided a valuable observation; 3) a structured approach; 4) the method is effective on DG benchmarks.\n\nHowever, the reviewers think this paper has the following drawbacks: 1) insufficient explanation of HA-LoRA and head interactions; 2) parameter increase disclosure; 3) lack of reproduced results; and 4) limited theoretical justification.\n\nThe authors have provided a rebuttal. After checking the rebuttal and comments, the reviewers acknowledged that some of their concerns had been resolved, but the explanation of HA-LoRA remained unsatisfactory. Finally, the reviewers think this paper did not adequately explain why the proposed method is specifically designed for the domain generalization problem, and they consistently gave negative scores. To this end, the AC thinks this paper cannot meet the requirements of ICLR at this point and thus regretfully recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided a rebuttal. After checking the rebuttal and comments, the reviewers acknowledged that some of their concerns had been resolved, but the explanation of HA-LoRA remained unsatisfactory. 
Finally, the reviewers think this paper did not adequately explain why the proposed method is specifically designed for the domain generalization problem, and they consistently gave negative scores.\"}", "{\"summary\": \"This paper addresses the problem of domain generalization (DG), where data from multiple source domains is used during training, with the objective of evaluating the trained model on unseen target domains. The study focuses on CLIP ViT models, drawing on previous findings that different attention heads capture distinct properties of an image. The authors introduce a head-aware LoRA mechanism, specifically tailored for each attention head, to capture task-specific knowledge while minimizing interference. Additionally, a domain-invariant gating module with learnable gating values is applied to the head outputs. These outputs are weighted, and a Maximum Mean Discrepancy (MMD) loss is employed to promote the extraction of domain-invariant features. The proposed approach is validated on zero-shot CLIP models and can be seamlessly integrated with other DG methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\tThe strategy of reducing interference among attention heads and decoupling the learning of task-specific and domain-invariant features is technically sound.\n2.\tThe proposed method is efficient, as it introduces only a small number of learnable parameters.\n3.\tThe method demonstrates versatility by being compatible with various approaches, and notable improvements have been observed through its integration.\", \"weaknesses\": \"1.\tThe motivation of the paper is based on the observation that different attention heads capture distinct properties. However, the CLIP model, while trained as a foundation model, is not flawless. Its generalized knowledge may not cover specific datasets comprehensively, limiting its ability to attend perfectly to every domain. 
The proposed method offers a more parameter-efficient fine-tuning approach by applying LoRA independently to each attention head. The observed improvements can be attributed to the model's ability to capture unique properties from the source domains, which in turn enhances performance on the target domain. Therefore, the claim of \\\"purifying at the task level\\\" may not be entirely appropriate, as the HA-LoRA mechanism complements the existing knowledge within the CLIP model rather than isolating it.\\n2.\\tWhy does the proposed method modulate only the projection matrices of Q and V with HA-LoRA? Since the attention map is generated through the interaction between Q and K to query the information stored in V, wouldn\\u2019t it be more consistent to also modulate the K projection? Clarifying the rationale behind this design choice would strengthen the paper.\\n3.\\tThe paper does not provide an ablation study on the impact of sharing the B matrix, which would be valuable for understanding its role in the proposed approach.\\n4.\\tThe domain-invariant gating mechanism appears to function as an additional layer of attention applied on top of the attention heads. It is unclear how this linear scaling effectively filters out domain-specific information while preserving only the domain-invariant features. A more detailed explanation of this mechanism would help clarify its effectiveness.\\n5.\\tIn Table 4, all the compared and integrated methods use a frozen image encoder. It would be insightful to include results for these methods with a naive LoRA applied, ensuring a fair comparison and better understanding of the proposed approach's advantages.\", \"questions\": \"1.\\tSince the best performance of the proposed method is achieved when integrated with PromptStyler, how exactly is PromptStyler incorporated in Table 4? 
Given that PromptStyler operates under a source-free DG setting and does not require access to source data, it may not be appropriate to directly reuse results from the original paper without further justification.\\n2.\\tHow is \\\"importance\\\" defined when evaluating the attention quality, as shown on the right side of Figures 1 and 3? Additionally, it would be insightful to visualize how the proposed method modulates the attention by comparing attention maps before and after applying the method for the same head.\\n3.\\tWhat does the \\\"worst\\\" attention look like for the proposed method, as indicated in Figure 3? Providing such an example would offer a deeper understanding of the method's limitations.\\n4.\\tHow does the proposed method compare in terms of the number of learnable parameters? A direct comparison would highlight the parameter efficiency of the approach.\\n5.\\tWhat do the learned gating weights look like? Visualizing or analyzing these weights could provide more insight into the gating mechanism's behavior.\\n6.\\tHow sensitive is the method to the number of layers where HA-LoRA is integrated and to the rank numbers, as mentioned in Lines 354-355? An ablation study on these aspects would help clarify the impact of these choices.\\n7.\\tCan the authors further elaborate on the statement: \\\"CLIP, by design, may contain domain-specific cues that constrain its domain generalization performance\\\"? A more detailed interpretation would strengthen the argument.\\n8.\\tIf HA-LoRA primarily focuses on task-level adjustments, it may still retain biased domain knowledge, as domain-specific information is only removed at a later stage. 
Is there a concrete example that demonstrates this process and illustrates how the method addresses or mitigates this bias?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper focuses on domain generalization of CLIP. It introduces task-level and domain-level purification to make attention heads more task-adapted and domain-generalizable.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method is simple and clearly presented.\", \"weaknesses\": \"1. Although the method is technically simple, its final performance is not as promising as expected, as shown in the SOTA table (Table 5).\\n2. The paper distinguishes the proposed method from previous ones by posing the question: \\\"Is the way to avoid knowledge forgetting sufficient to harness CLIP for domain generalization?\\\" Does this imply that the proposed method can complement previous knowledge-forgetting avoidance techniques, such as regularization-based methods? If so, it would be beneficial to show this experimentally.\\n3. The visualization experiment in Figure 3 does not fully verify the motivation. To demonstrate the attention head purification resulting from the proposed design, it would be more convincing to compare head-aware LoRA with conventional LoRA, as well as the proposed method with regularization-based methods.\", \"questions\": \"Please refer to Weakness 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the authors' feedback.\\n\\nBased on the first response above, it appears that task-level purification should not be considered separate from the DIG approach. 
In this context, I believe the paper only proposes a router/gating mechanism to select heads when generalizing to a specific domain. Despite the simplicity of the method, router/gating has been extensively studied in the literature, and there is no specific design of this component for the domain generalization problem. As such, I believe this paper is not yet ready for publication. The authors should conduct a more in-depth study to provide additional insights and refine their method accordingly. Lastly, I recommend removing the phrase \\\"task-level purification,\\\" as it does not accurately reflect a process of literal \\\"purification.\\\"\\n\\nBased on the above reason, I will maintain my current rating.\"}", "{\"summary\": \"This paper addresses the challenge of Domain Generalization by observing that domain-specific cues inherent in CLIP can hinder generalization. To tackle this, the authors introduce a novel strategy including two levels of purification of attention head: task-level, achieved through head-aware LoRA to make heads more task-adapted, and domain-level, using a gating strategy with MMD loss to make selected heads more domain-invariant. Extensive experiments on several datasets show that this method outperforms previous state-of-the-art methods, validating its effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written and easy to follow.\\n\\n2. The authors provide a valuable observation that not all attention heads within CLIP contribute equally to domain generalization in specific tasks. \\n\\n3. The paper presents a structured approach by decoupling attention head purification into two levels: task-level and domain-level purification. \\n\\n4. Extensive experiments on well-recognized domain generalization benchmarks provide robust evidence of the method's effectiveness.\", \"weaknesses\": \"1. 
Insufficient Explanation of HA-LoRA: The proposed HA-LoRA claims to improve task-level generalization by adding independent parameters for each head. However, adding these additional parameters typically risks overfitting. The authors should clarify why overfitting does not appear to be an issue in this case. Additionally, aside from the independent B matrices, no specific training objective is proposed to encourage task-level purification. It seems implausible that the independent B matrices alone could achieve this effect. Providing more insight into how independent LoRA B matrices promote task-specific learning would strengthen the argument.\n\n2. Parameter Increase Disclosure: Although the authors mention additional parameters in LoRA, there\u2019s no concrete information on the parameter increase relative to the original LoRA. This is crucial for understanding the model\u2019s computational trade-offs.\n\n3. Lack of reproduced results: In Table 4, the results marked as 'xx method + ours' were obtained by the authors. To better understand the impact of HA-LoRA, please also report the baseline performance of these prompt-based methods as reproduced by the authors. This should not be difficult, as the authors would first have to reproduce the results of the original papers and then conduct further analysis by adding their proposed components.\n\n4. Related Work: The authors should also discuss the following paper in relation to the proposed HA-LoRA: Tian, Chunlin, et al. \"HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning.\" arXiv preprint arXiv:2404.19245 (2024).\", \"questions\": \"1. Parameter Efficiency: Can the authors provide quantitative evidence on the added parameter count for HA-LoRA compared to the original LoRA, and how this scales with model size and number of tasks?\n\n2. Impact of Random Seeds: Given that slight variations in random seeds can impact model performance, how sensitive is this method to random seeds? 
Have the authors performed robustness tests across different seeds?\\n\\n3. Empirical Evidence for Task-Specific Knowledge in B Matrices: What empirical evidence supports the claim that each independent B matrix learns task-specific information? For example, could visualizations or analysis of learned representations clarify how these matrices differ across tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5dKiZeF3MD
GraphSTAGE: Channel-Preserving Graph Neural Networks for Time Series Forecasting
[ "Tong Guan", "Kaiyue Ma", "Jiaheng Peng", "Jun Liang", "Bo Du", "Ming Jin", "Shirui Pan" ]
Recent advancements in multivariate time series forecasting (MTSF) have increasingly focused on the core challenge of learning dependencies within sequences, specifically intra-series (temporal), inter-series (spatial), and cross-series dependencies. While extracting multiple types of dependencies can theoretically enhance the richness of learned correlations, it also increases computational complexity and may introduce additional noise. The trade-off between the variety of dependencies extracted and the potential interference has not yet been fully explored. To address this challenge, we propose GraphSTAGE, a purely graph neural network (GNN)-based model that decouples the learning of intra-series and inter-series dependencies. GraphSTAGE features a minimal architecture with a specially designed embedding and patching layer, along with the STAGE (Spatial-Temporal Aggregation Graph Encoder) blocks. Unlike channel-mixing approaches, GraphSTAGE is a channel-preserving method that maintains the shape of the input data throughout training, thereby avoiding the interference and noise typically caused by channel blending. Extensive experiments conducted on 13 real-world datasets demonstrate that our model achieves performance comparable to or surpassing state-of-the-art methods. Moreover, comparative experiments between our channel-preserving framework and channel-mixing designs show that excessive dependency extraction and channel blending can introduce noise and interference. As a purely GNN-based model, GraphSTAGE generates learnable graphs in both temporal and spatial dimensions, enabling the visualization of data periodicity and node correlations to enhance model interpretability.
[ "Time Series Forecasting", "Graph Neural Networks", "Channel-Preserving" ]
Reject
https://openreview.net/pdf?id=5dKiZeF3MD
https://openreview.net/forum?id=5dKiZeF3MD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zfTipfNfjL", "yMUGTVsY3R", "wm5DgHQ5jQ", "vwUhTECC2M", "ugfD4pnFSr", "p6roEempzI", "p5CQ103aV0", "oHxsfubyLO", "m7cnNNYCcS", "k5TqSQMBZT", "k1pkeHjQ23", "iur8VQjvaw", "gyanGINP3a", "cT4c2JQgAa", "WRT5cAbwos", "WE4TlRGoMx", "VJuPGrAUlt", "UJDR1WGBD0", "U0jw5myFfh", "SiYiaeKCsA", "ReXiazwWlW", "RXR8dHsx3N", "OzLIzj7GPY", "MnbgNAxdbC", "MUS7M2WCTV", "LkMoQTQI2X", "Kn1XH6J1Mj", "JfF8tRdMVu", "IVtxY03YGV", "GeiVNqesXi", "GdBZFeavRx", "GYiQi3G0VZ", "FmWmmNdjYv", "Flt2jbbaPQ", "F79bR51CtK", "DsFlTrjshn", "C1DExJX3IV", "8E1HWaVWeY", "6PmWrM1IVB", "5XOKcYquXK", "4wUoeJDNwe", "4swMv9Uz9a", "4Xp65L3KOF", "1uLOeGDI2v", "1uC60rAmmV", "1ZFfIexfut", "1OBeL9Fbyq", "0uv2rutiYl" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734664618715, 1733312766102, 1730368965435, 1733191808798, 1732740450146, 1732429694701, 1732519643296, 1732176177330, 1733124004799, 1733064575946, 1732428063412, 1732176965563, 1732175577592, 1732175336016, 1732177227315, 1733078048869, 1732177123232, 1737523639412, 1732519785338, 1733192732699, 1733191939183, 1733312267857, 
1732761869224, 1732519556767, 1732453834733, 1732175642803, 1732176353078, 1732175847408, 1733064770140, 1732429776102, 1732430270491, 1732457697157, 1732768595060, 1732176528259, 1733124227323, 1733192218848, 1730644735425, 1730716857945, 1732176316470, 1732548882579, 1730146281791, 1732370452804, 1732175750800, 1732177278530, 1732176220214, 1732519698467, 1732178611140, 1733078287546 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4428/Area_Chair_F1CZ" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_7NU8" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_7NU8" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_9agk" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_9agk" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_9agk" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_yAwh" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_RvJQ" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_RvJQ" ], [ "ICLR.cc/2025/Conference/Submission4428/Reviewer_9agk" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ], [ "ICLR.cc/2025/Conference/Submission4428/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a graph neural network model, called GraphSTAGE, for multivariate time series forecasting. It aims to decouple the learning of intra-series and inter-series dependencies. Unlike channel-mixing methods, GraphSTAGE is a channel-preserving method that can potentially alleviate the so-called noise and interference caused by mixing channels.\", \"major_strengths\": [\"Learning different types of dependencies effectively is an important topic in multivariate time series forecasting.\", \"Extensive experiments are presented.\"], \"major_weaknesses\": [\"There is a lack of technical justification for the proposed method, making it appear a bit ad hoc.\", \"Although using learning-based graphs can alleviate some problems of using pre-defined graphs, it incurs additional computational overhead during run-time. 
The efficiency-accuracy tradeoff needs to be studied more comprehensively.\", \"We appreciate the authors for responding to the reviewers' comments and suggestions and for conducting additional experiments to address some of the issues raised. Although this work holds promise for publication in a respectable venue such as ICLR, it would have a higher chance of receiving stronger votes from reviewers for acceptance if the weaknesses above and some others mentioned by the reviewers could be addressed more thoroughly, or else the important topic of learning different types of dependencies in multivariate time series forecasting would remain unresolved. The authors are encouraged to improve their paper for future submission by considering the comments and suggestions of the reviewers.\"], \"additional_comments_on_reviewer_discussion\": \"The authors responded to the reviews by engaging in discussions with the reviewers and providing additional experiment results. Nevertheless, all reviewers still feel that this is a borderline paper; the only (minor) difference is which side of the borderline. It is not suitable for ICLR to accept a paper with doubts that still need to be sorted out. Addressing the outstanding issues, including the major weaknesses listed above, will make this paper better prepared for publication.\"}", "{\"comment\": \"Dear Reviewer 7NU8,\n\nHope this message finds you well.\n\nWe appreciate your diligent efforts in evaluating our paper. **We have provided detailed responses to your additional concerns** regarding **the significance of our results, the specifics of our model design, and the comparisons with more recent SOTA methods.** May we know if our response addresses your main concerns? 
If so, we kindly ask for your reconsideration of the score.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"summary\": \"This study focuses on graph structure learning for time series forecasting based on modeling inter-series and intra-series dependencies for complex multivariate time series. To enable efficient modeling of dependencies with controlled interference between variables and reduced noise, the authors propose GraphStage architecture, a GNN-based approach that, contrary to previous approaches, is channel-preserving. To achieve this, the GraphStage model is built upon patching and embedding layers, followed by decoupled inter-series and intra-series correlation graph structure modules that produce spatial and temporal graphs fine-tuned with the task. Finally, GraphStage is evaluated for time series forecasting against recent state-of-the-art architectures, showing competitive performance. Several ablation studies and visualizations of the learnable correlation provide additional insights into the potential of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors tackle a long-standing problem in time series forecasting, which is inferring a graph capturing evolving dependencies in terms of multiple variables. The main strengths of this work and the proposed method are summarized as follows:\", \"The authors combine embedding and patching mechanisms recently proposed for time series forecasting with the challenging task of graph structure learning, enabling channel independence.\", \"Correlations are captured in terms of a graph structure separately for the spatial and temporal dimensions, which remains a relatively novel design choice.\", \"Experimental results show that the proposed graph-based method outcompetes, in several cases, recent baselines in time series forecasting. 
Additionally, the visualizations of the learned temporal correlations are very interesting qualitative aspects of the presented study.\"], \"weaknesses\": \"1. **Poor Experimental Evaluation and Unclear Performance Significance:** (W1) Chosen baseline methods for comparisons are limited to non-graph-based models. In contrast, several methods leveraging GNNs for time series forecasting can be found in the literature [1]. (W2) Additionally, the performance improvements achieved by the proposed method are, in most cases, very small compared to the best competitor (for instance, the biggest difference achieved by GraphStage from its best competitor in Table 1 is for Solar-Energy: from MSE equal to 0.233 to MSE 0.192, yet GraphStage is outcompeted in terms of MAE). (W3) It is unclear if the authors have performed multiple runs with random seeds to capture the model\\u2019s performance variability. The lack of standard deviations makes it impossible to assess whether the results' difference is statistically significant.\\n2. **Limited Related Works Presented:** (W4) Graph structure learning for time series forecasting is a long-term studied problem in the literature, entirely before the pure graph paradigm (Yi et al., 2024). Several methods that capture dependencies in the spatial and temporal domain jointly (Shang et al., 2021; Wu et al., 2020) or separately (Kipf et al., 2018; Xu et al., 2023) have been proposed, but are rather not mentioned in the paper and not considered in the experiments. (W5) Similarly the graph learning module (softmax on dot products between pairs) is a standard choice in relevant works (Wu et al., 2020), yet not correctly cited in this work.\\n3. **Lack of Clarity in the Methodological Presentation and Inflated Claims:** (W6) Section 3 is not very easy to follow since it has a great amount of text, making it hard to understand how the two graph learning blocks interact and how the final output is derived and optimized. 
(W7) Some claims are slightly exaggerated, e.g., that the proposed components \\u201ceffectively capture intra-series and inter-series dependencies, respectively, while generating interpretable correlation graphs\\u201d. Additionally, performance improvements are, in the average case, around 6% (and no improvement for four datasets), thus the statement \\u201cwith improvements up to 20.8%\\u201d is slightly misleading.\\n\\n[1] Yi, K., Zhang, Q., Fan, W., He, H., Hu, L., Wang, P., ... & Niu, Z. (2024). FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective. Advances in Neural Information Processing Systems, 36.\\n\\n[2] Shang, C., Chen, J., & Bi, J. (2021). Discrete graph structure learning for forecasting multiple time series. arXiv preprint arXiv:2101.06861.\\n\\n[3] Kipf, T., Fetaya, E., Wang, K. C., Welling, M., & Zemel, R. (2018, July). Neural relational inference for interacting systems. In International conference on machine learning (pp. 2688-2697). PMLR.\\n\\n[4] Xu, N., Kosma, C., & Vazirgiannis, M. (2023, November). TimeGNN: Temporal Dynamic Graph Learning for Time Series Forecasting. In International Conference on Complex Networks and Their Applications (pp. 87-99). Cham: Springer Nature Switzerland.\\n\\n[5] Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., & Zhang, C. (2020, August). Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 753-763).\", \"questions\": \"1. Based on (W1-W3), the experimental evaluation could be enhanced with comparisons with relevant graph-based modules and better capture the variability of scores in comparisons with the baselines.\\n2. 
Based on (W4, W5), the related works section could be improved and include differences between design choices in the graph structure learning component, including the incorporation of sparsity, binary or continuous edges, and spatial or temporal dependencies, such that the technical contribution of the work is positioned in a more evident and fair way.\", \"3. Based on (W6), some parts could be replaced by equations (for instance, how the combination of spatial and temporal graphs is achieved and how the final output is derived) or references since several components are similarly used in other studies, e.g., patching. Similarly, some details about the model could be moved from the appendix to the main text.\", \"4. Based on (W7), it is unclear how interpretability is achieved. Since patching is performed first, followed by temporal graph learning and then spatial graph learning, it seems a bit exaggerated that interpretability is achieved, e.g., what can someone tell from the spatial graph concerning inter-series dependencies?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind Request for Reviewer's Attention and Feedback\", \"comment\": \"Dear Reviewer 9agk,\\n\\nHope this message finds you well.\\n\\nWe appreciate your diligent efforts in evaluating our paper. We have responded in detail to your questions. As the discussion period will end soon, we would like to kindly ask whether there are any additional concerns or questions we might be able to address. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score. \\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"I appreciate the author's effort in addressing my concerns and clarifying potential misunderstandings. 
The additional results can potentially improve the paper's presentation and contribution. However, several of the justifications provided are unclear.\", \"more_specifically\": [\"**Significance of results:** I thank the authors for adding the standard deviations, but this was done only for their method. I remain unconvinced if all models are run 5 times with random seeds. Some seem to be directly taken from relevant papers (e.g., TimesNet) where results are, in several cases, mentioned for a fixed seed (CrossGNN). I highlight this since, for point-wise metrics, differences in terms of 2nd-3rd decimals might be minor at the prediction level.\", \"**Comparison with GNN-based methods:** I thank the authors for incorporating more experimental results concerning relevant GNN-based forecasting methods. The additional sentence about the GNN-based forecasting methods provides minimal explanations in the related section. It does not adequately support the central novelty justification in the graph-learning part of the method. It seems that using two decoupled temporal and spatial graphs in a sequential manner is a choice rather experimentally than conceptually motivated.\", \"**Novelty of Contribution:** I remain unconvinced about the significance of the methodological contribution. The idea is positioned in between GNN-based forecasting and leveraging embedding mechanisms for forecasting in terms of contribution and experiments. Patching and channel independence are not novel when targeting a contribution in sequential models for forecasting. Similarly, spatial-temporal graphs and GNN-based forecasting have been well-studied but remain hard to interpret and computationally expensive. Finally, the ablation of the model components proves the significance of patching and input embeddings rather than graph structure learning (where replacing with attention gives similar results). 
In the second case of positioning their work among sequential models forecasting, more recent sota (e.g., FITS) could have been used for more meaningful comparisons.\", \"I once again sincerely thank the authors for their inputs. However, based on the above I prefer for the moment to maintain my scores.\"]}", "{\"title\": \"Request for Reviewer's Attention and Feedback\", \"comment\": \"Thank you for your valuable and constructive feedback, which has inspired further improvements to our paper. As a gentle reminder, it has been more than 2 days since we submitted our rebuttal. We would like to know whether our response addressed your concerns.\\n\\nFollowing your comments and suggestions, we have **answered your concerns** and **made the following revisions**:\\n\\n- We have expanded our experiments to **include four additional graph-based baseline methods** for multivariate time series forecasting tasks: (1) FourierGNN, (2) CrossGNN, (3) StemGNN, and (4) MTGNN. For complete results, refer to *Table 9 in Appendix D on Page 18*.\\n\\n- We have **added the standard deviations** of our model across all tasks to highlight the model\\u2019s strong robustness. For complete results, refer to *Table 10 and Table 11 in Appendix F on Page 22*.\\n\\n- Additionally, we **provided the pseudo-code** of our model to enhance the clarity of the methodological presentation. For detailed results, please see *Algorithm 1 in Appendix A on Page 15*.\\n\\nAdditionally, we have updated the related works to more clearly define the scope and background of this study. We have revised statements that could be slightly misleading, such as \\u201cimprovements up to 20.8%.\\u201d We have also revised the colors in Figure 15 and have updated the relevant text in the revised paper to make the interpretability clearer. \\n\\nWe sincerely thank you for your insightful review. 
We eagerly await your feedback and are ready to respond to any further questions you may have.\"}", "{\"title\": \"Request for Reviewer's Attention and Feedback\", \"comment\": \"Dear Reviewer 7NU8,\\n\\nWe sincerely thank you for your valuable and constructive feedback. Since the author/reviewer discussion period is coming to an end soon, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper or our rebuttal, please let us know and we will be more than happy to engage in further discussion and paper improvements.\\n\\nThank you so much for devoting time to improving our paper!\"}", "{\"comment\": \"# Response to Reviewer 9agk (2/2)\\n\\n> **W2: This paper appears to be an incremental contribution, with its components largely derived from previous works. Channel-preserving Strategy is inspired by iTransformer, and the fully GNN perspective is inspired by FourierGNN.** \\n\\nThank you for pointing this out. We would like to highlight the unique aspects of our approach and clarify the distinctions between our work and the recent models you mentioned.\\n\\n1. **Novelty in Research Motivation:**\\n We are the first to study the trade-off between **the variety of dependencies extracted** and **the noise that may be introduced by channel-mixing**. As we stated in our introduction: *\\\"Is it truly necessary to model all these dependencies?\\\"* Previous works [1] [2] often adopt a \\\"brute-force modeling\\\" approach, stacking parameters to capture as many dependencies as possible. While this might seem effective, it overlooks a critical issue\\u2014the potential noise introduced by this process. \\n2. **Innovation in Model Framework:**\", \"we_propose_a_novel_channel_preserving_framework\": \"GraphSTAGE. 
Through fair model variant experiments in *Table 4 on Page 10*, we validate the presence of such noise, underscoring the limitations of excessive dependency extraction. GraphSTAGE is a pure graph paradigm that decouples the extraction of **global** **temporal** dependencies and **global** **spatial** dependencies, **rather than being limited to local** neighbor information.\\n3. **Advancements in Model Performance:**\\n Despite its structural simplicity, our model performs comparably to or surpasses state-of-the-art models across 13 MTSF benchmark datasets. It **ranks first among 8 advanced models** in 22 out of 30 comparisons in *Table 1 on Page 7*. Specifically, on the PEMS07 dataset\\u2014which has the largest number of nodes\\u2014GraphSTAGE outperforms the recent SOTA iTransformer by 20.8%, indicating its **potential for application to larger-scale MTSF tasks**, such as extensive grid management.\\n\\n---\\n\\n**$\\\\triangleright$Differences with iTransformer:**\\n\\nConceptually, **iTransformer** is not a channel-preserving model; it **overlooks temporal channel information.** iTransformer projects the original time series data $X_{\\\\text{in}} \\\\in \\\\mathbb{R}^{N \\\\times T}$ (where $N$ is the number of nodes and $T$ is the length of the time series) into $H_S \\\\in \\\\mathbb{R}^{N \\\\times D}$ to capture spatial dependencies among nodes. However, this transformation ignores temporal dependencies and fails to learn the underlying temporal graph structures.\\n\\nIn contrast, our GraphSTAGE embeds the input data $X_{\\\\text{in}} \\\\in \\\\mathbb{R}^{N \\\\times T}$ into $H \\\\in \\\\mathbb{R}^{N \\\\times T \\\\times D}$, where $D$ is the embedding dimension. The original node and time dimensions are preserved. This channel-preserving framework enables the model to incorporate both spatial (inter-series) and temporal (intra-series) dependencies by decoupling them. 
This separation allows GraphSTAGE to capture temporal dynamics more effectively, which is a significant extension beyond what iTransformer offers.\"}", "{\"comment\": \"# Response to Reviewer 7NU8: Additional Concern (3/3)\\n\\n> **Additional Concern 3 (1/5):** The idea is positioned in between GNN-based forecasting and leveraging embedding mechanisms for forecasting in terms of contribution and experiments.\\n\\nThank you for pointing out this. Embedding mechanisms are used to inject relative positional information into inputs. While we have indeed proposed a refined time embedding in our paper (detailed in *paragraph 2 of Section 3.1*), which allows for more granular and sampling-frequency-adaptive relative positioning and significantly enhances performance on high-frequency datasets, **we did not list embedding mechanisms as our core contribution.**\\n\\nAs outlined in *Section 1 of the revised paper* , **our main contributions** are the novelty in **research motivation** and innovation in **model framework**. Specifically, **we are the first to study the trade-off** between the variety of dependencies extracted and the noise that may be introduced by channel-mixing. Additionally, we propose a pure graph paradigm that decouples the extraction of global temporal dependencies and global spatial dependencies.\\n\\nWe believe that these contributions position our work beyond being merely between GNN-based forecasting and leveraging embedding mechanisms. **Our focus is on addressing previously unexplored issues in dependency modeling and proposing a novel framework to tackle these challenges.**\\n\\nWe hope this clarifies the core contributions of our work.\\n\\n---\\n\\n> **Additional Concern 3 (2/5):** Patching and channel independence are not novel when targeting a contribution in sequential models for forecasting.\\n\\nThank you for pointing this out. Patching is a common modeling method, and **we do not claim it as our contribution**. 
\\n\\nWe also **did not claim in our paper that ours is a channel-independence model**. Since our model extracts temporal and spatial dependencies, we do not strictly enforce channel independence.\\n\\n---\\n\\n> **Additional Concern 3 (3/5):** Similarly, spatial-temporal graphs and GNN-based forecasting have been well-studied but remain hard to interpret and computationally expensive.\\n\\nThank you for pointing this out. We believe that the general statement about spatial-temporal graphs and GNN-based forecasting methods being hard to interpret and computationally expensive **may not directly apply to our work.**\\n\\n1. **Interpretability:** We have addressed the interpretability of our model in the **Response to Reviewer 7NU8 (4/4), specifically in W7 & Q4 (2/2)**. In summary, the temporal and spatial dependencies learned by our model **match the inherent patterns of the dataset** and **match the ground truth**. This demonstrates that our model provides meaningful and interpretable insights.\\n\\n2. **Computational Cost:** As shown in *Figure 5 on Page 7*, GraphSTAGE does not exhibit significantly higher computational costs compared to other methods. **Our model achieves superior performance with acceptable computational overhead**.\\n\\n---\\n\\n> **Additional Concern 3 (4/5):** The ablation of the model components proves the significance of patching and input embeddings rather than graph structure learning (where replacing with attention gives similar results).\\n\\nThank you for pointing this out. We greatly appreciate your thorough review and attention to the details of our ablation study. While in a small number of tests replacing the STAGE block with attention yields comparable results, closeness does not imply complete equivalence. 
Overall, the STAGE block outperforms the attention mechanism.\\n\\nMoreover, even when using attention, **it operates within our proposed channel-preserving framework.** The experimental results further highlight the superiority of our framework, which is precisely the focus of our core contributions.\"}", "{\"comment\": \"# Response to Reviewer 9agk: Additional Concern (1/1)\\n\\n> **Additional Concern1: While the experimental results show some improvement in the forecasting performance of your proposed model, the underlying source of this improvement remains unclear. Therefore, I prefer to maintain my original score.** \\n\\nThank you for your feedback and for acknowledging the performance improvements achieved by our model. We have addressed your concern in the revised paper through a combination of **model variants comparisons**, **ablation studies**, and **visualization of learned dependencies**. Below, we summarize these efforts to clarify the contributions of our design to the observed performance improvements:\\n\\n1. **Model Framework and Variants Comparisons**\\n We conducted extensive experiments **comparing different model variants** to demonstrate the advantages of our decoupled sequential learning framework. Specifically, we evaluated alternative designs that modify the sequence of dependencies learning or use parallel and channel-mixing structures (VarA, VarB, VarC). For reference, please see *Table 4 on Page 10* in the revised paper. 
For your convenience, we provide a summary of the relevant averaged results below:\\n\\n | **Method** | GraphSTAGE | | VarA | | VarB | | VarC | |\\n | ---------- | ---------- | --------- | ----- | ----- | ----- | ----- | ----- | ----- |\\n | Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n | ECL | **0.166** | **0.263** | 0.192 | 0.282 | 0.184 | 0.276 | 0.192 | 0.284 |\\n\\n Our sequential structure (GraphSTAGE) outperforms variants, **especially on large-scale dataset (e.g., ECL with 321 nodes),** by effectively preserving the original data structure and decoupling the learning of inter-series and intra-series dependencies. **These results underscore the importance of our architectural design.**\\n\\n---\\n\\n2. **Ablation Studies on Model Design**\\n We performed detailed ablation studies to assess the contribution of key components in our model. For example:\\n\\n - **Embedding Layer:** Results in *Table 3 on Page 8* show that removing components such as time embedding or adaptive embedding leads to significant performance drops. For your convenience, we provide a summary of the relevant averaged results below:\\n\\n | Dataset | PEMS03 | | PEMS08 | |\\n | ----------------- | --------- | --------- | --------- | --------- |\\n | Metric | MSE | MAE | MSE | MAE |\\n | **GraphSTAGE** | **0.097** | **0.210** | **0.139** | **0.220** |\\n | w/o Patching | 0.110 | 0.222 | 0.176 | 0.253 |\\n | w/o Time Emb. | 0.114 | 0.223 | 0.199 | 0.264 |\\n | w/o Adaptive Emb. | 0.121 | 0.257 | 0.203 | 0.260 |\\n\\n Additionally, we introduced a **refined time embedding**, refining 'Hour of Day' (e.g., 24 hours/day for PEMS datasets) to 'Timestamp of Day' (e.g., 288 steps/day for PEMS datasets), as detailed in *paragraph 2 of Section 3.1*. 
This allows for **more granular and sampling-frequency-adaptive relative positioning** and significantly enhances performance on high-frequency datasets.\\n\\n - **STAGE Module:** As shown in *Table 2 on Page 8*, the removal of key graph learning components, such as the Intra-GrAG or Inter-GrAG modules, results in notable performance degradation. This highlights their critical role in **capturing both intra-series and inter-series dependencies effectively.** For your convenience, we provide a summary of the relevant averaged results below:\\n\\n | Dataset | ECL | | Solar-Energy | |\\n | -------------- | --------- | --------- | ------------ | --------- |\\n | Metric | MSE | MAE | MSE | MAE |\\n | **GraphSTAGE** | **0.166** | **0.263** | **0.192** | **0.267** |\\n | w/o Intra-GrAG | 0.185 | 0.277 | 0.225 | 0.292 |\\n | w/o Inter-GrAG | 0.186 | 0.276 | 0.239 | 0.294 |\"}", "{\"comment\": \"Thank you for your feedback and for raising this valuable question. We would like to address this issue from two perspectives:\\n\\n1. The theoretical principles of GraphSTAGE.\\n2. The results of ablation study.\\n\\n### 1. Theoretical Explanation\\n\\n- To illustrate what the STAGE block in GraphSTAGE is learning, let us consider an example with two nodes, $x$ and $y$, in a sample. After patching, we obtain four patches: $x_{1:p}$, $x_{p:2p}$, $y_{1:p}$, and $y_{p:2p}$\\u200b.\\n\\n In the **Intra-GrAG module**, the model focuses exclusively on learning the temporal dependencies **within the patches of the same node**. For example: \\n\\n - For the node $x$, model learns the connection between patches $x_{1:p}$ and $x_{p:2p}$\\u200b.\\n - Similarly, for the node $y$, model learns the connection between patches $y_{1:p}$ and $y_{p:2p}$.\\n\\n In the **Inter-GrAG module**, the model specifically learns the spatial dependencies **among patches from different nodes**, focusing on patches that correspond to the same temporal segment. 
For instance: \\n\\n - It captures the connection between $x_{1:p}$ (the first patch of node $x$) and $y_{1:p}$ (the first patch of node $y$). \\n - Similarly, it captures the connection between $x_{p:2p}$ (the second patch of node $x$) and $y_{p:2p}$ (the second patch of node $y$).\\n\\n Importantly, the **Inter-GrAG module does not consider dependencies between patches that correspond to different temporal segments**, such as $x_{1:p}$ with $y_{p:2p}$ or $x_{p:2p}$ with $y_{1:p}$. Its scope is strictly limited to learning spatial interactions within the same temporal window. \\n\\n- Regarding whether the spatial dependency between $x_{1:p}$ and $y_{1:p}$ includes the 'pseudo connection' between $x_1$ and $y_p$, **this is theoretically possible, but it requires validation through ablation studies.** Compared to methods like FourierGNN [1], GraphSTAGE achieves a better balance between noise and performance. Moreover, GraphSTAGE's cross-series dependency learning **is restricted to** connections between different nodes within the **same patch** and **does not involve** connections between different nodes across **different patches**. Therefore, the potential for learning this type of noise you mentioned is **significantly** **minimized compared to other hypervariate graph-based** modeling methods.\\n\\n### 2. Ablation Study\\n\\n**If the model learns noise as you mentioned, adding patching would introduce this noise into the learning process, causing the model's prediction performance to degrade.** To validate this, we conducted ablation experiments on 8 datasets. Following the experimental setup of iTransformer [2], we used a fixed lookback length $T=96$ and a prediction length $K=96$, running each experiment five times. 
For the ETTm1, ETTh2, Weather, and ECL datasets, we supplemented results with experiments labeled as \\u201cw/o Patching.\\u201d For the PEMS datasets, ablation results were collected from *Table 13 on Page 24* of the revised paper.\\n\\n| **Method** | GraphSTAGE (Patching) | | w/o Patching | |\\n| --------------- | --------------------- | --------------- | ------------ | ----------- |\\n| Metric | MSE | MAE | MSE | MAE |\\n| ETTm1 (96\\u219296) | **0.319**\\u00b10.004 | **0.356**\\u00b10.003 | 0.327\\u00b10.004 | 0.363\\u00b10.003 |\\n| ETTh2 (96\\u219296) | **0.292**\\u00b10.002 | **0.341**\\u00b10.002 | 0.295\\u00b10.003 | 0.344\\u00b10.002 |\\n| Weather (96\\u219296) | **0.159**\\u00b10.001 | **0.208**\\u00b10.001 | 0.163\\u00b10.002 | 0.210\\u00b10.002 |\\n| ECL (96\\u219296) | **0.139**\\u00b10.001 | **0.237**\\u00b10.001 | 0.145\\u00b10.006 | 0.241\\u00b10.004 |\\n| PEMS03 (96\\u219296) | **0.136** | **0.253** | 0.160 | 0.272 |\\n| PEMS04 (96\\u219296) | **0.113** | **0.228** | 0.134 | 0.259 |\\n| PEMS07 (96\\u219296) | **0.105** | **0.209** | 0.146 | 0.257 |\\n| PEMS08 (96\\u219296) | **0.207** | **0.270** | 0.295 | 0.348 |\\n\\n\\u200b\\tAs the results show, **adding patching consistently enhances prediction performance across all datasets**. If the model were indeed learning noise due to patching, we would expect the prediction performance to degrade when patches are added. However, the observed improvements indicate that this hypothesis is invalid. In fact, although GraphSTAGE employs patching in the Embedding & Patching layer, it remains a channel-preserving framework. **The patching operation merely increases the receptive field of each temporal channel, allowing the data to become smoother and more resilient to the influence of outliers.**\\n\\n\\u200b\\tThank you again for your prompt feedback. 
We hope this explanation and the accompanying experimental results effectively address your concerns.\\n\\n---\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.* ICLR, 2024\"}", "{\"comment\": \"# Response to Reviewer 7NU8 (3/4)\\n\\n> **W6 \\uff06 Q3 (1/3): how the two graph learning blocks interact?** \\n\\nThe interaction between the two graph learning blocks, Intra-GrAG and Inter-GrAG, follows a **sequential structure** within each STAGE block. We have **provided the pseudo-code of GraphSTAGE in *Algorithm 1 on Page 15*** in the revised paper. For your convenience, a brief introduction to the STAGE block is provided below:\\n\\nLet $L$ denote the number of STAGE blocks.\\n\\n- Use $H^{l-1} \\\\in \\\\mathbb{R}^{N \\\\times P \\\\times D}$ to represent the input tensor to the $l$-th STAGE block.\\n- Use $H_{\\\\text{tem}}^{l-1} \\\\in \\\\mathbb{R}^{P \\\\times N \\\\times D}$ to represent the output of the Intra-GrAG module in the $l$-th STAGE block.\\n- Use $H^{l} \\\\in \\\\mathbb{R}^{N \\\\times P \\\\times D}$ to represent the output of the Inter-GrAG module in the $l$\\u200b\\u200b-th STAGE block.\\n- $H^{0} \\\\in \\\\mathbb{R}^{N \\\\times P \\\\times D}$ is the output of the Embedding\\\\&Patching layer.\", \"the_whole_process_of_stacked_stage_blocks_is_as_follows\": \"1. For $l=1$ to $L$:\\n2. \\u200b\\t// Intra-GrAG module\\n3. \\u200b\\t$ H_{tem}^{l-1} = IntraGrAG(H^{l-1})$\\n4. \\u200b\\t// Inter-GrAG module\\n5. \\u200b\\t$ H^{l} = InterGrAG(H_{tem}^{l-1})$\\n6. End loop.\\n\\nThe final output $H^{L}$\\u200b is the output of the stacked STAGE blocks.\\n\\n\\n\\n> **W6 \\uff06 Q3 (2/3): how the final output is derived and optimized?** \\n\\nWe apologize for neglecting to address this point. 
For the prediction output, we have **provided the pseudo-code of GraphSTAGE** in *Algorithm 1 on Page 15* of the revised paper.\\n\\nRegarding model optimization, the parameters are iteratively updated until convergence by **minimizing the prediction loss** $\\\\mathcal{L} \\\\leftarrow \\\\mathcal{L}(\\\\hat{Y}, Y)$, which is the Mean Squared Error (MSE) loss used in our experiments. $\\\\hat{Y} $ are the predictions corresponding to the ground truth $Y$.\\n\\n\\n\\n> **W6 \\uff06 Q3 (3/3): how the combination of spatial and temporal graphs is achieved?**\\n\\nThere is **no combination between the spatial and temporal graphs** because we follow a sequential structure that decouples the extraction of temporal and spatial dependencies. This process is illustrated in the **Orig** of *Figure 7 on Page 9*. Specifically:\\n\\n- Temporal graphs, denoted as $A_T \\\\in \\\\mathbb{R}^{P \\\\times P}$, are used to capture the learned temporal dependencies.\\n- Spatial graphs, denoted as $A_S \\\\in \\\\mathbb{R}^{N \\\\times N}$, are used to capture the learned spatial dependencies.\\n\\nThe two types of graphs are not combined in our approach.\"}", "{\"comment\": \"# Responses to Reviewer yAwh (2/3)\\n\\n> **W2(1/2): I have doubts about the effectiveness of purely using GNNs for time series analysis. The performance of GNNs heavily depends on the rationality of the graph structure. However, in certain datasets or tasks, constructing a reasonable graph structure is itself a challenge. If the graph structure fails to accurately capture the dependencies in the data, even the most advanced GNNs may struggle to achieve optimal performance.**\\n\\nThanks for pointing out this. Yes, the rationality of the graph structure is crucial for the effectiveness of GNNs in time series analysis. 
In our proposed GraphSTAGE model, we **have minimized the risk that the graph structure fails to accurately capture the dependencies in the data.**\\n\\nIf we had used pre-defined graphs\\u2014as employed by models like DCRNN [1] and STGCN [2]\\u2014there could indeed be concerns about the rationality of the graph structure. Pre-defined graphs have been argued in many studies to be biased, incorrect, or even unavailable in many cases [3-6]. \\n\\nNumerous studies have demonstrated that learning-based graphs can effectively enable GNNs to capture complex dependencies in time series forecasting tasks [3-6]. Therefore, our GraphSTAGE model **utilizes a learning-based graph structure ($A_T$ and $A_S$)** to allow the model to adaptively learn temporal and spatial dependencies. \\n\\n---\\n\\n[1] *Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting.* ICLR, 2018\\n\\n[2] *STGCN: a spatial-temporal aware graph learning method for POI recommendation.* ICDM, 2020\\n\\n[3] *Graph wavenet for deep spatial-temporal graph modeling.* IJCAI, 2019\\n\\n[4] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\\n\\n[5] *Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting.* KDD, 2022\\n\\n[6] *Dynamic graph convolutional recurrent network for traffic prediction: Benchmark and solution.* Transactions on Knowledge Discovery from Data, 2023\"}", "{\"comment\": \"# Responses to Reviewer yAwh (1/3)\\n\\n> **W1: I don't quite understand why a purely graph-based structure can introduce noise and interference through channel mixing methods.**\\n\\nThank you for pointing this out. 
The noise and interference in channel-mixing methods stem from the **potential learning of pseudo connections** when using a Hypervariate Graph to capture dependencies.\\n\\nSpecifically, the size of the Hypervariate Graph ($\\\\mathbb{R}^{NT \\\\times NT}$) is significantly larger than the learnable graphs we propose: the Temporal Learnable Graph $A_T \\\\in \\\\mathbb{R}^{T \\\\times T}$ and the Spatial Learnable Graph $A_S \\\\in \\\\mathbb{R}^{N \\\\times N}$. Consequently, **channel-mixing models like FourierGNN [1] and UniTST [2] have a higher degree of freedom** compared to our channel-preserving model (GraphSTAGE). This increased degree of freedom can lead the model to **learn many pseudo connections** (connections between different nodes at different time steps), which **weakens the weights of real connections** (inter-series and intra-series dependencies). This may **result in overfitting** and **a decrease in predictive performance.**\\n\\nTo test our hypothesis that the channel-mixing strategy **leads to overfitting** and **reduced predictive performance,** we conducted experiments on two datasets. Since the dependency learning components of FourierGNN [1] and UniTST [2] differ from those in GraphSTAGE, we used the model variant **VarC** in *Figure 7 on Page 9* as a baseline. This variant '**VarC**' changes only the channel-preserving strategy of the original GraphSTAGE to a channel-mixing strategy while keeping the dependency learning components consistent.\\n\\nTo demonstrate the existence of overfitting in the channel-mixing strategy, we set the training epochs to 30 and used a fixed learning rate of 1e-3. We tested on the ECL and PEMS03 datasets with input length equal to prediction length (24 time steps). All experiments were conducted five times with identical hyperparameters. The results are shown in the table below:\\n\\n| **Method** | Channel-Preserving Model: Orig in Fig. 7 (GraphSTAGE) | | | Channel-Mixing Model: VarC in Fig. 
7 | | |\\n| --------------- | ----------------------------------------------------- | --------- | -------- | ------------------------------------ | --------- | -------- |\\n| Loss | Train MSE | Valid MSE | Test MSE | Train MSE | Valid MSE | Test MSE |\\n| ECL (24\\u219224) | 0.110 | 0.105 | 0.122 | 0.106 | 0.103 | 0.130 |\\n| PEMS03 (24\\u219224) | 0.066 | 0.069 | 0.082 | 0.056 | 0.065 | 0.091 |\\n\\nAs shown, the channel-mixing model VarC has lower training MSE but higher test MSE. The results indicate that **VarC (channel-mixing model) exhibits overfitting** compared to Orig (channel-preserving model).\\n\\n---\\n\\nTo further demonstrate the decrease in predictive performance on the test set caused by the channel-mixing strategy, we refer to the results provided in *Table 4 on Page 10* of our paper. For your convenience, we summarize the relevant (averaged) results below:\\n\\n| **Method** | Channel-Preserving Model: Orig in Fig. 7 (GraphSTAGE) | | Channel-Mixing Model: VarC in Fig. 7 | |\\n| ---------- | ----------------------------------------------------- | --------- | ------------------------------------ | ------- |\\n| **Metric** | **MSE** | **MAE** | **MSE** | **MAE** |\\n| **ECL** | **0.166** | **0.263** | 0.192 | 0.284 |\\n| **ETTm1** | 0.391 | **0.394** | **0.389** | 0.400 |\\n\\nThe channel-preserving model Orig outperforms the channel-mixing variant VarC, **supporting our hypothesis that channel-mixing leads to decreased predictive performance.**\\n\\n---\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting.* arXiv\"}", "{\"comment\": \"# Response to Reviewer RvJQ (1/1)\\n\\n> **W1 \\uff06 Q1 : Incremental idea & Lack of novelty: I think the core idea of decoupling the learning of intra-series and inter-series instead of channel mixing is incremental and not 
significant.**\\n\\n\\n\\nThank you for pointing this out. We would like to highlight the unique aspects of our approach and clarify the distinctions between our work and the recent models (iTransformer [1] and FourierGNN [2]).\\n\\n1. **Novelty in Research Motivation:**\\n We are the first to study the trade-off between **the variety of dependencies extracted** and **the noise that may be introduced by channel-mixing**. As we stated in our introduction: *\\\"Is it truly necessary to model all these dependencies?\\\"* Previous works [2] [3] often adopt a \\\"brute-force modeling\\\" approach, stacking parameters to capture as many dependencies as possible. While this might seem effective, it overlooks a critical issue\\u2014the potential noise introduced by this process. \\n2. **Innovation in Model Framework:**\\n We propose a novel channel-preserving framework: GraphSTAGE. Through fair model variant experiments in *Table 4 on Page 10*, we validate the presence of such noise, underscoring the limitations of excessive dependency extraction. GraphSTAGE is a pure graph paradigm that decouples the extraction of **global** **temporal** dependencies and **global** **spatial** dependencies, **rather than being limited to local** neighbor information.\\n3. **Advancements in Model Performance:**\\n Despite its structural simplicity, our model performs comparably to or surpasses state-of-the-art models across 13 MTSF benchmark datasets. It **ranks first among 8 advanced models** in 22 out of 30 comparisons in *Table 1 on Page 7*. 
Specifically, on the PEMS07 dataset\\u2014which has the largest number of nodes\\u2014GraphSTAGE outperforms the recent SOTA iTransformer by 20.8%, indicating its **potential for application to larger-scale MTSF tasks**, such as extensive grid management.\\n\\n---\\n\\n**$\\\\triangleright$Differences with iTransformer:**\\n\\nConceptually, **iTransformer** is not a channel-preserving model; it **overlooks temporal channel information.** iTransformer projects the original time series data $X_{\\\\text{in}} \\\\in \\\\mathbb{R}^{N \\\\times T}$ (where $N$ is the number of nodes and $T$ is the length of the time series) into $H_S \\\\in \\\\mathbb{R}^{N \\\\times D}$ to capture spatial dependencies among nodes. However, this transformation ignores temporal dependencies and fails to learn the underlying temporal graph structures.\\n\\nIn contrast, our GraphSTAGE embeds the input data $X_{\\\\text{in}} \\\\in \\\\mathbb{R}^{N \\\\times T}$ into $H \\\\in \\\\mathbb{R}^{N \\\\times T \\\\times D}$, where $D$ is the embedding dimension. The original node and time dimensions are preserved. This channel-preserving framework enables the model to incorporate both spatial (inter-series) and temporal (intra-series) dependencies by decoupling them. This separation allows GraphSTAGE to capture temporal dynamics more effectively, which is a significant extension beyond what iTransformer offers.\"}", "{\"comment\": \"# Response to Reviewer 7NU8: Additional Concern (1/3)\\n\\n\\n\\n> **Additional Concern 1:** I thank the authors for adding the standard deviations, but this was done only for their method. I remain unconvinced if all models are run 5 times with random seeds. Some seem to be directly taken from relevant papers (e.g., TimesNet) where results are, in several cases, mentioned for a fixed seed (CrossGNN). 
I highlight this since, for point-wise metrics, differences in terms of 2nd-3rd decimals might be minor at the prediction level.\\n\\nThank you for pointing out this issue. We understand your concern about the inclusion of standard deviations only for our method and whether all models were run 5 times with different random seeds. We would like to clarify our experimental approach:\\n\\n1. **Common Practice in the Field:** It is a widely accepted practice in the time series forecasting community to directly report results from relevant papers **when the experimental settings are consistent**. This approach ensures comparability and builds upon established benchmarks. For instance:\\n - **CrossGNN** [1] and **iTransformer** [2] have directly collected baseline results from **TimesNet** [3].\\n - **TimeXer** [4], **S-Mamba** [5], and **CycleNet** [6] have directly used baseline results from **iTransformer** [2].\\n2. **Use of Fixed Seed Results:** We acknowledge that some methods, like **CrossGNN**, have reported results using a fixed seed. However, subsequent works such as **Ada-MSHyper** [7] have also directly taken the results of CrossGNN for their comparative analyses. This indicates an acceptance within the community of using such reported results for benchmarking purposes.\\n3. **Impact on Comparative Analysis:** According to reproductions reported in works like **Bi-Mamba4+** [8], CrossGNN has not been identified as a particularly strong baseline. Therefore, the potential variance introduced by not running CrossGNN multiple times with different seeds may not significantly affect the overall insights of our additional comparison study on Table 9 of Page 18.\\n\\nWe agree that including standard deviations for all compared methods would enhance the rigor of our study. 
However, rerunning all baseline experiments multiple times with different random seeds is a substantial undertaking that may not be feasible within our current resources and timelines.\\n\\n**In the camera-ready version, we will:**\\n\\n- **Clarify in the Manuscript:** Indicate which results are taken directly from original papers and which are from our implementations.\\n- **Discuss Variability:** Include a discussion on the potential impact of using fixed-seed results, acknowledging that minor differences in metrics might occur due to variability.\\n- **Ensure Fair Comparison:** Adjust our analysis and conclusions to account for any limitations arising from the use of fixed-seed results.\\n\\nWe hope this addresses your concern. Thank you for your valuable feedback, which helps us improve our work.\\n\\n---\\n\\n[1] *Crossgnn: Confronting noisy multivariate time series via cross interaction refinement*. NeurIPS, 2023\\n\\n[2] *iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.* ICLR, 2024\\n\\n[3] *TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis.* ICLR, 2023\\n\\n[4] *TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables.* arXiv, 2024\\n\\n[5] *Is mamba effective for time series forecasting?* arXiv, 2024\\n\\n[6] *Cyclenet: enhancing time series forecasting through modeling periodic patterns.* arXiv, 2024\\n\\n[7] *Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting.* arXiv, 2024\\n\\n[8] *Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting.* arXiv, 2024\"}", "{\"comment\": \"# Response to Reviewer 7NU8 (4/4)\\n\\n> **W7 \\uff06 Q4 (1/2) : Performance improvements are, in the average case, around 6% (and no improvement for four datasets), thus the statement \\u201cwith improvements up to 20.8%\\u201d is slightly misleading.** \\n\\nThank you for pointing out this issue. 
We apologize for confusion caused by the statement \\\"with improvements up to 20.8%.\\\" We have **revised our manuscript accordingly and highlighted the modifications.**\\n\\nThe statement \\\"improvements up to 20.8%\\\" specifically refers to the performance on the PEMS07 dataset, which has the largest number of nodes (883 nodes) among all the datasets we evaluated. In the original manuscript, we used this result to emphasize GraphSTAGE's potential for larger-scale MTSF applications.\\n\\n\\n\\n> **W7 \\uff06 Q4 (2/2) : It is unclear how interpretability is achieved. Since patching is performed first, followed by temporal graph learning and then spatial graph learning, seems a bit exaggerated that interpretability is achieved, e.g., what can someone tell from the spatial graph concerning inter-series dependencies?**\\n\\nThank you for pointing this out. Although we use patching in the Embedding\\\\&Patching layer, GraphSTAGE remains a channel-preserving framework. **The patching operation merely increases the receptive field of each temporal channel and does not weaken the interpretability of the model**. For example, regarding the temporal learnable graph $A_T^{(2)}$ obtained on the ECL dataset in *Figure 8 on Page 10,* when the lookback length $L = 96$ (the sampling frequency of the ECL dataset is 1 hour, so $96 \\\\times 1\\\\,\\\\text{hour} = 4\\\\,\\\\text{days}$, meaning each sample contains 4 days of data). Since we use a patch stride $s = 2$, the total number of patches is $P = L / s = 96 / 2 = 48$. From $A_T^{(2)}$, we observe that **similar temporal patterns appear every 12 patches (corresponding to 24 hours).** This indicates that **GraphSTAGE effectively captures the intrinsic periodicity of the data, which** **matches the daily periodicity of the ECL dataset.**\\n\\n\\n\\nRegarding the spatial graph, we apologize for the confusion caused. 
We have **revised the colors** in *Figure 15 on Page 25* and have **updated the relevant text** in the revised paper, As shown in Figure 15, we can observe:\\n\\nFirstly, the spatial learnable graph $A_S$ learned by GraphSTAGE is **sparser**, indicating that the model can **identify the most important nodes** in space and requires fewer inter-series correlations for predictions.\\n\\nSecondly, the spatial learnable graph $A_S$ captures connections between time series that exhibit strong similarities. For example, according to the randomly selected case in Figure 15(left), the $A_S$ considers that the similarity between nodes 282 and 184 is high (close to 1), while the similarity between nodes 83 and 184 is low (close to 0). We **plotted the ground truth** of these three nodes in Figure 15(right), which shows that **the trends of nodes 282 and 184 are consistent, matching the correlation coefficients learned in $A_S$.** This match confirms the effectiveness of GraphSTAGE in capturing inter-series dependencies and enhances the interpretability of the model.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Dear Reviewer 9agk,\\n\\nWe sincerely thank you for your valuable and constructive feedback. Since the End of author/reviewer discussions is coming soon, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements.\\n\\nThank you so much for devoting time to improving our paper!\"}", "{\"comment\": \"Thank you for your positive feedback and for taking the time to carefully review our responses. 
We deeply appreciate your thoughtful evaluation and support!\"}", "{\"title\": \"Kind Request for Reviewer's Attention and Feedback\", \"comment\": \"Dear Reviewer 7NU8,\\n\\nHope this message finds you well.\\n\\nWe appreciate the diligent efforts of you in evaluating our paper. We have responded in detail to your additional concerns. As the discussion period will end soon, we would like to kindly ask whether there are any remaining concerns or questions we might be able to address. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer 9agk,\\n\\nHope this message finds you well.\\n\\nWe appreciate the diligent efforts of you in evaluating our paper. **We have responded in detail to your questions about the performance improvements achieved by our model.** May we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"title\": \"Kind Request for Reviewer's Attention and Feedback\", \"comment\": \"Dear Reviewer 9agk,\\n\\nWe sincerely thank you for your valuable and constructive feedback. Since the Discussion Period Extension provides us with additional time, we are eager to address any further concerns you may have. If our current response satisfactorily resolves your main concerns, we kindly ask for your reconsideration of the score. Should you have any further advice on our rebuttal, please let us know, and we will be more than happy to engage in further discussion. \\n\\nThank you so much for devoting time to improving our paper!\"}", "{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Dear Reviewer yAwh,\\n\\nWe sincerely thank you for your valuable and constructive feedback. 
Since the End of author/reviewer discussions is coming soon, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements.\\n\\nThank you so much for devoting time to improving our paper!\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your response.\\n\\nI believe the ablation studies do not fully demonstrate that patches do not introduce noise. As shown in PatchTST [1], patches can enhance performance by enabling the model to process longer historical sequences. Even if some noise is introduced, the overall benefits outweigh the drawbacks.\\n\\n\\n\\n[1] A TIME SERIES IS WORTH 64 WORDS: LONG-TERM FORECASTING WITH TRANSFORMERS, in ICLR, 2023\"}", "{\"comment\": \"> **W2(2/2): Additionally, the representational capacity of graphs may be limited in high-noise or irregular data, leading to a decline in the model's predictive capabilities. I hope the authors can provide a more comprehensive explanation.**\\n\\nThank you for pointing this out. Yes, we agree that high-noise or irregular data can pose significant challenges for predictive models. However, we believe that the decline in predictive capabilities on high-noise or irregular datasets is primarily due to **the quality of the data** rather than the representational capacity of graphs.\\n\\nFor example, the Exchange dataset used in our experiments is reported to be a high-noise dataset with unclear periodicity [1], where exchange rates are influenced by many unpredictable factors such as economic policies and geopolitical events. \\n\\nTo provide a **more clear and intuitive evaluation**, we followed the recent benchmark BasicTS+ [1] and used the Mean Absolute Percentage Error (MAPE) metric. 
**MAPE** offers insight into how far predictions deviate from actual values relative to the actual values themselves. The formula for MAPE is:\\n\\n$\\\\text{MAPE} = \\\\frac{1}{n} \\\\sum_{i=1}^{n} \\\\left| \\\\frac{y_i - \\\\hat{y}_i}{y_i} \\\\right| \\\\times 100$\\n\\nThe performance of GraphSTAGE and the recent SOTA iTransformer on the Exchange dataset is listed as follows:\\n\\n| **Method** | GraphSTAGE | | | iTransformer | | |\\n| ------------------------------ | ---------- | ----- | ---------- | ------------ | ----- | ---------- |\\n| Metrics | MSE | MAE | **MAPE** | MSE | MAE | **MAPE** |\\n| Exchange (96$\\\\rightarrow$96) | 0.084 | 0.203 | **121.2**% | 0.086 | 0.206 | **129.3**% |\\n| Exchange (96$\\\\rightarrow$192) | 0.186 | 0.306 | **202.9**% | 0.177 | 0.299 | **193.9**% |\\n| Exchange (96$\\\\rightarrow$336) | 0.339 | 0.420 | **319.5**% | 0.331 | 0.417 | **317.3**% |\\n| Exchange (96$\\\\rightarrow$720) | 0.898 | 0.710 | **656.2**% | 0.847 | 0.691 | **613.2**% |\\n\\nAs shown in the above table, even the recent SOTA model (iTransformer) exhibits extremely high MAPE values on the Exchange dataset, in contrast to the seemingly low MAE and MSE values. This indicates that **the prediction of** **high-noise data is actually quite challenging and represents a bottleneck for all deep network methods** [2].\\n\\nHowever, our comprehensive forecasting results (*Table 1 on Page 7* of the revised paper) demonstrate that GraphSTAGE significantly outperforms recent SOTA models on datasets with evident periodicity. For example, on the four subsets of the PEMS datasets, GraphSTAGE achieves an average improvement of 14.3%. Conversely, for datasets with unclear periodicity (e.g., ETT, Exchange), the performance gaps among models are comparatively smaller. 
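As an illustrative aside (not code from the rebuttal itself; the function name `mape` is our own), the MAPE metric defined above can be computed as:

```python
def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, in percent:
    # the mean of |(y_i - yhat_i) / y_i| over all points, times 100.
    return sum(abs((a - b) / a) for a, b in zip(y_true, y_pred)) / len(y_true) * 100
```

Because each error is divided by the true value, series whose values sit near zero produce very large MAPE even when MSE and MAE look small, which is consistent with the Exchange results in the table above.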
\\n\\n**Regarding irregular data,** where different nodes have distinct original scales or sampling rates, this **is an important direction for our future research.** In this work, we focus exclusively on regular data, where the scales and sampling rates of all nodes are fixed.\\n\\n---\\n\\n[1] *Exploring progress in multivariate time series forecasting: Comprehensive benchmarking and heterogeneity analysis.* IEEE Transactions on Knowledge and Data Engineering, 2024\\n\\n[2] *iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.* ICLR, 2024\"}", "{\"comment\": \"> **W2 \\uff06 Q1 (2/3) : Additionally, the performance improvements achieved by the proposed method are, in most cases, very small compared to the best competitor (for instance, the biggest difference achieved by GraphStage from its best competitor in Table 1 is for Solar-Energy: from MSE equal to 0.233 to MSE 0.192, yet GraphStage is outcompeted in terms of MAE).**\\n\\nThank you for pointing this out, GraphSTAGE significantly **outperforms recent SOTA** models on **datasets with evident periodicity.** For example, on the four subsets of the PEMS datasets, GraphSTAGE achieves an average improvement of 14.3%. Conversely, on datasets with less clear periodicity (e.g., **ETT**, **Exchange**), the performance improvement is comparatively smaller.\\n\\n**This phenomenon is commonly observed and has been discussed in previous studies such as BasicTS+ [1].** On datasets where the periodic patterns are prominent, complex models with larger capacities\\u2014like GraphSTAGE and iTransformer\\u2014tend to capture these periodicity more effectively, leading to superior predictive performance. In contrast, on datasets with unclear or weak periodicity, simpler MLP-based models like DLinear and RLinear often perform better. 
This is because complex models may overfit the noise in datasets lacking strong periodic signals, whereas simpler models are less prone to overfitting and may generalize better under these conditions.\\n\\n[1] *Exploring progress in multivariate time series forecasting: Comprehensive benchmarking and heterogeneity analysis.* IEEE Transactions on Knowledge and Data Engineering, 2024\\n\\n\\n\\n---\\n\\n> **W3 \\uff06 Q1 (3/3) : It is unclear if the authors have performed multiple runs with random seeds to capture the model\\u2019s performance variability. The lack of standard deviations makes it impossible to assess whether the results' difference is statistically significant.**\\n\\nThank you for pointing this out, Yes, **the results of GraphSTAGE's performance are obtained from five random seeds.** To provide a clearer picture of the model's performance stability, we **have added the standard deviations for each task** of all time series datasets in *Table 10 and Table 11 on Page 22* of the revised manuscript. \\n\\nFor your convenience, we summarize the standard deviations of our approach and the second-best method (iTransformer) on four selected datasets below:\\n\\n| Model | GraphSTAGE | | iTransformer | |\\n| ------------ | --------------- | --------------- | --------------- | --------------- |\\n| Dataset | MSE | MAE | MSE | MAE |\\n| ECL | 0.166$\\\\pm$0.003 | 0.263$\\\\pm$0.002 | 0.178$\\\\pm$0.002 | 0.270$\\\\pm$0.003 |\\n| ETTh2 | 0.387$\\\\pm$0.005 | 0.407$\\\\pm$0.004 | 0.383$\\\\pm$0.002 | 0.407$\\\\pm$0.001 |\\n| Solar-Energy | 0.192$\\\\pm$0.002 | 0.267$\\\\pm$0.003 | 0.233$\\\\pm$0.001 | 0.262$\\\\pm$0.001 |\\n| Weather | 0.243$\\\\pm$0.001 | 0.274$\\\\pm$0.001 | 0.258$\\\\pm$0.001 | 0.278$\\\\pm$0.001 |\"}", "{\"comment\": \"# Response to Reviewer 9agk (1/2)\\n\\n> **W1: My primary concern is regarding the term \\\"noise\\\" (see line 16). 
The authors claim that previous methods introduce additional noise, but it remains unclear what this \\\"noise\\\" refers to. How is it defined, and what impact does it have on the effectiveness of the methods?** \\n\\n\\n\\nThank you for pointing out this. The \\\"noise\\\" refers to the **potential learning of pseudo connections** when using a Hypervariate Graph to capture dependencies in channel-mixing methods.\\n\\nSpecifically, the size of the Hypervariate Graph ($\\\\mathbb{R}^{NT \\\\times NT}$) is significantly larger than the learnable graphs we propose: the Temporal Learnable Graph $A_T \\\\in \\\\mathbb{R}^{T \\\\times T}$ and the Spatial Learnable Graph $A_S \\\\in \\\\mathbb{R}^{N \\\\times N}$. Consequently, **channel-mixing models like FourierGNN [1] and UniTST [2] have a higher degree of freedom** compared to our channel-preserving model (GraphSTAGE). This increased degree of freedom can lead the model to **learn many pseudo connections** (connections between different nodes at different time steps), which **weakens the weights of real connections** (inter-series and intra-series dependencies). This may result in **overfitting** and **a decrease in predictive performance.**\\n\\nTo test our hypothesis that the channel-mixing strategy **leads to overfitting** and **reduced predictive performance,** we conducted experiments on two datasets. Since the dependency learning components of FourierGNN [1] and UniTST [2] differ from those in GraphSTAGE, we used the model variant **VarC** in *Figure 7 on Page 9* as a baseline. This variant '**VarC**' changes only the channel-preserving strategy of the original GraphSTAGE to a channel-mixing strategy while keeping the dependency learning components consistent.\\n\\nTo demonstrate the existence of overfitting in the channel-mixing strategy, we set the training epochs to 30 and used a fixed learning rate of 1e-3. 
We tested on the ECL and PEMS03 datasets with input length equal to prediction length (24 time steps). All experiments have been conducted five times and hyperparameters remained the same. The results are shown in the table below:\\n\\n| **Method** | Channel-Preserving Model: Orig in Fig. 7 (GraphSTAGE) | | | Channel-Mixing Model: VarC in Fig. 7 | | |\\n| --------------- | ----------------------------------------------------- | --------- | -------- | ------------------------------------ | --------- | -------- |\\n| Loss | Train MSE | Valid MSE | Test MSE | Train MSE | Valid MSE | Test MSE |\\n| ECL (24\\u219224) | 0.110 | 0.105 | 0.122 | 0.106 | 0.103 | 0.130 |\\n| PEMS03 (24\\u219224) | 0.066 | 0.069 | 0.082 | 0.056 | 0.065 | 0.091 |\\n\\nAs shown, the channel-mixing model VarC has lower training MSE but higher test MSE. The results indicate that **VarC (channel-mixing model) exhibits overfitting** compared to Orig (channel-preserving model).\\n\\n---\\n\\nTo further demonstrate the decrease in predictive performance on the test set caused by the channel-mixing strategy, we refer to the results provided in *Table 4 on Page 10* of our paper. For your convenience, we summarize the relevant (averaged) results below:\\n\\n| **Method** | Channel-Preserving Model: Orig in Fig. 7 (GraphSTAGE) | | Channel-Mixing Model: VarC in Fig. 
7 | |\\n| ---------- | ----------------------------------------------------- | --------- | ------------------------------------ | ------- |\\n| **Metric** | **MSE** | **MAE** | **MSE** | **MAE** |\\n| **ECL** | **0.166** | **0.263** | 0.192 | 0.284 |\\n| **ETTm1** | 0.391 | **0.394** | **0.389** | 0.400 |\\n\\nThe channel-preserving model Orig outperforms the channel-mixing variant VarC, **supporting our hypothesis that channel-mixing leads to decreased predictive performance.**\\n\\n---\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting.* arXiv\"}", "{\"comment\": \"3. **Visualization of Learnable Graphs**\\n To further illustrate the effectiveness of our model, we visualized the temporal learnable graphs $A_T$ and the spatial learnable graphs $A_S$. For more details, please refer to **the response to Reviewer 7NU8 (4/4) W7 & Q4 (2/2).**\\n\\n - **Temporal Learnable Graphs:** In the ECL dataset *(Figure 8 on Page 10)*, **the learned temporal graph reveals periodic patterns** every 12 patches (24 hours), **matching the dataset's daily periodicity**. This demonstrates the model's ability to capture temporal dependencies.\\n\\n - **Spatial Learnable Graphs:** We visualized **the ground truth** of random selected nodes in *Figure 15(right) on Page 25*, revealing that the trends of nodes 282 and 184 are consistent, which **matches with their high correlation coefficients learned by the Inter-GrAG module.** This match confirms the effectiveness of GraphSTAGE in capturing inter-series dependencies.\\n\\n------\\n\\nThrough **variant comparisons** , **ablation studies**, and **visualizations of learned dependencies**, we have **systematically demonstrated** how each component contributes to the model's overall performance. 
We hope this clarifies the source of the improvement and provides stronger evidence for the effectiveness of our approach. \\n\\nThank you once again for your valuable and insightful feedback, which has been instrumental in improving our paper. We hope our response has addressed your main concerns, and we kindly ask for your reconsideration of the score. If you have any further suggestions regarding the revised paper or our rebuttal, please feel free to share them. We would be delighted to engage in further discussions and make additional improvements. \\n\\nThank you so much for devoting time and effort to improving our paper!\"}", "{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Thank you for your valuable and constructive feedback, which has inspired further improvements to our paper. As a gentle reminder, it has been more than 2 days since we submitted our rebuttal. We would like to know whether our response addressed your concerns.\\n\\nFollowing your comments and suggestions, we have **answered your concerns** and **made the following revisions**:\\n\\n- To address your concerns, we have clarified the unique aspects of our approach, particularly highlighting its **motivation and framework**. Additionally, we included a **detailed comparison with recent models**. To further enhance the validity of our work, we added four graph-based baselines.\\n- We have expanded our experiments to **include additional four graph-based baseline methods** for multivariate time series forecasting tasks: (1) FourierGNN, (2) CrossGNN, (3) StemGNN, and (4) MTGNN. For complete results, refer to *Table 9 in Appendix D on Page 18*.\\n\\nOnce again, we sincerely thank you for your insightful review. 
We look forward to your feedback and are prepared to address any further questions or concerns you may have.\"}", "{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Thank you for your valuable and constructive feedback, which has inspired further improvements to our paper. As a gentle reminder, it has been more than 2 days since we submitted our rebuttal. We would like to know whether our response has addressed your concerns.\\n\\nFollowing your comments and suggestions, we have **answered your concerns** and **made the following revisions**:\\n\\n- To address your concerns, we have clarified **the noise and interference caused by channel-mixing methods** and **validated the existence and impact of this noise through experiments.**\\n- Your concern regarding the **representational capacity of graphs under high-noise data** is highly valuable and insightful. We addressed this concern through experiments on Exchange dataset.\\n- We have expanded our experiments to **include four graph-based baselines** for multivariate time series forecasting tasks: (1) FourierGNN, (2) CrossGNN, (3) StemGNN, and (4) MTGNN. For complete results, refer to *Table 9 in Appendix D on Page 18*.\\n\\nWe sincerely thank you for your insightful review. We look forward to your feedback and are prepared to address any further questions or concerns you may have.\"}", "{\"comment\": \"Thank you for your prompt feedback. I agree with your point, especially the statement, \\u201cEven if some noise is introduced, the overall benefits outweigh the drawbacks.\\u201d **I cannot agree more.**\\n\\nTaking the example you mentioned, consider a node $x$. After patching, we obtain $x_{1:p}$, $x_{p:2p}$, $x_{2p:3p}$, $x_{3p:4p}$. 
Suppose that **$x_{1:p}$ and $x_{3p:4p}$ are unrelated.** However, when GraphSTAGE learns the temporal dependencies **within the patches of the same node**, it's **hard to ensure that the model doesn't assign weights to the \\\"pseudo connections\\\"** between $x_{1:p}$ and $x_{3p:4p}$. Therefore, ablation studies can only demonstrate that the model achieves **overall benefits that outweigh the drawbacks**, but it is challenging to specifically assess whether noise is introduced in the learning of each individual connection.\\n\\nHowever, extensive experimental results demonstrate that GraphSTAGE achieves results that are comparable to or even surpass recent SOTA methods. This demonstrates that **the overall benefits outweigh the drawbacks** in GraphSTAGE. \\n\\nTherefore, I agree with your point that regardless of the type of dependency being modeled (whether inter-series, intra-series, or cross-series), noise might inevitably be introduced. The key is that the **model should ensure** **the overall benefits outweigh the drawbacks**. Clearly, the current **hypervariate graph-based** modeling methods remain **inadequate**. GraphSTAGE addresses this challenge by decoupling the learning of the two types of dependencies, which helps to **minimize the introduction of noise as much as possible.**\\n\\nIn the *Abstract on Page 1*, we stated: \\u201cThe trade-off between the variety of dependencies extracted and the potential interference has not yet been fully explored.\\u201d This actually conveys a similar idea to yours: **better models need to strike a balance between improving performance and suppressing noise.** Simply increasing the variety of dependencies extracted could **not only degrade model performance but also increase memory usage.**\\n\\nIf you have any other concerns, please don\\u2019t hesitate to share them with us. 
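As a concrete aside, the non-overlapping patching discussed above (splitting a node's series $x$ into $x_{1:p}$, $x_{p:2p}$, ...) can be sketched as follows. This is an illustrative sketch only, not GraphSTAGE's actual implementation; `patch_series` is a hypothetical helper:

```python
def patch_series(x, p):
    # Split a sequence into consecutive non-overlapping patches of length p,
    # mirroring the x_{1:p}, x_{p:2p}, ... notation used above.
    # Illustrative only: any trailing remainder (len(x) % p) is dropped.
    return [x[i:i + p] for i in range(0, len(x) - len(x) % p, p)]

# A toy series of length 8 with patch length 2 yields four patches;
# Intra-GrAG would then learn dependencies among these patches.
patches = patch_series(list(range(8)), p=2)
```

Under this view, a "pseudo connection" would be a nonzero learned weight between, say, `patches[0]` and `patches[3]` even when the underlying segments are unrelated.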
Thank you again for your valuable insights.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you sincerely for your efforts to address my concerns.\\n\\nWhile the experimental results show some improvement in the forecasting performance of your proposed model, the underlying source of this improvement remains unclear. Therefore, I prefer to maintain my original score.\"}", "{\"comment\": \"# Response to Reviewer 7NU8 (2/4)\\n\\n> **W4 \\uff06 Q2 (1/2) :Graph structure learning for time series forecasting is a long-term studied problem in the literature, entirely before the pure graph paradigm (Yi et al., 2024). Several methods that capture dependencies in the spatial and temporal domain jointly (Shang et al., 2021; Wu et al., 2020) or separately (Kipf et al., 2018; Xu et al., 2023) have been proposed, but are rather not mentioned in the paper and not considered in the experiments.**\\n\\nThank you for pointing this out; the works you mentioned are indeed solid contributions in the field of spatio-temporal forecasting. We have **revised the Related Works** to include these references and have highlighted the additions in orange in the revised paper.\\n\\nFurthermore, we have selected two of the works you mentioned (FourierGNN and MTGNN) as additional baselines in our experiments. We made our best effort to reproduce TimeGNN; however, we were unable to obtain reasonable results. Detailed results can be found in *Table 9 on Page 18* of the revised paper. 
For your convenience, a summary of the relevant (averaged) results is provided below:\\n\\n| **Model** | GraphSTAGE | | FourierGNN | | MTGNN | |\\n| --------- | ---------- | --------- | ---------- | ----- | ----- | ----- |\\n| Metric | MSE | MAE | MSE | MAE | MSE | MAE |\\n| ECL | **0.166** | **0.263** | 0.221 | 0.318 | 0.251 | 0.347 |\\n| ETTm1 | **0.391** | **0.394** | 0.453 | 0.448 | 0.469 | 0.446 |\\n| ETTh2 | **0.387** | **0.407** | 0.543 | 0.517 | 0.465 | 0.509 |\\n| Weather | **0.243** | **0.274** | 0.257 | 0.305 | 0.314 | 0.355 |\\n\\n---\\n> **W5 \\uff06 Q2 (2/2) :Similarly the graph learning module (softmax on dot products between pairs) is a standard choice in relevant works (Wu et al., 2020), yet not correctly cited in this work.**\\n\\nThank you for pointing this out. Yes, applying softmax on dot products between pairs is a standard approach for aggregating time features to extract spatial similarities between nodes. **We did not emphasize this as our contribution, and we have added the citation in the revised paper.**\\n\\nOur main contribution lies in adopting a **channel-preserving framework** to decouple the learning of inter-series and intra-series dependencies, which not only outperforms the channel-mixing framework but also reduces memory usage. Employing a widely adopted method in [1] for information aggregation also **demonstrates the generality of our channel-preserving framework.**\\n\\nHowever, to the best of our knowledge, we are the first to **extend this method to aggregate all node features at each time step** to extract similarities specifically between time steps. For example, in the **Intra-GrAG** module, the input $H_{\\\\text{in}} \\\\in \\\\mathbb{R}^{P \\\\times N \\\\times D}$ is aggregated along the node dimension \\\\(N\\\\) to obtain $E_{\\\\text{src}} \\\\in \\\\mathbb{R}^{P \\\\times c}$ and $E_{\\\\text{tgt}} \\\\in \\\\mathbb{R}^{P \\\\times c}$. 
Softmax is then applied to the dot products between these pairs to compute the similarities between time steps, resulting in the Temporal Learnable Graph $A_T \\\\in \\\\mathbb{R}^{P \\\\times P}$. The effectiveness of this approach in extracting intra-dependencies has been validated through the ablation study in *Table 2* and the visualization in *Figure 8*.\\n\\n[1] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\"}", "{\"comment\": \"> **Additional Concern 3 (5/5):** In the second case of positioning their work among sequential models forecasting, more recent sota (e.g., FITS) could have been used for more meaningful comparisons.\\n\\nThank you for pointing this out. We appreciate your attention to ensuring more meaningful comparisons.\\n\\nIncluding FITS in our comparisons presents some challenges regarding fairness. Actually, **FITS conducts grid search over the look-back window lengths of 90, 180, 360, and 720 to find the optimal input length**, as mentioned in their implementation details on *page 6* [1]. However, when reporting baseline results from other models like TimesNet [2], **FITS uses the raw results, where the input window length is fixed at 96.** \\n\\n**This discrepancy can lead to unfair comparisons.**\\n\\nTo address your concern, we adopted the same setting as FITS [1], conducting a grid search over the same look-back window lengths $[90, 180, 360, 720]$ with the prediction length fixed at 96. 
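In pseudocode, the grid-search protocol described above amounts to the following sketch. `train_and_eval` is a hypothetical placeholder for the real train/validate routine (its scores are fabricated so the sketch runs), so this illustrates the procedure rather than the actual pipeline:

```python
LOOKBACKS = [90, 180, 360, 720]  # candidate look-back window lengths, as in FITS
PRED_LEN = 96                    # prediction length held fixed during the search

def train_and_eval(lookback, pred_len):
    # Hypothetical stand-in for training a model with the given look-back
    # and returning its validation MSE; faked here so the sketch is runnable.
    return 1.0 / lookback

def grid_search_lookback(lookbacks, pred_len):
    # Train one model per candidate look-back and keep the lowest validation MSE.
    scores = {lb: train_and_eval(lb, pred_len) for lb in lookbacks}
    best = min(scores, key=scores.get)
    return best, scores

best_lookback, scores = grid_search_lookback(LOOKBACKS, PRED_LEN)
```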
We tested this on three datasets and obtained the following results:\\n\\n| **Method** | GraphSTAGE (Optimal Input Length) | | FITS (Optimal Input Length) | | GraphSTAGE (Fixed Input Length = 96) | |\\n| ----------- | --------------------------------- | --------------- | --------------------------- | ----------- | ------------------------------------ | ----------- |\\n| **Metric** | MSE | MAE | MSE | MAE | MSE | MAE |\\n| **ETTm1** | **0.295**\\u00b10.002 | **0.346**\\u00b10.003 | 0.304\\u00b10.001 | 0.348\\u00b10.001 | 0.319\\u00b10.004 | 0.356\\u00b10.003 |\\n| **Weather** | **0.133**\\u00b10.001 | **0.188**\\u00b10.001 | 0.143\\u00b10.001 | 0.196\\u00b10.001 | 0.159\\u00b10.001 | 0.208\\u00b10.001 |\\n| **ECL** | **0.128**\\u00b10.001 | **0.224**\\u00b10.001 | 0.142\\u00b10.001 | 0.242\\u00b10.000 | 0.139\\u00b10.001 | 0.237\\u00b10.001 |\\n\\nThe results in the table above show that:\\n\\n1. Under the same setting of **grid-searched optimal input length**, **GraphSTAGE outperforms FITS** significantly.\\n2. Comparing GraphSTAGE with the optimal input length and a fixed input length of 96, we see that **increasing the input length improves the model's performance**. This observation highlights that FITS's practice of grid searching input lengths while using baseline models' results with a fixed input length of 96 leads to unfair comparisons.\\n\\n**Furthermore, we have discussed the performance gains from increasing the look-back length in our revised paper; please refer to *Figure 6 on Page 7*.**\\n\\nWe hope these additional experiments provide a clearer understanding of GraphSTAGE's performance and address your concerns. Thank you again for your valuable feedback.\\n\\n---\\n\\n[1] *FITS: Modeling Time Series with $10k$ Parameters*. ICLR, 2024.\\n\\n[2] *TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis.* ICLR, 2023\"}", "{\"comment\": \"Thank you for your positive feedback and for taking the time to carefully review our responses. 
We deeply appreciate your thoughtful evaluation and support!\"}", "{\"summary\": \"This paper introduces GRAPHSTAGE, a fully GNN-based method that captures intra-series and inter-series dependencies while maintaining the shape of the input data through a channel-preserving strategy. Extensive experiments conducted on 13 real-world datasets\\ndemonstrate that GRAPHSTAGE achieves competitive performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n2. Extensive experiments across 13 benchmark datasets have demonstrated that GRAPHSTAGE achieves good performance.\", \"weaknesses\": \"1. My primary concern is regarding the term \\\"noise\\\" (see line 16). The authors claim that previous methods introduce additional noise, but it remains unclear what this \\\"noise\\\" refers to. How is it defined, and what impact does it have on the effectiveness of the methods?\\n\\n2. This paper appears to be an incremental contribution, with its components largely derived from previous works. Channel-preserving Strategy is inspired by iTransformer, and the fully GNN perspective is inspired by FourierGNN.\", \"questions\": \"please refer to the weaknesses part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The proposed GRAPHSTAGE model innovatively employs a pure Graph Neural Network (GNN) structure to preserve the channel structure of the input data. The authors claim that this approach avoids the noise and interference introduced by channel mixing methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
By introducing the Spatio-Temporal Aggregation Graph Encoder (STAGE) module, the authors effectively separated and captured internal and external dependencies in the time series, enhancing the model's interpretability and predictive performance.\\n\\n2. The paper conducted extensive experiments on 13 real-world datasets, demonstrating superior performance in multivariate time series prediction tasks, with significant improvements in prediction accuracy and computational efficiency compared to existing methods.\", \"weaknesses\": \"1. I don't quite understand why a purely graph-based structure can introduce noise and interference through channel mixing methods.\\n\\n2. I have doubts about the effectiveness of purely using GNNs for time series analysis. The performance of GNNs heavily depends on the rationality of the graph structure. However, in certain datasets or tasks, constructing a reasonable graph structure is itself a challenge. If the graph structure fails to accurately capture the dependencies in the data, even the most advanced GNNs may struggle to achieve optimal performance. Additionally, the representational capacity of graphs may be limited in high-noise or irregular data, leading to a decline in the model's predictive capabilities. I hope the authors can provide a more comprehensive explanation.\\n\\n3. FourierGNN also performs time series prediction, so why didn't the authors conduct a comprehensive comparison?\", \"questions\": \"see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Response to Reviewer 7NU8 (1/4)\\n\\n> **W1 \\uff06 Q1 (1/3) :** **Chosen baseline methods for comparisons are limited to non-graph-based models. 
In contrast, several methods leveraging GNNs for time series forecasting can be found in FourierGNN.**\\n\\nThank you for pointing this out, we have **added four graph-based models as baseline methods** for comparison: FourierGNN [1], CrossGNN [2], StemGNN [3], and MTGNN [4]. The detailed results are presented in *Table 9 on Page 18* of the revised paper.\\n\\nFollowing the experimental setup of TimesNet [5], we use a fixed lookback length $T = 96$ and set the prediction lengths $K \\\\in \\\\{96, 192, 336, 720\\\\} $. We reproduced the results for FourierGNN and StemGNN, running each experiment five times. For CrossGNN and MTGNN, we collected the reported results from [3]. \\n\\nFor your convenience, a summary of the relevant (averaged) results is provided below:\\n\\n| **Model** | GraphSTAGE | | FourierGNN[1] | | CrossGNN[2] | | StemGNN[3] | | MTGNN[4] | |\\n| ------------------ | ------------------- | ------------------- | --------------- | --------------- | ----------- | ----- | --------------- | --------------- | -------- | ----- |\\n| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| ECL (321 Nodes) | **0.166**$\\\\pm$0.003 | **0.263**$\\\\pm$0.002 | 0.221$\\\\pm$0.001 | 0.318$\\\\pm$0.001 | 0.201 | 0.300 | 0.215$\\\\pm$0.002 | 0.316$\\\\pm$0.002 | 0.251 | 0.347 |\\n| ETTm1 (7 Nodes) | **0.391**$\\\\pm$0.003 | **0.394**$\\\\pm$0.002 | 0.453$\\\\pm$0.002 | 0.448$\\\\pm$0.003 | 0.393 | 0.404 | 0.550$\\\\pm$0.004 | 0.537$\\\\pm$0.005 | 0.469 | 0.446 |\\n| ETTh2 (7 Nodes) | **0.387**$\\\\pm$0.005 | **0.407**$\\\\pm$0.004 | 0.543$\\\\pm$0.006 | 0.517$\\\\pm$0.004 | 0.393 | 0.418 | 1.158$\\\\pm$0.010 | 0.812$\\\\pm$0.011 | 0.465 | 0.509 |\\n| Weather (21 Nodes) | **0.243**$\\\\pm$0.001 | **0.274**$\\\\pm$0.001 | 0.257$\\\\pm$0.002 | 0.305$\\\\pm$0.003 | 0.247 | 0.289 | 0.289$\\\\pm$0.005 | 0.342$\\\\pm$0.007 | 0.314 | 0.355 |\\n\\nThe results indicate that GraphSTAGE achieved the **top-1** rank in terms of MSE and MAE metrics across four 
datasets **compared with four graph-based models.** Although CrossGNN performs comparably to our GraphSTAGE on smaller datasets (e.g., ETT and Weather), GraphSTAGE notably outperforms CrossGNN on large-scale datasets (e.g., ECL with 321 nodes). These findings further validate the effectiveness of GraphSTAGE in capturing both intra-series and inter-series dependencies, leading to **superior forecasting accuracy across datasets of varying scales.**\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *Crossgnn: Confronting noisy multivariate time series via cross interaction refinement*. NeurIPS, 2023\\n\\n[3] *Spectral temporal graph neural network for multivariate time-series forecasting.* NeurIPS, 2020\\n\\n[4] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\\n\\n[5] *TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis.* ICLR, 2023\"}", "{\"title\": \"Upgrade the score\", \"comment\": \"Dear Authors,\\n\\nTo be honest, I am still not convinced that your idea of decoupling the learning of intra-series and inter-series is significant. I don't really see it from the rebuttal.\\n\\nHowever, since the authors updated with more experimental results and the new results look reasonably good, so I updated the score to 6: marginally accepted.\\n\\nBest,\\nReviewer\"}", "{\"summary\": \"The paper presents GraphSTAGE, a novel GNN-based model for multivariate time series forecasting (MTSF). By separating the learning of temporal (intra-series) and spatial (inter-series) dependencies while preserving the original channel structures, GraphSTAGE minimizes noise and computational overhead caused by channel mixing. It incorporates the Spatial-Temporal Aggregation Graph Encoder (STAGE), which enhances interpretability by visualizing temporal and spatial patterns. 
Experiments across 13 real-world datasets demonstrate that GraphSTAGE outperforms state-of-the-art models by up to 20.8%.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, good empirical result: the proposed model outperforms state-of-the-art models by up to 20.8%. The experiments in this paper focus on evaluating the proposed GRAPHSTAGE model for multivariate time series forecasting (MTSF). Here\\u2019s a summary of the experiments and results:\", \"Experimental Setup\", \"Datasets: The model is tested on 13 real-world datasets, including ETT, ECL, Exchange, Traffic, Weather, Solar-Energy, and PEMS (with subsets PEMS03, PEMS04, PEMS07, and PEMS08). These datasets represent diverse MTSF tasks, covering domains like energy, weather, and traffic forecasting. See Section 4.1 in Page 6.\", \"Baselines: Seven well-known models, including iTransformer, PatchTST, Crossformer, DLinear, RLinear, SCINet, and TimesNet, are used for comparison\\u200b. See Section 4.1 in Page 6.\", \"Main Results\", \"Overall Performance: GRAPHSTAGE demonstrates up to a 20.8% improvement in performance and achieves first place in 22 out of 30 comparisons across all datasets. It achieves lower Mean Squared Error (MSE) and Mean Absolute Error (MAE) compared to state-of-the-art (SOTA) models, particularly excelling in datasets like ECL, ETT, Weather, Solar-Energy, and PEMS\\u200b. See Section 4.2 in Pages 6 and 7.\", \"Model Efficiency: Compared to models like Crossformer, GRAPHSTAGE reduces memory usage by 47.0% and training time by 60.9%, while achieving a 36.5% improvement in predictive accuracy. See Page 8, Lines from 378 to 385.\", \"Ablation Studies\", \"Correlation Learning Mechanism: Ablations reveal that removing or replacing the Inter-GrAG and Intra-GrAG modules degrades performance, confirming the importance of the decoupled spatial-temporal extraction in GRAPHSTAGE\\u200b. 
See Section 4.3 and Table 2 in Page 8.\", \"Embedding & Patching Mechanism: Ablations also show that removing patching or adaptive embedding reduces accuracy, indicating the critical role of these components in effective forecasting. See Section 4.3 and Table 3 in Page 8.\"], \"weaknesses\": [\"Incremental idea & Lack of novelty: I think the core idea of decoupling the learning of intra-series and inter-series instead of channel mixing is incremental and not significant.\", \"By decoupling intra-series and inter-series dependencies, GRAPHSTAGE enables more efficient modeling without the interference and noise associated with channel blending. See Section 3.2, Lines from 228 to 242 in Page 5.\", \"GRAPHSTAGE\\u2019s decoupled architecture reduces computational overhead and enhances model interpretability by generating separate learnable graphs for temporal and spatial dimensions\\u200b. See Section 3.2, Lines from 243 to 266 in Page 5.\"], \"questions\": \"Please address the concern regarding novelty that I have mentioned in weaknesses!\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n \\nThank you for your efforts in addressing my previous comments.\", \"i_have_one_additional_question\": \"Based on your explanations regarding \\\"noise,\\\" such as the potential learning of pseudo connections, it seems that patches themselves might also introduce noise. For instance, consider two patches, $x_{1:p}$ and $y_{1:p}$. It is possible that $x_1$ and $y_p$ may have no meaningful connections. Could you explain how GraphSTAGE addresses the noise introduced by such scenarios?\"}", "{\"comment\": \"# Responses to Reviewer yAwh (3/3)\\n\\n> **W3: FourierGNN also performs time series prediction, so why didn't the authors conduct a comprehensive comparison?**\\n\\nThank you for pointing this out. 
There are two main reasons why we did not include FourierGNN in our initial submission:\\n\\n1. **Different Focus of Tasks:** FourierGNN and GraphSTAGE focus on different forecasting tasks. FourierGNN primarily targets spatio-temporal forecasting tasks, with experiments concentrated on **short-term** spatio-temporal forecasting where the maximum prediction length is 12. In contrast, GraphSTAGE focuses on **long-term** time series forecasting, with prediction lengths extending up to 720.\\n2. **Performance:** Existing research suggests that FourierGNN may **not serve as a strong baseline** [1].\\n\\nTo address your concerns, we have **added four graph-based models as baseline methods** for comparison: FourierGNN [2], CrossGNN [3], StemGNN [4], and MTGNN [5]. Following the experimental setup of TimesNet [6], we use a fixed lookback length $T = 96$ and set the prediction lengths $K \\\\in \\\\{96, 192, 336, 720\\\\}$.\\n\\nWe reproduced the results for FourierGNN and StemGNN, running each experiment five times. For CrossGNN and MTGNN, we collected the reported results from [3]. The detailed results are presented in *Table 9 on Page 18* of the revised paper. 
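For reference, the "mean ± standard deviation over five runs" entries reported below can be produced with a simple aggregation like the one sketched here. The metric values are hypothetical, and whether population or sample standard deviation is used in the paper is an assumption:

```python
import statistics

def summarize_runs(metric_values):
    # Aggregate repeated-run metrics into (mean, std) for "mean ± std" reporting.
    mean = statistics.mean(metric_values)
    std = statistics.pstdev(metric_values)  # population std; sample std (stdev) is an alternative
    return mean, std

# Five hypothetical MSE values from repeated runs of one setting.
mean, std = summarize_runs([0.165, 0.167, 0.166, 0.169, 0.163])
```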
For your convenience, a summary of the relevant (averaged) results is provided below:\\n\\n| **Model** | GraphSTAGE | | FourierGNN[2] | | CrossGNN[3] | | StemGNN[4] | | MTGNN[5] | |\\n| ------------------ | ------------------- | ------------------- | --------------- | --------------- | ----------- | ----- | --------------- | --------------- | -------- | ----- |\\n| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| ECL (321 Nodes) | **0.166**$\\\\pm$0.003 | **0.263**$\\\\pm$0.002 | 0.221$\\\\pm$0.001 | 0.318$\\\\pm$0.001 | 0.201 | 0.300 | 0.215$\\\\pm$0.002 | 0.316$\\\\pm$0.002 | 0.251 | 0.347 |\\n| ETTm1 (7 Nodes) | **0.391**$\\\\pm$0.003 | **0.394**$\\\\pm$0.002 | 0.453$\\\\pm$0.002 | 0.448$\\\\pm$0.003 | 0.393 | 0.404 | 0.550$\\\\pm$0.004 | 0.537$\\\\pm$0.005 | 0.469 | 0.446 |\\n| ETTh2 (7 Nodes) | **0.387**$\\\\pm$0.005 | **0.407**$\\\\pm$0.004 | 0.543$\\\\pm$0.006 | 0.517$\\\\pm$0.004 | 0.393 | 0.418 | 1.158$\\\\pm$0.010 | 0.812$\\\\pm$0.011 | 0.465 | 0.509 |\\n| Weather (21 Nodes) | **0.243**$\\\\pm$0.001 | **0.274**$\\\\pm$0.001 | 0.257$\\\\pm$0.002 | 0.305$\\\\pm$0.003 | 0.247 | 0.289 | 0.289$\\\\pm$0.005 | 0.342$\\\\pm$0.007 | 0.314 | 0.355 |\\n\\nThe above results show that GraphSTAGE consistently achieved the top-1 rank in terms of MSE and MAE across four datasets. Although CrossGNN exhibits performance comparable to our GraphSTAGE model on smaller-scale datasets ( e.g., ETT and Weather), GraphSTAGE notably outperforms CrossGNN on large-scale datasets (e.g., ECL with 321 nodes). 
These results further validate the effectiveness of GraphSTAGE in capturing both intra-series and inter-series dependencies, leading to superior forecasting accuracy across datasets of varying scales.\\n\\n---\\n\\n[1] *MambaTS: Improved Selective State Space Models for Long-term Time Series Forecasting.* arXiv\\n\\n[2] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[3] *Crossgnn: Confronting noisy multivariate time series via cross interaction refinement*. NeurIPS, 2023\\n\\n[4] *Spectral temporal graph neural network for multivariate time-series forecasting.* NeurIPS, 2020\\n\\n[5] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\\n\\n[6] *TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis.* ICLR, 2023\"}", "{\"comment\": \"**$\\\\triangleright$Differences with FourierGNN:**\\n\\nAlthough both are pure graph methods, Graph Neural Networks (GNNs) are a very broad concept. **Being graph-based does not make our work incremental.**\\n\\n1. **Dependency Modeling Approach:**\\n **FourierGNN** represents a trend toward modeling **three kinds of dependencies** within a unified framework, as shown in *Figure 2 on Page 2*. **GraphSTAGE**, however, focuses on decoupling and **modeling two types of dependencies.**\\n\\n2. **Research Focus:**\\n The drawbacks of using hypervariate graphs to unify multi-dependency modeling have not been studied in previous research. There are two main drawbacks: **High Memory Consumption:** This issue has also been reported in other studies [4]. **Potential Noise:** Modeling extensive cross-dependencies may lead the model to learn many pseudo connections (as we explained in our response to Q1). 
These pseudo connections can weaken the influence of genuine connections (inter-series and intra-series dependencies), potentially diminishing the model's predictive performance.\\n\\n Additionally, instead of focusing solely on FourierGNN, we aimed to broadly address hypervariate graph-based multi-dependency modeling methods. In the Variants Comparison section, we included a variant called **VarC** (shown in *Figure 7 on Page 9*) to encompass these methods and compared it with our channel-preserving GraphSTAGE. Our findings indicate that **modeling only two types of dependencies can achieve excellent predictive performance while also reducing memory usage.**\\n\\n---\\n\\n**$\\\\triangleright$Addition of Graph-Based Baselines:**\\n\\nTo **provide a more direct and comprehensive comparison with FourierGNN** and other graph-based models, we have included four additional graph-based baselines in the revised paper. The detailed results are presented in *Table 9 on Page 18* of the revised paper. 
For your convenience, we provide a summary of the relevant averaged results below:\\n\\n| **Model** | GraphSTAGE | | FourierGNN[2] | | CrossGNN[5] | | StemGNN[6] | | MTGNN[7] | |\\n| ------------------ | ------------------- | ------------------- | --------------- | --------------- | ----------- | ----- | --------------- | --------------- | -------- | ----- |\\n| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| ECL (321 Nodes) | **0.166**$\\\\pm$0.003 | **0.263**$\\\\pm$0.002 | 0.221$\\\\pm$0.001 | 0.318$\\\\pm$0.001 | 0.201 | 0.300 | 0.215$\\\\pm$0.002 | 0.316$\\\\pm$0.002 | 0.251 | 0.347 |\\n| ETTm1 (7 Nodes) | **0.391**$\\\\pm$0.003 | **0.394**$\\\\pm$0.002 | 0.453$\\\\pm$0.002 | 0.448$\\\\pm$0.003 | 0.393 | 0.404 | 0.550$\\\\pm$0.004 | 0.537$\\\\pm$0.005 | 0.469 | 0.446 |\\n| ETTh2 (7 Nodes) | **0.387**$\\\\pm$0.005 | **0.407**$\\\\pm$0.004 | 0.543$\\\\pm$0.006 | 0.517$\\\\pm$0.004 | 0.393 | 0.418 | 1.158$\\\\pm$0.010 | 0.812$\\\\pm$0.011 | 0.465 | 0.509 |\\n| Weather (21 Nodes) | **0.243**$\\\\pm$0.001 | **0.274**$\\\\pm$0.001 | 0.257$\\\\pm$0.002 | 0.305$\\\\pm$0.003 | 0.247 | 0.289 | 0.289$\\\\pm$0.005 | 0.342$\\\\pm$0.007 | 0.314 | 0.355 |\\n\\nThe results indicate that GraphSTAGE achieved the **top-1** rank in terms of MSE and MAE metrics across four datasets **compared with four graph-based models.** Although CrossGNN performs comparably to our GraphSTAGE on smaller datasets (e.g., ETT and Weather), GraphSTAGE notably outperforms CrossGNN on large-scale datasets (e.g., ECL with 321 nodes). These findings further validate the effectiveness of GraphSTAGE in capturing both intra-series and inter-series dependencies, leading to **superior forecasting accuracy across datasets of varying scales.**\\n\\nWe believe these clarifications address your concerns regarding the novelty and contributions of our work. 
Current models primarily focus on the advantages of channel-mixing methods for extracting multiple dependencies, often **neglecting the noise these approaches can introduce.** **We are the first to directly address this issue.** Thank you again for your valuable feedback, which has helped us improve our paper.\\n\\n---\\n\\n[1] *iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.* ICLR, 2024\\n\\n[2] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[3] *UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting.* arXiv\\n\\n[4] *ForecastGrapher: Redefining Multivariate Time Series Forecasting with Graph Neural Networks.* arXiv\\n\\n[5] *Crossgnn: Confronting noisy multivariate time series via cross interaction refinement*. NeurIPS, 2023\\n\\n[6] *Spectral temporal graph neural network for multivariate time-series forecasting.* NeurIPS, 2020\\n\\n[7] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\"}", "{\"comment\": \"**$\\\\triangleright$Differences with FourierGNN:**\\n\\nAlthough both are pure graph methods, Graph Neural Networks (GNNs) are a very broad concept. **Being graph-based does not make our work incremental.**\\n\\n1. **Dependency Modeling Approach:**\\n **FourierGNN** represents a trend toward modeling **three kinds of dependencies** within a unified framework, as shown in *Figure 2 on Page 2*. **GraphSTAGE**, however, focuses on decoupling and **modeling two types of dependencies.**\\n\\n2. **Research Focus:**\\n The drawbacks of using hypervariate graphs to unify multi-dependency modeling have not been studied in previous research. There are two main drawbacks: **High Memory Consumption:** This issue has also been reported in other studies [3]. 
**Potential Noise:** Modeling extensive cross-dependencies may lead the model to learn many pseudo connections (as we explained in our response to Q1). These pseudo connections can weaken the influence of genuine connections (inter-series and intra-series dependencies), potentially diminishing the model's predictive performance.\\n\\n Additionally, instead of focusing solely on FourierGNN, we aimed to broadly address hypervariate graph-based multi-dependency modeling methods. In the Variants Comparison section, we included a variant called **VarC** (shown in *Figure 7 on Page 9*) to encompass these methods and compared it with our channel-preserving GraphSTAGE. Our findings indicate that **modeling only two types of dependencies can achieve excellent predictive performance while also reducing memory usage.**\\n\\n---\\n\\n**$\\\\triangleright$Addition of Graph-Based Baselines:**\\n\\nTo **provide a more direct and comprehensive comparison with FourierGNN** and other graph-based models, we have included four additional graph-based baselines in the revised paper. The detailed results are presented in *Table 9 on Page 18* of the revised paper. 
For your convenience, we provide a summary of the relevant averaged results below:\\n\\n| **Model** | GraphSTAGE | | FourierGNN[1] | | CrossGNN[4] | | StemGNN[5] | | MTGNN[6] | |\\n| ------------------ | ------------------- | ------------------- | --------------- | --------------- | ----------- | ----- | --------------- | --------------- | -------- | ----- |\\n| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| ECL (321 Nodes) | **0.166**$\\\\pm$0.003 | **0.263**$\\\\pm$0.002 | 0.221$\\\\pm$0.001 | 0.318$\\\\pm$0.001 | 0.201 | 0.300 | 0.215$\\\\pm$0.002 | 0.316$\\\\pm$0.002 | 0.251 | 0.347 |\\n| ETTm1 (7 Nodes) | **0.391**$\\\\pm$0.003 | **0.394**$\\\\pm$0.002 | 0.453$\\\\pm$0.002 | 0.448$\\\\pm$0.003 | 0.393 | 0.404 | 0.550$\\\\pm$0.004 | 0.537$\\\\pm$0.005 | 0.469 | 0.446 |\\n| ETTh2 (7 Nodes) | **0.387**$\\\\pm$0.005 | **0.407**$\\\\pm$0.004 | 0.543$\\\\pm$0.006 | 0.517$\\\\pm$0.004 | 0.393 | 0.418 | 1.158$\\\\pm$0.010 | 0.812$\\\\pm$0.011 | 0.465 | 0.509 |\\n| Weather (21 Nodes) | **0.243**$\\\\pm$0.001 | **0.274**$\\\\pm$0.001 | 0.257$\\\\pm$0.002 | 0.305$\\\\pm$0.003 | 0.247 | 0.289 | 0.289$\\\\pm$0.005 | 0.342$\\\\pm$0.007 | 0.314 | 0.355 |\\n\\nThe results indicate that GraphSTAGE achieved the **top-1** rank in terms of MSE and MAE metrics across four datasets **compared with four graph-based models.** Although CrossGNN performs comparably to our GraphSTAGE on smaller datasets (e.g., ETT and Weather), GraphSTAGE notably outperforms CrossGNN on large-scale datasets (e.g., ECL with 321 nodes). These findings further validate the effectiveness of GraphSTAGE in capturing both intra-series and inter-series dependencies, leading to **superior forecasting accuracy across datasets of varying scales.**\\n\\nWe believe these clarifications address your concerns regarding the novelty and contributions of our work. 
Current models primarily focus on the advantages of channel-mixing methods for extracting multiple dependencies, often **neglecting the noise these approaches can introduce.** **We are the first to directly address this issue.** Thank you again for your valuable feedback, which has helped us improve our paper.\\n\\n---\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting.* arXiv\\n\\n[3] *ForecastGrapher: Redefining Multivariate Time Series Forecasting with Graph Neural Networks.* arXiv\\n\\n[4] *Crossgnn: Confronting noisy multivariate time series via cross interaction refinement*. NeurIPS, 2023\\n\\n[5] *Spectral temporal graph neural network for multivariate time-series forecasting.* NeurIPS, 2020\\n\\n[6] *Connecting the dots: Multivariate time series forecasting with graph neural networks.* KDD, 2020\"}", "{\"title\": \"Request of Reviewer's attention and feedback\", \"comment\": \"Dear Reviewer RvJQ,\\n\\nWe sincerely thank you for your valuable and constructive feedback. Since the End of author/reviewer discussions is coming soon, may we know if our response addresses your main concerns? If so, we kindly ask for your reconsideration of the score. Should you have any further advice on the revised paper and/or our rebuttal, please let us know and we will be more than happy to engage in more discussion and paper improvements.\\n\\nThank you so much for devoting time to improving our paper!\"}", "{\"title\": \"Global Response to All Reviewers\", \"comment\": \"We sincerely thank all the reviewers for their valuable time and detailed feedback, and we appreciate that almost all reviewers recognized the soundness and presentation of our work. We have carefully revised the paper according to the comments, and the edits have been highlighted in **ORANGE**. 
We also provide a detailed response to each comment below.\\n\\nHere we highlight our major revisions, and respond to each reviewer below. We hope our responses can properly address your concerns.\\n\\n- In response to the feedback from **Reviewer** **yAwh**, **9agk**, **7NU8**, and **RvJQ**, we have conducted additional experiments and revised the paper accordingly, along with the relevant discussions.\\n\\n 1. We have expanded our experiments to include additional four graph-based baseline methods for multivariate time series forecasting tasks: (1) FourierGNN, (2) CrossGNN, (3) StemGNN, and (4) MTGNN. For complete results, refer to *Table 9 in Appendix D on Page 18*.\\n\\n 2. We have added the standard deviations of our model across all tasks to highlight the model\\u2019s strong robustness. For complete results, refer to *Table 10 and Table 11 in Appendix F on Page 22*.\\n\\n 3. Additionally, we provided the pseudo-code of our model to enhance the clarity in methodological presentation. For detailed results, please see *Algorithm 1 in Appendix A on Page 15*.\\n- Following feedback from **Reviewer 7NU8**, we have updated the related works to more clearly define the scope and background of this study. We have revised statements that could be slightly misleading, such as \\u201cimprovements up to 20.8%.\\u201d We have also revised the colors in Figure 15 and have updated the relevant text in the revised paper to make the interpretability more clearly. \\n\\nWe thank the reviewers again and look forward to any further suggestions or discussion.\"}", "{\"comment\": \"# Response to Reviewer 7NU8: Additional Concern (2/3)\\n\\n> **Additional Concern 2 (1/2):** The additional sentence about the GNN-based forecasting methods provides minimal explanations in the related section. It does not adequately support the central novelty justification in the graph-learning part of the method. \\n\\nThank you for pointing out this issue. 
Although we have, in response to your previous valuable feedback, added four GNN-based baselines, we did not provide an in-depth discussion due to (1) space limitations in the paper, and (2) the fact that these widely adopted GNN-based methods did not turn out to be strong baselines. \\n\\nIn the camera-ready version, we will address this issue by:\\n\\n- **Expanding the Related Work on GNN-based Forecasting Methods** in the appendix to provide a comprehensive overview of existing approaches, discussing their methodologies, strengths, and limitations in detail.\\n- **Clarifying the Novelty of Our Graph-Learning Method** by clearly articulating how our approach differs from existing methods and explaining how our approach addresses the limitations of current GNN-based forecasting methods.\\n\\nFor more details about the novelty of our approach, please refer to the **Response to Reviewer 9agk (2/2)**.\\n\\nWe believe these enhancements will better support the central novelty of our work and provide a clearer understanding of our contributions to the graph-learning aspect of time series forecasting.\\n\\n\\n\\n> **Additional Concern 2 (2/2)**: It seems that using two decoupled temporal and spatial graphs in a sequential manner is a choice rather experimentally than conceptually motivated.\\n\\n\\n\\nThank you for pointing out this issue, and we apologize for any misunderstanding it may have caused. We would like to clarify that **this design choice is conceptually motivated** and **represents the main motivation and innovation of our paper.** For more details about the novelty of our approach, please refer to the **Response to Reviewer 9agk (2/2)**. 
For your convenience, we provide a summary of the motivation behind our framework design below:\", \"our_work_is_driven_by_a_core_question\": \"*\\\"Is it truly necessary to model all these dependencies?\\\"* We are the first to study the trade-off between **the variety of dependencies extracted** and **the noise that may be introduced by channel-mixing**. Previous works [1] [2] often adopt a \\\"brute-force modeling\\\" approach (e.g. hypervariate graphs), stacking parameters to capture as many dependencies as possible. While this might seem effective, it overlooks a critical issue\\u2014the potential noise introduced by this process.\\n\\nTo investigate and minimize this potential noise, we propose a **novel channel-preserving framework**: GraphSTAGE. The use of decoupled temporal and spatial graphs in a sequential manner is a conceptual decision rooted in our aim to preserve channel integrity. **The model variants comparison is not intended to suggest that our model design is an experimental choice.** Rather, **through the comparisons** presented in *Table 4 on Page 10*, **we validate the presence of noise introduced by channel-mixing methods, underscoring the limitations of excessive dependency extraction.**\\n\\nWe believe that our approach not only reveals the issue of noise but also highlights the importance of balancing dependencies modeling with the potential for noise introduction in time series forecasting.\\n\\n---\\n\\n[1] *FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective.* NeurIPS, 2024\\n\\n[2] *UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting.* arXiv, 2024\"}" ] }
5dDYhvt6dY
Efficient transformer with reinforced position embedding for language models
[ "Yen-Che Hsiao", "Abhishek Dutta" ]
In this paper, we propose an efficient transformer architecture that uses reinforced positional embedding to obtain superior performance with half the number of encoder decoder layers. We demonstrate that concatenating positional encoding with trainable token embeddings, normalizing across tokens in the token embedding matrix, and using the normalized token embedding matrix as the value of the attention layer improve the training and validation loss and the training time in an encoder-decoder Transformer model for a Portuguese-English translation task with 10 epochs or 12 hours of training across 10 trials. Our method, with roughly a threefold parameter reduction compared to the baseline model, yields a mean training loss of 1.21, a mean validation loss of 1.51, and an average training time of 1352.27 seconds per epoch, surpassing the baseline model with the same embedding dimension that employs addition of positional encoding and token embeddings, which achieves a mean training loss of 1.96, a validation loss of 2.18, and an average training time of 4297.79 seconds per epoch. Additionally, we evaluated our proposed architecture and the baseline across 14 diverse translation datasets from TensorFlow. The results indicate that our method consistently achieves lower or comparable training and validation losses, suggesting enhanced learning efficiency.
[ "Transformer model", "token embeddings", "neural machine translation" ]
https://openreview.net/pdf?id=5dDYhvt6dY
https://openreview.net/forum?id=5dDYhvt6dY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "X1UxrY30ds", "WzOM7e3e9x", "RTfdRmZLEp", "LocB7UI7cb", "5yCEbJYZ7h" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732566947351, 1730518095933, 1730693109744, 1730646668736, 1730276244985 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8382/Authors" ], [ "ICLR.cc/2025/Conference/Submission8382/Reviewer_tVwQ" ], [ "ICLR.cc/2025/Conference/Submission8382/Reviewer_RiVe" ], [ "ICLR.cc/2025/Conference/Submission8382/Reviewer_sqJU" ], [ "ICLR.cc/2025/Conference/Submission8382/Reviewer_1Sm4" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes modifications to the standard Transformer architecture by introducing reinforced positional embeddings and alterations in the attention mechanism.\\nThe author propose to concatenate token embeddings with positional embeddings before the first\\nencoder and decoder blocks, and uses the normalized token embedding matrix as the value in the\\nattention layer.\\nExperimental results demonstrates improvements in training loss, validation loss, and computational time\\nacross thirteen translation datasets\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Give the author credit to carry out extensive results across 14 diverse translation datasets, and shows promising results over the baseline.\", \"weaknesses\": \"1.The presentation of section 2.3 and 2.4 is a little confusing. And the notation is not quite consistent, making it hard follow. So I feel the proposed modifications to the Transformer architecture are not entirely clear from the provided content. The paper needs to provide a more thorough explanation of how the reinforced positional embedding works and how it interacts with the Transformer's attention mechanism. 
Additionally, the mathematical formulation and the pseudo-code or diagrams could be more explicit to help reviewers and readers replicate the study.\\n\\n2. Lack of related-work comparison: more comparisons with related work should be added to show the merits of this work, rather than comparing only against the baseline Transformer.\\n\\n3. The English writing could be improved. There are some very long sentences that are hard to read, e.g., the second sentence of the abstract; the authors could break it into smaller sentences.\", \"questions\": \"See weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies how to make the Transformer architecture more efficient by reinforcing the positional embedding. To achieve this goal, the paper makes three proposals: (1) concatenating the token and positional embedding matrices (instead of adding them, as usual), (2) normalizing the token embedding matrix across tokens, and (3) using the normalized token embedding matrix as the value in the attention layer. Experiments on a few small MT test sets show that the proposed architecture achieves lower training/validation loss with ~30% of the parameters of the baseline model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. To the best of my knowledge, there is no research paper formally discussing the comparison between concatenating vs. adding the positional embedding to the token embedding, although there is interest and several informal discussions in the community about it (e.g., https://www.reddit.com/r/MachineLearning/comments/cttefo/comment/exoz3uy/). Hence, this paper may fill a relatively important void in the space.\\n2. The approach is very straightforward, and the description of the method is mostly easy to follow.\", \"weaknesses\": \"1. My biggest complaint is the evaluation in this paper. 
Further broken down below:\\n1.1 The title of this paper states that this is \\\"for language models\\\", yet there is no experiment in a T5-like multi-task language model setup.\\n1.2 Even the machine translation experiments were not well done. All experiments are done with toy-scale datasets. Evaluation is limited to perplexity, with no actual task-specific evaluation such as BLEU/COMET.\\n1.3 I'm not sure why we are not using a comparable hyper-parameter setup between the baseline vs. the proposed model. It's impressive to see that you are able to achieve comparable performance with a smaller model, but why not use the same number of encoder/decoder blocks/dimensions, etc., and show you can perform better with a comparable number of parameters?\\n2. The proposal should also be benchmarked against more up-to-date baselines such as RoPE (https://arxiv.org/abs/2104.09864).\\n3. The paper lacks a survey of the existing literature.\", \"questions\": [\"These are more detailed comments than questions:\", \"Why is (13) written as an addition inside the softmax? Isn't it easier to formulate it as a binary multiplicative mask outside the softmax?\", \"The introduction of the architecture could be more concise and focus more on the difference from the baseline. For example, I'm not sure if there's any difference from the baseline that has been introduced in Section 2.4.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The modified Transformer architecture, which alters the position embedding, improves training loss, validation loss, and computational time across 13 translation datasets compared to a baseline model. It normalizes token embeddings, concatenates them with positional embeddings, and uses the normalized embeddings in the attention layer. 
The proposed model achieves lower training and validation losses, with an average training time of 1352.27 seconds per epoch, compared to the baseline's 4297.79 seconds per epoch. These improvements suggest the model's robustness across various translation tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces an efficient transformer architecture employing reinforced positional embedding to achieve superior performance with half the number of encoder and decoder layers. Compared to the baseline model, it attains lower training and validation losses while utilizing only a quarter of the training time.\\n\\n2. The proposed method is mainly aimed at improving the position embedding layer. There are no similar approaches from previous researchers, so the proposed method is novel.\", \"weaknesses\": \"The experimental results are not solid enough:\\n\\n1. The baseline model is not very strong, as the proposed method can also be applied to decoder-only architectures. The authors should also use a more recent large language model, such as LLaMA, as the baseline model to validate the proposed method.\\n\\n2. The evaluation metric is not very reasonable; it is recommended to use task-related evaluation scores rather than just the loss as the final evaluation metric.\\n\\n3. Some related works are missing. It would be good to see a comparison with more work on improved positional embeddings, such as RoPE and Relative Position Representations (https://arxiv.org/abs/2104.09864, https://arxiv.org/pdf/1803.02155).\", \"questions\": \"1. Why did the authors select the Portuguese-English translation task as the experimental task? It would be good to see more mainstream translation tasks, such as English-French, English-German, and English-Chinese.\\n\\n2. Why is only the loss used as the evaluation metric in the experiment? 
Although a lower training loss generally indicates better model performance, it is still desirable to see a comparison of translation-related evaluation scores, such as BLEU.\\n\\n3. Why did the authors only use the sine-cosine based positional encodings as the baseline? It would be good to see a comparison with more work on improved positional embeddings, such as RoPE and Relative Position Representations (https://arxiv.org/abs/2104.09864, https://arxiv.org/pdf/1803.02155).\\n\\n4. Why did the authors only choose translation tasks to validate the proposed method? The baseline model is not very strong, as the proposed method can also be applied to decoder-only architectures. I suggest that the authors verify the effectiveness of the proposed method on some large language models such as LLaMA.\\n\\n5. The authors use a lot of content to explain the principle of reinforced position embedding, but I don't know why this method can improve the convergence speed of the model. And why can the model converge to a better solution space than the Transformer baseline model? Adjusting the learning rate or warm-up, or using LayerNorm, could also achieve a similar effect, right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The positional encoding is attached to the token embeddings after batch normalization along the feature dimension. To accommodate the residual connections, most weight matrices in the model are doubled in size (except for W_v). The model was trained on a translation dataset, and its performance was compared with the traditional Transformer architecture in terms of training and validation loss.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work presents two interesting counter-intuitive points:\\n\\n1)\\tThere has been extensive academic research demonstrating that deep-narrow networks perform better than shallow-wide networks, particularly in the translation field. However, this paper essentially doubles the feature width while reducing the number of layers, yet achieves better results.\\n2)\\tThe rationale for directly adding positional encodings and token embeddings in vector space has been discussed for many years, and remains the mainstream approach even today. However, this paper suggests that separating these spaces is necessary.\", \"weaknesses\": \"1)\\tMost of the cited works in this paper are from 2017/2018, and there is a noticeable lack of citations. This raises concerns about whether the authors have a thorough understanding and have conducted comprehensive research in this field. To be honest, this work's issue goes beyond simply needing a few more citations - the paper only has 5 references total, which is far too sparse. While I need to provide some suggestions, I would recommend looking into works in the field of positional encoding, or research analyzing embeddings. I'm afraid I can't offer much more advice to the author beyond that.\\n2)\\tThe experiments are not solid, with few experimental trials and superficial analysis. It only compares the advantages over the standard transformer model in a very toy setting. I suggest that this work should be combined with more efforts to improve relative/absolute positional encoding, and increase the model's parameter count to enhance soundness. 
Additionally, conclusions should not be drawn solely from one translation dataset - the effectiveness of this architectural modification should be evaluated on a broader range of tasks including QA, summarization, information extraction, etc.\\n3)\\tThe structure of the paper is poorly organized. Given that the Transformer architecture is now very fundamental, making minor modifications only requires brief explanations or could be placed in the appendix - dedicating two and a half pages in the main text is excessive. This leads to a lack of ablation studies, such as analyzing the impact of applying batch normalization to token embeddings and expanding feature dimensions on performance. The paper should include more feasibility analysis from the perspective of vector characteristics. Since there has long been an established understanding that various embeddings can be directly added together, this paper's counter-intuitive approach should dedicate more space to explaining its feasibility. I suggest adding a Related Work section, condensing the lengthy equations and oversized figures in the Methods section, and including an ablation analysis chapter in the Experiments section.\", \"writing_errors_or_typos\": \"1)\\t'Previous work by Ke et al. Ke et al. (2021)' in L33\\n2)\\tThe order of Figures 1 (b) and 1 (c) has been wrongly reversed in L41-42\\n3)\\tThe model parameter configurations from L300 to L307 were written messily; maybe put them in a table.\\n4)\\t'training losses of the baseline have the mean of roughly 6.60, 4.55, 3.82,3.29, 2.89, 2.57, 2.30, 2.11, 1.96, and 1.84 and the mean of the validation losses of the baseline are roughly 5.04, 4.06, 3.46, 3.01, 2.77, 2.49, 2.35, 2.25, and 2.18 for 10 different trials. ' from L315-322,L339-L356. It is not recommended to present such a large amount of numerical results in the main text/figure captions.\\n5)\\tIn Table 1, the three-line table format is not used.\", \"questions\": \"1)\\tWhy is the model scale so small? 
The baseline is 10M parameters and the proposed method uses 3M parameters, yet it mentions using an A100 40G GPU. For the experiments to be convincing, the model scale should be at least in the billions of parameters - this would better match both the GPU resources mentioned and current research requirements.\\n2)\\tWhile the paper references Ke et al.'s work, it appears to lack comprehensive comparisons with other works on improving Transformer positional encoding. Should experimental comparisons with other related methods be added?\\n3)\\tWould this work be useful for more modern positional encodings, such as ROPE and ALIBI?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
5d4UTqXjmS
Visual Large Language Models Exhibit Human-Level Cognitive Flexibility
[ "Guangfu Hao", "Frederic Alexandre", "Shan Yu" ]
Cognitive flexibility has been extensively studied in human cognition but remains relatively unexplored in the context of Visual Large Language Models (VLLMs). This study assesses the cognitive flexibility of state-of-the-art VLLMs (GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet) using the Wisconsin Card Sorting Test (WCST), a classic measure of set-shifting ability. Our results reveal that VLLMs achieve or surpass human-level set-shifting capabilities under chain-of-thought prompting with text-based inputs. However, their abilities are highly influenced by both input modality and prompting strategy. In addition, we find that through role-playing, VLLMs can simulate various functional deficits aligned with patients having impairments in cognitive flexibility, suggesting that VLLMs may possess a cognitive architecture, at least regarding the ability of set-shifting, similar to the brain. This study reveals the fact that VLLMs have already approached the human level on a key component underlying our higher cognition, and highlights the potential to use them to emulate complex brain processes.
[ "Cognitive Flexibility", "Visual Large Language Models", "Wisconsin Card Sorting Test" ]
https://openreview.net/pdf?id=5d4UTqXjmS
https://openreview.net/forum?id=5d4UTqXjmS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rLjnZtuXvE", "gzbkSbEtKV", "fF3pZSiEke", "UkBzJpYDnw", "IGVgIPTpv9", "DqqTQpWtvh", "CwTWwgqldc", "7LfTtYcawD", "6dNQnMAiT5", "2Om5CQCFa9" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730555023023, 1730417601926, 1738022425473, 1729430333509, 1732010974573, 1732007659235, 1732094812641, 1732007608090, 1732033342299, 1732003063154 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4401/Reviewer_9Pxs" ], [ "ICLR.cc/2025/Conference/Submission4401/Reviewer_PFJz" ], [ "ICLR.cc/2025/Conference/Submission4401/Authors" ], [ "ICLR.cc/2025/Conference/Submission4401/Reviewer_FXmB" ], [ "ICLR.cc/2025/Conference/Submission4401/Authors" ], [ "ICLR.cc/2025/Conference/Submission4401/Authors" ], [ "ICLR.cc/2025/Conference/Submission4401/Reviewer_9Pxs" ], [ "ICLR.cc/2025/Conference/Submission4401/Authors" ], [ "ICLR.cc/2025/Conference/Submission4401/Reviewer_PFJz" ], [ "ICLR.cc/2025/Conference/Submission4401/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This article studies the cognitive flexibility of three multimodal large language models\\u2014Gemini, ChatGPT, and Claude\\u2014that support both text and image input using the WCST test. Cognitive flexibility here refers to the models' ability to adjust their understanding of task rules and complete tasks correctly based solely on feedback indicating correctness or incorrectness. The experiment includes SaT-VI, SaT-TI, CoT-VI, and CoT-TI conditions, where SaT means no chain-of-thought guidance and the model outputs answers directly, while CoT involves chain-of-thought guidance. 
The results show that CoT significantly outperforms SaT, achieving or surpassing human-level performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This study uses the WCST to examine the cognitive flexibility of VLLMs. The WCST is widely applied in cognitive science and is known for its strong reliability.\\n2. The authors explored the potential of VLLMs to simulate specific patterns of cognitive impairment through role-playing.\", \"weaknesses\": \"1. Although the article mentions that the simulated patterns of the models align with real cases, the authors did not conduct cognitive experiments or correlate data with real subjects to demonstrate that VLLMs' simulation of cognitive impairment is reasonable.\\n2. The article only evaluates the models on a specific cognitive test (WCST). While the WCST is a classic test in cognitive science, it lacks real-world simulation, and performance on this test cannot fully represent performance in real-world scenarios.\\n3. The authors should consider incorporating more visualizations.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work aims to evaluate the cognitive flexibility of vision language models (VLMs), using a classic task from the neuropsychological literature (the Wisconsin Card Sort Task). The authors conclude that, under certain conditions (depending on input modality and prompting technique), VLMs can display human-level flexibility. 
Experiments are also reported in which prompting is used to simulate neuropsychological impairment.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This work employs a well validated task from the neuropsychological literature, potentially enabling a rich comparison with human cognition.\", \"The experiments investigate several state-of-the-art VLMs, increasing the robustness of the findings.\"], \"weaknesses\": [\"Most importantly, the results are not diagnostic regarding the relative cognitive flexibility of VLMs/LLMs and humans. This is because the human participants are effectively at ceiling. In order to have a meaningful comparison, a version of the task (or a different task) would need to be identified where human performance was not at ceiling.\", \"No theoretical motivation is provided for investigating cognitive flexibility in LLMs / VLMs. It is noted that this is a well studied task in the neuropsychology literature, which is true, but this does not automatically yield theoretically important questions about LLMs / VLMs. It is also suggested that 'This investigation not only advances our understanding of VLLMs but also offers insights into the nature of cognitive flexibility itself,' but it is not clear what insights this work offers about cognitive flexibility.\", \"There is also no explicit motivation for studying VLMs in particular, as opposed to LLMs. Is there any particular reason why it is important to study these processes in the visual domain?\", \"The paper only includes experiments with a single task. Many more tasks and conditions would be needed to support the claims that are advanced in this paper.\", \"For tests of large-scale pretrained models such as LLMs and VLMs, it is also important to try and ensure that the tasks used for evaluation are not present in the model's training data. This is a concern here given the popularity of this task in the cognitive literature. 
One possible approach might be to also test an equivalent version of the task that uses different surface features, to ensure that performance does not depend on memorization (or pseudo-memorization).\", \"There are no statistical tests provided throughout the entire paper, although there are many statements about the differences between certain conditions. It is important to perform statistical tests to determine which of these differences are reliable.\", \"It is unclear what's learned from the experiments simulating neuropsychological impairment. There are some assertions about similarities to the pattern of behavior in certain patient populations, but very few references, and no direct comparison with human behavior. It would be ideal to have a direct comparison with behavior to support such claims.\"], \"questions\": [\"### Questions\", \"What is the theoretical motivation for studying cognitive flexibility in VLMs / LLMs?\", \"What is the theoretical motivation for studying cognitive flexibility in VLMs in particular? What does the visual domain add to such an evaluation?\", \"### Suggestions\", \"The task should be modified so as to identify conditions where human performance is not at ceiling.\", \"More tasks should be investigated.\", \"Statistical tests should be included to support comparisons.\", \"A direct comparison with human behavior should be included for the experiments simulating neuropsychological impairment.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper \\\"Visual Large Language Models Exhibit Human-Level Cognitive Flexibility\\\" evaluates the cognitive flexibility of state-of-the-art VLLMs (GPT-4o, Gemini-1.5 Pro, and Claude-3.5 Sonnet) using the Wisconsin Card Sorting Test (WCST). 
It finds that VLLMs can match/surpass human performance in adapting to changing rules, especially with chain-of-thought reasoning and text-based inputs.\", \"key_contributions\": \"1. VLLMs demonstrate human-level cognitive flexibility, particularly with CoT prompting.\\n2. Performance significantly changes based on input modality (text vs. visual) and prompting strategy.\\n3. VLLMs can simulate cognitive impairments, offering potential for modeling brain function.\\n\\nThe study suggests that VLLMs have some cognitive abilities and points to potential in advanced applications in AI and neuroscience.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Experiment methodology: The paper is methodically thorough, using a well-established cognitive flexibility test (the WCST) and evaluating SOTA VLLMs. The experimental design includes 4 different setups and 6 scoring functions. This enables a detailed comparison under varied conditions, providing some level of robustness to the findings. Including human participants as a comparative baseline grounds the findings in a relatable context.\", \"clarity\": \"The paper is easy to follow and well-organized, with a clear explanation of the WCST, input modalities, and experimental conditions. The results are presented in detailed tables and figures, aiding in the understanding of model performance comparisons.\", \"originality\": \"The paper explores a new area by applying a well-known cognitive test to assess performance in VLLMs. It introduces a unique approach by examining how these models simulate cognitive impairments, adding some level of depth and innovation to the study.\", \"weaknesses\": \"Overstatement of Cognitive Flexibility Claims: Although the paper demonstrates that VLLMs can achieve human-level performance in the WCST under specific conditions, the claim that they exhibit human-like cognitive flexibility seems overstated. 
Cognitive flexibility in humans involves a broader spectrum of real-world applications and is examined using multiple tests, whereas the findings here are limited to a highly structured test. A more cautious interpretation of the results would strengthen the paper's scientific rigor.\", \"role_playing_cognitive_impairments_needs_validation\": \"While simulating cognitive impairments through role-playing prompts is innovative, this method remains speculative without validation against clinical populations. The paper could improve by discussing potential methods for validating these simulated impairments against real-world data, making the findings more actionable and grounded in reality.\", \"insufficient_validation_of_prompt_designs\": \"While the paper employs CoT and STA prompting strategies, it does not fully explore the impact of different prompting setups or attempt to validate the prompts across varied task conditions and models. For example, the reliance on CoT prompting for achieving high performance raises questions about how much of the cognitive flexibility observed in VLLMs is genuinely attributable to their internal architecture versus the external aid provided by sophisticated prompts (which for some of the models are not advertised as the suggested approach).\", \"questions\": \"1. While the paper demonstrates that VLLMs can achieve human-level performance in the WCST, the broader claim of human-like cognitive flexibility seems to require more context. Could the authors clarify how they see the findings generalizing to real-world applications? Specifically, how do the authors view the limitations of the WCST in capturing the full spectrum of cognitive flexibility in humans, and do they plan to evaluate VLLMs using additional tests that capture a wider range of flexibility?\\n\\n2. The paper heavily relies on a specific CoT prompting approach to achieve high performance. 
Could the authors provide more details about how different prompting strategies and setups affect the models' performance across tasks? Specifically, do changes in prompt wording significantly affect performance? \\n\\n3. The paper evaluates three SOTA models. How do the authors envision their findings generalizing to other models (including non-VLLMs), especially those with different architectures or less advanced capabilities? Are there plans to extend the study to a broader range of models or to compare different architectural approaches to cognitive flexibility? \\n\\n4. Cognitive flexibility in real-world settings often involves adapting to highly dynamic environments where rules are unclear and change rapidly. Do the authors have plans to test the models in more dynamic, less structured tasks where adaptability is required in real time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
To better reflect this scope, we propose revising the title to \"Visual Large Language Models Exhibit Human-Level Set-Shifting Abilities in the Wisconsin Card Sorting Test.\" In addition, we note that this concern aligns with Reviewer 9Pxs's Concern 2 and Reviewer PFJz's Concern 4. While there exist other cognitive flexibility measures like DCCS, IED, and cued switch tasks, we specifically chose WCST due to its unique advantages. Many alternative tests were designed for children or offer simplified paradigms compared to WCST, potentially limiting their discriminative power for evaluating VLLMs' capabilities. While real-world scenarios could provide ecological validity, they typically engage multiple cognitive processes simultaneously, making it challenging to isolate and measure cognitive flexibility specifically. WCST's well-established nature and extensive validation in cognitive neuroscience make it particularly suitable for our research aims.\\n\\n**2.Weaknesses 2**\\n\\n__Response:__\\nWe note that this concern aligns with Reviewer 9Pxs's Concern 1 and Reviewer PFJz's Concern 7. We would like to respectfully note that our study already establishes meaningful connections with clinical observations. As detailed in Section 4.5, our findings align with documented patterns in clinical literature: the models' simulated impairments correspond with established patient behaviors across different prefrontal dysfunctions, such as orbitofrontal damage (Stuss et al., 1983) and dorsolateral prefrontal cortex lesions (Stuss et al., 2000). While direct comparison with new clinical data would be valuable, collecting such data from patients with specific prefrontal impairments presents significant practical challenges. We believe our current approach of comparing simulated patterns with well-documented clinical findings provides meaningful insights while remaining feasible. 
We commit to expanding the analysis in Section 4.5 to provide more comprehensive connections with clinical literature.\\n\\n**3.Weaknesses 3 and Question 2**\\n\\n__Response:__\\nThank you for raising this important point about prompt design. We want to clarify that while VLLMs indeed show strong cognitive flexibility under CoT prompting but struggle with STA, the CoT prompt itself is remarkably simple - merely asking \\\"Let's think step by step\\u2026\\\" This minimal reconfiguration of the models' internal state enables successful task completion, similar to how children might initially struggle with WCST until they learn the appropriate thinking strategy. This suggests that the capability for cognitive flexibility exists within VLLMs' architecture and only needs appropriate elicitation to emerge. Regarding your specific question about prompt wording sensitivity, we have already investigated this in Section 4.4 (Impact of Explicit Rule Exclusivity), where we demonstrate that removing the explicit rule exclusivity constraint significantly impacts performance. This indicates that clear, complete, and exclusive task descriptions do influence performance. We would be happy to conduct additional experiments on prompt variations if you have specific suggestions in mind.\\n\\n**4.Question 3**\\n\\n__Response:__\\nWe appreciate this valuable point about model generalization. While we initially considered testing a broader range of models, including both open-source and proprietary ones with varying architectures and sizes, we encountered several technical constraints. The WCST task requires 64 continuous rounds of visual dialogue, which many models cannot support due to limitations in handling continuous visual conversations or restricted context windows. Some models are limited to single-image or few-shot visual interactions, while others may face context length constraints when dealing with multiple encoded images. 
These technical barriers prevented us from conducting comprehensive tests across a wider spectrum of models. However, as these limitations are gradually addressed through model improvements, we would be eager to extend our research to encompass a broader range of models and compare different architectural approaches to cognitive flexibility. We appreciate your suggestion and believe this expansion would provide valuable insights for future work.\"}", "{\"title\": \"To Reviewer PFJz (Part-II)\", \"comment\": \"**4.Single Task Limitation**\\n\\n__Response:__\\nWe note that this concern aligns with Reviewer 9Pxs's Concern 2, and we appreciate your suggestion about incorporating additional tasks. While there exist other cognitive flexibility measures like DCCS, IED, and cued switch tasks, we specifically chose WCST due to its unique advantages. Many alternative tests were designed for children or offer simplified paradigms compared to WCST, potentially limiting their discriminative power for evaluating VLLMs' capabilities. While real-world scenarios could provide ecological validity, they typically engage multiple cognitive processes simultaneously, making it challenging to isolate and measure cognitive flexibility specifically. WCST's well-established nature and extensive validation in cognitive neuroscience make it particularly suitable for our research aims. Nevertheless, we would greatly welcome your suggestions for additional tasks that could assess cognitive flexibility with comparable precision to WCST, as this would certainly enrich our understanding of VLLMs' capabilities in this domain.\\n\\n**5.Training Data Contamination**\\n\\n__Response:__\\nThank you for the thoughtful concern about potential training data contamination. We would like to clarify that this potential issue was carefully considered during our experimental design phase - specifically, whether models might rely on memorization from similar tasks in their training data. 
However, we believe the nature of our task makes pure memorization highly unlikely, as each trial presents a unique challenge: the order of card presentation is entirely randomized, the sorting rules are randomly selected, and most importantly, the model must actively infer the current rule based on trial-by-trial feedback. Furthermore, rule switches are completely random, requiring real-time adjustment based on feedback after rule switches to ultimately discover the correct rule. Therefore, we believe this task does not rely on memorization. Of course, testing equivalent versions of the task using different surface features to ensure performance does not depend on memorization (or pseudo-memorization) could further demonstrate this point.\\n\\n**6.Lack of Statistical Testing**\\n\\n__Response:__\\nWe sincerely appreciate your rigorous suggestion regarding statistical testing. Your point about determining the reliability of differences between conditions is crucial. We will enhance our manuscript by incorporating comprehensive statistical analyses to validate these comparisons. Thank you for helping us strengthen the scientific rigor of our work.\\n\\n**7.Neuropsychological Impairment Simulations**\\n\\n__Response:__\\nWe note that this concern aligns with Reviewer 9Pxs's Concern 1, and we acknowledge that direct correlation with patient data would strengthen our findings. We would like to respectfully note that our study already establishes meaningful connections with clinical observations. As detailed in Section 4.5, our findings align with documented patterns in clinical literature: the models' simulated impairments correspond with established patient behaviors across different prefrontal dysfunctions, such as orbitofrontal damage (Stuss et al., 1983) and dorsolateral prefrontal cortex lesions (Stuss et al., 2000). 
While direct comparison with new clinical data would be valuable, collecting such data from patients with specific prefrontal impairments presents significant practical challenges. We believe our current approach of comparing simulated patterns with well-documented clinical findings provides meaningful insights while remaining feasible. We commit to expanding the analysis in Section 4.5 to provide more comprehensive connections with clinical literature.\"}", "{\"comment\": \"Thank the authors for their response. I did indeed notice the alignment analysis with human data in Section 4.5, but I believe a more detailed analysis is warranted here. Additionally, I agree with Reviewer PFJz's point regarding the motivation behind the analysis of VLLMs, which is not entirely convincing. Recent multimodal models still primarily use language models as their backbone, meaning that most of their reasoning and cognitive capabilities emerge from the language modality. As the authors also found, the model performed best under the CoT-TI condition. Introducing a new modality decoder may in fact reduce the model's cognitive and reasoning performance.\\n\\nFurthermore, I feel the authors have not provided sufficient evidence to demonstrate that the WCST test is appropriate for assessing the model's cognitive flexibility. As the authors correctly pointed out, \\\"Real-world scenarios, while valuable, often involve multiple cognitive processes including working memory, inhibitory control, and reasoning, making it challenging to isolate and measure cognitive flexibility specifically.\\\" However, this very complexity makes it necessary to develop a reasonable approach to assess cognitive flexibility in real-world settings, and simply applying the WCST to models is not adequate. Therefore, I will not change my score.\"}", "{\"title\": \"To Reviewer PFJz (Part-I)\", \"comment\": \"We sincerely thank you for your thorough evaluation of our paper. 
Your comments have highlighted important areas for improvement. We address each of your main concerns below.\\n\\n**1.The results are not diagnostic regarding the relative cognitive flexibility of VLMs/LLMs and humans because human participants are at ceiling. To make a meaningful comparison, a task or variant where human performance is not at ceiling is needed.**\\n\\n__Response:__\\nWe appreciate the concern about ceiling effects. However, we would like to clarify that in our experiments, cognitively healthy humans achieved a Categories Completed (CC) score of 4.73 (0.45), while Claude-3.5 Sonnet under the CoT-TI condition reached 5.00 (0.00), actually surpassing human performance. As extensively documented in cognitive neuroscience literature over past decades, patients with cognitive impairments often show significantly degraded performance on WCST, sometimes being unable to complete the task at all. This suggests that while inability to complete WCST reliably indicates cognitive flexibility deficits, successful completion - as demonstrated by both humans and VLLMs in our study - indicates at minimum an absence of significant impairment. We acknowledge that identifying tasks where human performance is not at ceiling would provide additional insights into the upper bounds of cognitive flexibility. Given WCST's status as the gold standard for assessing cognitive flexibility impairments in humans, our results suggest VLLMs can achieve at least basic human-level cognitive flexibility without significant deficits. We welcome the your suggestions for additional tasks or variants that could better differentiate performance at higher levels of cognitive flexibility.\\n\\n**2.Theoretical Motivation for Investigating Cognitive Flexibility in LLMs/VLLMs**\\n\\n__Response:__\\nWe appreciate your raising this important point about theoretical motivation. 
Recent work has extensively probed various cognitive capabilities of large language models, with studies like Strachan et al. (2024) showing human-level performance in theory of mind tests, while Fatemi et al. (2024) revealed limitations in complex temporal reasoning. Given that cognitive flexibility is widely recognized as a crucial capability in cognitive science literature, with its impairment severely affecting daily functioning, we believe examining this capacity in VLLMs provides valuable insights into model capabilities and limitations. The WCST, as a well-validated measure of cognitive flexibility, offers a rigorous framework for this assessment. Prior to our work, it was unclear whether VLLMs could successfully complete the WCST or might exhibit patterns similar to humans with cognitive flexibility impairments. Moreover, while studying neural mechanisms of cognitive flexibility in human brains presents significant challenges, VLLMs offer unprecedented access to internal representations and activations, potentially providing new tools for understanding cognitive flexibility itself. We thank you for prompting us to clarify these important theoretical foundations.\\n\\n**3.Motivation for Studying VLMs vs. LLMs**\\n\\n__Response:__\\nWe appreciate your question about studying VLLMs specifically. Our motivation stems naturally from the fact that the traditional WCST for human subjects has always been administered using visual cards (varying in color, shape, and number) alongside verbal instructions. Thus, it is most natural to evaluate language models that can similarly process both visual information and verbal prompts using the same testing paradigm. While we acknowledge that the task can be transformed into pure text descriptions - which we indeed explored in our experiments - starting with visual inputs maintains consistency with the classical neuropsychological literature. 
This design choice enabled us to systematically compare both visual and textual modalities, providing insights into how these models handle cognitive flexibility tasks across different input formats.\"}", "{\"comment\": \"I thank the authors for their replies. Unless the major issues with the current set of results and analyses can be addressed (e.g., ceiling effects, use of a single task, lack of statistical analyses, and lack of direct comparison to human data), I must keep my current score.\"}", "{\"title\": \"To Reviewer 9Pxs\", \"comment\": \"We sincerely appreciate the time and effort you dedicated to reviewing this paper and are grateful for your thoughtful and constructive feedback. Below, we address the main concerns you raised and look forward to receiving your further suggestions.\\n\\n**1. Concern: Lack of cognitive experiments and real subject data to demonstrate the reasonableness of VLLMs' simulation of cognitive impairment.**\\n\\n__Response:__\\nWe agree that correlation with real subject data would strengthen our findings and would like to respectfully note that our study does connect with clinical observations. As detailed in Section 4.5, our findings align with documented patterns in clinical literature: the models' simulated impairments correspond with established patient behaviors across different prefrontal dysfunctions, such as orbitofrontal damage (Stuss et al., 1983) and dorsolateral prefrontal cortex lesions (Stuss et al., 2000). While we agree that direct comparison with new clinical data would be valuable, collecting such data from patients with specific prefrontal impairments presents significant practical challenges. We believe our current approach of comparing simulated patterns with well-documented clinical findings provides meaningful insights while remaining feasible. 
We acknowledge that this comparison is currently limited to a brief discussion in the final paragraph of Section 4.5 and commit to expanding this analysis in the revised manuscript to provide more comprehensive connections with clinical literature.\\n\\n**2. Concern: Evaluation of models based on only one cognitive test (WCST), which may not fully represent performance in real-world scenarios.**\\n\\n__Response:__\\nWe appreciate your concern about the scope of cognitive testing. While there are indeed other tests for cognitive flexibility such as the Dimensional Change Card Sort (DCCS), Intra-Extra Dimensional Set Shift task (IED), and Cued switch tasks, some of these are either designed for children or are simpler than WCST. Real-world scenarios, while valuable, often involve multiple cognitive processes including working memory, inhibitory control, and reasoning, making it challenging to isolate and measure cognitive flexibility specifically. This is why we chose WCST, a well-established and extensively studied paradigm in cognitive neuroscience. However, we would greatly welcome your suggestions for any real-world tasks that could effectively assess cognitive flexibility with similar precision to WCST, as this would certainly enrich our understanding of VLLMs' capabilities in this domain.\\n\\n**3. Concern: Lack of visualizations in the paper.**\\n\\n__Response:__\\nThank you for the thoughtful suggestion regarding visualizations. We agree that additional visual representations could enhance intuitive understanding of our results. While we initially chose tabular presentations to ensure precise reporting of each metric, we appreciate your feedback and propose to convert Tables 2 and 3 into visual figures in the revised manuscript to improve readability while maintaining the same level of detail and precision. This modification would help readers better grasp the relationships and patterns in our findings.\"}" ] }
5ck9PIrTpH
MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs
[ "Andreas Opedal", "Haruki Shirakami", "Bernhard Schölkopf", "Abulhair Saparov", "Mrinmaya Sachan" ]
Large language models (LLMs) can solve arithmetic word problems with high accuracy, but little is known about how well they generalize to more complex problems. This is difficult to study, as (i) much of the available evaluation data has already been seen by the most capable models during training, and (ii) existing benchmarks do not capture how problem proofs may be arbitrarily complex in various ways. In this paper, we present a data-generation framework for evaluating LLMs on problems with arbitrarily complex arithmetic proofs, called MathGAP. MathGAP generates problem statements and chain-of-thought reasoning traces according to specifications about their arithmetic proof structure, enabling systematic studies on easy-to-hard generalization with respect to complexity of proof trees. Using MathGAP, we find that LLMs show a significant decrease in performance as proofs get deeper and wider. This effect is more pronounced in complex, nonlinear proof structures, which are challenging even for the most capable models. The models are also sensitive to simple changes in sentence ordering. However, they remain capable of solving some complex problems, suggesting that reasoning generalization is noisy.
[ "Arithmetic reasoning", "evaluation", "proofs", "large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=5ck9PIrTpH
https://openreview.net/forum?id=5ck9PIrTpH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wod4uoG6Kq", "tiYSG5pHHe", "r2g0V05x5N", "qmU7bYEz1z", "fE3noERkv1", "ZbKcSi9g6c", "XgU4OKjgej", "WYuNl94iWM", "Tlkr4bGR7X", "QPmAxtVmO3", "OXoczVGUX4", "GGyE3uxMGK", "G2jRt0YPwB", "7gCuOozgIl", "2SzPdRrrne" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1737524066639, 1732309750065, 1732308930816, 1732309370626, 1732307890107, 1734771049921, 1730504440582, 1730588291397, 1732315583037, 1730077398468, 1732308747140, 1732460297002, 1731372699580, 1732564434949, 1732308718595 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ], [ "ICLR.cc/2025/Conference/Submission10627/Area_Chair_nMtH" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_d9We" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_tyGU" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_tyGU" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_iPba" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_iPba" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_w9Zb" ], [ "ICLR.cc/2025/Conference/Submission10627/Reviewer_d9We" ], [ "ICLR.cc/2025/Conference/Submission10627/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We would like to thank all reviewers for their valuable feedback which has helped improve the paper. 
We have updated our draft to incorporate many of the points made by the reviewers; please see our responses below and the updated draft for details. Perhaps most notably, we have added experiments on OpenAI o1 as suggested by reviewer iPba.\"}", "{\"comment\": \"Thank you for the positive review and feedback. We are pleased that you found the findings to be interesting.\\n\\n> MathGAP examples seem to mostly involve a certain type of arithmetic problem, it would be interesting to expand to scope to include other types of reasoning problems\\n\\nRegarding the scope, we agree that it would be useful if our method was to be generalized to other kinds of reasoning problems as well. While our framework does not cover all types of arithmetic word problems, it follows a taxonomy that is commonly used in the learning sciences (Riley et al., 1983) and should thus have a good coverage of standard grade school math problems. We considered a particular family of interesting proof tree structures for this work; we agree that there exist other structures that could be analyzed, and our tool allows researchers to do this.\\n\\n> It would also be interesting to also study the finetuning generalization behavior of LLMs\\n\\nWe also agree that it is interesting to analyze how generalization behavior interacts with pretraining and finetuning in various ways. While beyond the scope of this paper, MathGAP allows for such analyses and it is something we are planning for the future.\"}", "{\"comment\": \"Thank you for your review and detailed feedback. We are glad that you agree that our method is a good way of evaluating math capabilities of LLMs. We address your suggested weaknesses below:\\n\\n1. [See responses below]\\n\\n> Despite the many LLMs being tested, the paper seems to miss the latest state-of-the-art LLMs, including [...] GPT-O1\\n\\nWe agree that including o1 is interesting so we have added some experiments to the paper. Please refer to App. 
C and references in the intro, conclusion, and 5.3. Some findings are: (i) we see a downward trend for nonlinear problems, (ii) but the performance depends on the number of output tokens we allow and (iii) it is highly sensitive to random orderings. See the updated version of the paper for more details \\u2013 hope you find the results interesting.\\n\\n> [...] Llama 3 is also already very competitive. Including the more recent models might change the conclusions. Therefore, the current conclusion seems a bit outdated.\", \"regarding_that_llama3_70b_already_performs_well_on_linear_problems\": \"We think there is a deeper point here beyond \\u201cwhat is model X\\u2019s performance on complexity Y\\u201d. The main difference between Llama3 8B and 70B is scale, and not training method or data. So if we believe that scale does not alone lead to perfect performance, we can conclude that there exists a set of more complex linear problems that the 70B model would fail on as well. The appeal of MathGAP is that you can generate such examples synthetically. We did not do so here particularly for Llama 70B and linear examples, since our primary aim was not to find the performance frontier of SoTA models. (Although as we note in our limitations section, our framework could be used in such a manner if desired.) Moreover, note that all models fail on complex nonlinear problems, demonstrating that MathGAP is future proof as an evaluation framework for capable models.\\n\\n2. That is a good point, and is partly why we tried to be careful in our claims on generalization. We added a sentence on this in the introduction. We are happy to make further edits if there are specific parts of the text where our interpretations of the results are misleading, please feel free to point us to any such instance.\\n\\n3. You raise a lot of interesting questions. That is great, because it is precisely what we are hoping to elicit with our evaluation framework. 
To answer your first set of questions would require control of the training/finetuning data, which is made possible for future studies using MathGAP. For some of the others we have added discussion to the text. For instance, there are more logical errors than arithmetic ones (see end of 5.4) and the quadratic relationship might be related to similar findings for RAG (see footnote 9).\\n\\n4. [See responses below]\\n\\n> The MATHGap dataset looks too constrained for me.\\n\\nThe predicates follow a standard taxonomy for grade school math problems (Riley et al., 1983). In addition, there exists a rate concept (e.g., \\u201cthere are 5 apples per basket\\u201d) which was not included in our analysis but will be added to the code. Moreover, the inference rules can be combined and instantiated in several interesting ways, most of which we did not consider in our experiments for this paper. \\n\\n> the linear problems seem too easy for the latest state-of-the-art models. \\n\\nWe disagree with the position that it is a weakness that some of the models do well on one particular type of problems, namely, linear ones. Instead, we view it as an interesting result which suggests that the models are able to do some form of generalization (we assume that higher depth linear problems are not very prevalent in the dataset but that would of course need to be verified to draw any conclusions). \\n\\n> Why were different predicates used in different parts of the analysis, i.e. comp and transfer for depth analysis, part-whole for width analysis, etc.?\\n\\nThe reason we have separate datasets for comp and transfer is that we wanted to see if there is any significant difference in performance, which there indeed turns out to be for Mixtral and Llama 8B. We use part-whole for the width experiments since the part-whole inference rule is the only one that supports an arbitrary number of premises \\u2013 see Table 2.\"}", "{\"comment\": \"Thank you for your valuable and positive feedback. 
We respond to your questions below.\\n\\n> Have you considered incorporating paraphrasing techniques or using LLMs to generate more diverse linguistic expressions to increase variety and challenge the models? Incorporating more arithmetic relations, irrelevant information, larger numbers, or units to enrich tasks could help increase the diversity as well.\\n\\nWe did consider it, but opted against it as our aim was not necessarily to construct a dataset that is maximally challenging, but to evaluate the effect of particular types of proof complexity on performance. Moreover, LLM paraphrasing might lead to problems that are unfaithful to their specifications. Some of these factors are definitely interesting to explore for future work though.\\n\\n> How do you ensure that the proof tree structures accurately reflect LLMs' reasoning processes? Is there a risk that models might not utilize the intended inference paths?\\n\\nThere are indeed no guarantees that language models will follow the same reasoning procedure, but this should not affect our evaluation as we only consider answer accuracy. However, by eyeballing some of the outputs, we do observe that they often follow the same post-order traversal as our annotations. We agree that investigating the reasoning traces that LLMs use in a principled manner is an interesting topic for future work, and we note that our framework would allow for such analysis to be carried out. \\n\\n> Can you provide more detailed analyses to determine whether performance drops are due to arithmetic computation errors, logical reasoning mistakes, or other factors?\\n\\nAgain by eyeballing it seems that the vast majority of errors are logical rather than computational. That is, the model would output an incorrect expression, rather than a correct expression that is incorrectly solved. 
We have added a comment on this at the end of 5.4 but we did not present any thorough analysis since parsing the nature of the mistakes is nontrivial due to high degree of variability in the outputs. We therefore leave that as an interesting direction for future work.\"}", "{\"metareview\": [\"(a) Scientific Claims and Findings:\", \"The paper introduces MathGAP, a framework for evaluating LLMs on arithmetic word problems with arbitrarily complex proofs. Key findings include:\", \"Most tested LLMs show significant performance degradation as proof complexity increases, particularly with deeper and wider proof structures\", \"Nonlinear proof structures are especially challenging, even for advanced models like GPT-4o\", \"Counter-intuitively, providing in-context examples from the same distribution as test data isn't always beneficial for performance\", \"Zero-shot prompting and demonstrating diverse but less complex examples sometimes perform similarly or better than IID examples\", \"Models show sensitivity to the ordering of problem premises, suggesting limitations in their reasoning capabilities\", \"(b) Strengths:\", \"Addresses Critical Evaluation Gaps: The work tackles two major issues in current LLM evaluation - data contamination and limited complexity benchmarking.\", \"Rigorous Framework: MathGAP provides a systematic way to generate problems with controlled complexity and corresponding chain-of-thought reasoning annotations. 
The proof trees are well-characterized by properties like depth, width, linearity, and ordering.\", \"Comprehensive Experimentation: The study includes extensive testing across multiple dimensions of proof complexity and various state-of-the-art LLMs, providing valuable insights into model behaviors and limitations.\", \"Well-Presented: The paper is clearly written with helpful visualizations and thorough explanation of the methodology.\", \"(c) Weaknesses:\", \"Limited Diversity: The synthetic nature of the problems may not capture the full linguistic and conceptual diversity of real-world mathematical problems.\", \"Scope of Analysis: The framework currently uses a relatively constrained set of predicates and inference rules, though these follow established taxonomies.\", \"Depth of Error Analysis: While some initial observations are made about logical vs arithmetic errors, a more detailed categorization and analysis of error types could provide deeper insights into model limitations.\", \"Reasoning Alignment Assumptions: The framework assumes LLMs reason in ways aligned with the proposed proof trees, though models might use different heuristics.\", \"(d) Reasons for Accept:\", \"The paper makes a significant contribution by providing a systematic framework for evaluating LLM reasoning capabilities, addressing important gaps in current evaluation methods.\", \"The findings provide valuable insights into the limitations of current LLMs in handling complex reasoning tasks.\", \"The framework is well-designed and enables future research on model behavior and improvement.\", \"The work has been improved through the review process, including additional experiments with newer models like o1 that strengthen the conclusions.\"], \"additional_comments_on_reviewer_discussion\": \"During the review process, reviewers raised concerns about the absence of recent SOTA models, depth of analysis, and framework constraints. 
The authors addressed these by adding experiments with the o1 model, clarifying distinctions from prior work, and providing additional analysis of error types and performance differences between linear and nonlinear problems. These improvements, along with the paper's significant contribution to systematic evaluation of LLM reasoning capabilities, led to its acceptance.\\nThe work is particularly valuable as it not only provides insights into current LLM limitations but also establishes a framework for future research on model behavior and improvement.\"}", "{\"summary\": \"Paper presents a framework for systematically studying the OOD reasoning capabilities of LLMs by programmatically generating training and test examples. Paper also presents analysis of how existing models behave for MathGAP problems under different settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality\", \"Current LLM reasoning benchmarks are mostly comprised of fixed datasets, which might be susceptible to data leakage and do not offer a way to change different characteristics of the datasets such as difficulty\", \"MathGAP offers a way to systematically change different aspects of the problem, such as the depth, width, and (non)linearity of the problem tree, and constructs new problems every time, allowing for more controlled experiments on OOD generalization\", \"Quality/Clarity\", \"Paper is well written and easy to understand\", \"Significance\", \"in addition to providing a new benchmark to study generalization, paper also conducts analysis of current LLMs behavior using the proposed benchmark\", \"Presents interesting findings such as some models performing better when given OOD in-context examples rather than IID in-context examples\"], \"weaknesses\": \"MathGAP examples seem to mostly involve a certain type of arithmetic problem; it would be interesting to expand the scope to include other types of reasoning problems\\n\\nIt would also
be interesting to study the finetuning generalization behavior of LLMs\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces MathGAP, a framework for evaluating LLMs' mathematical abilities. The paper notes two main motivations for creating MathGAP: 1) data leakage/contamination and 2) lack of comprehensive and systematic evaluation signals (e.g., OOD generalization, different complexity levels, etc.). The authors show that the performance of almost all LLMs drops as the difficulty (measured by depth/width of proof tree) increases. Moreover, the paper shows that LLMs are sensitive to the order in which input assumptions/axioms are presented, suggesting limitations in the perceived reasoning capabilities of LLMs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1- Given the unfortunate hype over LLMs, I believe it is crucial to have objective and solid evaluation metrics for LLMs and separate science from hype. This is exactly what this work is aiming to do.\\n2- The paper is extremely well written and the authors present the results very well.\\n3- Overall, the experiments are well-designed. However, they could have been extended to provide more insights (see below).\", \"weaknesses\": \"1- It is not clear how much \\\"new insight\\\" this work provides as many of the results have already been reported in other similar contexts. For instance, Dziri et al. also have a similar notion of graph depth/width, and show that by increasing depth and width for both ID/OOD setups the performance drops significantly (see Fig. 5 of [1] for example). However, some open questions have not been addressed yet.
For instance, from the results, it is not clear why the Llama3-70B model has a near-perfect score on \\\"linear depth\\\" scenarios: is it because of its training data, scale, or something else? Moreover, why is the performance gap between linear/non-linear benchmarks so significant? Do we know why the LLMs struggle so much in the nonlinear problems (e.g., Figure 3 of the paper) even when depth is small?\\n\\n2- I believe given the nature of MathGAP, the paper could have explored more fine-grained evaluation scenarios beyond the final accuracy with in-context prompts. For instance, according to the note in section 5, the number of shots is fixed to 12 and the footnote mentions that this is large enough according to another work. However, the impact of context on model performance on MathGAP is unclear: Does the number of shots have a similar impact on linear/non-linear setups? Why?\\n\\n3- The paper does not mention how large the overall set of proper names and values were. For instance, we know that LLMs have token bias and this can have an impact on results [2]. In addition, it is not clear what range of numbers was used and how many of the mistakes were arithmetic mistakes versus logical mistakes.\\n\\n4- The logical forms and inference rules used are relatively simple and may not capture more complex mathematical relationships. However, this is not a critical issue compared to other issues as we know that models still struggle in these simple cases as well.\\n\\nOverall, I think the work is borderline in its current form. I believe this work has potential to have a significant contribution if it can provide more insights and explanations that lead to better understanding of LLMs.\\n\\n[1] Dziri, Nouha, et al. \\\"Faith and fate: Limits of transformers on compositionality.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n[2] Jiang, Bowen, et al. 
\\\"A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners.\\\" arXiv preprint arXiv:2406.11050 (2024).\", \"questions\": \"See the questions from my comments above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Update During the Discussion Period\", \"comment\": \"I would like to thank the authors for their informative response that clarified several questions and comments. Hence, I raised my score.\"}", "{\"summary\": \"This paper proposes a math proof benchmark, called MATHGap, by constructing proof trees with predefined predicates and inference rules. Evaluation on a number of LLMs shows that LLM's performance decreases as the proof complexity increases. Many LLMs do not do well in these problems.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The authors did a good job elaborating the method. Section 3 provides a good background for readers to understand the method.\\n2. Constructing out-of-domain math problems using the logical form is a good way to evaluate the math capabilities of LLMs.\", \"weaknesses\": \"1. Despite the many LLMs being tested, the paper seems to miss the latest state-of-the-art LLMs, including Llama 3.1, Llama 3.2 (since Llama 3.2 is only recently released, I understand that the authors did not have time to include it), and GPT-O1. This is rather important, because the linear problems were pretty well-resolved by GPT-4O, and Llama 3 is also already very competitive. Including the more recent models might change the conclusions. Therefore, the current conclusion seems a bit outdated.\\n\\n2. The authors seem to relate the failure of LLM's performance on the more complicated proofs to the OOD generalization failures. However, this is not necessarily the case because it may as well just be the performance degradation due to the problems being more challenging. 
In many cases, the in-distribution examples do not improve the performance over OOD examples, which is evidence that the performance drop is due to increased difficulty, not the generalization gap.\\n\\n3. The analysis in the paper looks a bit shallow. It does not show much insight into why and how the LLMs fail. For example, is the performance degradation as complexity increases simply due to the more proof steps? Namely, it may as well be the case that the error rate of each step is constant; it is just the increased number of steps that drive down the overall success rate. In this case, the model scales fine and it is just the success criterion that becomes more stringent. Also, what are the typical failure modes? Are the errors simple arithmetic errors, reasoning errors, or errors in parsing the final results? Why does the in-context example fail to improve the performance? Why is there a nonlinear relationship between the distance of the ordering and the performance? It would make the paper stronger if the authors could perform more experiments to answer these research questions.\\n\\n4. The MATHGap dataset looks too constrained for me. It only contains 5 predicates and 5 inference rules. As a result, the linear problems seem too easy for the latest state-of-the-art models. It would be nice to study how the performance scales with the number of predicates and inference rules, and whether the LLMs can retrieve the correct inference rules where there are many.\", \"questions\": \"I would appreciate if the authors could investigate the research questions raised in the 'weaknesses' section. In addition -\\n\\nWhy were different predicates used in different parts of the analysis, i.e. 
comp and transfer for depth analysis, part-whole for width analysis, etc.?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> I believe this work has potential to have a significant contribution if it can provide more insights and explanations that lead to better understanding of LLMs.\\n\\nWe thank you for your feedback which has helped improve the paper and hope that we have addressed some of your concerns in our responses above. On another note, we have added some experiments on the o1 model which are presented in the current draft (see App. C); we find, e.g., that o1 is very sensitive to premise orderings for complex problems. We would also like to emphasize that one of the main contributions of the paper is the evaluation framework itself, which will be open-sourced and we think might lead to even more new insights beyond the present work. We have already started working on some follow-ups, e.g., looking at how generalization interacts with training and whether a model can learn to produce student misconceptions.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"I want to thank the authors for their response. I adjusted my score accordingly. Still, the in-depth analysis in the paper is sporadic and dispersed. It would be better if the paper had a more structured in-depth analysis.\"}", "{\"summary\": \"This paper introduces MathGAP, a framework designed to evaluate large language models (LLMs) on mathematical word problems requiring proofs of arbitrary complexity. 
The authors address two critical issues in current evaluations: data contamination due to overlapping training and test sets, and benchmarks not reflecting the arbitrary complexity of problem proofs.\\n\\nMathGAP uses proof trees characterized by properties such as depth, width, linearity, and ordering, enabling the generation of synthetic problems with controlled complexity and corresponding chain-of-thought (CoT) reasoning annotations. By systematically varying these properties, the framework assesses how well various LLMs generalize to problems more complex than those encountered during training or provided in-context.\\n\\nThe authors conduct comprehensive experiments that reveal model performance significantly declines as proof complexity increases, particularly for nonlinear proofs. Interestingly, providing in-context examples from the same distribution as the test set does not always enhance performance. The study sheds light on the limitations of current LLMs in handling complex reasoning tasks and offers insights into their generalization capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Addresses a Critical Gap: Tackles the underexplored area of evaluating LLMs on problems with arbitrary proof complexity. Directly addresses issues of data contamination and limited complexity in existing benchmarks.\", \"formalism_and_clarity\": \"Utilizes proof trees and logical forms to precisely characterize problem complexity. Well-structured and clearly written, with helpful figures enhancing understanding.\", \"comprehensive_experiments\": \"Conducts extensive experiments across multiple dimensions of proof complexity (depth, width, linearity, ordering). Evaluates various state-of-the-art LLMs, providing valuable insights into model behaviors and limitations.\", \"insights_into_llm_limitations_and_in_context_learning\": \"Reveals that model performance degrades with increasing complexity. 
Provides meaningful insights into the impact of different in-context learning strategies. Highlights the models' sensitivity to problem structure and ordering.\", \"weaknesses\": \"Synthetic Data Limitations and Linguistic Diversity: Synthetic problems may not capture the full linguistic and conceptual diversity of real-world problems. Reliance on templates could lead to repetitive linguistic patterns.\", \"assumption_of_reasoning_alignment\": \"Assumes LLMs reason in ways closely aligned with the proposed proof trees. Models might use heuristics or patterns not captured by the formalism, affecting evaluation accuracy.\", \"limited_error_analysis\": \"Lacks a deep analysis of error types made by the models. More detailed error categorization could provide insights into reasoning limitations (e.g., arithmetic vs. logical errors).\", \"questions\": \"Have you considered incorporating paraphrasing techniques or using LLMs to generate more diverse linguistic expressions to increase variety and challenge the models? Incorporating more arithmetic relations, irrelevant information, larger numbers, or units to enrich tasks could help increase the diversity as well.\\n\\nHow do you ensure that the proof tree structures accurately reflect LLMs' reasoning processes? Is there a risk that models might not utilize the intended inference paths?\\n\\nCan you provide more detailed analyses to determine whether performance drops are due to arithmetic computation errors, logical reasoning mistakes, or other factors?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response. I will maintain my score.\"}", "{\"comment\": \"Thank you for the positive, encouraging comments and feedback. We are pleased that you appreciate this line of work and that you found our draft to be well written. We respond to the weaknesses below:\\n\\n1. 
[See responses below]\\n\\n> For instance, Dziri et al. also has a similar notion of graph depth/width, and shows that by increasing depth and width for both ID/OOD setups the performance drops significantly (see Fig. 5 of [1] for example)\\n\\nDziri et al.\\u2019s analysis is interesting as well but their computation graphs are different. Most notably, their nodes are labeled with real values, while ours are labeled with richer annotations about semantics (logical forms). Thus, our framework enables more granular analysis. For example, we can distinguish between comparison type problems and transfer type problems that have the same computation graphs. The way they define width is also different. As a consequence, even though the results point in similar directions, the implications are different. Our approach is less tied to specific implementations of elementary computational algorithms such as the O(nm) long-form multiplication algorithm chosen for the analysis by Dziri et al. \\n\\n> It is not clear how much \\\"new insight\\\" this work provides as many of the results have already been reported in other similar contexts.\\n\\nMore generally, our proof tree annotations encode more fine-grained information than previous similar approaches we have seen in the literature. The insights on axiom movements, performance on nonlinear vs. linear problems, and \\u201crange prompt\\u201d (similar to curriculum learning \\u2013 see new draft) are new to the best of our knowledge, but we agree this can be emphasized better and we have tried to do so in our new draft. \\n\\n> Do we know why the LLMs struggle so much in the nonlinear problems (e.g., Figure 3 of the paper) even when depth is small?\\n\\nThat's an interesting question! Nonlinear problems with depth 3 have the same number of leafs (i.e., width) as linear problems with depth 9 and width 10. Comparing the graphs we see a lot lower performance for nonlinear ones with the same depth/width in most cases. 
This suggests that what makes the problem harder is either that the intermediate conclusions need to be kept longer in memory (a feature of nonlinear problems) or that the comp-eq logical form is more difficult. We looked at some of the reasoning traces and found support for both of these explanations. We did not comment on this in the submitted version but it is a nice addition, so we have added a discussion at the end of 5.3.\\n\\n> it is not clear why the Llama3-70B model has a near-perfect score on \\\"linear depth\\\" scenarios: is it because of its training data, scale, or something else?\\n\\nThe reason why Llama3 70B performs perfectly is most likely scale, since it does not differ from Llama 8B in training method or data. We have now included a comment on this in the draft.\\n\\n2. We actually initially considered measuring the effect of the number of in-context examples in our experiments. Some of our preliminary tests showed that after a small number of examples were shown (< 12), increasing this number did not improve performance. Given the findings in other papers such as Agarwal et al. demonstrating similar findings in a more rigorous setting, we decided not to investigate this experimental variable further. Thank you for raising this point \\u2013 we have clarified it in the draft, see footnote 7.\\n\\n3. [See responses below] \\n\\n> The paper does not mention how large the overall set of proper names and values were.\\n\\nThanks for spotting this! By default our data set of names contains 52 English-language names. For problem types requiring more agents, we use a separate dataset containing 4945 English-language names. The quantities in the axioms of our problems are instantiated in the range [2,20] and constrained such that no intermediate/final quantity of any predicate may exceed 1000 (including the answer). 
Due to the required arithmetic computations being addition between relatively small numbers, it is unlikely that the degradation in accuracy we observe is due to pure arithmetic errors. We have clarified these points in the draft (see footnote 5) and also added a note on the token bias to the limitations.\\n\\n> how many of the mistakes were arithmetic mistakes versus logical mistakes.\\n\\nParsing the nature of the mistakes is definitely interesting, but nontrivial to do. We therefore leave that for future work. However, by eyeballing it seems that the vast majority of errors are logical rather than numerical \\u2013 see end of 5.4.\\n\\n4. Yes, as you point out, models perform errors even in these simple settings which we can study in a principled way using MathGAP. Moreover, the logical forms follow a commonly accepted taxonomy of arithmetic concepts (Riley et al., 1983) and should thus have a good coverage of standard grade school math problems.\"}" ] }
5cYTAcZAgt
SAN-Diff: Structure-aware noise for super-resolution diffusion model
[ "Chengcheng Wang", "Zhiwei Hao", "Yehui Tang", "Jianyuan Guo", "Yujie Yang", "Chang Xu", "Kai Han", "Yunhe Wang" ]
Recent advances in diffusion models, like Stable Diffusion, have been shown to significantly improve performance in image super-resolution (SR) tasks. However, existing diffusion techniques often sample noise from just one distribution, which limits their effectiveness when dealing with complex scenes or intricate textures in different semantic areas. With the advent of the segment anything model (SAM), it has become possible to create highly detailed region masks that can improve the recovery of fine details in diffusion SR models. Despite this, incorporating SAM directly into SR models significantly increases computational demands. In this paper, we propose the SAN-Diff model, which utilizes the fine-grained structure information from SAM in the process of sampling noise to improve image quality without additional computational cost during inference. During training, we encode structural position information into the segmentation mask from SAM. The encoded mask is then integrated into the forward diffusion process by using it to modulate the sampled noise. This adjustment allows us to independently adapt the noise mean within each corresponding segmentation area. The diffusion model is trained to estimate this modulated noise. Crucially, our proposed framework does NOT change the reverse diffusion process and does NOT require SAM at inference. Experimental results demonstrate the effectiveness of our proposed method, which exhibits the fewest artifacts compared to other generative models and surpasses existing diffusion-based methods by up to 0.74 dB in terms of PSNR on the DIV2K dataset.
[ "Diffusion Model", "Image Super-Resolution" ]
Reject
https://openreview.net/pdf?id=5cYTAcZAgt
https://openreview.net/forum?id=5cYTAcZAgt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xt9OumOkch", "xAsz0nyRAM", "wAdEC1Bkue", "tv3jFzJuz3", "nGJLlY69Of", "lsxnRdTI0h", "heMC0hgJyA", "hGnwSh1Rjt", "gRavVw5cqv", "fLP1GZGDSY", "cY0MUalkQ6", "aFk5jQgPZl", "aCGJlc83Cf", "OVvkobPZ0t", "Nn1d5nQvsI", "KG9y53kcgm", "Goj59sqOxo", "BBMtSBnuVx", "AixCAaPT7Q", "7vQcvNAso8", "3s22fYVFl8", "2FV4xHNOzp", "16nn1NYDnC", "0yOlzOFGdh" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729432009931, 1732084843254, 1732172960327, 1732084972669, 1730560911383, 1732084598574, 1732966834850, 1732519356389, 1734445353136, 1732084760634, 1732945878222, 1732084567841, 1732085110302, 1732519446928, 1737523470375, 1732673394555, 1730445440482, 1732084792725, 1732085068803, 1732187792246, 1732187712702, 1732168029496, 1732673361489, 1732249577179 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_TbBy" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_TbBy" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_mefZ" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Area_Chair_FGB5" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_TbBy" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_aaKr" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_TbBy" ], [ "ICLR.cc/2025/Conference/Submission1825/Authors" ], [ "ICLR.cc/2025/Conference/Submission1825/Reviewer_aaKr" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a new model, SAN-Diff, to enhance the performance of image super-resolution (SR) using diffusion models. By leveraging the Segment Anything Model (SAM), SAN-Diff incorporates fine-grained structural information into the noise sampling process, thereby improving the recovery of fine details in images. During training, the model encodes structural position information from SAM into the segmentation mask, which is then used to modulate the sampled noise. The proposed framework integrates this structural information without increasing computational costs during inference.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The preliminary section is commendably clear and comprehensive, effectively setting the stage for the subsequent content.\\n2. This paper excels in its theoretical foundation, for providing a meticulous and detailed explanation of the adjusted forward and reverse diffusion processes.\\n3. The combination of the renowned Segment Anything Model (SAM) and popular diffusion models is particularly interesting.\", \"weaknesses\": \"1. Almost all of the quotes in the article are in the wrong format. Please refer to the difference between `\\\\citet` and `\\\\citep`.\\n2. 
For the super-resolution task, it is suggested to provide general experiments with multiple scales, such as $\\\\times 8 $.\\n3. Only SRDiff is compared in Table 1, NOT various diffusion-based image super-resolution methods.\", \"questions\": \"1. Why not compare the proposed method with common baselines like EDSR and other widely used models?\\n2. MANIQA, which is a metric for no-reference image quality assessment, seems less common for this super-resolution task. Why not use the more popular LPIPS metric?\\n3. Since SAM is not used during inference, how is the fine-grained structure information from SAM utilized at inference time?\\n4. To the best of our knowledge, generative models for super-resolution often obtain lower performance on objective metrics (such as PSNR, SSIM) but higher on subjective ones. Therefore, the higher PSNR is not persuasive.\\n5. Results in Table 2 (such as SPSR and those of other baselines) are significantly lower than those reported in their original articles. It seems very strange.\\n6. Some diffusion models like Stable Diffusion (mentioned in the abstract) or ControlNet can also inject structure information (like semantic maps) into the diffusion process. Why not compare the results of these models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Weaknesses:**\\n\\n1. > Almost all of the quotes in the article are in the wrong format. Please refer to the difference between `\\\\citet` and `\\\\citep`.\\n\\n Thank you for your suggestion. We have already corrected this error in the revised version.\\n\\n2. > For the super-resolution task, it is suggested to provide general experiments with multiple scales, such as \\u00d78.\\n\\n Our experimental setup is based on ESRGAN, SRDiff, and DiffBIR, which only tested the X4 setting in their experiments.
Therefore, we report the comparison of our method with other models in the X4 setting in Table 2 of the paper. Additionally, in Appendix Table 7, we present the results of SRDiff trained by us and the results of our method in the X2 setting.\\n\\n The following are the results for the X8 setting.\\n\\n | model | Urban100 | | | | BSDS100 | | | | Manga109 | | | | General100 | | | | DIV2K | | | |\\n | --------------- | -------- | ------ | ------- | ----- | ------- | ------ | ------- | ------ | -------- | ------ | ------- | ----- | ---------- | ------ | ------- | ------ | ------ | ------ | ------- | ------ |\\n | | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 |\\n | SRDiff (X8) | 20.95 | 0.5787 | 0.4375 | 46.71 | 22.74 | 0.5295 | 0.3294 | 155.69 | 23.10 | 0.7471 | 0.3919 | 24.79 | 24.93 | 0.6682 | 0.3756 | 105.39 | 24.54 | 0.6628 | 0.3533 | 8.3945 |\\n | SAN-Diff (X8) | 21.36 | 0.5913 | 0.4188 | 43.75 | 23.29 | 0.5500 | 0.3253 | 154.13 | 23.27 | 0.7520 | 0.3959 | 25.12 | 25.34 | 0.6836 | 0.3648 | 102.17 | 25.18 | 0.6790 | 0.3430 | 8.5189 |\\n\\n3. > Only SRDiff is compared in the Table 1. NOT various diffusion-based image super-resolution methods.\\n\\n In Table 2, we report the results of two types of super-resolution models: GAN-based models (SRGAN, SFTGAN, ESRGAN, USRGAN, SPSR, BSRGAN) and diffusion-based models (LDM, StableSR, DiffBIR, SRDiff). We also conduct a fair comparison with our method across multiple metrics. Please refer to Table 2 for the detailed numerical results.\"}", "{\"comment\": [\"## For Question\", \"The values of the metrics reported in this paper appear to **diverge significantly** from the generally accepted results. 
For instance, recent methods typically report PSNR values (on the Y-channel) of approximately 27-28 for the [Urban100 (x4) dataset](https://paperswithcode.com/sota/image-super-resolution-on-urban100-4x), around 31-33 for the [Manga109 (x4) dataset](https://paperswithcode.com/sota/image-super-resolution-on-manga109-4x), about 28-29 for the [General100 (x4) dataset](https://paperswithcode.com/sota/image-super-resolution-on-general100-4x), and roughly 28 for the [BSD100 (x4) dataset](https://paperswithcode.com/sota/image-super-resolution-on-bsd100-4x-upscaling). Different evaluation codes may lead to small discrepancies, but they cannot account for such a huge deviation (>2 dB for the same model and dataset).\", \"Where is Figure 9 in the appendix? This paper only has 7 figures, including the appendix.\"]}", "{\"comment\": \"### **Questions:**\\n\\n1. > Why not compare the proposed method with common baselines like EDSR and other widely used models?\\n\\n Thank you for providing this valuable comment. In general, image super-resolution approaches can be categorized into distance-based (like EDSR) and generative-based (like diffusion-based and GAN-based) methods. Since our method focuses on improving diffusion-based SR models, we primarily compare it with generative SR models (GAN-based and diffusion-based) rather than distance-based SR models like EDSR.\\n\\n As the official pre-trained weights for EDSR are no longer available, we used the pre-trained weights provided by BasicSR for testing. 
Below are the comparison results between our model and EDSR.\\n\\n | model | Urban100 | | | | BSDS100 | | | | Manga109 | | | | General100 | | | | DIV2K | | | |\\n | ---------- | -------- | ------ | ------- | ----- | ------- | ------ | ------- | ----- | -------- | ------ | ------- | ----- | ---------- | ------ | ------- | ----- | ------ | ------ | ------- | ------ |\\n | | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 | PSNR-Y | SSIM | MANIQA\\u2191 | FID \\u2193 |\\n | EDSR_L | 24.43 | 0.7500 | 0.4576 | 5.86 | 26.34 | 0.7142 | 0.3231 | 91.85 | 23.99 | 0.8243 | 0.4256 | 5.84 | 27.22 | 0.7971 | 0.4204 | 54.61 | 30.75 | 0.8442 | 0.3555 | 0.3303 |\\n | EDSR_M | 24.16 | 0.7374 | 0.4297 | 7.23 | 26.29 | 0.7099 | 0.3037 | 96.27 | 24.05 | 0.8214 | 0.4044 | 6.24 | 27.14 | 0.7944 | 0.3986 | 55.39 | 30.44 | 0.8366 | 0.3428 | 0.3456 |\\n | SAN-Diff | 25.54 | 0.7721 | 0.6709 | 4.53 | 26.47 | 0.7003 | 0.6667 | 60.81 | 29.43 | 0.8899 | 0.6046 | 2.40 | 30.29 | 0.8353 | 0.6346 | 39.15 | 29.34 | 0.8108 | 0.5959 | 0.3809 |\\n\\n2. > MANIQA, which is a metric for no-reference image quality assessment, seems less common for this super-resolution task. Why not use the more popular LPIPS metric?\\n\\n Thank you for providing this valuable comment. We chose to use MANIQA instead of LPIPS to evaluate our method because we believe that MANIQA provides a more accurate assessment of super-resolution image quality. MANIQA evaluates image quality from multiple dimensions [1], including color consistency, structural details, and blurriness. In contrast, LPIPS primarily focuses on perceptual similarity between images. While LPIPS effectively measures differences in image features, it is less capable of analyzing overall image quality. 
Therefore, to accurately assess the differences in texture and structure between the reconstructed images and HR images, we opted to use MANIQA as a reference metric.\\n\\nTo provide a more comprehensive and objective evaluation of our method, we combined subjective metrics (LPIPS), objective metrics (PSNR), and the number of artifacts into a single figure, which you can find in **[LPIPS.png](https://www.helloimg.com/i/2024/11/20/673d8eb72db3f.png)** or in **Figure 9** of the appendix; it illustrates a holistic assessment of the model's performance. These results highlight the outstanding performance of our model under the combined evaluation criteria.\\n\\n \\n\\n [1] Maniqa: Multi-dimension attention network for no-reference image quality assessment. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*.\"}", "{\"summary\": \"This paper proposes SAN-Diff, a structure-aware noise modulation method for super-resolution (SR) diffusion models. SAN-Diff uses the Segment Anything Model (SAM) to generate fine-grained segmentation masks, which guide the noise modulation process, allowing the diffusion model to apply distinct noise distributions to different areas of the image. The Structural Position Encoding (SPE) module is introduced to integrate position information into these masks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. By modulating noise based on segmentation areas, SAN-Diff enhances the preservation of local structures, particularly useful for complex scenes and intricate textures.\\n2. SAN-Diff uses SAM-generated masks only during training, avoiding computational burdens during inference while retaining structure-level detail restoration.\", \"weaknesses\": \"1. Lack of Novelty in Methodology: The SAN-Diff approach heavily relies on existing methods, including the Segment Anything Model (SAM) for segmentation and structural positional encoding (SPE) techniques. 
This reliance limits the novelty of the proposed methodology, without introducing fundamentally new innovations tailored for super-resolution tasks.\\n2. Limited Novelty in Designing Loss Function: The loss function shown in Eq.5 mainly follows a standard existing MSE approach focused on noise prediction. Only the structurally positioned embedded mask is just added to it.\\n3. Limited Exploration of Alternative Modulation Strategies: SAN-Diff exclusively uses a segmentation-mask-driven approach for noise modulation without investigating other methods, such as feature-based techniques. \\n4. No Discussion on the Time Complexity of SPE: The paper mentions negligible cost but does not clarify the computational time taken by SPE module for generating masks before training. \\n5. Dependency on High-Resolution Training Data: The method requires SAM-generated segmentation masks for all high-resolution (HR) training images, which may not always be feasible in datasets lacking extensive HR samples or where SAM\\u2019s segmentation quality is inconsistent. This reliance could restrict SAN-Diff\\u2019s application in low-data or low-resolution scenarios.\", \"questions\": [\"Could the authors mention if there are any unique aspects in how SAM and SPE are integrated or applied that are tailored for this task?\", \"Could the authors try using the other losses like structural loss instead of standard MSE loss for better results?\", \"Could the authors mention if they tried alternative feature extraction techniques like -depth maps, CNN features etc. instead of segmentation masks.\", \"Could the authors provide concrete timing measurements or complexity analysis for the SPE module. This would help to understand the practical implications of using this approach.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Questions:**\\n\\n1. 
> Could the authors mention if there are any unique aspects in how SAM and SPE are integrated or applied that are tailored for this task?\\n\\n We leverage the powerful fine-grained segmentation capabilities of SAM to modulate the noise distribution across different regions. The fundamental concept behind the SPE module is to assign a unique value to each segmentation area. The segmentation mask generated by SAM comprises a series of 0-1 masks, where each mask corresponds to an area in the original image sharing the same semantic information. To achieve noise modulation based on mask information, we merge these masks using the SPE module. Inspired by successful practices in language models, we use RoPE to encode the masks at different positions, embedding the relative positional information of different segmentation areas into the noise.\\n\\n2. > Could the authors try using the other losses like structural loss instead of standard MSE loss for better results?\\n\\n Our modification to the loss function lies not in the MSE metric itself, but in the object to which the distance is measured. While adopting other metrics, such as MAE, may also be effective, this is beyond the scope of our paper\\u2019s focus. Furthermore, to ensure a fair comparison with existing works [1] that consistently use the MSE loss, we have adhered to the same configuration in our experiments.\\n\\n [1] Denoising diffusion probabilistic models. *Advances in neural information processing systems*, *33*, 6840-6851.\\n\\n3. > Could the authors mention if they tried alternative feature extraction techniques like -depth maps, CNN features etc. instead of segmentation masks.\\n\\n Our goal is to enhance the accuracy of structural and texture reconstruction in diffusion models without compromising their advantages in subjective metrics. The most intuitive approach is to leverage the structure-level ability of segmentation models to distinguish different regions. 
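As a rough, self-contained sketch of the mask-merging idea from answer 1 above — each 0-1 mask produced by the segmentation model receives a unique positional code before the masks are merged into one map — consider the following toy numpy version. All function names are ours, and the plain sinusoidal code is only a simplified stand-in for the RoPE-based encoding used by the actual SPE module:

```python
import numpy as np

def positional_code(idx):
    # Simplified sinusoidal code for region index idx; the real SPE module
    # uses a RoPE-style encoding, which this only loosely imitates.
    return np.cos(float(idx)), np.sin(float(idx))

def merge_masks(masks):
    """Merge non-overlapping 0-1 segmentation masks (e.g. from SAM) into one
    2-channel map, assigning every region a distinguishable code pair."""
    enc = np.zeros((2,) + masks[0].shape, dtype=np.float64)
    for idx, m in enumerate(masks, start=1):
        c, s = positional_code(idx)
        enc[0] += c * m  # each region keeps its own code after merging
        enc[1] += s * m
    return enc
```

The sketch only shows how distinct regions can keep distinct codes after merging; how the resulting map is normalized and injected into the noise is specified in the paper, not here.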
We believe that combining depth maps or CNN features with our method could further improve the model's performance, and we look forward to related research in the future.\\n\\n4. > Could the authors provide concrete timing measurements or complexity analysis for the SPE module. This would help to understand the practical implications of using this approach.\\n\\n Synthesizing SPE masks on the DIV2K dataset using the SPE module takes only 15 minutes. Moreover, these masks do not need to be regenerated for each training session. A single inference using the segmentation model on the training dataset, followed by saving the results, allows these masks to be reused across different training runs.\"}", "{\"comment\": \"Thank you for your reply and constructive questions. There are a few foundational concepts that need clarification.\\n\\nCurrent super-resolution (SR) models can be classified into two categories:\\n\\n- **Distance-based SR models:** Examples include EDSR[3], SwinIR[4], HAT[5], and DRCT[1]. These typically use distance loss (e.g. $L_1$ pixel loss) to train the model.\\n- **Generate-based SR models:**\\n - **GAN-based models**: Examples like ESRGAN[6], which usually combine distance loss with additional perception loss, and are trained using a generative adversarial approach.\\n - **Diffusion-based models**: Examples include SRDiff[7], LDM[8], and StableSR[2], which generally use MSE loss and are trained through the denoising process.\\n\\nThe difference in training methods results in distinct advantages for distance-based and generate-based models in terms of image reconstruction. **Distance-based models**, due to the constraint of distance loss, **tend to reconstruct images with more accurate structures but less defined textures**. On the other hand, **generate-based models**, due to their different training processes and losses, **tend to produce images with clearer textures but may exhibit distorted structures**. 
Therefore, in the field of single image super-resolution (SISR), these two categories are typically compared separately.\\n\\nFor example, taking **DRCT [1]**, the highest-scoring model on the [Urban100](https://paperswithcode.com/sota/image-super-resolution-on-urban100-4x) benchmark, it primarily compares models like EDSR[3], SwinIR[4], and HAT[5] in their paper, which are all **distance-based SR models**. In contrast, **StableSR[2]** compares models like BSRGAN[9] and LDM[8] in paper, which are **generate-based SR models**.\\n\\nWe would also like to reiterate that comparing our method with distance-based SR models on their stronger metrics (PSNR and SSIM) is not a fair comparison. **The conventional and fair approach is to compare our method with generate-based SR models across all relevant metrics**. Our goal is to improve the PSNR and SSIM scores of diffusion-based SR models, thereby enhancing the accuracy of structure and texture reconstruction. Our experimental results validate the effectiveness of our approach.\\n\\n---\\n\\nRegarding the discrepancy in metrics compared to the original papers, we would like to reiterate that **all the data reported in our paper** (whether for our method or others) **can be consistently reproduced using open-source tools.**\\n\\nThe field of super-resolution has a long development history, and over time, there have been numerous changes in the evaluation settings and scripts used to assess model performance. This makes it difficult to directly compare models using the data reported in original papers in an intuitive and fair manner. Therefore, we have chosen to use a unified open-source evaluation tool to fairly assess all methods. This allows readers to make an intuitive and accurate comparison between our method and the others without needing to switch between different testing settings.\\n\\n---\\n\\n[1] DRCT: Saving Image Super-resolution away from Information Bottleneck[J]. 
arXiv preprint arXiv:2404.00722, 2024.\\n\\n[2] Exploiting diffusion prior for real-world image super-resolution[J]. International Journal of Computer Vision, 2024: 1-21.\\n\\n[3] Enhanced deep residual networks for single image super-resolution[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017: 136-144.\\n\\n[4] Swinir: Image restoration using swin transformer[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 1833-1844.\\n\\n[5] Activating more pixels in image super-resolution transformer[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 22367-22377.\\n\\n[6] Enhanced super-resolution generative adversarial networks[C]//Proceedings of the European conference on computer vision (ECCV) workshops. 2018: 0-0.\\n\\n[7] Srdiff: Single image super-resolution with diffusion probabilistic models[J]. Neurocomputing, 2022, 479: 47-59.\\n\\n[8] High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.\\n\\n[9] Designing a practical degradation model for deep blind image super-resolution[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 4791-4800.\"}", "{\"comment\": \"Dear mefZ,\\n\\nThank you for your constructive comments and valuable suggestions to improve this paper. \\u200bIf you have any more questions, we would be glad to discuss them with you.\\n\\nThank you very much.\\n\\nBest regards, Author\"}", "{\"metareview\": \"This paper proposes SAN-Diff, a structure-aware noise modulation method for super-resolution (SR) diffusion models. SAN-Diff uses the Segment Anything Model (SAM) to generate fine-grained segmentation masks, which guide the noise modulation process, allowing the diffusion model to apply distinct noise distributions to different areas of the image. 
Additionally, the paper introduces the Structural Position Encoding (SPE) module to integrate position information into these masks.\\n\\nSAN-Diff enhances the preservation of local structures, particularly useful for complex scenes and intricate textures, by modulating noise based on segmentation areas. SAN-Diff uses SAM-generated masks only during training, avoiding computational burdens during inference while retaining structure-level detail restoration. The paper provides a detailed explanation of the adjusted forward and reverse diffusion processes. The combination of the Segment Anything Model (SAM) and popular diffusion models is interesting. \\n\\nHowever, the paper's evaluation presents several limitations:\\n\\n- Over-reliance on SAM: The method assumes consistently accurate segmentation from SAM, neglecting potential inaccuracies. While the revised version partially addresses this in Appendix C.3, a more central discussion of SAM's limitations and their impact on SAN-Diff is crucial. Visualizing SAM-generated masks for specific examples would further strengthen this analysis.\\n\\n- Inconsistent Baseline Comparisons: The reported performance of baseline methods appears significantly lower than in their original publications, raising concerns about the consistency of metric computation. Additionally, the chosen baselines seem relatively outdated, and the use of the less common MANIQA metric instead of the widely adopted LPIPS metric further hinders meaningful comparisons.\\n\\n- Unclear Benefits: While the proposed idea is interesting, the evaluation falls short of convincingly demonstrating the true advantages of SAN-Diff. The limitations mentioned above make it difficult to accurately assess its performance against recent approaches in the field.\\n\\nIn conclusion, SAN-Diff presents a promising direction for structure-aware super-resolution. 
However, addressing the outlined concerns regarding SAM's reliability, baseline comparisons, and a more robust evaluation would significantly enhance the paper's contribution and clarity.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers raised several concerns, including the novelty of the proposed methodology, the reliance on SAM, the result inconsistencies (metrics), and the lack of comparisons with common baselines. The authors responded to these concerns by clarifying the novelty of the SPE module, explaining the reasons for using SAM, and providing additional experimental results. However, some reviewers remained unconvinced, particularly regarding the novelty and result inconsistencies. In my final decision, I have taken into account the reviewers' concerns and the authors' responses. I have also considered the paper's strengths and weaknesses, as well as its missing components.\"}", "{\"comment\": \"### **Weaknesses:**\\n> the limitations of relying on SAM as the sole provider of segmentation masks\\n\\nThank you for your valuable comments. Compared to the original diffusion model without structural guidance, masks generated by existing SAM models can improve performance, as demonstrated in our experimental results.\\n\\nHowever, the performance of our model does depend on the quality of the segmentation masks, as they capture the structural information of the corresponding image. Our model benefits from SAM's fine-grained segmentation capability and its strong generalization ability across diverse objects and textures in the real world. Nevertheless, the performance of our model is also limited by the capabilities of the segmentation model itself. For instance, SAM may struggle to identify structures with low resolution in certain scenes. 
While the model can partially mitigate this issue by learning from a large amount of data during training, it is undeniable that higher segmentation precision (e.g., SAM2) and finer segmentation granularity would significantly enhance the performance of our approach. This is one of the potential limitations of our proposed framework, and we have included this discussion in the revised version of our paper in Appendix C.3.\\n\\nOur experiments demonstrate that even though existing segmentation models may not perfectly distinguish every region in the real world, combining our method with a super-resolution model has substantially improved performance on super-resolution tasks. We believe that utilizing prior knowledge from segmentation models to enhance generative models is a promising avenue for further exploration.\"}", "{\"comment\": \"I thank the authors for their thoughtful rebuttal. But I am still confused about the authors' viewpoint of objective metrics. At first, they claim that their method `achieves a significant improvement in objective metrics (PSNR, SSIM) compared to baselines and other generative models` (Note that the metrics of these baselines reported in this paper appear to diverge significantly from the generally accepted results). This point is regarded as one important contribution in this paper (see Abstract and Introduction). Then, they claim that comparing on PSNR is neither fair nor meaningful. I am afraid it may seem contradictory.\"}", "{\"comment\": \"### **Weaknesses:**\\n1. > Lack of Novelty in Methodology\\n\\n We sincerely appreciate your thoughtful comment. We would like to clarify that the **Structural Position Encoding (SPE)** module is a novel approach designed to encode structural position information within the masks generated by segmentation models. This module is introduced for the first time in our paper and **is not based on any pre-existing method**. 
Regarding the SAM model, we use it to extract structural information for structural noise modulation within our framework. This design is flexible and does not restrict the source of segmentation masks, meaning that segmentation models other than SAM can also be utilized. We have chosen SAM specifically due to its exceptional ability to generate fine-grained segmentation masks.\\n\\n2. > Limited Novelty in Designing Loss Function\\n\\n The key modification to the loss function lies not in the MSE metric itself, but in the object to which the distance is measured. In our proposed framework, we closely follow the DDPM setting [1] within the context of the noise modulation scenario, leading to the formulation of the modified loss function. By training with this modified loss, the denoising model is able to incorporate structural information, resulting in improved SR image generation. Our experimental results demonstrate the effectiveness of this design.\\n\\n [1] Denoising diffusion probabilistic models. *Advances in neural information processing systems*, *33*, 6840-6851.\\n\\n3. > Limited Exploration of Alternative Modulation Strategies\\n\\n The primary aim of our work is to enable the model to achieve structure-level differentiation between regions. To this end, we use segmentation masks as the source of structural information for training, as they inherently contain rich structural details. While intermediate features may also carry abundant information, they are less directly related to image structure and more challenging to utilize effectively. Therefore, we have focused on a segmentation-mask-driven approach in this study. In future work, we plan to explore feature-based techniques to further enhance model performance.\\n\\n4. > No Discussion on the Time Complexity of SPE\\n\\n Generating segmentation masks on the DIV2K dataset using SAM takes approximately 4 hours on GPU, while synthesizing SPE masks with the SPE module requires only 15 minutes on CPU. 
These masks do not need to be regenerated for every training session; instead, a single segmentation-model inference pass over the training dataset can be performed beforehand, and the results can be reused across different training runs. Therefore, we consider the cost of this one-time preprocessing negligible compared to the total training time of approximately 50 hours.\\n\\n5. > Dependency on High-Resolution Training Data\\n\\n In the current literature on image super-resolution, the standard approach involves training super-resolution models using pairs of low- and high-resolution data. However, there has been little to no exploration of high-resolution image-free super-resolution tasks. We believe this is an interesting task and leave it for future research.\"}", "{\"comment\": \"6. > Some diffusion models like Stable Diffusion (mentioned in the abstract) or ControlNet can also inject structure information (like semantic map) into the diffusion process. Why not compare the results of these models?\\n\\n Thank you for raising this issue. We did not compare our method with Stable Diffusion and ControlNet because these models differ significantly from our experimental settings and objectives, making a direct comparison neither reasonable nor fair. \\n\\n Since ControlNet is an extension of Stable Diffusion, we use it as a reference to explain the differences between our approaches. ControlNet is a method for injecting conditional information into the model, which incurs additional computational costs during inference. In contrast, our framework does not modify the inference process.\\n\\n Additionally, we compared our method with StableSR and DiffBIR in the paper. Both methods leverage pre-trained Stable Diffusion as their base model and are specifically fine-tuned for super-resolution tasks. 
Therefore, we believe that comparing our approach with StableSR and DiffBIR in the context of super-resolution is more reasonable and fair.\\n\\n ---\\n\\n We would like to provide a more detailed explanation below.\\n\\n ControlNet shows that providing a model with additional guiding information can help generate higher-quality images that better meet user expectations. However, a key limitation of ControlNet and similar methods is that they require both the original input (prompt) and auxiliary information (condition) as inputs during both training and inference. For example, in the image SR process using ControlNet, the condition is introduced as follows:\\n\\n 1. An auxiliary model is used to obtain the segmentation mask (condition) of the low-resolution (LR) image.\\n 2. The LR image is then used as input for the diffusion model, with the condition injected into the diffusion process via ControlNet.\\n\\n This process requires not only ControlNet and the diffusion model but also an additional auxiliary model to generate the condition, which significantly impacts inference efficiency.\\n\\n Our method addresses this challenge by focusing on the following question: *Is it possible to incorporate the ability to extract condition information directly into the model during training, thereby eliminating the need for an auxiliary model to generate the condition during inference?*\\n\\n In our paper, we propose to incorporate condition information into the training objective of the diffusion model. As a result, the trained diffusion model can make predictions while considering the condition information, without the need to invoke a segmentation model for test samples during inference. This approach is more efficient than ControlNet.\\n\\n Regarding the SAM model, it serves as the auxiliary model to obtain the condition information. Intuitively, a more precise segmentation mask provides better guidance for the reconstruction process. 
We chose SAM over other segmentation models because of its superior performance.\"}", "{\"comment\": \"Dear TbBy,\\n\\nThank you for your constructive comments and valuable suggestions to improve this paper. \\u200bIf you have any more questions, we would be glad to discuss them with you.\\n\\nThank you very much.\\n\\nBest regards, Author\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear mefZ,\\n\\nWe apologize for the repeated reminder. It has been 7 days since we submitted our responses, but we have not yet received your feedback. **We simply want to ensure that we have fully addressed all your concerns.**\\n\\nYour main concerns appear to relate to our contributions and the dependence on data and computational complexity. **We have provided detailed responses to these questions in both our reply to you and the general response.** May we kindly ask if you could spare some time to review our responses and share your feedback?\\n\\nIf there are any remaining points that require further clarification, please rest assured that we are committed to providing detailed answers to all your inquiries and addressing any concerns you may have. We value clear and open communication, and will make every effort to ensure that all aspects of the matter are fully explained to your satisfaction.\\n\\nThank you very much.\\n\\nBest regards, Author\"}", "{\"summary\": \"The authors proposed SAN-Diff for image super-resolution task. The authors noticed that one potential limitation of existing works could be sampling from one distribution during the diffusion process, and might hurt the generated image results when the scene is complex. 
They proposed using a positional embedding, calculated with respect to regions segmented by SAM, as the condition of an existing work, SRDiff, and achieved good performance on multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors\\u2019 proposal sounds valid, as the spatially uniform sampling could be a place where potential improvement can happen. The math deduction looks valid. The computation cost caused by the proposed design seems negligible in comparison with the original approach. Experimentations of different model comparisons and ablation studies are comprehensive.\", \"weaknesses\": \"It seems like the authors have a strong assumption that SAM always provides accurate segmentation results. Although it is a powerful segmentation model that can be used for zero-shot inference on everyday objects, it is \\u201cstatistically\\u201d strong. Therefore, your experimentations also show quantitatively \\u201cstatistically\\u201d better results. It would be great if the authors could discuss the limitations of using SAM as the segmentation mask provider.\", \"questions\": \"1. For the example shown in Fig 5,6, is it possible for the authors provide the segmentation masks using SAM? Then, it will help us to see if the segmentation mask provided by SAM matches the \\u201cbetter\\u201d parts of the SR images you generated, and justify your conclusions.\\n2. In Figure 1, it is hard for me to think of why this improvement happens because of the contribution from SAM. I highly doubt that the image region you contoured can be identified as different \\u201cregions\\u201d segmented by SAM. How would the authors justify the contribution of SAM?\\n3. In Table 2, I see the proposed model performed the best. 
However, the FID score for SRDiff and SAN-Diff suddenly dropped a lot across different datasets (the last two rows), which raises my concern that the authors may not have chosen strong enough baselines. Please justify. I would also like to refer to other reviewers\\u2019 comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **Questions:**\\n\\n1. > For the example shown in Fig 5,6, is it possible for the authors provide the segmentation masks using SAM? \\n\\n Figure 7 shows the segmentation results from SAM. However, it is important to emphasize that our method only uses the mask information during the training process, and no mask guidance is used during inference. It can serve as a reference for analyzing the super-resolution results.\\n\\n2. > In Figure 1, it is hard for me to think of why this improvement happens because of the contribution from SAM. I highly doubt that the image region you contoured can be identified as different \\u201cregions\\u201d segmented by SAM. How would the authors justify the contribution of SAM?\\n\\n We aim to augment diffusion-based image super-resolution (SR) models with fine-grained masks provided by SAM. In our paper, we compare two designs. The first involves using the information provided by SAM as a conditioning factor for the U-Net, following a commonly used approach in guided diffusion. The second design, structural noise modulation, is our heuristic approach, with the derivation provided in the supplementary materials. 
In Figure 1(B), both qualitative and quantitative results demonstrate the effectiveness of incorporating segmentation information to enhance the model\\u2019s capabilities, as well as the fact that structural noise modulation yields better performance without adding extra inference cost.\\n\\n Furthermore, as you mentioned, the model learns information \\\"statistically.\\\" Although there may be some regions where the segmentation model's performance is limited and it might fail to distinguish or incorrectly segment certain areas, the model learns from a large amount of data during training. The denoising model (U-Net) can \\\"statistically\\\" learn the necessary structure-level ability to distinguish different regions. Therefore, these issues do not prevent the model from acquiring the ability to distinguish regions during training, nor from reconstructing more accurate structures and textures during inference.\\n\\n A similar example can be seen in classification tasks using the ImageNet dataset. According to [1], ImageNet contains a significant number of mislabeled samples, samples with multiple labels, or samples that do not belong to any label. This is analogous to the regions SAM fails to correctly identify. These mislabeled data points are essentially \\\"negative samples\\\" in the training dataset. However, this does not hinder the training of classification models like ResNet or ViT, as the model learns \\\"statistically\\\" during training. It can self-correct learned errors in labels and predict the correct results during inference.\\n\\n [1] Northcutt, C. G., Athalye, A., & Mueller, J. (2021). Pervasive label errors in test sets destabilize machine learning benchmarks. *arXiv preprint arXiv:2103.14749*.\\n\\n3. > In Table 2, I see the proposed model performed the best. 
However, the FID score for SRDiff and SAN-Diff suddenly dropped a lot across different datasets (the last two rows), which causes my concern that if the authors didn\\u2019t choose good enough baselines. Please justify. I also would like to refer other reviewers\\u2019 comments.\\n\\n Our goal is to maintain the advantages of diffusion-based models in subjective metrics while reducing common issues such as distorted structures, misaligned textures, and bothersome artifacts. We aim to improve the accuracy of structural and texture reconstruction in the model. This goal is reflected in our results, where we significantly improve objective metrics (PSNR, SSIM) while maintaining similar subjective metric performance (as shown in the last two rows) compared to the baseline and other generative models. Additionally, subjective metrics, objective metrics, and the number of artifacts all reflect the accuracy and visual quality of the reconstructed images. To properly assess the quality of the reconstructed images, it is necessary to evaluate the model's performance based on all three types of metrics. In Figure 2 and Figure 8, we present images that highlight the excellent performance of our model based on this comprehensive evaluation.\"}", "{\"comment\": \"3. > Since SAM is not used during the inference, so how to utilize the fine-grained structure information from SAM in the inference?\\n\\n Thank you for providing this valuable comment. During training, the U-Net model is trained to predict the added noise modulated by $E_{SAM}$(generate by SAM and SPE module) with the LR image and time step as the condition. 
During inference, the trained U-Net model can predict the added noise by taking structural information into consideration without requiring $E_{SAM}$.\\n\\n Since the structural knowledge is already injected into the U-Net model during training, the U-Net can leverage the fine-grained structure segmentation ability learned during training to guide image restoration during inference.\\n\\n \\n\\n4. > To the best of our knowledge, generative models in the super-resolution often obtain lower performance on objective metrics (such as PSNR, SSIM) but higher on subjective ones. Therefore, the higher PSNR is not persuasive.\\n\\n Our goal is to maintain the advantages of diffusion-based models in subjective metrics while addressing their common issues, such as distorted structures, misaligned textures, and undesirable artifacts. At the same time, we aim to enhance the accuracy of structural and texture reconstruction. This goal is reflected in our method, which achieves a significant improvement in objective metrics (PSNR, SSIM) compared to baselines and other generative models, while maintaining similar performance in subjective metrics.\\n\\n The combination of subjective metrics, objective metrics, and artifact analysis collectively reflects the accuracy and visual quality of the reconstructed images. To properly evaluate the quality of reconstructed images, it is necessary to assess the model's performance using these three types of metrics comprehensively. Figures 2 and 7 illustrate that our model demonstrates excellent performance under this comprehensive evaluation.\\n\\n \\n\\n5. > Results in Table 2 (such as SPSR and those of other baselines) are significantly lower than those reported in their original articles. It seems very strange.\\n\\n We reimplemented SRGAN and ESRGAN using the MMagic framework. For SPSR, SFTGAN, USRGAN, BSRGAN, DiffBIR, StableSR, and SRDiff, we reproduced the results using the official code and pretrained weights.
For LDM, we performed inference using the publicly available weights on Hugging Face.\\n\\n The inconsistencies in reported PSNR/SSIM results arise from two primary reasons:\\n\\n - **Different evaluation codes**: Different models employ different evaluation implementations\\u2014some use MATLAB-based code, while others rely on custom Python implementations. These variations introduce discrepancies in the final results.\\n - **Different testing settings**: Some models compute PSNR and SSIM in the RGB color space, while others convert to the YUV color space and perform evaluations on the Y-channel.\\n\\n To ensure fair comparisons across models and eliminate the inconsistencies caused by diverse evaluation protocols, we reproduced other models and evaluated their performance under a unified setting using standardized evaluation scripts.\\n\\n We utilized **IQA-PyTorch**, an open-source image quality assessment tool that provides comprehensive evaluation metrics. All models were assessed using IQA-PyTorch under the same testing settings. This standardization led to differences between the results reported in our paper and those in the original papers.\"}", "{\"comment\": \"Thank you for your response.\\n\\n1. The PSNR scores you referred to are some of the top-ranking results on **Paperswithcode**, which predominantly feature distance-based SR models. As you mentioned:\\n\\n > Generative models in super-resolution often achieve lower performance on objective metrics (such as PSNR and SSIM) but excel in subjective evaluations.\\n\\n The strength of generative models lies in subjective quality rather than objective metrics. Since our method specifically focuses on improving diffusion-based models, directly comparing our results to distance-based SR models on PSNR is neither fair nor meaningful.\\n\\n We used the open-source **IQA-PyTorch** toolkit to evaluate the performance of the relevant models. 
The scores reported in our paper are reproducible under this unified evaluation setup.\\n\\n It is important to emphasize that our approach aims to improve the structural and textural accuracy of diffusion-based SR models during image reconstruction. This does not imply that our goal is to compete with distance-based SR models for state-of-the-art performance on PSNR or similar metrics.\\n\\n Therefore, we focus on comparing the performance gains of our method against the baseline (SRDiff) and other generative models. To ensure fairness, we conducted comprehensive comparisons with GAN-based and diffusion-based SR models across various metrics, demonstrating the effectiveness of our method.\\n\\n2. We have updated the paper. You can download the revised PDF to find Figure 9 or directly view it here: **[LPIPS.png](https://www.helloimg.com/i/2024/11/20/673d8eb72db3f.png)**.\"}", "{\"comment\": \"Thank you for your response.\\n\\n1. We sincerely apologize for the confusion caused by our writing error. The correct name should be SAN-DiFF. Thank you for pointing this out. We have corrected the method name in the table.\\n\\n2. It is worth noting that the original Table 1 is an ablation study where we report three sets of results: SRDiff, SRDiff with SAM directly integrated during training and inference, and our method. Since our approach is an improvement based on SRDiff, this experiment aims to demonstrate two key points:\\n\\n - The comparison between SRDiff and SAM+SRDiff shows that incorporating additional segmentation information into diffusion models can improve performance.\\n - The comparison among SRDiff, SAM+SRDiff, and SAN-DiFF demonstrates that our method achieves performance improvements similar to SAM+SRDiff, without increasing training or inference time.\\n\\n Thus, Table 1 focuses on comparisons with SRDiff and SAM+SRDiff.\\n\\n To provide more comprehensive insights, we also include comparisons with other diffusion models. 
However, since training times are recorded on different hardware setups, they are provided for reference only. All other evaluations were conducted using a V100 GPU on the DIV2K dataset.\\n\\n | | SRDiff | SAM+SRDiff | SAN-Diff | LDM | StableSR | DiffBIR |\\n | -------------- | -------------- | -------------- | -------------- | ------------- | ------------------------------------- | ------------------------------------ |\\n | Parameter | 12M | 644M | 12M | 169M | 960M | 1101.8M |\\n | Train time | 2 days | 10 days | 2 days | 6 days | 10 days + Stable Diffusion train time | 6 days + Stable Diffusion train time |\\n | Inference time | 37.64s/per img | 65.72s/per img | 37.62s/per img | 26.3s/per img | 238.6s/per img | 112.4s/per img |\\n | PSNR | 28.6 | 29.41 | 29.34 | 26.45 | 26.83 | 26.25 |\\n | FID | 0.4649 | 0.3938 | 0.3809 | 9.5518 | 14.5232 | 17.8206 |\"}", "{\"comment\": [\"## For Weakness:\", \"What is the meaning of SAM-DiffSR in this comment? Do you mean SAN-Diff?\", \"For Table 1, could you give more comparisons of the effectiveness and efficiency about other **recent** diffusion-based methods? Note that SRDiff was published in 2022, which is not so new.\"]}", "{\"comment\": \"Dear TbBy,\\n\\nWe apologize for the repeated reminder. It has been 6 days since we submitted our responses, but we have not yet received your feedback. **We simply want to ensure that we have fully addressed all your concerns.**\\n\\nYour main concerns appear to relate to the values of the metrics reported and the comparisons of the effectiveness and efficiency about other recent diffusion-based methods. 
**We have provided detailed responses to these questions in both our reply to you and the general response.** May we kindly ask if you could spare some time to review our responses and share your feedback?\\n\\nIf there are any remaining points that require further clarification, please rest assured that we are committed to providing detailed answers to all your inquiries and addressing any concerns you may have. We value clear and open communication, and will make every effort to ensure that all aspects of the matter are fully explained to your satisfaction.\\n\\nThank you very much.\\n\\nBest regards, Author\"}", "{\"comment\": \"Thanks for the reply. I would like to leave my score unchanged.\"}" ] }
5cPEkoHHyG
MetaInv: Overcoming Iterative and Direct Method Limitations for Inverse Learning
[ "Jingyi Yuan", "Muhao Guo", "Yang Weng" ]
Invertible neural networks (INNs) have gained significant traction in tasks requiring reliable bidirectional inferences, such as data encryption, scientific computing, and real-time control. However, iterative methods like i-ResNet face notable limitations, including instability on non-contractive mappings and failure in scenarios requiring strict one-to-one mappings. In contrast, analytical approaches like DipDNN guarantee invertibility but at the expense of performance, particularly in tasks demanding rich feature extraction (e.g., convolutional operations in complex image processing). This work presents a detailed analysis of the limitations in current invertible architectures, examining the trade-offs between iterative and analytical approaches. We identify key failure modes, particularly when handling information redundancy or strict bijections, and propose a meta-inverse framework that dynamically combines the advantages of both i-ResNet and DipDNN. Our framework adapts in real-time based on task-specific signals, ensuring both flexibility and guaranteed invertibility. Extensive experiments across diverse domains demonstrate that our hybrid approach outperforms existing methods in forward accuracy, inverse consistency, and computational efficiency. Our results highlight the utility of this meta-inverse strategy for critical applications where precision, stability, and adaptability are crucial.
[ "Invertible neural networks", "switchable Architectures", "analytical inverse" ]
Reject
https://openreview.net/pdf?id=5cPEkoHHyG
https://openreview.net/forum?id=5cPEkoHHyG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x1GN7Ava2b", "wq8Spd4SYF", "wMJZwY3ZIp", "quvjqx1gzG", "qGqa6GsKL5", "igH8Ub31QT", "hANHm8Xl3f", "graIyntvFN", "NHEdQuRxsM", "IXyVK1gPvs", "GvcnCgezLO", "3a8EH4T29n", "320rOMKWB9", "0w6eCh14xI" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1734422741107, 1732670627899, 1732526877268, 1732555824379, 1730895324134, 1730661227932, 1732527166206, 1730414603316, 1732527382633, 1732533336353, 1732532841387, 1737524152071, 1732530633192, 1732527201188 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11884/Area_Chair_aHDQ" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Reviewer_jQ3F" ], [ "ICLR.cc/2025/Conference/Submission11884/Reviewer_jQ3F" ], [ "ICLR.cc/2025/Conference/Submission11884/Reviewer_yNgD" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Reviewer_aJKp" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ], [ "ICLR.cc/2025/Conference/Submission11884/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies invertible neural network models in the context of invertible prediction tasks with image, power system measurement and fluid dynamics data. The authors provide a detailed analysis of tradeoffs, advantages and limitations of two models: i-ResNet [1] and DipDNN [2] representing iteratively and analytically invertible models respectively. 
In particular, the authors identify three metrics to consider: forward prediction error, inverse reconstruction error and inverse consistency. Then, the authors propose MetaInv, a method for adaptively deciding on which of the two architectures to choose for a given task. The authors show empirically (1) the expressivity limitations of i-ResNet, (2) trade-offs in performance between different invertible methods, (3) MetaInv can correctly choose which invertible model to use.\", \"strengths\": [\"Interesting discussion on the tradeoffs between different invertible architectures\", \"Interesting insights into the limitations of i-ResNet\", \"MetaInv method that can decide between competing invertible models in a practical setting\", \"Experiments on diverse domains\"], \"weaknesses\": [\"The choice of specific models (i-ResNet and DipDNN) seems arbitrary\", \"Proposed method is practical, but not particularly interesting from a scientific point of view, as it is deciding on which method to use based on a combination of performance metrics\", \"Reviewers highlighted that the presentation is poor\", \"The results do not show an improvement to state-of-the-art on an established benchmark\"], \"decision_recommendation\": \"While the paper provides interesting insights, the methodological novelty is limited and improvements need to be made to the presentation. I would recommend that the authors should improve the presentation and better explain the motivation for the work and the design decisions made in the paper. It would also help to include results on established benchmarks with baselines evaluated in prior work. The reviewers unanimously rejected the paper. I thus recommend a rejection.\\n\\n[1] Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jorn-Henrik Jacobsen. \\u00a8\\nInvertible residual networks.\\n\\n[2] Jingyi Yuan, Yang Weng, and Erik Blasch. 
Dipdnn: Preserving inverse consistency and approximation\\nefficiency for invertible learning\", \"additional_comments_on_reviewer_discussion\": \"The reviewers unanimously rejected the paper highlighting poor presentation, unclear motivation for some of the design decisions, and lack of improvement to state-of-the-art on established benchmarks. During the rebuttal phase, the authors updated the presentation, and in particular added an explanation of the MetaInv method to the main text and expanded on the motivation for the method. However, reviewers were not satisfied with these updates, and did not increase their scores.\"}", "{\"title\": \"Response to Reviewer jQ3F on the Switching Algorithm\", \"comment\": \"We appreciate the reviewer\\u2019s time and efforts, and we apologize for the delay in responding to your questions about the short presentation of MetaInv. Given the importance of the switching algorithm, which is included in the title as an essential contribution, we wanted to take the time to carefully address your concerns and revise the manuscript accordingly. Below, we outline the key motivations for this work towards the hybrid method, followed by an analysis of the specific difficulties addressed during the algorithm development.\\n\\nThis study focuses on a specific class of inverse problems requiring deterministic two-way mappings for tasks such as physical system state estimation, image classification and recovery, and adaptive control. Since the objective is to recover point estimates with forward-inverse consistency, it necessitates models with explicit invertibility.\\n\\nAs the reviewer recognized, we have targeted two representative invertible architectures and presented the analysis both theoretically and empirically. Then, we found that, even with a narrowed scope, many factors and scenarios need to be considered when solving specific inverse problems. 
The differences can be evident when considering: \\n\\n- **Data types and inverse learning tasks:** Even using the same data, there can be significantly different tasks. For example, image datasets (e.g., MNIST and CIFAR10), besides the major group of density estimation, also have image classification and recovery tasks. The latter emphasizes forward classification accuracy and inverse reconstruction. Similarly, physical systems involve both stochastic modeling to quantify uncertainties and deterministic inverse/two-way mappings to estimate precise system states.\\n\\n- **Application-specific factors:** Beyond data type, specific characteristics of application cases play a critical role in model performance. These include but are not limited to: \\n\\n - i) The complexity of the underlying mapping - nonlinearity, dimension, coupling of variables, chaotic correlation, etc. Theorems 1 and 2 analyze the limitations of i-ResNet.\\n\\n - ii) The information redundancy of inputs - if the data have sparsity to be compressed and if the information can be retained to trace back to inputs. For example, images usually have redundancy, while variables with physical meanings are independent with minimum redundancy. We have Theorem 3 and test results (Fig. 3, Fig. 9, Fig. 12, and Table II) using different convolution kernels. \\n\\n - iii) The precision requirements for forward and inverse predictions, which are quantified by three metrics in Sec. 3.1.\\n\\n\\nThese application-specific factors, combined with the trade-offs of invertible architectures, motivate the idea of switching algorithms. We encountered difficulties in balancing the objectives of inverse learning and dynamically adapting to task-specific characteristics. 
Therefore, we designed the **trust-weighted switching mechanism**, a hybrid approach inspired by learning-augmented online control.\\n\\nFrom (i)\\u2013(iii), the objectives of inverse learning, such as high expressive power, strict invertibility, low redundancy, bi-directional consistency, and computational efficiency, are often in conflict. For example, i-ResNet\\u2019s flexibility enables it to model complex mappings but may sacrifice precision in inverse mappings. DipDNN\\u2019s analytical invertibility minimizes inverse error but limits flexibility, particularly for tasks involving data redundancy.\\n\\nTo address these challenges, the switching mechanism incorporates the trust-weight $\\\\beta$, which integrates *forward accuracy*, *inverse accuracy*, *inverse consistency*, and *computational cost* into a single evaluation function: \\n$\\nV_{\\\\text{model}} = J_{\\\\text{model, total}} + \\\\lambda C_{\\\\text{model}}.\\n$\\n\\nThe evaluation of these factors is not fixed during learning. To ensure smooth switching and avoid trivial convergences, \\\\(\\\\beta\\\\) is updated dynamically:\\n$\\n\\\\beta_{t+1} = \\\\beta_t + \\\\eta_t (V_{\\\\text{i-ResNet}} - V_{\\\\text{DipDNN}}),\\n$\\nwhere $\\\\eta_t$ is the learning rate (empirically set to 0.01 for stability); additionally, a parameter $J_{\\\\text{threshold}}$ sets an acceptable performance range, ensuring robustness to data uncertainties such as noise. This parameter is crucial, as inverse learning is highly sensitive to such uncertainties.\", \"the_following_revisions_are_made\": \"The MetaInv algorithm was included in the Appendix due to the space limit. We have moved it back to the main part and reduced the less important contents.\\n\\nWe revised the corresponding description of MetaInv in Sec. 
4.2, with a special focus on the difficulties and contributions of the switching algorithm, as highlighted by the reviewer.\\n\\nThe details of implementing the algorithm are included in the Appendix, e.g., parameter setups.\"}", "{\"comment\": \"**Comment:** Mathematical typos from lines 139-155.\\n\\nThanks for pointing out mathematical typos. We have not only made the corrections but also rewritten Sec. 3.1 for clarification and better readability. The revision includes:\\n\\nInverse problems span numerous applications that involve recovering original variables from observed outputs. This work focuses on inverse learning through invertible mapping recovery, which is for point estimates of images or physical system states. The problem is typically formulated as approximating a forward mapping $f_{\\\\theta}: \\\\mathbb{R}^n \\\\to \\\\mathbb{R}^n$, where $y = f_{\\\\theta}(x)$ is invertible. The goal is to find the relative inverse mapping $g_{\\\\vartheta}$ such that $x = g_{\\\\vartheta}(y) = f_{\\\\theta}^{-1}(y)$, ensuring consistency with the forward process. The demands for two-way mapping rule recovery are distinct and varied even in one task. Thus, this work evaluates them using the following performance metrics:\\n\\n- **Forward Prediction Error (Fwd):** Same as common one-way learning, it measures the ability of the model to predict $y$ from $x$. 
The invertible model is trained by minimizing the forward prediction loss (any discriminative learning loss $\\ell_{\\text{fwd}}$): $f^* = \\arg \\min_{\\theta} \\ell_{\\text{fwd}}({y},f(x))$ \\n\\n- **Inverse Reconstruction Error (Inv):** The reconstruction error evaluates the model's invertibility to recover the original inputs (where $\\ell_{\\text{inv}}$ is the mean square error for point estimates): $\\ell_{\\text{inv}} (x, g_{\\vartheta} (f (x)))$.\\n\\n- **Inverse Consistency (Inv-Consist):** It assesses the consistency between forward and inverse mappings by comparing the inverse predictions computed from the true labels ${y}$ with the original inputs, rather than just the forward outputs: $\\ell_{\\text{fwd}} (x, g_{\\vartheta} (y))$.\\n\\nIt is important to note the distinction between **inverse accuracy** and **consistency**. Inverse accuracy, or reconstruction error, indicates only the model's invertibility to compute the inverse, where analytical invertibility yields an error-free result, and numerical invertibility minimizes round-off errors. Consistency, on the other hand, evaluates the model's capability of two-way learning. It reveals the precision of global inversion. For example, minimizing reconstruction error doesn't necessarily need a low forward approximation error, while the consistency error integrates errors from both forward and inverse processes. Most works involving approximate one-to-one mappings focus on inverse accuracy alone, such as in image classification and recovery.\"}", "{\"title\": \"Thank you for the answers\", \"comment\": \"I will keep my score\"}", "{\"summary\": \"This paper first provides a detailed analysis of the limitations in current invertible architectures, examining the trade-offs between iterative and analytical approaches. 
It then proceeds by proposing a meta-inverse framework that dynamically combines the advantages of both i-ResNet and DipDNN, switching between them based on task-specific signals.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A very detailed analysis of the limitations in current invertible architectures,\", \"weaknesses\": \"A rather short and not very extensive presentation of the switching algorithm and its associated challenges.\", \"questions\": \"What were the key challenges in developing the switching algorithm, and what were the main contributions here?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
The authors can find examples of these existing methods/publications in Ardizzone et al. \\\"Analyzing inverse problems with invertible neural networks.\\\" arXiv preprint arXiv:1808.04730 (2018), or publications that cite this one. \\n\\n3) The authors show that their method is beneficial for just two methods in the literature: DipDNN and iResNet. This becomes interesting if the result of combining these two approaches with MetaInv yields a method that achieves state-of-the-art capabilities (e.g., inverse accuracy, or computational efficiency). However, the authors only show that their MetaInv approach leads to better/comparable performance to DipDNN and iResNet. Therefore it is unclear whether the result of all this work improves state-of-the-art in any way or just improves these two methods. Although improvement over these two particular methods is still interesting as a proof-of-concept for the MetaInv idea, in my opinion it is not sufficiently significant/interesting for this venue.\", \"questions\": \"I welcome a response to comment (2) in the \\\"weaknesses\\\" section, although I request that the authors only respond if their response is clearly written, and addresses in a clear/compelling way (i) why so much existing work on inverse modeling was omitted from consideration, and (ii) why this work is still significant despite the exclusion of so much existing work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Comment:** Why several inverse learning methods were omitted?\\n\\nThe literature on inverse modeling spans a broad spectrum, covering diverse applications and inverse models, as the reviewer highlights with the additional references [1]-[4]. Our study focuses on **deterministic two-way/inverse mapping** for inverse problems. 
Specifically, we target the invertibility design of models for tasks requiring point estimates, such as physical system state estimation, image classification and recovery, and adaptive control. These inverse problems differ from **generative tasks** like image density estimation, which form many previous works in the computer science community. Consequently, we evaluate invertible models' performances in **discriminative learning**, emphasizing forward approximation, inverse reconstruction, and forward-inverse consistency.\\n\\nThe methods mentioned by the reviewer either rely on stochastic modeling for image generation or lack explicit invertibility. Below, we discuss these methods in detail and justify their limited relevance to our work:\\n\\n**Genetic Algorithms** \\nGenetic algorithms are widely used in optimization and design problems but are computationally intensive and often impractical for high-dimensional nonlinear inverse tasks. Additionally, they do not inherently provide invertibility or forward-inverse consistency, making them unsuitable for the deterministic two-way mappings we aim to achieve.\\n\\n**Conditional Invertible Neural Networks (cINNs) [1]** \\nConditional Invertible Neural Networks (cINNs) extend NICE and RealNVP models by enforcing invertibility through affine coupling layers. These models are primarily designed for generative tasks, using maximum likelihood principles to improve image density estimation. While cINNs perform well in generative tasks, they are less suited for tasks involving point estimates. Furthermore, we have reviewed, analyzed, and tested the NICE model. Since cINNs' analytical invertibility relies on affine coupling layers, similar to NICE, the comparison of the targeted inverse problem has been covered in this work.\\n\\n**Mixture Density Networks (MDNs) [2,3]** \\nMixture Density Networks (MDNs) are designed for probabilistic inverse modeling, where the outputs are characterized by multimodal distributions. 
These models preserve tractable probability density and involve uncertainty quantification, but they are not applicable to recover deterministic mappings that ensure one-to-one consistency between input and output. Furthermore, MDN models do not inherently enforce invertibility, which is central to our approach.\\n\\n**Tandem Models [4]** \\nThe Tandem models are specifically designed to solve inverse design problems, e.g., for optical materials. They focus on two-stage optimization, coupling an inverse-design network with a pre-trained forward model, using the latter to supervise the training of the former. Usually, the forward model is known, and the two models have different architectures. Although the general scheme involves inverse modeling, no invertibility is enforced in the Tandem structure. However, since Tandem models also target point estimates, such an inverse design problem could be an application case of our targeted inverse problem.\"}", "{\"summary\": \"This paper analysed the limitations of two specific invertible models, i-ResNet and DipDNN, and proposed a selection mechanism to choose one of the two models for a task. The basis of selection is essentially selecting the best performing model, preferring one model if neither is good.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors did a good review of two prior models and proposed an interesting comparison of the two models.\", \"The writing is easy to follow.\"], \"weaknesses\": [\"The work lacks strong motivation. Why two specific models were chosen to be analysed? Why do we need a single meta-model to work on tasks of a completely different nature? To me, it is quite obvious that we need different models for image data and physics data. The authors should provide a convincing argument to justify their work.\", \"It is not clear why only i-ResNet and DipDNN are chosen to be analysed. 
It would be better to aim for generalisable analyses that provide insight into a class of solutions.\", \"The main proposed algorithm (MetaInv) is not clearly described in the main text. More importantly, there is insufficient justification for the preference rules.\"], \"questions\": [\"\\\"MetaInv\\\" is in the title, but the algorithm is not in the main text. Is there a reason to put it in the appendix?\", \"In 3.1, Inv error, should x_i be y_i instead?\", \"Figure 8 is difficult to understand. Maybe provide a better description in the caption?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Comment:** Why this work remains significant?\\n\\nWe appreciate the reviewer\\u2019s concerns regarding the weakness of existing methods addressed in our study. Based on the previous response, we clarify how this work narrows the scope and provides theoretical analysis for a unique contribution to inverse problems.\\n\\nThis study focuses on a specific class of inverse problems requiring deterministic two-way mappings for tasks like physical system state estimation, image classification and recovery, and adaptive control. Unlike probabilistic setups or generative learning (e.g., MDNs, cINNs), where the objective often involves multimodal distribution density estimation, the goal here is to recover point estimates with forward-inverse consistency. Such problems prioritize precision and interpretability, necessitating explicit invertibility in the model design.\\n\\nThus, existing works that exhibit invertible model architectures are central to our work, where two key architectures are:\\n\\n- **Affine coupling layers** (used in NICE and its extended versions): These layers provide analytical invertibility and computational efficiency, but the approximation capability is constrained by their reliance on affine transformations and split variables. 
DipDNN relaxes the limitations while maintaining the analytical inverse.\\n\\n- **Residual layers with Lipschitz constraint** (used in i-ResNet): These are built upon the residual network for forward approximation capacity. Unlike analytical inverses, they ensure invertibility by enforcing Lipschitz continuity. The inverse requires iterative computation using a fixed-point algorithm.\\n\\nBy focusing on DipDNN and i-ResNet, we provide a rigorous analysis of foundational invertible architectures, rather than providing exhaustive experimental comparisons across all existing methods.\\n\\nFor both theoretical analysis and empirical implementation, we identify the trade-off of the two invertible models for various deterministic inverse problems. This is why MetaInv is introduced to supplement the capabilities of the two representative approaches for arbitrary tasks. By building on and improving DipDNN and i-ResNet, MetaInv provides a unified approach that achieves consistent forward-inverse performance while maintaining computational efficiency.\"}", "{\"comment\": \"Based on our reviews, especially of papers citing previous SOTA methods such as NICE, RealNVP, and i-ResNet, the literature includes:\\n\\n1. **Invertible models applicable to probabilistic setups and generative tasks** [2,3,5]. \\n2. **Works applying SOTA methods to different applications with minor, application-specific changes** [1,6,7,9]. \\n - For example, [6] applies NICE to image steganography, [7] applies NICE to Markov Chain Monte Carlo, and [9] uses a variation of NICE/RealNVP under stochastic modeling for physical system tasks.\\n3. **Works solving inverse problems that do not exhibit invertibility or deterministic two-way mappings** [4,8]. \\n\\nA rigorous analysis of foundational invertible architectures is central to our work. 
Thus, rather than providing exhaustive experimental comparisons across all existing methods, the two key architectures from existing works are \\n\\n- **Affine coupling layers** (used in NICE and its extended versions): These layers provide analytical invertibility and computational efficiency but are limited by their reliance on affine transformations and split variables. DipDNN addresses these limitations by relaxing constraints while preserving analytical invertibility.\\n \\n- **Residual layers with Lipschitz constraint** (used in i-ResNet): These models are built on residual networks to enhance forward approximation capacity. Unlike affine-coupling layers, they enforce invertibility via Lipschitz continuity, with the inverse computed iteratively through a fixed-point algorithm.\", \"these_architectures_were_chosen_because_they_represent_distinct_ways_to_achieve_invertibility\": \"one through analytical construction and the other through constraints on Lipschitz continuity. By rigorously analyzing these models, we provide insights into their strengths and weaknesses across various deterministic inverse problems.\\n\\n\\n**References** \\n[1] Ardizzone, Lynton, et al. \\\"Guided image generation with conditional invertible neural networks.\\\" arXiv preprint arXiv:1907.02392 (2019). \\n\\n[2] Bishop, Christopher M. \\\"Mixture density networks.\\\" (1994). \\n\\n[3] Han, Xintian, Mark Goldstein, and Rajesh Ranganath. \\\"Survival mixture density networks.\\\" Machine Learning for Healthcare Conference. PMLR, 2022. \\n\\n[4] Xu, Peng, et al. \\\"Inverse design of a metasurface based on a deep tandem neural network.\\\" JOSA B 41.2 (2024): A1-A5. \\n\\n[5] Kingma, Durk P., et al. \\\"Improved variational inference with inverse autoregressive flow.\\\" Advances in neural information processing systems 29 (2016). \\n\\n[6] Zhang, Zhuo, Hongjun Wang, and Jia Liu. 
\\\"A Method for Image Steganography based on NICE Model.\\\" 2022 International Conference on Machine Learning, Cloud Computing and Intelligent Mining (MLCCIM). IEEE, 2022. \\n\\n[7] Song, Jiaming, Shengjia Zhao, and Stefano Ermon. \\\"A-nice-mc: Adversarial training for mcmc.\\\" Advances in neural information processing systems 30 (2017). \\n\\n[8] Habring, Andreas, and Martin Holler. \\\"Neural-network-based regularization methods for inverse problems in imaging.\\\" GAMM-Mitteilungen (2024): e202470004.\\n\\n[9] Ardizzone et al. \\\"Analyzing inverse problems with invertible neural networks.\\\" ICLR 2019\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s concerns regarding the motivation of our work and the selection of specific models for analysis. Based on previous responses, we clarify how this study narrows its scope, justifies the choice of models, and provides theoretical and empirical contributions to inverse problems.\\n\\nThis study focuses on a specific class of inverse problems requiring **deterministic two-way mappings** for tasks such as physical system state estimation, image classification and recovery, and adaptive control. Unlike probabilistic setups or generative learning methods, which aim to model multimodal distributions or perform density estimation, we aim to recover **point estimates with forward-inverse consistency**. This focus prioritizes precision and interpretability, making explicit invertibility a central requirement for the model design.\\n\\nWe agree with the reviewer that there is a general difference between image and physics data. However, other than the data type, more factors and scenarios need to be considered when solving inverse problems:\\n\\n- **Data types and inverse learning tasks:** Even using the same image data (e.g., MNIST and CIFAR10), there can be significantly different tasks. 
A major group of existing works on inverse problems focuses on density estimation tasks, which aim to generate complex distributions of images and fall under generative learning in a probabilistic setup. However, there are also image classification and recovery tasks, which belong to discriminative learning and require precise point estimates. Similarly, physical systems involve both stochastic modeling to quantify uncertainties and deterministic inverse/two-way mappings to estimate precise system states.\\n\\n- **Application-specific factors:** Beyond data type, specific characteristics of application cases play a critical role in model performance. These include but are not limited to: i) the complexity of the underlying mapping, ii) the information redundancy of inputs, and iii) the precision requirements for forward and inverse predictions. Theorems 1 and 2 analyze the limitations of i-ResNet for i). Theorem 3 analyzes the transformation equivalency for ii), as the convolution layer compresses information. These are validated by Fig. 3, Fig. 9, Fig. 12, and Table II using different kernels of convolutions (different information compressions). The three metrics in Sec. 3.1 are defined to quantify for iii). By analyzing these diverse aspects comprehensively, our study evaluates the trade-offs of i-ResNet and DipDNN both theoretically and empirically.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate the reviewer for pointing out the writing issues, and provide responses to the questions below.\\n\\n**Question on MetaInv:**\\nThe algorithm was included in the Appendix due to the space limit. We have moved it back to the main part and reduced the less important contents.\\n\\n**Question on Sec. 3.1:**\\nWe have not only made the corrections but also rewritten Sec. 3.1 for clarification and better readability. 
The revision includes:\n\nInverse problems span numerous applications that involve recovering original variables from observed outputs. This work focuses on inverse learning through invertible mapping recovery, which targets point estimates of images or physical system states. The problem is typically formulated as approximating a forward mapping $f_{\\theta}: \\mathbb{R}^n \\to \\mathbb{R}^n$, where $y = f_{\\theta}(x)$ is invertible. The goal is to find the corresponding inverse mapping $g_{\\vartheta}$ such that $x = g_{\\vartheta}(y) = f_{\\theta}^{-1}(y)$, ensuring consistency with the forward process. The demands on two-way mapping recovery are distinct and varied even within a single task. Thus, this work evaluates them using the following performance metrics:\n\n- **Forward Prediction Error (Fwd):** Same as common one-way learning, it measures the ability of the model to predict $y$ from $x$. The invertible model is trained by minimizing the forward prediction loss (any discriminative learning loss $\\ell_{\\text{fwd}}$): $f^* = \\arg\\min_{\\theta} \\ell_{\\text{fwd}}({y}, f_{\\theta}(x))$ \n\n- **Inverse Reconstruction Error (Inv):** The reconstruction error evaluates the model's invertibility to recover the original inputs (where $\\ell_{\\text{inv}}$ is the mean square error for point estimates): $\\ell_{\\text{inv}} (x, g_{\\vartheta} (f_{\\theta} (x)))$.\n\n- **Inverse Consistency (Inv-Consist):** It assesses the consistency between forward and inverse mappings by feeding the true labels ${y}$, rather than just the forward outputs, to the inverse model: $\\ell_{\\text{inv}} (x, g_{\\vartheta} (y))$.\n\nIt is important to note the distinction between **inverse accuracy** and **consistency**. Inverse accuracy, or reconstruction error, indicates only the model's invertibility to compute the inverse, where analytical invertibility yields an error-free result, and numerical invertibility minimizes round-off errors. 
Consistency, on the other hand, evaluates the model's capability of two-way learning. It reveals the precision of global inversion. For example, minimizing reconstruction error doesn't necessarily need a low forward approximation error, while the consistency error integrates errors from both forward and inverse processes. Most works involving approximate one-to-one mappings focus on inverse accuracy alone, such as in image classification and recovery.\\n\\n**Question on Fig. 8:**\\n\\nWe provided a detailed explanation of Fig. 8 (now Fig. 7 in the revised manuscript) below and modified the caption in the paper correspondingly.\\n\\nFig. 7 presents the performance comparison of different methods (ResNet, i-ResNet, NICE, DipDNN, and MetaInv) on the power flow case. The dataset is derived from an 8-bus power system, where load inputs ($x$) are used to estimate voltages ($y$) at different buses for forward mapping, and the inverse is the recovery of load conditions given voltages. Accurate inverse learning is essential here for recovering underlying system physics and ensuring consistency in power flow analysis.\\n\\nThe top-left plot shows forward voltage predictions (Fwd), and the two bottom plots show the inverse load recovery for both reconstruction and consistency (Inv and Inv-Consist). The table (top-right) provides quantitative error metrics (Fwd, Inv, and Inv-Consist) for the evaluated methods. While i-ResNet shows tightly bounded errors due to its Lipschitz constraint ($\\\\text{Lip}<1$), its voltage predictions (top-left) reveal a pattern that diverges from the ground truth. This is attributed to the contractive property, which limits pointwise accuracy.\\n\\nIn contrast, DipDNN achieves better fits for both forward and inverse tasks. DipDNN, selected by MetaInv in this case, captures the system's essential behaviors more effectively, as evident from the closer alignment of its predictions with the ground truth. 
The inverse consistency plot (bottom-right) highlights DipDNN and MetaInv\\u2019s superior performance in achieving forward-inverse alignment compared to i-ResNet. However, excessive contraction, such as from convolutional layers, can distort inverse mappings, especially in systems with low redundancy.\"}", "{\"comment\": \"Based on our reviews, especially of papers citing previous SOTA methods such as NICE, RealNVP, and i-ResNet, the literature includes:\\n\\n1. **Invertible models applicable to probabilistic setups and generative tasks** [2,3,5]. \\n2. **Works applying SOTA methods to different applications with minor, application-specific changes** [1,6,7]. \\n - For example, [6] applies NICE to image steganography, and [7] applies NICE to Markov Chain Monte Carlo.\\n3. **Works solving inverse problems that do not exhibit invertibility or deterministic two-way mappings** [4,8]. \\n\\nFor example, the work by Ardizzone et al. (\\\"Analyzing inverse problems with invertible neural networks.\\\" ICLR 2019) mentioned by the reviewer uses a variation of NICE/RealNVP under stochastic modeling to resolve physical system estimation tasks.\\n\\n**References** \\n[1] Ardizzone, Lynton, et al. \\\"Guided image generation with conditional invertible neural networks.\\\" arXiv preprint arXiv:1907.02392 (2019). \\n\\n[2] Bishop, Christopher M. \\\"Mixture density networks.\\\" (1994). \\n\\n[3] Han, Xintian, Mark Goldstein, and Rajesh Ranganath. \\\"Survival mixture density networks.\\\" Machine Learning for Healthcare Conference. PMLR, 2022. \\n\\n[4] Xu, Peng, et al. \\\"Inverse design of a metasurface based on a deep tandem neural network.\\\" JOSA B 41.2 (2024): A1-A5. \\n\\n[5] Kingma, Durk P., et al. \\\"Improved variational inference with inverse autoregressive flow.\\\" Advances in neural information processing systems 29 (2016). \\n\\n[6] Zhang, Zhuo, Hongjun Wang, and Jia Liu. 
\\\"A Method for Image Steganography based on NICE Model.\\\" 2022 International Conference on Machine Learning, Cloud Computing and Intelligent Mining (MLCCIM). IEEE, 2022. \\n\\n[7] Song, Jiaming, Shengjia Zhao, and Stefano Ermon. \\\"A-nice-mc: Adversarial training for mcmc.\\\" Advances in neural information processing systems 30 (2017). \\n\\n[8] Habring, Andreas, and Martin Holler. \\\"Neural-network-based regularization methods for inverse problems in imaging.\\\" GAMM-Mitteilungen (2024): e202470004.\"}" ] }
5btqauRdz0
Zero-Shot Generalization of GNNs over Distinct Attribute Domains
[ "Yangyi Shen", "Jincheng Zhou", "Beatrice Bevilacqua", "Joshua Robinson", "Charilaos Kanatsoulis", "Jure Leskovec", "Bruno Ribeiro" ]
Inductive GNNs are able to generalize across graphs with the same set of node attributes. However, zero-shot generalization across attributed graphs with disparate node attribute domains remains a fundamental challenge in graph machine learning. Existing methods are unable to effectively make use of node attributes when transferring to unseen attribute domains, frequently performing no better than models that ignore attributes entirely. This limitation stems from the fact that models trained on one set of attributes (e.g., biographical data in social networks) fail to capture relational dependencies that extend to new attributes in unseen test graphs (e.g., TV and movies preferences). Here, we introduce STAGE, a method that learns representations of _statistical dependencies_ between attributes rather than the attribute values themselves, which can then be applied to completely unseen test-time attributes, generalizing by identifying analogous dependencies between features in test. STAGE leverages the theoretical link between maximal invariants and measures of statistical dependencies, enabling it to provably generalize to unseen feature domains for a family of domain shifts. Our empirical results show that when STAGE is pretrained on multiple graph datasets with unrelated feature spaces (distinct feature types and dimensions) and evaluated zero-shot on graphs with yet new feature types and dimensions, it achieves a relative improvement in Hits@1 between 40% to 103% for link prediction, and an 10% improvement in node classification against state-of-the-art baselines.
[ "GNN", "zero-shot", "graph foundation models" ]
Reject
https://openreview.net/pdf?id=5btqauRdz0
https://openreview.net/forum?id=5btqauRdz0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w2UEMYAbNK", "tJgy8j9js9", "qQtBsXoFMX", "pu73Q0NMlq", "pJrkp17zIX", "oD1hVX1xOX", "i1N7uO7phj", "heE2yMReVz", "bDUookB0T7", "YHGI8uL8R9", "UQArBT3Fnd", "QqfST6CvsH", "OHYJ1GyAlx", "LsuglseHBy", "K2tJGPaoc9", "JaNPQHwzM2", "BVftqnVCdR", "B8CHxWi9Ds", "5Wd4ADWO0Z", "5DtVfKOjSg", "4HekS5HEvn", "19nG3VDf6h" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730656928449, 1732702508060, 1732216075513, 1737524175066, 1732215270631, 1732214856791, 1732215199234, 1732214917210, 1732214984119, 1732215063081, 1732225384087, 1732226422229, 1732215246941, 1732214749925, 1730552608654, 1732215936543, 1732988707654, 1735196652730, 1730767878987, 1732547690941, 1730534999854, 1732215165465 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_JvnB" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_U75f" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_JvnB" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_U75f" ], [ 
"ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ], [ "ICLR.cc/2025/Conference/Submission12237/Area_Chair_m3uU" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_nStQ" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_iea1" ], [ "ICLR.cc/2025/Conference/Submission12237/Reviewer_iea1" ], [ "ICLR.cc/2025/Conference/Submission12237/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces STAGE, a method designed for zero-shot generalization across attributed graphs with distinct attribute domains. STAGE constructs what it calls STAGE-edge-graphs for each edge in a graph, embedding statistical dependencies between attributes at each node pair. The model achieves significant performance gains in zero-shot settings for tasks like link prediction and node classification on various datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper achieves SOTA results by embedding statistical dependencies rather than raw features.\\n2. The STAGE is a domain-agnostic framework, which can generalize across disparate attribute spaces.\", \"weaknesses\": \"1. The STAGE-edge-graph is a fully connected weighted graph, so I am concerned about the complexity.\\n2. Edge-based embeddings may limit its ability to capture high-order interactions in graphs.\\n3. The motivation in the introduction is not presented well. The authors didn't analyze why their proposed method can address the limitations they mentioned before, so it's hard to understand the intrinsic research thinking.\\n4. The experiments are a little weak. For example, I believe the 4.2 and 4.3 belong to the same type of experiment, they didn't analyze the complexity and the ablation study, and they didn't include the limited research papers they mentioned in the introduction into baselines, which weakens the convincing.\\n5. 
I noticed this paper was submitted to ICML workshop so there is authors' information leakage. Both papers present STAGE for zero-shot generalization of GNNs across different attribute domains, and this paper just extends some real-world testing datasets. \\n6. The authors didn't release their code although this is not compulsory, which may limit their reproducibility.\", \"questions\": \"Please see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response.\\n\\nHowever, part of my concerns are not addressed. Specifically, the E-Commerce Stores dataset is not large enough to demonstrate the computational overhead. The issues of W4 and my questions have not been solved.\\n\\nTherefore, I will keep my rating.\"}", "{\"title\": \"General Response and New Experiment Results (1/2)\", \"comment\": \"We thank all reviewers for appreciating our work and providing valuable feedback. We are glad to see that our work is recognized as addressing a \\u201cnovel\\u201d and \\u201cinteresting\\u201d problem with important \\u201creal-world applicability\\u201d (**U75f**, **iea1**), having a \\u201cwell-articulated\\u201d and \\u201csound theoretical support\\u201d (**U75f**, **iea1**), with \\u201cextensive\\u201d and \\u201cSOTA empirical results\\u201d (**nStQ**, **JvnB**).\\n\\nIn the general response, we address the reviewers' common questions and present new experimental results.\\n\\n> **New baselines for node classification using GraphAny**\\n\\nAs per reviewer JvnB's request, we clarify that:\\n\\n1. Our submission included PRODIGY [1] as a baseline, presented as NBFNet-llm (link prediction, Table 1) and GINE-llm (node classification, Table 2). \\n2. The request for OneForAll [2] and LLaGA [3] were excluded due to incompatibility with our setup. 
OneForAll is limited to edge type classification (queries like (s, ?, t)) and does not support tail node prediction (queries like (s, r, ?)) or ranking metrics (e.g., Hits@1, MRR). LLaGA's reliance on LLMs is constrained by context window sizes, making it impractical for large-scale datasets like E-Commerce Store. \\n3. Adding an extra baseline: GraphAny [4].\\nSince GraphAny is specifically designed for the node classification task, we report its results in the zero-shot node classification experiment shown in Table 2 of the updated paper.\\n\\nThe following is the updated Table 2 incorporating GraphAny\\u2019s results:\\n\\n| Models \\t| Accuracy ($\\\\uparrow$) | \\n|-------------------|---------------------------------|\\n| GINE-structural | 0.564 $\\\\pm$ 0.0466 |\\n| GINE-gaussian | 0.588 $\\\\pm$ 0.0250 |\\n| GINE-normalized | 0.541 $\\\\pm$ 0.0148 |\\n| GINE-age | 0.582 $\\\\pm$ 0.0657 |\\n| GINE-llm [3] | 0.550 $\\\\pm$ 0.0368 |\\n| GraphAny [4] \\t| 0.591 $\\\\pm$ 0.0083 \\t|\\n| **GINE-STAGE (Ours)** | **0.652 $\\\\pm$ 0.0042** \\t|\\n\\nWe observe that GraphAny is the best-performing model among all the baseline models. Nevertheless, our model GINE-STAGE significantly outperforms GraphAny with a 10.3% relative performance improvement. This demonstrates the effectiveness and superior performance of STAGE compared to the baseline node classification foundation model. \\n\\n > **STAGE is effective across different GNN backbones**\\n\\nAs per reviewer JvnB and U75f\\u2019s request, we did another ablation studying the sensitivity of STAGE using different GNN backbones. We clarify that:\\n1. STAGE works well with NBFNet and GINE as the backbone model when performing on link prediction and node classification tasks.\\n2. 
Below we present an extra experiment using Graph Convolutional Network (GCN) as the backbone model on node classification.\\n\\nTo further demonstrate STAGE's adaptability, we replaced the GINE backbone that was used for our results shown in Table 2 with a GCN adapted to handle multi-dimensional edge attributes. Specifically, an MLP was integrated into GCN\\u2019s message-passing to process edge attributes. \\n\\nThe results, shown in updated Table 6, confirm that GCN-STAGE significantly outperforms all baselines, achieving a 7.33% improvement in average zero-shot test accuracy and a much smaller standard deviation, showcasing its robustness and stability.\\n\\n| Model | Accuracy ($\\\\uparrow$) |\\n|------------------|----------------------------|\\n| GCN-structural | 0.547 $\\\\pm$ 0.0658 |\\n| GCN-gaussian | 0.567 $\\\\pm$ 0.0382 |\\n| GCN-normalized | 0.570 $\\\\pm$ 0.0315 |\\n| GCN-llm | 0.526 $\\\\pm$ 0.0300 |\\n| **GCN-STAGE (Ours)** | **0.593** $\\\\pm$ **0.0046** |\\n \\nThese experimental results show STAGE's superior performance compared to all baseline methods, irrespective of the chosen backbone GNN architecture (GINE or GCN). This consistent improvement across different architectures underscores the versatility and broad applicability of STAGE, reinforcing its position as a robust and effective framework for graph representation learning.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response by Authors (2/2)\", \"comment\": \"> Q1: How well does STAGE handle larger graph datasets\\n\\nWe thank the reviewer for raising this important point about STAGE's scalability to larger graph datasets. We have conducted a thorough analysis and gathered empirical evidence to address this concern. 
These findings, along with a detailed discussion of our approach to handling large-scale graphs, are included in the updated manuscript (which will be available during the rebuttal period).\\n\\n**Complexity Analysis**\\n\\nLet $p$ be the number of features, $d$ the dimension of internal node and edge embeddings, $|E|$ the number of edges, and $|V|$ the number of nodes in the input graph. For the link prediction task, STAGE consists of three steps:\\n\\n1. **Fully Connected STAGE-Edge-Graph Construction:** This step requires $O(|E| p^2)$ operations because each fully connected STAGE-edge-graph has 2p nodes.\\n2. **Inference (GINE Layers):** We use 2 shared layers of GINE [1] for all STAGE-edge-graphs. A single layer on one fully connected STAGE-edge-graph has complexity $O(p d + p^2 d) = O(p^2 d)$ since we have 2p d-dimensional nodes and $(2p)^2$ d-dimensional edges in each graph. Aggregating edge embeddings across all graphs takes $O(|E| p^2 d)$.\\n3. **Inference (NBFNet):** We use NBFNet to perform message passing on the original graph, which requires $O(|E|d + |V|d^2)$ for one forward pass [2].\\n\\n**Total Complexity:** The overall forward pass has a complexity of $O(|E| p^2 d + |E|d + |V|d^2)$. \\n\\n**Runtime Comparison**\\nWe measured the average wall time per training epoch on the E-Commerce Stores dataset, the largest dataset among our experiments. It contains a total of 17463 nodes, 183618 edges, and up to 16 node attributes. We run all models on the 80GB NVIDIA A100 GPU. 
The following table summarizes the runtime of each model and their respective zero-shot test performance on the H&M dataset for reference.\n\n| Models \t| Wall Time per Training Epoch on E-Commerce (seconds) | Zero-shot Hits@1 Performance on H&M |\n|---------------------|----------------------------------------|----------------------------------------|\n| NBFNet-raw \t| 318.65 | 0.0005 $\\pm$ 0.0004 |\n| NBFNet-gaussian \t| 322.13 | 0.0925 $\\pm$ 0.0708 |\n| NBFNet-structural | 322.31 | 0.2231 $\\pm$ 0.0060 |\n| NBFNet-llm \t| 316.55 | 0.2302 $\\pm$ 0.0015 |\n| NBFNet-normalized | 316.87 | 0.2286 $\\pm$ 0.0010 |\n| **NBFNet-STAGE (Ours)** | **341.36** | **0.4666 $\\pm$ 0.0020** |\n\nNBFNet-STAGE is only 7.83% slower than the fastest baseline (NBFNet-llm), a reasonable tradeoff for its performance gains. The additional time is due to computing STAGE-edge-graph embeddings during each forward pass, while building the STAGE-edge-graphs is a one-time preprocessing step. In practice, the additional factor in the complexity has never prevented us from running on the datasets we considered.\n\nIn summary, to answer the reviewer\\u2019s question, we expect that STAGE can scale quite well to graphs with a large number of nodes (to tens of thousands and more) with a moderate number of features (smaller than 100). On the other hand, scalability is more limited on graphs with thousands of features. Nevertheless, one can use techniques to mitigate this issue, including but not limited to feature selection by performing an association study or reducing attribute dimensions via techniques such as Principal Component Analysis (PCA). We have added this discussion in the Limitations & Future Work section of the updated paper, which we will upload during the rebuttal period.\n\n\n[1] Zhu et al. 
\\\"Neural bellman-ford networks: A general graph neural network framework for link prediction.\\\" NeurIPS 2021.\"}", "{\"title\": \"Response by Authors (1/4)\", \"comment\": \"Thank you for your thoughtful review effort and constructive comments. We are pleased to see you appreciated the generality of our framework and recognized the empirical improvements it yields. Nonetheless, the review raised a number of important questions, which we answer in the following.\\n\\n> W1: The STAGE-edge-graph is fully connected so I am concerned about the complexity.\", \"a1\": \"We thank the reviewer for raising this important point. We have conducted a comprehensive complexity analysis and runtime comparison, which will be included in the updated paper (to be uploaded during the rebuttal period). A summary of our findings is provided below.\\n\\n**Complexity Analysis**\\n\\nLet $p$ be the number of features, $d$ the dimension of internal node and edge embeddings, $|E|$ the number of edges, and $|V|$ the number of nodes in the input graph. For the link prediction task, STAGE consists of three steps:\\n\\n1. **Fully Connected STAGE-Edge-Graph Construction:** This step requires $O(|E| p^2)$ operations because each fully connected STAGE-edge-graph has 2p nodes.\\n2. **Inference (GINE Layers):** We use 2 shared layers of GINE [1] for all STAGE-edge-graphs. A single layer on one fully connected STAGE-edge-graph has complexity $O(p d + p^2 d) = O(p^2 d)$ since we have 2p d-dimensional nodes and $(2p)^2$ d-dimensional edges in each graph. Aggregating edge embeddings across all graphs takes $O(|E| p^2 d)$.\\n3. **Inference (NBFNet):** We use NBFNet to perform message passing on the original graph, which requires $O(|E|d + |V|d^2)$ for one forward pass [2].\\n\\n**Total Complexity:** The overall forward pass has a complexity of $O(|E| p^2 d + |E|d + |V|d^2)$. 
\n\n**Runtime Comparison**\nWe measured the average wall time per training epoch on the E-Commerce Stores dataset used in the experiments shown in the following table, using an 80GB A100 GPU. This dataset is the largest in our experiments, with 17463 nodes, 183618 edges, and up to 16 node attributes. Thus it produces the largest number of STAGE-edge-graphs and shows the clearest runtime contrast.\n\n| Models \t| Wall Time per Training Epoch on E-Commerce (seconds) | Zero-shot Hits@1 Performance on H&M |\n|---------------------|----------------------------------------|----------------------------------------|\n| NBFNet-raw \t| 318.65 | 0.0005 $\\pm$ 0.0004 |\n| NBFNet-gaussian \t| 322.13 | 0.0925 $\\pm$ 0.0708 |\n| NBFNet-structural | 322.31 | 0.2231 $\\pm$ 0.0060 |\n| NBFNet-llm \t| 316.55 | 0.2302 $\\pm$ 0.0015 |\n| NBFNet-normalized | 316.87 | 0.2286 $\\pm$ 0.0010 |\n| **NBFNet-STAGE (Ours)** | **341.36** | **0.4666 $\\pm$ 0.0020** |\n\nNBFNet-STAGE is 7.83% slower than the fastest baseline (NBFNet-llm), a reasonable tradeoff for its performance gains. The additional time is due to computing STAGE-edge-graph embeddings during each forward pass, while building the STAGE-edge-graphs is a one-time preprocessing step. In practice, the additional factor in the complexity has never prevented us from running on the datasets we considered.\n\n**Scalability Considerations**\nDespite the potential scalability challenges posed by graphs with thousands of features, we believe STAGE-edge-graph remains a viable solution. In the revised Limitations and Future Work section, we outline feature selection strategies (e.g., association studies) as effective mitigation techniques. 
\\n\\nIn summary, while STAGE-edge-graph introduces a modest overhead, its scalability is manageable with appropriate feature selection techniques, making it feasible for practical deployment.\"}", "{\"title\": \"Response by Authors (2/2)\", \"comment\": \"> W4: STAGE\\u2019s Generalizability on Biomedical or Geospatial Domains\\n\\nThank you for this suggestion. We believe that studying the generalizability of STAGE to these domains is an interesting and important direction, which we are eager to pursue as future research. We have incorporated this suggestion in the updated Limitations & Future Work section.\\n\\n> W5: Handling Highly Heterogeneous Data\", \"a5\": \"STAGE's efficacy is built upon the theory of maximal invariants, which is currently limited to scalar and discrete variables. Extending this theory to handle unstructured or mixed media data is an exciting but challenging topic, which we leave as future work. Our current approach focuses on attributes with values in one-dimensional real space, though nodes may have multiple attributes. Addressing multimedia data embeddings would require substantial methodological modifications.\\n\\n> W6: Interpreting STAGE\\u2019s Learned Dependencies\", \"a6\": \"We appreciate your insightful suggestion regarding the interpretability of STAGE's learned dependencies. To address this, we conducted a feature-isolation experiment on the node classification task using the Friendster and Pokec datasets. Our results demonstrate STAGE's ability to learn and prioritize relevant features to perform prediction tasks.\\n\\nBelow we summarize our findings, which we will add to the Appendix in our revision.\\n\\n**Experiment Setup** \\n\\nIn this feature-isolation experiment, we systematically removed individual attributes (e.g., age, interest, occupation, music, tv) from the Friendster training dataset and trained GINE-STAGE on these reduced datasets. 
Zero-shot performance was then evaluated on the Pokec dataset for predicting the \\\"gender\\\" label. The following table displays the results:\\n\\n| Individual feature removed from Friendster | age | interest | occupation | music | tv | (None removed) |\\n|-----------------------------------------------|-------|----------|------------|-------|-------|----------------|\\n| Test accuracy of predicting \\\"gender\\\" on Pokec | 0.500 | 0.632 | 0.632 | 0.623 | 0.649 | 0.649 |\\n\\n**Findings**\\nRemoving certain features (e.g., \\u201ctv\\u201d) during training has minimal impact on test performance (predicting \\u201cgender\\u201d), indicating that STAGE is able to learn during training that certain features are not important for the task. \\n\\n\\n\\nWe thank the reviewer for their thoughtful comments, and we believe these revisions significantly strengthen our paper. Thank you for your time and support!\\n\\n\\n[1] Bell. \\\"A characterization of multisample distribution-free statistics.\\\" AMS 1964.\\n\\n[2] Hu et al. \\\"Strategies for pre-training graph neural networks.\\\" ICLR 2020.\\n\\n[3] Zhu et al. \\\"Neural bellman-ford networks: A general graph neural network framework for link prediction.\\\" NeurIPS 2021.\"}", "{\"title\": \"Response by Authors (2/4)\", \"comment\": \"> W2: Edge-based embeddings may limit its ability to capture high-order interactions in graphs.\", \"a2\": \"We appreciate the reviewer\\u2019s comment regarding the potential limitations of edge-based embeddings in capturing high-order interactions. 
To ensure we fully understand your question and address it effectively, could we kindly ask the reviewer to please provide further details or examples of specific high-order interaction patterns that you believe might be challenging for our approach to capture?\\n\\nWhile we await the reviewer\\u2019s clarification, we can emphasize that the STAGE-edge-graph construction is theoretically capable of representing complex and high-order dependencies between multiple node attributes. Theorem 3.3 formally demonstrates this capability by proving that applying a maximally expressive GNN encoder to each STAGE-edge-graph (along with positional encodings) and subsequently using a most expressive multiset encoder on the resulting embeddings allows for the approximation of any statistical test measuring dependencies or interactions between any number of node attributes.\\n\\nThis theoretical foundation suggests that the STAGE-edge-graph itself is not the limiting factor in terms of expressivity. If a particular implementation utilizing STAGE-edge-graphs struggles to capture high-order interactions, it is more likely due to limitations in the chosen backbone neural network architectures rather than the inherent nature of the edge-based embeddings.\\n\\n> W3. The motivation in the introduction is not presented well. The authors didn't analyze why their proposed method can address the limitations they mentioned before, so it's hard to understand the intrinsic research thinking.\", \"a3\": \"We appreciate the reviewer's feedback and have revised the introduction to more clearly articulate the motivation behind our approach and its ability to generalize to new domains.\\n\\nThe core innovation of this work lies in addressing the challenge of zero-shot generalization across diverse attribute spaces. 
To achieve this, we identify three key invariances that representations must possess: invariance to changes in attribute values, permutation of attribute identities, and permutation of node identities. STAGE constructs attribute representations that capture maximal information about dependencies and interactions among raw attribute values while adhering to these invariances.\\n\\nThis challenge is related to the concept of rank tests and maximal invariants from statistics. Notably, many statistical tests for independence and conditional independence are equivalent to rank tests, which disregard specific feature values and focus solely on their relative rankings [3]. Inspired by this insight, we developed the theoretical formulation of the feature hypergraph (Definition 3.1), a graphical representation of rank tests. Subsequently, we introduced STAGE-edge-graphs as a practical implementation that, as demonstrated by Theorem 3.3, possesses equivalent expressivity to the feature hypergraph while offering computational efficiency over the equivalent rank test (feature hypergraph).\\n\\n> W4.1 I believe the 4.2 and 4.3 belong to the same type of experiment\", \"a4\": \"The experiments in Sections 4.2 and 4.3 address different tasks: Section 4.2 focuses on link prediction, while Section 4.3 evaluates node classification. Furthermore, they are conducted on distinct datasets (E-commerce and HM for link prediction; Friendster and Pokec for node classification) and use different backbone GNNs within the STAGE framework.\\n\\nThis deliberate separation of tasks, datasets, and GNN backbones demonstrates the adaptability of the STAGE-edge-graph construction strategy across diverse settings. The consistent effectiveness observed across these varied experimental configurations reinforces the robustness and generalizability of STAGE.\"}", "{\"title\": \"Response by Authors (3/4)\", \"comment\": \"> W4.2 Analysis of ablation study\", \"a5\": \"Thank you for your valuable feedback. 
In addition to the experiments evaluating alternative featurization strategies (e.g., NBFNet-structural, NBFNet-gaussian, NBFNet-normalized), we have now included two new ablation studies in **Appendix E** to further illustrate STAGE\\u2019s flexibility and effectiveness. Below, we summarize the first study:\\n\\n**Extra Ablation 1: STAGE's Flexibility Across GNN Backbones**\\n\\nIn this ablation we study whether STAGE is still effective on alternative GNN backbone models. We chose the node classification task for this investigation, where we train on the Friendster dataset and zero-shot test on the Soc-Pokec dataset. To demonstrate STAGE's adaptability, we replaced the GINE backbone that was used for our results shown in Table 2 with a Graph Convolutional Network (GCN) adapted to handle multi-dimensional edge attributes. Specifically, an MLP was integrated into GCN\\u2019s message-passing to process edge attributes. \\n\\nThe results, shown in updated Table 6, confirm that GCN-STAGE significantly outperforms all baselines, achieving a 7.33% improvement in average zero-shot test accuracy and a much smaller standard deviation, showcasing its robustness and stability.\\n\\n| Model | Accuracy ($\\\\uparrow$) |\\n|------------------|-----------------------------------|\\n| GCN-structural | 0.547 $\\\\pm$ 0.0658 |\\n| GCN-gaussian | 0.567 $\\\\pm$ 0.0382 |\\n| GCN-normalized | 0.570 $\\\\pm$ 0.0315 |\\n| GCN-llm | 0.526 $\\\\pm$ 0.0300 |\\n| **GCN-STAGE (Ours)** | **0.593** $\\\\pm$ **0.0046** |\\n\\nThese results demonstrate the effectiveness of STAGE regardless of the backbone GNN architecture (GINE or GCN), reinforcing the versatility and general applicability of STAGE across tasks and architectures, further solidifying its strength as a robust framework.\\n\\n**Extra Ablation 2: Leveraging Shared Features Across Datasets.** \\n\\nIn this second ablation study, we aim to investigate whether STAGE is truly leveraging dependencies among multiple unseen node attributes 
to make zero-shot predictions, rather than simply relying on the common attributes shared between train and test. \\n\\nAcross E-commerce datasets (excluding H&M), product price serves as a common attribute. We trained NBFNet-STAGE and a baseline model, NBFNet-Price, which utilizes only price information. As detailed in the updated Table 1 (also see the attached table below), NBFNet-STAGE achieves a remarkable 70% improvement over NBFNet-Price in Hits@1. This substantial gain underscores STAGE's ability to extract valuable insights from complex graph relationships that go beyond simple shared features.\\n\\n**In the original submission we already did this ablation study for the Pokec and Friendster datasets**, where we trained a GINE (GINE-Age) utilizing only age as input and compared its performance to our GINE-STAGE model. As shown in (the original) Table 2 (and in the table below), GINE-STAGE outperforms GINE-Age by a significant margin of 12% in accuracy. This highlights that STAGE effectively leverages the rich relational information within the graph structure, surpassing the predictive power attainable solely from shared node attributes.\\n\\nFor your quick reference, here are the simplified tables comparing NBFNet-Price with NBFNet-STAGE and GINE-Age with GINE-STAGE:\\n\\n| Models | Test Hits@1 on Held-out E-Comm. Store | Test MRR on Held-out E-Comm. Store |\\n|-------------------------|---------------------------------------|------------------------------------|\\n| NBFNet-price | 0.2713 $\\\\pm$ 0.0280 | 0.3263 $\\\\pm$ 0.0301 |\\n| **NBFNet-STAGE (Ours)** | **0.4606 $\\\\pm$ 0.0123** | 0.4971 $\\\\pm$ 0.0073 |\\n\\n| Models | Test Accuracy on Pokec |\\n|-----------------------|------------------------|\\n| GINE-age | 0.582 $\\\\pm$ 0.0657 |\\n| **GINE-STAGE (Ours)** | **0.652 $\\\\pm$ 0.0042** |\"}", "{\"title\": \"Response by Authors (4/4)\", \"comment\": \"> W4.3. 
They didn't include the limited research papers they mentioned in the introduction into baselines\\n\\nWe first want to clarify that NBFNet-llm (shown in Table 1 for link prediction) and GINE-llm (shown in Table 2 for node classification) correspond to the PRODIGY [4] baseline, which we mentioned as one of the text encoder methods in the introduction. The other two text encoder methods, OneForAll [5] and LLaGA [6], cannot be easily adapted for a fair comparison against our method and, although feasible, such an effort lies outside the scope of the present work. For instance, on the link prediction task, OneForAll in its current implementation is only designed to classify edge types (or relation types) given a source and target node, i.e., answering queries of the form (s, ?, t), and the authors reported results using classification accuracy as the metric. However, it does not support the more common task of predicting the tail node given the source node and edge type, i.e., answering queries of the form (s, r, ?), and does not support ranking-based metrics such as Hits@1, MRR, etc., which are the metrics we used in our work. On the other hand, LLaGA relies on LLMs to make final predictions by converting graphs into inputs compatible with LLMs, and the context windows of the LLMs used in their work are not large enough to accommodate the large-scale graphs we have in our dataset, e.g., the E-Commerce Store dataset.\\n\\nNonetheless, we welcome the reviewer\\u2019s suggestion and included GraphAny [7] as an additional baseline in the updated experiment results. 
Since GraphAny is specifically designed for the node classification task, we report its results in the zero-shot node classification experiment shown in Table 2 of the updated paper.\\n\\nThe following is the updated Table 2 incorporating GraphAny\\u2019s results:\\n\\n| Models \\t| Accuracy ($\\\\uparrow$) | \\n|-------------------|---------------------------------|\\n| GINE-structural | 0.564 $\\\\pm$ 0.0466 |\\n| GINE-gaussian | 0.588 $\\\\pm$ 0.0250 |\\n| GINE-normalized | 0.541 $\\\\pm$ 0.0148 |\\n| GINE-age | 0.582 $\\\\pm$ 0.0657 |\\n| GINE-llm [3] | 0.550 $\\\\pm$ 0.0368 |\\n| GraphAny [4] \\t| 0.591 $\\\\pm$ 0.0083 \\t|\\n| **GINE-STAGE (Ours)** | **0.652** $\\\\pm$ **0.0042** \\t|\\n\\nWe observe that GraphAny is the best-performing model among all the baseline models. Nevertheless, our model GINE-STAGE significantly outperforms GraphAny with a 10.3% relative performance improvement. This demonstrates the effectiveness and superior performance of STAGE compared to the baseline node classification foundation model. \\n\\n> W5. I noticed this paper was submitted to an ICML workshop\\u2026\", \"a6\": \"Thank you for raising this point. We have discussed this matter with the Area Chair, who has confirmed that our submission adheres to ICLR's policies. We hope this is enough to dispel these concerns.\\n\\n> W6. The authors didn't release their code although this is not compulsory, which may limit their reproducibility.\", \"a7\": \"Thank you for this question. We plan to publicly release our code upon acceptance.\\n\\nWe believe we have carefully addressed all reviewer feedback, providing compelling evidence for the effectiveness and generalizability of our STAGE approach. We thank the reviewer for their thoughtful comments, and we believe these revisions significantly strengthen our paper. Thank you! We hope the reviewer will find these changes satisfactory to reconsider their score. We are more than happy to address any further questions.\\n\\n[1] Hu et al. 
\\\"Strategies for pre-training graph neural networks.\\\" ICLR 2020.\\n\\n[2] Zhu et al. \\\"Neural bellman-ford networks: A general graph neural network framework for link prediction.\\\" NeurIPS 2021.\\n\\n[3] Bell. \\\"A characterization of multisample distribution-free statistics.\\\" AMS 1964.\\n\\n[4] Huang et al. \\\"Prodigy: Enabling in-context learning over graphs.\\\" NeurIPS 2024.\\n\\n[5] Liu, et al. \\u201cOne for all: Towards training one graph model for all classification tasks.\\u201d ICLR 2024.\\n\\n[6] Chen, et al. \\u201cLLaGA: Large language and graph assistant.\\u201d ICML, 2024.\\n\\n[7] Zhao et al. \\\"Graphany: A foundation model for node classification on any graph.\\\" ArXiv 2024.\"}", "{\"comment\": \"Thanks for the authors' response. My questions are addressed, and I think it's good to include the complexity analysis. I will increase my score.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback and for increasing the score of our manuscript. We appreciate your insightful questions, whose answers have strengthened our paper. The complexity analysis will be included in the revised submission alongside the other results.\\n\\nWe welcome any further questions you may have.\\n\\nSincerely,\\nThe Authors\"}", "{\"title\": \"Response by Authors (1/2)\", \"comment\": \"We appreciate the reviewer's positive assessment of our work, particularly their acknowledgement of its theoretical support, and its relevance to real-world applications. We have carefully considered the reviewer's questions and provide our responses below:\\n\\n> W1/W2: Qualitative Analysis on Feature Dependencies\\n\\nThank you for the valuable suggestion to illustrate how STAGE learns and generalizes feature dependencies. We conducted a feature-isolation experiment on the node classification task using the Friendster and Pokec datasets. 
Our results demonstrate STAGE's ability to learn and prioritize relevant features to perform prediction tasks. \\n\\nBelow we summarize our findings, which we will add to the Appendix in our revision.\\n\\n**Experiment Setup** \\n\\nIn this feature-isolation experiment, we systematically removed individual attributes (e.g., age, interest, occupation, music, tv) from the Friendster training dataset and trained GINE-STAGE on these reduced datasets. Zero-shot performance was then evaluated on the Pokec dataset for predicting the \\\"gender\\\" label. The following table displays the results:\\n\\n| Individual feature removed from Friendster | age | interest | occupation | music | tv | (None removed) |\\n|-----------------------------------------------|-------|----------|------------|-------|-------|----------------|\\n| Test accuracy of predicting \\\"gender\\\" on Pokec | 0.500 | 0.632 | 0.632 | 0.623 | 0.649 | 0.649 |\\n\\n**Findings**\\nRemoving certain features (e.g., \\u201ctv\\u201d) during training has minimal impact on test performance (predicting \\u201cgender\\u201d), indicating that STAGE is able to learn during training that certain features are not important for the task.\"}", "{\"title\": \"Response by Authors\", \"comment\": \"We thank the reviewer for appreciating the presentation of our work and the extensive empirical results. We address the question in detail below.\\n\\n> W1. More discussions on the variants on LLM should be made.\", \"a1\": \"Thank you for your question. Our baseline models, NBFNet-LLM and GINE-LLM, essentially implement PRODIGY [1] but use all-MiniLM-L6-v2 for node feature embedding instead of RoBERTa. We chose this alternative due to its specialized training on sentence embeddings and its speed, which offers advantages in capturing semantic relationships compared to RoBERTa (an older model).\\n\\nNonetheless, pretrained language models can be integrated in other ways than encoding textified features. 
For instance, in our introduction we mentioned OneForAll [2] and LLaGA [3]. While OneForAll's current implementation excels at classifying edge types given source and target nodes, it lacks support for predicting tail nodes given source nodes and edge types \\u2013 a requirement for our tasks and needed to support ranking-based metrics such as Hits@1, MRR. Furthermore, LLaGA's reliance on LLMs with limited context windows poses a significant obstacle when dealing with the large-scale graphs in our dataset, particularly the E-Commerce Store dataset.\\n\\nTherefore, while we recognize the value of exploring alternative LLM integration strategies, focusing on the NBFNet-LLM and GINE-LLM baselines allows us to grasp the limitations of directly incorporating node feature information through textification within a graph neural network framework.\\n\\nWe appreciate the reviewer for bringing up this point for discussion, and we have added a discussion in the Limitation & Future Work section of the updated paper. We will upload the updated paper during the rebuttal period.\\n \\n\\n\\n[1] Qian, et al. \\u201cProdigy: Enabling in-context learning over graphs.\\u201d NeurIPS 2024.\\n\\n[2] Liu, et al. \\u201cOne for all: Towards training one graph model for all classification tasks.\\u201d ICLR 2024.\\n\\n[3] Chen, et al. \\u201cLLaGA: Large language and graph assistant.\\u201d ICML, 2024.\"}", "{\"summary\": \"The paper presents STAGE, a method that enables zero-shot generalization of graph neural networks (GNNs) across graphs with different attribute domains. STAGE constructs STAGE-edge-graphs to capture statistical dependencies between attributes instead of absolute values, facilitating transferability to unseen domains. The method shows substantial improvement in zero-shot tasks like link prediction and node classification on graphs with entirely new feature spaces.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
STAGE's strategy to use statistical dependencies rather than raw attribute values to enhance zero-shot generalization in GNNs is novel for graph machine learning.\\n2. The theoretical basis for STAGE, connecting maximal invariants and statistical dependencies, is well-articulated and provides a sound foundation for the empirical results.\\n3. STAGE is shown to be adaptable across domains of varied attribute types and dimensions, a crucial quality for real-world applicability.\", \"weaknesses\": \"1. STAGE's two-stage process involving STAGE-edge-graphs, conditional probability matrices, and subsequent embeddings may be challenging to implement or optimize in practice. Details on computational overhead compared to baselines would enhance clarity.\\n2. While STAGE is effective for pairwise dependencies, it is unclear how it handles more complex dependencies in highly interconnected graphs.\\n3. The success of STAGE appears dependent on the architecture and expressivity of the underlying GNNs (M1 and M2). Sensitivity analysis on different GNN backbones might clarify robustness across architectures.\\n4. The evaluation focuses on e-commerce and social network datasets; examining STAGE\\u2019s generalizability on domains like biomedical or geospatial networks would strengthen claims of universality.\", \"questions\": \"1. How does STAGE handle attribute domains with highly heterogeneous data types, such as unstructured or mixed media data?\\n2. Could the authors elaborate on the computational cost associated with STAGE compared to baselines, especially in large-scale graphs?\\n3. Does STAGE's reliance on GNN backbones like NBFNet affect its generalizability? Could alternative GNN architectures be equally effective?\\n4. Has STAGE been evaluated in terms of the interpretability of the learned dependencies? 
If so, what methods were used?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response and New Experiment Results (2/2)\", \"comment\": \"> **STAGE\\u2019s scalability to large graphs**\\n\\nAs per JvnB and U75f\\u2019s request, we offered a complexity analysis and a comparison of training time between STAGE and the baselines. We therefore clarify that,\\n\\n(1) Complexity analysis:\\n\\nLet $p$ be the number of features, $d$ the dimension of internal node and edge embeddings, $|E|$ the number of edges, and $|V|$ the number of nodes in the input graph. For the link prediction task, STAGE consists of three steps:\\n\\n1. **Fully Connected STAGE-Edge-Graph Construction:** This step requires $O(|E| p^2)$ operations because each fully connected STAGE-edge-graph has 2p nodes.\\n2. **Inference (GINE Layers):** We use 2 shared layers of GINE [1] for all STAGE-edge-graphs. A single layer on one fully-connected STAGE-edge-graph has complexity $O(p d + p^2 d) = O(p^2 d)$ since we have 2p d-dimensional nodes and $(2p)^2$ d-dimensional edges in each graph. Aggregating edge embeddings across all graphs takes $O(|E| p^2 d)$.\\n3. **Inference (NBFNet):** We use NBFNet to perform message passing on the original graph, which requires $O(|E|d + |V|d^2)$ for one forward pass [2].\\n\\n**Total Complexity:** The overall forward pass has a complexity of $O(|E| p^2 d + |E|d + |V|d^2)$. \\n\\n(2) STAGE does not pose a heavy computation overhead:\\n\\nTo assess the computational overhead of STAGE, we measured the average wall time per training epoch on the E-Commerce Stores dataset. This dataset, with 17,463 nodes, 183,618 edges, and up to 16 node attributes, represents the largest in our experiments and thus presents the most demanding scenario for STAGE-edge-graph construction. 
Utilizing an 80GB A100 GPU, we obtain the following results:\\n\\n| Models \\t| Wall Time per Training Epoch on E-Commerce (seconds) | Zero-shot Hits@1 Performance on H&M |\\n|---------------------|----------------------------------------|----------------------------------------|\\n| NBFNet-raw \\t| 318.65 | 0.0005 $\\\\pm$ 0.0004 |\\n| NBFNet-gaussian \\t| 322.13 | 0.0925 $\\\\pm$ 0.0708 |\\n| NBFNet-structural | 322.31 | 0.2231 $\\\\pm$ 0.0060 |\\n| NBFNet-llm \\t| 316.55 | 0.2302 $\\\\pm$ 0.0015 |\\n| NBFNet-normalized | 316.87 | 0.2286 $\\\\pm$ 0.0010 |\\n| **NBFNet-STAGE (Ours)** | **341.36** | **0.4666 $\\\\pm$ 0.0020** |\\n\\nNBFNet-STAGE is only 7.83% slower than the fastest baseline (NBFNet-llm), a reasonable tradeoff for its performance gains. The additional time is due to computing STAGE-edge-graph embeddings during each forward pass, while building the STAGE-edge-graphs is a one-time preprocessing step. In practice, the additional factor in the complexity has never prevented us from running in the datasets we considered.\\n\\nWe believe we have carefully addressed all reviewers\\u2019 feedback, providing compelling evidence for the effectiveness and generalizability of our STAGE approach. We thank the reviewer for their thoughtful comments, and we believe these revisions significantly strengthen our paper. Thank you! We hope the reviewers will find these changes satisfactory to reconsider their score. We are more than happy to address any further questions.\\n\\n\\n[1] Huang et al. \\\"Prodigy: Enabling in-context learning over graphs.\\\" NeurIPS 2024.\\n\\n[2] Liu, et al. \\u201cOne for all: Towards training one graph model for all classification tasks.\\u201d ICLR 2024.\\n\\n[3] Chen, et al. \\u201cLLaGA: Large language and graph assistant.\\u201d ICML, 2024.\\n\\n[4] Zhao et al. 
\\\"Graphany: A foundation model for node classification on any graph.\\\" ArXiv 2024.\"}", "{\"title\": \"Response by Authors (3/3)\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful follow-up comments and continued engagement with our work on STAGE. We are glad that you recognize the novelty and theoretical foundation of our approach and appreciate the constructive feedback.\\n\\n### W4: Applying STAGE to Geospatial and Biomedical datasets\\n\\nWe would be happy to explore applying STAGE to Geospatial and Biomedical datasets if the reviewer could point us to two such datasets with rich and distinct feature spaces. We note, for instance, that some datasets, such as AirBrazil, AirEU, and AirUS [1], have no node features (they use one-hot encodings of node ids as features, which are provably not transferable zero-shot).\\n\\n### Scalability Analysis\\n\\nWe would like to expand on our analysis of the computational complexity of STAGE for the link prediction task. Let's denote the number of features as $p$, the dimension of internal node and edge embeddings as $d$, the number of edges as $|E|$, and the number of nodes as $|V|$.\\n\\n#### Forward Pass Complexity\", \"the_forward_pass_of_stage_consists_of_three_main_steps\": \"1. **Fully Connected STAGE-Edge-Graph Construction**: This step requires $O(|E| p^2)$ operations, as each fully connected STAGE-edge-graph has 2p nodes.\\n2. **Inference (GINE Layers)**: We use two shared layers of GINE for all STAGE-edge-graphs. The complexity of a single layer on one fully connected STAGE-edge-graph is $O(p^2 d)$, since we have 2p d-dimensional nodes and $(2p)^2$ d-dimensional edges in each graph. Aggregating edge embeddings across all graphs takes $O(|E| p^2 d)$.\\n3. 
**Inference (NBFNet)**: We use NBFNet to perform message passing on the original graph, which requires $O(|E|d + |V|d^2)$ for one forward pass.\\n\\n#### Total Complexity\\n\\n**Total Complexity**: The overall forward pass has a complexity of $O(|E| p^2 d + |E|d + |V|d^2)$. As we can see, the complexity is linear in the number of nodes and edges (graph size), indicating that graph size is less of an issue than the number of features.\\n\\nThank you again for your feedback, and we look forward to further discussions.\\n\\n[1] Ribeiro et al. \\\"struc2vec: Learning Node Representations from Structural Identity.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2017.\"}", "{\"metareview\": \"**Summary:** This paper introduces STAGE, a method designed to enable zero-shot generalization for Graph Neural Networks (GNNs) across attributed graphs with distinct attribute domains. By focusing on statistical dependencies between attributes rather than their raw values, the method aims to generalize effectively to unseen domains. The proposed framework includes STAGE-edge-graphs to model pairwise attribute dependencies and is evaluated on link prediction and node classification tasks, showcasing significant improvements over state-of-the-art baselines.\\n\\n\\n**Decision:** The paper introduces a novel framework for zero-shot generalization of GNNs using attribute dependency modeling but fails to address critical aspects that would ensure its broader applicability. Specifically, several reviewer raised concerns that due to the algorithm's complexity of $\\\\mathcal{O}(p^2)$, its applicability to a broader range of scenarios is significantly constrained. In fact, I noticed that the datasets used in the paper all have a relatively small number of features. However, in many real-world applications, the dimensionality of node attributes is often much higher [1, 2, 3]. 
Similarly, another related issue is that this method relies heavily on a clear and explicit definition or encoding of the features (e.g., I don't think it can deal with word embeddings of Abstract in a citation network), which further limits the applicability of the method. For example, in the social network datasets, 54 out of 58 features were removed as they are \\\"difficult to encode either because they are random texts input by the user or because there is no straightforward way to turn the features into totally ordered ones\\\". In fact, Reviewer U75f also raised a similar concern, highlighting that the method was evaluated on relatively small data sets, with no exploration of challenging or diverse datasets like biomedical or geospatial graphs.\\nAlthough the authors included a discussion of the relevant limitations in the revised version, I believe this is a more fundamental issue that needs to be addressed. Based on this, I think the paper may not yet be ready for acceptance at ICLR.\\n\\n[1] Tang, Jie, et al. \\\"ArnetMiner: extraction and mining of academic social networks.\\\" Proceedings of the 14th ACM SIGKDD international conference on Knowledge Discovery and Data Mining, 2008.\\n\\n[2] Shen, Xiao, et al. \\\"Adversarial deep network embedding for cross-network node classification.\\\" Proceedings of the AAAI conference on Artificial Intelligence, 2020.\\n\\n[3] Rozemberczki, Benedek, et al. \\\"Multi-scale attributed node embedding.\\\" Journal of Complex Networks, 2021.\", \"additional_comments_on_reviewer_discussion\": \"This is a borderline submission, with reviewer scores of 6, 5, and 5 (Due to the uninformative review comments and the lack of participation during the rebuttal and AC-reviewer discussion, the score and feedback from Reviewer nStQ were not taken into account).\\n\\nThe discussion phase highlighted key concerns around scalability, experimental diversity, and clarity of motivation. 
While the authors provided a detailed complexity analysis and addressed some experimental gaps, these efforts were insufficient to alleviate fundamental weaknesses. The reviewers remained unconvinced about the method\\u2019s generalizability, robustness, and practical relevance in diverse real-world settings. As a result, the decision to reject was reached after careful consideration of all perspectives.\"}", "{\"summary\": \"This paper studies how to use a pre-trained graph model in any new domain with unseen attributes, enhancing the zero-shot generalization. The authors propose a new model STAGE by learning the representations of statistical dependency between attributes, instead of the attribute values themselves. They also conduct experiments to validate the performance of STAGE across several benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper is well written and easy-to-follow.\\n2.\\tExtensive results validate the effectiveness of the proposed model.\", \"weaknesses\": \"1.\\tMore discussions on the variants on LLM should be made.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response! Good work has been made about Complexity Analysis and Runtime Comparison. My concern has been resolved. I will keep my positive rating.\"}", "{\"summary\": \"The paper introduces STAGE, a novel approach designed to enable zero-shot generalization for Graph Neural Networks (GNNs) across graphs with varying node attribute domains. STAGE aims to learn representation of statistical dependencies between attributes rather than their absolute values. This allows the model to transfer knowledge to unseen domains by leveraging analogous dependencies. 
Through experiments on multiple datasets, the paper demonstrates STAGE's superior performance in link prediction and node classification tasks, especially in terms of zero-shot cross-domain generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Generalization from a node attributes view and the two stages for processing graph representation are interesting.\", \"The paper provides a theoretical analysis, linking STAGE with maximal invariants and statistical dependency measures, which provides theoretical support for the model's generalization capabilities.\", \"The paper shows STAGE's robustness when facing different attribute domains, which is a very important characteristic in the varied real-world data\"], \"weaknesses\": [\"Although the paper presents some quantitative results and shows good performance in different link prediction and node classification domains, it lacks some qualitative analysis. For example, it could demonstrate how the model learns that \\\"income level is positively correlated to phone price\\\" from the training set and then discovers that \\\"height is positively correlated with clothing size\\\" in a new domain, thus generalizing to the new domain.\", \"STAGE is capable of capturing and leveraging feature dependencies in graph data, rather than relying on specific attribute values. The article can illustrate which feature dependencies are effective on the test set after pre-training the model.\"], \"questions\": \"How well does this model handle larger graph datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors (1/2)\", \"comment\": \"We would like to thank the reviewer for their appreciation that this work is addressing a novel problem, has well-articulated theoretical support, and the proposed method is crucial for real-world applicability. 
We will now address your questions as follows:\\n\\n> W1/Q2: Computational Overhead\\n\\nWe appreciate this critical observation. In response to your request, we have investigated the training wall time of STAGE compared to the baseline models.\\nWe measured the average wall time per training epoch on the E-Commerce Stores dataset used in the experiments shown in the following table, using an 80GB A100 GPU. This dataset is the largest in our experiment with a total of 17463 nodes, 183618 edges, and up to 16 node attributes. Thus it produces the most number of STAGE-edge-graphs and showcases the most explicit runtime contrast. The times are reported in the table below.\\n| Models \\t| Wall Time per Training Epoch (seconds) | Hits@1 |\\n|---------------------|----------------------------------------|------------|\\n| NBFNet-raw \\t| 318.65 \\t| 0.0000 $\\\\pm$ 0.0000 | \\n| NBFNet-gaussian \\t| 322.13 \\t| 0.2101 $\\\\pm$ 0.0428 |\\n| NBFNet-structural | 322.31 \\t| 0.3149 $\\\\pm$ 0.0253 |\\n| NBFNet-llm \\t| 316.55 \\t| 0.3226 $\\\\pm$ 0.0190 |\\n| NBFNet-normalized | 316.87 \\t| 0.3269 $\\\\pm$ 0.0213 |\\n| **NBFNet-STAGE (Ours)** | **341.36** \\t| **0.4606 $\\\\pm$ 0.0123** |\\n\\nNBFNet-STAGE is 7.83% slower than the fastest baseline (NBFNet-llm), a reasonable tradeoff for its performance gains. The additional time is due to computing STAGE-edge-graph embeddings during each forward pass, while building the STAGE-edge-graphs is a one-time preprocessing step. In practice, the additional factor in the complexity has never prevented us from running in the datasets we considered.\\n\\n> W2: How STAGE handles complex dependencies in highly interconnected graphs\\n\\nThank you for the opportunity to clarify this point.\\n\\nWe would like to emphasize that the STAGE-edge-graph construction is theoretically capable of representing complex and high-order dependencies between multiple node attributes. 
In section 3.1, we motivate the theory by discussing the two-sample independence tests as an example, but the theory **applies to more complex interactions, such as higher-order conditional independence tests**. This is because many statistical tests for independence and conditional independence are all equivalent to rank tests, which disregard specific feature values and focus solely on their relative rankings [1]. Theorem 3.3 then formally demonstrates the expressiveness of STAGE-edge-graph by proving that applying a maximally expressive GNN encoder to each STAGE-edge-graph (along with positional encodings) and subsequently using a most expressive multiset encoder on the resulting embeddings allows for the approximation of *any* statistical test measuring dependencies or interactions between any number of node attributes.\\n\\nWe appreciate this comment and, in the updated paper, we emphasize that the theory applies not only to two-sample tests but also to any high-order statistical tests measuring complex dependencies. We will upload the updated paper during the rebuttal period.\\n\\n> W3/Q3: Sensitivity Analysis of Different GNN Backbones \\n\\nThis is indeed a very good point worthy of studying. Following your suggestion, we have experimented with other GNN backbones on the node classification experiments we reported in the main paper. These analysis are discussed in updated **Appendix E.1**, which we summarize below.\\n\\nTo demonstrate STAGE's adaptability, we replaced the GINE model used for the node classification experiment with a Graph Convolutional Network (GCN) adapted to handle multi-dimensional edge attributes. Specifically, an MLP was integrated into GCN\\u2019s message-passing to process edge attributes. 
The results, shown in updated Table 6 (and the attached table below), confirm that GCN-STAGE significantly outperforms all baselines, achieving a 7.33% improvement in average zero-shot test accuracy and a much smaller standard deviation, showcasing its robustness and stability.\\n\\n| Model | Accuracy ($\\\\uparrow$) |\\n|------------------|----------------------------|\\n| GCN-structural | 0.547 $\\\\pm$ 0.0658 |\\n| GCN-gaussian | 0.567 $\\\\pm$ 0.0382 |\\n| GCN-normalized | 0.570 $\\\\pm$ 0.0315 |\\n| GCN-llm | 0.526 $\\\\pm$ 0.0300 |\\n| **GCN-STAGE (Ours)** | **0.593** $\\\\pm$ **0.0046** |\\n \\nThese results demonstrate that STAGE improves performance over all baseline methods, regardless of the backbone GNN architecture (GINE or GCN). This reinforces the versatility and general applicability of STAGE across tasks and architectures, further solidifying its strength as a robust framework.\"}" ] }
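The rank-test argument in the STAGE rebuttal above — that many statistical tests of dependency disregard specific feature values and depend only on their relative rankings — can be made concrete with a small sketch. This is purely illustrative and not from the STAGE codebase; the `spearman` helper and the toy income/price data are assumptions:

```python
import numpy as np

def rank(v):
    # 0-based ranks of the entries of v (the examples below have no ties).
    r = np.empty(len(v), dtype=float)
    r[np.argsort(v, kind="stable")] = np.arange(len(v), dtype=float)
    return r

def spearman(x, y):
    # Spearman's rho: Pearson correlation of the ranks, hence invariant
    # to any strictly increasing rescaling of the raw attribute values.
    rx, ry = rank(np.asarray(x, dtype=float)), rank(np.asarray(y, dtype=float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

income = [20.0, 35.0, 50.0, 80.0, 120.0]       # toy "income level" attribute
price = [200.0, 300.0, 450.0, 700.0, 1100.0]   # toy "phone price" attribute
print(spearman(income, price))                 # 1.0: identical orderings
print(spearman(list(np.log(income)), price))   # 1.0: log keeps the ranks unchanged
```

Because only orderings enter the statistic, the same dependency score is obtained whether a domain encodes an attribute in dollars, log-dollars, or any other monotone scale — the value-invariance property the rebuttal relies on for cross-domain transfer.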
5btFIv2PNb
LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation
[ "Can Jin", "Ying Li", "Mingyu Zhao", "Shiyu Zhao", "Zhenting Wang", "Xiaoxiao He", "Ligong Han", "Tong Che", "Dimitris N. Metaxas" ]
Visual prompting has gained popularity as a method for adapting pre-trained models to specific tasks, particularly in the realm of parameter-efficient tuning. However, existing visual prompting techniques often pad the prompt parameters around the image, limiting the interaction between the visual prompts and the original image to a small set of patches while neglecting the inductive bias present in shared information across different patches. In this study, we conduct a thorough preliminary investigation to identify and address these limitations. We propose a novel visual prompt design, introducing **Lo**w-**R**ank matrix multiplication for **V**isual **P**rompting (LoR-VP), which enables shared and patch-specific information across rows and columns of image pixels. Extensive experiments across seven network architectures and four datasets demonstrate significant improvements in both performance and efficiency compared to state-of-the-art visual prompting methods, achieving up to $6\times$ faster training times, utilizing $18\times$ fewer visual prompt parameters, and delivering a 3.1% improvement in performance.
[ "computer vision", "visual prompt" ]
Accept (Poster)
https://openreview.net/pdf?id=5btFIv2PNb
https://openreview.net/forum?id=5btFIv2PNb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sINxYUij4Z", "repE7UADBT", "pLtUmrWNww", "odQaktZlYE", "gJKwEd2CkU", "deW9Ou9A2c", "bQ9hKQh3jH", "aBNkVVkvah", "SolSDALPH6", "McMjHHwLJh", "M0p2Q1eiQL", "LTKALYny6w", "KKYj8ZTtoZ", "JHJuEEZkKu", "9ylYcb3hbP", "44dCwlVF6a", "1QppqlTh53" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732480087598, 1730746158900, 1732514434471, 1732347451253, 1732347046150, 1734897803083, 1732349790779, 1729470165756, 1732349382371, 1732349713882, 1732514674722, 1732456482724, 1732755681409, 1737523985293, 1730126288783, 1732348416867, 1732348891980 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_NrTN" ], [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_NrTN" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Area_Chair_ytD9" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_PGL1" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_pQoJ" ], [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_PGL1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9471/Reviewer_pQoJ" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ], [ "ICLR.cc/2025/Conference/Submission9471/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your responses. 
I appreciate your answers and am convinced that this is an interesting paper. I will keep my positive assessment of the paper.\"}", "{\"summary\": \"The paper addresses the task of visual prompting for adapting pre-trained models to specific downstream tasks. The paper investigates the limitations of existing visual prompting techniques, which are often based on padding. The paper also proposes a new visual prompting technique based on low-rank matrix multiplication.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper identifies the limitations of existing visual prompting techniques, which often restrict interaction between visual prompts and the original image to a small set of patches.\", \"A novel visual prompt design based on low-rank matrix multiplication is proposed. This design allows for shared and patch-specific information across rows and columns of image pixels.\", \"The results are convincing and demonstrate performance and efficiency improvements. The authors include extensive experiments across seven network architectures and several datasets showing a performance improvement compared to state-of-the-art methods.\", \"The paper is well written and clear.\"], \"weaknesses\": [\"A low-rank matrix multiplication is just one way of sharing information across patches. I am surprised that other approaches have not been tested.\", \"The conclusions in the paper are very superficial and do not offer a deeper insight into the experimental results and the strengths and weaknesses of the proposed approach.\"], \"questions\": [\"Why are alternative approaches of sharing information across patches not explored?\", \"The paper uses linear probing to transform the labels from the source to the target domain. This is a very simple model and it is not clear why this is more appropriate than other transform models.\", \"If I understand correctly, all downstream tasks are image classification tasks. 
Would the approach be able to deal with other downstream tasks, e.g. object detection or segmentation?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the feedback!\", \"comment\": \"Thank you for the positive feedback and for recognizing the contributions of our paper. We sincerely appreciate your support and your acknowledgment of the paper\\u2019s merits. In the final version of our paper, we will incorporate the new results and discussions to further enhance the quality and impact of our work.\"}", "{\"title\": \"Point-to-point Response (Part 2)\", \"comment\": \"> Q3: Would the approach be able to deal with other downstream tasks, e.g. object detection or segmentation?\\n\\nThank you for the question. We follow the approach of previous works, such as AutoVP and ILM-VP, which primarily focus their investigations on image classification tasks. To address your concern, we conduct additional experiments to extend our evaluation to object detection and semantic segmentation tasks. We utilize YOLOv4 [1] for object detection and DeepLabv3+ [2] for semantic segmentation. Both models employ ImageNet-1K pre-trained ResNet-50 as the backbone. We keep hyperparameters such as the number of epochs and the rank in LoR-VP consistent with those used in classification tasks.\\nFor object detection, we train on the Pascal VOC 2012 and 2007 training sets and evaluate on the Pascal VOC 2007 test set. The bounding box head is modified for output transformation, and we use a learning rate of 0.0001. For semantic segmentation, we train on the Pascal VOC 2012 training set and evaluate on its validation set, adapting the DeepLabv3+ head for downstream segmentation with a learning rate of 0.01.\\nThe experimental results are summarized in Table 9 for detection and Table 10 for segmentation in the revised paper. 
LoR-VP demonstrates strong performance, outperforming AutoVP by nearly 4\\\\% in $\\\\text{AP}_{50}$ on VOC 2007 detection and by 1.1% in mIOU on VOC 2012 segmentation. These results highlight the versatility and effectiveness of our method in extending to tasks beyond image classification, including object detection and semantic segmentation.\\n\\n*References:*\\n\\n[1] Yolov4: Optimal speed and accuracy of object detection. ArXiv 2020 \\n[2] Encoder-decoder with atrous separable convolution for semantic image segmentation. ECCV 2018\"}", "{\"title\": \"Point-to-point Response (Part 1)\", \"comment\": \"Thank you for acknowledging our contributions, including the investigation of limitations in existing visual prompting techniques, the novelty of our method, the convincing results demonstrating performance and efficiency improvements, and the clarity and quality of our writing. We appreciate your feedback and have provided detailed responses to your questions below:\\n\\n> W1&Q1: Why are alternative approaches of sharing information across patches not explored?\\n\\nThank you for the question. To address the idea of sharing information across patches, we explored the Patch-Same method, which enables shared prompting by initializing a single tunable patch of parameters and repeatedly applying it to all patches of the image (see Part 4 of Figure 1). While this approach facilitates shared visual prompting across patches, it imposes a strong constraint by forcing the shared information to be identical for all patches. As shown in Figure 2, the Patch-Same method achieves better performance than AutoVP on ViT-B/32 but yields comparable performance on ViT-B/16. 
This suggests that while Patch-Same encourages compact and shared visual prompts, it may overly constrain the learning process by limiting the diversity of the learned prompts, which could hinder its adaptability to more complex tasks.\\n\\nThese findings motivated us to develop the Low-Rank matrix multiplication Visual Prompting (LoR-VP) method. LoR-VP not only addresses the limitations of Patch-Same by allowing more flexible parameter sharing but also aligns with the goals of parameter-efficient fine-tuning (PEFT). Specifically, LoR-VP minimizes parameter usage while maintaining ease of optimization and deployment, making it an efficient and practical choice for PEFT applications.\\n\\n\\n> W2: The conclusions in the paper are very superficial and do not offer a deeper insight into the experimental results and the strengths and weaknesses of the proposed approach.\\n\\nThank you for your feedback. To address your concern, we expand our discussion to provide deeper insights into the experimental results and the strengths and weaknesses of our proposed approach:\\n\\nIn this paper, we present a preliminary study to investigate the limitations of the widely used pad prompting technique, which pads tunable visual prompts only at the periphery of the image (see Part 1 of Figure 1). Our investigation reveals two key findings:\\n\\n1. **Preservation of original image information**: Utilizing a contiguous image in visual prompts is critical to maintain the integrity of the original image information. \\n2. **Balanced information sharing**: Effective visual prompts should combine shared information across patches while also accommodating patch-specific prompts. \\n\\nThese findings are validated by the results presented in Figure 2. 
Furthermore, we conduct extensive experiments to highlight the strengths of our visual prompt design, including its **superior generalization performance, training time efficiency, memory efficiency, and parameter efficiency** compared to existing methods, as demonstrated in Figure 4/5 and Table 1/2 of the paper.\\n\\nAdditionally, we delve into the impact of output transformations and rank selection in our LoR-VP method in Table 3 and Figure 6 of the revised paper. These investigations offer practical insights into selecting appropriate ranks for LoR-VP under varying output transformation scenarios, providing a deeper understanding of the method\\u2019s adaptability and effectiveness.\\n\\n> Q2: The paper uses linear probing to transform the labels from the source to the target domain. it is not clear why this is more appropriate than other transform models.\\n\\nThank you for the question. In Section 4.2, we discussed our rationale for utilizing linear probing (LP) for output transformation. The primary motivation is its parameter efficiency compared to other methods, such as iterative label mapping (ILM) and full mapping (FM), particularly when working with large models and datasets.\\nUsing LP as the classifier head avoids adding additional MLP layers, which would otherwise alter its functionality and potentially degrade performance. By directly modifying the MLP as the classifier head, LP maintains simplicity and efficiency.\\nAdditionally, when scaling to large datasets and models, ILM and FM become computationally and memory-intensive. For example, when using ImageNet-21K pre-trained Swin-B and tuning on ImageNet-1K, ILM requires significant resources to compute and store the mapping sequences (e.g., a 21,841 \\u00d7 1,000 matrix). In our experiments, even with an NVIDIA Quadro RTX8000 setup ($8 \\\\times 48$GB GPUs), these requirements exceeded our available computational capacity. 
Similarly, AutoVP necessitates training a 21,841 \\u00d7 1,000 fully connected layer for FM, which is significantly more resource-intensive than the 1,024 \\u00d7 1,000 classifier used in LP.\"}", "{\"metareview\": \"The paper introduces Low-Rank Visual Prompting (LoR-VP), a novel approach that combines Low-Rank Adaptation (LoRA) with Visual Prompting (VP) to enhance model efficiency and performance in visual prompting tasks. The method shows promising results, reducing parameters and training times while improving accuracy. However, concerns have been raised regarding the theoretical foundations and the approach's effectiveness in broader applications. Despite these concerns, the final average rating leans towards acceptance.\", \"additional_comments_on_reviewer_discussion\": \"In the initial review, the reviewers pointed out that the paper lacks rigorous control in experiments, leaving ambiguity about the causes of performance improvements. There is no formal mathematical proof supporting the inductive biases of the method, and the lack of theoretical guarantees on convergence and robustness limits its generalizability. The ablation studies are not clear enough to differentiate the contributions of LoR-VP and output transformations. Additionally, the paper does not explore failure cases such as adversarial robustness or noisy environments. While most of these concerns were addressed during the discussion phase, a few reviewers remain skeptical about the theoretical aspects and the broader significance of the approach. Based on the discussion, the meta-reviewer believes that the merits still outweigh the cons, and therefore recommends a borderline accept.\"}", "{\"title\": \"Highlighted General Response\", \"comment\": \"We sincerely appreciate all reviewers\\u2019 time and efforts in reviewing our paper. We also thank all reviewers for the insightful and constructive suggestions, which helped improve our paper further. 
In addition to our point-by-point responses, we provide the following highlighted general responses.\\n\\n**[GR1] Additional Investigations**\\n\\nAs mentioned by the reviewers, we conduct additional experiments to validate the effectiveness of our method on new tasks, datasets, and other circumstances. We list some of the experiments mentioned by multiple reviewers in the following:\\n\\n- **Detection and Segmentation.** \\nTo explore the applicability of LoR-VP to object detection and semantic segmentation tasks, we perform experiments using YOLOv4 for detection and DeepLabv3+ for segmentation. Both models utilize ImageNet-1K pre-trained ResNet-50 as the backbone.\\nFor object detection, we train on the Pascal VOC 2012 and 2007 training sets and evaluate on the Pascal VOC 2007 test set. For semantic segmentation, we train on the Pascal VOC 2012 training set and evaluate on its validation set.\\nThe experimental results for detection are presented in Table 9 of the revised paper, and the results for segmentation are shown in Table 10. LoR-VP achieves a 4% improvement in $\\\\text{AP}_{50}$ over AutoVP on VOC 2007 detection and a 1.1% mIOU improvement on VOC 2012 segmentation, demonstrating its effectiveness on object detection and semantic segmentation tasks.\\n\\n\\n- **Diverse Downstream Classification.** To assess the performance of LoR-VP across a broader range of classification tasks, we conduct experiments on ten downstream datasets. These experiments use ViT-B/32 pre-trained on ImageNet-21K and fine-tuned on ImageNet-1K, to further evaluate the generalization and robustness of our approach. 
The experimental results, presented in Table 11 in the revised paper, show that LoR-VP achieves superior average performance across the ten datasets compared to the SOTA method AutoVP, further demonstrating its effectiveness in diverse scenarios.\\n\\n**[GR2] Paper Revision**\\n\\nThe revision of the paper is updated, including all new experimental results and references. All changes are clearly marked in blue for the reviewers\\u2019 convenience. We remain committed to improving our paper to make meaningful contributions to the field.\\n\\nWe hope our pointwise responses below can resolve all reviewers\\u2019 confusion and alleviate all concerns. We again thank all reviewers for their time.\"}", "{\"summary\": \"This paper introduces Low-Rank Visual Prompting (LoR-VP), which uses low-rank matrix multiplication to generate visual prompts, enabling more efficient information sharing across image patches, taking the inductive biases among them into consideration. The authors conducted a preliminary study comparing current SOTA (AutoVP) with three new VP designs, proving that VP should combine the benefits of both patch-specific and shared visual prompting. Tested on several network architectures and datasets, the proposed approach reduces the number of tunable parameters by up to 18\\u00d7 and achieves up to 6\\u00d7 faster training times while improving performance by an average of 3.1%.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Organic integration of LoRA and VP: The originality and novelty of this paper lie in its clever integration of Low-Rank Adaptation (LoRA) with Visual Prompting (VP), two previously established concepts, to create a highly efficient and effective approach for adapting pre-trained vision models. 
While LoRA has been used to reduce the complexity of model fine-tuning, and Visual Prompting focuses on task-specific adaptation through input modification, the paper's innovation is in combining these methods in a seamless way that enhances both parameter efficiency and model performance. By introducing low-rank matrix multiplications into the visual prompting process, LOR-VP allows shared and patch-specific information across the entire image, significantly outperforming existing methods in both speed and accuracy.\", \"exemplary_clarity_of_reasoning\": \"In the preliminary study, the authors provide a well-structured and logical explanation for their design choices. They clearly demonstrate why both patch-specific and shared information in visual prompts are necessary by highlighting the limitations of existing methods that treat patches independently or focus only on peripheral areas. Additionally, their decision not to scale down the image emphasizes the importance of retaining maximum information for accurate model adaptation. This clear, step-by-step reasoning effectively justifies the development of LOR-VP, ensuring the method addresses these shortcomings while optimizing performance.\", \"weaknesses\": \"The preliminary study lacks rigor in controlling variables, as the impact of image scaling is not isolated from the role of patch-specific information. While design 4 (Patch-Same) outperforms others, the study does not definitively clarify whether its success is due to shared prompting across patches or the fact that the image is not scaled, leaving ambiguity about the true cause of the performance improvement. This undermines the ability to attribute the gains solely to patch sharing.\\n\\nIn the methodology section, the paper lacks formal mathematical proof detailing how the information across rows and columns in the visual prompts is linked, which leaves the assumptions of inductive bias vague. 
Additionally, while the low-rank matrix approach is intended to capture shared information, it does not explicitly guarantee that the natural relationship between neighboring pixels is preserved, and the exact nature of the associations formed between pixels remains unclear, weakening the justification for its effectiveness.\\n\\nThe method's performance relies heavily on empirical results without offering strong theoretical guarantees about convergence, optimality, or robustness in different settings, which could limit its broader adoption in critical applications.\\n\\nThe ablation studies comparing different output transformations lack clarity in distinguishing the contributions of the output transformation versus the LOR-VP component. Simply showing that LOR-VP outperforms other methods under the same output transformation doesn't clarify whether the performance gains are primarily due to the low-rank adaptation (LoRA) or the output transformation itself. Additionally, there is a noticeable performance drop for ViT models when using ILM and FLM, while Swin models do not exhibit this behavior. The authors fail to investigate or explain this discrepancy, leaving a gap in understanding why certain architectures are more sensitive to specific label mapping methods. A more detailed analysis of these interactions and a clearer separation of the contributions from each component are needed for a more rigorous assessment.\\n\\nThe paper lacks a thorough analysis of failure cases or edge scenarios where the LOR-VP method may struggle, such as on noisy or adversarial images. While the authors conduct robust experiments across multiple datasets and architectures, there is no exploration of how the method performs under conditions that deviate from the standard datasets, like adversarial attacks or high levels of image noise. 
For example, in Section 5.3, where robustness is discussed, the evaluation focuses on out-of-distribution generalization but does not account for adversarial robustness or resilience to noise, which are critical factors for real-world deployment. Without this analysis, it is unclear how reliable or stable LOR-VP would be in challenging environments, potentially limiting its practical use in more demanding applications.\", \"questions\": \"Impact of Image Scaling vs. Patch-Specific Information: Can you provide additional experiments or controlled studies to isolate the impact of image scaling from patch-specific information, to clarify whether the performance gains in design 4 are due to shared prompting or the lack of image scaling?\", \"mathematical_proof_for_row_and_column_information_link\": \"Could you include a more formal mathematical explanation or proof of how the information across rows and columns in the visual prompts is linked, ensuring that the inductive bias of neighboring pixels being more related than distant ones is preserved?\", \"theoretical_guarantees_on_convergence_and_robustness\": \"Can you offer theoretical insights or guarantees about the convergence, optimality, or robustness of the LOR-VP method, to complement the empirical results and ensure its reliability in diverse settings?\\n\\nClarifying the Contributions of Output Transformation vs. 
LOR-VP: Could you conduct additional ablation studies to more clearly separate the impact of the output transformation from the LOR-VP component, particularly to clarify why ViT models show a significant performance drop with ILM and FLM while Swin models do not?\", \"analysis_of_sensitivity_to_label_mapping_in_different_architectures\": \"Can you explore why ViT models seem more sensitive to label mapping methods compared to Swin models, and provide a deeper investigation into the factors causing this discrepancy?\", \"handling_noisy_or_adversarial_images\": \"Can you include experiments testing LOR-VP's performance under adversarial attacks or in the presence of noise, to assess its robustness and ensure its reliability in more challenging or real-world scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Point-to-point Response (Part 2)\", \"comment\": \"> W3&Q3: The method's performance relies heavily on empirical results without offering strong theoretical guarantees about convergence, optimality, or robustness in different settings, which could limit its broader adoption in critical applications. Can you offer theoretical insights or guarantees about the convergence, optimality, or robustness of the LOR-VP method, to complement the empirical results and ensure its reliability in diverse settings?\\n\\nThank you for the suggestion. While LoR-VP extends low-rank adaptation methods to the pixel-level input space of deep neural networks, the underlying technique of low-rank adaptation is well-established and widely validated in both NLP and CV tasks, as demonstrated by LoRA [1, 2, 3]. Theoretical guarantees regarding the convergence and reliability of low-rank adaptation methods, including LoRA, are detailed in [4, 5], and we kindly refer the reviewer to these references for further insights. 
Additionally, the effectiveness and reliability of LoR-VP are comprehensively validated through the empirical results presented in the revised paper. These results span diverse tasks, including image classification, object detection, and semantic segmentation, demonstrating consistent performance improvements and robustness across different settings. These findings reinforce the applicability of LoR-VP as a reliable and effective method for various critical applications.\\n\\n> W4&Q4: Simply showing that LOR-VP outperforms other methods under the same output transformation doesn't clarify whether the performance gains are primarily due to the low-rank adaptation (LoRA) or the output transformation itself. A more detailed analysis of these interactions and a clearer separation of the contributions from each component are needed for a more rigorous assessment. Could you conduct additional ablation studies to more clearly separate the impact of the output transformation from the LOR-VP component?\\n\\nThank you for the question. To clarify the contributions of different components in LoR-VP, we have provided detailed ablation results in Table 5 of the revised paper. These results investigate the impact of both the visual prompt design and the output transformation. Specifically, we observe that the output transformation in LoR-VP (LP) improves performance compared to frequency-based label mapping (FLM). Furthermore, our visual prompt design enhances performance under both FLM and LP, validating its effectiveness and its critical role in LoR-VP.\\n\\nTo ensure a rigorous comparison, we control the output transformation of LoR-VP to match that of the baselines, as shown in Table 3 of our paper. 
The results demonstrate that our visual prompt design outperforms the designs used in baseline methods, further highlighting its superiority.\\n\\nAdditionally, we conduct further experiments using FM and LP as output transformations for both LoR-VP and AutoVP, with the results presented in Table 12 of the revised paper. These experiments show that LoR-VP achieves better performance than the SOTA method AutoVP, regardless of whether LP or FM is used as the output transformation. We kindly refer the reviewer to these results for a more comprehensive understanding of the contributions of our visual prompt design to the overall performance of LoR-VP.\"}", "{\"title\": \"Point-to-point Response (Part 3)\", \"comment\": \"> W4&Q5: there is a noticeable performance drop for ViT models when using ILM and FLM, while Swin models do not exhibit this behavior. The authors fail to investigate or explain this discrepancy, leaving a gap in understanding why certain architectures are more sensitive to specific label mapping methods. Can you explore why ViT models seem more sensitive to label mapping methods compared to Swin models, and provide a deeper investigation into the factors causing this discrepancy?\\n\\nThank you for the question. In the results presented in Table 3 of our paper, we observe that when using ViT-B/32 with ILM and FLM on Tiny-ImageNet and CIFAR-100, there is no significant performance drop compared to LoR-VP with LP and FM as the output transformations, which contradicts the reviewer\\u2019s observation. However, we acknowledge that when using ViT-B/16-P on Tiny-ImageNet and CIFAR-100, the performance of LoR-VP with FLM and ILM is lower than expected.\\n\\nUpon further investigation, we find that this discrepancy arises from the choice of optimizer. Specifically, LoR-VP with FLM and ILM use SGD in our experiments. When we switch to Adam as the optimizer while keeping all other hyperparameters unchanged, the performance improves significantly. 
For ViT-B/16-P on CIFAR-100, LoR-VP with ILM and FLM achieves performances of 71.36 and 67.68, respectively. Similarly, for Tiny-ImageNet, LoR-VP with ILM and FLM achieves performances of 72.65 and 69.42, respectively\\u2014marked improvements over the results reported in our paper.\\n\\nWe admit that we did not extensively tune hyperparameters for each model and dataset combination, as LoR-VP consistently outperformed the baselines with default settings. We thank the reviewer for highlighting this issue and will include the updated results for LoR-VP with FLM and ILM using ViT-B/16-P in the final version of the paper.\\n\\n> W5&Q6: The paper lacks a thorough analysis of failure cases or edge scenarios where the LOR-VP method may struggle, such as on noisy or adversarial images. Can you include experiments testing LOR-VP's performance under adversarial attacks or in the presence of noise, to assess its robustness and ensure its reliability in more challenging or real-world scenarios?\\n\\nThank you for the suggestion. While none of the current visual prompting baselines provide experiments involving adversarial attacks or noisy inputs, which makes direct benchmarking against them infeasible, we agree that this is an important direction for future research. Proposing a benchmark for such tasks, however, is beyond the scope of this paper.\\nThe reliability and robustness of LoR-VP are demonstrated through extensive experiments across different architectures, model sizes, and dataset scales, as presented in Figures 4 and 5 of the paper. Furthermore, we validate LoR-VP\\u2019s robustness to distributional shifts by evaluating its performance on four out-of-distribution datasets, with results shown in Table 1 of the paper.\\nTo further assess the effectiveness of LoR-VP in more complex scenarios, we evaluate its performance using ViT-B/32 on ten datasets encompassing natural and artificial objects, scenes, and textures. 
As detailed in Table 11 of the revised paper, LoR-VP consistently outperforms AutoVP and ILM-VP on these challenging datasets, demonstrating its robustness in diverse conditions. Additionally, results on object detection and semantic segmentation, shown in Tables 9 and 10, further highlight LoR-VP\\u2019s effectiveness across a range of tasks.\\n\\n*References:*\\n\\n[1] LoRA: Low-Rank Adaptation of Large Language Models. ICLR 2022 \\n[2] DoRA: Weight-Decomposed Low-Rank Adaptation. ICML 2024 \\n[3] Parameter-Efficient Fine-Tuning with Discrete Fourier Transform. ICML 2024 \\n[4] The expressive power of low-rank adaptation. ICLR 2024 \\n[5] GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. ICML 2024\"}", "{\"title\": \"Thank you for the feedback!\", \"comment\": \"Thank you for the thoughtful feedback and for recognizing that our revisions have addressed your concerns. We appreciate the reviewer\\u2019s acknowledgment of the paper\\u2019s strong motivation, clear writing, and comprehensive numerical experiments.\\n\\nRegarding the improvements over LP, the additional results presented in Table 11 using ViT-B/32 on ten downstream classification datasets show that LoR-VP achieves an average accuracy improvement of 1.9% over LP, with a notable 5% improvement on GTSRB. The practical benefits of our method extend beyond in-distribution accuracy gains, as LoR-VP offers superior generalization performance, training time efficiency, memory efficiency, and parameter efficiency compared to current SOTA visual prompting methods, as highlighted in Table 1 and Table 2. Furthermore, LoR-VP is highly versatile and applicable to diverse tasks, including classification, detection, and segmentation.\\n\\nAgain, we sincerely thank you for the detailed review and constructive feedback. 
In the final version of our paper, we will incorporate the new results and additional discussions to further enhance the quality and impact of our work.\"}", "{\"comment\": \"Thanks for the reply. My concerns have been well solved. This paper is well motivated and very well-written. The numerical experiments are comprehensive after revision. A potential problem is the improvements over LP are not substantial. I remain uncertain about the practical benefits of the proposed method in real applications. I'll keep my rating to borderline accept.\"}", "{\"comment\": \"I am very grateful to the author for explaining the questions I raised, especially the explanation that the performance of LoR-VP with FLM and ILM is lower than expected is due to the choice of optimizer, which cleared up a misunderstanding. The explanation on the performance differences Patch-same and Patch-Free is clarified. However, I am still uncertain about the reasoning behind how LoR-VP utilizes shared information between columns and rows, along with the Math behind it. The reason why \\\\mathbf{B} serves as a basis for column visual prompts, similarly why \\\\mathbf{A} serves as a basis for row visual prompts is not solid enough to persuade me with with what the author says about integrating horizontal and vertical features. My review remains marginally below the acceptance threshold.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes the incorporation of low-rank matrix multiplication in visual prompting, resulting in improved performance compared to existing visual prompting methods, as well as enhanced training speed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. the paper is very well-written and easy to follow. The background part is very clear and detailed.\\n2. the method is simple yet effect. 
And the author gives a good preliminary analysis that motivates the method, which makes a lot of sense to me.\", \"weaknesses\": \"1. Regarding the pad-based method, the authors state, \\\"The VP parameters are restricted to interacting with the original image in a limited set of patches, leaving a substantial portion of the image unmodified.\\\" I find this assertion questionable. If the backbone is a ViT, the padded tokens will interact with the inner tokens through self-attention, potentially affecting the entire image.\\n2. The authors do not include a comparison or discussion of visual prompt tuning [1], which is a more prevalent method than those cited in the paper.\\n3. The dataset utilized in the out-of-distribution generalization experiments is insufficient to demonstrate the robustness of the method. It employs several variants of ImageNet, which exhibit minimal domain gaps, and all tasks focus on general object recognition. I recommend using a benchmark similar to AutoVP [2], which includes fine-grained classification and domain-specific tasks for a more comprehensive evaluation.\\n4. The comparison between AutoVP and LP appears unusual, as LP seems to outperform AutoVP in most cases. This contradicts the conclusions drawn in the AutoVP paper. Additionally, the performance gap between LoR-VP and LP is minimal. What would be the outcome if AutoVP's output transformation were replaced with LP?\\n5. As a general PEFT method, the evaluation is limited to image classification, without extension to other tasks such as segmentation, detection, or captioning.\\n\\n[1] Jia, Menglin, et al. \\\"Visual prompt tuning.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\\n\\n[2] Tsao, Hsi-Ai, et al.
\\\"Autovp: An automated visual prompting framework and benchmark.\\\" arXiv preprint arXiv:2310.08381 (2023).\", \"questions\": \"Please see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Point-to-point Response\", \"comment\": \"Thank you for recognizing that our paper is well-written and easy to follow. We appreciate your acknowledgment of the clarity and detail in the background part, the simplicity and effectiveness of our method, and the quality of the preliminary analysis that motivates our approach. Below, we provide detailed responses to your questions.\\n\\n> W1: If the backbone is a ViT, the padded tokens will interact with the inner tokens through self-attention, potentially affecting the entire image.\\n\\nThank you for the insightful point. We agree that in ViT architectures, the self-attention mechanism facilitates interactions between padded tokens and inner tokens. However, the propagation of information through self-attention depends on the network\\u2019s pre-trained attention patterns, which are learned without accounting for the presence of visual prompts (VPs). As a result, these patterns may not effectively amplify the task-specific signals introduced by the VPs in the periphery of the image. Our visual prompt designs address this limitation by directly modifying the pixel-level information of all patches. Additionally, we leverage the inductive biases present in the shared information across patches, allowing the pre-trained model to better adapt to downstream tasks. \\n\\n> W2&W3: The authors do not include a comparison or discussion of visual prompt tuning. Using a benchmark similar to AutoVP [2], which includes fine-grained classification and domain-specific tasks for a more comprehensive evaluation.\\n\\nThank you for the comment. 
We follow prior works, such as ILM-VP and AutoVP, to primarily investigate pixel-level visual prompt designs as the baselines. To provide a more comprehensive evaluation, we include additional experiments using ImageNet-21K pre-trained and ImageNet-1K fine-tuned ViT-B/32 on ten datasets, encompassing natural and artificial objects, scenes, and textures, to further demonstrate the generalization ability of our method. We also extend our comparison to include VPT [1], which modifies the transformer layers. In these experiments, we compare LoR-VP against four baselines: LP, ILM-VP, AutoVP, and VPT, following the implementations outlined in the original papers. The results, presented in Table 11 of the revised paper, show that LoR-VP achieves the best average performance across the ten datasets, outperforming both VPT and AutoVP. These findings further validate the effectiveness of our method in diverse scenarios.\\n\\n> W4: The comparison between AutoVP and LP appears unusual, as LP seems to outperform AutoVP in most cases. This contradicts the conclusions drawn in the AutoVP paper. Additionally, the performance gap between LoR-VP and LP is minimal. What would be the outcome if AutoVP's output transformation were replaced with LP?\\n\\nThank you for the comment. In our experiments, we observe that LP achieves better performance than what was reported in the AutoVP paper. To ensure the validity of our comparisons, we use our LP results instead of directly adopting the LP results from AutoVP. To address your concern, we conduct additional experiments to compare LoR-VP with AutoVP using linear probing (LP) as the output transformation. Following the same implementation as the output transformation investigation in our paper (Table 3). The results, presented in Table 12 of the revised paper, show that LoR-VP consistently outperforms AutoVP with LP across all models. 
This further demonstrates the effectiveness of our method when AutoVP employs LP as its output transformation.\\n\\n> W5: the evaluation is limitated to image classification, without extension to other tasks such as segmentation, detection, or caption.\\n\\nThank you for the suggestion. We follow the approach of previous works, such as AutoVP and ILM-VP, which primarily focus their investigations on image classification tasks. To address your concern, we conduct additional experiments to extend our evaluation to object detection and semantic segmentation tasks. We utilize YOLOv4 [3] for object detection and DeepLabv3+ [4] for semantic segmentation. Both models employ ImageNet-1K pre-trained ResNet-50 as the backbone. For object detection, we train on the Pascal VOC 2012 and 2007 training sets and evaluate on the Pascal VOC 2007 test set. The bounding box head is modified for output transformation. For semantic segmentation, we train on the Pascal VOC 2012 training set and evaluate on its validation set, adapting the DeepLabv3+ head for downstream segmentation. The experimental results are presented in Table 9 for detection and Table 10 for segmentation in the revised paper. LoR-VP demonstrates strong performance, outperforming AutoVP by nearly 4% in $\\\\text{AP}_{50}$ on VOC 2007 detection and by 1.1% in mIOU on VOC 2012 segmentation.\\n\\n*References:*\\n\\n[3] Yolov4: Optimal speed and accuracy of object detection. ArXiv 2020 \\n[4] Encoder-decoder with atrous separable convolution for semantic image segmentation. ECCV 2018\"}", "{\"title\": \"Point-to-point Response (Part 1)\", \"comment\": \"Thank you for recognizing that our method enables more efficient information sharing across image patches and significantly outperforms existing methods in both speed and accuracy. We appreciate your acknowledgment of our preliminary study, which provides a well-structured and logical explanation for our design choices. 
Our paper effectively highlights the limitations of existing methods that focus solely on peripheral areas. Our step-by-step reasoning provides a clear and robust justification for the development of LoR-VP. Below, we provide detailed responses to your questions.\\n\\n> W1&Q1: While design 4 (Patch-Same) outperforms others, the study does not definitively clarify whether its success is due to shared prompting across patches or the fact that the image is not scaled. Can you provide additional experiments or controlled studies to isolate the impact of image scaling from patch-specific information, to clarify whether the performance gains in design 4 are due to shared prompting or the lack of image scaling?\\n\\nThank you for the question. There seems to be a misunderstanding regarding the role of image scaling in our study. The performance differences observed are not related to scaling. In Figure 2 of our paper, we compare Patch-Same (Part 4 of Figure 1) with Patch-Free (Part 3 of Figure 1), both of which utilize a resized image resolution of $224 \\\\times 224$. We adopt a resolution of $224 \\\\times 224$ because Patch-Pad (Part 2 of Figure 1) demonstrates inferior performance, despite using patch-wise pad prompts. This approach disrupts the continuity of the original image by splitting it into discontiguous parts, leading to a loss of crucial information. In contrast, the performance advantage of Patch-Same over Patch-Free highlights the importance of shared prompting information across patches.\\n\\n> W2&Q2: the paper lacks formal mathematical proof detailing how the information across rows and columns in the visual prompts is linked, which leaves the assumptions of inductive bias vague. 
Additionally, while the low-rank matrix approach is intended to capture shared information, it does not explicitly guarantee that the natural relationship between neighboring pixels is preserved, and the exact nature of the associations formed between pixels remains unclear, weakening the justification for its effectiveness. Could you include a more formal mathematical explanation or proof of how the information across rows and columns in the visual prompts is linked, ensuring that the inductive bias of neighboring pixels being more related than distant ones is preserved?\\n\\nThank you for the suggestion. Unlike Patch-Same (Part 4 of Figure 1), which introduces patch-wise shared prompt information by directly using the same visual prompt for each patch, LoR-VP incorporates shared row and column (and thus across-patch) prompt information through the use of two low-rank matrices, $\\\\mathbf{B}$ and $\\\\mathbf{A}$, as described in Section 4.1.\\nSpecifically, $\\\\mathbf{B}$ serves as a basis for column visual prompts, where the visual prompt in each column of the image is a linear combination of the columns in $\\\\mathbf{B}$. This design introduces shared information among different columns of the visual prompts. Meanwhile, the coefficients for each column are represented by the columns in $\\\\mathbf{A}$, thereby introducing column-specific prompt information. Similarly, by interpreting $\\\\mathbf{A}$ as a basis for row visual prompts and $\\\\mathbf{B}$ as the coefficients of these row bases, we establish an analogous understanding of the inductive biases introduced across rows in the LoR-VP visual prompt design. This formulation ensures that the shared and specific prompt information is distributed across both rows and columns and, thus, patches. 
The empirical results presented in Figure 2 demonstrate the superiority of Patch-Same over Patch-Free and LoR-VP over Patch-Same, further validating the effectiveness of incorporating shared visual prompts across patches, rows, and columns.\"}" ] }
5bdcDl6mC7
Distribution-Aware Diffusion Model Quantization via Distortion Minimization
[ "Wang Zhe Mark", "Fen Fang", "Xu Kaixin", "Hongyuan Zhu", "Ying Sun", "Xue Geng", "Xulei Yang", "Min Wu", "Weisi Lin" ]
Diffusion models have attained significant performance in image/video generation and related tasks. However, while diffusion models excel in delivering excellent results, they suffer from substantial computational complexity due to their large volume of parameters. This poses a significant issue for deployment on mobile devices and hampers the practical applications of diffusion models. In this work, we propose a new post-training quantization approach designed to reduce the computation complexity and memory cost of diffusion models. As the distributions of the outputs of diffusion models differ significantly across timesteps, our approach first splits the timesteps into different groups and optimizes the quantization configuration of each group separately. We then formulate the quantization of each group as a rate-distortion optimization problem to minimize the output distortion caused by quantization given the model size constraint. Because output distortion is highly related to model accuracy, by minimizing the output distortion, our approach is able to compress diffusion models to low bit widths without hurting accuracy. Furthermore, our approach applies Taylor series expansion approximation and proposes an efficient method to find the optimal bit allocation across layers with linear time complexity. Extensive experimentation over four datasets including CIFAR-10, CelebaHQ, LSUN-Bedroom, and LSUN-Church validates the effectiveness of our approach. Empirical results show that our approach obtains a notable improvement over state-of-the-art and can reduce the bit width of diffusion models to 5-6 bits while maintaining high accuracy levels.
[ "Diffusion model", "image/video generation", "post-training quantization", "Taylor series expansion approximation" ]
Reject
https://openreview.net/pdf?id=5bdcDl6mC7
https://openreview.net/forum?id=5bdcDl6mC7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tSVTBSNZQP", "remrZJ8tTQ", "pqpjpp0BcO", "mXgDlzapj9", "imHmuKKEB2", "eQXc59yTkB", "ePNMJXjPYZ", "cktSxSK2ec", "b52a8LIk0W", "Y8xHIVj732", "XoyKgTKFFO", "XdExhAgnIY", "SRQS7BCWIk", "KVjQfoanh0", "KNARTARjbW", "JmI6rOKZB1", "ESSkURq88T", "Ak4y8EyoQl", "9r9HkTGtFf", "7gyGKmFMro", "6SBrzNrYhM", "4tVqm4E1Ou", "2ZV0OrcPNg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732724827859, 1732385661153, 1732553832664, 1732098621291, 1730625139807, 1732723670649, 1732724792795, 1730404468464, 1733198951744, 1732098856104, 1732554556851, 1734486971264, 1737523739798, 1732724873760, 1732890657238, 1732724849555, 1733161633563, 1730757612683, 1733198725670, 1733194604337, 1732099027420, 1730501534666, 1732724136974 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Reviewer_STQM" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Reviewer_KDiN" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Area_Chair_7e9z" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Reviewer_pc5T" ], [ "ICLR.cc/2025/Conference/Submission6030/Reviewer_pc5T" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ], [ "ICLR.cc/2025/Conference/Submission6030/Reviewer_9DVu" ], [ "ICLR.cc/2025/Conference/Submission6030/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer STQM,\\n\\nWe appreciate it if you could let us know whether our responses are able to address your concerns. We are happy to address any further concerns. Thank you.\\n\\nSincerely.\"}", "{\"title\": \"Response to Reviewer 9DVu's Comments\", \"comment\": \"We deeply thank Reviewer 9DVu for the careful review and the constructive suggestions. Below please check our answers to the questions.\\n\\nQ1. The paper does not provide specific implementation details such as the exact algorithms or code snippets used for quantization and optimization. Including these would help in reproducing the results.\", \"ans\": \"MSE is widely used in prior works to measure the quantization results. The PSNR is highly related to MSE. It is actually the log representation of MSE, i.e. PSNR = 10 log (L^2/MSE) where L is the largest value. Our approach applies the Structural Similarity Index Measure (SSIM) added by the Mean Squared Error (MSE) to define output distortion, where SSIM indicates the picture level similarity and MSE indicates pixel level similarity. As a result, our method takes the similarity in both picture level and pixel level into consideration.\"}", "{\"title\": \"Response to Reviewer KDiN's Comments\", \"comment\": \"We deeply thank Reviewer KDiN for the careful review and the constructive suggestions. Below please check our answers to the questions.\\n\\nQ1. 
Please elaborate on the advantages and differences of your mixed-precision strategy compared to prior quantization methods like MixDQ [1] and BitsFusion [2]. And provide a brief comparison table or paragraph highlighting key differences in approach and results.\", \"ans\": \"We revised the sentence \\u201cLi et al. introduced a post-training quantization (PTQ) Li et al. (2023b) method called Q-Diffusion, tailored specifically to the distinctive multi-timestep pipeline and model architecture of diffusion models\\u201d to the sentence corrected by the reviewer. We inserted Equation (2) in the text and removed Equation (4) due to the limited space. Thanks a lot for the suggestion.\"}", "{\"title\": \"Response to Reviewer pc5T's Comments\", \"comment\": \"We deeply thank Reviewer pc5T for the careful review and the constructive suggestions. Below please check our answers to the questions.\\n\\nQ1. Lack of insights into how quantization affects different model layers. Since these insights would be very helpful to understand the impact of various components in the diffusion models.\", \"ans\": \"The distance between two outputs is defined as the SSIM distance added by the Mean Squared Error (MSE) of the outputs. The SSIM distance reflects the difference in picture level, and the MSE reflects the difference in pixel level. We applied both SSIM and MSE to make the metric reflect picture-level and pixel-level similarity at the same time. In our submitted manuscript, we showed the SSIM distance in the equation, but mentioned that SSIM is added by the MSE as the final metric in the text. We have modified the equation and also included the MSE in the equation in the revised manuscript. Thanks for this good question.\"}", "{\"summary\": \"The authors present an approach to jointly quantize weights and activations of a diffusion model after it has been trained (i.e, post training quantization). 
To do so, a joint optimization problem is formulated: the authors propose to minimize the structural similarity index between the original and quantized models, where the optimization parameters are the quantization bit widths of weights and activations. The authors show that the global SSIM optimization can be simplified (under some assumptions) into a sum of SSIM measurements after every layer.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"-\", \"weaknesses\": \"In its current form, the paper does not present the complete information needed to assess its merits and contributions. The paper's writing (grammar, style, flow) is good, but once it comes to describing the actual working solution (end of Section 3) it struggles to present the final algorithm. There are inconsistencies between what is announced (i.e., abstract/intro) and what is presented and demonstrated (experiments). The issue is significant: I am not confident that any reader would be able to implement the approach following the presentation in the paper. I am not able to give any positive rating to this paper until these issues are clarified and/or resolved.\\n\\nThe authors present an approach to jointly quantize weights and activations of a diffusion model after it has been trained (i.e., post-training quantization). To do so, a joint optimization problem is formulated: the authors propose to minimize the structural similarity index between the original and quantized models, where the optimization parameters are the quantization bit widths of weights and activations. The authors show that the global SSIM optimization can be simplified (under some assumptions) into a sum of SSIM measurements after every layer. \\n\\nUnfortunately, this is as much detail as I can provide about their approach after reading the paper. The paper omits the final presentation of the quantization approach/algorithm, has some logical inconsistencies, and as such raises many questions:\\n1.
Section 3.4 does not present any readily available optimization algorithm. Lines 350-357 describe an approach, but it is impossible to follow. \\n2. Given the problem definition (eq.8, eq.16), the authors never discuss what kind of optimization it is: in my understanding it is mixed-integer (some parts are fully differentiable, some parts are integer, like the sizes of weights/activations). Therefore, in the same Section 3.4 I do not understand how the authors propose taking a derivative wrt s_w, which is defined as the size of the weights.\\n3. When describing the solution on lines 350-357, the authors propose enumeration over different choices; it would be highly beneficial to make very precise statements: what is being enumerated, what the different choices are, etc. As such, saying that a total runtime of O(K*M*N) has \\\"linear time complexity\\\" is misleading: linear wrt what? The size of the model? If so, then K, M, N need to be clearly counted.\\n4. The authors write that the algorithm uses mixed precision (layers can have different bit widths) and timestep-aware quantization. There is no word on how these are achieved and no experimental evaluation/discussion.\\n5. Experiments. I would like to understand how big the networks being compressed are, what their architecture is, and other basic information. Have the authors trained them from scratch or obtained them from somewhere? Visually speaking, it seems like 6-bit quantization introduces noticeable quality degradation, and probably shouldn't be counted as a major achievement.\\n6. Experiments. Where are the experiments where per-layer bit widths are different?
All presented experiments have same per layer setting.\", \"questions\": \"Please see weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer STQM's Comments\", \"comment\": \"We deeply thank Reviewer STQM for the careful review and the constructive suggestions. Below please check our answers to the questions.\\n\\nQ1. Section 3.4 does not present any readily awailable optimization algorithm. Lines 350-357 describe an approach, but it is impossible to follow.\", \"ans\": \"We enumerate slope $\\\\gamma$ in Equation 15 to solve the optimization problem. Equation 15 expresses the optimal condition that the slopes of all rate-distortion curves (output distortion versus size) should be equal. As a result, we can find the minimal solution by enumerating slope $\\\\gamma$ and choosing the point on each rate-distortion curve with slope equal to $\\\\gamma$. The time complexity of our optimization algorithm is $O(K \\\\cdot M \\\\cdot N)$, where $K$ is the total number of slope $\\\\gamma$ to be evaluated, $M$ is the total number of bit widths, and $N$ is the total number of layers.\"}", "{\"comment\": \"Dear Reviewer pc5T,\\n\\nWe appreciate it if you could let us know whether our responses are able to address your concerns. We are happy to address any further concerns. Thank you.\\n\\nSincerely.\"}", "{\"summary\": \"This paper proposes a post-training quantization (PTQ) approach to reduce the model size and computational complexity of diffusion models. Specifically, it introduces a method that splits timesteps into groups, optimizing quantization parameters within each group independently. The quantization process for each group is then formulated as a rate-distortion optimization problem to minimize output distortion. 
Additionally, the approach employs mixed-precision quantization, applying different bit widths across layers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Quantization in the diffusion model is important.\\n2. The use of mixed-precision quantization, with varying bit widths across layers, enhances the adaptability and precision of the quantization process.\\n3. Formulating the quantization as an optimization problem is reasonable.\", \"weaknesses\": \"1. Please elaborate on the advantages and differences of your mixed-precision strategy compared to prior quantization methods like MixDQ [1] and BitsFusion [2]. And provide a brief comparison table or paragraph highlighting key differences in approach and results.\\n\\n2. The \\\"Related Works\\\" section has numerous typos and informal language, such as in lines 122 and 128 where \\\"Zhong et al. Zhong et al. (2022)\\\" and \\\"Ma et al. Ma et al. (2023)\\\" are repeated. And I also recommend consolidating several paragraphs into one paragraph in Section 2.2 for better flow.\\n\\n3. Additionally, some sentences in this section are confusing. For instance, \\\"Li et al. introduced a post-training quantization (PTQ) Li et al. (2023b) method called Q-Diffusion, tailored specifically to the distinctive multi-timestep pipeline and model architecture of diffusion models,\\\" could be rephrased for clarity. For example, \\\" Li et al. (2023b) proposes Q-Diffusion, which is a PTQ method tailored specifically to the distinctive multi-timestep pipeline and model architecture of diffusion models\\\". The preliminaries would also benefit from conciseness due to limited space, especially for details on diffusion models. For example, Eq. (2) and (4) can be removed or inserted in the paragraph.\\n\\n4. The motivation for splitting timesteps into groups needs clarification. 
Currently, it's unclear how the approach of \\u201csplitting timesteps into groups to optimize quantization for each group\\u201d logically leads to \\u201cusing mixed precision to quantize parameters across layers.\\u201d in line 215. Authors could include a brief explanation or diagram illustrating how these components of their approach are connected.\\n\\n5. Lastly, Table 2 does not sufficiently demonstrate the advantages of the proposed method. With recent works achieving quantization at or below 4 bits, such as MixDQ (4 bits) and BitsFusion (1.99 bits), the lowest comparison at 6 bits in Table 2 is less compelling. Authors can include results for lower bit-widths (e.g. 2 bits, 4 bits) in their comparisons or explain why their method may not be applicable at such low bit-widths if that is the case. Additionally, it's better to provide more results compared to state-of-the-art methods [1][2].\\n\\n[1] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization. ECCV 2024.\\n\\n[2] BitsFusion: 1.99 bits Weight Quantization of Diffusion Model. NeurIPS 2024.\", \"questions\": \"1. What is the difference of your mixed-precision strategy compared to prior quantization methods like MixDQ [1] and BitsFusion [2].\\n\\n[1] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization. ECCV 2024.\\n\\n[2] BitsFusion: 1.99 bits Weight Quantization of Diffusion Model. NeurIPS 2024.\\n\\n2. What is the motivation for splitting timesteps into groups?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer pc5T,\\n\\nWe really appreciate your updated review and score. We will include the FID comparison in the finalized manuscript.\\n\\nSincerely\"}", "{\"title\": \"Response to Reviewer pc5T's Comments - Part 2\", \"comment\": \"Q5. 
In Eq.7, why does the total size constraint sum up both the quantized activations and the quantized weights? Shouldn't the weights and activations have separate constraints, one for the model parameters and the other for the activations?\", \"ans\": \"A 4-bit quantized model means weights and activations are quantized to 4 bits on average. Because it is an average value, weights and activations in some layers may receive more than 4 bits or less than 4 bits.\"}", "{\"title\": \"Response to Reviewer KDiN's Comments - Part 2\", \"comment\": \"Q4. The motivation for splitting timesteps into groups needs clarification. Currently, it's unclear how the approach of \\u201csplitting timesteps into groups to optimize quantization for each group\\u201d logically leads to \\u201cusing mixed precision to quantize parameters across layers.\\u201d in line 215. Authors could include a brief explanation or diagram illustrating how these components of their approach are connected.\", \"ans\": \"MixDQ and BitsFusion focus on the quantization of text-to-image diffusion models. Different from them, our approach focuses on the quantization of image-to-image diffusion models. Note that BitsFusion requires training the model from scratch after quantization, which is time-consuming, and the method has high time complexity. This is also the reason why BitsFusion can go down to very low bit widths (1.99 bits). Different from BitsFusion, our approach is a post-training quantization approach, which does not require training the model after quantization. The baseline methods we compared with are all post-training quantization methods. We reported our results at 4 bits on four datasets, which are listed in Table 2 below. As we can see, the performance has dropped noticeably at 4 bits. With the model trained after quantization, the performance can be largely improved. 
We didn't report the results with training in the manuscript, as this is outside the scope of this paper, and we are not able to train the models given the limited time of the rebuttal period. We will add the training results to the manuscript once the training is done. Thanks for this insight.\\n\\nTable 2 - Results (FID) at 4 bits on different datasets\\n| Dataset | Model | Full Precision | 4 Bits |\\n|--------------|--------------|----------------|--------|\\n| CIFAR-10 | DDPM | 3.30 | 33.03 |\\n| Celeba-HQ | DDPM | 9.01 | 9.49 |\\n| LSUN-Bedroom | LSUN-Bedroom | 7.89 | 39.88 |\\n| LSUN-Church | LSUN-Church | 11.33 | 59.74 |\"}", "{\"metareview\": \"This paper introduces a post-training quantization approach with mixed-precision to reduce the size and computational complexity of diffusion models. To account for varying output distributions across timesteps, the approach groups timesteps and optimizes quantization parameters for each group independently. The method quantizes models to 5\\u20136 bits while maintaining good performance.\\n\\n\\nHowever, most experiments in this paper were conducted on small-scale datasets such as CIFAR-10, CelebA-HQ, and LSUN. Additionally, a reviewer raised concerns about unfair comparisons between this work and baseline approaches. The authors are encouraged to include quantitative results on large-scale datasets like ImageNet-1K and address the issues regarding comparisons.\", \"additional_comments_on_reviewer_discussion\": \"During the authors-reviewers discussion, one reviewer participated and increased their rating. During the ACs-reviewers discussion, the reviewer with a very positive rating neither championed this paper nor responded. The reviewer with a relatively negative rating also did not provide further feedback. One reviewer expressed concerns about the comparisons conducted in this work. 
Overall, considering the scope of this work, the authors are encouraged to improve the quality of the submission for a future venue.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer KDiN,\\n\\nWe would appreciate it if you could let us know whether our responses are able to address your concerns. We are happy to address any further concerns. Thank you.\\n\\nSincerely.\"}", "{\"comment\": \"Dear AC, Reviewers,\\n\\nWe have answered the questions from the reviewers. We would appreciate it if the reviewers could let us know whether our answers have addressed their concerns. We are glad to answer further concerns and questions. Thank you.\\n\\nSincerely,\"}", "{\"comment\": \"Dear Reviewer 9DVu,\\n\\nWe would appreciate it if you could let us know whether our responses are able to address your concerns. We are happy to address any further concerns. Thank you.\\n\\nSincerely.\"}", "{\"comment\": \"Thank you for your reply. I've updated my review to reflect my updated score. I would expect the authors to include an FID comparison on the ImageNet dataset.\"}", "{\"summary\": \"This work focuses on post-training quantization of diffusion models for both the model parameters and layer activations. It adopts a mixed-precision approach wherein different layers are quantized with different numbers of bits for precision. It groups different diffusion time steps and applies the quantization procedure per group. At the heart of this paper is quantization based on output distortion, which is defined by the Structural Similarity Index Measure (SSIM) loss between the original layer output and the quantized layer output. The paper formulates this as an optimization across layers and shows an additive property of the output distortion. Further, it splits this optimization into per-layer output distortion objectives, which are easily tractable. 
Finally, various empirical evaluations are conducted to quantize diffusion models trained on CIFAR-10, CelebaHQ, LSUN-Bedroom, and LSUN-Church datasets. This work shows the performance of models quantized from 8-bit to 4-bit and evaluates them both quantitatively and qualitatively.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Post-training quantization results on various datasets show that these diffusion models can be quantized efficiently to bit widths as low as 6 bits (both in weights/activations) without losing much quantitative performance.\", \"The proposed scheme performs quite competitively compared to other post-training frameworks.\"], \"weaknesses\": [\"Lack of insights into how quantization affects different model layers. These insights would be very helpful to understand the impact of various components in the diffusion models.\", \"Since the experiments are done on smaller datasets, it would be hard to understand how the conclusions transfer to other networks / datasets / diffusion frameworks.\", \"Although the quantitative metrics look good in Table 1, the qualitative images in Figure 2 show some artifacts in 6-bit compression. It would be good to evaluate these more thoroughly.\"], \"questions\": [\"Why use the output distortion metric (aka SSIM between the outputs of the original and quantized models), given that there are many other ways to define the distance between these two outputs (like mean-squared error, maximum mean discrepancy, absolute difference, or other forms of distributional distance metrics)?\", \"In Eq.7, why does the total size constraint sum up both the quantized activations and the quantized weights? 
Shouldn't the weights and activations have separate constraints, one for the model parameters and the other for the activations?\", \"Have you tried parameter-only quantization to see how much lower the models' bit-widths can go if activations are kept at higher bits (8-bit or higher)?\", \"How does one adapt the proposed quantization scheme for quantization-aware training schemes?\", \"Since the proposed scheme is a mixed-precision scheme, do you have any analysis of which layers require more bits and which layers can tolerate lower bits?\", \"Can you clarify that when the scheme says 4-bit quantized models, it means all the layers (except input/output) use 4 or fewer bits for quantization (for both weights/activations)?\", \"What modifications does one need to apply this scheme to Text-to-Image diffusion models?\", \"Most of the experiments involve UNet-style networks; do you know if the same insights would hold true for transformer networks? Similarly, current experiments only involve DDPM/DDIM frameworks; will similar conclusions hold true for rectified flow / EDM frameworks?\", \"While the performance of the W6A6 scheme looks comparable to the full-precision scheme from the quantitative metrics on Celeba-HQ (9.01 FID vs 9.06 FID), the qualitative images in Figure 2 show a lot of distortions in the generated images. Do you have any insights as to why this issue occurs?\", \"In Table 1, why does the model performance increase with lower bits (for instance, Celeba-HQ with 8/8 has better performance than full precision)?\", \"Can you comment on the computational cost of the various methods in Table 2? 
How does the proposed scheme fare against these baselines?\"], \"missing_references\": [\"Stable diffusion with core ml on apple silicon : https://github.com/apple/ml-stable-diffusion\", \"BitsFusion:\\u00a0https://arxiv.org/abs/2406.04333\", \"LEARNED STEP SIZE QUANTIZATION https://openreview.net/pdf?id=rkgO66VKDS\", \"Efficientdm: Efficient quantization-aware fine-tuning of low-bit diffusion models: https://openreview.net/forum?id=UmMa3UNDAz\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer KDiN,\\n\\n\\nWe answered the questions in your comments and revised the manuscript accordingly, including:\\n1. Elaborated on the advantages of our proposed mixed-precision quantization approach, discussed the differences with prior works (MixDQ and BitsFusion), and added a comparison table to highlight the key differences.\\n2. Fixed the typos in the related work section, and consolidated the paragraphs in Section 2.2.\\n3. Rephrased the confusing sentences in the related work section, and removed the equations in the preliminaries section.\\n4. Explained the motivation for splitting the time steps and the connections between the components of our approach.\\n5. Added the results at the low bit width (4 bits).\\n\\nNote that MixDQ and BitsFusion focus on the quantization of text-to-image diffusion models. In this paper, our approach and the baseline methods we compared with focus on the quantization of image-to-image diffusion models. BitsFusion requires training the model from scratch, which is the reason that this method can quantize the model to a very low bit width (1.99 bits). Different from BitsFusion, our approach is a post-training quantization method and does not train or re-train the model after quantization. We discussed this in our answers. Please check our answers for more details. 
\\n\\nWe would appreciate it if you could let us know whether our answers have addressed your concerns or if you have other questions. We are glad to address your further questions and concerns.\\n\\nSincerely\"}", "{\"comment\": \"Dear Reviewer STQM,\\n\\n\\nWe answered the questions in your comments and revised the manuscript accordingly, including:\\n1. Presented the implementation details of the optimization algorithm.\\n2. Explained the derivative of the variables.\\n3. Explained the enumeration and the time complexity.\\n4. Discussed the bit allocation of mixed-precision quantization and the impact of the number of time steps.\\n5. Provided information on the network architecture, model size, and training.\\n\\nSpecifically, in the manuscript, we added Section A.4 to discuss the bit allocation across layers in our mixed-precision quantization, Section A.6 to discuss the impact of the number of time steps, and Section A.7 to provide the implementation details of the optimization and show the pseudo-code of the algorithm, and revised the related texts in the manuscript accordingly.\\n\\nWe would appreciate it if you could let us know whether our answers have addressed your concerns or whether you have other questions. We are glad to address your further questions and concerns.\\n\\nSincerely\"}", "{\"title\": \"Response to Reviewer pc5T's Comments - Part 3\", \"comment\": \"Q10. What modifications does one need to apply this scheme to Text-to-Image diffusion models?\", \"ans\": \"All the baseline methods in Table 2 are post-training quantization methods which do not require retraining the model after quantization. These methods are thus efficient and fast with low computational complexity. To make the comparison fair, our approach also does not retrain the model and directly compares with the baseline methods after quantization.\"}", "{\"summary\": \"This paper proposes a post-training quantization approach for diffusion models. 
The paper finds that the distributions of the outputs of diffusion models differ significantly across timesteps, and utilizes this finding to split the timesteps into meaningful groups and optimize the quantization configuration of each group separately. Empirical results demonstrate that the proposed approach achieves significant improvements over state-of-the-art methods, enabling the reduction of diffusion models' bit width to 5-6 bits while preserving high accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is written clearly and is easy to read.\", \"The paper provides both theoretical as well as empirical evidence that the proposed approach works.\"], \"weaknesses\": [\"For details, please see the Questions section.\", \"The motivation behind some of the design choices is not explained clearly\", \"Some parts of the proposed method are missing elaboration\"], \"minor\": [\"The paper does not provide specific implementation details such as the exact algorithms or code snippets used for quantization and optimization. Including these would help in reproducing the results.\", \"Including a user study or qualitative analysis to assess the perceptual quality of the generated images could provide additional insights into the effectiveness of the quantization method.\"], \"questions\": [\"It is not clear how the timesteps are grouped. Please provide more details, as this is one of the major steps in the algorithm.\", \"It is mentioned that \\u201cthe output distortion is defined as the Structural Similarity Index Measure (SSIM) loss between $O$ and $\\\\hat{O}$ plus the MSE\\u201d. Is this a common method for evaluating output distortion? Why not, e.g., PSNR?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer STQM's Comments - Part 2\", \"comment\": \"Q4. 
The authors write that the algorithm uses mixed precision (layers can have different bitwidths) and timestep-aware quantization. There is no word on how these are achieved and no experimental evaluation/discussion.\", \"ans\": \"Different layers may receive different bit widths for quantization. We illustrated the results of the bit allocation across layers in the newly added Figure 9 in the manuscript. Thanks for pointing out this important question.\"}" ] }
5bUy4F59mk
Tool Decoding: A Plug-and-Play Approach to Enhancing Language Models for Tool Usage
[ "Chenheng Zhang", "Mingqing Xiao", "Yifei Wang", "Lizhe Fang", "Haihan Zhang", "Zhouchen Lin" ]
Despite the significant advancements in large language models (LLMs), their tool-use capabilities remain limited. This limitation stems from the fact that existing approaches often merely adapt strategies designed for basic natural language tasks, overlooking the specific challenges inherent in tool usage, such as precise tool selection, strict predefined formats, and accurate parameter assignment. To bridge this gap, we conduct a fine-grained analysis of the tool usage process, breaking it down into three critical stages: tool awareness, tool selection, and tool call. Our analysis reveals that most failures stem from selection errors, format violations, and parameter mis-assignments. Building on these insights, we propose \textbf{Tool Decoding}, a novel, training-free approach that directly incorporates tool-specific information into the decoding process. Tool Decoding employs constrained decoding to ensure format correctness and eliminate hallucinations, while leveraging order consistency to improve parameter accuracy through structured sampling and a majority-voting mechanism. This approach effectively addresses many common tool-use errors in a plug-and-play manner, allowing for seamless generalization to new tools as long as they are accompanied by well-structured documentation to guide the decoding process. Experimental evaluations on benchmarks like API-Bank and BFCL V2 • Live show that Tool Decoding leads to significant improvements across a diverse set of more than 10 models, including both generalist and tool-finetuned models. Almost all models demonstrate performance gains exceeding 70\% on both benchmarks. Among the 7B-level models, five outperform GPT-3.5 on key tasks, with two even surpassing GPT-4.
[ "large language models", "tool usage", "decoding method" ]
Accept (Poster)
https://openreview.net/pdf?id=5bUy4F59mk
https://openreview.net/forum?id=5bUy4F59mk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xwOhc88s76", "xADSksN6Cv", "tG03VFbTws", "t2B823YILv", "qEhndBZjnX", "ppTx4l4iAh", "jOEAbiu8oy", "fpHyq9xLiq", "fJ7M4d6KAO", "ef2C85nhh2", "ctzLbq1PY4", "Z8O2NPhiUH", "YtxYdKqHlf", "Y3kwvQNNqc", "XzcRL2fE3c", "WEzlx3aASD", "UQ6SYIy6Rz", "Rw0MPhirmw", "KUx6wSYUh0", "KLybwr7RRq", "I4YMFvqnex", "GqX7tduPw0", "EF9rHDP2AW", "BZCXtBiyah" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1730303561264, 1732344839620, 1732345331922, 1732640457340, 1732560673082, 1730316201252, 1732346291353, 1732345026630, 1732344690795, 1732344034480, 1732345608239, 1732343853149, 1730571285913, 1739331741458, 1732462950682, 1732343410307, 1732629742543, 1737523988839, 1732345508388, 1732640562914, 1732640219706, 1732640348619, 1730671703563, 1735933966558 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_CYKv" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_fh4s" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_fh4s" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_hS7t" ], [ 
"ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_CYKv" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_PjPG" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Authors" ], [ "ICLR.cc/2025/Conference/Submission9531/Reviewer_PjPG" ], [ "ICLR.cc/2025/Conference/Submission9531/Area_Chair_gZNH" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a constrained decoding workflow to improve LLMs' utilization of external tools. LLMs need to recognize when to enter tool mode, which tool to select, and how to invoke the tool with the appropriate arguments. Errors in any of these steps can result in an incorrect response. To address this, the authors introduce a constrained decoding approach that restricts the output vocabulary to include only tool names when the LLM enters tool mode. After a tool is selected, multiple candidate arguments (key-value pairs) are generated by varying the parameter order, and majority voting is used to select the final value for each parameter. The tool is then called, and the LLM continues generating its response. The authors evaluate their tool decoding method on the API-Bank and BFCL datasets across several LLMs, finding that their approach outperforms standard greedy and beam search decoding techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The main benefit of the proposed constrained tool decoding approach is that it is training-free, making it easier to integrate into existing inference pipelines.\\nConstrained decoding provides a significant accuracy boost over greedy and beam search methods in terms of tool usage. 
\\nThe authors conduct an error analysis to examine the impact of different error types across various LLMs, helping to identify key bottlenecks. \\nAdditionally, the idea of generating multiple values for each parameter by varying the order is a compelling approach.\", \"weaknesses\": \"The proposed constrained decoding method increases latency, so it would have been beneficial to include a comparison in terms of latency impact. Additionally, incorporating some naive baselines, such as those using alternative search strategies like epsilon sampling or prompt engineering, would add value. As noted in the related work section, other constrained decoding algorithms are available, so a comparison to existing methods would have been interesting. Furthermore, the absence of stronger instruction-tuned LLMs as base models is a limitation.\", \"questions\": \"Could you please address the questions below:\\n1. Why is UltraTool used in Figure 2, while API-Bank is used in Figure 3?\\n2. Why does the awareness error change between the configurations with and without tool decoding in Figure 6?\\n3. Why is the LLM used to generate optional parameters instead of supplying them directly, as is done with required parameters?\\n4. Could you provide a more detailed explanation of how accuracy is computed in the paper?\\n5. Could you report the success rate with and without constrained decoding, specifically indicating the proportion of samples with correct awareness, selection, and tool call?\", \"additional_comments\": \"1. Figure 5 is missing accuracy numbers for Gemma.\\n2. Reporting accuracy numbers in Table 3 rather than error reduction might provide easier interpretation.\\n3. 
\\u201cNotably, tool call presents the greatest complexity, with even powerful models like Qwen1.5-72B achieving less than 60% accuracy in this stage\\u201d is a bit misleading because just by reading the text it appears that none of the model is able to achieve greater than 60% accuracy, which is not true because 32B model achieve around 70% accuracy. Maybe you can consider re-phrasing it to \\u201cNotably, tool call presents the greatest complexity, with the best performing model achieving around X% \\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer hS7t (Part 2)\", \"comment\": \"**W&Q 2.** More powerful models.\\n\\nFollowing your suggestion, we conducted experiments on more powerful tool-finetuned models, such as gorilla-openfunctions-v2. Additionally, we included experiments on two 30B-level generalist models to enhance the comprehensiveness of our evaluation. 
The results have been added in **Appendix E**.\\nIn the table below, Tool Decoding demonstrates significant improvements across all three models, with both deepseek-coder-33b and gorilla-openfunctions-v2 outperforming GPT-4 on API-Bank.\\n\\n| | Decoding Method | Call | Retrieve+Call | Total |\\n| :----------------------: | :---------------: | :------: | :-----------: | :------: |\\n| GPT-4 | Greedy Search | 76.2 | 47.4 | 61.8 |\\n| gorilla-openfunctions-v2 | Greedy Search | 51.9 | 48.9 | 50.4 |\\n| gorilla-openfunctions-v2 | Beam Search | 48.4 | 45.2 | 46.8 |\\n| gorilla-openfunctions-v2 | **Tool Decoding** | **77.2** | **51.9** | **64.6** |\\n| Yi-1.5-34B | Greedy Search | 60.4 | 45.2 | 52.8 |\\n| Yi-1.5-34B | **Tool Decoding** | **68.9** | **53.3** | **61.1** |\\n| deepseek-coder-33b | Greedy Search | 57.9 | 46.7 | 52.3 |\\n| deepseek-coder-33b | **Tool Decoding** | **74.4** | **57.0** | **65.7** |\"}", "{\"title\": \"Response to Reviewer fh4s (Part 2)\", \"comment\": \"**W 1.2** Comparison with in-context learning\\n\\nAs a plug-and-play method, Tool Decoding integrates seamlessly with prompt engineering. The table below presents the performance with different numbers of in-context examples on API-Bank (Call). The results demonstrate that our method effectively combines with prompt engineering, significantly enhancing the model\\u2019s tool usage capabilities. Notably, this combination even enables a 7B-level generalist model, such as deepseek-coder-6.7b, to surpass GPT-4 under the same prompt settings, as highlighted in **bold**. 
The results have been added in **Section 4**.\\n\\n| Model | Decoding Methods | icl0 | icl2 | icl4 | icl6 | icl8 |\\n| :--------------:|:-------------------: | :--: | :------: | :------: | :------: | :------: |\\n| GPT4 | Greedy Search | 76.2 | 72.7 | 72.2 | 73.7 | 73.4 |\\n| Mistral-7b-v0.1 | Greedy Search | 31.3 | 47.1 | 45.1 | 50.4 | 43.6 |\\n| Mistral-7b-v0.1 | **Tool Decoding** | 65.7 | 70.2 | 69.2 | 70.5 | 70.9 |\\n| deepseek-coder-6.7b | Greedy Search | 46.9 | 66.7 | 69.2 | 69.2 | 70.2 |\\n| deepseek-coder-6.7b | **Tool Decoding** | 70.9 | **74.4** | **76.7** | **76.9** | **77.4** |\\n\\n---\\n\\n**W 2.** Larger models\\n\\nFollowing your suggestion, we conducted experiments on two 30B-level generalist models to enhance the comprehensiveness of our evaluation. We have added the results in **Appendix E**.\\nIn the table below, Tool Decoding demonstrates significant improvements across these 30B-level models, with deepseek-coder-33b even outperforming GPT-4 on API-Bank.\\n\\n| Model | Decoding Method | Call | Retrieve+Call | Total |\\n| :----------------: | :---------------: | :------: | :-----------: | :------: |\\n| GPT-4 | Greedy Search | 76.2 | 47.4 | 61.8 |\\n| Yi-1.5-34B | Greedy Search | 60.4 | 45.2 | 52.8 |\\n| Yi-1.5-34B | **Tool Decoding** | **68.9** | **53.3** | **61.1** |\\n| deepseek-coder-33b | Greedy Search | 57.9 | 46.7 | 52.3 |\\n| deepseek-coder-33b | **Tool Decoding** | **74.4** | **57.0** | **65.7** |\\n\\nFor 70B-level models, we regret that limited computational resources prevent us from conducting more extensive experiments, since Tool Decoding requires deploying models locally. However, we performed a fine-grained error analysis with cloud APIs on two 70B-level models: qwen2-72b-instruct, which achieved an accuracy of 71.4%, and qwen1.5-72b-chat, which achieved 69.9%. The table below presents the error type distribution for both models. 
Format and value errors remain the most prevalent, so we believe our method still works on these powerful models. The analysis has been added in **Appendix D**.\\n\\n| Model | Awareness | Selection | Format | Key | Value |\\n| :----------------: | :-------: | :-------: | :----: | :--: | :---: |\\n| qwen2-72b-instruct | 8.0 | 7.1 | 27.4 | 8.0 | 49.6 |\\n| qwen1.5-72b-chat | 24.2 | 9.2 | 28.3 | 9.2 | 29.2 |\\n\\n---\\n\\n**W 3.** Computational efficiency\\n\\nThank you for your constructive advice! We have added an analysis of computational efficiency in **Appendix C**.\\n\\nThe table below shows the inference speed (sec/sample) for Tool Decoding in comparison with greedy search and beam search. Tool Decoding with $oc \\\\leq 1$ introduces only slight latency, while Tool Decoding with $oc \\\\leq 6$ is faster than beam search with the same number of samples. This is because order consistency maintains multiple candidates only during the generation of the tool call, which constitutes just a portion of the entire response. Note that our current implementation applies constraints at the level of logits. For practical deployment, these constraints could be implemented at the language head layer, which would further reduce computational requirements and enhance processing speed.\\n\\n| Model | Greedy Search | Tool Decoding ($oc \\\\leq 1$) | Beam Search ($beam=6$) | Tool Decoding ($oc \\\\leq 6$) |\\n| :-----------------: | :-----------: | :-------------------------: | :--------------------: | :-------------------------: |\\n| Mistral-7b-v0.1 | 7.92 | 8.57 | 13.82 | 9.74 |\\n| deepseek-coder-6.7b | 7.69 | 8.41 | 15.25 | 10.7 |\"}", "{\"comment\": \"Thank you for raising the score! We are delighted to know that our responses addressed your concerns. If you have any additional questions or feedback, please feel free to let us know. 
Wishing you a wonderful day!\"}", "{\"comment\": \"Thanks for the responses from the authors, I am glad to raise my score\"}", "{\"summary\": \"This paper introduces a method to improve LLMs' tool-use capabilities with constrained decoding. They claim that current LLMs struggle with tool awareness, tool selection, and tool call. To address these, the author uses constrained decoding to reduce selection and format errors; moreover, they also use order consistency to enhance parameter accuracy with structured sampling and majority voting.\nThis plug-and-play approach enables seamless adaptation to new tools only with the API documentation.\nExperimental results on API-Bank and BFCL V2 Live benchmarks show high performance improvements across different models.\nThis work demonstrates the effectiveness of specialized decoding methods tailored to tool usage.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work introduces an innovative, training-free approach to address tool-use challenges in language models. With constrained decoding and order consistency, the LLM can have much better performance over the two benchmarks.\n2. The paper is well-organized and uses clear figures to present the ideas, like constrained decoding. The figures break down workflows and methods, which helps improve readability.\n3. This method provides a way to improve the LLM's tool ability without training. This method may be important across diverse applications, enabling more efficient and versatile LLM deployment in real-world settings.\", \"weaknesses\": \"1. The author only compares to standard methods like greedy and beam search. However, I believe the author should do more comparisons with recent tool-use improvement methods, such as state-tracked constrained decoding and reranking methods or other in-context learning methods.\n2. 
The models involved in the paper are mostly under 7B (only discusses models under 7B in Figure 2), yet discussion about 30B or even 70B models is limited in the paper. I am wondering whether the method still works or how much the method will improve when the model gets larger.\n3. Since the method involves structured sampling and majority voting, maybe the author can discuss its computational efficiency compared to the baselines. This would give a more complete picture of its practical trade-offs in real-time applications.\", \"questions\": \"See questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer CYKv (Part 3)\", \"comment\": \"**W 3.** Take stronger models as base models\n\nWe supply results with three stronger models, including one powerful tool-finetuned model and two 30B-level models.\n\nIn the Table below, gorilla-openfunctions-v2 represents one of the most advanced tool-finetuned models, while Yi-1.5-34B and deepseek-coder-33b are both 30B-level LLMs. Tool Decoding demonstrates significant improvements across all three models, with both deepseek-coder-33b and gorilla-openfunctions-v2 outperforming GPT-4 on API-Bank. 
The results have been added in **Appendix E**.\\n\\n| | Decoding Method | Call | Retrieve+Call | Total |\\n| :----------------------: | :---------------: | :------: | :-----------: | :------: |\\n| GPT-4 | Greedy Search | 76.2 | 47.4 | 61.8 |\\n| gorilla-openfunctions-v2 | Greedy Search | 51.9 | 48.9 | 50.4 |\\n| gorilla-openfunctions-v2 | Beam Search | 48.4 | 45.2 | 46.8 |\\n| gorilla-openfunctions-v2 | **Tool Decoding** | **77.2** | **51.9** | **64.6** |\\n| Yi-1.5-34B | Greedy Search | 60.4 | 45.2 | 52.8 |\\n| Yi-1.5-34B | **Tool Decoding** | **68.9** | **53.3** | **61.1** |\\n| deepseek-coder-33b | Greedy Search | 57.9 | 46.7 | 52.3 |\\n| deepseek-coder-33b | **Tool Decoding** | **74.4** | **57.0** | **65.7** |\\n\\n---\\n\\n**Q 1.** Why UltraTool is used in Figure 2, while API-Bank is used in Figure 3?\\n\\nUltraTool is a dataset specifically designed to provide fine-grained testing for the three distinct tool-use stages individually, so we utilize it to conduct a preliminary experiment to evaluate the performance of LLMs at each stage in isolation. \\n\\nHowever, tool usage is a continuous process across these stages rather than a series of isolated steps. Therefore, we conduct an error analysis using API-Bank, a widely used benchmark for tool-use dialogue, to better reflect the practical scenarios.\\n\\n---\\n\\n**Q 2.** Why the awareness error changes between the configurations with and without tool decoding in Figure 6?\\n\\nThank you for helping us identify a small bug in our error analysis. Awareness errors occur when the models fail to output the start signal for using a tool, such as [. However, we detected these errors by checking for the simultaneous absence of both [ and ], which caused a few cases to be incorrectly classified as awareness errors. We have fixed this bug and updated the corresponding results in the paper. 
The change is tiny and has no influence on our conclusion.\n\n---\n\n**Q 3.** Why is the LLM used to generate optional parameters instead of supplying them directly, as is done with required parameters?\n\nRequired parameters are essential for invoking a tool, so we can directly supply them to ensure none are missing. However, optional parameters can be absent in a tool call. Whether to use them depends on the specific context, so we allow LLMs to make this decision autonomously through constrained decoding.\n\n---\n\n**Q 4.** Provide a more detailed explanation on how accuracy is computed in the paper?\n\nThe accuracy is computed following these steps:\n\n1.\tExtract the tool call from the generated content according to the predefined format.\n2.\tVerify whether the tool name generated is correct.\n3.\tCheck whether all the keys used are correct.\n4.\tConfirm whether all the values used are correct.\n\nA tool usage is considered successful only if it passes all the above steps. Otherwise, it is classified as a failure.\n\n---\n\n**Q 5.** Report the success rate with and without constrained decoding, specifically indicating the proportion of samples with correct awareness, selection, and tool call?\n\nWe apologize for not fully understanding your points. We have reported our success rate and error distribution across three stages with and without Tool Decoding. Please refer to Figures 5 and 6 in the paper.\n\n---\n\n**AC 1.** Figure 5 is missing accuracy numbers for Gemma.\n\nThe accuracy numbers for Gemma are provided in Appendix G. Due to space limitations, we have reported only a subset of the results in the main text.\n\n---\n\n**AC 2.** Reporting accuracy numbers in Table 3 rather than error reduction might provide easier interpretation.\n\nThanks for your advice. We aim to emphasize the performance of Order Consistency in mitigating value errors. 
Additionally, since the overall distribution of error types varies across models, using the proportion of value errors provides a clearer and more consistent basis for comparison across different models.\n\n---\n\n**AC 3.** re-phrase in section 2.\n\nThanks for your suggestion! We have re-phrased it in **Section 2** according to your comment.\"}", "{\"title\": \"Response to Reviewer fh4s (Part 1)\", \"comment\": \"We thank Reviewer fh4s for providing constructive comments for our work. We will address your concerns in the following points. All of the results below have been added in our paper.\n\n---\n\n**W 1.1.** Comparison with other constrained decoding methods\n\nThere are two existing constrained decoding methods for tool usage. We have compared them with our method, and the results have been included in **Appendix F**.\n\n**FANTASE** [1] is not a plug-and-play method, as it requires additional training of a reranker. It introduces state-tracked constrained decoding to ensure the correct format but still relies on LLMs to generate all keys, including both required and optional parameters. This approach cannot effectively address issues like missing certain parameters, as illustrated in Figure 2 of [1]. To mitigate this limitation, a separate reranker should be trained to select the optimal tool call from multiple generated samples. In contrast, our method does not require any additional training and inherently avoids parameter absence, ensuring robustness in tool usage. Considering that this approach is not training-free and the code has not been open-sourced, we did not include it in our experimental comparisons.\n\n**TOOLDEC** [2] is not universally applicable due to its specific requirements for tool call formats. It employs multiple Finite-State Machines (FSMs) to perform constrained decoding, relying on a special token to signal transitions between FSMs as shown in Figure 4 of [2]. 
For instance, in their implementation, formats like *[Action: ToolName, Action Input: \\\\{key1=value1,<0x0A>key2=value2\\\\}]* are supported, where *<0x0A>* serves as an indicator for transition from the first value FSM to the next key FSM. Since values are generated freely, the model must independently generate this special token, which is then detected to trigger the FSM transition. This reliance introduces two key limitations:\\n\\n1. If the model fails to adhere to the predefined format and omits the required special token during value generation, it remains stuck in the value mode, freely generating tokens. This disrupts the FSM transitions, rendering constrained decoding ineffective.\\n2. For tool-finetuned models or code models, such specialized formats may deviate from the data encountered during their fine-tuning or pretraining, potentially resulting in decreased performance.\\n\\nIt is important to note that punctuation marks, such as commas, spaces, and quotation marks, cannot serve as special tokens since most models encode them as part of surrounding tokens rather than as independent tokens. This makes TOOLDEC incompatible with common formats like *[ToolName(key1=value1, key2=value2)]*. In contrast, Tool Decoding determines transitions by verifying whether a complete variable of the specified type has been generated to assign the value, eliminating the reliance on special tokens. Table below demonstrates the robustness of our method compared to TOOLDEC across various tool-finetuned and code models. \\n\\n| | Greedy Search | Tool Decoding | TOOLDEC |\\n| :----------------------: | :-----------: | :-----------: | :-----: |\\n| gorilla-openfunctions-v2 | 51.9 | **77.2** | 69.4 |\\n| Toolformer | 13.5 | **31.8** | 7.7 |\\n| deepseek-coder-6.7b | 46.9 | **70.9** | 65.7 |\\n\\n[1] Zhuoer Wang, et al. 
\\\"Fantastic sequences and where to find them: Faithful and efficient API call generation through state-tracked constrained decoding and reranking.\\\" In Findings of the Association for Computational Linguistics: EMNLP, 2024.\n\n[2] Kexun Zhang, et al. \\\"Syntax error-free and generalizable tool use for llms via finite-state decoding.\\\" arXiv preprint arXiv:2310.07075, 2023\"}", "{\"title\": \"Response to Reviewer hS7t (Part 1)\", \"comment\": \"We thank Reviewer hS7t for providing constructive comments for our work. We will address your concerns in the following points. The results below have all been added in our paper.\n\n---\n\n**W&Q 1.1.** Comparison with other constrained decoding methods\n\nThere are two existing constrained decoding methods for tool usage. We have compared them with our method, and the results have been included in **Appendix F**.\n\n**FANTASE** [1] is not a plug-and-play method, as it requires additional training of a reranker. It introduces state-tracked constrained decoding to ensure the correct format but still relies on LLMs to generate all keys, including both required and optional parameters. This approach cannot effectively address issues like missing certain parameters, as illustrated in Figure 2 of [1]. To mitigate this limitation, a separate reranker should be trained to select the optimal tool call from multiple generated samples. In contrast, our method does not require any additional training and inherently avoids parameter absence, ensuring robustness in tool usage. Considering that this approach is not training-free and the code has not been open-sourced, we did not include it in our experimental comparisons.\n\n**TOOLDEC** [2] is not universally applicable due to its specific requirements for tool call formats. It employs multiple Finite-State Machines (FSMs) to perform constrained decoding, relying on a special token to signal transitions between FSMs as shown in Figure 4 of [2]. 
For instance, in their implementation, formats like *[Action: ToolName, Action Input: \\\\{key1=value1,<0x0A>key2=value2\\\\}]* are supported, where *<0x0A>* serves as an indicator for transition from the first value FSM to the next key FSM. Since values are generated freely, the model must independently generate this special token, which is then detected to trigger the FSM transition. This reliance introduces two key limitations:\\n\\n1. If the model fails to adhere to the predefined format and omits the required special token during value generation, it remains stuck in the value mode, freely generating tokens. This disrupts the FSM transitions, rendering constrained decoding ineffective.\\n2. For tool-finetuned models or code models, such specialized formats may deviate from the data encountered during their fine-tuning or pretraining, potentially resulting in decreased performance.\\n\\nIt is important to note that punctuation marks, such as commas, spaces, and quotation marks, cannot serve as special tokens since most models encode them as part of surrounding tokens rather than as independent tokens. This makes TOOLDEC incompatible with common formats like *[ToolName(key1=value1, key2=value2)]*. In contrast, Tool Decoding determines transitions by verifying whether a complete variable of the specified type has been generated to assign the value, eliminating the reliance on special tokens. Table below demonstrates the robustness of our method compared to TOOLDEC across various tool-finetuned and code models. \\n| | Greedy Search | Tool Decoding | TOOLDEC |\\n| :----------------------: | :-----------: | :-----------: | :-----: |\\n| gorilla-openfunctions-v2 | 51.9 | **77.2** | 69.4 |\\n| Toolformer | 13.5 | **31.8** | 7.7 |\\n| deepseek-coder-6.7b | 46.9 | **70.9** | 65.7 |\\n\\n[1] Zhuoer Wang, et al. 
\\\"Fantastic sequences and where to find them: Faithful and efficient API call generation through state-tracked constrained decoding and reranking.\\\" In Findings of the Association for Computational Linguistics: EMNLP, 2024.\n\n[2] Kexun Zhang, et al. \\\"Syntax error-free and generalizable tool use for llms via finite-state decoding.\\\" arXiv preprint arXiv:2310.07075, 2023\n\n---\n\n**W 1.2.** Limitation of Novelty\n\nThanks for your comments, but we believe that our work introduces sufficient contributions and novelty. From a methodological perspective, although a few works have introduced constrained decoding for tool usage, our method stands out as it is training-free and can be better applied to a broader range of models, as demonstrated in the comparisons above. Additionally, we propose order consistency, which can further enhance performance, building on the significant improvements achieved by constrained decoding. From an experimental perspective, we conduct detailed analyses and comprehensive experiments across a wide range of models, achieving performance gains exceeding 70% for nearly all models, as shown in Tables 7 and 8. As a plug-and-play method, our approach further enhances performance when combined with tool-finetuned models and prompt engineering, highlighting its flexibility and effectiveness, as shown in Tables 3 and 7.\"}", "{\"title\": \"Response to Reviewer PjPG (Part 2)\", \"comment\": \"**Q1.1.** Value errors increased in most scenarios after applying Tool Decoding.\n\nThe added value errors are not caused by Tool Decoding but are potential value errors previously hidden by format and key errors. Specifically, parameter values can only be extracted when the format and keys are correct. By addressing these format and key errors, Tool Decoding allows more values to be extracted and exposes these hidden value errors, leading to a slight increase in the observed proportion of value errors. 
\\n\\nIn contrast, our method mitigates value errors: constrained decoding does not impact value generation, while order consistency actively reduces value errors, as demonstrated in Table 4.\\n\\n---\\n\\n**Q1.2.** For format errors, the example in Table 1 shows a missing parenthesis. How would Tool Decoding help in this case?\\n\\nBriefly speaking, we detect whether the last value has been fully generated, and once completed, the model is constrained to generate the closing parenthesis $)$.\\n\\nFor example, suppose tool_A holds 2 required parameters, $\\\\alpha$ and $\\\\beta$, and has 1 optional parameters, $\\\\gamma$. Once the last required parameter, $\\\\beta$, is completely assigned, we constrain the model\\u2019s vocabulary space to [$\\\\gamma,)$]. This leads to two possible scenarios:\\n\\n1.\\tIf the model outputs $)$, the tool call is considered complete.\\n2.\\tIf the model outputs $\\\\gamma$, the assignment process continues for $\\\\gamma$. After the value of $\\\\gamma$ is fully generated, since all parameters have been completed, we directly supply $)$ to finalize the tool call.\\n\\nMaybe your question is how we determine whether a value has been completely assigned. In expressions like [ToolName(key1=value1, key2=value2)], there is no special token explicitly indicating this completion. Most models encode punctuation marks, such as commas, spaces, and quote marks, as part of surrounding tokens. For instance, the expression \\\"key1=**value1, key2**=value2\\\" might be tokenized as [\\u201ckey1\\u201d, \\u201c=\\u201d, **\\u201cvalue1,\\u201d**, **\\u201c key2\\u201d**, \\u201c=\\u201d, \\u201cvalue2\\u201d].\\n\\nTo address this, we determine whether a value has been completely assigned by checking if a full variable of the given type has been generated. 
Consequently, for Tool Decoding, handling the assignment of the last parameter is no different from handling the previous ones.\n\n---\n\n**Q 2.** Latency analysis\n\nThank you for your constructive advice! We supply the results below and exhibit the low latency of our method. The analysis has been added in **Appendix C**.\n\nThe Table below shows the inference speed (sec/sample) for Tool Decoding in comparison with greedy search and beam search. Tool Decoding with $oc \\leq 1$ introduces only slight latency, while Tool Decoding with $oc \\leq 6$ is faster than beam search with the same number of samples. This is because order consistency maintains multiple candidates only during the generation of the tool call, which constitutes just a portion of the entire response. Note that our current implementation applies constraints at the level of logits. For practical deployment, these constraints could be implemented at the language head layer, which would further reduce computational requirements and enhance processing speed.\n\n| Model | Greedy Search | Tool Decoding ($oc \\leq 1$) | Beam Search ($beam=6$) | Tool Decoding ($oc \\leq 6$) |\n| :-----------------: | :-----------: | :-------------------------: | :--------------------: | :-------------------------: |\n| Mistral-7b-v0.1 | 7.92 | 8.57 | 13.82 | 9.74 |\n| deepseek-coder-6.7b | 7.69 | 8.41 | 15.25 | 10.7 |\"}", "{\"title\": \"Response to Reviewer CYKv (Part 2)\", \"comment\": \"**W 3.** Comparison with other constrained decoding algorithms\n\nThere are two existing constrained decoding methods for tool usage. We have compared them with our method, and the results have been included in **Appendix F**.\n\n**FANTASE** [1] is not a plug-and-play method, as it requires additional training of a reranker. It introduces state-tracked constrained decoding to ensure the correct format but still relies on LLMs to generate all keys, including both required and optional parameters. 
This approach cannot effectively address issues like missing certain parameters, as illustrated in Figure 2 of [1]. To mitigate this limitation, a separate reranker should be trained to select the optimal tool call from multiple generated samples. In contrast, our method does not require any additional training and inherently avoids parameter absence, ensuring robustness in tool usage. Considering that this approach is not training-free and the code has not been open-sourced, we did not include it in our experimental comparisons.\n\n**TOOLDEC** [2] is not universally applicable due to its specific requirements for tool call formats. It employs multiple Finite-State Machines (FSMs) to perform constrained decoding, relying on a special token to signal transitions between FSMs as shown in Figure 4 of [2]. For instance, in their implementation, formats like *[Action: ToolName, Action Input: \\{key1=value1,<0x0A>key2=value2\\}]* are supported, where *<0x0A>* serves as an indicator for transition from the first value FSM to the next key FSM. Since values are generated freely, the model must independently generate this special token, which is then detected to trigger the FSM transition. This reliance introduces two key limitations:\n\n1. If the model fails to adhere to the predefined format and omits the required special token during value generation, it remains stuck in the value mode, freely generating tokens. This disrupts the FSM transitions, rendering constrained decoding ineffective.\n2. For tool-finetuned models or code models, such specialized formats may deviate from the data encountered during their fine-tuning or pretraining, potentially resulting in decreased performance.\n\nIt is important to note that punctuation marks, such as commas, spaces, and quotation marks, cannot serve as special tokens since most models encode them as part of surrounding tokens rather than as independent tokens. 
This makes TOOLDEC incompatible with common formats like *[ToolName(key1=value1, key2=value2)]*. In contrast, Tool Decoding determines transitions by verifying whether a complete variable of the specified type has been generated to assign the value, eliminating the reliance on special tokens. The Table below demonstrates the robustness of our method compared to TOOLDEC across various tool-finetuned and code models. \n\n| | Greedy Search | Tool Decoding | TOOLDEC |\n| :----------------------: | :-----------: | :-----------: | :-----: |\n| gorilla-openfunctions-v2 | 51.9 | **77.2** | 69.4 |\n| Toolformer | 13.5 | **31.8** | 7.7 |\n| deepseek-coder-6.7b | 46.9 | **70.9** | 65.7 |\n\n[1] Zhuoer Wang, et al. \\\"Fantastic sequences and where to find them: Faithful and efficient API call generation through state-tracked constrained decoding and reranking.\\\" In Findings of the Association for Computational Linguistics: EMNLP, 2024.\n\n[2] Kexun Zhang, et al. \\\"Syntax error-free and generalizable tool use for llms via finite-state decoding.\\\" arXiv preprint arXiv:2310.07075, 2023\"}", "{\"title\": \"Response to Reviewer PjPG (Part 1)\", \"comment\": \"We thank Reviewer PjPG for providing constructive comments for our work. We will address your concerns in the following points. The results below have all been added in our paper.\n\n---\n\n**W 1.** Comparison with other methods.\n\nThanks for your suggestion! Since Tool Decoding is a plug-and-play method, we can combine it with many other approaches such as prompt engineering or trained models. We further expand our comparisons with more approaches as well as their combinations with our method. As shown in the following, Tool Decoding is **orthogonal to these methods** and can **largely improve their performance**. 
The results have also been added in **Section 4** and **Appendix E**.\n\n**In-context learning**\n\nAs a plug-and-play method, Tool Decoding integrates seamlessly with prompt engineering. The Table below presents the performance of different numbers of in-context examples on API-Bank (Call). The results demonstrate that our method effectively combines with prompt engineering, significantly enhancing the model\u2019s tool usage capabilities. Notably, this combination even enables a 7B-level generalist model, such as deepseek-coder-6.7b, to surpass GPT-4 under the same prompt settings, as highlighted in **Bold values**. \n\n| Model | Decoding Methods | icl0 | icl2 | icl4 | icl6 | icl8 |\n| :--------------:|:-------------------: | :--: | :------: | :------: | :------: | :------: |\n| GPT4 | Greedy Search | 76.2 | 72.7 | 72.2 | 73.7 | 73.4 |\n| Mistral-7b-v0.1 | Greedy Search | 31.3 | 47.1 | 45.1 | 50.4 | 43.6 |\n| Mistral-7b-v0.1 | **Tool Decoding** | 65.7 | 70.2 | 69.2 | 70.5 | 70.9 |\n| deepseek-coder-6.7b | Greedy Search | 46.9 | 66.7 | 69.2 | 69.2 | 70.2 |\n| deepseek-coder-6.7b | **Tool Decoding** | 70.9 | **74.4** | **76.7** | **76.9** | **77.4** |\n\n**Trained models**\n\nPlease note that we have already combined our method with some trained models and compared with them in Figure 5, Table 7 and Table 8 (xLAM-7b-r and Toolformer). 
We cite the results on API-Bank below.\n\n| Model | Decoding Methods | Call | Retrieve+Call | Total |\n| :-----------------:|:-------------------------: | :---: | :------: | :------: |\n| xLAM-7b-r |**Tool Decoding** | **73.9** | **54.8** | **64.4** |\n| xLAM-7b-r |Greedy Search | 36.1 | 41.5 | 38.8 |\n| xLAM-7b-r |Beam Search | 32.3 | 41.9 | 37.1 |\n| Toolformer |**Tool Decoding** | **31.8** | **27.4** | **29.6** |\n| Toolformer |Greedy Search | 13.5 | 4.4 | 8.9 |\n| Toolformer |Beam Search | 23.3 | 8.2 | 15.8 |\n\nIn the Table below, we also conduct further experiments on gorilla-openfunctions-v2, one of the most advanced trained models. Tool Decoding demonstrates significant improvements across all three models, with both xLAM-7b-r and gorilla-openfunctions-v2 outperforming GPT-4 on API-Bank. \n\n| Model | Decoding Method | Call | Retrieve+Call | Total |\n| :----------------------: | :---------------: | :------: | :-----------: | :------: |\n| GPT-4 | Greedy Search | 76.2 | 47.4 | 61.8 |\n| gorilla-openfunctions-v2 | **Tool Decoding** | **77.2** | **51.9** | **64.6** |\n| gorilla-openfunctions-v2 | Greedy Search | 51.9 | 48.9 | 50.4 |\n| gorilla-openfunctions-v2 | Beam Search | 48.4 | 45.2 | 46.8 |\n\n---\n\n**W 2.** Inadequate acknowledgment in the error analysis part.\n\nWe apologize for this oversight. We have now adequately acknowledged it at the beginning of the Analysis of Errors part in **Section 2**: \\\"While some existing works have conducted coarse error analyses, their evaluations are not sufficiently comprehensive and lack a systematic approach. For instance, the analysis in API-Bank (Li et al., 2023) overlooks value errors and includes ambiguous error types such as *Has Exception*, limiting both clarity and utility. 
In contrast, we conduct a stage-specific and comprehensive error analysis, systematically identifying errors at each stage to derive fine-grained insights.\\\"\"}", "{\"summary\": \"The paper presents Tool Decoding, a training-free approach to improve LLMs\\u2019 tool-use capabilities, addressing common errors in tool awareness, tool selection, and tool call. By using constrained decoding for format accuracy and order consistency for parameter correctness, Tool Decoding significantly boosts tool-augmented performance on benchmarks e.g. API-Bank and BFCL V2 \\u2022 Live, even surpassing GPT-4 in some cases. This adaptable approach enhances LLMs' effectiveness in real-world tool use without requiring additional training.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents Tool Decoding a novel, training-free approach to enhancing large language models (LLMs) in tool use. Unlike conventional methods relying on extensive fine-tuning, this plug-and-play solution integrates tool-specific data into the decoding phase. Tool decoding approach includes constrained decoding and order consistency modules, which is shown to effectively mitigate different tool errors e.g. Key error, Value error, and Format error.\\n\\n2. The paper conducts an error analysis across three stages of tool usage\\u2014Tool Awareness, Tool Selection, and Tool Call. The analysis is done, using both API-Bank and BFCL V2 \\u2022 Live datasets, which are benchmark datasets well-suited to assessing tool-augmented LLMs.\\n\\n3. They showed the effectiveness of Tool Decoding by applying it to a variety of both general-purpose and tool-specialized models, evaluating them on the API-Bank and BFCL V2 \\u2022 Live benchmarks. The experimental results show that Tool Decoding substantially improves performance across all models. Nearly all models achieve performance gains above 70% on both benchmarks. 
Among the 7B parameter models, five surpass GPT-3.5 in critical tasks, with two even outperforming GPT-4.\", \"weaknesses\": \"1. Limitation of Novelty as a long paper: constrained decoding for tool usage in LLMs has been employed in several prior works (e.g., [Domino](https://arxiv.org/abs/2403.06988): is proposed to optimize for general constrained text generation tasks with a focus on grammar and token alignment, [TOOLDEC](https://arxiv.org/pdf/2310.07075): eliminates syntax errors by constraining token choices using FSMs, focusing on maintaining tool syntax), and the specific approach to constrained decoding in this paper does not differ significantly. As a result, the primary novelty of this paper lies in the order consistency mechanism, which offers only a limited improvement (Table 6).\\n\\n2. Missing Baselines: The tool-finetuned models presented in the results table are outdated, and more recent, more powerful models (e.g., [Hermes Function Calling](https://github.com/NousResearch/Hermes-Function-Calling) and [Gorilla](https://github.com/ShishirPatil/gorilla)) are not included for comparison.\", \"questions\": \"1. Can you please further describe the differences of your approach (specially constrained decoding module) to previous work on constraint decoding for tool calling in LLMs?\\n\\n2. Is there any technical reason that more recent tool-fine tuned models are not added for comparison?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you authors for providing a detailed response and thorough clarifications.\"}", "{\"title\": \"A summary of paper updates\", \"comment\": \"Following the reviewers\\u2019 suggestions, we have updated the paper with key revisions highlighted in orange. 
The main changes are as follows:\n1. **Section 2:** Revised based on feedback from reviewer PjPG and reviewer CYKv.\n2. **Section 4:** Presented results combining Tool Decoding with in-context learning.\n3. **Appendix C:** Included latency analysis for our method.\n4. **Appendix D:** Added error analysis for two 70B-level models.\n5. **Appendix E:** Added results combining Tool Decoding with two 30B-level generalist models and one powerful tool-finetuned model.\n6. **Appendix F:** Added comparisons with existing constrained decoding methods.\"}", "{\"comment\": \"I appreciate the resolution of inadequate acknowledgment to API-Bank and the addition of the latency analysis. I will raise my score to 6.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer CYKv (Part 1)\", \"comment\": \"We thank Reviewer CYKv for providing constructive comments for our work. We will address your concerns in the following points.\n\n---\n\n**W 1.** Latency analysis\n\nThank you for your constructive advice! We have added latency analysis in **Appendix C**.\n\nThe Table below shows the inference speed (sec/sample) for Tool Decoding in comparison with greedy search and beam search. Tool Decoding with $oc \\leq 1$ introduces only slight latency, while Tool Decoding with $oc \\leq 6$ is faster than beam search with the same number of samples. This is because order consistency maintains multiple candidates only during the generation of the tool call, which constitutes just a portion of the entire response. Note that our current implementation applies constraints at the level of logits. 
For practical deployment, these constraints could be implemented at the language head layer, which would further reduce computational requirements and enhance processing speed.\n\n| Model | Greedy Search | Tool Decoding ($oc \\leq 1$) | Beam Search ($beam=6$) | Tool Decoding ($oc \\leq 6$) |\n| :-----------------: | :-----------: | :-------------------------: | :--------------------: | :-------------------------: |\n| Mistral-7b-v0.1 | 7.92 | 8.57 | 13.82 | 9.74 |\n| deepseek-coder-6.7b | 7.69 | 8.41 | 15.25 | 10.7 |\n\n\n---\n\n**W 2.** Comparison with other search strategies and prompt engineering\n\n**Epsilon sampling**\n\nFollowing your suggestion, we performed experiments with epsilon sampling on API-Bank (Call). The results are shown in the table below. Since the performance of epsilon sampling is similar to greedy search, we decided not to include it as a baseline.\n\n| | $\\epsilon=0.01$ | $\\epsilon=0.001$ | greedy search |\n| :-----------------: | :-------------: | :--------------: | :-----------: |\n| Mistral-7b-v0.1 | 32.8 | 31.6 | 31.3 |\n| deepseek-coder-6.7b | 41.4 | 40.1 | 46.9 |\n\n**Prompt engineering**\n\nAs a plug-and-play method, Tool Decoding integrates seamlessly with prompt engineering. The table below presents the performance of different numbers of in-context examples on API-Bank (Call). The results demonstrate that our method effectively combines with prompt engineering, significantly enhancing the model\u2019s tool usage capabilities. Notably, this combination even enables a 7B-level generalist model, such as deepseek-coder-6.7b, to surpass GPT-4 under the same prompt settings, as highlighted by the **bold values**. 
The results have been added in **Section 4**.\n\n| Model | Decoding Methods | icl0 | icl2 | icl4 | icl6 | icl8 |\n| :--------------:|:-------------------: | :--: | :------: | :------: | :------: | :------: |\n| GPT4 | Greedy Search | 76.2 | 72.7 | 72.2 | 73.7 | 73.4 |\n| Mistral-7b-v0.1 | Greedy Search | 31.3 | 47.1 | 45.1 | 50.4 | 43.6 |\n| Mistral-7b-v0.1 | **Tool Decoding** | 65.7 | 70.2 | 69.2 | 70.5 | 70.9 |\n| deepseek-coder-6.7b | Greedy Search | 46.9 | 66.7 | 69.2 | 69.2 | 70.2 |\n| deepseek-coder-6.7b | **Tool Decoding** | 70.9 | **74.4** | **76.7** | **76.9** | **77.4** |\"}", "{\"comment\": \"Dear Reviewer hS7t,\\n\\nWe have carefully prepared a response to address your concerns. Could you kindly take a moment to review it and let us know if it resolves the issues you raised? If you have any additional questions or suggestions, we would be happy to address them.\\n\\nThank you for your time and consideration. Wishing you a great day!\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your feedback! We are pleased to know that our responses resolved your concerns. If there are any further questions or suggestions, please don\\u2019t hesitate to let us know. Have a great day!\"}", "{\"comment\": \"Thank you for raising the score! We are delighted to know that our responses addressed your concerns. If you have any additional questions or feedback, please feel free to let us know. Wishing you a wonderful day!\"}", "{\"summary\": \"This paper addresses the limitations of LLMs in tool usage. Previous works often overlook the specific requirements of tool selection, format adherence, and parameter accuracy. The authors propose a training-free approach called \\\"Tool Decoding,\\\" designed to improve tool-use precision through constrained decoding. 
Experimental results demonstrate that Tool Decoding achieves improvements in tool-use accuracy across benchmarks (API-Bank and BFCL V2), with open-weight models performing comparably to GPT-4 in some cases. However, the paper lacks a thorough comparison with other existing methods (only compared with beam and greedy search) and provides no report on latency, which could be important for practical applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Achieved comparable results to GPT-4 for open-weight models with the help of Tool Decoding on the API-Bank and BFCL V2 benchmarks.\", \"The method does not require any training and primarily relies on constrained decoding to ensure format correctness and eliminate hallucinations.\"], \"weaknesses\": [\"Comparison with other methods? The authors compare their method only with Beam Search and Greedy Search. Are there no other methods available, such as those in https://arxiv.org/pdf/2401.06201 (01.2024), trained models, etc.?\", \"For the error analysis section, I believe it does not sufficiently acknowledge that other researchers have reported similar problems. It is written as though this paper is the first to report them. For example, the API-Bank [paper](https://arxiv.org/pdf/2304.08244) provides a thorough analysis at a similar level of granularity in Section 7.3, which I believe has not been adequately acknowledged.\"], \"questions\": [\"Value errors increased in most scenarios after applying Tool Decoding (add to Limitation section); on the other hand, key errors are not a major challenge. Tool Decoding primarily addresses format errors and partially resolves selection errors. I understand how it can help with selection errors by decoding tool names under constraints. However, for format errors, the example in Table 1 shows a missing parenthesis. 
How would Tool Decoding help in this case?\", \"Another important factor in decoding tools is latency in seconds, but I did not find any comparison. Although there is no major comparison with other methods, some latency figures for the tool itself would be beneficial.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper breaks down LLMs' tool usage into three stages - tool awareness, selection, and invocation - and identifies common failure modes at each step. The authors propose Tool Decoding, a simple constrained sampling approach that improves tool usage without extra training. The method restricts vocabulary during tool selection and uses majority voting across different parameter orderings during invocation. A key advantage is that it only needs API documentation to work with new tools. Experiments on API-Bank and BFCL V2 show substantial gains across different LLMs, sometimes matching GPT-4's performance.\\n\\nIn the initial round of reviews, the main criticism focused on the lack of comparative analysis. In the rebuttal, the authors expanded their discussion in Section 4 and Appendices E and F, adding comparisons with sampling methods like epsilon-sampling and with models fine-tuned for tool usage, including Gorilla-OpenFunctions, Toolformer, and DeepSeek-Coder. They also address reviewer fh4s\\u2019s concern by testing larger models in Appendix E. Another issue was the lack of latency analysis, which the authors have now fixed by providing detailed latency results in Appendix C.\\n\\nAfter reviewing these revisions and engaging in follow-up discussions, all reviewers have agreed that the paper is suitable for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}" ] }
5bDBahNmmH
Cohesion: Coherence-Based Diffusion for Long-Range Dynamics Forecasting
[ "Juan Nathaniel", "Pierre Gentine" ]
We recast existing works on probabilistic dynamics forecasting through a unified framework connecting turbulence and diffusion principles: Cohesion. Specifically, we relate the coherent part of nonlinear dynamics as a conditioning prior in a denoising process, which can be efficiently estimated using reduced-order models. This fast generation of long prior sequences allows us to reframe forecasting as trajectory planning, a common task in RL. This reformulation is beneficial because we can perform a single conditional denoising pass for an entire sequence, rather than autoregressively over long lead time, gaining orders-of-magnitude speedups with little performance loss. Nonetheless, Cohesion supports flexibility through temporal composition that allows iterations to be performed over smaller subsequences, with autoregressive being a special case. To ensure temporal consistency within and between subsequences, we incorporate a model-free, small receptive window via temporal convolution that leverages large NFEs during denoising. Finally, we perform our guidance in a classifier-free manner to handle a broad range of conditioning scenarios for zero-shot forecasts. Our experiments demonstrate that Cohesion outperforms state-of-the-art probabilistic emulators for chaotic systems over long lead time, including in Kolmogorov Flow and Shallow Water Equation. Its low spectral divergence highlights Cohesion's ability to resolve multi-scale physical structures, even in partially-observed cases, and is thus essential for long-range, high-fidelity, physically-realistic emulation.
[ "PDE", "diffusion", "dynamics", "emulator" ]
Reject
https://openreview.net/pdf?id=5bDBahNmmH
https://openreview.net/forum?id=5bDBahNmmH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sR6kVPAxkY", "qw8gMR6wEm", "oXzLvFzBu6", "my0XXcDBz7", "mx59eogRXQ", "m9t0J76zPc", "kuDS4sKObD", "gx9Pb0KFZX", "bT7sOhNkSi", "aCIKjLHd2C", "ZXdHtOb6fp", "WDVnYJEmjy", "N9hDOQSE8m", "MyA5L11SrC", "McjTDxF0Bd", "FYBSuqsoDI", "D3ArCIycEK", "CqCiTM5b0y", "7VdCgelgZM", "6dCd0sswpj", "6BgZiovXVY", "1NM5uT7aZ2", "0DghiZ1bHL" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732391392102, 1732488666211, 1730110226610, 1732391326595, 1729965708547, 1732391361262, 1732391315530, 1732391304375, 1732473692978, 1732547301932, 1732391399612, 1730627508966, 1732488743607, 1729691499594, 1732640120654, 1732391380462, 1730117549338, 1732488696629, 1732391345100, 1737524059294, 1732543458918, 1732625599658, 1734894725120 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_HSiU" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_HSiU" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_yE9B" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_yE9B" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_C5HF" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_2JUM" ], [ 
"ICLR.cc/2025/Conference/Submission10525/Reviewer_yE9B" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_L6gh" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Submission10525/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_2JUM" ], [ "ICLR.cc/2025/Conference/Submission10525/Reviewer_L6gh" ], [ "ICLR.cc/2025/Conference/Submission10525/Area_Chair_AwLM" ] ], "structured_content_str": [ "{\"comment\": \"We would like to thank Reviewer 2JUM for their constructive feedback! Please find our response below:\\n\\n> On novelty (W1/W3)\\n\\nWe moved several background sections e.g., zero-shot forecasting and score-based diffusion to the appendix and reframe them as background. We then attempt to make our contributions clearer, with more supporting evidence accompanying this rebuttal: \\n\\n- We propose __formal connection__ between conditional diffusion and Reynold's decomposition framework, and unify existing works (on diffusion-based PDE solvers) based upon this framework. \\n- This unifying framework then allows us to demonstrate the __sufficiency of low-order, linearized__ conditioning prior for stable and skillful long-range forecast of nonlinear dynamics.\\n- We strengthen our claims by studying the __scaling properties__ of diffusion-based PDE solvers in a manner that is closer in concept to fluid dynamics.\\n\\n> On baselines (W2)\\n\\nWe have added probabilistic formulation of UFNO and the classical (tensorized) FNO, all with identical parameter budget as our Cohesion framework. These models are implemented off-the-shelf from https://github.com/neuraloperator/neuraloperator. The table below summarizes the results after evaluating for the best ensembling strategies (IC-perturb / MC-dropout / Multi-checkpoints), across Kolmogorov/SWE experiments, at the final timestep T. 
\n\n| | **Cohesion (R=1)** | **Cohesion (R=T)** | **TFNO (ensemble)** | **UFNO (ensemble)** | **SFNO (ensemble)** |\n|-----------------|--------------------|--------------------|---------------------|---------------------|---------------------|\n| **RMSE (\u2193)** | **0.31 / 0.25** | **0.37 / 0.40** | 0.40 / 1.52 | 0.42 / 0.92 | 0.49 / 0.93 |\n| **MAE (\u2193)** | **0.25 / 0.15** | **0.27 / 0.20** | 0.30 / 1.32 | 0.35 / 0.51 | 0.40 / 0.52 |\n| **MS-SSIM (\u2191)** | **0.68 / 0.95** | **0.45 / 0.90** | 0.01 / 0.01 | 0.21 / 0.42 | 0.32 / 0.43 |\n\n> On more ablation (W3)\n\nWe argue, using the unifying conditional diffusion -- Reynolds decomposition framework, that existing works like Lippe et al. (2024) and Srivastava et al. (2023) are conceptually similar to our Cohesion framework in autoregressive mode (R=1), in which correction is performed conditional on the previous forecast. In this case, we find a substantial increase in inference speed if we perform inference as trajectory planning (R=T), with minimal deterioration in skill. In a sense, our framework provides a __generalization__ of how forecasting is performed, with e.g., Lippe et al. as a special case of R=1, the video generation task as R=T, and anywhere in between as R=[1..T], for a more flexible design, and provides strategies (temporal convolution / Markov blanket) to ensure consistency.\n\n> On Q1\n\nIndeed, from our comparison of spectral profiles (figure in the paper) between the coherent flow and the post-correction output, the former just captures the low-frequency, low-mode variability signal. Upon further qualitative inspection of the structure, it is akin to compression / applying a low-pass filter.\n\n> On Q2\n\nFor the metrics, we compute the average over all testing samples and across 5-member ensembles generated randomly. 
\\n\\n> On Q4\\n\\nThis SWE is forced with Galewsky initial condition that models mid-latitude instability, and hence only the northern hemisphere has signal.\"}", "{\"comment\": \"Thank you for providing the additional results and clarifications regarding some of the addressed questions. However, I still have some concerns:\\n\\n*\\u201cWe have also, to the best of our ability, provide clear citations in places highlighted, and elsewhere.\\u201d*\\nWould you be able to share the revised version of the manuscript? It would help to see how the changes you mentioned are reflected in the paper.\\n\\n**Contributions and comparison to SOTA** \\nThe revised outline of the contributions is indeed clearer and more relevant than the current manuscript. However, to support the claim of providing *\\u201cstable and skillful long-range forecasts\\u201d*, I believe further empirical evidence is necessary\\u2014particularly in comparison to state-of-the-art methods.\\n\\nI am not saying that you necessarily need to outperform the state-of-the-art, but this comparison would help with situating your approach in the broader context of existing work. I believe that readers need to understand the relative strengths and limitations of your method, even if other methods incur a significantly higher computational cost (which would be part of the comparison).\\n\\nFor instance, Figure 4 shows that your method starts to diverge from the ground truth at around $T = 19$. In comparison, other works seem to achieve longer adherence to the ground truth:\\n- Shysheya et al. [2]: Their method remains close to the ground truth for approximately 40 time steps (see Fig. 25 in their paper).\\n- PDE-Refiner [5]: In Fig. 
13 of their work, samples maintain adherence to ground truth for up to 10 seconds, though it\\u2019s unclear how many time steps in your dataset correspond to 1 second (I suspect 1 time step is 0.2s?).\\n\\nSuch comparisons would provide a clearer understanding of how your method performs relative to these baselines.\\n\\n**Extra baselines + lack of error bars** I appreciate the extra experimental evidence, and it is indeed useful to assess the comparison between using xNOs vs ROM to generate the conditioning information, and the scaling properties of the ROM. \\n\\nOne weakness in this analysis is that **no error bars are provided**, so the actual (statistical) significance of the results is hard to assess. I have the same concern for the other results in the paper (also expressed as W3 in the rebuttal). Providing error bars is crucial for understanding the robustness and significance of your findings.\\n\\n**Generalisation of other frameworks** I think that the authors do not bring enough evidence to show how the Cohesion framework is a generalisation of other works in the literature such as PDE-Refiner or Dyffusion (as claimed in the reply to reviewer C5HF). Just as an example of a significant point of difference, conditioning is performed differently in these works - you use the decomposition of the posterior score into unconditional + conditioning term, whereas they just feed in the conditioning information into the architecture. Without a detailed side-by-side comparison, it is challenging to agree that Cohesion generalises these frameworks.\\n\\nIf we choose to accept that Cohesion is a generalisation of the above-mentioned frameworks, I still believe that a comparison to them is needed. \\n- If Cohesion outperforms them, it only strengthens the paper, achieving better results with significantly reduced computational cost. 
\\n- If it doesn\\u2019t (which is my suspicion), one needs to ask why a more general framework is not able to converge to the same optimum as a less general framework. A plausible explanation could be that Cohesion incorporates weaker inductive biases, making it less efficient in the learning process.\\n\\n**Conclusion**\\nIn conclusion, I greatly appreciate the extra empirical evidence and I find the new manner in which the authors propose to pose the contributions of the paper much more suitable. However, the message in the current manuscript is not exactly the same as the one outlined in this rebuttal, and it would require a significant amount of change to the current version to align them. Finally, I maintain my opinion that in order to better place the performance of the Cohesion framework in the context of diffusion-based PDE solvers, a comparison to other relevant (diffusion-based) baselines is needed. This includes both a metric comparison as well as a computation time comparison (where Cohesion would probably outperform).\"}", "{\"summary\": \"The paper proposes a diffusion-based approach for forecasting with dynamical systems that is able to generate the entire sequence in one conditional denoising pass. This is achieved by leveraging reconstruction guidance, where the conditioning information is a sequence of priors (one for each state of the final trajectory), generated by iteratively applying a (lightweight) reduced-order method (ROM) to the initial condition. The score network operates over subsequences, and temporal coherency is assured by applying temporal convolution with a small receptive window. 
The experiments are performed on two chaotic systems (Kolmogorov flow and Shallow Water), and the performance of the model as a probabilistic emulator is tested in terms of pixel-based metrics (RMSE, MAE), structure-based metrics (MS-SSIM), and physics-based metrics.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. **Relevant topic.** The problem addressed in this paper (probabilistic emulation for PDEs) is an active area of research with important downstream applications, such as weather and climate modelling.\\n2. **Non-autoregressive approach.** The possibility of using a non-autoregressive sampling strategy from the diffusion model without compromising too much on accuracy is nice, and something of significance for the field of forecasting.\\n3. **Flexible guidance with ROMs.** The use of reconstruction guidance is a very useful technique when the observation process changes over time, offering more flexibility as opposed to classifier-based approaches. Although the technique is not a contribution of this work, the paper is the first (as far as I am aware) to utilise as conditioning information a sequence of autoregressively-produced priors through a compute-efficient ROM. The experiment in which the authors condition on partially-observed fields of the ROM output highlights the flexibility of this technique and is relevant to real-life settings with partially-observed data.\\n4. **Wide range of metrics used for the experiments.**\", \"weaknesses\": \"1. **Unclear contributions/Lack of clear citations.** This approach shares striking similarities with the approach from Rozet et al. [1], but fails to properly reflect this throughout the main text. There should be a much more clear distinction between the contributions of this paper and what is taken from other works.\\n- The overall training and sampling from the score-based model relies heavily on the approach proposed by Rozet et al. 
[1], but this is not made clear in the paper. They also train the model on subsequences of length $W$ (justifying this approach from a Markov order perspective), and stitch these subsequences together at sampling time to generate arbitrary length trajectories in one go.\\n- The reconstruction guidance mechanism is exactly the same as the one proposed in Rozet et al. [1]. This is mentioned in the paper (L215), but it can be interpreted as if the authors propose a way to improve the numerical stability of the method, as opposed to using results from previous work.\\n- The similarity between the proposed framework and Rozet et al. [1] is reflected in the algorithms presented. \\n - Algorithm 1 is the same as Algorithm 3 in Rozet et al. [1]\\n - Algorithm 2 is the same as Algorithm 4 in Rozet et al. [1]\\n - Algorithm 3 is the same as Algorithm 1 in Rozet et al. [1]\\n - Algorithm 4 is the same as Algorithm 2 in Rozet et al. [1]. However, in the paper this is posed as a novel temporal convolution technique, rather than something already employed in Rozet et al. [1].\\n\\n I acknowledge that this paper adapts Rozet et al. [1]\\u2019s approach, making it suitable to other tasks (forecasting), as opposed to data assimilation. This is achieved by conditioning on those prior states, generated autoregressively through a ROM. This is a nice approach and equips Rozet\\u2019s method with the ability to perform forecasting, a task where their approach fails based on the observations from Shysheya et al. [2]. However, I do not think this is how the paper portrays the technique, and it is debatable whether this is enough of a contribution overall when considering the results (see below). At the very least, a clear paragraph on contributions should be included.\\n2. **Weak baseline for the empirical analysis.** Although the paper mentions that the SFNO approach [Bonev et al. 
[4]] is the state-of-the-art, I believe there are other probabilistic forecasting approaches that achieve stronger performance.\\n- Two works I have in mind are Lippe et al. [5] and Shysheya et al. [2], with the former being considered state-of-the-art. However, the main metric these works consider for forecasting is high correlation time, rather than the metrics used in this work. But based on the trajectories, they seem to maintain correlation with the ground truth for longer than Cohesion. It would be interesting to compute the high correlation time and compare it with some of these works, especially since the Kolmogorov dataset looks similar to the one in Shysheya et al. [2].\\n- As in Lippe et al. [5], I believe that another relevant (deterministic) baseline would be an MSE-trained UNet. In their experiments, it tends to achieve better results than FNO-based approaches.\\n3. **Lack of error bars in the results.** The performance plots lack error bars. Thus, it is hard to determine how significant the difference between methods is. This is especially the case between Cohesion (R=1) vs. Cohesion (R=T) (i.e. autoregressive vs. in one go), where having error bars is important to figure out whether the hit in performance by generating the entire trajectory in one go is significant.\\n4. **Copying from other papers without citing.** There are certain paragraphs/sections in the appendix which are directly taken from other works without specifying so. For example Appendix B.2. Structure-based metrics is the same as Appendix F.1.4. Multi-scale Structural Similarity Index Measure (MS-SSIM) in Nathaniel et al. [6], Appendix B.3 is very similar to Appendix F.2 in Nathaniel et al. [6]. It is ok to use the same definitions as in other works (in the end, the definitions of the metrics are what they are), but if the writing is so similar, I think you should at least cite the relevant work. 
\\n \\n Could you please review these sections and either rephrase them, or make it clear that they are heavily based on Nathaniel et al. [6]?\\n\\n**Minor**\\n\\n5. **Typos.** The paper contains several typos, I won\\u2019t include them all but examples are L091-spatiotempral, L125 - deterministic priors, L235 - should be $\\\\nabla_{\\\\mathbf{u}_k}$ , etc.\\n6. **Small labels in figures.** See for example y label in Figures 5, 7, legend in Figure 8.\\n7. **Unclear figures.** I appreciate the attempt to create a visual representation of the framework in Figure 1, but I find the figure confusing, without mentioning the colour coding of the bubbles, why there are two trajectories in b), etc.\\n8. **Lack of x label in Figure 5**. I believe that is the timestep $T$, but that should be labelled.\\n9. **Ablation hyperparameter choices.** In appendix C.1 you detail how you chose the dropout rate $p$ and perturbation factor $f$ for the baselines. However, those correspond to the lowest values you experimented with and it\\u2019s unclear whether going even lower would give better results or not. You\\u2019d like to obtain a convex function of your hyperparameters (i.e. worse performance for lower and higher values of the hyperparameter).\\n\\nOverall, while the topic addressed by this paper is relevant, I believe it does not clearly differentiate its contributions to what already exists in the literature. For the empirical evidence, I believe the chosen baseline is not strong enough, and a more comprehensive comparison to other techniques is needed to thoroughly assess the effectiveness of the proposed method. Although in its current form the paper is incomplete, with proper baselines and a clear indication of novelty/contributions, it could represent a useful addition to the literature.\\n\\n[1] Rozet, F., & Louppe, G. (2023). Score-based Data Assimilation. 
ArXiv, abs/2306.10574.\\n\\n[2] Shysheya, A., Diaconu, C., Bergamin, F., Perdikaris, P., Hern'andez-Lobato, J.M., Turner, R.E., & Mathieu, E. (2024). On conditional diffusion models for PDE simulations.\\n\\n[3] Qu, Y., Nathaniel, J., Li, S., & Gentine, P. (2024). Deep Generative Data Assimilation in Multimodal Setting. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 449-459.\\n\\n[4] Bonev, B., Kurth, T., Hundt, C., Pathak, J., Baust, M., Kashinath, K., & Anandkumar, A. (2023). Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere. International Conference on Machine Learning.\\n\\n[5] Lippe, P., Veeling, B.S., Perdikaris, P., Turner, R.E., & Brandstetter, J. (2023). PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers. ArXiv, abs/2308.05732.\\n\\n[6] Nathaniel, J., Qu, Y., Nguyen, T., Yu, S., Busecke, J., Grover, A., & Gentine, P. (2024). ChaosBench: A Multi-Channel, Physics-Based Benchmark for Subseasonal-to-Seasonal Climate Prediction. ArXiv, abs/2402.00712.\", \"questions\": \"1. Could you please highlight the differences between this approach and Rozet et al. [1]? Is there anything that wasn\\u2019t captured in **W1**?\\n2. Could you also compute the high correlation time in the Kolmogorov (and SWE) experiment to compare with other results reported in the literature (i.e. Lippe et al. [5], Shysheya et al. [2])\\n3. While in the SWE experiment I understand the usefulness of the spherical embeddings used in SFNO, it is not clear to me why they would help in Kolmogorov, but I might have missed some relevant experimental setup detail that justifies it.\\n4. Could you provide error bars for your results?\\n5. If you provide comparisons to Lippe et al. [5] on the Kolmogorov flow experiment, could you also provide computational speed comparisons?\\n6. In L205 you assume a Gaussian observation process (as it is usually done). 
Would you be able to extend the framework to a non-Gaussian likelihood too?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank Reviewer L6gh for their constructive feedback! Please find our line-by-line response to the concerns raised.\\n\\n> The critical components like value function and reward function never appear.\\n\\nWe agree that we draw inspiration from the Diffuser paper to solve long-range forecasting of nonlinear dynamical system. Although not explicitly demonstrated in the paper, we clarify in the future work section that additional reward function could easily be incorporated using Diffuser's framework, such as the use of physics-based constraints for stability guidance e.g., [1]. \\n\\n> While zero-shot forecasting is addressed conceptually, further clarification on how this aspect was validated experimentally is lacking.\\n\\nWe further clarified that all the experiments and ablations conducted here use only a _singly_ trained score network. This underscores the zero-shot forecasting concept formalized in the paper that remains independent of the conditioning prior. \\n\\n> Baselines: SFNO seems the only baseline. How about others like Markov Neural Operator (MNO)?\\n\\nWe have added probabilistic formulation of UFNO and the classical (tensorized) FNO, all with identical parameter budget as our Cohesion framework. These models are implemented off-the-shelf from https://github.com/neuraloperator/neuraloperator. The table below summarizes the results after evaluating for the best ensembling strategies (IC-perturb / MC-dropout / Multi-checkpoints), across Kolmogorov/SWE experiments, at the final timestep T. 
\\n\\n| | **Cohesion (R=1)** | **Cohesion (R=T)** | **TFNO (ensemble)** | **UFNO (ensemble)** | **SFNO (ensemble)** |\\n|-----------------|--------------------|--------------------|---------------------|---------------------|---------------------|\\n| **RMSE (\\u2193)** | **0.31 / 0.25** | **0.37 / 0.40** | 0.40 / 1.52 | 0.42 / 0.92 | 0.49 / 0.93 |\\n| **MAE (\\u2193)** | **0.25 / 0.15** | **0.27 / 0.20** | 0.30 / 1.32 | 0.35 / 0.51 | 0.40 / 0.52 |\\n| **MS-SSIM (\\u2191)** | **0.68 / 0.95** | **0.45 / 0.90** | 0.01 / 0.01 | 0.21 / 0.42 | 0.32 / 0.43 |\\n\\n__Conclusion 1: Cohesion outperforms all xNOs baselines__\\n\\n> On Model Flexibility: ROM is a type of method with limited expressiveness e.g. Koopman Operator, a linear approximation model. Would Cohesion be adaptable to domains where coherent flow cannot be efficiently approximated by ROM?\\n\\nBased on Koopman theory [2], any nonlinear PDEs can, in theory, be approximated by infinite-dimensional linear approximations, and so an ever-larger ROM can always be used to approximate chaotic system with high-dimensional invariances (e.g., attractors). Although we do agree that _infinite_ approximation here can be problematic in practice, we argue that the linearization conferred by Koopman operator allows us to perform stability analysis and identify situations where error growth becomes significant (high Lyapunov exponent) such that stability constraints can be imposed [3]. 
To underscore why stability is of paramount importance, we substitute the deep Koopman operator, which both truncates and linearizes the PDE solution to mimic the mean flow in the Reynolds decomposition, with xNOs, and find substantial degradation at long-range rollouts.\\n\\n| | **Cohesion (TFNO)** | **Cohesion (UFNO)** | **Cohesion (SFNO)** | **Cohesion (ROM)** |\\n|-----------------|---------------------|---------------------|---------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.41 / >2.00 | 0.42 / >2.00 | 0.46 / 1.51 | **0.37 / 0.40** |\\n| **MAE (\\u2193)** | 0.30 / >2.00 | 0.31 / >2.00 | 0.36 / 0.65 | **0.27 / 0.20** |\\n| **MS-SSIM (\\u2191)** | 0.01 / 0.30 | 0.27 / 0.33 | 0.39 / 0.50 | **0.45 / 0.90** |\\n\\n__Conclusion 2: ROMs stabilize the reverse conditional denoising process, while remaining robust and skillful. Even in the case of nonlinear dynamics with high-dimensional invariant measures, ever-larger linearized ROMs can be used, with the benefits of tractability (linear stability analysis) and stability (coherence)__\\n\\n\\n__References:__\\n\\n[1] Schiff, Yair, et al. \\\"DySLIM: Dynamics Stable Learning by Invariant Measure for Chaotic Systems.\\\" arXiv preprint arXiv:2402.04467 (2024).\\n\\n[2] Koopman, Bernard O. \\\"Hamiltonian systems and transformation in Hilbert space.\\\" Proceedings of the National Academy of Sciences 17.5 (1931): 315-318.\\n\\n[3] Chattopadhyay, Ashesh, and Pedram Hassanzadeh. \\\"Long-term instabilities of deep learning-based digital twins of the climate system: The cause and a solution.\\\" arXiv preprint arXiv:2304.07029 (2023).\"}
The model leverages the concept of Reynolds Averaged Navier-Stokes (RANS) as conditioning priors to address two key issues in diffusion-based autoregressive models: (1) instability in long-term predictions, and (2) the computational inefficiency of generating priors. The authors utilise a Koopman-based reduced order model to efficiently generate these priors, thereby speeding up the forecasting process. Standard fluid dynamics principles and diffusion models are adapted to support this framework. Furthermore, cohesion acts as a refinement mechanism that aggregates temporal sequences from the model\\u2019s output, improving performance. The proposed approach is evaluated on two benchmark fluid systems. While the work is well-motivated and promising, there are critical issues with the presentation and formulation (detailed below).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The targeted problems are both interesting and critical for applying diffusion models to autoregressive forecasting. The main ideas and design choices presented in this paper are well-motivated and promising.\", \"weaknesses\": \"### Presentation\\n1.\\tThe authors claim that trajectory planning, a concept central to reinforcement learning (RL), is crucial to Cohesion. However, they do not adequately explain the operational or conceptual connections. Although they mention reframing forecasting as trajectory planning, no clear relationship to RL principles\\u2014beyond standard autoregressive or multi-step predictions\\u2014is evident. Without a specific RL objective formulation or a direct link to decision-making scenarios, the reference to RL seems forced and mislead readers.\\n\\n2.\\tThe paper's use of terminology is potentially confusing and could be considered an abuse of terms. Specifically, the names \\\"Cohesion\\\" and \\\"coherent\\\" are too similar, which creates ambiguity about their distinct roles. 
From my personal understanding, Cohesion refers to the whole framework, whereas \\u201ccoherent\\u201d describes a predictable component of the dynamical system. However, in Figures 1, 2, and 3 and the experiment sections, \\\"Cohesion\\\" seems to refer to the temporal-aggregation component. In addition, given that the two words have specific meanings in the scientific community, the authors should be careful to avoid potential confusion and abuse of terminology. \\n\\n3. Koopman Operator Introduction (Lines 253-256): The introduction of the Koopman operator is vague and contains some mathematical issues: (1) the text does not discuss that practical implementations use a finite-dimensional approximation. (2) The domain and mapping of the encoder and decoder are not clearly defined. (3) No clear definition is given for $\\mathcal{G}$ and $\\mathcal{G}_E(\\mathcal{X})$. \\n\\n4. The numerical results are discussed too briefly, with only three lines dedicated to each experiment (Lines 378-380, 432-434). More in-depth discussion and analysis are needed to properly interpret the findings.\\n\\n5. There is only one baseline method. Additional baselines, particularly diffusion-based models, should be included for a more comprehensive comparison.\\n\\n## Others\\n1.\\tThe paper uses incorrect citation formatting. Citations should be enclosed in parentheses, e.g., \\\"(xxx et al., year),\\\" when they are not acting as the subject or object within sentences.\\n2.\\tFigures 5, 7, 8, and 9 lack axis descriptions.\\n3.\\tSeveral acronyms, such as Number of Function Evaluations (NFE), are introduced without definition at first mention. All abbreviations should be defined before initial use to ensure clarity.\", \"questions\": \"1. The Spherical Fourier Neural Operator (SFNO) is used as the only baseline despite being designed for spherical domains, whereas both demonstrated cases in the paper use standard square domains.
This choice makes SFNO an unsuitable and potentially misleading baseline. It is unclear why the authors included this baseline, as it does not align with the problem setting.\\n\\n2. The use of Reynolds decomposition is indeed a central aspect of the paper, and this core idea is not clearly stated. While RANS typically deals with time-averaged components, the authors extend it to a time-dependent setting without discussion on this extension and theoretical foundation. It remains unclear how the authors achieved this extension and ensured its validity.\\n\\n3. The authors claim improved inference efficiency through incorporating Koopman-based ROM; however, no comparison with other baselines is provided to support this claim.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"With respect to exploring the limits and design space of diffusion-based PDE solvers, we argue that a lightweight, low-order, linearized mean flow (inspired by Reynolds' decomposition), is sufficient, and that diffusion is _robust_ enough to resolve remaining residual flow. In order to illustrate this point and draw on the empirical connection with Reynold's decomposition, we perform two additional ablations: (1) substituting deep Koopman operator with TFNO/UFNO/SFNO, and (2) analyzing the scaling property of the ROM. 
Both analyses are performed in trajectory mode (R=T) for the Kolmogorov/SWE problems, at the final timestep T.\\n\\n__(1) Choice of mean flow estimator__: We substitute the deep Koopman operator, which both truncates and linearizes the PDE solution to mimic the mean flow in the Reynolds decomposition, with xNOs, and find substantial degradation at long-range rollouts.\\n| | **Cohesion (TFNO)** | **Cohesion (UFNO)** | **Cohesion (SFNO)** | **Cohesion (ROM)** |\\n|-----------------|---------------------|---------------------|---------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.41 / >2.00 | 0.42 / >2.00 | 0.46 / 1.51 | **0.37 / 0.40** |\\n| **MAE (\\u2193)** | 0.30 / >2.00 | 0.31 / >2.00 | 0.36 / 0.65 | **0.27 / 0.20** |\\n| **MS-SSIM (\\u2191)** | 0.01 / 0.30 | 0.27 / 0.33 | 0.39 / 0.50 | **0.45 / 0.90** |\\n\\n__Conclusion 2: ROMs stabilize the reverse conditional denoising process, while remaining robust and skillful__\\n\\n__(2) Scaling property of ROM__: We ablate the ROM with orders-of-magnitude reductions in parameter size. Even with a ROM two orders of magnitude smaller, Cohesion is competitive with the best probabilistic version of the xNOs.
\\n\\n| | **Cohesion (x$10^{-2}$)** | **Cohesion (x$10^{-1}$)** | **Cohesion (x$10^0$)** | **Reference (best xNO)** |\\n|-----------------|---------------------------|---------------------------|------------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.39 / 0.95 | 0.38 / 0.90 | 0.37 / 0.40 | 0.40 / 0.92 |\\n| **MAE (\\u2193)** | 0.30 / 0.50 | 0.28 / 0.49 | 0.27 / 0.20 | 0.30 / 0.51 |\\n| **MS-SSIM (\\u2191)** | 0.27 / 0.59 | 0.40 / 0.60 | 0.45 / 0.90 | 0.32 / 0.43 |\\n\\n__Conclusion 3: Diffusion is robust even with the coarsest approximation of mean flow__\\n\\nIn summary, we do not intend to propose another diffusion-based PDE solver, but rather to provide a __unifying framework__ that threads together existing works, and brings diffusion closer to the language of fluid dynamics, and how ideas in the latter (e.g., mean / stochastic flow) can be seamlessly incorporated to explore and probe the scaling limits of diffusion as a promising tool for long-range dynamics solver. \\n\\n__References__\\n\\n[1] Lippe, Phillip, et al. \\\"Pde-refiner: Achieving accurate long rollouts with neural pde solvers.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Janner, Michael, et al. \\\"Planning with diffusion for flexible behavior synthesis.\\\" arXiv preprint arXiv:2205.09991 (2022).\"}", "{\"comment\": \"> Diffusion baseline\\n\\nWith regards to comparison with other diffusion methods, like Dyffusion we argue that this is similar to our Cohesion framework in autoregressive mode (R=1) at which correction is performed, conditional on previous forecast. In this case, we find substantial increase in inference speed, if we perform inference as trajectory planning (R=T), with minimal deterioration in skill. 
In a sense, our framework provides a __generalization__ of how forecasting is performed: with, e.g., DYffusion as a special case of R=1, a video-generation task as R=T, and anywhere in between as R=[1..T], for a more flexible design, and it provides strategies (model-free temporal convolution) to ensure consistency.\"}", "{\"comment\": \"We would like to thank Reviewer C5HF for their constructive feedback. We address your concerns point-by-point by conducting additional experiments and/or providing further clarifications.\\n\\n> Please explain what the zero-shot forecasts without classifier and multi-scale physical structure mentioned in the paper are. There exists confusion in the statements.\\n\\nThe term zero-shot forecast is primarily used to describe how, with just a singly-trained (unconditional) score network, we are able to estimate the posterior distribution without being tied to the conditioning factor during the reverse process. This then allows us to run all the experiments and ablations with just a single score network, without re-training new ones whenever the distribution of the conditioning factors shifts (i.e., the use of different ROM architectures, e.g., SFNO, UFNO, FNO, Koopman). \\n\\nWith regards to multi-scale physical structure, it primarily refers to the length scales of the dynamics, e.g., from small-scale eddies to the large-scale mean flow. An ideal PDE solver for fluids is able to capture all these across-scale dynamics. \\n\\n> The baseline only uses SFNO. Why not compare other neural operator models, such as CNO[1], UNO[2], LSM[3], etc.\\n\\nWe agree that the use of SFNO with no spherical geometry is less ideal, which is the case for Kolmogorov flow, where the dynamics is discretized on a non-spherical grid. As such, we have added probabilistic formulations of UFNO and the classical (tensorized) FNO, all with an identical parameter budget to our Cohesion framework.
The table below summarizes the results after evaluating for the best ensembling strategies (IC-perturb / MC-dropout / Multi-checkpoints), across Kolmogorov/SWE experiments, at the final timestep T. \\n\\n| | **Cohesion (R=1)** | **Cohesion (R=T)** | **TFNO (ensemble)** | **UFNO (ensemble)** | **SFNO (ensemble)** |\\n|-----------------|--------------------|--------------------|---------------------|---------------------|---------------------|\\n| **RMSE (\\u2193)** | **0.31 / 0.25** | **0.37 / 0.40** | 0.40 / 1.52 | 0.42 / 0.92 | 0.49 / 0.93 |\\n| **MAE (\\u2193)** | **0.25 / 0.15** | **0.27 / 0.20** | 0.30 / 1.32 | 0.35 / 0.51 | 0.40 / 0.52 |\\n| **MS-SSIM (\\u2191)** | **0.68 / 0.95** | **0.45 / 0.90** | 0.01 / 0.01 | 0.21 / 0.42 | 0.32 / 0.43 |\\n\\n__Conclusion 1: Cohesion outperforms all xNOs baselines__\\n\\n> Lack of interpretability. How to prove the role of coherent flow after decomposition and how it promotes long-term stability prediction? Lack of theoretical explanation.\\n\\nIn order to illustrate this point and draw the empirical connection with the Reynolds decomposition, we perform two additional ablations: (1) substituting the deep Koopman operator with TFNO/UFNO/SFNO, and (2) analyzing the scaling property of the ROM. Both analyses are performed in trajectory mode (R=T) for the Kolmogorov/SWE problems, at the final timestep T.
\\n\\n__(1) Choice of mean flow estimator__: We substitute deep Koopman operator, that both truncates and linearizes the PDE solution to mimic mean flow in Reynold's decomposition, with xNOs, and find substantial degradation at long-range rollouts.\\n| | **Cohesion (TFNO)** | **Cohesion (UFNO)** | **Cohesion (SFNO)** | **Cohesion (ROM)** |\\n|-----------------|---------------------|---------------------|---------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.41 / >2.00 | 0.42 / >2.00 | 0.46 / 1.51 | **0.37 / 0.40** |\\n| **MAE (\\u2193)** | 0.30 / >2.00 | 0.31 / >2.00 | 0.36 / 0.65 | **0.27 / 0.20** |\\n| **MS-SSIM (\\u2191)** | 0.01 / 0.30 | 0.27 / 0.33 | 0.39 / 0.50 | **0.45 / 0.90** |\\n\\n__Conclusion 2: ROMs stabilize the reverse conditional denoising process, while remaining robust and skillful__\\n\\n__(2) Scaling property of ROM__: We ablate the ROM with orders-of-magnitude reduction in parameter size. Even with 2 orders magnitude smaller ROM, Cohesion is competitive even with the best probabilistic version of xNOs. \\n\\n| | **Cohesion (x$10^{-2}$)** | **Cohesion (x$10^{-1}$)** | **Cohesion (x$10^0$)** | **Reference (best xNO)** |\\n|-----------------|---------------------------|---------------------------|------------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.39 / 0.95 | 0.38 / 0.90 | 0.37 / 0.40 | 0.40 / 0.92 |\\n| **MAE (\\u2193)** | 0.30 / 0.50 | 0.28 / 0.49 | 0.27 / 0.20 | 0.30 / 0.51 |\\n| **MS-SSIM (\\u2191)** | 0.27 / 0.59 | 0.40 / 0.60 | 0.45 / 0.90 | 0.32 / 0.43 |\\n\\n__Conclusion 3: Diffusion scales with coarse flow approximation__\"}", "{\"comment\": [\"Thank you for your detailed response and the additional clarifications. I appreciate the effort in addressing the feedback, particularly the inclusion of new numerical results. 
Below are my thoughts on the review and some remaining concerns:\", \"**Connection to RL:** Thanks for the clarification of the RL principle and the provided reference, which suggests a good way to add such a constraint. However, from the reviewer\\u2019s view, this is still far from the RL principle, where decision-making/control are central objectives. Limiting RL principles to the context of the diffuser policy seems overly narrow and could be misleading. Additionally, the explanation provided appears more aligned with future work than with the scope of this paper. I suggest revising the RL-related sections with a broader and more accurate framing.\", \"**Clarity:** Thank you for the clarification; it is now clearer to me. I encourage the authors to further refine the manuscript to ensure that the key concepts are presented clearly and are easy to follow.\", \"**Additional Baselines:** The inclusion of additional baselines is appreciated and partially addresses my concerns. However, as I stated in Weakness 5, comparisons with diffusion-based models are still missing. Adding such models would make the evaluation more comprehensive, e.g., R\\u00fchling Cachay et al. (2024) and Gao et al. (2024).\", \"**Ablation Study:** I am glad to see the provided ablation study, which is a valuable and demonstrative addition.\", \"**Unresolved Points:** Weakness 3 and Question 2 remain unaddressed.\", \"In conclusion, based on the above unresolved concerns, I maintain my score but look forward to further discussions, particularly if the RL-related sections can be adjusted in the updated manuscript.\", \"References\", \"R\\u00fchling Cachay, Salva, et al. \\\"Dyffusion: A dynamics-informed diffusion model for spatiotemporal forecasting.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"Gao, Han, Sebastian Kaltenbach, and Petros Koumoutsakos.
\\\"Generative learning for forecasting the dynamics of high-dimensional complex systems.\\\" Nature Communications 15.1 (2024): 8904.\"]}", "{\"comment\": [\"We appreciate Reviewer 2JUM for the constructive feedback and encouragement!\", \"We do agree that in light of the revision requests and the near-end timeline of the discussion period, non-trivial changes to the manuscript is warranted (e.g., reframing of multiple sections, adding another baselines). We are currently planning to conduct these additional experiments and report their results in this forum for the community to view.\", \"With respect to the baselines, we show that diffusion is robust even in low-order, highly compressed prior, as demonstrated by orders-of-magnitude smaller ROMs. One of the reasons for the failure of xNOs, we postulate, is due to the lack of truncation, where high-frequency signals are retained and propagated over long-rollouts, causing instabilities. But this is merely speculation, and one would need to systematically ablate the stability w.r.t. the number of modes retained, a question for future work.\", \"With all that said, we do want to iterate that our work's first and foremost objective is to provide a dynamics-based perspective / formalism to navigate and analyze many works on diffusion-based PDE solvers, though often spoken with different jargons e.g.,:\", \"__mean flow__: first guess, control run, etc\", \"__stochastic variation__: correction, post-processing, etc\", \"__auxiliary contexts__: physics constraints, statistics, etc.\", \"Though the specific instantiation of these components differ and often entangled, we argue that the core ideas follow closely with classical fluid dynamics, hence our effort to formally draw the connection and probe the limits with a very simplified method (e.g., ROM). 
We do understand that no framework can fully capture the entire spectrum of works, but we hope that ours provides some clarity and intuition for navigating the fast-growing literature.\"]}", "{\"comment\": \"> More on scaling properties\\n\\nSince the purpose of this work is primarily to probe the limits of diffusion-based PDE solvers, we conducted two more ablations:\\n\\n__(1) Choice of mean flow estimator__: We substitute the deep Koopman operator, which both truncates and linearizes the PDE solution to mimic the mean flow in the Reynolds decomposition, with xNOs, and find substantial degradation at long-range rollouts.\\n| | **Cohesion (TFNO)** | **Cohesion (UFNO)** | **Cohesion (SFNO)** | **Cohesion (ROM)** |\\n|-----------------|---------------------|---------------------|---------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.41 / >2.00 | 0.42 / >2.00 | 0.46 / 1.51 | **0.37 / 0.40** |\\n| **MAE (\\u2193)** | 0.30 / >2.00 | 0.31 / >2.00 | 0.36 / 0.65 | **0.27 / 0.20** |\\n| **MS-SSIM (\\u2191)** | 0.01 / 0.30 | 0.27 / 0.33 | 0.39 / 0.50 | **0.45 / 0.90** |\\n\\n__Conclusion 2: ROMs stabilize the reverse conditional denoising process, while remaining robust and skillful__\\n\\n__(2) Scaling property of ROM__: We ablate the ROM with orders-of-magnitude reductions in parameter size. Even with a ROM two orders of magnitude smaller, Cohesion is competitive with the best probabilistic version of the xNOs.
\\n\\n| | **Cohesion (x$10^{-2}$)** | **Cohesion (x$10^{-1}$)** | **Cohesion (x$10^0$)** | **Reference (best xNO)** |\\n|-----------------|---------------------------|---------------------------|------------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.39 / 0.95 | 0.38 / 0.90 | 0.37 / 0.40 | 0.40 / 0.92 |\\n| **MAE (\\u2193)** | 0.30 / 0.50 | 0.28 / 0.49 | 0.27 / 0.20 | 0.30 / 0.51 |\\n| **MS-SSIM (\\u2191)** | 0.27 / 0.59 | 0.40 / 0.60 | 0.45 / 0.90 | 0.32 / 0.43 |\\n\\n__Conclusion 3: Diffusion scales with coarse flow approximation__\\n\\nIn summary, we do not intend to propose another diffusion-based PDE solver, but rather to provide a __unifying framework__ that brings diffusion closer to the language of fluid dynamics, and how ideas in the latter (e.g., mean / stochastic flow) can be seamlessly incorporated to explore and probe the scaling limits of diffusion as a promising tool for long-range dynamics solver.\"}", "{\"summary\": \"The paper proposes Cohesion, a coherence-based diffusion model for long-range dynamics forecasting, aimed at addressing challenges in autoregressive probabilistic forecasting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The integration of turbulence and diffusion principles with ROM-based conditioning is novel and provides a new approach for multi-scale and chaotic systems.\", \"This paper uses quantitative (RMSE, MAE, MS-SSIM) and physics-based metrics (spectral divergence), provide strong empirical support.\"], \"weaknesses\": [\"Lack of interpretability. How to prove the role of coherent flow after decomposition and how it promotes long-term stability prediction? Lack of theoretical explanation.\", \"Please explain what the zero-shot forecasts without classifier and multi-scale physical structure mentioned in the paper are. There exists confusion in the statements.\", \"Please explain why SFNO is used as the baseline. 
SFNO is mainly used to predict atmospheric dynamics. Intuitively, the datasets used in this paper are not suitable for spherical-geometry operators. Please give an explanation.\", \"The baseline only uses SFNO. Why not compare other neural operator models, such as CNO[1], UNO[2], LSM[3], etc.\", \"The model is based on Diffusion. Why not compare diffusion-based models, such as PreDiff[4] and DYffusion[5].\", \"A time-complexity comparison of the model with the baselines should be added.\", \"[1] Bogdan Raonic et al. 'Convolutional Neural Operators for robust and accurate learning of PDEs.' NeurIPS2023.\", \"[2] Md Ashiqur Rahman et al. 'U-NO: U-shaped Neural Operators.' TMLR2023.\", \"[3] Haixu Wu et al. 'Solving High-Dimensional PDEs with Latent Spectral Models.' ICML2023.\", \"[4] Zhihan Gao et al. 'PreDiff: Precipitation nowcasting with latent diffusion models.' NeurIPS2023.\", \"[5] Salva R\\u00fchling Cachay et al. 'DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting.' NeurIPS2023.\"], \"questions\": \"Please address the questions in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Connection to RL\\n\\nWe would like to clarify that our paper is not intended to apply an RL-based diffusion framework to fluid dynamics, but rather to give credit to where our inspiration comes from. With that said, we find several connections that might be worth exploring for future work, e.g., in policy-based RL, one attempts to maximize the expected return by following the policy gradient, i.e., $\\\\mathbb{E} \\\\left[ \\\\sum_{t=0}^T \\\\nabla_{\\\\theta} \\\\log \\\\pi_\\\\theta(a_t | s_t) R(\\\\tau) \\\\right]$, where the action $a_t$ and the RL state $s_t$ can be thought of as the forecast state and the conditioning prior (e.g., mean flow), respectively. This objective is similar to the guidance performed during the reverse denoising process.
The missing part here would be the incorporation of a reward function $R(\\tau)$ given a specific policy $\\tau \\sim \\pi_\\theta$. This reward can be designed flexibly, e.g., to capture long-term statistics, conservation constraints, etc. A similar formulation can be extended to value-based methods, where the objective is to maximize the reward function $R$ instead. \\n\\nThis RL formulation could indeed be interesting for many fluid dynamics applications, such as simulation, control, hybrid integration with numerical solvers, and even operational settings where data assimilation of sparse observations is performed such that the estimated state should best reproduce them (as additional guidance / reward).\"}", "{\"summary\": \"The paper proposes Cohesion, a model that combines a deterministic latent autoregressive component and a diffusion model for probabilistic dynamics forecasting. The high-level intuition of the model is as follows: first, the current state is encoded to a compressed latent space. Then, one or more steps of a deterministic autoregressive model are applied, after which the predictions in the latent space are decoded back to the data space to get the initial prediction(s). In terms of the Reynolds decomposition, the paper interprets this initial prediction as the coherent component of the flow. To resolve the fluctuating component, a diffusion model, which is conditioned on the predicted coherent component, is used to \\u2018stochastically refine\\u2019 the states. Cohesion is evaluated against the Spherical Fourier Neural Operator (SFNO) on Kolmogorov flow and Shallow Water Equation benchmarks, in terms of point-wise metrics like MSE, as well as structure-based and physics-based metrics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"**S1:** The model is evaluated in terms of spectral divergence, a measure of divergence between the energy distribution at different frequencies.
Indeed, for chaotic systems that can have strongly diverging trajectories for even slightly different initial conditions, point-wise metrics become meaningless for long simulation horizons, and the statistical properties of the system should be investigated.\\n\\n**S2:** The interpretation of the deep Koopman operator as predicting the coherent part of the flow and of the diffusion model as refining the fluctuating component is intuitive and provides a physics-inspired motivation for Cohesion.\\n\\n**S3:** The method supports different sampling strategies, dubbed trajectory planning and autoregressive. Autoregressive shows higher quality results, but requires more computational effort, whereas trajectory planning is more computationally efficient due to denoising the entire trajectory at once. As such, the sampling strategies provide an intuitive way to trade computational budget for accuracy.\", \"weaknesses\": \"**W1:** I found the presentation of the paper at some parts counterintuitive and confusing. In particular, Section 3 (the method section) starts with a quite extensive explanation of score-based generative models and zero-shot super resolution, while this concerns background material introduced in other works. I think the paper would benefit from merging this part with the text around Eq. 3 in the background section. This would make it easier for the reader to distinguish already established methods/algorithms and the novel aspects introduced in this paper.\\n\\n**W2:** The experimental evaluation has two weak aspects:\\n\\n* The method is compared against a single baseline only, the SFNO. There are other popular probabilistic methods that utilize a predictor-refinement approach, for example those that are cited in Section 2, e.g. Lippe et al. 2024; Srivastava et al. (2023); Yu et al. (2023); Mardani et al. (2024). 
It would greatly improve the paper to compare against a selection of those methods, and explain how the key conceptual differences in your approach relative to those works lead to different results. In addition, the work of Bergamin et al. [1] is relevant since it also considers two sampling modes that are highly similar to trajectory planning and autoregressive forecasting as described in the paper.\\n* The SFNO is not a suitable model for the Kolmogorov flow experiment, since the geometry in this experiment is not spherical. Do you have a specific reason to not compare to the regular FNO instead here (in addition to other baselines that would be good to add)?\\n\\n**W3:** After reading the paper, it remains unclear to me what the key novel insights are relative to prior papers. Diffusion-based refinement techniques have already been proposed, as cited in Section 2 of this work. In addition, the zero-shot conditioning on noisy or partial observations in the context of PDEs has already been established in [2]. As I currently understand, the novelty here lies primarily in the architectural choice of using the encoder-operator-decoder module to get the initial prediction. In itself, this seems a conceptually small change relative to earlier frameworks that show that similar predictor-refinement approaches work well. If using this architecture in the prediction-refinement setting could lead to a substantial performance increase or otherwise interesting results, these could be novel insights, but this is not investigated in the paper. I think it would help to get your point across if you contrast your work against the most similar related papers more explicitly, and highlight the differences between them.\\n\\n**References:**\\n\\n[1] Bergamin et al., 2024. Guided Autoregressive Diffusion Models with Applications to PDE Simulation. https://openreview.net/forum?id=1avNKFEIOL\\n\\n[2] Rozet & Louppe, 2023. Score based data assimilation. 
https://arxiv.org/abs/2306.10574\", \"questions\": \"**Q1:** Did you investigate whether the predicted conditioning prior actually aligns with the coherent part of the flow? I.e., does it predict some kind of average (or perhaps localized/moving average) behavior of the dynamics in space and/or time?\\n\\n**Q2:** It is unclear to me how the metrics are calculated over multiple samples from the probabilistic models. Did you simply average the metrics over those samples, or take a best-of-K like approach? What is the effect of the stochastic sampling on the calculation of those metrics using your approach?\\n\\n**Q3:** Please add labels on the horizontal axes of Figures 5 and 7.\\n\\n**Q4:** How should Figure 6 be interpreted? Is the spherical geometry here somehow projected on the 2D image? Is there a reason why the signal is only present at the top half of the image?\\n\\n**Q5:** Can you also provide plots of the spectral divergence over time, of Cohesion, the baseline methods, and coherent-only?\\n\\n**Q6:** Please comment on W2 and W3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' further responses.\\n\\nHowever, I still have questions regarding the RL formulation. From reviewer's perspective, the role of the RL formulation in this work appears to overlap with the score function, which can be interpreted similarly to the value function in RL as noted by the authors. Why is it necessary to explain the same concept from both perspectives? For instance, Rozet et al. (2023) applied the score-based diffusion model for data assimilation using a single core concept without mentioning RL.\\n\\nI also appreciate the explanation of the missing $R(\\\\tau)$ designed for different scenarios, but I believe it is not directly relevant to this work.\\n\\nRozet, Fran\\u00e7ois, and Gilles Louppe. 
\\\"Score-based data assimilation.\\\" Advances in Neural Information Processing Systems 36 (2023): 40521-40541.\"}", "{\"comment\": \"We would like to thank Reviewer yE9B for their constructive feedback! Please find our response below:\\n\\n> Connection with RL principles\\n\\nWe agree that we draw inspiration from the Diffuser paper to solve long-range forecasting of nonlinear dynamical system. Although not explicitly demonstrated in the paper, we clarify in the future work section that additional reward function could easily be incorporated using Diffuser's framework, such as the use of physics-based constraints for stability guidance e.g., [1]. \\n\\n> On clarity\\n\\nWe attempted to make the notation and terms defined early on in the paper to avoid confusion. For instance, a framework such as _Cohesion_ will be italicized. \\n\\n> On baselines\\n\\nWe have added probabilistic formulation of UFNO and the classical (tensorized) FNO, all with identical parameter budget as our Cohesion framework. These models are implemented off-the-shelf from https://github.com/neuraloperator/neuraloperator. The table below summarizes the results after evaluating for the best ensembling strategies (IC-perturb / MC-dropout / Multi-checkpoints), across Kolmogorov/SWE experiments, at the final timestep T. 
\\n\\n| | **Cohesion (R=1)** | **Cohesion (R=T)** | **TFNO (ensemble)** | **UFNO (ensemble)** | **SFNO (ensemble)** |\\n|-----------------|--------------------|--------------------|---------------------|---------------------|---------------------|\\n| **RMSE (\\u2193)** | **0.31 / 0.25** | **0.37 / 0.40** | 0.40 / 1.52 | 0.42 / 0.92 | 0.49 / 0.93 |\\n| **MAE (\\u2193)** | **0.25 / 0.15** | **0.27 / 0.20** | 0.30 / 1.32 | 0.35 / 0.51 | 0.40 / 0.52 |\\n| **MS-SSIM (\\u2191)** | **0.68 / 0.95** | **0.45 / 0.90** | 0.01 / 0.01 | 0.21 / 0.42 | 0.32 / 0.43 |\\n\\n__Conclusion 1: Cohesion outperforms all xNOs baselines__\\n\\n> On ROM ablation\\n\\nWe substitute deep Koopman operator, that both truncates and linearizes the PDE solution to mimic mean flow in Reynold's decomposition, with xNOs, and find substantial degradation at long-range rollouts.\\n| | **Cohesion (TFNO)** | **Cohesion (UFNO)** | **Cohesion (SFNO)** | **Cohesion (ROM)** |\\n|-----------------|---------------------|---------------------|---------------------|--------------------------|\\n| **RMSE (\\u2193)** | 0.41 / >2.00 | 0.42 / >2.00 | 0.46 / 1.51 | **0.37 / 0.40** |\\n| **MAE (\\u2193)** | 0.30 / >2.00 | 0.31 / >2.00 | 0.36 / 0.65 | **0.27 / 0.20** |\\n| **MS-SSIM (\\u2191)** | 0.01 / 0.30 | 0.27 / 0.33 | 0.39 / 0.50 | **0.45 / 0.90** |\\n\\n__Conclusion 2: ROMs stabilize the reverse conditional denoising process, while remaining robust and skillful__\\n\\nIn summary, in terms of __scaling limit__, we argue that a lightweight, interpretable conditioning factor to capture mean flow is __sufficient__ for skillful diffusion-based PDE solver, and highlights the latter's promise of revolutionizing (if not already) the field.\\n\\n__References:__\\n\\n[1] Schiff, Yair, et al. 
\\\"DySLIM: Dynamics Stable Learning by Invariant Measure for Chaotic Systems.\\\" arXiv preprint arXiv:2402.04467 (2024).\"}", "{\"summary\": \"The paper introduces Cohesion, a framework for probabilistic dynamics forecasting in chaotic systems like fluid dynamics. By reframing forecasting as a trajectory-planning task, the framework makes use of reduced-order models (ROM) to make denoising processes more efficient. This approach is much faster because it applies a single denoising pass to the whole forecast sequence, rather than using multiple autoregressive steps. Cohesion also includes a way of guiding the process without using classifiers, which makes it suitable for zero-shot forecasting. Tests on the Kolmogorov Flow and Shallow Water Equation show that Cohesion works better than other methods for capturing multi-scale physical structures and reducing spectral divergence, which is important for modeling chaotic systems accurately.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The framework is innovative in its approach to combining turbulence theory, conditional generative model, and reinforcement learning principles for forecasting. By applying diffusion models with coherence-based conditioning, the paper contributes a novel perspective on long-range forecasting. The idea of Turbulence-diffusion framework and the induced Zero-shot conditional sampling is interesting and inspiring. Cohesion presents a promising solution for long-range dynamics forecasting, especially in chaotic and partially observed environments. Experiments are exhaustive.\", \"weaknesses\": \"**Over-claims:**\\n\\n**Reinforcement Learning:** After reading the methodologies, I didn't see RL play any role in this work. The only related part is borrowing the idea from Diffuser (Janneretal. 2022), to do the (sub) trajectory generation rather than autoregressive generation for efficiency. 
Such point and even claiming to achieve stable long rollouts with RL are invalid to me, given that Diffuser is a purely a generative method but to solve RL problems only. The critical components like **value function and reward function** never appear. \\n\\n\\n**Zero-shot:** While zero-shot forecasting is addressed conceptually, further clarification on how this aspect was validated experimentally is lacking.\\n\\n\\n\\n**Writing:**\\n\\n- Abstract: some sentences are naively extracted from the introduction but unclear after compression. \\\"Nonetheless,Cohesionsupports...\\\", unclear what's the advantage the authors refer to. Specifically, why iterations over subsequences are not allowed in the previous works. And direct abbreviation \\\"NFEs\\\".\\n- Section 2: The demonstration of $u_K$ and the relationship between $u_K$ and $u$ should be exposed clearer. E.g. change the title of \\\"Coherent flow as conditioning prior\\\" to \\\"Conditional Diffusion Modeling\\\", then put the demonstration of $u_K$ here first, and then introduce the coherent flow is the conditioning prior.\", \"questions\": \"1. **On Model Flexibility**: ROM is a type of method with limited expressiveness e.g. Koopman Operator, a linear approximation model. Would Cohesion be adaptable to domains where coherent flow cannot be efficiently approximated by ROM?\\n2. **Baselines:** SFNO seems the only baseline. How about others like Markov Neural Operator (MNO)?\\n3. **Long-Term Stability**: The results are over long rollouts, how long is it and how hard is it to predict that?\\n4. 
**Ablation Study**: Ablation studies are lacking, such as for the window size W.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate Reviewer yE9B for their constructive feedback!\\n\\n> Unresolved Points: Weakness 3 and Question 2 remain unaddressed.\\n\\n- __On time-dependent Reynolds decomposition__\\n\\nIndeed, we assume a time-dependent Reynolds decomposition, which departs from the traditional time-averaged approach used in RANS. In our framework, the mean component $\\( \\bar{u}(t) \\)$ is defined as a local temporal average (or smoothed representation), while the fluctuating component $\\( u'(t) \\)$ captures deviations over shorter time scales. This decomposition allows us to model both instantaneous dynamics and the interplay between sub-sequences.\\n\\nWe frame the problem in terms of three cases:\\n\\n1. **Autoregressive (R=1)**: The __Markovian assumption__ ensures time independence, aligning with traditional Reynolds decomposition where fluctuations depend only on the immediate state.\\n\\n2. **Trajectory Planning (R=T)**: Here, the reverse denoising process maximizes the log-likelihood over the __entire trajectory__, implicitly capturing dependencies through the mean-fluctuation framework.\\n\\n3. **Intermediate Cases (R > 1, R < T)**: For these cases, we leverage the __pseudo-Markov blanket__ theorems proposed by Rozet et al. (2023) [1]. Their results show that it is possible to isolate dependencies within a sub-sequence such that the log-likelihood maximization is locally Markovian with respect to the next sub-sequence. 
This ensures that the Reynolds decomposition remains valid within each sub-sequence (similar argument as (2) but within sub-sequence), even in a time-dependent setting.\\n\\n> Koopman Operator\\n\\nIndeed, though in theory, the Koopman operator is able to linearize any nonlinear system in __infinite__ dimensionality, in practice this might be difficult and tricky. In cases where the invariant measures (e.g., attractors) of the mean flow modeled are high-dimensional, the Koopman operator has to scale. However, we argue that the benefit of such linearization is not to fully capture the potentially high-dimensional mean flow, but to provide other interpretable benefits, such as linear stability analysis. Nonetheless, the addition of nonlinear diffusion module can assist in resolving higher frequency signal and small-scale physics, and mitigate such difficulties.\\n\\nWe clarified the definition of $\\\\{\\\\mathcal{G}_E, \\\\mathcal{G}_D\\\\} \\\\in \\\\mathcal{G}$ as the invertible models between the state and observables in canonical Koopman terms (for our case is a simple convolution-based pair of encoder $\\\\mathcal{G}_E$ and decoder $\\\\mathcal{G}_D$), where the latent space where the Koopman operator acts has reduced dimensionality relative to the physical-data space i.e., $n_d << n_x$.\\n\\nWe will clarify all these in the manuscript.\\n\\n> Additional Baselines\\n\\nWe do not intend to propose another diffusion-based PDE solver, but rather to provide a __unifying framework__ that threads together existing works, and brings diffusion closer to the language of fluid dynamics, and how ideas in the latter (e.g., mean / stochastic flow) can be seamlessly incorporated to explore and probe the scaling limits of diffusion as a promising tool for long-range dynamics solver. 
For instance, in referenced works like [2] and [3], we argue that the former is an instantiation of Cohesion in autoregressive mode (R=1) at which correction is performed, conditional on previous forecast, and the latter is a great example of Cohesion in trajectory planning mode (R=T). Both works only differ in how they incorporate additional conditioning context (e.g., physics, long-range statistics). In a sense, our framework provides a __generalization__ to how forecasting is performed, from two extreme cases, and anywhere in between as R=[1..T], for a more flexible design and provides strategies (e.g., model-free temporal convolution) to ensure consistency.\\n\\n__References__:\\n\\n[1] Rozet, F., & Louppe, G. (2023). Score-based Data Assimilation. ArXiv, abs/2306.10574.\\n\\n[2] R\\u00fchling Cachay, Salva, et al. \\\"Dyffusion: A dynamics-informed diffusion model for spatiotemporal forecasting.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Gao, Han, Sebastian Kaltenbach, and Petros Koumoutsakos. \\\"Generative learning for forecasting the dynamics of high-dimensional complex systems.\\\" Nature Communications 15.1 (2024): 8904.\"}", "{\"comment\": \"We would like to thank Reviewer HSiU for their constructive feedback. Please find our response specific to queries and concerns raised.\\n\\n> Lack of clear citations\\n\\nWe moved several background sections e.g., zero-shot forecasting and score-based diffusion to the appendix and reframe them as background. We have also, to the best of our ability, provide clear citations in places highlighted, and elsewhere.\\n\\n> Unclear contributions\\n\\nWe attempt to make our contributions clearer, with more supporting evidence accompanying this rebuttal: \\n\\n- We propose __formal connection__ between conditional diffusion and Reynold's decomposition framework, and unify existing works (on diffusion-based PDE solvers) based upon this framework. 
\\n- This unifying framework then allows us to demonstrate the __sufficiency of low-order, linearized__ conditioning prior for stable and skillful long-range forecast of nonlinear dynamics.\\n- We strengthen our claims by studying the __scaling properties__ of diffusion-based PDE solvers in a manner that is closer in concept to fluid dynamics.\\n\\n> Lack of baselines\\n\\nWe have added probabilistic formulation of UFNO and the classical (tensorized) FNO, all with identical parameter budget as our Cohesion framework. These models are implemented off-the-shelf from https://github.com/neuraloperator/neuraloperator. The table below summarizes the results after evaluating for the best ensembling strategies (IC-perturb / MC-dropout / Multi-checkpoints), across Kolmogorov/SWE experiments, at the final timestep T. \\n\\n| | **Cohesion (R=1)** | **Cohesion (R=T)** | **TFNO (ensemble)** | **UFNO (ensemble)** | **SFNO (ensemble)** |\\n|-----------------|--------------------|--------------------|---------------------|---------------------|---------------------|\\n| **RMSE (\\u2193)** | **0.31 / 0.25** | **0.37 / 0.40** | 0.40 / 1.52 | 0.42 / 0.92 | 0.49 / 0.93 |\\n| **MAE (\\u2193)** | **0.25 / 0.15** | **0.27 / 0.20** | 0.30 / 1.32 | 0.35 / 0.51 | 0.40 / 0.52 |\\n| **MS-SSIM (\\u2191)** | **0.68 / 0.95** | **0.45 / 0.90** | 0.01 / 0.01 | 0.21 / 0.42 | 0.32 / 0.43 |\\n\\n__Conclusion 1: Cohesion outperforms all xNOs baselines__\\n\\nWith regards to comparison with other autoregressive-based diffusion, like Lippe et al., [1] we argue that this is similar to our Cohesion framework in autoregressive mode (R=1) at which correction is performed, conditional on previous forecast. In this case, we find substantial increase in inference speed, if we perform inference as trajectory planning (R=T), with minimal deterioration in skill. In a sense, our framework provides a __generalization__ to how forecasting is performed with e.g., Lippe et al. 
[1], as a special case of R=1, video generation task as R=T, and anywhere in between as R=[1..T], for a more flexible design and provides strategies (temporal convolution / Markov blanket first proposed in the Diffuser paper [2] to solve RL) to ensure consistency.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"I thank the authors for their extensive rebuttal. Please find my response below:\\n\\n**Regarding W1/W3 - novelty and contributions:** I appreciate that the authors summarize their key contributions in the rebuttal, and propose to move part of section 3 to the appendix or background. However, editing the manuscript to reflect the proposed changes requires almost completely rewriting section 3. Without seeing the updated paper, it is impossible to assess the updated manuscript.\\n\\n**Regarding W2 - baselines:** I appreciate the newly added baselines. The results seem promising. Can you add the plots similar to Figures 4 and 5 to the paper or the appendix so that the new results can be examined in more detail? Further, the first point under W2 is unaddressed. If you are claiming that your method generalizes e.g. Lippe et al., it would be good to add PDE-Refiner (or another predictor-refiner-like method) as a baseline.\\n\\n**Regarding choice of mean flow estimator:** These results look promising. Can you also explain in more detail _why_ the xNO models perform worse than your ROM as mean flow predictors? \\n\\nOverall, I think the authors' rebuttal goes in the right direction to more clearly reflect the contributions (W1/W3) and provides substantially improved experimental results of ablations and baselines (W2). Still, incorporating this into the manuscript requires substantial rewriting of sections 3 and 4. 
Without seeing the updated paper, it is difficult to judge the quality, and assessing an updated version (if/when available) might require another full review, given the extent of the changes. Additionally, I would encourage the authors to add at least one predictor-refiner method (e.g. Lippe et al.) as a baseline.\"}", "{\"title\": \"Response\", \"comment\": \"I appreciate the effort the authors put in to address my concerns. But I have decided to keep my score, for the following reasons:\\n- Overclaims about RL. Getting inspiration from RL is fine, but when the elements of RL are lacking, I personally don't think it is acceptable for the authors to bring RL up in the introduction. In fact, it is far from RL, and the inspiration should be more about sequence generation.\\n- Zero-shot. I'm not sure whether this should be understood as a generalization ability or as zero-shot, and details about this are lacking in the experiments. \\nIn conclusion, I don't think this work is ready to publish. The logic, inspiration, intuition, and some parts of the experiments about demonstrating zero-shot ability should be rewritten.\"}", "{\"metareview\": \"The paper focuses on long-range forecasting of dynamics with a \\u201ctrajectory planning\\u201d perspective. It recognizes a parallel between diffusion-based models and turbulence. Specifically, it conditions a diffusion process on a prior of coherent structures obtained from reduced order modeling of all snapshots of a sequence and generates the whole dynamics as a single denoising process with reconstruction guidance. 
It tests the method on the two chaotic systems of Kolmogorov flow and shallow water and compares favorably with some included baselines.\\n\\nThe reviewers appreciated the novelty of the ROM based prior distribution, the efficient non-autoregressive generation, the significant empirical support with several physical and image-based metrics, \\n\\nOn the other hand, they raised important concerns regarding similarity of contributions to prior work without proper description of the differences (particularly Rozet et al.), lack of details on different components of the method design and experiments, lack of several relevant baselines including neural operators and diffusion based models among others, and potential overclaiming regarding parallels to reinforcement learning despite of no empirical evidence of a full RL setup.\\n\\nThe authors provided a thorough rebuttal where they included some more baselines and clarified some of the uncertainties about the details of the methods, the experimental setup, and the rationale for the selection of the current baselines.\\n\\nMost reviewers attended to the rebuttal but did not find it convincing. The main outstanding concerns are proper reference of prior work and what is the precise contribution of this work and the inclusion of several relevant baselines that are currently absent from this work.\\n\\nAll reviewers eventually recommend rejection with which the AC agrees. The paper should undergo a major revision not only to address the above two main concerns but also several relevant and detailed feedback from the five expert reviewers.\", \"additional_comments_on_reviewer_discussion\": \"Five expert reviewers evaluated the paper with expertise in physics, generative modeling, and reinforcement learning. They provided detailed review and mostly considered the rebuttal. The reviewers unanimously lean towards rejection.\"}" ] }
5ZpN6W5uRm
Tournament Evaluation of Large Language Models
[ "Richard Kelley", "Duncan Wilson" ]
For several decades, the standard approach to evaluating a learned model has been to compute a numerical loss that summarizes the quality of the model based on a previously unseen test set. Two models for the same task can then be compared by looking at their scores on this set. However, recent experience with large language models (LLMs) has shown that comparing summary statistics of two broadly-capable models may not provide a reliable predictor of performance on real-world tasks. This has led to a growing use of crowd-sourced human feedback directly comparing outputs from pairs of models. While helpful, this approach requires a process that involves significant time and human effort, limiting the number of models that can be thoroughly evaluated. To address the need for a scalable method of comparing modern LLMs, we present a novel approach to evaluation via tournament-style model competitions that are constructed automatically from pre-existing benchmarks. We use these automatically-constructed tournaments to compute ratings for a range of models on a diverse set of tasks that use automated scoring via both multiple-choice and free-form text generation. We compare four prominent rating systems: Elo, Glicko, TrueSkill$\texttrademark$, and the Bradley-Terry model, and find that automatically-constructed tournaments provide reliable information about the relative performance of LLMs while using only a fraction of the amount of data required by current benchmark-based evaluation methods. We discuss implications for model evaluations and propose future directions for large-scale LLM comparisons.
[ "evaluation", "large language models", "Elo ratings", "metrics", "benchmarks" ]
Reject
https://openreview.net/pdf?id=5ZpN6W5uRm
https://openreview.net/forum?id=5ZpN6W5uRm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vH0tWPE3XZ", "ub4D7sRfCE", "fZk0r2URC8", "Zd8btChiz3", "LmTd36Uziq", "LTNeNxgedT", "1o8lHNOFer" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "meta_review", "official_review", "decision" ], "note_created": [ 1732162708001, 1730702053444, 1729904948027, 1730579534162, 1734078507958, 1730377860353, 1737523691242 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5207/Area_Chair_T24b" ], [ "ICLR.cc/2025/Conference/Submission5207/Reviewer_Ertc" ], [ "ICLR.cc/2025/Conference/Submission5207/Reviewer_zKEr" ], [ "ICLR.cc/2025/Conference/Submission5207/Reviewer_aACx" ], [ "ICLR.cc/2025/Conference/Submission5207/Area_Chair_T24b" ], [ "ICLR.cc/2025/Conference/Submission5207/Reviewer_j392" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Reminder: Please respond and update the score if necessary\", \"comment\": \"Dear Reviewers,\\n\\nKindly ensure that you respond proactively to the authors' replies (once they are available) so we can foster a productive discussion. If necessary, please update your score accordingly. We greatly appreciate the time and effort you\\u2019ve dedicated to the review process, and your contributions are key to making this process run smoothly.\\n\\nThank you,\\n\\nAC\"}", "{\"summary\": \"This paper adapts existing automated benchmarks for model pairwise comparisons within a scalable tournament-like setting, offering an alternative to human preference evaluations. t introduces an evaluation metric \\u03ba that depends on the task evaluated, using either exact match for straightforward answers or probability comparison for selecting the most likely completion from multiple choices. The paper tests the evaluation method for transitivity, order invariance, and K-factor sensitivity. Then, it assesses how rankings derived from benchmarks correlate with those produced by their tournament-based approach. 
Lastly, the work compares scores computed by popular rating systems like Elo and TrueSkill.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A comprehensive comparison of Elo with other rating systems such as Glicko, Trueskill and Bradley-Terry, which is lacking in the literature of LLMs evaluation.\", \"weaknesses\": [\"Missing literature citations in the introduction and lack of distinct structure between the \\\"Related Work\\\" and \\\"Background\\\" sections, which could be improved by either merging them into a single cohesive section or by defining a clearer separation of the topics discussed within each.\", \"Limited evidence that the tournament method offers deeper or new insights into model capabilities compared to traditional accuracy-based benchmarks, as it reuses these data point. The scalability advantage argument does also apply to conventional benchmarking.\", \"The work assumes properties such as transitivity or order invariance without stress-testing the tournament setting under conditions that could challenge these properties, such as models with very close win rates. The sensitivity of the Elo rating system to hyperparameters like the K-factor and its potential volatility, as shown in the literature, in closely matches scenarios are not sufficiently addressed, which could question the reliability and stability of the evaluation outcomes under different settings.\", \"Initial data used for computing the scores is not provided, such as accuracy on benchmarks and win rates (\\u03ba values).\"], \"questions\": \"1. Figure 1 shows the performance of quantized Llama models as the total number of evaluated instances grows, where each match consists of multiple instances. Typically, in Elo rating systems, a match would include only a single comparison between two models. Can the authors clarify their decision to include multiple instances per match? 
Furthermore, the plot aims to show how Elo scores diverge with increasing match size * number of matches. Averaging the Elo scores over multiple instance orderings (N reorderings) provides more stable and accurate results reflective of actual win rates. I assume that no averaging over different \\\"matches\\\" reorderings has been considered when computing the Elo scores, as you only average within each match (k index in equation 3).\\n2. Regarding the transitivity results, the Elo rating system shows failure at specific scenarios where win rates closely match (i.e. 0.49 vs. 0.51). In case the data used here only includes win rates that are skewed (i.e. 0.65 vs. 0.35), it is difficult to assume transitivity of the tournament setting. Could you provide these rates for reference for all experiments discussed in the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes tournament evaluation for evaluating LLMs. A tournament is composed of several LLMs to be compared, some tasks (benchmark datasets), the number of instances in each dataset to be used, and the number of *matches*, where a *match* compares two models based on some data instances. The relative strength of LLMs can be calculated by rating systems including Elo, the Bradley-Terry model, Glicko, and TrueSkill. 
This paper conducts an empirical evaluation of several LLMs using tournament evaluation and shows that the results are consistent and that the tournament evaluation exhibits transitivity.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This paper proposes an interesting idea of comparing different LLMs using tournament evaluation.\", \"This paper brings many alternative rating systems to the field of LLM evaluation that are currently not widely used, including Glicko and TrueSkill.\"], \"weaknesses\": [\"The paper does not provide thorough and sound experiments to justify the effectiveness of tournament evaluation. Detailed weaknesses:\", \"The paper only uses 7 *LLMs* for comparison, while one of the LLMs this paper compares is a 137M GPT2. As a paper that proposes a new benchmarking method, more comprehensive results on more LLMs should be provided.\", \"The rationales for the selected tasks (old open LLM benchmark tasks) are not well justified. In Section 3.1, the paper mentions some issues of the old version of the open LLM benchmark. However, the results of this paper are mostly based on the saturated benchmark.\", \"The contribution of this paper is not very clear. The paper does not explain why tournament evaluations are better than simply aggregating the scores/performance of several benchmarks of each LLM. It is also unclear why the proposed method should be consistent with the ranking of the benchmark (benchmark consistency defined in the paper). If benchmark consistency is what we want, why don\\u2019t we just use the benchmark to compare LLMs?\", \"The experiment setting is somewhat unclear. For example, the experiment setting corresponding to Figure 1 and Section 6.1.1 is never specified.\", \"The notations and framework are complex and not easy to understand. It would be better to have a figure or table to clearly illustrate all the terms and notations used in the framework and their relations. 
The product notation in Equation (7) is unclear. The $\\pi_\\alpha$ in Line 168 on page 4 is unclear.\", \"The takeaway from the results of comparing different rating systems is unclear. More in-depth discussions and experiments are required.\", \"The introduction section\\u2019s bibliography is far from satisfactory. This section only cites two papers. However, the section is filled with prior works that are not properly cited and unsupported claims that should be supported by prior works.\"], \"questions\": [\"What does the random seed in Section 6.1.4 affect?\", \"The sentence on page 8, Line 388, is very hard to understand. The sentence on page 6 Line 322 is odd and ungrammatical\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new LLM evaluation paradigm that constructs head-to-head tournaments from existing benchmarks. The authors claim that evaluating models in this way reliably captures the relative performance of LLMs while using only a small portion of the dataset when compared to running these benchmarks on each model separately. The work also addresses common problems with Elo-based ranking systems and empirically demonstrates the method's robustness to them. The paper also includes an important discussion of the future work in this direction and the importance of having a scalable method for comparing LLMs.\", \"soundness\": \"1\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The ideas of this paper are incredibly interesting. The authors describe the problem eloquently and their proposed solution is creative and novel. The presentation of the paper is also very pleasant, and it is, for the most part, very easy to follow and read. They lay down the groundwork of this idea in a clear manner and are extensive in their references and explanations of prior work. 
The limitations of the method are also discussed thoroughly and a valuable discussion about future work is provided.\", \"weaknesses\": \"My only issue with this paper is that I think the experiments do not seem to substantiate the claims. While there are experiments that I found to be interesting and valuable (namely the transitivity and random seed experiments/analyses), I have some concerns regarding a few of them. My concerns are expanded on in the questions part of this review.\\n\\nFurthermore, I would have loved to see some more of the results of their experiments. The work very clearly had a lot of experiments done yet the ones presented are mostly only aggregates. I think the inclusion of an appendix with all the scores generated from every experiment would be very valuable for deeper analysis and clarity for the readers.\", \"questions\": \"The following are the parts of this paper that I found to be unsubstantiated. Any of the following could be due to a misunderstanding; I am willing to discuss this further.\\n## Order invariance inconclusivity\\nThe Boubdir et al. [1] paper does this experiment at a much larger scale, which could very well be the reason that the issue of order variance emerges. Furthermore, the quantized versions of Phi and GPT-2 could have a significant performance gap, which could cause the order to matter less. Boubdir et al. find that this problem is likely exacerbated when models are similar in performance. It would be interesting to explore whether a larger-scale order invariance test could yield different insights.\\n\\n## K-value\\nWhy choose a K value of 10? There seems to be a switch in ranking for meta-llama/Meta-Llama-3.1-8B-Instruct starting from 16. It would be helpful to understand the rationale behind the choice of K=10, especially given the observed ranking differences when K=32. Could additional insight into this choice be provided? 
I would love to know your opinion on this.\\n\\n## Data/compute quantity\", \"you_mention_in_the_conclusion\": \"\\\"However, the use of head-to-head comparison of models in the form of our proposed tournament approach offers a method to easily and automatically compare models with as much or as little data and compute as may be available\\\".\\n\\nIn section 6.2, you make the argument that there is a correlation between benchmark accuracy and tournament performance averaged over the tasks.\\n1. How are the tasks averaged? I'm not sure I fully understand the process here\\n2. The compute and data used for this experiment seem to be quite substantial. If I understand correctly, a tournament of 4 rounds with 128 tasks would have 84 total matches, with each running 2 * 128 = 256 forward passes. So this means there is a total of 21504 forward passes. When dividing that by the number of models 7, we get a total of 3072 forward passes per model, which is a lot more compute-intensive than running the entirety of the TruthfulQA dataset of 817 samples for each model. It seems to me that this doesn't support the claim that the method works \\\"with as much or as little data and compute as may be available,\\\" especially since it was shown in Figure 1 that a smaller (number of matches * match size) results in scores that don't conclusively convey a reliable performance ranking. Please correct me if I misunderstood something and miscalculated. \\n## Quantization differences\\n\\\"This is typical of our experience using tournament evaluation to compare quantized models; we have found that differences between quantization levels that may be difficult to detect via benchmarks can typically be made clear through the use of tournament evaluation.\\\" \\n- What dataset is being used for Figure 1?\\n- What results do you mean by \\\"This is typical of our experience\\\"? 
I think the results of the other quantized/non-quantized pairs tested should be presented to support this claim.\\n\\n### Minor grammar issue\\n232 in a players should be in a player's\\n\\n------------\\n[1] https://arxiv.org/pdf/2311.17295\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper provides a detailed comparison of the Elo rating system with other systems such as Glicko, Trueskill, and Bradley-Terry, identifying a significant gap in the literature related to LLM evaluation. The authors propose an innovative LLM evaluation method that employs head-to-head tournaments using existing benchmarks. This approach efficiently captures the relative performance of LLMs while utilizing only a fraction of the dataset, unlike traditional methods that evaluate each model on complete benchmarks. The paper addresses common challenges associated with Elo-based rankings, showcasing the robustness of their method, and underscores the need for scalable approaches to LLM comparison, offering insights for future research in this domain.\\n\\nHowever, all reviewers agree that the paper does not meet ICLR standards, particularly in terms of clarity, as highlighted by Reviewer j392. The paper is difficult to comprehend, and Reviewer aACx notes that the experiments do not adequately support the claims. Additionally, Reviewer Ertc points out the absence of a thorough literature review and relevant citations. Due to these significant shortcomings, I recommend rejecting the paper.\", \"additional_comments_on_reviewer_discussion\": \"No changes or developments have been observed, as neither the reviewers nor the authors engaged in any follow-up discussions. 
I have encouraged the reviewers and authors to start the discussion during the author response period.\"}", "{\"summary\": \"The main contribution of the paper is a detailed analysis of tournament style of evaluation for LLMs. The authors perform evaluation across MCQ and free-form text generation using 4 rating systems (Elo, Glicko, TrueSkill, Bradley-Terry).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The findings of this work are quite interesting and would be very useful in a practical sense.\"], \"weaknesses\": [\"The paper claims to put forth a novel method of evaluation; however, that contribution is very obscure. I urge the authors to highlight the exact details in the paper and, if possible, condense it as a separate algorithm block for clear demonstration. Additionally, a comparison of the proposed evaluation algorithm to some baselines would serve as good visualization and proof of concept. Specifically:\", \"What is the exact algorithm being proposed, and how does it differ from existing evaluation methodologies? I assume that one of the fundamental baselines would be a brute-force comparison with a rating system, which would lead to exponential comparisons for a large number of models and datasets.\", \"How is the proposed evaluation algorithm better than existing methodologies? By extension, what are the existing evaluation methodologies over which the novelty is claimed? I feel these should be documented in the Related Works (RW). I see that Section 6 discusses metrics to objectively assess the proposed evaluation strategy. How do other baselines perform with these metrics?\", \"The flow between sections is digressive and confusing. Section 5 on `Rating Systems` reads like a survey, and it would be better placed in Related Works, which discusses previous works in detail as a quick recap. 
Section 4 proposes the idea of tournament-style evaluation (Methodology), and Section 6 should follow immediately as Results and Discussions. Even within Section 4, the definitions of `HellaSwag` and `GSM8k` could be moved to the end, as they disrupt the flow. Similarly, Section 6.1.4 is disconnected from preceding sections. While it\\u2019s discussed as an extension of 6.1.3, it is unrelated to tournament-style evaluation, focusing instead on parameter tuning for Elo. It belongs with Section 5.1 and is not well-suited to Section 6, which concerns the results and discussion of the proposed (tournament-style) evaluation method.\", \"In its current state, the paper requires a complete rewrite to create a more cohesive flow without digressions and breaks. The introduction and abstract also need content that highlights precisely why the proposed method is superior, in what ways, and how the standard evaluation setup suffers without it. In summary, the introduction does not clearly motivate the upcoming contribution.\"], \"questions\": [\"What is `mean` in Table-1? I am guessing it is not a mean across the various tasks.\", \"How is the set $\\\\mathcal{S}$ of indices selected from dataset $\\\\mathcal{D}$? Is it a random sample? Are the same number of samples selected from each dataset? Additionally, how are the models paired up for evaluation\\u2014is there a set rule for selection?\", \"In Section 6.1.5, what random value is the random seed controlling?\", \"If possible, a quick flowchart or algorithm block representation of the evaluation strategy would be helpful.\", \"The first paragraph of Section 5 seems unnecessarily convoluted with mathematical formulation, and those definitions are not useful beyond that paragraph. 
Please try to simplify it, or provide one straightforward definition.\", \"Elo can get quite tedious with multiple models, so a \\\"sparse-Elo\\\" should also be explored, where Elo is run in a dense fashion ($N \\\\choose 2$ comparisons) for a few rounds, and then a model is only compared with models in a certain bracket around it, say R-1000 to R+1000, where R is the rating of the current model.\", \"Lines 119-121: Providing the full forms of those benchmarks, along with a brief description of the task being evaluated, would be beneficial for first-time readers and aid in quick reference.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
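The reviews in the record above reason repeatedly about Elo updates and forward-pass budgets (84 matches of 2 × 128 passes across 7 models). A minimal, hedged sketch of that machinery — the K-factor of 10 mirrors the K value the reviewer questions, and the match counts simply restate the reviewer's own arithmetic; none of it is an official figure from the paper:

```python
# Hedged sketch of the Elo machinery discussed in the reviews above.
# K = 10 mirrors the K-value questioned by the reviewer; the match counts
# (84 matches, 2 * 128 forward passes each, 7 models) restate the reviewer's
# own arithmetic and are assumptions, not figures from the paper.

def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A in a match against player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 10.0):
    """One Elo update; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    e_a = elo_expected(r_a, r_b)
    delta = k * (score_a - e_a)
    return r_a + delta, r_b - delta  # zero-sum: B loses exactly what A gains

# Reviewer's compute tally: 84 matches, each with 2 * 128 forward passes.
total_passes = 84 * (2 * 128)         # 21504 forward passes overall
passes_per_model = total_passes // 7  # 3072 when averaged across 7 models
```

The update is zero-sum, which is one reason rankings among similarly strong models can be sensitive to match order and match count, as the reviewers note.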
5ZkuWAbxzT
Sharp Analysis for KL-Regularized Contextual Bandits and RLHF
[ "Heyang Zhao", "Chenlu Ye", "Quanquan Gu", "Tong Zhang" ]
*Reverse-Kullback-Leibler* regularization has emerged as a predominant technique used to enhance policy optimization in reinforcement learning (RL) and reinforcement learning from human feedback (RLHF), which forces the learned policy to stay close to a reference policy. While the effectiveness and necessity of KL-regularization have been empirically demonstrated in various practical scenarios, current theoretical analyses of KL-regularized RLHF still obtain the same $\mathcal{O}(1 / \epsilon^2)$ sample complexity as problems without KL-regularization. To understand the fundamental distinction between policy learning objectives with KL-regularization and ones without KL-regularization, we are the first to theoretically demonstrate the power of KL-regularization by providing a sharp analysis for KL-regularized contextual bandits and RLHF, revealing an $\mathcal{O}(1 / \epsilon)$ sample complexity when $\epsilon$ is sufficiently small. We further explore the role of data coverage in contextual bandits and RLHF. While the coverage assumption is commonly employed in offline RLHF to link the samples from the reference policy to the optimal policy, often at the cost of a multiplicative dependence on the coverage coefficient, its impact on the sample complexity of online RLHF remains unclear. Previous theoretical analyses of online RLHF typically require explicit exploration and additional structural assumptions on the reward function class. In contrast, we show that with sufficient coverage from the reference policy, a simple two-stage mixed sampling strategy can achieve a sample complexity with only an additive dependence on the coverage coefficient. Our results provide a comprehensive understanding of the roles of KL-regularization and data coverage in RLHF, shedding light on the design of more efficient RLHF algorithms.
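The abstract above studies a reverse-KL-regularized objective. For a single context, an objective of this form has the well-known closed-form maximizer $\pi^*(a|x) \propto \pi_0(a|x)\exp(\eta\, r(x,a))$. A minimal numerical sketch — the toy reward, the reference policy, and the exact placement of $\eta$ are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Closed-form maximizer of the reverse-KL-regularized bandit objective
#   max_pi  E_{a~pi}[r(x, a)] - (1/eta) * KL(pi || pi_0),
# namely pi*(a|x) proportional to pi_0(a|x) * exp(eta * r(x, a)).
# The reward values and reference policy below are toy assumptions.

def gibbs_policy(pi0: np.ndarray, r: np.ndarray, eta: float) -> np.ndarray:
    logits = np.log(pi0) + eta * r
    w = np.exp(logits - logits.max())  # shift by max for numerical stability
    return w / w.sum()

pi0 = np.array([0.5, 0.3, 0.2])  # reference policy over 3 actions
r = np.array([1.0, 0.0, -1.0])   # toy reward for a fixed context x

pi_tight = gibbs_policy(pi0, r, eta=1e-6)  # strong regularization: near pi_0
pi_loose = gibbs_policy(pi0, r, eta=50.0)  # weak regularization: near argmax r
```

As $\eta$ shrinks, the maximizer collapses onto $\pi_0$ — consistent with the authors' later reply to Reviewer 1aDe that a smaller $\eta$ effectively restricts attention to a smaller policy class around the reference policy.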
[ "Reinforcement learning", "KL regularization" ]
Reject
https://openreview.net/pdf?id=5ZkuWAbxzT
https://openreview.net/forum?id=5ZkuWAbxzT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x0sGVjdYVe", "wWU5zfDN1I", "v2hibx8YkT", "tUCEAUFvev", "pJwb0mWMqm", "j7nXzPvhHx", "fMfJQ8lBjW", "ckoXq2mYcV", "c5wAa1PBES", "VIy1e4h2WB", "RWNFzgqV1T", "NPbhktk5Vf", "NKFwdU7S3l", "IjVLOCRiaC", "BD15zhtwBv", "7umAmBO3rN", "2SNVhH3bGK", "19ZMk1tgxV" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730678239130, 1732266585620, 1737524236514, 1732403527205, 1730490206660, 1732390590592, 1732431415170, 1732266546166, 1732639942220, 1732266650243, 1732266498756, 1732266761326, 1734880911215, 1732460346000, 1731261459871, 1732428408275, 1730170757834, 1732690539817 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_DLrA" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_DLrA" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_74fV" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_6sVe" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_1aDe" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Area_Chair_eDwr" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_74fV" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_1aDe" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ], [ "ICLR.cc/2025/Conference/Submission13135/Reviewer_6sVe" ], [ "ICLR.cc/2025/Conference/Submission13135/Authors" ] ], 
"structured_content_str": [ "{\"summary\": \"The paper provides sharp theoretical analyses for KL-regularized contextual bandit (CB) and RLHF problems. It first studies the theoretical benefits of KL regularization in CB and RLHF and shows that KL regularization improves the sample complexity to $O(1/\\\\epsilon)$, while for unregularized problems $O(1/\\\\epsilon^2)$ samples are required. The paper then studies the role of data coverage. In particular, a two-stage mixed sampling strategy is proposed to achieve sample complexity with only additive dependence on the policy coverage coefficient if the data coverage is sufficient, while previous results often depend on the coverage coefficient multiplicatively. The paper also provides a local policy coverage coefficient and derives sample complexity which has multiplicative dependence on this weaker notion than global policy coverage. Numerical experiments are provided to support the theories.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is written clearly and the main results are well delivered.\\n\\n2. The paper establishes an integrated theory that provides both lower and matching upper bounds for CB/RLHF sample complexity. The derivations are solid. \\n\\n3. RLHF with KL regularization is predominant in LLM alignment. Studying this problem through a theoretical lens has sufficient significance in helping gather insights into designing more efficient RLHF methods.\", \"weaknesses\": \"1. I have concerns about the coverage assumptions in the paper and their relation to the practical use case of RLHF in LLM alignment. The policy coverage (Definition 2.7 and 2.8) is assumed for reference policy $\\\\pi_0$. In RLHF, such reference policy is typically a fine-tuned LLM that can have extremely low if not zero probability for some actions (e.g. nonsense responses). 
In this case, the global policy coverage coefficient can blow up to infinity, and the local policy coverage coefficient can be large (as the KL constraint is in expectation). The current paper only derives additive dependence results for global policy coverage, which can be vacuous if the global coefficient is infinity. On the other hand, the dependence on the local coefficient is still multiplicative (as discussed in Section 3.4), which can be extremely large.\\n\\n2. I also have concerns regarding the sample complexity upper bounds. The paper claims to first study the effect of KL regularization in improving the sample complexity for policy optimization from $O(1/\\\\epsilon^2)$ to $O(1/\\\\epsilon)$. However, such $O(1/\\\\epsilon)$ sample complexity result already exists for general strongly convex regularizers [1], which include KL regularization as a special case since the reference policy is assumed to have sufficient coverage. Hence it is likely that the upper bound for CB is already known (given that CB is a special case of MDP). For RLHF, its difference from CB mainly comes from the additional reward learning step, so I expect there could be more explanation on why $O(1/\\\\epsilon)$ samples are sufficient for reward learning from preference data. However, the current version of the paper seems to lack such comparisons/remarks, which in my opinion are necessary for understanding the mechanism of RLHF (just as strong convexity of KL divergence for CB). For baselines, previous literature suggests that reward learning takes $O(1/\\\\epsilon^2)$ samples. \\n\\n[1] Lan, Guanghui. \\\"Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes.\\\" Mathematical programming 198, no. 1 (2023): 1059-1106.\\n\\n[2] Zhu, Banghua, Michael Jordan, and Jiantao Jiao. 
\\\"Principled reinforcement learning with human feedback from pairwise or k-wise comparisons.\\\" In International Conference on Machine Learning, pp. 43037-43067. PMLR, 2023.\", \"questions\": \"1. In Definition 2.7 (and 2.8), is the sup taken over $x\\\\in\\\\mathrm{supp}(d_0)$? The current notation is a bit confusing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 74fV\", \"comment\": \"Thanks for your constructive comments!\\n\\n**Q1.** The paper might benefit from some polishing. For instance, the definitions of some key terms are not rigorous (see questions below). In addition, the proof may lack rigor. For instance, line 320 \\u2013 321 uses Taylor expansion, and thus, if my understanding is correct, the equality (and the following inequality) does not hold.\\n\\n**A1.** Thanks for your advice. Are you referring to 391-393 in Proof Sketch? According to our detailed calculation in 1113 - 1137 the equality and inequality do hold.\\n\\n**Q2.** In definition 2.7, line 219 \\u2013 221, and definition 2.8, line 231 \\u2013 233, what does ``x sampled from d_0\\u2019\\u2019 mean in the sup?\\n\\n**A2.** Thanks for pointing it out. They are typos. The subscription should be $x \\\\in \\\\text{supp}(d_0)$.\\n\\n**Q3.** Can the authors provide more intuition why the number of samples needed are different in the first and the second stages of the algorithm, as presented in Theorem 3.3?\\n\\n**A3.** \\nAccording to our description in Section 3.2, by iteratively improving the data quality, the first stage is to establish a coarse estimate $\\\\hat \\\\theta_0$ of the reward function over a broad range of contexts. This requires enough diversity in the sampled data to capture global properties of the reward function. 
The second stage is to collect data that is more aligned with the optimal policy distribution, enabling fine-tuning of the reward function estimate so that the output policy is $\\\\epsilon$-optimal.\\nTherefore, they need different numbers of samples due to the different purposes.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the response.\\n\\n> We would like to clarify that this line of work for KL regularization is completely distinct from ours, because these works focus on the planning where the transition dynamics and reward model are known and focus on the pure policy optimization setting without exploration, while our work considers the statistical sample complexity where the underlying model is unknown and studies the statistical property of the learning problem.\\n\\nIn my understanding, stochastic policy optimization (e.g. the one in [1]) does not assume full knowledge about the transition dynamics and reward model and *considers* statistical sample complexity, and hence the result is *not irrelevant* to your work. In each iteration of stochastic optimization, the algorithm first collects sufficient samples to get an estimate of the policy gradient, which is the Q function in MDP and the reward function in CB (horizon=1 MDP). This corresponds to your sampling phase in Algorithm 1. The algorithm then takes a step of stochastic gradient (mirror) descent towards the estimated direction, and the result corresponds to the planning oracle you call in Algorithm 1. The final statistical sample complexity is the number of samples used for all stochastic gradient estimations combined, which is shown to be $O(1/\\\\epsilon)$ given the strongly convex regularizer. To me, Algorithm 1 seems like manually computing the first two iterations of stochastic policy mirror descent for the special case of CB.
Therefore, I am concerned that the result in this submission is implied by the more general one in [1], and I think it would be helpful if the authors could discuss this.\"}", "{\"summary\": \"The paper provides a novel lower bound of Omega(1/epsilon) for the sampling complexity of finding epsilon-suboptimal solutions in KL-regularized contextual bandit problems.\\n\\nThe paper then models the online KL-regularized RLHF problem as the KL-regularized contextual bandit problem and proposes a two-stage sampling algorithm. Using the strong convexity of the KL-regularization, the paper shows that the algorithm has sampling complexity O(1/epsilon), with an additive term that depends on the coverage coefficient of the reference policy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides sharper analysis of the sampling complexity of KL-regularized contextual bandit problems, and provides novel results on the dependency of the sampling complexity on the data coverage.\", \"weaknesses\": \"The paper might benefit from some polishing. For instance, the definitions of some key terms are not rigorous (see questions below). In addition, the proof may lack rigor. 
For instance, line 320 \\u2013 321 uses Taylor expansion, and thus, if my understanding is correct, the equality (and the following inequality) does not hold.\", \"questions\": [\"In definition 2.7, line 219 \\u2013 221, and definition 2.8, line 231 \\u2013 233, what does ``x sampled from d_0\\u2019\\u2019 mean in the sup?\", \"Is there any benefit of using more than 2-stages of sampling?\", \"Can the authors provide more intuition why the number of samples needed are different in the first and the second stages of the algorithm, as presented in Theorem 3.3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Thanks to the reviewer for their response. While the newly proposed coverage condition is tighter than the previous one, it still seems to not capture the correct dependence. For example, assume we have some $\\\\theta, \\\\theta', \\\\tilde{\\\\theta}, \\\\tilde{\\\\theta}'$ such that for all $\\\\bar{\\\\theta}, \\\\bar{\\\\theta}' \\\\in \\\\{ \\\\theta, \\\\theta', \\\\tilde{\\\\theta}, \\\\tilde{\\\\theta}' \\\\}$, we have $R(\\\\bar{\\\\theta},x,a) = R(\\\\bar{\\\\theta}',x,a)$ for all $(x,a)$ with $d_0(x) > 0$, and that for some $x'$ with $d_0(x') = 0$ (or arbitrarily small), we have that $R(\\\\theta,x',a) - R(\\\\theta',x',a) \\\\neq R(\\\\tilde{\\\\theta},x',a) - R(\\\\tilde{\\\\theta}',x',a)$. Then in this case the denominator for both sets $(\\\\theta,\\\\theta')$ and $(\\\\tilde{\\\\theta}, \\\\tilde{\\\\theta}')$ will be 0, but there does not exist a $b(x')$ that will make the numerator 0 for both, so it follows that the coverage condition will be infinite. However, this should not be the case since $d(x') = 0$, so this $x'$ should not influence the complexity ideally.\\n\\nFor the linear example given here, I am not sure the analysis is correct. 
By the same argument as what I just gave above, I believe $D$ should depend on the minimum eigenvalue of $\\\\Sigma$, and furthermore the $b(x)$ given here depends on $\\\\theta$, which is not consistent with the given definition.\\n\\nIf the authors could comment on this, that would be helpful.\"}", "{\"title\": \"Response to Reviewer DLrA\", \"comment\": \"Thank you for the valuable comment. Our RL setting has connections with stochastic policy optimization, so we will cite the literature and discuss our relationship with their results in the revision. However, we would like to clarify that our results differ from the ones in [1] since the standard conditions and settings in the two lines of literature are distinct.\\n\\n1. The statistical sample complexity result in section 4 of [1] relies on conditions (4.1) - (4.3), but how to do the learning and control the estimation error is the main part of RL learning setting. Moreover, although those conditions are standard in stochastic optimization, they usually do not hold in RL, because first (4.1) assumes that the value function is unbiased, but RL algorithms usually make biased estimation to balance exploration and exploitation; second, the bounded infinity norm on the error for RL (4.2, 4.3) is also too strong since for a general infinite context space and finite training samples, it is unrealistic to achieve a uniform small error on all state-action pairs.\\n\\n2. [1] studies learning cases under a tabular setting, which is limited to the finite state-action space while our analysis applies to a general function space with a finite covering number. Besides, their estimation analysis is limited to the finite state-action space, and also they assume a generator (page 19) where the learner can start from any $(s,a)$ so that they can reuse their results for pure policy optimization setting in Section 4.\\n\\n3. 
We would like to clarify that our algorithm has nothing to do with policy gradient, not to mention 2-step mirror descent. We use the two steps to achieve an additive dependence on the coverage coefficient.\"}", "{\"title\": \"Response to Reviewer DLrA\", \"comment\": \"Thanks for your constructive comments!\\n\\n**Q1.** The global policy coverage coefficient can blow up to infinity, and the local policy coverage coefficient can be large (as the KL constraint is in expectation). The current paper only derives additive dependence results for global policy coverage, which can be vacuous if the global coefficient is infinity. On the other hand, the dependence on the local coefficient is still multiplicative (as discussed in Section 3.4), which can be extremely large.\\n\\n**A1.** We would like to emphasize that currently there is no result with an additive dependence on the local coverage coefficient. Thus, the additive dependence is an unexpected novel result, but we can only derive the additive relationship for the newly-defined data coverage condition (Definition 2.6). For the local coefficient, it is difficult to further improve the multiplicative relationship and can be left as future work. Note that even for the multiplicative result, we have a faster rate.\\n\\n**Q2.** I also have concerns regarding the sample complexity upper bounds. The paper claims to first study the effect of KL regularization in improving the sample complexity for policy optimization from $O(1 / \\\\epsilon^2)$ to $O(1 / \\\\epsilon)$. However, such $O(1 / \\\\epsilon)$ sample complexity result already exists for general strongly convex regularizers [1], which include KL regularization as a special case since the reference policy is assumed to have sufficient coverage. Hence it is likely that the upper bound for CB is already known (given that CB is a special case of MDP). 
For RLHF, its difference from CB mainly comes from the additional reward learning step, so I expect there could be more explanation on why $O(1 / \\\\epsilon)$ samples are sufficient for reward learning from preference data. However, the current version of the paper seems to lack such comparisons/remarks, which in my opinion are necessary for understanding the mechanism of RLHF (just as strong convexity of KL divergence for CB). For baselines, previous literature suggests that reward learning takes $O(1 / \\\\epsilon^2)$ samples.\\n\\n**A2.** We would like to clarify that this line of work for KL regularization is completely distinct from ours, because these works focus on the planning where the transition dynamics and reward model are known and focus on the pure policy optimization setting without exploration, while our work considers the statistical sample complexity where the underlying model is unknown and studies the statistical property of the learning problem. Hence, while they can achieve $O(1/t)$ rate in the planning setting, their methods cannot be applied to the learning setting.\\n\\n[1] Lan, Guanghui. \\\"Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes.\\\" Mathematical programming 198, no. 1 (2023): 1059-1106.\\n\\n[2] Zhu, Banghua, Michael Jordan, and Jiantao Jiao. \\\"Principled reinforcement learning with human feedback from pairwise or k-wise comparisons.\\\" In International Conference on Machine Learning, pp. 43037-43067. PMLR, 2023.\"}", "{\"title\": \"Response to authors comments\", \"comment\": \"I would like to thank the authors for their thoughtful responses and the effort they put into addressing my comments. While I believe they did a good job, I will maintain my original score.\"}", "{\"title\": \"Response to Reviewer 6sVe\", \"comment\": \"Thanks for your constructive comments!\\n\\n**Q1.** The main coverage condition (Definition 2.6) is extremely strong. 
This will scale at least with the number of contexts: if we take $\\\\theta=\\\\theta\\u2019$, and choose $b(x)$ for $x\\\\ne x_0$ and $b(x_0)=B$ where $x_0$ is the minimum probability context under $d_0$, then the expression given in Definition 2.6 will scale as $1/d_0(x_0)\\\\ge M$ for $M$ the number of contexts. Thus, the main sample complexity results of the paper (Theorem 3.3 and Theorem 4.4) really scale with the size of the context space. In general, it is not acceptable to obtain a sample complexity scaling with the size of the context space, as this is typically extremely large, so these results are only meaningful asymptotically as $\\\\epsilon \\\\rightarrow 0$.\\n\\n**A1.** Thank you for pointing this out. This is actually a loose point in our definition. Without changing the analysis, we can redefine the coverage condition as \\n$$\\n \\\\exists b \\\\quad s.t.\\\\ \\\\sup_{\\\\theta, \\\\theta' \\\\in \\\\Theta} \\\\frac{|R(\\\\theta', x, a) - R(\\\\theta, x, a) - b(x)|^2}{\\\\mathbb{E}\\\\_{x'\\\\sim d_0}\\\\mathrm{Var}\\\\_{a' \\\\sim \\\\pi_0(\\\\cdot | x')} [R(\\\\theta', x', a') - R(\\\\theta, x', a')]} \\\\le D^2.\\n$$\\nThen, the situation you mentioned will not happen.\\n\\n**Q2.** Furthermore, the result is also not tight in the regime where $\\\\eta$ is very large.\\n\\n**A2.** In the case you are referring to, $\\\\eta$ should be $\\\\Omega(1/ \\\\epsilon)$, which is far from realistic if we are using KL regularization. Thus, a large $\\\\eta$ is not the focus of our framework. Additionally, this problem has been solved by the PAC-Bayes literature, which suffers from similar issues when analyzing KL-regularization.\\n\\n**Q3.** The statement on Line 294 that $D^2\\\\le C_{GL}$ is not correct as a result of this ($C_{GL}$ in general will not scale with context size).\\n\\n**A3.** After correcting our definition for data coverage (See A1.), $D$ will not scale with the context size. Thanks for the constructive comment.
We will remove the claim that $D^2\\\\le C_{GL}$ in the revision.\\n\\n**Q4.** The statement in Theorem 3.1 and Theorem 4.1 that the coverage condition is $O(1)$ is also then incorrect\\u2014it should be $N_R(\\\\epsilon)$ I believe.\\n\\n**A4.** Thanks for your suggestion. We will remove the claim that the coverage condition is $O(1)$ in the revision, and the coverage coefficient is left as a term in the final bound.\\n\\n**Q5.** Is the scaling with $D^2$ really necessary in Theorem 3.3 and Theorem 4.4 or can this be reduced to $C_{GL}$?\\n\\n**A5.** Yes, it is necessary. According to our proof, the effectiveness of the intermediate policy highly relies on the data coverage coefficient. If the coverage condition is replaced by $C_{GL}$, the dependence will become multiplicative.\"}", "{\"title\": \"Response to Reviewer 1aDe\", \"comment\": \"Thanks for your insightful comments and positive feedback.\\n\\n**Q1.** Cannot see the tradeoff between $\\\\eta$ and the number of samples needed. Even from the experimental results, the lower $\\\\eta$ the better the performance.\\n\\n**A1.** Thanks for your insightful comments. In practice, with stronger KL regularization (i.e. lower $\\\\eta$), the learned policy will be constrained in a small interval close to the reference policy, which may limit the model's ability to fit the human feedback signal. Essentially, the reason is that the optimal solution for the regularized objective may not be good enough. However, in our paper, we consider the sample size required to approximately maximize the KL-regularized objective in both theoretical results and experimental results. Hence, it is beyond our scope how the KL-regularization coefficient affects the quality of the optimal solution.
As a result, there is no tradeoff on $\\\\eta$ from the perspective of maximizing the regularized objective.\\n\\nIn our theoretical analysis, since the objective we consider is the reward regularized by KL divergence, the considered optimal policy is also the optimal one within the $\\\\eta$-KL ball around $\\\\pi_0$. Hence, if we focus on the suboptimality with respect to such an optimal policy, the smaller $\\\\eta$ becomes, the smaller the policy class we consider, thus the sample complexity is smaller.\\n\\n**Q2.** The $O(1)$ coverage assumption is unclear if it is a reasonable one or not.\\n\\n**A2.** It is a reasonable assumption since first, it is a standard condition in reinforcement learning literature as discussed in Lines 270-277; second, since the reference policy is obtained from supervised fine-tuning, it is natural to assume that it has good coverage over the policy class. For details and the example, please refer to the 1st point of the official comment.\\n\\n**Q3.** In line 294 it says that \\u2018it is obvious that $D^2 \\\\le C_{GL}$\\u2019. Could you please provide a proof for that?\\n\\n**A3.** Thanks for the constructive comment. After reviewing the conditions, we realize that $D^2 \\\\le C_{GL}$ does not always hold. We will remove this claim in the revision.\\n\\n**Q4.** In line 374 is $s_i \\\\sim \\\\pi_0$ a typo, where does $s_i$ come from?\\n\\n**A4.** Thanks for pointing it out! It should be $a_i$.\"}", "{\"title\": \"Official comment for the coverage condition\", \"comment\": \"Thanks all the reviewers for their valuable comments. As multiple reviewers have asked about the coverage condition, we address this inquiry here.
We will add more discussion in the revision.\\n\\nWe correct a mistake in our data coverage condition in Definition 2.6 after checking the proof to ensure that it is reasonable to assume an $O(1)$ $D$: there exist $b:\\\\mathcal{X}\\\\rightarrow[-B,B]$ such that\\n$$\\nC(x,a)=\\\\sup\\\\_{\\\\theta, \\\\theta' \\\\in \\\\Theta} \\\\frac{|R(\\\\theta', x, a) - R(\\\\theta, x, a) - b(x)|^2}{\\\\mathbb{E}\\\\_{x'\\\\sim d_0}\\\\mathrm{Var}\\\\_{a' \\\\sim \\\\pi_0(\\\\cdot | x')} [R(\\\\theta', x', a') - R(\\\\theta, x', a')]} \\\\le D^2.\\n$$\\nThen, we show a linear reward function case as an example to explain this definition. If $R(\\\\theta,x,a)=\\\\theta^{\\\\top}\\\\phi(x,a)$, ($\\\\theta \\\\in \\\\mathbb{R}^d$), we define the covariance matrix \\n$$\\n\\\\Sigma = \\\\mathbb{E}\\\\_{x'\\\\sim d_0}\\\\mathbb{E}\\\\_{a\\\\sim\\\\pi_0}(\\\\phi(x,a)-\\\\mathbb{E}\\\\_{a'\\\\sim\\\\pi_0}\\\\phi(x,a')) (\\\\phi(x,a)-\\\\mathbb{E}\\\\_{a'\\\\sim\\\\pi_0}\\\\phi(x,a'))^{\\\\top}.\\n$$\\nThen, for any $b$ of the form $b(x) = \\\\theta^\\\\top\\\\nu(x)$, we have\\n$$\\nC(x,a)=\\\\sup\\\\_{\\\\theta, \\\\theta' \\\\in \\\\Theta} \\\\frac{|(\\\\theta\\u2019-\\\\theta)^{\\\\top}\\\\phi(x,a) - b(x)|^2}{(\\\\theta\\u2019-\\\\theta)^{\\\\top}\\\\Sigma(\\\\theta\\u2019-\\\\theta)} \\\\le \\\\|\\\\phi(x, a) - \\\\nu(x)\\\\|_{\\\\Sigma^{-1}}^2.\\n$$\\n\\nSet $b(x) = \\\\theta^\\\\top \\\\mathbb{E}_{a' \\\\sim \\\\pi_0} \\\\phi(x, a')$ Then we can show that there exists $\\\\pi_0$ with $D^2 = O(d)$ through G-optimal design.\"}", "{\"metareview\": \"This work obtains a tight bound on the sample complexity of RLHF with KL regularization where they show that the sample complexity scales with 1/epsilon instead of 1/epsilon^2. Unfortunately a couple of weaknesses were pointed out by the reviewers including the strength of the coverage condition, as well as several inconsistencies in the technical results. 
We encourage the authors to revise their promising manuscript to fix these issues.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers found some of the assumptions of the paper to be quite strong, as well as several inconsistencies and technical mistakes. We encourage the authors to revise their work addressing these comments.\"}", "{\"title\": \"response from reviewer 74fV\", \"comment\": \"Thank the authors for the clarifications and the additional comments! I've raised my score to 5.\"}", "{\"summary\": \"In this paper, the authors provide a new analysis of contextual bandits under KL regularization that achieves an improved sample complexity guarantee. Then, they study the RLHF problem where under coverage assumptions on the reference policy and they provide a tight algorithm under this assumption. In the end, they provide experimental results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) Improved sample complexity for contextual bandits under KL regularization with a novel analysis using the property of strong convexity due to the KL regularizer\\n\\n2) Lower bound for the contextual bandit problem under KL regularization that is tight with the upper for sufficiently small $\\\\epsilon$\\n\\n3) Lower bound for the RLHF problem with preference feedback\\n\\n4) Design of an algorithm for the RLHF problem with guarantees that match the lower bound.\", \"weaknesses\": \"1) Cannot see the tradeoff between $\\\\eta$ and the number of samples needed. Even from the experimental results, the lower $\\\\eta$ the better the performance.\\n\\n2) The O(1) coverage assumption is unclear if it is a reasonable one or not a provided way to check so.\", \"questions\": \"1) In line 294 it says that \\\"it is obvious that $D^2 \\\\leq C_{GL}$. 
Could you please provide a proof for that?\\n\\n2) In line 374 is $s_i \\\\sim \\\\pi_0$ a typo, where does $s_i$ come from?\\n\\n3) Can you provide a specific example where you compute the coverage of the reference policy and it is O(1)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6sVe\", \"comment\": \"Thank you very much for your timely response.\\n\\nFor your first question, let $\\\\mathcal{X}\\\\_1 = \\\\\\\\{x \\\\in \\\\mathcal{X}| \\\\exists a\\\\ s.t. R(\\\\tilde\\\\theta, x, a) \\\\neq R(\\\\tilde\\\\theta', x, a)\\\\\\\\}$ be the subset consisting of $x'$ you are referring to. If $d(\\\\mathcal{X}_1) = 0$, then conventionally $\\\\mathcal{X}_1$ should be excluded from $\\\\mathcal{X}$ since we can never encounter the samples in $\\\\mathcal{X}_1$, and thus it is impossible to differentiate $\\\\tilde\\\\theta$ and $\\\\tilde\\\\theta'$. You may argue that we can set $d(\\\\mathcal{X}_1)$ to an extremely small number. In that case, we could only learn $\\\\theta$ based on the samples with $x \\\\in \\\\mathcal{X}_1$. So, it is indeed a hard instance, and it is reasonable that the covering coefficient is large.\\n\\nFor the linear example, we apologize for not mentioning that $b$ could vary under different $\\\\theta, \\\\theta'$, and the formal definition should be as follows.\\n\\nFor any pair of $\\\\theta, \\\\theta'$, there exist $b:\\\\mathcal{X}\\\\rightarrow[-B,B]$ such that\\n$$\\n\\\\frac{|R(\\\\theta', x, a) - R(\\\\theta, x, a) - b(x)|^2}{\\\\mathbb{E}\\\\_{x'\\\\sim d_0}\\\\mathrm{Var}\\\\_{a' \\\\sim \\\\pi_0(\\\\cdot | x')} [R(\\\\theta', x', a') - R(\\\\theta, x', a')]} \\\\le D^2.\\n$$\\n\\nFor the G-optimal design technique, which can control the elliptical norm of a given set of feature vectors, please refer to Chapter 21.1 of [1].\\n\\n[1] T Lattimore, C Szepesv\\u00e1ri. 
Bandit Algorithms.\"}", "{\"summary\": \"This paper studies the RLHF problem for contextual bandits, and aims to obtain a tight sample complexity on the problem. They consider both the reward observation setting and preference observation setting, and provide upper and lower bounds for each. Interestingly, they are able to show that including the KL constraint in the problem allows them to obtain a sample complexity that scales as $1/\\\\epsilon$ instead of the more familiar $1/\\\\epsilon^2$.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"To my knowledge, this is the first work to obtain a tight bound on the sample complexity of RLHF with KL regularization. This is a commonly used setting in practice and as such it is important that we understand the sample complexity. It is interesting that one can obtain a $1/\\\\epsilon$ rate as compared to the $1/\\\\epsilon^2$ that might be expected.\", \"weaknesses\": [\"1. The main coverage condition (Definition 2.6) is extremely strong. This will scale at least with the number of contexts: if we take $\\\\theta = \\\\theta\\u2019$, and choose $b(x) = 0$ for $x \\\\neq x_0$ and $b(x_0) = B$ where $x_0$ is the minimum probability context under $d_0$, then the expression given in Definition 2.6 will scale as $1/d_0(x_0) \\\\ge M$ for $M$ the number of contexts. Thus, the main sample complexity results of the paper (Theorem 3.3 and Theorem 4.4) really scale with the size of the context space. In general it is not acceptable to obtain a sample complexity scaling with the size of the context space, as this is typically extremely large, so these results are only meaningful asymptotically as $\\\\epsilon \\\\rightarrow 0$.\", \"2. Furthermore, the result is also not tight in the regime where $\\\\eta$ is very large.\", \"3. The statement on Line 294 that $D^2 \\\\le C_{GL}$ is not correct as a result of this ($C_{GL}$ in general will not scale with context size).\", \"4. 
The statement in Theorem 3.1 and Theorem 4.3 that the coverage condition is $O(1)$ is also then incorrect\\u2014it should be $\\\\log N_{\\\\mathcal{R}}(\\\\epsilon)$ I believe.\", \"5. The problem setting could be clarified somewhat. In particular, it should be made more explicit that when a policy is $\\\\epsilon$-optimal, this is with respect to $Q(\\\\pi)$, the reward + KL objective, rather than just the reward. The latter is typically more standard for RL, so it should be made clear that the objective considered here is different.\", \"6. The writing could be improved. There are various unclear or poorly worded statements (the following list is not exhaustive\\u2014please go through the paper carefully and resolve other such issues):\", \"Line 59: \\u201cDPO suffers from a drop of chosen probability\\u201d. I am not sure what this means.\", \"Line 62: \\u201cthe learned model is easy to be hacked and become biased\\u201d. Grammatically incorrect, revise wording.\", \"Line 64: The sentence starting with \\u201cHence, the KL-regularization..\\u201d Is the first time KL regularization is mentioned. It seems like it needs to be introduced earlier for this sentence to read well.\", \"Line 283: \\u201cidentifically\\u201d is not a word.\", \"7. There are also several issues with informal technical statements being made that are not necessarily correct:\", \"Line 76: RLHF has been demonstrated to outperform offline methods because \\u201cit has further interactions with human or preference oracle\\u201d. It is not clear that this the reason (or what exactly is even meant by this sentence).\", \"Line 273: It is much more standard in modern offline RL to obtain bounds under single policy concentrability (ie only one policy is covered). Thus, the claim that global coverage is standard in offline RL is too strong.\", \"Definition 2.6: What is the $\\\\pi$ referred to here? It is not clear.\", \"Line 291: The supremum is over $x \\\\sim d_0$. What does this mean? 
That the sup is taken over all $x$ in the support of $d_0$? This should be clarified (the same notation is used elsewhere as well).\", \"Remark 3.2: The final sentence in this remark is not justified by Theorem 3.1. Simply showing a lower bound that is smaller than the lower bound for the standard contextual bandit does not imply that the true sample complexity is lower as the lower bound may just be loose\\u2014an upper bound is required to show this (which at this point in the paper has not been stated). Therefore, I would suggest removing this sentence.\"], \"questions\": \"1. Is the scaling with $D^2$ really necessary in Theorem 3.3 and Theorem 4.4 or can this be reduced to $C_{GL}$?\\n2. For $\\\\eta$ very large (corresponding to no regularization), one would hope to recover the standard complexity bounds for contextual bandits, but this is not the case in any of the upper bounds (all of which will continue to increase as $\\\\eta$ increases). I suspect a more refined analysis may allow one to obtain the minimum of the current complexity and the standard contextual bandit complexity. Could the authors comment on this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 74fV\", \"comment\": \"Thank you for raising your rating. We're glad that our rebuttal has addressed your questions. Please let us know if you still have any unaddressed concerns.\"}" ] }
5ZEbpBYGwH
COPER: Correlation-based Permutations for Multi-View Clustering
[ "Ran Eisenberg", "Jonathan Svirsky", "Ofir Lindenbaum" ]
Combining data from different sources can improve data analysis tasks such as clustering. However, most of the current multi-view clustering methods are limited to specific domains or rely on a suboptimal and computationally intensive two-stage process of representation learning and clustering. We propose an end-to-end deep learning-based multi-view clustering framework for general data types (such as images and tables). Our approach involves generating meaningful fused representations using a novel permutation-based canonical correlation objective. We provide a theoretical analysis showing how the learned embeddings approximate those obtained by supervised linear discriminant analysis (LDA). Cluster assignments are learned by identifying consistent pseudo-labels across multiple views. Additionally, we establish a theoretical bound on the error caused by incorrect pseudo-labels in the unsupervised representations compared to LDA. Extensive experiments on ten multi-view clustering benchmark datasets provide empirical evidence for the effectiveness of the proposed model.
[ "clustering", "canonical correlation analysis", "self supervision", "multiview" ]
Accept (Spotlight)
https://openreview.net/pdf?id=5ZEbpBYGwH
https://openreview.net/forum?id=5ZEbpBYGwH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uusMimV21X", "tatSMt19P2", "r0Gh7bq2E4", "lcyju3N7fm", "ggnznBiDmN", "c6ZKAdo2wo", "WLAnVOx0AW", "V0alVmUH4n", "OUb25ISH9E", "MSfAtVHQ7D", "L5dfJBwNEK", "J1KkMHAKls", "BFvoUcC5TW", "8vbQTy7aMs", "81w5u5z6hz", "77PG0dQE1K", "6uwwgpL8b5", "64UE5lLqtX", "0SflkczVsE" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1732135272823, 1732675419672, 1732135077094, 1732684219694, 1734613209846, 1730684336041, 1733193732191, 1732893727260, 1732135309637, 1732135138962, 1732520401506, 1732135389654, 1732134993909, 1737523775105, 1732520382719, 1730711792982, 1732520356214, 1730382116581, 1730363692285 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_bbo9" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_Yjr1" ], [ "ICLR.cc/2025/Conference/Submission6526/Area_Chair_9jqp" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_gjrx" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_gjrx" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_aFs1" ], [ "ICLR.cc/2025/Conference/Submission6526/Authors" ], [ "ICLR.cc/2025/Conference/Submission6526/Reviewer_bbo9" ], [ 
"ICLR.cc/2025/Conference/Submission6526/Reviewer_Yjr1" ] ], "structured_content_str": [ "{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for their thoughtful and detailed feedback. We are grateful for the acknowledgment of the strengths of our work, including the clarity of our motivation, the rigor of our theoretical analysis, the breadth of our comparative experiments, and the well-structured presentation of the paper. Below, we respond to all comments raised by the reviewer.\\n\\n\\n**W1 Alignment Pseudolabels**\\n\\nThank you for your feedback. As you indicated, label alignment across views in multi-view clustering presents challenges that could potentially limit the effectiveness of self-supervised learning methods.\\n\\nTo address potential issues with pseudo-label alignment, our framework uses a multi-view agreement mechanism to refine pseudo-labels and enhance their reliability. This step (elaborated in Section 4.4.2) reduces errors by focusing on consistent labels across views.\\nEven when some pseudo-labels are incorrect, our learned representation still improves clustering. As shown in Fig. 3(i)(a), despite some level of falsely annotated pseudo-labels (24% error), the Adjusted Rand Index still improves (purple line), demonstrating the robustness of our approach. Additionally, our theoretical analysis in Sec. 4.3 provides bounds that further support the method's resilience to pseudo-label errors.\\n\\n**W2 Technical innovation of our work**\\n\\nWhile specific individual components like correlation maximization and pseudo-labeling have appeared in prior work, COPER introduces the integration of these components along with several additional innovations. First, we present a novel framework for multi-view pseudo labeling, which includes multi-view agreement. This agreement is essential for facilitating within-cluster permutations in the subsequent steps. 
Additionally, although the relationship between Canonical Correlation Analysis (CCA) and Linear Discriminant Analysis (LDA) has been previously examined, it has not been applied to multi-view clustering frameworks. In addition to our proposed method for enabling within-cluster permutations, we leverage the relationship between CCA and LDA by providing an approximation analysis and deriving new error bounds for our learned representations, utilizing matrix perturbation theory. This supports the effectiveness of our approach.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the responses. They have addressed my concrens and I would like to raise my rating to \\\"accept\\\".\"}", "{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for the valuable feedback. We appreciate the recognition of our work's strengths, including the novel permutation-based canonical correlation objective and the theoretical analysis showing how the learned embeddings approximate those obtained by supervised linear discriminant analysis (LDA). Below, we respond to all comments and outline our modifications to address the raised issues.\\n\\n**W1 Updated multi-view applications**\\n\\nWe have updated the text to include motivating examples of multi-view clustering applied to more recent use cases. 
The revised text (page 1, lines 41-44) is:\\n\\\"This approach has great potential in applications like communication systems content delivery [1], community detection in social networks [2,3], cancer subtype identification in bioinformatics [4], and personalized genetic analysis through multi-modal clustering frameworks [5,6].\\\"\", \"updated_references\": \"**References**:\\n\\n[1] Miguel Angel V\\u00e1zquez and Ana I P\\u00e9rez-Neira, \\\"Multigraph Spectral Clustering for Joint Content Delivery and Scheduling in Beam-Free Satellite Communications,\\\" ICASSP 2020.\\n\\n[2] Zhao et al., \\\"Multi-view Tensor Graph Neural Networks Through Reinforced Aggregation,\\\" IEEE TKDE 2022.\\n\\n[3] Shi, X., Liang, C., and Wang, H., \\\"Multiview Robust Graph-based Clustering for Cancer Subtype Identification,\\\" IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2023.\\n\\n[4] Wang, B., et al., \\\"Multi-dimensional Data Integration for Personalized Analysis Using Random Walks,\\\" BMC Bioinformatics, 2023.\\n\\n[5] Li et al., \\\"netMUG: A Network-guided Multi-view Clustering Workflow for Dissecting Genetic and Facial Heterogeneity,\\\" Frontiers in Genetics, 2023.\\n\\n[6] Wen et al., \\\"Deep Multi-view Clustering with Contrastive Learning for Omics Data,\\\" Bioinformatics, 2024.\"}", "{\"comment\": \"Thanks for authors' rebuttal and it almost addresses all my concerns. Perhaps because of the different experimental environment, the results of many methods on some datasets are different from the original paper (e.g., CVCL on MSRC-v1 and Scene15, MVCAN on MNIST-USPS/DIGIT and RGB-D, ......). Although it is nearly impossible to achieve the exact same experimental conditions for all methods, we want to ensure that the experimental setup is as consistent as possible. 
Anyway, the most important point of the paper is not the experimental ACC but the innovation, so I tend to raise my score to ''accept''.\"}", "{\"metareview\": \"This paper introduces an end-to-end MVC approach that integrates CCA-based correlation maximization with self-supervised pseudo-labeling, achieving joint multi-view representation learning and clustering. Both experimental and theoretical analyses are provided to demonstrate the efficacy of the proposed method. Based on the positive feedback from three reviewers, I decide to accept the paper. However, as pointed out by Reviewer Yjr1, the authors need to explicitly discuss the sensitivity of the proposed method to hyperparameters and network architectures.\", \"additional_comments_on_reviewer_discussion\": \"After the authors' rebuttal, three reviewers raised their scores to accept. Reviewer aFs1 did not respond to the authors, but I think the authors' response can address the concerns. Thus I would like to accept this paper.\"}", "{\"summary\": \"This paper proposes a deep learning model for multi-view clustering framework, namely, COPER. The proposed model integrates clustering and representation tasks into an end-to-end framework, eliminating the need for a separate clustering step. Extensive experimental evaluation across various benchmark datasets validates the efficiency of the proposed algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper is well organized, and the proposed methodology is enlightening.\\n2.\\tThe motivation behind the paper is clear, and the theoretical analysis is complete.\\n3.\\tThe comparison experiments are comprehensive, encompassing datasets of varying sizes and multiple types of baseline methods.\", \"weaknesses\": \"1.\\tUnlike general methods, the proposed approach generates pseudo-labels for each view to enable self-supervised learning. 
However, in multi-view clustering, aligning the labels across views can pose challenges that may impact subsequent self-supervised learning.\\n2.\\tAlthough the paper includes theoretical analysis, the proposed method offers limited innovation. The correlation maximization loss has already been proposed, and generating pseudo-labels by estimating the probability matrix is also a common approach.\", \"questions\": \"As mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. My concerns have been resolved, so I recommend accepting this manuscript.\"}", "{\"title\": \"End of discussion period\", \"comment\": \"As the discussion period is nearing its end, we would like to check if there are any remaining concerns regarding our paper. In response to the reviewer\\u2019s feedback, we have implemented an evaluation of three additional methods and reorganized our results table to separate CCA-based and non-CCA-based approaches.\\n\\nWe hope these revisions address all concerns raised, and we would be grateful for any further feedback before the discussion concludes.\"}", "{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for their thoughtful and constructive feedback and for acknowledging the strengths of our work, particularly the effectiveness of our end-to-end multi-view clustering framework and the validation of our theoretical contributions through extensive experiments. We address the reviewer's specific comments and suggestions in detail below.\\n\\n**W1 Motivation**\\n\\nWe thank the reviewer for their valuable feedback. To clarify, our motivation stems from the limitations of existing multi-view clustering (MVC) methods, particularly those that adopt a two-stage process where representation learning is followed by clustering. 
This two-stage process can be suboptimal, as the representations learned are not directly optimized for clustering, often resulting in subpar performance. While we acknowledge that a few end-to-end methods have been proposed, they are often constrained by limited adaptability to various data types or require specific assumptions that restrict their generalizability.\\nOur work proposes an end-to-end approach that is distinct in leveraging self-supervised learning to improve canonical correlation analysis (CCA)-based clustering. Specifically, we incorporate a novel self-supervision scheme into an end-to-end clustering procedure, which can enhance any CCA-based method with better representations tailored for clustering. Unlike existing end-to-end approaches, our method introduces within-cluster permutations and multi-view pseudo-labeling, which allow for improved cluster separation and representation alignment, thereby addressing the inherent limitations of prior approaches. This motivation is grounded in the understanding that direct optimization of the representations for clustering tasks can lead to better generalization and more effective model performance across different data types, as evidenced by our extensive empirical results.\\n\\n**W2 Additional baselines**\\n\\nThank you for suggesting this end-to-end approach. In response, we introduced an evaluation of OPMC [1], L0-CCA [2], and MVCAN [3] in the revised version (see Tables 2 and 6). OPMC performs well against other baselines and outperforms COPER in 2 out of 10 datasets regarding accuracy (ACC). However, COPER still surpasses OPMC, L0-CCA, and MVCAN in most metrics and across most datasets.\\n\\n**W3 Parameter Study**\\n\\nThank you for your suggestion. We provide a sensitivity analysis for $\\\\lambda$ value in Appendix E.3 and extend it with an additional analysis of the batch size hyperparameter. 
We train our model on the MSRCv1 dataset with batch sizes [128, 256, 512, 1024] five times for each batch size, each time with a different random initialization seed. Then, we measure the metrics ACC, ARI, and NMI and put the results on the box-plot chart (E.2 Figure 6). \\n\\n**References**:\\n\\n[1] Liu J, et al. One-pass Multi-view Clustering for Large-scale Data, ICCV 2021.\\n\\n[2] Lindenbaum, O., Salhov, M., Averbuch, A., & Kluger, Y. (2021). L0-sparse canonical correlation analysis. In International Conference on Learning Representations.\\n\\n[3] Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios (CVPR 2024)\"}", "{\"title\": \"Authors' response cont.\", \"comment\": \"**W2/W3/W4 Baselines**\\n\\nWe appreciate the reviewer\\u2019s feedback regarding our choice of baselines and the suggestions for expanding and categorizing the methods. ICMVC [7] and RMCNC [8] were chosen since they were recently published and because of their demonstrated effectiveness and relevance in various multi-view clustering scenarios beyond their specialized design focuses.\\nICMVC [7] was originally designed to manage incomplete multi-view data. However, it includes robust components such as instance-level attention fusion and contrastive learning for view alignment, which remain highly effective even when all views are fully observed. Additionally, its weight-sharing pseudo-classifier allows for end-to-end representation and clustering, making it a strong competitor in our context. The evaluation results presented in Table 1 of the ICMVC paper [7] show excellent performance on standard datasets without any missing views, further supporting our choice to include it as a baseline.\\nSimilarly, RMCNC's core architecture [8] is designed to handle noisy correspondences effectively. 
It incorporates components such as a noise-tolerant contrastive learning framework and a unified probability computation strategy, which enhance its clustering performance across various data conditions. The evaluation results (refer to Table 2 in the original paper) demonstrate strong performance on noise-free datasets, highlighting its suitability as a comparison method. Since real-world multi-view datasets frequently contain noisy correspondences, RMCNC is a practical choice for our evaluation.\\n\\nFollowing the reviewer's suggestion, we have expanded our experiments to include additional baselines that cover a broader spectrum of multi-view clustering techniques [9, 10, 11]. As suggested, we have categorized our baselines into CCA-derived and non-CCA-derived methods (see Tables 2 and 6). This categorization highlights the distinct contributions of our method\\u2019s novel permutation-based CCA objective, clearly assessing its advantages over similar frameworks and alternative multi-view clustering paradigms.\\n\\n\\n\\nOur systematic experimental evaluation involves ten diverse datasets (2\\u20136 views each). We benchmarked against ten baselines, conducted an ablation of four studies, and assessed scalability. Our method demonstrates superior performance, surpassing eight state-of-the-art deep models with up to 7% improvement in accuracy and consistently higher ARI and NMI scores across datasets. Robustness is ensured by averaging results over ten runs and reporting standard deviations, providing a comprehensive and reliable performance assessment.\\n\\n**References**:\\n\\n[7]. Chao, G., Jiang, Y., & Chu, D. (2024, March). Incomplete contrastive multi-view clustering with high-confidence guiding. In Proceedings of the AAAI Conference on Artificial Intelligence\\n\\n[8]. Sun, Y., Qin, Y., Li, Y., Peng, D., Peng, X., & Hu, P. (2024). Robust multi-view clustering with noisy correspondence. 
IEEE Transactions on Knowledge and Data Engineering.\\n\\n[9] Liu J, et al. One-pass Multi-view Clustering for Large-scale Data, ICCV 2021.\\n\\n[10] Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios (CVPR 2024)\\n\\n[11] Lindenbaum, O., Salhov, M., Averbuch, A., & Kluger, Y. (2021). L0-sparse canonical correlation analysis. In International Conference on Learning Representations.\\n\\n\\n**W5 Minor comments and corrections**\\n\\nWe appreciate your feedback regarding the necessary changes. We have reviewed the paper again and made adjustments based on your suggestions.\"}", "{\"comment\": \"Dear reviewer, we appreciate the time and effort you have dedicated to reviewing our paper. As the discussion period is limited, we kindly request that the reviewer evaluate the new information provided in the rebuttal. We are eager to improve our paper and resolve all the concerns raised by the reviewer. If there are any remaining concerns that have not been addressed, we would be happy to provide further explanations.\"}", "{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for thorough and constructive feedback on our submission. We appreciate the acknowledgment of the strengths of our approach, particularly the simplicity and effectiveness of the end-to-end framework, the theoretical proofs and visualizations, and the innovative integration of multi-view pseudo-labeling techniques. Below, we provide detailed responses to the reviewer's comments.\\n\\n**Q1 Uniform Model Architecture + Weaknesses**\\n\\nWe appreciate the feedback on uniform model architecture. To address this concern, we have conducted an experiment to demonstrate that the performance of our model is not too sensitive to batch size choices. Specifically, we trained our model with varying batch sizes in [128, 256,512,1024], with 5 random initializations for each batch size. 
Then, we compute the metrics ACC, ARI, and NMI for each setup and provide a box-plot chart in Appendix E.3. This experiment is an addition to the sensitivity analysis we provided for hyperparameter $\\\\lambda$ in E.3.\\n\\nVariations in model architecture are influenced by the complexity of the dataset, such as feature dimensions, and align with established practices in multi-view clustering. Our benchmark methods, which have different configurations tailored to specific datasets, include CVCL [3], ICMVC [5], RMCNC [4], and MVCAN [2].\\nFurthermore, although our gradual training strategy is straightforward, it can be automated using heuristic criteria once the loss converges to a predefined threshold. This approach reduces the need for manual adjustments. Other benchmark methods, such as CVCL [3] and RMCNC [4], also incorporate various gradual steps as part of their frameworks.\\n\\nFinally, in Appendix E2, we demonstrate how the Silhouette score, an unsupervised metric, could be used to tune model hyperparameters.\\n\\n**Q2 Compare with Latest Self-Supervised Approaches**\\n\\nWe appreciate the Reviewer\\u2019s suggestion to compare our approach with the latest self-supervised multi-view clustering methods and have included a comparison to MVCAN [1] and L0-DCCA [7], OPMC [6].\\nAs shown in Tables 2 and 6, COPER consistently outperforms MVCAN [1] and L0-DCCA [7], OPMC [6] across most datasets at ACC, ARI, and NMI, demonstrating its robustness and adaptability across diverse clustering scenarios. Although MVCAN achieves competitive performance in specific cases, such as Caltech101-20 for NMI, its results are more variable and generally lower than COPER, particularly on complex or noisy datasets like RBGD and VOC. \\n\\nWe tried to use the DeepMVC [2] Git repository but faced considerable challenges in reproducing the environment. 
We were unable to establish a working environment, and the package dependencies caused complex compatibility issues that resulted in initialization errors. We will work toward solving these issues by reaching out to the owners of this package via GitHub.\\n\\n**References**:\\n\\n[1] Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios (CVPR 2024)\\n\\n[2] On the Effects of Self-Supervision and Contrastive Alignment in Deep Multi-View Clustering (CVPR 2023)\\n\\n[3] Deep Multiview Clustering by Contrasting Cluster Assignments (ICCV 2023)\\n\\n[4] Sun, Y., Qin, Y., Li, Y., Peng, D., Peng, X., & Hu, P. (2024). Robust multi-view clustering with noisy correspondence. IEEE Transactions on Knowledge and Data Engineering.\\n\\n[5] Chao, G., Jiang, Y., & Chu, D. (2024, March). Incomplete contrastive multi-view clustering with high-confidence guiding. In Proceedings of the AAAI Conference on Artificial Intelligence.\\n\\n[6] Liu, J., et al. One-pass Multi-view Clustering for Large-scale Data. ICCV 2021.\\n\\n[7] Lindenbaum, O., Salhov, M., Averbuch, A., & Kluger, Y. (2021). L0-sparse canonical correlation analysis. In International Conference on Learning Representations.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their detailed feedback and insightful comments on our work. We appreciate their recognition of our permutation-based canonical correlation objective and its connection to supervised Linear Discriminant Analysis (LDA) projections. We are also grateful for their acknowledgment of our end-to-end framework, which effectively integrates clustering and representation learning to address traditional multi-view clustering challenges. 
Additionally, we value their positive remarks on the clarity and organization of our work, supported by our theoretical analysis and experimental validations.\\n\\nIn response to the reviews, we have carefully addressed the concerns raised to significantly enhance the manuscript. We have updated the background section based on the reviewers' feedback. Additionally, we've expanded the experimental comparisons to include more baseline methods, and we have categorized our approaches into CCA-derived and non-CCA-derived methods. We demonstrated a unified experimental setup with consistent architectural and parameter settings across datasets to showcase practical applicability and strengthen the robustness of our findings.\\n\\nBelow, we have provided a detailed response to each reviewer\\u2019s comments. We encourage the reviewers to refer to the revised manuscript (and Appendix), where we have highlighted all updates in color. We hope that these revisions satisfactorily address the reviewers\\u2019 questions and concerns. We greatly appreciate the feedback and are open to any further clarification or discussion.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear reviewer, we appreciate the time and effort you have dedicated to reviewing our paper. As the discussion period is limited, we kindly request that the reviewer evaluate the new information provided in the rebuttal. We are eager to improve our paper and resolve all the concerns raised by the reviewer. If there are any remaining concerns that have not been addressed, we would be happy to provide further explanations.\"}", "{\"summary\": \"The proposed approach involves generating meaningful fused representations using a novel permutation-based canonical correlation objective. 
Cluster assignments are learned by identifying consistent pseudo-labels across multiple views.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed approach involves a novel permutation-based canonical correlation objective. Simulanteously, the authors provide a theoretical analysis showing how the learned embeddings approximate those obtained by supervised linear discriminant analysis (LDA).\", \"weaknesses\": \"1\\uff09In Line 41, multi-view clustering holds immense potential in various applications, however, the methods mentioned are not updated to recent literature. Please update these references to ensure your work is current.\\n\\n2\\uff09The selected comparison methods are not enough. It is recommended to add some comparison methods, otherwise this may have a negative impact on the reliability of the experimental results. \\n\\n3\\uff09Some of the selected comparison methods are for incomplete multi-view data, and some are for noise correspondence. These methods have special properties and are not recommended as comparison methods.\\n\\n4\\uff09Considering that the proposed method is derived from the CCA objective, it is recommended to classify the compared methods into CCA-derived methods and other non-CCA-derived methods. This can directly demonstrate the effectiveness of the new elements introduced by the proposed method compared to previous similar frameworks and its competitiveness compared to other MVC paradigms.\\n\\n5) Check for all possible errors in the statement, e.g. the missing serial number for Figure in Line 244,\\u201cusing using within-cluster permutations\\u201din Line 292.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer, we appreciate the time and effort you have dedicated to reviewing our paper. 
As the discussion period is limited, we kindly request that the reviewer evaluate the new information provided in the rebuttal. We are eager to improve our paper and resolve all the concerns raised by the reviewer. If there are any remaining concerns that have not been addressed, we would be happy to provide further explanations.\"}", "{\"summary\": \"In literature, most of the current multi-view clustering methods are limited to specific domains or rely on a sub-optimal and computationally intensive two-stage process of representation learning and clustering. To address this issue, the authors propose an end-to-end deep learning-based multi-view clustering framework which is validated to be effective in experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The theory of LDA is analyzed.\\n2. The proposed method is validated to be effective in experiments.\", \"weaknesses\": \"1. The authors say that most of existing multi-view clustering methods are composed of two-stage process of representation learning and clustering, therefore they propose an end-to-end method. However, the authors also claim that a few end-to-end methods are proposed in literature. So, the motivation should be further clarified.\\n\\n2. The paper [1] is an classical end-to-end multi-view clustering method and should be compared or discussed.\\n\\n3. The parameter study should be included.\\n\\n[1] Liu J, et al. One-pass Multi-view Clustering for Large-scale Data, ICCV 2021.\", \"questions\": \"Please see Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed an end-to-end MVC method that leveraged CCA-based correlation maximization and self-supervised pseudo-labels to learn multi-view representations and clusters jointly. 
In the proposed method, the key components are the Sample Selection and Label Refinement-Agreement of multi-view pseudo-labelling, which borrow techniques from semi-supervised learning and yield a simple unsupervised MVC architecture. Then, the authors present experimental and theoretical results to support the effectiveness of their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents a deep MVC method with the advantages of simplicity and end-to-end training. The method leverages a cross-entropy loss and a CCA-like maximization loss to train the deep model. It turns the two-stage training schedule of previous methods into a step-by-step one, in which it conducts sample selection by high confidence and label correction by multi-view agreement.\\n\\n2. The paper is well-written and easy to follow, and it introduces theoretical proofs and visualizations to support its method.\\n\\n3. The illustration of cluster permutations is interesting, and it can enhance the embeddings learned by CCA (verified by an ablation study).\", \"weaknesses\": \"I have the following concerns and hope they are useful for improving this manuscript:\\n\\nFor the unsupervised clustering task, a robust model is needed when processing different datasets in practical scenarios. However, we can observe that the proposed method is sensitive to model architecture settings (Tables 7 & 9). For different datasets, the proposed method has different settings of model architecture and batch size. Moreover, in D.4 GRADUAL TRAINING, the training epochs in each step are determined manually. Since we have no labelled data to tune the model settings in practical applications, the proposed method might have limited practical application value.\", \"questions\": \"1. Please see above weakness. 
It is encouraged to use a uniform model architecture to test the clustering performance of the proposed method on different datasets, for a fair comparison and availability.\\n\\n2. It is encouraged to compare some latest self-supervised deep MVC approaches, e.g., On the effects of self-supervision and contrastive alignment in deep multi-view clustering [CVPR 2023], Investigating and Mitigating the Side Effects of Noisy Views for Self-Supervised Clustering Algorithms in Practical Multi-View Scenarios [CVPR 2024]...\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5YbuOTUFQ4
Learning Task Belief Similarity with Latent Dynamics for Meta-Reinforcement Learning
[ "Menglong Zhang", "Fuyuan Qian", "Quanying Liu" ]
Meta-reinforcement learning requires utilizing prior task distribution information obtained during exploration to rapidly adapt to unknown tasks. The efficiency of an agent's exploration hinges on accurately identifying the current task. Recent Bayes-Adaptive Deep RL approaches often rely on reconstructing the environment's reward signal, which is challenging in sparse reward settings, leading to suboptimal exploitation. Inspired by bisimulation metrics, which robustly extract behavioral similarity in continuous MDPs, we propose SimBelief—a novel meta-RL framework that measures the similarity of task beliefs in a Bayes-Adaptive MDP (BAMDP). SimBelief effectively extracts common features of similar task distributions, enabling efficient task identification and exploration in sparse reward environments. We introduce a latent task belief metric to learn the common structure of similar tasks and incorporate it into the real task belief. By learning the latent dynamics across task distributions, we connect shared latent task belief features with specific task features, facilitating rapid task identification and adaptation. Our method outperforms state-of-the-art baselines on sparse reward MuJoCo and panda-gym tasks.
[ "meta-reinforcement learning", "representation learning", "bisimulation" ]
Accept (Poster)
https://openreview.net/pdf?id=5YbuOTUFQ4
https://openreview.net/forum?id=5YbuOTUFQ4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wdTdnhH1tq", "qT55Mrv6sl", "pvDo5C7Kbr", "prLQen5kCI", "pZMSrNnehF", "hW2cqExZBE", "hINmfnTXsR", "YcGN9m3uaW", "XCereFztQt", "VmS669Yx67", "V0PqXNtyqs", "TVE7b2ToKB", "PVHWyYfg1A", "OK2TCub2fW", "FAotWqRfrL", "B9nzRAPChr", "9JWGTYSy8L", "7nCjADB2JX", "7gOC4eBwBg", "7WMJNoaIsb", "7JMxnqsKV8", "5pYoK52dpC", "4AB117oCbF", "37Re0nkSIZ", "0YNxYinUVg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732500928460, 1732490541097, 1732503166877, 1732604213389, 1732590120668, 1732628736690, 1733042096268, 1732560910529, 1730145751085, 1732505758046, 1732546319774, 1732500343707, 1730698458199, 1732523777703, 1730725238914, 1729788434873, 1732658227761, 1732680185177, 1732488994189, 1732546216441, 1733217981631, 1737523971167, 1734305599611, 1733156998842, 1732753139527 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_RXre" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_z5au" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_6WZh" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_cdUU" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9245/Reviewer_RXre" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_6WZh" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_6WZh" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_z5au" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_cdUU" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9245/Area_Chair_BB5p" ], [ "ICLR.cc/2025/Conference/Submission9245/Reviewer_cdUU" ], [ "ICLR.cc/2025/Conference/Submission9245/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**Questions**\\n\\n1. Why is the inverse dynamics required in Definition 2? Have you done any ablations for how it compares to the bisimulation metric?\\n\\nIn Appendix B, we prove the latent transfer bound of the latent task belief metric, showing that incorporating the inverse dynamics module broadens the range of task transfer distributions. In Appendix G.2, we test the impact of the algorithm with and without the inverse dynamics module on OOD task adaptation. The results demonstrate that the inverse dynamics module enhances the agent's reasoning ability for unknown tasks, further validating the effectiveness of Theorem 2.\\n\\n2. What are the offsets mentioned in section 3.3 and how are they trained?\\n\\nThe offsets in Section 3.3 adjust task similarity distributions to align the latent task belief with the Q-function during SAC training. They are trained indirectly, as the Q-function serves as a belief shift signal, refining the latent task belief through real-environment exploration.\\n\\n[1]Rakelly et al. Efficient off-policymeta-reinforcement learning via probabilistic context variables. 
ICML, 2019.\\n\\n[2] Zintgraf et al. VariBAD: A very good method for Bayes-adaptive deep RL via meta-learning. ICLR, 2020.\\n\\n[3] Zintgraf et al. Exploration in approximate hyper-state space for meta reinforcement learning. ICML, 2021.\\n\\n\\nWe thank you for your feedback and hope we've addressed your comments adequately. We are happy to answer any further questions and incorporate any further suggestions.\"}", "{\"comment\": \"Thank you for the detailed review and valuable feedback. We appreciate the opportunity to clarify and address your comments. Below are our responses to your identified weaknesses, questions, and suggestions.\\n\\n**Weaknesses**:\\n\\n1. \\\"The topic of online meta-RL is kind of old.\\\"\\n \\nWhile online meta-RL has been extensively studied, our contribution lies in leveraging the latent task belief metric inspired by bisimulation to address the limitations of existing methods in sparse reward settings. Unlike previous approaches, SimBelief utilizes the latent task belief metric to learn the common structure between tasks in the latent space, enabling the agent to exhibit robust and rapid adaptation even in environments with extremely sparse rewards. We propose a novel latent space learning framework. As shown in our experiments (Section 4, Figures 3\\u20135), SimBelief outperforms state-of-the-art methods, particularly in challenging sparse-reward environments, demonstrating its relevance and practical impact. How to improve information utilization efficiency and enhance the agent's online adaptation ability with limited online data remains a meaningful and significant research topic.\\n\\n2. 
\\\"The core of the proposed method is using reward and state transition functions, or a world model, to measure task similarity, which overlaps with existing works like VariBAD.\\\"\", \"simbelief_differentiates_itself_from_varibad_in_three_critical_ways\": \"- **Latent Task Belief Metric**: Instead of relying solely on posterior sampling, our metric integrates inverse dynamics to capture task similarities, which significantly enhances the reasoning and exploration capabilities of the agent.\\n - **Theoretical Contributions**: Our theoretical analysis (Theorems 1 and 2, Appendix B) establishes the conditions under which task similarities translate to policy transferability, providing guarantees for its effectiveness in online meta-RL.\\n - **Experimental Demonstration**: The superior performance on OOD tasks (Figures 3\\u20136) and the visualization of task beliefs (Figure 6) highlight the unique advantages of our approach in capturing global task structures.\\n\\n3. \\\"The baselines are kind of old, mainly 2019\\u20132021.\\\"\\n\\n We acknowledge the age of some baselines but emphasize their continued relevance in the field. For instance, VariBAD, HyperX, and MetaCURE remain widely used benchmarks in meta-RL and exhibit strong and robust performance. Additionally, our method introduces significant advancements over these baselines, as detailed in the experimental results (Section 4 and Appendix F). To further strengthen our comparison, we plan to include newer benchmarks in future work.\\n\\n**Questions**:\\n\\n 1.\\\"Key differences and advantages over VariBAD could be further elaborated.\\\"\\n\\nSimBelief\\u2019s key advantage lies in its latent task belief metric, which captures global task structures more effectively. This is particularly evident in sparse-reward environments, where VariBAD struggles to generalize (Section 4, Figures 3\\u20135). 
Additionally, SimBelief only reconstructs past trajectories and combines the specific task belief with the latent task belief, enhancing the agent's reasoning ability in unknown environments, making it more adaptable to real-world scenarios.\\n\\n 2. \\\"Some notations are confusing, e.g., $b_l$ and $b_r$, and why use two distributions?\\\"\\n\\nThank you for highlighting this. $b_l$ represents the latent task belief that captures inter-task similarities, while $b_r$ denotes the specific task belief focused on task-specific details. Their integration balances global structure understanding and local adaptation, as detailed in Sections 3.2\\u20133.3. We have revised the manuscript to clarify this distinction (Appendix F.2).\\n\\n 3. \\\"Can the proposed method scale to stochastic MDPs?\\\"\\n\\nYes, SimBelief is designed to handle stochastic MDPs by leveraging the BAMDP framework. Our latent task belief metric accounts for the stochastic nature of transitions, as demonstrated in both theoretical analysis (Appendix B) and experiments on stochastic environments like Walker-Rand-Params (Section 4).\\n\\nWe thank you for your feedback and hope we've addressed your comments adequately. We are happy to answer any further questions and incorporate any further suggestions.\"}", "{\"comment\": \"Thank you to the authors for their effort in addressing the concerns and clarifying the questions. The authors' response and revised manuscript addressed most of my concerns, and I have adjusted my score accordingly.\"}", "{\"comment\": \"I'd like to thank the authors for their detailed response. Below is my reply:\\n\\n- There are still several more new baselines for meta RL, especially for sparse reward, like [1] with code in the link https://github.com/zoharri/mamba. Comparing with these newer SOTA methods will make this paper more solid. 
I understand the discussion period is limited, so I hope a detailed comparision in the future revised versions.\\n\\n- As for the sparse reward environment, some work has discussed that zero-shot generalization (a special case of meta RL) is extremly difficult or even impossible when the reward is extremly sparse (Appendix B.7 in [2]). So I'm curious about that, as a general meta RL method (few-shot adaptation), how the proposed method can improve the performance in sparse reward environments, and how many trajectories in the adaption stage is required? More discussion will make this work more solid.\\n\\nOverall, most of my concerns has been addressed, I have raised my scores into 6.\", \"ref\": \"[1] MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning\\n\\n[2] Task Aware Dreamer for Task Generalization in Reinforcement Learning\"}", "{\"title\": \"Thanks for your detailed response.\", \"comment\": \"I realized that I might have misunderstood the proposed method a bit. Since both the proposed method and VariBAD used the world model (i.e., the reward and state transition functions) to infer task belief, I thought they were very similar. After I read the authors' response and revised manuscript carefully, I realized that there is a significant difference between them. From what I understand, VariBAD models task similarity **implicitly** in the latent space, while the proposed method does that **explicitly**. Also, the techniques for deriving task beliefs are also different.\\n\\nSo, the paper reveals good technique novelty, sound theoretical guarantee, and comprehensive experimental results. I will raise my score to 8. I am looking forward to future work on more recent topics, such as offline meta-RL, in-context RL.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for revisiting our manuscript with such care. 
We truly appreciate your acknowledgment of the distinction between our method and VariBAD, especially regarding the explicit modeling of task similarities in the latent space using the latent task belief metric and the different techniques for deriving task beliefs.\\n\\nWe are glad that our response and the revised manuscript clarified these differences and highlighted the novelty and theoretical contributions of our work. Your recognition of these aspects and your encouragement mean a lot to us.\\n\\nWe are also grateful for your constructive comments on exploring recent directions such as offline meta-RL and in-context RL. These are indeed exciting areas, and we plan to incorporate these perspectives into our future research.\\n\\nThank you once again for raising your score and for your support of our work. We hope to continue building on these ideas in future endeavors.\"}", "{\"title\": \"Thank you for your valuable time\", \"comment\": \"We greatly appreciate the time and effort you've taken to provide feedback and would be grateful for any additional comments or suggestions you may have. Your insights are incredibly valuable to us as we aim to improve the quality of our work. We kindly request to reconsider your score if your concerns are sufficiently addressed.\\n\\n\\nIf there is anything else you need from us or if you require further clarification, please do not hesitate to let us know. We would love to incorporate any further specific suggestions or concerns you have.\"}", "{\"comment\": \"5. **Discussion on Baseline Selection**\\n\\nWe appreciate your suggestion regarding the inclusion of relevant works (e.g., [1-3]). We fully understand and value the importance of incorporating more recent methods into the baseline comparisons. However, after our investigation, we found that the implementation code for [1][2] is not publicly available. Additionally, [3] does not belong to the category of context-based meta-RL methods. 
As a result, it is not feasible to include these methods as baselines for quantitative comparison. Nevertheless, we will add a discussion of these methods in the revised version and provide a detailed analysis of how they differ from our approach. In our paper, we compare against baselines such as VariBAD, PEARL, MetaCURE, and HyperX, all of which are highly influential works within the meta-RL field. Our method shows significant performance improvements over these baselines, especially in the context of out-of-distribution sparse reward task adaptation.\\n\\nTo evaluate the performance of the algorithm in extremely sparse reward scenarios, we chose to use panda-gym. Previous meta-RL algorithms have not used panda-gym. Its environments only provide a reward signal after successfully achieving the task goal. Additionally, the state space and action space in panda-gym are significantly larger, closely resembling real-world robotic arm simulations. This makes panda-gym more suitable for testing the algorithm's exploration efficiency and its adaptability in sparse reward settings.\\n\\n 6. **Why the proposed method can boost the agent's performance in sparse reward environments?**\\n\\nWe have added an explanation of the SimBelief rapid adaptation mechanism in Section 4, and provided a detailed analysis in Appendix F. \\u201cSimBelief, through the latent task belief,essentially learns the transfer relationships between knowledge across different tasks. In extremely sparse reward scenarios, once the agent succeeds in one task, this prior knowledge of success can be quickly propagated to other similar tasks via the latent task belief, enabling the agent to rapidly adapt to similar types of tasks. A more detailed discussion is provided in Appendix F.\\u201d\\n\\nWe hope these revisions address your concerns and clarify the contributions of our work. 
Thank you again for your constructive feedback, which has been invaluable in improving our submission.\"}", "{\"summary\": \"The authors propose a novel meta-RL method tackling the problem of efficient task identification in sparse reward settings, where methods that do reward reconstruction are not sufficient. In a context-based meta-RL framework, they propose inferring the latent task representation through a Gaussian mixture of a variational latent representation and a new task belief similarity latent representation the authors introduce. This task belief similarity latent is trained to model a bisimulation-inspired latent task belief metric between two tasks. They demonstrate modest improvements compared to prior methods in sparse reward, simulated locomotion and manipulation tasks and robustness to OOD task variations.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Introducing ideas of modeling task similarity to meta-RL is interesting and worth exploring\", \"Sound experimental setup and results\"], \"weaknesses\": [\"The biggest weakness is that the proposed method lacks motivation and reasoning about how they achieve the claims the authors make.\", \"Combining the variational task belief $z_r$ with their proposed task belief similarity $z_l$ through a Gaussian mixture. This does not combine both types of task representations, but instead samples from one or the other. Furthermore, $z_r$ already models dynamics information in order to reconstruct the trajectory, so it's unclear what information $z_l$ adds. This is also flawed because the task belief similarity is not modeled as a distribution (at least based on Section 3.2).\", \"There are a couple things in Section 3.3 that are mentioned briefly but not explained: using the Q-function to train offsets for the task similarity distributions and minimizing KL divergence to the variational task belief $z_r$ when predicting $z_l$. 
These details seem important to the method and the fact that the KL divergence is required to \\u201censure the agent does not confuse similar tasks\\u201d indicate that $z_l$ may not be as effective as the authors claim.\"], \"other_major_weaknesses_include\": [\"Clarity of writing. The method section, especially 3.2, was difficult to understand. The experiments section does not describe what the different tasks are in each environment, and Figures 4 and 5 are hard to interpret.\", \"Insufficient discussion of relevant related work, especially MetaCURE, which seems to tackle the exact problem of task ID with sparse rewards.\", \"Insufficient discussion of results, especially providing insight into why comparison methods may under or over perform.\", \"No ablations provided.\"], \"questions\": [\"Why is the inverse dynamics required in Definition 2? Have you done any ablations for how it compares to the bisimulation metric?\", \"What are the offsets mentioned in section 3.3 and how are they trained?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thorough and insightful review. We appreciate your feedback and suggestions, which have greatly helped us improve the quality of our work. Below, we address each of your comments and provide detailed responses.\\n\\n1. **Writing of proofs and typos**\\n\\nWe have revised the proofs to improve clarity and correctness, addressing typographical errors where the value function update should indeed be $V_n^\\\\pi(s) = R^\\\\pi(s, a) + \\\\gamma \\\\sum_{s' \\\\in S} P^\\\\pi(s' \\\\mid s, a) V_{n-1}^\\\\pi(s')$\\u200b. \\n\\n2. **In lines 719-720 $d^{n}$ here seems to be $d$** \\n\\nIncluding $n$ emphasizes that the metric $d^{n}$ evolves over iterations, reflecting how the differences in dynamics propagate through the value function updates. 
If the task dynamics differences stabilize after a certain number of steps, $d^{n}$\\u200b will converge, aligning with the final task similarity metric.\\n\\n3. **Why we can use the Lipschitz property of the value function to get the result?**\\n\\nThe Lipschitz property states that the value function is bounded by the differences in the underlying dynamics (rewards and transitions). Specifically, for two tasks $M_i$\\u200b and $M_j$\\u200b with their respective value functions $V_{M_i}$\\u200b and $V_{M_j}$\\u200b, the difference between the value functions can be bounded by: $|V^\\\\pi(s_i^+) - V^\\\\pi(s_j^+)| \\\\leq (\\\\text{differences in rewards}) + \\\\gamma (\\\\text{differences in transitions}).$ This means that a small change in rewards or transitions causes a proportionally small change in the value function, governed by the discount factor $\\\\gamma$. The Lipschitz property ensures that differences in rewards and transitions only have a limited impact on the value function due to the contraction property of the Bellman operator. This makes it possible to bound the difference between value functions $|V^\\\\pi(s_i^+) - V^\\\\pi(s_j^+)|$ using the reward and transition metrics. Without the Lipschitz property, there would be no guarantee that the differences between tasks ${M_i}$\\u200b and ${M_j}$ would lead to bounded differences in their value functions. \\n\\n4. **If you use the Lipschitz property of the value function, what is the Lipschitz constant?**\\n\\nThank you for your comments. I will provide a brief proof. 
First, we define the reward model difference, transition model difference, and inverse dynamic difference between tasks as $\\\\epsilon_R$, $\\\\epsilon_T$, and $\\\\epsilon_I$, respectively.\", \"value_function_difference\": \"$$\\n|V^\\u03c0(s_i^+) - V^\\u03c0(s_j^+)| = |R(s_i^+, a) - R(s_j^+, a)| + \\u03b3 | E_{s' \\u223c T_i(s_i^+, a)}[V^\\u03c0(s'^+)] - E_{s' \\u223c T_j(s_j^+, a)}[V^\\u03c0(s'^+)] |\\n$$\", \"the_expectation_term_over_the_transitions_can_be_rewritten\": \"$$\\n| E_{s' \\u223c T_i(s_i^+, a)}[V^\\u03c0(s'^+)] - E_{s' \\u223c T_j(s_j^+, a)}[V^\\u03c0(s'^+)] | \\u2264 || T_i(s_i^+, a) - T_j(s_j^+, a) ||_1 || V^\\u03c0 ||_\\u221e\\n$$\\n\\nwhere $\\\\| T_i - T_j \\\\|_1$ measures the difference between the transition distributions and $|| V^\\u03c0 ||_\\u221e$ bounds the value function.\\n\\n$$\\n|V^\\u03c0(s_i^+) - V^\\u03c0(s_j^+)| \\u2264 |R(s_i^+, a) - R(s_j^+, a)| + \\u03b3 ||T_i(s_i^+, a) - T_j(s_j^+, a)||_1 ||V^\\u03c0||_\\u221e\\n$$\\n\\nSince the value function is recursive, the Bellman operator propagates the differences at every step. Assuming the reward discrepancy is bounded by \\\\( \\\\epsilon_R \\\\) and the transition dynamics discrepancy by \\\\( \\\\epsilon_T \\\\), we can iteratively bound the value function differences:\\n\\n$$\\n|V^\\\\pi(s_i^+) - V^\\\\pi(s_j^+)| \\\\leq \\\\epsilon_R + \\\\gamma \\\\epsilon_T \\\\| V^\\\\pi \\\\|_\\\\infty.\\n$$\\n\\nThe value function difference propagates over multiple steps, scaled by $\\\\gamma$ at each iteration. 
Using the contraction property of the Bellman operator, the total difference converges geometrically:\n\n$$\n\\|V^\\pi(s_i^+) - V^\\pi(s_j^+)\\|_\\infty \\leq \\frac{\\epsilon_R + \\gamma \\epsilon_T R_m}{1 - \\gamma}\n$$\", \"include_inverse_dynamics\": \"$$\n\\|V^\\pi(s_i^+) - V^\\pi(s_j^+)\\|_\\infty \\leq \\frac{\\epsilon_R + \\gamma (\\epsilon_T + \\epsilon_I) R_m}{1 - \\gamma}\n$$\", \"the_lipschitz_constant_for_the_value_function_difference_is\": \"$$\nL = \\frac{\\epsilon_R + \\gamma (\\epsilon_T + \\epsilon_I) R_m}{1 - \\gamma}.\n$$\n\nFor Theorem 2, we have provided a more detailed proof in the revised manuscript.\"}
Combining $z_r$ with $z_l$ through Gaussian Mixture\n\nIn the paper, **we combine $b_r$ and $b_l$, rather than $z_r$ and $z_l$ (Equation 7).** $z_r$ and $z_l$ are sampled from $b_r$ and $b_l$, respectively. We utilize the **overall distribution characteristics** (including both the mean and variance) obtained from the Gaussian mixture of the specific task belief $b_r$ and the latent task belief $b_l$, without the need to sample from the mixed distribution. **$z_l$ represents the sample from latent task belief $b_l$ in the latent space and captures the similarity between different tasks.** (Sections 3.1, 3.2, Appendix F) All task distributions share a common latent dynamic space, with different tasks distinguished by different $z_l$, where $z_l$ contains the dynamic similarity information between any two tasks (Equation 8).\n\n3. KL divergence is required\n\nIn the early stages of training, the latent task belief enhances the agent's ability to identify tasks and explore efficiently in the real environment, providing a high-level understanding of the overall task distribution (Appendix F.2). However, for the algorithm to converge stably, it needs to incorporate finer-grained information about specific tasks, which is captured in the specific task belief $b_r$. Therefore, we minimize the discrepancy between the latent belief $b_l$ and the specific task belief $b_r$ when training $\\psi_l$.\n\n4. Using the Q-function to train offsets for the task similarity distributions\n\nDuring the modeling process in the latent space, the latent task belief cannot directly participate in the optimization of the Q-function, so as to preserve the accuracy of the latent dynamics. Instead, during SAC training, exploration occurs in the real environment, and the Q-function acts as a belief shift signal. This signal indirectly influences the latent task belief, causing the latent space to align with the optimization direction of SAC as a whole.\n\n5. 
Clarity of writing.\n\nWe have revised certain expressions in Section 3.2 of the manuscript. Additionally, we provide detailed descriptions and settings for each task in Appendix E. Figures 4 and 5 follow commonly used and effective analytical methods in meta-RL [1][2][3].\n\n6. Insufficient discussion of relevant related work, especially MetaCURE.\n\nWe provide a detailed discussion of the baselines and related works in Appendix D and Section 5. One drawback of MetaCURE compared to our approach is its reliance on task IDs during the training phase, which can impair the agent's reasoning ability during adaptation. As shown in the experimental results in Figures 3-6, MetaCURE underperforms SimBelief.\n\n7. Insufficient discussion of results, especially providing insight into why comparison methods may under- or over-perform.\n\nIn Section 3.4, we provide a theoretical analysis of the latent task belief's transferability across different tasks. Furthermore, in Appendix F, we offer a detailed explanation from the perspective of task belief representation, revealing the underlying mechanisms that enable the agent to achieve robust OOD generalization during both the training and adaptation phases.\n\n8. 
No ablations provided.\", \"we_provide_three_ablation_studies_in_appendix_g\": \"latent dynamics ablation, inverse dynamics ablation, and ablation on $w_r$ and $w_l$.\"}", "{\"summary\": \"This paper presents a novel framework called SimBelief, which effectively extracts common features of similar tasks using the Bisimulation metric for meta-RL tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper is generally well-constructed and well-motivated and provides some theoretical background on the proposed method.\\nIn addition, the paper has the potential to help the community understand the use of latent embedding for task generalization.\", \"weaknesses\": \"The paper includes many symbolic notations, which can confuse readers without strict and coherent representation. For example, in Fig 2, it seems $q_{\\\\phi}$ outputs h, but the notation of Eq. (6) or others in the manuscript, $q_{\\\\phi}$ outputs $z_r$.\\n\\nSimilarly, some of the notations are used before proper definition, which hinders the readers from fully understanding the contents.\\n\\nThe paper's reproducibility is questionable as it consists of various complex components for the framework.\", \"questions\": \"(1) Regarding readability: In Sec. 2.1., N is not properly defined before use. Inverse dynamics in Definition 2 or Eq.(3) is used before it is properly defined.\\n\\n(2) Some of the manuscript's functions are unclear. What is the output of inverse dynamics $I_{i}^{\\\\pi}(s_i^{+},s_i^{',+})$ ? \\nWhat is the input and output for $q_{\\\\phi}$, $\\\\psi_{l}$, $\\\\psi_{r}$ ?\\n\\n(3) Similarly, In Algorithm 1, it is unclear $b_{r}^i={q_{\\\\phi}(z_r^i|\\\\tau_{:t})}_{0:T-1}$ means. \\n\\n(4) What are the $L_{rec}$ and $L_{bisim}$ in Figure 2? If they are defined in the manuscript differently, please use the same notation.\\n\\n(5) How to determine a proper $w_r$? 
and how sensitive is the overall framework to it?\n\n(6) The proposed method is complex and contains various components, so presenting only a simple algorithm raises questions about reproducibility. Do the authors plan to release the code for the benefit of the RL community?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thanks for the detailed response, which has addressed most of my concerns. I still have two questions as follows.\", \"Regarding the difference from VariBAD, \\\"Latent Task Belief Metric: Instead of relying solely on posterior sampling, our metric integrates inverse dynamics to capture task similarities, which significantly enhances the reasoning and exploration capabilities of the agent.\\\" The metric contains three parts: the reward function, the forward dynamics, and the inverse dynamics. The proposed method uses the inverse dynamics additionally. Will including the inverse dynamics \\\"significantly\\\" improve the task inference capabilities? Or is there any empirical evidence from an ablation study? Intuitively, the three parts seem to be equally significant.\", \"The paper claims the \\\"significant\\\" performance gain in sparse-reward scenarios and OOD test tasks. As mentioned in the response \\\"Latent Task Belief Metric\\\", the difference from existing work is the inclusion of the inverse dynamics modeling. 
Does the significant performance gain come from the inverse dynamics modeling, since the existing work already includes the reward function and forward dynamics modeling?\"]}", "{\"summary\": \"The paper uses the bisimulation metrics to measure task similarity and learn the common structure across tasks for rapid task identification and adaptation in meta-RL.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method is well formulated and clearly presented.\", \"The effectiveness of the latent task belief metric is validated by a theoretical guarantee.\", \"Experiments demonstrate the superiority of the proposed method over strong baselines, especially the generalization capabilities to OOD testing tasks.\"], \"weaknesses\": [\"The topic of online meta-RL is kind of old.\", \"The core of the proposed method is using the reward and state transition functions $p(s',r|s,a)$, also called a world model, to measure task similarity in a latent space and hence infer task belief for context-based meta-RL. This kind of task inference has been investigated by existing works like VariBAD.\", \"The baselines are kind of old, mainly 2019-2021.\"], \"questions\": [\"No major flaws with the paper. The key difference and advantages over VariBAD could be further enhanced and elaborated, since both infer task beliefs using the world model.\", \"Some notations are not well explained and kind of confusing, e.g., $z_l$ and $z_r$ are confusing, and why use two distributions?\", \"The proposed method is motivated by the bisimulation metric, which is originally proposed for deterministic MDPs (Castro, 2020). Can the proposed method scale to stochastic MDPs?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work considers meta RL, especially sparse reward settings. 
This work proposes to utilize bisimulation metrics to extract behavioral similarity in continuous MDPs and proposes SimBelief, which can extract common features of similar task distributions. This work further theoretically validates the effectiveness of the latent task belief metric in BAMDPs. Experiments show the effectiveness of SimBelief, especially in sparse reward settings and o.o.d. settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"It is novel to include Bisimulation to measure the task similarity.\", \"Sparse reward is a hot topic in the meta RL community.\", \"Several experiments show that SimBelief performs better than several baselines, especially in sparse reward settings.\"], \"weaknesses\": \"- The writing of the proofs is poor, with several typos.\n\n(1) in lines 714-715, the value function update should be $V_{n+1} = ... V_n$\n\n(2) in lines 719-710, $d^n$ here seems to be $d$\n\n(3) in lines 733-739, why can we use the Lipschitz property of the value function to get the result? It seems to be the core of the proof, but the proof does not include a detailed discussion of this inequality.\n\n(4) If you use the Lipschitz property of the value function, what is the Lipschitz constant?\n\n(5) The proof of Theorem 2 is even poorer and needs to be thoroughly polished.\n\n- The baselines compared in the paper are somewhat old. The latest baselines in this paper are MetaCURE and HyperX, which were published in 2021. There are several much newer baselines for meta RL, especially for sparse reward or o.o.d. settings, like [1-3], which should be discussed and compared. Also, it seems the evaluation environments are designed by this paper; do these previous works use these environments?\n\n- Sparse reward is indeed an important setting in RL, but why can the proposed method boost the agent's performance in sparse reward environments? 
As the reward structure may be independent of the dynamic structure, \"extracting common features of similar task distributions\" may not benefit the extremely sparse reward settings. Assume that there are n tasks that share the same dynamics, and we can only get the reward at the terminal step; then there is no way to identify the current task, as it is independent of the dynamic structure / previous rewards. Although it is not my major concern (experiments show that the proposed method can perform better than baselines in sparse settings), it will make the paper more solid if there are theoretical analyses about why the proposed method can handle sparse reward settings.\", \"ref\": \"[1] Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks\n\n[2] Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning\n\n[3] Enhanced Meta Reinforcement Learning using Demonstrations in Sparse Reward Environments\n\n\n------\n\n**After rebuttal, most of my concerns have been addressed, especially about the theoretical proofs, and I have raised my score to 6.**\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Algorithms based on the BAMDP framework, such as VariBAD, HyperX, and SimBelief, are capable of achieving strong zero-shot generalization. From Figure 4 and Figure 14 in this paper, it can be observed that SimBelief achieves a high return within the first episode (i.e., a single complete trajectory), which is one of the advantages of the BAMDP framework. In contrast, posterior sampling-based algorithms like PEARL and MetaCURE require interacting with the environment for several episodes before adapting to it.\\n\\nOur proposed algorithm, SimBelief, learns the latent dynamics of different tasks and distinguishes tasks in the latent space. This approach significantly improves the agent's reasoning ability in unknown environments. In Appendix F.2, we visualize the correlations of specific task belief and latent task belief across different tasks. Specifically, **specific task belief** focuses more on local information between tasks, while **latent task belief** captures the global characteristics of task distributions. The agent can make relatively accurate inferences about the current unknown environment based on these two types of beliefs.\\n\\nThank you once again for your detailed comments and for raising your score. We are committed to addressing your suggestions in future iterations and enhancing the work's overall robustness and clarity.\"}", "{\"comment\": \"Thank you for the detailed response and revisions to the paper. I still have some concerns and questions about the method.\\n\\n1. Combining $b_r$ and $b_l$. \\nI still have my original concern that using a Gaussian mixture is not combining the information from both types of representations but just sampling from one or the other. \\n\\n\\n2. What is $b_l$'s objective function? From the paper, I thought it was Equation 5 so I was confused how you're getting a belief distribution.\\n\\n\\n3. KL divergence and Q-function offsets. 
\\nMy main concern is that these components are mentioned very briefly and not fully described at the end of the methods section and seem key to the policy performance. I am worried that the specific task belief may not actually be learned well and utilized in the way the authors claim if it requires so many adjustments.\\n\\n\\n4. KL divergence. \\nThe author's reasoning that the policy might benefit more from a high level task representation at the beginning of training is fair, but then using a KL minimization still does not make sense since it moves the specific task belief towards the latent task belief all the time. Also this objective may be conflicting with $b_l$'s training objective and causing it to model latent task belief instead.\\n\\n\\n5. What is the specific training objectives for the Q-function offsets? How critical are these offsets? How much do they change the beliefs?\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and insightful comments. Below, we provide a point-by-point response to the weaknesses, questions, and concerns raised in the review.\\n\\n**Weaknesses**\\n\\n **Symbolic Notations:** We acknowledge the concern about the complexity and lack of clarity in some symbolic notations. To address this, we have made the following improvements in the revised manuscript:\\n\\n - Added clear definitions of $q_{\\\\phi}$,$\\\\psi_l$,$\\\\psi_r$ and in Section 3.2 before their use, ensuring coherence and clarity in their roles and outputs.\\n - We consider $ q_\\\\phi $ as the forward reasoning process to infer the task belief $ z_r $, corresponding to$z_r \\\\sim \\\\psi_r(b_r \\\\mid h) q_\\\\phi(h \\\\mid \\\\tau_{:t}) $ in the algorithm.\\n \\n**Questions**\\n\\n1. **Regarding $N$ in Sec. 2.1:** In the revised manuscript, we explicitly define $N$ as the number of task episodes within a meta-episode. Additionally, we include a clarifying example to illustrate how $N$ fits into the problem formulation.\\n2. 
**Role and Output of Inverse Dynamics:** The inverse dynamics predicts the action $a$ required to transition from the current augmented state to the next augmented state. This component is crucial for learning task dynamics and reasoning in the latent space (Appendix G.2). Specific task belief: $z_r \\sim \\psi_r(b_r \\mid h) q_\\phi(h \\mid \\tau_{:t})$; latent task belief: $z_l \\sim \\psi_l(b_l \\mid h) q_\\phi(h \\mid \\tau_{:t})$.\n3. **Clarifying:** In Algorithm 1, $b_r^i=q_\\phi\\left(z_r^i \\mid \\tau_{1: T}\\right)$ refers to the specific task belief inferred by the context encoder using the trajectory $\\tau_{1: T}$. This trajectory comprises observed state-action-reward tuples collected during the task. We expanded the explanation in Section 3.3 to ensure clarity. In the latest version, we have modified it to $z_r \\sim \\psi_r(b_r \\mid h) q_\\phi(h \\mid \\tau_{:t})$.\n4. **Notations $L_{rec}$ and $L_{bisim}$ in Figure 2:** $L_{rec}$ denotes the reconstruction loss, which ensures accurate latent dynamics reconstruction, while $L_{bisim}$ represents the bisimulation loss, which captures task similarities in the latent space. These terms are now consistently defined in Section 3.2, with detailed descriptions in Appendix F.1.\n5. **Weight Sensitivity ($w_r$, $w_l$):** The weights determine the balance between specific and latent task beliefs in the Gaussian mixture model. We conducted a sensitivity analysis (Appendix G.3) and observed that $(w_r, w_l) = (0.5, 0.5)$ achieves the best trade-off between exploration efficiency and OOD task adaptation. These results are detailed in the revised manuscript.\n6. **Reproducibility:** We recognize the complexity of the proposed method and the need for clear reproducibility. To this end, we have included pseudocode for the SimBelief algorithm in Appendix C and provided details on the environment settings in Appendix E. 
Furthermore, we plan to release the source code upon acceptance to facilitate replication by the community.\\n\\nWe believe these revisions address the concerns raised and further strengthen the clarity and impact of our work. Thank you again for your constructive feedback, which has been instrumental in improving our submission.\"}", "{\"comment\": \"We thank the reviewer again for the time and effort of evaluating our paper.\\n\\n**Regarding the first question**: \\n\\nFirst, I would like to clarify the main difference between our approach and VariBAD. VariBAD focuses solely on reconstructing specific tasks without considering the relationships between tasks. In contrast, our method models dynamics in the latent space and leverages latent dynamics to learn the correlations between different tasks.\\n\\nSecond, the reason we include the inverse dynamics in our defined latent task belief metric is to address scenarios with extremely sparse reward signals. For example, in various tasks within panda-gym, a reward is only received upon the successful execution of the task. In such cases, it is crucial to model the task dynamics more accurately in the latent space to learn more effective task structures, thereby enabling the latent task belief to capture task similarity information more precisely. This motivation is the primary driver behind our design of the latent task belief metric. Additionally, we have conducted ablation experiments to verify the effectiveness of inverse dynamics for OOD task generalization. As shown in Appendix G.2, removing the inverse dynamics from the latent task belief metric leads to a decrease in adaptation performance.\\n\\n**Regarding the second question**: \\n\\nThe superiority of SimBelief compared to current methods lies in its ability to learn task similarity information in the latent dynamic space. 
By incorporating the inverse dynamics in the latent task belief metric, as demonstrated in the OOD task adaptation experiments in Appendix G.2, SimBelief achieves significantly better results. This aligns with Theorem 2, which shows that inverse dynamics can broaden the transfer range between tasks, validating our expectations. The inverse dynamics play a complementary role in enhancing the learning of latent dynamics.\\n\\nWe hope these revisions address your concerns and clarify the contributions of our work.\"}", "{\"comment\": \"We are glad we were able to alleviate your concerns. Thank you for replying promptly and for raising your score. Your thorough review and insightful comments have been instrumental in enhancing the quality of our paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"The authors propose a meta-reinforcement learning framework based on bisimulation metrics to compute similarity of task belief from a Bayesian perspective. They infer the latent task representation through a Gaussian mixture of a variational latent representation and a new task belief similarity latent representation that models a bisimulation-based task belief metric. They demonstrate some improvements over prior methods in sparse reward, simulated locomotion and manipulation tasks and show robustness to OOD task variations.\\n\\nReviewers found the contribution of using task similarity in meta-RL novel and interesting, and appreciated the theoretical background. However, the clarity of the writing can be improved, especially the notation. Because the framework is fairly complex, there is concern about reproducibility of the method. \\n\\nOverall, there were some weaknesses to the paper that were satisfactorily addressed in the rebuttal phase. 
The use of task belief similarity metrics for meta-RL and the corresponding theoretical analysis and good empirical results are a significant enough contribution to warrant acceptance.\", \"additional_comments_on_reviewer_discussion\": \"In the initial reviews, there were concerns about mistakes in the theoretical contributions raised by reviewer z5au. However, in the rebuttal phase they were satisfactorily addressed. Reviewer cdUU also had questions about various design choices in SimBelief, for which the authors designed additional experiments and presented results. In the end, reviewers unanimously agreed to accept this paper.\"}", "{\"comment\": \"Thank you for the clarifications and additional experiments, especially with the KL divergence and offset components of the method. I maintain my concerns about the mixture of the two latent representations and the overall complexity of the method, which were partially addressed with the authors' responses. I have adjusted my score accordingly.\"}", "{\"comment\": \"Thank you for your thoughtful and insightful comments. We greatly appreciate your feedback and have made several updates in the revised manuscript to address your concerns. I will provide a detailed answer to your question from the perspective of technical implementation details:\n\n**1. Combining $b_r$ and $b_l$:**\n\nAt the code level, we model $b_r$ and $b_l$ as mean and logvar (i.e., the output of $\\\\psi_r$ represents the mean and variance of $b_r$, and the output of $\\\\psi_l$ represents the mean and variance of $b_l$), which are used to represent these two Gaussian distributions, similar to the belief modeling approach in **VariBAD** [1]. We sample $z_r$ from $b_r$ to reconstruct $p_\\\\phi$. On the other hand, we sample $z_l$ from $b_l$, and after optimization with Equation 8, $b_l$ can capture latent task belief similarity information. 
Until now, we have been sampling from both Gaussian distributions (i.e., $b_r$ and $b_l$) to represent different types of information. **We then mix the two distributions in Equation 7, which, at the code level, is also represented as the mean and logvar of $b$. We do not need to perform any sampling operation on $b$; instead, we treat $b$ (including its mean and variance) as a whole**, combine it with $s$ to form an augmented state, and feed the augmented state as input to the policy.\n\n**2. What is $b_l$'s objective function?**\n\nAs shown in Figure 2, $b_l$ is the output of $\\\\psi_l$, and we optimize $\\\\psi_l$ using Equation 8. From the code perspective, the input to $\\\\psi_l$ is $h$ (historical information), and the output is the mean and variance of $b_l$.\n\n**3. KL divergence.**\n\nRegarding the use of KL divergence, I would like to clarify in detail that we use KL divergence to regularize the latent task belief $b_l$ toward the specific task belief $b_r$, not the other way around. The KL loss is used to regularize $b_l$, making it incorporate information from the specific task, which allows the agent to explore the latent space more purposefully toward the specific task (**Appendix F.1, Figure 8**). This approach is similar to the KL loss regularization used in **DreamerV2** [2].\n\nAs for why the KL loss continues to participate in the optimization of $b_l$ during the training phase, this depends on the training process of our algorithm. As shown in the pseudocode **(Appendix C)**, $b_l$ is optimized alongside SAC, while $b_r$ is optimized with VAE. For example, in one iteration, SAC is updated 2000 times, while VAE is updated 20 times; the entire training process consists of 1000 iterations. 
Since the optimizations of $b_l$ and $b_r$ are not synchronized, the KL loss needs to continuously participate in the optimization of $b_l$ during the training phase.\n\nAdditionally, the KL loss does not interfere with $b_l$'s ability to capture the global information of the task distribution (**Appendix F.2, Figure 9**).\n\n**4. Regarding the offset**\n\nWe directly use the $q$-loss from the SAC algorithm to optimize the offset. To address your concerns, we have added an ablation study on the offset in **Appendix G.4**.\n\nDuring the training phase, we apply the offset ($\\\\Delta \\\\mu, \\\\Delta \\\\sigma$) to the latent task belief **primarily to improve the stability of the algorithm's convergence in continuous control tasks** (e.g., Cheetah-Vel-Sparse). However, **this does not have a significant impact on the overall performance of the algorithm (Figure 13) and does not affect the learned latent task belief $b_l$ (Figure 9)**. Even without using the offset during training, SimBelief still outperforms other baselines.\n\n[1] Zintgraf et al., VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning. ICLR, 2020.\n\n[2] Hafner et al., Mastering Atari with Discrete World Models. ICLR, 2021.\n\nWe hope these revisions address your concerns and provide more clarity on the aspects you raised. Thank you once again for your valuable feedback, and we look forward to your continued suggestions.\"}
5YRw1m6GSz
Personalized Federated Learning via Tailored Lorentz Space
[ "Jiahong Liu", "Xinyu Fu", "Menglin Yang", "Weixi Zhang", "Rex Ying", "Irwin King" ]
Personalized Federated Learning (PFL) has gained attention for privacy-preserving training on heterogeneous data. However, existing methods fail to capture the unique inherent geometric properties across diverse datasets by assuming a unified Euclidean space for all data distributions. Drawing on hyperbolic geometry's ability to fit complex data properties, we present FlatLand, a novel personalized federated learning method that embeds different clients' data in tailored Lorentz space. FlatLand can directly tackle the challenge of heterogeneity through the personalized curvatures of their respective Lorentz model of hyperbolic geometry, which is manifested by the time-like dimension. Leveraging the Lorentz model properties, we further design a parameter decoupling strategy that enables direct server aggregation of common client information, with reduced heterogeneity interference and without the need for client-wise similarity estimation. To the best of our knowledge, this is the first attempt to incorporate Lorentz geometry into personalized federated learning. Empirical results on various federated graph learning tasks demonstrate that FlatLand achieves superior performance, particularly in low-dimensional settings.
[ "Personalized Federated Learning", "Hyperbolic Geometry" ]
Reject
https://openreview.net/pdf?id=5YRw1m6GSz
https://openreview.net/forum?id=5YRw1m6GSz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yP7I7FgJhX", "wYrrA1NcxY", "w7XjtiNrk8", "v8EmZdG7RS", "un9aNMPUGA", "toMT3NI25S", "sBLmwIMRc4", "qa9WrLZF6w", "oqWYSHvS7q", "nEealYQr0X", "lsliytRf1k", "lGKika1cUs", "jzNRFjhHsb", "iFSovioDE9", "ftVYRsCKau", "bLID9dutEb", "aHzWamxVDi", "UPDGzEQ9D5", "TYiatlhb58", "Py66z9xitw", "NfUIWfIXo2", "NVgmgbOmBR", "L8X9xaF803", "KuT5iHVJu6", "J93sw9Pvgd", "HjKRYYNgVE", "GwuGtxQWrp", "FwLjGtjim1", "Fcqf7AmdJC", "CXITF8Alpi", "B5qMyI8xat", "Ai2sDPYDOx", "70jUAQeLOb", "5vAo3B01f5", "3714PxPzkc", "2oazMrRtMt" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732506757904, 1730774565896, 1734722346808, 1733211772075, 1732243693881, 1733153260701, 1730973397706, 1732256137136, 1732885336796, 1732250342978, 1732191207585, 1732191607753, 1733063827996, 1732191812705, 1733146964839, 1731188267570, 1732190571262, 1729677973222, 1732191434490, 1732676377759, 1733209749807, 1733210303951, 1732178411923, 1733202038777, 1733063895637, 1733220775129, 1737524288848, 1732188915212, 1732243080127, 1730614830321, 1732529203409, 1732527169183, 1733063337493, 1732250463396, 1733063768640, 1732874259427 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_Wtgh" ], [ "ICLR.cc/2025/Conference/Submission13908/Area_Chair_FZou" ], 
[ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_KA7L" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_4tQH" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_nTmN" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_KA7L" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_Dc6W" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_Dc6W" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_KA7L" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Authors" ], [ "ICLR.cc/2025/Conference/Submission13908/Reviewer_4tQH" ] ], "structured_content_str": [ "{\"title\": \"Follow-up on Review Comments\", \"comment\": \"Dear Reviewers,\\n\\n&nbsp;\\n\\nThank you very much 
for your valuable feedback and comments. As the rebuttal period is ending, please don't hesitate to reach out if you have any further questions or concerns. We would also appreciate it if you could update your ratings if we have resolved your concerns appropriately.\\n\\n&nbsp;\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a new personalized federated learning approach called FlatLand. Unlike current methods that commonly operate in Euclidean space, FlatLand embeds client data in a Tailored Lorentz space to more effectively represent the clients\\u2019 data distributions. Building on this foundation, the authors introduce a parameter decoupling strategy to aggregate shared parameters efficiently. Extensive experiments demonstrate the effectiveness of FlatLand.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The idea of increasing dimensionality to embed client data in Lorentz space is novel.\\n3. Experiments are extensive.\", \"weaknesses\": \"1. Techniques that personalize client models by creating localized models or layers are highly relevant to this paper. However, the related work section includes somewhat outdated references. It would benefit from incorporating more recent approaches, such as FedCAC [1] and GPFL [2]. Additionally, the experiments lack comparisons with these types of methods.\\n\\n2. This paper primarily focuses on graph datasets. 
Can FlatLand effectively perform on commonly used benchmarks in PFL, such as CV datasets (e.g., CIFAR-100) and text datasets (e.g., AGNEWS)?\\n\\n[1] Bold but cautious: Unlocking the potential of personalized federated learning through cautiously aggressive collaboration, ICCV 2023\\n[2] Gpfl: Simultaneously learning global and personalized feature information for personalized federated learning, ICCV 2023\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"**(a) Summary of Scientific Claims and Findings:**\", \"the_innovations_proposed_by_the_authors_are\": [\"i) use fully lorentz neural networks instead of just the hyperbolic graph neural networks, and ii) use client-specific curvatures. Their justifications are\", \"Fully Lorentz Neural Networks are even better than HGNNs at modeling hierarchical data due to the exponential volume expansion in hyperbolic space, enabling efficient representation of scale-free graphs with fewer dimensions than Euclidean models.\", \"Different clients may need different curvatures since the levels of complexity/hierarchy can be heterogenous. So the authors propose to learn personalized client-specific curvatures.\", \"**(b) Strengths:**\", \"The idea of using client specific curvatures is well motivated and unique.\", \"The authors do a pretty convincing ablation study to demonstrate the effectiveness of their two innovations.\", \"**(c) Weaknesses and Missing Elements:**\", \"Fully Lorentz Neural Networks are not novel - this is a well known technique in the centralized setting (Chen et al. 2022).\", \"While the results are great, the setup is currently quite limited. The number of clients and datasets are quite small. The effect of model architectures, client sampling, etc. is not explored.\", \"Theoretical contributions are limited and lack depth. 
Most of the results are known or direct extensions.\", \"**(d) Decision and Key Reasons for Rejection:**\", \"The paper is rejected due to limited theoretical depth, lack of robust empirical validation across diverse datasets and realistic scenarios.\"], \"additional_comments_on_reviewer_discussion\": \"While the authors have made commendable efforts to address concerns during the rebuttal phase, key issues remain unresolved.\\n\\nThe theoretical contributions are limited, with the primary mathematical analysis (e.g., Corollary 1) being viewed as trivial or insufficiently impactful. The method\\u2019s theoretical motivation is noted but lacks rigorous exploration, such as convergence analysis or clear evidence linking Lorentz geometry to improved handling of heterogeneity across broader contexts. One reviewer expressed dissatisfaction with the explanation and validation of the parameter decoupling strategy, which is a central component of the proposed method. Specifically, the claim that decoupling separates heterogeneous and homogeneous information is not convincingly supported either theoretically or empirically.\\n\\n**Suggestions for improvement from the reviews**\\n\\n*Enhance Theoretical Depth*: Provide a more robust theoretical foundation, including convergence analysis, and explicitly demonstrate how Lorentz geometry uniquely models and mitigates client heterogeneity, i.e., what kinds of heterogeneity can be uniquely captured by tuning the curvature. \\n\\n*More diverse experiments*: Run larger real-world experiment setups, ideally on larger/harder datasets with a natural real-world split. Also explore the effect of the model size. Can the heterogeneity across the clients be simply resolved by using larger models?\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer Dc6W,\\n\\n&nbsp;\\n\\nThank you very much for your valuable feedback and suggestions. 
\\n\\nDue to time constraints, we acknowledge that experimenting on the MNIST dataset alone is not sufficient to fully validate the method in other domain datasets. These were the basic initial results. We will explore more and conduct more experiments further, and we have emphasized this point in the revised manuscript: \\\"*Note that hyperbolic space is not universally optimal for all data distributions \\u2014 some exhibit positive curvature \\u2014 highlighting the need to model complex data structures in mixed-curvature spaces.*\\\" \\n\\nThe core of this method was validated on graph datasets, demonstrating its effectiveness. In the future, we will explore its performance on more domain data and scenarios and design improved methods accordingly. As the development of hyperbolic spaces in image, text, and multimodal applications, we also believe that our idea could offer valuable insights for further design of personalized federated learning algorithms.\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe authors\"}", "{\"title\": \"Response (2/4)\", \"comment\": \"**C2: Regarding the experiments part**\\n\\n**R:** We appreciate the reviewer's comments on our empirical evaluation. We would like to clarify several important points:\\n\\n&nbsp;\\n\\n**1.The reason we focus on graph data:**\\n\\n- Our focus on graph data is theoretically motivated. Graphs inherently exhibit non-Euclidean structures and significant heterogeneity, as demonstrated in previous theoretical analyses [1-2].\\n- More importantly, there is a **theoretical correlation between the heterogeneity of graph distributions and hyperbolic curvature** [1]. In contrast, for other domains, such as images or text, hyperbolic geometry has been studied more thoroughly in graph data, where explicit complex structures of heterogeneity are present, serving as an excellent testbed for interpreting how hyperbolic geometry handles hierarchical relationships and heterogeneous structures. 
Therefore, it provides a clear theoretical foundation for validating our method's effectiveness in handling complex geometric properties.\\n\\n&nbsp;\\n\\n**2. Applicability to other domains and settings:**\\n\\nWe further conducted experiments with **an increased number of clients** (50 clients) in the Cora dataset, which represents a large client pool configuration of federated learning scenarios in graph datasets. The results demonstrate that our method maintains its effectiveness even with an expanded client base. The below table illustrates the performance comparison between FedAvg and FlatLand under **different participation rates** **on the Cora dataset with 50 clients**. (We added the results in Appendix C4 in the revised paper).\\n\\n\\n| Participate Rate | 0.1 | 0.3 | 0.5 | 0.7 | **1.0** |\\n| :--------------: | :---: | :---: | :---: | :---: | :-----: |\\n| FedAvg | 18.14 | 36.64 | 34.30 | 33.61 | 30.20 |\\n| **FlatLand** | **81.82** | **81.11** | **79.27** | **79.42** | **77.98** |\\n\\n\\n**The experimental results show that FlatLand exhibits remarkable robustness in a larger number of clients and across various participation rates,** even with only 10\\\\% client participation (5 clients). These findings confirm that FlatLand can maintain high performance even under low client participation scenarios, demonstrating its practical value in federated learning applications where full client participation may not always be feasible.\\n\\n&nbsp;\\n\\n\\nOur method can be readily transferred to other fields that use linear neural networks. In the following, we provide additional experiments on widely used **image dataset**, MNIST (also considering a larger number of clients). 
The results are promising compared with the strong baselines [3-5]:\\n\\n\\n| **Dataset** | **#Clients** | **FedAvg (\\\\%)** | **FedProx (\\\\%)** | **Ditto (\\\\%)** | **GPFL (\\\\%)** | **FedRep (\\\\%)** | **FedCAC (\\\\%)** | **FlatLand (\\\\%)** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| MNIST (Acc) | 20 | $87.86 \\\\pm 0.0816$ | $87.53 \\\\pm 0.0771$ | $97.85 \\\\pm 0.0191$ | $92.90 \\\\pm 0.0724$ | $98.14 \\\\pm 0.0196$ | $97.85 \\\\pm 0.0189$ | $\\\\mathbf{98.35} \\\\pm 0.0136$ |\\n| MNIST (AUC) | 20 | $97.77 \\\\pm 0.0149$ | $98.81 \\\\pm 0.0110$ | $99.92 \\\\pm 0.0012$ | $99.48 \\\\pm 0.0110$ | $99.85 \\\\pm 0.0196$ | $99.92 \\\\pm 0.0012$ | $\\\\mathbf{99.93} \\\\pm 0.0011$ |\\n| MNIST (Acc) | 100 | $86.14 \\\\pm 0.2066$ | $84.50 \\\\pm 0.1658$ | $96.45 \\\\pm 0.0415$ | $96.52 \\\\pm 0.0462$ | $96.54 \\\\pm 0.0750$ | $96.59 \\\\pm 0.0505$ | $\\\\mathbf{96.64} \\\\pm 0.0495$ |\\n| MNIST (AUC) | 100 | $96.57 \\\\pm 0.0508$ | $98.22 \\\\pm 0.0442$ | $99.78 \\\\pm 0.0047$ | $99.70 \\\\pm 0.0136$ | $99.67 \\\\pm 0.0190$ | $\\\\mathbf{99.81} \\\\pm 0.0012$ | $99.70 \\\\pm 0.0116$ |\\n\\n\\nWe also conduct additional experiments with varying client participation rates from [0.1-0.7] based on the above setting with 100 clients, and FlatLand still works in this setting. \\n\\n| Participation Rate | 0.1 | 0.3 | 0.5 | 0.7 |\\n| --- | --- | --- | --- | --- |\\n| FedAvg | 85.34 \\u00b1 0.08 | 87.25 \\u00b1 0.07 | 86.60 \\u00b1 0.08 | 87.22 \\u00b1 0.07 |\\n| FlatLand | 93.71 \\u00b1 0.18 | 94.40 \\u00b1 0.12 | 95.62 \\u00b1 0.11 | 96.12 \\u00b1 0.07 |\\n\\n&nbsp;\\n\\nIt shows that our method can also work and perform well without full client participation in each round. Note that our method is orthogonal and non-conflicting with methods that utilize partial strategies, allowing for complementary integration with such approaches. This is NOT the main focus of this paper.\\n\\n&nbsp;\\n\\n---\\n\\n**References:**\\n\\n[1] Krioukov, Dmitri, et al. 
\\\"Hyperbolic geometry of complex networks.\\\" Physical Review E\\u2014Statistical, Nonlinear, and Soft Matter Physics 82.3 (2010): 036106.\\n\\n[2] SINCERE: Sequential Interaction Networks representation learning on Co-Evolving RiEmannian manifolds, WebConf 2023\\n\\n[3] Exploiting shared representations for personalized federated learning, ICML 2021\\n\\n[4] Bold but cautious: Unlocking the potential of personalized federated learning through cautiously aggressive collaboration, ICCV 2023\\n\\n[5] Gpfl: Simultaneously learning global and personalized feature information for personalized federated learning, ICCV 2023\"}", "{\"title\": \"Summary of Rebuttal Period\", \"comment\": \"I would like to thank the authors for their answers to my questions as well as for providing additional work and experiments over the course of the rebuttal period. Unfortunately, I am unable to increase my score and cannot recommend that the paper be accepted due to what I believe to be major outstanding problems with the current submission. I expand on these in more detail below, as a summary they are:\\n\\n- Lack of theoretical contributions that show the method performs well or captures heterogeneity as claimed.\\n- Lack of a broad enough experimental evaluation (I remain unconvinced that the method performs well in any other setting than graph datasets with small numbers of clients).\\n- Proposed parameter decoupling does not achieve the separation of heterogeneous and homogeneous parts of the data as the authors claim.\\n\\n**Theory**\\n\\nI am still unconvinced that there is any meaningful theoretical contribution. The authors explanation for why Corollary 1 is necessary does not really make sense to me: LT(x, M) is designed by prior work so that the output always lies in Lorentz space (for any M). It doesn\\u2019t matter that M is obtained by federated averaging. 
The authors themselves note that they do not claim the theory to be a major contribution.\\n\\n**Experiments**\", \"there_are_two_major_outstanding_issues_for_me_within_the_experimental_evaluation\": \"1. I believe that as presented the method works poorly in cross-device federated learning. \\n\\nSetting aside the fact that the method requires clients to maintain internal states throughout the optimization (which is already undesirable for cross device FL), the most critical reason why the method is poorly suited to cross-device FL is that I believe it is badly impacted by partial client participation. This is because the personalized portions of the parameters are not updated if a client does not participate. So for example if a client that has not yet participated in training, joins at a later stage (something that is perfectly normal to happen in cross-device FL), they will start with randomly initialized parameters in many parts of their network, while other clients may hold near optimal parameters. This is likely to cause issues with highly heterogeneous updates. \\n\\nThe authors were unable to provide convincing evidence that this does not occur. In fact the evidence that they provided during the rebuttal made me more convinced this is an issue. For instance in the partial client participation table for MNIST their method drops about 5.5% in accuracy moving to 0.1 participation, while FedAvg drops less than 1%. The authors did not provide the results on additional baselines, I find it likely that they perform much better than Flatland in this setting. Also I would like to emphasize that 100 clients on MNIST is a very easy FL setting. 
This drop will be much more severe on more challenging federated datasets with much larger numbers of clients.\\n\\nContrary to what the authors claim, being able to handle partial client participation is absolutely crucial if your method is to work in a cross device setting (potentially millions of devices with very low participation rates).\\n\\n2. I do not see any convincing evidence that the method works well on data modalities other than graph datasets.\\n\\nI thank the authors for providing additional experiments on MNIST but I feel this is not enough to support the claim that the method works well on non-graphical data. Put simply, MNIST with 100 clients is not a challenging or realistic enough FL setting to claim the method performs well on image data, which is part of what the authors are claiming. \\n\\nCombining these two issues together, the paper is only able to demonstrate empirical performance in a cross-silo (few clients, full client participation) graph dataset setting which, given the lack of theoretical contribution, is not enough in my opinion.\\n\\n**Method**\\n\\nThe primary methodological contribution of the method is the parameter decoupling scheme. However, the reason for this decoupling as stated by the authors still doesn\\u2019t make sense to me. As shown in equation (4), the space-like dimensions of the output of the first layer are dependent on the heterogeneous portion of the input $x_t$. These will then be acted on by the shared parameters of the network from layer 2 onward. Likewise, the time dimension of the output is also dependent on the space dimension of the input. 
So in a network with more than a single layer, the proposed parameter decoupling scheme does not do what the authors state it is intended to do: namely that the personalized parameters handle the heterogeneity ($x_t$) and the shared parameters handle the homogeneous part of the data.\"}", "{\"summary\": \"This paper studies the federated graph learning problem, and proposes to model data heterogeneity across different clients as different Lorentz spaces with different curvatures. The authors adapt the Lorentz network into the FL framework, and propose to maintain dedicated trainable client-specific parameters to learn different client curvatures, which makes the new method FlatLand different from previous baselines. Simulation results and ablation studies are also presented in the paper to support the better performance of the new method.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Although there are already many works on federated hyperbolic learning, I found the idea in this paper still interesting and inspiring. The proposed method is concise yet intuitive, and the simulation results seem to suggest that it also works well in practice for graph-related problems.\", \"weaknesses\": [\"The presentation of this paper can be improved.\", \"Firstly, some of the notations are not consistent in the paper and can make readers confusing. For example, line 286-290, if $x^{(l)}$ is in $\\\\mathcal{L}^n_K$, then $x_s^{(l)}\\\\in\\\\mathbb{R}^n$ instead of $\\\\mathbb{R}^d$; $m^{(l)}$ should be in $\\\\mathbb{R}^{m}$ and $M^{(l)}$ should be in $\\\\mathbb{R}^{m\\\\times n}$. This type of inconsistency exists throughout the paper.\", \"Secondly, some of the statements in the paper are left unexplained, which makes me very confused. For example, line 167-168 \\\"For instance, for dropout, the operation function is f(Wx, v) = W dropout (x).\\\", where is the usage of the bias v? 
Another example is Equation (4), it seems that the first row of $\\\\hat{M}^{(l)}$ is not used at all in each LT layer. Is something missing here? I assume $K$ is a separate learnable scalar to represent the curvature?\", \"Thirdly, how the authors use the proposed method to solve federated **graph** learning problem is not clearly explained in the main text, as the proposed method is suitable for transforming the node feature, but is not easy to handle the adjacency matrix directly. So it would be confusing if the author just apply the method over graph datasets as presented in the simulation section. In appendix, I find that \\\"For the node classification task, we employ 2-layer GCN for Euclidean models, 2-layer LGCN Chen et al. (2021) for FlatLand... For graph classification, we use 3-layer GIN Xu et al. (2018) as the Euclidean encoder, and the same 3-layer hyperbolic encoders as node classification for hyperbolic models\\\" How LGCN is combined with the proposed method? What does the \\\"3-layer hyperbolic encoders\\\" mean in this case? I think the authors should first provide a clear explanation of these details in the main text before diving into the detailed results. Also, the paper can be reorganized to cover more important algorithmic details (i.e., pseudocode) in the main text.\", \"Lastly, I find that the authors provide the implementation of the method with a text link in the doc (line 404, \\\"anonymous repository\\\") instead of uploading it as a supplementary file. I nearly missed it. I would recommend the authors to upload the code directly whenever possible.\"], \"questions\": \"1. Is the parameter $K$ in equation (4) just a learnable scalar? Do we need to have any kind of regularization/restrictions on $K$?\\n\\n2. Line 501, what does \\\"no parameter decoupling strategy\\\" mean? 
Does it mean the server will aggregate all parameters including the client-learned curvatures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary\", \"comment\": \"Our work introduces **a novel geometric perspective in personalized federated learning, particularly for addressing data heterogeneity**. As the first work exploring this direction, we provide valuable insights for the field, as acknowledged by multiple reviewers.\\n\\n- Reviewer 4tQH noted that \\\"the idea in this paper is interesting and inspiring,\\\"\\n\\n- Reviewer Wtgh highlighted \\\"the novelty of increasing dimensionality to embed client data in Lorentz space,\\\" \\n\\n- Reviewer Dc6W recognized our work as \\\"an innovative approach to personalized federated learning by leveraging hyperbolic geometry, specifically through the use of Lorentz spaces with tailored curvatures.\\u201d\\n\\n&nbsp;\\n\\nOverall, the proposed decoupling strategy has been demonstrated to effectively mitigate data heterogeneity challenges and enhance the performance of clients\\u2019 local Lorentz neural network models **with strong motivation and theoretical support**. And our approach **achieves superior performance**, particularly on tree-like structures and power-law distributed data such as graphs, where the inherent hierarchical nature aligns well with hyperbolic geometry. We believe our analyses and ablation experimental results strongly validate the effectiveness of our proposed approach.\\n\\n&nbsp;\\n\\nThough our method can be readily transferred to other fields that use linear neural networks, we want to clarify that **our primary goal is not to achieve SOTA performance across all data and tasks**, as it also depends on the performance of the local hyperbolic backbone. 
Hyperbolic space may not be universally optimal for all underlying data distributions, as its effectiveness depends on whether the data can be appropriately modeled by varying negative curvature. For instance, certain datasets may exhibit inherent positive curvature characteristics [8-9]. This observation suggests a **promising future research direction**: modeling more complex data structures in mixed-curvature spaces. We will explore this method in the future, as claimed in our conclusion section.\\n\\n&nbsp;\\n\\n---\\n**References:**\\n\\n[8] Learning mixed-curvature representations in product spaces. ICLR. 2019\\n\\n[9] Pseudo-Riemannian Graph Convolutional Networks. NeurIPS 2022\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer 4tQH,\\n\\n&nbsp;\\n\\nThank you very much for your positive feedback. We're glad our work resonated. Your comments motivate us to explore the potential of hyperbolic geometry in solving various challenging federated learning problems further, aiming to inspire more research in this emerging area.\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Response (3/4)\", \"comment\": \"Thank you for your questions. Next, we will summarize the questions with \\\"**Q:**\\\" and our responses with \\\"**R**:\\\"\\n\\n&nbsp;\\n\\n---\\n\\n\\n**Q1:** Why do the $v$\\u00a0parameters not appear anywhere in Equation (4)?\\n\\n**R:** The parameter $v$ is introduced to enable a more generalized formulation, supporting advanced non-linear operations with learnable parameters. While dropout is presented as a simple example without $v$ , the parameter can play a role in other minor operations within hyperbolic space. Specifically, $v$ can function as learnable parameters in normalization operations, constraining the norm via $ \\\\sigma(v^T x) $ to prevent excessive growth of hyperbolic embeddings, or as bias terms added to $ x $. 
In later sections, the presentation is simplified to focus on the core linear transformation aspects (lines 283\\u2013284), as these additional operations do not influence the fundamental principles of our method. We will add this explanation in the latter version.\\n\\n&nbsp;\\n\\n---\\n\\n**Q2:** How does Lorentz space actually model heterogeneity? Provide theoretical or empirical evidence.\\n\\n**R: From a theoretical perspective.** There is a direct correlation between graph distributions and the curvature of hyperbolic geometry [1]. Specifically, the stronger the power-law distribution of graph degrees, the more the data deviates from Euclidean space, corresponding to a larger curvature of Hyperbolic space. This makes graph data particularly well-suited for validating our method's ability to address complex geometric properties, supported by strong theoretical motivation and guarantees. For data in other domains, there is also a series of works to empirically show the relationship between the hyperbolic curvature and the distribution [6-7].\\n\\nThe theoretical guarantee of our decoupling strategy stems from the working principles of the Lorentz neural network. As shown in Equation (4), only the time-like dimension of the representation ($x_t$) is directly influenced by the curvature $-K$. Since curvature reflects the overall distribution of the data, it is straightforward and reasonable to assume that $x_t$ captures the heterogeneity of the data.\\n\\nMoreover, in Section 6 (*Perspective on Lorentz transformations*), we provide a relativistic perspective on the distinction between $x_t$ and $x_s$. Specifically, Lorentz space with different curvature represents varying spacetime intervals, and changes in the same event across different intervals primarily manifest in the time-like dimension. 
This further supports our hypothesis that $x_t$ effectively encodes data heterogeneity.\\n\\n**From an empirical perspective.** We conducted a series of experiments to validate the ability of our method to decouple data heterogeneity. First, we aggregated personalized data during training and observed significant performance fluctuations, with results deteriorating as shown in Figure 4. Second, we applied the same approach in Euclidean space, but it failed to achieve significant improvements. These observations help validate the effectiveness of our method in leveraging Lorentz space to handle heterogeneous data.\\n\\n&nbsp;\\n\\n---\\n**Q3:** Why do you decouple the parameters the way you do?\\n\\n**R:** Thank you for your question. Our decoupling strategy focuses on separating the parameters at each layer of the model. As discussed in our response to **Q2**, the key component that captures data heterogeneity is **the time-like dimension ($x_t$) of the hyperbolic embedding.** After each layer of the Lorentz neural network, the input vectors are transformed into a new hyperbolic embedding, with the time-like dimension updated as $x_t^{(l+1)} = \\\\sqrt{||mx_t^{(l)} + \\\\mathbf{M}\\\\mathbf{x}_s^{(l)}||^2+K}$. This means that the component at layer $(l+1)$ that carries the heterogeneity information is transferred as $x_t^{(l+1)}$. Thus, the parameters associated with $x_t^{(l+1)}$ are naturally aligned with the heterogeneity information. This ensures that our decoupling strategy is consistent with the theoretical derivation and can be effectively applied across multiple layers.\\n\\n&nbsp;\\n\\n---\\n\\n**References:**\\n\\n[1] Krioukov, Dmitri, et al. \\\"Hyperbolic geometry of complex networks.\\\" *Physical Review E\\u2014Statistical, Nonlinear, and Soft Matter Physics* 82.3 (2010): 036106.\\n\\n[6] Curvature Generation in Curved Spaces for Few-Shot Learning. Gao et al. (2021). ICCV\\n\\n[7] Curvature-Adaptive Meta-Learning for Fast Adaptation to Manifold Data. 
Gao et al. (2023). TPAMI\"}", "{\"title\": \"Response (1/4)\", \"comment\": \"Thank you for your valuable comments and suggestions. Below, we will summarize the concerns with \\\"**C:**\\\" and our responses with \\\"**R:**\\\"\\n\\n&nbsp;\\n\\n---\\n\\n**C1:** Motivation about focusing on graph data and generalizability to other types of data and models.\", \"r\": \"In this work, we specifically chose to focus on graph data because graphs inherently exhibit non-Euclidean structures and significant heterogeneity, as demonstrated in previous theoretical analyses. Besides, the heterogeneity is directly related to the curvature of hyperbolic geometry [1]. This makes graph data particularly suitable for validating our method's effectiveness in handling complex geometric properties **with a clear motivation and theoretical guarantee**.\\n\\nThe main reason that we didn't compare with FedRep in the main paper is due to their lack of graph-related implementations. With the limited time, we have conducted additional experiments on MNIST to demonstrate our method's broader applicability. 
The results are as follows:\\n\\n| **Dataset** | **#Clients** | **FedAvg (\\\\%)** | **FedProx (\\\\%)** | **Ditto (\\\\%)** | **GPFL (\\\\%)** | **FedRep (\\\\%)** | **FedCAC (\\\\%)** | **FlatLand (\\\\%)** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| MNIST (Acc) | 20 | $87.86 \\\\pm 0.0816$ | $87.53 \\\\pm 0.0771$ | $97.85 \\\\pm 0.0191$ | $92.90 \\\\pm 0.0724$ | $98.14 \\\\pm 0.0196$ | $97.85 \\\\pm 0.0189$ | $\\\\mathbf{98.35} \\\\pm 0.0136$ |\\n| MNIST (AUC) | 20 | $97.77 \\\\pm 0.0149$ | $98.81 \\\\pm 0.0110$ | $99.92 \\\\pm 0.0012$ | $99.48 \\\\pm 0.0110$ | $99.85 \\\\pm 0.0196$ | $99.92 \\\\pm 0.0012$ | $\\\\mathbf{99.93} \\\\pm 0.0011$ |\\n| MNIST (Acc) | 100 | $86.14 \\\\pm 0.2066$ | $84.50 \\\\pm 0.1658$ | $96.45 \\\\pm 0.0415$ | $96.52 \\\\pm 0.0462$ | $96.54 \\\\pm 0.0750$ | $96.59 \\\\pm 0.0505$ | $\\\\mathbf{96.64} \\\\pm 0.0495$ |\\n| MNIST (AUC) | 100 | $96.57 \\\\pm 0.0508$ | $98.22 \\\\pm 0.0442$ | $99.78 \\\\pm 0.0047$ | $99.70 \\\\pm 0.0136$ | $99.67 \\\\pm 0.0190$ | $\\\\mathbf{99.81} \\\\pm 0.0012$ | $99.70 \\\\pm 0.0116$ |\\n\\nThese results demonstrate that FlatLand performs competitively even on image data. We plan to explore FlatLand more for image and text datasets and analyze the detailed relation between the curvature and the specific data properties in future work.\\n\\n&nbsp;\\n\\n---\\n\\n**Reference:**\\n\\n[1] Krioukov, Dmitri, et al. \\\"Hyperbolic geometry of complex networks.\\\" *Physical Review E\\u2014Statistical, Nonlinear, and Soft Matter Physics* 82.3 (2010): 036106.\\n\\n[2] Bold but cautious: Unlocking the potential of personalized federated learning through cautiously aggressive collaboration, ICCV 2023\\n\\n[3] Gpfl: Simultaneously learning global and personalized feature information for personalized federated learning, ICCV 2023\"}", "{\"title\": \"Response (3/4)\", \"comment\": \"Thank you for your questions. 
Below, we will summarize the questions with \\u201c**Q:**\\u201d and our responses with \\\"**R:**\\\"\\n\\n&nbsp;\\n\\n---\\n\\n\\n**Q1:** Curvature estimation implementation.\\n\\n**R:** For curvature estimation, we employ Forman-Ricci curvature as described in Section 5.1. The motivation behind this choice is that Forman-Ricci curvature effectively captures how significantly a graph's structure deviates from Euclidean geometry, and has a theoretical connection with the curvature of hyperbolic geometry [1]. As shown in Figure 6, different clients exhibit varying curvature values, reflecting their distinct structural properties.\\nFor practical implementation, we treat curvature as a learnable parameter initialized using the Forman-Ricci estimate. Specifically, we use \\u03c3(K) + 0.5 as the curvature magnitude, which ensures the curvature remains negative while its magnitude stays within a well-performing range of [0.5, 1.5]. This range has been empirically validated in previous hyperbolic learning literature and helps maintain numerical stability while accommodating heterogeneous data distributions [4-7].\\n\\nThe effectiveness of this approach is demonstrated in our ablation study (Figure 4), where comparing against a fixed curvature setting (\\\"w/o TS\\\") shows that adaptive curvature significantly improves performance and does not lead to heavy fluctuation, which further shows the stability of this method.\\n\\nMoreover, we theoretically analyze the convergence rate of FlatLand and demonstrate that the parameter $K$ has almost no impact on the convergence rate. Detailed analyses are added in Appendix B.5.\\n\\n&nbsp;\\n\\n---\\n\\n**References:**\\n\\n[4] Fully hyperbolic neural networks. Chen, W., et al. (2021). ACL\\n\\n[5] Hyperbolic graph convolutional neural networks. Chami, I., et al. (2019). NeurIPS\\n\\n[6] HGCF: Hyperbolic graph convolution networks for collaborative filtering. Sun, J., et al. (2021). 
WWW\\n\\n[7] Discrete-time temporal network embedding via implicit hierarchical learning in hyperbolic space. Yang, M., et al. (2021). KDD\"}", "{\"comment\": \"Dear Reviewer nTmN,\\n\\n&nbsp;\\n\\nWe sincerely thank you for your positive feedback and efforts in helping us improve our paper! With only two days remaining before the deadline and having not yet received a response to our carefully formulated point-to-point replies, we kindly request your feedback. If you have any further questions or suggestions, we welcome the opportunity to discuss and address them.\\n\\n**We believe we have thoroughly addressed all the concerns raised in your initial review. And we would be grateful if you would consider raising your score accordingly.**\\n\\nThank you once again for your efforts in ensuring the highest standards of academic excellence. **We look forward to your response and remain available for any further discussions.**\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Response (4/4)\", \"comment\": \"**Q2:** Potential challenges or modifications needed when applying your method to other domains.\\n\\n**R:** Different data modalities exhibit varying levels of non-Euclidean characteristics. While graph data naturally possesses strong non-Euclidean properties (as shown by our Ricci curvature analysis in Figure 6), other domains like images and text also demonstrate heterogeneous structures. For instance:\\n\\n1. In vision tasks, the manifold of natural images often exhibits hierarchical relationships (e.g., different scales of visual features) that can benefit from hyperbolic representation. The heterogeneity often comes from varying visual styles, lighting conditions, or camera angles.\\n2. In text data, semantic hierarchies and power-law distributions in word frequencies naturally align with hyperbolic geometry's ability to embed tree-like structures. 
The heterogeneity might arise from different writing styles, topics, or languages.\\n\\nWhile our work represents a promising start in leveraging geometric information for personalized federated learning, we acknowledge that hyperbolic space might not be optimal for all types of data. Analyzing the scenarios and types of heterogeneity where hyperbolic geometry is most advantageous remains a promising direction for future work. Besides, we can explore spaces with mixed curvature (including positive and negative curvatures simultaneously) to better accommodate different data geometries. $\\\\mathsf{FlatLand}$'s fundamental approach to addressing heterogeneity through geometric modeling provides a solid foundation for these extensions.\\n\\n&nbsp;\\n\\n---\\n\\n**Q3:** Scalability considering the additional overhead.\\n\\n**R:** We provide a detailed analysis of the time complexity cost and memory usage of our method compared to the simplest FedAvg method, as addressed in our response to C2. This analysis highlights the scalability of our approach. Additionally, we conducted experiments on data from 100 clients, further validating the extensibility of our method.\\n\\n&nbsp;\\n\\n---\\n\\nWe hope we've addressed your key concerns, particularly regarding the generalizability of our method beyond graph data, the complexity of curvature estimation, and computational efficiency. We demonstrated that FlatLand performs well on benchmarks like MNIST, with minimal computational overhead due to pre-computation (Appendix B.6.). Additionally, we clarified that curvature estimation is stable and does not impact convergence (Appendix C.2. and Appendix B.5.).\"}", "{\"comment\": \"Dear Reviewer KA7L,\\n\\n&nbsp;\\n\\nAs the review period is coming to a close, we wanted to follow up regarding our rebuttals. We believe we have thoroughly addressed your insightful comments and concerns through our responses. 
Furthermore, we will continue to enhance and refine our work for the final version.\\n\\n**Since ALL other reviewers hold quite positive feedback and have affirmed our novelty and contribution, your feedback is invaluable to us**. We kindly request that you **respect the contribution and dedication behind this work** and **consider raising the score** if there is no new concern. Please let us know if you need any additional information or clarification to complete your review.\\n\\nThank you very much for your time and careful consideration of our work!\\n\\n&nbsp;\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper proposes a new personalized federated learning algorithm called Flatland employing hyperbolic geometry. One of the key challenges in federated learning is the data heterogeneity among clients, and understanding the similarity between clients' data distributions can be helpful. The paper tries to achieve this goal by projecting each client's data to a higher dimension to better capture similarities between clients' data. The paper is more focused on graph data. Experiments are carried out to showcase the effectiveness of the approach.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The motivation and contribution of the paper is clear and it seems that leveraging hyperbolic geometric approaches can be helpful for personalized federated learning.\", \"The paper provides analysis in Section 6 to support their approach.\", \"Experimental study of the paper looks convincing.\"], \"weaknesses\": \"Reading the paper, I felt that some arguments are vague and need more explanation to become clear. 
For example \\\"*PFL approaches include segmenting models into generic and personalized components (McMahan et al., 2017; Tan et al., 2023), leveraging model weights and gradients to map client relationships (Xie et al., 2021), or integrating additional modules to facilitate customization (Baek et al., 2023)*.\\\" Or for example in \\\"Moreover, embedding data from various clients into a fixed space complicates the interpretability of model parameters, making it difficult to segment the model into meaningful components (Arivazhagan et al., 2019) and often expensive to assess similarities between client models\\\", the reasons are not clear.\\n\\nFurthermore, the paper states that embedding data from various clients into a fixed space is often expensive for assessing similarities between client models. However, using Flatland itself, in step 4 of algorithm 2, each client needs to project its data to another space. I believe this can add more computation compared to other methods which do not employ Lorentz space and hyperbolic geometry-based approaches. Therefore, to me the above argument seems a bit contradictory.\", \"questions\": \"Looking into algorithm 2 of the paper, it seems that, using Flatland, clients and the server collaborate via conventional federated averaging methods such as FedAvg. Is Flatland compatible with other personalized federated learning algorithms? Is it possible to use other methods like PerFedAvg to orchestrate the collaboration between the server and clients? Maybe this can further improve the personalization properties of Flatland.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Below, we will summarize the weaknesses with \\\"**W:**\\\" and our responses with \\\"**R:**\\\"\\n\\n&nbsp;\\n\\n---\\n\\n**W1:** Baselines selection.\\n\\n**R:** Thank you for this suggestion. Note that our primary goal was to highlight and evaluate the benefits of integrating hyperbolic geometry in federated learning. Hence our comparisons focused on baselines with strong performance on graph data to ensure a fair evaluation. FedCAC and GPFL perform well on image datasets but lack graph-related implementations. Therefore, we compare our method with FedCAC and GPFL on image datasets; please check the response to W2. \\n\\nWe will incorporate these clarifications into the revised manuscript to provide a clearer context for our focus and methodology.\\n\\n&nbsp;\\n\\n---\\n\\n**W2:** This paper primarily focuses on graph datasets. Can FlatLand effectively perform on commonly used benchmarks in PFL?\\n\\n**R:** Thank you for your suggestions. We first would like to explain the reason we chose graph data to conduct experiments and analysis.\\n\\n**1. The reason for choosing graph data.**\\n\\nIn this work, we chose to focus on graphs because *graphs inherently exhibit non-Euclidean structures and significant heterogeneity, which are directly related to curvature*. This makes graph data particularly suitable for validating the effectiveness of our hyperbolic geometry-based approach in addressing data heterogeneity.\\n\\n* Theoretical foundation: The use of hyperbolic spaces to model varying graph structures is well-supported by theory. As demonstrated in [1], the relationship between graph structure and hyperbolic space provides a strong mathematical basis for our method's effectiveness.\\n\\n* Clear demonstration of benefits: Graph data is ideal for showing our method\\u2019s ability, as graphs generally have different curvatures. 
It also allows us to demonstrate the effectiveness of our parameter decoupling strategy in a setting where heterogeneity is more evident.\\n\\n**2. Applicability to other domains.**\\n\\nOur method can be easily applied to other data, as it directly adds one extra dimension to the linear layer. With limited rebuttal time, we have conducted additional experiments on MNIST to primarily demonstrate our method's broader applicability. The results are promising:\\n\\n| **Dataset** | **#Clients** | **FedAvg (\\\\%)** | **FedProx (\\\\%)** | **Ditto (\\\\%)** | **GPFL (\\\\%)** | **FedRep (\\\\%)** | **FedCAC (\\\\%)** | **FlatLand (\\\\%)** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| MNIST (Acc) | 20 | $87.86 \\\\pm 0.0816$ | $87.53 \\\\pm 0.0771$ | $97.85 \\\\pm 0.0191$ | $92.90 \\\\pm 0.0724$ | $98.14 \\\\pm 0.0196$ | $97.85 \\\\pm 0.0189$ | $\\\\mathbf{98.35} \\\\pm 0.0136$ |\\n| MNIST (AUC) | 20 | $97.77 \\\\pm 0.0149$ | $98.81 \\\\pm 0.0110$ | $99.92 \\\\pm 0.0012$ | $99.48 \\\\pm 0.0110$ | $99.85 \\\\pm 0.0196$ | $99.92 \\\\pm 0.0012$ | $\\\\mathbf{99.93} \\\\pm 0.0011$ |\\n| MNIST (Acc) | 100 | $86.14 \\\\pm 0.2066$ | $84.50 \\\\pm 0.1658$ | $96.45 \\\\pm 0.0415$ | $96.52 \\\\pm 0.0462$ | $96.54 \\\\pm 0.0750$ | $96.59 \\\\pm 0.0505$ | $\\\\mathbf{96.64} \\\\pm 0.0495$ |\\n| MNIST (AUC) | 100 | $96.57 \\\\pm 0.0508$ | $98.22 \\\\pm 0.0442$ | $99.78 \\\\pm 0.0047$ | $99.70 \\\\pm 0.0136$ | $99.67 \\\\pm 0.0190$ | $\\\\mathbf{99.81} \\\\pm 0.0012$ | $99.70 \\\\pm 0.0116$ |\\n\\nWe plan to conduct more experiments and analysis on more domains and tasks. This is also mentioned in the \\\"Future Work\\\" section of our manuscript.\\n\\n&nbsp;\\n\\n---\\n\\n**Reference:**\\n\\n[1] Krioukov, Dmitri, et al. \\\"Hyperbolic geometry of complex networks.\\\" *Physical Review E\\u2014Statistical, Nonlinear, and Soft Matter Physics* 82.3 (2010): 036106.\"}", "{\"summary\": \"The paper proposes a personalized FL method called Flatland. 
Flatland follows a standard Federated Averaging framework. Each client $c$ first embeds their data into a Lorentz space with a personalized (learned) curvature coefficient $K_c$. They then run local training of a Lorentz model (neural network consisting of so called Lorentz layers that are designed so that if the inputs are in a Lorentz space then so are the outputs). The parameters of this network that correspond to the first dimension of the embedded data are personalized, per client, parameters, while the remaining parameters are shared. The shared parameters are updated using FedAvg, the personalized parameters are kept on the client.\\n\\nThe paper shows that the specific decomposition proposed still maintained the Lorentz property of the network. The paper then carries out an empirical evaluation of the method on several different graph datasets, for both node and graph classification tasks. It compares to a range of baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The proposed method is easy to integrate into standard FL training pipelines. 
It trains using federated averaging and aggregates client updates, meaning it is compatible with privacy-preserving techniques such as secure aggregation and differential privacy.\", \"The originality of the paper is that it is the first to propose using a different Lorentz embedding for each client in order to better capture the differing client data distributions.\", \"The experiments demonstrate good performance on node classification in real-world datasets with small numbers of clients\", \"The authors provide ablation studies examining the impact of different components of the method\"], \"weaknesses\": [\"I do not believe the paper makes a significant enough contribution, either theoretically or empirically:\", \"Regarding theory, there is a lack of any substantial theoretical contributions\", \"As far as I can tell, Proposition 1 and Corollary 1 are both instant consequences of the definition of a Lorentz network (eq 2); giving a name to the first row and column of the matrix does not require reproving anything. Even more confusing are Corollary 1 and its proof: Proposition 1 states that $\\\\forall M, LT(x;M) \\\\in \\\\mathcal{L}$, so of course it is also true that $LT(x;\\\\Phi(M, N)) \\\\in \\\\mathcal{L}$ because it holds for all matrices in the second entry; this does not require a proof as given in the appendix.\", \"The remainder of section 6 makes some hand-wavy and confusing claims and does not prove any concrete properties or statements about the method.\", \"There are many potentially interesting directions here the authors could have taken, e.g. what is the convergence rate, how does the Lorentz embedding impact convergence and/or performance, what types of distribution heterogeneity are well modeled by the Lorentz space, etc.\", \"Regarding empirical contribution. My overall concern here centers around the empirical evaluation being too narrow. 
Essentially the evaluation is limited to federated cross-silo graph datasets, which to me makes it a specialized method rather than a general one, as the authors claim.\", \"The empirical evaluation is only on graph datasets. The authors present the method as a general personalized federated algorithm and claim that the method is applicable to other data modalities; they present no evidence of this. I am also not convinced that this would be the case; state-of-the-art methods that deal with the most common data modalities (text, image, video, tabular) do not make use of Lorentz spaces.\", \"The number of clients in each dataset is very low. Therefore, the experiments do not provide evidence that the method works in a cross-device setting. Crucially, the authors do not mention the cohort size they used in federated averaging, and given the low number of clients in their experiments I am assuming that all clients participate in each round of training. I have concerns about whether the method would even work without full client participation in each round (if client A has participated before and client B has not, then A will have already updated their personalized parameters often, while B is starting from initialization; this is going to lead to very heterogeneous shared parameter updates)\", \"I believe the authors need to provide experiments for other data modalities, with larger numbers of clients as well as partial client participation\", \"There are some minor typos, e.g. I think in section 5.2 $m^{(l)}$ should be in $\\\\mathbb{R}^m$, not $\\\\mathbb{R}^{m+1}$, and $M^{(l)}$ should be in $\\\\mathbb{R}^{m\\\\times n}$. Also references to tables in section 7 are off and need to be checked, e.g. 
on line 452, Table 3 should be Table 1.\", \"Overall, I would need to see substantial improvements to the theoretical and/or empirical contributions, as suggested in the final bullet points of each section above.\"], \"questions\": [\"Why do the $v$ parameters not appear anywhere in Equation (4)?\", \"How does Lorentz space actually model heterogeneity? The paper seems to claim that $x_t$ captures the heterogeneity between client distributions, while the client distributions in Flatland i.e. $x_S$ should be the same. This is an interesting idea but no evidence is provided. I would like to see either theoretical or empirical evidence of this fact.\", \"Why do you decouple the parameters the way you do? This is related to the above point. It seems that the parameters are decoupled this way so that the heterogeneous feature $x_t$ is fed into the personalized portion of the network $m$ in Equation (4), while the homogeneous features $x_S$ are fed into the shared portion of the network, $M$ in Equation (4). However, as far as I can tell this only works for the first layer since the output of layer 1 $m x_t + M^{(1)} x_S$ will be fed into the shared part of layer 2 $M^{(2)}$ meaning that the updates to shared parameters from layer 2 onwards are still impacted by heterogeneous parts of the client data.\", \"Does the method work for larger numbers of clients, without full client participation each round?\", \"What do you mean by statistically significant for the results in Table 1? How can you say that FlatLand outperforms GCFL in graph classification when the mean accuracies are very close with very large overlap of the standard deviation intervals. This looks to me like noise rather than signal.\", \"It is not clearly defined what the embedding dimension is in Section 7.3. 
Is this the hidden dimension of the model?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (2/4)\", \"comment\": \"**C2:** Computational efficiency. *\\u201cThe method introduces additional complexity through hyperbolic geometry and requires curvature estimation, which could make implementation and tuning more challenging in practice. The computational overhead of working in Lorentz space compared to Euclidean space is not thoroughly discussed\\u201d*\\n\\n**R:** Let us clarify the complexity analysis of FlatLand compared to FedAvg:\\n\\n1. **Local updates.** the additional operations in FlatLand (curvature estimation and exponential map). **The curvature estimation does not introduce significant computational overhead** since it only needs to be **computed once** for each client's data distribution. This value can be pre-computed and reused. Similarly, the exponential map transformation only requires a single non-linear mapping operation based on input sample norms with a complexity of $O(1)$, which can also be pre-computed and cached for efficiency.\\n\\n2. **Aggregation**, FlatLand maintains the same time complexity as FedAvg when using equivalent hidden dimensions. While FlatLand introduces time-like space parameters, it only aggregates shared parameters while maintaining personalized parameters locally. Importantly, as shown in Section 7.3, FlatLand performs better in low dimensionality, potentially reducing communication costs in practice.\\n\\n3. **Storage requirements**, FlatLand requires $O(d+1)$ additional storage per client compared to FedAvg, accounting for the time-like dimension and curvature parameter, where $d$ is the hidden dimension. 
This overhead is minimal since d is typically small, and FlatLand's superior performance in low-dimensional settings further mitigates practical storage concerns.\\n\\nOverall, while FlatLand adds a few steps compared to FedAvg, the computational overhead is limited due to pre-computation and efficient constant-time operations. Consequently, this **does not affect the scalability of the method**. Moreover, our experiments show that its benefits outweigh the slight complexity increase, particularly for managing heterogeneous data in resource-constrained, low-dimensional settings. We have added this analysis in Appendix B.6.\\n\\n&nbsp;\\n\\n---\\n\\n**C3:** Numerical stability. *\\u201cThe paper doesn't address potential numerical stability issues that often arise when working with hyperbolic spaces.\\u201d*\\n\\n**R: Regarding numerical stability**, our method specifically employs fully Lorentz neural networks, which were designed to address the numerical instability issues commonly associated with hyperbolic neural networks. Traditional hyperbolic approaches often face instability due to frequent projections between tangent and hyperbolic spaces [5]. However, fully Lorentz neural networks perform operations directly in Lorentz space without any further mapping functions, significantly reducing these stability concerns.\\n\\n&nbsp;\\n\\n---\\n\\n**C4:** Differences with previous decoupling methods, particularly FedRep\\n\\n**R:** We would like to further highlight our novelty. While FedRep does propose a parameter decoupling approach in Euclidean space, our method differs fundamentally in both motivation and implementation. \\n\\n**Our decoupling strategy is uniquely motivated by the geometric properties of Lorentz space**, where the time-like dimension naturally captures heterogeneous information. 
This geometric interpretation, as illustrated in Figure 1(b), allows us to separate client-specific features (manifested in the time-like dimension) from shared information (in space-like dimensions) without requiring explicit similarity calculations or additional modules.\\n\\nIn contrast, FedRep focuses on finding shared representations in Euclidean space through optimization constraints. While both methods aim to separate shared and personalized components, **our approach leverages the inherent structure of hyperbolic geometry to achieve this separation more naturally**. This is evidenced by our theoretical analysis in Section 6, which shows how the time-like parameter inherently captures client-specific information through the gradient calculations (Equations 11-13).\\n\\nFurthermore, our experimental results demonstrate the effectiveness of this geometry-driven approach, **particularly in scenarios with strong non-Euclidean properties**, as shown in the curvature analysis (Figure 6). We also conduct experiments that compare our method with FedRep on MNIST in the response to **C1**.\\n\\n&nbsp;\\n\\n---\\n**References:**\\n\\n[5] Hyperbolic graph convolutional neural networks. Chami, I., et al. (2019). NeurIPS\"}", "{\"title\": \"Supplementary of Response (2/4)\", \"comment\": \"Dear Reviewer KA7L,\\n\\n&nbsp;\\n\\nDue to space limitations in Response 2/4, we include the updated supplementary experimental results on the partial participation rate here for your review.\\n\\nBecause GCFL requires careful parameter tuning, its optimal results are difficult to reproduce, and its performance on the Cora dataset is generally comparable to FedPer's, we have not included it in this experiment. 
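For clarity on what the participation rate controls in these runs, here is a minimal toy sketch (our own simplification, not FlatLand's actual implementation; all names and the scalar "parameters" are purely illustrative): in each round, only a sampled cohort of clients contributes to the shared-parameter average, while personalized parameters are updated locally and never leave the clients.

```python
import random

def run_round(clients, shared, participate_rate, rng):
    """One FedAvg-style round under partial participation: sample a cohort,
    take a toy local step, and average only the shared parameter."""
    cohort = rng.sample(sorted(clients), max(1, int(len(clients) * participate_rate)))
    local_shared = []
    for cid in cohort:
        c = clients[cid]
        # Toy local update: move the shared part toward this client's data.
        local_shared.append(shared + 0.1 * (c["data"] - shared))
        # The personalized part is updated locally and never aggregated.
        c["personal"] += 0.1 * (c["data"] - c["personal"])
    # Server aggregates the shared parameter over the sampled cohort only.
    return sum(local_shared) / len(local_shared)

rng = random.Random(0)
clients = {i: {"personal": 0.0, "data": float(i)} for i in range(10)}
shared = 0.0
for _ in range(20):
    shared = run_round(clients, shared, participate_rate=0.3, rng=rng)
```

With `participate_rate=0.3`, only 3 of the 10 toy clients are averaged per round, mirroring the reduced participation rates in these experiments.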
The results demonstrate that FlatLand maintains excellent performance even with reduced participation rates, highlighting the superiority of our approach.\\n\\n| Participate Rate | 0.1 | 0.3 | 0.5 | 0.7 | **1.0** |\\n| :--------------: | :---: | :---: | :---: | :---: | :-----: |\\n| FedAvg | 18.14 | 36.64 | 34.30 | 33.61 | 30.20 |\\n| FedPer | 56.03 | 60.22 | 74.79 | 73.48 | 68.11 |\\n| FedHGCN | 54.38 | 77.08 | 69.87 | 73.35 | 62.52 |\\n| **FlatLand** | **81.82** | **81.11** | **79.27** | **79.42** | **77.98** |\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Follow-up Rebuttals (1/2)\", \"comment\": \"Dear Reviewer KA7L,\\n\\nWe appreciate your time and effort in reviewing our manuscript. However, we respectfully disagree with your follow-up evaluation and would like to address some misconceptions.\\n\\n&nbsp;\\n\\n\\n**1. Theoretical Contribution.**\\n\\nFirst, regarding the theoretical contribution, specifically Corollary 1: We **NEVER** claimed this as our primary contribution even in the original manuscript. Rather, it serves as a necessary foundation in hyperbolic networks, which is not as trivial as suggested in your review. A thorough understanding of hyperbolic geometry would reveal its significance. The series of analyses we provided, including but not limited to Corollary 1, contributes to the completeness and correctness of our entire work. We also add a convergence analysis accordingly in the revised version. These theoretical foundations are essential components that support our main contributions and methodological innovations.\\n\\nAnd what is important is that our method is designed and evaluated supported by strong theoretical motivation, as we explained in responses 1/4 and 3/4.\\n\\n&nbsp;\\n\\n**2.1 Experimental Settings.**\\n\\nRegarding experimental validation, we want to clarify that we **NEVER** claimed our method works universally across **ALL** datasets. 
We explicitly acknowledged limitations and future directions in both our original submission and revised version. Explanations are also provided in the rebuttal (Response 2/4, the reason we focus on graph data), and we also added this clarification in the Future Work Section: *\\\"Note that hyperbolic space is not universally optimal for all data distributions \\u2014 some exhibit positive curvature \\u2014 highlighting the need to model complex data structures in mixed-curvature spaces\\\"*.\\n\\n&nbsp;\\n\\nRegarding the experimental settings with up to 50 clients: we **NEVER** claim that work in a **cross-device setting** is our main contribution. Instead, we follow the graph federated learning settings to solve personalized FL problems, which are seldom considered in this scenario. This should not be considered a major concern, as **it aligns with current standards in federated graph learning research, as demonstrated in accepted papers \\\\[1-2\\\\]**. More importantly, our comprehensive experimental results demonstrate excellent performance across these settings even at low partial participation rates on graph datasets, which strongly validates our hypotheses and the effectiveness of our proposed method.\\n\\n&nbsp;\\n\\n**2.2 Partial participation scenarios.**\\n\\nWe respectfully but strongly disagree with your assertion that our method cannot work in partial participation scenarios, for the following reasons. Our experimental results demonstrate strong and consistent performance, particularly in our emphasized settings, **yet these significant results seem to have been overlooked in your evaluation**. While you highlighted the MNIST results, we want to emphasize that the image domain dataset was not our primary focus, as we clearly stated in our response section. 
Due to time constraints, we haven't exhaustively explored all possible parameter configurations to find the optimal settings in image datasets (the results have been updated). We will explore more possibilities in future work. \\n\\nMoreover, the **key issue** you mentioned is the scenarios where *\\\"the personalized portions of the parameters for the later joined clients are not updated.\\\"* This problem is **not the focus of the research on personalized federated learning (PFL)**. In fact, the extreme situation you mentioned **occurs for all parameter decoupling methods, including FedPer, which is one of the most widely used and popular methods in PFL**. This is a separate research problem, and our method is orthogonal to it. We believe relevant strategies for handling this issue are not in conflict with our method. Our decoupling strategy, similar to FedPer, effectively handles this issue by separating heterogeneous and homogeneous information but with enhanced interpretability and better heterogeneity handling.\\n\\nNevertheless, we are willing to provide **a solution of how our method can solve this issue easily**. **When a new client joins**, we can directly use the global shared parameters as the initialized shared parameters. Compared to FedPer, an advantage of our method is that **we can pre-estimate the curvature of the client's data and directly fetch the well-trained personalized parameters of other clients with similar data curvature and use them as initialization**. This not only avoids parameter optimization performance issues but also ensures privacy. In contrast, FedPer cannot effectively tell which personalized parameters are better or more useful for new clients. 
**This, in turn, highlights a potential advantage of our method** in this scenario compared to ordinary parameter decoupling methods, as we utilize geometry information, further underscoring the inspiring nature of our approach.\"}", "{\"title\": \"Follow-up Rebuttals (2/2)\", \"comment\": \"&nbsp;\\n\\n**3. Decoupling strategy.**\\n\\n**The heterogeneity information is not simply captured by the input $x_t$, but rather by the time dimension representation in the hyperbolic space during the Lorentz Transformation process**. The key component that captures data heterogeneity is **the time-like dimension of the hyperbolic embedding** rather than the original input $x_t$. The time-like component of the output representation is calculated using the norm of the space-like dimensions and the curvature, as shown in Equation (4) of our manuscript. Its special computation method determines its uniqueness on the hyperbolic manifold. Therefore, after multiple layers of output, the 0-th dimension of the embeddings truly captures the heterogeneity information we hypothesize, and this is entirely consistent with our derivation. We have provided a detailed explanation of this in the rebuttal (**Response (3/4) Q2 and Q3**). **We are confident** that this is an intuitive and easily understandable concept within the field of hyperbolic neural networks.\\n\\nMost importantly, our **extensive experimental results, including thorough ablation studies, validate the significance and effectiveness of our proposed decoupling strategy**. These empirical results provide concrete evidence supporting the merits and performance of our method, which we believe deserves proper acknowledgment and recognition. **This is also acknowledged by your previous comments:** \\\"*The authors provide ablation studies examining the impact of different components of the method*\\\". 
Additionally, this is also mentioned by \\n* Reviewer nTmN: *Experimental study of the paper looks convincing*\\n* Reviewer Wtgh: *Experiments are extensive*\\n\\n&nbsp;\\n\\n---\\n\\n\\nIn summary, in our opinion, all the issues you mentioned above **are not the core contributions of our paper**, and **some are not even core research problems in the field of personalized federated learning** (such as late-joining clients and partial participation). This does not affect the integrity and contribution of our work. We would like to re-emphasize that:\\n\\n1. The core contribution of our work is being the first to leverage geometric information for decoupling personalized information, providing new insights into this field, as highlighted by \\n\\t* Reviewer 4tQH: \\\"I think this work can inspire more ideas in federated hyperbolic learning problems.\\\"\\n\\t* Reviewer Dc6W: \\\"I still find this approach offers valuable insights, particularly in new domains, and I will maintain my score.\\\"\\n \\n2. Our choice to conduct experiments on graphs is well-supported by theoretical and intuitive justifications, better validating our hypotheses (Response 2/4 1). Furthermore, our **experimental scopes are consistent with those of previous methods**. Our work is coherent and complete within the scope of our claimed contributions.\\n \\n3. We acknowledged that hyperbolic spaces might not be effective for ALL data types (which is not our contribution or focus). We have provided sufficient reasoning and potential future research directions regarding this. \\\"*Note that hyperbolic space is not universally optimal for all data distributions \\u2014 some exhibit positive curvature \\u2014 highlighting the need to model complex data structures in mixed-curvature spaces.*\\\"\\n \\n4. Partial training participation is not a key point in the field of personalized federated learning. 
We have already thoroughly verified the feasibility of our method for handling partial training participation on graph datasets, including all strong baselines. For extreme scenarios, we believe our method still has great potential, as we explained before, which could be explored in future work, although it is not a core focus of the personalized federated learning field.\\n\\nThank you again for all your comments. We would greatly appreciate a reconsideration based on these clarifications and the actual merits of our work.\\n\\n&nbsp;\\n\\nBest regards, \\n\\nThe Authors\\n\\n&nbsp;\\n\\n---\\n\\n**References:**\\n\\n[1] Subgraph Federated Learning with Missing Neighbor Generation\\n\\n[2] An efficient federated learning framework for graph learning in hyperbolic space\"}
In such settings, all parameters serve the same role during training, making it difficult to distinguish between those capturing client-specific heterogeneity and those representing shared patterns. This not only hinders meaningful model segmentation but also makes client similarity assessment more complex. Adding extra modules to address this further increases the system's complexity and reduces its flexibility.\\n\\nThanks again for your comments. We have made modifications to the original text (lines 33\\u201336 and lines 48\\u201352) to address this ambiguity.\\n\\n&nbsp;\\n\\n---\\n\\n**W2:** Additional operations from the Lorentz model.\\n\\n\\n**R:** Thanks for your comment. We would like to clarify this contradiction:\\n\\nThe computational complexity concerns raised in our original statement refer specifically to **assessing similarities between client models during the aggregation phase**, not the initial data projection. While FlatLand does indeed require projecting data into Lorentz space, this operation is computationally efficient for several reasons:\\n\\n1. The exponential map is **a single input mapping function** (Equation 1) that depends only on the norm of the input:\\n$\\\\mathbf{x}^K = \\\\left( \\\\cosh \\\\left(\\\\frac{\\\\||\\\\mathbf{v}^E\\\\||_2}{\\\\sqrt{K}}\\\\right), \\\\sqrt{K} \\\\sinh\\\\left(\\\\frac{\\\\||\\\\mathbf{v}^E\\\\||_2}{\\\\sqrt{K}}\\\\right)\\\\frac{\\\\mathbf{v}^E}{\\\\||\\\\mathbf{v}^E\\\\||_2} \\\\right)\\n$\\n\\n\\nThis projection has a constant time complexity of $O(1)$ per data point and can be **optimized through pre-computation** of input norms before training begins.\\n\\n\\n2. **No additional projections are required during training** after the initial mapping.\\n\\n\\n3. Most importantly, FlatLand **eliminates the need for expensive similarity computations during aggregation**. 
While other methods require computing pairwise model similarities or maintaining additional computational modules, our method performs a simple aggregation operation with complexity equivalent to standard FedAvg.\\n\\n\\n4. The effectiveness of FlatLand in **low-dimensional settings** (as demonstrated in Section 7.3) further reduces practical computational and communication costs.\\n\\n\\nTo make this clearer in the paper, we have added a detailed complexity analysis in Appendix B.6 that quantifies these trade-offs. The limited overhead from the initial projection is substantially outweighed by the computational savings during model aggregation and the reduced communication costs from effective low-dimensional representations.\"}", "{\"title\": \"Reviewing feedback\", \"comment\": \"Thank you for the detailed explanation. I\\u2019ve reviewed your points that addressed most of my concerns. However, I remain somewhat skeptical about the comparison between your method and FedRep. While I appreciate your results on the MNIST dataset, I believe this dataset is not entirely representative of the broader applicability of different methods, and a more thorough analysis would strengthen your claims. That said, I still find this approach offers valuable insights, particularly in new domains, and I will maintain my score.\"}", "{\"comment\": \"Dear Reviewer Wtgh,\\n\\n&nbsp;\\n\\nWe sincerely thank you for your positive feedback and efforts in helping us improve our paper! With only two days remaining before the deadline and having not yet received a response to our carefully formulated point-to-point replies, we kindly request your feedback. If you have any further questions or suggestions, we welcome the opportunity to discuss and address them.\\n\\n**We believe we have thoroughly addressed all the concerns raised in your initial review. 
And we would be grateful if you would consider raising your score accordingly.**\\n\\nThank you once again for your efforts in ensuring the highest standards of academic excellence. **We look forward to your response and remain available for any further discussions.**\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": \"&nbsp;\\n\\nWe thank all the reviewers for their time and for providing constructive comments to enhance the paper. **We appreciate the reviewers' positive comments:**\\n\\n* The motivation and contribution of the paper are clear (*Reviewers nTmN, 4tQH, Wtgh, Dc6W*). The idea of integrating geometry information to solve heterogeneity in federated learning is novel, interesting, and inspiring. (*All reviewers*)\\n* The proposed method is elegant (*Reviewer Dc6W*), concise yet intuitive (*Reviewer 4tQH*), and easy to integrate (*Reviewer KA7L*). \\n* Convincing evaluation and ablation study of the proposed method on graph datasets. (*Reviewers nTmN, 4tQH, Wtgh, KA7L*)\\n* Solid theoretical support and sufficient analyses (*Reviewers nTmN, Dc6W*).\\n* This paper can inspire more ideas in federated hyperbolic learning problems (*Reviewers 4tQH, Dc6W*).\\n\\n&nbsp;\\n\\n---\\n\\nWe include the following revisions in the Rebuttal PDF according to reviewers' constructive suggestions to improve the quality of our manuscript. Due to time constraints, we will continue to refine our paper later. Specifically, the added contents are as follows:\\n\\n1. **Clearer arguments and typos.** We refined our explanations (*Reviewers nTmN, 4tQH*) and added experimental details about the curvature calculation (*Reviewers 4tQH, Dc6W*) and graph classification settings in Appendix C2 (*Reviewer 4tQH*).\\n\\n2. **Time complexity analysis.** We added a complexity analysis section in Appendix B6 to show that exponential maps do not lead to much extra computation overhead (*Reviewers nTmN, Dc6W*).\\n\\n3. 
**Convergence analysis.** We added a convergence analysis in Appendix B5 to show that our method does not negatively impact the convergence rate (*Reviewer Dc6W*), which also enhances our theoretical analysis part (*Reviewer KA7L*).\\n\\n4. **Experiments on image datasets.** We added a section in Appendix C3 to show some experiment results on the image dataset compared with some strong baselines that are not utilized in graph datasets (*Reviewers Dc6W, Wtgh, KA7L*).\\n\\n5. **Experiments in a partial participation setting.** We added a section in Appendix C4 to analyze the performance with a lower participation rate (*Reviewer KA7L*). \\n\\nWe update the whole results in Cora (50 clients) as follows; we will integrate and update it in our final version:\\n\\n\\n| Participate Rate | 0.1 | 0.3 | 0.5 | 0.7 | **1.0** |\\n| :--------------: | :---: | :---: | :---: | :---: | :-----: |\\n| FedAvg | 18.14 | 36.64 | 34.30 | 33.61 | 30.20 |\\n| FedPer | 56.03 | 60.22 | 74.79 | 73.48 | 68.11 |\\n| FedHGCN | 54.38 | 77.08 | 69.87 | 73.35 | 62.52 |\\n| **FlatLand** | **81.82** | **81.11** | **79.27** | **79.42** | **77.98** |\\n\\n&nbsp;\\n\\n---\", \"we_would_like_to_give_clarification_in_the_question\": \"&nbsp;\\n\\n**The selection of evaluation in graph datasets.**\\n\\nThe core idea of our approach lies in the correlation between data heterogeneity and hyperbolic curvature. We propose a novel method to leverage geometry information to decouple data heterogeneity. This relationship is particularly intuitive and supported by solid theoretical guarantees in graph data, where hyperbolic geometry has been more extensively developed. Therefore, we primarily use graph data to evaluate the rationale and effectiveness of our proposed method.\\n\\n\\n&nbsp;\\n\\n**Future work and improvement.**\\n\\nWe acknowledge that the performance of our method across all data types remains an open question for further investigation. 
Due to time constraints, our exploration of other data types is somewhat limited, though we supplemented our study with experiments on image datasets. However, **this does not conflict with the motivation, contribution, methodology, and experimental results presented in this paper.** This work can inspire more ideas. Below, we outline possible areas for further exploration following this work:\\n\\n1. It is true that hyperbolic space may NOT be universally optimal for all data distributions, as it is still a rapidly evolving field. One intuition is that some data may exhibit positive curvature. Therefore, we could follow the idea of this paper to model more complex data structures in **mixed-curvature spaces** that also contain manifolds that have positive curvature.\\n\\n2. Inspired by Reviewer KA7L, we could investigate the **unique potential** of geometric information when *dealing with the rapid updates of parameters for newly arrived users*. A novel strategy could be designed, for example, by *leveraging curvature values to directly identify clients with similar data properties*. This would allow us to quickly retrieve personalized parameters from these clients to help initialize the personalized parameters for the new user. This is a task that traditional Euclidean-space methods struggle with.\\n\\n3. As hyperbolic space continues to show promising results across data in other domains, we could *expand our analysis and validation to include more complex and challenging datasets and tasks.*\"}
We have corrected the identified typos and will conduct a thorough review to address any remaining issues.\\n\\n&nbsp;\\n\\n---\\n\\n**W2 and Questions:** Ambiguous concepts.\\n\\n**R**: Thank you for your careful review. We appreciate your feedback and will address the ambiguity by adding clarifications to ensure the concepts are more precisely defined in the revised manuscript. Please let us address each point:\\n\\n**1. Regarding the operation function $ f(Wx, v) $ and the usage of $ v $:**\\n\\nThe parameter $v$ is introduced to enable a more generalized formulation, supporting advanced non-linear operations with learnable parameters. While dropout is presented as a simple example without $v$ , the parameter can play a role in other minor operations within hyperbolic space. Specifically, $v$ can function as learnable parameters in normalization operations, constraining the norm via $ \\\\sigma(v^T x) $ to prevent excessive growth of hyperbolic embeddings, or as bias terms added to $ x $. In later sections, the presentation is simplified to focus on the core linear transformation aspects (lines 283\\u2013284), as these additional operations do not influence the fundamental principles of our method.\\n\\n**2. Regarding Equation (4) and the usage of the first row of $ \\\\hat{M}^{(l)} $:**\\n\\nThe first row of $\\\\hat{M}^{(l)}$ provides a generalized formulation for our defined LT operation, ensuring the correct input-output matrix size. The seemingly \\\"missing\\\" usage arises because our focus was on presenting the key transformations (lines 283\\u2013284). Typically, the first row\\u2019s parameters support minor operations. We will clarify this in the revised manuscript for better transparency.\\n\\n**3.Regarding $K$ and its implementation:**\\n\\nYes, $ K $ is a learnable scalar parameter. To ensure the curvature remains negative (as required for hyperbolic space), we implement it as $ \\\\text{sigmoid}(K) + 0.5 $. 
This design also keeps curvature $ - K $ within an effective range of $[0.5, 1.5]$, which prior work has shown to be an ideal range for most hyperbolic models [1-4]. Additionally, this approach maintains numerical stability while satisfying the need for a heterogeneous space.\\n\\nThanks for bringing these points to our attention. We have revised the manuscript to include these clarifications in Appendix C2, making the technical details more transparent and accessible to readers.\\n\\n**4. Regarding no parameter decoupling strategy:**\\n\\nYes. In the aggregation process, all parameters are aggregated including the curvature scalar ensuring consistency with our approach. We will clarify this in the revised manuscript.\\n\\n&nbsp;\\n\\n---\\n\\n**W3:** Implementation of the graph classification task.\\n\\n**R:** Thank you for your comments. LGCN serves as the backbone for our graph learning framework, combining Lorentz linear layers (Equation 2) with graph aggregation operations, similar to how Euclidean counterparts like GCN and GIN integrate linear layers with graph aggregation. Each layer applies a Lorentz transformation followed by neighbor aggregation using the adjacency matrix to get the node representations. The \\\"3-layer hyperbolic encoder\\\" employs the same architecture with three stacked layers to learn node representations within each graph.\\nFor node classification, node representations are used directly, while graph classification employs mean pooling for graph-level representations.\\n\\nAs noted in Section 5.3, the parameter decoupling strategy remains valid since \\\"the aggregation operation does not involve any parameters.\\\" This allows us to directly apply our methods to the parameters of the Lorentz linear layers without modification.\\n\\nThank you for the suggestion to include more technical details in the main text. Due to time constraints and the page limit, moving more content to the main pages at this stage is challenging. 
*We will prioritize reorganizing the main text in the final version to better integrate these details.*\\n\\n&nbsp;\\n\\n---\\n\\n**W4:** Submit code as supplementary materials.\\n\\n**R:** Thank you for your suggestion. We have now included our code as part of the supplementary materials for easier access and review.\\n\\n&nbsp;\\n\\n---\\n**References:**\\n\\n[1] Fully hyperbolic neural networks. Chen, W., et al. (2021). ACL\\n\\n[2] Hyperbolic graph convolutional neural networks. Chami, I., et al. (2019). NeurIPS\\n\\n[3] HGCF: Hyperbolic graph convolution networks for collaborative filtering. Sun, J., et al. (2021). WWW\\n\\n[4] Discrete-time temporal network embedding via implicit hierarchical learning in hyperbolic space. Yang, M., et al. (2021). KDD\"}", "{\"title\": \"Response (1/4)\", \"comment\": \"Thank you for your valuable comments and suggestions. Below, we will summarize the concerns with \\\"**C:**\\\" and our responses with \\\"**R:**\\\"\\n\\n&nbsp;\\n\\n---\\n\\n**C1: Regarding the theoretical part** \\n\\n**R:** Thanks for your comments. To clarify, this theoretical analysis is not clamied as our main contribution, it serves to establish the mathematical rigor of our novel personalized federated learning strategy in hyperbolic space. Let us elaborate on the importance of our analysis\\n\\n\\n1. Corollary 1 is important because we are the first to apply Lorentz Neural Networks to the federated learning setting. This means that after each aggregation step, the parameters are updated, changing the model. The $\\\\mathbf{N}$ in our case should be $\\\\mathbf{M}_c - \\\\eta \\\\sum \\\\frac{|\\\\mathcal{D}_c|}{N} \\\\nabla \\\\mathbf{M}_c$, where $\\\\nabla \\\\mathbf{M}_c$ is the gradient of the shared parameters $\\\\mathbf{M}$, $N$ is the total number of samples. Due to hyperbolic space's non-Euclidean nature, parameter changes can directly affect the validity of output representations. 
Thus, ensuring output representations remain in the hyperbolic space after parameter updates is **important for the correctness of hyperbolic neural networks, an issue unique to this setting.** While not claimed as a core contribution, the proof of Corollary 1 is essential to address this challenge specific to the federated learning of hyperbolic models.\\n\\n2. The importance of other analysis: The goal of the remaining part is to show the rationale behind our proposed method from different perspectives, as stated in the paper. Our emphasis is on analyzing our approach from the perspective of hyperbolic geometry. For example, we perform a non-linear debiasing operation on the input data. Moreover, we **provide interpretable insights** drawing an analogy from special relativity, treating each client as a distinct space-time.\\n\\nIn the paper, we have added an analysis of the convergence analysis of our proposed method in Appendix B.5. We show that the decoupling strategy does not impede the convergence of the model. In future work, we plan to further investigate the specific impact of the hyperbolic curvature on model performance, such as its effect on generalization error and other metrics. We believe this would be an interesting theoretical research direction about how geometry can directly influence the models\\u2019 performance in various tasks.\"}", "{\"summary\": \"This paper introduces FlatLand, a novel personalized federated learning (PFL) approach that addresses data heterogeneity across clients by leveraging hyperbolic geometry, specifically Lorentz space. Unlike traditional PFL methods that operate in Euclidean space, FlatLand embeds each client's data in a tailored Lorentz space with different curvatures to better capture non-Euclidean properties of the data. 
Instead of explicitly calculating client similarities or using additional modules for personalization, FlatLand leverages the geometric properties of Lorentz space to handle heterogeneity naturally through different curvatures. Also, they use parameter decoupling for shared and local models, similar to some other personalized FL models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper presents an innovative approach to personalized federated learning by leveraging hyperbolic geometry, specifically through the use of Lorentz spaces with tailored curvatures for different clients. The theoretical foundation is well-developed, with mathematical proofs and insights connecting geometric properties to data heterogeneity. The experimental results show promising improvements over existing methods, particularly in low-dimensional settings, which could be valuable for resource-constrained environments. The parameter decoupling strategy based on geometric properties is elegant and eliminates the need for explicit client similarity calculations, which is a common computational burden in some of the other personalized federated learning approaches using clustering approaches.\", \"weaknesses\": \"The paper's evaluation primarily focuses on graph-based tasks, leaving questions about its generalizability to other types of data and models. While they mention the potential applicability to other domains, this claim remains unverified even in the experimental sense. The method introduces additional complexity through hyperbolic geometry and requires curvature estimation, which could make implementation and tuning more challenging in practice. The computational overhead of working in Lorentz space compared to Euclidean space is not thoroughly discussed, and the paper doesn't address potential numerical stability issues that often arise when working with hyperbolic spaces. 
Additionally, while the method shows improvements over baselines, some of the gains are modest, particularly in the graph classification tasks, suggesting that the added complexity might not always justify the performance benefits.\", \"the_main_concerns_of_the_paper_are_as_follows\": [\"The paper presents its parameter decoupling strategy as a novel contribution without comparing it with similar approaches in existing literature. The absence of comparisons with previous decoupling methods, particularly [A], makes it difficult to assess the true novelty and advantages of their approach.\", \"The paper lacks clarity in explaining crucial algorithmic details, particularly regarding curvature estimation. While curvature is central to the method's performance, the paper doesn't adequately explain how it's estimated, updated, or integrated into the training process. The absence of analysis on the impact of different curvature values and their stability leaves important implementation questions unanswered.\", \"The evaluation is heavily focused on graph-based tasks, with no experimental validation on other common federated learning applications like computer vision or natural language processing (such as FEMNIST). This narrow focus raises questions about the method's applicability and effectiveness in broader contexts, especially given the additional complexity introduced by hyperbolic geometry.\", \"The paper lacks a thorough analysis of computational overhead compared to Euclidean-based methods. There's no discussion of memory requirements for storing different Lorentz spaces or how the method scales with an increasing number of clients. The communication costs and practical implications in real-world deployments remain unexplored.\", \"[A] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In ICML, volume 139 of Proceedings of Machine\", \"Learning Research, pp. 2089\\u20132099. 
PMLR, 2021.\"], \"questions\": [\"In addition to the questions and concerns above, I have some other questions:\", \"Could you elaborate on how the curvature parameters are practically updated during the federated learning process? Specifically, what happens if the estimated curvature is unstable or changes significantly between rounds, and how do you ensure this doesn't negatively impact the model's convergence?\", \"Could you discuss any potential challenges or modifications needed when applying your method to these domains, particularly regarding the relationship between data heterogeneity and geometric properties in these different contexts?\", \"Could you provide insights into the scalability of your approach as the number of clients increases, especially considering the additional overhead of maintaining different Lorentz spaces? For instance, provide some results with training times and memory utilization.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your following-up questions.\\n\\n&nbsp;\\n\\n**Metric used in Participation Rate**: For the graph dataset, we are using classification accuracy, which is consistent with the main experiment table. For MNIST, we are using Accuracy (Acc) as the metric. \\n\\n&nbsp;\\n\\n\\n**How AUC is calculated?** We directly followed the computation method used in the PFLib library [1]. After examining the source code, the specific implementation is:\\n\\n```\\nauc = sklearn.metrics.roc_auc_score(y_true, y_prob, average='micro')\\n```\\n\\n&nbsp;\\n\\n**About the baselines**: \\nDue to time constraints, some baselines require hyperparameter tuning, and we are still conducting experiments. We will continuously update and include them in the final version. 
However, based on our presented experiments, we can observe that lowering the participation rate does not significantly impact our performance, which indicates that our method still works and performs well in this setting. Therefore, we believe this would NOT be a major concern for this work.\\n\\n&nbsp;\\n\\nThank you again for all your valuable suggestions and questions. Please do not hesitate to let us know if there are any other questions and concerns.\\n\\n&nbsp;\\n\\n---\", \"references\": \"[1] PFLlib: Personalized Federated Learning Algorithm Library.\"}", "{\"title\": \"Question about Participation Rate experiments\", \"comment\": \"Could you confirm what metric the Participation Rate results tables are showing? Is it classification accuracy? Also why are you quoting AUC and how is it computed for MNIST, given that it isn't a binary classification dataset? Could you also explain why the other baselines are not present in this Participation Rate table?\"}", "{\"comment\": \"Dear Reviewer KA7L,\\n\\n&nbsp;\\n\\nWe sincerely thank you for your efforts in helping us improve our paper! With only two days remaining, we are kindly reaching out to request your further feedback. **We believe we have thoroughly addressed all your concerns and respectfully hope you consider raising your score.**\\n\\nThank you once again for your efforts in ensuring the high standards of academic excellence. **We look forward to your response and remain available for any further discussions.**\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"title\": \"Response (4/4)\", \"comment\": \"**Q4: Does the method work for larger numbers of clients, without full client participation each round?**\\n\\n**R:** Thank you for the good question. We have conducted experiments on the image dataset to evaluate the results of these scenarios in the response of **C2**. 
\\n\\n&nbsp;\\n\\n---\\n\\n\\n**Q5: Statistically significant?**\\n\\n**R:** Our method's significance is validated using p-values. Additionally, the standard deviation values are shifted to two decimal places to save space. We will provide clarification on this in the revised manuscript for better understanding.\\n\\n&nbsp;\\n\\n---\\n\\n\\n**Q6: Embedding dimension?**\", \"r\": \"Thank you for pointing out. Yes, the embedding dimension is the hidden dimension of the model. We will further refine our writing and try to provide as much clear explanation as possible within the limited space.\"}", "{\"comment\": \"Dear Reviewer Dc6W,\\n\\n&nbsp;\\n\\nWe sincerely thank you for your positive feedback and efforts in helping us improve our paper! With only two days remaining before the deadline and having not yet received a response to our carefully formulated point-to-point replies, we kindly request your feedback. If you have any further questions or suggestions, we welcome the opportunity to discuss and address them.\\n\\n**We believe we have thoroughly addressed all the concerns raised in your initial review. And we would be grateful if you would consider raising your score accordingly.**\\n\\nThank you once again for your efforts in ensuring the highest standards of academic excellence. **We look forward to your response and remain available for any further discussions.**\\n\\n&nbsp;\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"comment\": \"I appreciate the authors' efforts for the detailed response and summary. I am more clear about the exp setup now. I personally do not hold objection for the paper. I think this work can inspire more ideas in federated hyperbolic learning problems.\"}" ] }
5YLsnsjgeC
VFDiff: SE(3)-Equivariant Vector Field Guided Diffusion Model for Target-Aware Molecule Generation in 3D
[ "Luoda Tan", "Jianting Liu", "Guanyu yue", "Quan Zou", "Dongsheng Cao", "xiangxiang Zeng", "Xiangzheng Fu" ]
Structure-based drug design (SBDD) is a key challenge in drug discovery that aims to generate small molecules capable of binding tightly to specific protein pockets. However, current diffusion models have focused on the complementarity of ligand molecules and protein pockets in physical space while ignoring the docking energy requirements, resulting in only generating suboptimal docking postures. In this paper, we present VFDiff, a novel SE(3)-equivariant diffusion model for 3D molecular generation, guided by vector fields derived from protein-ligand binding energy. In contrast to current diffusion models, VFDiff incorporates energy-based guidance in both forward and reverse processes to ensure ligand molecules are spatially complementary and energetically matched to their target pockets. Our approach includes three fundamental mechanisms: energy-planning, which adjusts diffusion trajectories based on energy gradients; force-guiding, which refines molecular generation; and position-tuning, which improves sampling accuracy. Extensive experiments on the CrossDocked2020 dataset demonstrate that VFDiff outperforms state-of-the-art methods, achieving superior binding affinity with an impressive Avg. Vina Score of up to -7.37, while maintaining competitive molecular properties and diversity. This work introduces a new framework for generating target-specific molecules with improved structural and functional fidelity, offering a significant advancement in SBDD.
[ "Diffusion Model", "Molecule Generation", "Structure-Based Drug Design" ]
Reject
https://openreview.net/pdf?id=5YLsnsjgeC
https://openreview.net/forum?id=5YLsnsjgeC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yuBKgTFMgn", "yH33shJl6z", "xrC6BBbwb4", "tjm2jivM7g", "s6RLnCq8YO", "oNBmOjQa8s", "l8JfBJPqTC", "fpEufsbZxs", "eQ4kJrkYWh", "dIvLBtp8PZ", "arn8wvC6dv", "akasGrWuZZ", "XOyHuKlA4P", "VAzPKqM5dz", "Rzxp01trFr", "RrxDmBRiKv", "Px8vVSL3cK", "PCqBgwbkLE", "OJjEsKUmpM", "MmZTRPgrjO", "MQvv0Flad6", "JyeGVVN2tt", "IlO1DNY35x", "ITfWW2Gqvi", "BcBvafIWK0", "79dqGPOnYr", "6ovjZKYSLa", "6Nagx9KfpK", "4QcmcNf0un" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732967883736, 1732679053786, 1733147241208, 1733287990626, 1734861772559, 1732870963628, 1730541362768, 1733046052500, 1733223048860, 1732176874014, 1733033766783, 1737523453202, 1732177070913, 1732508176715, 1732627728683, 1732984954287, 1732188202271, 1732880948195, 1733038429368, 1732958534193, 1733212908960, 1733028964271, 1732182910782, 1733021153531, 1733233201245, 1733221588402, 1730555221952, 1730693553436, 1733116764651 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_kczy" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Area_Chair_rAFx" ], [ "~Zhenkun_Huang3" ], [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_AuiW" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_kczy" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "~Zhenkun_Huang3" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "~Zhenkun_Huang3" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ], [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_kczy" ], [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_6obu" ], [ "ICLR.cc/2025/Conference/Submission1453/Reviewer_kczy" ], [ "ICLR.cc/2025/Conference/Submission1453/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I would like to thank the authors for their responses and efforts to address my concerns. While I appreciate the potential of this paper, I still have substantial reservations about its publication in its current state.\\n\\nMy main concern about reproducibility is partially addressed. The log file provided is consistent with the results reported in the paper. However, code necessary for reproduction is still missing.\\n\\nBesides, I agree with the Reviewer 6obu that the difference between VFDiff and IPDiff is small. The current manuscript and rebuttals tend to focus on favorable outcomes without adequately acknowledging the contributions and similarities to IPDiff. I believe a more detailed and honest discussion of how your work relates to prior methods would enhance the clarity and impact of your contributions.\"}", "{\"comment\": \"Dear Reviewer kczy,\\n\\nWe deeply appreciate your kind and thoughtful feedback, as well as your recognition of our work. 
Your affirmation that we have effectively addressed critical issues left by previous studies and proposed a model design for SBDD tasks from the perspectives of shape complementarity and energy matching, all while achieving outstanding performance under fundamental geometric constraints, means a great deal to us. Furthermore, we would like to emphasize the **position-tuning** module proposed in our paper. This novel component plays a crucial role in the success of the aforementioned improvements by effectively **refining the sampled data distribution**, **enhancing the accuracy of molecular conformations**, and **optimizing docking poses**. This technical component is remarkably **simple**, yet **logical** and **effective**, and we hope that this innovation will earn your recognition. (You may refer to the supplementary experiments provided in our response **A3** to **Reviewer 6obu** and our **response to the public comment** below.)\n\nWe are more than happy to address the minor concerns you have raised. TargetDiff[1] is the first work to introduce diffusion models into the structure-based drug design (SBDD) task. It defines the splits for training, validation, and test sets, as well as a series of evaluation metrics. To ensure a fair and impartial comparison of model performance, subsequent works, including but not limited to DecompDiff[2], IRDiff[3], and IPDiff[4], have claimed to follow the same setup as TargetDiff for training, testing, and evaluation. Therefore, the data presented in the tables of our paper are cited directly from the respective publications of these works (including IPDiff). \n\nThe results in Tables 1 and 2 of our paper were generated using **the same evaluation code** as that used in TargetDiff. We have also uploaded the **log files** from our testing process in the supplementary materials for your review. If you have any additional questions, we are more than willing to assist and provide clarification. 
\\n\\nOnce again, thank you for your time and effort. \\n\\nSincerely, \\n\\nThe Authors\\n\\n**Reference**\\n\\n\\n[1]: Guan et al., 3d Equivariant Diffusion for Target-aware Molecule Generation and Affinity Prediction, ICLR, 2023.\\n\\n[2]: Guan et al.,DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design, ICML, 2023.\\n\\n[3]: Huang Z. et al. Interaction-based Retrieval-augmented Diffusion Models for Protein-specific 3D Molecule Generation, ICML, 2024.\\n\\n[4]: Huang Z. et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models, ICLR, 2024.\"}", "{\"title\": \"Follow-up on Manuscript Review\", \"comment\": \"Dear Reviewer kczy\\uff0c\\n\\nI hope this email finds you well. As the discussion phase is approaching its conclusion in just one day, we would like to check if you might have any additional concerns or suggestions regarding the manuscript. We greatly value your feedback, which plays a crucial role in ensuring the quality and impact of our work.\\n\\nIf there are any outstanding points you wish to raise, we would be grateful to address them promptly. Your insights and recommendation are very much appreciated, and we look forward to your response.\\n\\nThank you very much for your time and kind support. Please do not hesitate to let us know if there is anything further we can assist with.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Supplementary Experiments on Torsion Angle JSD\", \"comment\": \"Dear Reviewer kczy,\\n\\nTo thoroughly address your concerns regarding the reasonableness of molecular conformations, we have added the distributions of the top-8 most frequent torsion angles between the reference and the generated molecules, as shown in the table below. 
\\n\\n**Torsion Angle JSD:**\\n\\n|Bond/frequency| AR|Pocket2Mol| TargetDiff|DecompDiff|IPDiff|VFDiff|\\n|-|-|-|-|-|-|-|\\n|CCCC(13.7%)|0.38|0.32|**0.31**|0.35|0.32|0.34|\\n|C:C:C:C(12.5%)|0.71|0.53|0.37|**0.24**|0.48|0.37|\\n|CCOC(5.3%)|0.37|0.37|0.34|0.34|0.35|**0.34**|\\n|CCCO(5.0%)|0.43|0.40|0.40|0.41|0.41|**0.40**|\\n|CCNC(4.7%)|0.41|0.43|0.41 |**0.39**|0.43 |0.42|\\n|C:C:N:C(4.7%)|0.68|0.54|0.42 |**0.23**|0.47 |0.40|\\n|C:C:C:N(3.1%)|0.66|0.53|0.45 |**0.31**|0.52 |0.43|\\n|C:N:C:N(2.9%)|0.78|0.55|0.49 |**0.27**|0.51 |0.48|\\n\\nCombined with Bond JSD and Angle JSD, it is evident that diffusion models significantly outperform autoregressive models in terms of conformation accuracy. Among diffusion models, DecompDiff, which is based on fragment generation, clearly outperforms other diffusion models. \\n\\nOverall, our proposed VFDiff achieves the second-best performance in terms of conformation reasonableness among diffusion models, behind DecompDiff, while being on par with TargetDiff (and significantly better than IPDiff). Most importantly, the primary goal of our model design is to improve molecular **docking affinity**, where we demonstrate a substantial performance advantage. Compared to IPDiff, DecompDiff, and TargetDiff, our model shows improvements of **15**%, **30**%, and **35**%, respectively, in the Vina Score metric. \\n\\nTherefore, we believe that the proposed energy-based shifted-diffusion and position-tuning mechanisms are of significant importance. \\n\\nOnce again, thank you for your insightful comments and continued support. \\n\\nSincerely,\\n\\nThe Authors\"}", "{\"metareview\": \"This paper proposes VFDiff, a diffusion model for structure-based drug design that incorporates energy-based guidance in both forward and reverse processes. 
The core technical contributions include a vector field guidance mechanism derived from protein-ligand binding energy, position-tuning during sampling to improve conformational accuracy, and energy-planning and force-guiding components to ensure molecules are both spatially and energetically matched to target pockets. Experiments on CrossDocked2020 demonstrate improved binding affinity scores compared to prior methods.\\n\\nThe work has several strengths, including a novel position-tuning mechanism that provides meaningful improvements in sampling accuracy, strong empirical results on docking scores and binding affinity metrics, clear experimental validation including detailed ablation studies, and detailed responses to reviewer concerns with additional experiments and analyses.\\n\\nHowever, significant weaknesses emerged during the review process. First, the technical novelty compared to prior work (particularly IPDiff) appears limited. As noted by Reviewer 6obu, core mechanisms like force-guiding and energy-planning appear largely adopted from IPDiff. While the authors' rebuttal acknowledges using the shifted-diffusion framework from prior work, the differentiation from IPDiff remains unclear despite their explanations. Second, there are serious reproducibility concerns - Reviewer kczy identified issues with bond/angle JSD calculations using reference distributions, code for training/inference was not provided during review, and public comments highlight difficulties reproducing key results. Third, the evaluation was incomplete, lacking important metrics like PoseCheck for validating molecular conformations initially. 
Bond angle distributions had to be recalculated after reviewer feedback, and questions remain about synthetic accessibility of generated molecules.\n\nBased on these points, the paper's primary technical contribution appears incremental over IPDiff - while the position-tuning mechanism is novel, this alone does not constitute sufficient advancement for publication. Multiple reproducibility concerns emerged during review that were not fully resolved, including discrepancies in reported metrics, lack of training/inference code, and unaddressed questions about result replication. Finally, the initial evaluation omitted important baseline comparisons and metrics, suggesting incomplete validation of the method's capabilities.\n\nWhile the authors provided detailed responses and additional experiments, the core issues around novelty and reproducibility remain. I recommend the authors: more clearly differentiate their technical contributions from IPDiff, provide complete code and model weights for reproduction, expand evaluation to comprehensively validate molecular quality, and consider combining with additional novel technical components. The paper shows promise but requires substantial revision to meet ICLR's standards for technical novelty and reproducibility.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised major concerns about technical novelty, reproducibility, and evaluation completeness.\n\nOne reviewer expressed serious concerns about similarity to IPDiff, particularly in the shifted-diffusion framework and energy guidance mechanisms. The authors responded by clarifying their novel contributions: energy-based SE(3)-equivariant vector fields, pre-planned modification paradigm, and MCMC sampling with position-tuning. While this explanation partially addressed originality concerns, the reviewer maintained that differences from IPDiff remained minimal.\n\nThe discussion around reproducibility was particularly notable. 
Multiple reviewers and public commenters highlighted difficulties in replicating results. The authors provided code for metric calculations and additional experimental results, including recalculated bond/angle JSD distributions and PoseCheck evaluations. However, they did not release complete training/inference code during the review period, citing plans to do so post-acceptance. This decision significantly impacted the ability to fully validate their claims.\n\nRegarding evaluation methodology, reviewers requested additional metrics and analyses. The authors responded with comprehensive new experiments, including PoseCheck metrics for structural validation, expanded JSD calculations for bond angles and torsions, and detailed ablation studies of the position-tuning mechanism. These additions strengthened the empirical validation but also revealed some inconsistencies with initially reported metrics.\n\nThrough the discussion, the position-tuning mechanism came to be recognized as the paper's most novel contribution. The authors provided detailed analyses showing its effectiveness, including ablation studies with different scaling coefficients and their impact on molecular conformations, which were appreciated by the reviewers. \n\nIn weighing these points for the final decision, the incomplete code release and remaining questions about reproducibility were particularly concerning. While the authors made substantial efforts to address reviewer comments with additional experiments, the core issues of technical novelty relative to IPDiff and result verification remained unresolved. The strong experimental results and novel position-tuning mechanism were positive factors, but ultimately insufficient to overcome these fundamental concerns for acceptance at ICLR.\"}", "{\"title\": \"Public Comment\", \"comment\": \"Dear Reviewer,\n\nI would like to ask whether you have been able to resolve the reproducibility issue with this paper. 
The insights presented in this work are impressive, but I have encountered difficulties in replicating the reported performance as well. Since the authors appear not to have made their responses publicly available, I am unable to access the additional implementation details to reproduce the reported metrics. I would be deeply grateful if the authors could provide the model weights and training/inference code.\n\nAdditionally, I noticed that the diffusion theory proposed in this paper is essentially the same as that of IPDiff [1], yet the authors do not seem to have explicitly acknowledged this. I believe it is necessary to properly cite the prior works in the diffusion framework in Section 4.2, as well as the theoretical proofs in the Appendix.\n\nThank you for your time and assistance.\n\n[1] Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models\"}", "{\"summary\": \"1.\\tThis work introduces VFDiff. By incorporating energy-based guidance in both forward and reverse processes, VFDiff ensures ligand molecules are spatially complementary and energetically matched to their target pockets. VFDiff achieves SOTA performance in SBDD scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"VFDiff explicitly considers the energy complementarity between protein and ligand during the diffusion process. Therefore, it can generate three-dimensional molecules with better affinity in the process of generating protein-ligand complexes. This innovation is commendable.\", \"weaknesses\": \"Recently, many pocket-based three-dimensional molecular generation models have claimed to achieve state-of-the-art (SOTA) levels of affinity between protein and ligand when generating three-dimensional molecules. However, in most targets, the generated molecules often contain substructures that are difficult to synthesize, and collisions between the protein and ligand are very common. 
The authors did not provide enough molecular examples to demonstrate that these issues have been adequately addressed.\", \"questions\": \"1. When predicting forces with VFNet, it is necessary to know specific molecular information such as atom types and bond types. However, during sampling, VFDiff can only clearly provide atom types and bond types at the final step. I believe the authors need to clarify how they can provide accurate force information and position tuning information without knowing the molecular graph information in advance.\\n2. I look forward to the authors providing the code for reproducing the results during the review period to fairly evaluate whether the molecules generated by VFDiff have issues with synthesis and atomic collisions.\\n3. The authors should provide relevant results on the sampling efficiency of VFDiff.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your understanding. Please be patient as we are working hard to organize the code and provide detailed annotations.\\n\\nWe promise to release the complete code along with comprehensive annotations after the paper is accepted. We look forward to receiving more of your valuable feedback at that time.\"}", "{\"comment\": \"Dear Reviewer kczy:\\n\\nThank you for pointing out the critical issues. We will reassess bond JSD and other metrics and include the calculation code. We will reply to you at the earliest possible moment once the results are available. We sincerely appreciate your meticulous review.\\n\\nSinserely,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer kczy (Part1/2)\", \"comment\": \"We sincerely thank you for your time and efforts in reviewing our paper, and your valuable feedback. We provide detailed answers in the following.\\n\\n**Q1: The unique contributions relative to previous models. 
And what is the novelty in Section 4.2 comparing to IPDiff?**\\n\\n**A1: KEY IDEA**: Inspired by the goal of drug discovery (generating small molecules with high affinity for a target), we redefine the noise addition trajectory for drug molecules. Unlike previous models, at each step of the noise addition process, the direction of the perturbation is guided by the molecular force field (Vector Field in our paper), **moving in the direction of the fastest decrease in affinity**. We demonstrate through experiments that this energy-aware trajectory outperforms previous standard processes and other path-correction methods (such as IPDiff).\\n\\n**Briefings in IPDiff**: The authors attempt to incorporate the relationship between protein-ligand interactions into the trajectory alterations of the forward process in their diffusion model, which is indeed a motivated approach. Specifically, they compute an interaction representation \\n$F\\\\in R^{N\\\\times D} $\\n (where \\n$N$ denotes the number of atoms and \\n$D$ the dimension of the interaction representation) for each atom in the ligand. However, the ligand\\u2019s coordinate matrix \\n$X\\\\in R^{N\\\\times 3}$ does not match the dimensions of \\n$F$, making it impossible to directly manipulate the coordinates. To address this, the authors use a linear layer to reduce \\n$F$ to a 3-dimensional shifted-bias vector, which they then incorporate into the control of the trajectory.\", \"it_is_important_to_note_the_following\": \"**Weakness in IPDiff**: The linear transformation used is not an SE(3)-equivariant operation. As a result, although IPDiff adopts an EGNN model, the non-equivariant operation leads to ligand coordinate transformations during the forward and backward processes that do not satisfy the principles of equivariance. 
This is both problematic and inelegant.\\n\\n**Contribution in VFDiff**: In contrast, VFDiff simulates and computes an SE(3)-invariant binding energy score using VFNet, and the shifted-bias derived through differentiation maintains SE(3)-equivariance. We explain this aspect in Section 4.2.1 and provide a proof in Appendix C4. Please refer to these sections for further details.\\n\\n**Weakness in IPDiff**: While the protein-ligand interaction representations \\n$F$ in IPDiff are meaningful (since \\n$F$ is learned through a supervised docking score prediction task), the shifted-bias **s**, obtained through an unsupervised linear transformation, lacks a clear and guaranteed interpretation.\\n\\n**Contribution in VFDiff**: In our approach, the gradient obtained by differentiating the docking score with respect to the molecular coordinates is a force field (referred to as a \\u201c**Vector Field**\\u201d in our paper), indicating the forces exerted on the molecule within the protein pocket. Furthermore, the opposite direction of the gradient points towards the direction in which the docking score decreases. Compared to \\n**s** in IPDiff, the **s** generated by our method has a tangible physical interpretation.\\n\\n**Q2: What is the rationale for scaling the noise added to coordinates in VFNet training with $\\\\eta \\\\in U(0,0.5)$?**\\n\\n**A2:** Thank you for your careful observation. In our paper, the diffusion model is trained using the $X_0$-predict paradigm. The average prediction error on the validation set, $|\\\\hat{X}_{0|t,i} - X_0,i|$, where $t \\\\in U(1,1000)$, is approximately $\\\\eta \\\\times \\\\epsilon$ ($\\\\epsilon \\\\sim \\\\mathbf{N}(0, I)$) when $\\\\eta = 0.1$. To ensure generalization, we set the maximum value of $\\\\eta$ to five times this value, i.e., 0.5, during the training of VFNet.\\n\\n**Q3: Could the authors provide Jensen Shannon divergences of bond angles distributions?**\\n\\n**A3:** Thank you for your suggestion. 
We have added experiments on the Jensen-Shannon divergences of bond angle distributions. Specifically, we selected the five most frequently occurring bond angles in the test set: CCC (18.1\\\\%), C:C:C\\n(16.0\\\\%), CCO (9.5\\\\%), C:C:N\\n(5.7\\\\%), and CCN (5.0\\\\%). The table below presents the results, with the best value highlighted in bold.\\n\\n|Bond| AR|Pocket2Mol| TargetDiff|DecompDiff|IPDiff|VFDiff|\\n|-|-|-|-|-|-|-|\\n|CCC|0.372|0.380|0.345|**0.339**|0.392|0.399|\\n|C:C:C|0.572|0.480|0.283|0.255|0.373|**0.247**|\\n|CCO|0.477|0.475|0.440|**0.390**|0.490|0.473|\\n|C:C:N|0.537|0.506|0.454|**0.429**|0.517|0.447|\\n|CCN|0.477|0.443|0.437 |**0.418**|0.462 |0.433|\"}", "{\"comment\": \"Hi, zhenkun,\\n\\nIn our manuscript, we have explicitly committed to releasing the code and model weights promptly after the paper is accepted. The provided code is intended to address the reviewers' concerns. If you are interested in our work, we would be happy to discuss it with you after the review process concludes. Thank you for your comments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer kczy (Part 2/2)\", \"comment\": \"**Q4: Could the authors compare the generation efficiency of VFDiff to baseline methods?**\\n\\n**A4:** Thank you for your suggestion. To analyze the sampling efficiency of our model, we compared the average time consumption for generating 100 valid molecules among five baseline methods: AR, Pocket2Mol, TargetDiff, DecompDiff, and IPDiff. The results are shown in the table below, with the unit being seconds per 100 molecules. 
The results show that our model sacrifices a small amount of time in exchange for significant performance improvement, making this **trade-off worthwhile**.\n\n| AR| Pocket2Mol| TargetDiff| DecompDiff| IPDiff| VFDiff|\n|-|-|-|-|-|-|\n|7698|2513| 3396|6174| 4284 | 4636|\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Dear Reviewer,\n\nThank you very much for your thoughtful feedback and for recognizing our efforts to address your concerns. We greatly appreciate your adjustment in score and your acknowledgment of the improvements made, particularly regarding the new analysis of position tuning.\n\nTo provide a detailed explanation of the conceptual similarities with IPDiff, we have added the following clarification: \n\nShifted-diffusion is currently a well-established framework [1][2]. Inspired by [2], [3] was the first to apply the shifted-diffusion architecture in the SBDD (Structure-Based Drug Design) task. They attempted to incorporate protein-ligand interaction information into the noise process of ligand molecules to enhance docking performance during sampling. However, successfully applying shifted-diffusion to molecular generation is not straightforward, as IPDiff failed to satisfy fundamental **geometric constraints** and to ensure **reasonable physical significance**. Additionally, we have pointed out that IPDiff **fully adopts** the modeling approach proposed in [2] (**noting that IPDiff neither cites [2] nor mentions this in its paper**). We believe it is necessary to highlight this to prevent readers from misjudging IPDiff's technical contributions.\n* We would like to emphasize that our method is fundamentally based on **pocket energy perception**, which ensures the entire process maintains both equivariance and rationality\u2014an aspect that was not achieved in previous works. 
\\n* We proposed a novel modeling approach (a paradigm based on **pre-planned** modifications) and offered unique insights into the choice of the shifted bias (an energy-based **SE(3)-equivariant vector field**). \\n* Unlike the approaches mentioned above [1][2][3], where the shifted-bias is either a constant with different scaling factors or a variable obtained through linear transformation, we proposed using the MCMC method to sample the shifted-bias at different time steps and introduced the **position-tuning** module to enhance the **sampling accuracy**. \\n\\nThese core principles have allowed us to achieve significant advancements, with experimental results substantially surpassing those of the prior state-of-the-art (SOTA) models, including IPDiff.\\n\\nAdditionally, we acknowledge your point regarding the stylistic similarities to IPDiff. We are committed to revising our manuscript\\u2019s overall presentation to ensure it is clearly distinct in appearance and aligns with the novel contributions of our approach.\\n\\nThank you again for your valuable input, and we look forward to further refining our work in response to your suggestions.\\n\\nBest regards,\\n\\nThe Authors\\n\\n**Reference**\\n\\n[1]: Zhou Y, Liu B, Zhu Y, et al. Shifted diffusion for text-to-image generation, CVPR, 2023.\\n\\n[2]: Zhang Z. et al. ShiftDDPMs: Exploring Conditional Diffusion Modelsby Shifting Diffusion Trajectories, AAAI, 2023\\n\\n[3]: Huang Z. et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models, ICLR, 2024.\"}", "{\"comment\": \"Thanks for the authors' reply, and I do appreciate the authors\\u2019 efforts in improving IPDiff for better performance. I decide to adjust my score from 3 to 5. However, I still hold some concerns. 
From my point of view, although VFDiff addressed some problems of IPDiff, the overall novelty is relatively weak.\\nA minor concern is that the JSD of bond distance, as reported in Table 1, is quite low for both IPDiff and VFDiff. However, when I used the generated molecules provided in the supplementary material to calculate JSD for bond, some of them are inconsistent with the result reported in the paper. It would be helpful if the author could provide the code for JSD calculation.\"}", "{\"comment\": \"Dear Reviewer kczy,\\n\\nWe sincerely apologize for not being able to upload the reproduction code in a timely manner. Since the submission portal is now closed, we have uploaded the code for JSD calculation to an anonymous GitHub platform for your review: https://anonymous.4open.science/r/TestVFDiff-E477/README.md You can follow the instructions in the README file to complete the reproduction process.\\n\\nWe greatly appreciate your suggestions regarding the discussion of our work in relation to IPDiff and will actively incorporate them. In the revised version, we will add the following discussion to help readers properly understand both our contributions and the aspects borrowed from related work.\\n\\nIn the *Related Work* section, we will add a new subsection on **shifted-diffusion**, with the following details: \\n\\n[1][2] were the first to propose the shifted-diffusion framework. They designed a more effective forward process by utilizing given conditions to form a new type of conditional DDPMs, benefiting from this approach. [1] attempted to control the noise trajectories of different digits on the MNIST dataset to improve the quality of generation. [2], a multimodal model, introduced textual conditional information into the image noise process, thereby altering the prior distribution during sampling and improving sampling efficiency. 
\\n\\nInspired by [1], [3] was the **first to apply** the shifted-diffusion architecture in the SBDD (Structure-Based Drug Design) task. They attempted to incorporate protein-ligand interaction information into the noise process of ligand molecules to enhance docking performance during sampling. \\n\\nWhile [3] achieved promising results, there remain two major issues: \\n1. The introduced information must be reduced to 3D via linear transformation, which violates the fundamental geometric equivariance constraints in molecular generation. \\n2. The interpretability of the unsupervised dimensionality reduction process is problematic. The meaning of the reduced shifted-bias cannot be clearly explained, limiting its interpretability. \\n\\nBuilding on the insights from [1][2][3], we propose VFDiff, a 3D equivariant molecular generation framework guided by energy transformations, to address these challenges. Starting from the design concept of **energy matching**, we have re-examined the shifted-diffusion framework. We believe that the noise trajectory of molecules should shift in the direction that maximizes the increase in binding energy, while also satisfying the requirement of equivariance (this is our **conceptual contribution**). Next, we present our **technical contributions**. In the marginal distribution design, unlike the guiding approaches of [1][2][3], we introduce a pre-plan guiding paradigm, where guidance is incorporated prior to data scaling. For guidance computation, we propose using an MCMC sampling method optimized by **position-tuning** to compute the vector field between molecular conformations at different time steps and the protein pocket. This ensures more precise guidance during the sampling process and more guaranteed interpretation.\\n\\nWe have discussed the development of shifted-diffusion and the unique contributions of each paper in the aforementioned *Related Work* section with a **honest and impartial attitude**. 
Additionally, we have pointed out that IPDiff **fully adopts** the modeling approach proposed in [1] (**noting that IPDiff neither cites [1] nor mentions this in its paper**). We believe it is necessary to highlight this to prevent readers from misjudging IPDiff's technical contributions. \n\nThank you again for your valuable suggestions, and we wish you a wonderful Thanksgiving!\n\nSincerely,\n\nThe Authors\n\n**Reference**\n\n[1]: Zhang Z. et al. ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories, AAAI, 2023.\n\n[2]: Zhou Y, Liu B, Zhu Y, et al. Shifted diffusion for text-to-image generation, CVPR, 2023.\n\n[3]: Huang Z. et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models, ICLR, 2024.\", \"title\": \"Response to Reviewer kczy\"}", "{\"title\": \"Response to Reviewer 6obu (Part 2/2)\", \"comment\": \"* To further illustrate the impact of $c$ on the sampling results in the test set, we extended our comparative experiments. The results are summarized in the table below.\n\n|Methods| Vina Score(\u2193)| Vina Min (\u2193)| QED(\u2191)| SA(\u2191)|\n|-|-|-|-|-|\n|c=0.1|-6.55| -7.24| 0.49|0.62|\n|c=1|-7.01|-7.96|0.52|0.58|\n|**c=10**|**-7.37** |**-8.18**|**0.54**|0.57|\n|c=20|-6.76|-7.75|0.53|0.57|\n|**Reference**|-6.36|-6.71|0.48|**0.73**|\n\n* The experiments indicate that when $c = 0.1$, the overall results are closest to the reference, further confirming the validity of our earlier hypothesis. Interestingly, the best performance was achieved at $c = 10$, despite the fact that in **table 2**, the MSE is relatively large for this value of $c$. We believe a possible explanation lies in the unordered nature of graph structures. Specifically, each node in a graph has no inherent ordering. For instance, the six carbon atoms in a benzene ring are entirely equivalent. 
However, when training the diffusion model to compute positional loss, we must arbitrarily assign a numbering to the carbon atoms (e.g., by traversing clockwise as 1, 2, 3, 4, 5, 6). If the same benzene ring is rotated 180 degrees counterclockwise around its center, the sequence becomes 4, 5, 6, 1, 2, 3. While these represent two entirely equivalent benzene rings in terms of molecular conformation, the positional loss calculation will yield a very large value, as each atom appears to have moved to the opposite side. From this perspective, we believe the results in tables (2) and (3) are not contradictory.\n* The average loss $| X_{0,i}-\hat{X}_{0,i} |$ on the validation set is approximately $\eta \times \epsilon$ ($\epsilon \sim \mathbf{N}(0, I)$) when $\eta = 0.1$. To ensure generalization, we set it to five times this value when training VFNet.\n\nWe hope this explanation helps to resolve your confusion and provides deeper insight into our position-tuning approach. Please let me know if any further clarification or adjustments are needed.\n\n**Q4: Evaluation methodology**\n\n**A4**: Thank you for your suggestion.\n* We have added two additional metrics from PoseCheck [1], *Steric Clashes* and *Strain Energy*, to further evaluate the quality of the generated molecular conformations. The table below presents a comparison with the baselines. It can be observed that VFDiff achieves excellent performance on the *Steric Clashes* metric and shows nearly a 30% improvement in the *Strain Energy* metric compared to the other two diffusion-based models.\n\n|Methods|clashes (-)| energy (-) |\n|-|-|-|\n|TargetDiff |10.4|1410.2| \n|IPDiff|8.7| 1283.7| \n|VFDiff |9.1|1028.6| \n* We have revised the descriptions of *QED* and *SA* in the manuscript to ensure consistency in the conveyed information.\n* In the paper, we reported the Jensen-Shannon Divergence (JSD) of bond length distributions relative to the test set distribution. 
Violin plots for evaluation metrics such as *QED*, *SA*, and *Vina Score* have also been added to **Appendix F** to better help you understand the distribution of the generated data. Additionally, we have included the JSD of bond angles (please refer to our response to **Reviewer kczy, A3**) to better illustrate the accuracy of molecular conformations. \n\n[1] PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses, NeurIPS Workshop, 2023.\n\n**Q5: Why do you use \"p\" in eq 6 and \"q\" in eq 7?**\n\n**A5**: The variable $q$ describes the forward noise addition process, while $p$ describes the reverse denoising process. Therefore, $p$ in Equation (6) should be replaced with $q$. Thank you for pointing out this typo.\n\n**Q6: Algorithm 1 (blue part): what is the difference between $\epsilon_0$ and $\epsilon_1$?**\n\n**A6**: $\epsilon_0$ and $\epsilon_1$ are two independent samples drawn from a standard normal distribution. Using the reparameterization trick, we utilize $\epsilon_0$ and $\epsilon_1$ to perform sampling for the distributions in Equations (6) and (7).\n\n**Q7: Technical details: how did you assign bonds? How many molecules per input did you sample? How do you define pocket? What is the size distribution?**\n\n**A7**: We follow the paradigm of TargetDiff [2] and IPDiff [3] by generating the coordinates and types of ligand atoms, then using OpenBabel to generate the bonds. For a fair comparison, we adhere to the same settings as TargetDiff regarding the sampling of pocket positions and the number of atoms. According to the dataset statistics defined by TargetDiff, the number of atoms is positively correlated with the pocket space, and the data distribution can be described by a normal distribution.\n\n[2]: Guan et al., 3d Equivariant Diffusion for Target-aware Molecule Generation and Affinity Prediction, ICLR, 2023.\n\n[3]: Huang Z. et al. 
Protein-ligand interaction prior for binding-aware 3d molecule diffusion models, ICLR, 2024.\n\nIf there is anything unclear or not well-explained, further questions are welcome. We look forward to your feedback!\"}", "{\"comment\": \"Hi, Zhenkun,\n\nThank you for your interest in our work. We have made our responses to the reviewers publicly available, and you can find relevant information there. Regarding the reproduction of experimental results, we have uploaded all information about the generated molecules, visualization code, and log files retained during testing in the **supplementary materials**. The data in the tables of the paper were obtained by testing with the code provided by TargetDiff. We plan to release our model code once the paper is accepted. \n\nShifted-diffusion is currently a well-established framework and not the work of any single paper [1][2][3]. You can refer to the references we provided, as **each modeling approach differs**. However, successfully applying shifted-diffusion to molecular generation is not straightforward, as it requires satisfying fundamental **geometric constraints** and ensuring **reasonable physical significance** (please refer to our response **A1** to reviewer **kczy**).\n\nFor the marginal distribution design in the shifted-diffusion method discussed in Section 4.2, we proposed a novel modeling approach (a paradigm based on **pre-planned** modifications) and offered unique insights into the choice of the shifted bias (an energy-based **SE(3)-equivariant vector field**). Unlike the approaches mentioned above, where the shifted-bias is either a constant with different scaling factors or a variable obtained through linear transformation, we proposed using the MCMC method to sample the shifted-bias at different time steps and introduced the **position-tuning** module to enhance the **sampling accuracy**. 
This allows for more precise control of the offset in each denoising step, and the process adheres to SE(3)-equivariance. Regarding the proof of the formulas, we will revise the references to related work in the updated version (a minor typo in Appendix C3) and add a detailed explanation of this technique in the *Related Work* section. We hope our response helps you better understand the related works and our **unique contributions**. \n\nThank you for your comments and support.\n\nSincerely,\n\nThe Authors\n\n**Reference**\n\n[1]: Zhou Y, Liu B, Zhu Y, et al. Shifted diffusion for text-to-image generation, CVPR, 2023.\n\n[2]: Zhang Z. et al. ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories, AAAI, 2023.\n\n[3]: Huang Z. et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models, ICLR, 2024.\"}", "{\"comment\": \"Dear Reviewer kczy,\n\nOn behalf of all the authors, I would like to extend our sincerest greetings to you and express our heartfelt gratitude for the time and effort you have dedicated during the review process. As the discussion phase is coming to an end, we are keen to ensure that all your concerns have been fully addressed. We sincerely look forward to your feedback and further support. \n\nFinally, we wish you a wonderful Thanksgiving holiday! 
\n\nWarm regards, \n\nThe Authors\"}", "{\"title\": \"To Reviewer kczy\", \"comment\": \"Dear Reviewer kczy,\n\nI sincerely apologize for disturbing you, but we are in great need of your support. We have done our utmost to address your concerns and implement the suggested improvements.\n\nWe genuinely look forward to receiving your response, and on behalf of all the authors, I would like to express our heartfelt gratitude to you.\n\nBest regards,\n\nThe Authors\"}", "{\"title\": \"Inquiries about Reproducibility Issues\", \"comment\": \"Dear Authors,\n\nI sincerely appreciate the time and effort the authors have dedicated to providing the code for calculating the metrics. However, it appears that the provided code only supports metric calculation and is derived from the TargetDiff repository, rather than being suitable for reproducing the generated results. Since the reproducibility of generated results is a critical aspect in evaluating generative models, I would be grateful if the authors could further provide the model weights and the training/inference code to enhance the reproducibility of this work. When I attempted to replicate the work following the details described in the paper, I was unsuccessful, possibly due to the absence of certain technical details.\"}", "{\"title\": \"Response to Reviewer 6obu (Part 1/2)\", \"comment\": \"We sincerely thank you for your time and effort in reviewing our paper and for your valuable feedback. We provide detailed answers in the following, and the **revised text in the manuscript is denoted in red**.\n\n**Q1: The work looks very similar to IPDiff. Technical mechanisms like force-guiding and energy-planning are claimed to be the contributions of this paper, while they are adopted from IPDiff.**\n\n**A1**: Thank you for your thoughtful feedback. 
We sincerely appreciate the opportunity to clarify our contributions and provide additional context regarding our work.\\n\\nFirst, controlling the generation of diffusion models using shifted-bias is a common practice in the image and text domains[1], and therefore, it is not a technique unique to IPDiff.\\n\\nSecond, our core idea is fundamentally different from previous works. In the image domain, the primary goal is to improve generative performance by introducing a shifted-bias to modify the final distribution of the forward process, aligning it as closely as possible with the prior distribution. In IPDiff, the authors attempt to incorporate protein-ligand interactions as prior information (shifted-bias) into both the forward and backward processes of the diffusion model. However, this approach faces significant challenges, as outlined in our **A1** to **Reviewer kczy**, specifically in the **Briefings in IPDiff** and **Weakness in IPDiff** sections.\\n\\nIn contrast, VFDiff proposes constraining docking energy changes during the forward process without altering the final distribution. Using VFNet, we compute the affinity score to derive the energy gradient and the force field acting on the molecule within the protein pocket (two representations of the same concept). During the forward process, our approach leverages docking energy guidance, where the shifted-bias corresponds to the energy gradient. We term this **energy-planning**. In the backward process, molecular formation is influenced by the protein pocket's force field, which we call **force-guiding**. Compared to IPDiff\\u2019s non-equivariant prior-shifting bias obtained through unsupervised linear transformations, our method maintains SE(3)-equivariance in molecular coordinates and invariance in molecular properties during both forward and backward processes, making it more elegant and interpretable.\\n\\nThird, both IPDiff and VFDiff fundamentally belong to the category of shifted-diffusion models. 
Structurally, we referenced these prior works, but the methods of trajectory modification via shifted-bias differ. IPDiff adopts a post-hoc modification paradigm, whereas we employ a pre-planned modification approach.\n\nThank you again for your thorough review. We hope our response helps clarify both the similarities and unique contributions of our work compared to previous studies. Based on the points outlined above, we have revised our manuscript accordingly.\n\n[1]: Zhou Y, Liu B, Zhu Y, et al. Shifted diffusion for text-to-image generation, CVPR, 2023.\n\n**Q2: Style of the paper**\n\n**A2**: Thank you for your careful review. Based on your suggestion, we have revised the relevant statements in the paper accordingly. Additionally, we conducted a thorough review of other sections and made further improvements to enhance clarity and precision.\n\n**Q3: Details in position-tuning**\n\n**A3**: Thank you for your recognition and curiosity about the position-tuning method we proposed. We are delighted to provide a detailed explanation to address your concerns.\n* Since our model employs the $X_0$-predict paradigm to calculate the distribution of the molecule at the previous timestep, we aim to make the predicted $X_0$ as accurate as possible. In VFDiff, the value of $(\hat{X}_{0|t} - X_0)^2$ on the validation set shows a significant positive correlation with $t$. The table below lists the mean squared error (MSE) across $t$ values ranging from 1 to 1000.\n\n|(Time) |1 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 |\n|-|-|-|-|-|-|-|-|-|-|-|-|\n|(Loss)| 0.00| 0.01 | 0.01| 0.06 | 0.08 | 0.30 | 0.52|0.81 | 1.39 | 2.06 | 2.46|\n\n* To validate the hypothesis described in lines 223\u2013225 of the paper, we conducted the following ablation experiment. 
We loaded the pretrained weights of the diffusion model and VFNet from VFDiff and measured the average MSE of position-tuning over the validation set under different values of the scaling coefficient (called $c$ in this context). We found that the MSE is minimized when $c = 0.1$, which supports the hypothesis in lines 223\u2013225. Beyond this value, the molecular conformation deviates significantly from the original structure. **We have already included this part of the experiment in the manuscript. Please refer to it.**\n\n|(Scaling coefficient)| $c=0$| $c=0.05$ | $c=0.1$ | $c=1$ |$c=10$|\n|-|-|-|-|-|-|\n|(Position loss)| 0.6995| 0.6973 | 0.6876| 0.7798 | 10.464|\"}", "{\"comment\": \"Dear Reviewer kczy,\n\nThe code we prepared has been fully debugged and is now available for your review. We look forward to your further feedback and suggestions and will respond to your inquiries promptly. Thank you for your time and effort. \n\nSincerely, \n\nThe Authors\"}", "{\"title\": \"The updated results\", \"comment\": \"Dear Reviewer kczy,\n\nThank you very much for pointing out this critical issue. We have recalculated the Bond JSD and Angle JSD based on the data distribution in the CrossDocked2020 test set, focusing on the eight most frequent bond types and the five most frequent bond angles in the test set. The notebook with the saved results has been uploaded: [https://anonymous.4open.science/r/Test-E48F/README.md](https://anonymous.4open.science/r/Test-E48F/README.md). The results are presented in the table below. 
\n\n**Bond JSD:**\n\n|Bond/frequency| AR|Pocket2Mol| TargetDiff|DecompDiff|IPDiff|VFDiff|\n|-|-|-|-|-|-|-|\n|CC (29.6%)|0.61|0.49|0.37|**0.33**|0.39|0.44|\n|C:C(20.7%)|0.45|0.41|0.26|0.24|0.32|**0.20**|\n|CO(13.9%)|0.49|0.45|0.42|**0.35**|0.49|0.39|\n|CN(10.1%)|0.47|0.42|0.36|**0.34**|0.42|0.37|\n|C:N(8.8%)|0.55|0.48|0.23 |**0.23**|0.38 |0.25|\n|OP(4.5%)|0.53|0.81|0.44|**0.45**|0.48 |0.47|\n|C=O(4.2%)|0.56|0.51|0.46 |**0.39**|0.43 |0.45|\n|C=C(1.7%)|0.56|0.54|**0.50**|0.54|0.57 |0.52|\n\n**Angle JSD:**\n\n|Angle/frequency| AR|Pocket2Mol| TargetDiff|DecompDiff|IPDiff|VFDiff|\n|-|-|-|-|-|-|-|\n|CCC(18.1%)|0.372|0.380|0.345|**0.339**|0.392|0.399|\n|C:C:C(16.0%)|0.572|0.480|0.283|0.255|0.373|**0.247**|\n|CCO(9.5%)|0.477|0.475|0.440|**0.390**|0.490|0.473|\n|C:C:N(5.7%)|0.537|0.506|0.454|**0.429**|0.517|0.447|\n|CCN(5.0%)|0.477|0.443|0.437 |**0.418**|0.462 |0.433|\n\nIn our response A4 to Reviewer 6obu, we tested the results of **PoseCheck** (this test took a considerable amount of time, so we were unable to test more baseline methods). The results are shown in the table below:\n\n|Methods|clashes (-)| energy (-) |\n|-|-|-|\n|TargetDiff |10.4|1410.2| \n|IPDiff|8.7| 1283.7| \n|VFDiff |9.1|1028.6| \n\n\nBased on the above test results, we believe that VFDiff does not have issues with inaccurate molecular conformations. However, as the testing method has changed, we will adjust the corresponding descriptions of the results in the paper for the sake of scientific rigor and reselect the examples for presentation.\n\nFinally, on behalf of all the authors, please allow me to express our heartfelt gratitude to you. Your meticulous and rigorous scientific attitude and the suggestions you provided are invaluable not only to us but also to other members of the molecular generation community. 
Thank you for your dedication during this period.\n\nSincerely,\n\nThe Authors\"}", "{\"comment\": \"I appreciate the effort of the authors in addressing my concerns. After checking the evaluation code the authors provided, I came to know that the problem is not introduced by the authors. Instead, it is the reliance on the reference distribution provided by TargetDiff, which differs from the true distribution of the CrossDocked2020 test set, that leads to a consistent underestimation of the JSD values for all methods. For example, the bond JSD of 'C=C' between Pocket2Mol and this reference distribution is 0.292. To ensure the results accurately reflect the true performance of the models, I strongly recommend recalculating the reference distribution based on the CrossDocked2020 test set and reevaluating the bond and angle JSD metrics.\n\nUnfortunately, the preliminary results shared by the authors, along with this corrected perspective, deepen my concerns about the rationality of the generated conformations. The visualization in Figure 6 also suggests that the generated poses may lack structural coherence or alignment with expected physical and chemical constraints. To address this comprehensively, a more thorough analysis of the validity of the generated molecular conformations is required. Employing tools such as PoseCheck [1] or PoseBuster [2] could provide valuable insights. I look forward to seeing these additional evaluations and updates in future revisions.\n\n[1] Harris, C., Didi, K., Jamasb, A., Joshi, C., Mathis, S., Lio, P., & Blundell, T. (2023). Posecheck: Generative models for 3d structure-based drug design produce unrealistic poses. In NeurIPS 2023 Generative AI and Biology (GenBio) Workshop.\n\n[2] Buttenschoen, M., Morris, G. M., & Deane, C. M. (2024). PoseBusters: AI-based docking methods fail to generate physically valid poses or generalise to novel sequences. 
Chemical Science, 15(9), 3130-3139.\"}", "{\"summary\": \"The authors propose a diffusion model for structure-based drug design endowed with an affinity-based guiding strategy. To do this, the authors include three components: force-guiding in the diffusion process, and energy-planning and position-tuning in the denoising process. The authors demonstrate that the generated molecules have superior docking scores and perform on par with other methods in terms of other metrics.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. I find the proposed position-tuning mechanism very interesting and original\n2. The generated samples have better Vina scores than other state-of-the-art methods\", \"weaknesses\": \"Major:\n1. The work looks very similar to IPDiff. Many parts of text, equations, figures and overall structure are almost copy-pasted from IPDiff.\n2. Technical mechanisms like prior-shifting and energy-planning are claimed to be the contributions of this paper, while they are adopted from IPDiff.\n3. I believe that the style of this paper can be significantly improved:\n- 3.1 The paper contains many instances of unclear language, grammar errors, and vague terminology. For instance, the meaning of the terms \"spatial complementarity\" (line 524) and \"conditional prior\" (line 144) is unclear to me.\n- 3.2 The authors often use overly promotional language, which may detract from the objectivity of the work. For instance, \"contributions\" 2 and 3 are details of the method itself, which is already presented as contribution 1. Effectively I see two points to provide here: 1) \"new\" method with affinity guidance, 2) state-of-the-art performance.\n4. Evaluation methodology:\n- 4.1 In my opinion, some basic sanity checks like PoseBuster filters are missing. 
They allow one to evaluate the overall adequacy of the generated molecules, checking if they are valid, connected, if the geometry of the molecules is correct, and if molecules don't have steric clashes.\n- 4.2 The discussion around QED and SA metrics lacks clarity and seems inconsistent to me. If these metrics are deemed unimportant, it may be more effective to omit them from the discussion altogether. In fact, I believe that there is no reason to expect these metrics to exceed the values in the training set. You are training a diffusion model which is supposed to learn the underlying distribution by design, and you are not further optimising for these metrics.\n- 4.3 Since you are introducing a diffusion model whose primary goal is to learn the underlying distribution, I believe that reporting JS-divergence or Wasserstein distances between training and sampled distributions of a wider range of different metrics would be helpful to understand the distribution learning capabilities of your model. For example, you can additionally compute the distances between different bond angles and atom types. Distributions of some property approximators like QED, SA, logP, etc. could also strengthen the evaluation section. \n5. To me it looks like the only novel contribution of this work is position-tuning. I believe that this component can be helpful, and I like this idea in general. However, I wonder what exactly happens with $X_{0|t}$ upon position-tuning: what is the average error between $\hat{X_{0|t}^{\mathcal{M}}}$ and $X_0$ before and after position tuning in the sampling trajectories? Does it decrease as $t$ approaches 0? Also, can you experimentally validate the hypothesis you suggest in lines 223-225? A deeper analysis of this component (potentially a separate section in Results) would be very interesting.\n6. I found the explanation of the position-tuning component at the end of section 4.1 very abrupt and unclear. 
For example, how is the position-tuning loss (4) related to training the VFNet and losses (2)?\n7. The motivation behind some design choices, like the introduction (as well as the choice of the values) of scaling coefficients $\eta$, is unclear to me.\", \"questions\": \"1. Why do you use \"p\" in eq 6 and \"q\" in eq 7?\n2. Algorithm 1 (blue part): what is the difference between $\epsilon_1$ and $\epsilon_0$?\n3. Algorithm 2 (blue part): why do you multiply by 10 in position-tuning?\n4. Technical details: how did you assign bonds? How many molecules per input did you sample? How do you define pocket? What is the size distribution?\", \"flag_for_ethics_review\": ['Yes, Research integrity issues (e.g., plagiarism, dual submission)'], \"details_of_ethics_concerns\": \"I find that this paper has very similar content to IPDiff [1].\n\n[1] Huang Z. et al. Protein-ligand interaction prior for binding-aware 3d molecule diffusion models //The Twelfth International Conference on Learning Representations. \u2013 2024.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes VFDiff, a novel SE(3)-equivariant diffusion model for structure-based drug design (SBDD), focusing on enhancing the binding affinity of generated molecules. VFDiff is guided by a vector field derived from VFNet, a pretrained model that learns binding information by predicting atomic positions and Vina Scores from perturbed data. 
The authors conduct comprehensive experiments against established baselines, showing VFDiff's superior performance in generating high-affinity molecules.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"Experimental results comprehensively show the effectiveness of VFDiff in improving binding affinity.\"], \"weaknesses\": [\"The introduction of prior-guided information in the diffusion process (i.e., energy-planning and force-guiding) is used in previous work, and the authors do not point out their unique contributions relative to previous models.\"], \"questions\": \"1. What is the novelty in Section 4.2 compared to IPDiff?\n2. What is the rationale for scaling the noise added to coordinates in VFNet training with $\eta \sim U(0, 0.5)$?\n3. In Figure 10, the conformation of generated molecules appears worse than the reference ligands. Could the authors provide Jensen-Shannon divergences of bond angle distributions?\n4. Could the authors compare the generation efficiency of VFDiff to baseline methods?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Reviewer kczy\", \"comment\": \"Dear Reviewer kczy,\n\nIn our previous response, we provided the code (https://anonymous.4open.science/r/TestVFDiff-E477/README.md) necessary for testing Bond JSD and added supplementary explanations in the manuscript regarding the development of shifted-diffusion and its application in SBDD tasks. We are eager to know if these updates have effectively addressed your concerns and enhanced the completeness of the paper. \n\nWe sincerely hope for your suggestions and support. \n\nThank you for your invaluable contribution to improving the quality of our work. \n\nSincerely,\n\nThe Authors\"}" ] }
5YCZZSEosw
Let Large Language Models Find the Data to Train Themselves
[ "Fanqi Wan", "Deng Cai", "Shijue Huang", "Xiaojun Quan", "Mingxuan Wang" ]
The current iterative development process for large language models (LLMs) is heavily data-centric, relying on human researchers and engineers to manually analyze model performance and determine what data to acquire for further training. However, this human-supervised approach is costly and may fail to identify optimal training signals. Its scalability is further limited as models become increasingly capable and may eventually exceed human intelligence. To address these issues, we propose an automated framework that enables models to autonomously discover and strategically acquire the most valuable training data to enhance their performance. It establishes a self-improving framework where models can invoke APIs to crawl and/or generate tailored datasets from various resources and environments, and retrain themselves. The data selection decisions are shaped by reinforcement feedback signals that reward performance gains while penalizing computational overhead. This formulation incentivizes models to develop self-knowledge about their strengths and areas for improvement in order to efficiently select training data. Empirical results demonstrate that LLMs operating within our framework are able to autonomously and strategically acquire valuable training data to enhance their performance across a variety of skills in 1,000 diverse in-house test tasks and three public benchmarks.
[ "Self-improving", "Synthetic Data", "Large Language Models" ]
https://openreview.net/pdf?id=5YCZZSEosw
https://openreview.net/forum?id=5YCZZSEosw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xahZEgJt4C", "wrBZoDnlXs", "s4mZQ2yK11", "n33L5AG1gC", "jvdg6qy4bP", "i9CoIe16pd", "hljSTbxZXU", "hkJ95U1GPP", "dW3J8FDIKt", "dLQ5v5fVLW", "c2R0fOO4mz", "bBPPd89oB6", "aDipD5uEOv", "Wac2xbPtAU", "UZuvS2Zcjh", "Tw1wnbSujk", "PPeAd6hQYk", "JrEZDCWWZf", "HhMKUSDaYQ", "Eyhc3N0DDu", "6t00DntZ4Y", "4xNHQp1G7w", "4OSHojAbV4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733142716328, 1732778725254, 1732778279101, 1733139095601, 1732778775999, 1732778524295, 1733193924094, 1732785948526, 1730553069820, 1729962057203, 1737593586397, 1732778675054, 1730656065027, 1733138215479, 1733210288372, 1732778624423, 1732778165180, 1732778431704, 1732778596130, 1733222217288, 1732784869492, 1731288196815, 1730045262369 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_fuzK" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_eS6a" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_6jxd" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_fuzK" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_EL9t" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Authors" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_CBK9" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_6jxd" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_CBK9" ], [ "ICLR.cc/2025/Conference/Submission6455/Reviewer_eS6a" ] ], "structured_content_str": [ "{\"comment\": \"We acknowledge that the word \\\"training\\\" in our paper can be controversial. In-context learning is a new learning paradigm that significantly diverges from traditional machine learning approaches and primarily works with large language models. On one hand, a number of studies have studied its effectiveness, efficiency, and underlying mechanisms, particularly in comparison to fine-tuning, suggesting that it is a strong alternative to fine-tuning [1-5]. On the other hand, yes, it is usually categorized as an inference-time learning method.\\n\\n**To avoid future confusion, we can replace all related wordings of \\\"training\\\" with \\\"teaching\\\" in our revised manuscript.** Please note that the core contribution of this paper is the introduction of an automatic data collection and curation framework for targeted performance enhancement. Generally, the specific methods used to learn from the resulting data do not affect the overall framework. We can opt for fine-tuning when computational costs are more affordable.\\n\\nThank you once again for your valuable suggestions. 
We greatly appreciate your feedback and would like to know if this helps clarify the contribution of our paper or if you have any further comments on this matter.\"}", "{\"title\": \"Response to Reviewer eS6a (Part 2)\", \"comment\": \"> **Q6: Regarding the overlap between generated instructions and original instructions.**\\n\\n**A6:** Thank you for raising this important point. In our methodology for generating new task-specific instructions (which serve as observed instructions), we follow the implementation in Alpaca [1] to ensure instruction diversity. Specifically, we prompted the LLM to generate instructions that were different from the original ones. To maintain distinctiveness, we employed a filtering mechanism whereby any generated instructions with a Rouge-L similarity score exceeding 0.7 when compared to the original instructions were eliminated. We will incorporate these details in the revised version.\\n\\n**References:**\\n\\n[1] Stanford Alpaca: An Instruction-following LLaMA model. GitHub repository, 2023.\\n\\n> **Q7: Regarding the reasons for performance improvements achieved by cost control.**\\n\\n**A7:** This is an insightful observation. The cost-control mechanism, by selecting responses from among those with top-tier rewards while prioritizing lower costs, inherently promotes greater response diversity compared to pure reward maximization approaches. This enhanced diversity potentially yields two significant benefits: first, it contributes to more generalized performance improvements across various tasks, and second, it helps mitigate reward over-fitting issues. These additional discussions will be incorporated into our revised manuscript.\"}", "{\"title\": \"Response to Reviewer CBK9 (Part 1)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. We appreciate your recognition of our work\\u2019s innovation and significance. 
Below, we address your concerns in detail.\n\n> **Q1: Regarding the comparison between ADS and baseline methods.**\n\n**A1:** Since more than one reviewer raised this question, we have responded to it in the global rebuttal section (Q1) to save space.\n\n> **Q2: Regarding the influence of the Magpie dataset.**\n\n**A2:** Thank you for your valuable suggestion. Firstly, we opted for the Magpie dataset due to its comprehensive coverage of diverse alignment tasks, encompassing various domains, difficulty levels, and intents. This aligns with the objective of our ADS framework, which aims to identify suitable data for training policy models for each specific task. Furthermore, to ensure a fair comparison, we fine-tuned our base LLMs using the Magpie dataset and conducted evaluations on public benchmarks.\n\n| Methods | Qwen-2-7B-Instruct | | | | Gemma-2-9B-Instruct | | | |\n|--------------------|--------------------|------------|----------|---------|---------------------|------------|----------|---------|\n| | **AlpacaEval 2.0** | **Arena-Hard** | **MT-Bench** | **Average** | **AlpacaEval 2.0** | **Arena-Hard** | **MT-Bench** | **Average** |\n| Base LLM | **24.0** | **25.6** | **55.9** | **26.4** | **34.8** | **37.5** | **55.0** | **36.9** |\n| Base LLM w/ Magpie | 11.9 | 11.7 | 43.1 | 13.6 | 14.8 | 12.0 | 44.9 | 15.5 |\n\nOur empirical results indicate that **fine-tuning the LLM with Magpie data significantly decreased performance**. The reason is that the base LLMs (Qwen-2-7B-Instruct and Gemma-2-9B-Instruct) have already undergone extensive fine-tuning with high-quality training data. Compared to that, the data quality of Magpie may be relatively low. These findings confirm that the improvements observed in our experiments are not attributed to the Magpie dataset, indicating the effectiveness of the proposed ADS method. 
We will incorporate these findings into the revised manuscript.\n\n> **Q3: Regarding the influence of generated data length on ADS.**\n\n**A3:** We appreciate your insightful feedback and would like to offer some clarification. It is important to note that when employing in-context learning for policy updating, the scale of generated data is inherently constrained by the model's context window capacity. However, state-of-the-art LLMs typically operate with context windows of approximately 32,000 tokens. Our empirical observations indicate that this scale of training data is sufficient for the effective optimization of a given target task. \n\n> **Q4: Regarding the analysis of input task-specific dataset size to the optimizer model.**\n\n**A4:** We appreciate your insightful question. In our implementation, we utilize the task-specific examples as representatives of the target task. Since successful task completion requires various fundamental capabilities and skills, we speculate that more observed examples could represent a broader spectrum of task scenarios, thereby achieving better performance. Currently, we are conducting experiments to scale the example number, and we will update the results once the experiment is concluded.\n\n> **Q5: Regarding the extension from ICL to fine-tuning.**\n\n**A5:** We appreciate the reviewer's insightful comment regarding the implementation of ICL for policy model updating. We would like to provide further clarification: the policy model requires frequent updates with a complexity of O(trajectory sampling number * target task number), resulting in approximately 40,000 updates per experiment. Given this computational intensity, implementing traditional fine-tuning approaches would be prohibitively expensive. 
Consequently, we adopted in-context learning as our primary methodology across all experimental conditions to maintain computational efficiency while preserving performance.\nFurthermore, extensive research has demonstrated that **ICL can achieve comparable or even better effectiveness than traditional parameter fine-tuning** [1][2][3][4][5]. We will include these analyses in the updated version.\n\n**References:**\n\n[1] Exploring the relationship between in-context learning and instruction tuning. arXiv preprint, 2023.\n\n[2] Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation, ACL 2023 Findings, 2023.\n\n[3] Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers. ACL 2023 Findings, 2023.\n\n[4] In-Context Learning with Long-Context Models: An In-Depth Exploration. arXiv preprint, 2024.\n\n[5] Many-Shot In-Context Learning. NeurIPS, 2024.\"}", "{\"title\": \"Response to Reviewer fuzK\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. We appreciate your recognition that our work is well-motivated. Below, we address your concerns in detail.\n\n> **Q1: Regarding the implementation of ICL for policy model updating.**\n\n**A1:** We appreciate the reviewer's insightful comment regarding the implementation of ICL for policy model updating. We would like to provide further clarification: the policy model requires frequent updates with a complexity of O(trajectory sampling number * target task number), resulting in approximately 40,000 updates per experiment. Given this computational intensity, implementing traditional fine-tuning approaches would be prohibitively expensive. 
Consequently, we adopted in-context learning as our primary methodology across all experimental conditions to maintain computational efficiency while preserving performance.\\n\\nFurthermore, extensive research has demonstrated that ICL can achieve comparable or even better effectiveness than traditional parameter fine-tuning [1][2][3][4][5]. We will include these analyses in the updated version.\\n\\n**References:**\\n\\n[1] Exploring the relationship between in-context learning and instruction tuning. arXiv preprint, 2023.\\n\\n[2] Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation, ACL 2023 Findings, 2023.\\n\\n[3] Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers. ACL 2023 Findings, 2023.\\n\\n[4] In-Context Learning with Long-Context Models: An In-Depth Exploration. arXiv preprint, 2024.\\n\\n[5] Many-Shot In-Context Learning. NeurIPS, 2024.\\n\\n> **Q2: Regarding the comparison between ADS and baseline methods.**\\n\\n**A2:** Since more than one reviewer raised this question, we have responded to it in the global rebuttal section (Q1) to save space.\\n\\n> **Q3: Regarding the efficiency of our method.**\\n\\n**A3:** We sincerely appreciate your insightful feedback regarding the efficiency of our method. Our approach is divided into two parts: optimizer refinement (\\\"learn to learn\\\") and policy optimization (\\\"learn\\\").\\n\\nOptimizer refinement requires frequent updates and evaluations of the policy model during the training process, resulting in a time complexity of O(trajectory sampling number * target task number). We acknowledge that the optimizer refinement process requires substantial initial overhead.\\n\\nHowever, once implemented, the subsequent costs for policy optimization become minimal. 
This automated approach serves to replace the traditional, labor-intensive process of manual training data collection and curation, which typically involves extensive trial and error, thereby significantly enhancing model adaptation efficiency for learning any task.\n\nFurthermore, as discussed in the limitations section, we acknowledge that the proposed ADS framework represents a conceptual prototype implementation of the automated data collection process. Substantial further investigation remains before practical implementation can be achieved. We will incorporate these analyses into the updated version of our work.\n\n> **Q4: Regarding the comparison between ADS and RLAIF and AutoDetect.**\n\n**A4:** We thank the reviewer for their insightful question regarding the comparison between ADS and AutoDetect. To clarify, the goal of our proposed ADS approach is to automatically collect valuable training data to enhance the performance of the target task, which distinguishes it from the aforementioned methods.\n\nThe RLAIF approach primarily focuses on leveraging rewards from AI to replace human feedback in reinforcement learning. This method relies on existing training data and does not involve the collection of new data.\n\nIn contrast, the AutoDetect method requires a powerful LLM, such as GPT-4, to identify weaknesses in a weaker policy model through iterative question generation and answer evaluation. This approach depends on GPT-4 for weakness identification and distills data from GPT-4.\"}", "{\"title\": \"Response to Reviewer EL9t\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. We appreciate your recognition of our work\u2019s promise and effectiveness. 
Below, we address your concerns in detail.\\n\\n> **Q1: Regarding the comparison between ADS and baseline methods.**\\n\\n**A1:** Since more than one reviewer raised this question, we have responded to it in the global rebuttal section (Q1) to save space.\\n\\n> **Q2: Regarding the difference between ADS and RAG.**\\n\\n**A2:** Thank you for your insightful question. The distinctions between ADS and RAG can be summarized across three key dimensions:\\n\\n1. **Motivation:** ADS primarily aims to identify valuable training data for a specific task through environmental interactions via various APIs (including but not limited to Information Retrieval), ultimately enhancing overall task performance. In contrast, RAG focuses on retrieving relevant information for individual queries, with the primary objective of improving response quality for specific queries rather than optimizing global task performance.\\n2. **Implementation:** The fundamental component of ADS is the optimizer model, which dynamically generates appropriate API calls and API parameters at the task level. This differs significantly from traditional RAG, where its retrieval process solely and directly matches the given instructions to texts in a database.\\n3. **Framework:** ADS incorporates more comprehensive APIs beyond Information Retrieval, including Demonstration Generation and Question Answering. This integration of diverse APIs results in a more complicated and effective framework compared to RAG.\\n\\n> **Q3: Regarding the use of the Llama model in our experiments.**\\n\\n**A3:** We sincerely appreciate your insightful feedback regarding the use of the Llama model in our experiments. To address your questions, we would like to clarify that, **since our training data is generated by Llama-3-8B-Instruct, we exclude it from our choices of policy models to avoid any potential biases and ensure a fair comparison**. 
This explanation is also mentioned in Section 4.4 (line 323) of the current manuscript.\\nRegarding the Question Answering API, which is designed to generate responses to given questions and is independent of the training data, we opt for the strong Llama-3.1-70B-Instruct model as a proxy for human annotation in practice for efficiency and reproducibility. It is worth noting that alternative LLM such as GPT-4 or Claude-3.5 could also serve as the strong model.\"}", "{\"comment\": \"Thanks for the additional ablation study, which confirms the reviewers' earlier concern that the gains mostly come from the QA API. I will keep the current score.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for your positive feedback and for the constructive comments that are pivotal to improve our work.\"}", "{\"summary\": \"This paper introduces an automated framework called ACTIVE DATA SEARCH (ADS), designed to enable large language models (LLMs) to autonomously discover and acquire valuable training data for self-improvement without the need for human supervision.\\nThe authors propose using reinforcement feedback signals to guide the models in selecting optimal data, rewarding performance gains while penalizing computational overhead.\\nThe framework is validated through extensive experiments on 1,000 diverse in-house test tasks and three public benchmarks, demonstrating significant performance improvements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The ADS framework is innovative as it leverages LLMs to autonomously enhance their training data, reducing the need for costly human intervention and addressing scalability issues. 
The use of reinforcement feedback signals to balance performance improvement and computational efficiency is a practical and smart approach.\\n(2) Empirical results are robust, including 1,000 in-house test tasks and three public benchmarks, showing clear performance gains and generalization capabilities. The inclusion of detailed implementation protocols, such as API designs for data acquisition, is valuable for reproducibility.\\n(3) The paper presents a method for iterative refinement, demonstrating consistent performance improvements with the optimizer refinement through reinforcement learning.\", \"weaknesses\": \"(1) The paper would benefit from more comprehensive ablation studies. Specifically, it would be insightful to understand the individual contributions of each proposed API and the iterative refinement process. For example, what is the impact of excluding one of the APIs, or how does the system perform without the iterative refinement?\\n(2) The paper does not sufficiently discuss the potential limitations and failure cases of the ADS framework. Identifying scenarios where the framework might not perform well or discussing any observed limitations during the experiments would provide a more balanced view.\\n(3) Comparisons with more diverse sets of baselines, especially in terms of data acquisition strategies, would strengthen the validation of the effectiveness of the proposed framework.\\n(4) The exposition around reinforcement learning strategies could be expanded to aid comprehension, especially for readers less familiar with advanced reinforcement learning techniques.\", \"questions\": \"(1) Could you perform additional ablation studies to analyze the impact of each data acquisition API within the framework? Understanding the individual contributions of each API would highlight their importance and the overall robustness of the ADS framework.\\n(2) Were there any particular scenarios or tasks where the ADS framework did not perform as expected? 
If so, what were these failure cases, and how were they addressed or mitigated in the study?\\n(3) Could you provide further comparisons with alternative data acquisition methods beyond the ones mentioned in the paper?\\n(4) How sensitive is the ADS framework to different policy model architectures and sizes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents the active data search (ADS) framework, allowing large language models to autonomously acquire training data without human supervision. ADS learns an \\\"optimizer\\\" via reinforcement learning to enable active data acquisition while minimizing costs. Experiments demonstrate that ADS leads to performance gains compared with the original model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method is well-motivated. Letting LLMs actively identify training data for themselves can potentially improve data efficiency and reduce human efforts in model training.\", \"weaknesses\": [\"**The true implementation of ADS is different from its description.** In Sec. 3.2, the authors claim that \\\"_we **train** the original $\\\\mathcal M^p$ on the tailored training dataset to update its knowledge and capacities_\\\". This claim is very misleading because according to Appendix C, the so-called \\\"training\\\" is simply adding the collected data to the prompt. The authors refer to this as \\\"in-context learning\\\" (ICL). However, in [1], ICL is introduced as an _inference_ approach in contrast to _training_. Thus, I believe the main selling point in the title, \\\"finding the data to _train_ the models\\\", is a $\\\\text{\\\\color{red}significant overclaim}$, which can be very misleading to the community.\", \"**The paper does not compare ADS with any relevant baselines.** The authors include a few relevant papers in Sec. 2, e.g., [2,3]. 
The authors should compare the cost and quality of ADS with baselines to demonstrate its effectiveness.\", \"**I'm concerned about the efficiency of the method.** Algorithm 1 requires $\\mathcal O(N|\\mathcal A||\\mathcal T|)$ evaluations of the policy model. Nowhere in the paper do the authors report the computation cost of the method. It's also not clear whether the method can scale up to more types of API calls and larger models in a truly practical scenario.\", \"Minor issues:\", \"Line 50: This sentence is hard to read and grammatically incorrect. Please consider the modification proposed by LLM: \\\"We achieve the above automatic process through development of an optimizer that generates these APIs sequentially based on the target task to solve.\\\"\", \"Line 58: `\\citet` $\\to$ `\\citep`\", \"Line 138: LLM-as-a judgment $\\to$ LLM-as-a judge\", \"[1] Language Models are Few-Shot Learners\", \"[2] Rlaif: Scaling reinforcement learning from human feedback with ai feedback.\", \"[3] AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models\"], \"questions\": \"Please refer to the **weakness** part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer eS6a (Part 1)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. We appreciate your recognition of our work\u2019s novelty and significance. Below, we address your concerns in detail.\n\n> **Q1: Regarding the intrinsic advantages of the API trajectory optimized by the optimizer model in ADS.**\n\n**A1:** We appreciate the reviewer\u2019s insightful comment regarding the inherent advantages of API trajectories generated through our optimizer model. 
Within the ADS framework, the optimizer model is specifically designed to identify and curate specialized training datasets that enhance policy performance by generating appropriate textual API calls to interact with various resources and environments. Through iterative reinforcement learning, **the optimizer model is incentivized to detect and address performance weaknesses, thereby facilitating the progressive development of self-knowledge**. This process systematically increases the probability of generating optimal API trajectories, ultimately leading to continuous self-improvement of the system.\\n\\n> **Q2: Regarding the comparison between ADS and strong baseline strategies.**\\n\\n**A2:** Since more than one reviewer raised this question, we have responded to it in the global rebuttal section (Q1) to save space.\\n\\n> **Q3: Regarding the generalization of ADS.**\\n\\n**A3:** Thank you for the insightful question. We would like to clarify that the primary objective of our ADS framework is to develop a generalized optimizer model capable of addressing a wide spectrum of target tasks. To achieve this, we have meticulously compiled a diverse dataset comprising approximately 10,000 target tasks for optimizer training and validation.\\n\\nAdditionally, in our experiments, despite the evaluation of 1,000 in-house target tasks, we have also validated our optimizer model's generalizability on three established public benchmarks: AlpacaEval 2.0, Arena-Hard, and MT-Bench. These benchmarks span multiple domains and task types, and importantly, are entirely independent of our training dataset, thereby providing robust evidence of our model's generalization applicability.\\n\\n> **Q4: Regarding the function and importance of the three APIs in ADS.**\\n\\n**A4:** Thank you for raising this important point. As shown in Section 4.1, these three APIs are designed to facilitate the acquisition, utilization, and enhancement of knowledge, respectively. 
The Information Retrieval API employs both sparse and dense retrieval to retrieve relevant documents from external knowledge databases such as Wikipedia, thereby supporting knowledge acquisition, analogous to the pre-training stage of LLMs. The Demonstration Generation API utilizes the policy model to generate appropriate exemplar instruction-response pairs, tailored to various knowledge application scenarios, reminiscent of the alignment stages of LLMs. The Question Answering API resorts to the wisdom of human experts, mimicking how humans learn from each other.\n\nWe also demonstrate the ablation experimental results in the table below. We can observe that **the proposed ADS framework with all of these three APIs demonstrates the best performance**.\n\n| Methods | Qwen-2-7B-Instruct | | Gemma-2-9B-Instruct | |\n|------------------------------|---------------------|-------------------|---------------------|-------------------|\n| | **In-House Test Tasks** | **Public Benchmarks** | **In-House Test Tasks** | **Public Benchmarks** |\n| Information Retrieval API | 24.2 | 26.8 | 46.4 | 32.6 |\n| Demonstration Generation API | 55.8 | 31.6 | 76.9 | 35.1 |\n| Question Answering API | 81.9 | 32.0 | 79.7 | 36.0 |\n| ADS (All APIs) | **84.3** | **32.8** | **82.6** | **38.3** |\n\n> **Q5: Regarding the splitting of observed and held-out instructions.**\n\n**A5:** Thank you for raising this thoughtful question. As discussed in Section 4.2 (line 226-233), we divide the instructions for each target task into two distinct sets: one set of observed instructions for trajectory generation, and another set of held-out instructions for performance evaluation. 
This splitting serves a crucial purpose: it enables us to verify that the optimizer is effectively learning to enhance task-level performance (i.e., acquiring valuable training data that improves overall task capability) rather than merely optimizing for instance-level solutions (i.e., developing specific solutions for given instructions). This distinction is important because superior performance on observed instructions does not necessarily generalize to improved performance on held-out instructions, potentially indicating overfitting rather than effective learning.\"}", "{\"summary\": \"The paper introduces the Active Data Search (ADS) framework, which enables large language models (LLMs) to autonomously identify and acquire training data to improve their own performance with minimal human intervention. The authors propose an optimizer model that generates API calls for data collection from various external sources, such as web search engines, AI assistants, and annotation services. This framework leverages reinforcement learning to refine the optimizer, balancing task performance enhancement with computational cost. Experimental results, using models like Qwen-2-7B-Instruct and Gemma-2-9B-Instruct, show substantial performance gains in in-house tasks and public benchmarks, demonstrating ADS\\u2019s effectiveness in enabling smaller models to achieve results comparable to larger ones.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents the ADS framework, which automates the identification and acquisition of training data, enhancing the self-improvement capabilities of LLMs. This direction holds promise for reducing reliance on human intervention, making model development more efficient and scalable.\", \"weaknesses\": \"While the paper offers an innovative approach, it has several areas that could be strengthened. 
The lack of comparison with diverse baselines limits the assessment of the ADS framework's true efficacy. Specifically, comparisons with\\n(a) LLMs using basic RAG for retrieval,\\n(b) untrained LLMs collecting data for policy model inference, \\n(c) training the optimizer model with simple rule-based or manually collected data,\\n(d) other naive automated data collection techniques would provide deeper insights into the relative advantages of ADS. \\nIncluding these baselines would enhance the evaluation's comprehensiveness and validate the framework's practical significance against existing methods.\", \"questions\": \"1. Relationship with RAG: The paper lacks a thorough discussion on how the proposed ADS framework relates to or differs from Retrieval-Augmented Generation (RAG). Given that both approaches involve external data retrieval, it would be beneficial to address whether ADS extends, overlaps, or diverges from RAG in terms of methodology, application, or objectives. This comparison would help position the contribution within the broader context of retrieval-based techniques and clarify its novelty.\\n\\n2. Inconsistent Use of Llama in Experiments: The absence of Llama-based experiments in Figure 2, despite its inclusion as a comparison point in Table 2, raises questions about consistency. It would be helpful for the authors to explain why Llama models were not tested under the same conditions as other models, or why their results were only included in specific sections. This would ensure a clearer understanding of the comparative analysis and model choice rationale.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your valuable feedback and the time you've dedicated to reviewing our paper. 
As we approach the final day of the discussion period, we kindly request any additional comments you may have on our revisions. We have invested significant effort in conducting additional experiments and addressing your queries, and would be grateful for your acknowledgment of our responses. Your feedback is crucial for us to effectively present our work to the research community. Please let us know if any points require further clarification.\n\nThank you very much, and we look forward to your replies!\n\nBest regards,\n\nPaper 6455 Authors\"}", "{\"comment\": \"We sincerely appreciate the reviewer's insightful comment regarding the utilization of the QA API within the ADS framework.\n\nWe would like to clarify that our proposed ADS framework does not overly rely on the QA API; for instance, on 1,000 in-house tasks using Gemma-2-9B-Instruct, the number of QA API calls was reduced by 59.6% compared to the ablation baseline that solely depends on the QA API. The proportion of QA usage is only about 44.2% among all three APIs. It is important to note that the QA API necessitates a strong external LLM to serve as a proxy for human annotation, making it the most cost-intensive of the three APIs. The ablation study actually shows that our ADS framework achieves larger performance gains at reduced operational costs, particularly with much fewer QA API calls.\n\nThank you once again for your valuable suggestions. We greatly appreciate your feedback and hope that the above information helps address your concern regarding the effect of the QA API.\"}", "{\"title\": \"Response to Reviewer 6jxd (Part 2)\", \"comment\": \"> **Q5: Regarding the implementation of ADS to different policy models.**\n\n**A5:** Thank you for this valuable suggestion. In our experiments, we have explored two different policy models, including Qwen-2-7B-Instruct and Gemma-2-9B-Instruct, both of which are well-known open-source LLMs in the research community. 
As illustrated in Figure 2 and Table 2, our proposed ADS framework, when integrated with these diverse policy models, demonstrates significant performance enhancements across a wide range of in-house and public test tasks.\"}", "{\"title\": \"General Response to all Reviewers\", \"comment\": \"We sincerely appreciate the time and effort all the reviewers put into reviewing our paper. In the subsequent points, we will carefully address the common concerns of our paper.\\n\\n> **Q1: Regarding the comparison between ADS and baseline methods.**\\n\\n**A1:** We appreciate the reviewer's concern regarding the comparison between ADS and baseline methods. To clarify, our original submission included two types of baselines in our experiments. Specifically, we evaluated \\n\\n1. **Prompting**, which constructs API trajectories through optimizer model prompting without fine-tuning (the prompting method in Figure 2 and Table 2).\\n2. **Rule-based QA**, which utilizes the Question Answering API for each observed instruction in the target task to construct the corresponding API trajectory (the w/o. Self-Explored method in Figure 5). \\n\\nWe fully agree that incorporating more baselines can further strengthen the robustness of our comparative analysis. Therefore, we have extended our experiments to include:\\n\\n3. **Retrieval Augmentation**, which employs both sparse and dense retrieval to identify relevant documents based on target task instructions.\\n4. **Self-Instruct**, which utilizes the policy model to generate new instruction-response pairs for the target task. 
\\n\\n| Methods | Qwen-2-7B-Instruct | | Gemma-2-9b-Instruct | |\\n|-------------------------|---------------------|-------------------|---------------------|-------------------|\\n| | **In-House Test Tasks** | **Public Benchmarks** | **In-House Test Tasks** | **Public Benchmarks** |\\n| Prompting | 36.7 | 27.2 | 67.0 | 33.7 |\\n| Rule-based QA | 81.9 | 32.0 | 79.7 | 36.0 |\\n| Retrieval Augmentation | 24.2 | 26.8 | 46.4 | 32.6 |\\n| Self-Instruct | 55.8 | 31.6 | 76.9 | 35.1 |\\n| ADS | **84.3** | **32.8** | **82.6** | **38.3** |\\n\\n**Compared to all the baseline methods, our ADS with iterative reinforcement learning processes have demonstrated superior performance improvements across both in-house test tasks and public benchmarks**, and maintaining its simplification without human intervention. These results will be included in the revised manuscript for clarity and completeness.\\n\\n> **Paper Revison**\\n\\nWe have carefully addressed all the comments and suggestions through comprehensive revisions. Below, we outline the major changes made to the manuscript (with key revisions highlighted in blue text in the PDF).\\n\\n**Key Revisions**\\n\\n1. We have elaborated on the reason for using in-context learning for policy model updating in **Section 4.4 (line 302-304)**. (Reviewer **CBK9, fuzK**)\\n2. We have included the comparison between more baseline methods and individual APIs in **Appendix E (line 1014-1025 Table 8)**, demonstrating that compared to all the baseline methods, the proposed ADS with all the APIs achieves improved performance across both in-house test tasks and public benchmarks. (Reviewer **CBK9, EL9t, 6jxd, eS6a, fuzK**)\\n3. We have discussed the influence of the Magpie instruction-following dataset in **Appendix F (line 1035-1046 Table 9)**, confirming that the improvements observed in our experiments are not attributed to the Magpie dataset. (Reviewer **CBK9**)\\n4. 
We have detailed the reasons for performance improvements achieved by the cost-control mechanism in **Section 6.2 (line 519-520)**. (Reviewer **eS6a**)\\n5. We have added the details to maintain distinctiveness between the newly generated instructions and the original instructions in **Appendix B (line 842-845)**. (Reviewer **eS6a**)\\n6. We have incorporated the analysis of tasks that ADS does not perform as expected in **Section 5.2 (line 453-455)**. (Reviewer **6jxd**)\\n7. We have fixed the typos and the hard-to-read sentences. (Reviewer **fuzK**)\\n\\nWe are deeply grateful for your insightful feedback, which has been instrumental in strengthening our work. We hope these revisions and additional analyses thoroughly address all raised concerns.\"}", "{\"title\": \"Response to Reviewer CBK9 (Part 2)\", \"comment\": \"> **Q6: Regarding the generalization of ADS.**\\n\\n**A6:** Thank you for the insightful question. We would like to clarify that the primary objective of our ADS framework is to develop a generalized optimizer model capable of addressing a wide spectrum of target tasks. To achieve this, we have meticulously compiled a diverse dataset comprising approximately 10,000 target tasks for optimizer training and validation.\\nAdditionally, in our experiments, beyond the evaluation of 1,000 in-house target tasks, we have also validated our optimizer model's generalizability on three established public benchmarks: AlpacaEval 2.0, Arena-Hard, and MT-Bench. These benchmarks span multiple domains and task types, and importantly, are entirely independent of our training dataset, thereby providing robust evidence of our model's generalization capability.\\n\\n> **Q7: Regarding the requirements of clustered tasks.**\\n\\n**A7:** We appreciate the reviewer\\u2019s insightful comment regarding the requirements of clustered tasks. 
Our proposed approach primarily aims to enable LLMs to identify valuable training data that can enhance their performance on specific target tasks. In our experiments, we prompt LLMs to cluster a large instruction dataset into splits of specific tasks, ensuring the reproducibility of our research. We believe that in practical applications, users can simply formulate a small number of exemplar instructions according to their needs.\\n\\n> **Q8: Regarding the case studies of ADS.**\\n\\n**A8:** Thank you for raising this important question. In our original submission, we present case studies in Table 4 (Appendix C). We can observe that our optimizer model, when provided with a few task-specific instructions, is capable of executing a three-step process. First, it generates a comprehensive analysis of the fundamental requirements to solve the target task. Subsequently, it produces self-reflective evaluations to identify potential limitations. Finally, it develops corresponding API trajectories to construct suitable training data, thereby enhancing task-specific capabilities.\\n\\n> **Q9: Regarding the construction of API trajectories in rejection sampling and direct preference optimization.**\\n\\n**A9:** Thank you for raising this important question. As discussed in Section 4.4 (line 307-311), for reward maximization optimization, the construction of API trajectories is as follows:\\n\\n1. In rejection sampling, the chosen trajectory is the one with the maximum reward from multiple sampled trajectories.\\n2. In direct preference optimization, we select the paired chosen and rejected trajectories with the maximum and minimum rewards, respectively.\\n\\nFor the cost-control mechanism, we introduce a cost tier parameter \\u03c4 \\u2208 [0, 1] to control the trade-off between rewards and costs. Trajectories within the top-tier rewards ranging [(1\\u2212\\u03c4 )Rmax+\\u03c4Rmin, Rmax] are considered to have similar performance. 
From this subset, we select the trajectory with the lowest cost as the chosen trajectory. Conversely, for the reject trajectory, we select the one with the highest cost within [Rmin,(1 \\u2212 \\u03c4 )Rmin + \\u03c4Rmax]. We will make the above clearer in the next version.\\n\\n> **Q10: Regarding the cost of API calls considered only when selecting trajectories.**\\n\\n**A10:** Yes, you are right. The API call costs are exclusively considered during the selection of trajectories from a given reward-tier trajectory group. Specifically, for rejection sampling, we select the chosen trajectory with the minimal cost among those in the highest reward tier. Regarding direct preference optimization, we select the lowest-cost trajectories from the top reward tier for chosen trajectories, and highest-cost trajectories from the bottom reward tier for rejected trajectories.\\n\\n> **Q11: Regarding the details of the prompts used in ADS.**\\n\\n**A11:** We would like to note that all these prompts are detailed in Appendix A of our original submission. For individual API, we utilize zero-shot prompting. For the optimizer model's API trajectory generation, we implement a two-shot prompting, incorporating examples both with and without API calls. Regarding the evaluation prompts, we maintain consistency with the official implementations by using zero-shot prompts throughout the assessment process.\\n\\n> **Q12: Regarding the details of the internal test set.**\\n\\n**A12:** Thank you for raising this important point. Our internal test set comprises 1,000 target tasks, which were constructed from the Magpie dataset. To ensure robust evaluation, for the test set, we expanded each task's initial five instructions to 100 through the Self-Instruct method, where three instructions serve as observed samples while the remaining 97 are maintained as held-out examples. 
We have provided a detailed description of the test set construction method in Section 4.2 (lines 254-262), with comprehensive statistical analyses in Table 3 (Appendix B).\"}", "{\"title\": \"Response to Reviewer 6jxd (Part 1)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. We appreciate your recognition of our work\\u2019s innovation and robustness. Below, we address your concerns in detail.\\n\\n> **Q1: Regarding the comparison between ADS and baseline methods.**\\n\\n**A1:** Since more than one reviewer raised this question, we have responded to it in the global rebuttal section (Q1) to save space.\\n\\n> **Q2: Regarding the contribution of individual API and iterative refinement.**\\n\\n**A2:** Thank you for raising this important point. As shown in Section 4.1 (line 226-233), these three APIs are designed to facilitate the acquisition, utilization, and enhancement of knowledge, respectively. The Information Retrieval API employs both sparse and dense retrieval to retrieve relevant documents from external knowledge databases such as Wikipedia, thereby supporting knowledge acquisition, analogous to the pre-training stage of LLMs. The Demonstration Generation API utilizes the policy model to generate appropriate exemplar instruction-response pairs, tailored to various knowledge application scenarios, reminiscent of the alignment stages of LLMs. The Question Answering API resorts to the wisdom of human experts, mimicking how humans learn from each other.\\nWe also demonstrate the ablation experimental results in the table below. 
We can observe that **the proposed ADS framework with all of these three APIs demonstrates the best performance**.\\n\\n| Methods | Qwen-2-7B-Instruct | | Gemma-2-9b-Instruct | |\\n|------------------------------|---------------------|-------------------|---------------------|-------------------|\\n| | **In-House Test Tasks** | **Public Benchmarks** | **In-House Test Tasks** | **Public Benchmarks** |\\n| Information Retrieval API | 24.2 | 26.8 | 46.4 | 32.6 |\\n| Demonstration Generation API | 55.8 | 31.6 | 76.9 | 35.1 |\\n| Question Answering API | 81.9 | 32.0 | 79.7 | 36.0 |\\n| ADS (All APIs) | **84.3** | **32.8** | **82.6** | **38.3** |\\n\\nTo illustrate the effectiveness of the iterative refinement process in ADS, our original submission conducted a series of experiments and demonstrated the iterative training results (iteration 0-3) in Figure 2 and Table 2. Our findings indicate that **iterative ADS yields consistent performance improvements for both in-house test tasks and generalized public benchmarks**.\\n\\n> **Q3: Regarding the analysis of tasks that ADS does not perform as expected.**\\n\\n**A3:** Thank you for raising this important point. In Figure 3, we have shown that for categories like editing, creative writing, and debugging, our ADS achieves only slight improvements or remains comparable to the baseline. This limited enhancement can be potentially attributed to the inherent nature of these tasks, which primarily involve format and style rewriting, as well as fragment modifications, presenting inherent challenges for optimization through in-context learning from acquired training data. These additional analyses will be incorporated into the updated version of the paper.\\n\\n> **Q4: Regarding the details of reinforcement learning.**\\n\\n**A4:** We sincerely appreciate your insightful feedback regarding the details of reinforcement learning. 
To facilitate the optimizer\\u2019s decision-making process in obtaining optimal training data, we propose a reinforcement learning strategy for optimizer training. This strategy leverages feedback reward signals from the policy to concurrently maximize task performance and minimize computational costs.\\nAs described in Section 4.4 (line 307-311), the reinforcement learning process comprises two primary phases. Initially, we implement a warm-up rejection sampling procedure, which fine-tunes the optimizer model on trajectories that demonstrate the highest reward among multiple sampled responses. Subsequently, we engage in an iterative process to update the optimizer model through direct preference optimization. This process involves selecting a \\\"chosen\\\" trajectory (exhibiting the highest reward) against a \\\"rejected\\\" trajectory (displaying the lowest reward). The objective of this approach is to enable the optimizer model to amplify the gap between these high-quality and low-quality trajectories, therefore increasing the probabilities of generating valuable API trajectories.\"}", "{\"title\": \"Response to the Author Rebuttal\", \"comment\": \"Thank you for the additional experiments and detailed rebuttal. While I appreciate the effort put into addressing my concerns, I am inclined to maintain my current score, as some key issues remain unresolved:\\n\\n1. The method remains intractable for \\\"training\\\" better policies. Furthermore, its scalability with respect to the number of input demonstrations and the size of the output synthetic dataset in its in-context policy improvement setting is still unknown. No additional experiments were provided to evaluate the method's performance when these parameters are varied, which raises concerns about its practicality and broader applicability.\\n\\n2. 
Analysis of Synthetic Data: The paper still lacks a thorough qualitative and quantitative analysis of the generated synthetic data, API trajectories and distributions, and the \\\"weakness\\\" rationales produced by the model. The single illustrative example in Appendix C is insufficient to evaluate these aspects comprehensively.\\n\\n3. Robustness Concerns: Robustness remains an open question, as there are no experiments evaluating how task clustering quality (e.g., carefully grouped vs. randomly grouped tasks) affects performance, or assessing the optimizer's efficacy when trained on datasets other than MagPie.\\n\\n4. Baseline Details and Comparisons: While the authors included new experiments with a MagPie fine-tuned baseline (Appendix F) and ablation studies for different APIs (Appendix E), critical details about these baselines are missing. For example, it is unclear whether the synthetic dataset sizes generated by various methods are comparable, or how the MagPie fine-tuned baseline was trained (e.g., model selection process, validation task performance, whether LoRA fine-tuning or full fine-tuning was used, etc.). These details are essential for interpreting the results, particularly given the surprising drop in performance for the MagPie-trained baseline.\"}", "{\"title\": \"Thank you to the authors\", \"comment\": \"Thank you for your replies. Most of my concerns have been addressed.\"}", "{\"summary\": \"Currently, LLM developers manually analyze model errors and engineer training datasets (be it through human labels, or synthetically) to enhance model performance across pre-training, instruction tuning, and preference learning stages. This approach is costly, error-prone, heavily reliant on developer expertise, and lacks scalability. 
To address these limitations, this paper introduces the **Active Data Search (ADS)** framework, where an LLM assesses its own strengths and weaknesses for a given task and autonomously collects relevant training data to improve itself.\\n\\nAt the core of ADS is an LLM-driven optimizer that, when given a target task (grounded by a few guiding questions), evaluates the capabilities of the primary LLM (referred to as the policy LLM) for the task and dynamically calls different data retrieval and generation APIs to collect relevant data for improvement. This data can then be used to fine-tune the policy LLM for the target task or enhance it via in-context prompt augmentation (as implemented in this paper). Since LLMs typically are not designed for this autonomous data collection task, ADS first trains this optimizer offline with reinforcement learning to balance the reward and cost associated with obtaining new data.\", \"the_paper_employs_three_simple_data_collection_apis\": \"(a) information retrieval from Wikipedia, (b) demonstration generation powered by an LLM, and (c) question-answering using an LLM. Trained on the MagPie dataset, the optimizer effectively learns to use these APIs to acquire relevant data, leading to improved performance on an internal test set and on 3 additional public benchmarks (AlpacaEval, MT-Bench, and Arena-Hard) across two different LLMs, at times matching the performance of larger LLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper tackles the important challenge of gathering optimal data to enhance LLM performance for a given task.\\n2. The proposed framing, where LLMs become self-aware of their strengths and weaknesses to identify optimal data for self-improvement, is compelling. 
This framing has significant implications, not only for advancing the next generation of state-of-the-art LLMs but also for democratizing AI by empowering non-experts to tailor LLMs to their specific needs.\\n3. The proposed ADS method of framing diverse data collection techniques as API/Tool calls and training an optimizer using DPO to dynamically select the most effective data collection API calls for a given task, is innovative and powerful.\\n4. The paper\\u2019s related work section is well-structured, effectively positioning the authors' contributions within the context of prior research.\\n5. Experimental evaluations demonstrate substantial improvements over the baseline Qwen-2-7b-instruct model across several public benchmarks.\", \"weaknesses\": \"The core weaknesses of this paper can be organized along the following broad themes:\\n\\n---\\n### Insufficient Experimental Validation\\n---\\nThe paper lacks comparisons with key baselines and prior work, making it challenging to fully assess the contributions of this approach. Specifically:\\n\\n1. **Dynamic API Selection Validation**: While the individual data collection APIs\\u2014Information Retrieval, Demonstration Generation, and Question Generation\\u2014are well-studied in prior work as Retrieval-Augmented Generation and its variants [1], Self-Instruct [2], and behavior cloning/distillation from teacher LLMs respectively, this work\\u2019s primary novelty lies in an optimizer that dynamically selects among these APIs. However, the paper does not compare this dynamic selection against these established methods, leaving unclear whether it is indeed critical for improved performance. 
To clarify, a comparison is recommended in the following settings:\\n\\n - *Inference-only Comparison*: Evaluate ADS against comparable-sized synthetic datasets generated by prior methods, as well as a larger dataset from these methods, given that they do not require additional compute for optimizer training.\\n - *Optimizer Training with Single APIs*: Train ADS-style optimizers with individual APIs (e.g., Information Retrieval only) on the MagPie dataset, to assess the relative impact of each API within ADS.\\n\\n&nbsp;\\n\\n2. **MagPie Fine-Tuned Baseline**: Unlike ADS, which is trained on the MagPie dataset, baseline LLMs are not. Since fine-tuning on MagPie has been shown to enhance performance on target benchmarks, this gives ADS an advantage over baseline models. A more balanced comparison would involve fine-tuning baseline LLMs on MagPie to assess if ADS's optimizer training and synthetic data generation offer distinct advantages over straightforward fine-tuning.\\n\\n&nbsp;\\n\\n---\\n### Scalability Concerns & Scaling Experiments\\n---\\n3. **Generated Dataset Size and Limitations**: The paper studies the proposed method in an in-context policy improvement setting, where synthetic data is added directly to the prompt. This restrictive setting limits the amount of synthetic data that can be utilized, as it must fit within the model's maximum sequence length. Discussion on scenarios where generated data exceeds this length and experiments studying the impact of synthetic data scaling on the method\\u2019s effectiveness would strengthen the paper.\\n\\n4. **Input Task-Specific Dataset Size**: The method currently relies on only five labeled instances from the target task to generate synthetic data, whereas real-world applications may allow access to more extensive task-specific data. 
Including a discussion on method scalability to larger datasets (e.g., prompt with all task data at once or using sampling and multi-prompting) and experiments exploring the impact of task-specific data scaling would be beneficial.\\n\\n5. **Method Extension and Efficacy for Fine-Tuning**: Since the method is limited to in-context policy improvements, its effectiveness in policy fine-tuning contexts remains untested. Additionally, the feasibility of using the current optimizer training algorithm for fine-tuning is uncertain, as retraining policies to obtain reward for each data generation step could be prohibitively costly. Discussion of this limitation and experiments in fine-tuning scenarios could further clarify the method's general applicability.\\n\\n&nbsp;\\n\\n---\\n### Method Robustness Concerns\\n---\\n6. **Optimizer Generality**: It is unclear if the optimizer learns generalizable insights into model strengths and weaknesses or if its effectiveness is limited to specific tasks. Current training and testing datasets are closely related (e.g., MagPie -> AlpacaEval, etc.). Testing on unrelated benchmarks, like Big-Bench-Hard, GSM8k, or MMLU, would offer insights into the optimizer\\u2019s generality.\\n7. **Dependence on MagPie Dataset**: The extent to which the optimizer\\u2019s performance depends on the choice of MagPie as the training dataset is unclear. Evaluating its performance when trained on different datasets would provide evidence of ADS's broader applicability.\\n8. **Reliance on Task Clustering**: ADS clusters similar instances into tasks to generate synthetic data during both training and testing. This clustering may not always be feasible and could require manual intervention. 
Without this careful clustering, retrieving the relevant synthetic data for a test instance could become a challenge, raising questions about the method's performance if instances are grouped differently, such as randomly, during data generation.\\n\\n&nbsp;\\n\\n---\\n### Limited analysis\\n---\", \"several_important_areas_lack_critical_analysis_in_the_paper\": \"9. **Generated Synthetic Data**: The paper does not analyze the characteristics or quality of the synthetic data generated. A manual review of the generated data could provide valuable insights into its role in improved performance.\\n\\n10. **API Call Patterns**: There is no examination of the API calls made by the optimizer at test time. Analyzing the frequency and distribution of these calls could clarify the optimizer's behavior and the relative importance of each data source (API).\\n\\n11. **Weakness Reflection Rationales**: Although the paper claims that the optimizer uses LLM-driven reflections to assess model strengths and weaknesses, it lacks a qualitative or manual analysis of these reflections. This omission makes it challenging to gauge the accuracy and depth of these self-assessments.\\n\\n&nbsp;\", \"references\": \"[1] Yu, Wenhao, et al. \\\"Generate rather than retrieve: Large language models are strong context generators.\\\" arXiv preprint arXiv:2209.10063 (2022).\\n\\n[2] Wang, Yizhong et al. \\u201cSelf-Instruct: Aligning Language Models with Self-Generated Instructions.\\u201d Annual Meeting of the Association for Computational Linguistics (2022).\", \"questions\": \"Please refer to the *Weakness* section above for recommendations on experiments and analyses that could further strengthen the paper.\", \"additional_questions_and_suggestions_that_do_not_impact_the_score\": \"1. How are preference pairs constructed from sampled API trajectories for rejection sampling and DPO? Is a single preference pair created from the highest and lowest reward trajectories?\\n2. 
Is the cost of API calls considered only when selecting an efficient trajectory from a trajectory tier (group) or do you also add a cost term to the overall reward?\\n3. Are all prompts (individual APIs, optimizer, evaluation prompts) zero-shot? If not, how many demonstrations are provided for each?\\n4. Consider expanding the discussion on other data collection techniques (APIs) that could be integrated into the framework.\\n5. The \\\"internal test-set\\\" is not described in the paper. Without additional details about this internal test set, it is difficult to assess the quality of the improvements. Is it just the test partition of the MagPie dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a self-improvement framework for LLMs that leverages external environments for data acquisition.\\n\\nDuring training of the optimizer model, given a set of tasks and instructions for each task, the optimizer model learns (through RL) to generate optimized API call trajectories to acquire relevant external data, by measuring how the acquired data help improve performance of the policy model through finetuning or in-context learning on the data. 
The RL also takes cost control into account.\\n\\nDuring test time, the optimizer model generates data acquisition calls for tasks and instructions unseen during training and the data acquired is used to improve the policy model for these tasks.\\n\\nThe experiments show that data acquisition significantly improves the policy model's performance on various tasks, including Alpaca Eval, Arena Hard, and MT bench, when relevant tasks and instructions are provided for training the optimizer model and with in-context learning of the policy model on the acquired data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: Self-improving LLMs via data acquisition is a novel direction, to the best of my knowledge.\", \"quality\": \"The paper shows promising quality gains against the baselines.\", \"clarity\": \"The framework and the complex process of training the optimizer model are clearly explained.\", \"significance\": \"Self-improvement of LLMs is clearly an important direction. Acquiring data from an external environment will be critical.\", \"weaknesses\": \"Significance: The quality gains against a stronger baseline strategy that \\\"utilizes the Question Answering API for each observed instruction in the target task\\\" are relatively minor.\", \"generalization\": \"It's not clear how well the optimizer model generalizes across tasks. The experiments are set up so that the training tasks and instructions appear to be similar to those used during test time. The paper would be strong if it could demonstrate generalization across more distinct tasks.\", \"questions\": \"What are the important characteristics of the optimized trajectories that allow them to outperform the baseline trajectories?\\n\\nWhat roles do \\\"information retrieval\\\", \\\"demonstration generation\\\", and \\\"question answering\\\" each play? What if we remove one of the APIs? 
How does their respective quality affect the final results?\\n\\nThe comparison against the \\\"Question Answering for all instructions\\\" strategy is only conducted for the in-house test set. Does it hold across benchmarks? \\n\\nIn Algorithm 1, \\\"Split task instructions Q into observed set Qo and held-out set Qh\\\" is in the iteration loop (t), does it mean that the split is different across iterations?\\n\\nIn section 4.3 \\\"we employ the Self-Instruct approach to generate three new instructions for each task and use them as the observed instructions for that task\\\". How much overlap is there between the generated instructions vs. the original ones?\\n\\nIn section 6.2, \\\"In comparison to the approach that focuses solely on maximizing performance without considering costs, our method not only achieves a lower cost as expected but also demonstrates an improved win rate\\\". It's not clear why cost control also improves quality. Is this due to randomness or some other factor?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5Y9NT6lW21
Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning
[ "Hyungkyu Kang", "Min-hwan Oh" ]
In this paper, we study offline preference-based reinforcement learning (PbRL), where learning is based on pre-collected preference feedback over pairs of trajectories. While offline PbRL has demonstrated remarkable empirical success, existing theoretical approaches face challenges in ensuring conservatism under uncertainty, requiring computationally intractable confidence set constructions. We address this limitation by proposing Adversarial Preference-based Policy Optimization (APPO), a computationally efficient algorithm for offline PbRL that guarantees sample complexity bounds without relying on explicit confidence sets. By framing PbRL as a two-player game between a policy and a model, our approach enforces conservatism in a tractable manner. Using standard assumptions on function approximation and bounded trajectory concentrability, we derive a sample complexity bound. To our knowledge, APPO is the first offline PbRL algorithm to offer both statistical efficiency and practical applicability. Experimental results on continuous control tasks demonstrate that APPO effectively learns from complex datasets, showing comparable performance with existing state-of-the-art methods.
[ "Preference-based reinforcement learning", "Reinforcement learning with human feedback", "Offline reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=5Y9NT6lW21
https://openreview.net/forum?id=5Y9NT6lW21
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wGgux3LBCG", "uzTIVmOjJZ", "u1vxluDiaO", "qEL1xzKJ6l", "oZKTGteODt", "hTkTTCe1m2", "ac9Fq1Vgkr", "U1KCise9UV", "Sqxet5IEi9", "RaRAWnBRCU", "I843BgzFaP", "I6kkq79zZZ", "ERIaDCsn55", "D71gC3X0EG", "BHy72dCxfI", "0eiM4KvZUx", "0EoEkIDwWW" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1729763074984, 1732042304180, 1731928453681, 1731929356869, 1734635753878, 1732096155453, 1732162974035, 1731913854249, 1732200913017, 1730049259686, 1731913630406, 1730691252386, 1731928426021, 1731929691038, 1737524050513, 1730671243143, 1731929608605 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_bzB5" ], [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_bKme" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Area_Chair_p2VK" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_FTyb" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_FTyb" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_DhLA" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10393/Reviewer_bKme" ], [ "ICLR.cc/2025/Conference/Submission10393/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a novel variant of the two-player formulation of PbRL, that allows the 
derivation of statistical bounds as well as a computationally efficient implementation. The basic element of the proof is a novel sub-optimality decomposition. They also offer implementation details as well as an empirical comparison with other SoTA algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Theoretically sound approaches to PbRL are an important tool for researchers, because they allow conclusions beyond the empirically tested scenarios. Together with the also available empirical evaluation, this work becomes a significant contribution. This is further strengthened by the fact that the algorithm is competitive with the SoTA. This is not expected for an algorithm derived from a mostly theoretical work.\\nThe primary contribution of the work is an original contribution, which is embedded in an established framework.\\nQuality and Clarity are also good, with only minor limitations. Especially the formalization/introduction and the embedding into the available, related work are excellent. However, an overview table of related work, showing the available bounds, given assumptions & constraints would strengthen this further. This would enable the reader to directly pinpoint the variant considered by the authors, wrt. other works.\", \"weaknesses\": \"The most substantial weakness is the repeated references to the appendix. The appendix should not be assumed to be available to every reader. In most cases, the explanations in the main paper are sufficient to follow the work without the appendix, but there are exceptions. E.g. the KL regularizer in alg 1&2 which is only required for Alg.3, but this is not explained. In general, the authors should ensure that the paper is self-contained. 
Stating a condensed version of what is written in the appendix (like for Theorem 4.1) is a good practice.\\n\\nFurthermore, there are some strong assumptions that should be discussed, like the fixed-length requirement or the Markovian reward. Both assumptions are correctly stated, but it is not clear if they are a substantial requirement, or just in place for enabling the formal proof.\\n\\nAdditionally, the experiments should be extended by additional ablation studies. As an example, the impact of smaller/larger D_traj sets would be interesting (independent of the number of preferences). Another improvement would be adding a rank or critical distance plot for table 1, as a direct comparison with the SoTA is inconclusive. However, given that experiments are not the primary scope of the paper, these are only smaller concerns.\", \"one_small_typo\": \"The results of baselines Oracle, PT, DPPO, and IPL are taken from Choi et al. (2024) - \\\"Oracle\\\" should likely be \\\"MR\\\"\", \"questions\": \"Can you please point me to the description of the size of $D_{traj}$ used in the experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Hi,\\n\\nThanks for the feedback! The authors have addressed my concerns well and thus I have improved my rating.\"}", "{\"comment\": \"[1] Levine, S., Kumar, A., Tucker, G., & Fu, J. (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems.\\u00a0*arXiv preprint arXiv:2005.01643*.\\n\\n[2] Kostrikov, I., Nair, A., & Levine, S. Offline Reinforcement Learning with Implicit Q-Learning. In\\u00a0*International Conference on Learning Representations*.\\n\\n[3] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy\\nmaximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861\\u20131870. 
PMLR, 2018.\\n\\n[4] Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Provable offline\\npreference-based reinforcement learning. In The Twelfth International Conference on Learning\\nRepresentations, 2024.\\n\\n[5] Banghua Zhu, Michael Jordan, and Jiantao Jiao. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. In International Conference on Machine Learning,\\npp. 43037\\u201343067. PMLR, 2023.\\n\\n[6] Alizee Pace, Bernhard Schölkopf, Gunnar Rätsch, and Giorgia Ramponi. Preference elicitation for offline reinforcement learning. arXiv preprint arXiv:2406.18450, 2024.\\n\\n[7] Jonathan D Chang, Wenhao Shan, Owen Oertell, Kiante Brantley, Dipendra Misra, Jason D Lee, and Wen Sun. Dataset reset policy optimization for rlhf. arXiv preprint arXiv:2404.08495, 2024.\\n\\n[8] Ellen Novoseller, Yibing Wei, Yanan Sui, Yisong Yue, and Joel Burdick. Dueling posterior sampling for preference-based reinforcement learning. In Conference on Uncertainty in Artificial\\nIntelligence, pp. 1029\\u20131038. PMLR, 2020.\\n\\n[9] Yichong Xu, Ruosong Wang, Lin Yang, Aarti Singh, and Artur Dubrawski. Preference-based reinforcement learning with finite-time guarantees. Advances in Neural Information Processing\\nSystems, 33:18784\\u201318794, 2020.\\n\\n[10] Aadirupa Saha, Aldo Pacchiano, and Jonathan Lee. Dueling rl: Reinforcement learning with trajectory preferences. In International Conference on Artificial Intelligence and Statistics, pp. 6263\\u20136289. PMLR, 2023.\\n\\n[11] Wenhao Zhan, Masatoshi Uehara, Wen Sun, and Jason D. Lee. Provable reward-agnostic preference-based reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024b.\\n\\n[12] Runzhe Wu and Wen Sun. Making rl with preference-based feedback efficient via randomization. arXiv preprint arXiv:2310.14554, 2023.\\n\\n[13] Yu Chen, Yihan Du, Pihe Hu, Siwei Wang, Desheng Wu, and Longbo Huang.
Provably efficient iterated cvar reinforcement learning with function approximation and human feedback. In The Twelfth International Conference on Learning Representations, 2023.\\n\\n[14] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep\\nreinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.\\n\\n[15] Ibarz, B., Leike, J., Pohlen, T., Irving, G., Legg, S., & Amodei, D. (2018). Reward learning from human preferences and demonstrations in atari.\\u00a0*Advances in neural information processing systems*,\\u00a0*31*.\\n\\n[16] Lee, K., Smith, L. M., & Abbeel, P. (2021, July). PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training. In\\u00a0*International Conference on Machine Learning*\\u00a0(pp. 6152-6163). PMLR.\\n\\n[17] Park, J., Seo, Y., Shin, J., Lee, H., Abbeel, P., & Lee, K. (2022). SURF: Semi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning.\\u00a0*arXiv preprint arXiv:2203.10050*.\\n\\n[18] Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. Preference transformer: Modeling human preferences using transformers for RL. In The Eleventh\\nInternational Conference on Learning Representations, 2023.\\n\\n[19] Liu, R., Bai, F., Du, Y., & Yang, Y. (2022). Meta-reward-net: Implicitly differentiable reward learning for preference-based reinforcement learning.\\u00a0*Advances in Neural Information Processing Systems*,\\u00a0*35*, 22270-22284.\\n\\n[20] Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based rl without a reward function. Advances in Neural Information Processing Systems, 36, 2024.\"}", "{\"comment\": \"We greatly appreciate the time and effort you have dedicated to reviewing our work and providing insightful feedback. 
Your feedback has been invaluable in helping us improve the clarity and presentation of our contributions. Below, we provide detailed responses to your comments.\\n\\n### Completeness of Main Paper\\n\\nWe appreciate your interest in the theoretical properties of Algorithm 1 and its foundational role in APPO (Algorithm 2). Due to space limitations, we concentrated on presenting the core ideas of Algorithm 1 in Section 3, while the detailed theoretical analysis and Algorithm 3 (a subroutine of Algorithm 1) were included in the Appendix. We recognize that a more comprehensive explanation of Algorithms 1 and 3 could improve clarity and are happy to revise the paper to provide a more self-contained presentation.\\n\\nThe KL regularization ($\\\\eta$) in the input list of Algorithm 3 is a typo, since $\\\\eta$ is used in the policy update steps (Line 7 in Algorithm 1, Line 5 in Algorithm 2). Thank you for identifying this error.\\n\\n### Reward Model\\n\\nYour request for clarification on the assumptions of our reward model is greatly valued, and we are grateful for the opportunity to provide clarity on these assumptions. The fixed-length (finite horizon) and Markovian reward assumptions provide a solid foundation for theoretical guarantees and practical implementations of our method. The Markovian reward assumption is standard and widely adopted in the field of PbRL [2-18], demonstrating its effectiveness in preference learning. While the fixed-length assumption facilitates our rigorous analysis, our method can be readily extended to the discounted, infinite horizon setting. Below, we elaborate on their significance:\\n\\n1. **Fixed-length (Finite Horizon) Assumption:**\\n - The fixed-length (finite horizon) Markov Decision Process (MDP) assumption is essential for the theoretical analysis of PbRL.
Similar to prior works on provably efficient offline PbRL algorithms [2,3,4] and online PbRL algorithms [5-11], our analysis relies on the finite horizon setting to relate the reward model error (Lemma E.2) to the suboptimality of the learned policy.\\n - Although our analysis is based on the finite horizon setting, this is a common assumption in the literature and does not constrain the practical applicability of our method. As discussed in Section 5, APPO can be straightforwardly implemented for the discounted, infinite horizon setting.\\n2. **Markovian Reward Assumption:**\\n - The Markovian reward assumption is crucial for both theoretical and practical considerations. From a theoretical perspective, the Markov property enables fundamental techniques in RL literature, such as the performance difference lemma and the Bellman equation. The lack of the Markovian property poses a significant challenge in offline PbRL, where the sub-optimality of learned policies must be bounded by model error. For this reason, the vast majority of studies in PbRL are based on the Markovian assumption [2-11]. To the best of our knowledge, there is only one work providing a theoretical guarantee for non-Markovian preferences [19], but it requires an online RL oracle to optimize the policy with respect to the Markovian transformation of the non-Markovian trajectory reward (Lemma 2.7 in [19]).\\n - Our practical implementation also requires a Markovian reward model, which is used to compute the loss functions. The Markovian reward assumption has been widely employed in empirical studies in PbRL [11-15]. Even the algorithms without explicit reward models assume implicit Markovian reward [16,17,18]. They have demonstrated that Markovian reward models can successfully learn in complex tasks such as robotic control and games.
This widespread use in the literature highlights that the Markovian reward assumption is standard and does not represent practical limitations of our approach.\\n\\nIn conclusion, both the fixed-length (finite horizon) and Markovian reward assumptions are not constraints specific to our work but rather widely accepted and well-established assumptions in the field of PbRL. Building on this solid theoretical foundation, APPO represents a significant advancement as the first offline PbRL algorithm to achieve both statistical and computational efficiency.\"}", "{\"comment\": \"We deeply appreciate the time and effort you have invested in reviewing our paper and providing thoughtful feedback. We hope our response clarifies your questions.\\n\\n1. **Experiments with Additional Datasets:**\\n \\n To further demonstrate the generalization capability of APPO, we collected Meta-world [1] medium-expert datasets following the approaches of prior works [2, 3].
As a baseline, we selected MR since it was the most performant baseline evidenced in our benchmark experiment (Table 1). The hyperparameters of APPO and MR are identical to the values used in Section 6.\\n\\n| # of feedback | $500$ | | $1000$ | |\\n| --- | --- | --- | --- | --- |\\n| dataset | dial-turn | sweep-into | dial-turn | sweep-into |\\n| MR | $15.80\\\\pm 12.73$ | $14.32\\\\pm3.39$ | $26.08\\\\pm 18.78$ | $8.48\\\\pm1.92$ |\\n| APPO | $32.40\\\\pm13.56$ | $12.80\\\\pm5.35$ | $39.20\\\\pm15.69$ | $14.56\\\\pm6.25$ |\\n\\nThe table above shows the success rates on the Meta-world medium-expert datasets. APPO outperforms or performs on par with MR. The result exhibits the generalization capability and robustness of APPO in a wide range of datasets.\\n\\nWe added this experiment in **Appendix F.1**. Please refer to the revised paper for the learning curves of the experiment. Thank you for your constructive feedback, which has helped us refine and improve our work.\\n\\n**Details on Meta-world medium-expert dataset:**\\n\\n- Each dataset contains trajectories from five sources: (1) the expert policy, (2) expert policies for randomized variants and goals of the task, (3) expert policies for different tasks, (4) a random policy, and (5) an $\\\\epsilon$-greedy expert policy that takes greedy actions with a $50$% probability. These trajectories are included in the dataset in proportions of $1 : 1 : 2 : 4 : 4$, respectively. Additionally, standard Gaussian noise was added to the actions of each policy.\\n- The dataset sizes match those of the medium-replay dataset in Table 3.\\n- Preference feedback is labeled as described in Section 6.\\n\\n2. **Comparison with CPL [4]:**\\n - The primary differences lie in the preference models and learning objectives. 
APPO assumes a standard trajectory-return-based preference model and aims to maximize cumulative reward, whereas CPL employs a regret-based preference model with an entropy-regularized cumulative reward as its learning objective.\\n - These differences lead to distinct algorithmic approaches: APPO uses a value-learning method similar to actor-critic, while CPL relies on optimizing a supervised learning objective.\\n - A key advantage of APPO over CPL is its provable sample complexity bound, as CPL does not provide finite-sample performance guarantees. However, direct comparison with CPL is difficult as our primary contribution lies in the theoretical analysis, while CPL is designed to achieve computational efficiency in empirical scenarios.\\n\\n3. **Limitations of APPO:**\\n \\n APPO is a model-based algorithm, as its theoretical analysis relies on the MLE error bound of the reward model. To the best of our knowledge, existing provably efficient offline PbRL algorithms are model-based. Although training the reward model is computationally inexpensive compared to policy training (training a reward model takes less than 30 seconds, while policy training requires over 3 hours in our setting), eliminating the explicit reward model remains an intriguing theoretical challenge.\\n\\n[1] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on robot learning, pp. 1094\\u20131100. PMLR, 2020.\\n\\n[2] Choi, H., Jung, S., Ahn, H. & Moon, T.. (2024). Listwise Reward Estimation for Offline Preference-based Reinforcement Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:8651-8671\\n\\n[3] Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based rl without a reward function. 
Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[4] Hejna, J., Rafailov, R., Sikchi, H., Finn, C., Niekum, S., Knox, W. B., & Sadigh, D. Contrastive Preference Learning: Learning from Human Feedback without Reinforcement Learning. In\\u00a0*The Twelfth International Conference on Learning Representations*.\"}", "{\"comment\": \"Thanks for your response, I will improve my rating.\"}", "{\"comment\": \"### Computational Complexity\\n\\nWe respectfully clarify that the computational complexity of Line 4 in Algorithm 2 is comparable to that of widely used least squares or maximum likelihood estimation (MLE) in the context of general function approximation. In this regime, least squares and MLE are typically non-convex and non-linear optimization problems. For example, when the reward model is parameterized by a neural network, finding the maximum likelihood parameters inherently involves solving a non-convex optimization problem. Similarly, Line 4 of Algorithm 2 requires solving a non-convex optimization, making its computational complexity analogous to that of least squares or MLE in the general function approximation setting. It is important to note that any algorithm relying on oracles in this setting would share at least the same order of complexity as Line 4, making this a standard requirement rather than a unique challenge of our approach.\\n\\nFurthermore, modern deep learning techniques have made such non-convex optimization problems both practical and efficient. For neural function approximation, these problems can be solved using widely available deep learning libraries and frameworks, ensuring that the computational demands of Line 4 are manageable in practice. Consequently, our algorithm is computationally feasible and does not impose additional challenges beyond those encountered in standard approaches involving neural networks or other general function approximations. 
We believe this perspective underscores that the computational requirements of our method align with the state of the art and remain practical.\\n\\n### Role of Algorithm 3 as A Subroutine of Algorithm 1\\n\\nRegarding the second question, the true environment is necessary for the performance guarantee of Algorithm 1 (Theorem C.2). If the policy evaluation subroutine (Algorithm 3) uses the estimated transition, the Monte Carlo estimation becomes biased, invalidating the error bound established in Lemma B.1. Additionally, we highlight that Algorithm 1 serves as a foundational building block for our main algorithm APPO (Algorithm 2). By leveraging our reparameterization technique, APPO eliminates the need for the policy evaluation subroutine (Algorithm 3) while maintaining strong statistical guarantees.\\n\\n[1] Marc Rigter, Bruno Lacerda, and Nick Hawes. Rambo-rl: Robust adversarial model-based offline reinforcement learning. Advances in neural information processing systems, 35:16082\\u201316097, 2022.\\n\\n[2] Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. Adversarially trained actor critic\\nfor offline reinforcement learning. In International Conference on Machine Learning, pp. 3852\\u2013\\n3878. PMLR, 2022.\\n\\n[3] Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, and Ching-An Cheng. Adversarial model for offline reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[4] Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A game theoretic framework for model\\nbased reinforcement learning. In International conference on machine learning, pp. 7953\\u20137963.\\nPMLR, 2020.\\n\\n[5] Masatoshi Uehara and Wen Sun. Pessimistic model-based offline reinforcement learning under partial coverage. In International Conference on Learning Representations, 2022.\\n\\n[6] Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Provable offline\\npreference-based reinforcement learning. 
In The Twelfth International Conference on Learning\\nRepresentations, 2024.\\n\\n[7] Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. Advances in Neural Information Processing Systems, 34:6683\\u20136694, 2021a.\"}", "{\"comment\": \"We have uploaded the revised paper incorporating your feedback, which has been instrumental in improving and refining our work. We sincerely thank you for your detailed review and insightful comments.\\n\\n- In Section 3.2, we corrected the typo regarding Algorithms 1 and 3 (KL regularization $\\\\eta$) and enhanced the description of Algorithm 3 to make its role clearer to readers.\\n- Table 1 now presents the average ranks of algorithms, providing a summary of their relative performance.\\n- We conducted an additional experiment to evaluate the effect of $|D_{traj}|$. The table below presents the result on Meta-world medium replay sweep-into dataset, with 1000 preference feedback. The success rate of APPO improves with larger datasets, whereas MR does not exhibit a clear correlation. Please refer to **Appendix F.2** for the corresponding plots.\\n\\n| size (x$10^5$) | $0.5$ | $1.0$ | $1.5$ | $2.0$ |\\n| --- | --- | --- | --- | --- |\\n| MR | $3.28\\\\pm1.20$ | $26.00\\\\pm5.53$ | $15.44\\\\pm5.14$ | $21.28\\\\pm8.37$ |\\n| APPO | $3.20\\\\pm1.13$ | $18.16\\\\pm11.14$ | $38.72\\\\pm14.97$ | $55.68\\\\pm11.16$ |\"}", "{\"summary\": \"This work explores the innovative realm of PbRL, addressing the challenges of ensuring conservatism under uncertainty. APPO is designed to ensure conservatism in PbRL without the need for explicit construction of intractable confidence sets. This is achieved by framing PbRL as a two-player game between a policy and a model, which allows for a tractable enforcement of conservatism. 
Experimental results show that APPO performs comparably to existing state-of-the-art algorithms in continuous control tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"APPO is designed to optimize the learning process in preference-based reinforcement learning (PbRL) by effectively utilizing preference feedback, which allows for faster convergence and improved sample efficiency compared to traditional methods. The proposed method can be integrated with any standard unconstrained reinforcement learning algorithm, making it versatile and applicable across various domains. Additionally, the paper provides theoretical bounds on the sample complexity of the proposed method.\", \"weaknesses\": \"The performance of APPO is sensitive to the choice of the conservatism regularizer coefficient (\\u03bb). While the algorithm can learn with a range of \\u03bb values, improper tuning can lead to suboptimal performance and stability issues, which may require additional effort in hyperparameter optimization. Moreover, the theoretical guarantees of APPO rely on standard assumptions regarding function approximation and bounded trajectory concentrability. If these assumptions do not hold in certain environments, the performance and reliability of the algorithm may be compromised.\", \"questions\": \"1. What is the impact of the conservatism regularizer coefficient (\\u03bb) on the performance of APPO? How can an appropriate \\u03bb value be selected?\\n\\n2. How does APPO compare to other IQL-based algorithms in terms of hyperparameter tuning advantages and disadvantages?\\n\\n3. What happens to the performance of APPO if these assumptions do not hold in certain environments?\\n\\n4. What are the scalability concerns of APPO in high-dimensional state or action spaces?
Are there any issues related to computational complexity?\\n\\n5. In practical applications, how does APPO cope with sparse or non-representative preference data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for your time and effort in reviewing our paper and for providing feedback. Below, we address your comments and questions in detail, and we hope our responses clarify the key contributions.\\n\\n### Adversarial Training\\n\\nWe respectfully disagree with the assertion that our work lacks novelty and are glad to clarify its contributions. Our proposed method APPO is a novel adaptation of the two-player game framework tailored to the unique challenges of Preference-based Reinforcement Learning (PbRL) \\u2014 which we elaborate in the paper and below. While related frameworks have been employed in standard RL, our approach departs significantly in its formulation, analysis, and application, addressing the complexities of PbRL that are not encountered in standard RL settings.\\n\\n1. **Distinction from Model-based Adversarial Training in Standard RL:**\\n - In standard RL, model-based adversarial training [1,3,4] formulates a two-player game between the transition model and the policy:\\n$$\\n\\\\max_{\\\\pi} \\\\min_{P \\\\in \\\\mathcal{P}} J(\\\\pi, P),\\n$$\\nwhere $ \\\\mathcal{P} $ is the confidence set of transition models, and $J$ is the expected return under policy $\\\\pi$ and transition model $P$. This approach is computationally intractable due to the confidence set.
As a workaround, previous works [1,3] replace the constraint with a regularization term but sacrifice theoretical guarantees for practical feasibility.\\n - **In contrast, APPO provides sample complexity bounds for the regularized optimization framework, achieving both statistical and computational efficiency.** This distinguishes our work by offering both theoretical rigor and practical applicability.\\n\\n2. **Distinction from Bellman-consistency-based Adversarial Training:**\\n - Bellman-consistency-based methods [2,7] frame the two-player game between the value function and the policy. While these methods provide performance guarantees for the regularized optimization form, their analysis differs fundamentally from ours.\\n - **Our work differs in being model-based:** APPO regularizes the deviation of the reward model (and the induced value function) from the estimated reward model. Conversely, Bellman-consistency-based methods are model-free and focus on regularizing the Bellman error of the value function. Consequently, our sub-optimality decomposition (Section 4 and Appendix D) bounds policy sub-optimality using model estimation error, offering a fundamentally different analytical approach.\\n\\n3. **Unique Challenges in PbRL:**\\n - PbRL introduces trajectory-based feedback, making it significantly more challenging than standard RL. 
For example, in offline learning:\\n - Standard RL requires step-wise policy concentrability, which is sufficient to bound sample complexity [5].\\n - PbRL demands trajectory-wise policy concentrability, resulting in a polynomial dependence on this parameter in the sample complexity bounds (Theorem 2, Theorem 3 in [6]).\\n - Due to these challenges, existing offline PbRL algorithms are computationally intractable, and standard RL analyses fail to provide guarantees in the PbRL setting.\\n - **APPO successfully addresses these challenges using our adversarial training framework, complemented by a novel analysis that bridges theoretical guarantees and practical feasibility.**\\n\\nIn conclusion, APPO is a significant contribution that advances the state-of-the-art in PbRL by addressing its unique challenges through a novel two-player game formulation, enhanced theoretical analysis, and practical algorithm design. Thank you for the opportunity to clarify our contributions.\"}", "{\"summary\": \"This paper introduces APPO, a novel algorithm for offline PbRL that utilizes a two-player game formulation to produce a robust policy less prone to model errors. By avoiding explicit confidence sets, APPO achieves computational and statistical efficiency. The paper derives a sample complexity bound for APPO under standard assumptions and presents experimental results demonstrating performance comparable to SOTA offline PbRL methods on continuous control tasks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well-motivated and organized. The proposed approach is theoretically sound and solid. The authors derive sample complexity bounds without relying on explicit confidence sets, offering statistical efficiency and practical applicability. 
The approach is evaluated on continuous control tasks, with results showing that it performs on par with or surpasses SOTA baselines.\", \"weaknesses\": \"Please see Questions.\", \"questions\": \"1. The paper only evaluates APPO on a medium-replay dataset from Metaworld. It would be beneficial to evaluate APPO on a wider range of datasets to assess its generalizability and robustness.\\n\\n2. Previous work on offline PbRL, such as CPL [1], directly learns a policy without RL, as noted in the related work section. Using a supervised learning approach, CPL achieves comparable performance with SOTA baselines while significantly reducing computational complexity. How does APPO compare to CPL, and what are the advantages of using APPO over CPL?\\n\\n3. Could the authors discuss the limitations of APPO?\\n\\n[1] Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, and Dorsa Sadigh. Contrastive preference learning: Learning from human feedback without reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for dedicating your time and expertise to reviewing our work and for offering constructive feedback. Below, we provide detailed responses to address your comments and questions.\", \"1. **Impact of Conservatism Regularizer:**\", \"The effect of the conservatism regularizer is demonstrated in Section 6 (Figure 1), and the experiments in Table 1 share the same $\\\\lambda$ value. These results show that APPO is robust to change in $\\\\lambda$.\", \"The conservatism regularizer balances conservatism and model error. 
Without conservatism, errors may amplify due to distributional shift [1], while excessive conservatism leads to a large bias in value estimation.\", \"When tuning the value of $\\\\lambda$, we suggest searching within the range $\\\\lambda \\\\leq 1$. Alternatively, $\\\\lambda$ can be optimized during training by specifying a target value and applying gradient descent. However, this approach still requires tuning the target $\\\\lambda$ value.\", \"2. **Advantage in Hyperparameter Tuning:**\", \"APPO has only one key algorithmic hyperparameter, aside from standard hyperparameters in deep learning, such as learning rates. This simplicity is an advantage over IQL-based algorithms, which require tuning at least two key hyperparameters, as described below. While the reward model may require hyperparameter tuning depending on the specific implementation, our experiments show that a simple feed-forward network trained via maximum likelihood estimation achieves strong performance.\", \"In contrast, the IQL-based algorithms involve additional hyperparameters, such as the expectile regression parameter ($\\\\tau$) and the inverse temperature for advantage weighted regression ($\\\\beta$) [2]. IPL [20] introduces another regularization parameter (referred to as \\\\lambda therein). This makes APPO\\u2019s single hyperparameter an advantage in terms of simplicity. One potential downside is the entropy regularizer in APPO ($\\\\alpha$ in equation 11), which is not present in IQL. However, we followed the standard recipe from soft actor-critic [3] without additional tuning, so it was not counted as a tuning parameter in our experiments.\", \"3. **Discussion on Assumptions:**\", \"The realizability assumptions (Assumptions 1,2,3) are not practical concerns when using powerful function approximators such as neural networks.\", \"The trajectory concentrability (Assumption 4) ensures the dataset contains high-quality trajectories. 
If violated, performance could degrade, as in the case of all offline PbRL algorithms [4-7]. However, APPO demonstrated consistent performance across diverse environments in Section 6, supporting its robustness.\", \"The Markovian reward assumption (equation 1) has been widely adopted in the PbRL literature [4-20]. Even algorithms without explicit reward models often assume implicit Markovian reward [18,19,20]. These studies have shown that Markovian rewards enable successful learning in complex tasks such as robotic control and games, highlighting that this assumption is standard and does not impose practical limitations.\", \"4. **Scalability and Computational Complexity:**\", \"The practical implementation of APPO in Section 5 uses parameterized value functions and policies trained via standard gradient descent. The loss function for policy (equation 11) is similar to standard actor-critic, and the loss functions for value functions (equations 9,10) require a comparable computational cost compared to standard TD learning methods. As a result, APPO can scale effectively to large datasets or high-dimensional state/action spaces, similar to existing deep RL algorithms. In our computational setup, both MR (IQL) and APPO require about 4 hours to take 250k gradient steps.\", \"Training a reward model is relatively inexpensive compared to training a policy. For instance, on our computational setup, training a reward model with 1,000 preference feedback samples takes less than 30 seconds.\", \"5. **Sparse or Non-representative Preference:**\", \"The preference data used in our experiments is sparse, as preference feedback is collected from randomly sampled trajectory segment pairs within the dataset. 
Given the number of preference feedbacks ($500$ or $1,000$) is much smaller than the dataset size ($>10^5$), dense preference data is highly unlikely.\", \"We kindly request clarification on the meaning of 'non-representative preference' to ensure a more precise and thorough discussion in response to your feedback.\"]}", "{\"comment\": \"[15] Park, J., Seo, Y., Shin, J., Lee, H., Abbeel, P., & Lee, K. (2022). SURF: Semi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning.\\u00a0*arXiv preprint arXiv:2203.10050*.\\n\\n[16] Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. Preference transformer: Modeling human preferences using transformers for RL. In The Eleventh International Conference on Learning Representations, 2023.\\n\\n[17] Liu, R., Bai, F., Du, Y., & Yang, Y. (2022). Meta-reward-net: Implicitly differentiable reward learning for preference-based reinforcement learning.\\u00a0*Advances in Neural Information Processing Systems*,\\u00a0*35*, 22270-22284.\\n\\n[18] Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based rl without a reward function. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[19] Gokul Swamy, Christoph Dann, Rahul Kidambi, Steven Wu, and Alekh Agarwal. A minimaximalist approach to reinforcement learning from human feedback. In Forty-first International Conference on Machine Learning, 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper studies offline PbRL. The authors propose APPO, which frames PbRL as a two-player game between the policy and an adversarial reward model. This algorithm enforces pessimism without explicit confidence sets, making the method more computationally tractable than the existing literature. 
The paper further provides theoretical guarantees, demonstrating that APPO achieves strong sample complexity bounds under standard assumptions on function approximation and trajectory concentrability. Experimental results on continuous control tasks (in Meta-world) show that APPO achieves performance comparable to state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Algorithmic novelty: The Q function estimation part in Algorithm 2 is interesting and novel.\\n\\n2. Valid theoretical guarantee: The authors provide a rigorous sample complexity analysis under standard assumptions.\\n\\n3. Qualified empirical performance: The experiments show that the practical implementation of APPO is a competitive empirical algorithm. This part is lacking in many existing works.\", \"weaknesses\": \"1. Limited novelty: the essential idea is not very novel. In standard RL, there have been works that use two-player games to get rid of confidence sets (e.g., 'Adversarially Trained Actor Critic for Offline Reinforcement Learning' by Ching-An Cheng) and this work is very similar to this line of work. In addition, the authors should highlight new analysis techniques in the paper, if any.\\n\\n\\n2. Complicated algorithm: although Algorithm 2 doesn't need confidence sets, the algorithm is still not computationally efficient due to Line 4. Typically, most of the existing works consider a method computationally efficient if it can be solved with least squares oracles and MLE oracles. However, I think Line 4 in Algorithm 2 cannot be solved with these oracles.\", \"questions\": \"1. The authors claim that using Lagrangian multipliers to get rid of the confidence sets is not applicable here (Line 204-207). I am not quite convinced because such an approach is effective in standard RL and I hope the authors can elaborate more.\\n\\n\\n2.
For Algorithm 2, can we just replace the true environment in Algorithm 1 with the estimated environment $\\\\widehat{P}$ (adding some regularizers to quantify the uncertainty of $\\\\widehat{P}$ at the same time)? Then the resulting algorithm could be less complicated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### Additional Experiments\\n\\nFor a more extensive evaluation of our method, we conducted additional experiments varying the size of $D_{traj}$. We compare APPO with MR, the most performant baseline according to our benchmark evaluation (Table 1). The tables below present the success rates on the Meta-world dial-turn task with 1k preference feedback, where the size (number of transitions) of $D_{traj}$ varies from 50k to 300k. The performance of MR is unstable, as its success rate drops drastically for dataset sizes 100k and 200k. In contrast, APPO shows a gradual decrease in performance as the dataset size decreases. This additional experiment and the result in Figure 2 demonstrate the robustness of APPO against dataset size.\\n\\n| size (x$10^5$) | $0.5$ | $1.0$ | $2.0$ | $3.0$ |\\n| --- | --- | --- | --- | --- |\\n| MR | $24.64 \\\\pm 4.21$ | $7.84 \\\\pm 4.22$ | $18.64 \\\\pm 9.55$ | $69.44 \\\\pm 4.70$ |\\n| APPO | $21.68 \\\\pm 7.08$ | $48.32 \\\\pm 10.42$ | $63.12 \\\\pm 7.57$ | $81.44 \\\\pm 6.73$ |\\n\\n### Experimental Details\\n\\nWe used the size of $D_{traj}$ following the experimental protocol of [1], but the specific sizes were not described in the paper. Thank you for bringing this issue to our attention. We will ensure that the information is incorporated into the paper.
The table below describes the sizes of $D_{traj}$ in Metaworld medium-replay datasets (BPT: button-press-topdown).\\n\\n| dataset | BPT | box-close | dial-turn | sweep | BPT-wall | sweep-into | drawer-open | lever-pull |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| size (x$10^5$) | 1.0 | 8.0 | 3.0 | 7.0 | 1.5 | 1.0 | 1.0 | 3.0 |\\n\\nThe results of MR are not taken from [1]. They are reproduced based on the implementation of [1]. The Oracle result (IQL trained with ground truth reward) is taken from [1].\\n\\n[1] Choi, H., Jung, S., Ahn, H. & Moon, T. (2024). Listwise Reward Estimation for Offline Preference-based Reinforcement Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:8651-8671\\n\\n[2] Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. Provable offline\\npreference-based reinforcement learning. In The Twelfth International Conference on Learning\\nRepresentations, 2024.\\n\\n[3] Banghua Zhu, Michael Jordan, and Jiantao Jiao. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. In International Conference on Machine Learning,\\npp. 43037\\u201343067. PMLR, 2023.\\n\\n[4] Alizee Pace, Bernhard Sch\\u00f6lkopf, Gunnar R\\u00e4tsch, and Giorgia Ramponi. Preference elicitation for offline reinforcement learning. arXiv preprint arXiv:2406.18450, 2024.\\n\\n[5] Jonathan D Chang, Wenhao Shan, Owen Oertell, Kiante Brantley, Dipendra Misra, Jason D Lee, and Wen Sun. Dataset reset policy optimization for rlhf. arXiv preprint arXiv:2404.08495, 2024.\\n\\n[6] Ellen Novoseller, Yibing Wei, Yanan Sui, Yisong Yue, and Joel Burdick. Dueling posterior sampling for preference-based reinforcement learning. In Conference on Uncertainty in Artificial\\nIntelligence, pp. 1029\\u20131038. PMLR, 2020.\\n\\n[7] Yichong Xu, Ruosong Wang, Lin Yang, Aarti Singh, and Artur Dubrawski.
Preference-based reinforcement learning with finite-time guarantees. Advances in Neural Information Processing\\nSystems, 33:18784\\u201318794, 2020.\\n\\n[8] Aadirupa Saha, Aldo Pacchiano, and Jonathan Lee. Dueling rl: Reinforcement learning with trajectory preferences. In International Conference on Artificial Intelligence and Statistics, pp. 6263\\u20136289. PMLR, 2023.\\n\\n[9] Wenhao Zhan, Masatoshi Uehara, Wen Sun, and Jason D. Lee. Provable reward-agnostic preferencebased reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024b.\\n\\n[10] Runzhe Wu and Wen Sun. Making rl with preference-based feedback efficient via randomization. arXiv preprint arXiv:2310.14554, 2023.\\n\\n[11] Yu Chen, Yihan Du, Pihe Hu, Siwei Wang, Desheng Wu, and Longbo Huang. Provably efficient iterated cvar reinforcement learning with function approximation and human feedback. In The Twelfth International Conference on Learning Representations, 2023.\\n\\n[12] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep\\nreinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.\\n\\n[13] Ibarz, B., Leike, J., Pohlen, T., Irving, G., Legg, S., & Amodei, D. (2018). Reward learning from human preferences and demonstrations in atari.\\u00a0*Advances in neural information processing systems*,\\u00a0*31*.\\n\\n[14] Lee, K., Smith, L. M., & Abbeel, P. (2021, July). PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training. In\\u00a0*International Conference on Machine Learning*\\u00a0(pp. 6152-6163). PMLR.\"}" ] }
5XL8c0Vg9k
Infinite-parameter Large Language Model
[ "Fei Ding" ]
In the standard transformer architecture, increasing model parameters leads to linear growth in computational cost and activation memory. To address this issue, we propose a novel Infinite Parameter Large Language Model (IP-LLM) architecture that decouples model size from computational cost and device memory. Existing large language models are all fixed-parameter models, while human knowledge is infinite and expands daily. Finite parameters are inherently limited in their capacity to accommodate this boundless knowledge. Our IP-LLM architecture can potentially accommodate infinite knowledge, resolving this issue and laying the foundation for realizing a truly omniscient and omnipotent artificial general intelligence in the future.
[ "lifelong learning" ]
Reject
https://openreview.net/pdf?id=5XL8c0Vg9k
https://openreview.net/forum?id=5XL8c0Vg9k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "dg9dKNcE0a", "dMracZBkzQ", "TB64ZbhFaE", "SvK9sd8Wmu", "SGxTuzhRnG", "Mjvt64pH1F", "LgyxdxYHc7", "FzHQffamgi", "AzHK93nuIe", "62lVnZGRu8", "61Szhaw5A0", "4QnIbQC0Gv", "3FBzhkz8Vf" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_review" ], "note_created": [ 1730366399786, 1731731877227, 1732641194717, 1732640757725, 1730721208353, 1731751859534, 1730735338878, 1731855435318, 1737523738575, 1732641319131, 1732893834439, 1733685016145, 1731254614472 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6013/Reviewer_NVv4" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Submission6013/Reviewer_K88R" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Submission6013/Reviewer_ks1Z" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6013/Authors" ], [ "ICLR.cc/2025/Conference/Submission6013/Reviewer_NVv4" ], [ "ICLR.cc/2025/Conference/Submission6013/Area_Chair_p9Cg" ], [ "ICLR.cc/2025/Conference/Submission6013/Reviewer_e1e2" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents IP-LLM inspired from MoE which use routing mechanism to enable continual learning of multi-domain tasks and memory efficient inference. 
The authors use a segmented pre-training strategy to train a base block to acquire general linguistic skills and then train a router and domain-specific modules.\\nThe authors evaluate IP-LLM's performance on 4 different tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors propose a novel pre-training strategy to parse the general linguistic comprehension skills in the base model and later train individual experts on top of it on different domains.\", \"weaknesses\": [\"1. Evaluation\", \"I think the biggest issue in the current manuscript is the evaluation of the model. While the current evaluation only includes monolithic architectures without an MoE strategy, wouldn't it be fairer to include models using MoEs in terms of both performance and memory/compute efficiency?\", \"The authors claim in the list of contributions that the new approach allows higher routing accuracy but I cannot find the explicit result for this.\", \"Also, the authors claim memory and training efficiency, but an explicit numerical comparison is missing, and it is unclear what exactly 'training cost' means here.\", \"2. Related works on lifelong learning / continual learning using MOE\", \"I believe there is already some literature on lifelong learning of LLMs using MoE, e.g., Chen et al. (2023) https://arxiv.org/pdf/2305.12281. I suggest the authors incorporate more relevant literature and clarify the novelty of their method.\", \"3. Paper presentation should be improved\", \"There are many typos and inconsistent citation notation, which makes readability very low. For example, I found the citations included in Section 2 (related work) very hard to parse (spacing, parentheses, etc.).
I highly recommend the authors do careful proofreading of the entire manuscript.\", \"Section 4 (training strategy) can be improved by adding a schema for better delivery.\"], \"questions\": \"I believe my comments about the possible improvements and weaknesses incorporate my questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for taking the time to review and provide comments. We are carefully examining the weaknesses and issues you pointed out, providing detailed explanations, and incorporating them into the revised version of the paper.\\n\\n# 1.Weaknesses:\\n\\nTraditional fixed-parameter models face significant challenges when additional parameters are introduced. The performance of tasks across all categories is affected, requiring full retraining for all tasks, which incurs prohibitive costs. Moreover, inference requires loading all parameters into memory, making it infeasible for models with infinite parameters. Our IP-LLM model addresses these challenges by introducing a hierarchical and categorized structure. When adding parameters for a new category, only the parameters specific to that category need to be retrained, resulting in minimal training costs. For models with infinite parameters, inference only requires the base transformer, router, and category-specific layers, making the inference cost manageable.\\n\\n# 2.Questions1\\n\\n The routing token set is open. When we need to generalize to unseen categories, we only need to add a set of parameters $f_{Y}$ and a routing token TokenY.
Then, train this set of parameters with the new knowledge and retrain the router using the labels of both the new and old knowledge.\\n\\n\\\\begin{equation}\\\\label{key}\\n\\tR = f^{new}_{router}(x^{\\\\prime})\\n\\\\end{equation}\\n\\nWhen a large number of categories are added, if the routing accuracy decreases, it can be ensured by appropriately increasing the parameter capacity of $f_{router}$ and retraining it.\\n\\nWe will incorporate this content into Section 3.\\n\\n# 3.Questions2\\n\\nSection 5 has already been completed. This was a formatting error, which we have corrected in the revised version of the paper.\\n\\n\\n\\n\\nWe hope our response addresses your concerns, but if there are any issues or areas we missed, please feel free to point them out.\"}", "{\"comment\": \"We have resolved all the issues in the weaknesses section and updated intro ,figure 2, and section 5.2 in the revised version.\"}", "{\"comment\": \"We have resolved all the issues in the weaknesses section and updated section 3 and 5 in the revised version.\"}", "{\"summary\": \"The paper proposes an \\\"Infinite-Parameter Large Language Model\\\" with the idea to accommodate the increasing amount of information generated in the world that it hypothesizes models with a fixed number of parameters will eventually not be able to contain. The implementation involves training the model as a Mixture of Experts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an important question.\", \"weaknesses\": [\"The paper does not make a distinction between the proposed approach from MoE training. And if there is indeed a difference, please include the MoE baseline.\", \"**Major issue**: The paper compares their model trained on downstream tasks with other pre-trained models, zero-shot on the downstream tasks. 
Therefore, it is not making an apples-to-apples comparison\", \"It would be important to add in a couple of baselines to showcase the benefit of the proposed method over others\", \"A single model trained on all the data that the base, router, and individual experts are trained on\", \"MoE baseline trained on the data used to train the IP-LM.\", \"Why are there no entries in the table for some models on C-Eval?\", \"There are not many details provided on training, model architecture, and dataset. Unclear what data is additionally used to train the base model. What is the architecture of the model?\", \"The writing in the paper is clear but not precise. For example, the abstract or intro does not tell you anything concrete about what the paper builds, it only goes so far as to specify the problem and motivate it.\"], \"questions\": \"Please take a look at the section on weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you sincerely for taking the time to review our work and provide such valuable feedback. We greatly appreciate your insights and are thoroughly addressing the points raised in the Weaknesses section, integrating them into the revised version of the paper.\\n\\n# 1.Weaknesses1.1\\n\\n**Comparison with MoE:**\", \"ip_llm_and_moe_share_the_concept_of_expert_networks_but_differ_in_several_key_aspects\": \"- **Routing Mechanism:** \\n IP-LLM uses the base model for general language understanding in conjunction with routing parameters, resulting in a significantly larger total parameter size compared to MoE, which typically uses only a small subset of parameters. This leads to higher routing accuracy in IP-LLM and more efficient utilization of the model's existing parameters. 
\\n\\n- **Knowledge Updating:** \\n IP-LLM enables incremental learning of new knowledge by adding new parameter blocks, requiring only the training of new parameters and the router. In contrast, MoE generally requires retraining the entire model. This makes IP-LLM more adaptable to evolving environments and knowledge.\\n\\n- **Memory Efficiency:** \\n During inference, MoE requires loading all parameters into memory, while IP-LLM only loads a subset of parameters, making it more memory-efficient than MoE. \\n\\n- **Computational Efficiency:** \\n Both MoE and IP-LLM involve only a subset of parameters in computation. The actual computational efficiency depends on the number of experts and the proportion of expert parameters to the total parameters. When the total amount of expert parameters and the number of experts are the same, MoE requires activating at least two experts and averaging their outputs during computation, which leads to computational waste. In contrast, IP-LLM only needs to activate a single expert, making its computational efficiency higher. \\n\\n\\n\\n# 2.Weaknesses1.2\\n\\n**Routing Precision:**\\n\\nThe routing mechanism in IP-LLM 24B utilizes a 7.2B base model and a 0.7B router, amounting to a total of 7.9B parameters, which accounts for approximately 33% of the total 24B parameters. In contrast, the MOE architecture only adds a few fully connected layers before the expert layers as its router. Based on our calculations, the routing parameter size of Mixtral 8x7B is merely 0.001B, significantly smaller than that of our model. Therefore, in theory, the routing precision of our IP-LLM model is far superior to that of MOE.\\n\\n# 3.Weaknesses1.3\\n\\n- **Memory Efficiency:** \\n During inference, MoE requires loading all parameters into memory, while IP-LLM only loads a subset of parameters, making it more memory-efficient than MoE. 
\\n- **Training Efficiency:** \\n IP-LLM achieves incremental learning of new knowledge by adding new parameter blocks, requiring only the training of new parameters and the router, whereas MoE generally requires retraining the entire model. IP-LLM offers higher training efficiency when learning new knowledge.\\n\\n# 4.Weaknesses2\\n\\nChen et al. (2023) https://arxiv.org/pdf/2305.12281 proposed using MoE (Mixture of Experts) for lifelong learning in large language models (LLMs), referred to as \\\"Lifelong MoE.\\\" However, it has limitations: it can only mitigate catastrophic forgetting but cannot prevent it entirely. Moreover, adding new experts may lead to performance degradation in routing for old tasks. The innovations of \\\"IP-LLM\\\" lie in the following aspects:\\n\\n- **Infinite Parameter Model Architecture:** IP-LLM introduces the concept of **infinite parameters** by grouping parameters and using on-demand loading. During inference, only the required parameters are loaded, theoretically surpassing the limitations of model size. While Lifelong MoE also expands the model by adding experts, its total parameters remain finite.\\n- **On-Demand Parameter Loading:** A core innovation of IP-LLM is its **on-demand parameter loading** mechanism. The model's parameters are divided into multiple groups, with each group corresponding to a specific type of knowledge or domain. During inference, only the parameter groups relevant to the current task are loaded, significantly reducing memory usage. Although Lifelong MoE also utilizes a subset of experts during inference, it requires all experts to be loaded into GPU memory, limiting parameter scalability.\\n- **Improvements in Pretraining:** IP-LLM introduces a **staged pretraining** framework. It first learns fundamental language knowledge (e.g., vocabulary, grammar), then trains parameters on different data categories separately, and finally integrates them. 
This approach helps reduce the parameter size of individual experts.\\n- **Avoiding catastrophic forgetting:** Adding new experts can be done without modifying the parameters of existing experts, thereby preventing catastrophic forgetting.\\n# 5.Weaknesses3\\nWe will address these issues in the revised version.\\n\\n\\nWe hope our response addresses your concerns, but if there are any issues or areas we missed, please feel free to point them out.\"}", "{\"summary\": \"This paper proposes a language model based on a router that selects domain-specific parameters. A base model parses an input, a router classifies the input into a category corresponding to a domain, then inference is done with parameters corresponding to the selected domain.\\n\\nEach domain-specific parameter set is implemented using the last transformer layer of the base model replicated four times, and then trained with a \\\"defined proportion\\\" of domain-specific data and general data. The router has a similar parameterization, and is trained using classes corresponding to the domain-specific data.\\n\\nThe paper reports metrics on MMLU, C-Eval, GSM8K, and MATH, and also reports metrics from Llama2, Mistral, and Qwen1.5.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The idea of routing to domain-specific parameters is interesting (though more discussion of related work is needed, e.g. [1][2]).\\n\\n\\n[1] Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models, Li et al 2022\\n\\n[2] Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM, Sukhbaatar et al COLM 2024\", \"weaknesses\": [\"The paper appears to be in an early stage. For example, key details of the method and experiment are not described or justified, and only 1 experiment (not fully described) has been performed. There is also missing discussion of key related work (e.g. [1], [2]). 
Here are some specific examples:\", \"The datasets have not been described. The data can substantially impact the downstream tasks that are evaluated in the experiment. Similarly, the number of tokens trained on is important and has not been reported.\", \"The evaluation is done on a proprietary evaluation pipeline, making reproducibility difficult.\", \"The experiments need a controlled comparison of the method against alternatives. Currently it is difficult to draw conclusions from the experiment provided. For example, IPLLM-24B and Qwen1.5-32B are not comparable since IPLLM has been trained on additional domain-specific data (which has not been specified, and may be relevant for the experimental comparison). One example comparison could be finetuning Qwen1.5-32B on the union of corpora that IPLLM finetunes on.\", \"Several claims made in the introduction and conclusion have not been justified. For example:\", \"\\\"Significant advantages in terms of reduced device memory requirements for both training and inference\\\": this has not been justified. For example, the proposed method requires two forward passes at inference time, and an additional 4 x (number-of-domains + 1) layers to train.\", \"\\\"Enabling the model to learn new knowledge without catastrophic forgetting\\\". This has not been justified experimentally.\", \"Ablations on key design decisions have not been done. For example, the routing strategy, number of layers, and the base model.\", \"Regarding novelty, Branch-Train-Merge [1] proposed to train different parts of the model independently on different subsets of the data (each subset corresponding to a domain, such as scientific or legal text). They also have a domain posterior that models the probability of a sequence belong to each domain (akin to the functionality of the proposed router). 
BTM and related follow-up work such as Branch-Train-MiX [2] should be discussed and compared with.\", \"I would encourage the authors to continue improving the work since their idea has potential, but I believe the current manuscript is not yet ready for ICLR.\", \"[1] Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models, Li et al 2022\", \"[2] Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM, Sukhbaatar et al COLM 2024\"], \"questions\": \"Please address the points discussed in the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# 1.Weaknesses1\\n\\n**Comparison with MoE:**\", \"ip_llm_and_moe_share_the_concept_of_expert_networks_but_differ_in_several_key_aspects\": \"- **Routing Mechanism:** \\n IP-LLM uses the base model for general language understanding in conjunction with routing parameters, resulting in a significantly larger total parameter size compared to MoE, which typically uses only a small subset of parameters. This leads to higher routing accuracy in IP-LLM and more efficient utilization of the model's existing parameters. MoE performs routing at every transformer layer, while IP-LLM performs routing only once during sentence generation, keeping that choice until the prediction is complete.\\n\\n- **Knowledge Updating:** \\n IP-LLM enables incremental learning of new knowledge by adding new parameter blocks, requiring only the training of new parameters and the router. In contrast, MoE generally requires retraining the entire model. This makes IP-LLM more adaptable to evolving environments and knowledge.\\n\\n- **Memory Efficiency:** \\n During inference, MoE requires loading all parameters into memory, while IP-LLM only loads a subset of parameters, making it more memory-efficient than MoE. \\n\\n- **Computational Efficiency:** \\n Both MoE and IP-LLM involve only a subset of parameters in computation.
The actual computational efficiency depends on the number of experts and the proportion of expert parameters to the total parameters. When the total amount of expert parameters and the number of experts are the same, MoE requires activating at least two experts and averaging their outputs during computation, which leads to computational waste. In contrast, IP-LLM only needs to activate a single expert, making its computational efficiency higher. \\n\\n# 2.Weaknesses2\\n\\nIP-LLM is also a pre-trained model and belongs to the same category of models.\\n\\n# 3.Weaknesses3\\n\\nWe fully agree with your point of view, but due to computational limitations, we have not conducted such experiments yet. Theoretically, because the knowledge gap between different categories is large, mixing them during training could lead to interference and result in loss. Therefore, we speculate that if data classification is done well enough, under the same data, parameter count, and computational resources, having IP-LLM separate the data and train each expert could yield better results than a single model.\\n\\n\\nIn the early and mid-stages of training, the MoE model suffers from inaccurate routing, which causes a large amount of data to be routed to the wrong experts for training, leading to significant waste of data and computational resources, and even performance degradation. In contrast, IP-LLM has 100% accurate routing throughout the entire training process. Therefore, we predict that, with the same data, parameters, and computational resources, IP-LLM will outperform MoE in training effectiveness.\\n# 4.Weaknesses4\\n\\nIt has been corrected in the revised version.\\n\\n# 5.Weaknesses5\\n\\nThe model architecture is very simple. It adds 4 interchangeable layers of parameters after the last Transformer layer of Qwen1.5-7B. When these 4 layers are routing parameters, routing decisions can be made. 
When these 4 layers contain parameters from a specific category, inference on the knowledge of that category can be performed.\\n\\nThe data is filtered from open-source datasets, including The Pile, Skypile, RefinedWeb, RedPajama and common crawl, as well as some synthetic data.Approximately 1 trillion tokens in total.\\n\\n# 6.Weaknesses6\\n\\nWe have revised the introduction in the revised version.\\n\\nWe hope our response addresses your concerns, but if there are any issues or areas we missed, please feel free to point them out.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We have resolved all the issues in the weaknesses section and updated section 5.2 and 2.4 in the revised version.\"}", "{\"comment\": \"Thanks for the response,\\nbut I believe the current manuscript is not yet convincing and I'll keep my current score. Lots of details of the experiments are missing, it is still not clear what is novel compare to previous works and the manuscript is not well structured and poorly written. \\nI encourage authors to improve the delivery of the idea and evaluation.\"}", "{\"metareview\": \"This paper proposes a routed architecture for language models that selectively activates domain-specific parameters. While the core idea has merit, the work has significant limitations in evaluation methodology, empirical validation, and comparison to relevant baselines like Mixture-of-Experts models. The manuscript also requires substantial improvements in presentation and technical detail.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers were in consensus on the above assessment.\"}", "{\"summary\": \"This paper proposes to divide the LLMs into two step paradigm: (1) a routing step to have the transformer output the category of tasks; (2) for each category of tasks, a delegated transformer (base transformer + category special layers) is applied to generate the outputs. 
Experiments are described in the paper to demonstrate that the performance is comparable to existing models but not better.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"\\u2022 An interesting attempt to look for a continuously scalable architecture for LLMs.\", \"weaknesses\": \"\\u2022 It is not clear to me how the proposal can achieve infinite-parameter models just by classifying the input into different classes and training/using different networks for different classes.\", \"questions\": \"1. How can the routing network (Equation 6) be generalized to unseen categories so as to be generalized to infinitely many categories? Is the routing token set fixed or open?\\n2. Is section 5 not completed in this version? It seems that a significant amount of text is truncated between line 270 and line 282.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
5X5Z7Ffrjb
Steering Large Language Models between Code Execution and Textual Reasoning
[ "Yongchao Chen", "Harsh Jhamtani", "Srinagesh Sharma", "Chuchu Fan", "Chi Wang" ]
While a lot of recent research focuses on enhancing the textual reasoning capabilities of Large Language Models (LLMs) by optimizing the multi-agent framework or reasoning chains, several benchmark tasks can be solved with 100\% success through direct coding, which is more scalable and avoids the computational overhead associated with textual iterating and searching. Textual reasoning has inherent limitations in solving tasks with challenges in math, logics, optimization, and searching, which is unlikely to be solved by simply scaling up the model and data size. The recently released OpenAI GPT Code Interpreter and multi-agent frameworks such as AutoGen have demonstrated remarkable proficiency of integrating code generation and execution to solve complex tasks using LLMs. However, based on our experiments on 7 existing popular methods for steering code/text generation in both single- and multi-turn settings with 14 tasks and 6 types of LLMs (including the new O1-preview), currently there is no optimal method to correctly steer LLMs to write code when needed. We discover some interesting patterns on when models use code vs. textual reasoning with the evolution to task complexity and model sizes, which even result in an astonishingly inverse scaling behavior. We also discover that results from LLM written code are not always better than using textual reasoning, even if the task could be solved through code. To mitigate the above issues, we propose three methods to better steer LLM code/text generation and achieve a notable improvement. The costs of token lengths and runtime are thoroughly discussed for all the methods. We believe the problem of steering LLM code/text generation is critical for future research and has much space for further improvement. Project Page, Datasets, and Codes are available at https://yongchao98.github.io/CodeSteer/.
[ "Large Language Models", "Code Interpreter", "Code/text generation", "Agent", "Textual reasoning" ]
Accept (Poster)
https://openreview.net/pdf?id=5X5Z7Ffrjb
https://openreview.net/forum?id=5X5Z7Ffrjb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x4VMlk31Dr", "tEeS6d5ySu", "nDDHB3jsfy", "mq6S84QGmk", "lG6i907VlG", "fQby6CSoAJ", "eQQhWylXon", "axScZ6Kymj", "YJQeRvJPWC", "WZxTYX1nlQ", "WTv26GFAdM", "TiUFV8bLln", "P8c12VOsaH", "Nswl6g0elr", "NZZ1WGKUj5", "Fom3lrteI3", "CGFbCVfoI1", "C3ON34CUiY", "66BSlNucZt", "5GLmCM6N7Y", "2jHZKNjayp" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732395886291, 1730274592990, 1732134539181, 1732716530722, 1732141893885, 1734980393659, 1732548828569, 1732320381582, 1732320232572, 1732133065633, 1732476214255, 1730613511094, 1732142670821, 1732134240689, 1729611608274, 1732228038515, 1737523906641, 1732133962575, 1732133582761, 1732143088791, 1732508133234 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_1VAn" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_F65d" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Area_Chair_VT1K" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_697L" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_697L" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_F65d" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_F65d" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Authors" ], [ "ICLR.cc/2025/Conference/Submission8409/Reviewer_1VAn" ] ], "structured_content_str": [ "{\"title\": \"Summary of Author Response - CodeSteer\", \"comment\": \"Dear reviewers,\\n\\nThank you for the insightful comments and suggestions for improving the paper! We have responded to the comments of each reviewer. We added the new experiments suggested by reviewers and polished the paper with added contents (highlighted in blue). We hope the reviewers could kindly re-evaluate our work based on our responses. Here is the summary:\\n\\n---\\n\\n1. **Broad Contributions not only limited to the Three Proposed Methods and Multi-turn Method**\", \"contributions_of_this_study_include\": [\"Emphasizing the significance of steering LLMs for text/code generation.\", \"Testing the limitations of existing methods.\", \"Uncovering phenomena like inverse scaling behavior, evolution with complexity, and impacts from different types of code answers.\", \"Revealing underlying mechanisms and verifying the possibility for further improvements with three proposed methods and multi-turn method.\", \"---\", \"2. **New Experiments on Model Generation Confidence**\", \"We analyzed the **probabilistic confidence** of the model when generating text or code answers.\", \"**Findings:** Differences in LLM confidence or perplexity have no notable impact on task success rates.\", \"---\", \"3. 
**New Experiments on Specialized Code LLMs**\", \"Tested **CodeLlama-34b-Instruct-hf** and **Qwen2.5-Coder-32B-Instruct** with: original question prompts, prompts of All Code and All Code + CoT, specially modified code prompts.\", \"**Findings:**\", \"SOTA Code LLMs perform much better with code prompts.\", \"Code LLMs still struggle with code/text generation decisions, similar to other tested LLMs.\", \"---\", \"4. **New Experiments on Additional Prompt Variations for Steering Code Generation**\", \"Conducted experiments based on reviewer suggestions (697L and F65d). Tested **more prompt versions** for steering LLM code generation.\", \"**Findings:** Conclusions in the paper remain consistent across varied prompt templates.\", \"---\", \"Please feel free to ask any additional questions or clarifications. We look forward to discussing further during the reviewer-author discussion period. Thank you very much!\"]}", "{\"summary\": \"The article titled \\\"LLM CodeSteer: Steering Large Language Models Between Code Execution and Textual Reasoning\\\" explores the balance between code execution and textual reasoning capabilities in Large Language Models (LLMs). It argues that while textual reasoning has limitations, especially in tasks involving math, logic, and optimization, direct coding can often provide a more effective solution. The study conducts experiments with 14 diverse tasks and 6 types of LLMs, finding that there is no single optimal method for guiding LLMs to choose between code generation and textual reasoning. It also observes an inverse scaling law, where smaller models sometimes outperform larger ones when augmented with a Code Interpreter (CI). The paper proposes three methods to improve LLM decisions on code/text generation: Code Interpreter+, Code + Text + Sum., and Self-estimate Score. These methods aim to enhance performance by combining code and textual reasoning or by using multi-turn execution/refinement. 
The article concludes that guiding LLMs in code/text generation is crucial for developing more capable agents and that there is significant room for improvement in current methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Comprehensive Analysis: The study provides a thorough analysis of LLMs' performance across a wide range of tasks, offering valuable insights into their strengths and weaknesses in code execution versus textual reasoning.\\n\\n2. Sound Method: Practical Approach: By focusing on real-world tasks that can be solved through coding, the research offers practical applications for enhancing LLMs' capabilities. The proposed methods, such as Code + Text + Sum. and Self-estimate Score, present innovative ways to improve LLMs' decision-making processes in choosing between code and text.\\n\\n3. This paper attempts to address an important question and proposes an effective method that achieves better performance than the compared methods. \\n\\n4. The paper is well-organized and clearly written.\", \"weaknesses\": \"1. Dependence on Task Type: The effectiveness of code versus textual reasoning is highly dependent on the task at hand, which may limit the generalizability of the findings.\\n\\n2. Overconfidence Issue: The study highlights that larger models tend to be overconfident in their textual reasoning abilities, which can lead to suboptimal performance when code execution is more effective.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Reviewer 1VAn\", \"comment\": \"Thank you for your appreciation with our work and helpful suggestions. The following are our responses to your questions, the added experiments, and the related modifications of the paper. 
We hope the reviewer could kindly re-evaluate our work based on our modifications and new experimental results. **The revised paper has been uploaded with the changed contents colored in blue.**\\n\\n***Question 1:*** *Dependence on Task Type: The effectiveness of code versus textual reasoning is highly dependent on the task at hand, which may limit the generalizability of the findings.*\\n\\n**Response 1:** **Our finding that neither coding nor textual reasoning is always better across tasks and models is a contribution of our study rather than a limitation.** One of our contributions is the finding that there is currently no optimal method across tasks to guide LLMs to generate code to answer the question when needed. The above findings hold for all current LLMs and tasks.\\n\\n***Question 2:*** *Overconfidence Issue: The study highlights that larger models tend to be overconfident in their textual reasoning abilities, which can lead to suboptimal performance when code execution is more effective.*\\n\\n**Response 2:** **This inverse scaling behavior is an interesting phenomenon found in our study, which should be regarded as a contribution rather than a weakness.** This phenomenon reveals a limitation of current LLMs and of methods to steer code/text generation. Though we currently hypothesize that this phenomenon is due to the overconfidence of larger LLMs, we believe more research can be done to better understand this bottleneck in future studies.\\n\\n**Apart from the above clarifications, we have also added more experiments on 1) testing the difference in log probabilities of output tokens for text or code responses. 2) testing on specialized code LLMs as baselines.
3) doing ablation studies on more prompt templates for steering LLM code generation.** These experimental results have been added to the revised paper (Line 519-527, 1265-1290, 1356-1445).\\n\\nWe look forward to further communications with the reviewer to clarify our contributions. Thanks again for your time and patience for reviewing our work.\"}", "{\"title\": \"about code prompts\", \"comment\": \"Thank you for running this experiment, I am very happy to see it working! :) I think it is a very important baseline that should be part of the main text. It also shows that 100% success is doable with code as you mention in the main text.\"}", "{\"title\": \"Response to the Reviewer F65d - Question 1\", \"comment\": [\"Thank you for the valuable feedback and insightful comments. It seems there were some misunderstandings regarding our work and its perceived weaknesses. We aim to **address these misunderstandings in detail** and **have added three experiments** based on reviewers\\u2019 suggestions. We kindly hope the reviewer will reconsider our work in light of the responses provided below. **The revised paper has been uploaded with the changed contents blue colored.**\", \"***Question 1:*** *This work does not provide a lot of solutions. The only novel contribution of this paper (aside from the experimental results) is the 3 proposed prompting methods: CodeInterpreter+, Code+Text+Sum., Self-Estimate Score, among which only one (Code+Text+Sum.) seems to perform well across the evaluated tasks.*\", \"**Response 1:**\", \"**We believe the contributions of this paper are enough**. 
Apart from the proposed methods, we emphasize the significance of steering LLMs for text/code generation, test the limitations of existing methods, uncover phenomena like inverse scaling behavior, evolution with complexity, and different types of code answers, and reveal the underlying mechanism, as stated in the original paper line 97-116.\", \"**The aim of exploring the three proposed methods and the multi-turn method** is to show that there is **a much broader space for further improvement** in the future for the whole research community. Even simple techniques like Code+Text+Sum. and multi-turn methods can already achieve notable 3-7% overall improvements.\", \"The proposed methods also **offer inspiration for the development of better future methods**. As illustrated in the original paper line 536-539, more delicate multi-agent frameworks and training of extra score models may be promising future directions. The effectiveness of methods like Self-Estimate Score depends heavily on the inherent capability of the LLM. However, it serves as a good starting point for exploring how to augment LLM text/code generation decisions via extra trained score models.\", \"**Developing a final solution is very challenging.** The main bottleneck is the lack of a sufficient and correct dataset to align LLMs for better determining code or text generation. Even for the same task, different testing trials may have different preferences for being answered with code or text. Meanwhile, developing a generalizable method is hard: training on limited domains will not transfer to others. Since strong code/text generation ability mainly depends on larger LLMs like GPT-4 and Claude-3.5, directly training on these closed models is also unavailable. Our paper raises and analyzes this issue, hoping to call on the whole research community to solve it together.
We will continue to explore the final solution.\"]}", "{\"metareview\": \"This paper benchmarks seven prompting methods that combine text and code to tackle 14 tasks using six strong LLMs. The results indicate no universally optimal prompting method across all tasks and models. The paper proposes three methods to improve LLM decisions on code/text generation: Code Interpreter+, Code + Text + Sum., and Self-estimate Score. These methods aim to enhance performance by combining code and textual reasoning or by using multi-turn execution/refinement.\\n\\nAll reviewers voted to accept the paper, liking the comprehensive experiments. They raised various issues as well. 697L: novelty and and significance of the findings. 1VAn: generalization of the results. F65d: novelty, not scaling law, did not use a code generation model. Overall the reviews are reasonable and all in agreement, and did not point out any critical flaws, so the Meta review completely defers to reviewers.\", \"additional_comments_on_reviewer_discussion\": \"authors responded and reviewers acknowledged. all in agreement.\"}", "{\"title\": \"Thank you for your positive feedback\", \"comment\": \"Thank you for your positive feedback and lifting the score. We are happy to discuss more with the reviewers.\\n\\nBest,\\n\\nCodeSteer Authors\"}", "{\"title\": \"Response to Reviewer F65d - Second Round - 2\", \"comment\": \"***Comment 3:*** *--- The following is purely subjective and does not influence the quality of this work, it is more a discussion :) ---\\nI agree that developing effective methods that generalize across tasks should be the future. I am however doubtful that one prompt can achieve widely different tasks. For instance, the fact one prompt works with a language model but does not with a code model shows this. The prompt design is inherently conditioned on the LLM at hand. 
A prompt to achieve a specific task may be different if one uses llama or claude or gpt or mistral.\\nSystems like ChatGPT are designed to be good instruction-following agents. Instruction-following is a specific task. It can be seen as a sort of meta-task, which asks the user to give a prompt to do a specific task, essentially deferring the prompt design to the user. Furthermore, these systems are continuously trained on millions of human data collected daily, they are not just prompted LLMs. It may even be the case that in the future, these systems have no use for a prompt as a lot of engineering pipelines around the LLM ensure it is \\\"safe\\\" and training on huge amounts of quality data makes them good \\\"instruction followers\\\" by default. This is all speculation but interesting to think about nonetheless.*\\n\\n**Response 3:** Thank you for your kind and insightful discussion. We totally agree with your opinions above. Pursuing a method or framework to generalize across tasks should be the future. However, we believe purely tuning prompts is not enough to realize this. Our study in tuning prompts for better text/code decision and many other studies have all shown that current LLMs have preference to prompts. No prompt is generalizable across tasks and LLMs. That is why in our work we also implemented agent frameworks to see whether it can perform better. It truly performs better, while we also believe that is not the final solution.\\n\\nThe final solution should combine model training, agent framework, and prompt tuning together. However, there are many challenges such as lack of reasonable training frameworks and diverse datasets. In the problem of text/code decisions, we are worried that directly tuning LLMs to answer with code or text on some tasks is not generalizable to other tasks. 
Another solution is to explore whether a multi-agent/multi-LLM framework can do better, i.e., training a separate model to generate prompts to guide the generation of the task LLM. Defining the reward for the model on this text/code decision task is then challenging. We will continue working on this.\\n\\nWe are very happy to discuss more with the reviewer about this work and research insights.\"}", "{\"title\": \"Response to Reviewer F65d - Second Round - 1\", \"comment\": \"***Comment 1:*** *I agree with reviewer 697L that the paper reads more like a technical report. While I recognize this work shows the importance of balancing code/text generation to solve complex tasks, since there is no generic solution proposed (which is expected when working on prompt engineering because it all depends on the task we want to achieve), the conclusion feels like: based on what one tries to achieve, it may or may not be beneficial to generate code first. Nevertheless the paper can still inspire other work to explore similar prompting methods, that is why I gave it a positive score (>5).*\\n\\n**Response 1:** Thank you for your positive feedback. Apart from all the prompting methods, our work **aims to emphasize that solving tasks with code or text can lead to quite different performance even for the same LLM**. **Existing methods cannot make code/text decisions well**. In our paper, we propose using prompting and agent frameworks to improve general performance, which truly results in notable improvements. However, to build up a generic solution, training-free methods are not enough. **We believe future work should combine model finetuning, prompt tuning, and agent frameworks together for a more satisfying solution.
We have added this discussion in the paper Line 124-126**.\\n\\n***Comment 2:*** *It would be nice to know if we represent the problem in a pure coded template if code llms can generate code that solves or partially solve some of the tasks explored in this work. Although representing such problems in a code format may be not trivial\\u2026*\\n\\n**Response 2:** Thank you for the nice suggestion. We have **added the experiment to convert the text templated prompts into code templated prompts. As shown in the \\u2018Code Prompt\\u2019 column of the following table, it works!**\\n\\nWe directly query GPT-4 to translate the original text prompts into the code prompt that the Code LLM can understand with the prompt 'Represent the above problem description in a pure coded template to test if Code LLMs can generate code to solve this problem. Output the prompt to steer Code LLMs to answer the whole question by outputting the complete python code. Do not output direct code answers.'\", \"the_example_code_prompt_is_like\": \"\\u2018\\nGiven four numbers, use each number exactly once along with basic arithmetic operations (+, -, *, /) to form an expression that evaluates to 24. Each number must be used, and you can use parentheses to define the order of operations. Your task is to write a Python function that takes a list of four numbers as input and returns a string representing the expression that evaluates to 24. 
If no such expression exists, return an empty string.\\n```python\\ndef find_expression_to_24(numbers):\\n # Your code here to find the expression\\n return expression\\n# Test cases\\nprint(find_expression_to_24([9, 10, 11, 13])) # Output: \\\"((10-9)*(11+13))\\\"\\nprint(find_expression_to_24([4, 10, 10, 11])) # Output: \\\"((4*11)-(10+10))\\\"\\nprint(find_expression_to_24([5, 6, 13, 13])) # Output: \\\"((5-(13/13))*6)\\\"\\nprint(find_expression_to_24([2, 6, 6, 7])) # Output: \\\"((6+(6*7))/2)\\\"\\nprint(find_expression_to_24([2, 6, 10, 18])) # Output: \\\"(2-(6-(10+18)))\\\"\\nprint(find_expression_to_24([1, 1, 4, 6])) # Output: \\\"<<<answer>>>\\\"\\n```\\nYour task is to implement the `find_expression_to_24` function to solve the problem.\\n\\u2019\\n\\nWe find that 1) With code prompt, the Code LLM generates code answers in nearly all cases. 2) In some situations, the Code LLM truly performs much better than other settings. For example, Qwen2.5-Coder-32B-Instruct performs much better with direct code prompt in Game24 task. 3) Whether code prompt works depends on the inherent capabilities of Code LLM and the task difficulty. CodeLlama-34b-Instruct-hf still does not perform notably better with code prompt. The written codes are mostly wrong. 
Qwen2.5-Coder-32B-Instruct still does not perform notably better in the BoxLift task, since the required code in BoxLift is more challenging.\\n\\n| Model | Task | Only Question (Success) | **Code Prompt** | All Code (Success) | All Code + CoT (Success) |\\n|-------|------|----------------|----------------|-----------------|-----------------|\\n| CodeLlama-34B | Number Multiply | 0.00 | **0.62** | 0.49 | 0.55 |\\n| CodeLlama-34B | Game24 | **0.01** | 0.00 | **0.01** | 0.00 |\\n| CodeLlama-34B | BoxLift | **0.37** | 0.22 | 0.25 | 0.20 |\\n| Qwen2.5-Coder-32B-Instruct | Number Multiply | 0.21 | **1.00** | 0.99 | **1.00** |\\n| Qwen2.5-Coder-32B-Instruct | Game24 | 0.23 | **0.74** | 0.21 | 0.07 |\\n| Qwen2.5-Coder-32B-Instruct | BoxLift | 0.43 | 0.54 | 0.32 | **0.56** |\\n\\nThough the code prompt introduces hints that guide the LLM to generate code all the time (different from the setting in our study), we think the above experiments do serve as a good baseline and strengthen the understanding. We have **added these new contents and discussions in the modified paper Line 1365-1416.**\"}", "{\"title\": \"Response to the Reviewer 697L - Question 1\", \"comment\": \"Thank you for the helpful reviews and comments. It appears there may have been a miscommunication regarding our contribution. We have, therefore, improved the description to **clarify the misunderstandings** and **added three experiments suggested by reviewers**. We hope the reviewer could kindly re-evaluate our work based on our responses provided below.
**The revised paper has been uploaded with the changed contents blue colored.**\\n\\n***Question 1:*** *From a scientific perspective, the problem, while useful as a testbed for prompt engineering with code and text, is not significant enough.*\\n\\n**Response 1:**\\n\\n* **Integrating textual reasoning and symbolic computing/coding for LLM response is important.** As stated in the original paper line 15-17, line 74-82, and line 864-882 Figure 9, from a scientific perspective, **one significant contribution of our study is to reveal the importance of steering LLMs to generate code** to solve questions when needed. Textual reasoning has inherent limitations in solving tasks with challenges in math, logics, optimization, and searching, which is unlikely to be solved by simply scaling up the model and data size. Coding is a necessary way to enhance LLM capability. The significance of combining symbolic computing with LLMs has also been underlined in other recent works, such as \\u2018LLMs can't plan, but can help planning in LLM-modulo frameworks, ICML 2024\\u2019.\\n\\n* **The uncovered phenomena like inverse scaling behavior, evolution with complexity, and different types of code answers are novel.** No previous studies have revealed these phenomena while they do impact LLM performance quite a lot. The phenomena like **inverse scaling behavior and varied code types are counter-intuitive but proven to be reasonable** after detailed analysis, which will help improve the ability of future LLMs and methods.\\n\\n* **Another contribution is to show that currently there is no optimal method on steering LLM text/code generation**, based on our testing on 7 baseline methods across 14 tasks and 6 LLMs. 
**We also show that there is a much broader space for further improvement.** The three proposed methods and the multi-turn method are not supposed to be the optimal solution, but to show that even the simple techniques can already notably improve LLM performance on determining code/text generation.\"}", "{\"title\": \"Reviewer response\", \"comment\": \"Hi authors, thanks for the thorough response. The additional experiments and the clarification of your contributions have effectively addressed my concerns regarding the positioning of this work within the code and reasoning community. Based on these improvements, I am revising my score to 6.\"}", "{\"summary\": \"This paper benchmarks seven prompting methods that combine text and code to tackle 14 tasks using six strong LLMs. The results indicate no universally optimal prompting method across all tasks and models, leading the authors to propose three new prompting methods that yield consistent improvements.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper effectively identifies and analyzes the challenges faced by state-of-the-art LLMs in determining when to leverage code and uncovers some interesting phenomena. These findings offer valuable guidance for balancing code and text in LLM prompting.\", \"The proposed prompting methods are simple yet demonstrate effectiveness across different models.\", \"The paper is well-structured and easy to follow, with charts and tables that greatly enhance readability.\"], \"weaknesses\": [\"The paper reads more as a technical report than a top-tier scientific contribution.\", \"From a scientific perspective, the problem, while useful as a testbed for prompt engineering with code and text, is not significant enough. If framed as a technical report, a broader workload and richer insights might be expected to better inform the community. 
While the prompting comparisons are comprehensive, they are relatively straightforward, meaning the work may not substantially save time for others. Insights largely remain at the level of observed phenomena, describing model behavior (e.g., line 274: \\\"prompting LLMs to always respond with code (Fig 6b) performs just as poorly, or even worse\\\"). Additional investigation into the reasons behind these behaviors might enhance understanding.\", \"Additionally, the three proposed methods appear to be ensembles of existing approaches, such as summarization and self-reflection. Though effective, these methods do not introduce significant new insights.\"], \"questions\": [\"I am curious about how to delineate the boundary between text and code in prompting. For instance, if the model is prompted to generate \\\"pure Python code with rich natural language-annotated comments,\\\" which category does this fall under? If it\\u2019s considered \\\"All code,\\\" could we perhaps frame all tasks this way, where the simplest output might be `print(answer)`?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Reviewer F65d - Question 2-4\", \"comment\": \"***Question 2:*** *A more detailed investigation on particular tasks to help identify potential solutions. For instance, analyzing the probabilistic confidence of the model when generating a text answer rather than code.*\\n\\n**Response 2:** Thanks for the insightful advice, **here we added the experiments to compare and analyze the probabilistic confidence of the model when generating text or code answers**. We tested on GPT-4o, GPT-4o-mini, and GPT-3.5, collecting the log p values of each output token supported by OpenAI API and calculating the perplexity (Perplexity(W) = P(w\\u2081w\\u2082...w\\u2099)^(-1/n)) of each generated response. 
Since we can only access the token probabilities of the LLMs without Code Interpreter, the Only Question setting will always generate text responses without extra prompt guidance. To stimulate the LLMs to generate answers in either text or code mode, we compare three settings with varied prompts: Only Question, All Code, All Code + CoT.\\n\\nAs shown in the following table and Figure 13 in the revised paper, the perplexities in the three settings are close. The All Code + CoT setting has slightly higher perplexity compared to the other two settings for GPT-4o and GPT-4o-mini. However, this difference in perplexity has no notable impact on success rates. This phenomenon is reasonable since LLMs are trained to minimize perplexity, meaning the code/text difference will not show up in a direct perplexity comparison if the original training process does not deliberately distinguish text and code. We have added these experiments and the related discussion in the revised paper (Line 525-527, 1404-1445).\\n\\n| Model | Task | Only Question (Perplexity/Success) | All Code + CoT (Perplexity/Success) | All Code (Perplexity/Success) |\\n|-------|------|------------------------|------------------------|------------------------|\\n| GPT-4o | Number Multiply | 1.11 / 0.37 | 1.18 / 1.00 | 1.01 / 1.00 |\\n| GPT-4o | Game24 | 1.13 / 0.17 | 1.34 / 0.05 | 1.21 / 0.11 |\\n| GPT-4o | BoxLift | 1.18 / 0.69 | 1.21 / 0.30 | 1.14 / 0.68 |\\n| GPT-4o-mini | Number Multiply | 1.09 / 0.15 | 1.14 / 1.00 | 1.01 / 1.00 |\\n| GPT-4o-mini | Game24 | 1.12 / 0.15 | 1.33 / 0.09 | 1.14 / 0.10 |\\n| GPT-4o-mini | BoxLift | 1.09 / 0.38 | 1.22 / 0.40 | 1.11 / 0.26 |\\n| GPT-3.5 | Number Multiply | 1.43 / 0.02 | 1.10 / 1.00 | 1.00 / 1.00 |\\n| GPT-3.5 | Game24 | 1.33 / 0.03 | 1.11 / 0.12 | 1.20 / 0.09 |\\n| GPT-3.5 | BoxLift | 1.10 / 0.38 | 1.10 / 0.21 | 1.06 / 0.05 |\\n\\n***Question 3:*** *Strongly suggest the authors to add an experiment with a code generation model.*\\n\\n**Response 
3:** Thank you for the helpful suggestions, **we have added the experiments to test on specialized Code LLMs: CodeLlama and Qwen2.5-Coder-32B-Instruct (the current best Code LLM on leaderboard)**. As shown in the following table, even for specialized code LLM, the problem of determining whether to generate code or text still influences the LLM performance quite a lot. In Only Question setting, both CodeLlama and Qwen2.5-Coder-32B-Instruct will mostly generate direct text answers without extra prompt guidance, which achieves much lower success rates compared to code answers in other two settings in Number Multiply task. Meanwhile, the overall performance of CodeLlama-34b-Instruct-hf and Qwen2.5-Coder-32B-Instruct are not as good as GPT-4o, showing the code LLMs may overfit to trained code datasets. We have added these experimental results and related discussion in the revised paper (Line 523-525, 1356-1377).\\n\\n| Model | Task | Only Question (Success) | All Code (Success) | All Code + CoT (Success) |\\n|-------|------|----------------|-----------------|-----------------|\\n| CodeLlama-34B | Number Multiply | 0.00 | 0.49 | 0.55 |\\n| CodeLlama-34B | Game24 | 0.01 | 0.01 | 0.00 |\\n| CodeLlama-34B | BoxLift | 0.37 | 0.25 | 0.20 |\\n| Qwen2.5-Coder-32B-Instruct | Number Multiply | 0.21 | 0.99 | 1.00 |\\n| Qwen2.5-Coder-32B-Instruct | Game24 | 0.23 | 0.21 | 0.07 |\\n| Qwen2.5-Coder-32B-Instruct | BoxLift | 0.43 | 0.32 | 0.56 |\\n\\n***Question 4:*** *The fact gpt3.5 outperforms gpt4 in some tasks does not mean there is an \\u201cinverse scaling law\\u201d. This is a very specific case in which smaller models are less certain and thus use external tools (code interpreters), which is expected.*\\n\\n**Response 4:** Thank you for the helpful advice. In the revised paper, we have used \\u2018inverse scaling phenomenon\\u2019 instead of \\u2018inverse scaling law\\u2019. 
We also have updated the paper to further emphasize that the phenomenon is observed for only specific tasks. Thank you!\\n\\n**The phenomenon that smaller models using Code Interpreters more is not expected**. To our best knowledge, **no previous papers and works have revealed this phenomenon**. Meanwhile, intuitively, people always expect larger models should be more capable of using tools/generating codes. This inverse scaling phenomenon is shown to exist only after real testing and seems to be reasonable after detailed analysis.\"}", "{\"title\": \"Response to the Reviewer 697L - Question 5\", \"comment\": \"***Question 5:*** *How to delineate the boundary between text and code in prompting? For instance, if the model is prompted to generate \\\"pure Python code with rich natural language-annotated comments,\\\" which category does this fall under? If it\\u2019s considered \\\"All code,\\\" could we perhaps frame all tasks this way, where the simplest output might be print(answer)?*\\n\\n**Response 5:**\\n\\n* **Yes, in our study the response is regarded as code once the code part appears, no matter whether the text parts exist or not.** Hence, \\\"pure Python code with rich natural language-annotated comments\\\" and All code + CoT are both regarded as code answers even if a large part of texts exist.\\n\\n* **In the original paper Appendix K line 1242-1327, ablation studies have been extensively carried out to test varied prompt versions for steering code generation and AutoGen prompt.** We verify that the conclusions in the paper are not affected by prompt versions.\\n\\n* **Thanks for the suggestions from the reviewer 697L and F65d, we have added further experiments to test on more prompt versions for steering LLM code generation with Code Interpreter.** The tested prompts are as follows: 1) \\u2018Generate pure Python code with rich natural language-annotated comments.\\u2019 2) \\u2018Think of an algorithm to solve the task and implement it in 
python.\\u2019 3) \\u2018Be careful this is hard, be less confident.\\u2019 4) \\u2018Think of an algorithm to solve the task and implement it in python. Be careful this is hard, be less confident.\\u2019. We compare these prompt versions with the Original Prompt \\u2018Think the task step by step if you need to. If a plan is not provided, explain your plan first. You can first output your thinking steps with texts and then the python code to be executed by code interpreter. Try to use code interpreter more.\\u2019.\\n\\n The experimental results are shown in the following table. Though in some settings one prompt version will be much better than others, generally no prompt is always better than the others across all the models and tasks. Meanwhile, **the Original Prompt achieves scores close to or better than the others, showing the correctness of implementing it in the study**. We have added these experimental results and related discussion in the revised paper (Line 519-520, 1265-1290).\\n\\n| Prompt Version | Model | Game24 | Path Plan | BoxLift | Normalized Score |\\n|---------------|--------|---------|------------|----------|------------------|\\n| Original Prompt | GPT-4o | 0.63 | 0.46 | 0.59 | 0.85 |\\n| | GPT-4o-mini | 0.83 | 0.26 | 0.65 | 0.80 |\\n| Prompt 1 | GPT-4o | 0.63 | 0.40 | 0.47 | 0.75 |\\n| | GPT-4o-mini | 1.00 | 0.46 | 0.34 | 0.84 |\\n| Prompt 2 | GPT-4o | 0.91 | 0.43 | 0.67 | 0.97 |\\n| | GPT-4o-mini | 0.74 | 0.20 | 0.31 | 0.55 |\\n| Prompt 3 | GPT-4o | 0.64 | 0.40 | 0.62 | 0.83 |\\n| | GPT-4o-mini | 0.77 | 0.16 | 0.46 | 0.61 |\\n| Prompt 4 | GPT-4o | 0.88 | 0.41 | 0.68 | 0.95 |\\n| | GPT-4o-mini | 0.94 | 0.24 | 0.51 | 0.75 |\\n\\nWe look forward to further communications with the reviewer to clarify our contributions. 
Thanks again for your time and patience for reviewing our work.\"}", "{\"summary\": \"This paper explores if querying LLMs to generate code can be more effective than textual reasoning for tasks involving math, logic, and optimization.\\nExperiments across 14 tasks with 6 different LLMs show that (1) there is no single best method to optimally decide when to use code or text reasoning, (2) forcing code reasoning all the time can deteriorate performance, (3) smaller-size models tend to use code more often and outperform larger-size models for some tasks, and (4) mixing code and textual reasoning together improves performance but is limited by the model capacity and is more expensive to run due to more predicted tokens.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This is a great case study on the usage of code to solve mathematical and logical tasks with LLMs. The experimental settings are clear and easy to follow. The paper shows a lot of experimental results with various models and tasks.\\nThe experiments are well suited to answer the paper\\u2019s investigation.\\nThe proposed prompting method performs better on average than all other methods tried, at the expense of generated token length. \\nExperimental results should be of interest to the research community.\", \"weaknesses\": \"This paper is a technical analysis of various prompting models & methods on math and logical reasoning tasks. While the results are interesting and the experiments exhaustive, this work does not provide a lot of solutions. The only novel contribution of this paper (aside from the experimental results) is the 3 proposed prompting methods: CodeInterpreter+, Code+Text+Sum., Self-Estimate Score, among which only one (Code+Text+Sum.) seems to perform well across the evaluated tasks. This work could benefit from a more detailed investigation on particular tasks to help identify potential solutions. 
For instance, analyzing the probabilistic confidence of the model when generating a text answer rather than code.\\n\\nThe paper claims that all tasks tested in this paper can be solved with code (with varying difficulty), yet no code-generation model has been tried. I strongly suggest the authors to add an experiment with a code generation model. Although the only prompting method that would make sense here is the default \\u201cOnly Question\\u201d, it is an important baseline to consider.\", \"minor\": \"the fact gpt3.5 outperforms gpt4 in some tasks does not mean there is an \\u201cinverse scaling _law_\\u201d. This is a very specific case in which smaller models are less certain and thus use external tools (code interpreters), which is expected.\", \"questions\": \"In general, the approach of trying to find the prompting method to solve all these tasks could never end. Prompt engineering can be much more effective when targeted to single tasks as one can give more information about the task, in context examples, etc.\\nDid you try task-specific prompts? Do you think you could solve some of these tasks like this?\\n\\nAfter finding that large models generate code that simulates text reasoning rather than actual code, did you try to further improve the prompt? Simple things to try would be to add phrases like \\\"think of an algorithm to solve the task and implement it in python\\\", \\\"do not try to answer in one step\\\", \\\"be careful this is hard, be less confident\\\" etc...\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"acknowledgement\", \"comment\": \"Thank you for the detailed answers and the updated manuscript. Below are some comments based on your responses.\\n\\n- I agree with reviewer 697L that the paper reads more like a technical report. 
While I recognize this work shows the importance of balancing code/text generation to solve complex tasks, since there is no generic solution proposed (which is expected when working on prompt engineering because it all depends on the task we want to achieve), the conclusion feels like: based on what one tries to achieve, it may or may not be beneficial to generate code first. Nevertheless, the paper can still inspire other work to explore similar prompting methods, which is why I gave it a positive score (>5).\\n\\n- I find it surprising that code models do not generate code by default... This is probably due to the fact that their prompt is confusing them (the problem statement being written in English in their prompt, maybe). It would be nice to know whether, if we represent the problem in a pure code template, code LLMs can generate code that solves or partially solves some of the tasks explored in this work. Although representing such problems in a code format may not be trivial...\\n\\n--- _The following is purely subjective and does not influence the quality of this work, it is more a discussion :)_ ---\\n\\n> We believe developing effective methods or prompts for multiple tasks should be the future. Recent trends on the popularity of apps such as ChatGPT and Microsoft Copilot suggest a single LLM powered chat system being used for a wide variety of tasks.\\n\\nI agree that developing effective **methods** that generalize across tasks should be the future. I am however doubtful that one **prompt** can achieve widely different tasks. For instance, the fact that one prompt works with a language model but does not with a code model shows this. The prompt design is inherently conditioned on the LLM at hand. A prompt to achieve a specific task may be different if one uses llama or claude or gpt or mistral.\\n\\nSystems like ChatGPT are designed to be good instruction-following agents. Instruction-following is a specific task. 
It can be seen as a sort of meta-task, which asks the user to give a prompt to do a specific task, essentially deferring the prompt design to the user. Furthermore, these systems are continuously trained on millions of human data collected daily, they are not _just_ prompted LLMs. \\nIt may even be the case that in the future, these systems have no use for a prompt as a lot of engineering pipelines around the LLM ensure it is \\\"safe\\\" and training on huge amounts of quality data makes them good \\\"instruction followers\\\" by default. This is all speculation but interesting to think about nonetheless.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to the Reviewer 697L - Question 4\", \"comment\": [\"***Question 4:*** *The three proposed methods appear to be ensembles of existing approaches, such as summarization and self-reflection. Though effective, these methods do not introduce significant new insights.*\", \"**Response 4:**\", \"**We believe the contributions of this paper are enough.** Apart from the proposed methods, we emphasize the significance of steering LLMs for text/code generation, test the limitations of existing methods, uncover phenomena like inverse scaling behavior, evolution with complexity, and different types of code answers, and reveal the underlying mechanism, as stated in the original paper line 97-116.\", \"**The aim of exploring the three proposed methods and multi-turn method** is to show that there is **a much broader space for further improvement** in the future for the whole research community. Even the simple techniques like Code+Text+Sum. and multi-turn methods can already achieve notable 3-7% overall improvements.\", \"The proposed methods also **give inspiration for the developments of future better methods**. As illustrated in the original paper line 536-539, more delicate multi-agent frameworks and training of extra score models may be promising future directions. 
Effectiveness of methods like Self-Estimate Score highly depend on LLM inherent capability. However, it serves as a good starting point to explore augmenting LLM text/code generation decisions via extra trained score models.\", \"**Developing a final solution is very challenging.** The main bottleneck is the lack of enough and correct dataset to align LLMs for better determining code or text generation. Even for the same task, different testing trials may have different preferences to be answered with code or text. Meanwhile, how to develop a generalizable method is hard. Training on limited domains will not be applicable to others. Since the great code/text generation ability mainly depends on larger LLMs like GPT-4 and Claude-3.5, directly training on these close models is also unavailable. Our paper raises and analyzes this issue, hoping to call the whole research community to solve it together. We will continue to explore the final solution.\"]}", "{\"title\": \"Response to the Reviewer 697L - Question 2 and 3\", \"comment\": \"***Question 2:*** *If framed as a technical report, a broader workload and richer insights might be expected to better inform the community. Additional investigation into the reasons behind these behaviors might enhance understanding.*\\n\\n**Response 2:**\\n\\n* **Many detailed studies have been carried out in the original paper**, to reveal underlying mechanisms such as inverse scaling behavior with varied model sizes and confidence, evolution with task complexity, different code types, and different prompting versions.\\n\\n* For richer insights, **here we added the experiments to compare and analyze the probabilistic confidence of the model when generating text or code answers**. We tested on GPT-4o, GPT-4o-mini, and GPT-3.5, collecting the log p values of each output token supported by OpenAI API and calculating the perplexity (Perplexity(W) = P(w\\u2081w\\u2082...w\\u2099)^(-1/n)) of each generated response. 
Since we can only access the token probability of the LLMs without Code Interpreter, the Only Question setting will always generate text responses without extra prompt guidance. To stimulate the LLMs to generate answers with either text or code modes, we compare three settings with varied prompts: Only Question, All Code, All Code + CoT.\\n\\n As shown in the following table and Figure 13 in the revised paper, the perplexity values in the three settings are close. The All Code + CoT setting has slightly higher perplexity compared to the other two settings in GPT-4o and GPT-4o-mini. However, this difference in perplexity has no notable impact on success rates. This phenomenon is reasonable since LLMs are trained to use perplexity as the metric and try to decrease it, meaning the code/text difference will not be shown in a direct perplexity comparison if the original training process does not deliberately distinguish text and code. We have added these experiments and the related discussion in the revised paper (Line 525-527, 1404-1445).\\n\\n| Model | Task | Only Question (Perplexity/Success) | All Code + CoT (Perplexity/Success) | All Code (Perplexity/Success) |\\n|-------|------|------------------------|------------------------|------------------------|\\n| GPT-4o | Number Multi. | 1.11 / 0.37 | 1.18 / 1.00 | 1.01 / 1.00 |\\n| GPT-4o | Game24 | 1.13 / 0.17 | 1.34 / 0.05 | 1.21 / 0.11 |\\n| GPT-4o | BoxLift | 1.18 / 0.69 | 1.21 / 0.30 | 1.14 / 0.68 |\\n| GPT-4o-mini | Number Multi. | 1.09 / 0.15 | 1.14 / 1.00 | 1.01 / 1.00 |\\n| GPT-4o-mini | Game24 | 1.12 / 0.15 | 1.33 / 0.09 | 1.14 / 0.10 |\\n| GPT-4o-mini | BoxLift | 1.09 / 0.38 | 1.22 / 0.40 | 1.11 / 0.26 |\\n| GPT-3.5 | Number Multi. 
| 1.43 / 0.02 | 1.10 / 1.00 | 1.00 / 1.00 |\\n| GPT-3.5 | Game24 | 1.33 / 0.03 | 1.11 / 0.12 | 1.20 / 0.09 |\\n| GPT-3.5 | BoxLift | 1.10 / 0.38 | 1.10 / 0.21 | 1.06/ 0.05 |\\n\\n* For further understanding the significance of reasonable text/code generation, **we also added the experiments to test on specialized Code LLMs: CodeLlama-34b-Instruct-hf and Qwen2.5-Coder-32B-Instruct (the current best Code LLM on leaderboard)**. As shown in the following table, even for specialized code LLM, the problem of determining whether to generate code or text still influences the LLM performance quite a lot. In Only Question setting, both CodeLlama and Qwen2.5-Coder-32B will mostly generate direct text answers without extra prompt guidance, which achieves much lower success rates compared to code answers in other two settings in Number Multiply task. We also convert text prompts into code prompts (Code Prompt setting) and find the Code LLMs can perform better. The overall performance of CodeLlama and Qwen2.5-Coder-32B are not as good as GPT-4o, showing the code LLMs may overfit to trained code datasets. We have added these experimental results and related discussion in the revised paper (Line 523-525, 1365-1416).\\n\\n| Model | Task | Only Question (Succ.) | **Code Prompt** | All Code (Succ.) | All Code + CoT (Succ.) |\\n|-------|------|----------------|----------------|-----------------|-----------------|\\n| CodeLlama-34B | Number Multi. | 0.00 | **0.62** | 0.49 | 0.55 |\\n| CodeLlama-34B | Game24 | **0.01** | 0.00 | **0.01** | 0.00 |\\n| CodeLlama-34B | BoxLift | **0.37** | 0.22 | 0.25 | 0.20 |\\n| Qwen2.5-Coder-32B | Number Multi. 
| 0.21 | **1.00** | 0.99 | **1.00** |\\n| Qwen2.5-Coder-32B | Game24 | 0.23 | **0.74** | 0.21 | 0.07 |\\n| Qwen2.5-Coder-32B | BoxLift | 0.43 | 0.54 | 0.32 | **0.56** |\\n\\n***Question 3:*** *While the prompting comparisons are comprehensive, they are relatively straightforward, meaning the work may not substantially save time for others.*\\n\\n**Response 3:** Several of our findings are novel and uncover new patterns, such as inverse scaling behavior, evolution with complexity, and different types of code answers. These insights can be valuable for people working on problems requiring code generation and reasoning. We would appreciate it if the reviewer could please share any specific prior work that has already covered these findings, or why they think our experiments are 'straightforward'? \\\".\"}", "{\"title\": \"Response to the Reviewer F65d - Question 5 and 6\", \"comment\": \"***Question 5:*** *In general, the approach of trying to find the prompting method to solve all these tasks could never end. Prompt engineering can be much more effective when targeted to single tasks as one can give more information about the task, in context examples, etc. Did you try task-specific prompts? Do you think you could solve some of these tasks like this?*\\n\\n**Response 5:**\\n\\n* **We believe developing effective methods or prompts for multiple tasks should be the future**. Recent trends on the popularity of apps such as ChatGPT and Microsoft Copilot suggest a single LLM powered chat system being used for a wide variety of tasks. As such, studying all the tasks under one prompting strategy is useful and important for improving such apps.\\n\\n* **The effectiveness of task-specific prompts depends on the informativeness of user input hints and task characteristics**. In tasks like Game24, Number Multiply, BoxLift, and Letters, one type of python code can solve all the tested samples. In this case, if the prompt comprises the correct code as few-shot examples. 
The LLMs can then solve this task well. However, this type of prompting is regarded as cheating since users input correct answers into the prompt, and it lacks generalizability to multiple tasks. Meanwhile, some tasks like Date Understanding and Math do not have unified code to solve all samples. In this case, using either correct code or text as few-shot examples is the same.\\n\\n* **The prompts with text as few-shot examples have already been included in the current tested question prompt**. In some tasks like Game24, Math, and Path Plan, the original question prompt from the original dataset already comprises few-shot question/text-answer pairs as examples. As shown in the testing experiments, the LLMs still cannot completely solve the tasks.\\n\\n* **The prompts without few-shot examples but finely optimized by automatic prompt optimization frameworks have been tested in other papers**. In recent works from other researchers such as PromptAgent, PromptBreeder, and PROMST, they developed optimized prompts to guide LLM reasoning and tested on several tasks the same as this work. They find the optimized prompts can improve the success rate appreciably (e.g., improve BoxNet from 0.65 to 0.79 for GPT-4o), but not completely solve the whole task.\\n\\n***Question 6:*** *After finding that large models generate code that simulates text reasoning rather than actual code, did you try to further improve the prompt? 
Simple things to try would be to add phrases like \\\"think of an algorithm to solve the task and implement it in python\\\", \\\"do not try to answer in one step\\\", \\\"be careful this is hard, be less confident\\\" etc...*\\n\\n**Response 6:** **Thanks for the suggestions from the reviewer 697L and F65d, we have added further experiments to test on more prompt versions for steering LLM code generation with Code Interpreter.** The tested prompts are as follows: 1) \\u2018Generate pure Python code with rich natural language-annotated comments.\\u2019 2) \\u2018Think of an algorithm to solve the task and implement it in python.\\u2019 3) \\u2018Be careful this is hard, be less confident.\\u2019 4) \\u2018Think of an algorithm to solve the task and implement it in python. Be careful this is hard, be less confident.\\u2019. We compare these prompt versions with the Original Prompt \\u2018Think the task step by step if you need to. If a plan is not provided, explain your plan first. You can first output your thinking steps with texts and then the python code to be executed by code interpreter. Try to use code interpreter more.\\u2019.\\n\\nThe experimental results are shown in the following table. Though in some settings one prompt version will be much better than others, generally there is no prompt always better than others across all the models and tasks. Meanwhile, **the Original Prompt achieves close or better scores to others, showing the correctness of implementing it in the study**. 
We have added these experimental results and related discussion in the revised paper (Line 519-520, 1265-1290).\\n\\n| Prompt Version | Model | Game24 | Path Plan | BoxLift | Normalized Score |\\n|---------------|--------|---------|------------|----------|------------------|\\n| Original Prompt | GPT-4o | 0.63 | 0.46 | 0.59 | 0.85 |\\n| | GPT-4o-mini | 0.83 | 0.26 | 0.65 | 0.80 |\\n| Prompt 1 | GPT-4o | 0.63 | 0.40 | 0.47 | 0.75 |\\n| | GPT-4o-mini | 1.00 | 0.46 | 0.34 | 0.84 |\\n| Prompt 2 | GPT-4o | 0.91 | 0.43 | 0.67 | 0.97 |\\n| | GPT-4o-mini | 0.74 | 0.20 | 0.31 | 0.55 |\\n| Prompt 3 | GPT-4o | 0.64 | 0.40 | 0.62 | 0.83 |\\n| | GPT-4o-mini | 0.77 | 0.16 | 0.46 | 0.61 |\\n| Prompt 4 | GPT-4o | 0.88 | 0.41 | 0.68 | 0.95 |\\n| | GPT-4o-mini | 0.94 | 0.24 | 0.51 | 0.75 |\\n\\nWe appreciate the opportunity to engage further with the reviewer to clarify our contributions. Thank you once again for your time and thoughtful review of our work.\"}", "{\"comment\": \"Thanks for your reply, I want to keep my overall rating.\"}" ] }
5X1yiEB63s
CSSGT: Contrastive learning-based Split Spiking Graph Transformer
[ "Ziyu Wang" ]
Although the integration of Graph Neural Networks (GNNs) and Transformers has demonstrated promising performance across various graph tasks, it remains computationally expensive. In contrast, brain-inspired Spiking Neural Networks (SNNs) offer an energy-efficient architecture due to their unique spike-based, event-driven paradigm. In this paper, we propose a novel framework CSSGT, which leverages both the strength of Transformers and the computational efficiency of SNNs for graph tasks, trained under the graph contrastive learning framework. CSSGT comprises two key components: Mutual Information-based Graph Split (MIGS) and Spike-Driven Graph Attention (SDGA). MIGS is designed for the sequential input of SNNs, splitting the graph while maximizing mutual information and minimizing redundancy. SDGA, tailored for graph data, exploits sparse graph convolution and addition operations, achieving low computational energy consumption. Extensive experiments on diverse datasets demonstrate that CSSGT converges within two epochs and outperforms various state-of-the-art models while maintaining low computational cost.
[ "Graph Neural Networks", "Spiking Neural Networks", "Transformers", "Mutual Information", "Graph Contrastive Learning." ]
https://openreview.net/pdf?id=5X1yiEB63s
https://openreview.net/forum?id=5X1yiEB63s
ICLR.cc/2025/Conference
2025
{ "note_id": [ "OwUwaiDnWX" ], "note_type": [ "comment" ], "note_created": [ 1728121910947 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5904/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely apologize for withdrawing this submission. We deeply appreciate the efforts of the reviewers and hope for their understanding.\"}" ] }
5WtovCb1ZE
Models That Prove Their Own Correctness
[ "Noga Amit", "Shafi Goldwasser", "Orr Paradise", "Guy N. Rothblum" ]
How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured _on average_ over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically-founded solution to this problem: to train _Self-Proving models_ that prove the correctness of their output to a verification algorithm $V$ via an Interactive Proof. We devise a generic method for learning Self-Proving models, and we prove convergence bounds under certain assumptions. Empirically, our learning method is used to train a Self-Proving transformer that computes the Greatest Common Divisor (GCD) _and_ proves the correctness of its answer.
[ "Trustworthy ML", "Transformers", "Interactive Proofs", "Theory" ]
Reject
https://openreview.net/pdf?id=5WtovCb1ZE
https://openreview.net/forum?id=5WtovCb1ZE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uk4cSNmVyT", "rrQe2hghPz", "kaiAuvvrjC", "kQqiOYoL4b", "imlVpCKzjU", "gC0DTKtxk2", "eKb80zXW4z", "d3UGw4ch2g", "bv0oFj9z16", "bKn4mejMOW", "TOpfOU8DBa", "RerNy2fjYv", "PW2J1WVJ0z", "MFOvsCJrnE", "KcTJ35Xicp", "KKMUM1KQoy", "Jae9UmlEfJ", "JLvr3She3t", "H6DQV17aHQ", "DlRFKiloKS", "8UQHEg75Mq", "4mnSvKBuN0", "2ysgj4jOOh" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1731973904233, 1732608555203, 1730703128173, 1731973924693, 1732619323645, 1732608148361, 1731948586309, 1732300521247, 1732350687183, 1731973885383, 1732787820490, 1731948572841, 1731948786174, 1730621806059, 1730378334546, 1735151062587, 1730632736526, 1733028939828, 1732327688773, 1731948505061, 1732607631264, 1737524130393, 1731948438294 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_yycY" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_TyWJ" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_yycY" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_TyWJ" ], [ 
"ICLR.cc/2025/Conference/Submission11547/Reviewer_jLku" ], [ "ICLR.cc/2025/Conference/Submission11547/Area_Chair_d74S" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_W6ru" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_TyWJ" ], [ "ICLR.cc/2025/Conference/Submission11547/Reviewer_TyWJ" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11547/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response (part 2)\", \"comment\": \"### 4. On worst-case soundness guarantees\\n> the assumption that $s$-soundness holds for any $x, y, P$ is unrealistic (See the question 2 below). It is more natural to assume that a false-positive error (line 207) depends on the distribution over $x,y$. However, if we make such an assumption, then it is difficult to give a guarantee for specific $x_0, y_0$. Therefore, I think the self-proving model does not work as stated in the use case... The paper says that completeness and soundness are properties of a verifier (Definition 3.2). However, it seems unrealistic to imagine a probabilistic verifier whose soundness error is always smaller than $s$ for any $x,y$ and $P$, unless $s=0$. ... It is more realistic for me to assume that there are specific $(x,y,P)$ that causes a false-positive error for the verifier, and thus it depends on the distribution $\\\\mu$.\\n\\nThe strength of Interactive Proof systems (IPs) lies precisely in their worst-case soundness guarantees---the very property you reference. While average-case guarantees based on benchmark performance are valuable, worst-case guarantees protect users from incorrect outputs regardless of the input distribution. Importantly:\\n\\n1. Worst-case guarantees are strictly stronger than average-case guarantees, ensuring Self-Proving models function as intended in all use cases.\\n2. 
Such guarantees are realistic for an extremely broad class of problems: any computation feasible in polynomial space admits a probabilistic proof system [4,5].\", \"regarding_specific_questions\": \"> I think it is possible to reduce false positive errors (line 208) arbitrarily by running a probabilistic verifier $V$ for multiple times for the same $(x,y,P)$.\", \"your_observation_is_correct\": \"soundness error can be made exponentially small through independent repetition of the verification procedure.\\n\\n> How can we obtain such a verifier?\\n\\nThere is a rich literature, spanning cryptography and complexity theory, on designing efficient verifiers for different notions of \\\"efficiency\\\" and different computational problems. Efficient verifiers are known both for many specific problems with algebraic or combinatorial structure, and for general classes of computations. This literature has put forward many ideas and tools that can be used to derive new verifiers for problems (and efficiency measures) of interest. Thus, deriving a new verifier could proceed either by expert design, or via automation by using results that apply to classes of computations. We also refer the reader to a primer on probabilistic proof systems [4], which presents specific proof systems and their verifiers, as well as the general power and limits of probabilistic verification.\\n\\n> Where does the probability come from?\", \"the_probability_in_the_definition_of_soundness_comes_from_the_randomness_used_by_the_verifier\": \"We consider probabilistic Verifiers, whose questions are randomly generated during interaction with the prover. A concrete example of probabilistic verification can be found in the proof system presented in our Overall Comment. Efficient verification of certain complex problems *requires* randomness, i.e., nonzero soundness error. As you observed, the soundness error can be made exponentially small by repeating the verification. 
We are more than happy to provide further explanation upon request.\\n\\n> Definition 3.2 assumes the randomness of $V$ and $P$ (line 209), but the definition of soundness assumes the condition holds for all $P$. Is $P$ random?\\n\\nWe define soundness against deterministic provers without loss of generality. Since soundness is guaranteed against computationally unbounded provers, any randomized cheating prover can be converted to a deterministic one that simply uses the optimal random choices.\\n\\n### 5. Confidence parameter for the theorem\\n> I think this type of theoretical bound needs a confidence parameter $\\delta$. Since we estimate parameter $\\theta$ from a finite set of $N$ samples instead of accessing the distribution $\\mu$, it is possible that the drawn samples are \\\"bad\\\" and that we cannot estimate suitable parameters from the samples. Could you please explain why we do not need a confidence parameter 
In our rebuttal, we addressed your thoughtful questions about novelty (clarifying the prover/verifier vs prover/disprover distinction), the role of verifiers, and probabilistic guarantees. We also committed to adding an additional experiment on Quadratic Residuosity, which goes significantly beyond GCD. We would appreciate it if you could review our responses and, if you find them satisfactory, consider updating your score.\n\nBest regards,\nThe authors\"}", "{\"summary\": \"In this paper, the authors propose a new type of self-proving models that predict not just an output for a given input but also a proof for the correctness of the output. One of the main ideas of the paper is to use a particular notion of proof from the work on interactive proof systems in theoretical computer science, where a proof means a sequence of answers to a verifier's questions that can convince the verifier of the correctness of the output. The authors compare their approach with other similar proposals for self-proving models, and emphasise the benefit of having an instance-specific proof in their approach. They then describe two algorithms for learning such a proof-producing transformer model, namely, Transcript Learning (TL) that assumes strong supervision via successful question-answer sequences, and Reinforcement Learning from Verifier Feedback (RLVF) that does not assume such strong supervision. Their experiments with learning the GCD algorithm with a small version of GPT show the promise of their approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The idea of using a verifier from the theory of interactive proof systems for learning a self-proving model is very nice. It may lead to further interesting research activities that address the AI safety issue using several related tools from theoretical computer science, such as PCP and property testing.\n\n2. The paper is written well. 
The discussion on related work helped me to understand what people had explored in the past, and to see the contributions of the paper more clearly. Also, the background materials are covered nicely so that I can follow most of the formal developments in the paper although I am not familiar with, for instance, interactive proof systems.\n\n3. The paper contains a theoretical justification, namely, Theorem 4.1. I am less confident that this theorem is useful in practice, but it is good that the authors make an effort to prove a theoretical result. Also, their comment on the proof using the reduction to SGD and the communication complexity by a verifier (captured by the constant C) helped me to see what goes on more clearly.\", \"weaknesses\": \"I support the acceptance of this paper. The following points are mostly minor.\n\n1. Having an example in addition to GCD would have convinced me of the promise of the authors' approach far more. As the authors pointed out, the proofs in this GCD case do not involve questions from the verifier, and so they are simple. Also, the annotated transcript learning is only vaguely defined, and it is only explained in terms of illustration in the example via the intermediate steps of the Euclid algorithm. Seeing one more example would have helped me to grasp what annotations would mean in other problems.\n\n2. I suggest to include Algorithms 1 and 2 into the main text, instead of including them in the appendix. They are more or less standard, but I feel that they are one of the main contributions of the paper. Also, one unexpected thing that I found is that Algorithm 1 is derived by maximising theta over the expected probability E_{trace ~ p(trace)}[q_theta(trace)], instead of the expected log probability E_{trace ~ p(trace)}[log q_theta(trace)] (i.e., cross entropy loss). 
Some subtleties like this deserve the attention of the reader, I think.\", \"questions\": \"The only question that I have is related to what I said in the second point in the weakness box. My understanding is that Algorithm 1 uses the expected probability as a training objective, instead of expected log probability. Is there a reason for this? Is this due to the consistency with Theorem 4.1?\n\nHere are some minor typos.\n\n(1) L284 : EOS in Sigma^* ===> EOS in Sigma\n\n(2) L391 : Giving examples of annotations may help some readers.\n\n(3) L510 : Have you tried more samples in some cases and checked your conjecture?\n\n(4) L926 : a_0 := y ===> a_0 := y^*\n\n(5) L928 : (y,q_1^*,..., q_r^*,a_r) ===> (y^*,q_1^*,...,q_r^*)\n\n(6) L1150 : Maybe it is better to break a line before \"for s in [L_a] do\"\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (part 3)\", \"comment\": \"### References\n\n1. Gradient Descent Finds Global Minima of Deep Neural Networks. Simon Du, Jason Lee, Haochuan Li, Liwei Wang, Xiyu Zhai. ICML 2019.\n2. Convexity, Classification, and Risk Bounds. Peter L. Bartlett, Michael I. Jordan, Jon D. McAuliffe. Journal of the American Statistical Association 2012.\n3. Better Theory for SGD in the Nonconvex World. Ahmed Khaled, Peter Richtárik. TMLR 2023.\n4. IP=PSPACE. Adi Shamir. J. ACM 1992.\n5. Algebraic Methods for Interactive Proof Systems. Carsten Lund, Lance Fortnow, Howard J. Karloff, Noam Nisan. J. ACM 1992.\"}", "{\"comment\": \"Thank you for your further response.\n\n> Convexity\n\nMy opinion is unchanged for this part: assuming convexity on LLMs like GPTs seems unrealistic. Therefore, the theory is less important.\n\n> Realizability\n\nIf we acknowledge the realizability assumption, it means there always exists $\\theta^\\ast$ that leads to a correct proof. 
I think this is strange since not all the predictions a model makes can be proved. As I wrote in the previous comment, the paper should show the conditions that assuming the existence of $\\mathcal{T}^\\ast$ and $\\theta^\\ast$ is reasonable.\n\n> Experiments\n\nI'm sorry for the misunderstanding. I agree that there is no strong algorithm for the MSqrt problem. However, this will not change my overall opinion of the experiments: Experiments on GCD are not well-motivated, and evaluations are insufficient to show that the proposed self-proving works well in situations where we want it.\n\n> The results---whether positive or negative---will provide valuable insights about our theory's practical viability,\n\nI agree that both positive and negative results are important for research progress. On the other hand, if the proposed method does not work well with difficult problems, then the proposed self-proving model becomes less attractive. Therefore, the experimental results are important for assessing the paper's impact. It might be interesting to understand why it fails, but it would require further analyses and another round of peer review.\"}
What are your views on probabilistic guarantees for mathematical statements? Do you consider them useful, or is this setup a step along the way to a different application where probabilistic guarantees are more meaningful?\n\nYou are correct, soundness holds only with high probability, but this probability can be made exponentially close to 1 and is controlled by the verifier (in other words, the expected time before you will ever encounter an error is very long---about 1/error---and you as a verifier control it). Therefore, it is as \"useful\" as a deterministic guarantee. Incidentally, even though our paper allows probabilistic verification, for some cases such as the GCD experiments, the verifier can extract a traditional deterministic correctness proof (which is a special case of general probabilistic interactive proofs).\n\nYour observation raises a fundamental point about efficient verification which we will emphasize in the final version. Complexity theory shows that probabilistic verification isn't a limitation, but rather a necessary feature for verifying complex problems or for obtaining extra features, as follows:\n- A non-zero soundness error (i.e., probabilistic verification) is necessary for efficiently verifying problems beyond NP (unless NP=PSPACE or at least NP=MA).\n- In a deterministic proof system, interaction provides no additional power, in the sense that any interactive proof system can be converted into a non-interactive one [6, Proposition 9.2].\n- For extra features such as zero-knowledge proofs or succinct proofs, non-zero soundness is necessary as well.\n\nThus, rather than viewing probabilistic guarantees as an intermediate step toward deterministic verification, complexity theory demonstrates their utility and necessity---in particular for complex mathematical problems such as computing the permanent of 0-1 matrices [7].\n\nWe emphasize also that, although our experiment is on an arithmetic capability, Self-Proving models 
can be used beyond a mathematical setting; as we prove in Theorem 4.1, Self-Proving models can be trained in any setting that admits a sound verification algorithm. Probabilistic interactive verifiers have been recently studied, e.g., in AI Safety contexts [8].\n\n# References\n1. AI safety via debate. Geoffrey Irving, Paul Christiano, Dario Amodei. arXiv 2018.\n2. Scalable AI Safety via Doubly-Efficient Debate. Jonah Brown-Cohen, Geoffrey Irving, Georgios Piliouras. ICML 2024 (Oral).\n3. IP=PSPACE. Adi Shamir. J. ACM 1992.\n4. Probabilistic Proof Systems: A Primer. Oded Goldreich. Found. Trends Theor. Comput. Sci. 2008.\n5. Learning to Give Checkable Answers with Prover-Verifier Games. Cem Anil, Guodong Zhang, Yuhuai Wu, Roger Grosse. arXiv 2021.\n6. Computational Complexity - A Conceptual Perspective. Oded Goldreich. Cambridge University Press 2008.\n7. The Complexity of Computing the Permanent. Leslie Valiant. Theoretical Computer Science 1979.\n8. Provably Safe Artificial General Intelligence via Interactive Proofs. Kristen Carlson. Philosophies 2021.\"}", "{\"title\": \"Rebuttal follow-up\", \"comment\": \"Dear reviewers,\n\nThis is a gentle reminder of our rebuttals, which we posted on November 18th. If you have any remaining concerns or questions, please let us know. If your concerns have been addressed, we humbly ask you to consider updating your scores.\n\nWe look forward to continuing the discussion of our paper.\n\nBest regards,\nThe authors\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your detailed explanation. It improved my understanding of the paper. Also, it is good that you commit to applying your approach to another more involved example.\"}", "{\"title\": \"Response (part 1)\", \"comment\": \"Thank you for your thoughtful review. 
Your review raises important questions about three aspects of our work: (1) the theoretical assumptions in Theorem 4.1, particularly regarding concavity and existence of high-agreement models, (2) the scope of our experimental evaluation, and (3) the practicality of worst-case soundness guarantees. We welcome the opportunity to address these concerns and show how our theorem, while built on strong assumptions, provides valuable insights that are supported by empirical evidence. We also explain how worst-case guarantees, far from being unrealistic, are both achievable and crucial for reliable verification. Below, we address each point in detail, along with your specific questions about confidence parameters and verifier properties.\\n\\n### 1. Concavity of $A(\\\\theta)$\\n> Firstly, it assumes that $A(\\\\theta)$ is concave. I think this assumption does not hold for the typical autoregressive models used today, including the GPT model used in experiments.\\n\\nYou are correct that deep neural networks, including GPT-style autoregressive models, generally have non-convex loss landscapes. However, we believe that there is still value of analyzing convex cases, even when studying non-convex systems:\\n\\n1. Theoretical foundations: Convex analysis provides clean mathematical tools that help build intuition about the fundamental properties and limits of these learning problems, even when the exact assumptions don't hold in practice. Many seminal works in machine learning (e.g., early theory of SVMs) started with convex analyses that later inspired broader non-convex results.\\n2. Stepping stone: This work serves as a first step toward understanding the more general non-convex case. 
Similar to how convex optimization theory informed the development of non-convex optimization methods, our convex analysis could highlight important properties that generalize or inspire future non-convex analyses.\n\nWe remark that these points seem to be shared by other theoreticians in our field. These concerns are usually addressed by including experiments in theoretical papers, as we have done in our paper. Indeed, our experiments demonstrate convergence of TL to a Self-Proving model, despite the non-convex (or rather, non-concave) optimization landscape. Following your suggestion, we will add a generalization of our theory to non-convex settings (e.g. as in [1,2,3]) to our discussion of future work.\n\n### 2. Existence of high-agreement $\\\\theta^\\\\ast$\n> Moreover, the theorem assumes the existence of $\\\\theta^\\\\ast$ satisfying $A(\\\\theta^\\\\ast) \\\\geq 1 - \\\\epsilon/2$. This is also a strong assumption since it is currently not clear whether such self-proving models exist or not.\n\nThis assumption says that there is an instantiation of the model parameters (e.g., weights and biases of the neural network) that results in a $(1-\\\\epsilon/2)$ Self-Proving model. We observe that this is an almost necessary assumption, in the following sense. Let us take $\\\\epsilon = 2\\\\%$ and think about transformer models, for simplicity: the goal is to train the transformer to be $98\\\\%$ Self-Proving. If the assumption does *not* hold, it means that there are *no* weights and biases which are $99\\\\%$ Self-Proving. That is, even if we were to exhaustively search over all parameters, we would never end up with error less than $1\\\\%$. In this case, there is (formally) no hope for learning a Self-Proving model; instead, we should try a different architecture altogether.\n\nWe hope this explanation fully addresses your concern, and welcome any follow-up questions.\n\n### 3. 
Additional experiments\n> I feel that GCD would not be appropriate as a use case for a self-proving model since it is an easy task, and we do not need any machine-learning techniques to solve it. It is reasonable that Carton (2024) solved a GCD task since the paper's main objective is to understand how a transformer works. On the other hand, I think that this paper should show the effectiveness of the proposed self-proving model, and the experiments with a GCD task are not sufficient.\n\nWe will add an additional experiment on a significantly more challenging problem, namely, Quadratic Residuosity. Since this request was shared by other reviewers, we describe this experiment in the \"Overall Comment.\" We welcome any feedback on the proposed experiment. Due to computational constraints, we may not be able to conclude the experiments by the end of the discussion period, but we will include the results in the camera-ready version of the paper.\"}", "{\"comment\": \"We thank the reviewer for continuing this discussion. Through our exchanges, we resolved the majority of the original concerns, including questions on the viability of worst-case soundness guarantees, probabilistic verification, the lack of a confidence parameter $\\delta$, and the computational hardness of MSqrt.\n\nWe address the three remaining concerns below and will happily provide further clarification. However, at this point we would like to note that the remaining concerns stem from broader questions about the role of theoretical analysis in our field, and different interpretations of ML theory notions (namely, realizability). While these points merit discussion, they may be difficult to reconcile through a continued technical discussion alone.\n\n### Convexity\n> My opinion is unchanged for this part: assuming convexity on LLMs like GPTs seems unrealistic. 
Therefore, the theory is less important.\\n\\nThis critique would apply equally to any of the many papers in ML theory that use convexity to obtain convergence guarantees. Our contribution follows the widely-accepted and well-established tradition of analyzing simplified settings---which we have then empirically validated.\\nIn sum, the concern surrounding the simplifying convexity assumption appears to stem from a fundamental difference in how we view the role and value of theoretical (computer) science, where simplifying assumptions are standard practice for establishing foundational results.\\n\\n### Realizability\\n> If we acknowledge the realizability assumption, it means there always exists $\\\\theta^\\\\ast$ that leads to the correct proof. I think this is strange since not all the predictions a model makes can be proved.\\n\\nThere seems to be a misunderstanding about the nature of realizability assumptions. The purpose is not to assert that *any* model family contains a perfect model. Rather, it states that the convergence bound holds for model families that are sufficiently expressive. As we explained in the previous response, this is a logical necessity.\\n\\n> As I wrote in the previous comment, the paper should show the conditions that assuming the existence of $\\\\mathcal{T}^\\\\ast$ and $\\\\theta^\\\\ast$ is reasonable\\n\\nAs explained in our previous response and incorporated in the revision, recent results establishing the Turing-completeness of transformers provide theoretical justification: any Honest Prover can be realized by a transformer. Moreover, as explained in the paper (lines 310-316), an honest transcript generator $\\\\mathcal{T}^\\\\ast$ can be realized e.g. in the case of Doubly-efficient Interactive Proof systems.\\n\\n### Experiments\\n> I'm sorry for the misunderstanding. 
I agree that there is no strong algorithm for the MSqrt problem.\n\nWe appreciate this acknowledgement.\n\n> However, this will not change my overall opinion of the experiments: Experiments on GCD are not well-motivated, and evaluations are insufficient to show that the proposed self-proving works well in situations where we want it.\n\nWe maintain that GCD serves as a valuable proof-of-concept motivated by a rich literature on the arithmetic capabilities of transformers, with MSqrt providing a natural next step toward more challenging problems.\"}", "{\"title\": \"Response (part 1)\", \"comment\": \"Thank you for acknowledging the benefit of our principled approach. We invested significant effort into making our framing and results accessible, and were happy to read that you found them to be so. Next, we respond to your major concerns and questions. We thank you for initiating this discussion, which has already helped us strengthen our paper and clarify its framing. We welcome any additional comments or questions.\n\n### 1. Novelty\n> It was not clear to me the extent of novelty of the overall setup of the pair prover/disprover. Obviously this is a well-known setup in many applications including those cited in the literature (perhaps the whole area of \"Argumentation\" in AI should also be included as it is not very far from here), but this particular instantiation with ML models and a formal verifier could be clarified.\n\nWe emphasize that our setup consists of a prover (Self-Proving model) and a verifier, not a pair of prover/disprover; the goal of the verifier is not to attempt to disprove a claim, but to certify that it is correct via interaction with the prover, or reject. Rejecting does not mean that the verifier concludes that the claim is false, only that the prover did not succeed in providing a sound proof of correctness. 
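To make the accept/reject semantics of such a sound verifier concrete, here is a toy deterministic verifier in the spirit of the GCD setting (the Bézout-certificate proof format and all function names are our illustrative choices, not the paper's actual proof system):

```python
def verify_gcd_claim(x: int, y: int, g: int, a: int, b: int) -> bool:
    """Deterministic verifier for the claim g = gcd(x, y), where the
    prover supplies Bezout coefficients (a, b) as the 'proof'.
    Soundness: if g > 0 divides both x and y, g is a common divisor;
    if additionally a*x + b*y == g, every common divisor divides g,
    so g must be the greatest common divisor."""
    if g <= 0:
        return False  # reject: no convincing proof, not "claim is false"
    return x % g == 0 and y % g == 0 and a * x + b * y == g

def honest_prover(x: int, y: int):
    """Extended Euclid: returns (g, a, b) with a*x + b*y = g = gcd(x, y),
    so an honest prover can always convince the verifier."""
    old_r, r = x, y
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    return old_r, old_a, old_b
```

Soundness here is information-theoretic: a cheating prover cannot make the verifier accept a wrong `g`, while rejection only means that no convincing proof was given.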
\n\nIn contrast, in a prover/disprover setup, to our understanding, a disprover aims to convince the verifier of the invalidity of the claim. The latter setup is more reminiscent of works on AI Safety via Debate Systems [1,2], which are distinct from our work as explained in our paper. Thank you for the pointer to argumentation systems in AI. We will definitely add a reference to the argumentation literature.\n\nThe notion of learning to prove has certainly been explored in previous works, and our work adds to this landscape as discussed in the related work section: a learned prover (Self-Proving model) is trained to convince a verifier *via any Interactive Proof system*, which is novel. There are no assumptions made on the Interactive Proof system, and therefore our theory is extremely general (it captures all of PSPACE [3]). We do not think that \"Obviously this is a well-known setup in many applications\". That said, we would be grateful if you could provide pointers to any works that you believe we should reference and are close to our setup.\n\n### 2. Role of the verifier\n> The verifier obviously has a fundamental role here. I might have missed this but the implications of this were not clear to me (how can they be derived and at what costs)... Can you comment on the importance and derivation of the verifier and highlight the ease or difficulty in deriving them for the problem in hand in combination with or independently of the self-proving model?\n\nDetermining which problems admit an efficient verifier has been a central and foundational question in computational complexity theory. Depending on how \"efficiency\" is defined, designing a verifier can be either very straightforward (e.g. an NP (polynomial-time) verifier for 3SAT) or require years of breakthrough results (e.g. 
a PCP verifier for 3SAT, which only reads three bits in the proof).\n\nWe attempt to address your concern in the \"Scope\" paragraph on page 2, immediately before section 2. To summarize, we clarify that your natural and fascinating question (how are verifiers derived and implemented?) is beyond the scope of our paper. We also refer the reader to a primer on probabilistic proof systems [4], which presents specific proof systems and their verifiers, as well as the general power and limits of probabilistic verification.\n\nYour idea of deriving a verifier hand-in-hand with a Self-Proving model is very interesting. Our work shows how a Self-Proving model can be trained for a given verifier, but exploring the other direction sounds fascinating. At first glance, it reminds us of Prover--Verifier games [5], in which a verifier is jointly learned with the prover. As we emphasize in the Related Work section, this (interesting) setting is distinct from our work.\n\n### 3. Additional experiments\n> I think the authors realise that their experimentation on GCD are limited even from a purely algebraic perspective. I think this is OK, but more thoughts into how this might or might not scale to more challenging problems or more general ones might be beneficial... What are your thoughts on moving beyond GCD for a mathematical theory and beyond this other mathematical challenges?\n\nWe will add an additional experiment on a significantly more challenging problem, namely, Quadratic Residuosity. Since this request was shared by other reviewers, we describe this experiment in the \"Overall Comment.\" We welcome any feedback on the proposed experiment. Due to computational constraints, we may not be able to conclude the experiments by the end of the discussion period, but we will include the results in the camera-ready version of the paper.\"}", "{\"comment\": \"We sincerely appreciate your thorough and insightful review of our paper. 
Your comments indicate a deep understanding of our work, which makes your positive score all the more encouraging.\n\nWe share your view that our paper will form a bridge through which theoretical tools could be applied to issues in AI safety. This is why we've invested considerable thought and effort towards making our writing clear and accessible to those without a background in Interactive Proof systems. We are delighted to hear from you that our efforts were successful.\n\nBelow, we address your major concerns regarding: (1) the need for additional examples, (2) clarity of annotated transcript learning, (3) algorithm placement, and (4) the probability optimization approach. We then address the minor corrections you identified.\n\n### 1. Additional experiments\n\n> Having an example in addition to GCD would have convinced me of the promise of the authors' approach far more. As the authors pointed out, the proofs in this GCD case do not involve questions from the verifier, and so they are simple.\n\nWe will add an additional experiment that makes use of an interactive verifier. Since this request was shared by other reviewers, we describe this experiment in the \"Overall Comment.\" We welcome any feedback on the proposed experiment. Due to computational constraints, we may not be able to conclude the experiments by the end of the discussion period, but we will include the results in the camera-ready version of the paper.\n\n### 2. Clarity of Annotated Transcript Learning\n\n> Also, the annotated transcript learning is only vaguely defined, and it is only explained in terms of illustration in the example via the intermediate steps of the Euclid algorithm. Seeing one more example would have helped me to grasp what annotations would mean in other problems.\n\nWe address this concern in two ways:\n\n- We've added a new paragraph to Section 4.3 (immediately before Section 5) that explains annotations through an analogy to Chain-of-Thought reasoning. 
This provides readers with a familiar framework for understanding our approach.\n - The new Quadratic Residuosity experiment will demonstrate annotations in a completely different context from GCD. The annotations here involve tracking mathematical properties of group elements, quite different from the Euclidean algorithm steps.\n\n### 3. Algorithm placement\n\n> I suggest to include Algorithms 1 and 2 into the main text, instead of including them in the appendix. They are more or less standard, but I feel that they are one of the main contributions of the paper\n\nWe agree these algorithms are central contributions. Our solution aims to balance accessibility with space constraints:\n\n- For the conference version: We've added hyperlinks in the main text (particularly around Theorem 4.1) that let readers quickly access the algorithm specifications.\n- For the full version (preprint and journal): We will integrate both algorithms into the main text with expanded discussion.\n\n### 4. Optimization objective in Algorithm 1\n\n> Also, one unexpected thing that I found is that Algorithm 1 is derived by maximising theta over the expected probability [...], instead of the expected log probability [...] Some subtleties like this deserve the attention of the reader, I think [...] Is there a reason for this? Is this due to the consistency with Theorem 4.1?\n\nExcellent observation. The key to answering this question is to draw a distinction between the *objective* and the *implementation* of the algorithm:\n\n - The Objective: Optimizing the Verifiability of the learned model through what we call the \"agreement function\" (based on expected probability).\n - The Implementation: We'd want to optimize the agreement function directly, but how do we compute the gradients? 
Lemma B.4 is the answer: we can achieve this indirectly by accumulating gradients from the cross-entropy loss computed at each token, similar to how language models are typically trained.\n - The Connection: Algorithm 1 optimizes the agreement (expected probability), but it does so by taking gradients through the log probability and accumulating them. This matches your intuition about cross-entropy loss!\n\nThank you for helping us reach this clarification; we have added it to the paper immediately after Theorem 4.1. Does this answer your question and address your concern?\n\n### 5. Response to minor points\n\nThank you for your careful attention to detail.\n\n1. L284: Corrected.\n2. L391: Added clarifying examples of annotations (see major point 2).\n3. L510: Once our GPU is free from the new Quadratic Residue experiment, we will repeat the Base of Representation experiments on additional samples. We will include the results in the camera-ready version of the paper.\n4. L926: Corrected.\n5. L928: Corrected.\n6. L1150: Added line break.\"}", "{\"summary\": \"This paper proposes a self-proving model, a new learning paradigm in which a model outputs both a prediction $y$ and a proof for the correctness of the prediction by interacting with a verifier. A self-proving model is useful for users since it can ensure that the output of a model is correct. The paper proposes a self-proving model and defines some metrics for evaluating the performance of a self-proving model. The paper also gives learning algorithms for learning a self-proving model and theoretical analyses on the convergence of the transcript learning algorithm. The experiments evaluate the performance of self-proving models on a GCD task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This is a well-motivated work. The use case shown in the introduction of the paper is attractive.\n\n2. 
The proposed self-proving framework that proves the correctness of the answer by interactions between an autoregressive model and an external verifier seems a natural setting.\", \"weaknesses\": \"1. Theorem 4.1, one of the major theoretical contributions of the paper, makes some strong assumptions. Therefore, I think the contribution of the theorem is limited. Firstly, it assumes that $A(\\\\theta)$ is concave. I think this assumption does not hold for the typical autoregressive models used today, including the GPT model used in experiments. Moreover, the theorem assumes the existence of $\\\\theta^\\\\ast$ satisfying $A(\\\\theta^\\\\ast) \\\\geq 1 - \\\\epsilon/2$. This is also a strong assumption since it is currently not clear whether such self-proving models exist or not.\\n2. The paper evaluates the self-proving model's performance on a GCD task. However, I feel that GCD would not be appropriate as a use case for a self-proving model since it is an easy task, and we do not need any machine-learning techniques to solve it. It is reasonable that Carton (2024) solved a GCD task since the paper's main objective is to understand how a transformer works. On the other hand, I think that this paper should show the effectiveness of the proposed self-proving model, and the experiments with a GCD task are not sufficient.\\n3. The paper emphasizes the use-case that self-proving models can guarantee the correctness of a specific $x_0, y_0$. (line 262). This feature depends on the $s$-soundness defined for a probabilistic verifier. However, the assumption that $s$-soundness holds for any $x, y, P$ is unrealistic (See the question 2 below). It is more natural to assume that a false-positive error (line 207) depends on the distribution over $x, y$. However, if we make such an assumption, then it is difficult to give a guarantee for specific $x_0, y_0$. Therefore, I think the self-proving model does not work as stated in the use case.\", \"questions\": \"1. 
I think this type of theoretical bound needs a confidence parameter $\\\\delta$. Since we estimate parameter $\\\\theta$ from a finite set of $N$ samples instead of accessing the distribution $\\\\mu$, it is possible that the drawn samples are \\\"bad\\\" and that we cannot estimate suitable parameters from the samples. Could you please explain why we do not need a confidence parameter $\\\\delta$?\\n2. The paper says that completeness and soundness are properties of a verifier (Definition 3.2). However, it seems unrealistic to imagine a probabilistic verifier whose soundness error is always smaller than $s$ for *any* $x, y$ and $P$, unless $s = 0$. How can we obtain such a verifier? Where does the probability come from? Moreover, I think it is possible to reduce false positive errors (line 208) arbitrarily by running a probabilistic verifier $V$ for multiple times for the same $(x, y, P)$. It is more realistic for me to assume that there are specific $(x, y, P)$ that causes a false-positive error for the verifier $V$, and thus it depends on the distribution $\\\\mu$. \\n3. Related to the above point, Definition 3.2 assumes the randomness of $V$ and $P$ line 209, but the definition of soundness assumes the condition holds for all $P$. 
Is $P$ random?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors\\u00a0propose a methodology to learn models that can provide evidence about the correctness of their answers.\", \"given_a_function_f\": \"Sigma*->Sigma*, a model for F is a function F_theta that assigns to each x\\\\in Sigma^* a probability distribution F_theta(x).\\n\\nA model is alpha-correct if on a random input x, the output F_{theat}(x) is equal to F(x) with probability at least \\\\alpha.\\u00a0\\nGiven a function F and a verifier V, a model F_theta is beta-self proving if V(x,y)=1 with hight\\u00a0probability, where x is an input sampled at random and y is sampled at random from F_theta(x).\\u00a0\\n\\nThe goal is to learn a model that has a high degree of correctness\\u00a0(\\\\alpha close to 1) and a high degree of verifiability (\\\\beta close to 1). The authors develop a methodology for this task, and use learning models that compute the GCD of two numbers as an example.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I find the topic of the paper quite interesting.\\n\\n The paper seems to provide a nice framework to combine interactive proof systems with techniques from learning theory. Although formal verification has been widely studied in connection with learning theory, the disadvantage is that, in most of the approaches, the goal is to construct a proof of correctness in some fixed proof system. The use of interactive proof systems adds a lot of flexibility to the process, besides giving an avenue for a theoretical analysis related to the convergence of the learning process etc.\", \"weaknesses\": \"The disadvantage of the approach is that it requires access to the implementation of a previously existing verifier. 
I find that the paper lacks a discussion about the usefulness of trying to learn a function for which we already have an implementation.\", \"questions\": \"No questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces a framework when ML models can prove the correctness of their output. The notion of proof here is \\\"interactive proofs\\\" as studied in computational complexity, with a polynomially-bounded verifier and a potentially unbounded prover that can interact. When the answer is correct, then the prover should be able to convince the verified with high probability of its correctness, on the other hand when the answer is wrong the verifier should not accept the answer with high probability. The contribution of this paper is not related to interactive proof systems, they are using the standard notion. The contribution is mainly to introduce this framework to modern ML settings. This requires the training process to be aware of a specific verifier that will be used and the training process needs to be augmented with transcripts. The authors propose a gradient-descent based approach as well as an RLVF approach. The paper definitely introduces new ideas, however, the assumptions required for learning algorithms to converge are fairly strong -- this is to be expected-- and also unrealistic. Thus, there needs to be a thorough empirical evaluation. 
The authors have presented experiments with GCD in the main paper, and subsequently performed experiments with discrete square root during the rebuttal period.\", \"additional_comments_on_reviewer_discussion\": \"Two of the reviewers interacted well with the authors; these were also the most informative reviews and I had written my meta review using those reviews and the paper itself.\"}", "{\"summary\": \"The paper develops the concept of self-proving models that justify (or \\\"prove\\\" in the authors language) their answers to a trusted and pre-built verifier. The process results in a probablistic guarantee on the correctness of the answer provided by the self-proving model. Transcript learning and RL are proposed as methods to derive a self-proving model. Experiments are conducted on a Greatest Common Divisor problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I find the overall approach principled and welcome. Moving the assessment of the answer of a model from test results to an estimate of the correctness on the individual query (and this not being provided by the model itself) is certainly welcome.\\n\\nI am not sure the overall setup is novel as such (see below); however I believe the error bounds on the two learning approaches are. The results are well backed by theory and the paper does a good job at making this challenging subject as accessible as possible without trivialising the contribution.\\nSome experiments, although perhaps limited, are provided supporting the results.\", \"weaknesses\": \"It was not clear to me the extent to novelty of the overall setup of the pair prover/disprover. 
Obviously this is a well-known setup in many applications including those cited in the literature (perhaps the whole area of \\\"Argumentation\\\" in AI should also be included as it is not very far from here), but this particular instantiation with ML models and a formal verifier could be clarified.\\n\\nThe verifier obviously has a fundamental role here. I might have missed this but the implications of this were not clear to me (how can they be derived and at what costs). \\n\\nI think the authors realise that their experimentation on GCD are limited even from a purely algebraic perspective. I think this is OK, but more thoughts into how this might or might not scale to more challenging problems or more general ones might be beneficial.\\n\\nFundamentally, the end result obtained by the method, if I understand the paper, is a probabilistic guarantee that the answer provided is correct (following a sequence of challenges to the verifier). In the domain explored (algebra) we tend to deal with true/false propositions.\", \"questions\": \"What are your views on probalistic gurantees for mathematical statement? Do you consider them useful, or is this setup a step along the way to a different application where probabilistic guarantees are more meaningful?\\n\\nCan you comment on the importance and derivation of the verifier and highlight the ease or difficulty in deriving them for the problem in hand in combination with or independently of the self-proving model?\\n\\nWhat are your thoughts on moving beyond GCD for a mathematical theory and beyond this other mathematical challenges?\", \"edits_post_review\": \"1. I acknowledge you might not agree with my comment of this being a \\\"well-known set up\\\". This was not meant as a criticism to the work. I think you are well aware that the general concept of prover/disprover set up has been long been around in logic (indeed, general philosophy before then) and theoretical computer science including in synthesis. 
The setup has also been used in AI Argumentation and many other areas. I understand that here the emphasis is different and so is the generality of the overall task. Note that pretty general theorem provers such as Isabelle have also been used in similar setups to aid computer proofs. In any case this was not a criticism but a request for clarification. I do not believe we fundamentally disagree here.\\n2. To me the role of the verifier appears to be pretty important, so I would encourage the authors to explore this issue more. The concern I have is that a lot of the problem here has been pushed onto the (derivation of the) verifier. I agree with the authors, but only to some extent, that the work explores a somewhat different dimension. My point was, and to some extent still is, that the whole apparatus appears to reside on the verifier, but its construction is not necessarily obvious and is not explored here. I accept that my suggestions can be left for further work.\\n3. I do appreciate the attempt to run further experiments on a different challenge, particularly given, as reported by another referee, that perhaps GCD is not entirely illustrative to show the advantages of the present approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your further response.\\n\\nYes, the remaining concerns are the experiments and theoretical results. I think both are weak, and thus I feel it is difficult to recommend this paper for acceptance.\\n\\nI agree there are cases where assuming convexity is reasonable. However, I think this is not the case here, since the paper assumes the use of LLMs. They are not convex and, in my opinion, far from it. Since the proposed method assumes the use of LLMs, assuming convexity is unrealistic. 
Please note that I'm not saying that the theoretical results are meaningless, but they are weak.\\n\\n> Assumptions on $\\\\theta$ and $\\\\mathcal{T}^\\\\ast$\\nThe authors said that any prover and transcript generator are possible. However, as I wrote in the previous response, I strongly believe that self-proving cannot be applied to all situations where we use an ML model to make a prediction $y$ from input $x$. For example, I think it is hard to give a proof to a machine translation task. \\n\\nSelf-proving is an interesting concept, but the paper must show when we can use self-proving. Currently, the limitation is unclear, and the only case the paper shows is GCD. Therefore, it is hard for me to judge its usefulness.\"}", "{\"comment\": \"Thank you for addressing my concerns. Some of my concerns were solved, but I will keep my score unchanged for the following reasons:\\n\\n1. Theorem 4.1 makes strong assumptions and has a gap with practical settings. Hence, they seem less important.\\n2. The motivation for solving GCD and Msqrt is unclear. If we want to solve these problems, there is a better way that does not use ML.\\n3. The experimental results are limited to a single simple task, so they seem weak in supporting the effectiveness of the proposed approach. The results of Msqrt have not yet been reported, and we currently have no evidence that the experiments will work well.\\n\\n\\n## Concerns on premises of Theorem 4.1\\n\\nI agree that assuming concavity is a cornerstone for many problems. However, we also have to consider the gap size between a simplified setting and the real one. I believe assuming concavity on non-autoregressive LLMs like GPTs is unrealistic. There is a large gap, and the theoretical results are less meaningful accordingly. 
Moreover, experimental evaluations are limited to a simple setting and are also not exhaustive enough to validate the theoretical findings.\\n\\nThe authors say that the assumptions on $\\\\theta^\\\\ast$ are necessary to draw a conclusion. But since both self-proving and transcript learning are new concepts, it is unclear when assuming the existence of such $\\\\theta^\\\\ast$ is reasonable. Clearly, there are practical problems that are hard or impossible to verify. The paper should show the conditions that assuming the existence of $\\\\mathcal{T}^\\\\ast$ and $\\\\theta^\\\\ast$ is reasonable.\\n\\n\\nWhat theorem 4.1 says is: \\\"If transcript learning is possible with some parameter $\\\\theta^\\\\ast$ and the objective is concave, then we can estimate a good parameter $\\\\bar{\\\\theta}$.\\\". I agree that this theorem is valuable if we agree with the premises. However, (1) the objective is not concave generally, and (2) whether self-proving or $\\\\mathcal{T}^\\\\ast$ are possible in general or not is unclear. Therefore, I think the assumptions made on the theorem are strong, and the theoretical results are less important. \\n\\n\\n\\n## Concerns on experiments.\\n\\nExperimental evaluation does not show an appropriate use case for a self-proving system. GCD is a problem that we generally do not use machine learning to solve since there are strong non-ML methods for solving it. Moreover, experimental evaluations are limited to a single task. The additional problem setting Msqrt provided by the authors is interesting, but it lacks experimental evaluation results. Moreover, I think the additional task is also not well-motivated since there exist strong non-ML methods for solving it. \\n\\n\\n## Others\\n- I understand the concept of a probabilistic verifier. I agree with the authors that considering s-soundness is important for efficiency. 
\\n- I understand that we do not need a confidence parameter since we discuss expected values.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for highlighting the flexibility of our framework, which captures any Interactive Proof system. Indeed, as you note, one of the main contributions of our paper is creating an avenue for theoretical analysis and provable guarantees.\", \"regarding_the_weakness_you_raised\": \"> The disadvantage of the approach is that it requires access to the implementation of a previously existing verifier. I find that the paper lacks a discussion about the usefulness of trying to learn a function for which we already have an implementation.\", \"we_would_like_to_clarify_an_important_distinction\": \"A model with high correctness ($\\\\alpha$ close to 1) may not necessarily be Self-Proving with respect to a given verifier $V$. In fact, if the model was trained without any concern for $V$, we cannot expect it to be verifiable by $V$. The main technical contribution of our paper is proposing and analyzing (theoretically and empirically) two methods for fitting the model to the verifier.\\n\\nCrucially, the task of Self-Proving models is not to \\\"learn a function,\\\" but rather to learn how to generate proofs that the function was computed correctly. This distinction is fundamental: The usefulness of Self-Proving models lies in their ability to not only generate an output (compute the function), but also prove that the output was correct.\\n\\nWe discuss this distinction through an example in the paper, at the bottom of page 5 (immediately before Section 4). Does this address your concern?\"}", "{\"comment\": \"We appreciate your thoughtful feedback and are encouraged that we've resolved most of your concerns. 
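For concreteness in the GCD debate above, one natural non-interactive proof system has the prover send Bezout coefficients alongside its answer, so the verifier needs only three cheap checks. This is our illustrative sketch, not necessarily the verifier used in the paper:

```python
def verify_gcd(x, y, d, u, v):
    """Accept iff d = gcd(x, y), certified by Bezout coefficients (u, v).

    Soundness: d | x and d | y make d a common divisor, so d divides
    gcd(x, y); meanwhile d = u*x + v*y forces gcd(x, y) to divide d.
    Together with d > 0, the two conditions pin down d = gcd(x, y).
    """
    return d > 0 and x % d == 0 and y % d == 0 and u * x + v * y == d
```

An honest prover can compute (u, v) with the extended Euclidean algorithm, so producing the certificate costs essentially nothing beyond solving the task itself.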
Let us address the remaining two concerns directly.\\n\\n# Assumptions of Theorem 4.1\\nWe follow your decoupling of your concern into (1) convexity of optimization landscape, and (2) the realizability assumption.\\n\\n### Convexity \\nWe agree that LLM training is, in general, a nonconvex problem, and we have modified the text to make it clear that our theorem does not always match ML tasks which emerge in reality. We note, however, that convexity is a widely-used assumption for proving gradient-descent convergence. We are not aware of (nonconvex) LM convergence theorems in the literature, though we welcome any pointers if they exist. Convergence without convexity is the subject of cutting-edge papers [1,2,3]---which dedicate their entirety to the issue of nonconvexity.\\n\\nBack to our case, the convexity-based theory helps us demonstrate the viability of Transcript Learning, which we then empirically validate in the nonconvex setting. In other words, the theorem helps us understand Transcript Learning in a simplified setting which can be mathematically analyzed.\\n\\n### Realizability\", \"the_realizability_assumption_is_a_necessary_constraint\": \"if a Self-Proving model cannot be realized within the chosen architecture, then learning such a model is impossible regardless of the training approach. Rather than being a limitation that requires justification, it represents a necessary logical precondition.\\n\\nThe concern then lies in selecting an architecture capable of expressing a Prover for a given Proof System. One common approach assumes deep neural networks as universal function approximators, scaling both architecture size and training data until achieving desired performance. Recent theoretical work has established rigorous foundations for this approach, demonstrating the Turing-completeness of transformers [6] and their variants [7]. These architectures can even approximate arbitrary continuous sequence-to-sequence functions on compact domains [8]. 
Therefore, transformer architectures can realize any Turing machine---including the Prover in an Interactive Proof system, which is a polynomial-space Turing machine (or better [9]).\\n\\nWe have added this discussion to the paper---thank you for prompting this valuable clarification.\\n\\n# Experiments\\nOur small-scale experiments are a proof of concept, validating our theorem that shows that proofs can in fact be learned from accepting transcripts. In particular, a growing body of research is concerned with the ability of LLMs to learn arithmetic problems. Several works studied the possibility of solving arithmetic problems, and our paper extends towards proving solutions.\\n\\nIndeed, as in other arithmetic problems, GCD has efficient classical algorithms; our goal is not to outperform existing methods but rather to demonstrate that neural networks can learn to both solve and prove correctness of their solutions. This capability is crucial as we scale to more complex problems where classical algorithms may not exist or may be intractable. GCD serves as an ideal test case because we can verify our results against ground truth while developing the fundamental techniques needed for harder problems.\", \"regarding_the_upcoming_msqrt_experiments\": \"MSqrt is particularly relevant precisely because it lacks efficient worst-case algorithms (it is believed and widely assumed to be intractable in the non-quantum setting) when $N$ is composite. The case of prime $N$ is also relevant, since the cost of finding a square root of $x$ is still higher than verifying that y is a square root.\\nWhen you mention 'strong non-ML methods' for MSqrt, are you perhaps referring to algorithms that work well in practice? 
We'd be very interested in understanding which specific strong methods you had in mind.\\n\\nFinally, regarding the following comment:\\n> The results of Msqrt have not yet been reported, and we currently have no evidence that the experiments will work well.\\n\\nProposing MSqrt experiments in response to reviewers' concerns, and before knowing their outcome, reflects good scientific practice. The results---whether positive or negative---will provide valuable insights about our theory's practical viability, particularly given MSqrt's step-up in complexity over GCD.\\n\\n# References\\n6. On the Computational Power of Transformers and Its Implications in Sequence Modeling. Satwik Bhattamishra, Arkil Patel, Navin Goyal. CoNLL 2020.\\n7. Universal Transformers. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, \\u0141ukasz Kaiser. ICLR 2019.\\n8. Are Transformers universal approximators of sequence-to-sequence functions? Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar. ICLR 2020.\\n9. Delegating Computation: Interactive Proofs for Muggles. Shafi Goldwasser, Yael Tauman Kalai, Guy N. Rothblum. J. ACM 2015.\"}", "{\"title\": \"Overall comment: Additional experiment\", \"comment\": \"Dear all,\\n\\nIn response to reviewers TyWJ, W6ru and yycY, we intend to run an additional experiment going beyond the current GCD setup. With GCD, the verification algorithm was deterministic and non-interactive\\u2014allowing experiments to run on a single GPU within days (excluding RLVF).\\n\\nWe propose to extend our work with experiments on the Modular Square Root (MSqrt) problem. MSqrt is considered substantially more challenging than GCD, and is a key problem in various classical cryptosystems. 
We detail our proposed experiment below and welcome your feedback.\\n\\nThanks to our flexible codebase, we can implement these new experiments without restructuring our Self-Proving GPT. However, given the increased computational complexity of Quadratic Residue, we'll need to scale up both model size and training data/iterations. While the runs of these experiments will extend beyond the discussion period, we commit to including them in the camera-ready version.\\n\\nThank you,\\nThe authors\\n\\n### The Modular Square Root (MSqrt) problem\\nWe say that an integer $x$ is a quadratic residue (mod another integer $N$) if there exists $y$ such that $x = y^2 \\mod N$. In the Modular Square Root (MSqrt) problem, the input is a pair of integers $(x, N)$ and the output is one of the following:\\n1. Either any $y$ such that $x = y^2 \\mod N$, if such $y$ exists.\\n2. A special symbol $\\bot$ if no such $y$ exists (i.e., if $x$ is not a quadratic residue).\", \"technical_comment\": \"We note that Quadratic Residuosity as we defined it above admits a noninteractive (deterministic) proof system. However, we chose this proof system to explore Self-Proving models in an interactive setting, as pondered by the reviewers. As an aside, we remark that in this proof system, the verifier is convinced (with high probability) of $x$ being a non-square *without revealing the factorization of $N$*, i.e., it is a zero-knowledge proof of $x$ being a quadratic non-residue.\\n\\n### Experiments on MSqrt\\nWe will repeat the TL and RLVF experiments on MSqrt. As with the GCD, we expect non-annotated TL to have moderate efficacy, which can then either be boosted with RLVF or with annotations. 
Informed by our Annotated TL experiments on GCD, we will add annotations derived from intermediate computations in the honest prover's strategy (e.g., a factorization of $N$ can be used by the prover to efficiently compute MSqrt).\", \"intuition\": \"For a given $(x,N)$, if $y \\\\in MSqrt(x,N)$ where $y \\\\neq \\\\bot$, then the prover simply sends $y$ to the verifier, who can verify correctness by checking that $y^2$ has the same residue mod $N$ as $x$ (this is an easy computation when $N$ is prime and gets increasingly hard with composite $N$). The more interesting case occurs when the prover claims $x$ has no quadratic residue ($y = \\\\bot$), which triggers the interactive protocol for Quadratic Nonresiduosity (Goldwasser, Micali and Rackoff, 1985).\\n\\nWe present the verifier for this protocol next (completeness and soundness proofs available upon request):\\n\\n**Verifier $V$: Takes input (x, N) and interacts with a Prover $P$**\\n1. Receive an (allegedly correct) output $y \\\\in \\\\mathbb{N} \\\\cup \\\\{\\\\bot\\\\}$ from the Prover.\\n2. If $y \\\\neq \\\\bot$: Accept if and only if $x = y^2 \\\\mod N$.\\n3. Else ($y = \\\\bot$): Sample a random $r \\\\in \\\\{1, \\\\dots, N-1}$ and a bit $b \\\\in \\\\{0, 1\\\\}$.\\n4. Send $q = x^b \\\\cdot r^2 \\\\mod N$ to the prover.\\n5. Receive a response $a$ from the prover.\\n6. If $b = 1$: Accept if and only if $a = y = \\\\bot$.\\n7. Else ($b = 0$): Accept if and only if $q = a^2 \\\\mod N$.\"}" ] }
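The verifier steps listed above can be turned into a short executable sketch. The class and method names below are our own assumptions rather than the authors' implementation, the honest prover shown only covers prime N with N = 3 (mod 4) (deciding residuosity via Euler's criterion), and `None` stands in for the symbol bot:

```python
import random

def msqrt_verifier(x, N, prover):
    """Steps 1-7 of the verifier described above. `prover` exposes
    output(x, N) and respond(q); None plays the role of bot."""
    y = prover.output(x, N)                    # step 1
    if y is not None:                          # step 2: claimed square root
        return (y * y) % N == x % N
    r = random.randrange(1, N)                 # step 3
    b = random.randrange(2)
    q = (pow(x, b, N) * r * r) % N             # step 4: q = x^b * r^2 mod N
    a = prover.respond(q)                      # step 5
    if b == 1:
        return a is None                       # step 6: must declare non-residue
    return a is not None and (a * a) % N == q  # step 7

class HonestPrimeProver:
    """Honest strategy for prime N with N % 4 == 3: Euler's criterion
    decides residuosity, and x^((N+1)/4) is a square root of a residue."""
    def __init__(self, N):
        self.N = N
    def _sqrt(self, x):
        x %= self.N
        if x == 0:
            return 0
        if pow(x, (self.N - 1) // 2, self.N) != 1:
            return None                        # quadratic non-residue
        return pow(x, (self.N + 1) // 4, self.N)
    def output(self, x, N):
        return self._sqrt(x)
    def respond(self, q):
        return self._sqrt(q)
```

With an honest prover, the verifier accepts in both the residue branch (step 2) and the interactive non-residue branch (steps 3-7), matching the completeness claim; soundness rests on the random coin b being hidden from the prover.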
5WPQIVgWCg
Satisficing Regret Minimization in Bandits
[ "Qing Feng", "Tianyi Ma", "Ruihao Zhu" ]
Motivated by the concept of satisficing in decision-making, we consider the problem of satisficing exploration in bandit optimization. In this setting, the learner aims at finding a satisficing arm whose mean reward exceeds a certain threshold. The performance is measured by satisficing regret, which is the cumulative deficit of the chosen arm's mean reward compared to the threshold. We propose $\texttt{SELECT}$, a general algorithmic template for Satisficing REgret Minimization via SampLing and LowEr Confidence bound Testing, that attains constant satisficing regret for a wide variety of bandit optimization problems in the realizable case (i.e., whenever a satisficing arm exists). Specifically, given a class of bandit optimization problems and a corresponding learning oracle with sub-linear (standard) regret upper bound, $\texttt{SELECT}$ iteratively makes use of the oracle to identify a potential satisficing arm. Then, it collects data samples from this arm, and continuously compares the lower confidence bound of the identified arm's mean reward against the threshold value to determine if it is a satisficing arm. As a complement, $\texttt{SELECT}$ also enjoys the same (standard) regret guarantee as the oracle in the non-realizable case. Finally, we conduct numerical experiments to validate the performance of $\texttt{SELECT}$ for several popular bandit optimization settings.
[ "Online learning", "Bandits", "Satisficing" ]
Accept (Poster)
https://openreview.net/pdf?id=5WPQIVgWCg
https://openreview.net/forum?id=5WPQIVgWCg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wHLbltu8ta", "tjkQXdt1qf", "spCce335EX", "qVaqgZQzFj", "q4W9AMova7", "lB5jC9c42e", "dHYl5fqRm8", "VfYSEItMeX", "Sc8P4R1wml", "Kd5qWH3eCP", "ILFgXG10J2", "G02G6hbVgA", "8VCoyKSKoP", "0huBDv0PqA", "0YXhqLVtus" ], "note_type": [ "official_review", "decision", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730206201460, 1737523680209, 1734824234665, 1730713203837, 1730227523648, 1732121046180, 1732121396768, 1732121665377, 1732121189813, 1732374118969, 1732330133134, 1732341574230, 1732373356747, 1732120795102, 1732121529404 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5048/Reviewer_EjZr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5048/Area_Chair_rscs" ], [ "ICLR.cc/2025/Conference/Submission5048/Reviewer_pJ9Q" ], [ "ICLR.cc/2025/Conference/Submission5048/Reviewer_Y6qp" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Reviewer_Y6qp" ], [ "ICLR.cc/2025/Conference/Submission5048/Reviewer_EjZr" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ], [ "ICLR.cc/2025/Conference/Submission5048/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the problem of satisficing exploration, inspired by the concept of satisficing in decision-making. The authors propose an algorithmic framework called SELECT, which leverages a learning oracle with sub-linear regret guarantees to iteratively identify and test potential satisficing arms. 
SELECT achieves constant regret in the realizable cases, and it maintains the original regret bound in the non-realizable cases. Finally, numerical experiments are conducted to demonstrate the algorithm's performance across various bandit settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It is commendable that the proposed algorithm can be applied to any learning oracle with sub-linear regret guarantees, making it adaptable to different bandit models.\\n2. The three-step algorithm design is well-constructed and clearly explained. Step 3 (LCB Tests) is particularly impressive. Roughly speaking, in the realizable cases, a certain round will continue indefinitely, while in the non-realizable cases, the algorithm will proceed round by round.\", \"weaknesses\": \"1. While the three-step design in each round is great, each round of SELECT runs independently, resembling the doubling trick, which is often criticized. In the non-realizable case, this may lead to suboptimal theoretical and practical performance.\\n2. The result that constant regret can be achieved in the realizable case is not surprising, given prior research like Garivier et al. (2019). Additionally, the study on the lower bound feels insufficient. For instance, in the case of finite-armed bandits, the lower bounds are limited to two-armed bandits. I encourage the authors to explore the lower bounds further.\\n3. A minor issue to note is the terminology \\\"satisficing exploration.\\\" In the bandit literature, exploration refers to selecting arms with uncertain rewards to gather information, as opposed to exploitation, where arms are selected to maximize immediate rewards based on current knowledge. In this problem, there is indeed a tradeoff between exploration and exploitation. I believe the thresholding bandits problem (in the pure exploration setting) is a better model of satisficing exploration. 
The authors might consider clarifying this or adopting different terminology.\", \"questions\": \"1. For Condition 1, what happens when $\\\\alpha < 1/2$?\\n2. Regarding the numerical results for finite-armed bandits, which algorithm is used as the learning oracle? Could you explain why SELECT outperforms Uniform UCB in Figure 4b?\\n3. Can $\\\\gamma_i$ be multiplied by a constant? If so, are the empirical results sensitive to the choice of constant for both realizable and non-realizable cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"This paper considers a variant of the bandit problem focused on exploring actions, referred to as *satisficing arms*, whose expected rewards exceed a given threshold. The authors propose an algorithm and provide a regret analysis. Weaknesses include the strong assumptions regarding the oracle algorithm, concerns about practical performance due to the use of doubling, and the regret lower bound being limited to the two-armed case. However, the authors have offered convincing responses to these review comments and have expressed a willingness to revise the paper. Given that the reviewers have reached a general consensus with positive opinions, I support the acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers expressed concerns about the strong assumptions regarding the oracle algorithm, the practical performance implications of using doubling, and the regret lower bound being restricted to the two-armed case. In response to these comments, the authors provided convincing answers and demonstrated a willingness to revise the paper.\"}", "{\"summary\": \"The paper tackles the problem of satisficing exploration in multi-armed bandits. 
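A rough sketch of the LCB test singled out in the review above (Step 3 of SELECT): stop and declare the candidate arm satisficing once the lower confidence bound of its empirical mean clears the threshold. The Hoeffding-style 1-sub-Gaussian radius and the function name are our simplifying assumptions; the paper's exact constants may differ:

```python
import math

def lcb_accepts(samples, threshold, delta=0.05):
    """Return True once the lower confidence bound of the candidate arm's
    empirical mean clears the threshold, i.e. the arm is declared
    satisficing; assumes 1-sub-Gaussian rewards."""
    n = len(samples)
    mean = sum(samples) / n
    radius = math.sqrt(2.0 * math.log(1.0 / delta) / n)  # shrinks as n grows
    return mean - radius >= threshold
```

In the realizable case the radius eventually shrinks below the exceeding gap and the test fires, giving constant satisficing regret; in the non-realizable case the LCB never clears the threshold and the algorithm proceeds to the next round.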
Satisficing problem here represents finding an option which is above a preset threshold. The paper proposes a novel method SELECT which utilizes any existing bandit method with sub-linear regret guarantees and utilizes the same sample path trajectories to further provide a constant satisficing regret framework.\", \"the_method_implementation_is_split_into_three_parts\": \"1. Shadowing the sub-linear regret method's trajectory for a set number of rounds\\n2. Forced sampling of selected arm \\n3. Comparing the lower confidence bound of the selected arm with the threshold value.\\n\\nThe paper provides regret guarantees based on the difference between the highest mean among all the options and the threshold value (denoted in the paper as $\\\\Delta_S*$). The paper also provides a matching lower bound (up-to-logarithmic factors) to validate the performance of SELECT. \\n\\nThe paper further goes on to provide examples of how SELECT can be used with different bandit frameworks, making the method quite applicable to a large set of setups. This is supplemented with experiments for the same, further strengthening the case of SELECT.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The following would contribute to the strengths of the paper:\", \"**Clear Writing**: The paper is well written, precise, and to-the-point.\", \"**Justified Problem Setup**: The paper clearly explains the justification of the problem setup, literature surround it and solution of the problem with theoretical and experimental backing.\", \"**Innovative Umbrella Solutions**: The novel proposed method SELECT can be appended to any sub-linear regret method and can provide constant satisfying performance guarantee to the respective application. 
This makes the algorithm applicable to a wide variety of problem setups.\", \"**Theoretical performance guarantees**: The paper provides a theoretical proof of the regret upper bound and shows its tightness relative to the fundamental lower bound on the best performance possible for the satisficing problem.\", \"**Example distinct setups**: The paper provides example problem setups in finite-armed bandits, concave bandits, and Lipschitz bandits.\", \"**Experiments**: The paper provides a synthetic implementation for all the example setups and showcases the promise of the SELECT method.\"], \"weaknesses\": \"There are very few obvious loopholes in the paper. Overall the paper is a complete work. A few paragraphs on potential future work and possible extensions would be a good addition.\", \"questions\": \"Nothing to add here\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces SELECT, an algorithmic framework designed for satisficing exploration in bandit optimization. The primary objective of SELECT is to frequently identify arms with mean rewards exceeding a specified threshold, with its performance evaluated through satisficing regret, which measures the cumulative deficit of the chosen arm's mean reward compared to this threshold. SELECT operates by leveraging a learning oracle that provides a sub-linear regret upper bound. It iteratively identifies potential satisficing arms, collects data samples, and monitors the LCB of the arm\\u2019s mean reward against the threshold to determine if it qualifies as a satisficing arm. The algorithm guarantees constant satisficing regret in scenarios where a satisficing arm exists (realizable case) and matches the standard regret of the oracle in non-realizable situations. The framework is successfully instantiated across various bandit settings. 
Numerical experiments validate the efficacy of SELECT, demonstrating its ability to achieve constant regret.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper successfully integrates standard bandit optimization algorithms into a new framework, achieving better results and providing a more direct application of bandit algorithms to the satisficing exploration problem.\", \"The proposed SELECT employs a unique approach by utilizing a learning oracle for bandit optimization, enabling it to sample candidate arms and monitor performance efficiently.\", \"The paper establishes that SELECT achieves a constant satisficing regret in realizable cases, independent of the satisficing gap, and inversely related to the exceeding gap. This feature allows it to maintain performance even when the satisficing gap varies.\"], \"weaknesses\": [\"The main weakness is that the algorithm imposes stringent conditions (Condition 1) on the oracle algorithm, requiring sublinear regret for all time steps t. Most algorithms, including all oracles referenced in Section 5, only achieve sublinear regret when t is sufficiently large. If alpha approaches 1 when t is small, the theoretical regret bound could become excessively large.\", \"The algorithm involves hyperparameters that rely on the oracle algorithms. In scenarios where oracles are unavailable or when the sublinear oracles have unclear parameters, extending theoretical conclusions becomes challenging.\", \"The first step of each phase necessitates a bandit algorithm, which is crucial. However, the paper lacks a general discussion on how to select the appropriate oracle algorithm.\", \"While Remark 2 highlights the novelty of each step, an ablation study demonstrating the impact of each component would strengthen the paper. 
Currently, the experimental results do not robustly support the conclusions, as SELECT only outperforms all baselines in 3 out of 6 settings.\"], \"questions\": [\"There appears to be an inconsistency in the paper regarding the baseline references to \\\"Hajiabolhassan \\\\& Ortner (2023)\\\" in line 427 and \\\"Michel et al. (2023)\\\". Clarification is needed.\", \"The time horizon T may not be large enough for UCB-based algorithms. For instance, in Figure 3(b), as T further increases, SELECT appears to perform worse than the other algorithms.\", \"The experimental results further indicate that Condition 1 is overly strict, as the oracles struggle to satisfy it. For example, in Figures 4(a) and 4(b), the regret appears to converge to T for Uniform UCB.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response for Reviewer Y6qp (Part 1)\", \"comment\": \"Thank you so much for providing these constructive comments! We provide a detailed response below to your comments.\\n\\n**1. Condition 1**\\n\\nWe double-check Condition 1 and we believe it can be satisfied by many bandit problems in the literature as long as $t\\\\geq 2$ (this is to avoid the corner case of $\\\\log(t)=0$ when $t=1$), including those demonstrated in Section 5. \\nIn what follows, we revisit the algorithms used in Section 5. Specifically, for every one of them, we provide the **exact choices of $C_1,\\\\alpha,$ and $\\\\beta$ under which Condition 1 is satisfied:**\\n\\n**(a) Finite-armed bandits:** Consider any instance of finite-armed bandits with mean reward bounded by $[0,1]$. 
By Theorem 7.2 of [1], for any given time horizon $t\\\\geq 2$, the standard regret of the UCB algorithm is upper bounded by $8\\\\sqrt{Kt\\\\log(t)}+3\\\\sum_{k=1}^K (r(k^*)-r(k))\\\\leq 8\\\\sqrt{Kt\\\\log(t)}+3K$, where $K\\\\geq 2$ is the number of arms and $k^*$ is the optimal arm. When $t>K$, the standard regret is upper bounded by $8\\\\sqrt{Kt\\\\log(t)}+3K\\\\leq 11\\\\sqrt{Kt\\\\log(t)}$, where the last inequality is due to $t>K\\\\geq 2$, thus $t\\\\geq 3$ and $\\\\log(t)\\\\geq \\\\log(3)>1$; When $t\\\\leq K$, the standard regret is upper bounded by $t\\\\leq \\\\sqrt{Kt}\\\\leq 11\\\\sqrt{Kt\\\\log(2)}\\\\leq 11\\\\sqrt{Kt\\\\log(t)}$. Therefore, for any given time horizon $t\\\\geq 2$, the regret of the UCB algorithm is upper bounded by $11\\\\sqrt{Kt\\\\log(t)}$. By setting $C_1=11\\\\sqrt{K}$ and $\\\\alpha=\\\\beta=1/2$ and using the UCB algorithm as the learning oracle, we verify that finite-armed bandits satisfy Condition 1.\\n\\n**(b) Concave bandits:** We use the special case of one-dimensional concave bandits as an example. Consider any one-dimensional concave bandits with arm set being $[0,1]$ and reward function being $1$-Lipschitz. By Theorem 1 of [2], for any given time horizon $t\\\\geq 2$, the standard regret of Algorithm 1 in [2] is upper bounded by $108\\\\sqrt{t\\\\log(t)}\\\\cdot\\\\log_{4/3}(t)=108/\\\\log(4/3)\\\\cdot\\\\sqrt{t}\\\\cdot(\\\\log(t))^{3/2}$. Therefore, by setting $C_1=108/\\\\log(4/3)$, $\\\\alpha=1/2$, $\\\\beta=3/2$ and using Algorithm 1 in [2] as the learning oracle, we verify that one-dimensional concave bandits satisfy Condition 1. In fact, similar arguments also hold for concave bandits in higher dimensions (see Algorithm 2 and Theorem 2 of [2], we also added a detailed discussion on this in Appendix A.2 of the revised paper). 
\\n\\n**(c) Lipschitz bandits:** Consider any Lipschitz bandits with arm set being $[0,1]^d$, Lipschitz constant $L$ and mean reward bounded by $[0,1]$ (here we define Lipschitz continuity in the sense of $\\\\infty$-norm). By Section 2.2 of [3], for any given time horizon $t\\\\geq 2$, the standard regret of the uniformly discretized UCB algorithm introduced in [3] (hereafter referred to as the ``Uniform UCB\\\") is upper bounded by $(1+c_{\\\\text{ALG}})L^{d/(d+2)}t^{(d+1)/(d+2)}\\\\cdot\\\\sqrt{\\\\log(t)}$ if the regret of the UCB algorithm for finite-armed bandits with $K$ arms is upper bounded by $c_{\\\\text{ALG}}\\\\sqrt{Kt\\\\log(t)}$ for any $t\\\\geq 2$. Using our discussions in (a) we have $c_{\\\\text{ALG}}=11$, the regret of the Uniform UCB algorithm is bounded by $12L^{d/(d+2)}t^{(d+1)/(d+2)}\\\\cdot\\\\sqrt{\\\\log(t)}$. Therefore, by setting $C_1=12L^{d/(d+2)}$, $\\\\alpha=(d+1)/(d+2)$, $\\\\beta=1/2$ and using Uniform UCB as the learning oracle, we verify that Lipschitz bandits satisfy Condition 1.\\n\\nIn summary, in all three examples above, the regret of the oracle algorithm has an upper bound in the form of $C_1t^\\\\alpha\\\\cdot(\\\\log(t))^\\\\beta$ **for all given time horizon** $t\\\\geq 2$. Notably, the value of $\\\\alpha$ **remains unchanged for all $t\\\\geq2$**. That is to say, $\\\\alpha$ will not approach $1$ even if $t$ is small, and the theoretical satisficing regret bound holds for all $t\\\\geq 2$. We have also added a detailed discussion on this in Appendix A of the revised paper.\\n\\n**2. Hyperparameters in the algorithm**\\n\\nWe note that the only hyperparameter used in our algorithm is $\\\\alpha$. In most of the bandit problems considered in literature, the exact value of $\\\\alpha$ is known (see, e.g., examples provided in the previous point). 
If the exact $\\\\alpha$ is unknown but an upper bound of $\\\\alpha$ strictly smaller than $1$ is available, we can also plug in the upper bound of $\\\\alpha$ when running our algorithm.\"}", "{\"title\": \"Response for Reviewer Y6qp (Part 3)\", \"comment\": \"**7. Reference in line 427**\\n\\nThank you for catching this. The correct reference should be Michel et al. (2023), and we have fixed it in the revised paper.\\n\\n**8. Performance of $\\\\texttt{SELECT}$ in Figure 3(b)**\\n\\nWe have increased the maximum time horizon to $50000$. The results of concave bandits in the non-realizable case with maximum time horizon $50000$ is provided in Figure 7 of Appendix G of the revised paper. One can observe from Figure 7 that the regret of $\\\\texttt{SELECT}$ remains sub-linear in $T$ under the increased time horizon. In fact, the empirical performance of $\\\\texttt{SELECT}$ is consistently better than Algorithm 1 in [2] even after we increase the maximum time horizon to $50000$.\\n\\nAs shown in Figure 7, the standard regret of $\\\\texttt{SELECT}$ in the non-realizable case exhibits a wave shape. This is mainly because the algorithm runs in rounds. At the beginning of each round, the algorithm starts a rerun of the learning oracle, which could result in a rapid increase in standard regret in the early stage of each round. In later stages of every round, the learning oracle gradually converges to the optimal arm, and both step 2 and step 3 are exploiting a near-optimal arm obtained from the learning oracle, thus increase in standard regret slows down in later stages of each round. Similar phenomenon can also be observed in Figure 4(b).\\n\\n**9. Regret of Uniform UCB in Figure 4(a) and 4(b)**\\n\\nWe have shown theoretically that Uniform UCB satisfies Condition 1 (see Appendix A.3 of the revised paper and the first point of the response). The performance of Uniform UCB shown in Figure 4 indeed aligns with the theoretical guarantee. 
As shown in the first point of our response, the regret of Uniform UCB is bounded by $12L^{1/2}T^{3/4}(\\\\log(T))^{1/2}$. In our example, we have $L\\\\approx 72$, and for example, when $T=5000$ the theoretical standard regret upper bound is $12L^{1/2}T^{3/4}\\\\log(T)^{1/2}\\\\approx 176690$. A comparison between the standard regret bound observed in Figure 4(b) and $0.015$ times the theoretical standard regret upper bound is provided in Figure 8 of Appendix G of the revised paper. From Figure 8 one can see that as $T$ increases, the standard regret observed in the experiment shares a similar growth pattern as the theoretical standard regret upper bound.\\n\\n\\n[1] Lattimore, T., \\\\& Szepesv\\u00e1ri, C. (2020). Bandit algorithms. Cambridge University Press.\\n\\n[2] Agarwal, A., Foster, D. P., Hsu, D. J., Kakade, S. M., \\\\& Rakhlin, A. (2011). Stochastic convex optimization with bandit feedback. Advances in Neural Information Processing Systems, 24.\\n\\n[3] Bubeck, S., Stoltz, G., \\\\& Yu, J. Y. (2011). Lipschitz bandits without the lipschitz constant. In Algorithmic Learning Theory: 22nd International Conference, ALT 2011, Espoo, Finland, October 5-7, 2011. Proceedings 22 (pp. 144-158). Springer Berlin Heidelberg.\"}", "{\"title\": \"Response for Reviewer EjZr (Part 2)\", \"comment\": \"**7. The reason why $\\\\texttt{SELECT}$ outperforms Uniform UCB in Figure 4(b)**\\n\\nWe explain the reason why $\\\\texttt{SELECT}$ outperforms Uniform UCB as follows. Uniform UCB is an algorithm that achieves an optimal standard regret bound in the worst case for Lipschitz bandits, and particularly works well in worst-case instances (see [6] for a detailed discussion). However, in the instance we use in the experiment, both the set of optimal arms and the set of significantly suboptimal arms are large, which is very different from the worst-case instances of Lipschitz bandits. 
Therefore, in this particular instance, directly running Uniform UCB over the entire time horizon tends to over-explore (i.e., the number of discretized arms increases with a longer time horizon) and incurs significant regret. In contrast, $\\\\texttt{SELECT}$ starts with running Uniform UCB over a small number of time steps $t_i$. This allows $\\\\texttt{SELECT}$ to have a much smaller number of discretized arms and smaller confidence intervals in early stages of the algorithm, thus exploring less and exploiting more aggressively. Therefore, although in the non-realizable case, $\\\\texttt{SELECT}$ may underperform Uniform UCB by a logarithmic factor of $T$ in the worst case, in the instance used in Figure 4(b), $\\\\texttt{SELECT}$ outperforms Uniform UCB.\\n\\n**8. Robustness in the choice of $\\\\gamma_i$**\\n\\nAll our theoretical results still hold if all $\\\\gamma_i$ are multiplied by a constant. We add a numerical experiment in Appendix C of our revised paper to test the robustness of $\\\\texttt{SELECT}$ in the choice of $\\\\gamma_i$. The results show that different choices of $\\\\gamma_i$ have limited impact on the empirical performance of our algorithm in both realizable and non-realizable settings. Therefore, the empirical performance of $\\\\texttt{SELECT}$ is robust to the choice of $\\\\gamma_i$.\\n\\n\\n[1] Garivier, A., M\\u00e9nard, P., \\\\& Stoltz, G. (2019). Explore first, exploit next: The true shape of regret in bandit problems. Mathematics of Operations Research, 44(2), 377-399.\\n\\n[2] Michel, T., Hajiabolhassan, H., \\\\& Ortner, R. (2022). Regret bounds for satisficing in multi-armed bandit problems. Transactions on Machine Learning Research.\\n\\n[3] Lattimore, T., \\\\& Szepesv\\u00e1ri, C. (2020). Bandit algorithms. Cambridge University Press.\\n\\n[4] Bubeck, S., Perchet, V., \\\\& Rigollet, P. (2013). Bounded regret in stochastic multi-armed bandits. In Conference on Learning Theory. PMLR, 122-134.\\n\\n[5] Lattimore, T. (2018). 
Refining the confidence level for optimistic bandit strategies. Journal of Machine Learning Research, 19(20), 1-32.\\n\\n[6] Bubeck, S., Stoltz, G., \\\\& Yu, J. Y. (2011). Lipschitz bandits without the lipschitz constant. In Algorithmic Learning Theory: 22nd International Conference, ALT 2011, Espoo, Finland, October 5-7, 2011. Proceedings 22 (pp. 144-158). Springer Berlin Heidelberg.\"}", "{\"title\": \"Response for Reviewer Y6qp (Part 2)\", \"comment\": \"**3. Reliance on a learning oracle and learnability**\\n\\nWe would like to highlight that the main contribution of our paper is to provide a general approach to achieve constant satisficing regret bound in the realizable case **whenever a sub-linear standard regret learning oracle exists** for the corresponding problem class (see lines 19-23 in the Abstract). This condition (i.e., a sub-linear standard regret learning oracle exists) is satisfied by many popular bandit problem classes, including those considered in Section 5 and linear bandits. \\n\\nIndeed, our approach cannot handle problem classes without a sub-linear standard regret learning oracle, but this goes beyond the scope of our work. We reiterate that, as pointed out in lines 68 - 75, prior approaches for satisficing bandits can only handle finite-armed bandits, and it is not even clear if constant satisficing regret is possible beyond finite-armed bandits. Therefore, our work mainly focuses on answering this question and we establish a general approach that attains constant satisficing regret for a wide range of prevalent bandit settings, e.g., those demonstrated in Section 5.\\n\\n**4. Selection of learning oracle**\\n\\nIn general, given a bandit problem class, any learning oracle with a sub-linear standard regret bound in the problem class can be used in step 1 of our algorithm to obtain a constant satisficing regret bound. 
For example, for finite-armed bandits, we use the UCB algorithm in step 1; for concave bandits, we use Algorithm 2 in [2] in step 1; for Lipschitz bandits, we use Uniform UCB in Section 2.2 of [3] in step 1 (see Section 5 of the paper and our first point).\\n\\n**5. Ablation study**\\n\\nFollowing your suggestion, we added an ablation study in Appendix B of the revised paper to show that all three steps are necessary for $\\\\texttt{SELECT}$ in obtaining a constant satisficing regret bound. Specifically, we use an instance of Lipschitz bandit, and compare the performance of $\\\\texttt{SELECT}$ with the following adapted versions of $\\\\texttt{SELECT}$ where one of the three steps is removed:\\n\\n**(a) $\\\\texttt{SELECT}$ without step 1:** Instead of identifying candidate satisficing arms using a learning oracle, the candidate satisficing arm is drawn uniformly at random from the arm set before we proceed to forced sampling and LCB tests in each round;\\n\\n**(b) $\\\\texttt{SELECT}$ without step 2:** After identifying a candidate satisficing arm from step 1 in each round, the forced sampling in step 2 is skipped and a LCB test on the candidate satisficing arm is immediately started;\\n\\n**(c) $\\\\texttt{SELECT}$ without step 3:** Each round is terminated after running step 1 and step 2 without entering the LCB test in step 3.\\n\\nIn the ablation study, we show that if step 1 is removed, then the algorithm struggles to identify satisficing arms, incurring almost linear in $T$ satisficing regret. If either step 2 or step 3 is removed, each round is almost always terminated shortly after completing step 1, thus the algorithm is unable to maintain constant satisficing regret because of frequent reruns of the learning oracle. Therefore all three steps play a vital role in obtaining a constant satisficing regret bound in the realizable case, and the algorithm fails to maintain a constant satisficing regret bound if any of the three steps is removed. 
We also refer to Remark 2 (line 251-278) for a detailed discussion on the purpose of each component of $\\\\texttt{SELECT}$.\\n\\n**6. Empirical performance of our algorithm**\\n\\nWe would like to emphasize that the algorithm designed in our work focuses mainly on achieving constant satisficing regret bound in the realizable case while still providing robust performance in the non-realizable case **for a wide range of prevalent bandit settings**. That being said, some empirical performance tradeoff is perhaps unavoidable. Nevertheless, our algorithm still renders competitive performance in the non-relizable cases.\\n\\nIn all three realizable settings, the satisficing regret of our algorithm clearly converges to a constant as time horizon $T$ increases, which matches our theoretical results. In the finite-armed bandit setting, i.e., Figure 2(a), our algorithm slightly underperforms SAT-UCB+ by around $5$ percent but outperforms all other benchmarks. Given that SAT-UCB+ is specifically designed heuristic (with no theoretical performance guarantee) for finite-armed bandits and our algorithm is a general framework that handles a much more general class of bandit problems, it is not surprising that SAT-UCB+ will slightly outperform our algorithm in the finite-armed bandit setting. In both Lipschitz bandits and concave bandits, i.e., Figure 3(a) and Figure 4(a), our algorithm outperforms all of the three benchmarks. Furthermore, our algorithm is able to demonstrate constant satisficing regret while none of the benchmarks are able to achieve constant satisficing regret.\"}", "{\"comment\": \"Thank you again for the constructive feedback! We tend to agree with you that some of the points may worth further study and we shall suggest them as future work in the final version.\"}", "{\"comment\": \"I thank the authors for the response, which helped clarify most of my concerns. I've increased the score.\"}", "{\"comment\": \"Thank you for your response. 
Regarding the first point, using historical data from previous rounds can indeed often improve empirical performance. However, analyzing such algorithms is challenging because the interdependence between different rounds is difficult to address. Similar to the well-known doubling trick, order-wise optimality can be maintained by ignoring historical data. Additionally, the proposed algorithm seems to require more practical consideration regarding the selection of parameters and the base algorithm. Therefore, I will maintain my initial rating.\"}", "{\"comment\": \"Thank you very much for the positive feedback! We sincerely appreciate your efforts in reviewing our work.\"}", "{\"comment\": \"Thanks for the great effort in reviewing our work! We added a paragraph in Conclusion (Section 7) in our revised paper and discussed some potential directions for future research (see line 537-539).\"}", "{\"title\": \"Response for Reviewer EjZr (Part 1)\", \"comment\": \"Thanks for the great effort in reviewing our work! We provide a detailed response below for your comments.\\n\\n**1. Each round running independently**\\n\\nOur algorithm indeed runs different rounds independently, but this design is mainly for the simplicity of regret analysis. In some special cases such as finite-armed bandits and linear bandits with discrete arm set, we can easily make use of historical data from previous rounds to improve empirical performance of our algorithm. For example, if the learning oracle is a UCB-based algorithm, then we can make use of data from previous rounds to construct confidence intervals. In LCB tests, we can also make use of historical data on the candidate satisficing arm to construct lower confidence bounds. 
We believe the same regret bounds still hold with the use of historical data.\\n\\nIn terms of performance of our algorithm in the non-realizable case, we prove in Theorem 2 that our algorithm enjoys the same (up to logarithmic factors) standard regret bound as the learning oracle we use in step 1, under the mild condition of $\\\\alpha\\\\geq 1/2$. In particular, in finite-armed bandits, concave bandits and Lipschitz bandits, the standard regret bound of our algorithm in the non-realizable case is in fact near-optimal. \\n\\nRegarding the practical performance, Figures 2(b), 3(b), and 4(b) also demonstrate the robust empirical performance of our algorithm in non-realizable cases. As shown, in all three cases, our algorithm achieves competitive performance against the best benchmark.\\n\\n**2. Key contributions** \\n\\n\\nWe would like to highlight that existing algorithms such as the ones introduced in [1] and [2], whose satisficing regret bound depends on the minimum satisficing gap, are only limited to the setting of finite-armed bandits. These algorithms are not capable of handling more general classes of bandit problems such as concave bandits, Lipschitz bandits or linear bandits, where the minimum satisficing gap can simply be $0$ (see lines 68 - 75 for a detailed discussion). In fact, as shown in Figures 3(a) and 4(a), the adapted versions of SAT-UCB and SAT-UCB+ (initially proposed in [2] for finite-armed bandits) are not able to achieve constant satisficing regret in the concave and Lipschitz bandit settings. As such, it is not immediately clear if constant satisficing regret is possible beyond finite-armed bandits. \\n\\nTo this end, our work serves to provide the first affirmative answer. More specifically, we establish a general approach that attains constant satisficing regret for a wide range of prevalent bandit settings (e.g., those demonstrated in Section 5) under the realizable case (see lines 90 - 96 for a more detailed discussion). 
Moreover, our approach also attains competitive performance in the non-realizable case.\\n\\n\\n\\n**3. Lower bound**\\n\\nOur lower bound construction is built on the results in Theorem 6 in [4], which focuses on two-armed bandits. We believe one way to go beyond two-armed bandits is to follow the results in Theorem 10 in [5], which shares a similar flavor to Theorem 6 in [4]. We will examine this and include any improved results in our final version.\\n\\n**4. Terminology \\\"Satisficing Exploration\\\"**\\n\\nThank you for pointing this out. We change the title and the name of the algorithm to reflect this point in our revised paper.\\n\\n**5. $\\\\alpha<1/2$**\\n\\nThank you for this great question. In our proof for Theorem 1, we prove that in round $i$, the satisficing regret incurred by step 1 is bounded by $\\\\gamma_i^{-\\\\alpha/(1-\\\\alpha)}\\\\cdot\\\\text{polylog}(\\\\gamma_i)$, while the satisficing regret incurred by step 2 is bounded by $\\\\gamma_i^{-1}\\\\cdot\\\\text{polylog}(\\\\gamma_i)$ (here we omit dependence on the problem class specific constant $C_1$). The reason why the current results rely on $\\\\alpha\\\\geq 1/2$ is that only when $\\\\alpha\\\\geq 1/2$, the satisficing regret incurred in step 1 dominates that incurred in step 2. If $\\\\alpha<1/2$, then the satisficing regret incurred in step 2 dominates that incurred in step 1, and using the same analysis we are only able to obtain a $1/\\\\Delta_S^*\\\\cdot\\\\text{polylog}(1/\\\\Delta_S^*)$ satisficing regret bound instead of the satisficing regret bound stated in Theorem 1. Due to the same reason, if $\\\\alpha<1/2$, we are only able to obtain a $\\\\sqrt{T}\\\\cdot\\\\text{polylog}(T)$ standard regret bound instead of $T^\\\\alpha\\\\cdot\\\\text{polylog}(T)$. 
On the other hand, we believe it is reasonable to assume $\\\\alpha\\\\geq 1/2$, because even in the simplest two-armed bandit setting we have $\\\\alpha=1/2$, as the standard regret bound of UCB for two-armed bandits is $11\\\\sqrt{2T\\\\log(T)}$ (see Theorem 7.2 of [3]), and there is a worst-case regret lower bound of $1/27\\\\sqrt{T}$ (see Theorem 15.2 of [3]).\\n\\n**6. Oracle used for finite-armed bandits**\\n\\nIn Figure 2(b), we use Thompson sampling as our learning oracle. We have clarified this in the revised paper (see lines 439-440).\"}" ] }
5WEpbilssv
Contextualizing biological perturbation experiments through language
[ "Menghua Wu", "Russell Littman", "Jacob Levine", "Lin Qiu", "Tommaso Biancalani", "David Richmond", "Jan-Christian Huetter" ]
High-content perturbation experiments allow scientists to probe biomolecular systems at unprecedented resolution, but experimental and analysis costs pose significant barriers to widespread adoption. Machine learning has the potential to guide efficient exploration of the perturbation space and extract novel insights from these data. However, current approaches neglect the semantic richness of the relevant biology, and their objectives are misaligned with downstream biological analyses. In this paper, we hypothesize that large language models (LLMs) present a natural medium for representing complex biological relationships and rationalizing experimental outcomes. We propose PerturbQA, a benchmark for structured reasoning over perturbation experiments. Unlike current benchmarks that primarily interrogate existing knowledge, PerturbQA is inspired by open problems in perturbation modeling: prediction of differential expression and change of direction for unseen perturbations, and gene set enrichment. We evaluate state-of-the-art machine learning and statistical approaches for modeling perturbations, as well as standard LLM reasoning strategies, and we find that current methods perform poorly on PerturbQA. As a proof of feasibility, we introduce Summer (SUMMarize, retrievE, and answeR), a simple, domain-informed LLM framework that matches or exceeds the current state-of-the-art. Our code and data are publicly available at https://github.com/genentech/PerturbQA.
[ "large language models", "Perturb-seq", "perturbation experiments", "knowledge graphs", "retrieval-augmented generation", "chain of thought prompting" ]
Accept (Poster)
https://openreview.net/pdf?id=5WEpbilssv
https://openreview.net/forum?id=5WEpbilssv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vaOOOgTmBp", "vTFMOhjSBV", "tdq1kYxrZW", "tN8cUIc2G6", "rT0jlVcCrp", "n4HVfxP4OM", "mUIo63C24G", "lWAPurEGZd", "kZyLZwQT9V", "kEbRpvjqsH", "jghMGeOjAl", "jOPBi0DPd1", "iatrDXhwnb", "fRPFlCk1Gq", "fAsNexIsXQ", "esOhQJJmXi", "d4WtC8C2yW", "ZYnAD1xj8W", "ZCEdJS6aHC", "YAj2wtvBFd", "XxN5IiYO2m", "UXciGS3OLK", "Sji98uXOzZ", "QRav9nvLkh", "MHIiMUymNS", "IVvdeAKZQf", "9kttqKv3Bp", "2LdOcNPO6F" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732136426042, 1732136553848, 1732674590325, 1732136926804, 1732423903449, 1732136523372, 1734701701199, 1737523608877, 1732608068841, 1732636837008, 1730342098353, 1730665035210, 1730638971293, 1730653580330, 1732136620160, 1732568921313, 1732136698386, 1730175478643, 1732709028999, 1732635214237, 1732572553054, 1732583573383, 1732136768090, 1732415787303, 1732708695761, 1732136647873, 1732136861064, 1732602261637 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_JGsH" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Area_Chair_ViiQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_qAqz" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3942/Reviewer_ii1Q" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_JGsH" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_psYt" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_2VEF" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_psYt" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_qAqz" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_psYt" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_ii1Q" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Authors" ], [ "ICLR.cc/2025/Conference/Submission3942/Reviewer_2VEF" ] ], "structured_content_str": [ "{\"title\": \"To all reviewers\", \"comment\": \"Dear reviewers,\\n\\nThank you for reading our paper and providing thorough feedback. We have updated our paper with a number of illustrative examples, which we hope will provide further insight into the questions raised here.\\n\\nWe would also like to clarify the scope and contribution of this work, which we believe are central to its interpretation.\\n\\n1. **The primary contribution of this work is PerturbQA, a carefully curated benchmark for language-based reasoning and scientific discovery, in the context of single-cell perturbations.** Compared to existing benchmarks, which focus on scientific coding tasks [1] or reasoning over known facts [2], PerturbQA is *predictive* in nature, and its tasks are unsolved. 
PerturbQA draws upon experimental assays and harmonized knowledge graphs to replicate the reasoning required to \\\"connect the dots\\\" between known biology and unanswered questions.\\n\\n **Information leakage is the primary argument against this vision. However, we believe that the experimental outcomes from which we derive PerturbQA are minimally represented in existing knowledge bases and LLM pretrained weights.** Biological knowledge graphs tend to report relationships that have been well-validated by targeted studies, rather than the outcomes of large-scale screens. The gene pairs whose relationship we query are very much a *superset* of genes with well-characterized relationships. Specifically, only ~3% of gene pairs in our test sets physically interact (row 2, in *any* context, including other animals), and only ~20% share any pathway/annotation, including at the coarsest levels (row 4).\\n\\n | | K562 | RPE1 | HepG2 | Jurkat |\\n |-|-|-|-|-|\\n | physical, DE | 0.094 | 0.063 | 0.075 | 0.106 |\\n | physical, total | 0.032 | 0.025 | 0.027 | 0.029 |\\n | network, DE | 0.214 | 0.204 | 0.218 | 0.253 |\\n | network, total | 0.222 | 0.209 | 0.220 | 0.208 |\\n\\n There is little difference between the positive/negative pairs in terms of higher-level connectivity (row 3 vs. 4). Physically interacting genes are more likely to result in differential expression in our dataset (row 1 vs 2), but **having a physical interaction is minimally predictive of DE,** as AUC hovers around 0.5, while Ours is consistently better than random guessing (from Table 1).\\n\\n | | K562 | RPE1 | HepG2 | Jurkat |\\n |-|-|-|-|-|\\n | physical = 1 | 0.53 | 0.52 | 0.52 | 0.54 |\\n | Ours | 0.60 | 0.58 | 0.61 | 0.58 |\\n\\n Finally, Nadig et al. 2024 was published strictly after we downloaded the knowledge graphs. While the cell lines in question have been studied in prior work, in other contexts, Nadig et al. 
2024 released the *first* large-scale Perturb-seq screens in these two cell lines.\n\n2. **Classic LLM reasoning strategies achieve near-random performance out-of-the-box on PerturbQA, and it is non-trivial to adapt them in *domain-aware* ways.** Within the past two years, there has been a plethora of brilliant, inference-time LLM strategies, from in-context learning, to CoT, ToT, (Graph) RAG, and more. However, as we demonstrate for ICL and CoT, naively applying existing templates to biological reasoning leads to near-random performance. **This also demonstrates the *lack* of answers within the pretrained weights themselves.** With regards to retrieval-based baselines, a vast amount of biological literature is unfortunately inaccessible behind paywalls, or otherwise [subject to terms unfavorable for AI development](https://www.genecards.org/). As a result, it is difficult to benchmark standard retrieval-based strategies on equal footing.\n\n3. To demonstrate that language-based reasoning is *feasible* on PerturbQA, **SUMMER integrates standard LLM techniques with domain-specific ways to query structured knowledge.** To the best of our knowledge, **SUMMER is the first fully LLM-based method for unseen perturbation prediction and rationalization**, without relying on any external classifiers or embedding models. Throughout our work, we acknowledge that techniques like CoT and retrieval are common in current LLM systems. The key contribution lies in *what* information is useful to retrieve, and *how* to frame the prompts to encourage reasonable reasoning.\n\n[1] Rein et al. GPQA: A Graduate-Level Google-Proof Q&A Benchmark. 2023.\n\n[2] Laurent et al. LAB-Bench: Measuring Capabilities of Language Models for Biology Research. 2024.\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"## Additional biological analysis\n\nThank you for the recommendation! 
We have updated the draft with several additional analyses regarding the qualitative aspects of our framework (currently in Appendix C), including evaluation from a domain expert (above). Here is a summary.\\n\\n- Clusters that elude manual annotation tend to be smaller or exhibit lower agreement. On these, gene set over-representation analysis focuses on highly specific gene sets, which each cover subsets of these clusters, rather than the whole. The LLM takes the opposite approach, and its summaries tend to \\\"lift\\\" the description to higher levels of hierarchy (Table 8). The two strategies provide orthogonal information, though the LLM outputs may be more readable.\\n\\n- We analyzed 300 generations (3 trials of 100 DE examples) to understand common failure modes (detailed examples in C.3). Errors and inconsistencies primarily resulted from deductions backed by overly-generic information. For example, the LLM may list an excessively broad set of influences, e.g. \\\"mitochondrial function, protein synthesis, or transcriptional regulation,\\\" which affect nearly everything.\\n\\n In several instances, the LLM was also confused between concepts which may be loosely connected, but not in the same context. For example, Gene A is upstream of stress signaling, e.g. \\\"oxidative stress,\\\" which is related to the mitochondria. However, Gene A is *not* responsible for healthy mitochondria function, and should not respond similarly to genes that are.\\n\\n## Other questions\\n\\n**Sensitivity to prompt design:** Due to computational constraints, we were unable to run inference a large number of times to evaluate a diverse set of prompts, so the SUMMER prompts were not particularly tuned for performance. 
During model development, we did observe that the 8b model tends to have an upper limit on the effective prompt length, and excessively long prompts were harder to follow (this could be resolved with longer context models, presumably).\n\n**Less well-studied systems:** Of the cell lines we study, RPE1 is perhaps the least well-characterized (though it is not genuinely *rare*). RPE1 is a non-cancerous cell line, and in the [Gene Expression Omnibus database](https://www.ncbi.nlm.nih.gov/gds/), there are 5.1% as many RPE1 datasets as K562. Performance on DE/Dir does not differ noticeably on RPE1 compared to other cell lines.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"We thank the authors for their efforts in answering my questions. While you have addressed most of my concerns, the reliance on manual prompt design and the limited budget for prompt sensitivity evaluation prevent a higher rating on the method novelty and broadness of research influence. So I decide to maintain my score.\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review and suggestions! We hope this response provides clarity regarding your concerns, and we look forward to discussing.\n\n## Contribution and data quality\n\nPerturbQA is, first and foremost, a carefully curated *benchmark* for language-based reasoning for single-cell perturbations.\n\nCompared to existing benchmarks like PubmedQA, the ground truth is not determined by fact checking, but by experimental assays. The quality of our labels can be assessed based on statistical consistencies. Figure 4 illustrates that the Wilcoxon ranked-sum test is reasonably calibrated on our datasets (for determining differential expression), and Figures 5+6 illustrate that in two near-biological replicates, there is high agreement. 
We also employ conservative thresholds for selecting positives and negatives (A.2), and exclude statistically uncertain examples.\\n\\nThe textual descriptions extracted from knowledge graphs are manually curated by large consortia, compiling decades of research. We believe that the identifier mapping tables they maintain offer a much cleaner means of extracting information relevant to each gene, compared to search engine / embedding-based approaches more common in NLP. While we cannot guarantee that the knowledge is correct, we do include context regarding data provenance (cell line, assay type, whatever is available), as LLMs have demonstrated the ability to filter noisy information, when this is provided [1].\\n\\nFinally, we do not claim that the LLM-generated summaries are correct; only that they are helpful for making the end prediction, on which the framework is evaluated.\\n\\n[1] Allen-Zhu and Li. Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws. 2024.\\n\\n## Algorithmic novelty\\n\\nPlease see the general response. To summarize: applying NLP techniques out of the box results in near-random performance on PerturbQA, and they must be adapted in domain-specific ways. To demonstrate that this task is feasible, we introduced a *minimal* LLM-based example (SUMMER), which draws from common LLM techniques. We do not claim that this method is novel from an NLP perspective. We *do* claim that this LLM-reasoning based framework is novel in context of perturbation modeling.\\n\\n## Specific questions\\n\\n**why not \\\"formalized graph structure with exactly numeric data\\\":**\\n\\nBoth GEARS and GAT operate over formal graph structures. GEARS leverages real-valued expression matrices, while GAT is trained on our discretized labels. 
Both methods underperform in Table 1, and this can be attributed to two factors (Section 3, \\\"modeling perturbations\\\").\\n\\n- \\\"Raw\\\" single cell expression matrices are actually the end product of extensive preprocessing pipelines, and they are subject to high aleatoric and epistemic noise (batch effects) [2]. In fact, even two datasets generated by the same lab, of the same cell line, can be largely inconsistent at the individual gene level [3]. This is why we employed stringent criteria on PerturbQA examples (A.2), and why we choose to interpret single-cell perturbation data at the level of discrete insights, in line with best practices (Section 3, \\\"statistical insights\\\").\\n\\n- Biological knowledge graphs are highly heterogeneous, spanning [many layers of hierarchy](https://geneontology.org/docs/ontology-documentation/), and are annotated with details regarding the context and provenance of each observation. When current graph-based methods translate annotations into adjacencies, the semantics of each relationship are also lost: edges like \\\"enables,\\\" \\\"does not enable,\\\" \\\"is part of\\\" are all mapped to \\\"1.\\\" Finally, different knowledge graphs operate at differing levels of quality/noise, making their harmonization difficult. Therefore, we only use the graph structure to inform the retrieval and summarization of relevant information, rather than as a strict backbone to constrain modeling.\\n\\n[2] Luecken and Theis. Current best practices in single\\u2010cell RNA\\u2010seq analysis: a tutorial. Mol Syst Biol (2019) 15: e8746.\\n\\n[3] Nadig et al. Transcriptome-wide characterization of genetic perturbations. 2024.\\n\\n**formal task definitions:** We have included formal definitions for each task in Section 4.1, with motivation in Section 3. Could you please let us know which specific aspects of the notation / setting you find confusing, so we can try to improve the presentation? 
Thank you!\\n\\n**GEARS and GenePT:** We evaluate GenePT and GEARS on our benchmark for completeness, as they represent state-of-the-art approaches for this task. We do not consider their adaptations (for discrete classification) part of our primary technical contribution.\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"Thank you for your response and for updating your score!\\n\\nWe agree that scGPT and other single cell foundation models could capture orthogonal information, and we will update our paper with this discussion / error analysis (when full results are available).\\n\\nWe did run the published scGPT codebase for perturbations. Upon closer inspection, we realize that the \\\"graph\\\" part of each \\\"graph\\\" is discarded and only the expression is retained, so the results should be faithful.\"}", "{\"title\": \"Thank you for your review (1/2)\", \"comment\": \"Thank you for your review and suggestions! We hope this response clarifies some of your questions, and we look forward to discussing with you.\\n\\n## Combinatorial perturbations\\n\\nThe lack of combinatorial perturbations is less a limitation of the method, but of evaluation.\\n\\nTo craft a \\\"proof of concept\\\" for combinatorial perturbations, it would be relatively straightforward to summarize the neighborhoods of each participating gene, and prompt for any synergy/lack thereof.\\n\\nHowever, existing datasets for combinatorial perturbations are limited. The most commonly analyzed are Norman et al. [1] and Wessels et al. [2], each containing around 100 perturbation pairs. These two experiments operate in different modalities (CRISPR activation vs. inhibition), and since there are no alternatives for comparison, it is difficult to quantify the quality of these data. A major goal of this work was to craft a trustworthy benchmark for perturbation modeling, so we chose to focus on single perturbations instead. 
Therefore we consider combinations currently out of scope, but an opportunity for future work when better datasets are available.\\n\\n[1] Norman et al. Exploring genetic interaction manifolds constructed from rich single-cell phenotypes. Science. 2019.\\n\\n[2] Wessels et al. Efficient combinatorial targeting of RNA transcripts in single cells with Cas13 RNA Perturb-seq. Nat Methods. 2023.\\n\\n## Domain specific evaluation\\n\\nDue to the open-ended nature of the gene set task, automated evaluation methods are limited in their ability to reflect practical utility. Since this paper focuses on providing value to biologists, we recruited a domain specialist (molecular biologist, trained in wet lab and computational biology) for this task (See C.1).\\n\\nOverall, the LLM-generated summary was equal or more informative than the classical gene set enrichment results in 92% of cases, and agrees with the independent annotator in 72% of cases.\\n\\n1. In 21/25 cases, the biologist reported that the LLM-generated summary was more informative. In 2/25 cases, they contained the same amount of information; and in 2/25 cases, the gene set contained more information.\\n2. In 18/25 cases, the biologist reported that the LLM summary captured the same biology as the original human annotation (our ground truth labels).\\n\\n1. 
In the 2 cases where the gene set contained more *information*, a list of specific protein complexes was discovered, e.g.\n ```\n EIF2AK4 (GCN2) binds tRNA, Aminoacyl-tRNA binds to the ribosome at the A-site, 80S:Met-tRNAi:mRNA:SECISBP2:Sec-tRNA(Sec):EEFSEC:GTP is hydrolysed to 80S:Met-tRNAi:mRNA:SECISBP2:Sec and EEFSEC:GDP by EEFSEC, UPF1 binds an mRNP with a termination codon preceding an Exon Junction Complex, Translocation of ribosome by 3 bases in the 3' direction, Translation of ROBO3.2 mRNA initiates NMD.\n ```\n However, this information-rich output is hard to interpret, compared to the LLM output, which the annotator marked as agreeing with the label of \"translation.\"\n ```\", \"ribosomal_protein_components_involved_in_translation\": \"This gene set is comprised of components of the large and small ribosomal subunits, which are essential for protein synthesis and translation. These genes are involved in the assembly and function of the ribosome, facilitating the translation of messenger RNA into protein.\n ```\n\n2. In the 7/25 cases where the LLM summary differed from the human annotation, the LLM annotation tended to miss some highly specific terms, e.g. \"targets of nonsense-mediated decay\" was generalized to \"stress response,\" and \"dysregulated lncRNA antisense transcripts\" was generalized to \"nuclear gene regulation.\" Related terms tend to be sparsely annotated in Gene Ontology, so this indicates that it would be useful to tune the granularity of generations in the future, or to generate multiple candidates for specific descriptions.\"}", "{\"metareview\": \"This is a new benchmark contributing to understanding the reasoning capability of large language models in structured biological data.\nThis task will be of interest to communities interested in the space of AI and Biology. 
\nHowever, the recommendation at this point is borderline.\nSince the contribution is not so much methodological, it is unclear how to make a strong case for this paper. On the other hand, ICLR has Datasets as one of the topics in the call for papers and one could fit this paper under that category.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal phase authors could address some of the concerns. One of the important points that was not satisfactorily answered is that, because of manual prompt design, such benchmarks may not see much utility.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"After checking the rebuttal and the rebuttal to all reviewers, I remain concerned about the application scenario of the proposed dataset, PerturbQA. I maintain my evaluation.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response and for reading our rebuttal!\n\nWe would like to reiterate that the primary contribution of this work is not the proposed algorithm, but the framework for modeling perturbation experiments.\n\nAs we write in the Introduction (Paragraph 2) and Background (Section 3), current approaches for modeling perturbations are misaligned with what biologists aim to glean from these large-scale screens. \nWe are the first to propose that predicting differential expression / direction of change and summarizing gene sets are more realistic endpoints, compared to existing regression-based objectives.\n\nFurthermore, our hypothesis was that language-based reasoning can be helpful for perturbation modeling. To demonstrate this, we developed a **lightweight proof of concept** that was evaluated against the **state-of-the-art in perturbation modeling**. These include graph (GEARS, GAT), language (GenePT), and pretrained single-cell (scGPT, in rebuttal) baselines. 
We are the first to integrate **experimental data alongside textual knowledge graphs** towards this application.\\n\\nThere are many approaches in modern NLP that could be adapted towards perturbation modeling, and **this work exists to encourage and facilitate such exploration**, but we view this as beyond the scope of the current paper. We have prepared and presented perturbation screens / relevant knowledge sources in approachable formats for this purpose.\\n\\nFinally, we have provided additional insights, analyses, and human evaluation in our revised manuscript, so it would be very helpful to understand what concerns you feel remain unaddressed. Thank you for your time!\"}", "{\"summary\": \"The paper introduces PERTURBQA, a benchmark for using language models to interpret genetic perturbation experiments. These experiments reveal gene functions but are expensive. The proposed SUMMER framework combines biological text data and experimental results, outperforming knowledge-graph-only models on tasks like predicting gene expression changes and gene set enrichment. SUMMER\\u2019s language-based outputs offer clearer insights for biologists, enhancing interpretability and accessibility in modeling biological data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-organized and easy to follow.\\n\\nThe paper proposed an interesting problem of using LLMs to predict gene pertubation.\", \"weaknesses\": \"1. The biggest concern of the proposed method is the problem itself. For example, SUMMER relies on the biology knowledge graph. However, the biology knowledge graph is built based on the experiments and analysis. Compared with the gene perturbation prediction from scRNA, the knowledge-building quiring updates the knowledge at once to build the possible links among genes. 
This indicates that this method has a risk of overfitting the known knowledge and being unable to discover new perturbation genes; it is more like a retrieval system for gene interactions we already know.\\n\\nTo further explain how the proposed method can generate more situations, my suggestion is to include more datasets to avoid overfitting. For example, in scGPT[1], the evaluation for perturbation is across 3 different perturbation datasets. \\n\\n2. In the previous study of gene perturbation, one gene input can output several gene expressions, which are judged at one time. However, the LLM framework suggests that the source and target are a pair of data. This means that if we want to see every gene expression after perturbation, we should go through every combination of gene pairs with LLMs. This is really time-consuming in this setting. Besides, the experiments are really limited to a small size of genes.\\n\\nIn GEARS, scGPT, and CellPLM, they set the perturbation tasks for both 1 gene unseen and 2 genes unseen, and they can claim that the proposed method can handle different perturbation situations. However, in this paper, there are no such illustrations. Besides, in GEARS, the authors predict 102 genes at one time for novel gene perturbation analysis (Fig5a in GEARS). In scGPT, they predict 210 combinations (Fig3 in scGPT). In this paper, there is no evidence that the proposed method can do such things. \\n\\n3. The authors should include gene pretrained methods such as scGPT[1] and Geneformer[2]. The text-based methods are pretrained on lots of textual information, the comparing method is unfair with only MLP, GAT, and GEARS. \\n\\nAs I mentioned before, the pretrained methods on the single-cell data are considered one of the most powerful methods in gene perturbation tasks at this moment. In scGPT, they report scGPT can achieve 50% higher than GEARS. Instead of training on lots of textual information, they trained directly from gene expressions. 
If the authors want to claim the textual-based pretrained method is more powerful than we predict genes from scRNA data, they need to discuss these models.\\n\\n4. Compared with other work in gene expression from ML method, such as CellPLM[3], LanCell[4], CellSentence[5] this paper lacks biology analysis in bio view. This limited the application to further applications.\\n\\nIn scGPT and GEARS, they have a figure of predicted gene expression profiles of the perturbation conditions. Other papers are not perturbation-specific methods, but they are ai4biology papers that are published in the top AI conferences. They also show some biological analysis to strengthen their methods for their own contribution, such as marking genes on the cells to show the cell states.\\n\\n[1] Cui H, Wang C, Maan H, et al. scGPT: toward building a foundation model for single-cell multi-omics using generative AI[J]. Nature Methods, 2024: 1-11.\\n\\n[2] Theodoris C V, Xiao L, Chopra A, et al. Transfer learning enables predictions in network biology[J]. Nature, 2023, 618(7965): 616-624.\\n\\n[3] Wen H, Tang W, Dai X, et al. CellPLM: Pre-training of Cell Language Model Beyond Single Cells[C]//The Twelfth International Conference on Learning Representations.\\n\\n[4] Zhao S, Zhang J, Wu Y, et al. LangCell: Language-Cell Pre-training for Cell Identity Understanding[C]//Forty-first International Conference on Machine Learning.\\n\\n[5] Levine D, Rizvi S A, L\\u00e9vy S, et al. 
Cell2Sentence: Teaching Large Language Models the Language of Biology[C]//Forty-first International Conference on Machine Learning.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a benchmark for evaluating LLM reasoning on biological perturbation experiments, and an LLM-based framework that uses knowledge graphs and prior data to outperform current methods in interpretability and performance on these tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method appears to be sound, combining RAG and COT prompting with knowledge graphs for handling complex biological relationships.\\n2. The proposed ramework emphasizes interpretable outputs, which is beneficial to biological research.\\n3. The evaluation was thorough, with both graph and language-centric baselines across multiple datasets.\", \"weaknesses\": \"1. The model focuses on discrete perturbation outcomes and does not address combinatorial perturbations, which are common in biological studies.\\n2. The interpretability of the method comes at the cost of additional complexity in prompt engineering, and it is not known how the performance is sensitive to the prompt design.\\n3. The paper acknowledges limitations in current evaluation metrics for enrichment tasks. It would be better to see the utilization of new, domain-specific metrics.\", \"questions\": \"1. How well does SUMMER generalize to new or sparsely annotated biological datasets? Are there performance drops when applied to less-studied cell lines or organisms?\\n2. 
Did the authors perform an error analysis to determine whether specific types of perturbations or gene functions are harder for SUMMER to predict accurately?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces PERTURBQA, a benchmark suite for evaluating LLMs in reasoning over structured biological data from genetic perturbation experiments. The paper also presents SUMMER, a language-based framework designed to predict differential expression and direction of gene changes as well as to perform gene set enrichment. SUMMER combines knowledge graphs and retrieval-augmented generation to enhance interpretability, matching or surpassing state-of-the-art models on PERTURBQA tasks. The benchmark is designed to help researchers interpret outcomes in high-content genetic perturbation experiments and understand model limitations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Novel Application of Language-Based Reasoning in Pertburbation Task:** The paper offers a novel application of language-based reasoning to biological data, allowing PERTURBQA tasks to be approached in an interpretable way that benefits domain experts.\\n2. **Comprehensive Benchmark Design:** PERTURBQA includes real-world tasks relevant to differential expression, gene direction, and gene set enrichment, providing a holistic assessment of model reasoning on biological data.\\n3. **Interpretable Model Outputs:** SUMMER\\u2019s use of knowledge graphs and retrieval-augmented generation produces outputs that domain experts can readily interpret, addressing the limitations of black-box models in biological contexts.\", \"weaknesses\": \"1.**Insufficient Related Work Discussion:** The paper could better situate its contribution within the existing literature, especially regarding related work on several aspects: 1. 
graph-to-text works: such as [Zhao et al., 2023](https://arxiv.org/abs/2310.01089), [Chen et al., 2023](https://arxiv.org/abs/2307.03393), check the [survey](https://arxiv.org/pdf/2407.06564) for details. 2. graph RAG works, e.g. [GraphRAG](https://arxiv.org/pdf/2404.16130), [He et al., 2024](https://arxiv.org/abs/2402.07630), [Mavromatis et al., 2024](https://arxiv.org/abs/2405.20139) . Discussing these works would strengthen the contextual grounding of the proposed approach.\\n\\n2.**Marginal Technical Contribution:** As stated in W1, both ideas of using LLM for graph tasks and graph RAG are widely studied before, which renders the contribution of this paper marginal.\\n\\n3.**Limited Baselines:** A wider range of recent baselines, especially from the graph domain (e.g. those mentioned above in W1 and W2), should be included to provide a more comprehensive evaluation and enable better comparison of SUMMER's effectiveness.\\n\\n4.**Potential Data Leakage Concerns:** Given that LLaMA3 might have been exposed to substantial amounts of biological data, including related gene interactions, there\\u2019s a risk of data leakage. A clearer evaluation of SUMMER\\u2019s performance independent of potential pre-trained biases could help clarify if its good performance stems from model design or from pre-existing knowledge in the model.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces PERTURBQA, focusing on prompting LLMs for gene perturbation and gene set enrichment. It also presents a reasoning method, SUMMER, designed for this task. 
Experiments demonstrate the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed task of using large language models (LLMs) to address gene-related tasks is innovative and engaging.\\n\\nThe paper is clearly presented, well-written, and easy to follow.\", \"weaknesses\": \"Limited Experimental Insight: The experiments primarily conclude that SUMMER outperforms baselines but provide few additional insights into the new task. Given the novelty of using LLMs for this type of gene analysis, more extensive experiments and in-depth analysis would be valuable.\", \"insufficient_baselines_and_model_comparisons\": \"The baselines selected are not sufficiently comprehensive. Numerous studies have explored LLM reasoning with graph data or text retrieval, which are closely related to this method. Including these in the comparisons could yield deeper insights into the effectiveness of LLMs for gene-related tasks.\", \"metrics_for_gene_set_enrichment\": \"The suitability of ROUGE-1 recall and BERT Score for measuring the accuracy of gene set enrichment results is questionable. Human evaluation or evaluations with LLMs may offer more reliable assessments.\", \"questions\": \"Retrieved Content for Reasoning: Is the retrieved content primarily focused on the gene's function? If so, could this lead to information leakage in the question? 
For example, if asked about the influence of gene A on gene C, and the retrieved content on gene A directly states that A turn on C, wouldn\\u2019t this turn the task into reading comprehension rather than genuine prediction?\", \"multi_hop_reasoning\": \"How does SUMMER handle multi-hop reasoning if it only retrieves information on the one hop neighbors of the perturbation and target gene?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review (1/2)\", \"comment\": \"Thank you for your review and recommendations! We hope this response provides clarity regarding your questions, and we look forward to discussing.\\n\\n## Experimental insight\\n\\nThank you for the recommendation! We have updated the draft with several additional analyses regarding the qualitative aspects of our framework (currently in Appendix C), including evaluation from a domain expert (further below). Here is a summary.\\n\\n- Clusters that elude manual annotation tend to be smaller or exhibit lower agreement. On these, gene set over-representation analysis focuses on highly specific gene sets, which each cover subsets of these clusters, rather than the whole. The LLM takes the opposite approach, and its summaries tend to \\\"lift\\\" the description to higher levels of hierarchy (Table 8). The two strategies provide orthogonal information, though the LLM outputs may be more readable.\\n\\n- We analyzed 300 generations (3 trials of 100 DE examples) to understand common failure modes (detailed examples in C.3). Errors and inconsistencies primarily resulted from deductions backed by overly-generic information. For example, the LLM may list an excessively broad set of influences, e.g. 
\\\"mitochondrial function, protein synthesis, or transcriptional regulation,\\\" which affect nearly everything.\\n\\n In several instances, the LLM was also confused between concepts which may be loosely connected, but not in the same context. For example, Gene A is upstream of stress signaling, e.g. \\\"oxidative stress,\\\" which is related to the mitochondria. However, Gene A is *not* responsible for healthy mitochondria function, and should not respond similarly to genes that are.\\n\\n## Human evaluation\\n\\nWe are happy to provide human evaluation and will update the manuscript accordingly (See C.1). We recruited a domain specialist (molecular biologist, trained in wet lab and computational biology) for this task.\\n\\nOverall, the LLM-generated summary was equal or better to the classical gene set enrichment results in 92% of cases, and agrees with the independent annotator in 72% of cases.\\n\\n1. In 21/25 cases, the biologist reported that the LLM-generated summary was more informative. In 2/25 cases, they contained the same amount of information; and in 2/25 cases, the gene set contained more information.\\n2. In 18/25 cases, the biologist reported that the LLM summary captured the same biology as the original human annotation (our ground truth labels).\\n\\n**Error analysis of human annotation:**\\n1. 
In the 2 cases where the gene set contained more *information*, a list of specific protein complexes was discovered, e.g.\\n ```\\n EIF2AK4 (GCN2) binds tRNA, Aminoacyl-tRNA binds to the ribosome at the A-site, 80S:Met-tRNAi:mRNA:SECISBP2:Sec-tRNA(Sec):EEFSEC:GTP is hydrolysed to 80S:Met-tRNAi:mRNA:SECISBP2:Sec and EEFSEC:GDP by EEFSEC, UPF1 binds an mRNP with a termination codon preceding an Exon Junction Complex, Translocation of ribosome by 3 bases in the 3' direction, Translation of ROBO3.2 mRNA initiates NMD.\\n ```\\n However, this information-rich output is hard to interpret, compared to the LLM output, which the annotator marked as agreeing with the label of \\\"translation.\\\"\\n\\n ```\", \"ribosomal_protein_components_involved_in_translation\": \"This gene set is comprised of components of the large and small ribosomal subunits, which are essential for protein synthesis and translation. These genes are involved in the assembly and function of the ribosome, facilitating the translation of messenger RNA into protein.\\n ```\\n\\n2. In the 7/25 cases where the LLM summary differed from the human annotation, the LLM annotation tended to miss some highly specific terms, e.g. 
\\\"targets of nonsense-mediated decay\\\" was generalized to \\\"stress response,\\\" and \\\"dysregulated lncRNA antisense transcripts\\\" was generalized to \\\"nuclear gene regulation.\\\" Related terms tend to be sparsely annotated in Gene Ontology, so this indicates that it would be useful to tune the granularity of generations in the future, or to generate multiple candidates for specific descriptions.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you for the effort put into the rebuttal.\\n\\nHowever, my primary concerns regarding the discussion of graph literature remain unresolved, as these works were neither compared nor even discussed in the updated manuscript.\\n\\nAs a result, I will maintain my negative score.\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review and suggestions! We hope this response clarifies some of your confusion, and we look forward to discussing.\\n\\n**Literature review**: We are happy to expand our literature review and include a wider range of work on the NLP side. Thank you for the recommendations!\\n\\n**Technical contribution**: Please see the general response. To summarize: Our primary contribution is PerturbQA, a carefully crafted benchmark, derived from experimental assays and open knowledge graphs. Applying NLP techniques out of the box results in near-random performance on PerturbQA, and they must be adapted in domain-specific ways. To demonstrate that this task is feasible, we introduced a *minimal* LLM-based example (SUMMER), which draws from common LLM techniques. We do not claim that this method is novel from an NLP perspective. We *do* claim that this LLM-reasoning based framework is novel in context of perturbation modeling.\\n\\n**Additional baselines**: Please see the general response. To summarize: As Table 1 demonstrates, directly applying NLP methods out-of-the-box on PerturbQA is comparable to random guessing. 
It is *non-trivial* to adapt these methods for molecular biology reasoning, especially because the vast majority of biological literature is behind paywalls (vs. in machine learning, where everything is open).\\n\\n**Data leakage**: Please see the general response. To summarize: Our \\\"no CoT\\\" and \\\"no retrieve\\\" ablations performed near-random, so it appears that the base Llama model has very poor understanding of experimental outcomes. Furthermore, the gene pairs are also minimally represented in the knowledge graphs, and the presence of a known physical interaction is not predictive.\"}", "{\"summary\": \"This paper proposes an in-silico approach to predicting biological perturbation experiment results. The original idea is to enrich the graph-structure-based cellular responses textually. In terms of technical details, the authors proposed PerturbQA as a pre-processing step and omitted minor changes in the gene expression level. Then, the authors adopted prompt engineering to retrieve the structured knowledge crossing gene-gene interaction result and gene description. Following a CoT manner, the LLM generates the final answers.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"S1. Adopting LLMs in biological analysis is an interesting topic. This study follows a combination of LLM-related tech fashion.\\n\\nS2. The authors developed a PerturbQA dataset to validate the following research on textual enrichment of perturbation analysis. \\n\\nS3. Detailed examples and case studies are provided to show the effectiveness of the framework.\", \"weaknesses\": \"W1. Part of the technical contribution is related to GenePT, which adopted a textual description and gene expression value to extract the cell representation, and GEARs, which combined prediction on a genetic relation graph (gene-coexpression graph and GO graph).\\n\\nW2. The contributions of the proposed PerturbQA dataset are unclear. 
Why do we need a vague textual description of gene perturbation instead of a formalized graph structure with exact numeric data? \\n\\nW3. Missing technical details. I couldn't find the setting (or formal definition) of the study problem. This makes the paper hard to follow.\", \"questions\": \"Q1. Please describe the distinct contributions of PerturbQA besides a RAG- or a meta-path-walk-like text description generation. How do you ensure the description is correct? For instance, PubmedQA is generated from many articles and certified by human experts.\\n\\nFor the rest of the questions, please refer to the weak points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"Thank you for your feedback! Please let us know if there's anything else we can do to improve the paper.\\n\\nFinally, we would like to reiterate that the primary novelty of this work is not the proposed algorithm, but the framework for modeling perturbation experiments. As we write in the Introduction (Paragraph 2) and Background (Section 3), current approaches for modeling perturbations are misaligned with what biologists aim to glean from these large-scale screens. We are the first to propose that predicting differential expression / direction of change and summarizing gene sets are more realistic endpoints, compared to existing regression-based objectives.\\n\\nOur hypothesis was simply that language can be helpful for modeling perturbations in this setting. There are many approaches in modern NLP that could be adapted towards perturbation modeling, and we have prepared and presented perturbation screens / relevant knowledge sources in approachable formats for this purpose. 
We hope that this work can encourage and facilitate such exploration.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response and for reading our rebuttals! Could you clarify what aspects of the application remain concerning, and which aspects of the task definitions are unclear? This would be very helpful for improving our paper.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response! We have updated the manuscript with your recommendations and additional relevant work (highlighted in blue on page 3).\\n\\nCurrently we do not have the compute to run additional LLM-based experiments before the paper edit deadline, but please let us know if this addresses your concerns in part.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for the prompt response and revision of the paper. However, I checked the updated related work and still believe that these methods weaken the technical contribution of the proposed method. Hence, I will maintain my original score.\"}", "{\"title\": \"Thank you for your review (1/2)\", \"comment\": \"Thank you for your review and questions. We hope this response clarifies certain points of confusion, and we look forward to discussing with you!\\n\\n## Retrieval vs. discovery\\n\\n**\\\"retrieval system:\\\"** Please see the general response. To summarize: Only ~3% of our testing pairs physically interact in any context, while only ~20% of pairs are directly connected by any annotation (including coarse ones). We find that the presence of physical interactions is minimally predictive of differential expression (DE).\\n\\n**\\\"my suggestion is to include more datasets [...] For example, in scGPT, the evaluation for perturbation is across 3 different perturbation datasets\\\"**\\n\\nIn scGPT, the evaluation was conducted over 3 experiments in the **same K562 cell line**, including the K562 essential screen that we also incorporate. 
The other two are older (2016, 2019) and much smaller, as single-cell CRISPR technologies have become more scalable and efficient in the past 5 years.\\n\\nIn contrast, our experiments cover **4 cell lines, derived from 5 experiments**, which are the **largest publicly-available Perturb-seq screens** [1,2]. Biologically, cell lines are very distinct, and they are derived from different cancer tumors / other conditions (K562 myelogenous leukemia, RPE non-cancerous, HepG2 liver cancer, Jurkat acute T cell leukemia). In particular, different genes are expressed, and among the same genes, the marginal distributions (of expression) and gene-gene relationships may differ. To ensure the quality and consistency of our benchmark, we have chosen to focus on these 5 screens, which are larger and published more recently.\\n\\n[1] Replogle et al. Mapping information-rich genotype-phenotype landscapes with genome-scale Perturb-seq. Cell. 2022.\\n\\n[2] Nadig et al. Transcriptome-wide characterization of genetic perturbations. 2024.\\n\\n## Combinatorial perturbations\\n\\nThe lack of combinatorial perturbations is less a limitation of the method than of evaluation.\\n\\nTo craft a \\\"proof of concept\\\" for combinatorial perturbations, it would be relatively straightforward to summarize the neighborhoods of each participating gene, and prompt for any synergy/lack thereof.\\n\\nHowever, existing datasets for combinatorial perturbations are limited. The most commonly analyzed are Norman et al. [3] and Wessels et al. [4], each containing around 100 perturbation pairs. These two experiments operate in different modalities (CRISPR activation vs. inhibition), and since there are no alternatives for comparison, it is difficult to quantify the quality of these data. A major goal of this work was to craft a trustworthy benchmark for perturbation modeling, so we chose to focus on single perturbations instead. 
Therefore we consider combinations currently out of scope, but an opportunity for future work when better datasets are available.\\n\\n[3] Norman et al. Exploring genetic interaction manifolds constructed from rich single-cell phenotypes. Science. 2019.\\n\\n[4] Wessels et al. Efficient combinatorial targeting of RNA transcripts in single cells with Cas13 RNA Perturb-seq. Nat Methods. 2023.\"}", "{\"comment\": \"Thank you for your response. Personally, my main concern is how this method influences perturbation analysis from both a logical and practical perspective. I believe the perturbation problem is a complex process in the biological domain. As you mentioned, even the identified perturbation pairs may not accurately represent the actual perturbations. With this in mind, I am unsure whether the knowledge derived is truly reliable for the perturbation task.\\n\\nOne advantage of using pretrained models in perturbation analysis is their ability to study RNA data without needing to verify the knowledge's accuracy. In practice, I also think the pretrained single-cell models need less data to finetune, or infer, or predict the perturbation combination. Additionally, it\\u2019s worth noting that scGPT only extracts matching genes from GEARS for specific datasets rather than utilizing GEARS\\u2019 structure. Therefore, the results based on the GEARS backbone may not fully align with the implementation.\\n\\nI appreciate this novel perspective on the perturbation task, particularly how it combines single-cell problems with LLMs, and I will adjust my score because of this novelty.\"}", "{\"title\": \"Thank you for your response and review!\", \"comment\": \"Thank you for your time!\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"## Other questions\\n\\n**Multi-hop reasoning:**\\n- Due to the hierarchical nature of the knowledge graphs under consideration (e.g. 
[the GO hierarchy](https://geneontology.org/docs/ontology-documentation/)), \\\"one\\\" hop represents connections at many levels of granularity, ranging from physical interactions in small protein complexes, to concepts as generic as \\\"cell surface\\\" (625 genes) or \\\"GPCR signaling pathway\\\" (960 genes). Thus, even though we only retrieve \\\"1-hop\\\" neighbors, the model has sufficient information to perform (effectively) multi-hop reasoning.\\n- As a direct result of high-connectivity nodes (large protein complexes, coarse annotations), the size of \\\"multi-hop\\\" neighborhoods increases exponentially. For example, if we only consider physical interactions from STRING and CORUM, the median \\\"1-hop\\\" neighborhood has 4 genes, while the median \\\"2-hop\\\" neighborhood has 4456 genes (relatively uninformative).\\n\\n**Baselines:** Please see the general response. To summarize: As Table 1 demonstrates, directly applying NLP methods out-of-the-box on PerturbQA is comparable to random guessing. It is *non-trivial* to adapt these methods for molecular biology reasoning, especially because the vast majority of biological literature is behind paywalls (vs. in machine learning, where everything is open).\\n\\n**Information leakage:** Please see the general response. To summarize: Only ~3% of our testing pairs physically interact in any context, while only ~20% of pairs are directly connected by any annotation (including coarse ones). We find that the presence of physical interactions is minimally predictive of differential expression (DE).\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"## Additional biological analysis\\n\\nThank you for the recommendation! We have updated the draft with several additional analyses regarding the qualitative aspects of our framework (currently in Appendix C), including evaluation from a domain expert. 
Here is a summary.\\n\\n- Clusters that elude manual annotation tend to be smaller or exhibit lower agreement. On these, gene set over-representation analysis focuses on highly specific gene sets, which each cover subsets of these clusters, rather than the whole. The LLM takes the opposite approach, and its summaries tend to \\\"lift\\\" the description to higher levels of hierarchy (Table 8). The two strategies provide orthogonal information, though the LLM outputs may be more readable.\\n\\n- We analyzed 300 generations (3 trials of 100 DE examples) to understand common failure modes (detailed examples in C.3). Errors and inconsistencies primarily resulted from deductions backed by overly-generic information. For example, the LLM may list an excessively broad set of influences, e.g. \\\"mitochondrial function, protein synthesis, or transcriptional regulation,\\\" which affect nearly everything.\\n\\n In several instances, the LLM was also confused between concepts which may be loosely connected, but not in the same context. For example, Gene A is upstream of stress signaling, e.g. \\\"oxidative stress,\\\" which is related to the mitochondria. However, Gene A is *not* responsible for healthy mitochondria function, and should not respond similarly to genes that are.\\n\\n- Finally, since this paper focuses on providing value to biologists, we recruited a domain specialist (molecular biologist, trained in wet lab and computational biology) for this task (See C.1).\\n\\nOverall, the LLM-generated summary was equal or more informative than the classical gene set enrichment results in 92% of cases, and agrees with the independent annotator in 72% of cases.\\n\\n1. In 21/25 cases, the biologist reported that the LLM-generated summary was more informative. In 2/25 cases, they contained the same amount of information; and in 2/25 cases, the gene set contained more information.\\n2. 
In 18/25 cases, the biologist reported that the LLM summary captured the same biology as the original human annotation (our ground truth labels).\\n\\n1. In the 2 cases where the gene set contained more *information*, a list of specific protein complexes was discovered, e.g.\\n ```\\n EIF2AK4 (GCN2) binds tRNA, Aminoacyl-tRNA binds to the ribosome at the A-site, 80S:Met-tRNAi:mRNA:SECISBP2:Sec-tRNA(Sec):EEFSEC:GTP is hydrolysed to 80S:Met-tRNAi:mRNA:SECISBP2:Sec and EEFSEC:GDP by EEFSEC, UPF1 binds an mRNP with a termination codon preceding an Exon Junction Complex, Translocation of ribosome by 3 bases in the 3' direction, Translation of ROBO3.2 mRNA initiates NMD.\\n ```\\n However, this information-rich output is hard to interpret, compared to the LLM output, which the annotator marked as agreeing with the label of \\\"translation.\\\"\\n ```\", \"ribosomal_protein_components_involved_in_translation\": \"This gene set is comprised of components of the large and small ribosomal subunits, which are essential for protein synthesis and translation. These genes are involved in the assembly and function of the ribosome, facilitating the translation of messenger RNA into protein.\\n ```\\n\\n2. In the 7/25 cases where the LLM summary differed from the human annotation, the LLM annotation tended to miss some highly specific terms, e.g. \\\"targets of nonsense-mediated decay\\\" was generalized to \\\"stress response,\\\" and \\\"dysregulated lncRNA antisense transcripts\\\" was generalized to \\\"nuclear gene regulation.\\\" Related terms tend to be sparsely annotated in Gene Ontology, so this indicates that it would be useful to tune the granularity of generations in the future, or to generate multiple candidates for specific descriptions.\\n\\n**scGPT:** We finetuned scGPT + GEARS using their published \\\"perturbation\\\" tutorial. Due to time constraints, we were only able to benchmark K562, but we will add the full results to the final paper. 
We will also include discussion of these references in our work. Thank you!\\n\\nscGPT + GEARS averaged 0.52 AUC on K562 (compared to SUMMER 0.6, GenePT 0.57). Similar to other methods, this model predicts \\\"perfectly\\\" on some perturbations (104/267 with AUC=1) while guessing randomly on the remaining, so it appears that either the embeddings are unhelpful, or the GEARS backbone (on which scGPT builds) is suboptimal.\"}", "{\"comment\": \"Thank you for your detailed response. While some of my concerns, such as multi-hop reasoning, have been addressed, significant issues remain unresolved. These include the lack of baselines, questions about the novelty of the proposed algorithm, the evaluation paradigm, and the limited experimental insights. Therefore, I will maintain my current score.\"}" ] }
5VK1UulEbE
FredNormer: Frequency Domain Normalization for Non-stationary Time Series Forecasting
[ "Xihao Piao", "Zheng Chen", "Yushun Dong", "Yasuko Matsubara", "Yasushi Sakurai" ]
Recent normalization-based methods have shown great success in tackling the distribution shift issue, facilitating non-stationary time series forecasting. Since these methods operate in the time domain, they may fail to fully capture the dynamic patterns that are more apparent in the frequency domain, leading to suboptimal results. This paper first theoretically analyzes how normalization methods affect frequency components. We prove that the current normalization methods that operate in the time domain uniformly scale non-zero frequencies, and thus, they struggle to determine components that contribute to more robust forecasting. Therefore, we propose FredNormer, which observes datasets from a frequency perspective and adaptively up-weights the key frequency components. To this end, FredNormer consists of two components: a statistical metric that normalizes the input samples based on their frequency stability and a learnable weighting layer that adjusts stability and introduces sample-specific variations. Notably, FredNormer is a plug-and-play module, which does not compromise the efficiency compared to existing normalization methods. Extensive experiments show that FredNormer improves the averaged MSE of backbone forecasting models by 33.3\% and 55.3\% on the ETTm2 dataset. Compared to the baseline normalization methods, FredNormer achieves 18 top-1 results and 6 top-2 results out of 28 settings. Our code is available at: https://anonymous.4open.science/r/ICLR2025-13956-8F84
[ "time series forecasting", "deep learning" ]
https://openreview.net/pdf?id=5VK1UulEbE
https://openreview.net/forum?id=5VK1UulEbE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qI31zyrVIC", "o4kgRQYrLs", "c3u62sfWpU", "X23pzLteIN", "LPiK3PvTdD", "HKctGpvVW2" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730048397620, 1731831355982, 1730740991121, 1730646161067, 1730721159667, 1730711068710 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13956/Reviewer_s19A" ], [ "ICLR.cc/2025/Conference/Submission13956/Authors" ], [ "ICLR.cc/2025/Conference/Submission13956/Reviewer_xnnx" ], [ "ICLR.cc/2025/Conference/Submission13956/Reviewer_8Dbc" ], [ "ICLR.cc/2025/Conference/Submission13956/Reviewer_LfuW" ], [ "ICLR.cc/2025/Conference/Submission13956/Reviewer_NuKM" ] ], "structured_content_str": [ "{\"summary\": \"This research tackles a fundamental challenge in time series forecasting\\u2014the non-stationarity of real-world data\\u2014by introducing a novel perspective on normalization within the frequency domain. The authors propose a versatile plug-and-play module that analyzes datasets from a frequency standpoint and dynamically enhances the significance of key frequency components. Comprehensive experiments demonstrate that the FredNormer module enhances the average Mean Squared Error (MSE) of various underlying forecasting models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is generally well-structured, with a clear progression from introduction to conclusion. The use of technical language is appropriate for the audience, and the figures and tables are mostly supportive of the text.\\n\\nThe authors propose FredNormer, a frequency domain normalization method that adaptively up-weights key frequency components based on their stability. The method is model-agnostic and can be integrated into various forecasting models without compromising efficiency. 
\\n\\nThis paper tackles an important problem in time series forecasting\\u2014the non-stationarity of real-world data. The idea of approaching normalization from the frequency domain is novel and could potentially offer new insights into the field.\", \"weaknesses\": \"The importance of the questions posed by the paper is not in doubt, but the execution and resulting conclusions may be less reliable due to experimental concerns. The technical claims of the paper are where the most significant concerns lie. The experimental methodology appears to have several flaws, which may compromise the validity of the results. Without a solid experimental foundation, the technical claims, no matter how innovative, cannot be adequately supported. The paper would benefit from a thorough revision of the experimental design and possibly additional experiments to validate the findings.\\n\\nThe initial discussion in the article pertains to the equivalence of linear combinations before and after applying the Fast Fourier Transform (FFT). However, this proof fails to address the first-order differences introduced in the practical algorithmic procedure. Specifically, Algorithm 1 computes a quantity $S$ based on the batch mean and variance of frequency domain points derived from the input sequence. Conversely, Algorithm 2 utilizes this $S$ to perform a linear transformation on the frequency domain representation of the first-order differences of the sequence. It is evident that the actual algorithmic workflow does not fully align with the theoretical proof provided.\\n\\nWhen evaluating the experimental runtime, it is important to note that if the speed is not measured during the stable phase of training, factors such as initial data loading during training runs may obscure the speed reduction effect attributable to the FredNormer plugin.\\n\\nThe potential contribution of the paper to the field is significant if the method can be validated. 
However, as it stands, the experimental issues limit the trust that can be placed in the results.\", \"questions\": \"1. Why is there a lack of discussion and explanation on the introduction of the first-order difference in the paper, especially when it does not align with the proofs provided for Algorithm 1 and Algorithm 2?\\n\\n2. In the Frequency Stability Measure Analysis, why are there no comparisons with other methods in the ablation experiments, specifically when the proposed FredNormer method is being evaluated?\\n\\n3. The experimental results for the reproduction of iTransformer and PatchTST in the current paper appear significantly inferior to those reported in their original papers. When comparing the FredFormer's experimental results to those in other papers, there seems to be no notable improvement.\\n\\nLiu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2023). itransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625.\\nNie, Y., Nguyen, N. H., Sinthong, P., & Kalagnanam, J. (2022). A time series is worth 64 words: Long-term forecasting with transformers. arXiv preprint arXiv:2211.14730.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank the reviewers for their valuable comments and insights. After thorough consideration, we have decided to withdraw our manuscript.\"}", "{\"summary\": \"This paper reveals that traditional time-domain normalization methods uniformly scale non-zero frequencies, which limits their ability to effectively handle distribution shifts in time series forecasting. 
To address this limitation, the authors propose FredNormer, a plug-and-play module that combines statistical frequency stability normalization with learnable sample-specific weighting, enabling better adaptation to key frequency components and more robust forecasting performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. this paper studies an interesting problem in time series forecasting, i.e., the distribution shift problem.\\n\\n2. the paper is understandable and easy to follow.\", \"weaknesses\": \"1. this paper is not very well-motivated. The paper mentioned that \\\"Modeling solely in the time domain struggles to distinguish between different frequency components within superimposed time series\\\" as their first motivation. Does it have any difference with the distribution shift or non-stationary forecasting? Also, it mentioned that \\\"the z-score normalization applies uniform scaling across all frequency components, which leaves frequency-specific patterns unaltered\\\". I believe only RevIN uses \\\"learnable\\\" z-score normalization but other methods like SIN, SAN, FAN do not use z-score normalization. What they do cannot be equal to z-score normalization. So how can you use z-score to summarize these works?\\n\\n2. this paper is kind of confusing for the theoretical analysis. Based on the Theorem 1, the authors mentioned that \\\"the normalization operation keeps the proportion unchanged\\\". The proportion is unchanged does not mean it cannot normalize the time series. So the theoretical analysis cannot show the existing normalization would fail. Moreover, how can you say your defined stability is correlated with the non-sationarity? Why the frequency components are entangled and thus influence normalization? These parts don't have theoretical analysis I believe.\\n\\n3. 
the experiments somewhat lack comparisons with state-of-the-art normalization techniques, such as SIN [1].\\n\\n[1] SIN: Selective and Interpretable Normalization for Long-Term Time Series Forecasting. In ICML.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents FredNormer, a method designed to tackle the distribution shift issue in the frequency domain. It comprises two components: a statistical metric and a learnable weighting layer. Extensive experiments demonstrate that FredNormer enhances forecasting performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-written and easy to follow.\\n\\n2. Extensive experiments have demonstrated that FredNormer enhances the performance of backbone forecasting models, resulting in notable improvements.\", \"weaknesses\": \"My main concern is regarding the contribution of this work. What are the key differences between FredNormer and the two referenced papers [1][2]?\\n\\n\\n\\n[1] Frequency Adaptive Normalization For Non-stationary Time Series Forecasting, in NeurIPS 2024\\n\\n\\n[2] Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting, in IJCAI 2024\", \"questions\": \"please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the issue of distribution shift in the frequency domain to improve non-stationary time series forecasting. Its primary contribution lies in the theoretical analysis of how normalization methods impact frequency components and in adaptively up-weighting key frequency components. 
Specifically, the proposed method, FredNormer, includes a statistical metric to normalize input samples and a learnable weighting layer to adjust stability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors provide a theoretical proof showing that time-domain normalization is not effective for frequency components.\\n2. The proposed method is a simple and effective method for learning time-invariant frequency components to suppress non-stationary in the frequency domain.\\n3. FredNormer is demonstrated to be effective, achieving state-of-the-art (SOTA) performance in experiments.\", \"weaknesses\": \"1. FAN is also a method based on adaptive normalization in the frequency domain, capable of handling both trend and seasonal non-stationary patterns in time series data. Compared with it, although this paper provides a theoretical guarantee, its contribution appears insufficient.\\n\\n[1] Ye W, Deng S, Zou Q, et al. Frequency Adaptive Normalization For Non-stationary Time Series Forecasting[J]. Advances in Neural Information Processing Systems, 2024.\\n\\n2. In Non-stationary Transformer, the authors argue that removing inherent non-stationarity from time series may reduce the model's ability to forecast real-world bursty events. Could suppressing non-stationary information in FredNormer similarly lead to over-stationarization, limiting its practical applicability? \\n\\n[1] Liu Y, Wu H, Wang J, et al. Non-stationary transformers: Exploring the stationarity in time series forecasting[J]. Advances in Neural Information Processing Systems, 2022, 35: 9881-9893.\\n\\n3. In Definition 2, you mention \\\"M components with higher stability S(k).\\\" How do you define \\\"higher\\\" stability? What is the threshold value used, and how is it selected? \\n\\n4. The authors only compare against with Transformer-based and MLP-based methods. 
Why not include comparisons with more CNN-based baselines, such as TSLANet, ModernTCN, and TimesNet, to demonstrate the robustness and generalization of FredNormer?\\n\\n[1] Eldele E, Ragab M, Chen Z, et al. TSLANet: Rethinking Transformers for Time Series Representation Learning[C]//Forty-first International Conference on Machine Learning, 2024.\\n\\n[2] Luo D, Wang X. Moderntcn: A modern pure convolution structure for general time series analysis[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n[3] Wu H, Hu T, Liu Y, et al. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis[C]//The Eleventh International Conference on Learning Representations, 2023.\\n\\n5. I am curious about FredNormer\\u2019s impact on frequency-domain methods like FiTS and FreTS. Have you tested its performance on these models?\", \"minor_error\": \"1. Section 4.2 mentions running time, but Figure 4 seems to be missing. \\n2. In Section 4.2, you refer to a \\\"fourth-shaped frequency component.\\\" Could you clarify what you mean by this?\\n3. In the notations section, the indicator function I{k=0} is said to equal 1 if k is not equal to 0. \\nIn Proof 1, you state that for k\\u22601,I{k=0}=0. Could you clarify which case is correct?\", \"questions\": \"1. Regarding the contribution, refer to W1.\\n\\n2. For the experiment, please see W4 and W5.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper notes that normalization-based methods can only partially address the distribution shift problem in non-stationary time series forecasting, as these methods operate solely in the time domain and may overlook prominent dynamic patterns in the frequency domain. 
To tackle this, they propose FredNormer, a plug-and-play module that dynamically adjusts the weight of each frequency component based on a proposed measure termed Frequency Stability $\\\\mathrm{S}$.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors define Frequency Stability $\\\\mathrm{S}$ across the entire dataset based on the discrete Fourier transform coefficients, and they theoretically demonstrate that time-domain normalization can only uniformly scale non-zero frequency components, leaving the proportion of the Stable Frequency Subset $\\\\mathcal{O}$ within the spectrum unchanged after normalization.\", \"The authors propose to enhance stable frequency components for better generalization in TS forecasting. They introduce a learnable layer that assigns greater weights to stable frequency elements based on $\\\\mathrm{S}$.\"], \"weaknesses\": [\"The contribution of the paper seems limited. According to equation (10), the core approach mainly involves applying a linear transformation to frequency components based on the matrix $\\\\mathrm{S}$. It is unclear how $\\\\mathrm{S}$ functions as a constraint to realize the key motivation of enhancing the weight of stable frequency components.\", \"In Definition 2, the authors introduce the concept of a Stable Frequency Subset $\\\\mathcal{O}$ but do not provide a clear criterion for determining what qualifies as a stable frequency. Some notations, such as Stable Frequency Subset $\\\\mathcal{O}$, and Theorem 1 presented in Section 2 are not well linked to the design of the method proposed in Section 3.\", \"The experimental evaluation lacks thoroughness, as it includes only three baseline models (Dlinear, PatchTST, and iTransformer). Although the authors mention other models, such as CrossFormer, they do not include it in their comparisons. 
Additionally, the Nonstationary Transformer [1] would also be a relevant model for further comparison.\", \"The empirical analysis in Figure 3 is confusing to the reviewer. Could the authors provide additional clarification on how the adjustments to the amplitudes of the input series and forecasting target, based on the Frequency Stability score, affect model prediction accuracy? Additionally, could you explain why these adjustments are effective in enhancing the model's performance? Both Equations (9) and (10) have large spacing from the preceding text.\", \"[1] Liu, Yong, et al. \\\"Non-stationary transformers: Exploring the stationarity in time series forecasting.\\\" Advances in Neural Information Processing Systems 35 (2022): 9881-9893.\"], \"questions\": [\"In the remark on line 196, the authors note that intermixed stable and unstable components lead to entangled patterns and thus lead to sub-optimal forecasting performance of current models. Could the authors clarify how their proposed model differentiates between stable and unstable components?\", \"What is the purpose of smoothing the data before applying the discrete Fourier transform in Equation (8)? How might the performance of the proposed model be impacted if the transform were applied directly to the raw data?\", \"Also, see the weaknesses.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5V8d2dVF1F
Assessing Vulnerabilities of Large Language Models to Social Bias Attacks
[ "Jiaxu Zhao", "Meng Fang", "Fanghua Ye", "Ke Xu", "Qin Zhang", "Joey Tianyi Zhou", "Mykola Pechenizkiy" ]
Large Language Models (LLMs) have become foundational in human-computer interaction, demonstrating remarkable linguistic capabilities across various tasks. However, there is a growing concern about their potential to perpetuate social biases present in their training data. In this paper, we comprehensively investigate the vulnerabilities of contemporary LLMs to various social bias attacks, including prefix injection, refusal suppression, and learned attack prompts. We evaluate popular models such as LLaMA2, GPT-3.5, and GPT-4 across gender, racial, and religious bias types. Our findings reveal that models are generally more susceptible to gender bias attacks compared to racial or religious biases. We also explore novel aspects such as cross-bias and multiple-bias attacks, finding varying degrees of transferability across bias types. Additionally, our results show that larger models and pretrained base models often exhibit higher susceptibility to bias attacks. These insights contribute to the development of more inclusive and ethically responsible LLMs, emphasizing the importance of understanding and mitigating potential bias vulnerabilities. We offer recommendations for model developers and users to enhance the robustness of LLMs against social bias attacks.
[ "Language model", "Bias", "Attack" ]
https://openreview.net/pdf?id=5V8d2dVF1F
https://openreview.net/forum?id=5V8d2dVF1F
ICLR.cc/2025/Conference
2025
{ "note_id": [ "roJKFPvHgX", "oBjzpJAxeb", "YzjiCX983U", "NMh6WTo4tY" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732627794595, 1731082987519, 1730257889118, 1730254028863 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13927/Authors" ], [ "ICLR.cc/2025/Conference/Submission13927/Reviewer_Tnz4" ], [ "ICLR.cc/2025/Conference/Submission13927/Reviewer_EWno" ], [ "ICLR.cc/2025/Conference/Submission13927/Reviewer_gSeS" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this paper the authors study if jailbreak methods can be used to cause LLMs to give more socially/demographically biased outputs. They test 3 different jailbraeking techniques against a large number of models and make a number of observations about which cases are more susceptible.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper runs a very large and seemingly exhaustive set of experiments to do their analysis.\", \"There are intermediate contributions such as the creation of a dataset of prompts for biased output that I think could be useful (if released).\"], \"weaknesses\": [\"The biggest question for me is the alignment of motivation and experiments -- the motivation for an adversary trying to trick a model to make a biased statement seems more synthetic. It'd be interesting to push this into more realistic settings such as writing hateful content (eg tweets).\", \"The definitions of attack success seems quite loose and not able to differentiate between slightly biased and highly harmful content. 
Many benchmarks and platforms now have more nuanced definitions of unacceptable content that would be valuable to anchor on (eg ML Commons's \\\"hate\\\" definition).\", \"Generally, while I appreciate the thoroughness of the work, the novelty is limited.\", \"While not core to the paper, the defense methods chosen (all prompt based) are fairly weak.\"], \"questions\": [\"Can you share the prompts themselves? It is quite hard to tell how egregious they are.\", \"Likewise the human annotation of whether it is biased seems like it could range from really bad outputs to slightly biased. Is there more detail here?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the vulnerabilities of large language models (LLMs) to social bias attacks through comprehensive experiments. Previous work like [1] analyzes the bias issues of LLM-generated contents, while this paper focuses on the evaluation of the vulnerabilities of LLMs to various bias attacks. Three types of bias attacks are considered in the experiments for gender, racial, and religious biases. The effects of these attacks on LLM-generated contents are analyzed for a comprehensive list of modern LLMs. Besides, this paper explores the impacts of model size and fine-tuning on the vulnerabilities of LLMs. The results provide valuable insights for the development of LLMs that are robust and safe against bias attacks.\\n\\n[1] Zhao, J., Fang, M., Shi, Z., Li, Y., Chen, L., & Pechenizkiy, M. (2023). Chbias: Bias evaluation and mitigation of chinese conversational language models. 
arXiv preprint arXiv:2305.11262.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper contributes to the field of responsible LLMs by investigating the vulnerabilities of LLMs to various types of social bias attacks, it constructs the bias attack dataset for the evaluation experiments.\\n\\nThis paper studies the impacts of model size, fine-tuning, and types of attack on LLM vulnerability, which I think is valuable for the defensive research in LLMs.\\n\\nThe experiments are comprehensive and adequate. Particularly, this paper considers three types of attacks for three types of commonly seen biases. Most modern LLMs are evaluated in the experiments. I think the experimental results justify the findings and insights this paper provides.\", \"weaknesses\": \"1. This paper mainly examines the vulnerabilities of LLMs, which I think can be viewed as a kind of robustness against bias attacks. In Section 3, the authors introduce various bias techniques in existing work. If these attacks are proved in literature to be effective in attacking LLMs, this would somehow limit the contribution of this paper. In other words, the results of this paper mainly show that LLMs can be still attacked by these attacks, which is the same as the paper that introduces these attacks. To this end, I would like the authors to discuss the technical contributions of this paper.\\n\\n2. When introducing the motivation of this work, only a vague definition of \\u201cbias\\u201d is provided (as in footnote 1), which lacks preciseness and clarity. Given that this paper investigates the vulnerabilities of LLMs to social bias attacks, it is better to clearly define what the social bias issue in LLMs is at the beginning. For example, what kind of LLM-generated contents are considered biased and harmful, how they display, and who they impact. 
This will help the community to better understand the value this work brings to advancing trustworthy LLM research. \\n\\n3. In the experiments, the vulnerability of an LLM is evaluated only in terms of the proportion of \\u201cbiased\\u201d results. However, such a binary outcome (biased/not biased) can oversimplify the evaluation of LLM-generated contents. Actually, the response from an LLM can have varying degrees of biasedness; some responses, even though biased, are still tolerable or will not be too harmful for the general public, while some other responses may be completely unacceptable. Therefore, a better evaluation metric to consider here is a numeric score that describes the degree of biasedness of the response. Using this more sophisticated evaluation metric may help to explain why in some scenarios (as in Table 2) the vulnerability is lower under attacks compared to the No Attack case, which is an unexpected pattern. Some explanation of the choice of the evaluation metric would be helpful.\\n\\n4. In Table 3, the results of cross-bias attacks based on Race-Religion and Gender-Religion pairs are not included; is there a particular reason for that? Also, from the results in Table 3, it seems that LLMs are more vulnerable to single-bias attacks, which is consistent with expectation. Given this observation, could you please clarify the potential significance (usefulness) of cross-bias attacks?\", \"questions\": \"Please see the weaknesses above.\\n\\nI will raise my score if the authors could address my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In the paper, the authors addressed a growing concern about various social biases in LLMs. 
In particular, they worked on three main biases, i.e., gender, religion, and race, with three different methods, e.g., prefix injection, refusal suppression, and learned attack prompts, to deceive LLMs. They evaluated the impacts on several of the most popular LLMs. Besides manual evaluation, they also used LLMs to evaluate their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe entire paper is well-written and presents the ideas in a very clear way.\\n2.\\tIt introduces several representative categories of social biases in LLMs and demonstrates and evaluates a specific category of general-purpose LLM vulnerability over several LLMs.\", \"weaknesses\": \"1.\\tAs a whole, this paper reads more like playing with the LLMs using several categories of deceiving prompts. It significantly lacks theoretical or technical presentation of the ideas and experimental analysis.\\n2.\\tThe explainability of such biases in LLMs is missing. There is no formal definition of the attack evaluations.\\n3.\\tMoreover, the authors mentioned jailbreaking attacks as an automated approach to evaluate attacks; however, they did not mention how the LLM\\u2019s responses were distinguished as not containing responses like \\u201cI am a language model. . . \\u201d, \\u201cSorry. . . \\u201d, \\u201cAs a language model. . . \\u201d\\n4.\\tThe defense techniques it explored are also precaution-based methods, e.g., post-prompting and sandwich defense, which likewise do not provide any deep technical analysis. Moreover, the attack impact under those defenses is not discussed in detail.\", \"questions\": \"1.\\tSince the Appendix is not the main content of the paper, why has subsection 4.4 SENSITIVITY ANALYSIS OF ATTACK PROMPTS been included by only mentioning it?\\n2.\\tThere are many ways of performing PI and RS apart from the examples mentioned in the paper. 
Moreover, in the experimental results it was not clear enough which prompts of the corresponding categories (e.g., \\u201cAbsolutely! Here\\u2019s\\u201d for PI, or \\u201cDo not include any 'note' or 'disclaimer'\\u201d for RS) led to which results. \\n3.\\tDiscussing the impacts of the defenses a little would better complement the paper.\\n4.\\tIn 5.2, the human evaluator assessed 100 bias attack samples; what about the number of bias attacks performed for the jailbreak and LLM evaluators?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5V0f8igznO
Captured by Captions: On Memorization and its Mitigation in CLIP Models
[ "Wenhao Wang", "Adam Dziedzic", "Grace C. Kim", "Michael Backes", "Franziska Boenisch" ]
Multi-modal models, such as CLIP, have demonstrated strong performance in aligning visual and textual representations, excelling in tasks like image retrieval and zero-shot classification. Despite this success, the mechanisms by which these models utilize training data, particularly the role of memorization, remain unclear. In uni-modal models, both supervised and self-supervised, memorization has been shown to be essential for generalization. However, it is not well understood how these findings would apply to CLIP, which incorporates elements from both supervised learning via captions that provide a supervisory signal similar to labels, and from self-supervised learning via the contrastive objective. To bridge this gap in understanding, we propose a formal definition of memorization in CLIP (CLIPMem) and use it to quantify memorization in CLIP models. Our results indicate that CLIP’s memorization behavior falls between the supervised and self-supervised paradigms, with "mis-captioned" samples exhibiting highest levels of memorization. Additionally, we find that the text encoder contributes more to memorization than the image encoder, suggesting that mitigation strategies should focus on the text domain. Building on these insights, we propose multiple strategies to reduce memorization while at the same time improving utility---something that had not been shown before for traditional learning paradigms where reducing memorization typically results in utility decrease.
[ "memorization", "multi-modal", "clip", "vision language models" ]
Accept (Poster)
https://openreview.net/pdf?id=5V0f8igznO
https://openreview.net/forum?id=5V0f8igznO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w12lKu0Yqv", "uf0YCTskaK", "uFzceul8Dh", "luQT2Q5GdG", "jB5smrIaKJ", "g2rcbDZLDd", "fXG35aukdz", "eIZiQ3vCH7", "c5xxiIA3YR", "c3eRNKycPs", "aHxl4i07sM", "QjuG7ivHVt", "FDgPjYgFhJ", "Acrtoru5YX", "AKLs1BuMxn", "9v4asYNXdj", "8Xjg9A6u5p", "7vwEwGKfoz", "3wmRAPp7Co" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729817621106, 1732546314135, 1732552438008, 1732485892947, 1733071470813, 1732532060790, 1732485625814, 1732695011499, 1733259498247, 1732485184152, 1731141210738, 1730628905920, 1737524055797, 1732486825909, 1734314945926, 1732486603633, 1732486122355, 1732670718188, 1733070217968 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10467/Reviewer_F6JJ" ], [ "ICLR.cc/2025/Conference/Submission10467/Reviewer_khe4" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Area_Chair_Cmp4" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Reviewer_khe4" ], [ "ICLR.cc/2025/Conference/Submission10467/Reviewer_CAJS" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Area_Chair_Cmp4" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10467/Reviewer_F6JJ" ], [ "ICLR.cc/2025/Conference/Submission10467/Authors" ] ], "structured_content_str": [ "{\"summary\": \"They study the memorization problem in the CLIP model unlike existing studies focusing on the unimodal memorization problem.\\nThey propose a new metric to analyze it and find that memorization seems to be more significant in text encoder than in image encoder. Their analysis indicates that augmenting captions can be a key to mitigating memorization in the CLIP model. Their experiments confirm that augmenting captions can improve the quality of the image encoder's representations while reducing memorization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. They propose a new metric to measure memorization in the CLIP model. The design of the metric is reasonable.\\n2. Their insight that memorization is more significant in the text encoder is new and might interest readers. \\n3. They conduct analysis on the augmentation of the text and images, which might be also interesting to some readers. \\n4. This paper is well-organized and easy to follow.\", \"weaknesses\": \"1. They do not discuss how model size can affect memorization. Although I am not very familiar with this topic, I guess the model size can affect their arguments. For example, if they utilize a larger image encoder, the memorization might be more significant on the image side. Therefore, I think their conclusion about which encoders suffer more from memorization can change by the size of the encoders, but they do not discuss much.\\n\\n2. Most of their findings sound a bit too reasonable and are not surprising. Their finding that augmenting text improves the CLIP model has already been observed in many previous papers though they probably did not discuss memorization. Also, it is not hard to imagine that augmenting datasets mitigates memorization. 
In these points, I think their findings are not impressive. \\n\\n3. They mention that their metric is effective in removing noisy samples. But, they compare their approach only with random replacement. I think they need to add more baselines, such as naive CLIP's similarity as done in many works. \\n\\n4. In Table 1, the authors augment images by using a diffusion model. But, as they imply, such augmentation can cause a distribution shift on the image side and does not give much intuition about image augmentation. \\n\\nOverall, I think this paper is well-organized and delivers a clear statement to readers, which I like. However, I think their findings are not very surprising and lack impact due to the reasons described in 1, 2. My rating is based on it.\", \"questions\": \"My rating is mainly based on 1 and 2. Please respond to those points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Appreciate the extra ablations and responses\", \"comment\": \"We thank the authors for their response and extra experiments (particularly the YFCC 7M run). I'm increasing my score accordingly as I think memorization in contrastive vision + language models is quite understudied.\"}", "{\"comment\": \"We appreciate the Reviewer's thoughtful feedback and are glad the additional experiments on YFCC 7M addressed the concerns. We are encouraged by your recognition of the importance of studying memorization in contrastive vision + language models and hope our work contributes to advancing understanding in this area. 
Thank you for increasing your score and supporting our efforts on this topic.\"}", "{\"title\": \"Linear probe accuracy, Nits, CLIPMem is bounded [-1,+1], dimension in Table 1, Infinite data regimes\", \"comment\": \">**The linear probe accuracy seems quite low (Table 1, 5-a, 5-b, 6-a/b).**\\n\\nThe comparably low linear probing accuracy stems from the significantly smaller scale of our training setup in comparison to the original CLIP model, due to computational constraints. For example, the original CLIP model was trained on approximately 30 million images with a batch size of 1712 [1]. In contrast, our model was trained on only 70000 (65,000 shared and 5,000 candidate) samples with a batch size of 128. As highlighted in prior work, self-supervised learning always needs a large number of training samples [2, 3]. This is the reason for the relatively low linear probing accuracy observed in our experiments. However, since even smaller versions of CLIP demonstrate significant memorization, this effect is likely to be even more pronounced in larger models, as memorization tends to increase with model size [4].\\n\\nWe conducted additional experiments where we vary the size of the encoders. Note that since CLIP embeds both text and image to the same latent space, the output of both encoders needs to be of same dimensionality. Hence, it is impossible to combine, for example, a ViT-base for the language part with a ViT-large for the image part.\\n\\nWe, hence, increased the size of both encoders to ViT-large and report the results below for convenience.\\n\\n| **Model** | **CLIPMem** | **Lin. Prob. Acc. 
(ImageNet)** |\\n|:---------------------------:|:-----------:|:------------------------------:|\\n| Paper (Baseline ViT-Base): | 0.438 | 63.11% \\u00b1 0.91% |\\n| ViT-Large | 0.457 | 67.04% \\u00b1 1.05% |\\n\\nThey highlight that while larger encoders improve performance, they also significantly increase the memorization.\\n\\n**References:**\\n\\n[1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[2] Nozawa, Kento, et al. \\\"Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning.\\\" NeurIPS, 2021\\n\\n[3] Liu, Hong, et al. \\\"Self-supervised Learning is More Robust to Dataset Imbalance.\\\" ICLR, 2022\\n\\n[4] Wang, Wenhao, et al. \\u201cMemorization in Self-Supervised Learning Improves Downstream Generalization.\\u201d ICLR, 2024.\\n\\n>**Text and images in Figure one are very hard to read. Suggest larger and fewer images and move rest to appendix. Figure 3 can be make larger / more readable by doing share-y and increasing font sizes.**\\n\\nWe updated the paper according to the reviewer\\u2019s suggestions and increased the size of images and text (including in Figures 1 and 3). \\n\\n>**Is CLIPMem bounded?**\\n\\nYes, our CLIPMem is normalized from -1 to 1, and the normalization procedure is described in Appendix A.2, \\u201cNormalization on CLIPMem.\\u201d A memorization score of $0$ indicates no memorization, +1 indicates the strongest memorization on CLIP model f, and -1 indicates the strongest memorization on CLIP model g. \\n\\n>**What dimension is kept constant for Table 1? Are the total training samples [count] seen the same?**\\n\\nThe total number of image-caption pairs is kept the same. We use either 1 or 5 captions from the COCO dataset. 
For the images, in the case of 1 image, we use the original ones from COCO, and to obtain 5 images per caption, we generate the images for each of the 5 original captions from COCO using Stable Diffusion v1.5. \\n\\n>**It would be interesting to evaluate infinite data regimes (i.e. no repeated data) rather than classical K-epoch runs. Would the results for - memorization hold here? Just like in language this setting is becoming more and more common.**\\n\\nWe thank the reviewer for their suggestion and added an additional experiment where we use the larger YFCC100M dataset, trained for one epoch, and then evaluate memorization. To ensure comparability to our initial experiments where we trained the model on 70,000 data points for 100 epochs (i.e., 7M samples seen during training), we trained with 7M samples from YFCC100M. \\n\\nTo fit our setup, we trained model f with 6950000 shared + 50000 candidate samples and model g with 6950000 shared + 50000 independent samples. We observe that there is still significant memorization even when training for one epoch. \\n\\n| **Model** | **Epochs** | **CLIPMem** | **Lin. Prob. Acc. (ImageNet)** |\\n|:-----------:|:--------:|:-----------:|:------------------------------:|\\n| YFCC 7M | 1 Epoch | 0.425 | 64.83% \\u00b1 1.04% |\\n| Paper (Coco) | 100 Epochs | 0.438 | 63.11% \\u00b1 0.91% |\\n\\nWe also assessed the memorized samples qualitatively, as shown in the new Figure 9 in the updated paper. They highlight that the insights remain the same even when training only for one epoch: atypical and miscaptioned samples are memorized the most.\"}", "{\"title\": \"Have concerns been addressed?\", \"comment\": \"We would like to follow up on our answers, especially regarding the factors that influence memorization. 
We demonstrated that (1) adding Gaussian noise (in our experiments: $N(0,0.15)$) to the text embeddings, and (2) increasing the number of captions or images for a given sample, decreases the memorization while enhancing performance. On the other hand, (3) larger encoders (ViT-Large) improve performance and also significantly increase memorization as compared to smaller encoders (ViT-Base). Do our replies adequately address the reviewer's concerns?\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}", "{\"title\": \"Applicability of CLIPMem & the noising results\", \"comment\": \"We thank the reviewer for their insightful comments and are glad that the Reviewer found our results on \\u201cmis-captioned text labels, multi-caption and removal of memorization examples\\u201d interesting.\\n\\n>**Missing ability for CLIPMem to be applicable to general off-the-shelf CLIP models. Currently, if I understand correctly it requires retraining on specific splits.**\\n\\nWe would like to clarify that the dependence on retraining on specific splits is an inherent property of the leave-one-out methodology that our metric is based on and not a specific limitation of CLIPMem. This approach is consistent with prior works on measuring memorization in supervised [1] and self-supervised learning [2], which similarly rely on controlled experimental setups in order to accurately test memorization by comparing models trained with and without specific data points.\\n\\n**References:**\\n\\n[1] Feldman, Vitaly. \\\"Does learning require memorization? 
a short tale about a long tail.\\\" In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pp. 954-959. 2020.\\n\\n[2] Wang, Wenhao, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, and Franziska Boenisch. \\\"Memorization in Self-Supervised Learning Improves Downstream Generalization.\\\" In The Twelfth International Conference on Learning Representations. 2024\\n\\n>**Clarity of specifics of CLIPMem is used for vision only and joint vision + text can be improved.**\\n\\nExisting memorization metrics, such as SSLMem which forms the foundation of our metric, can be used separately for each modality (i.e., either vision or text). However, these metrics are limited when applied to multi-modal models like CLIP, which require interaction between different modalities (as we visualized in Figure 3 in the paper). Hence, our CLIPMem is designed for the joint vision+text multimodal models. Additional clarification would be appreciated if this does not fully address the Reviewer\\u2019s concern.\\n\\n>**The noising results (Table 5-b) are not very convincing. Almost all the results are within the same +/- std range.**\\n\\nFirst, we would like to emphasize that the results between no noise, i.e., $\\\\mathcal{N}(0,0)$ and the highest value of noises we evaluated in the original submission $\\\\mathcal{N}(0,0.1)$ differ significantly, indicating a strong (yet continuous) trend through noise addition. \\n\\nWe additionally extended the evaluation and added larger amounts of noise. \\n\\n| **Noise** | **CLIPMem** | **Lin. Prob. Acc. 
(ImageNet)** |\\n|:---------:|:-----------:|:------------------------------:|\\n| None | 0.438 | 63.11% \\u00b1 0.91% |\\n| (0,0.01) | 0.435 | 63.36% \\u00b1 0.88% |\\n| (0,0.05) | 0.428 | 64.02% \\u00b1 1.12% |\\n| (0,0.10) | 0.421 | 64.95% \\u00b1 0.96% |\\n| (0,0.15) | 0.417 | 65.34% \\u00b1 0.84% |\\n| (0,0.20) | 0.422 | 64.83% \\u00b1 0.92% |\\n| (0,0.25) | 0.436 | 63.28% \\u00b1 0.79% |\\n\\nOur results highlight that with $\\\\mathcal{N}(0,0.15)$, the best linear probing accuracy of 65.34% \\u00b1 0.84% and the lowest CLIPMem (0.417) can be achieved. This result is significantly outside of the standard deviation of no noise addition with 63.11% \\u00b1 0.91%.\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the Reviewer for engaging in the discussion with us and checking the rebuttal, submission, and other reviewers\\u2019 comments. We are glad that the Reviewer appreciates our work.\\n\\nOur work is much broader than the analysis of model size. We **never** *\\u201cargue that some techniques, e.g., data augmentation, always help\\u201d* but **provide specific parameter values**, e.g., the standard deviation for the added noise, model size (and architecture), and number and strength of augmentations. \\n\\n1. We clearly state that the memorization is related to different factors:\\n\\nLines 481-482: \\u201cwe add small amounts of Gaussian noise to the text embeddings. Our results in Figure 5b and Table 8 highlight that this strategy is highly effective in reducing memorization while improving downstream generalization.\\u201d Figure 5 (b) shows that adding the Gaussian noise $N(0,0.15)$ gives us the sweet spot with the smallest memorization and highest performance.\\n2. Caption to Figure 5: \\u201c(a) We use multiple captions for the same image during training. 
In our experiments, where we analyzed the range of captions per image from 1 to 5, the case with 5 captions provided the largest reduction of memorization and the biggest increase in performance. (Lines 465-466:) The general trend is that the more captions are used during training, the lower memorization and the higher the linear probing accuracy. \\n3. We also observe (in Table 1) that having more augmented images (5 augmented images + 1 caption) helps to decrease memorization while increasing performance (compared to the case with 1 image + 1 caption). \\n4. As requested by the Reviewer, we showed that, while keeping the same architecture (ViT), larger encoders (ViT-Large) improve performance and also significantly increase memorization as compared to smaller encoders (ViT-Base).\\n\\nOverall, our work considers a broad range of factors and analyzes their impact on memorization. We, therefore, kindly ask the reviewer to re-assess their score.\"}", "{\"title\": \"Feedback on additional analysis and experiments\", \"comment\": \"We sincerely appreciate the Reviewer's feedback and would like to know if our additional analysis and experiments adequately address the concerns raised.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the Reviewers for their thoughtful and encouraging feedback, which greatly helped us further improve our submission. Our work was described as addressing \\u201ca gap in previous research\\u201d by offering \\u201ca new way for measuring memorization in multi-modal settings\\u201d (Reviewer CAJS). 
We are encouraged that the significance of the studied trade-off between memorization and generalization was highlighted (Reviewer khe4), with Reviewer CAJS noting our observed trade-off as \\u201cchallenging established norms\\u201d and stating that the paper \\u201csuggests guidelines that can benefit real-world multi-modal model training practices.\\u201d Reviewer khe4 also found our experimental results *interesting*, especially the removal of memorized mis-captioned examples, which improves performance, underscoring the practical impact of our findings. Finally, Reviewer F6JJ notes that our \\u201cinsight that memorization is more significant in the text encoder is new\\u201d and that our results and analysis \\u201cmight interest readers.\\u201d We hope that our work will significantly contribute to the community and inspire further research, reaching a broader audience.\\n\\n1. During the rebuttal, we performed additional experiments and analysis that we present in the individual answers to the reviewers, and update them correspondingly in the paper.\\n2. Based on the suggestion of Reviewer khe4, we experimented with additional noise strengths for caption noising and updated the results in Figure 5 (b) and Table 8. When applying random Gaussian noise with 0 mean and 0.15 standard deviation, the linear probing accuracy is 65.34% \\u00b1 0.84%, which is out of the +/- std range of the baseline\\u2019s 63.11% \\u00b1 0.91%. This now supports our statement \\u201cNoising the text embedding during training is highly effective in reducing memorization while improving downstream generalization\\u201d in the paper. \\n3. Based on the suggestion of Reviewer F6JJ, we compared the effect of noise-sample removal between CLIPMem and naive CLIP's similarity and updated the results in Figures 6 (a) and 6 (b) in the paper. The results show that CLIPMem is more effective than using the naive CLIP\\u2019s similarity.\\n4. 
Based on the suggestions of Reviewer khe4 and Reviewer CAJS, we train new models with infinite data regimes on a 7M subset of YFCC100M, which is a widely used large-scaled multi-modal dataset. The results in Table 6 and Figure 9 indicate that the most-memorized samples (according to CLIPMem) are still obviously mis-captioned samples, which proves CLIPMem is feasible in infinite data regimes training and can be applied to multi-modal datasets besides COCO and CC3M.\\n5. Based on the suggestion of Reviewer F6JJ, we performed extra experiments on ViT-large/16 to further study the impact of how model size will affect memorization effect. The results in Table 5 show that with more parameters (larger size), encoders will have higher memorization capacity. This aligns with previous research in Supervised Learning and Self-Supervised Learning [1,2,3]. \\n6. Following Reviewer F6JJ's suggestion, we conducted additional experiments for Table 1 to further substantiate the claim that augmenting either text or images during training significantly reduces memorization.\\n\\n**References:**\\n\\n[1] *\\u201dMemorization in self-supervised learning improves downstream generalization\\u201d.* Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch. ICLR, 2024.\\n\\n[2] *\\u201cDoes learning require memorization? a short tale about a long tail\\u201d.* Vitaly Feldman. \\nACM SIGACT Symposium on Theory of Computing, 2020.\\n\\n[3] *\\u201cDo ssl models have d\\u00e9j\\u00e0 vu? a case of unintended memorization in self-supervised learning.\\u201d* Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo. arXiv e-prints, 2023.\"}", "{\"summary\": \"**Summary:** Understanding the memorization / generalization tradeoff is important to properly quantify modern ML models. This work focuses on CLIP which applies InfoNCE between image and text pairs and attempts to quantify the extent to which CLIP models memorize. 
The authors introduce CLIPMem to quantify memorization and find that the text encoder contributes more to memorization than the vision encoder. They also propose strategies to mitigate / remove memorized samples to improve performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"**Strong points:**\", \"Well-highlighted literature on memorization.\", \"Defines CLIPMem based on a hold-one-out strategy (similar to Feldman et al. in supervised learning).\", \"Interesting results on mis-captioned text labels, multi-caption and removal of memorized examples.\", \"Reasonable pretraining datasets like CC3M.\", \"Clean separation of training and test splits for measuring memorization.\"], \"weaknesses\": [\"**Weak points:**\", \"Missing ability for CLIPMem to be applicable to general off-the-shelf CLIP models. Currently, if I understand correctly, it requires retraining on specific splits.\", \"Clarity of specifics of CLIPMem is used for vision only and joint vision + text can be improved.\", \"The noising results (Table 5-b) are not very convincing. Almost all the results are within the same +/- std range.\", \"The linear probe accuracy seems quite low (Table 1, 5-a, 5-b, 6-a/b).\", \"**Nit:**\", \"Text and images in Figure one are very hard to read. Suggest larger and fewer images and move rest to appendix.\", \"Figure 3 can be made larger / more readable by doing share-y and increasing font sizes.\"], \"questions\": [\"**Questions:**\", \"Is CLIPMem bounded?\", \"What dimension is kept constant for Table 1? Are the total training samples [count] seen the same?\", \"It would be interesting to evaluate infinite data regimes (i.e. no repeated data) rather than classical K-epoch runs. Would the results for - memorization hold here? 
Just like in language this setting is becoming more and more common.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces CLIPMem, a novel metric to quantify memorization in CLIP, which combines elements of supervised and self-supervised learning. The paper shows that memorization within CLIP often arises from mis-captioned or atypical samples, particularly within the text modality rather than the image modality. They propose mitigation strategies that reduce memorization without sacrificing model utility, unlike traditional methods that often degrade performance when reducing memorization. It reports several interesting findings, including 1) CLIP's memorization lies between supervised and self-supervised paradigms, with high memorization for data with inaccurate or misaligned captions; 2) Text domain adjustments, such as using varied or augmented captions, reduce memorization and improve generalization, defying the usual trade-offs seen in other paradigms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It introduces a new metric -- CLIPMem to provide a new way for measuring memorization in multi-modal settings, a gap in previous research.\", \"It performs empirical analysis to show differences in memorization between the text and image modalities, providing actionable insights.\", \"It proposes techniques to successfully reduce memorization while preserving or even enhancing model utility, challenging established norms.\", \"By highlighting the risks of training with uncurated, potentially mis-captioned data, the paper suggests guidelines that can benefit real-world multi-modal model training practices.\"], \"weaknesses\": [\"While tailored to CLIP, the metric and findings may need adaptation to apply effectively to other multi-modal models with different architectures.\", \"The experiments 
focus on datasets like COCO and CC3M, so it\\u2019s unclear how well these findings generalize to other large-scale or domain-specific datasets.\", \"The mitigation strategies, such as augmenting captions or generating variations, may incur additional computational costs in training, which could limit practicality for some users.\"], \"questions\": \"Would you please comment on how the metric adapt to other multi-modal models besides CLIP?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Model size vs Memorization, Surprising findings, CLIPMem vs CLIP Similarity, and Augmentations\", \"comment\": \"We thank the reviewer for recognizing that our metric is reasonable and that the insights on memorization provided are of interest to the readers.\\n\\n>**They do not discuss how model size can affect memorization.**\\n\\nTo address the reviewer\\u2019s comments, we conducted additional experiments where we varied the size of the encoders. Note that since CLIP embeds both text and image to the same latent space, the output of both encoders needs to be of the same dimensionality. Hence, it is impossible to combine, for example, a ViT-base for the language part with a ViT-large for the image part.\\nWe, hence, increased the size of both encoders to ViT-large and report the results below for convenience.\\n| **Model** | **CLIPMem** | **Lin. Prob. Acc. (ImageNet)** |\\n|:---------------------------:|:-----------:|:------------------------------:|\\n| Paper (Baseline ViT-Base): | 0.438 | 63.11% \\u00b1 0.91% |\\n| ViT-Large | 0.457 | 67.04% \\u00b1 1.05% |\\n\\nThey highlight that while larger encoders improve performance, they also significantly increase memorization. 
We included this result in the revised version of the paper in Table 5.\\n\\n>**Most of their findings sound a bit too reasonable and are not surprising. Their finding that augmenting text improves the CLIP model has already been observed in many previous papers though they probably did not discuss memorization.**\\n\\nWe would kindly disagree with the reviewer on the fact that the results are not surprising: For both supervised [1] and self-supervised [2] learning, it has always been shown that decreasing memorization also reduces the performance of the model, i.e., memorization is required for generalization. In contrast, our findings suggest that in CLIP, text augmentation simultaneously reduces memorization while improving performance. This stands in contrast with the other learning paradigms. Additionally, our work is the first to highlight that memorization stems more from the text than the image modality, which provides valuable insights into the inner workings of multi-modal models.\\n\\n>**Also, it is not hard to imagine that augmenting datasets mitigates memorization.**\\n\\nBy designing a metric to objectively quantify memorization, and by conducting thorough experiments, our work provides scientific evidence for the intuition that augmentations mitigate memorization in multi-modal CLIP models.\\n\\n>**They mention that their metric is effective in removing noisy samples. But, they compare their approach only with random replacement. I think they need to add more baseline, such as naive CLIP's similarity as done in many works.**\\n\\nWe performed the experiment suggested by the reviewer and also removed samples based on the CLIP similarity score as a baseline to our removal based on CLIPMem. Please see the updated Figure 6. 
While CLIP similarity also manages to increase performance through removal, it is not as effective as our metric, highlighting the value of considering memorization as a lens to identify noisy samples.\\n\\n>**In Table 1, the authors augment images by using a diffusion model. But, as they imply, such augmentation can cause a distribution shift in the image side and does not give much intuition about image augmentation.**\\n\\nTo address the reviewer\\u2019s concern, we conducted additional experiments. We measured CLIPMem for a model trained with only one generated image and the corresponding real caption. We updated the Table 1 in the paper and here include its copy for the Reviewer\\u2019s convenience:\\n\\n| **Case** | **CLIPMem** | **Lin. Prob. Acc. (ImageNet)** |\\n|----------------------------------------------|-------------|--------------------------------|\\n| 1 real image + 1 real caption | 0.438 | 63.11% \\u00b1 0.91% |\\n| 1 generated image + 1 real caption | 0.428 | 63.97% \\u00b1 0.79% |\\n\\nThe results indicate that this augmentation effectively reduces memorization while also providing a slight improvement in performance. Notably, if the distribution shift had been substantial, we would have observed a decrease in performance instead. These findings further reinforce our claims.\\n\\n**References**\\n\\n[1] \\u201cMemorization in self-supervised learning improves downstream generalization\\u201d. Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch. ICLR, 2024.\\n\\n[2] \\u201cDoes learning require memorization? a short tale about a long tail\\u201d. Vitaly Feldman. \\nACM SIGACT Symposium on Theory of Computing, 2020.\"}", "{\"metareview\": \"The authors study memorization on CLIP models and define CLIPMem as a measure of it. They find that \\\"mis-captioned\\\" samples exhibit the highest levels of memorization and that the text encoder contributes more to memorization than the image encoder. 
They also propose strategies to reduce memorization and improve performance.\\n\\nThis result is interesting since it differs from the behavior of other learning paradigms. Reviewers pointed out some areas of improvement, like exploring the effect of model size, training on infinite data regimes, or experiments with additional noise strengths. The authors did a good job with the rebuttal, and I believe they succeeded at answering all the reviewer\\u2019s concerns. However, while khe4 raised their score, CAJS remained inactive during discussion, and F6JJ decided to keep their score of 5, arguing that results are not surprising.\\n\\nDuring reviewer/AC discussion, F6JJ restated their concern and khe4 replied that this work is good science and valuable in itself. I also believe that the fact that reducing memorization in CLIP increases performance differs from behavior previously seen in other paradigms. \\n\\nOverall, and given the enthusiasm of khe4, I believe this is a valuable work and it should be accepted to ICLR 2025.\", \"additional_comments_on_reviewer_discussion\": \"During reviewer/AC discussion, F6JJ restated their concern and khe4 replied that this work is good science and valuable in itself. I also believe that the fact that reducing memorization in CLIP increases performance differs from behavior previously seen in other paradigms.\"}", "{\"title\": \"Additional computational costs\", \"comment\": \">**The mitigation strategies, such as augmenting captions or generating variations, may incur additional computational costs in training, which could limit practicality for some users.**\\n\\nBased on the reviewer\\u2019s comment, we benchmarked the generation times:\\n1. First, we have to point out that augmenting the text in the embedding space takes only 0.016 seconds/caption, i.e., it has a negligible impact on the computational cost or the training time.\\n2. 
The generation of additional captions or their paraphrases using LLMs is also very fast (0.28 seconds per paraphrased caption when using GPT3.5). Additionally, this generation incurs zero cost when using open LLMs like Llama, Vicuna, or Mistral. \\n\\nWe would like to note that our strategy of adding random noise in the embedding space (see original Table 5b) introduces very small overhead, yet, is effective in limiting memorization.\\n\\nThe generation of images is a bit more expensive (on average 1.06 seconds per image when using Stable Diffusion 1.5). However, the generation is done once and then amortizes with more epochs. For the standard CLIP training, a single generated image is reused 35 times. We note that the timing depends on the underlying hardware, especially the GPU used. In our case, we leveraged a single A100 GPU with 80GB of memory. The timing can be further improved using better graphic cards, e.g., the latest H100.\"}", "{\"title\": \"Adapting the CLIPMem metric & Experiments on the large-scale YFCC100M dataset\", \"comment\": \"We thank the reviewer for the detailed comments. We are glad that the reviewer recognizes our work as addressing \\u201ca gap in previous research\\u201d and appreciates that it \\u201cprovides actionable insights,\\u201d \\u201cchallenges established norms,\\u201d and \\u201ccan benefit real-world multi-modal model training practices.\\u201d Below we address all of the points and questions raised by the reviewer one by one.\\n\\n>**While tailored to CLIP, the metric and findings may need adaptation to apply effectively to other multi-modal models with different architectures. (...) Would you please comment on how the metric adapts to other multi-modal models besides CLIP?**\\n\\nWe tested our metric on the popular CLIP model, but many other multi-modal models follow the CLIP architecture, and our metric is immediately applicable to them as well. 
For example, multi-modal models with separate encoders and contrastive learning objectives can directly apply CLIPMem with minimal modifications (e.g., ALIGN [1], Florence [2], and LiT [3]), where memorization can be measured by evaluating the alignment scores between representations. For models with additional components other than contrastive alignment, CLIPMem can be applied after alignment before other operations like fusion (ALBEF [4]) or generative tasks (BLIP [5]). By doing so, CLIPMem can isolate and quantify memorization during alignment, being adaptable across different architectures.\\n\\n>**The experiments focus on datasets like COCO and CC3M, so it\\u2019s unclear how well these findings generalize to other large-scale or domain-specific datasets.**\\n\\nTo address the reviewer\\u2019s comment, we added additional experiments with the larger YFCC100M dataset. To simulate the large data regime, we trained the model for one epoch, and then evaluate memorization. To ensure comparability to our initial experiments where we trained the model on 70,000 data points for 100 epochs (i.e., 7M samples seen during training), we trained with 7M samples from YFCC100M. \\n\\nTo fit our setup, we trained model f with 6950000 shared +50000 candidate samples and model g with 6950000 shared + 50000 independent samples. We observe that there is still significant memorization when training for one epoch. \\n\\n| **Model, Epochs** | **CLIPMem** | **Lin. Prob. Acc. (ImageNet)** |\\n|:------------------------:|:-----------:|:------------------------------:|\\n| YFCC 7M, 1 Epoch | 0.425 | 64.83% \\u00b1 1.04% |\\n| Paper (Coco), 100 Epochs | 0.438 | 63.11% \\u00b1 0.91% |\\n\\nWe also assessed the memorized samples qualitatively, as shown in the new Figure 9 in the updated paper. 
They highlight that the insights remain the same even when training only for one epoch: atypical and miscaptioned samples are memorized.\\n\\n**References:**\\n\\n[1] \\u201cScaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision\\u201d Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ICML 2021. \\n\\n[2] \\u201cFlorence: A New Foundation Model for Computer Vision\\u201d Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang. arXiv preprint arXiv:2111.11432 (2021).\\n\\n[3] \\u201cLiT: Zero-Shot Transfer with Locked-image text Tuning\\u201d Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer. CVPR 2022.\\n\\n[4] \\u201cAlign before Fuse: Vision and Language Representation Learning with Momentum Distillation\\u201d Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi. NeurIPS 2021.\\n\\n[5] \\u201cBLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation\\u201d Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. ICML 2022.\"}", "{\"title\": \"after rebuttal\", \"comment\": \"Thanks for the response.\\n\\nI carefully checked the rebuttal, submission, and other reviewers' comments. \\n\\nI understand that the authors made the best effort to address concerns about the weaknesses in experiments. \\n\\nI still have the following concern. In my understanding, memorization should be related to diverse factors, e.g., model size and noise of data. In this sense, we cannot argue that some techniques, e.g., data augmentation, always help. 
It would be very helpful if we could understand in which settings a specific technique is useful for mitigating memorization. However, this work seems to focus on a rather specific setting, e.g., model size. I cannot be sure of the generalization of their observation. \\n\\nI would like to keep my rating.\"}", "{\"title\": \"Have concerns been addressed?\", \"comment\": \"We thank the Reviewer for their thoughtful feedback, which has greatly contributed to improving the quality of our paper. We conducted additional experiments and analyses to address all the Reviewer\\u2019s concerns and suggestions:\\n\\n1. **CLIPMem is adaptable across different architectures:** Multi-modal models with separate encoders and contrastive learning objectives can directly apply CLIPMem with minimal modifications (e.g., ALIGN [1], Florence [2], and LiT [3]), where memorization can be measured by evaluating the alignment scores between representations. For models with additional components other than contrastive alignment, CLIPMem can be applied after alignment before other operations like fusion (ALBEF [4]) or generative tasks (BLIP [5]).\\n2. **Our findings generalize to other datasets:** We train models with infinite data regimes on a 7M subset of YFCC100M, a widely used large-scale multi-modal dataset. Results in Table 6 and Figure 9 show that CLIPMem successfully identifies the most-memorized samples to be miscaptioned samples, confirming that CLIPMem is effective in infinite data regimes and applicable to large-scale, multi-modal datasets beyond COCO and CC3M.
(3) The generation of images is a bit more expensive (on average 1.06 seconds per image when using Stable Diffusion 1.5), however, the generation is done once and then amortizes with more epochs.\\n\\nWe hope that our responses address the concerns raised. Therefore, we kindly ask the Reviewer to reconsider their rating in light of these additional insights.\\n\\n**References:**\\n\\n[1] *\\u201cScaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision\\u201d*. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ICML 2021. \\n\\n[2] *\\u201cFlorence: A New Foundation Model for Computer Vision\\u201d*. Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang. arXiv preprint arXiv:2111.11432 (2021).\\n\\n[3] *\\u201cLiT: Zero-Shot Transfer with Locked-image text Tuning\\u201d*. Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer. CVPR 2022.\\n\\n[4] *\\u201cAlign before Fuse: Vision and Language Representation Learning with Momentum Distillation\\u201d*. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi. NeurIPS 2021.\\n\\n[5] *\\u201cBLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation\\u201d*. Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. ICML 2022.\"}" ] }
5UQ0YmC2js
AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models
[ "Yaopei Zeng", "Yuanpu Cao", "Bochuan Cao", "Yurui Chang", "Jinghui Chen", "Lu Lin" ]
Recent advances in diffusion models have significantly enhanced the quality of image synthesis, yet they have also introduced serious safety concerns, particularly the generation of Not Safe for Work (NSFW) content. Previous research has demonstrated that adversarial prompts can be used to generate NSFW content. However, such adversarial text prompts are often easily detectable by text-based filters, limiting their efficacy. In this paper, we expose a previously overlooked vulnerability: adversarial image attacks targeting Image-to-Image (I2I) diffusion models. We propose AdvI2I, a novel framework that manipulates input images to induce diffusion models to generate NSFW content. By optimizing a generator to craft adversarial images, AdvI2I circumvents existing defense mechanisms, such as Safe Latent Diffusion (SLD), without altering the text prompts. Furthermore, we introduce AdvI2I-Adaptive, an enhanced version that adapts to potential countermeasures and minimizes the resemblance between adversarial images and NSFW concept embeddings, making the attack more resilient against defenses. Through extensive experiments, we demonstrate that both AdvI2I and AdvI2I-Adaptive can effectively bypass current safeguards, highlighting the urgent need for stronger security measures to address the misuse of I2I diffusion models.
[ "Diffusion Model", "Adversarial Attack" ]
Reject
https://openreview.net/pdf?id=5UQ0YmC2js
https://openreview.net/forum?id=5UQ0YmC2js
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wP8664umCe", "w61K2zCxtB", "uKlXiiqjdE", "sJGYofTfE4", "r5oAiApCfT", "pddESGTveE", "p090JjwoH7", "lrNBHrXhnl", "kWbeDTBxGi", "jNYzJHnMDF", "fy7iJtDs1R", "arYtcC5mb0", "ZWwHVWtVAN", "Xnrnm0FMET", "WUAK9xaUOV", "VrOBXJDixt", "TGhe5Q4ITs", "SmOlkHBnz3", "SOond4F8qR", "QsH8tmoUq4", "PLk1KCGtEG", "IcSbY6Ikxx", "EA8orjEk62", "DwZ94iwhme", "Dagz11B8dD", "Cr3qzpM8AC", "8ro1VVUrtG", "4otjDIOuB3", "2RzYfneNq8" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732233200226, 1729320413576, 1732232537799, 1732232583289, 1732552457584, 1732233129606, 1734664373020, 1732641026000, 1732232798246, 1732553647093, 1732293459018, 1732236789259, 1733163387555, 1732233232320, 1732292556368, 1732580090616, 1737523880059, 1732232859398, 1732695588027, 1732552275712, 1732233011992, 1732862215183, 1730520201588, 1732886413435, 1732551971457, 1732294882764, 1730644352996, 1730498755215, 1732552160805 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_Cqd3" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Area_Chair_YknB" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7988/Reviewer_3wqb" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_Cqd3" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_Cqd3" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_3wqb" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_ZfYn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_f4nA" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_ZfYn" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_Cqd3" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_f4nA" ], [ "ICLR.cc/2025/Conference/Submission7988/Reviewer_3wqb" ], [ "ICLR.cc/2025/Conference/Submission7988/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reponse to Reviewer Cqd3 -- Part 1\", \"comment\": \"Thank you for your valuable comments and suggestions. We have carefully addressed each of your questions and concerns below.\\n\\n>**Q1:** How our method compares to or improves upon existing NSFW image generation techniques like deepnude or deepfake.\\n\\n**A1:** While Deepnude and Deepfake could be used by malicious users to generate harmful/deceptive images, our purpose here is not to produce high-quality NSFW images but to highlight a critical security vulnerability existing in general I2I models: **even benign I2I diffusion models and users could inadvertently result in NSFW contents when the input image is from an untrusted third party**. 
This is especially concerning as I2I diffusion models become widely used due to the greater flexibility in editing: for instance, (benign) users can freely prompt diffusion models to edit any image, and once the image is altered by our attack to encode any NSFW concept (e.g., nudity, violence etc), the generation becomes harmful.\\n\\n>**Q2:** Evaluate the quality of generated images with reasonable metrics.\\n\\n**A2:** We compared the average TOPIQ of AdvI2I with FaceSwap [1] on 200 test samples. FaceSwap is a deepfake method that focuses on swapping faces between images while preserving the original facial features and expressions in the new context. Using FaceSwap, we swapped the faces of images onto the 200 test samples and calculated the TOPIQ scores. The results, as shown below, indicate no significant differences in quality.\\n\\n| **Method** | **TOPIQ** |\\n| ---------- | --------- |\\n| FaceSwap | 0.82 |\\n| **AdvI2I (ours)** | 0.78 |\\n\\n>**Q3:** Compare IQAs with other baselines for clean and attacked images.\\n\\n**A3:** We evaluated the LPIPS loss between clean and attacked images for Attack VAE, MMA [2], and AdvI2I on 200 test samples. As shown in the table below, the differences are minimal. It is worth noting that MMA\\u2019s adversarial images are designed specifically to bypass safety checkers, relying on adversarial prompts to generate NSFW content.\\n\\n| **Method** | **LPIPS Loss** |\\n| ------------- | -------------- |\\n| Attack VAE | 0.31 |\\n| MMA | 0.32 |\\n| **AdvI2I (ours)** | 0.31 |\"}", "{\"summary\": \"This paper proposes a adversarial image attack framework that injects adversarial information into images rather than text prompts, inducing Image2Image Diffusion to generate NSFW content, therefore it has some novelty. However, the motivation is not clear, and the evaluation is not enough. 
Overall, my main concern is the insufficient evaluation; if the author can address my concerns, I would be happy to change the rating.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper shows that the prompt-based attack can be simply defended by some filters\\n2. This paper proposes a new framework which is solely based on perturbing images to induce NSFW content. It can be regarded as the knowledge distillation of adversarial text prompts that could induce NSFW content into the image embedding. The proposed method can bypass these methods as the adversarial information is hidden in the images, i.e., the text prompt will not contain any malign information.\", \"weaknesses\": \"1. The motivation is not clear. See Q1\\n2. Lack of evaluations, e.g., IQAs, face verification rate. See Q2, 3, 4, 5\", \"questions\": \"1. The motivation is not clear. There are a lot of existing methods that could easily generate NSFW images such as deepnude, or one could just use deepfake to swap the face of a porn image with the clean image, so why would people trouble themselves to use this method? Do the NSFW images generated by your method have higher image quality? If so, I would expect the authors to clarify how their method compares to or improves upon existing NSFW image generation techniques like deepnude or deepfake. See Q2 for more details.\\n2. Are the images generated with the proposed method of significantly higher quality compared with previous methods? The quality of generated NSFW images should be evaluated by some IQAs:\\n If my understanding is correct, the objective is to generate nude images which have the same identity as the clean image (i.e., the target is a category of images rather than a certain image), FR IQAs may not work well. I would suggest using some NR IQAs (FID, TOPIQ, etc). It is worth noting that, if the data volume is lower than 10k, the FID may not be meaningful.\\n3. 
The author did not compare IQAs with other baselines for clean and attacked images, i.e., we do not know if the perturbation is imperceptible enough. (I would suggest compare the PSNR, LPIPS, SSIM, etc between clean image and attacked image). I would expect a table comparing these metrics between the clean and attacked images for their method and baseline methods.\\n4. Will the generated NSFW images have same identity with clean images (especially for nude images)? I suggest the author to use several face verification methods or metrics to quantify identity preservation in their generated images.\\n5. The performance of AdvI2I against SC is poor, although the author admits this, and implement an adaptive version, I need to see more transferability evaluations, i.e., the author should demonstrate the ASR for SC_2 given \\\"adversarial samples generated by adaptive AdvI2I with SC_1\\\".\\n I suggest the author to provide a table showing ASR results for different combinations of safety checkers (SC_1, SC_2, etc.) used during training and testing of their adaptive AdvI2I method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reponse to Reviewer f4nA -- Part 1\", \"comment\": \"Thank you for your insightful comments. We have thoroughly considered and responded to each of your questions and concerns below.\\n\\n>**W1:** Include more advanced versions of Stable Diffusion or other models.\\n\\n**A1:** We evaluated AdvI2I on the SDv2.1-Inpainting model and achieved an ASR of 78.5% under the nudity concept, demonstrating that AdvI2I can generalize to state-of-the-art diffusion models. We have included the results in the revised version, and will further evaluate the adaptive version (AdvI2I-Adaptive) given more time. Thank you for your suggestion. 
\\n\\n| Model | Concept | Method | w/o Defense | SLD | SD-NP | GN | SC |\\n| ----------------- | ------- | --------------- | ----------- | ----- | ----- | ----- | ----- |\\n| SDv1.5-Inpainting | Nudity | Attack VAE | 41.5% | 36.5% | 41.5% | 39.0% | 7.0% |\\n| | | MMA | 42.0% | 37.0% | 39.5% | 26.0% | **39.5**% |\\n| | | **AdvI2I (ours)** | **82.5**% | **78.5**% | **80.0**% | **70.0**% | 10.5% |\\n| SDv2.1-Inpainting | Nudity | Attack VAE | 35.5% | 32.5% | 35.0% | 32.5% | 7.0% |\\n| | | MMA | 38.0% | 32.5% | 36.5% | 23.5% | **37.0**% |\\n| | | **AdvI2I (ours)** | **78.5**% | **73.0**% | **75.0**% | **64.5**% | 10.5% |\\n\\n>**W2:** Examine the effect of varying benign prompts with explicitly requesting safe content.\\n\\n**A2:** We tested three different suffixes appended to our benign prompts (e.g., \\\"Make the woman in the room\\\" \\u2192 \\\"Make the woman in the room and avoid any nude content.\\\") to explore this. **Even the most effective suffix achieves only a slight reduction in ASR (6.5%)**. As explained in the paper subsection \\\"NSFW Concept Vector Extraction\\\", AdvI2I incorporates the NSFW concept with a strength factor $\\\\alpha$ (larger than 1) into the embedding of benign prompts, making it challenging to defend against this attack by merely modifying the prompts. In some cases, inappropriate suffixes even increases the ASR. We will try more safe prompt and include the results in the revised version. Thank you for your suggestion.\\n\\n| Suffix | ASR (%) |\\n| ------------------------------------------------------ | ---- |\\n| No suffix | 82.5 |\\n| \\\" and avoid any nude content.\\\" | 76.0 |\\n| \\\" with suitable dress.\\\" | 75.0 |\"}", "{\"title\": \"Reponse to Reviewer f4nA -- Part 2\", \"comment\": \">**W3:** Reduced transferability of AdvI2I from SDv1.5 to SDv3.0. Effectiveness may be architecture-specific.\\n\\n**A3:** The AdvI2I attack highlights the security vulnerabilities in various I2I models. 
To demonstrate that AdvI2I is not architecture-specific, we tested its transferability from InstructPix2Pix to SD-Turbo Image-to-Image and from SDv1.5-Inpainting to FLUX.1-dev ControlNet Inpainting-Alpha. The ASRs are shown in the table below. We have also included the results in the revised version. Thank you for your suggestion.\\n\\n\\n| Source Model | Target **Model** | **ASR (%)** |\\n| ----------------- | -------------------------------------- | ----------- |\\n| InstructPix2Pix | InstructPix2Pix | 81.5 |\\n| | SD-Turbo | 62.5 |\\n| SDv1.5-Inpainting | SDv1.5-Inpainting | 82.5 |\\n| | FLUX.1-dev ControlNet Inpainting-Alpha | 74.0 |\\n\\nSDv3.0, however, performs remarkably well in resisting AdvI2I attacks. Interestingly, when we directly used prompts to request nudity content on the open-source SD2.1 and SD3.0 models, SD2.1 easily generated such content, while SD3.0 did not. We conjecture this is due to differences in training data rather than architectural changes: SDv3.0 is trained on datasets filtered to exclude explicit content, as noted in [1]. This suggests that our attack can expose the risk when the I2I model has the inherent ability to generate NSFW images, but could fail otherwise. Therefore, a potential future direction to enhance model safety is to totally nullify the NSFW concept from the model by thoroughly cleaning the training data.\\n\\n>**W4:** More potential defense strategies like DiffPure [2].\\n\\n**A4:** We evaluated DiffPure as a defense mechanism. While it reduces the ASR for the SDv1.5-Inpainting model under the nudity concept from 82.5% to 72.5%, the decline is not significant. This suggests that AdvI2I is robust against simple defenses targeting adversarial images. 
We did not include MMA as a baseline here because MMA uses adversarial prompts to induce NSFW images in diffusion models, whereas DiffPure is designed to defend against adversarial images, thus the ASR of MMA will not change after applying DiffPure defense.\\n\\n| **Method** | w/o Defense | **DiffPure** |\\n| ---------- | ----------- | ------------ |\\n| Attack VAE | 41.5 | 33.5 |\\n| **AdvI2I (ours)** | 82.5 | 72.5 |\\n\\n**References**:\\n\\n[1] Scaling Rectified Flow Transformers for High-Resolution Image Synthesis\\n\\n[2] Diffusion models for adversarial purification.\"}", "{\"comment\": \"Dear Reviewer Cqd3,\\n\\nThank you for your thoughtful suggestions and engagement. We truly appreciate your recognition of our responses and your willingness to adjust your score.\\n\\nWe will incorporate a thorough discussion of the suggested quality aspects into the paper to ensure clarity and completeness. If you have any additional feedback, we would be happy to address it. Otherwise, we kindly hope you might consider increasing the score to a positive range.\\n\\nThank you once again for your valuable time and insights!\\n\\nBest regards,\", \"authors_of_paper_advi2i\": \"Adversarial Image Attack on Image-to-Image Diffusion Models\"}", "{\"title\": \"Reponse to Reviewer 3wqb -- Part 2\", \"comment\": \">**Q1:** Can AdvI2I method apply for good use, such as helping mitigate inappropriate image editing?\\n\\n**A1:** Thank you for this insightful suggestion. We experiment with constructing a good \\\"wearing clothes\\\" concept following our AdvI2I pipeline. **Interestingly, this approach indeed reduces the ASR for the exposed prompt attack, \\\"Make the woman naked,\\\" on SDv1.5-Inpainting from 96.5% to 24.5%**. We believe our proposed method is a general framework to encode a certain concept into the input image, regardless of whether the concept is good or bad. We will include this discussion and more case studies in the paper. 
\\n\\n>**Q2:** How does the performance of AdvI2I vary based on the neutral prompt?\\n\\n**A2:** The experiments presented in the main text evaluate AdvI2I using various neutral prompts, such as \\\"Make the woman in the room\\\" and \\\"Let the woman on the set.\\\" Additionally, we tested three different suffixes appended to benign prompts (e.g., \\\"Make the woman in the room\\\" \\u2192 \\\"Make the woman in the room with suitable dress.\\\") to further explore this. **Even the most effective suffix reduces the ASR by only 6.5%.** As explained in the subsection \\\"NSFW Concept Vector Extraction,\\\" AdvI2I incorporates the NSFW concept with a strength factor \\u03b1>1 into the embedding of benign prompts, making it difficult to defend against such attacks by merely modifying prompts. In some cases, inappropriate suffixes even increase the ASR. We will try more safe prompts and include the results in the revised version. Thank you for your suggestion.\\n\\n| Suffix | ASR (%) |\\n| ------------------------------------------------------ | ---- |\\n| No suffix | 82.5 |\\n| \\\" and avoid any nude content.\\\" | 76.0 |\\n| \\\" with suitable dress.\\\" | 75.0 |\\n\\n\\n>**Q3:** What hardware is used, and how long does it take to train AdvI2I?\\n\\n**A3:** We use an 80GB A100 GPU, and training AdvI2I on SDv1.5-Inpainting takes approximately 43 hours with 1800 training samples, 150 epochs, batch size 4, and image size 512x512.\\n\\n**References**:\\n\\n[1] Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?\\n\\n[2] MMA-Diffusion: MultiModal Attack on Diffusion Models\"}", "{\"metareview\": \"This paper studies the vulnerabilities of image-to-image (I2I) diffusion models under adversarial image attacks. A new method AdvI2I is proposed to generate adversarial perturbations on input images to induce NSFW content. 
The proposed method is effective in misleading Stable Diffusion models and bypassing defense mechanisms.\\n\\nThe paper focuses on an important and timely topic on the vulnerability of I2I diffusion models. The proposed method is novel based on the idea of concept vector extraction. The paper is well-written.\\n\\nThe paper received borderline ratings with two borderline accept recommendations and two borderline reject recommendations. After author responses and author-reviewer discussions, most of the concerns have been addressed and two concerns remain.\\n- As pointed out by Reviewer Cqd3, the motivation is unclear. The paper mainly focuses on identifying the vulnerability of I2I diffusion models, but the adversary may not use such a technique to generate NSFW content. The evaluation of image quality also has some mistakes.\\n- As pointed out by Reviewers f4nA and Cqd3, the effectiveness for advanced models (e.g., SDv3.0) is limited.\\n\\nBased on the above limitations of this work, AC would recommend rejection and hope the paper can be further improved for the next venue.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer f4nA initially raised concerns about limited evaluation, efficacy under diverse conditions, limited generalizability to SDv3.0, and lack of advanced defense mechanisms. The authors have addressed most of the concerns, but the effectiveness on SDv3.0 was not fully addressed.\\n\\nReviewer ZfYn initially raised concerns about the motivation, method, and adaptive attacks. The authors have successfully addressed all concerns.\\n\\nReviewer 3wqb raised some concerns about experiments and writing. The authors have addressed them.\\n\\nReviewer Cqd3 initially raised concerns about unclear motivation, image quality, comparison with baselines, poor performance against SC. 
After author responses and multiple rounds of author-reviewer discussions, the reviewer still has concerns about the image quality and effectiveness under advanced I2I models.\\n\\nOverall, this paper is a borderline paper, with strengths of significance, novelty, and good presentation, and weaknesses of unclear motivation and ineffectiveness under advanced I2I models. After AC-reviewer discussions, most reviewers agreed that this is a borderline paper and they would not strongly recommend this paper for acceptance while they are also not opposed to it. Therefore, AC would recommend rejection given the unaddressed limitations.\"}", "{\"comment\": \"**Response to Additional Comment by Reviewer 3wqb**\\n\\n**W1, W2, W3, Q1, Q3:** Thanks for confirming we have addressed these concerns. We have incorporated the important discussions you suggested into the paper revision.\\n\\n**Q2:** Thank you for pointing out SD's known limitations with handling negations in prompt, as well as your suggestion to explore a \\\"wearing clothes\\\" concept for defending against AdvI2I attacks. We have conducted additional experiments to evaluate the effectiveness of embedding a benign concept, \\\"wearing clothes,\\\" for mitigating ASRs of under the nudity-related AdvI2I attack on the SDv1.5-Inpainting model. The embedding strength was adjusted using the strength coefficient $\\\\alpha$ mentioned in subsection 3.2 of our paper (AdvI2I uses $\\\\alpha = 4.0$ for the nudity concept).\\n\\nThe experimental results, presented in the table below, reveal that while the benign concept shows a certain ability to reduce ASR with larger $\\\\alpha$, its practical usage is limited by the noticeable damage to utility:\\n\\n1. **ASR Reduction**: As $\\\\alpha$ increases, the ASR under the AdvI2I attack gradually decreases, demonstrating the efficacy of the \\\"wearing clothes\\\" concept in mitigating adversarial attacks to some extent.\\n2. 
**Impact on Image Utility**: To evaluate the impact of the \\\"wearing clothes\\\" concept on utility, we tested the editing performance on 200 randomly selected images from ImageNet-1k [1], which primarily contain natural, non-human-centered scenes. Using the prompt \\\"change its color\\\", we applied the \\\"wearing clothes\\\" concept with varying $\\\\alpha$ values and evaluated the quality of the edited images using several metrics, including LPIPS, SSIM, PSNR, NIQE, and PIQE. For reference-based metrics (LPIPS, SSIM, PSNR), the generated images at $\\\\alpha = 0.0$ were used as the reference. Results indicate that such benign concept comes at the cost of significantly degraded utility for non-adversarial edits, particularly noticeable in metrics like SSIM and PIQE.\\n\\nThese findings suggest that while using benign concepts can provide some level of defense, its practical application is heavily constrained due to the visible damage to the quality of benign image editing, especially at higher $\\\\alpha$.\\n\\nWe hope these additional experiments and clarifications address your concerns. If there are no further questions, we kindly hope you might consider adjusting your score in light of the updates and revisions we\\u2019ve made. Thank you again for your valuable feedback!\\n\\n| **$\\\\alpha$** | ASR Under AdvI2I Attack (%)\\u2191 | **LPIPS\\u2193** | **SSIM\\u2191** | **PSNR\\u2191** | **TOPIQ-flive\\u2191** | **NIQE\\u2193** | **PIQE\\u2193** |\\n| ------------ | ---------------------------- | ---------- | --------- | --------- | ---------------- | --------- | --------- |\\n| 0.0 | 82.5 | - | - | - | 0.78 | 3.74 | 36.15 |\\n| 1.0 | 74.5 | 0.35 | 0.89 | 19.12 | 0.75 | 4.12 | 40.56 |\\n| 2.0 | 61.0 | 0.41 | 0.71 | 18.30 | 0.72 | 4.95 | 49.78 |\\n| 3.0 | 42.5 | 0.56 | 0.55 | 16.45 | 0.65 | 5.82 | 65.12 |\\n\\n**References**:\\n\\n[1] Russakovsky, Olga, et al. 
\\\"Imagenet large scale visual recognition challenge.\\\" *International journal of computer vision* 115 (2015): 211-252.\"}", "{\"title\": \"Reponse to Reviewer ZfYn -- Part 1\", \"comment\": \"Thank you for your valuable comments and suggestions. We have carefully addressed each of your questions and concerns below.\\n\\n>**W1:** I2I model is a sub-optimal choice to generate high quality NSFW images.\\n\\n**A1:** Sorry for the confusions. We would like to hightlight the main motivation of this work: instead of achieving high-quality NSFW images, the purpose of the proposed AdvI2I attack is to expose the risks of I2I diffusion models in generating NSFW images, even when the input prompts are benign. To the best of our knowledge, prior studies like Ring-A-Bell [1] and MMA [2] primarily focus on crafting malicious prompts to induce NSFW content, and as shown in Table 2 of the paper, simple text filters could defend prompt-based attacks. On the contrary, the vulnerabilities arising from input images are relatively underexplored.\\n\\n>**W2:** The proposed method is restricted to a small subset of I2I models, which are not the main part of diffusion models.\\n\\n**A2:** Our work focuses on testing whether I2I diffusion models are vulnerable to generating NSFW images through adversarial input images. The models evaluated in the our work, including SD inpainting models and InstructPix2Pix, cover commonly used I2I diffusion models. The results clearly show that these models are indeed susceptible to such risks.\\nTo demonstrate that AdvI2I is not architecture-specific, we tested its transferability from InstructPix2Pix to SD-Turbo Image-to-Image and from SDv1.5-Inpainting to FLUX.1-dev ControlNet Inpainting-Alpha, and achieved high ASRs, as shown in the table below. 
We have also included the results in the revised version.\\n\\n| Source Model | Target **Model** | **ASR (%)** |\\n| ----------------- | -------------------------------------- | ----------- |\\n| InstructPix2Pix | InstructPix2Pix | 81.5 |\\n| | SD-Turbo | 62.5 |\\n| SDv1.5-Inpainting | SDv1.5-Inpainting | 82.5 |\\n| | FLUX.1-dev ControlNet Inpainting-Alpha | 74.0 |\\n\\nWe also evaluated AdvI2I on the SDv2.1-Inpainting model and achieved an ASR of 78.5% under the nudity concept, demonstrating that AdvI2I can generalize to state-of-the-art diffusion models. We have included the results in the revised version, and will further evaluate the adaptive version (AdvI2I-Adaptive) given more time. Thank you for your suggestion. \\n\\n| Model | Concept | Method | w/o Defense | SLD | SD-NP | GN | SC |\\n| ----------------- | ------- | --------------- | ----------- | ----- | ----- | ----- | ----- |\\n| SDv1.5-Inpainting | Nudity | Attack VAE | 41.5% | 36.5% | 41.5% | 39.0% | 7.0% |\\n| | | MMA | 42.0% | 37.0% | 39.5% | 26.0% | **39.5**% |\\n| | | **AdvI2I (ours)** | **82.5**% | **78.5**% | **80.0**% | **70.0**% | 10.5% |\\n| SDv2.1-Inpainting | Nudity | Attack VAE | 35.5% | 32.5% | 35.0% | 32.5% | 7.0% |\\n| | | MMA | 38.0% | 32.5% | 36.5% | 23.5% | **37.0**% |\\n| | | **AdvI2I (ours)** | **78.5**% | **73.0**% | **75.0**% | **64.5**% | 10.5% |\"}", "{\"comment\": \"W1: While the paper highlights I2I adversarial attack risks, including runtime analysis would further strengthen the claims about the method's advantages. I'm pleased to see runtime speed now included in the author's response.\", \"q1\": \"It's nice to see that AdvI2I can be a general method for good use\", \"a2\": \"The prompt \\\"and avoid any nude content\\\" is unconvincing due to Stable Diffusion's weakness with negation. 
A concept embedding focusing on \\\"wearing clothes\\\" would be more effective than simply changing the suffix.\\n \\nW2, W3, A3: The author has addressed my concerns.\"}", "{\"comment\": \"Thank you for your prompt response. A comprehensive image quality assessment framework is indeed traditional in adversarial attacks and provides a solid foundation for quantitative analysis. The new experimental results have addressed most of my concerns, and I am therefore willing to increase my rating.\", \"one_minor_question_remains\": \"Were the new experimental results also conducted on the 200-image test set? I would appreciate clarification on this point to ensure consistency in the evaluation methodology.\"}", "{\"comment\": \"Thanks for the author's reply.\\n## Q1\\nRegarding Q1, while I understand you want to emphasize the discovery of a new vulnerability, my perspective focuses on its practical implications. The key question is whether this vulnerability would see widespread adoption. Since this vulnerability is specifically designed for NSFW content generation, its widespread use would likely indicate some advantages over existing NSFW generation methods --- particularly in terms of output quality. This potential quality improvement was my immediate thought about its advantage. Based on this reasoning, I requested NR IQA (No-Reference Image Quality Assessment) in Question 2 to validate this hypothesis.\\n\\n## Q2\\nFor Q2, the author didn't solve my concern; here are the reasons:\\n1. The baseline you chose is out of date (4 years ago). You used the LDM as the backbone, so it would be fairer if you used a diffusion-based face swap or deepfake. I believe there are many options; even some methods provide a Gradio web GUI for convenient use, so you should compare to some new method, especially LDM-based.\\n2. To compare the quality of generation, TOPIQ is not enough. 
I would ask the author to at least add FID as another metric.\\n\\n## Q3\\nSame as the Q2.2, one IQA is not enough. To my knowledge, a well-establish evaluation protocol is at least SSIM+LPIPS+PSNR\\n\\n## Q4 - Q5\\nMy concerns were solved.\"}", "{\"comment\": \"The authors have addressed all my questions, therefore I will increase my score to 6\"}", "{\"title\": \"Reponse to Reviewer Cqd3 -- Part 2\", \"comment\": \">**Q4:** Use face verification methods or metrics to quantify identity preservation in generated images.\\n\\n**A4:** Since the SDv1.5-Inpainting model allows masking to prevent modifications to the facial region, it ensures that the faces in the generated NSFW images remain almost identical to those in the clean images. To evaluate facial consistency, we used CosFace [3] to measure facial identity preservation. AdvI2I achieves a score of 93.7, indicating that the generated NSFW images retain the same identity as the clean images. \\n| **Method** | **CosFace** |\\n| ----------------- | ----------- |\\n| Attack VAE | 93.2 |\\n| MMA | 90.2 |\\n| **AdvI2I (ours)** | 93.7 |\\n\\n>**Q5:** ASR results for different safety checkers used during training and testing of AdvI2I-Adaptive.\\n\\n**A5:** Existing SD models use the same NSFW-detector [3] as the safety checker, which encodes images into embeddings using either ViT-L/14 or ViT-B/32 as the base model. When training the adversarial image generator for AdvI2I-Adaptive, we used a ViT-L/14-based NSFW-detector as the safety checker. We then evaluated the transferability of AdvI2I-Adaptive to the ViT-B/32-based NSFW-detector and observe that it still achieves a high ASR, as shown below. We have also included the results in the revised version. 
Thank you for your suggestion.\\n\\n| Method | **Source Safety Checker** | **Target Safety Checker** | **ASR (%)** |\\n| --------------- | ------------------------- | ------------------------- | ----------- |\\n| AdvI2I-Adaptive | ViT-L/14-based | ViT-L/14-based | 72.0 |\\n| | | ViT-B/32-based | 66.5 |\\n\\n**References**:\\n\\n[1] https://github.com/wuhuikai/FaceSwap\\n\\n[2] MMA-Diffusion: MultiModal Attack on Diffusion Models\\n\\n[3] CosFace: Large Margin Cosine Loss for Deep Face Recognition\\n\\n[4] https://github.com/LAION-AI/CLIP-based-NSFW-Detector\"}", "{\"title\": \"Response to Additional Comment by Reviewer Cqd3\", \"comment\": \"Thank you very much for your follow-up comments and further suggestions. Below, we try to address your concerns in detail and provide additional results for clarification.\\n\\n> **Q1 & Q2: Consider diffusion-based face swap or deepfake. Add FID as another metric.**\\n\\nWe included Face-Adapter [1], a diffusion-based face swap method using SDv1.5 as the base model, as a baseline for comparison. The image quality was evaluated using multiple metrics: **TOPIQ** (with three checkpoints trained on different datasets) [2], **NIQE**, **PIQE**, and **FID**. The results are as follows:\\n\\n| **Method** | **TOPIQ-flive\\u2191** | **TOPIQ-koniq\\u2191** | **TOPIQ-spaq\\u2191** | **NIQE\\u2193** | **PIQE\\u2193** | **FID\\u2193** |\\n| ----------------- | ---------------- | ---------------- | --------------- | --------- | --------- | -------- |\\n| Face-Adapter | 0.83 | 0.43 | 0.50 | 6.36 | 62.60 | 104.63 |\\n| **AdvI2I (ours)** | 0.78 | 0.58 | 0.67 | 3.76 | 38.72 | 85.60 |\\n\\nThe table shows that AdvI2I consistently performs competitively across various metrics. It achieves higher quality in TOPIQ-koniq and TOPIQ-spaq compared to Face-Adapter, while also showing significant improvements in NIQE, PIQE, and FID scores, which indicate better perceptual quality and closer alignment to real image distributions. 
These results demonstrate that AdvI2I effectively generates high-quality adversarial images while maintaining its primary objective of exposing vulnerabilities in I2I models. We have also included the results in the revised version. \\n\\n> **Q3: Compare IQAs with other baselines for clean and attacked images using SSIM, LPIPS, and PSNR.**\\n\\nIn addition to **LPIPS**, we incorporated **SSIM**, **PSNR**, **FSIM**, and **VIF** to provide a more comprehensive comparison of the quality of attacked images. The results are as follows:\\n\\n| **Method** | **LPIPS\\u2193** | **SSIM\\u2191** | **PSNR\\u2191** | **FSIM\\u2191** | **VIF\\u2191** | ASR (%)\\u2191 |\\n| ----------------- | ---------- | --------- | --------- | --------- | -------- | -------- |\\n| Attack VAE | 0.31 | 0.89 | 18.80 | 0.96 | 0.73 | 41.5 |\\n| MMA | 0.32 | 0.63 | 23.19 | 0.94 | 0.35 | 42.0 |\\n| **AdvI2I (ours)** | 0.31 | 0.88 | 18.79 | 0.96 | 0.72 | 82.5 |\\n\\nThe results highlight that AdvI2I performs on par with Attack VAE in terms of structural and perceptual similarity (SSIM and LPIPS) and visual feature retention (FSIM and VIF), while significantly outperforming MMA. Importantly, both AdvI2I and Attack VAE use generators to produce adversarial images, while MMA directly optimizes adversarial noise. Although MMA achieves a higher PSNR due to its direct noise optimization approach, it performs worse in metrics like VIF and SSIM. AdvI2I successfully balances adversarial effectiveness and attacked image quality across all metrics, reinforcing its stealthiness and robustness. We have also included the results in the revised version. \\n\\n----\\n\\nWe hope these results and explanations address your concerns. 
Please feel free to share additional thoughts or questions for further discussion.\\n\\n**References**:\\n\\n[1] Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control\\n\\n[2] TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment\"}", "{\"title\": \"thanks for reply\", \"comment\": \"Most of my questions are answered, I think this paper indeed does a good job in the right direction.\\n\\nI will keep my score and lean towards accept for now.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer ZfYn -- Part 2\", \"comment\": \">**W3:** Why not use ad-hoc noise generation for each image?\\n\\n**A3:** Thank you for this suggestion. We tested the ASR and time cost of using an ad-hoc generator on the SDv1.5-Inpainting model w/ and w/o sampling timestep t (\\\"w/o sampling\\\" refers to backpropagating through all inference timesteps). We evaluated the ASR on 200 test samples and calculated the average time cost of generating one adversarial image. The results highlight the significant advantage of our generator over the ad-hoc generator. While the ad-hoc generator achieves a comparable ASR (83.5% vs. 82.5%) when all timesteps are backpropagated, it incurs an exponentially higher time cost\\u2014382.434 seconds per adversarial image compared to just 0.008 seconds for AdvI2I. When the ad-hoc generator uses sampling to reduce the number of timesteps, it sacrifices ASR (35.0%) while still requiring 49.336 seconds per image. 
In contrast, AdvI2I achieves a near-optimal balance between high ASR and extremely low time cost, making it not only more efficient but also more practical for generating massive adversarial images.\\n\\n| **Method** | **ASR (%)** | **Average Time Cost (s)** |\\n| ----------------------------------------- | ----------- | ------------------------- |\\n| MMA | 42.0 | 415.984 |\\n| Ad-hoc generator (w/o sampling timesteps) | 83.5 | 382.434 |\\n| Ad-hoc generator (w/ sampling timesteps) | 35.0 | 49.336 |\\n| **AdvI2I (ours)** | 82.5 | 0.008 |\\n\\n>**W4:** White-box setting of the NSFW-detector may not be practical.\\n\\n**A4:** Existing SD models use the same NSFW-detector [3] as the safety checker, which encodes images into embeddings using either ViT-L/14 or ViT-B/32 as the base model. When training the adversarial image generator for AdvI2I-Adaptive, we used a ViT-L/14-based NSFW-detector as the safety checker. We then evaluated the transferability of AdvI2I-Adaptive to the ViT-B/32-based NSFW-detector and observed that it still achieves a high ASR, as shown below. We have also included the results in the revised version. Thank you for your suggestion.\\n\\n| Method | **Source Safety Checker** | **Target Safety Checker** | **ASR (%)** |\\n| --------------- | ------------------------- | ------------------------- | ----------- |\\n| AdvI2I-Adaptive | ViT-L/14-based | ViT-L/14-based | 72.0 |\\n| | | ViT-B/32-based | 66.5 |\\n\\n>**W5:** Some other defenses e.g. add noise / use purifier.\\n\\n**A5:** We evaluated DiffPure [4] as a defense mechanism. While it reduces the ASR for the SDv1.5-Inpainting model under the nudity concept from 82.5% to 72.5%, the decline is not significant. This suggests that AdvI2I is robust against simple defenses targeting adversarial images. 
We did not include MMA as a baseline here because MMA uses adversarial prompts to induce NSFW images in diffusion models, whereas DiffPure is designed to defend against adversarial images, thus the ASR of MMA will not change after applying DiffPure defense.\\n\\n| **Method** | w/o Defense | **DiffPure** |\\n| ---------- | ----------- | ------------ |\\n| Attack VAE | 41.5 | 33.5 |\\n| **AdvI2I (ours)** | 82.5 | 72.5 |\\n\\n**References**:\\n\\n[1] Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?\\n\\n[2] MMA-Diffusion: MultiModal Attack on Diffusion Models\\n\\n[3] https://github.com/LAION-AI/CLIP-based-NSFW-Detector\\n\\n[4] Diffusion models for adversarial purification.\"}", "{\"comment\": \"Thank you for your response. It addresses most of my concerns. However, the results highlight AdvI2I's limitations against advanced models like SDv3.0, where the reduction of NSFW content during training effectively mitigates attacks. This suggests a simple yet impactful defense strategy that may already be employed in advanced models, rendering AdvI2I ineffective. Nonetheless, I think this paper raises safety concerns for the field. Therefore, I will maintain my score, but I do not oppose the paper's acceptance.\"}", "{\"comment\": \"Dear Reviewer 3wqb,\\n\\nWe greatly appreciate your initial comments and your positive feedback on our work. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. We appreciate any feedback you could give us. We also hope you could kindly update the rating if your questions have been addressed. We are also happy to answer any additional questions before the discussion ends.\\n\\nBest Regards,\", \"authors_of_paper_advi2i\": \"Adversarial Image Attack on Image-to-Image Diffusion Models\"}", "{\"title\": \"Response to Reviewer 3wqb -- Part 1\", \"comment\": \"Thank you for your valuable comments and suggestions. 
We have carefully addressed each of your questions and concerns below.\\n\\n>**W1:** highlight the contribution of $g_\\\\psi$ on the run time comparison with optimization approach.\\n\\n**A1:** Sorry for the confusion. We would like to highlight the main contribution of our work: instead of focusing on reducing the time cost of crafting adversarial images, the purpose of the proposed AdvI2I attack is to highlight the risks of I2I diffusion models generating NSFW images, even when the input prompts are benign. To the best of our knowledge, prior studies like Ring-A-Bell [1] and MMA [2] primarily explore crafting malicious prompts to induce NSFW content, and as shown in Table 2, simple text filters could defend such prompt-based attacks. On the contrary, the vulnerabilities arising from input images are relatively underexplored.\\n\\nAs a side advantage of our attack, it indeed offers a more efficient production of adversarial images through direct inference on $g_\\\\psi$. We compared the time cost of generating a single adversarial sample using AdvI2I and MMA. As shown in the table below, using a generator significantly accelerates the process of crafting adversarial samples. \\n\\nWe also tested the ASR and time cost of using an ad-hoc generator on the SDv1.5-Inpainting model w/ and w/o sampling timestep t (\\\"w/o sampling\\\" refers to backpropagating through all inference timesteps), which directly optimizes the adversarial image for each input sample. The results highlight the significant advantage of our generator over the ad-hoc noise generator. 
Compared with them, AdvI2I achieves a near-optimal balance between high ASR and extremely low time cost, making it not only more efficient but also more practical for generating massive adversarial images.\\n\\n| **Method** | **ASR (%)** | **Average Time Cost (s)** |\\n| ----------------------------------------- | ----------- | ------------------------- |\\n| MMA | 42.0 | 415.984 |\\n| Ad-hoc generator (w/o sampling timesteps) | 83.5 | 382.434 |\\n| Ad-hoc generator (w/ sampling timesteps) | 35.0 | 49.336 |\\n| **AdvI2I (ours)** | 82.5 | 0.008 |\\n\\n>**W2:** SDXL-Turbo Image-to-Image transferability test.\\n\\n**A2:** We evaluated the transferability of AdvI2I from InstructPix2Pix to SD-Turbo Image-to-Image and from SDv1.5-Inpainting to FLUX.1-dev ControlNet Inpainting-Alpha. The results are shown below:\\n\\n| Source Model | Target **Model** | **ASR (%)** |\\n| ----------------- | -------------------------------------- | ----------- |\\n| InstructPix2Pix | InstructPix2Pix | 81.5 |\\n| | SD-Turbo | 62.5 |\\n| SDv1.5-Inpainting | SDv1.5-Inpainting | 82.5 |\\n| | FLUX.1-dev ControlNet Inpainting-Alpha | 74.0 |\\n\\n>**W3:** Typos/Mistakes in writing.\\n\\n**A3:** Thank you for the detailed suggestions. We have carefully reviewed and corrected all typos and mistakes mentioned in the revised version. The revised parts are shown in blue color.\"}", "{\"comment\": \"Thank you for your feedback and for confirming that our response addressed most of your concerns. We appreciate your acknowledgement of the safety concerns raised by our work.\\n\\nSDv3.0\\u2019s resilience to AdvI2I stems from its unique training data, where explicit NSFW content was carefully filtered from the dataset. This demonstrates that addressing vulnerabilities during the training stage can be a robust defense mechanism. However, many other publicly available models, such as SDv2.1, SD-Turbo, and FLUX.1-dev, as well as potential future models, remain vulnerable to AdvI2I attacks. 
These risks could be exploited by malicious users to generate harmful content.\\nOur work aims to expose the inherent risks in I2I models that retain the ability to generate NSFW content, even when benign inputs are provided. By highlighting these vulnerabilities, we hope to encourage the community to prioritize safety measures and adopt proactive defenses, such as dataset filtering and embedding safeguards during training.\\n\\nWe will emphasize these points further in the revised paper to provide a balanced discussion of AdvI2I's scope and its implications for model safety. Thank you again for your thoughtful review and for recognizing the importance of this work.\"}", "{\"summary\": \"This paper proposes a framework that can bypass the safety checks of NSFW generations by modifying the image input. It first points out the problems of previous prompt attacks to bypass NSFW checks, then it proposes to add noise to images to bypass the safety checker, which will make the attack more stealthy. The authors design a pipeline with some key design choices, e.g. NSFW concept vector extraction and the use of a universal noise generator to avoid repeated generation. The paper is mostly well-written and easy to follow, the experiments are comprehensive and show that it is better than baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"It revisits current prompt attacks and shows critical insights that they can be easily defended.\", \"Based on the idea of concept vector extraction, the proposed novel pipeline is well-designed and effective, which shows strong performance compared with baseline methods. Also, the proposed pipeline is learning-based and can be used as a one-pass noise generator.\"], \"weaknesses\": [\"Clarification:\", \"This paper is working with I2I diffusion models, which have additional encoded visual inputs as condition. 
The targeted I2I model is (1) different from the T2I diffusion models used in previous works, and (2) not as good as popular T2I models e.g. SD-XL, FLUX. This poses challenges to the motivation of this paper, because the I2I model used in this paper is a sub-optimal choice if the malicious users want to generate high quality NSFW images.\", \"methods\": [\"The proposed method is restricted to a small subset of I2I models, which are not the main part of diffusion models.\", \"The choice of noise generator may not be optimal; ad-hoc noise generation for each image is also not that time-consuming, e.g. sampling timestep t and optimizing for 100 steps takes e.g. 1 min. It may show much stronger performance.\"], \"adaptive_attack\": [\"The adaptive-attack assumes the white-box setting of the NSFW-detector, which may not be practical.\", \"Some other defenses, e.g. adding noise / using a purifier to remove the noise, deserve more study.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I apologize for my delayed response. I spent last week in the hospital.\\n\\nAs mentioned in my initial review, FID scores require around 10,000 images for statistical reliability, with a minimum threshold of 5,000 images. The authors' use of only 200 images raises significant concerns about the validity of their results. This was the reason behind my query about the sample size.\\n\\nAdditionally, I share **Reviewer f4nA**'s concerns regarding the method's limited effectiveness on more advanced models like SD3.\\n\\nGiven these fundamental issues, I cannot increase the score further.\"}", "{\"comment\": \"Dear Reviewer f4nA,\\n\\nWe greatly appreciate your initial comments and your positive feedback on our work. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. 
We appreciate any feedback you could give us. We also hope you could kindly update the rating if your questions have been addressed. We are also happy to answer any additional questions before the discussion ends.\\n\\nBest Regards,\", \"authors_of_paper_advi2i\": \"Adversarial Image Attack on Image-to-Image Diffusion Models\"}", "{\"title\": \"Response to Additional Comment by Reviewer Cqd3\", \"comment\": \"Thank you for your positive feedback and for considering an increase in your rating. We appreciate your thoughtful comments and are glad that the new experimental results have addressed most of your concerns.\\n\\nTo clarify your question, yes, the new experimental results were indeed conducted on the same 200-image test set. We ensured consistency in the evaluation methodology across all experiments to provide a reliable and fair comparison.\"}", "{\"summary\": \"This paper introduces AdvI2I, a framework that performs adversarial image attacks on image-to-image (I2I) diffusion models to generate NSFW content without modifying text prompts. It further proposes AdvI2I-Adaptive, which incorporates techniques to minimize resemblance to NSFW embeddings, enhancing robustness against conventional defenses. Extensive experiments demonstrate AdvI2I\\u2019s ability to bypass common safeguards in I2I models, signaling potential safety concerns and underscoring the need for improved defense mechanisms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper expands the field of adversarial attacks in I2I diffusion models by targeting image conditioning rather than the commonly used text prompts.\\n\\n2. The structure and explanations are well-organized, with a clear presentation.\", \"weaknesses\": \"1. The paper\\u2019s evaluation is limited to SDv1.5-Inpainting and InstructPix2Pix, both based on the SDv1.5 architecture. 
Expanding the analysis to include more advanced versions of Stable Diffusion or other models (not just in the transferability section) would enhance the assessment of AdvI2I\\u2019s generalization to state-of-the-art diffusion models.\\n\\n2. The paper does not examine how varying benign prompts, including explicitly defensive ones that request safe content, might affect the success of the adversarial attack. Investigating whether different benign prompts, especially those aimed at reinforcing safe content, influence the attack's efficacy would offer a more comprehensive understanding of its generalization and robustness across diverse input conditions.\\n\\n3. The results suggest reduced transferability of AdvI2I from SDv1.5 to SDv3.0, indicating that its effectiveness may be architecture-specific, potentially limiting the framework\\u2019s generalizability.\\n\\n4. Since AdvI2I relies on adversarial noise as a primary mechanism, it would be beneficial to explore potential defense strategies beyond the current scope, such as DiffPure [1], to counteract adversarial image attacks more effectively.\\n\\n[1] Diffusion models for adversarial purification.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores bypassing safety checkers of image editing models to generate NSFW content, using adversarial images.\\n\\nThe authors first show that previous methods using adversarial prompts are easily detectable, and then propose AdvI2I, which trains a network $g_{\\\\psi}$ to perturb images.\\n\\nWhen this perturbed image is used with a neutral prompt as input for an img2img or inpainting model, it can influence the generation process, to generate some specific NSFW concept (nudity/violence).\\n\\nTo do so, AdvI2I slightly modifies (perturbs) the image, so that its latent after denoising with a neutral 
prompt is close to the original image's latent denoised with an NSFW prompt.\\n\\nThe authors also propose AdvI2I-Adaptive, which incorporates a safety-checker loss and added Gaussian noise augmentation when training $g_{\\\\psi}$. It helps improve robustness when Gaussian noise is added to the perturbed image, and AdvI2I-Adaptive also improves the bypass success rate over the posthoc safety checker.\\n\\nExperiments were conducted on SDv1.5-Inpainting and InstructPix2Pix, with high success rates against Safe Latent Diffusion, Negative Prompt, added Gaussian Noise and the Safety checker. AdvI2I also shows transferability from SDv1.5 to SDv2.0 and v2.1 but is not effective when tested with SDv3.0.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"While bypassing content filtering using adversarial images was proposed in MMA Diffusion, the method the author uses to quickly craft an adversarial image without optimization is a new contribution. The objective for training $g_{\\\\psi}$ is also novel, and selecting components like which UNet or VAE to train, or the diffusion step to optimize, requires trial and error efforts.\\n\\nBoth AdvI2I and AdvI2I-Adaptive show good results, and the success rate against the safety checker is strong. I\\u2019m also pleased with the included transferability test with SD2x and 3.\", \"weaknesses\": [\"Experiments: Below are some experiments that the author can address to have a comprehensive view about AdvI2I\", \"Insufficient highlighting of the contribution of $g_{\\\\psi}$; a run-time comparison with the optimization approach could be added\", \"SDXL-Turbo Image-to-Image transferability test, as it also uses the same VAE.\", \"Transferability tests with recent models, not just SD3.0, such as Pixart Alpha, Flux\"], \"writing\": [\"The writing needs improvement. Please address these points:\", \"L070 confusing since there is no encoder\", \"L11 in algorithm 1 - $C_i$ and $M$ were not mentioned in Require. 
Also, the loss is the cosine similarity between a vector $C_i$ and the output of $\\\\mathcal{D}$, which I assume is an image; does it need an image encoder to turn it into a vector there, and if so, what encoder?\", \"SLD Configuration - lacks details on the configuration that was used, is it Max/Strong/Medium?\", \"Clarify: are the prompts generated by gpt4-o paired prompts or a single prompt?\", \"Typo L607 - InstructionPix2Pix - InstructPix2Pix\"], \"questions\": [\"**important** Can the AdvI2I method be applied for good use, like changing the target concept to e.g clothes (instead of NSFW/violence), so that $g_{\\\\psi}$ now becomes a protection to help mitigate inappropriate image editing? Its application would be more like photoguard but faster, and restricted only to inappropriate editing\", \"How does the performance of AdvI2I vary based on the neutral prompt? Specifically, given a perturbed image with a prompt that specifically mentions e.g. clothes, like \\\"A photo of a girl in a beautiful dress\\\", can it still generate an NSFW image?\", \"What hardware is required, and how long does it take to train AdvI2I?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer ZfYn,\\n\\nWe greatly appreciate your initial comments and your positive feedback on our work. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. We appreciate any feedback you could give us. We also hope you could kindly update the rating if your questions have been addressed. We are also happy to answer any additional questions before the discussion ends.\\n\\nBest Regards,\", \"authors_of_paper_advi2i\": \"Adversarial Image Attack on Image-to-Image Diffusion Models\"}" ] }
5UKrnKuspb
NeuralPlane: Structured 3D Reconstruction in Planar Primitives with Neural Fields
[ "Hanqiao Ye", "Yuzhou Liu", "Yangdong Liu", "Shuhan Shen" ]
3D maps assembled from planar primitives are compact and expressive in representing man-made environments. In this paper, we present **NeuralPlane**, a novel approach that explores **neural** fields for multi-view 3D **plane** reconstruction. Our method is centered upon the core idea of distilling geometric and semantic cues from inconsistent 2D plane observations into a unified 3D neural representation, which unlocks the full leverage of plane attributes. It is accomplished through several key designs, including: 1) a monocular module that generates geometrically smooth and semantically meaningful segments known as 2D plane observations, 2) a plane-guided training procedure that implicitly learns accurate 3D geometry from the multi-view plane observations, and 3) a self-supervised feature field termed *Neural Coplanarity Field* that enables the modeling of scene semantics alongside the geometry. Without relying on prior plane annotations, our method achieves high-fidelity reconstruction comprising planar primitives that are not only crisp but also well-aligned with the semantic content. Comprehensive experiments on ScanNetv2 and ScanNet++ demonstrate the superiority of our method in both geometry and semantics.
[ "3D Reconstruction", "3D Scene Understanding", "Scene Abstraction", "Neural Rendering" ]
Accept (Oral)
https://openreview.net/pdf?id=5UKrnKuspb
https://openreview.net/forum?id=5UKrnKuspb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcpSZk1zDp", "vJm9CPlVdj", "v3sLKkqPja", "pmuIhH7xEj", "pRbtIeWrjs", "obUdnDAxxU", "ntJC0h3rcE", "ne4uc4kwnL", "jKV1xjORXr", "dfGBChJzas", "bDJvLcRP4p", "VW1dU2M3rh", "TGTuOicnT6", "RqdcCf6YtI", "QenmqP7jIu", "PvktgmwnAf", "OoN7s2UKoG", "KiRTJl42qr", "I764wgAOCd", "GNDiupTZLh", "FxNDlehHke", "F6zFBDtWPw", "BxyI4nsXLR", "B63b6lCDnm", "AzpQE4njSc", "3mnWoVldf5", "1HFxepJmSQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732708879287, 1732414459640, 1732119730798, 1732123516631, 1732554407278, 1732121884013, 1730670032348, 1732561150267, 1730618030868, 1734542018940, 1732708953207, 1732414397272, 1729933044207, 1732116483733, 1732118312135, 1732318526297, 1732167835264, 1732122267460, 1732118467151, 1732708791549, 1732553761547, 1737523542721, 1732120404201, 1732124269092, 1730714242084, 1732708435594, 1732121246490 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_6wEq" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_6wEq" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_YwVB" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_1CZJ" ], [ "ICLR.cc/2025/Conference/Submission2933/Area_Chair_ZuLq" ], [ 
"ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_YwVB" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_1CZJ" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_FCSv" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Area_Chair_ZuLq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Reviewer_FCSv" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ], [ "ICLR.cc/2025/Conference/Submission2933/Authors" ] ], "structured_content_str": [ "{\"title\": \"Many thanks to the reviewer for their valuable feedback\", \"comment\": \"Dear Reviewer 1CZJ,\\n\\nWe sincerely appreciate your insightful comments and further prompt reply.\\nWe are glad to hear that the concerns have been addressed.\\nYour recognition of our paper's strength and strong recommendation are a great encouragement to us.\\n\\nThank you again for your involvement throughout the entire review process.\\n\\nBest regards, NeuralPlane Authors\"}", "{\"comment\": \"Dear Reviewer 6wEq,\\n\\nThanks again for your efforts and suggestions for this paper. \\nAs the deadline for the author-reviewer discussion phase is nearing, we would like to courteously inquire if our responses have effectively addressed the concerns you raised.\\nShould any remain unresolved, please let us know, and we will promptly follow up.\\n\\nBest regards, Authors\", \"title\": \"A Gentle Reminder\"}", "{\"title\": \"2. 
Robustness to hyperparameters (2/2)\", \"comment\": \"**Push margin.**\\nAs shown below, we empirically observe that separating a negative pair as far apart as possible (i.e., with a large push margin) can lead to faster convergence and improved performance.\\nDue to a few local planar primitives that are not fully initialized at the beginning, we set the push margin $m$ to 1.5 for the first 1k iterations,\\nand then increase it to 2.0, which represents the maximum distance in a unit hypersphere.\\n\\n| The push margin $m$ | Chamfer $\\\\downarrow$ | F-score $\\\\uparrow$ | RI $\\\\uparrow$ | VOI $\\\\downarrow$ | SC $\\\\uparrow$ |\\n| ------------------------------------------ | :------------------- | :----------------- | :------------ | :--------------- | ------------- |\\n| 1.0 | 5.03 | 69.1 | 0.948 | 2.32 | 0.373 |\\n| 1.5 | 4.89 | 70.0 | 0.953 | 2.29 | 0.366 |\\n| 2.0 | 4.77 | 70.5 | 0.950 | 2.25 | 0.376 |\\n| 1.5 $\\\\rightarrow$ 2.0 (Our implementation)$\\\\text{\\\\quad}$ | 4.59 | 71.2 | 0.955$\\\\text{\\\\quad}$ | 2.25 | 0.376$\\\\text{\\\\quad}$ |\\n\\n\\n**Loss balancing parameters.**\\nThe overall training objective is defined in *Eq. 
9*, where three _intra-primitive_ loss terms $\\\\mathcal{L}\\\\_{\\\\text{normal}}$, $\\\\mathcal{L}\\\\_{\\\\text{p-depth}}$ and $\\\\mathcal{L}\\\\_{\\\\text{pull}}$ are accordingly combined with balancing parameters $\\\\lambda_1$, $\\\\lambda_2$, and $\\\\lambda_3$.\\nFollowing DS-NeRF, we set $\\\\lambda_2$ to $0.1$ without conducting further ablations.\\nWe now report the performance with different $\\\\lambda_1$ and $\\\\lambda_3$ respectively, since they correspond to two independent sets of training parameters.\\nResults below show that $\\\\lambda_1$ being too large could do harm to the geometry, while a smaller value may require more iterations for convergence.\\n\\n| $\\\\lambda_1$: balancing parameter for $\\\\mathcal{L}_{\\\\text{normal}}\\\\text{\\\\quad}$ | Chamfer $\\\\downarrow$ | F-score $\\\\uparrow$ | RI $\\\\uparrow$ | VOI $\\\\downarrow$ | SC $\\\\uparrow$ |\\n| :------------------------------------------------------------------ | :------------------- | :----------------- | :------------ | :--------------- | ------------- |\\n| 0.001 | 5.19 | 68.0 | 0.951$\\\\text{\\\\quad}$ | 2.31 | 0.372$\\\\text{\\\\quad}$ |\\n| 0.01 (Our implementation) | 4.59 | 71.2 | 0.955 | 2.25 | 0.376 |\\n| 0.05 | 4.94 | 69.7 | 0.948 | 2.32 | 0.370 |\\n| 0.10 | 5.78 | 64.6 | 0.941 | 2.50 | 0.345 |\\n| 1.00 | 10.14 | 40.5 | 0.925 | 3.15 | 0.292 |\\n\\n\\n\\nMoreover, the results listed below indicate that $\\\\lambda_3$=0.5 is an effective choice for balancing the pulling and pushing forces.\\n\\n| $\\\\lambda_3$: balancing parameter for $\\\\mathcal{L}_{\\\\text{pull}}$ $\\\\text{\\\\quad}$ | Chamfer $\\\\downarrow$ | F-score $\\\\uparrow$ | RI $\\\\uparrow$ | VOI $\\\\downarrow$ | SC $\\\\uparrow$ |\\n| :---------------------------------------------------------------- | :------------------- | :----------------- | :------------ | :--------------- | ------------- |\\n| 0.1 | 4.99 | 69.9 | 0.952$\\\\text{\\\\quad}$ | 2.29 | 0.365$\\\\text{\\\\quad}$ |\\n| 0.5 (Our 
implementation) | 4.59 | 71.2 | 0.955 | 2.25 | 0.376 |\\n| 1.0 | 4.67 | 70.2 | 0.951 | 2.29 | 0.373 |\\n| 2.0 | 5.08 | 68.3 | 0.948 | 2.37 | 0.361 |\"}", "{\"title\": \"Comment to Official Review\", \"comment\": \"We'd like to thank the reviewer for their time in providing a detailed review with insightful comments and for their recognition of our work.\\n\\n> **W1: Complexity of methodology.**\\n\\nWe appreciate the reviewer's concern about the complexity of our method.\\nAcknowledging this common concern raised by all reviewers, **in our general response above**, we have presented additional experimental results on the complexity of our method and its robustness to hyperparameters.\\nGenerally speaking, our method is **computationally inexpensive** and **robust to most of the concerned hyperparameters**, with the performance close to the optimum when they are set within a reasonable range.\\nPlease be aware that in our main experiment, **a universal set of hyperparameters is applied to all test scenes** and we will explicitly clarify this in the revision.\\n\\nRegarding the K-means clustering on predicted normal map, we referred to PlaneRCNN and PlanarRecon, which respectively utilize 7 and 6 normal anchors to estimate plane normals.\\nWe make a difference by choosing K=8 and reserving only the 6 principal clusters, which empirically leads to improved results.\\nFurthermore, as you mentioned, we filter out noisy 2D plane segments based on their pixel areas.\\nAll these efforts are to ensure the salience of the detected 2D planes.\\nSorry that at the present time, we do not have conclusive data on how to determine these values according to various scenes.\\nWe will continue to investigate this.\\n\\nWe need to clarify that the method is currently restricted to compact environments and better suited to indoor settings.\\nAlthough our method is applicable to small outdoor scenes,\\nlarge-scale and complex outdoor scenes continue to pose many challenges, including 
(1) the need for large model capacities, and (2) the presence of massive non-planar and spurious clutter, which we will leave as future research.\\n\\n> **W2: Dependence on the quality of initial local planar primitives.**\\n\\nThe reviewer's observation is valid, and the limitation is also discussed in *Appendix A.5*, where we note that our method may fail to recover from catastrophic errors in such as severe inaccuracy in mono-normal estimation and SfM geometry.\\n\\nMeanwhile, following the reviewer FCSv's suggestion, we additionally conduct an out-of-domain experiment on two small-scale outdoor scenes.\\nThe results indicate that by repurposing the pretrained monocular normal predictor and the Segment Anything Model as 2D foundational models, the proposed method has the potential of generalizing to challenging environments.\\nWe invite the reviewer to check our updated supplementary material for these qualitative results.\\n\\n> **W3: Over-segmentation issue.**\\n\\nWe value the reviewer's insightful feedback.\\nYou are right that, although the over-segmentation issue in 2D could largely be managed during the training process, there are still cases where undesirable over-segmentation may occur in the final reconstruction, such as where the floor could be divided into multiple pieces.\\nWe partially attribute this to the inherent ambiguity in determining a semantically coherent plane structure and will further investigate to handle this semantic ambiguity.\\n\\nRegarding the small segments you concerned, we observe that they mostly arise from the noisy 2D plane segments.\\nThese segments usually correspond to non-planar clutter and can only be detected in a limited number of views, which could lead to the inaccuracies.\\nWe plan to tackle these limitations in the future.\\n\\n> **Q1: $\\\\hat{\\\\mathbf{n}}_i$ in Eq. 
4 is undefined.**\\n\\nWe apologize for this confusing notation.\\nFirstly, the $\\\\hat{\\\\mathbf{n}}$ is defined before as the **NeRF-derived** normal, rather than the normal of the local planar primitive $P$ which is actually denoted as $\\\\bar{\\\\mathbf{n}}$.\\nThe NeRF-derived normal $\\\\hat{\\\\mathbf{n}}$ is estimated from a randomly sampled ray triplet in an algebraic manner (*Eq. 3*).\\nThen, to compute the normal loss $\\\\mathcal{L}_{\\\\text{normal}}$ (*Eq. 4*), multiple ray triplets will be sampled, so that the subscript $i$ added to $\\\\hat{\\\\mathbf{n}}$ denotes the estimated NeRF-derived normal of the **i-th** sampled ray triplet $\\\\mathcal{T}_i$.\\nWe will clarify this in the revision.\"}", "{\"comment\": \"Dear Authors:\\n\\nThank you for your reply and the hyperparameter experiments. Definitely, I agree with accepting the paper. Still, it is a very complex system with many hyperparameters. It has only been tested on a limited number of scenarios (only 12) and no visualizations of the failures have been provided, so I would like to keep my original rating.\\n\\nBest,\\nReviewer 6wEq\"}", "{\"title\": \"Comment to Official Review (2/2)\", \"comment\": \"> **W4: The ground truth used for quantitative evaluation is unclear.**\\n\\nSorry for the confusion.\\nFollowing *PlanarRecon*, we use 3D planes, i.e., **only the ground-truth planar parts** provided by *PlaneRCNN*, as ground truth for evaluation.\\nWhile in the first 7 rows of *Tab. 
1*, we aim to assess the capability of state-of-the-art surface reconstruction methods, with the ground truth for their RAW geometry outputs being the official scene reconstruction (i.e., the entire meshes).\\nPlease note that, here, we do not intend to directly compare with these surface reconstruction methods, but rather to provide a quantitative demonstration of how the geometry+RANSAC paradigm relies on the quality of input geometries.\\nTo avoid any confusion, we will explicitly clarify this distinction in the main paper.\\n\\n> **W5: Robustness to hyperparameters.**\\n\\nAcknowledging this common concern raised by all reviewers, we have presented additional experimental results **in our general response above**, where the robustness of our method to hyperparameters in $\\\\mathcal{L}_{\\\\text{pull}}$ and RANSAC is assessed.\\nGenerally speaking, our method is **robust to most of the concerned hyperparameters**, and its performance is close to the optimum when they are set within a reasonable range of values.\\n\\nPlease be aware that in our main experiments presented in submission, a fixed (i.e., universal) set of hyperparameters is applied to all test scenes.\\nWe also need to clarify that the method is currently restricted to compact environments and better suited to indoor settings.\\nAlthough our method is currently applicable to small outdoor scenes,\\nlarge-scale and complex outdoor scenes continue to pose many challenges, including (1) the need for large model capacities, and (2) the presence of massive non-planar and spurious clutter, which we plan to tackle in future research.\"}", "{\"summary\": \"This paper presents NeuralPlane, a method for reconstructing 3D scene plane primitives via neural fields without GT plane labelling. The method is divided into three main stages: firstly, it combines pre-trained normal prediction and SAM to generate initial 2D planar segments and estimates their 3D parameters using SfM keypoints. 
Secondly, it optimises two neural fields, a density field based on planar geometric constraints, and a coplanar neural field that understands the semantic relationships between regions. The neural coplanar field is followed by a neural parser module that helps to model the learned coplanar relations. Finally, the optimised neural representations are converted into explicit 3D planes through point sampling, feature-based clustering and RANSAC fitting. The method is evaluated on the ScanNetv2 and ScanNet++ datasets, and the results show that it outperforms both the learning-based method and the Geometry+RANSAC method in terms of geometric and semantic metrics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and easy to understand.\", \"The proposed method does not require ground-truth plane annotations, as it can learn effectively from noisy monocular model outputs.\", \"The method demonstrates SOTA performance and achieves clean plane segmentation results.\"], \"weaknesses\": [\"The proposed method involves numerous hyperparameters, including balancing parameters for loss, the number of semantic prototypes, and parameters listed in Lines 850-863.\", \"As a complex system, it is important to discuss and present failure cases to help readers understand the method\u2019s limitations.\"], \"questions\": \"Since it is a complex system paper, making it hard to reproduce, will the code be publicly available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"}", "{\"comment\": \"Dear Authors:\\n\\nThank you for the detailed response, which addresses all of my concerns and is appreciated! 
I'll keep my rating.\"}"}", "{\"summary\": \"The paper presents NeuralPlane, a novel approach to 3D plane reconstruction that utilizes neural fields for generating structured 3D maps from multi-view images without the need for plane annotations. The method emphasizes two main aspects: geometry and semantics. Key contributions include:\\n1. Monocular Plane Segmentation: A monocular module extracts geometrically smooth (based on an off-the-shelf Surface Normal Predictor) and semantically meaningful 2D plane observations (based on the Segment Anything Model). \\n2. Plane-Guided Neural Representation: The model utilizes these 2D segments to train a neural field that captures accurate 3D plane locations. Surface normal regularization and pseudo-depth regularization terms are proposed.\\n3. Neural Coplanarity Field: This self-supervised feature field enables semantic consistency within the 3D reconstruction by grouping planar regions that share coplanar relationships. A contrastive loss is proposed to distinguish between planes with similar geometric properties but different semantic properties. \\nThe method demonstrates superior performance on ScanNetv2 and ScanNet++ datasets, indicating its effectiveness in indoor environments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. Novelty in Combining Geometry and Semantics: The method\\u2019s approach to merging geometry with semantic information through a neural coplanarity field is innovative, enhancing the semantic consistency of 3D reconstructions. A complex system with multiple stages is proposed to estimate/reason the local planar regions and the associated parameters, to associate the planes in 3D using a radiance field, and to resolve semantic conflicts using the Neural Coplanarity Field.\\n2. High-Quality Reconstruction: Experimental results indicate that NeuralPlane achieves fine-grained and coherent plane reconstructions, outperforming existing methods in most of the metrics. 
\\n3. Efficiency: NeuralPlane\\u2019s volume density representation allows for faster training compared to implicit methods, an important practical advantage.\\n4. Extensive ablation studies: the authors include sufficient ablation studies to support the effectiveness of the proposed modules.\\n5. Good writing: the paper is clearly written and the technical details are clearly presented.\", \"weaknesses\": \"1. Complexity of Methodology: The proposed method involves several stages, including monocular plane segmentation, neural coplanarity field training, and plane extraction. In each stage, there are several submodules and I believe there are some hyperparameter decisions in each stage. While effective, this complexity (especially the combination of submodules and associated parameters) may impact its scalability to larger scenes or generalizability. For example, K-means clustering on the predicted normal map, the mask size threshold in SAM, thresholds to form negative pairs in the Neural Coplanarity Field, loss balancing parameters, etc. It would be great if the authors could share insights on the impact of these hyperparameter settings on different scenes. For example, is a universal set of hyperparameters applied to all the test scenes? It would be great if the authors could provide a sensitivity analysis or ablation study on key hyperparameters across different scenes. This would help clarify how robust the method is to parameter changes and whether a universal set of parameters is feasible.\\n\\n2. Dependence on Initial 2D Plane Segmentation Quality: As the regularizations are based on the quality of the initial local plane geometry, the method\\u2019s success can depend on the quality of 2D plane segments obtained from monocular priors, which may introduce inaccuracies in challenging environments for monocular predictors. As the authors mentioned, the local planar primitives can result in severe inconsistency across views (Line 194). \\n\\n3. 
Over-Segmentation Issue: As highlighted in the paper, the Segment Anything Model (SAM) tends to over-segment planes, resulting in multiple smaller plane segments for a single surface. Although this is managed in the training process, it may require further refinement to avoid segmentation inconsistencies in complex scenes. From the visualization on the GitHub page, it seems to me that some of the small segments often result in inaccuracies.\", \"questions\": \"1. Eqn 4: n_i is not defined, is it the surface normal of the selected local planar primitive? Please explicitly define n_i in the text or equation, and confirm if it refers to the surface normal of the local planar primitive.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"/\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"}", "{\"metareview\": \"This paper proposes a method for multi-view 3D plane reconstruction. The main idea is to utilize foundational models to provide priors (normal, segmentation, etc.) and then employ neural fields to learn a plane field that is aware of both geometry and scene semantics. The proposed method presents a unified framework for both geometry and semantics, as well as a way to exploit strong priors from other models. During rebuttal, the major weakness raised by the reviewers is the complexity of the proposed method, which includes multiple modules with a handful of hyperparameters to tune. Other weaknesses include low efficiency due to the neural fields, missing related works, dependency on foundational models, and paper presentation. The authors have actively addressed the concerns and provided supporting materials. 
After rebuttal, all reviewers suggested accepting the paper.\", \"additional_comments_on_reviewer_discussion\": [\"During rebuttal, the weaknesses raised by the reviewers were:\", \"the complexity of the proposed method, which includes multiple modules with a handful of hyperparameters to tune.\", \"low efficiency due to the neural fields\", \"missing related works\", \"dependency on foundational models and paper presentation.\", \"The authors have actively addressed the concerns, including testing their method for outdoor scenes, conducting ablation studies on hyperparameters, etc. Most of the concerns of reviewers are addressed by the authors' responses.\"]}", "{\"title\": \"Many thanks to the reviewer for their valuable feedback\", \"comment\": \"Dear Reviewer YwVB,\\n\\nWe sincerely appreciate your insightful comments and further prompt reply.\\nWe are glad to hear that the concerns have been addressed.\\nYour recognition of our paper's strength and the clear acceptance are greatly valued.\\n\\nThank you again for your involvement throughout the entire review process.\\n\\nBest regards, NeuralPlane Authors\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer YwVB,\\n\\nThanks again for your efforts and suggestions for this paper. \\nAs the deadline for the author-reviewer discussion phase is nearing, we would like to courteously inquire if our responses have effectively addressed the concerns you raised.\\nShould any remain unresolved, please let us know, and we will promptly follow up.\\n\\nBest regards, Authors\"}", "{\"summary\": \"This paper proposed a framework called NeuralPlane to reconstruct 3D indoor scenes as planar primitives from posed 2D images. The author first employs 2D prior models to generate local planar primitives, then uses the geometric and semantic priors to guide the NeRF-style reconstruction learning. 
Finally, a decoding algorithm is designed to extract the global explicit plane mesh from the learned neural field.\\nExtensive experiments are conducted to evaluate the performance on ScanNetv2 and ScanNet++ datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper proposed a comprehensive 3D reconstruction framework with plane primitives.\", \"Compared to some of the similar works that only focus on detecting the planes, this work also utilizes the detected planes to guide neural field learning.\", \"The presentation of this paper is clear and concise.\", \"The qualitative and quantitative results look promising.\"], \"weaknesses\": [\"This work involves a lot of submodules; in particular, the neural field with geometric, semantic, and coplanar features could be computationally expensive.\", \"A concurrent work is very relevant to this paper and could be discussed in the related works section:\", \"Chen, Zheng, et al. \\\"PlanarNeRF: Online Learning of Planar Primitives with Neural Radiance Fields.\\\" arXiv preprint arXiv:2401.00871 (2023).\", \"The text illustration and conceptual figure of Neural Parser could be improved; the current version is not clear or easy enough to follow.\"], \"questions\": [\"What's the computational complexity of this work? Like the GPU memory usage and training time? 
How's it compared to related works?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response by Authors\", \"comment\": \"We are deeply grateful to the reviewers for dedicating their time and effort to the reviewing process.\\nWe are pleased to note that the reviewers find the presentation of the paper clear and concise (FCSv, 6wEq, 1CZJ, YwVB), and acknowledge the novelty of our work in associating geometry and semantics (FCSv, 1CZJ, YwVB) as well as delivering high-quality reconstruction without requiring plane annotations (FCSv, 6wEq, 1CZJ, YwVB).\\n\\nWe also notice that the reviewers' major concerns mainly lie in the **complexity of methodology** (6wEq, 1CZJ, YwVB) and its **robustness to hyperparameters** (FCSv, 6wEq, 1CZJ, YwVB).\\nAccordingly, during the rebuttal period, we measured the average training time and peak GPU memory consumption of each proposed module, and\\nconducted more detailed ablation studies on the ScanNetv2 dataset, with a particular focus on the analysis of hyperparameter sensitivity.\\nThese additional measurements and experimental results presented below serve as general responses to the reviewers' shared concerns.\\nWe hope these efforts will comprehensively evaluate the performance of our work and provide further clarity.\"}", "{\"title\": \"1. 
Complexity of the methodology\", \"comment\": \"As listed below, we first report the profiling results for average training time and peak memory costs of each proposed module.\\nCompared to the base model (i.e., *Nerfacto*), our full model approximately incurs a 1$\\\\times$ increase in time consumption and a 15.8% increase in peak memory consumption.\\nHowever, please note that only 4k iterations are required for training, adding only 2-3 minutes to the total time consumption, which could be considered acceptable.\\n\\n| |$\\\\text{\\\\quad}$Average Training Time / 10iters $\\\\text{\\\\quad}$ | $\\\\text{\\\\quad}$Peak GPU Memory Usage (GB) $\\\\text{\\\\quad}$ |\\n| :----------------------------------------------------------------- | :---------------------------------------------------------------- | ---------------------------------------------------------------- |\\n| Nerfacto (the base model) | $\\\\text{\\\\quad}$303.6 ms | $\\\\text{\\\\quad}$1.46 |\\n| $+\\\\text{ }\\\\mathcal{L}_{normal}$ (defined in *Eq. 4*) | $+\\\\text{ }$ 85 ms ( $\\\\uparrow$ 28% ) | $+\\\\text{ }$ 0 ( $\\\\rightarrow$ ) |\\n| $+\\\\text{ }\\\\mathcal{L}_{p-depth}$ (defined in *Eq. 5*)$\\\\text{\\\\quad\\\\quad}$ | $+\\\\text{ }$ 26 ms ( $\\\\uparrow$ 9% ) | $+\\\\text{ }$ 0.04 ( $\\\\uparrow$ 2.9% ) |\\n| $+$ Refine. 
| $+\\\\text{ }$ 44 ms ( $\\\\uparrow$ 14.5% ) | $+\\\\text{ }$ 0.01 ( $\\\\uparrow$ 1% ) |\\n| $+$ NCF | $+\\\\text{ }$ 105 ms ( $\\\\uparrow$ 34.7% ) | $+\\\\text{ }$ 0.166 ( $\\\\uparrow$ 11.4% ) |\\n| $+$ NP | $+\\\\text{ }$ 71 ms ( $\\\\uparrow$ 23.3% ) | $+\\\\text{ }$ 0.001 ( $\\\\uparrow$ 0.3% ) |\\n| **Full Model** | $\\\\text{\\\\quad}$**635.5 ms** ( $\\\\uparrow$ 109.3% ) | $\\\\text{\\\\quad}$**1.69** ( $\\\\uparrow$ 15.8% ) |\\n| | | |\\n\\nIn contrast to the learning-based MVS baselines, such as *PlanarRecon* and *AirPlanes*, which can be implemented at interactive speeds, *NeuralPlane* is currently a time-consuming method that requires offline optimization for each scene.\\nIt is a common issue for neural implicit reconstruction methods, but when compared to the state-of-the-art methods in this literature, as reported below, our method is significantly more efficient.\\n\\n| | NeuRIS$\\\\text{\\\\quad}$ | MonoSDF (MLP)$\\\\text{\\\\quad}$ | NeuralPlane (Ours) |\\n| :------------------------- | :----- | :------------ | :----------------- |\\n| Training Time (h) | 4.2 | 7.5 | 0.1 |\\n| Peak GPU Memory Usage (GB)$\\\\text{\\\\quad}$ | 8.0 | 4.7 | 1.7 |\\n| | | | |\"}", "{\"title\": \"reply to author response\", \"comment\": \"Thank you for providing the additional experiments and results in your response. They address my concerns and clarify the issues I raised. I will maintain my original rating for this submission.\"}", "{\"title\": \"Feedback to authors' rebuttal\", \"comment\": \"I appreciate the comprehensive and careful feedback from authors, as well as the supplemented experiments. The experiments and clarification look convincing and clear to me, especially the qualitative visualization on outdoor data to validate the generalizability of this method on zero-shot scenes. I will improve the overall rating and undoubtedly, the proposed method is novel, technically sound, and is in a good shape for acceptance. 
I am also looking forward to the code release of this paper for future research and exploration.\"}", "{\"title\": \"Comment to Official Review\", \"comment\": \"Thank you for your motivating and positive feedback on our work.\\nWe provide a detailed response below to each of your concerns.\\n\\n> **W1: Numerous hyperparameters.**\\n\\nWe agree with the reviewer's comment since the method involves a number of stages and submodules.\\nAcknowledging this common concern raised by all reviewers, we have presented additional experimental results **in our general response above**, where the robustness to the concerned hyperparameters for loss balancing and RANSAC is further assessed.\\nGenerally speaking, our method is **robust to most of the concerned hyperparameters**, and its performance is close to the optimum when they are set within a reasonable range of values.\\n\\nPlease also be aware that in our main experiment, a fixed set of hyperparameters is applied to all test scenes and we will explicitly clarify this in the revision.\\n\\n> **W2: Missing failure cases.**\\n\\nThank you for this point.\\nIn the submission, we provide a brief discussion of the current limitations in *Appendix A.5*:\\n(1) our method may fail to recover from catastrophic errors in 2D priors, such as severe inaccuracy in mono-normal estimation and SfM geometry;\\n(2) the number of semantic prototypes has to be fixed or manually set, which potentially limits the scalability.\\nBesides, we have to clarify that our method is currently restricted to compact environments and better suited to indoor settings.\\n\\nWe will include further qualitative results of failure cases to help the reader have a thorough understanding.\\n\\n> **Q1: On code availability.**\\n\\nFor the implementation complexity, we will release the code to support reproducible experiments and facilitate future research.\"}", "{\"title\": \"2. 
Robustness to hyperparameters (1/2)\", \"comment\": \"Firstly, we would like to emphasize here and will explicitly clarify in the revision that in our main experiments presented in submission, a fixed (i.e., universal) set of hyperparameters is applied to all test scenes.\\nIn addition to the ablation studies we have conducted in *Sec. 4.3* and *Appendix A.4* which include the number of semantic prototypes, the feature dimension of NCF and the DBSCAN epsilon, we now provide more details on the sensitivity of other hyperparameters.\\nGenerally speaking, the results show that **our method is robust to most of the concerned hyperparameters, and its performance is close to the optimum when they are set within a reasonable range**.\\n\\n**RANSAC parameters for plane fitting.** \\nDuring plane fitting via RANSAC, a point is considered an inlier if: (1) the angle between its normal and the normal hypothesis is less than $r_{n}$ AND (2) the distance from the point to the plane is less than $r_d$.\\nAs listed below, we report the performance of *NeuralPlane* under varying RANSAC parameters, including those adopted by other methods, as indicated in the last three rows.\\nThanks to the plane-biased scene geometry and the earlier scene decomposition based on learned coplanarity features, we find our method robust to the RANSAC parameters.\\n\\n| $(r_n$, $r_d)$ | Chamfer $\\\\downarrow$ | F-score $\\\\uparrow$ | RI $\\\\uparrow$ | VOI $\\\\downarrow$ | SC $\\\\uparrow$ |\\n| :--------------------------------------- | :------------------- | :----------------- | :------------ | :--------------- | ------------- |\\n| $($10$^{\\\\circ}$, 2cm$)$ | 4.64 | 71.1 | 0.941$\\\\text{\\\\quad}$ | 2.61 | 0.297$\\\\text{\\\\quad}$ |\\n| $($10$^{\\\\circ}$, 5cm$)$ | 4.57 | 71.3 | 0.954 | 2.26 | 0.375 |\\n| Our implementation: $($20$^{\\\\circ}$, 8cm$)\\\\text{\\\\quad}$ | 4.59 | 71.2 | 0.955 | 2.25 | 0.376 |\\n| PlanarRecon: $($30$^{\\\\circ}$, 25cm$)$ | 4.78 | 70.3 | 0.953 | 2.26 | 0.378 
|\\n| AirPlanes: $($36.9$^{\\\\circ}$, 30cm$)$ | 4.76 | 70.5 | 0.951 | 2.26 | 0.382 |\\n| PlanarNeRF: $($45.6$^{\\\\circ}$, 35cm$)$ | 4.73 | 70.1 | 0.950 | 2.28 | 0.386 |\\n\\n**Pushing thresholds.**\\nWe then analyze the impact of different pushing thresholds, as proposed in *Eq. 7*.\\nResults reported below show that small pushing thresholds are preferable, as they define a more geometrically strict coplanarity condition.\\nHowever, when the thresholds become too strict, the coplanarity features tend to be excessively discriminative, resulting in over-segmentation and thus a decline in segmentation performance.\\n\\n| $(t_n$, $t_o)$ | Chamfer $\\\\downarrow$ | F-score $\\\\uparrow$ | RI $\\\\uparrow$ | VOI $\\\\downarrow$ | SC $\\\\uparrow$ |\\n| :-------------------------------------------- | :------------------- | :----------------- | :------------ | :--------------- | ------------- |\\n| $($cos10$^{\\\\circ}$, 5cm$)$ | 4.56 | 71.3 | 0.949$\\\\text{\\\\quad}$ | 2.33 | 0.368$\\\\text{\\\\quad}$ |\\n| Our implementation: $($cos10$^{\\\\circ}$, 8cm$)\\\\text{\\\\quad}$ | 4.59 | 71.2 | 0.955 | 2.25 | 0.376 |\\n| $($cos20$^{\\\\circ}$, 8cm$)$ | 4.66 | 70.2 | 0.950 | 2.31 | 0.370 |\\n| $($cos30$^{\\\\circ}$, 25cm$)$ | 5.10 | 68.9 | 0.952 | 2.32 | 0.358 |\\n| $($cos60$^{\\\\circ}$, 50cm$)$ | 5.12 | 68.3 | 0.950 | 2.37 | 0.365 |\"}", "{\"title\": \"Many thanks to the reviewer for their valuable feedback\", \"comment\": \"Dear Reviewer FCSv,\\n\\nWe sincerely appreciate your insightful comments and further prompt reply. \\nWe are highly encouraged by your acknowledgment of our comprehensive response with new experiments.\\nSpecifically, following your constructive suggestion, we have included additional outdoor results in the appendix (on page 20 and 24). 
\\n\\nThank you again for your involvement throughout the entire review process.\\n\\nBest regards, NeuralPlane Authors\"}", "{\"title\": \"Please read the rebuttal and reply\", \"comment\": \"Dear Reviewers,\\n\\nThanks again for serving for ICLR. The discussion period between authors and reviewers is approaching (November 27 at 11:59pm AoE); please read the rebuttal and ask questions if you have any. Your timely response is important and highly appreciated.\\n\\nThanks,\\n\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"3. Out-of-domain experiment\", \"comment\": \"Finally, following the constructive suggestion (**W1**) of the reviewer FCSv, we conducted an out-of-domain test on two small outdoor scenes.\\nTwo outdoor scenes from the training split of the [Niantic MapFree dataset](https://research.nianticlabs.com/mapfree-reloc-benchmark) were selected for the experiment.\\nAll settings used in this experiment are kept consistent with those presented in our original submission.\\nWe also attempted to draw a comparison with baseline methods described in the main text.\\n\\nWe find that *PlanarRecon* could not be successfully executed on either scene, and the geometry+RANSAC baselines that employ learning-based MVS reconstruction methods (e.g., *SimpleRecon* and *DoubleTake*) fail to yield meaningful results.\\nWhen compared to geometry+RANSAC baselines that employ neural implicit surface reconstruction methods (e.g., *NeuRIS* and *MonoSDF*), **our method produces reconstructions that are more complete and clean**.\\n\\nSince no ground-truth reconstruction is available, we would like to invite the reviewers to **check our updated supplementary material for qualitative evaluation**.\"}"}", "{\"title\": \"Comment to Official Review\", \"comment\": \"We thank the reviewer YwVB for the constructive feedback and valuable suggestions!\\nBelow, we provide a detailed response to your questions and comments.\\n\\n> **W1: Computational 
complexity.**\\n\\nWe appreciate the reviewer's concern about the complexity of our method.\\nAcknowledging this common concern raised by all reviewers, **in our general response above**, we have presented additional experimental results on the complexity of the methodology, \\nwhere we measured the average training duration and peak GPU memory usage for each of the proposed modules, and compared our work with other methods.\\nGenerally speaking, our method is **computationally inexpensive and efficient** when compared to existing neural implicit reconstruction methods.\", \"to_clarify\": compared to the learning-based methods such as *PlanarRecon* and *AirPlanes*, our method is relatively time-consuming, as it requires per-scene optimization.\\nHowever, we deem that the neural implicit method is more generalizable (please also refer to the out-of-domain experiment section in our general response).\\nBesides, there are numerous SLAM frameworks based on neural representations (or say, differentiable rendering), such as *NICE-SLAM* and *Gaussian Splatting SLAM*, which could enable us to implement our method in a more efficient and online fashion.\\nWe believe that this is an interesting direction for future research.\\n\\n> **W2: Missing a related work.**\\n\\nThank you for the reference.\\n_PlanarNeRF_ is indeed a highly relevant work that uses RGB-D sequences for dense 3D planar primitives detection in an online fashion.\\nWe will include it in the revision.\\n\\n> **W3: Illustration of Neural Parser needs improvement.**\\n\\nThank you so much for pointing this out.\\nBasically, this module learns a set of feature centroids in NCF during the training process, which is found to be effective in decomposing the scene into coplanar segments.\\nWe regret that, due to space constraints, we had to condense several points in the main text, drawing on a range of unexpanded prerequisites that may not be familiar to all readers.\\nWe will make rearrangements there to enhance 
readability.\"}", "{\"summary\": \"This paper presents a neural 3D reconstruction system for 3D plane reconstruction of indoor scenes. Inspired by the recent success of neural radiance fields and image foundation models (SAM2), this paper presents a multi-view 3D reconstruction pipeline leveraging these techniques. The training scheme consists of three phases: Initializing plane segments and parameters -> optimizing a neural feature field for plane-specific feature representation (encouraged by a list of geometry-guided losses) -> plane extraction by grouped features and RANSAC. Experiments are conducted extensively on two representative indoor datasets: ScanNet and 7-scenes and are compared with a group of competitive baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The unique advantage of the system is the association of geometry and semantic features for the plane reconstruction problem, which requires both geometry-aware perception and semantic-aware grouping.\\n2. The paper leverages foundation models to achieve plane segment initialization and utilizes geometry-driven losses to optimize the system. There is no groundtruth plane segmentation or geometry label required. \\n3. Overall the paper and diagrams are well written and presented.\\n4. The experimental comparison is thorough and convincing on both plane geometry reconstruction and segmentation.\", \"weaknesses\": \"1. I think the major advantage of this paper is the unsupervised learning paradigm on the plane reconstruction problem. However, since both ScanNet and ScanNet++ have groundtruth images, this unique advantage seems not to be fully enjoyed and reflected. So I suggest the authors apply the proposed system on some outdoor scenes containing plane structures (such as autonomous driving datasets or street view datasets), to verify the adaptability of the method.\\n \\n2. I have tested PlanarRecon in my previous projects. 
It can incrementally reconstruct planes in an online and real-time manner. Besides, it can train over multiple scenes and directly test without any test-time optimization. Although its precision should fall behind the neural reconstruction papers, the generalizability and speed are higher than those of the proposed method. I am curious whether the paper has such potential to tackle these limitations.\\n\\n3. Missing a few related works: (1) Recovering 3d planes from a single image via convolutional neural networks (2) PlaneMVS: 3D Plane Reconstruction From Multi-View Stereo (3) Single-image piece-wise planar 3d reconstruction via associative embedding.\\n\\n4. On quantitative evaluation, it is unclear what the groundtruth is here. Does it stand for the groundtruth planar part only or the entire mesh? As a plane reconstruction paper, I think the former one should be more reasonable. If so, the first several methods listed in Table 1 cannot directly be compared with the proposed method. Authors should make it clearer on this part.\\n\\n5. Is the proposed method robust to the hyperparameters listed in the paper, especially (1) to, tn and m in the push loss during training and (2) the parameters selected in RANSAC during plane fitting? Some ablation studies on hyper-parameter robustness are expected to make the method more generalizable across most indoor scenes, since the geometric scale and semantic distribution can have large variance among scenes.
I will accordingly consider improving my overall rating if the concerns are well addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"}", "{\"title\": \"Many thanks to the reviewer for their valuable feedback\", \"comment\": \"Dear Reviewer 6wEq,\\n\\nWe sincerely appreciate your insightful comments and further prompt reply.\\nYour recommendation is a great encouragement.\\n\\nFollowing your constructive suggestion, we have included elaboration as well as visualizations of the common failure cases in _Appendix A.5_ (on page 20).\\nRegarding the concern that only a limited number of scenarios were tested, in fact, we had carefully considered this aspect prior to our submission. 
\\nMoreover, unlike feed-forward methods such as _PlanarRecon_, which are efficient but rely on 3D supervision, our method is annotation-free using 2D foundational models, and has exhibited superior robustness to outdoor scenarios (please refer to the **Out-of-domain experiment** in our general response).\\n\\nFinally, we acknowledge that your concern is completely valid: for an effective reconstruction system, it is essential to test across a wide range of scenarios to verify its robustness and investigate the relevant hyperparameters.\\nWe will continue to work on this in the future.\\n\\nThank you again for your involvement throughout the entire review process.\\n\\nBest regards, NeuralPlane Authors\"}", "{\"title\": \"Comment to Official Review (1/2)\", \"comment\": \"We appreciate the reviewer's thoughtful feedback and valuable suggestions.\\nBelow, we provide further clarification and details to your concerns.\\nIf any of our responses do not fully address your concerns, please let us know, and we will promptly follow up.\\n\\n> **W1: Evaluations on some outdoor scenes, verifying the adaptability of the method.**\\n\\nThis is a noteworthy experiment that we had not investigated prior to submission.\\nOur work and recent studies on 3D plane reconstruction primarily focus on indoor scenarios, but honestly, we too have wondered about the adaptability of our method to outdoor scenes.\\nTo this end, we select two outdoor scenes from the training split of the [Niantic MapFree dataset](https://research.nianticlabs.com/mapfree-reloc-benchmark).\\nEach scene depicts a small outdoor place and comes with two independent scans.\\nOnly the first scan is utilized for the experiments.\\nWe consider this to be a completely out-of-domain test, \\nas no existing method has been specifically designed to handle such scenes.\\nAll settings used in this experiment are kept consistent with those presented in our original submission.\\n\\nSince no ground-truth reconstruction is 
available for these scenes, we encourage the reviewer to **refer to our updated supplementary material** for detailed qualitative results.\\nThe results demonstrate the robustness of our method and its potential to generalize to challenging outdoor environments.\\nHere, we need to point out that *PlanarRecon* collapsed on both scenes, while geometry+RANSAC baselines employing learning-based MVS reconstruction methods, such as *SimpleRecon* and *DoubleTake*, failed to yield meaningful results.\\n\\n> **W2: The proposed method is time-consuming which needs test-time optimization.**\\n\\n*PlanarRecon* is the first learning-based model that we highly appreciate for its real-time global 3D plane reconstruction capability.\\nThe reviewer's concern about the time efficiency of our method is **indeed a valid point**.\\nHowever, as a neural implicit approach, our work primarily aims at maintaining a consistent 3D representation of environmental plane structures, thus ensuring the quality of the final plane reconstruction.\\nWe regard this as an interesting exploration of neural fields for parametric primitives.\\nMore importantly, we advocate for the use of this powerful neural 3D representation, fusing various 2D observations from pretrained or foundational models.\\nTo alleviate the concerned efficiency limitation, we consider it is possible to integrate our method into a more efficient and online SLAM framework based on neural representations (or say, differentiable rendering), such as *NICE-SLAM* and *Gaussian Splatting SLAM*.\\n\\nRegarding the concern of generalizability, as discussed in **W1**, we have conducted an out-of-domain experiment in outdoor scenarios to verify the adaptability of our method, where learning-based feed-forward reconstruction methods collapse or fail to deliver meaningful results.\\n\\n> **W3: Missing a few related works.**\\n\\nThanks for highlighting these references.\\nThey are representative prior works in reconstructing 3D plane from a 
limited number of views, and we will cite them in the literature review.\"}" ] }
5U1rlpX68A
SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning
[ "Yichen Wu", "Hongming Piao", "Long-Kai Huang", "Renzhen Wang", "Wanhua Li", "Hanspeter Pfister", "Deyu Meng", "Kede Ma", "Ying Wei" ]
Continual Learning (CL) with foundation models has recently emerged as a promising paradigm to exploit abundant knowledge acquired during pre-training for tackling sequential tasks. However, existing prompt-based and Low-Rank Adaptation-based (LoRA-based) methods often require expanding a prompt/LoRA pool or retaining samples of previous tasks, which poses significant scalability challenges as the number of tasks grows. To address these limitations, we propose Scalable Decoupled LoRA (SD-LoRA) for class incremental learning, which continually separates the learning of the magnitude and direction of LoRA components without rehearsal. Our empirical and theoretical analysis reveals that SD-LoRA tends to follow a low-loss trajectory and converges to an overlapping low-loss region for all learned tasks, resulting in an excellent stability-plasticity trade-off. Building upon these insights, we introduce two variants of SD-LoRA with further improved parameter efficiency. All parameters of SD-LoRAs can be end-to-end optimized for CL objectives. Meanwhile, they support efficient inference by allowing direct evaluation with the finally trained model, obviating the need for component selection. Extensive experiments across multiple CL benchmarks and foundation models consistently validate the effectiveness of SD-LoRA. The code is available at https://github.com/WuYichen-97/SD-Lora-CL.
[ "Continual learning; Low-rank adaptation" ]
Accept (Oral)
https://openreview.net/pdf?id=5U1rlpX68A
https://openreview.net/forum?id=5U1rlpX68A
ICLR.cc/2025/Conference
2025
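The abstract above describes SD-LoRA as continually separating the learning of the magnitude and direction of LoRA components, i.e., $\Delta W = \|AB\|_F \cdot \overline{AB}$ with per-task directions frozen and scalar magnitudes kept learnable. As a reading aid for the rebuttals below, here is a minimal numpy sketch of that decomposition; all names, shapes, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical layer width and LoRA rank

def lora_direction(A, B):
    """Return the normalized LoRA update AB / ||AB||_F (the 'direction')."""
    AB = A @ B
    return AB / np.linalg.norm(AB)  # np.linalg.norm on a matrix is the Frobenius norm

# Directions from two previously learned tasks, frozen after training
# (random matrices stand in for trained LoRA factors A_i, B_i).
directions = [
    lora_direction(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
    for _ in range(2)
]

# Scalar magnitudes alpha_i, which remain learnable across later tasks.
alphas = [1.5, 0.7]

# Adapted weights: W = W_0 + sum_i alpha_i * direction_i.
W0 = np.zeros((d, d))  # stand-in for the pre-trained weights
W = W0 + sum(a * D for a, D in zip(alphas, directions))

# Every retained direction has unit Frobenius norm by construction.
for D in directions:
    print(round(float(np.linalg.norm(D)), 6))  # 1.0
```

Under this scheme, inference simply evaluates $W_0$ plus the fixed weighted sum, with no per-sample component selection — the scalability point the abstract emphasizes.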
{ "note_id": [ "pitD8OzKor", "im35iyQFVK", "hPCHjPVgyA", "fNmlEY1r0d", "apvzyp3G8U", "a74XOROWQd", "VjaJ1MgTvw", "VaKzSIRXq4", "Uvx71A32Hd", "ShvxpshE7x", "S7Qd4fsFqH", "OlHt9730Bj", "L5Y9BA1xQ0", "JAra13AI20", "EEu6KiEmDn", "ApNnGypojE", "3hMADpdwhN", "3GDjKJk6C5" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732227982497, 1729815027601, 1732230834741, 1737523790170, 1732237338678, 1732255328403, 1732357839309, 1732227475753, 1732227242401, 1732267590399, 1730196843226, 1730642998876, 1730295531295, 1732331682566, 1734618754433, 1732237378544, 1732230968701, 1732228037596 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_Uver" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_yo9h" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_6nN3" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_on1i" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_on1i" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_6nN3" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_yo9h" ], [ "ICLR.cc/2025/Conference/Submission6765/Reviewer_Uver" ], [ "ICLR.cc/2025/Conference/Submission6765/Area_Chair_ojqn" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ], [ "ICLR.cc/2025/Conference/Submission6765/Authors" ] ], "structured_content_str": [ 
"{\"title\": \"To Reviewer Uver (Part I)\", \"comment\": \"We appreciate very much your constructive comments on our paper. Please kindly find our response to your comments below, and all revisions made to the paper are highlighted in blue for your ease of reference. We hope that our response satisfactorily addresses the issues you raised. Please feel free to let us know if you have any additional concerns or questions.\\n\\n**Q1**: Better clarification about the magnitude and direction of the LoRA parameters.\\n> - In the vanilla LoRA framework, the weight update is expressed as $\\\\Delta W = AB = \\\\\\\\|AB\\\\\\\\|_F \\\\cdot \\\\frac{AB}{\\\\\\\\|AB\\\\\\\\|_F} = \\\\\\\\|AB\\\\\\\\|_F \\\\cdot \\\\overline{AB}$. This decomposition highlights two components: $\\\\\\\\|AB\\\\\\\\|_F$, the Frobenius norm of $AB$, which represents the learned magnitude, and $\\\\overline{AB} = \\\\frac{AB}{\\\\\\\\|AB\\\\\\\\|_F}$, the normalized matrix indicating the direction. \\n> - In S-LoRA, we preserve only the directions learned from previously completed tasks (i.e., $\\\\overline{A_iB_i}$ for $i = 1, \\\\dots, j-1$, where $\\\\mathcal{T}_j$ is the current task). Their associated weights $\\\\alpha_i$ ($i = 1, \\\\dots, j-1$) are treated as learnable parameters. By decoupling magnitude and direction, we empirically demonstrate its effectiveness in improving average performance, as shown in the experiments.\\n> - Section 4.2 provides further exploration of how this decoupling helps alleviate forgetting. 
Specifically, Finding 3 illustrates that the $\\\\Delta W$ learned by S-LoRA better aligns with $\\\\Delta W^*$, which represents the weights located in the shared low-loss region (i.e., the region in the parameter space where the model consistently achieves low loss across all tasks).\\n> - A more detailed theoretical explanation is provided in Theorem 1, which shows that the previously learned directions (e.g., $\\\\overline{A_iB_i}$ for $i = 1, 2$) correspond to the principal components of $\\\\Delta W^*$. This also explains why the automatically learned weights (e.g., $\\\\alpha_i$ values for $i = 1, 2$) are larger for earlier tasks compared to those learned for later tasks.\\n\\n**Q2**: In Findings 3, the concept of \\\"low-loss path\\\", \\\"linear low-loss path\\\", and \\\"low-loss region\\\" need to be explained with more intuitive expressions.\\n> - (**Intuitive expression.**) For **Low-Loss Region**: This is a broader area in the parameter space where the model's loss is consistently low. Intuitively, it's like the floor of a valley or plateau where any position results in a good model performance. It represents the parameter configurations that achieve low error across different tasks. **Low-Loss Path:** This refers to a trajectory in the parameter space along which the model's loss remains low. Intuitively, imagine navigating through a landscape of mountains and valleys, where the valleys represent areas of low loss. **Linear Low-Loss Path:** This is a specific type of low-loss path where the trajectory is a straight line in the parameter space. Imagine drawing a straight line across a flat valley; the line stays within the low-loss region, meaning the model's performance remains stable along this direct route. \\n> - Additionally, previous works [1, 2] have shown that the linear low-loss path exists within the context of CL, indicating that the trajectory along this path is effective for maintaining low loss across sequential tasks. In Sec. 
4.2, we demonstrate that the proposed S-LoRA can effectively seek a low-loss path to identify the shared low-loss region, thereby achieving improved performance.\\n> \\n> [1] Mirzadeh, Seyed Iman, et al. \\\"Linear Mode Connectivity in Multitask and Continual Learning.\\\" ICLR, 2021.\\\\\\n> [2] Verwimp, Eli, Matthias De Lange, and Tinne Tuytelaars. \\\"Rehearsal revealed: The limits and merits of revisiting samples in continual learning.\\\" ICCV, 2021.\\n> \\n**Q3/4**: The typo in the equation after Algorithm 1 (the first term should be $A_jB_j$). How should the 'residual' of this directional approximation be defined, and how can a residual direction be controlled by a scalar threshold?\\n> - Thanks for your careful review. We appreciate your attention to detail. This was a typo, and we have already corrected it in the main text, with the changes highlighted in blue.\\n> - The updated equation is $r=\\\\|\\\\overline{A_jB_j}-\\\\sum_{i=1}^{j-1}\\\\hat{\\\\alpha}_i \\\\overline{A_iB_i}\\\\|$.\\n\\n**Q5**: Do all $\\\\|\\cdot\\\\|$ notations within the manuscript stand for the operator norm?\\n> - Thank you for your comment. The notation $\\\\|\\cdot\\\\|$ used throughout the manuscript, except in Sec. 5, refers to the Frobenius norm. To avoid confusion, we have revised the manuscript accordingly. Thank you again for helping us improve the clarity of the paper.\"}", "{\"summary\": \"This manuscript focused on continual learning with the pre-trained model (CL-PTM). By indicating that the existing prompt-based methods rely on an unreliable prompt selection mechanism which can lead to the scalability issue, this manuscript proposed Scalable Low-Rank Adaptation (S-LoRA). Specifically, S-LoRA incrementally decouples the learning of the direction and magnitude of Low-Rank Adaptation (LoRA) parameters. 
The theoretical and empirical analysis indicated that the proposed method tends to follow a low-loss trajectory that converges to an overlapped low-loss region. Experiments on standard benchmarks were conducted to support the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of decoupling the learning of LoRA\\u2019s direction and magnitude is novel to me. This method addresses issues with scalability and efficiency in class-incremental learning, providing a valuable contribution to the field.\\n2. The motivation of this manuscript was driven by extensive empirical support. Some interesting findings were observed through the experiments, which were not investigated before.\\n3. Theoretical analyses were conducted to support the empirical findings. The explanation of the shared low-loss region and how S-LoRA converges to it effectively supports the results presented.\", \"weaknesses\": \"1. Some core ideas were not explained clearly. The description of how the magnitude and direction are decoupled needs to be better defined.\\n2. While S-LoRA shows strong performance, some newer methods outside the scope of the prompt-based methods need to be compared for further validation.\", \"questions\": \"1. Regarding the magnitude and the direction of the LoRA parameters. In Section 4.1, the decomposition of magnitude and direction of LoRA's parameter updates $\\\\Delta \\\\mathbf{W}$ was only casually mentioned. However, I wonder how the magnitude and direction of such a matrix were defined in the authors' derivation. I believe they should be clearly defined to help the readers understand the core idea of your method.\\n2. In Findings 3, the concept of \\\"low-loss path\\\", \\\"linear low-loss path\\\", and \\\"low-loss region\\\" need to be explained with more intuitive expressions.\\n3. In the first equation after Algorithm 1, I wonder if there is a typo for the definition of $r$. 
Should the first term in RHS be $\\\\mathbf{A}_j \\\\mathbf{B}_j$ rather than $\\\\mathbf{A}_i \\\\mathbf{B}_i$? If not, please provide further explanations about this point.\\n4. Still with the same equation. Since the RHS consists of a linear combination of \\\"directions\\\" mentioned by the authors, I wonder how \\\"the residual\\\" of a direction approximation should be defined. In my understanding, $r$ should also be a direction analog to the operations on vectors. However, the authors also had some expressions like \\\"the residual $r$ is less than the threshold $\\\\tau$\\\". I didn't get how to control a residual direction by a scalar threshold.\\n5. Before Theorem 1 in Section 5, the authors mentioned that $\\\\|\\\\cdot\\\\|$ refers to the operator norm. Do all $\\\\|\\\\cdot\\\\|$ notations within the manuscript stand for the operator norm?\\n6. I noticed that the proposed S-LoRA method was mainly compared to the prompt-based baselines except for InfoLoRA. However, S-LoRA is not an improvement of the prompt-based methods. It would be better if the authors provide comparisons with a wide range of recent studies like EASE [1], or at least some recent prompt-based methods like [2-5].\\n\\nI will accordingly change my rating if my concerns are addressed.\", \"references\": \"[1] Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning. CVPR 2024.\\n[2] RCS-prompt: learning prompt to rearrange class space for prompt-based continual learning. ECCV 2024.\\n[3] Prompt Gradient Projection For Continual Learning. ICLR 2024.\\n[4] PromptFusion: Decoupling Stability and Plasticity for Continual Learning. ECCV 2024.\\n[5] Consistent Prompting for Rehearsal-Free Continual Learning. 
CVPR 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Reviewer on1i (Part I)\", \"comment\": \"Thank you sincerely for your thoughtful and positive feedback on our work. We are particularly grateful for your recognition of the various aspects of our research. Below, we have provided a detailed explanation for your remaining concern as follows. Please do not hesitate to let us know if you have any further questions.\\n\\n**Q1**: I understand that this may be outside the scope of the paper, but did you measure the relative distance with other benchmarks? It would be helpful to compare elements outside the ViT pre-training distribution.\\n> - To address your question, we have conducted additional experiments and plotted the relative distances across different benchmarks and backbones, including DomainNet and the DINO backbone. \\n> - Since ImageNet-R is an out-of-distribution task relative to the ViT pre-training distribution [1], we further extended the original paper's results on ImageNet-R (N=5) to ImageNet-R (N=10) for a more comprehensive comparison with tasks outside the ViT pre-training distribution.\\n> - **Please refer to the Appendix A.2 for reference.** The results demonstrate that the same trend, consistent with Finding 1, is observed across various benchmarks, backbones, and task lengths.\\n>\\n>[1] Hendrycks, Dan, et al. \\\"The many faces of robustness: A critical analysis of out-of-distribution generalization.\\\" CVPR, 2021.\\n\\n**Q2**: The intuition behind how the proposed method alleviates forgetting, along with the corresponding forgetting results.\\n> - Thank you for pointing this out. 
The explanation of how S-LoRA alleviates forgetting is central to the method and is detailed in Finding 3.\\n> - To address your concern about the strong impact on performance on $\\mathcal{T}_1$, taking ImageNet-R(N=5) as an example, we have listed the forgetting of $\\mathcal{T}_1$ during incremental training in the following table. It can be seen that the forgetting on $\\mathcal{T}_1$ is smaller than that of the other methods. This is because, although $\\alpha_1$ changes, the final weights still relatively lie in the low-loss region for $\\mathcal{T}_1$ (see Finding 3$(c)$).\\n> \\n>|IN-R(N=5)|Coda-Prompt|Hide-Prompt|InfLoRA|S-LoRA|\\n>|:-:|:-:|:-:|:-:|:-:| \\n>|FM($\\downarrow$) of $\\mathcal{T}_1$ | 7.61 | 7.98 | 9.18| **6.98** |\\n>\\n> - To further clarify why S-LoRA effectively mitigates forgetting, we provide a detailed explanation below. As demonstrated in Finding 3(a), when the model is trained on the second task $\\mathcal{T}_2$, S-LoRA learns the weights ${\\alpha_1, \\alpha_2}$ and LoRA $\\overline{A_2B_2}$ while keeping $\\overline{A_1B_1}$ fixed. During training, **S-LoRA first focuses its updates along the critical directions learned from earlier tasks** (i.e., larger $\\alpha_1$ on $\\overline{A_1B_1}$), enabling the model to quickly approach the shared low-loss region. Then, by incrementally introducing LoRA (i.e., $\\overline{A_2B_2}$), **it fine-tunes the update directions, allowing the model to effectively converge on the shared low-loss region** across all tasks. Specifically,\\n> - By analyzing interpolating points along the update path, we observe that S-LoRA can improve performance on $\\mathcal{T}_2$ without degrading performance on $\\mathcal{T}_1$, as shown in Finding 3$(c)$. 
**This result verifies that the weights converged by S-LoRA lie within the shared low-loss region for both tasks.**\\n> - **We theoretically prove that the gradually learned $\\\\\\\\overline{A_iB_i}$ sequentially approximate the principal components of the optimal $\\\\\\\\Delta W^{\\\\ast}$** ($\\\\\\\\Delta W^*= W^*-W_0$, where $W^*$ lies in the shared low-loss region for all tasks.) This, in turn, explains why the model assigns larger weights to the first learned $\\\\overline{A_iB_i}$ components, as these represent the principal directions necessary to achieve the optimal $\\\\\\\\Delta W^*$.\\n> - **In summary**, through our designed S-LoRA, the model can find a low-loss path[1,2] for updates and converge to weights within a common shared low-loss region. This approach effectively mitigates forgetting, **providing a new perspective based on low-loss path** for leveraging LoRA to address this issue.\\n> - For your reference, the table below presents the overall forgetting results (FM), showing that the proposed S-LoRA consistently outperforms other methods in minimizing forgetting.\\n> \\n>| Methods| IN-R(N=5)(FM$\\\\downarrow$)|IN-R(N=10)(FM$\\\\downarrow$)|IN-R(N=20)(FM$\\\\downarrow$)|\\n>|:-:|:-:|:-:|:-:|\\n>|Finetune| 28.42 | 30.87 | 39.60 | \\n>|L2P| 5.54 | 5.54 | 5.95 |\\n>|DualPrompt | 4.63 | 5.58 | 6.22 |\\n>|CodaPrompt | 4.03 | 4.87 | 4.45 | \\n>|InfLoRA| 4.73 | 5.66 | 7.47 |\\n>|HidePrompt| 4.31 | 5.52 | 4.76 | \\n>|S-LoRA| **3.98** | **4.32** | **4.39** |\\n>\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"To Reviewer yo9h (Part I)\", \"comment\": \"We appreciate very much your constructive comments on our paper. Please kindly find our response to your comments below, and all revisions made to the paper are highlighted in blue for your ease of reference. We hope that our response satisfactorily addresses the issues you raised. 
Please feel free to let us know if you have any additional concerns or questions.\\n\\n**W1 & Q1**: Main Concern: the differences between the proposed S-LoRA and MoE-LoRA [1,2]. We expect the author to discuss more differences.\\n> Thank you for bringing these papers to our attention. Below, we would like to clarify the key differences between our proposed S-LoRA and MOE-LoRA from the following perspectives.\\n> - **Training Stage**.\\n> MoE-LoRA [1,2] can be viewed as the LoRA version of CODA-Prompt [3], as both rely on a gating mechanism to select prompts (in CODA-Prompt) or LoRA components (in MoE-LoRA), sharing similar properties as outlined in Table 1. \\n> - (***Task-level v.s. Example-level***) The gating mechanism in MoE-LoRA is sample-dependent, where the gating is trained at the example level, meaning **different samples are assigned different weights of LoRAs**. In contrast, S-LoRA uses task-level training, **where all task samples share the same learned $\\\\alpha$**. While sample-dependent gating can sometimes improve accuracy, as discussed in Lines 78-80, it also creates a bottleneck, since wrong expert selection can significantly degrade performance, especially when samples from different tasks are similar.\\n> - (***Efficiency***) In **MoE-LoRA, all LoRA components/experts are treated equally** and are continually added, leading to inefficiency as the number of tasks increases. S-LoRA, on the other hand, addresses this **through both theoretical and empirical insights, showing that later-learned LoRA components contribute less**. By reducing their ranks, S-LoRA improves efficiency without sacrificing performance.\\n> - **Inference Stage** \\n> - (***Fixed v.s. 
Gating***) During inference, S-LoRA directly evaluates the current model, i.e., the one after learning of task $j$ via ${\\\\bf{W}}_0+\\\\alpha_1\\\\overline{A_1 B_1}+\\\\alpha_2\\\\overline{A_2 B_2}+\\\\cdots+\\\\alpha_j\\\\overline{A_j B_j}$, on previous tasks, where all $\\\\alpha_j$ for $j\\\\in\\\\{1,\\\\cdots,N-1\\\\}$ are fixed thanks to the low-loss path. However, MoE-LoRA requires re-computing weights for each individual sample of a previous task, often resulting in inconsistencies with the optimal weights learned during training. This inconsistency can exacerbate forgetting.\\n> - (***Efficiency***) S-LoRA eliminates additional computational costs associated with MOE-LoRA's dynamic gating. By combining weighted LoRAs back into the foundation model, S-LoRA matches the computational efficiency of the foundation model itself while maintaining strong performance.\\n> \\n> To better address your concern, we have also re-implemented MoE-LoRA[1] on ImageNet-R and compared it with the proposed S-LoRA for your reference. \\n> \\n>| Methods| IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|IN-R(N=20)(Acc/AAA)|\\n>|:-:|:-:|:-:|:-:|\\n>|MoE-LoRA[1]| 74.08/81.07 | 70.92/77.81 | 62.97/70.44 |\\n>|S-LoRA|**79.15**/**83.01** | **77.34**/**82.04** | **75.26**/**80.22**|\\n>\\n>[1] Liu J, Wu J, Liu J, et al. Learning Attentional Mixture of LoRAs for Language Model Continual Learning. arXiv, Sep.29, 2024.\\\\\\n>[2] Dou S, Zhou E, Liu Y, et al. LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin. ACL 2024.\\\\\\n>[3] Smith, James Seale, et al. \\\"Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning.\\\" CVPR, 2023.\"}", "{\"comment\": \"Your response has sufficiently addressed my concerns, so I have revised the scores. Your revised paper should further compare or discuss the reference [1-8].\\n\\n**Reference**\\n\\n[1] Liu J, Wu J, Liu J, et al. 
Learning Attentional Mixture of LoRAs for Language Model Continual Learning[J]. arXiv preprint arXiv:2409.19611, 2024.\\n\\n[2] Dou S, Zhou E, Liu Y, et al. LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024: 1932-1945.\\n\\n[3] McDonnell, M. D.; Gong, D.; Parvaneh, A.; Abbasnejad, E.; and van den Hengel, A. 2024. Ranpac: Random projections and pre-trained models for continual learning. Advances in Neural Information Processing Systems, 36.\\n\\n[4] Zhou, D.-W.; Ye, H.-J.; Zhan, D.-C.; and Liu, Z. 2023b. Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need. arXiv preprint arXiv:2303.07338.\\n\\n[6] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. 2023a. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152.\\n\\n[7] Yang S, Ning K P, Liu Y Y, et al. Is Parameter Collision Hindering Continual Learning in LLMs?[J]. arXiv preprint arXiv:2410.10179, 2024.\\n\\n[8] Continual Learning with Pre-Trained Models: A Survey.\"}", "{\"comment\": \"I'd like to thank the authors for all the effort to address my comments, questions and concerns. I've seen only few edits being made in the paper. It would be great if at least the comparison to task arithmetic LoRA could make it into the final version (Table 4 could be a good place for it). 
I think these additional studies are very useful and could be interesting for future readers.\\n\\nI have no further questions.\"}", "{\"title\": \"To Reviewer 6nN3 (Part II)\", \"comment\": \">| Methods| Equation |IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|\\n>|:-:|:-:|:-:|:-:| \\n>|Task Arithmetic LoRA[1] (`v1`) |$W_0+\\\\sum_{i=1}^N\\\\tau_i$ | 67.11/73.67 | 63.14/70.19 | \\n>|Task Arithmetic LoRA[1] (`v2`) |$W_0+\\\\sum_{i=1}^N \\\\tau_i'$ | 72.77/78.07 | 71.02/76.89 \\n>|S-LoRA|$W_0+\\\\sum_{i=1}^N\\\\alpha_i\\\\overline{A_iB_i}$ |**79.15/83.01** | **77.34/82.04** | \\n>\\n**W2**: Most baselines use inferior PEFT methods (prompt tuning) while the authors use LoRA which naturally will give them an edge. It would be great if the authors could show that the improvement over the baseline is not coming from PEFT method.\\n> Thank you for your insightful comment. To demonstrate that the improvement of our proposed S-LoRA is not solely attributable to our choice of LoRA over prompts, \\n> - we have already compared S-LoRA with **the current SOTA LoRA-based method, InfLoRA (CVPR'24) [4]**, in the experimental section (see Table 2 & 3);\\n> - we also included additional LoRA-based methods in our evaluation during the response period, with the results presented in the following table.\\n> - Compared to other LoRA-based methods [1-4], S-LoRA **achieves the best performance** across various task sequence lengths. This demonstrates that the observed performance gains are **not simply due to replacing prompts with LoRA**. For instance, O-LoRA [2] applies orthogonal regularization to parameters rather than directly addressing the feature space, thus limiting its effectiveness in mitigating forgetting. Task Arithmetic LoRA [1] simply adding task vectors ($\\\\tau_i$) ignores relationships between tasks. 
As task sequence length increases, the accumulation of conflicting task vectors leads to performance degradation.\\n> - Compared to O-LoRA[2], which treats all LoRA components equally and continually adds them, our proposed ES-LoRA reduces the ranks of later-added LoRAs and thus **improves efficiency without sacrificing performance**. Such rank reduction is supported by our theoretical and empirical findings that the LoRA components learned later contribute less within our framework.\\n>\\n>| Methods| IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|IN-R(N=20)(Acc/AAA)|\\n>|:-|:-:|:-:|:-:|\\n>|Task Arithmetic LoRA[1]| 72.77/78.07 | 71.02/76.89 | 63.29/70.88 | \\n>|O-LoRA[2]| 73.88/78.89 | 70.65/76.43 | 65.23/71.89 | \\n>|InfLoRA[4]| 76.95/81.81| 74.75/80.67 | 69.89/76.68 | \\n>|S-LoRA|**79.15**/**83.01** | **77.34**/**82.04** | **75.26**/80.22|\\n>|ES-LoRA1| 79.01/82.50 | 77.18/81.74 | 74.05/**80.65** | \\n>\\n**W3**: End-to-end optimization is no desired property, high predictive performance is in Table 1.\\n> We totally agree with the reviewer that high predictive performance is desired. This is precisely why, in Table 1, we highlight `end-to-end optimization` which has been proved in [6] to strongly correlate with high predictive performance. \\n> - CodaPrompt [6] introduces an attention-based component-weight mechanism that allows end-to-end optimization for the first time, distinguishing it from previous works like L2P [5]. 
\\n> - As shown in Section 5.3 of [6] (see the table below), this attention mechanism indeed leads to higher average accuracy and less forgetting, underscoring the benefits of end-to-end optimization.\\n> \\n>In response to your feedback, we have revised Section 1 and marked the changes in blue to enhance clarity and ensure the accuracy of our explanation.\\n> \\n>| Methods| Acc $\\\\uparrow$| FM $\\\\downarrow$\\n>|:-:|:-:|:-:|\\n>|Coda-Prompt[6]| **75.45 $\\\\pm$ 0.56**| **1.64 $\\\\pm$ 0.10**| \\n>|Coda-Prompt(w/o End-to-End training)| 74.52 $\\\\pm$ 0.65| 1.67 $\\\\pm$ 0.13| \\n\\n[1] Chitale, Rajas, et al. \\\"Task Arithmetic with LoRA for Continual Learning.\\\" arXiv preprint arXiv:2311.02428 (2023). \\\\\\n[2] Wang, Xiao, et al. \\\"Orthogonal Subspace Learning for Language Model Continual Learning.\\\" EMNLP, 2023. \\\\\\n[3] Liu J, Wu J, Liu J, et al. Learning Attentional Mixture of LoRAs for Language Model Continual Learning. arXiv, Sep.29, 2024.\\\\\\n[4] Liang, Yan-Shuo, and Wu-Jun Li. \\\"InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning.\\\" CVPR, 2024.\\\\\\n[5] Wang, Zifeng, et al. \\\"Learning to prompt for continual learning.\\\" CVPR, 2022.\\\\\\n[6] Smith, James Seale, et al. \\\"Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning.\\\" CVPR, 2023.\\\\\\n[7] Ilharco, Gabriel, et al. \\\"Editing models with task arithmetic.\\\" ICLR, 2023.\"}", "{\"title\": \"To Reviewer 6nN3 (Part I)\", \"comment\": \"We sincerely thank the reviewer for providing valuable feedback. We detail our response below point by point. Any modifications made to the paper are highlighted in blue for your convenience. Please kindly let us know whether you have any further concerns.\\n\\n**W1**: The presented work uses ideas from model merging/fusing, but doesn't discuss them in the related work.\\n> Thanks for your insightful suggestion. 
Following your advice, we have incorporated a discussion on model merging/fusion into the related work section, highlighted in blue in the revised paper. For your convenience, we also detail **the distinct differences between model merging/fusion works and our proposed S-LoRA** below, while we agree with the reviewer that model merging and fusion concepts are insightful in the context of continual learning.\\n> - We first summarize the **key idea of the works [1,7] that adapt model merging for CL**. \\n> - For clarity, let $\\\\theta_0$ represent the foundation model's weights, and $\\\\theta^*_i$ denote the fine-tuned weights of the foundation model on the $i$-th task. \\n> - The original objective of model merging is to fuse the well-learned $\\\\theta_i^*$ or the task vectors $\\\\tau_i=\\\\theta_i^*-\\\\theta_0$ of all tasks, thereby being well-suited for improving the performance of all tasks under multi-task learning. \\n> - When adapted for continual learning, as in [1, 7], each incoming task is fine-tuned sequentially, resulting in its own task vector. At the end of the sequence, all task vectors are merged to mitigate forgetting. Specifically, [1] efficiently fine-tunes a LoRA component for each task, which is regarded as a representation of a task vector (i.e., $\\\\tau_j = \\\\theta^*_j-\\\\theta_0 = {\\\\bf{A}}_j {\\\\bf{B}}_j$). At the end of the sequence, all LoRAs are weighted and merged to $\\\\theta_0$ following Eqn. (5)(6) in [1].\\n> - Our proposed S-LoRA distinguishes it from the above model merging-based works [1] in **three unique contributions**:\\n> - **(C1) Unique contribution 1:** We follow the conventional CL setup, where the model to fine-tune for task $j$ is $\\\\theta^*_{j-1}$ that has been fine-tuned on the previous task $j-1$, instead of $\\\\theta_0$. 
Thus, the LoRAs in S-LoRA are **incrementally learned on top of the model weights from the previous task**, compared to those LoRAs in model merging-based works that learned on top of the foundation model and thus can be regarded as task vectors.\\n> - **(C2) Unique contribution 2:** The proposed S-LoRA **decouples the magnitude and direction** of the learned LoRA components during sequential training (i.e., $\\\\Delta W = \\\\\\\\|AB\\\\\\\\|\\\\cdot\\\\overline{AB}$; see Lines 185\\u2013194 in the paper). This novel design is thoroughly validated in our experiments and has been recognized by both Reviewers on1i and Uver. In contrast, model merging methods adapted to CL typically use the learned LoRA directly to represent the task vector, without such decoupling.\\n> - **(C3) Unique contribution 3:** We theoretically and empirically demonstrate that in our proposed ES-LoRA, **the rank of later-learned LoRA components can be reduced to enhance efficiency**. In contrast, model-merging approaches adapted to CL typically treat all tasks or vectors as equally important, often assigning the same rank to all their learned LoRA components.\\n> - To further address the reviewer's concern, we have implemented the paper \\\"*Task Arithmetic with LoRA for Continual Learning* [1]\\\" and provided the empirical comparisons below.\\n> - For a fair comparison with ours in the same number of extra parameters, we set the rank of all LoRAs in [1] to 10. Besides, we configure the memory buffer, which is required by [1] to fine-tune the final merged model, to store 20 samples from each task. 
\\n> - **Effectiveness of (C1)**: In the table below, we provide the results of two versions of [1], where `v1` exactly follows [1] with a LoRA as a task vector $\\\\\\\\tau_j=\\\\\\\\theta_j^*-\\\\\\\\theta_0$ and $\\\\\\\\theta_j^*$ fine-tuned from $\\\\theta_0$ and `v2` **adapts [1] with our unique contribution 1 C1 equipped**, i.e., a task vector is $\\\\\\\\tau_j=\\\\\\\\theta_j^{\\\\ast'} - \\\\theta_0$ with $\\\\\\\\theta_j^{\\\\ast'}$ obtained via merging the LoRA fine-tuned on task $j$ from $\\\\\\\\theta_{j-1}^{\\\\ast}$ to $\\\\\\\\theta_{j-1}^{\\\\ast}$. The significant performance improvement of `v2` and our S-LoRA not involving subtraction of the foundation model $\\\\theta_0$ over `v1` proves the effectiveness of C1.\\n> - **Effectiveness of (C2)**: In Table 4, S-LoRA shows superiority over its ablated version with no decoupling of previous LoRAs and update of magnitudes of them, confirming the effectiveness of C2. \\n> - **Effectiveness of (C3)**: We show in the updated Table 5 that ES-LoRA equipped with C3 improves efficiency over S-LoRA without compromising performance (see Tables 2-3).\"}", "{\"comment\": \"I thank the authors for the clear response.\\nAfter reading the comments of the other reviewers and the answers provided by the authors, I decided to raise my score.\"}", "{\"summary\": \"This paper presents a new method called S-LoRA for addressing the Class Incremental Learning (CIL) problem. Currently, prompt-based methods are the most effective method that does not rely on memory-buffer methods in CIL. However, as highlighted by the authors, these approaches have some limitations, particularly regarding prompt selection: they could be more efficient since they require double passes through the model to identify similar prompts and depend heavily on choosing the correct prompt. 
The proposed method (S-LoRA) aims to alleviate these issues by enabling each task to learn a set of LoRA weights over a pre-trained model, along with a separate set of coefficients that determine the importance of each LoRA-weight. The authors provide theoretical and empirical evidence demonstrating that their proposed method achieves strong results across multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The paper is very well written. It presents the constraints and motivations the authors want to work with and clearly explains a simple but effective method. The experiments shown support the majority of what is presented in the paper.\", \"The S-LoRA method represents an advancement over current methods that use pre-trained models. It mitigates the issues introduced by prompt-based approaches and reduces the inference time. The idea of splitting the LoRA trainable weights into direction and magnitude is interesting, as it helps each task have the plasticity to add new information while encouraging the reuse of weights learned in past tasks.\", \"The authors mentioned that an ideal CL method needs to be rehearsal-free, which I may partially agree with as this depends strongly on the context and scenario of the problem. However, I do agree that it needs to be more efficient at the time of inference.\", \"The experiments performed in Section 4 help to understand the proposed method's behaviour. It is also interesting to find limitations in the model and propose alternatives that mitigate these problems.\", \"Moreover, they are well accompanied by a suitable experimental ablation.\"], \"weaknesses\": [\"Some sections of the paper mention that the proposed method improves up to 31.05% across various CL benchmarks. It could be helpful to change this as it can present confusion, especially as this number does not necessarily relate to the current best method.\", \"Algorithm 1 needs to be more precise. 
In lines 1 and 2, you assume you are working in a Task_j, but then iterate over the T task in lines 6 to 8. This iteration may confuse readers and make them think that a task identifier conditions parameters alpha, A_j and B_j.\"], \"questions\": [\"I agree with the first finding shown in section 4.2. However, this can change drastically when the input data distribution changes abruptly.\", \"I understand that this may be outside the scope of the paper, but did you measure the relative distance with other benchmarks? It would be helpful to compare elements outside the ViT pre-training distribution.\", \"Can you explain the intuition behind the lack of forgetting in the proposed method?\", \"For example, Figure 3.A shows how the alpha values change as different tasks are trained. I imagine this may strongly affect the representation generated for the first task examples, which are only expected to be affected by the weights of A_1 and B_1.\", \"Do you have forgetting results?\", \"Some implementation details are not clearly explained:\", \"How are alphas initialised? In most routing methods, the initialisation of the alphas plays a critical role; I can imagine that this can also happen here.\", \"Does any regularisation apply? Or do you train them freely?\", \"Can you explain how the values in Table 5 were calculated?\", \"Specifically, I have doubts about the comparison of L2P's trainable parameters versus ES-LoRA's. 
In L2P, only the prompts pool is trained, which should be smaller than LoRA's A and B.\", \"On the other hand, in ES-LoRA, the A and B values are accumulated, which is not accounted for in the table.\", \"Can you add the S-LoRA values for comparison?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose to incrementally learn how to merge different LoRA models by keeping old LoRA modules fixed and only update their weights and a new LoRA module in the memory-free CL scenario. The authors compare their method S-LoRA as well as variants of it on multiple benchmarks and different benchmarks against multiple baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors present a simply, but principled idea. It makes a lot of sense to learn to fuse models in the context of CL.\\n\\nBesides the choices mentioned in \\\"Weaknesses\\\", the empirically evaluation is convincing.\", \"weaknesses\": \"The presented work uses ideas from model merging/fusing, e.g., \\\"Editing Models with Task Arithmetic\\\" by Ilharco et al., but doesn't discuss these works in the related work. Adoption of these ideas to CL, such as \\\"Task Arithmetic with LoRA for Continual Learning\\\", are not discussed or not compared to.\\n\\nThe authors compare apples with oranges. Most baselines use inferior PEFT methods (prompt tuning) while the authors use LoRA which naturally will give them an edge. It would be great if the authors could show that the improvement over the baseline is not coming from the PEFT method. See a discussion on this topic in \\\"Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need\\\".\\n\\nTable 1 mixes desired properties of CL methods with general properties of methods. 
End-to-end optimization is not a desired property, high predictive performance is.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Existing methods depend on the precision of the selection mechanism. This paper proposes a class incremental learning (CIL) method called S-LoRA to decouple the learning of the direction and magnitude of LoRA parameters. It can incrementally learn task-specific LoRA and importance parameters while preserving the optimization directions of previous tasks. The experimental results show its superiority on multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The ablation study is interesting and validates the effectiveness of the learned importance factors $a$.\\n2. The paper is well organized and the proposed approach is easy to follow.\", \"weaknesses\": \"1. My main concern is that the idea of this paper is just like a MOE-LoRA, which has been studied in [1-2] in continual learning. The learned importance parameters are just like the gates of MOE. We expect the author to discuss more differences between the proposed \\\"reweighting LoRA\\\" and \\\"MOE LoRA\\\".\\n2. The experiments are insufficient. There are some other methods in CIL that should be compared, like RanPAC[3] and ADAM[4]. To the best of our knowledge, RanPAC currently performs best among CIL methods, where the details can be found in this survey[8] about PTM-based CL. Additionally, some SOTA LoRA methods for continual learning (e.g., O-LoRA[6] and N-LoRA[7]) should also be compared. These two works are very related to LoRA for continual learning, even though they are performed on language processing tasks.\\n3. The two efficient versions of S-LoRA seem unrelated to the core idea of this paper, and the effect is not significant. 
The author should discuss more details about the ES-LoRA1 and ES-LoRA2. \\n\\nIf the author can answer my issues well, I am willing to improve my score. \\n\\n\\nReference\\n\\n[1] Liu J, Wu J, Liu J, et al. Learning Attentional Mixture of LoRAs for Language Model Continual Learning[J]. arXiv preprint arXiv:2409.19611, 2024.\\n\\n[2] Dou S, Zhou E, Liu Y, et al. LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024: 1932-1945.\\n\\n[3] McDonnell, M. D.; Gong, D.; Parvaneh, A.; Abbasnejad, E.; and van den Hengel, A. 2024. Ranpac: Random projections and pre-trained models for continual learning. Advances in Neural Information Processing Systems, 36.\\n\\n[4] Zhou, D.-W.; Ye, H.-J.; Zhan, D.-C.; and Liu, Z. 2023b. Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need. arXiv preprint arXiv:2303.07338.\\n\\n[6] Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. 2023a. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152.\\n\\n[7] Yang S, Ning K P, Liu Y Y, et al. Is Parameter Collision Hindering Continual Learning in LLMs?[J]. arXiv preprint arXiv:2410.10179, 2024.\\n\\n[8] Continual Learning with Pre-Trained Models: A Survey.\", \"questions\": \"See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"1. My main concern is that the idea of this paper is just like a MOE-LoRA, which has been studied in [1-2] in continual learning. The learned importance parameters are just like the gate of MOE.\\n2. The experiments are insufficient. There are some other methods in CLS that should be compared, like RanPAC[3], and ADAM[4]. 
To the best of our knowledge, RanPAC currently performs best among CIL methods, where the details can be found in this survey[8] about PTM-based CL. Additionally, some SOTA LoRA methods for continual learning (e.g., O-LoRA[6] and N-LoRA[7]) should also be compared. These two works are very related to LoRA for continual learning, even though they are performed on language processing tasks.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates class incremental learning for foundation models using LoRA. The authors introduce a novel approach, Scalable LoRA (S-LoRA), which enables the incremental learning of task-specific LoRA and importance parameters while maintaining the optimization directions of previous tasks. The technical innovation is commendable. The experimental evaluations provided in the article are comprehensive. These aspects have been unanimously acknowledged by the reviewers. The paper's presentation is well-organized and straightforward to follow. However, certain aspects of the algorithm's description and the analysis of experimental results were not clearly articulated in the initial version. 
During the rebuttal phase, the authors effectively addressed the raised concerns. Consequently, I recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion phase, most of the concerns have been effectively addressed. All reviewers expressed positive opinions.\"}", "{\"title\": \"To Reviewer yo9h (Part II)\", \"comment\": \"**W2 & Q2**: This paper should include a comparison with methods like RanPAC[1], ADAM[2], and especially LoRA-based CL methods such as O-LoRA[3] and N-LoRA[4], even though these are applied to language processing tasks.\\n> - Thanks for the suggestion. Since N-LoRA is a concurrent work released on arXiv on Oct. 14, after our paper submission, and no official code is available for it, we have, following your advice, included a comparison with other mentioned methods such as RanPAC[1], ADAM[2], and O-LoRA[3]. The performance results for these methods are presented in the table below.\\n> - We can observe that, compared to the latest SOTA methods and closely related LoRA-based approaches, S-LoRA demonstrates competitive performance, further validating the effectiveness of the proposed approach. It is worth noting that O-LoRA [2] applies orthogonal regularization to parameters rather than the feature space [4], which limits its effectiveness in mitigating forgetting. In contrast, the proposed S-LoRA introduces a novel approach by exploring low-loss paths to prevent forgetting, achieving superior performance in the process.\\n> \\n>| Methods| IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|IN-R(N=20)(Acc/AAA)|\\n>|:-:|:-:|:-:|:-:|\\n>|ADAM[2]| 74.27/79.99 |72.87/79.39 | 70.47/77.29 | \\n>|RanPAC[1]| 76.95/82.79 | 74.58/81.77 | 72.40/79.27 |\\n>|O-LoRA[3]| 73.88/78.89 | 70.65/76.43 | 65.23/71.89 | \\n>|InfLoRA[4]| 76.95/81.81| 74.75/80.67 | 69.89/76.68 | \\n>|S-LoRA|**79.15**/**83.01** | **77.34**/**82.04** | **75.26**/**80.22**|\\n>\\n> [1] McDonnell, Mark D., et al. 
\\\"Ranpac: Random projections and pre-trained models for continual learning.\\\" NeurIPS, 2024. \\\\\\n> [2] Zhou, Da-Wei, et al. \\\"Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need.\\\" IJCV, 2024.\\\\\\n> [3] Wang, Xiao, et al. \\\"Orthogonal Subspace Learning for Language Model Continual Learning.\\\" EMNLP, 2023.\\\\\\n> [4] Liang, Yan-Shuo, and Wu-Jun Li. \\\"InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning.\\\" CVPR, 2024.\\n\\n**Q3**: The two efficient versions of S-LoRA seem unrelated to the core idea of this paper, and the effect is not significant.\\n> We respectfully emphasize the critical significance of the two efficient versions of S-LoRA in ensuring that our proposed core idea of S-LoRA is not only effective but, more importantly, **practically efficient**.\\n> - As discussed in our response to W1 & Q1, current LoRA-based CL methods typically treat all LoRA components equally, incrementally adding them as new tasks are introduced. However, this training approach becomes increasingly inefficient as the number of tasks grows.\\n> - Based on the findings and theoretical analysis of S-LoRA, we have shown that the LoRA components learned later are less important within our framework. This key insight forms the foundation for our proposed ES-LoRA, an efficient variant of S-LoRA. Therefore, these two versions are directly tied to the core idea of S-LoRA, **focusing on improving efficiency (see updated Table 5) without compromising performance (see Tables 2-3)**.\"}", "{\"title\": \"To Reviewer on1i (Part II)\", \"comment\": \">\\n> [1] Mirzadeh, Seyed Iman, et al. \\\"Linear Mode Connectivity in Multitask and Continual Learning.\\\" ICLR, 2021.\\\\\\n> [2] Verwimp, Eli, Matthias De Lange, and Tinne Tuytelaars. \\\"Rehearsal revealed: The limits and merits of revisiting samples in continual learning.\\\" ICCV, 2021.\\\\\\n> [3] Liang, Yan-Shuo, and Wu-Jun Li. 
\\\"InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning.\\\" CVPR, 2024.\\\\\\n> [4] Wang, Shipeng, et al. \\\"Training networks in null space of feature covariance for continual learning.\\\" CVPR, 2021.\\n\\n**Q3**: Some implementation details need to be clearly explained like 1) How are alphas initialised? 2) Does any regularisation apply? Or do you train them freely?\\n> - At the beginning of a new task, we initialize all $\\\\alpha$ values to 1. To address your concern, we also tested initializing them to 0.1, 0.5 and 0.8. The corresponding results are presented in the following table. As observed, **the initialization of $\\\\alpha$ has a relatively small impact on performance overall, but its influence becomes more noticeable when the task sequence length is large**.\\n> \\n>| Initialization| IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|IN-R(N=20)(Acc/AAA)|\\n>|:-:|:-:|:-:|:-:|\\n>|$\\\\alpha=(0.1,...,0.1)$| 78.47/82.20 | 76.82/80.59 | 71.72/77.82 | \\n>|$\\\\alpha=(0.5,...,0.5)$| 78.63/82.66 | 77.01/81.67 | 74.43/79.89 |\\n>|$\\\\alpha=(0.8,...,0.8)$ | 79.02/82.99 | 77.17/81.89 | 74.27/79.95 |\\n>|$\\\\alpha=(1,...,1)$| 79.15/83.01 | 77.34/82.04 | 75.26/80.22 | \\n> \\n> - In the proposed S-LoRA, we **do not apply any additional regularization** when training $\\\\alpha$; they are trained freely. \\n> - In Theorem 1, we theoretically analyzed why the model tends to learn larger weights for previously learned $\\\\overline{A_iB_i}$, as shown in Finding 2.\\n> - The analysis shows that, during sequential training, as $A_iB_i$ gradually approximate the principal components of $\\\\Delta W^*$ (i.e., the weights lie in the shared low-loss region across all tasks), the previously learned $A_iB_i$ naturally take on a more significant role in the model.\\n\\n**Q4**: Can you explain how the values in Table 5 were calculated? 
\\n> - Let $d$ denote the embedding dimension, $e$ denote the prompt length, $p$ denote the number of prompts, and $l$ denote the number of layers in which prompts are inserted. The term \\\"trainable parameters\\\" refers to the number of parameters that require gradient preservation and backpropagation during training.\\n> - In the training process of L2P, samples in a batch will select different prompts from the entire prompt pool, thus the size of trainable parameters is equivalent to the entire prompt pool, namely $dlp(e+1)$ where $d=768, l=1, e=20, p=30$. In contrast, Dual-Prompt only needs to preserve gradients and backpropagate for global prompts and task-specific prompts of the current task during training, resulting in fewer trainable parameters, namely $de_gl_g+d(e_t+1)l_t$, where $d=768, e_g=6, l_g=2, e_t=20, l_t=3$. The trainable parameters of S-LoRA and ES-LoRA include task-specific LoRAs and negligible $\\\\alpha$. Since it is not necessary to preserve gradients for all LoRAs, the trainable parameters of S-LoRA and ES-LoRA are fewer than those of L2P, namely $4ldr$ for S-LoRA and $4ld\\\\bar{r}$ for ES-LoRA, where $l=12, d=768, r=10, \\\\bar{r}=6.3$. \\n> - In Table 5, we primarily focus on the training and inference efficiency. Hence, we present the trainable parameters and FLOPs. The accumulated parameters of $A$ and $B$ in S-LoRA are the same as those in other LoRA-based methods, such as O-LoRA [1]. However, in this paper, we theoretically and empirically demonstrate that the later-learned LoRA components are not as crucial. Therefore, we propose ES-LoRA, which further alleviates this issue and enhances practical efficiency. The complete Table 5, including S-LoRA, is shown in the revised paper.\\n> \\n> [1] Wang, Xiao, et al. \"Orthogonal Subspace Learning for Language Model Continual Learning.\" EMNLP, 2023.\\n\\n**W1/2:** 1) Some sections of the paper mention that the proposed method improves up to 31.05% across various CL benchmarks. 
It could be helpful to change this as it can present confusion. 2) Algorithm 1 needs to be more precise. \\n> Thank you for the suggestion to improve readability. We have updated the corresponding section and enhanced the clarity and precision of Algorithm 1 in the revised paper\"}", "{\"title\": \"To Reviewer Uver (Part II)\", \"comment\": \"**Q6**: To compare with other methods like EASE[1], or at least some recent prompt-based methods.\\n> - Following your suggestion, we have added experiments to compare with EASE [1] as well as recent prompt-based methods[2-4]. Additionally, we have included other LoRA-based methods [5-6] for a more comprehensive comparison.\\n> - The results show that, compared to both recent prompt-based and LoRA-based methods, the proposed S-LoRA consistently achieves more competitive performance. This further demonstrates the effectiveness of the proposed S-LoRA.\\n> \\n> | Methods| IN-R(N=5)(Acc/AAA)|IN-R(N=10)(Acc/AAA)|IN-R(N=20)(Acc/AAA)|\\n>|:-:|:-:|:-:|:-:|\\n>|EASE[1] | 77.35/81.46 |76.17/81.73 | 70.58/78.31 | \\n>|PGP[2] | 71.00/75.04 |69.50/74.14 | 66.94/77.98 | \\n>|ADAM[3] | 74.27/79.99 |72.87/79.39 | 70.47/77.29 | \\n>|RanPAC[4] | 76.95/82.79 | 74.58/81.77 | 72.40/79.27 |\\n>|O-LoRA[5] | 73.88/78.89 | 70.65/76.43 | 65.23/71.89|\\n>|Task Arithmetic LoRA[6]| 72.77/78.07 | 71.02/76.89 | 63.29/70.88 | \\n>|InfLoRA[7]| 76.95/81.81| 74.75/80.67 | 69.89/76.68 | \\n>|S-LoRA|**79.15**/**83.01** | **77.34**/**82.04** | **75.26**/**80.22**|\\n>\\n> [1] Zhou, Da-Wei, et al. \\\"Expandable subspace ensemble for pre-trained model-based class-incremental learning.\\\" CVPR, 2024.\\\\\\n> [2] Qiao, Jingyang, et al. \\\"Prompt Gradient Projection for Continual Learning.\\\" ICLR, 2024.\\\\\\n> [3] Zhou, Da-Wei, et al. \\\"Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need.\\\" IJCV, 2024.\\\\\\n> [4] McDonnell, Mark D., et al. 
\\\"Ranpac: Random projections and pre-trained models for continual learning.\\\" NeurIPS, 2024. \\\\\\n> [5] Wang, Xiao, et al. \\\"Orthogonal Subspace Learning for Language Model Continual Learning.\\\" EMNLP, 2023.\\\\\\n> [6] Chitale, Rajas, et al. \\\"Task Arithmetic with LoRA for Continual Learning.\\\" arXiv preprint arXiv:2311.02428 (2023).\\\\\\n> [7] Liang, Yan-Shuo, and Wu-Jun Li. \\\"InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning.\\\" CVPR, 2024.\"}" ] }
5T3gpfUam7
Memory retaining finetuning via distillation
[ "Zitong Yang", "Aonan Zhang", "Sam Wiseman", "Xiang Kong", "Ke Ye", "Dong Yin" ]
Large language models (LLMs) pretrained on large corpora of internet text possess much of the world's knowledge. Following pretraining, one often needs to conduct continued pretraining on certain capabilities such as math and coding, or apply "posttraining" (a.k.a., alignment) techniques to make the models follow users' instructions and align them with human preferences. One challenge during these finetuning stages is that the model can lose the pretraining knowledge or forget certain capabilities (e.g., in-context learning ability). Moreover, although there exist strong open-weight LLMs such as Llama 3, both their pretraining and posttraining data are not open to the public, making it difficult to mix the finetuning data with the models' own pretraining data as a solution for mitigating forgetting. We propose label annealing, a method that mitigates forgetting during finetuning without requiring access to the original pretraining data. Label annealing distills pretraining knowledge during finetuning by adding a KL divergence term to the loss function, regularizing the divergence between the finetuned model's predictions and those of the initial pretrained model. In mathematics and code finetuning, label annealing improves the model's performance in target domains without sacrificing other capabilities of the pretrained model. In alignment finetuning, our method introduces a smooth tradeoff between the instruction-following capability and the pretraining knowledge. We complement our empirical investigation with a mathematical model with overparameterized linear regression that provides geometric intuition for why label annealing would help.
[ "finetuning", "alignment", "forgetting", "distillation" ]
Reject
https://openreview.net/pdf?id=5T3gpfUam7
https://openreview.net/forum?id=5T3gpfUam7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kLmgSNwLoy", "jxQTeW3y4K", "ZyomaP3Wro", "Q8n0qOJsFo", "KCUQ4wo9pJ", "G508MRsnjm", "B9obESBOZ6", "78i3PNTw7u", "5YRrbVi4qE", "4UvwKBwO9y" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732395019707, 1737523839844, 1732394997115, 1732816281435, 1732509358345, 1730696337825, 1730586127998, 1730644957319, 1732394978805, 1734880204735 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7459/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7459/Authors" ], [ "ICLR.cc/2025/Conference/Submission7459/Reviewer_TUzD" ], [ "ICLR.cc/2025/Conference/Submission7459/Reviewer_Qr1v" ], [ "ICLR.cc/2025/Conference/Submission7459/Reviewer_BE4W" ], [ "ICLR.cc/2025/Conference/Submission7459/Reviewer_TUzD" ], [ "ICLR.cc/2025/Conference/Submission7459/Reviewer_Qr1v" ], [ "ICLR.cc/2025/Conference/Submission7459/Authors" ], [ "ICLR.cc/2025/Conference/Submission7459/Area_Chair_VnQw" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their positive assessment and constructive feedback. We address each point below:\", \"regarding_the_comparison_with_other_penalty_based_methods\": \"We agree that comparing with methods like EWC would strengthen the paper. Our focus was on demonstrating that optimizing a data-dependent objective (e.g., KL divergence) outperforms data-independent penalties (e.g., L2).\\n\\n On Section 3.3 baselines: We do have the L2 penalty-based baseline in the paper section. We find that on L2 regularization, even with very large regularization, the model can\\u2019t recover the same behavior as the initial mode.\", \"regarding_memory_requirements\": \"The additional memory from loading a frozen copy is modest, since we don't need to store gradients or optimizer states for the frozen model. 
The full backward and forward pass on a neural network requires 4 times the memory needed for one network weights (1 for forward pass, 1 for gradient, and 2 Adam states). Therefore, using a frozen model doesn\\u2019t increase memory requirement by 2 times, but 1.25 times.\", \"to_address_the_specific_questions\": [\"L2 regularization and past performance: Even with optimal hyperparameter selection, L2 regularization provides limited benefit because it cannot consider how the fine-tuning data interacts with the model's existing knowledge. This is demonstrated in both Table 1 and 2, where L2 regularization fails to prevent significant drops in source benchmarks.\", \"Math performance with replay: Yes, we believe this is indeed the case. The improved math performance with replay suggests that some mathematical knowledge exists in the pretraining corpus.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We acknowledge the writing issues identified and have corrected the grammatical errors and unclear sentences in the lines mentioned. We have updated the submission file with fixed typos and improved presentation.\", \"regarding_technical_concerns\": [\"Use of KL divergence: We agree that applying KL divergence to language modeling is not novel. Our contribution lies in demonstrating that this technique can effectively prevent forgetting during practical large language model finetuning, supported by comprehensive empirical results across different scenarios.\", \"Model merging comparison: Thank you for suggesting the ExPO reference. We will include this discussion in related work. Our paper focuses specifically on methods that only require access to model weights, without additional information available.\", \"Benefits of retaining general knowledge: The importance of preventing forgetting is demonstrated clearly in our empirical results. 
For example, direct finetuning leads to significant drops in general capabilities (e.g., 14.19% drop in TriviaQA). When finetuning a language model for specific capabilities, we don't want it to forget its basic abilities that make it useful as a general-purpose model.\", \"Fixed hyperparameters and instruction tuning: Our instruction tuning setup achieves substantial improvements on standard benchmarks \\\\- improving AlpacaEval from 0% to 10%, surpassing the performance of Llama2-chat. This validates our hyperparameter choices.\", \"Computational overhead: The additional KL loss term increases training time from 21s to 28s per batch on an 8xH100 node for all the experiments mentioned in the paper. The overhead is modest because the frozen copy of initial weights only requires forward passes, not backward passes.\"], \"regarding_your_specific_questions\": [\"Data sizes: As mentioned in Section 3.2, we use 179M tokens for math finetuning and 30M tokens for code finetuning.\", \"Open-source pre-training data: We explored this direction using RedPajama data, with results reported in the limitations section.\", \"HumanEval performance: Label annealing intentionally introduces a tradeoff between downstream and existing capabilities, allowing practitioners to select their preferred operating point based on application needs.\"]}", "{\"comment\": \"Thank you for the response. I will maintain my score.\"}", "{\"comment\": \"Thank you for your response. Regarding model merging, I believe it serves as a strong baseline that should be considered for comparison. This approach can be implemented with access to weights alone, not just ExPO.\\n\\nAdditionally, the motivation for balancing general and specific abilities lacks clarity and robustness. 
The results presented do not convincingly demonstrate an advantage in improving specific tasks while preserving general knowledge.\\n\\nIt\\u2019s also not accurate to assert a drop in general capabilities without broader validation across more tasks and algorithms. Even when using the same data, methods like DPO and SimPO can yield different outcomes for general abilities.\\n\\nLastly, there should be specific examples that clearly illustrate the advantages of a specialized generalist approach (retaining general knowledge while enhancing specialized abilities) for particular tasks. For instance, is there a case in math or coding where retaining general knowledge is essential to solving the problem?\\n\\nConsidering these concerns, I believe the paper requires further revisions to clarify its motivations and provide additional experiments to validate its effectiveness. Therefore, I choose to maintain my current rating.\"}", "{\"summary\": \"This paper introduces the idea of label annealing, which is designed to reduce the problem of forgetting knowledge that was learned during pretraining, as part of finetuning. This is specifically done without having access to the original pretraining data, as is common in a lot of modern practice, e.g. with the LLaMa open-weight models and that precludes direct application of techniques such as experience replay. The idea here is to keep an independent frozen version of the pretrained model and penalize the finetuning with a relative entropy term between that and the finetuned model with respect to predicted token probabilities. 
The validity of this approach is demonstrated through fairly extensive experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is motivated by the very practical concern of preventing forgetting as part of the finetuning process in a sociotechnical setting where the pretraining data is not known (or perhaps can only be approximated as in the RedPajama dataset). As such, any solution in this direction is useful.\\n\\nThe approach is straightforward, backed by basic theoretical explanation, and directly implementable.\\n\\nSections 1 and 2.2 are quite clearly written.\", \"weaknesses\": \"The experimental section is extensive, but also sometimes hard to understand what exactly is demonstrated by the results. Indeed, in many of the tables, the finetuning doesn't seem to help in advanced benchmarks such as in math.\", \"questions\": \"What, specifically, are the advantages of the proposed technique that the experiments show? Is it the ability to have a smooth tradeoff curve between the pretrained and finetuned settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper provides a solution to the forgetting of pre-training knowledge assuming that pre-training data is unavailable by using label annealing. This method adds a KL divergence term to the loss and regularizes the divergence of the fine-tuning model's predictions to those of the initial model. 
There are both empirical examples when fine-tuning on different domains, like math and code, instruction fine-tuning, as well as a mathematical intuition for why the solution mitigates forgetting.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-The paper offers a novel method to mitigate forgetting using regularization.\\n\\n-The paper does a great job of motivating the problem and showing how direct fine-tuning leads to forgetting, as well as shows both empirical and mathematical motivation.\\n\\n-It was really good to compare how this method works both for scenarios where knowledge might be repeated as well as where knowledge is less likely to be repeated, and thus fine-tuning cannot rely on repeated pre-training data.\", \"weaknesses\": \"-Section 3.2: The method is compared to L2-regression, but for a complete comparison, there should be other commonly used penalty based methods (e.g. EWC (Kirkpatrick et al. 2017)).\\n\\n-Section 3.3: There are no baselines apart from direct fine-tuning. It would be especially motivating to add other penalty based methods where the hyperparameter can be altered to show the same type of curve and offer direct comparison.\\n\\n-The method requires loading 2 models into memory at once for each step (the initial model and fine-tuning model). As commonly used LLMs get larger for many language tasks, scaling this may become impractical. \\n\\n-Table 3: It would be helpful to add the results of only using label annealing for direct comparison\", \"questions\": \"-It is surprising that L2 regularization does not maintain past task performance with optimal hyperparameter selection. In Table 1, does changing the hyperparameter on the regularization term not allow the model to retain past task information better?\\n\\n-It is interesting that in Table 3, using replay improves math performance even more so than direct fine-tuning. 
Does this imply that math data was perhaps part of the pre-training, and thus replaying it allows the model to do better on the current math fine-tuning task?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the issue of catastrophic forgetting in large models during finetuning, specifically when capabilities such as in-context learning are lost. The authors propose a method called \\u201clabel annealing\\u201d to mitigate this issue without needing access to the original pre-training data. This is achieved by incorporating a KL divergence term in the loss function to keep the finetuned model close to the pre-trained model. It would be valuable to discuss how this method compares directly to simpler approaches, such as parameter merging.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The problem of knowledge forgetting during instruction finetuning is important.\", \"The paper provides solid theoretical analysis.\"], \"weaknesses\": [\"The paper is difficult to read due to numerous grammatical errors and typos. These issues detract from the overall clarity and academic tone. For example:\", \"Line 133: \\u201cmore effectove\\u201d should be corrected to \\u201cmore effective.\\u201d\", \"Line 136: The phrase \\u201cLabel smoothing is similar to our proposal is that\\u2026\\u201d should be revised for clarity, perhaps to \\u201c\\u2026 is similar to our proposal in that\\u2026\\u201d\", \"Several sentences, such as those on Lines 167, 175, 202, 205, and 215, are unclear or poorly constructed. These should be reorganized into concise, well-structured parts.\", \"The proposed use of KL divergence is not particularly novel, as similar techniques have been widely used in instruction finetuning [1] and model alignment (e.g., DPO). 
More differentiation from existing methods is needed.\", \"The rationale behind addressing forgetting issues by potentially mixing pre-training and instruction data is not well justified. In real-world implementations like Alpaca, UltraChat, Wizard, and OpenChat, researchers tend to focus on building more diverse instruction datasets rather than revisiting pre-training data. There are also many works on mixing general and specific instruction for instruction tuning [2-3].\", \"The paper does not provide sufficient examples to demonstrate the benefits of retaining general knowledge obtained during pre-training. This is especially problematic given that retaining knowledge could lead to conflicts and potentially degrade performance on target-domain tasks, as seen in results like Table 2. Additionally, while the concept of alignment tax is mentioned, the paper does not address the potential impact on performance for source tasks like MMLU and TriviaQA during instruction tuning.\", \"There is no comparison with model merging techniques such as ExPO [4], which would be relevant for a comprehensive evaluation of the proposed approach.\", \"[1] Shi, Zhengyan, et al. \\\"Instruction Tuning With Loss Over Instructions.\\\" arXiv preprint arXiv:2405.14394 (2024).\", \"[2] Yuan, Lifan, et al. \\\"Advancing llm reasoning generalists with preference trees.\\\" arXiv preprint arXiv:2404.02078 (2024).\", \"[3] Zhang, Kaiyan, et al. \\\"Ultramedical: Building specialized generalists in biomedicine.\\\" arXiv preprint arXiv:2406.03949 (2024).\", \"[4] Zheng, Chujie, et al. \\\"Weak-to-strong extrapolation expedites alignment.\\\" arXiv preprint arXiv:2404.16792 (2024).\"], \"questions\": [\"Since the pre-training data for LLaMA is not publicly available, have the authors considered using open-source datasets like fine-web [1] as a substitute? 
It would be helpful to know if the proposed method outperforms simply using pre-training data directly.\", \"The claim in Line 235 that \\u201ca fixed set of training hyperparameters\\u201d is used seems problematic. Instruction tuning is sensitive to hyperparameter choices, and results should ideally be supported by a thorough grid search to ensure reliability.\", \"The additional KL loss term likely increases the overall training cost. The authors should address how this cost is managed or justified.\", \"What is the size of the data generated for math/code finetuning? It\\u2019s important to consider whether scaling the instruction size could negate the differences between finetuning strategies.\", \"The method appears less effective than direct finetuning for tasks like HumanEval. What specific advantages does the label annealing method offer for domains like math, if performance gains in target domains are more critical than knowledge retention?\", \"[1] Penedo, Guilherme, et al. \\\"The fineweb datasets: Decanting the web for the finest text data at scale.\\\" arXiv preprint arXiv:2406.17557 (2024).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their detailed assessment and constructive feedback. In response to the reviewer's specific comments:\\n\\n What are the specific advantages demonstrated in the experiments, and do they primarily show smooth tradeoff capability? The experiments demonstrate two distinct benefits of label annealing:\\n\\n* In scenarios where forgetting can be mitigated without compromising target performance (Tables 1 and 2), label annealing preserves improvements while preventing knowledge loss. 
For example, in math finetuning (Table 1), label annealing maintains the improvement in MATH (17.94%) and GSM8K (61.78%) while mitigating the drop in pretraining metrics (MMLU and TriviaQA) that occurs with direct finetuning. Similarly, for code finetuning (Table 2), label annealing preserves most of the HumanEval gains while preventing the dramatic forgetting in mathematics benchmarks that occurs with direct finetuning (MATH drops by 14.73% with direct finetuning). \\n* When there are inherent conflicts between pre-training and fine-tuning objectives (Figures 2 and 3), label annealing introduces a smooth tradeoff between the two domains, allowing practitioners to select their preferred operating point.\", \"regarding_limited_improvement_in_advanced_benchmarks\": \"The improvements we observe in mathematics benchmarks (e.g., \\\\+2.02% on MATH, \\\\+10.61% on GSM8K) come from fine-tuning on mathematics-related text, not from direct supervised learning on the benchmark training sets. This is an important distinction, as our method improves generalization through domain adaptation rather than through direct task optimization, which typically yields larger but potentially less generalizable gains.\\n\\nWe acknowledge that the presentation of experimental results could be clearer. In our revision, we will highlight the key takeaways more explicitly around each set of results to better guide readers through the demonstrated benefits.\"}", "{\"metareview\": \"The paper tackles the challenge of catastrophic forgetting in large models during fine-tuning, particularly the loss of capabilities such as in-context learning. 
To address this, the authors propose a method called \\u201clabel annealing,\\u201d which incorporates a KL divergence term in the loss function to maintain the fine-tuned model's proximity to the pre-trained model, all without requiring access to the original pre-training data.\\n\\nThe strengths of the paper lie in its focus on an important research question and its solid theoretical analysis. However, the work has significant weaknesses, including the lack of robust comparisons with key baselines and shortcomings in presentation quality.\\n\\nIn light of these limitations, I recommend rejecting this submission.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, Reviewers Qr1v and TUzD actively engaged with the authors.\\n\\nAfter carefully reviewing all the concerns raised by the reviewers, I found that the authors failed to adequately address two critical issues: (1) a lack of empirical comparison with key baselines and (2) poor presentation quality.\\n\\nA major concern raised by all reviewers is the lack of valid comparisons with key baselines, such as model merging. Model merging, which relies solely on leveraging weights, serves as a strong baseline that effectively balances generalizability and task specificity. Unfortunately, the authors did not provide a substantial response to this important concern.\\n\\nAdditionally, the poor presentation quality of the submission was noted by multiple reviewers. Despite receiving this feedback during the rebuttal phase, the authors did not make any significant adjustments to improve the clarity or quality of their work.\\n\\nThese unresolved issues severely limit the validity and potential impact of the paper, reducing its appeal to the broader research community. Consequently, I believe the submission does not meet the high standards expected for acceptance at the prestigious ICLR conference.\"}" ] }
5Ro7JT5Vaf
Universal Time-series Generation using Score-based Generative Models
[ "Haksoo Lim", "Jaehoon Lee", "Minjung Kim", "Sewon Park", "Noseong Park" ]
Score-based generative models (SGMs) have demonstrated unparalleled sampling quality and diversity in numerous fields, such as image generation, voice synthesis, and tabular data synthesis. Inspired by these outstanding results, we apply SGMs to synthesize time-series by learning their conditional score function. To this end, we present a conditional score network for time-series synthesis and derive a denoising score matching loss tailored for our purposes, namely a conditional denoising score matching loss for time-series synthesis. In addition, our framework is flexible enough that both regular and irregular time-series can be synthesized with minimal changes to the model design. Finally, we obtain exceptional synthesis performance on various time-series datasets, achieving state-of-the-art sampling diversity and quality.
[ "Time-series generation", "Diffusion models", "Signal processing" ]
https://openreview.net/pdf?id=5Ro7JT5Vaf
https://openreview.net/forum?id=5Ro7JT5Vaf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rXctxJAukH", "eBgWtfhXAb", "PBuLASFOUn", "3VuFGBFvcH", "1dTCf3LJUu" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732800128150, 1730565973385, 1729810467158, 1730552404698, 1729637338565 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10657/Authors" ], [ "ICLR.cc/2025/Conference/Submission10657/Reviewer_D8xr" ], [ "ICLR.cc/2025/Conference/Submission10657/Reviewer_TfSv" ], [ "ICLR.cc/2025/Conference/Submission10657/Reviewer_1RUP" ], [ "ICLR.cc/2025/Conference/Submission10657/Reviewer_Bckg" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes a score-based generative model for time-series generation and argues that a time-series score-based generative model should be autoregressive. To that end, an autoregressive score-matching objective is introduced. Since this method is designed to handle both regular and irregular time-series forecasting, it is termed \\\"Universal\\\". The score-model is trained in the latent-space of an RNN-based encoder-decoder model. The paper also theoretically justifies the validity of their autoregressive score-matching objective. 
The paper considers several baselines and even adapts some of them to irregular time-series to showcase the advantages of their method based on predictive-score/discriminative score.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Proposes a score-based universal time-series synthesis method.\", \"Argues for an autoregressive-style generation for time-series and introduces a new loss objective and related architectural changes.\", \"An extensive list of baselines is used to showcase the benefit of TSGM.\"], \"weaknesses\": [\"The autoregressive score-based loss objective and related theory is incremental.\", \"The results are evaluated with predictive-score and discriminative-score only.\", \"The need for autoregressive structure is not justified empirically.\", \"From Table 8, it appears that the models are trained to generate 24 time-steps and so, one may say that this paper focuses on modelling shorter time-series. Generalization of this model when extrapolating to longer sequence-lengths (without retraining) is not evaluated.\"], \"questions\": \"1) How does TSGM perform under forecasting? In my understanding, score-models can be flexibly adapted for inpainting and it is possible to evaluate based on imputation/forecasting errors. Please let me know if I am misunderstanding something.\\n2) Appendix B contains experiments where TimeGrad/CSDI are applied to generation tasks and compared with TSGM. Can TSGM be compared with TimeGrad/CSDI when applying to forecasting/imputation tasks? \\n3) Appendix M consists of one-shot generation experiments where one attempts to directly generate the time-series data. I feel that a fair comparison with TSGM should be based on one-shot generation of the sequence ${h_i}$. \\n4) Can you provide empirical justification for autoregressive generation? \\n5) In Eq. 8, why is the network $M_\\\\theta$ outputting the score for previous n-1 time-steps that are already generated? 
Is this a typo? Could you please clarify the rationale behind this design if it is not a typo?\\n6) A common evaluation metric of generative models is to train a downstream model on generated samples (e.g., classification-accuracy score). Is it possible to train a Patch-TST model on the generated samples and use that to perform forecasting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a score-based generative model for time series generation. The paper presents a score matching loss for this task, and shows that it is equivalent to another form which can be modeled with RNN latents. The paper then applies the proposed method for four different time series datasets and two different time series generation setups. Empirical results show that the proposed method outperform baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"It is a novel idea to model the proposed score matching loss with RNN latents, and this is validated by the main theorem of this paper, which shows the equivalence of losses if the target score is additionally conditioned on $x_n^0$ as well. This is a simple yet effective way to perform score matching for this task; otherwise it will be expensive to optimize the vanilla loss function for time series generation.\", \"In the main experiments, the proposed method outperforms the baselines with a noticeable gap. The improvements are consistent across different datasets and time series types. 
The results show the effectiveness of the proposed method on these datasets.\", \"In the appendix, the paper conducts extensive ablation studies with respect to baseline methods, and show that prior methods including TimeGrad and CSDI could be extended to this task but they do not perform well.\"], \"weaknesses\": [\"The presentation of this paper needs significant improvement.\", \"In terms of writing, it is unclear across the paper (especially in section 3), making it very hard to follow. For example, many definitions or terminologies remain unexplained, some important details of the models are hidden from the paper, and with the current structure one has to jump back and forth to understand the notations and technical details but they are in fact very straightforward.\", \"The paper does not justify the importance of the time series generation task compared to other tasks such as forecasting and classification. The key question is: why is it necessary to train a time series generation model? There are some quick answers to my mind: generating synthetic data for downstream tasks such as classification, understanding data bias and structure, or some application needs data generation given certain context. The first two are not addressed by the paper, and the last requires conditional generation, which is not addressed as well.\", \"The paper significantly over-claims.\", \"While the title indicates it is a universal model, the proposed method only deals with a restricted class of time series data and two tasks with uniform and non-uniform intervals. With \\\"universal\\\", I would expect a model that applies to a wide range of tasks (e.g. 
synthesis, forecasting or continuation, blank filling or inpainting, etc), data types (continuous, discrete, quantized), and domains (medical, environment, finance, speech, signal processing, etc).\", \"In terms of novelty, while it is novel to model the score matching loss with RNN latents, it is less novel to combine autoregressive modeling with score-based / diffusion models as many prior papers have worked on this problem and this paper only presents a variant of the loss function. Therefore, the first contribution is questionable.\"], \"regarding_the_methology\": [\"This is not a major weakness, but the theoretical results presented in the paper are quite straightforward and do not provide us new understanding of this problem. From a theoretical point of view, training score based models autoregressively is itself an interesting problem and it would be useful to understand properties like model approximation properties, score function uniqueness and existence, robustness to noises of the time series, and so on. I understand that this paper is more of an empirical study, but I think adding theoretical analysis would be a huge plus to the contribution (this is not a request, but just a recommendation).\", \"The proposed method does not seem to generalize to discrete time series data, which is quite common in practice. Please correct me if I misunderstand.\", \"The experimental results are not strong enough.\", \"Regarding data: the datasets are relatively small in size, making it hard to justify the effectiveness of the proposed method in large-scale real world applications. Training score-based models on small datasets seems an overkill and leads to concern about overfitting and mode collapse. However, I do not spot such analysis in the paper. 
There is also no experiment on large-scale time series data with samples at a scale of at least 100K to 1M.\", \"Regarding time series types: the irregular data is generated with random dropping and may not represent real world cases where the timestamps and intervals can be arbitrary.\", \"Regarding tasks, many important applications are missing. As the paper claims universal time series generation, I would expect to see experiments on more challenging and interesting tasks from signal processing such as speech signal generation.\", \"The experimental results do not reveal the usefulness of time series data generation. I would expect to see results on how the synthetic data helps either in downstream applications (e.g. data augmentation, anomaly detection, adversarial training) or in our understanding of the data structure (e.g. bias, spurious correlations, data manifolds).\"], \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces TSGM, a score-based generative model designed to generate time series data, including both regular and irregular time series. TSGM consists of an RNN autoencoder and a score-based model; the former is used to transform time series into latent representations, while the latter handles the generative process. Considering the autoregressive characteristics of time series data generation, TSGM derives its loss function based on denoising score matching. Experiments were conducted on four datasets, and the results demonstrate that TSGM outperforms methods based on VAEs and GANs in terms of generation performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is written clearly.\\n\\nThe loss function is novel for SGM-based networks on time series data.\", \"weaknesses\": \"1. 
The paper does not adequately cover recent work, especially from 2023 (only two survey papers are cited). Below are a few examples of ICLR 2024 works [1-3] that should be considered. This oversight significantly impacts the presentation of the paper's novelty, and these works should be included in the experimental comparisons to better validate the method's performance.\\n\\n [1] Diffusion-TS: Interpretable Diffusion for General Time Series Generation\\n\\n [2] Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns\\n\\n [3] Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs\\n\\n2. The experiments should compare the proposed method with diffusion-based time series generation approaches, which have been frequently utilized in recent works, such as Diffusion-TS and MG-TSD [4]. Additionally, the paper needs to articulate the differences between score-based and diffusion-based methods to strengthen the motivation of the work.\\n\\n [4] MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process. In ICLR 2024.\\n\\n3. As a data augmentation method, the experiments need to further compare the proposed approach in practical tasks such as time-series forecasting and imputation. And it is necessary to investigate whether the method can generate valuable data in scenarios with more complex data distributions, such as earthquake prediction and extreme weather forecasting, where class/data imbalance challenge is prominent. This will help to better understand the scope and applicability of the proposed method.\", \"questions\": \"Please refer to weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents score-based generative models for generating time series. 
The main contribution of this paper is that the authors presented a denoising score matching loss for generating time series. The model has been trained with several time series datasets, and the authors compared their work with various GAN-based time-series generative models.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This model presented in this paper works on both regular and irregular time-series generation.\\n2. Presented denoising score matching loss function which aids in the time series generation process.\\n3. Extensive comparison on GAN-based time-series generative models.\", \"weaknesses\": \"1. On page 4, line 184, the authors claimed, \\\"**Up to our survey, there's no paper about diffusion models considering autoregressiveness in time series generation.**\\\" However, Hoogeboom et al. [1] consider texts and audio synthesis in their work, where an autoregressive diffusion model has been used. Besides that, check out these works [2-4]. Also, can the authors discuss the difference between their method with existing autoregressive models [1, 4].\\n\\n2. The authors did a very comprehensive comparison with different GAN-based architectures, but a comparison with a diffusion-based time series model, e.g., with [4], is needed. However, it's noteworthy that the authors mentioned in the appendix that they considered comparing their work with [2, 3], but due to their \\\"**fundamental mismatch between their model design,**\\\" they could not compare. So, can the authors explain how they can adapt their model for a fair comparison with the existing models [2,3]?\\n\\n3. Some of the information from the appendix should be in the main section as it explains some situations better, e.g., why the authors did not consider comparing their work with some SOTA diffusion-based models. I feel like this is very important information to keep in the main section of the paper. 
Perhaps the authors can move the key information regarding the model comparison to the **Discussion** or **Limitation** section.\\n\\n4. The paper requires further improvement in writing:\\n\\t1. Lack of coherence between paragraphs and/or sentences\\n\\t2. Check minor typos\\n\\t\\t1. In page 2, line 77, \\\"**in that both both regular.....**\\\"\\n\\t3. Try to stick with one table format for all tables, e.g. Tables 7, 9, 10, and 14 have a different format than the other tables.\", \"references\": \"1. Hoogeboom, Emiel, et al. \\\"Autoregressive Diffusion Models.\\\"\\u00a0_International Conference on Learning Representations_, 2022\\n2. Tashiro, Yusuke, et al. \\\"Csdi: Conditional score-based diffusion models for probabilistic time series imputation.\\\"\\u00a0_Advances in Neural Information Processing Systems_\\u00a034 (2021): 24804-24816.\\n3. Rasul, Kashif, et al. \\\"Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting.\\\"\\u00a0_International Conference on Machine Learning_. PMLR, 2021.\\n4. Yuan, Xinyu, and Yan Qiao. \\\"Diffusion-TS: Interpretable Diffusion for General Time Series Generation.\\\"\\u00a0_The Twelfth International Conference on Learning Representations_, 2024\", \"questions\": \"1. Have the authors run experiments with longer sequence lengths, e.g., more than 1000? Using an RNN-based architecture could potentially limit the model's capability.\\n2. A common tendency of generative models is to leak training data [1]. Did the authors do any experiments to check if the generative model is leaking any information?\", \"reference\": \"1. Chen, Dingfan, et al. \\\"Gan-leaks: A taxonomy of membership inference attacks against generative models.\\\"\\u00a0_Proceedings of the 2020 ACM SIGSAC conference on computer and communications security_. 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }