Dataset columns:
forum_id: string (length 9-20)
forum_title: string (length 3-179)
forum_authors: sequence (length 0-82)
forum_abstract: string (length 1-3.52k)
forum_keywords: sequence (length 1-29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39-50)
forum_url: string (length 41-52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
12B3jBTL0V
Modeling the Human Visual System: Comparative Insights from Response-Optimized and Task-Optimized Vision Models, Language Models, and different Readout Mechanisms
[ "Shreya Saha", "Ishaan Chadha", "Meenakshi Khosla" ]
Over the past decade, predictive modeling of neural responses in the primate visual system has advanced significantly, largely driven by various deep neural network approaches. These include models optimized directly for visual recognition, cross-modal alignment through contrastive objectives, neural response prediction from scratch, and large language model embeddings. Likewise, different readout mechanisms, ranging from fully linear to spatial-feature factorized methods, have been explored for mapping network activations to neural responses. Despite the diversity of these approaches, it remains unclear which method performs best across different visual regions. In this study, we systematically compare these approaches for modeling the human visual system and investigate alternative strategies to improve response predictions. Our findings reveal that for early to mid-level visual areas, response-optimized models with visual inputs offer superior prediction accuracy, while for higher visual regions, embeddings from Large Language Models (LLMs) based on detailed contextual descriptions of images and task-optimized models pretrained on large vision datasets provide the best fit. Through comparative analysis of these modeling approaches, we identified three distinct regions in the visual cortex: one sensitive primarily to perceptual features of the input that are not captured by linguistic descriptions, another attuned to fine-grained visual details representing semantic information, and a third responsive to abstract, global meanings aligned with linguistic content. We also highlight the critical role of readout mechanisms, proposing a novel scheme that modulates receptive fields and feature maps based on semantic content, resulting in an accuracy boost of 3-23% over existing state-of-the-art readouts across all models and brain regions. Together, these findings offer key insights into building more precise models of the visual system.
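The abstract contrasts fully linear readouts with spatial-feature factorized ones. As a point of reference, the sketch below shows the core idea of a spatial-feature factorized linear readout: each voxel's predicted response is a channel-weighted ('what') and spatially-masked ('where') sum over an encoder feature map. This is a minimal illustrative NumPy sketch, not the paper's implementation; all names (`factorized_readout`, `spatial_mask`, `channel_weights`) are assumptions for illustration.

```python
import numpy as np

def factorized_readout(features, spatial_mask, channel_weights):
    """Spatial-feature factorized linear readout (illustrative sketch).

    Predicted response of voxel v:
        r_v = sum over c, x, y of  w_v[c] * m_v[x, y] * F[c, x, y]

    features:        (C, H, W) encoder feature map for one image
    spatial_mask:    (V, H, W) per-voxel 'where' masks
    channel_weights: (V, C)    per-voxel 'what' weights
    returns:         (V,)      predicted voxel responses
    """
    # Pool each voxel's mask over the spatial dimensions -> (V, C)
    pooled = np.einsum('vhw,chw->vc', spatial_mask, features)
    # Weight the pooled channels per voxel and sum -> (V,)
    return np.einsum('vc,vc->v', channel_weights, pooled)
```

The factorization keeps the parameter count at V*(C + H*W) instead of the V*C*H*W required by a fully linear readout over the same feature map, which is why this family of readouts scales to whole-brain voxel prediction.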
[ "Neuro AI", "vision", "deep neural networks", "representations", "fMRI encoding" ]
Reject
https://openreview.net/pdf?id=12B3jBTL0V
https://openreview.net/forum?id=12B3jBTL0V
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xQequka5X0", "x9oktfMsLf", "wwGd8jctYv", "wC6o7dYWmb", "wAkmXtsjT9", "utTMjSnHGs", "uL5DbgkGBJ", "tnkvI47y0d", "ovwYm5g8g4", "lA1ptXSpNS", "kzJgjdwUY1", "kiGLTGcZKO", "j0cfm2AWsx", "i1MgRmxPBU", "aPdzthMYl9", "aCWmU0nbf4", "ZPVt1p3fLf", "Ya64stSnzv", "WdNIg42CAt", "UaUFY3CsAQ", "UP05mQm9ep", "TUszavZiqH", "SSRlSmxtlc", "Prtcgxa6k9", "OnA56SxN3Z", "OiBuWfBOgX", "OWPZzrGG7V", "MYeEbfRCVV", "MDnk8TyQ7i", "Ln2B8OOMNV", "LKPlrECEOt", "L7zZfNJPPZ", "JuvOCMFsAX", "Ixc0CJNCyH", "I4IbucC6wu", "EIGz0MH50k", "DKVq4tizNg", "BI1cjmL7AZ", "AA7RUwSZCD", "5mKT1pEU2R", "4zAbGsUHgD", "1u1KsZzJrP" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732406648970, 1733106809831, 1730712701657, 1732319163349, 1729805409771, 1732459483451, 1732233994486, 1733187553549, 1732992696848, 1732518300125, 1732402681448, 1733182895820, 1733032093876, 1732992818727, 1732313767717, 1732992839534, 1732004168336, 1732001639873, 1737523585656, 1730717746177, 1733105970290, 1732760426932, 1732002582474, 1732233931889, 1730370492864, 1732314734307, 1732402998211, 1732313592152, 1733150607032, 1733164508289, 1732521480998, 1732512264663, 1733164826803, 1733020395999, 1732648392463, 
1732234556032, 1732313474371, 1734762418609, 1732004198859, 1732541130900, 1732518062049, 1732328011295 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_gWb8" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_rgy2" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_rgy2" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_gWb8" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_zmHZ" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_EMZd" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_rgy2" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_gWb8" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3618/Reviewer_EMZd" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Area_Chair_YG17" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_EMZd" ], [ "ICLR.cc/2025/Conference/Submission3618/Authors" ], [ "ICLR.cc/2025/Conference/Submission3618/Reviewer_EMZd" ] ], "structured_content_str": [ "{\"title\": \"Continued Official Response to Reviewer gWb8\", \"comment\": \"**Addressing Weakness 16 -** The affine parameters are not learnt on population receptive fields provided in NSD, and are independent of them. One of the major advantages of using STNs is that it only uses visual stimuli to outperform existing readouts that depend upon additional information such as retinotopic mapping or population receptive fields. The parameters of STN are learnt with a help of a pretrained image encoder that highlights the most important parts of an image for each voxel. To be more specific, the Semantic Spatial Transformer Readout applies affine transformations for each channel of the encoder feature representations and the spatial features (\\u2018where\\u2019 matrix), which are learnt based on the \\u2018important\\u2019 parts of the image. We provide more motivation behind the STN readout below:-\\n\\n**a. 
Spatial Modulation of Spatial Masks (Spatial Receptive Fields):**\\nThe STN allows voxel-specific spatial modulation of receptive fields (RFs), inspired by studies demonstrating the dynamic adaptability of RFs to stimulus properties:\\n- RF sizes can expand or contract based on contrast (Sceniak et al., Nature Neuroscience, 1999).\\n- RFs can also shift or reshape in response to contextual or attentional influences (Womelsdorf et al., Nature Neuroscience, 2006).\\n\\nUnlike fixed spatial masks used in other linear readouts (e.g., Spatial-Feature Factorized Linear or Gaussian 2D readouts), the STN employs affine transformations to capture stimulus-dependent changes in spatial receptive fields, such as translation, scaling, rotation, and shearing, in line with existing neuroscientific results. This flexibility enables more accurate modeling of neural responses that exhibit dynamic spatial RF properties.\\n\\n**b. Spatial Modulation of Feature Maps:** The STN also spatially transforms feature maps (i.e. the encoder channels). Each channel typically encodes distinct attributes (e.g., edges, textures, shapes), and the ability to apply channel-specific transformations allows the STN to adapt to their unique geometric properties. For instance, one channel may require scaling to emphasize fine-grained details, while another might need rotation for orientation invariance. This flexibility is particularly advantageous for neural response modeling. Unlike object classification tasks, where we can employ augmentations tailored to known invariances (e.g. rotating an image won\\u2019t change the category label) to boost predictive accuracy, the geometric invariances of voxel responses are unknown. STN enables the network to learn these invariances directly from the data, providing a crucial advantage in predicting voxel responses across diverse visual regions. 
\\n\\n**The above reasoning will be added in the next revision of our paper.**\\n\\nIn an additional experiment focused on interpreting the STN readouts, we calculated the distance between the affine parameters corresponding to the spatial maps of each voxel for every image, relative to the mean affine parameters across all images. The L2 norm of this vector was computed for each voxel. Across all encoders, we observed that stimulus-dependent spatial shifts of the receptive field increase from lower to higher visual regions. A similar trend emerged when calculating the average spatial shifts for each channel of the feature map across images for different regions. This trend further supports the idea that higher levels of the visual cortex benefit more from learned geometric invariances and exhibit greater spatial modulation of their visual receptive fields compared to lower visual cortex regions. This modulation includes phenomena such as receptive field expansion, contraction, or shifts in response to different stimuli.\\n\\n**Please see Supplementary Fig. 10 in our revised paper for the results of this analysis. This analysis has been added in Supplementary section A.4 in the revised version of our paper.**\"}", "{\"title\": \"Request for Review of Updated Paper and Revisions\", \"comment\": \"We sincerely thank all the reviewers for dedicating their time and effort to provide valuable feedback on our paper. Your thoughtful suggestions have been instrumental in enhancing the quality of our work. We have carefully addressed each of your comments and made the necessary updates to our manuscript accordingly. 
We kindly request all reviewers to review the updated version of our paper and consider revising their scores if they find the revisions satisfactory.\"}", "{\"summary\": \"Using fMRI data from the Natural Scenes Dataset, the authors investigate how different encoder backbones and readout mechanisms predict neural responses in the human visual system.\\n\\nThey compare a range of models, including those optimized for visual recognition (e.g., AlexNet, ResNet), neural response prediction, and language or vision-language tasks (e.g., CLIP, MPNET). Furthermore, they explore various readout mechanisms to map model activations to fMRI signals, introducing a novel approach (in the context of fMRI encoders) called the Semantic Spatial Transformer readout.\", \"they_find_that\": \"1. Response-optimized models perform best in early visual areas (V1-V4): This suggests that these areas prioritize perceptual features not readily captured by linguistic descriptions or task-specific training.\\n\\n2. Task-optimized and language models do better in higher visual areas: This indicates a shift towards semantic processing in these regions. Large language model embeddings, particularly those using detailed contextual descriptions, prove highly effective.\\n\\n3. Semantic Spatial Transformer readout improves performance: This novel readout consistently outperforms existing methods like linear, Gaussian, and factorized readouts, boosting accuracy by 3-23%. 
This improvement stems from its ability to learn stimulus-specific modulations of receptive fields and feature maps.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"I think overall, the authors' thorough experimentation is the greatest strength of this paper:\", \"**Systematic Comparison:** They do a reasonably systematic comparison, comparing a diverse range of models and readout mechanisms, which offers valuable insights.\", \"**Novel Readout Mechanism:** They propose (in the context of fMRI encoders) a novel readout mechanism\\u2014the using the previously proposed spatial transformer with differentiable bilinear sampling \\u2014and show that it indeed improves prediction accuracy. This is a significant contribution in the context of fMRI encoders.\", \"**Identification of Cortical Regions:** They identify three cortical regions, largely aligned with prior hypotheses about visual cortical functionality. This further strengthens existing theories.\", \"**Good Discussion of Prior Work:** The authors do a reasonably good job in discussing prior fMRI encoder literature, effectively contextualizing their research. This demonstrates a solid understanding of the field.\"], \"weaknesses\": \"The presentation of this paper could be *significantly* improved. I think the presentation quality of this paper does not match the quality of other ICLR papers I am currently reviewing or have reviewed in past years, or ICLR papers that have been accepted in prior years.\\n\\nThe figures are unclear and lack consistent formatting, notation often unexplained, and significant wasted space.\", \"my_specific_concerns_are_below\": \"1. Figure 1 -> This figure is very cluttered and very confusing. Why are the subfigure legends (A, and B) placed so randomly?\\n2. Figure 1 -> What is the 'What' weight matrix, where does it come from? 
It is obvious if you are familiar with fMRI encoder literature, but describing the interaction between the weight matrix and green using a tensor product $\\\\otimes$ seems very misleading. This tensor product is also used in the upper part as well, which is deeply misleading, and instead should be expressed as a transposed matrix product. These symbols have a meaning and without redefinition this is pretty confusing.\\n3. Figure 1 -> Why is the task optimized framework placed together with the response optimized framework without any clarification of which is which? \\n4. Figure 1 -> Why is the dense captioning output also part of the response optimized framework with a rotation equivariant network?\\n5. Where are the dense captions coming from? `Line 210` says `An image of size 424 \\u2217 424 is divided into grids of size 53 \\u2217 53`, but does not otherwise clarify the origin of the captions anywhere in the paper. What model is used here?\\n6. `Line 224` please avoid vector matrix products. This assumes row vectors which is not standard.\\n7. `Line 237` what are the shapes of the output of function $V_c(x,y)$? Could you describe this sampling in more detail?\\n8. Spatial transformer section, `Line 269` it is unclear what $\\\\theta_2$ is. This is a really important part of the paper and a key claimed contribution. Could the authors mathematically clarify how $\\\\theta_2$ plays a role?\\n9. `Line 286`, what is `AT`?\\n10. `Line 297` are the models not voxel wise models?\\n11. Figure 2A, the bottom descriptions of the models is very confusing. How is the \\\"Language Encoder\\\" being used with \\\"Semantic Spatial Transformer Readouts\\\"? \\n12. Figure 2B, please use a proper categorical color map for discrete data.\\n13. Minor -- Figure 3B, extra space before optimized\\n14. Figure 3, how are you defining \\\"better predicted by Task Optimized\\\"? Is this the best of ResNet50 or AlexNet? 
Why use these models when CLIP has been shown to be the best model of visual responses?\\n15. All flatmaps have significant wasted space.\\n16. Lack of analysis of the spatial transformer networks. While the paper claims STNs as a significant contribution, there is no visualization of the affine parameters for each voxel. Do the affine parameters focus on population receptive fields that are provided in NSD?\", \"questions\": \"Please see weaknesses section for the questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued Official Response to Reviewer rgy2 (Weakness 3)\", \"comment\": \"**Weakness 3 -** *Figure 4D shows three regions that respond more to vision model, single captions language model, or dense caption language model, but this is binarizing the difference between each pair of models. However, the claims of these three distinct regions would be strengthened by showing that high-level visual voxels, for example, have additional explained variance by the single caption language model after accounting for the variance explained by vision and dense caption models.*\\n\\nThank you for your insightful suggestion. We agree that computing the shared and unique variance explained by each model would strengthen our claims. However, implementing this would require training linear readouts after concatenating all feature spaces (e.g., using methods such as banded ridge regression). Given the high-dimensional representations, particularly those of dense caption language models, this approach would be computationally prohibitive. 
While it is an important direction for future work, we opted for the current analysis to balance clarity and computational feasibility.\"}", "{\"summary\": \"The authors aim to evaluate the extent to which LLMs (based on single or dense image captions) predict activity in high-level visual cortex relative to ImageNet-pretrained vision models or neural-response optimized vision models. They introduce a novel readout method that shows higher performance in predicting neural responses relative to linear regression and two other readout methods. They find three distinct regions of visual cortex that are better predicted by vision models, LLMs based on dense captioned images, or LLMs for single image captions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Literature review is comprehensive, and overall, the paper is clearly written.\\n\\nThe paper is not highly original building on prior readout methods, and recent work conducting large-scale benchmarking of AI models against the brain. However, the addition of dense image captions to extract representations from the language models is a nice contribution to the literature on LM alignment with visual cortex. I have some reservations mentioned below that are impacting my score, but if addressed, I think the paper may constitute a meaningful contribution to the literature.\", \"weaknesses\": \"The paper needs to better justify why a different readout method is necessary. The authors state that the predominant readout method is linear ridge regression, which has high computational and memory demands, but representational similarity analysis (RSA) is nearly as commonly used in the human literature and is less computationally intensive (Kriegeskorte et al, 2008). 
More importantly, however, the reason that the NeuroAI field tends to rely on linear regression as a readout is based on the logic that we are interested in evaluating the similarity of the representations up to a linear transformation (in representation space) without introducing non-linearities in the readout method. The authors should provide better justification for why a novel readout method is needed within that framework.\\n\\nThe Semantic Spatial Transformer has greater improvement relative to Ridge regression for the vision model than the language model (Figure 2), and vision models are found to better predict more voxels in high-level visual cortex using the Semantic Spatial Transformer readout (Figure 4) than when using Ridge regression (Figure 5). To me, it is a problem that the readout method does not provide uniform improvements across models. This suggests to me that the readout method is introducing a bias in the conclusions. However, I welcome rebuttal on why this logic is faulty. \\n\\nFigure 4D shows three regions that respond more to vision model, single captions language model, or dense caption language model, but this is binarizing the difference between each pair of models. However, the claims of these three distinct regions would be strengthened by showing that high-level visual voxels, for example, have additional explained variance by the single caption language model after accounting for the variance explained by vision and dense caption models.\", \"questions\": \"Why did the authors only use 4 of the 8 participants from NSD?\\n\\nFigure 1A is confusing. I don\\u2019t follow how each of the different readout methods are shown here. Better labels would be very helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the clarification that the authors provide on **Weakness 1**. 
However, it does not appear that this additional context has yet been incorporated in the text. This context is critical to readers to understand the contribution of the paper and incorporating it would significantly strengthen the paper. I may be mistaken that it has already been incorporated as the authors have not marked their revisions to text, and I having difficulty tracking updates that have been made.\\n\\nI also appreciate discussion on **Weakness 2**. However, the response of the authors has\\u2014if anything\\u2014has increased my concern of the bias introduced by SST. At a minimum, this needs to be more explicitly incorporated into the limitations section of the main text and for Figure 10 and Section A.4 to be referenced in the main text. I also do not find the authors argument that ridge regression is biased to favor higher-dimensional feature spaces satisfying. They reference a highly controversial non-archival workshop paper to support their argument, and higher dimensional feature spaces have not been found to uniformly enhance brain alignment (Conwell et al., 2024; Elmoznino et al., 2024). \\n\\nIn the current form, I am not able to update my score.\\n\\n\\nConwell, C., Prince, J. S., Kay, K. N., Alvarez, G. A., & Konkle, T. (2024). A large-scale examination of inductive biases shaping high-level visual representation in brains and machines. *Nature Communications*, 15(1), 9383. https://doi.org/10.1038/s41467-024-53147-y\\n\\nElmoznino, E., & Bonner, M. F. (2024). High-performing neural network models of visual cortex benefit from high latent dimensionality. PLOS Computational Biology, 20(1), e1011792. 
https://doi.org/10.1371/journal.pcbi.1011792\"}", "{\"title\": \"Continued Official Response to Reviewer zmHZ (Question: Beyond Accuracy)\", \"comment\": \"In an additional experiment focused on interpreting the STN readouts, we calculated the distance between the affine parameters corresponding to the spatial maps of each voxel for every image, relative to the mean affine parameters across all images. The L2 norm of this vector was computed for each voxel. Across all encoders, we observed that stimulus-dependent spatial shifts of the receptive field increase from lower to higher visual regions. A similar trend emerged when calculating the average spatial shifts for each channel of the feature map across images for different regions. This trend further supports the idea that higher levels of the visual cortex benefit more from learned geometric invariances and exhibit greater spatial modulation of their visual receptive fields compared to lower visual cortex regions. This modulation includes phenomena such as receptive field expansion, contraction, or shifts in response to different stimuli.\\n\\n**Please see Supplementary Fig. 10 in our revised paper for the results of this analysis. This analysis has been added in Supplementary section A.4 in the revised version of our paper.**\\n\\n*Responses to other questions continued in the next comment*\"}", "{\"title\": \"Further Response to Reviewer gWb8\", \"comment\": \"We sincerely thank the reviewer for their continued feedback and for taking the time to review our rebuttal and revised manuscript.\", \"we_acknowledge_and_apologize_for_the_following_typographical_errors_in_the_current_revision\": \"1. In Figure 1A, the word \\\"Objectives\\\" is misspelled.\\n2. The schematic diagram in the top left of Figure 1A incorrectly uses a downward arrow. 
The arrow should point upward, indicating the flow from the \\u201cReadout Layer\\u201d to \\u201cN Neurons.\\u201d\\n\\nWe deeply appreciate the reviewer bringing these issues to our attention. These are typographical errors that can be easily corrected in the camera-ready version. While we respect the reviewer\\u2019s concerns, we hope it is clear that these minor issues do not compromise the scientific integrity of the work, and we feel that labeling the paper as \\\"pretty flawed\\\" due to these errors might not fully reflect its overall quality and contributions.\\n\\nRegarding point 3, the reviewer previously noted that \\u201cplease avoid vector matrix products. This assumes row vectors which is not standard.\\u201d In response, we revised the notation in line 223 to use $E_i$ and $Y_i$ as column vectors. This approach is mathematically correct and aligns with established conventions. As such, we believe this is largely a stylistic preference rather than a substantive error.\\n\\n\\nWhile we respect the reviewer's assessment, we feel that the current score may not fully reflect the strengths of the paper, including the contributions that the reviewer acknowledged in their original feedback. We appreciate the opportunity to clarify these points and remain hopeful that the revised manuscript demonstrates the merit of our work.\"}", "{\"title\": \"Official Comment to Area and Program Chairs\", \"comment\": \"We want to thank everyone for taking the time out to review our paper. First, we want to summarize the major contributions of our paper in the domain of modeling the human visual stream using DNNs as follows -\\n\\n1. **Introduction of a novel readout mechanism:** We introduce a novel readout method utilizing Spatial Transformers, which delivers significant improvements in accuracy (3-23%) compared to previously employed SOTA readouts. \\n\\n2. 
**Identification of brain regions responsive to visual and semantic input:** Through comparative analysis of models across various visual regions, we identify three distinct regions in the human visual cortex that respond primarily to (a) perceptual characteristics of the input, (b) localized visual semantics aligned with linguistic descriptions, and (c) global semantic interpretations of the input, also aligned with language. \\n\\n3. **Comprehensive analysis of neural network models:** We conduct an in-depth analysis of various artificial neural network models, incorporating both vision and language inputs. Additionally, we explore different readout mechanisms and examine which models perform better in specific brain regions, while highlighting the unique advantages each provides.\\n\\nThe reviewers pointed out the following strengths in our paper - \\n\\n1. The reviewers appreciated the thorough experimentation done with an extensive variety of DNN encoders, readout mechanisms and input stimuli (both - currently used in literature as well as novel techniques introduced by the authors) to analyse the visual cortex of the human brain.\\n\\n2. The reviewers commend the introduction of a novel readout mechanism (Semantic Spatial Transformer Readout) to map encoder feature representations into brain voxel responses.\\n\\n3. The reviewers appreciated the analysis describing three distinct regions in the visual cortex sensitive to varying properties of visual stimuli, which further strengthens the existing theories.\\n\\n4. The reviewers also mentioned that the addition of different kinds of semantic language features is a nice contribution to the literature of LLM alignment with the visual cortex.\\n\\n5. 
Lastly, the reviewers also appreciate the extensive literature review presented by the authors on existing work.\\n\\nThe reviewers agreed that the paper would be a meaningful contribution to current research modeling neural network models similar to the visual cortex if some weaknesses were addressed, and below are the major updates made based on the suggestions - \\n\\n1. We agree with the reviewers that the initial version of Figure 1 (overview of our entire pipeline) was a little unorganised and confusing, and we have updated it in our latest version segregating the various components.\\n\\n2. We agree with the reviewers that in the initial version of the paper, the biological motivations for the Semantic Spatial Transformer (SST) Readout were not sufficiently emphasized. The SST readout is built on top of the Spatial-Feature Factorized Linear Readout, and spatially modulates the feature maps and spatial masks, allowing for dynamic and stimulus-dependent adjustments. Different channels in a feature map often encode distinct attributes, such as edges, textures, or shapes. By allowing channel-specific transformations, the STN can adapt to the unique geometric properties of the features represented in each channel. Unlike object classification tasks, where we can employ augmentations tailored to known invariances (e.g. rotating an image won\\u2019t change the category label) to boost predictive accuracy, the geometric invariances of voxel responses are unknown. STN enables the network to learn these invariances directly from the data, providing a crucial advantage in predicting voxel responses across diverse visual regions. Moreover, STN also allows voxel-specific spatial modulation of receptive fields (RFs), inspired by studies demonstrating the dynamic adaptability of RFs to stimulus properties - RF sizes can expand or contract based on contrast Sceniak et al. 
(1999) and can also shift or reshape in response to contextual or attentional influences Womelsdorf et al. (2006). Unlike fixed spatial masks used in the previous readouts, the STN employs affine transformations to capture stimulus-dependent spatial changes such as translation, scaling, rotation, and shearing. This flexibility enables more accurate modeling of neural responses that exhibit dynamic RF properties. We have added these details in our current revision from lines 274-287 in the main text.\\n\\n*Response Continued in the next comment.*\"}", "{\"title\": \"Updates for Reviewer zmHZ\", \"comment\": \"Thank you so much for your patience with us while we were updating our paper. We wanted to mention the changes in the comments first as updating the paper took a little longer as we had to be careful with the 10 page limit constraint. Here are all the changes we have added throughout the paper based on the various suggestions (**Please refer to the latest revision of the paper**)-\\n\\n1. Reorganizing Figure 1 based on the comments received by reviewers **gWb8** and **rgY2** - \\n - Added a brief overview of the entire pipeline (Schematic Pipeline of DNN Models)\\n - Separating out the individual components of the pipeline\\n - Replacing the tensor product symbol with reference to actual equation numbers used in the paper\\n2. Reorganized Figures - 2,3,4 by reducing the unused blank space in the cortical flatmaps according to reviewer **gWb8**.\\n3. Updated Figure 2B with discrete color maps as suggested by reviewers **rgY2** and **EMZd**.\\n4. Updated Figure 2A to highlight the advantages of STN readout as suggested by reviewer **EMZd**.\\n5. Better motivated the utility of Semantic Spatial Transformer Readout (**Lines 274-287**) based on the suggestions of reviewers **zmHZ**, **gWb8**, **rgY2** and **EMZd**. 
\n - Added further analysis in **Supplementary section A.4 (Table 5, Figure 10)**, referenced in the main text at **line 287** - *Analyzing Spatial Modulation of Receptive Fields in Visual Cortex: Insights from STN Readouts*.\n - As suggested by reviewer **rgY2**, we added further analysis on the dependency of the Semantic Spatial Transformer Readout on channel size (and the bias introduced by the readouts) in **Supplementary section A.5 (and Table 6)** - *Dependency of Semantic Spatial Transformer Readout on Channel Size*. This has been referenced in the main text at **lines 333-334**.\n6. As suggested by reviewer **zmHZ**, we added further analysis on the utility of dense captions in **Supplementary Section A.3 (and Figure 9)**, and referenced it in the main text at line **215**.\n7. As suggested by Reviewer **EMZd**, we added further analysis on *'Comparing Architectural Approaches for Task and Response Optimized Models'* in **Supplementary Section A.6 and Table 7**, and referenced it in the main text at line **192**.\n\n**Specific updates with respect to your comments are addressed at** - \n\n1. **"Beyond accuracy"** - \n - Further clarification on the motivation behind the use of STN readouts is added at lines 274-287.\n - Further analysis on STN readouts is added in Supplementary section A.4 (Table 5, Figure 10), referenced in the main text at line 287.\n2. **"Densifying" the single captions** - we added further analysis on the utility of dense captions in Supplementary Section A.3 (and Figure 9), and referenced it in the main text at line 215."}", "{\"title\": \"Official Response to Reviewer gWb8\", \"comment\": \"Thank you for taking the time to review our paper.
We apologize for some of the inconsistencies in our presentation; here are our fixes and further clarifications in response to your comments in the Weakness section -\n\n**Addressing concerns regarding Figure 1** - Figure 1 is the introductory figure giving a brief overview of our experimental pipelines. We have dealt with three kinds of model inputs in our experiments - raw image data, a single caption giving a brief overview of the entire image, and dense captions giving detailed information about various subparts of the image. Raw pixel inputs (of shape 3x256x256) are passed through CNN encoders, caption stimuli are passed through LLM encoders, and a readout layer then maps the encoder feature maps to brain voxel responses. The CNN encoder can be either a pre-trained task-optimized model or a response-optimized model trained from scratch. The readout layer can be a linear ridge regression readout, a Gaussian 2D readout, a Spatial-Feature Factorized readout, or a Semantic Spatial Transformer readout. The single caption stimuli are passed directly to only the linear ridge regression readout, as they contain no spatial information. \n\n- **Weakness 1 -** In the revised version of our paper, we added boxes separating each of the readouts and input stimuli, and also aligned the subfigure legends. We would be happy to make the figure more intuitive as per reviews.\n\n- **Weakness 2 -** We apologize for not making the 'what' and 'where' matrices clearer in our paper. We have added more details regarding this under Spatial-Feature Factorized Linear Readouts in section 2.2, specifically at lines 242, 243, 251 and 252. We also apologize for using the tensor product representation in our figure. In the current revised version, we have replaced it with more suitable notation reflecting the actual operations performed.\n\n- **Weakness 3 -** As mentioned earlier, the encoder can be either task- or response-optimized. Our framework is flexible in that way.\n\n- **Weakness 4 -** As mentioned earlier, the dense caption outputs are also passed through a CNN core (though it is just a single convolutional block), and this is explained in more detail under Language Models -> Dense Captions in section 2.1, specifically in lines 210-215.\n\n**We will soon post another revision with a more updated version of Figure 1 for further clarity.**\n\n**Addressing Weakness 5 -** The dense captions are generated using GPT-2. This is already mentioned under Language Models -> Dense Captions in section 2.1, specifically in line 211.\n\n**Addressing Weakness 6 -** This has now been updated in our current revision. Yi is now a column vector, and Ei is also a column vector.\n\n**Addressing Weakness 7 -** We apologize for not being clearer about the function $V_c(x,y)$. \nLet's say we have a feature map of shape $C\*H\*W$ from an encoder, where $C$ is the number of channels, and $H$ and $W$ are the dimensions of each channel. Each voxel learns a Gaussian distribution across the receptive field, the mean of which is the location the voxel is most sensitive to. For each voxel, let its receptive field center be $(x,y)$, the mean of the Gaussian distribution for that voxel (this lies within $W\*H$). A value is chosen for each channel by sampling this Gaussian distribution, which is what is returned by $V_c(x,y)$. Its output shape is $C\*N\*1$, where $N$ is the number of voxels.
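To make this concrete, here is a minimal numpy sketch of such a readout, simplified to deterministic bilinear sampling at each voxel's learned Gaussian mean, with the subsequent channel-weighted sum included for completeness (all names and shapes are illustrative, not the exact published implementation):

```python
import numpy as np

def bilinear_sample(fmap, x, y):
    """Sample a (C, H, W) feature map at continuous location (x, y).

    x indexes width, y indexes height; returns a length-C vector."""
    C, H, W = fmap.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * fmap[:, y0, x0]
            + dx * (1 - dy) * fmap[:, y0, x1]
            + (1 - dx) * dy * fmap[:, y1, x0]
            + dx * dy * fmap[:, y1, x1])

def gaussian_readout(fmap, centers, channel_weights):
    """fmap: (C, H, W); centers: (N, 2) learned RF means (x, y), one per voxel;
    channel_weights: (N, C). Returns predicted responses of shape (N,)."""
    sampled = np.stack([bilinear_sample(fmap, cx, cy) for cx, cy in centers])  # (N, C)
    return (sampled * channel_weights).sum(axis=1)  # weighted sum across channels
```

In implementations such as [1], the sampling location is drawn stochastically from each voxel's Gaussian during training, with the mean used at evaluation; the sketch above keeps only the deterministic mean-sampling case.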
The voxel response is further calculated by a weighted sum across $C$ (dimension 0), as explained in line 236.\nWe were purposely very brief with it as it's part of an already published work [1], and if needed we can add more details in the appendix. \n\n**Addressing Weakness 8 -** $\theta_2$ defines the six affine parameters per voxel, which are used to modulate spatial masks independently for each voxel. This design is inspired by neuroscience studies showing that spatial receptive fields (RFs) are not static but can dynamically adapt based on the stimulus. For instance, seminal research has demonstrated that RF sizes can shrink or expand depending on stimulus contrast (Sceniak et al., Nature Neuroscience, 1999) and can be modulated by contextual influences such as surround modulation or attentional shifts (Womelsdorf et al., Nature Neuroscience, 2006). Unlike other linear readouts, such as the Spatial-Feature Factorized Linear Readout or Gaussian 2D readout, which use fixed spatial masks irrespective of the stimulus, the Semantic Spatial Transformer enables stimulus-dependent modulation of RFs. By leveraging affine transformations, it can capture a range of spatial changes, including translation, scaling, rotation, and shearing. \n\n**References -**\n\n1. Lurz, Konstantin-Klemens, et al. \"Generalization in data-driven models of primary visual cortex.\" BioRxiv (2020): 2020-10.\n\n*Response continued in the next comment.*"}", "{\"comment\": \"I thank the authors for the rebuttal and the revision.\n\nI took a look at the revised PDF, in my view the current revision is still pretty flawed.\n\n1. `Objecttives`?\n2. Top left of Figure 1 -- are the neurons the input or output?\n3.
Line 223 -> The issues I mentioned in the original review still remain\\n\\nI maintain my score of 3 and I strongly recommend against acceptance.\"}", "{\"title\": \"Third Official Response to Reviewer EMZd\", \"comment\": \"Thank you for raising this important point. While receptive field size increases from low- to high-level visual regions, our findings highlight that the intermediate regions are not solely better modeled by purely visual features, as some prior studies using single captions have concluded (e.g., Doerig et al.). Instead, our analysis using dense captions reveals that these regions are also well-modeled by visual features that align with linguistic descriptions. This demonstrates a nontrivial interaction between visual and semantic alignment that is not captured by previous approaches relying on sparse or single-caption annotations.\"}", "{\"title\": \"Official Comment to Area and Program Chairs (Continued)\", \"comment\": \"3. To provide a better understanding of the novel STN readout mechanism, we added an experiment focussed on better interpreting the STN readouts. We calculated the distance between the affine parameters corresponding to the spatial maps of each voxel for every image, relative to the mean affine parameters across all images (Figure 10). The L2 norm of this vector was computed for each voxel. Across all encoders, we observed that stimulus-dependent spatial shifts of the receptive field increase from lower to higher visual regions. A similar trend emerged when calculating the average spatial shifts for each channel of the feature map across images for different regions. This trend further supports the idea that higher levels of the visual cortex benefit more from learned geometric invariances and exhibit greater spatial modulation of their visual receptive fields compared to lower visual cortex regions. This modulation includes phenomena such as receptive field expansion, contraction, or shifts in response to different stimuli. 
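This deviation computation can be sketched as follows (a simple numpy illustration; the array names and shapes are our own, not taken from the paper's code):

```python
import numpy as np

def rf_shift_magnitude(theta):
    """theta: (num_images, num_voxels, 6) predicted affine parameters
    for each voxel's spatial mask on each image.

    Returns a (num_voxels,) vector: the mean L2 distance of each voxel's
    affine parameters from their across-image mean, i.e. how strongly
    the receptive field is modulated by the stimulus."""
    mean_theta = theta.mean(axis=0, keepdims=True)           # (1, num_voxels, 6)
    deviation = np.linalg.norm(theta - mean_theta, axis=-1)  # (num_images, num_voxels)
    return deviation.mean(axis=0)
```

Averaging this quantity over the voxels within each ROI gives the region-level trend described above.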
We have added these in Supplementary Section A.4 and Figure 10.\n\n4. We added an additional experiment showing the relative improvements of the STN with affine parameters for only the spatial mask vs affine parameters for only the feature maps, showing that both contribute to the improved performance of the STN readout. This is added as Supplementary Table 5.\n\n5. We added an additional experiment to analyse whether the observed differences between dense and global captioning are due to (a) the spatial subdivision of the image (Hypothesis 1) or (b) the increased semantic detail in dense captions (Hypothesis 2). The original idea behind using dense captions was to provide spatial information in addition to semantic information in the form of captions, and subdividing the image into equal-sized grids and getting captions for each grid was one of the easiest and most intuitive ways to do that. As per some of the reviewers' suggestions, we tried generating more comprehensive single captions of the image using existing LLMs; however, none of them were able to provide more information than that already present in the original MS-COCO dataset. In an attempt to densify the single captions, we thus adopted a different approach: for each image, we took the embeddings of dense captions generated for individual grid locations and averaged these embeddings to produce a single \"aggregate dense caption\" embedding. On comparing single caption stimuli with 'densified' single caption stimuli (as opposed to the dense caption approach discussed in the paper), we saw a similar trend where the higher regions of the visual cortex were better modeled by single caption stimuli. However, the transition in sensitivity from dense to single captions in the middle regions of the ventral, dorsal and lateral streams, which is so clearly pronounced when using dense captions, is missing when using the above 'densified' single captions.
Further comparing 'densified' single captions to dense captions (as proposed in the paper), we saw that the dense captions modeled the overall visual cortex better. Hence, we do feel that adding spatial information to the captions is necessary for building more accurate models, be it by subdividing the image into grids or in some other way. This analysis has been added in Supplementary Section A.3 and Figure 9 of our paper.\n\n*Response Continued in next comment.*"}", "{\"title\": \"Continued Official Response to Reviewer rgy2 (Weakness 1 Continued)\", \"comment\": \"We acknowledge that the biological motivations for the STN were perhaps not sufficiently emphasized in the initial version of the paper. In the next revised version, we will add the details below.\n\nWe completely agree with the reviewer that one of the main reasons behind the widespread use of linear regression as a readout model is that it does not introduce non-linearities in the encoder feature maps, as we want to find out how similar the encoder feature maps are to the brain responses without any external finetuning. However, as already mentioned in the paper, one of the major disadvantages of this readout is its computational complexity. The other readouts (including our proposed readout) also linearly map features to brain responses but with additional constraints on the structure of the weight space in line with biological findings. \n\nWe take the Spatial-Feature Factorized Linear Readout a step further with our proposed STN readout. We first emphasize that the STN does not involve unnecessarily complex transformations. Instead, it only spatially modulates the feature maps and spatial masks, allowing for dynamic and stimulus-dependent adjustments. Once again, no non-linearities are introduced here.
The STN allows voxel-specific spatial modulation of receptive fields (RFs), inspired by studies demonstrating the dynamic adaptability of RFs to stimulus properties:\n* RF sizes can expand or contract based on contrast (Sceniak et al., Nature Neuroscience, 1999).\n* RFs can also shift or reshape in response to contextual or attentional influences (Womelsdorf et al., Nature Neuroscience, 2006).\n\nUnlike fixed spatial masks used in linear readouts (e.g., Spatial-Feature Factorized Linear or Gaussian 2D readouts), the STN employs affine transformations to capture stimulus-dependent spatial changes such as translation, scaling, rotation, and shearing. This flexibility enables more accurate modeling of neural responses that exhibit dynamic RF properties.\nThe STN also spatially transforms feature maps and spatial weights at the channel level. Each channel typically encodes distinct attributes (e.g., edges, textures, shapes), and the ability to apply channel-specific transformations allows the STN to adapt to their unique geometric properties. For instance, one channel may require scaling to emphasize fine-grained details, while another might need rotation for orientation invariance. This flexibility is particularly advantageous for neural response modeling. Unlike object classification tasks, where we can employ augmentations tailored to known invariances (e.g. rotating an image won't change the category label) to boost predictive accuracy, the geometric invariances of voxel responses are unknown. The STN enables the network to learn these invariances directly from the data, providing a crucial advantage in predicting voxel responses across diverse visual regions.\n\n*Response Continued in next comment.*"}", "{\"title\": \"Official Comment to Area and Program Chairs (Continued)\", \"comment\": \"6.
We added further analysis of how the STN readout's performance is proportional to the channel size of the encoder feature representations, and how the trends observed across the various regions of the visual cortex depend on how biologically intuitive a readout is, in Supplementary section A.5 and Table 6. The larger improvements for vision models stem from their feature representations having greater spatial dimensions than language models, allowing the SST to better leverage the rich spatial information available in vision models. To mitigate this, we can normalize spatial dimensions across models to ensure uniform treatment. Empirically, we show in Supplementary Table 6 that if we reduce the spatial dimensions of the vision encoder to match those of the language encoder, that does drop the prediction performance and relative gains. The overall trend, where higher cortical areas are better modeled by language input and lower cortical areas by visual input, is consistently observed across all readouts (Fig. 4, 5, 6, 7). However, the margin distinguishing the effectiveness of the models varies slightly. Notably, as we progress from less biologically intuitive readouts to more biologically plausible ones (linear regression, Gaussian 2D, Spatial-Feature Factorized Linear Readout, and finally, the Semantic Spatial Transformer Readout), these trends become increasingly well-defined. Given that the Semantic Spatial Transformer Readout most accurately and consistently models neural responses, we rely on it to delineate regions of the visual cortex sensitive to varying kinds of stimulus information.\n\n7. We highlighted the fact that 'Task'- and 'Response'-optimized models, as introduced in the literature, do not differ merely in their training diet; network architecture and structural biases also play an important role.
The emphasis was not on directly comparing task- and response-optimized models after controlling for every factor that distinguishes them besides the objective. Instead, our goal was to select the most effective pretrained model (with optimized architecture and dataset) for task-optimized applications versus the most suitable architecture for response-optimized models that aligns with neural data constraints. This analysis is added in Supplementary Section A.6 and Table 7.\n\nWe sincerely thank the reviewers who have engaged with our responses and acknowledge that the additional clarifications and analyses have strengthened our manuscript. We request the Area Chair to consider the highlighted strengths and contributions of our work in making a final decision regarding the paper."}", "{\"title\": \"Continued Official Response to Reviewer EMZd (Weakness 3)\", \"comment\": \"*Continued from above comment.*\n\n**Addressing Weakness 3:** Here, we respectfully disagree with the reviewer's assertion that the proposed readout yields only marginal gains in predictive accuracy.\n\n**1. Quantitative gains in accuracy:** The improvements provided by the STN are substantial, especially when compared to other modeling variations. For instance:\n- Varying encoding strategies (response-optimized vs. task-optimized vs. LLM embeddings) yielded a maximal accuracy difference of ~5%.\n\n- In contrast, the choice of readout had a much larger impact.
The proposed STN readout improved prediction accuracies by 3.76-26.3% over Spatial-Feature Factorized Linear readouts and by 16.87-52.3% over ridge regression (the de facto choice in many studies) when applied to vision models.\n\nIn the ventral visual stream, for example, when using response-optimized models:\n - Ridge regression achieved a prediction accuracy of 0.46.\n - Gaussian and Spatial-Feature Linear Factorized readouts each improved this to 0.48.\n - The STN readout further boosted this to 0.58, representing a significant jump over the alternatives.\n\nWe acknowledge that the gains are less pronounced for language models, which we attribute to the smaller spatial size of the feature maps fed into the readouts, as noted in line 329 of the manuscript.\n\n**2. Biological motivation behind the STN readout:** We acknowledge that the biological motivations for the STN were perhaps not sufficiently emphasized in the initial version of the paper. We will add the further clarifications below in the next revision. We first emphasize that the STN does not involve unnecessarily complex transformations.
Instead, it only spatially modulates the feature maps and spatial masks, allowing for dynamic and stimulus-dependent adjustments.\n\n- **Spatial Modulation of Spatial Masks (Spatial Receptive Fields):** The STN allows voxel-specific spatial modulation of receptive fields (RFs), inspired by studies demonstrating the dynamic adaptability of RFs to stimulus properties:\n\n - RF sizes can expand or contract based on contrast (Sceniak et al., Nature Neuroscience, 1999).\n - RFs can also shift or reshape in response to contextual or attentional influences (Womelsdorf et al., Nature Neuroscience, 2006).\n\nUnlike fixed spatial masks used in linear readouts (e.g., Spatial-Feature Factorized Linear or Gaussian 2D readouts), the STN employs affine transformations to capture stimulus-dependent spatial changes such as translation, scaling, rotation, and shearing. This flexibility enables more accurate modeling of neural responses that exhibit dynamic RF properties.\n\n- **Spatial Modulation of Feature Maps:** The STN also spatially transforms feature maps and spatial weights at the channel level. Each channel typically encodes distinct attributes (e.g., edges, textures, shapes), and the ability to apply channel-specific transformations allows the STN to adapt to their unique geometric properties. For instance, one channel may require scaling to emphasize fine-grained details, while another might need rotation for orientation invariance. This flexibility is particularly advantageous for neural response modeling. Unlike object classification tasks, where we can employ augmentations tailored to known invariances (e.g. rotating an image won't change the category label) to boost predictive accuracy, the geometric invariances of voxel responses are unknown. The STN enables the network to learn these invariances directly from the data, providing a crucial advantage in predicting voxel responses across diverse visual regions.
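As a concrete sketch of the channel-wise modulation described above (numpy, with nearest-neighbor sampling for brevity; the actual readout uses differentiable grid sampling, and all names here are illustrative):

```python
import numpy as np

def affine_warp_channel(channel, theta):
    """Warp one (H, W) channel with a 2x3 affine matrix `theta` that maps
    output coordinates (in [-1, 1]) to input coordinates, as in a spatial
    transformer; nearest-neighbor sampling keeps the sketch short."""
    H, W = channel.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    src = theta @ coords                                         # (2, H*W), in [-1, 1]
    # map normalized coordinates back to pixel indices, clamped to the grid
    sx = np.clip(np.round((src[0] + 1) / 2 * (W - 1)).astype(int), 0, W - 1)
    sy = np.clip(np.round((src[1] + 1) / 2 * (H - 1)).astype(int), 0, H - 1)
    return channel[sy, sx].reshape(H, W)

def stn_modulate(fmap, thetas):
    """fmap: (C, H, W); thetas: (C, 2, 3) channel-specific affine parameters."""
    return np.stack([affine_warp_channel(ch, th) for ch, th in zip(fmap, thetas)])
```

An identity theta of `[[1, 0, 0], [0, 1, 0]]` leaves a channel untouched, so such a readout can recover the fixed-mask behavior of the Spatial-Feature Factorized readout when no modulation is needed.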
\\n\\n| Readout Type (all with response optimized E2cnn encoder) | V1v | V1d | V2v | V2d | V3v | V3d | V4 | Ventral | Dorsal | Lateral |\\n|----------------------------------------------------------|--------|--------|--------|--------|--------|--------|--------|---------|--------|---------|\\n| Spatial-Feature Factorized Linear Readout (1) | 0.83154 | 0.79296 | 0.7795 | 0.7419 | 0.7268 | 0.7323 | 0.7085 | 0.4847 | 0.4831 | 0.4504 |\\n| (1) + affine transforms only on spatial masks | 0.8596 | 0.8217 | 0.8179 | 0.7769 | 0.7705 | 0.7719 | 0.7659 | 0.5638 | 0.5962 | 0.5371 |\\n| (1) + affine transforms only on feature maps | 0.8750 | 0.8409 | 0.8310 | 0.7948 | 0.7814 | 0.7958 | 0.7782 | 0.5865 | 0.6156 | 0.5641 |\\n\\n*References continued in next comment.*\"}", "{\"title\": \"Official Response to Reviewer EMZd (Questions and Weakness 1)\", \"comment\": \"Thank you for taking the time out to review our paper. First, we would like to answer your questions, and then provide our rebuttal for the weaknesses mentioned -\\n\\n**1. Why are the models in Figure 2b shown on a continuous color bar?**\\n\\nWe apologize for the inconsistency in this diagram, and it is fixed in our latest revised pdf.\\n\\n**2. Why were only a subset of the NSD subjects used?**\\n\\nThe fmri responses were similar across the various subjects, and it is common practice to train models on these 4 datasets only [1], [2], [3]. The data from the held out subjects is usually used for fine tuning or zero shot tasks as seen in [1]. Also, these were the only subjects that completed all 40 NSD sessions.\\n\\nNow, regarding the weaknesses mentioned by the reviewer - \\n\\n**Addressing Weakness 1:** Although our models were tested across many different factors, when comparing response optimized models with task optimized models, we made sure to keep the stimuli and the readout layer constant. 
The only variable was the encoder architecture.\n\nThe primary reason for employing different architectures in our study was to leverage state-of-the-art approaches tailored to distinct modeling paradigms. A direct comparison between task-optimized and response-optimized models is inherently challenging due to differences in the available training stimulus sets. Specifically, the stimulus set for training response-optimized models is substantially smaller, approximately 0.03 times the size of the datasets used for task optimization.\n\nIncorporating structural biases into response-optimized models (e.g., rotation equivariance) enables them to learn effectively from smaller datasets. This advantage of rotation-equivariant architectures in neural encoding contexts has been demonstrated in prior studies [1] and is a critical factor when designing models that align with the constraints of neural data.\n\nWhile head-on comparisons using identical architectures for task and neural response optimization could provide valuable insights into the specific contributions of these factors, the primary objective of our study was not to isolate these factors. Instead, we aimed to identify the most predictive models for voxel responses across distinct regions of the visual system. Our findings reveal the current best-performing models for this goal, emphasizing practical predictive utility rather than dissecting the contributions of task versus response optimization in isolation. However, we do agree with the reviewer that using the same architecture for response- and task-optimized vision models could provide a valuable comparison, especially to directly assess their relative sample efficiencies.
To address this, we conducted additional experiments using a ResNet-50 encoder paired with a semantic spatial transformer readout, trained from scratch exclusively on the NSD dataset, and compared it with the proposed task- and response-optimized encoders in the paper, all paired with a semantic spatial transformer readout. Unlike the task-optimized ResNet-50, which is trained for object classification on ImageNet, the ResNet-50 trained from scratch on neural responses struggled to match the performance of the response-optimized e2cnn model. This comparison underscores the role of network architecture and the significance of incorporating relevant structural biases into networks when optimizing them on response prediction with limited data (at least in comparison to large-scale vision datasets).\n\n| Encoder Type (all with semantic spatial transformer readout) | V1v | V1d | V2v | V2d | V3v | V3d | V4 | Ventral | Dorsal | Lateral |
|--------------------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|--------|---------|
| Response Optimized resnet50 | 0.7579 | 0.7034 | 0.7021 | 0.6646 | 0.6861 | 0.6712 | 0.6991 | 0.5546 | 0.5814 | 0.5470 |
| Task optimized resnet50 (proposed) | 0.8507 | 0.8083 | 0.8057 | 0.7603 | 0.7612 | 0.7763 | 0.7674 | 0.6105 | 0.6606 | 0.5823 |
| Response Optimized E2cnn (proposed) | 0.8698 | 0.8340 | 0.8302 | 0.7919 | 0.7808 | 0.7913 | 0.7729 | 0.5796 | 0.6089 | 0.5638 |


Lastly, language models are predominantly based on transformer architectures, which require significantly larger datasets and computational resources, and training response-optimized models using transformer architectures for the visual system is not feasible.\n\n*Response continued in the next comment.*"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this work, the authors present a comprehensive suite of analyses comparing vision / language DNN
models to human fMRI data. Using a novel "readout" mechanism designed explicitly to account for space in the mapping of DNN embeddings to brain activity, the authors report localizing 3 sub-regions in the human visual cortex that respond differentially to spatial and semantic information.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"The use of deep neural network models to predict and understand the structure of representation in the biological visual system is a practice rife with heretofore unanswered, but deeply foundational questions as to how it should be done. Bucking a trend that far too often recycles canonical, but relatively unscrutinized methods to new models or new brain data, this submission is impressive not just for the fact that it tackles these questions head-on, but tackles so many of them simultaneously -- and does so (mostly) without losing the forest for the trees. For this alone, I applaud the authors and can recommend that this paper be accepted.\", \"weaknesses\": \"My major concern here (and one that I admit is not fully within the authors' control, but which clarifying updates or a different narrative focus could nonetheless address) is the lingering doubt as to whether even these newer, more expertly designed methods actually do give us any meaningful new "insights" about the biological system they're nominally designed to give us insights about.
An overly reductionist summary of the "findings" of this analysis with respect to the human visual brain could well be that they simply provide more evidence for what is already an amply established gradient of increasingly "abstract" visual information from early (more view-dependent) areas (where smaller, localized receptive fields and retinotopy are the dominant representational motifs) to later (less view-dependent) visual areas (where -- depending on which side of the ventral / dorsal divide those areas are closer to -- you begin to get "representations" that evoke "object categories", "navigational affordances", or "conceptual semantics"). And while much debate does remain as to many of the details here, it seems (to me at least) that the existence of this gradient is more or less a common consensus.\", \"questions\": \"My primary suggestion for strengthening this paper is more or less solely for the authors to lean further into its greatest strength -- and to further explicate or justify the expert methods that differentiate this work from so many others attempting to tackle similar questions. Needless to say, perhaps, there are a number of ways the authors could do this. Below are a few different "options" that (I hope) seem reasonable given the constraints of the current review. The authors should feel free to choose however many / whichever of these seems most feasible or intuitive. For me, at least, almost any movement along these vectors is movement that would increase the value of this work for the target audience it seems intended for:\n- "Beyond accuracy": The primary justification of the authors' "novel readout mechanism" is the general increase in accuracy it provides over other methods.
But the emphasis on accuracy as the primary advantage rings a bit hollow if a major part of the goal here is to gain insight into the structure of representation in biological cortex. There are many alternative ways (e.g. data augmentation, denoising, nonlinearities) -- even "hacky" ones (e.g. smaller cross-validation splits) -- that one could use to increase the predictive accuracy of model readout mechanisms. What demonstrable advantage does the "semantic spatial transformer" readout have over other readout methods with respect to the theoretical questions at play here? (An example answer: "ordinary least squares or regularized regression-style readouts do not preserve spatial information -- therefore making certain areas of the brain appear to be more transform-invariant than they likely are in reality. Here's a metric that operationalizes the probability of transform-invariance in the fMRI data without models. And here is a side-by-side comparison of the transform-invariance we estimate with ridge regression and our STN readout, respectively.")\n- "Semantics" without language model confounds: There are a number of issues (again, beyond the scope of this paper, but nonetheless relevant) with the use of language models as predictive models of visual fMRI data -- including the fact that the inputs to these models (tokenized words) are already proto-symbolic at the time of their initial injection into the candidate representational models that embed them (and are thus more abstract by default than the pixels injected into vision models); and also, an increasing "convergence" between vision and language models [1] that suggests a sort of "default" alignment between these systems attributable (most likely, it seems) to biases in their training data. How to get at questions of "semantics" without over-interpreting language models?
One way, perhaps, is to reconsider the brain data itself: it has been suggested by [2] that certain kinds of transformations on the underlying brain data to be modeled (e.g. aggregating across multiple neurons in the case of electrophysiology) can instantiate properties like linearly-separable category boundaries not otherwise apparent without those transformations. If something like this is done on the features of vision models (e.g. averaging across multiple images of the same visual concept), do vision models begin to look more "semantic"? Perhaps the semantic spatial transformer could be used to unveil precisely the kinds of feature transformations that occur along the gradient from early to late visual cortex.
- "Densifying" the "single" captions: The authors claim that the localized semantic descriptions inherent to their "dense" captioning method unveil a noticeable midpoint between early, more spatiotopic representations and later, "globally" abstract representations. But is this really about the local tagging of an image's subparts? Providing more comprehensive "single captions" of the full image that include more extensive specification of details might close the gap between the dense captioning method and the global captioning -- but in a way that obviates the need to manually subdivide the image. In short, adding further detail (with and without explicitly spatial language) seems like an important control for downstream interpretation of this result.

[1] Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis. arXiv preprint arXiv:2405.07987.
[2] Vinken, K., Prince, J. S., Konkle, T., & Livingstone, M. S. (2023). The neural code for "face cells" is not face-specific.
Science Advances, 9(35), eadg1736.

Flag for ethics review: No ethics review needed. Rating: 6. Confidence: 4. Code of conduct: Yes.

---

**Second Official Response to Reviewer rgy2**

We appreciate the reviewer's references to Conwell et al. (2024) and Elmoznino et al. (2024), and would like to clarify some nuances in our argument and address the relevant findings from these works.

Firstly, Elmoznino et al. (2024) explicitly state that top-performing encoding models of high-level visual cortex tend to exhibit high latent dimensionality. They argue that this high dimensionality facilitates better performance in tasks requiring generalization, such as learning new categories of stimuli. This suggests that higher-dimensional representations might inherently capture richer feature spaces that align more closely with the neural coding in the brain, particularly in higher-level visual regions. Thus, while dimensionality is not a universal determinant of brain alignment, its benefits have been observed under specific contexts, particularly for generalization and representational richness.

Secondly, we wish to draw an important distinction between the notion of "high dimensionality" as discussed by Elmoznino et al. and Conwell et al. and the concept of "larger channel dimensions" favoring the SST readout as referenced in our paper. Dimensionality, in the context of the cited works, typically refers to the number of top principal components or latent features retained in the feature space. Larger channel dimensions in our study, however, relate to the degree of spatial pooling or reduction that the input feature maps undergo during encoding. For example, models with smaller spatial dimensions (e.g., 4×4 feature maps) result from significant pooling, which inherently limits the spatial richness available to the SST readout.
This is distinct from the latent dimensionality referred to in the cited studies.

Lastly, we recognize the reviewer's concern regarding potential biases in readouts. It is worth emphasizing that our claim about ridge regression is not merely about dimensionality inflating alignment scores but rather about its general tendency to favor models with richer basis functions -- whether through dimensionality (more capacity to encode information) or other factors such as spatial detail. While the main claims of Schaeffer et al. (2024) about the limitations of the neural regression methodology in recovering good models of the brain are controversial, the underlying argument that ridge regression has inductive biases still stands. While this effect of dimensionality is not universally problematic, it warrants attention when comparing models with different spatial or feature characteristics, as seen in our experiments.

We acknowledge that the differing spatial dimensions of feature maps in different models could create an issue for model comparison (we have referenced our new analyses A4 in the main text at line 287). But the spatial dimensions of feature maps can be easily standardized. Future studies aiming to compare models on equal footing -- particularly to draw conclusions about computational objectives or architectural influences on neural representations -- can address this by maintaining consistent spatial dimensions. Finally, the significantly higher predictive accuracy of the SST readout is a major advantage, especially for applications like in-silico experimentation and neural population control, where improved accuracy provides substantial benefits.

In conclusion, we believe the evidence supports our claim that larger channel dimensions inherently benefit the SST readout by providing richer spatial details for alignment, a property not directly analogous to latent dimensionality but complementary in understanding model behavior.
We hope this clarifies the reviewer's concerns.

---

**Second Official Response to Reviewer EMZd**

First of all, we want to thank the reviewer for their continued suggestions to make our paper better.

**Regarding Response- and Task-Optimized Models**

We once again want to highlight the distinction between 'task'- and 'response'-optimized models. We agree that __'comparing neural versus task optimization'__ is central to the paper. We also agree that __"task optimized" and "response optimized" are directly contrasted with each other, and many conclusions explicitly mention this distinction.__

To reiterate once more, task-optimized models consist of a pretrained CNN core that has been trained extensively on large datasets such as ImageNet for very specific object-oriented tasks. Only the final layer of these models is fine-tuned to predict brain responses. While it would be insightful to explore how different task-optimized models pretrained on various tasks would perform (as already mentioned in the discussion), such a comparison is beyond the scope of this study. Further, past research (Conwell et al., 2022) has shown that varying architectures (convolutional vs. non-convolutional, such as transformers) pretrained on different training objectives have minimal impact on brain predictivity.

However, as impressive as these models are, they are extremely biased towards the specific tasks that they have been trained on and hinder novel discoveries regarding the brain. To overcome this, researchers came up with response-optimized models trained from scratch on brain data, with approximately 0.03 times the data that the task-optimized models are trained on. The architecture needs to have a lot of built-in structure to model the brain as closely as possible, especially since the training diet is so small, as shown by Khosla and Wehbe.
\\n\\nWe argue that directly comparing response-optimized and task-optimized models, even with matched architectures, is challenging due to differences in training data\\u2014response-optimized models are trained on NSD (MS-COCO images), while task-optimized models use ImageNet. In addition to our previous experiments in Supplementary Table 7, we have now added two additional experiments with a Mask-RCNN encoder - (1) a task-optimized encoder pretrained for object detection on MSCOCO dataset, and (2) a response-optimized encoder trained from scratch on NSD dataset (which uses MSCOCO images, although a small subset of them). This further proves that merely training any architecture (e.g., AlexNet, ResNet-50, or transformers) on neural data does not make it inherently suited for response optimization. Thus, the difference between task- and response-optimized models goes beyond whether one is pre trained or not. Instead, it\\u2019s about selecting the most effective pretrained model (with optimized architecture and dataset) for task-optimized applications versus the most suitable architecture for response-optimized models that aligns with neural data constraints.\\n\\n**Regarding the differences between our paper with Doerig et al. -**\\n\\nBuilding on our earlier clarifications as to how our work is unique from Doerig et al., we emphasize that the distinction between __'local' and 'global'__ in our work differs fundamentally from the comparison between __word-level versus sentence-level embeddings__ discussed by Doerig et al. Our local vs. global comparison introduces a contrast between globalized semantic information (e.g., a caption summarizing the entire image) and localized information (e.g., individual captions for distinct regions of the image, combining spatial and semantic knowledge). In contrast, Doerig et al.'s word-level vs. sentence-level comparison focuses on varying degrees of globalized semantics, depending on the comprehensiveness of the embeddings. 
These two conceptual frameworks are distinct and address different aspects of analysis.

---

**Continued Official Response to Reviewer EMZd (Weakness 2)**

*Continued from above comment.*

**Addressing Weakness 2:** We appreciate the reviewer's thoughtful comments and acknowledge the importance of the observations made. We would like to highlight the specific contributions that distinguish our work from those mentioned:

- **Comparison of modeling techniques:** While [4] demonstrates that language models can predict brain activity and that semantic information is important for higher visual regions, their conclusions are primarily based on representational similarity analysis (RSA) rather than direct prediction accuracies. In contrast, our study directly compares three dominant modeling approaches -- response-optimized models, task-optimized vision models, and large language model (LLM) embeddings -- to identify the most predictive models across the entire visual system. Identifying these models holds significant practical utility for applications such as in-silico experimentation, automated interpretability analyses of voxel responses, and neural population control.
- **Region-specific semantic sensitivities:** We build on the findings of [4] by offering a more nuanced analysis of how different regions of the visual cortex respond to semantic information. Specifically, we show that response-optimized vision models outperform LLM embeddings in mid-level vision regions when using single captions, but this gap is mitigated when dense captions are used. This suggests that mid-level regions encode perceptual information aligned with linguistic descriptions (localized semantics).
We identify a transition from regions sensitive primarily to perceptual features (not aligned with language) to those sensitive to localized semantics (aligned with language) and finally to those aligned with global semantics as we move deeper into the cortical hierarchy. Based on this analysis, we characterize three distinct regions in the visual cortex, each with varying sensitivities to perceptual input, localized semantics, and global semantics. To the best of our knowledge, such analyses have not been conducted before and represent a key contribution of our work.
- **Extending response-optimized model analyses:** We acknowledge [1] for introducing response-optimized models and showing their ability to predict NSD responses. However, their study primarily focuses on four such regions and does not examine the broader visual system. We do not claim the response-optimized modeling technique as part of our contributions. Instead, we use response-optimized models as a competitive baseline to compare against other models and to identify regions (V1-V4) whose featural selectivities are not yet fully emergent in modern task-optimized models.
- **Comparing different readout methods and introducing a novel readout:** One of our major contributions is the systematic comparison of different readout mechanisms for predicting fMRI responses. We show that the choice of readout significantly affects prediction accuracy and demonstrate how novel domain-specific approaches, such as our proposed STN readout, yield substantial improvements. These findings highlight an underexplored avenue for advancing neural response modeling for complex stimuli. To our knowledge, this comparative analysis of readout methods has not been done before.
We highlight the significant quantitative advantages conferred by our proposed STN readout and its biological motivation in our next response.

*Response continued in the next comment.*

---

**Official Response to Reviewer zmHZ (Question: Beyond Accuracy)**

Thank you so much for taking the time to review our paper and for suggesting ways to improve our experiments. We will add all of the below analysis to the supplementary section of our paper in a later revision.

**"Beyond Accuracy"** -- This is an important point, and we appreciate the reviewer highlighting it. We do feel that achieving high accuracy can be a very useful goal in many contexts when developing models of the human brain. For example, accurate and robust neural network models are essential for in-silico experiments [2] to explore hypotheses that would be challenging or impractical to test in vivo, to inform experimental design, and for precise neural population control. In this way, accuracy provides a foundational utility for practical and theoretical advancements.

**Further diving into the question: what demonstrable advantage does the "semantic spatial transformer" readout have over other readout methods with respect to the theoretical questions at play here?**

We also want to emphasize that the STN readout is not merely a hack to boost prediction accuracy; its design is well grounded in biological principles. We expand on this below. We first emphasize that the STN does not involve unnecessarily complex transformations.
Instead, it only *spatially modulates* the feature maps and spatial masks, allowing for dynamic and stimulus-dependent adjustments.

- **Spatial modulation of spatial masks (spatial receptive fields):** The STN allows voxel-specific spatial modulation of receptive fields (RFs), inspired by studies demonstrating the dynamic adaptability of RFs to stimulus properties:
  - RF sizes can expand or contract based on contrast (Sceniak et al., Nature Neuroscience, 1999).
  - RFs can also shift or reshape in response to contextual or attentional influences (Womelsdorf et al., Nature Neuroscience, 2006).

  Unlike the fixed spatial masks used in other linear readouts (e.g., Spatial-Feature Factorized Linear or Gaussian 2D readouts), the STN employs affine transformations to capture stimulus-dependent changes in spatial receptive fields, such as translation, scaling, rotation, and shearing, in line with existing neuroscientific results. This flexibility enables more accurate modeling of neural responses that exhibit dynamic spatial RF properties.
- **Spatial modulation of feature maps:** The STN also spatially transforms feature maps (i.e., the encoder channels). Each channel typically encodes distinct attributes (e.g., edges, textures, shapes), and the ability to apply channel-specific transformations allows the STN to adapt to their unique geometric properties. For instance, one channel may require scaling to emphasize fine-grained details, while another might need rotation for orientation invariance. This flexibility is particularly advantageous for neural response modeling. Unlike object classification tasks, where we can employ augmentations tailored to known invariances (e.g., rotating an image won't change the category label) to boost predictive accuracy, the geometric invariances of voxel responses are unknown.
STN enables the network to learn these invariances directly from the data, providing a crucial advantage in predicting voxel responses across diverse visual regions. Below is an additional experiment showing the relative improvements of the STN with affine parameters only for the spatial mask vs. affine parameters only for the feature maps. We see that both contribute to the improved performance of the STN readout.

| Readout Type (all with response-optimized E2cnn encoder) | V1v | V1d | V2v | V2d | V3v | V3d | V4 | Ventral | Dorsal | Lateral |
|---|---|---|---|---|---|---|---|---|---|---|
| Spatial-Feature Factorized Linear Readout (1) | 0.83154 | 0.79296 | 0.7795 | 0.7419 | 0.7268 | 0.7323 | 0.7085 | 0.4847 | 0.4831 | 0.4504 |
| (1) + affine transforms only on spatial masks | 0.8596 | 0.8217 | 0.8179 | 0.7769 | 0.7705 | 0.7719 | 0.7659 | 0.5638 | 0.5962 | 0.5371 |
| (1) + affine transforms only on feature maps | 0.8750 | 0.8409 | 0.8310 | 0.7948 | 0.7814 | 0.7958 | 0.7782 | 0.5865 | 0.6156 | 0.5641 |

*Response to "Beyond Accuracy" continued in the next comment.*

---

**Summary:** This paper compares task-optimized vision and language models to brain response-optimized networks on the Natural Scenes Dataset (NSD), and introduces a new readout mechanism for model-brain mapping based on spatial receptive fields and semantic content. They find that response-optimized networks provide the best match to early visual regions, while task-optimized vision-language models better match high-level visual regions. They also find that their readout mechanism using spatial transformers improves model-to-brain mapping (though only marginally).

The comparison between response- and task-optimized models is interesting, but overall the results provide only a marginal advance in our understanding and computational models of visual cortex.
The spatial transformer network readout is novel, but it is not entirely clear what the value of this contribution is. It provides slightly better performance than other methods, but is much more complex (it involves integrating an additional ResNet-50 module to weight the channels of each model) and provides only minimal quantitative gains over much simpler methods.

Soundness: 2. Presentation: 3. Contribution: 2.

**Strengths:**
- Compares response-optimized and task-optimized models directly
- Compares many different model-brain mapping functions
- Presents a new metric for model-brain mapping

**Weaknesses:**
- The models tested varied along many factors, making it difficult to draw strong conclusions about the role of response vs. task optimization or vision vs. language in the models' performance. For response vs. task, these points could have been made more compelling by training the same architecture on both task and neural responses.
- The biggest issue is that the major findings of this paper have been shown previously (also on the same dataset): prior work with vision and language models (e.g., Doerig et al.) showed semantic content is more important for high- than low-level visual regions, and Khosla & Wehbe introduced the response-optimized model used here and already showed it predicted NSD responses.
- The utility of the STN was not well motivated. Figure 2 suggests that the different readout mechanisms provide largely similar results with only minor quantitative differences.
This slight boost is unsurprising given how much more complex the spatial transformer network is compared to other readouts.

**Questions:**
- Why are the models in Figure 2b shown on a continuous color bar?
- Why were only a subset of the NSD subjects used?

Flag for ethics review: No ethics review needed. Rating: 5. Confidence: 3. Code of conduct: Yes.

---

**Continued Official Response to Reviewer rgy2 (Weakness 2)**

**Weakness 2 -** *The Semantic Spatial Transformer has greater improvement relative to Ridge regression for the vision model than the language model (Figure 2), and vision models are found to better predict more voxels in high-level visual cortex using the Semantic Spatial Transformer readout (Figure 4) than when using Ridge regression (Figure 5). To me, it is a problem that the readout method does not provide uniform improvements across models. This suggests to me that the readout method is introducing a bias in the conclusions. However, I welcome rebuttal on why this logic is faulty.*

We appreciate the reviewer's concern about potential bias introduced by the Semantic Spatial Transformer (SST) readout method. We acknowledge the importance of ensuring that the readout does not skew conclusions about neural representations. The larger improvements for vision models stem from their feature representations having greater spatial dimensions than language models, allowing the SST to better leverage the rich spatial information available in vision models. To mitigate this, we can normalize spatial dimensions across models to ensure uniform treatment. Empirically, we show below that reducing the spatial dimensions of the vision encoder to match those of the language encoder does drop the prediction performance and relative gains.
\\n\\n| Feature Map Channel Size | Readout | V1v | V1d | V2v | V2d | V3v | V3d | V4 | Ventral | Dorsal | Lateral |\\n|--------------------------|---------------------------------|-------|-------|-------|-------|-------|-------|-------|---------|--------|---------|\\n| 28\\u00d728 | Spatial Feature Factorized | 0.8315| 0.7926| 0.7795| 0.7419| 0.7268| 0.7323| 0.7085| 0.4847 | 0.4831 | 0.4504 |\\n| 28\\u00d728 | Semantic Spatial Transformer | 0.8698| 0.8340| 0.8302| 0.7919| 0.7808| 0.7913| 0.7729| 0.5796 | 0.6089 | 0.5638 |\\n| 4\\u00d74 (A) | Semantic Spatial Transformer | 0.8432| 0.8089| 0.8056| 0.7690| 0.7672| 0.7743| 0.7425| 0.5734 | 0.5986 | 0.5513 |\\n| 4\\u00d74 (B) | Semantic Spatial Transformer | 0.7783| 0.7328| 0.7374| 0.6991| 0.7061| 0.7043| 0.7102| 0.5699 | 0.6002 | 0.5532 |\\n\\n**Notes:**\\n- **A** - Original 28\\u00d728 feature maps are downsampled via bilinear interpolation to 4\\u00d74.\\n- **B** - Smaller image sizes (32\\u00d732) are used instead of (224\\u00d7224) to get a smaller 4\\u00d74 feature map.\\n\\nThe overall trend\\u2014where higher cortical areas are better modeled by language input and lower cortical areas by visual input\\u2014is consistently observed across all readouts (Fig. 4, 5, 6, 7). However, the margin distinguishing the effectiveness of the models varies slightly. Notably, as we progress from less biologically intuitive readouts to more biologically plausible ones (linear regression, Gaussian 2D, Spatial-Feature Factorized Linear Readout, and finally, the Semantic Spatial Transformer Readout), these trends become increasingly well-defined. Given that the Semantic Spatial Transformer Readout most accurately and consistently models neural responses, we rely on it to delineate regions of the visual cortex sensitive to varying kinds of stimulus information.\\n\\nFurther, we note that this issue is not unique to the SST readouts. 
Ridge regression itself can favor larger models due to their higher number of basis features, potentially inflating prediction scores without necessarily reflecting greater alignment with the brain, as noted in prior research (Schaeffer et al., 2024). Thus, biases in readouts are a general concern, and the SST readout is not uniquely problematic in this regard.

**Reference: https://openreview.net/pdf?id=vbtj05J68r**

**Please refer to Supplementary Section A.4 and Figure 10 for additional analysis on the SST readout, in the current revision of the paper.**

*Response continued in next comment.*

---

**Continued Official Response to Reviewer gWb8**

**Addressing Weakness 9 -** $AT(X, \theta)$ is a function that performs an affine transformation (such as translation, rotation, etc.) on each channel of matrix $X$ ($C \times H \times W$, where $C$ is the number of channels and $H \times W$ is the dimension of each channel) using a set of $C$ transforms $\theta$ ($C \times 2 \times 3$), i.e., affine transform $\theta_i$ is applied to $X_i$. This has already been mentioned in lines 287-288.

**Addressing Weakness 10 -** Yes, the models are voxelwise models. We model the voxel responses for each region separately. We have clarified this in the revised statement in line 296: "We trained separate models for **voxels** in each of the following brain regions: the high-level ventral, dorsal and lateral streams, V4, V3v, V3d, V2v, V2d, V1v, and V1d."

**Addressing Weakness 11 -** The language encoder is a shallow CNN (a single convolutional block) that processes dense language caption embeddings. Note that each dense caption is associated with a specific spatial region of the image, and the language encoder preserves this spatial layout, enabling integration with the Semantic Spatial Transformer Readout. This is already clarified in the image caption.
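As a concrete illustration of the per-channel affine transform $AT(X, \theta)$ described under Weakness 9, here is a minimal NumPy sketch (an illustrative re-implementation, not the authors' code; it uses nearest-neighbour inverse warping for simplicity, whereas a trainable STN readout would use differentiable bilinear sampling):

```python
import numpy as np

def affine_transform_channels(X, theta):
    """Apply a separate 2x3 affine transform theta[i] to channel X[i].

    X: (C, H, W) feature maps.
    theta: (C, 2, 3) affine matrices mapping output coordinates to input
    coordinates (inverse warping), with coordinates normalized to [-1, 1]
    as in spatial transformer networks.
    Nearest-neighbour sampling; out-of-bounds samples are set to zero.
    """
    C, H, W = X.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    # Homogeneous output coordinates, shape (3, H*W).
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    out = np.zeros_like(X)
    for c in range(C):
        src = theta[c] @ grid                                   # (2, H*W) sample locations
        ix = np.round((src[0] + 1) * (W - 1) / 2).astype(int)   # back to pixel indices
        iy = np.round((src[1] + 1) * (H - 1) / 2).astype(int)
        valid = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
        warped = np.zeros(H * W)
        warped[valid] = X[c][iy[valid], ix[valid]]
        out[c] = warped.reshape(H, W)
    return out
```

In the actual readout, the $\theta$ parameters would be predicted per stimulus by a small network and the sampling made differentiable, so the channel-specific transforms can be learned end to end.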
Further details can be found in Language Models -> Dense Captions in Section 2.1.

**Addressing Weakness 12 -** We apologize for this and have updated it in our current revision.

**Addressing Weakness 13 -** Thank you for pointing this out. This is fixed in the latest revision.

**Addressing Weakness 14 -** We compare the performance of each voxel using a response-optimized encoder and a task-optimized encoder, both with a Semantic Spatial Transformer readout. When we say "better predicted by task optimized", we simply refer to whether the test performance of that voxel is better modeled by a task-optimized model or not. We do not take the best of ResNet or AlexNet in this situation; we compare them individually with the response-optimized encoders. In Figure 3B, the topmost plot compares the response-optimized encoders with a ResNet-50 encoder, and the lower plot compares a response-optimized encoder with an AlexNet encoder. Due to timing constraints, we were only able to experiment with two of the models. While other computational objectives or architectures might improve predictivity, recent findings (e.g., Conwell et al., 2023) using the same NSD dataset demonstrate that qualitatively different architectures (e.g., CNNs vs. Transformers) and vastly different task objectives (e.g., purely visual contrastive learning vs. vision-language alignment as in CLIP) achieve comparable levels of brain predictivity. Thus, we do not anticipate that including the CLIP model would significantly alter the conclusions of our study.

**Addressing Weakness 15 -** We apologize for this and have updated it in our current revision.

*Response continued in next comment.*

---

**Continued Official Response to Reviewer rgy2 (Weakness 1)**

**Weakness 1 -** *The paper needs to better justify why a different readout method is necessary.
The authors state that the predominant readout method is linear ridge regression, which has high computational and memory demands, but representational similarity analysis (RSA) is nearly as commonly used in the human literature and is less computationally intensive (Kriegeskorte et al., 2008). More importantly, however, the reason that the NeuroAI field tends to rely on linear regression as a readout is based on the logic that we are interested in evaluating the similarity of the representations up to a linear transformation (in representation space) without introducing non-linearities in the readout method. The authors should provide better justification for why a novel readout method is needed within that framework.*

First, we would like to clarify the distinction between readout models and similarity metrics, as they serve fundamentally different purposes in computational neuroscience. In summary, RSA is a post hoc analysis tool for measuring representational similarity, whereas readout models are predictive frameworks that learn mappings between neural network representations and fMRI responses. Thus, the two methodologies are not interchangeable and address distinct aspects of computational modeling in neuroscience.

- **Readout models** are designed to map the feature representations from a neural network encoder to voxel-wise fMRI responses in specific brain regions. These models perform a predictive function by directly aligning network features with measured neural responses. Predictive modeling can be a very useful goal in many contexts. For example, accurate and robust neural network models are essential for in-silico experiments [2] to explore hypotheses that would be challenging or impractical to test in vivo, to inform experimental design, and for precise neural population control.
In this way, accuracy provides a foundational utility for practical and theoretical advancements.
- The most popular readout method in the human visual neuroscience literature is a linear readout, wherein the features are flattened/vectorized and linearly combined to predict responses (often with an additional L2 regularization on the feature weights, as in ridge regression). However, the separability of *what* is computed from *where* it is computed (the spatial receptive field) -- a fundamental visual neuroscience finding -- is not reflected in ridge regression readouts that operate on flattened feature representations. More constrained, domain-specific readout approaches, such as Spatial × Feature Factorized or Gaussian 2D readouts (which were previously proposed in the context of mouse visual cortex modeling), are also linear mapping models, but they separate a neuron/voxel's spatial receptive field (which spatial locations in the encoder output the voxel prefers) from the voxel's featural preferences (which channels capture its featural selectivity). In this way, they are less expressive readouts than the ridge regression readout but have strong biological grounding. Empirically, we show that the Spatial × Feature Factorized readout consistently outperforms the much more expressive ridge regression readout. Now, coming to our proposed readout, the Semantic Spatial Transformer (SST) readout addresses a significant limitation of other readouts: they assume that the spatial receptive field remains constant irrespective of the stimuli. The SST readout, on the other hand, enables dynamic adjustments of spatial receptive fields depending on stimulus context. We expand on this below. More details about these methods can be found in Section 2.2 of our paper.
- On the other hand, similarity assessment tools such as **Representational Similarity Analysis (RSA)** serve a different role.
RSA is a powerful tool used to evaluate how similar two sets of high-dimensional representations are by comparing their pairwise representational geometries. For example, RSA computes a similarity matrix (e.g., correlation or cosine similarity) for the true and predicted voxel representations, allowing researchers to assess whether the underlying structure of the representations aligns. However, RSA does not provide a direct mapping from features to voxel responses, and it lacks the predictive functionality required of a readout model.

While it is true that RSA is computationally less intensive and widely used in the literature (e.g., Kriegeskorte et al., 2008), its purpose is fundamentally different. It is not a substitute for a readout model because it does not establish a voxel-wise predictive mapping. Instead, it is complementary to readout models, providing a means to evaluate the alignment of representational structures after the mapping is established.

*Response continued in next comment.*

---

Thank you to the authors for their additional engagement on this point. I appreciate the clarification and discussion, and as a result, I have adjusted my score to "marginally above acceptance." My concerns about bias have been discussed but would need to be more seriously investigated in the current manuscript for a stronger recommendation.

---

**Request for Review of Updated Paper and Revisions (Reviewer gWb8)**

Dear reviewer gWb8,

Thank you for your thoughtful and detailed comments, especially regarding clarity and presentation, which have greatly enhanced our work. We have addressed all your suggestions in the revised manuscript (see the previous comment for details). If possible, could you review the updated version and let us know if you have any additional concerns? Today marks the end of the discussion phase.
Thank you again for your valuable input.\\n\\nThanks,\\n\\nAuthors.\"}", "{\"title\": \"Updates for Reviewer gWb8\", \"comment\": \"Thank you so much for your patience with us while we were updating our paper. We wanted to mention the changes in the comments first as updating the paper took a little longer as we had to be careful with the 10 page limit constraint. We apologize for the version inconsistencies for which you could not see our latest updates. Here are all the changes we have added throughout the paper based on the various suggestions (**Please refer to the latest revision of the paper**)-\\n\\n1. Reorganizing Figure 1 based on the comments received by reviewers **gWb8** and **rgY2** - \\n - Added a brief overview of the entire pipeline (Schematic Pipeline of DNN Models)\\n - Separating out the individual components of the pipeline\\n - Replacing the tensor product symbol with reference to actual equation numbers used in the paper\\n2. Reorganized Figures - 2,3,4 by reducing the unused blank space in the cortical flatmaps according to reviewer **gWb8**.\\n3. Updated Figure 2B with discrete color maps as suggested by reviewers **rgY2** and **EMZd**.\\n4. Updated Figure 2A to highlight the advantages of STN readout as suggested by reviewer **EMZd**.\\n5. Better motivated the utility of Semantic Spatial Transformer Readout (**Lines 274-287**) based on the suggestions of reviewers **zmHZ**, **gWb8**, **rgY2** and **EMZd**. 
\\n - Added further analysis in **Supplementary section A.4 (Table 5, Figure 10)**, and they have been referenced in the main text at **line 287** - *Analyzing Spatial Modulation of Receptive Fields in Visual Cortex: Insights from STN Readouts*.\\n - As suggested by reviewer **rgY2**, we added further analysis on the dependency of the Semantic Spatial Transformer Readouts on channel size (and bias introduced by the readouts) in **Supplementary section A.5 (and Table 6)** - *Dependency of Semantic Spatial Transformer Readout on Channel Size*. This has been referenced in the main text in **line 333-334**.\\n6. As suggested by reviewer **zmHZ**, we added further analysis on the utility of dense captions in **Supplementary Section A.3 (and Figure 9)**, and referenced them in the main text at line **215**.\\n7. As suggested by Reviewer **EMZd**, we added further analysis on *\\u2018Comparing Architectural Approaches for Task and Response Optimized models\\u2019* in **Supplementary Section A.6 and Table 7**, and referenced them in the main text at line **192**.\\n\\n**To be more specific, specific updates with respect to your comments are addressed at** - \\n\\n1. **Weakness 1** - Figure 1 has now been updated and can be found at page 3 of the current revision.\\n2. **Weakness 6** - This has now been updated in our current revision. $Y_i$ is now a column vector, and $E_i$ is also a column vector. These updates can be found at line 223-224\\n3. **Weakness 8** - Please refer to the further clarifications on this in the previous comments. More reasoning on this is added in the main text from lines 274-287. We also added further analysis on the Semantic Spatial Transformer Readouts and the implications of $\\\\theta_1$ and $\\\\theta_2$ in Supplementary Section A.4, Figure 10 and Table 5.\\n4. **Weakness 12, 13, 15** - These figures (2,3,4,5) have been updated in our latest revision.\\n5. 
**Weakness 16** - Please refer to the further clarifications on this in the previous comments. More reasoning on this is added in the main text from lines 274-287. We also added further analysis on the Semantic Spatial Transformer Readouts and the implications of $\\\\theta_1$ and $\\\\theta_2$ in Supplementary Section A.4, Figure 10 and Table 5.\\n\\n**No changes were needed for Weaknesses 2, 3, 4, 5, 7, 10, 11 and 14. Please refer to the above comments for clarifications regarding the same.**\\n\\nDue to edits in the paper, the following changes can be seen for the weaknesses below - \\n\\n1. **Weakness 9** - Please refer to the above comments for further clarification. Lines 287-288 (in the above comment) have now been shifted to lines 272-273\\n\\n**Please let us know if you still see inconsistencies in the current version. We can raise appropriate concerns if the current version as seen by the authors is not being reflected on the reviewer's end.**\\n\\n*We would be happy to address any further concerns that you have.*\"}
Thank you again for your valuable input.\\n\\nThanks,\\n\\nAuthors.\"}", "{\"comment\": \"Thank you for clarifying the 'local' / 'global' distinction refers to spatial scale. The distinction to Doerig et al., is more clear, but now I wonder if this finding is somewhat trivial given the increase in RF size from low- to high-level visual regions?\"}", "{\"title\": \"Updates for Reviewer rgy2\", \"comment\": \"We appreciate your acknowledgment that the additional context strengthens the paper. We apologize for not marking the revisions earlier and hope this update clarifies our contributions. We have incorporated the following changes based on your specific suggestions -\\n\\n1. *Questions* - **Please refer to the latest version of Figure 1 at *page 3* of our latest revision.**\\n2. *Weakness 1* - \\n - The motivation behind the need of Semantic Spatial Transformer Readouts are added in lines 274-287\\n - Added further analysis in Supplementary section A.4 (Table 5, Figure 10), and they have been referenced in the main text at line 287 - Analyzing Spatial Modulation of Receptive Fields in Visual Cortex: Insights from STN Readouts.\\n3. *Weakness 2* - \\n - **We do understand that you still have concerns about Weakness 2, and we shall respond to them in a later comment.** For now, we have added the current analysis in supplementary section A.5 and referenced them in the main text at lines 333-334.\\n\\nHere are all the changes we have added throughout the paper based on the various suggestions (**Please refer to the latest revision of the paper**)- \\n\\n1. Reorganizing Figure 1 based on the comments received by reviewers **gWb8** and **rgY2** - \\n - Added a brief overview of the entire pipeline (Schematic Pipeline of DNN Models)\\n - Separating out the individual components of the pipeline\\n - Replacing the tensor product symbol with reference to actual equation numbers used in the paper\\n2. 
Reorganized Figures - 2,3,4 by reducing the unused blank space in the cortical flatmaps according to reviewer **gWb8**.\\n3. Updated Figure 2B with discrete color maps as suggested by reviewers **rgY2** and **EMZd**.\\n4. Updated Figure 2A to highlight the advantages of STN readout as suggested by reviewer **EMZd**.\\n5. Better motivated the utility of Semantic Spatial Transformer Readout (**Line 274-287**) based on the suggestions of reviewers **zmHZ**, **gWb8**, **rgY2** and **EMZd**. \\n - Added further analysis in **Supplementary section A.4 (Table 5, Figure 10)**, and they have been referenced in the main text at **line 287** - *Analyzing Spatial Modulation of Receptive Fields in Visual Cortex: Insights from STN Readouts*.\\n - As suggested by reviewer **rgY2**, we added further analysis on the dependency of the Semantic Spatial Transformer Readouts on channel size (and bias introduced by the readouts) in **Supplementary section A.5 (and Table 6)** - *Dependency of Semantic Spatial Transformer Readout on Channel Size*. This has been referenced in the main text in **line 333-334**.\\n6. As suggested by reviewer **zmHZ**, we added further analysis on the utility of dense captions in **Supplementary Section A.3 (and Figure 9)**, and referenced them in the main text at line **215**.\\n7. As suggested by Reviewer **EMZd**, we added further analysis on *\\u2018Comparing Architectural Approaches for Task and Response Optimized models\\u2019* in **Supplementary Section A.6 and Table 7**, and referenced them in the main text at line **192**.\"}", "{\"title\": \"Continued Official Response to Reviewer zmHZ (Question: Semantics\\u201d without language model confounds and \\u201cDensifying\\u201d the single captions)\", \"comment\": \"**\\u201cSemantics\\u201d without language model confounds -** We appreciate the reviewer\\u2019s insightful comments. 
We acknowledge that language models inherently operate on proto-symbolic inputs (e.g., tokenized words) that are more abstract than the raw pixel inputs provided to vision models. This abstraction likely facilitates alignment with higher-level semantic representations in the brain, even if it does not directly inform the structure of visual cortex representations. Additionally, the growing overlap between vision and language models, especially as models get larger, may result in \\\"default\\\" alignments of the language models with visual cortex responses that do not provide any meaningful insights into the structure of visual cortex representations.\\n\\nTo address the risk of overinterpretation, we have revised the discussion in the manuscript to caution against equating language models\\u2019 predictive success with evidence that the visual cortex encodes language-like representations (Line 532-536). Instead, we suggest that such success may reflect the ability of language models to capture the statistical structure of the visual world, given their training on vast datasets. This interpretation is consistent with recent findings that language model representations can also predict visual cortex activity in macaques\\u2014a species without linguistic abilities\\u2014underscoring that their predictive power may not depend on linguistic abstraction per se.\\n\\nThe question of whether and how vision models develop \\u201csemantic\\u201d representations is indeed compelling but falls beyond the scope of the present study. Aggregating representations across multiple images of the same concept (e.g., different pictures of dogs) could potentially enhance semantic distinctions in vision models. 
This line of inquiry would be a promising direction for future work investigating the emergence of semantic representations in vision models and their relationship to language models.\\n\\nRegarding the reviewer\\u2019s final point, the Semantic Spatial Transformer (SST) operates by spatially transforming feature maps and spatial masks to model dynamic, stimulus-dependent adjustments, without fundamentally altering the nature of the features themselves. While this limits its ability to reveal specific feature transformations along the cortical hierarchy, future work could leverage STNs to explore the geometric invariances underlying neural responses in different brain regions. This remains an exciting avenue for further exploration.\\n\\n**\\u201cDensifying\\u201d the single captions -** This is an excellent point made by the reviewer about whether the observed differences between dense and global captioning are due to (a) the spatial subdivision of the image (Hypothesis 1) or (b) the increased semantic detail in dense captions (Hypothesis 2). The original idea behind using dense captions was to provide spatial information in addition to semantic information in the form of captions, and subdividing the image into equal-sized grids and getting captions for each grid was one of the easiest and most intuitive ways to do that. We also experimented with other options, such as using the DenseCap [1] model to identify parts of the image and generate captions for them. However, some regions remained unidentified, and in comparison the dense captioning method proposed in the paper performed better.\\n\\nAs per some of your suggestions, we tried generating more comprehensive single captions of the image using existing LLMs; however, none of them were able to provide more information than those already present in the original MS-COCO dataset. 
In an attempt to densify the single captions, we thus adopted a different approach: for each image, we took the embeddings of dense captions generated for individual grid locations and averaged these embeddings to produce a single \\\"aggregate dense caption\\\" embedding.\\n \\nOn comparing single caption stimuli with \\u2018densified\\u2019 single caption stimuli (as opposed to the dense caption approach discussed in the paper), we saw a similar trend where the higher regions of the visual cortex were better modeled by single caption stimuli. However, the transition in sensitivity from dense to single caption in the middle regions of the ventral, dorsal and lateral streams that is so clearly pronounced when using dense captions is missing when using the above \\u2018densified\\u2019 single captions. Further comparing \\u2018densified\\u2019 single captions to dense captions (as proposed in the paper), we saw that the dense captions modeled the overall visual cortex better. Hence, we do feel that adding spatial information to the dense caption is necessary for building more accurate models, be it by sub-dividing the image into grids or via any other way. **Please see Supplementary Fig. 9 in our revised paper for the results of this analysis. This analysis has been added in Supplementary section A.3 in the revised version of our paper.**\"}", "{\"title\": \"Official Response to Reviewer rgy2\", \"comment\": \"We would like to thank the reviewers for taking the time to review our paper, and for raising many important questions. First, we would like to address the questions raised -\\n\\n**Why did the authors only use 4 of the 8 participants from NSD?**\\n\\nThe fMRI responses were similar across the various subjects, and it is common practice to train models on these 4 datasets only [1], [2], [3]. The data from the held-out subjects is usually used for fine-tuning or zero-shot tasks as seen in [1]. 
Also, these were the only subjects that completed all 40 NSD sessions.\\n\\n**Figure 1A is confusing. I don\\u2019t follow how each of the different readout methods are shown here. Better labels would be very helpful.**\\n\\nWe apologize for the confusion, and have attempted to make this figure more clear in the next revision of the paper. It will be updated very soon.\\n\\n**References**\\n\\n1. Khosla, Meenakshi, and Leila Wehbe. \\\"High-level visual areas act like domain-general filters with strong selectivity and functional specialization.\\\" bioRxiv (2022): 2022-03.\\n2. Joo, Jaehoon, Taejin Jeong, and Seongjae Hwang. \\\"Brain-Streams: fMRI-to-Image Reconstruction with Multi-modal Guidance.\\\" arXiv preprint arXiv:2409.12099 (2024).\\n3. Liu, Yulong, et al. \\\"See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI.\\\" arXiv preprint arXiv:2403.06361 (2024).\\n4. Doerig, Adrien, et al. \\\"Semantic scene descriptions as an objective of human vision.\\\" arXiv preprint arXiv:2209.11737 10 (2022).\\n\\n*Rebuttals to Weaknesses continued in the next response.*\"}", "{\"metareview\": \"The reviewers collectively noted that the paper suffers from subpar presentation quality, as highlighted by reviewer gWb8, with cluttered and confusing figures and unclear descriptions. Furthermore, while the proposed Semantic Spatial Transformer readout shows improvements in prediction accuracy, its biological motivation and broader relevance remain underemphasized and inadequately justified. Reviewer EMZd pointed out that many findings replicate prior work, with limited additional insights, and raised concerns about the marginal gains of the proposed readout given its complexity. 
Overall, the weaknesses in writing, clarity, and limited novelty outweigh the paper\u2019s contributions.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, all reviewers raised concerns about presentation quality, the biological relevance and marginal gains of the proposed Semantic Spatial Transformer (SST) readout, and the novelty of findings compared to prior work. The authors responded by revising figures (for gWb8), adding biological justifications for the SST readout (for EMZd and rgy2), and conducting additional experiments to clarify the distinctions between task- and response-optimized models (for EMZd and zmHZ). While these efforts were acknowledged, the persistent issues in clarity (gWb8), unresolved biases in the readout method (rgy2), and limited novelty (EMZd and zmHZ) ultimately led to the recommendation against acceptance, with which I also agree.\"}
I have updated my score.\\n\\nIn addition to the points noted above, one other small suggestion is to frontload the biological relevance for STN in the motivation, if the authors feel that is indeed an important element.\"}", "{\"title\": \"Updates for Reviewer EMZd\", \"comment\": \"Thank you so much for your patience with us while we were updating our paper. We wanted to mention the changes in the comments first as updating the paper took a little longer as we had to be careful with the 10 page limit constraint. Here are all the changes we have added throughout the paper based on the various suggestions (**Please refer to the latest revision of the paper**)-\\n\\n1. Reorganizing Figure 1 based on the comments received by reviewers **gWb8** and **rgY2** - \\n - Added a brief overview of the entire pipeline (Schematic Pipeline of DNN Models)\\n - Separating out the individual components of the pipeline\\n - Replacing the tensor product symbol with reference to actual equation numbers used in the paper\\n2. Reorganized Figures - 2,3,4 by reducing the unused blank space in the cortical flatmaps according to reviewer **gWb8**.\\n3. Updated Figure 2B with discrete color maps as suggested by reviewers **rgY2** and **EMZd**.\\n4. Updated Figure 2A to highlight the advantages of STN readout as suggested by reviewer **EMZd**.\\n5. Better motivated the utility of Semantic Spatial Transformer Readout (**Lines 274-287**) based on the suggestions of reviewers **zmHZ**, **gWb8**, **rgY2** and **EMZd**. 
\\n - Added further analysis in **Supplementary section A.4 (Table 5, Figure 10)**, and they have been referenced in the main text at **line 287** - *Analyzing Spatial Modulation of Receptive Fields in Visual Cortex: Insights from STN Readouts*.\\n - As suggested by reviewer **rgY2**, we added further analysis on the dependency of the Semantic Spatial Transformer Readouts on channel size (and bias introduced by the readouts) in **Supplementary section A.5 (and Table 6)** - *Dependency of Semantic Spatial Transformer Readout on Channel Size*. This has been referenced in the main text in **line 333-334**.\\n6. As suggested by reviewer **zmHZ**, we added further analysis on the utility of dense captions in **Supplementary Section A.3 (and Figure 9)**, and referenced them in the main text at line **215**.\\n7. As suggested by Reviewer **EMZd**, we added further analysis on *\\u2018Comparing Architectural Approaches for Task and Response Optimized models\\u2019* in **Supplementary Section A.6 and Table 7**, and referenced them in the main text at line **192**.\\n\\n**To be more specific, specific updates with respect to your comments are addressed at** - \\n\\n1. **Questions** - Figure 2B is now updated with discrete colors at page 7\\n2. **Weakness 1** \\n - we added further analysis on Comparing Architectural Approaches for Task and Response Optimized models in Supplementary Section A.6 and Table 7, and referenced them in the main text at line **192**.\\n3. 
**Weakness 3**\\n - Figure 2A is updated to further highlight the performance boosts obtained by using the STN readout.\\n - Further clarification on the motivation behind the use of STN readouts is added at lines 274-287.\\n - Further analysis on STN readouts are added in Supplementary section A.4 (Table 5, Figure 10), and they are referenced in the main text at line 287\\n\\n**We understand that the reviewer still has some pending concerns regarding Weakness 1 and 2, and we shall respond to them very soon in a later comment.**\"}", "{\"comment\": \"I appreciate the clarifications. I believe these additional details and analyses could strengthen the paper, but I do not believe have been incorporated (yet). I still have some concerns:\\n\\nYou mention \\u201cthe primary objective of our study was not to isolate these factors.\\u201d But as the paper is currently written, comparing neural versus task optimization seems central to the paper. In the abstract and throughout the paper \\u201ctask optimized\\u201d and \\u201cresponse optimized\\u201d are directly contrasted to each other, and many conclusions explicitly mention this distinction. \\n\\nI appreciate the additional analysis with the Resnet-50 architecture. I think this would strengthen the paper and warrants some discussion. The lower performance of the \\u201cresponse optimized\\u201d model with this architecture however, further underscores my concern about the conclusions about \\u201cresponse\\u201d versus \\u201ctask optimization\\u201d in the paper.\\n\\nThe distinction between \\u201clocal\\u201d and \\u201cglobal\\u201d semantics is interesting, but does not feel like a major advance over [4]. In fact, I believe that paper also compares word-level versus sentence-level embeddings). \\n\\nFinally, I appreciate the additional information about the STN motivation. I think this would significantly strengthen the paper, if added.\"}" ] }
11xgiMEI5o
OmniRe: Omni Urban Scene Reconstruction
[ "Ziyu Chen", "Jiawei Yang", "Jiahui Huang", "Riccardo de Lutio", "Janick Martinez Esturo", "Boris Ivanovic", "Or Litany", "Zan Gojcic", "Sanja Fidler", "Marco Pavone", "Li Song", "Yue Wang" ]
We introduce OmniRe, a comprehensive system for efficiently creating high-fidelity digital twins of dynamic real-world scenes from on-device logs. Recent methods using neural fields or Gaussian Splatting primarily focus on vehicles, hindering a holistic framework for all dynamic foregrounds demanded by downstream applications, e.g., the simulation of human behavior. OmniRe extends beyond vehicle modeling to enable accurate, full-length reconstruction of diverse dynamic objects in urban scenes. Our approach builds scene graphs on 3DGS and constructs multiple Gaussian representations in canonical spaces that model various dynamic actors, including vehicles, pedestrians, cyclists, and others. OmniRe allows holistically reconstructing any dynamic object in the scene, enabling advanced simulations (~60 Hz) that include human-participated scenarios, such as pedestrian behavior simulation and human-vehicle interaction. This comprehensive simulation capability is unmatched by existing methods. Extensive evaluations on the Waymo dataset show that our approach outperforms prior state-of-the-art methods quantitatively and qualitatively by a large margin. We further extend our results to 5 additional popular driving datasets to demonstrate its generalizability on common urban scenes. Code and results are available at [omnire](https://ziyc.github.io/omnire/).
[ "Gaussian Splatting", "Neural Rendering", "Dynamic Scene Reconstruction", "Autonomous Driving" ]
Accept (Spotlight)
https://openreview.net/pdf?id=11xgiMEI5o
https://openreview.net/forum?id=11xgiMEI5o
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vr1xmFstQA", "rmS4BDSsn7", "opIVXIbMHf", "npHrsVocHE", "lWwlOnAWim", "h811p0iFJa", "h1AQsBYat4", "exroknt9rr", "aDXxtRmjfH", "YEhfj4co7Q", "WguGl5oI9E", "KyGrVSTKwP", "HqGwrwebsP", "HPrgogY49N", "Ce2xNGOCzo", "9cwxZxJixB", "62zsxzcw84", "5cSyGYuUbM" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732292989546, 1732867658458, 1732291596192, 1732517245399, 1732292266801, 1732291731166, 1732292182130, 1732292117796, 1734924219144, 1737523663750, 1732336696490, 1730377161546, 1732348291166, 1732517314048, 1732292240946, 1729587574590, 1730710761487, 1732452261820 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Area_Chair_uJ3v" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4816/Reviewer_h6SA" ], [ "ICLR.cc/2025/Conference/Submission4816/Reviewer_dKk5" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Authors" ], [ "ICLR.cc/2025/Conference/Submission4816/Reviewer_h6SA" ], [ "ICLR.cc/2025/Conference/Submission4816/Reviewer_WVni" ], [ "ICLR.cc/2025/Conference/Submission4816/Reviewer_dKk5" ] ], "structured_content_str": [ "{\"title\": \"General 
Response\", \"comment\": \"We sincerely thank the reviewers for their time and effort in reviewing our manuscript, as well as for their insightful feedback and recognition of the strengths of our work:\\n\\n- **Comprehensive Dynamic Modeling:** Our work presents a holistic framework that unifies the reconstruction of static backgrounds, driving vehicles, and non-rigidly moving dynamic actors. (WVni, h6SA)\\n- **Fine-Grained Reconstruction and Behavior Simulation:** We address the challenges of modeling humans and other dynamic actors from driving logs, even in cluttered environments. (WVni, dKk5)\\n- **Extensive Results and SOTA Performance:** Our method is demonstrated on six popular driving datasets, achieving SOTA performance in scene reconstruction and novel view synthesis. (WVni, dKk5, h6SA)\\n- **Dedicated Engineering Effort and Community Value:** We are committed to providing a complete codebase and holistic framework, which holds significant value for advancing autonomous driving simulation. (h6SA)\\n\\nWe have revised the main paper and supplementary material according to your comments, with all revisions highlighted in **red**.\\n\\n---\\n\\n**OmniRe on Highly Challenging Scenes:** We performed additional experiments on various types of challenging dynamic scenes: 1) Super Crowded Scenes, 2) Nighttime Scenes, 3) Adverse Weather Conditions, 4) High-Speed Scenes. OmniRe performs well across these scenarios, demonstrating strong performance (quantitative results in the revised manuscript, page 18). **These results highlight the overall robustness and generalizability of OmniRe to diverse driving scenes.**\\n\\nWe updated the visualizations in section **\\\"OmniRe on Challenging Scenes\\\"** on our [anonymous page](https://anonymousi079j.github.io/omnire_id4816/) for better understanding. 
We would greatly appreciate it if the reviewers get a chance to check these additional results of highly challenging scenes.\"}", "{\"title\": \"Rebuttal follow up\", \"comment\": \"Dear Reviewer,\\n\\nThank you again for helping us improve the paper. As the rebuttal period is coming to an end, we would like to check if our responses have addressed your concerns. If so, we would greatly appreciate it if you could consider raising the score accordingly. Otherwise, we would be grateful for any additional comments or questions, and we would be happy to include further experiments and results. Thanks!\"}", "{\"title\": \"Response to Reviewer WVni (1/2)\", \"comment\": \"We sincerely thank the reviewer for the time and effort spent reviewing. **We are encouraged by the reviewer's recognition of the value our method brings to dynamic scene modeling, human-level reconstruction and simulation, and SOTA performance.** Our replies are stated below.\\n\\n> There are several limitations in OmniRe approach, which are correctly identified in the paper too.\\n>\\n> Q1. Lighting Effects: OmniRe doesn\\u2019t model lighting variations explicitly. This can lead to visual inconsistencies when combining scene elements with differing lighting conditions, which may reduce realism in certain simulations. Addressing this would require additional modeling of lighting dynamics.\\n\\nThanks for pointing out its importance and for recognizing that we have acknowledged it in the limitation section. Modeling lighting effects to enhance simulation realism is crucial for achieving more convincing and harmonious results. Solving it is non-trivial and requires dedicated efforts. 
For example, LightSim [a] is a standalone paper that focuses specifically on this issue.\\n\\nWhile this is slightly beyond the scope of our current work, we would like to discuss a few potential directions for light effect modeling based on our current framework: 1) incorporate BRDF model to learn reflectance, material properties, and utilize environmental maps for illumination modeling; 2) post-processing rendered images with harmonization techniques, such as rendering feature maps and training a network to convert these maps into photorealistic RGB images. 3) using video diffusion models to refine the outputs, e.g. localized editing with diffusion models [b]. \\n\\n[a] LightSim: Neural lighting simulation for urban scenes. NeurIPS2023.\\n[b] Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets.\\n\\n> Q2. Novel View Synthesis Limitations: OmniRe\\u2019s per-scene optimization approach struggles to generate satisfactory results when the camera view deviates significantly from the training trajectories. This could be a limitation for scenarios requiring a wide range of viewing angles, such as free navigation through the reconstructed scenes. The authors suggest incorporating data-driven priors or generative models as future work to address this.\\n\\nDriving scenes, which typically follow a forward-moving trajectory, often involve sparser observations compared to object-centric scenes. For driving simulation, simulated viewpoints are typically constrained to those on the road, with acceptable deviation. As a result, novel view rendering maintains high quality, as shown in the top-right novel-view navigation demo on the anonymous page.\\n\\nWhile the current novel view quality meets the requirements for typical driving simulations (e.g., lane shifts or various vehicle heights), limitations arise for free navigation with viewpoints far outside the training space. 
For future work, we believe methods using generative models like [c] could enhance reconstruction by inferring missing details and completing unseen regions.\\n\\n[c] Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion. SIGGRAPH2024\"}", "{\"title\": \"Rebuttal follow up\", \"comment\": \"Dear reviewer,\\n\\nWe greatly appreciate the time and effort you have invested in reviewing our work! We hope the additional experiments on OmniRe\\u2019s real-time rendering and simulation capabilities address concerns regarding computational complexity and real-time performance. This work involved substantial engineering efforts to ensure the system's efficiency and robustness. \\n\\nWe are also greatly inspired by your suggestion regarding CityLifeSim to further explore the possible extension of our work. Based on our current reconstruction framework, we will incorporate elements like configurable actor behaviors and interaction rules to develop a more comprehensive simulator in future work.\\n\\nWe understand how busy you must be, especially as we approach the end of the discussion period. We will greatly appreciate an increased score if we successfully address your concerns. Otherwise, we are committed to provide additional results to answer any further questions.\"}", "{\"title\": \"Response to Reviewer h6SA (2/2)\", \"comment\": \"> Q2. GS-based methods generally perform well in scenarios with static environments or low vehicle speeds, as demonstrated by most of the demos on the project page. However, I am curious about the reconstruction performance of this approach in situations where the ego vehicle is moving at higher speeds.\\n\\nThank you for raising this concern. OmniRe does perform equally well for high-speed scenarios. 
We have included some demos where the ego vehicle is moving at higher speeds (e.g., Pandaset scene-012, Argoverse scene-037 in the anonymous page).\\n\\nTo fully address this concern, we conducted additional tests of OmniRe, StreetGS, PVG, and DeformGS, on three scenes (seg119252\\u2026, seg122514\\u2026, seg152664\\u2026) **with moderately high vehicle speeds.** The results are detailed below, showcasing OmniRe's superior performance in full scene reconstruction and human modeling, while performing on par with StreetGS in vehicle modeling.\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | **31.75**/**0.923** | **28.95**/**0.861** | 27.77/0.865 | **29.44**/**0.887** | **24.70**/**0.719** | **26.44**/**0.822** |\\n| streetgs | 31.40/0.920 | 24.79/0.737 | **27.90**/**0.869** | 29.24/0.884 | 22.93/0.630 | 26.32/0.819 |\\n| pvg | 29.81/0.887 | 26.87/0.799 | 24.37/0.758 | 27.78/0.844 | 24.24/0.665 | 21.92/0.625 |\\n| deformgs | 30.71/0.917 | 24.05/0.703 | 20.16/0.579 | 28.45/0.877 | 22.39/0.590 | 19.03/0.505 |\\n\\n**Furthermore, we tested them on three extremely high-speed driving scenes** (seg109239\\u2026, seg113924\\u2026, seg118396) from Waymo. Highways typically lack non-rigid classes (e.g., humans), making human reconstruction metrics inapplicable. Since non-rigid modeling is not required in these scenarios, OmniRe and StreetGS demonstrate comparable performance. **We provide additional videos in our [anonymous page](https://anonymousi079j.github.io/omnire_id4816/) (Section: OmniRe on Challenging Scenes).** We will greatly appreciate it if reviewers get a chance to check these additional results of highly challenging scenes. 
\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | 30.85/**0.902** | -/- | **29.51**/0.882 | 28.01/**0.864** | -/- | 24.52/0.760 |\\n| streetgs | **30.89**/**0.902** | -/- | 29.45/**0.883** | **28.05**/0.863 | -/- | **24.88**/**0.766** |\\n| pvg | 28.28/0.861 | -/- | 21.14/0.617 | 25.79/0.825 | -/- | 17.65/0.470 |\\n| deformgs | 26.95/0.855 | -/- | 17.49/0.482 | 25.56/0.837 | -/- | 16.07/0.407 |\\n\\n> Q3. I wonder about the computational cost of reconstructing a complete segment in the Waymo dataset, as the entire pipeline seems a bit complex.\\n\\nOur pipeline is cleanly organized as shown in the supplementary material. We have dedicated significant engineering efforts to ensure reconstruction efficiency. Reconstructing a complete Waymo segment with three front-facing cameras takes 1-1.5 hours, depending on the scene's dynamic complexity. The GPU memory usage is 6GB when sensor data is cached on the CPU and 15GB when cached on the GPU. Tests are performed on a single 4090 GPU.\\n\\n> Q4. Why does it seem that the reconstruction quality of NuPlan is significantly worse than that of other datasets?\\n\\nThe lower quality on NuPlan is due to severe image distortions of NuPlan\\u2019s raw data. Unlike NeRF-based (ray-based rendering) methods, which can account for distortion parameters to correct ray directions, the whole-image rasterization approach of 3DGS does not natively support such undistortion corrections. To the best of our knowledge, no GS-based method currently incorporates distortion parameters for whole-image rasterization.\\n\\nFortunately, recent works like 3D Gaussian Ray-Tracing [a] adapt Gaussian to ray-based rendering, making distortion-aware rendering possible. 
We will integrate this into our framework as a future work, to enhance distortion handling across datasets and improve the system's overall robustness.\\n\\n[a] 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes. SIGGRAPH2024\"}", "{\"title\": \"Response to Reviewer WVni (2/2)\", \"comment\": \"> Q3. Computational Complexity\\n>\\n> & Q4. Challenges with Real-Time Adaptability\\n\\n**Training Efficiency:** Although our system integrates multiple Gaussian representations for full category reconstruction and jointly optimizes the appearance and poses of all objects (dozens or even over a hundred) along with the background, **we made substantial engineering efforts for the whole system to ensure its efficiency and robustness,** including:\\n\\n- Parallelizing the deformation and transformation of each object's Gaussians using category-specific Gaussian containers. These containers handle all objects' Gaussians and compute deformations in parallel. For example, the vehicle class container processes all vehicles\\u2019 Gaussians simultaneously.\\n- Sharing the weights and gradients of the deformation network for non-rigid deformable nodes and using a shared network with multiple instance embeddings to distinguish them.\\n\\n**Rendering Efficiency:** The rendering of OmniRe operates in **real-time** (>24 fps), meeting most simulation requirements. To address the concern regarding its real-time adaptability, we conducted tests under various settings with increasing dynamic complexity:\\n\\n- Background only: 170 fps\\n\\n- Background + Vehicle (Rigid Nodes): 108 fps\\n\\n- Background + Vehicle + SMPL-GS (Non-Rigid Nodes): 57 fps\\n\\n- Background + Vehicle + Deformable-GS (Non-Rigid Nodes): 68 fps\\n\\n- Background + Vehicle + Deformable-GS + SMPL-GS: 45 fps\\n\\n (Experiments conducted on a single 4090 GPU)\\n\\n> Q5. 
I wonder for items like causality and new synthesis if an approach more configurable could take place now that they have separated the pedestrians from the road. Thinking of something like CityLifeSim [a], where the data is later attached to an engine like Carla or airsim.\\n\\nThe idea behind CityLifeSim [a] is highly inspiring and crucial for an advancing simulation system, especially in aspects like modeling causality and designing configurable actor behaviors. We will explore ways to apply its ideas to our 3DGS-based framework.\\n\\nThe waypoint-based POI (points of interest) for actor interaction rule design presented in CityLifeSim [a] is adaptable to our framework with some additional work, such as labeling valid activity areas for each object. However, the primary challenge lies in the animation of humans: since we use a different modeling approach (SMPL), animating humans in our framework requires additional attention to motion naturalness. Ensuring realistic motion through effective SMPL parameter control is a non-trivial task. Techniques like text-driven motion [b] models could enhance motion naturalness and contribute to the design of more robust interaction rules.\\n\\nThank you again for highlighting this important work. We have included a discussion of CityLifeSim in the revised manuscript **line 511-513**.\\n\\n[a] CityLifeSim: A High-Fidelity Pedestrian and Vehicle Simulation with Complex Behaviors. ICIR2022\\n[b] Generating Diverse and Natural 3D Human Motions from Text. CVPR2022\"}", "{\"title\": \"Response to Reviewer dKk5 (2/2)\", \"comment\": \"> Q2. 
Performance in Specific Urban Scenes: For specialized scenarios, such as highways (with fast-moving vehicles), nighttime environments, and adverse weather conditions, does OmniRe maintain high reconstruction quality under these challenging conditions?\\n\\nThe answer is yes: **OmniRe maintains high reconstruction quality under these challenging conditions.** We tested OmniRe, StreetGS, PVG, and DeformGS on various specialized scenes, with detailed results provided below **(videos available in the 'OmniRe on Challenging Scenes' of the [anonymous page](https://anonymousi079j.github.io/omnire_id4816/))**.\\n\\na. **Nighttime environments:** We tested three nighttime scenes (seg129008\\u2026, seg102261\\u2026, seg128560\\u2026) from Waymo and observed that OmniRe outperformed the compared methods.\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | **31.14**/**0.778** | **31.06**/**0.790** | **25.87**/**0.774** | **29.79**/**0.744** | **25.71**/**0.689** | 24.29/0.724 |\\n| streetgs | 30.60/0.775 | 27.54/0.667 | 25.68/0.772 | 29.38/0.741 | 21.88/0.523 | **24.61**/**0.729** |\\n| pvg | 30.55/0.768 | 30.74/0.777 | 22.77/0.713 | 29.32/0.740 | 25.22/0.634 | 20.90/0.611 |\\n| deformgs | 30.06/0.767 | 27.40/0.665 | 20.62/0.612 | 28.88/0.740 | 22.30/0.530 | 19.20/0.535 |\\n\\nb. **Adverse weather conditions:** We tested seven scenes under varying weather conditions: 1) rainy: seg113555\\u2026, seg109277\\u2026, seg141339\\u2026; 2) foggy: seg161022\\u2026, seg172163\\u2026; 3) cloudy: seg144275\\u2026, seg157956\\u2026. 
OmniRe handled these challenging scenarios robustly, maintaining high reconstruction fidelity across all weather conditions.\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | **34.00**/**0.935** | **30.12**/**0.857** | **32.55**/**0.919** | **32.58**/**0.920** | **26.75**/**0.763** | **29.82**/**0.844** |\\n| streetgs | 33.49/0.933 | 18.26/0.424 | 32.28/0.916 | 32.05/0.918 | 18.56/0.434 | 29.60/0.841 |\\n| pvg | 32.75/0.927 | 27.75/0.785 | 27.20/0.795 | 31.12/0.907 | 25.02/0.656 | 24.44/0.677 |\\n| deformgs | 32.92/0.933 | 20.06/0.497 | 23.12/0.658 | 31.43/0.916 | 19.65/0.478 | 22.06/0.598 |\\n\\nc. **Highway scenes:** We tested three highway scenes from the Waymo dataset (seg109239\\u2026, seg113924\\u2026, seg118396). Highways typically lack non-rigid classes (e.g., humans), making human reconstruction metrics inapplicable. 
As non-rigid modeling was not required in these scenes, OmniRe and StreetGS demonstrated similar performance.\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | 30.85/**0.902** | -/- | **29.51**/0.882 | 28.01/**0.864** | -/- | 24.52/0.760 |\\n| streetgs | **30.89**/**0.902** | -/- | 29.45/**0.883** | **28.05**/0.863 | -/- | **24.88**/**0.766** |\\n| pvg | 28.28/0.861 | -/- | 21.14/0.617 | 25.79/0.825 | -/- | 17.65/0.470 |\\n| deformgs | 26.95/0.855 | -/- | 17.49/0.482 | 25.56/0.837 | -/- | 16.07/0.407 |\"}", "{\"title\": \"Response to Reviewer dKk5 (1/2)\", \"comment\": \"We sincerely thank the reviewer for their time and effort in reviewing our paper. We greatly appreciate the positive feedback and are eager to address the points raised. The reviewer's insights are valuable for improving our work. Below, we provide our responses.\\n\\n> Q1. Handling Occlusions and Complex Dynamics: OmniRe addresses in-the-wild challenges, yet the performance might be limited by severe occlusions and overlapping actors in complex urban scenes. Further refinement or integration of advanced occlusion handling techniques could enhance reconstruction fidelity.\\n\\nReconstructing scenes with extreme dynamic occlusions poses significant challenges. However, **OmniRe is designed to tackle these challenges and performs well even in scenes with extreme occlusions.** We tested OmniRe, StreetGS, PVG, and DeformGS on three extremely occluded scenes from Waymo (seg112520\\u2026, seg104859\\u2026, seg152664\\u2026), including scenarios with a large crowd of people simultaneously crossing a street. 
OmniRe demonstrated strong performance:\\n\\n| Method | Recon Full PSNR/SSIM | Recon Human PSNR/SSIM | Recon Vehicle PSNR/SSIM | Novel Full PSNR/SSIM | Novel Human PSNR/SSIM | Novel Vehicle PSNR/SSIM |\\n| :------- | :------------------- | :-------------------- | :---------------------- | :------------------- | :-------------------- | :---------------------- |\\n| omnire | **31.25**/**0.900** | **27.50**/**0.809** | **25.91**/**0.812** | **28.91**/**0.866** | **23.78**/**0.675** | **24.46**/**0.750** |\\n| streetgs | 26.81/0.865 | 19.31/0.516 | 24.79/0.787 | 25.67/0.841 | 18.53/0.459 | 23.40/0.723 |\\n| pvg | 29.11/0.865 | 24.56/0.686 | 21.29/0.644 | 27.21/0.830 | 22.24/0.555 | 19.63/0.528 |\\n| deformgs | 26.69/0.861 | 19.48/0.504 | 17.49/0.455 | 25.66/0.839 | 18.71/0.448 | 17.12/0.414 |\\n\\nOmniRe achieved a +2.14 PSNR improvement on scene reconstruction and a +2.96 PSNR improvement on human reconstruction. **We provide additional videos in our [anonymous page](https://anonymousi079j.github.io/omnire_id4816/) (Section: OmniRe on Challenging Scenes).** We will greatly appreciate it if reviewers get a chance to check these additional results of highly challenging scenes.\"}", "{\"metareview\": \"The paper presents an approach to simulate the appearance of dynamic road scenes using sene graphs on 3D Gaussian splats. Although the technical methods are not novel, the paper combines multiple tools into a pipeline that produces good results in demonstrated applications. The authors are encouraged to include rebuttal discussions in the final version and the paper is recommended for acceptance at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"While WVni correctly points out limitations in the proposed method to correctly handle lighting variations or large view variations, the AC agrees with the rebuttal that the scope of the paper is already sufficiently broad. Additional experiments in response to dKk5 are recommended to the added to the paper. 
The rebuttal is acknowledged by h6SA as convincing, especially as the authors state that the code will be released.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Thank you to the authors for their responses, and the additional experiments have addressed my concerns. I also consider the future direction to be highly promising, and indeed, there are currently a number of innovative approaches that integrate video generation. Overall, I will maintain my score, as this work is highly complete, and the provided code will greatly contribute to the development of the community and should be accepted.\"}", "{\"summary\": \"The paper introduces OmniRe, a novel approach for urban scene reconstruction that focuses on dynamic actors, including vehicles, pedestrians, and cyclists. OmniRe employs a Gaussian Scene Graph-based framework to model both static and dynamic objects. To address the limitations of previous methods in reconstructing non-rigid human models, OmniRe integrates SMPL for in-the-wild human representation, allowing for joint-level control. Extensive evaluations across several driving datasets demonstrate OmniRe's superior performance compared to baseline methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The paper is well-organized and easy to follow.\\n2. The proposed method for in-the-wild human representation is straightforward yet crucial for driving scene reconstruction.\\n3. Both quantitative and qualitative experiments effectively support the claims made in the introduction, with OmniRe achieving state-of-the-art results across various experimental settings.\", \"weaknesses\": \"1. Handling Occlusions and Complex Dynamics: OmniRe addresses in-the-wild challenges, yet the performance might be limited by severe occlusions and overlapping actors in complex urban scenes. 
Further refinement or integration of advanced occlusion handling techniques could enhance reconstruction fidelity.\\n2. Performance in Specific Urban Scenes: For specialized scenarios, such as highways (with fast-moving vehicles), nighttime environments, and adverse weather conditions, does OmniRe maintain high reconstruction quality under these challenging conditions?\", \"questions\": \"Please refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Dear reviewer, thank you so much for your valuable feedback and recognition of our paper! We will greatly appreciate an increased score if you find it becomes better---we strive to achieve the strongest possible submission we could have :-) We are committed to addressing any further concerns!\"}", "{\"title\": \"Rebuttal follow up\", \"comment\": \"Dear reviewer,\\n\\nThank you for your follow-up and for taking the time to review our updates. We appreciate your efforts! We hope that our additional results on the challenging scenes, along with the provided visualizations, have addressed your concerns about OmniRe's ability to handle challenging scenes. These results aim to provide a more comprehensive insight and strengthen our paper.\\n\\nWe will greatly appreciate an increased score if you find it becomes better, we strive to achieve the strongest possible submission we could have. We are committed to addressing any further concerns!\"}", "{\"title\": \"Response to Reviewer h6SA (1/2)\", \"comment\": \"We sincerely thank the reviewer for the time and effort invested in reviewing our manuscript. We also appreciate the recognition of our work in **advancing driving reconstruction and simulation systems, as well as the engineering effort involved.** Your comments are very helpful, our responses are detailed below.\\n\\n> Q1. 
As mentioned by the authors in the limitations section, there are still two key shortcomings: 1. The lack of lighting modeling results in unnatural object insertions. 2. The synthesis of new viewpoints is constrained to the original trajectory, limiting the approach from achieving fully free-trajectory digital reconstruction.\\n\\nThank you for your thoughtful comments on the limitations discussed in our paper. While addressing these challenges is non-trivial and slightly beyond the scope of our current work, we would like to propose a few potential directions to address them.\\n\\nFor light modeling, potential directions include: 1) incorporate BRDF model to learn reflectance, material properties, and utilize environmental maps for illumination modeling; 2) post-processing rendered images with harmonization techniques, such as rendering feature maps and training a network to convert these maps into photorealistic RGB images. 3) using video diffusion models to refine the outputs, e.g. localized editing with diffusion models [a]. \\n\\nFor novel view rendering, the current quality meets the requirements for typical driving simulations (e.g., lane shifts or varying vehicle heights). However, limitations arise during free navigation with viewpoints far outside the training space. For future work, we believe methods using generative models like [b] could enhance reconstruction by inferring missing details and completing unseen regions.\\n\\n[a] Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets\\n[b] Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion. SIGGRAPH2024\"}", "{\"summary\": \"This paper introduces OmniRe, a comprehensive framework for dynamic urban scene reconstruction. It leverages neural scene graphs with Gaussian representations to unify the reconstruction of static backgrounds, moving vehicles, and non-rigidly dynamic actors. 
Additionally, it incorporates specialized designs for human modeling. The effectiveness of the approach is demonstrated across multiple datasets, showcasing superior performance in both reconstruction quality and novel view synthesis.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is clearly written, with illustrative figures that are easy to understand. The experiments are comprehensive.\\n2. Modeling dynamic objects and simulating interactive behaviors are essential for closed-loop simulation in autonomous driving systems.\\n3. This work is highly engineering-oriented and demonstrates impressive results. Additionally, the authors have committed to open-sourcing the code, which will have significant value in advancing autonomous driving simulation in the future.\", \"weaknesses\": \"As mentioned by the authors in the limitations section, there are still two key shortcomings: 1. The lack of lighting modeling results in unnatural object insertions. 2. The synthesis of new viewpoints is constrained to the original trajectory, limiting the approach from achieving fully free-trajectory digital reconstruction.\", \"questions\": \"1. GS-based methods generally perform well in scenarios with static environments or low vehicle speeds, as demonstrated by most of the demos on the project page. However, I am curious about the reconstruction performance of this approach in situations where the ego vehicle is moving at higher speeds.\\n2. I wonder about the computational cost of reconstructing a complete segment in the Waymo dataset, as the entire pipeline seems a bit complex.\\n3. 
Why does it seem that the reconstruction quality of NuPlan is significantly worse than that of other datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"OmniRe is a framework designed to create high-fidelity digital twins of dynamic urban scenes for simulations, particularly for applications in autonomous driving. OmniRe goes beyond vehicle modeling to support diverse dynamic actors like pedestrians and cyclists, enabling complex simulations that reflect real-world scenarios. It utilizes Gaussian Scene Graphs with multiple representations, allowing detailed and editable scene reconstructions with both rigid (e.g., vehicles) and non-rigid (e.g., pedestrians) actors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Comprehensive Dynamic Modeling: OmniRe can handle various actors in urban settings, unlike most previous methods that focus mainly on vehicles.\", \"scene_graphs_and_gaussian_splatting\": \"The system uses 3D Gaussian splatting for detailed scene and object rendering, including control over each object.\", \"human_behavior_simulation\": \"Through SMPL modeling, OmniRe accurately reconstructs human motions, even in cluttered environments, enabling simulations of interactions between pedestrians and vehicles.\", \"state_of_the_art_performance\": \"Extensive testing on datasets like Waymo and others show OmniRe significantly outperforms existing methods in terms of visual fidelity and reconstruction accuracy.\", \"weaknesses\": \"There are several limitations in OmniRe approach, which are correctly identified in the paper too.\", \"lighting_effects\": \"OmniRe doesn\\u2019t model lighting variations explicitly. This can lead to visual inconsistencies when combining scene elements with differing lighting conditions, which may reduce realism in certain simulations. 
Addressing this would require additional modeling of lighting dynamics.\", \"novel_view_synthesis_limitations\": \"OmniRe\\u2019s per-scene optimization approach struggles to generate satisfactory results when the camera view deviates significantly from the training trajectories. This could be a limitation for scenarios requiring a wide range of viewing angles, such as free navigation through the reconstructed scenes. The authors suggest incorporating data-driven priors or generative models as future work to address this.\", \"computational_complexity\": \"While the method achieves high-quality reconstructions, the complexity of the Gaussian Scene Graph and the joint optimization of multiple parameters (pose, appearance, etc.) require substantial computational resources. Training time per scene, though optimized for an RTX 4090 GPU, could still pose scalability issues for large datasets or continuous real-time simulation needs.\", \"challenges_with_real_time_adaptability\": \"The method\\u2019s reliance on SMPL modeling for human actors and per-node deformation fields, though effective, might introduce delays in real-time applications, particularly if scenes are highly dynamic or involve many non-rigid actors.\", \"questions\": \"I wonder for items like causality and new synthesis if an approach more configurable could take place now that they have separated the pedestrians from the road.\\n\\nThinking of something like this \\nWang, Cheng Yao, et al. \\\"CityLifeSim: A High-Fidelity Pedestrian and Vehicle Simulation with Complex Behaviors.\\\" 2022 IEEE 2nd International Conference on Intelligent Reality (ICIR). IEEE, 2022.\\n\\nWhere the data is later attached to an engine like Carla or airsim\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for your reply! I will keep my rating.\"}" ] }
10vaHIOdEe
One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs
[ "Jingzhe Liu", "Haitao Mao", "Zhikai Chen", "Wenqi Fan", "Mingxuan Ju", "Tong Zhao", "Neil Shah", "Jiliang Tang" ]
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns, achieving success across different domains. However, existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset, leading to an expertise-intensive process with difficulty in generalizing across graphs from different domains. Therefore, it can be hard for practitioners to infer which GNN model can generalize well to graphs from their domains. To address this challenge, we propose a novel cross-domain pretraining framework, "one model for one graph," which overcomes the limitations of previous approaches that failed to use a single GNN to capture diverse graph patterns across domains with significant gaps. Specifically, we pretrain a bank of expert models, with each one corresponding to a specific dataset. When inferring on a new graph, gating functions choose a subset of experts to effectively integrate prior model knowledge while avoiding negative transfer. Extensive experiments consistently demonstrate the superiority of our proposed method on both link prediction and node classification tasks.
[ "Graph Pretraining; Cross-domain Graph Learning" ]
Reject
https://openreview.net/pdf?id=10vaHIOdEe
https://openreview.net/forum?id=10vaHIOdEe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zQCw2mxx2P", "xDGOAwlTSS", "uMjYqhtdlo", "r4Vj9pR9L2", "pb1WY1oIKo", "pWvPlrv5Am", "pVpHOKWEU9", "n8GaVIoObH", "mj40B1jNZJ", "kwN5EM1pd4", "kr04DMsbMH", "ZVp0afhy3z", "UlbzCuxYiR", "RADDqOyvbs", "P6MbfvpJ0M", "Naejhu0vu1", "K5qnL3t961", "DnVfpQnjnz", "9LmDaYSCtx", "8VlClc0uje" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730541670430, 1732500451527, 1729753463533, 1732598975561, 1732130736321, 1732773082504, 1730705772164, 1732132069666, 1730672742685, 1732604985827, 1732923985731, 1732598786861, 1732478231610, 1732131127511, 1732130938985, 1734913218021, 1737524124929, 1732130751960, 1732695687869, 1732807206549 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_6z4L" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_6eQR" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_zkPv" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_6eQR" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_Fbbv" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_6eQR" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_Fbbv" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Area_Chair_p7Y9" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11443/Authors" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_zkPv" ], [ "ICLR.cc/2025/Conference/Submission11443/Reviewer_6z4L" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes OMOG, an innovative cross-domain graph learning framework designed to enhance the adaptability and performance of Graph Neural Networks (GNNs) across various domains. By training a distinct expert model for each pre-training graph and employing adaptive gating functions during inference, OMOG dynamically selects relevant experts for unseen graphs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tBy pretraining individual models for each graph dataset, OMOG effectively addresses the feature and structural heterogeneity found across diverse graphs.\\n2.\\tOMOG\\u2019s model bank allows the easy addition of new expert models without retraining the entire system, providing flexibility to expand the pretraining bank with new data and adapt quickly to novel domains.\", \"weaknesses\": \"1.\\tThe primary motivation for adopting the \\u201cone model for one graph\\u201d approach is to alleviate the negative transfer limitations observed in the \\u201cone model for all graphs\\u201d method. It would be beneficial to provide comparisons and discussions on how this method differs from prior approaches that aim to reduce negative transfer through better pretraining data selection as [1].\\n2.\\tIt is recommended to identify which models, pretrained on specific graphs, are selected as the best match for various test graphs, with explanations for these selections. Additionally, I\\u2019m curious about whether pretraining data from different domains can contribute effectively or if only similar/same-domain data is more beneficial would strengthen the analysis. 
A case study is recommended to evaluate whether the proposed gating strategy actually mitigates issues stemming from conflicts across pre-training data from diverse domains.\\n3.\\tIt would be valuable to explore whether this pipeline could be adapted to other self-supervised learning approaches for pretraining graph-specific experts, and additional ablation studies are expected.\\n4.\\tOMOG\\u2019s design requires a separate model for each dataset and can result in a large model bank when many datasets are involved. Will it lead to high storage costs and maintenance overhead, especially in resource-constrained environments? A complexity analysis would also be helpful to understand OMOG\\u2019s computational feasibility at scale.\\n\\n[1] Better with less: a data-active perspective on pre-training graph neural networks. In NIPS '23.\", \"questions\": \"see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your supplementary explanations and experiments during the rebuttal period. These addressed some of my concerns, and I have raised my score to 5. However, I still have the following three key issues:\\n\\n1. You highlighted as an advantage of your method that \\u201cTrain different expert models on different datasets parallelly, avoiding the potential conflicts from different data domains during pretraining.\\u201d However, I find this reasoning unclear. If conflicts exist among different data domains, the expert models trained on these domains independently should also inherit these conflicts. In such a case, there should be no need to combine multiple experts during downstream tuning.\\n\\n2. You mentioned that pretraining on multiple datasets can be done in parallel. Could you report the training time and GPU memory consumption of your method compared to the baselines?\\n\\n3. 
In your response to Q3, you stated that \\u201cwe deploy SGC of different layers to aggregate different hops of neighborhood information.\\u201d However, Figure 2 and the description on line 197 of your paper suggest that you repeatedly apply Equation 2, which does not imply using different numbers of repetitions of Equation 2 for different datasets. In fact, on some datasets, increasing the number of repetitions does not necessarily improve performance. Additionally, the experiment you provided in your response is not what I intended to see. My suggestion was to replace the Expert model with other architectures, not to replace the entire pre-training model with a different one.\\n\\nIf these issues can be clarified through further discussion, I will consider raising my score further.\"}", "{\"summary\": \"This paper proposes OMOG to pretrain one model for one graph in cross-domain transformation. OMOG uses SGC as message aggregation before experts learning, and uses the contrastive method for expert and gate training. In the inference stage, OMOG activates the top-k of experts to infer node labels. The experimental results proved the effectiveness of the method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The approach of building an expert model for each graph intuitively addresses the issue of negative transfer in graph pre-training.\\n2. The problem addressed in this paper is a crucial part of foundational research on graph models and is currently a topic of significant interest among researchers.\\n3. The experiments involve graph data from multiple domains and include comparisons with recent, noteworthy methods in graph transfer learning.\", \"weaknesses\": \"1. According to the method statement, graphs from different domains are required to have input spaces of the same size, which seems difficult to satisfy with real-world data.\\n2. 
The construction of multiple experts for input graphs appears to be relatively naive, as it merely involves repeating several encoders and using similarity ranking with a central vector for averaging activation.\", \"questions\": \"1. What is the meaning of t in Equation (3)? How do you ensure that each expert has different parameters? Based on Equation (3), it seems that each expert has the same input and parameters, leading to identical outputs. What, then, is the purpose of having multiple experts?\\n2. In line 241, the mask matrix seems to be replaced by an MLP. Why are the node embeddings transformed by the MLP considered to be negative embeddings?\\n3. In Equation (4), is f_center the average output of a single graph across multiple experts, or the average output of multiple source graphs through their respective experts? How does this function as an anchor point for training the gate?\\n4. What are the parameters of the Gate in Equation (5)? Why is the Gate used before the Expert?\\n5. According to the inference process, an unseen graph activates the top k experts based on the highest correlation between each expert's output and the output average. How do you ensure that the pre-trained graph domain encompasses a sufficiently heterogeneous feature space to handle all potentially unseen graph domains?\\n6. How is the total number of experts determined, especially when the number of graph domains during pre-training and testing is uncertain?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comment. We are glad that we have addressed your concerns. We have updated the PDF file of the paper and the instructions for both the datasets and the codes.\"}", "{\"comment\": \">**Q1.** The paper does not clearly explain the differences from other MOE-based methods, such as GraphAlign and AnyGraph. 
The approach seems very similar to these methods, leaving it unclear what specific advantages OMOG has over them and why it achieves improved performance.\\n\\n**A1.** Thank you for the question. Previous MOE methods have the following characteristics: (1) They have a fixed number of experts within a single model. (2) They pretrain all the experts on different datasets together. (3) They apply the one pretrained model to all downstream datasets. Compared to these previous methods, our new method has the following advantages: (1) It trains different expert models on different datasets in parallel, avoiding the potential conflicts from different data domains during pretraining. (2) It flexibly chooses suitable experts for different downstream tasks, which boosts the positive knowledge transfer while mitigating the negative transfer. Furthermore, as the empirical evidence in the table below shows, our proposed method outperforms the two methods on the node classification tasks when the datasets are numerous and from different domains.\\n\\n| Methods | Child | History | Cora | Citeseer | Dblp | Products | Pubmed | Sports | Wikics |\\n|---| ---| --- | ---| ---| ---| --- | ---| ---| ---|\\n|Anygraph |13.84 |15.16| 55.63| 40.03| 50.27| 22.36 |37.92 |15.35| 50.84|\\n|GraphAlign|18.63| 26.39| 63.45| 45.11| 55.11| 30.82| 37.43| 21.73| 60.17|\\n|OMOG |20.34| 25.68| 66.19| 49.23| 57.53| 31.02| 39.71| 23.65| 62.42|\\n\\n>**Q2.** A core idea of OMOG is that each pre-training dataset requires a dedicated expert. This approach poses challenges for scalability: as the volume of pre-training data increases, the model grows linearly with the data, which is detrimental to pre-training efficiency.\\n\\n**A2.** Thank you for your question. We would like to point out that to construct the pre-trained model bank, it is not required to train a large model on various datasets. Instead, our approach only trains light-weight expert and gate models on each dataset. 
Hence, the pre-training process on multiple datasets can run in parallel, which actually gives our method efficiency advantages over other cross-domain pre-training approaches.\\n\\n\\n>**Q3.** Why is the expert model specifically a Transformer? How would the performance change if other models, such as GNN, Graph Transformer, or MLP, were used instead? Additionally, prior to entering the experts, the features and structure are fused through SGC. Why couldn\\u2019t this fusion step be incorporated within the experts themselves? After all, different graphs may require varying levels of neighbor aggregation.\\n\\n**A3.** Thank you for your question. We would like to point out that our method can fuse different levels of neighborhood information. Specifically, we deploy SGC of different layers to aggregate different hops of neighborhood information. Then we apply the self-attention module to adaptively fuse the varying hops of features. Compared to MLP, our backbone applies SGC to capture the graph structure information. Compared to GNN and GPS (graph transformer), our backbone can selectively assign attention weights to aggregated embeddings from different hops, which makes it easier to generalize to different datasets (different datasets might require a different number of aggregation layers). As a comparison, we also test the performance of zero-shot node classification when using GCN, GPS, and MLP. 
The results are shown in the table below.\\n\\n| Methods | Child | History | Cora | Citeseer | Dblp | Products | Pubmed | Sports | Wikics |\\n|---| ---| --- | ---| ---| ---| --- | ---| ---| ---|\\n|GCN |18.36| 21.65| 60.71| 44.91| 50.28| 30.63| 37.47| 21.83| 59.14|\\n|GPS |17.58| 21.35| 57.50| 43.81| 55.05| 28.42| 33.92| 19.27| 58.23|\\n|MLP |19.27| 23.27| 65.36| 47.94| 53.68| 29.30| 40.16| 22.98| 62.02|\\n|OMOG|20.34| 25.68| 66.19| 49.23| 57.53| 31.02| 39.71| 23.65| 62.42|\\n\\nFrom the table, we can see that our expert backbone outperforms all other models.\"}", "{\"comment\": \"> I noticed that both in [1] and [2], comparisons are provided against \\u201cusing semi-supervised method training from scratch (e.g. gcn, gin, etc)\\u201d. However, I am curious why Table 2 in your paper does not include a direct comparison with methods that are trained from scratch on downstream datasets. As widely known, basic semi-supervised learning achieves good performance on these datasets (e.g., results on Cora can exceed 80%), while your method only achieves 75.41 with extensive pre-training. Additionally, in terms of memory usage and computational time, training directly on the downstream graph data from scratch is likely the most cost-effective choice. Could you clarify whether your approach is only better when the downstream dataset does not have any labels?\\n\\n\\nThank you for your comment. We would like to point out that [2] does not provide comparisons against semi-supervised methods. All their models and baselines are trained under the unsupervised learning setting (e.g., pretraining + finetuning). Regarding [1], it actually discusses two different settings: co-training (supervised learning) and pretraining (unsupervised learning setting). However, the results from different settings are never compared. 
Unsupervised learning requires the model to access neither any node label nor the Laplacian matrix of the whole test graph during training, which differs from semi-supervised learning. There have been plenty of works that try to build unsupervised graph learning frameworks. Our work follows this research line and focuses on the unsupervised setting. Building a framework that has superior performance in (semi-)supervised setting is beyond the scope of this paper.\\n\\n\\n> Moreover, in your response to the third part, you mentioned that your method uses a \\u201cself-attention module to adaptively fuse these embeddings.\\u201d However, I could not find any equations describing this module in the paper. The lack of such crucial details significantly hinders understanding of your method. Even based on your explanation during the rebuttal, if this self-attention module learns different parameters for different datasets, then it should be considered part of the expert model rather than a shared module across all experts. Could you clarify which parts of the model belong to the expert modules and which parts are shared among all experts?\\nI suggest revising the paper to connect the steps of the model more clearly through equations. The current model description in the paper is quite confusing and does not align with the explanations provided in your rebuttal.\\n\\nThank you for your comment. We indicate the backbone of the expert model is a transformer block (self-attention module) in line 210 and provide the equations in Appendix C. Nonetheless, we agree with the reviewer that the current presentation could lead to certain misunderstandings and we will revise the sections of model descriptions.\"}", "{\"summary\": \"This paper proposes the OMOG (One Model for One Graph) framework, which advances graph learning by pre-training a unique model for each graph within a model bank. 
By creating a bank of expert models, each pre-trained for a specific dataset, OMOG selects relevant experts for inference using gate modules tailored to the target graph\\u2019s domain. This approach mitigates negative transfer issues common in multi-domain pre-training, and it performs effectively in zero-shot and few-shot tasks like link prediction and node classification, showing its potential for cross-domain graph tasks.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper focuses on and attempts to address a crucial yet highly challenging problem in the field of graph analysis\\u2014constructing a Graph Foundation Model (GFM).\\n2. On commonly used graph datasets, the model OMOG presented in the paper achieves relatively good performance in both zero-shot and few-shot settings.\", \"weaknesses\": \"1. The paper does not clearly explain the differences from other MOE-based methods, such as GraphAlign and AnyGraph. The approach seems very similar to these methods, leaving it unclear what specific advantages OMOG has over them and why it achieves improved performance.\\n2. A core idea of OMOG is that each pre-training dataset requires a dedicated expert. This approach poses challenges for scalability: as the volume of pre-training data increases, the model grows linearly with the data, which is detrimental to pre-training efficiency.\\n3. Why is the expert model specifically a Transformer? How would the performance change if other models, such as GNN, Graph Transformer, or MLP, were used instead? Additionally, prior to entering the experts, the features and structure are fused through SGC. Why couldn\\u2019t this fusion step be incorporated within the experts themselves? After all, different graphs may require varying levels of neighbor aggregation.\\n4. 
The core part for achieving zero-shot in this paper relies on calculating the similarity between label embeddings and prediction embeddings to obtain the final label prediction. In fact, most models that work under few-shot settings can be adapted to zero-shot using a similar approach. Consequently, Table 1 lacks several relevant baselines, such as GraphAlign, GCOPE, and GraphMAE.\\n5. Do all experts contribute to downstream performance improvements? In Figure 6, while the number of experts is adjusted, the full set of pre-training data is still used to train the gating mechanism. Could you vary the number of pre-training datasets to examine how this affects downstream performance?\\n6. Although this paper discusses GFM, which should be applicable to various downstream tasks, there is still an absence of experiments on graph-level tasks, such as graph classification or graph regression.\\n7. Some parts of the paper lack clarity. For example, in Section 4.5, the phrase \\u2018select 10 samples from each dataset\\u2019 is ambiguous. Does this refer to selecting 10 nodes, subgraphs, or something else?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Q1.** What is the time complexity or running time of the proposed method given that the proposed method needs to pretrain the model on several graphs from different domains?\\n\\n\\n**A1.** Thank you for your question. For a single expert model, the training time complexity is $O(d^2+K^2d)$, where $K$ is the number of hops aggregated for a target node and $d$ is the dimension of the feature vector. For a single gate model, the training time complexity is also $O(d^2+K^2d)$. Thus, the pretraining time complexity on a single dataset is just $O(d^2+K^2d)$. 
Moreover, since our framework allows us to train multiple experts and gates on different datasets in parallel, it actually gains efficiency advantages compared to the previous cross-domain pretraining methods.\\n\\n\\n> **Q2.** In line 207, the authors mention that the node-level features are randomly masked, resulting in two masked views. Are these two masked views mutually exclusive? For instance, given a 10-d feature matrix, the first masked view is generated by masking 5 features and the second view is generated by masking the remaining 5 features?\\n\\n**A2.** Thank you for your question. For the contrastive learning on node features, we follow the approach of GRACE to mask the features randomly. We will pinpoint this detail in the revision. \\n\\n\\n\\n> **Q3.** How do you ensure that the generator only generates a matrix to mask the domain-irrelevant features such that the filtered features are domain-related?\\n\\n**A3.** Thank you for your question. For the generated matrix $a_i$, we tend to let it learn to mask the domain-irrelevant features through the two loss terms $dis(\\\\tilde{f},f_{center})$ and $\\\\frac{1}{dis(\\\\mathbf{o_i,f_{center}})}$ in Equation (4), where $\\\\tilde{f}$ is the embedding of the original features masked by $a_i$ in the output space, $o_i$ is the embedding of $a_i$ in the output space, and $f_{center}$ can be viewed as the representative feature of the training domain. With the loss terms decreasing, the distance between the mask matrix $o_i$ and $f_{center}$ will become larger but $\\\\tilde{f}$ masked by $o_i$ will become closer to $f_{center}$. Hence, $a_i$ is supposed to learn to mask the domain-irrelevant features of the input vector.\\n\\n\\n\\n> **Q4.** The authors mention that one key issue is negative transfer. Since the knowledge in the proposed methods is extracted from graphs in different domains, it inevitably increases the probability of facing the negative transfer issue. How does the proposed method address this issue? 
The top-k strategy seems to only filter out the low-confidence knowledge, while it cannot directly alleviate the negative transfer issue as the irrelevant knowledge might be included in the top-k expert models.\\n\\n**A4.** Thank you for your question. We agree that the negative transfer is inevitable in cross-domain graph learning, and our proposed framework aims to mitigate the level of negative transfer. Specifically, we train the gates and let them select the top-k models that are most relevant to the new task. To deal with possible negative transfer within the top-k models, the gates calculate the relevance scores for the k models, and their parameters will be weighted according to these scores. This is validated by our experiments in Figure 6. When increasing the number of experts, the variance of the performance of the top-k strategy is minor. This indicates that even when irrelevant knowledge is included, the performance will not be affected because of the low relevance scores used in weighting. In this way, the proposed method tends to let the most relevant model contribute the most and thus alleviate the possible impacts of irrelevant knowledge included in the top-k models.\\n\\n> **Q5.** I try to reproduce the experimental results, but there are no instructions and datasets available in the provided GitHub link. The readme seems to be empty. Could you provide the datasets and the instructions to reproduce the results?\\n\\n**A5.** Thank you for your question. The dataset sizes are too large to upload to the anonymous github repo. 
We are trying to re-organize them into smaller ones and will provide them and the instructions shortly.\"}", "{\"summary\": \"This paper proposes a novel cross-domain pretraining framework called \\\"one model for one graph,\\\" by pretraining a bank of expert models and using a gating function to choose a subset of experts to effectively integrate prior model knowledge.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The presentation of this paper is good and most parts of the paper are clear.\\n2. This paper proposes a novel cross-domain pretraining framework.\\n3. The experimental results demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. The intuition of the generator and filter in pretraining the gate module is not clear.\\n2. The authors lack a discussion of the difference between the proposed method and the mixture-of-experts based methods. \\n3. I am concerned about the negative transfer issue in the proposed method. Since the knowledge in the proposed methods is extracted from graphs in different domains (and most of them are irrelevant), it inevitably increases the probability of facing the negative transfer issue. How does the proposed method address this issue? The top-k strategy seems to only filter out the low-confidence knowledge, while it cannot directly alleviate the negative transfer issue as the irrelevant knowledge might be included in the top k expert models. \\n4. I try to reproduce the experimental results, but there are no instructions and datasets available in the provided GitHub link.\", \"questions\": \"1. What is the time complexity or running time of the proposed method given that the proposed method needs to pretrain the model on several graphs from different domains?\\n2. In line 207, the authors mention that the node-level features are randomly masked, resulting in two masked views. Are these two masked views mutually exclusive? 
For instance, given a 10-d feature matrix, the first masked view is generated by masking 5 features and the second view is generated by masking the remaining 5 features?\\n3. How do you ensure that the generator only generates a matrix to mask the domain-irrelevant features such that the filtered features are domain-related?\\n4. The authors mention that one key issue is negative transfer. Since the knowledge in the proposed methods is extracted from graphs in different domains, it inevitably increases the probability of facing the negative transfer issue. How does the proposed method address this issue? The top-k strategy seems to only filter out the low-confidence knowledge, while it cannot directly alleviate the negative transfer issue as the irrelevant knowledge might be included in the top k expert models. \\n5. I try to reproduce the experimental results, but there are no instructions and datasets available in the provided GitHub link. The readme seems to be empty. Could you provide the datasets and the instructions to reproduce the results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response and clarification.\\n\\nI noticed that both in [1] and [2], comparisons are provided against \\u201cusing semi-supervised method training from scratch (e.g. gcn, gin, etc)\\u201d. However, I am curious why Table 2 in your paper does not include a direct comparison with methods that are trained from scratch on downstream datasets.\\nAs widely known, basic semi-supervised learning achieves good performance on these datasets (e.g., results on Cora can exceed 80%), while your method only achieves 75.41 with extensive pre-training. Additionally, in terms of memory usage and computational time, training directly on the downstream graph data from scratch is likely the most cost-effective choice. 
Could you clarify whether your approach is only better when the downstream dataset does not have any labels?\\n\\nMoreover, in your response to the third part, you mentioned that your method uses a \\u201cself-attention module to adaptively fuse these embeddings.\\u201d However, I could not find any equations describing this module in the paper. The lack of such crucial details significantly hinders understanding of your method. Even based on your explanation during the rebuttal, if this self-attention module learns different parameters for different datasets, then it should be considered part of the expert model rather than a shared module across all experts. Could you clarify which parts of the model belong to the expert modules and which parts are shared among all experts? \\n\\nI suggest revising the paper to connect the steps of the model more clearly through equations. The current model description in the paper is quite confusing and does not align with the explanations provided in your rebuttal.\"}", "{\"comment\": \"> Thank you for your further explanations and for providing more details. While some of my concerns have been addressed, I still have the question about the motivation for selecting models. The authors argue that choosing models can prevent the drawbacks associated with data selection, such as the exclusion of some knowledge and the difficulties in generalization. However, since different expert models are trained on different datasets independently and only K of them are chosen, could this approach also lead to the elimination of knowledge?\\n\\nThank you for your comment. We choose top-K relevant models to boost the positive knowledge transfer, while the less relevant models tend to contain irrelevant knowledge which possibly causes negative transfer. This is evidenced by our experiments in Figure 5, in which selecting most irrelevant models would lead to significant performance drops. 
Moreover, we show the model performance in Figure 6 while varying the value of K. It demonstrates that when the number of models K exceeds 3, the model performance does not improve and may even decline. This indicates that, within our framework, model selection is unlikely to eliminate valuable knowledge but instead prevents negative transfer.\"}", "{\"comment\": \"> 1. You highlighted as an advantage of your method that \\u201cTrain different expert models on different datasets parallelly, avoiding the potential conflicts from different data domains during pretraining.\\u201d However, I find this reasoning unclear. If conflicts exist among different data domains, the expert models trained on these domains independently should also inherit these conflicts. In such a case, there should be no need to combine multiple experts during downstream tuning.\\n\\nThank you for your comment. Based on the previous research [1][2], graph datasets from similar domains are more likely to produce positive transfer, while datasets from different domains tend to produce negative transfer. Hence, if all datasets are pretrained together, the model would inevitably suffer negative transfer. Instead, our framework will first pretrain the expert models on different datasets, which prevents the negative transfer during pretraining. Moreover, for inference on a new dataset, the framework will **adaptively choose experts of high relevance** to merge, thus mitigating the conflicts/negative transfer to a large extent and boosting the positive transfer. This is supported by our experiments in Figure 5: when merging all the models or random k models, the performance will drop due to the conflict/negative transfer compared to merging the top-k models. 
Hence, combining the top-k relevant experts is useful for downstream task inference.\", \"reference\": \"[1] Text-space Graph Foundation Models: Comprehensive Benchmarks and New Insights, Chen et al.\\n\\n[2] Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks, Xu et al.\\n\\n> 2. You mentioned that pretraining on multiple datasets can be done in parallel. Could you report the training time and GPU memory consumption of your method compared to the baselines?\\n\\nThank you for the comment. We list the training time and GPU memory consumption in the following table, which shows that our method has efficiency advantages due to the lightweight models and the parallel framework.\\n\\n| | GraphMAE | Oneforall | LLaGA | AnyGraph | GraphAlign | ZeroG | OMOG | \\n|---| ---| --- | ---| ---| ---| --- | ---|\\n|Time |3.7h| 12.3h| 20.3h| 9.8h| 3.8h| 21.6h| 0.9h|\\n|GPU Memory |8.3G| 17G| 35.6G| 14.5G| 6.7G| 39.6G| 4.1G| \\n\\n> 3. In your response to Q3, you stated that \\u201cwe deploy SGC of different layers to aggregate different hops of neighborhood information.\\u201d However, Figure 2 and the description on line 197 of your paper suggest that you repeatedly apply Equation 2, which does not imply using different numbers of repetitions of Equation 2 for different datasets. In fact, on some datasets, increasing the number of repetitions does not necessarily improve performance. Additionally, the experiment you provided in your response is not what I intended to see. My suggestion was to replace the Expert model with other architectures, not to replace the entire pre-training model with a different one.\\n\\nThank you for your comment. There are some misunderstandings we would like to clarify. For the SGC aggregations, we keep the ego-subgraph embeddings $h^{(\\\\alpha)}$ after each repetition. 
This will result in a list of embeddings containing different hops of information, $[h^{(1)}, h^{(2)}, ..., h^{(\\\\alpha)}]$, of a target node, as stated in line 205. We did not just repeat the aggregations and keep only the final embedding. Then we apply the self-attention module to adaptively fuse these embeddings of different hops, since different datasets might favor different levels of aggregation. In this way, our expert model is flexible and can be applied to different datasets.\\n\\nRegarding the experiments in Answer 3, what we did was indeed to replace the expert model backbone with other architectures, not to replace the whole model. We are sorry that we did not explicitly emphasize this point in the answer and caused the confusion. If the reviewer has more questions, we are more than glad to answer.\"}", "{\"title\": \"Reply to Authors' Rebuttal\", \"comment\": \"Thank you for the detailed explanation. You addressed most of my concerns. However, I checked the paper and it seems that the paper is not updated, such as the confusion about the masking, how to select 10 samples mentioned by Reviewer 6eQR, etc. If you have already done it, please highlight the changes. In addition, when can you provide the instructions and datasets for reproducing the results?\"}", "{\"comment\": \">**Q1.** The primary motivation for adopting the \\u201cone model for one graph\\u201d approach is to alleviate the negative transfer limitations observed in the \\u201cone model for all graphs\\u201d method. It would be beneficial to provide comparisons and discussions on how this method differs from prior approaches that aim to reduce negative transfer through better pretraining data selection, such as [1].\\n\\n**A1.** Thank you for your question. Data selection approaches such as [1] implement data selection in the pre-training stage. After the pre-training, the model is fixed and applied to all the downstream tasks. 
This method excludes knowledge from the pretraining data and can cause generalization difficulties on downstream datasets. Compared to this approach, our method is more flexible. We do not eliminate any knowledge from the pretraining data but store all of it in the bank. During inference, we adaptively choose the most relevant models (knowledge) for a downstream dataset (or task), which encourages the positive knowledge transfer while mitigating the negative transfer more effectively.\\n\\n\\n>**Q2.** It is recommended to identify which models, pretrained on specific graphs, are selected as the best match for various test graphs, with explanations for these selections. Additionally, I\\u2019m curious about whether pretraining data from different domains can contribute effectively or if only similar/same-domain data is more beneficial and would strengthen the analysis. A case study is recommended to evaluate whether the proposed gating strategy actually mitigates issues stemming from conflicts across pre-training data from diverse domains.\\n\\n**A2.** Thank you for your question. We have provided a case study in Section 4.5. We consider the 9 downstream datasets for zero-shot node classification. Specifically, we select 10 random samples from each dataset and visualize the average relevance score given by different gate functions. From the figure in Section 4.5, we find that in most situations, the gates will choose expert models trained in similar domains, which indicates that datasets from similar domains will contribute more in general. But in several cases, the gate will choose models from different domains. 
For example, the Sports dataset from the e-commerce domain has the highest relevance score to the Wikics dataset among the academic datasets, which indicates a sign of cross-domain transfer in certain situations.\\n\\n>**Q3.** It would be valuable to explore whether this pipeline could be adapted to other self-supervised learning approaches for pretraining graph-specific experts. and additional ablation studies are expected.\\n\\n**A3.** Thank you for your suggestion. In the table below, we attach the results attained by switching the self-supervised approach to GraphMAE. As a comparison, we find that the original GRACE method and the GraphMAE method attain similar performance. We will add the results in the revision.\\n\\n| Methods | Child | History | Cora | Citeseer | Dblp | Products | Pubmed | Sports | Wikics |\\n|---| ---| --- | ---| ---| ---| --- | ---| ---| ---|\\n|w/ GraphMAE|20.78| 26.51| 64.28| 49.03| 56.88| 31.97| 39.89| 23.65| 61.54|\\n|w/ GRACE|20.34| 25.68| 66.19| 49.23| 57.53| 31.02| 39.71| 23.54| 62.42|\\n\\n\\n>**Q4.** OMOG\\u2019s design requires a separate model for each dataset and can result in a large model bank when many datasets are involved. Will it lead to high storage costs and maintenance overhead, especially in resource-constrained environments? A complexity analysis would also be helpful to understand OMOG\\u2019s computational feasibility at scale.\\n\\n**A4.** Thank you for your question. Our models are lightweight. The storage cost of one pair of expert and gate models is 579KB. Considering the storage capacity of current devices, the cost is relatively low and it is feasible to maintain the model bank.\\n\\nFor a single expert model, the training time complexity is $O(d^2+K^2d)$, where $K$ is the number of hops aggregated for a target node and $d$ is the dimension of the feature vector. For a single gate model, the training time complexity is also $O(d^2+K^2d)$. 
Thus, the pretraining time complexity on a single dataset is just $O(d^2+K^2d)$. This demonstrates the feasibility of scaling up our framework. Moreover, since our framework allows us to train multiple experts and gates on different datasets in parallel, it actually gains efficiency advantages compared to previous cross-domain pretraining methods.\"}", "{\"comment\": \">**Q1** What is the means of t in Equation (3)? How do you ensure that each expert has different parameters? Based on Equation (3), it seems that each expert has the same input and parameters, leading to identical outputs. What, then, is the purpose of having multiple experts?\\n\\n**A1.** Thank you for your question. In Equation (3), $t$ indicates the number of training nodes in a batch. Equation (3) indicates the loss calculation to train a single expert for one graph. Each expert has the same architecture but is trained on a different dataset. Thus, their inputs and outputs are different. In other words, for one graph, we pretrained one distinct model. \\n\\n\\n>**Q2.** In line 241, the mask matrix seems to be replaced by an MLP. Why are the node embeddings transformed by the MLP considered to be negative embeddings?\\n\\n**A2.** Thank you for your question. For the MLP-generated mask matrix $a_i$, we let it learn to mask the domain-irrelevant features through the two loss terms $dis(\\\\tilde{f},f_{center})$ and $\\\\frac{1}{dis(\\\\mathbf{o_i,f_{center}})}$ in Equation (4), where $\\\\tilde{f}$ is the embedding of the original features masked by $a_i$ in the output space, $o_i$ is the embedding of $a_i$ in the output space, and $f_{center}$ can be viewed as the representative feature of the training domain. As the loss terms decrease, the distance between the mask embedding $o_i$ and $f_{center}$ becomes larger, while $\\\\tilde{f}$, masked by $a_i$, becomes closer to $f_{center}$. 
Hence, $a_i$ is expected to learn to mask the domain-irrelevant features of the input vector.\\n\\n>**Q3** In Equation (4), is f_center the average output of a single graph across multiple experts, or the average output of multiple source graphs through their respective experts? How does this function as an anchor point for training the gate?\\n\\n**A3** Thank you for your question. $f_{center}$ is the average output of one source graph through its own expert. It is a centroid point for the feature distribution of the source graph. Thus, we could view feature vectors close to $f_{center}$ as being in similar domains to the source graph, while vectors distant from $f_{center}$ are in different domains. In this way, $f_{center}$ serves as an anchor point for training the gate, as described in the answer to the previous question.\\n\\n\\n>**Q4.** What are the parameters of the Gate in Equation (5)? Why is the Gate used before the Expert?\\n\\n**A4.** Thank you for your question. The Gate and Expert in Equation (5) are pretrained on the $p$th source dataset following the steps in Section 3.1. The gate is used before the Expert because Equation (5) is used to calculate the relevance score with the help of the graph. \\n\\n>**Q5.** According to the inference process, an unseen graph activates the top k experts based on the highest correlation between each expert's output and the output average. How do you ensure that the pre-trained graph domain encompasses a sufficiently heterogeneous feature space to handle all potentially unseen graph domains?\\n\\n**A5.** Thank you for your questions. We agree that there is a possibility that the framework will have difficulty when encountering datasets from unseen domains. Actually, the out-of-distribution problem is a general problem for pretrained models, even for LLMs [1,2]. To deal with the data heterogeneity problem, researchers usually add data from more sources to the pretraining datasets. 
In our case, we try to include typical datasets from different graph domains to enlarge the scope of the model bank. In this way, the framework can find similar models for most common downstream tasks.\", \"reference\": \"[1] Unsupervised Out-of-Domain Detection via Pre-trained Transformers; Xu et al.\\n[2] Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text; Wu et al.\\n\\n>**Q6.** How is the total number of experts determined, especially when the number of graph domains during pre-training and testing is uncertain?\\n\\n**A6.** Thank you for your question. For the pretraining datasets, we select representative datasets from common domains to construct the pretrained model bank. Hence, the number of experts in the model bank is the same as the number of pretraining datasets. For the number of experts in downstream tasks, we choose the top-k relevant experts from the whole model bank. Specifically, we conduct experiments as shown in Figure 6, which reveal that 2 or 3 experts is an ideal choice, boosting the transfer of useful knowledge while mitigating negative transfer.\"}
Additionally, the reviewers raised concerns about whether the method is indeed able to avoid negative transfer as the paper claims.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided replies to many of the reviewer concerns, but some of the key issues surrounding their claims that the model avoids negative transfer were not addressed fully. In the end, none of the reviewers were willing to champion the paper and all of the reviewers agreed that the paper was below the acceptance threshold.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \">**Q4.** The core part for achieving zero-shot in this paper relies on calculating the similarity between label embeddings and prediction embeddings to obtain the final label prediction. In fact, most models that work under few-shot settings can be adapted to zero-shot using a similar approach. Consequently, Table 1 lacks several relevant baselines, such as GraphAlign, GCOPE, and GraphMAE.\\n\\n**A4.** Thank you for your question. We add the performance of GraphAlign, GCOPE and GraphMAE when using the cosine similarity as the output layer. We find that in most cases, our model can get better results on both node classification and link prediction than these baselines. 
We will add the results to Table 1 in the revision.\", \"results_for_zero_shot_node_classification\": \"| Methods | Child | History | Cora | Citeseer | Dblp | Products | Pubmed | Sports | Wikics |\\n|---| ---| --- | ---| ---| ---| --- | ---| ---| ---|\\n|GraphMAE |15.37| 20.63| 62.83| 46.78| 51.27| 26.23| 33.95| 20.72| 57.98|\\n|GCOPE |16.83| 20.37| 61.48| 44.26| 53.40| 27.75| 34.26| 19.88| 55.23|\\n|GraphAlign|18.63| 26.39| 63.45| 45.11| 55.11| 30.82| 37.43| 21.73| 60.17|\\n|OMOG |20.34| 25.68| 66.19| 49.23| 57.53| 31.02| 39.71| 23.65| 62.42|\", \"results_for_zero_shot_link_prediction\": \"| Methods | Child | History | Cora | Citeseer | Dblp | Products | Pubmed | Sports | Wikics |\\n |---| ---| --- | ---| ---| ---| --- | ---| ---| ---|\\n|GraphMAE |22.35| 24.15| 51.68| 43.78| 52.28| 33.21| 42.15| 34.20| 49.24|\\n|GCOPE |24.33| 25.83| 50.24| 44.84| 47.24| 34.85| 45.53| 33.54| 47.29|\\n|GraphAlign|31.81| 32.28| 52.92| 51.34| 51.21| 38.05| 44.28| 35.53| 50.12|\\n|OMOG |31.29| 34.86| 56.28| 50.72| 53.46| 40.95| 49.42| 37.81| 52.38|\\n\\n\\n> **Q5.** Do all experts contribute to downstream performance improvements? In Figure 6, while the number of experts is adjusted, the full set of pre-training data is still used to train the gating mechanism. Could you vary the number of pre-training datasets to examine how this affects downstream performance?\\n\\n**A5.** Thank you for your question. We would like to clarify that our gates are also dataset-specific, just like the experts. This indicates that we do not use all datasets to train one gate, instead, we also train a bank of gates on different datasets respectively. Thus, in Figure 6, by varying the number of experts, we are actually just varying the number of pretrained datasets used in the downstream tasks. 
As the results show, not all experts contribute to the downstream performance.\\n\\n> **Q6.** Although this paper discusses GFM, which should be applicable to various downstream tasks, there is still an absence of experiments on graph-level tasks, such as graph classification or graph regression.\\n\\n**A6.** Thank you for your question. We would like to clarify that we do not explicitly claim that our proposed method is a graph foundation model in the paper. The main focus of our research is to mitigate the negative transfer of cross-domain learning on text-attributed graphs. Thus, the scope of our method is limited to node-level and link-level tasks, which usually have text-attributed graph data. Still, we thank the reviewer for the comment and will seek opportunities to extend our framework to graph-level tasks in the revision.\\n\\n\\n\\n> **Q7.** Some parts of the paper lack clarity. For example, in Section 4.5, the phrase \\u2018select 10 samples from each dataset\\u2019 is ambiguous. Does this refer to selecting 10 nodes, subgraphs, or something else?\\n\\n**A7.** Thank you for your questions. By selecting 10 samples, we refer to selecting the ego-subgraphs of 10 randomly sampled nodes. We will clarify this in the revision.\"}
10kBEqYKKN
Impact of Prompt on Latent Representations in LLMs
[ "Iskandar Boucharenc", "Thomas Gerald", "Sahar Ghannay", "Christophe Servan", "Sophie Rosset" ]
The effectiveness of zero-shot learning frameworks, particularly in Large Language Models (LLMs), has lately shown tremendous improvement. Nonetheless, zero-shot performance critically depends on the prompt quality. Scientific literature has been prolific in proposing methods to select, create, and evaluate prompts from a language or performance perspective, changing their phrasing or creating them following heuristics rules. While these approaches are intuitive, they are insufficient in unveiling the internal mechanisms of Large Language Models. In this work, we propose exploring the impact of prompts on the latent representations of auto-regressive transformer models considering a zero-shot setting. We focus on the geometrical properties of prompts' inner representation at different stages of the model. Experiments conducted give insights into how prompt characteristics influence the structure and distribution of vector representations in generative models. We focus on binary classification tasks on which prompting methods have shown robust performance and show that prompt formulation has indeed an influence on latent representation. However, their impact is dependent on the model family. Using clustering methods, we show that even though prompts are similar in natural language, surprisingly, their representations can differ. This is highly model-dependent, demonstrating the need for more precise analysis.
[ "Explainability", "Representation analysis", "LLM", "prompting", "zero-shot" ]
Reject
https://openreview.net/pdf?id=10kBEqYKKN
https://openreview.net/forum?id=10kBEqYKKN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jvdBxJsLjr", "TBatGqkrPW", "L0a5iLCAc4", "JKfy9MRIF0", "GyBxUxsWt8" ], "note_type": [ "decision", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1737524104617, 1730619774294, 1730647268825, 1734668214207, 1730776156423 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11121/Reviewer_LxPV" ], [ "ICLR.cc/2025/Conference/Submission11121/Reviewer_ZnB9" ], [ "ICLR.cc/2025/Conference/Submission11121/Area_Chair_TGLc" ], [ "ICLR.cc/2025/Conference/Submission11121/Reviewer_xyjk" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper studies how variations in the prompt change the distribution of hidden states in LLMs for zero-shot binary classification. The last hidden state representation is extracted at each layer for a variety of different prompts. The authors plot how the IsoScore of the hidden states evolves across layers. These hidden states are then clustered, based on which the authors conclude that \\\"[the] clustering tends to focus on other characteristics than the prompt itself.\\\"\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper studies a general and interesting question of how LLM internals (such as hidden states) evolve/change when certain parts of the input (such as the prompt) change. Further understanding of this process would be useful for several downstream applications, such as (as shown in prior work) detecting adversarial prompt injection attacks.\", \"weaknesses\": \"The biggest issue with the draft is the presentation of the results. There are numerous spelling, writing, and formatting errors, a subset of which I listed below. The first two paragraphs of the introduction are unnecessary background for the ICLR audience. 
The related work section on LLMs is not particularly related to the focus of this paper (understanding how prompts change internal model representations). That section is missing citations to LogitLens and its derivatives (e.g. https://arxiv.org/pdf/2303.08112), which also study how changes in the prompt change model internals. For example, that paper develops classifiers (based on features extracted from model internals) for detecting prompt injection. I.e., they already studied this paper's \\\"hypothesis 2\\\" that \\\"The geometrical characteristics are sufficiently discriminating to facilitate the grouping or separation of prompts and comprehend how the model processes them (HP2).\\\" The details of very relevant parts of the paper, such as a brief description of the IsoScore algorithm, are missing. Isotropy is never formally defined and the draft never explains why it is a desirable property for decoder-only models (beyond references to other work studying isotropy in the context of embedding models). 
The details of precisely how the IsoScore is computed are missing to the point where I'm not sure the results of the paper are reproducible.\\n\\nI think minor writing and presentation issues are not disqualifying, but in this case they are pervasive and make it difficult to judge the technical aspects of the paper on merit.\", \"subset_of_writing_issues\": [\"L16 \\\"heuristics rules\\\" --> \\\"heuristic rules\\\"\", \"L24 \\\"their impact\\\" --> \\\"its impact\\\"\", \"first two paragraph of the intro are unnecessary bg for the ICLR audience.\", \"L53 starts \\\"However\\\" but is not contradicting a previous point\", \"L67 what does \\\"intrinsic\\\" mean here?\", \"L67 double period\", \"L69 missing period.\", \"L74 \\\"LLMs representations leveraging prompting\\\" --> \\\"LLM representations that leverage prompting\\\"\", \"L101 \\\"section 4\\\" --> \\\"Section 4\\\".\", \"L123, 137, L286 have \\\\citet that should be \\\\citep\", \"L174 \\\"information, Therefore\\\" --> \\\"information. Therefore,\\\"\", \"L178 \\\"Dimensionnality\\\"\", \"L184 \\\"PCA get some limitations\\\"\", \"L214-L215 $k$ not in latex formatting\", \"L284 \\\"We describe, as follow,\\\"\", \"L290, L306 \\\"for instance :\\\", \\\"be written :\\\"\", \"L303 \\\"we constraint the model\\\"\", \"L317 \\\"First analyse results of isoscore are discussed\\\"\", \"L322 \\\"Since IsoScore is a recent algorithms\\\"\", \"L323 \\\"accross\\\"\", \"L483 \\\"This allows us give answers\\\"\"], \"questions\": [\"Why are we interested in isotropy of the EOS token hidden states?\", \"Can we use the isoscore as a selection criterion for picking a good prompt? This possibility is hinted at in the introduction \\\"Our objective is to establish a direct correlation between the prompts, latent representations, and the model performance.\\\" but not deeply explored. The results sound negative on this front---\\\"we cannot link the IsoScore to the performance of the model and prompt\\\". 
How does this finding relate to the cited works that show isotropy is a good property for embedding models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates how prompts affect the intrinsic geometry of LLMs. The authors explore two research questions: (1) whether prompts alter the intrinsic dimensionality of representations, and (2) whether prompts can be grouped by their impact on model performance and vector representations using clustering methods.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper tries to address the intriguing question of how prompt representations relate to model performance, with a specific focus on how prompt formulation impacts performance robustness.\", \"weaknesses\": \"The paper's format is somewhat disorganized, with unexpected line breaks and misplaced punctuation. Also, some figures like figure 3 do not have illustrations, making it hard to understand.\", \"questions\": \"(1) What are the motivations of grouping the prompts and how the groups of prompts correlate to the focus of the model generation? Why do the authors use the last layer rather than the embedding layer for clustering the prompts?\\n\\n(2) What is the mathematical formulation of the Isoscore? What does it used for in the experiments?\\n\\n(3) Several studies, such as PromptRobust [1], focus on evaluating prompt robustness. How does this approach offer advantages over existing methods?\\n\\n(4) The paper offers limited technical contributions and focuses narrowly on the classification task. Could the work be applied to generation task? \\n\\n[1] Zhu, K., Wang, J., Zhou, J., Wang, Z., Chen, H., Wang, Y., ... & Xie, X. (2023). Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. 
arXiv preprint arXiv:2306.04528.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies the effect of prompts on various geometric aspects of embeddings. This question is tackled for zero-shot binary prediction.\\n\\nThe overall direction of the paper is certainly interesting, and could help with building robust zero-shot methods. However, the paper doesn\\u2019t quite deliver on this vision.\\n\\nAs noted by the reviewers, the writing could use lots of work. The implications of the results should be further fleshed out, and comparisons to the fairly vast literature on this space should also be expanded. This iteration of the paper is below the bar, but this could be rectified in the future version.\", \"additional_comments_on_reviewer_discussion\": \"There was no rebuttal for this paper.\"}", "{\"summary\": \"This paper investigates the effect of prompts on the representation of the EOS token across prompts and LLMs.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The idea of studying the effects of prompts through the geometry of latent representations is interesting.\"], \"weaknesses\": [\"Studying only the representation of the EOS token seems to be an oversimplification. While this representation technically can depend on all of the context/generation, it might not attend to its meaningful parts, thus failing to capture interesting patterns.\", \"The paper lacks clear/practical insights besides observing that the EOS token representation does depend on the prompt in some way.\", \"For an empirically oriented work, studying only binary sentiment classification datasets is insufficient. 
Hypotheses should be verified across a broader range of tasks, including open-ended generation.\"], \"other\": [\"Many typos throughout the paper (missing spaces and periods, inconsistent usage of citet and citep, etc.)\", \"IsoScore is not defined in the paper. Only a high-level explanation is provided.\", \"RIS is not carefully defined - is k' the same as c? \\\"value of k is equal to itself\\\" is not a clear statement.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
10JOlFIPjt
In vivo cell-type and brain region classification via multimodal contrastive learning
[ "Han Yu", "Hanrui Lyu", "YiXun Xu", "Charlie Windolf", "Eric Kenji Lee", "Fan Yang", "Andrew M Shelton", "Olivier Winter", "International Brain Laboratory", "Eva L Dyer", "Chandramouli Chandrasekaran", "Nicholas A. Steinmetz", "Liam Paninski", "Cole Lincoln Hurwitz" ]
Current electrophysiological approaches can track the activity of many neurons, yet it is usually unknown which cell-types or brain areas are being recorded without further molecular or histological analysis. Developing accurate and scalable algorithms for identifying the cell-type and brain region of recorded neurons is thus crucial for improving our understanding of neural computation. In this work, we develop a multimodal contrastive learning approach for neural data that can be fine-tuned for different downstream tasks, including inference of cell-type and brain location. We utilize multimodal contrastive learning to jointly embed the activity autocorrelations and extracellular waveforms of individual neurons. We demonstrate that our embedding approach, Neuronal Embeddings via MultimOdal Contrastive Learning (NEMO), paired with supervised fine-tuning, achieves state-of-the-art cell-type classification for two opto-tagged datasets and brain region classification for the public International Brain Laboratory Brain-wide Map dataset. Our method represents a promising step towards accurate cell-type and brain region classification from electrophysiological recordings.
[ "contrastive learning", "electrophysiology", "extracellular", "multimodal", "neuroscience", "cell type", "brain region", "Neuropixels", "deep learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=10JOlFIPjt
https://openreview.net/forum?id=10JOlFIPjt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xYLIn8hAIm", "tWaXYZ5m92", "fuw1prNwZy", "eDltxtZ2Sx", "c8jTV2OE2b", "ZukYuv2GTo", "Wy3Jg1pLy6", "PnETo8TsPw", "IS0CyiSwny", "HtbRjcDFXj", "DOzfsGbK4W", "CcbBRx2LVZ", "Au0h0UrpKz", "8cpdR8Nll0", "4N8YGieNHO", "3nBQRIZeM5" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730450168922, 1732322181645, 1730641674938, 1732322058598, 1734634207081, 1737524247528, 1732322257130, 1732501017797, 1729279660644, 1732321864647, 1732321945359, 1732525002288, 1732379920570, 1732721237868, 1730685538655, 1732563983008 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_YH4G" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_vN2L" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ], [ "ICLR.cc/2025/Conference/Submission13258/Area_Chair_hNox" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_a6D4" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_a6D4" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_vN2L" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_YH4G" ], [ "ICLR.cc/2025/Conference/Submission13258/Area_Chair_hNox" ], [ "ICLR.cc/2025/Conference/Submission13258/Reviewer_iQCh" ], [ "ICLR.cc/2025/Conference/Submission13258/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces NEMO (Neuronal Embeddings via MultimOdal contrastive learning), a method for classifying neurons by their cell type and brain region location using 
electrophysiological data. NEMO uses CLIP-like contrastive learning to jointly analyze two types of neural data: the shape of neural electrical signals (waveforms) and patterns of neural activity over time (autocorrelograms).\", \"the_authors_evaluated_nemo_on_two_datasets\": \"an opto-tagged mouse visual cortex dataset for cell-type classification and the International Brain Laboratory dataset for brain region classification. In comparative experiments, NEMO achieved higher classification accuracy than existing methods including PhysMAP and VAE-based approaches. The authors also demonstrated that using multiple units improved performance, and that the method maintained effectiveness even with limited labeled training data. The paper includes ablation studies examining the importance of joint versus independent learning of the two data modalities.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well written, the figures are well made, descriptions are generally clear.\", \"The paper contains extensive additional material for more details.\", \"The training and experimental setup seems to be carefully chosen and sound.\", \"Limitations are discussed at the end.\"], \"weaknesses\": [\"I only found minor weaknesses.\", \"Although, the authors did a great job providing necessary details, sometimes the training specifics for the control models and the clustering analyses were a bit too short in my opinion. 
It would be great if you could provide a bit more detail on this.\", \"It would be great to see an ablation for the data augmentation to see how important that is (see questions).\", \"**Minor (do not expect you to respond to this)**\", \"Figure 3 typo in legend \\u201cSupervise\\u201d instead of \\u201cSupervised\\u201d\", \"Supplementary Figure 7 is blurred likely due to plotting settings\", \"As multi-unit has a particular meaning in neuroscience, I find the wording \\u201cmulti-unit brain region classification\\u201d confusing. I think you mean \\u201cmulti-neuron\\u201d here. Unless I misunderstood and you are really using multi-unit activity, I would change the wording.\"], \"questions\": [\"The VAE training is unclear to me. Do you jointly embed waveforms and autocorrelograms, or do you use two separate encoders/decoders with a shared latent space (which would require cross-decoding, i.e. encode waveform and decode autocorrelogram)?\", \"How important are the data augmentations? Can you provide an ablation experiment for that?\", \"Waveforms and autocorrelograms are two reasonable choices for input modalities. However, they are not the only conceivable choices. Have you thought/tried other choices or thought about learning an encoding of spiking activity directly?\", \"What are the results when you cluster on the latent embeddings directly instead of running UMAP first? How stable is the clustering? I.e. if you train two models of NEMO from different seeds and then cluster neurons, how consistently are two neurons assigned to the same cluster (as measured by adjusted rand index or similar)?\", \"Can you provide a definition for the modularity index?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer YH4G\", \"comment\": \"We thank the reviewer for the useful comments and suggestions. 
We appreciate that the reviewer found our experiments carefully chosen and sound!\\n\\nWeaknesses\\n- As suggested, we will add additional details for the baselines and clustering analyses in the supplement to improve the clarity of the paper. To this end, we have added a schematic of the encoder architectures in Supplement Figure 2.\\n- To test the impact of our different augmentations, we ran two new analyses on the UHD cell-type classification dataset: (1) remove one augmentation and (2) add one augmentation. We have added this result to Supplementary Figure 14 with additional details in Supplementary L.2. Please see the overall response for more details. \\n- We agree with the reviewer that multi-unit is an overloaded term for neuroscientists. We have switched single-unit and multi-unit to single-neuron and multi-neuron everywhere in the paper.\\n- We have updated figure 3 and the blurry figure in the supplement (Supplementary Figure 7). We hope this fixes the problem!\\n\\nQuestions\\n\\n\\u201cThe VAE training is unclear to me. Do you jointly embed waveforms and autocorrelograms, or do you use two separate encoders/decoders with a shared latent space?\\u201d\\n- For the VAE baseline, we follow the procedure of [1] which trains a VAE separately on each modality and then concatenates the embeddings before classification.\\n\\n\\u201cHow important are the data augmentations? Can you provide an ablation experiment for that?\\u201d\\n- As mentioned in the above response, we now include an experiment ablating the data augmentations in the overall response.\\n\\n\\u201cWaveforms and autocorrelograms are two reasonable choices for input modalities. However they are not the only conceivable choices. Have you thought/tried other choices or thought about learning an encoding of spiking activity directly?\\u201d\\n- This is a great question! 
A couple of ideas we are thinking about are to utilize the peristimulus time histogram (PSTH) or correlations of the neuron with other neurons in the population. We plan to explore these features in future analyses. Encoding the spiking activity directly is another interesting idea, although this would require a much more complicated neural network architecture.\n\n\u201cWhat are the results when you cluster on the latent embeddings directly instead of running UMAP first? How stable is the clustering?\u201d\n- To quantify the stability of the NEMO clustering result, we re-ran our clustering pipeline with multiple random seeds. Although there is shared structure between clusterings, we found that there was variability with different random seeds (see Supplementary Figure 16). We expect this to be the case because clustering the full brain dataset leads to a very coarse clustering when, in reality, there is much more variability per region. We should be cautious about over-interpreting these clusters as this is more of an exploratory analysis. We will add this caveat to the paper.\n\n\u201cCan you provide a definition for the modularity index?\u201d\n- The modularity index is a hyperparameter in the Louvain clustering that quantifies the relative density of edges within communities compared to the edges between communities. It ranges from \u22120.5 (indicating non-modular clustering) to 1 (indicating fully modular clustering). Tuning this parameter will affect how the different nodes are grouped together in the network.\n\n[1] Beau, Maxime, et al. \\\"A deep-learning strategy to identify cell types across species from high-density extracellular recordings.\\\" bioRxiv (2024).\"}
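The seed-stability check described in the response above (train NEMO from different seeds, cluster the neurons, and compare assignments with the adjusted Rand index) can be sketched as follows. The label vectors here are illustrative stand-ins, not the paper's clusterings.

```python
# Hypothetical sketch of the seed-stability check discussed above: cluster
# assignments from two differently-seeded runs are compared with the
# adjusted Rand index (ARI). The label vectors are illustrative stand-ins.
from sklearn.metrics import adjusted_rand_score

labels_seed0 = [0, 0, 1, 1, 2, 2, 2, 0]   # assignments from run with seed 0
labels_seed1 = [1, 1, 0, 0, 2, 2, 2, 1]   # same partition, labels permuted

# ARI is permutation-invariant: identical partitions score exactly 1.0
# even when the integer labels differ between runs.
stable_ari = adjusted_rand_score(labels_seed0, labels_seed1)

labels_unstable = [0, 1, 0, 1, 0, 1, 0, 1]  # a very different partition
unstable_ari = adjusted_rand_score(labels_seed0, labels_unstable)
```

A high ARI across seeds would support treating the exploratory clusters as reproducible; the seed-to-seed variability the authors report corresponds to ARIs well below 1.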
The proposed approach can be fine-tuned for different downstream tasks, including in vivo cell-type and brain region classification. More specifically, the authors applied the classic contrastive learning framework established in CLIP to extracellular action potential data and spiking activity data. Although the theoretical innovation is relatively limited, the authors made the earliest efforts (as far as I know) to apply contrastive learning for joint modeling of extracellular action potential data and spiking activity data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"As mentioned above, this study is among the earliest efforts to apply contrastive learning for joint modeling of extracellular action potential data and spiking activity data. Most key components in the proposed analytical framework are fetched from previous work (e.g., CLIP contrastive learning and the ACG encoder), increasing the reproducibility of the study. In addition, the study was well presented and easy to follow.\", \"weaknesses\": \"As mentioned above, the theoretical contribution of the study is relatively limited. The readers may expect to see some components that are specifically designed with consideration for the unique characteristics of the data and the problem being addressed.\", \"questions\": \"1. The authors emphasized \\u201cin vivo\\u201d; however, are there any results about the computational efficiency of the NEMO model that can support it? Does it support real-time processing?\\n\\n2. I would expect a validation of the data augmentation strategy. It is understandable that the construction of ACG images is computationally expensive. But the authors are encouraged to validate that the data augmentation strategy adopted in the study, augmentations directly for the ACG images, is reasonable by showing a couple of examples.\\n\\n3. The authors compared NEMO with the VAE-based method. 
However, in my opinion, it seems that a critical comparison is missing: the comparison between naive models, specifically, NEMO without fine-tuning and the VAE-based method without fine-tuning. This comparison would highlight the representational power of the two methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
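The CLIP-style framework this review refers to can be sketched as a symmetric InfoNCE objective over paired waveform and autocorrelogram embeddings: the i-th waveform embedding is pulled toward the i-th ACG embedding and pushed away from all others. This is a minimal numpy illustration of the general idea, not the authors' implementation; the shapes and temperature value are arbitrary.

```python
# Minimal sketch (not the authors' code) of a CLIP-style symmetric
# contrastive objective over paired waveform / autocorrelogram embeddings.
import numpy as np

def clip_style_loss(z_wave, z_acg, temperature=0.1):
    """Symmetric InfoNCE over a batch of paired unit embeddings."""
    # L2-normalize so similarities are cosine similarities.
    z_wave = z_wave / np.linalg.norm(z_wave, axis=1, keepdims=True)
    z_acg = z_acg / np.linalg.norm(z_acg, axis=1, keepdims=True)
    logits = z_wave @ z_acg.T / temperature   # (batch, batch) similarities
    n = len(logits)

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)                  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()  # true pairs on diagonal

    # Average both directions (waveform->ACG and ACG->waveform), as in CLIP.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned_loss = clip_style_loss(z, z)           # correctly paired modalities
mismatched_loss = clip_style_loss(z, z[::-1])  # deliberately shuffled pairing
```

As expected for a contrastive objective, correctly paired embeddings yield a much lower loss than a shuffled pairing.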
The authors are encouraged to validate that the data augmentation strategy adopted in the study, augmentations directly for the ACG images, is reasonable by showing a couple of examples.\u201d\n- To validate the impact of our different augmentations, we ran two new analyses on the UHD cell-type classification dataset: (1) remove one augmentation and (2) add one augmentation. We have added this result to Supplementary Figure 14 with additional details in Supplementary L.2. We also show example augmentations in Supplementary Figure 3. Please see the overall response for more details. We thank the reviewer for this suggestion.\n\n\u201cThe authors compared NEMO with the VAE-based method. However, in my opinion, it seems that a critical comparison is missing: the comparison between naive models\u201d\n- We agree that a comparison of naive models is important. We actually do quantify the performance of the naive models in our paper. We have three evaluation schemes: linear classification using the embeddings (i.e., linear classifier), MLP classification using the embeddings (i.e., frozen MLP), and full end-to-end fine-tuning (fine-tuned MLP). Both the linear classifier and frozen MLP evaluate the performance of each model without any additional fine-tuning of the embedding method (e.g., NEMO, VAEs, etc.). We hope this clarification addresses the reviewer\u2019s concern.\n\n[1] International Brain Laboratory, et al. \\\"Reproducibility of in-vivo electrophysiological measurements in mice.\\\" bioRxiv (2022): 2022-05.\"}
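The "no fine-tuning" evaluation schemes described in the response above (a linear classifier or MLP fit on frozen embeddings) can be illustrated with a linear probe: only the probe is trained, while the embedding model stays untouched. The synthetic embeddings below are hypothetical stand-ins, not the paper's data.

```python
# Illustrative sketch (synthetic data, not the paper's embeddings) of the
# "naive model" evaluation described above: a linear classifier is fit on
# frozen embeddings, so the embedding network itself is never updated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for frozen embeddings of two hypothetical cell classes.
emb_class0 = rng.normal(loc=0.0, size=(100, 8))
emb_class1 = rng.normal(loc=1.5, size=(100, 8))
X = np.vstack([emb_class0, emb_class1])
y = np.array([0] * 100 + [1] * 100)

probe = LogisticRegression(max_iter=1000).fit(X, y)  # only the probe learns
train_acc = probe.score(X, y)
```

If the frozen embeddings separate the classes well, even this simple probe achieves high accuracy, which is the sense in which the linear-classifier and frozen-MLP schemes measure representational quality without fine-tuning.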
Although the theoretical innovation is relatively limited, to the best of the referees' knowledge, this is the second work to use contrastive learning for cell-type classification and the first to combine two modalities (extracellular action potential data and spiking activity data). We have read the referee reports and the author responses. The primary concerns include limited algorithmic innovation, the number of neurons, and experiments (e.g., ablation for the data augmentation, significance testing, sensitivity to hyperparameters, increasing the number of neurons for brain region classification). The authors were able to address most of the issues raised. We believe the comments and suggestions from the referees have improved the quality of the manuscript and encourage the authors to incorporate the points discussed into the next revision.\", \"additional_comments_on_reviewer_discussion\": \"The primary concerns include limited algorithmic innovation, the number of neurons, and experiments (e.g., ablation for the data augmentation, significance testing, sensitivity to hyperparameters, increasing the number of neurons for brain region classification). The authors were able to address most of the issues raised.\"}
For cell-type classification, NEMO was significantly better than SimCLR on balanced accuracy using the fine-tuned MLP and the frozen MLP, but not the linear classifier. We also found that NEMO significantly outperforms SimCLR on the f1 score using the frozen MLP, but not the fine-tuned MLP or linear classifier (although the p-value is 0.0505 for the fine-tuned MLP; p < .05 is significant). Please see Supplementary Figures 17-23 with additional details in Supplementary M. Overall, this analysis suggests that NEMO is the strongest model for both brain region and cell-type classification. We hope this alleviates some of the reviewer\u2019s concerns.\n- While we agree that there may not be a perfect 1-to-1 relationship between spike waveform and firing patterns, there is evidence that cell-types can be roughly separated by these two modalities in vivo [1]. Our paper provides further evidence that there is a relatively strong relationship between these two modalities. In light of this, finding the best self-supervised multimodal algorithm will be essential for accurate cell-type classification given the limited opto-tagged labels available. The analogy of the word \u201cpuppy\u201d and the image of a \u201cpuppy\u201d is interesting, but the relationship between these two is also not strictly 1-to-1 as there are many different types of puppies. Also, multimodal contrastive learning (e.g., CLIP) has a nice property that if the signal is correlated across the two modalities and the noise is independent, then the extracted features can be more informative and generalizable than unimodal contrastive learning, where the data augmentations are the only way of learning invariance to the noise (which relies on making assumptions) [2]. 
We believe this is part of why NEMO does significantly better than SimCLR on the IBL whole-brain dataset, which is noisier than the more curated UHD cell-type classification dataset (where performance is more similar).\n- We agree the citation to CEED was inappropriate. We apologize for this oversight and have adjusted the citation to now correctly mention that CEED was utilized to perform cell-type classification. A significant difference in our results is that CEED was never applied to optotagged data and, therefore, all of their cell-typing results are on data with no ground-truth (e.g., their comparison to WaveMAP). Thank you for this correction!\n\nQuestions\n\nFigures feedback\n- We have adjusted the figures and text with the feedback provided by the reviewer. We hope this improves the clarity and correctness of the work.\n\n\u201cIn the PhysMAP paper that you benchmarked, they applied three public datasets from S1, A1, and V1/Hippocampus. Have you tested these datasets?\u201d\n- We have not tested NEMO on the S1, A1, and V1/Hippocampus datasets used in the PhysMAP paper. We appreciate the suggestion by the reviewer and plan to look into these datasets for our future applications. We would also be interested in testing NEMO on the cerebellum dataset from [3] once this becomes publicly available. We believe that the development of a cell-type classification benchmark would also be an exciting future direction given the challenge of preprocessing and utilizing new datasets.\n\n\u201cPlease specify how you computed the two primary metrics: macro-averaged F1 score and balanced accuracy.\u201d\n- We have added some clarification for the metrics used in our manuscript. The macro-averaged F1 score is calculated as the unweighted mean of F1 scores for each class (i.e., cell-type or brain region). The balanced accuracy is the unweighted mean of per-class recall (i.e., the average accuracy within each class).\n\n[1] Petersen, Peter C., et al. 
\\\"CellExplorer: A framework for visualizing and characterizing single neurons.\\\" Neuron 109.22 (2021): 3594-3608.\\n\\n[2] Huang, Wei, et al. \\\"On the Comparison between Multi-modal and Single-modal Contrastive Learning.\\\" arXiv preprint arXiv:2411.02837 (2024).\\n\\n[3] Beau, Maxime, et al. \\\"A deep-learning strategy to identify cell types across species from high-density extracellular recordings.\\\" bioRxiv (2024).\"}", "{\"title\": \"Paper Presentation is Excellent\", \"comment\": \"I thank the authors for their statistical tests, which elevate this well-presented paper to an excellent standard. I have updated the presentation score from 3 to 4.\\n\\nI would like to increase my overall score from 6 to 7, though I cannot justify giving it an 8. My primary concern is that the improvement of NEMO over other baselines, while statistically significant, is relatively small. Nevertheless, this is a great paper and is well-qualified for publication.\"}", "{\"summary\": \"The authors used multimodal (spike waveform + firing pattern) contrastive learning to classify three types of inhibitory neurons (PV/SST/VIP) and ten different brain regions. The proposed NEMO model is based on the widely used CLIP (Contrastive Language-Image Pre-Training) model. NEMO outperforms two previous multimodal models: PhysMAP and VAE.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The writing is clear, easy to understand, and follows a smooth logical flow, with almost no typos and well-presented figures.\\n\\n2. The paper provides a thorough review of relevant literature in this field. I have closely followed the cell-type classification area, and all the papers I am aware of (and some I was not) have been accurately cited, except for one (see Weakness). Notably, the authors benchmarked two very recent models that are still in preprint format.\\n\\n3. 
This is the second work to use contrastive learning for cell-type classification and the first to combine two modalities.\\n\\n4. The multimodal approach outperforms single-modal models, which is consistent with previous studies.\", \"weaknesses\": \"1. In the best-case scenario (cell type classification, L344, Figure 2c), NEMO outperforms VAE by 11%. However, in brain region classification, the improvement is minimal. For example, in Figure 3e, comparing the deep orange (NEMO) to the deep blue (VAE) bars, the difference in balanced accuracy is less than 0.05. Are these differences statistically significant?\\n\\n2. Joint training shows little to no benefit over independent training. For example, in Figure 5b, comparing the deep orange (NEMO) to the deep violet (SimCLR) bars, the difference in balanced accuracy is around 0.01. Additionally, in Supplementary Table 9, independent NEMO performs either better (0.84 vs. 0.83) or similarly (0.83 vs. 0.84, 0.87 vs. 0.88) to joint NEMO. Are these differences statistically significant?\\n\\n3. To my knowledge, there is no neuroscience evidence suggesting a strong pairwise correlation between spike waveform and firing patterns. For example, layer 5 pyramidal neurons and cortical PV neurons both fire a large number of spikes (both spontaneously and evoked), but their spike waveforms are broad and narrow, respectively (Cortical connectivity and sensory coding, KD Harris, TD Mrsic-Flogel - Nature, 2013). Additionally, burst firing can be evoked in both excitatory (Chattering cells: superficial pyramidal neurons contributing to the generation of synchronous oscillations in the visual cortex. CM Gray, DA McCormick - Science, 1996) and SST neurons (Somatostatin-expressing neurons in cortical networks, Urban-Ciecko, AL Barth - Nature Reviews Neuroscience, 2016). This is fundamentally different from the relationship between an image and its description in CLIP. 
In other words, the word \\\"puppy\\\" and an image of a \\\"puppy\\\" represent the same concept, but a broad spike could be associated with either burst or dense firing, depending on whether the neuron is located in layer 2/3 or layer 5.\\n\\n4. This is the second paper to use contrastive learning for cell-type classification. The first is CEED (Vishnubhotla et al., 2023), which used an unsupervised model (SimCLR) to classify cell types and benchmarked against WaveMap (the single-modality version of PhysMAP). While the CEED work does not significantly compromise the novelty of this study, it should be more clearly acknowledged. The current citation, \\\"Contrastive learning has been applied to raw electrophysiological recordings (Vishnubhotla et al., 2024),\\\" is inappropriate.\", \"questions\": \"1. In Figure 1a, the text \\\"Neuropixels 1.0\\\" should be replaced with \\\"Neuropixels Ultra\\\" since your VIP/SST/PV data comes from NP Ultra, not NP 1.0, which is used in the IBL dataset for classifying brain regions. Also, please update the inset to show the schematic of NP Ultra instead of NP 1.0.\\n\\n2. Figure 3b is not mentioned in the text. It could be referenced between L350 and L361.\\n\\n3. Figure 3c is also not mentioned in the text. It might fit well between L362 and L366.\\n\\n4. Consider changing the title of Figure 3b and 3c from \\\"unit\\\" to \\\"neuron\\\" to be consistent with the terminology used in the main text.\\n\\n5. In the PhysMAP paper that you benchmarked, they applied three public datasets from S1, A1, and V1/Hippocampus. Have you tested these datasets? While I am not requesting additional experiments, if you have tested them, it would be helpful to include the results in the supplementary materials.\\n\\n6. Please specify how you computed the two primary metrics: macro-averaged F1 score and balanced accuracy. Does the first metric equal the unweighted mean of all the per-cell-type F1 scores? 
Does the second metric equal the unweighted mean of sensitivity (True Positive) and specificity (True Negative)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Overall response to reviewers\", \"comment\": [\"We thank the reviewers for all the thoughtful feedback and suggestions. We are happy that the reviewers had a lot of positive feedback for the work, including:\", \"\\u201cThis is a very well-written paper with clear organization of the figures and sound presentation of the data sets used, methods applied, and results obtained.\\u201d (iQCh); \\u201cthe study was well presented and easy to follow.\\u201d (vN2L)\", \"\\u201cThe training and experimental setup seems to be carefully chosen and sound.\\u201d (YH4G)\", \"\\u201cThe multimodal approach outperforms single-modal models, which is consistent with previous studies.\\u201d\", \"\\u201cNEMO is able to differentiate between VIP and SST cells, which is highly valued in systems neuroscience, and its ability to yield data embeddings that lead to separable classification regions is impressive.\\u201d (iQCh)\", \"Based upon the reviewer comments and questions, we ran four new experiments and added new supplements detailed below:\", \"NEMO data augmentation ablations\", \"To evaluate the impact of our proposed data augmentations, we ran two new analyses on the UHD cell-type classification dataset: (1) remove one augmentation and (2) add one augmentation. For the remove one augmentation experiment, we remove one augmentation and keep the rest. For the add 1 augmentation experiment, we only use one augmentation and remove the rest. We compare these new NEMO models to a version of NEMO with all augmentations and a version of NEMO without any augmentations using the linear classifier to predict cell-types. 
The results of this analysis can be seen in Supplementary Figure 14 with additional details in Supplementary L.2. As can be seen, the no-augmentations model performs noticeably worse than the models with augmentations. Additive Gaussian noise for the ACGs is especially helpful. For other augmentations, while the combined effect is large, the individual contribution is smaller. For the final version of the paper, we plan to ablate multiple augmentations at once to see which combinations are most important for the final performance of NEMO.\", \"Significance testing NEMO vs. baselines\", \"As recommended by reviewer a6D4, we performed significance testing of the improvement of NEMO over all baseline models (VAE, SimCLR, supervised, PhysMAP) for all experiments using a one-tailed t-test (with p < .05 significant). (1) Brain region classification: For both the linear and the MLP classifiers, we find that NEMO significantly outperforms all baselines across all label ratios (except supervised vs. NEMO for 1% of the labels). We also find that NEMO significantly outperforms all baselines (which use 100% of the labels) with only 80% of the labels. (2) Cell-type classification: For both the linear and MLP classifiers, we find that NEMO significantly outperforms all baselines (except SimCLR). We found that NEMO significantly outperforms SimCLR on balanced accuracy using the fine-tuned MLP and the frozen MLP, but not the linear classifier. We also found that NEMO significantly outperforms SimCLR on the f1 score using the frozen MLP, but not the fine-tuned MLP or linear classifier (although the p-value is 0.0505 for the fine-tuned MLP; p < .05 is significant). The results of this analysis can be seen in Supplementary Figures 17-23 with additional details in Supplementary M. 
Overall, this analysis suggests that NEMO is the strongest model for both brain region and cell-type classification.\", \"Multi-neuron brain region classification with 3 - 25 neurons\", \"In our paper, we originally used 5 neurons for all multi-neuron brain region classification results. To evaluate the impact of this choice, we re-ran multi-neuron brain region classification with NEMO using varying numbers of neurons (i.e., 3 - 25). The results of this analysis are shown in Supplementary Figure 15 with additional details in Supplementary L.3. As can be seen, increasing the number of neurons marginally improves the performance of brain region classification with NEMO, with saturation at 25 neurons.\", \"Hyperparameter sensitivity analysis\", \"To evaluate the hyperparameter sensitivity of NEMO, we are currently performing a grid search over the learning rate, dropout, and embedding size for NEMO and the VAE-based method. We will include this new result in the Supplementary once it has finished running. We will also update our rebuttal accordingly.\"]}
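The one-tailed t-test described in the overall response can be sketched as follows; the per-seed scores below are hypothetical placeholders, not values from the paper.

```python
# Sketch of the one-tailed test described above (p < .05 significant).
# The per-seed scores are hypothetical placeholders, not the paper's values.
from scipy import stats

nemo_scores = [0.86, 0.87, 0.85, 0.88, 0.86]      # e.g., balanced accuracy per seed
baseline_scores = [0.83, 0.82, 0.84, 0.83, 0.82]  # same metric for a baseline

# alternative="greater" makes the test one-tailed:
# H1 is mean(nemo_scores) > mean(baseline_scores).
t_stat, p_value = stats.ttest_ind(nemo_scores, baseline_scores,
                                  alternative="greater")
significant = p_value < 0.05
```

A one-tailed test is appropriate here because the hypothesis is directional (NEMO improves over the baseline), which gives more power than a two-tailed test at the same significance level.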
Please see the overall response for more details.\", \"To understand the sensitivity of NEMO and the VAE baseline to hyperparameters, we are currently running a grid search over the learning rate, dropout, and embedding dimension for the UHD cell-type classification dataset. We plan to add this to the supplement once it finishes running and we will update the rebuttal accordingly.\", \"To test the impact of our different augmentations, we ran two new analyses on the UHD cell-type classification dataset: (1) remove one augmentation and (2) add one augmentation. We have added this result to Supplementary Figure 14 with additional details in Supplementary L.2. Please see the overall response for more details.\", \"Questions\", \"\u201cWhat exactly is the definition of a channel in this context?\u201d\", \"The definition of a channel in this context is a single electrode. So when we restrict the template to one channel, we are restricting the template to be the waveform recorded on the single largest amplitude electrode.\", \"\u201cWhy was additive Gaussian noise chosen as the sole data augmentation?\u201d\", \"We used Gaussian noise for single-channel templates to help compensate for low spike count templates being noisier. We could not think of any additional nuisance variables for single-channel templates. For multi-channel templates, we introduced electrode dropout and per-channel amplitude jitter, which led to improvements for NEMO on both brain region and cell-type classification (please see Supplement E). We will add this explanation to the main text.\", \"\u201cHow do other data augmentation types impact the performance and results?\u201d\", \"To test the impact of our different augmentations, we ran two new analyses on the UHD cell-type classification dataset: (1) remove one augmentation and (2) add one augmentation. We thank the reviewer for the suggestion. 
Please see the overall response for more details.\"]}", "{\"comment\": \"I would like to thank the reviewer for their effort in addressing my concerns. Most of my concerns have been adequately addressed.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for your response and the additional analyses. I still think it's a good paper: I kept my score and increase my confidence.\"}", "{\"title\": \"Reminder: Last day for author feedback\", \"comment\": \"This is a reminder that today is the last day allotted for author feedback. If there are any more last minute comments, please send them by today.\"}", "{\"summary\": \"This paper presents a novel application of contrastive learning to solve two important problems in systems neuroscience by incorporating just two electrophysiological recording modalities: spiking activity and extracellular action potentials (EAPs). The authors developed a new framework called Neuronal Embeddings via Multimodal Contrastive Learning (NEMO) by combining the well-established Contrastive Language-Image Pretraining (CLIP) framework with task-specific data augmentations and encoders. The authors demonstrate the multimodality as well as the power and utility of NEMO by evaluating its performance on two very different downstream tasks:\\n\\n1.\\tcell-type classification among parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP) inhibitory interneurons using opto-tagged Neuropixels Ultra (NP Ultra) recordings from the mouse visual cortex, and\\n\\n2.\\tbrain-region classification among 10 broad areas using the public International Brain Laboratory (IBL) brain-wide map data set.\\n\\nThis paper\\u2019s novelty mainly stems from the utilization of a paired data set that combines an autocorrelogram (ACG) image of every neuron\\u2019s spiking activity and a waveform template of the neurons\\u2019 EAPs, and from the application of two separate encoders for the aforementioned two modalities. 
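The augmentation families discussed in this exchange (additive Gaussian noise, per-channel amplitude jitter, and electrode dropout) can be sketched in numpy. All parameter values and the template shape below are placeholders, not the settings used in the paper.

```python
# Illustrative numpy sketch of the augmentation families discussed above.
# All parameter values and the template shape are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(template, sigma=0.05):
    """Additive Gaussian noise, e.g. to mimic noisier low-spike-count templates."""
    return template + rng.normal(scale=sigma, size=template.shape)

def amplitude_jitter(template, max_scale=0.1):
    """Scale each channel (row) by an independent factor near 1."""
    scales = 1.0 + rng.uniform(-max_scale, max_scale, size=(template.shape[0], 1))
    return template * scales

def electrode_dropout(template, p_drop=0.2):
    """Zero out whole channels with probability p_drop."""
    keep = rng.uniform(size=(template.shape[0], 1)) >= p_drop
    return template * keep

waveform = rng.normal(size=(8, 60))  # synthetic 8-channel, 60-sample template
augmented = electrode_dropout(amplitude_jitter(add_gaussian_noise(waveform)))
```

The first augmentation applies to single-channel templates as well, while the latter two only make sense for multi-channel templates, matching the split described in the response.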
In both cell-type and brain-region classification tasks, the authors show that NEMO outperforms the state-of-the-art multimodal cell-type embedding methods including PhysMAP and VAE-based methods as well as a fully supervised method in terms of balanced accuracies and macro-averaged F1 scores with minimal fine-tuning.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This is a very well-written paper with clear organization of the figures and sound presentation of the data sets used, methods applied, and results obtained. The authors apply a successful contrastive learning method to develop a new framework that outperforms comparable state-of-the-art methods in both the cell-type classification task and the less widely-explored brain-region classification task. The task relies on two electrophysiological recording modalities: spiking activity and EAPs, which are more accessible, and the decent classification performances come with minimal fine-tuning. NEMO is able to differentiate between VIP and SST cells, which is highly valued in systems neuroscience, and its ability to yield data embeddings that lead to separable classification regions is impressive. The method described in this paper will be particularly helpful and useful to systems neuroscientists interested in applying this technique with the goal of decoding the neural circuitry underlying multiple biological functions.\", \"weaknesses\": \"Detailed graphical representations of the architectures used in the authors\\u2019 method may help the readers understand the details of NEMO better. 
A more comprehensive description, such as including explanations of 10D and 500D in the VAE baseline versions\u2019 latent spaces of the encoder architectures, would have further aided the clarity of the method\u2019s explanations.\n\nThe authors restrict the number of example neurons, whose recorded activities are inputted to the overall architecture, to five neurons without an explanation of why the input neuron number was kept low. Providing a rationale for limiting the number of input neurons to five as well as an explanation of how the results of the authors' method change with varying input neuron number would be greatly appreciated.\n\nThe authors state that they fixed the hyperparameters for all methods across the two data sets used in the experiments due to the limited number of labels for evaluation for the cell-type classification task. It is unclear whether this choice led to fair performance comparisons among the state-of-the-art methods. It would help to know that separate hyperparameter optimization among the different methods and data sets would not yield different results. Providing a brief analysis on the sensitivity of the results and the overall performance to variations in specific hyperparameters will notably strengthen the claims made in this paper.\", \"minor_comments\": \"There seems to be a citation error or a missing preposition in Section 1 when the authors cite IBL et al.\n\nRadford et al. 2021 to Radford et al., 2021 in Section 4\n\nTable 6 is non-existent in Section 6.1. The authors may be referring to Table 1.\n\nThere seems to be a citation error or a missing preposition in Section 6.3 when the authors cite Chen et al. 
(2020).\\n\\nFigure 5 (b) caption's word \\\"then\\\" should be changed to \\\"than.\\\"\\n\\nIn Section 7, the phrase \\\"should also be useful these down-stream tasks\\\" should be changed to \\\"should also be useful in these down-stream tasks.\\\"\", \"questions\": \"The waveform template is restricted to one channel with maximal amplitude and multi-channel template results are shown in Supplement E. What exactly is the definition of a channel in this context?\\n\\nWhy was additive Gaussian noise chosen as the sole data augmentation? Can a brief rationale for this specific choice be included in the paper? The authors demonstrate that adding two template augmentations: amplitude jitter and electrode dropout does not significantly improve the performance of the two downstream classification tasks. How do other data augmentation types impact the performance and results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Update to overall response\", \"comment\": \"We have finished running the hyperparameter sensitivity analysis. The results for this analysis can be found in Supplement L, Supplementary Figure 13, and Supplementary Table 4. As can be seen, NEMO is robust to the range of hyperparameters we tested: the average performance over the tested hyperparameters is the same as the default NEMO model reported throughout the paper. The VAE is also robust to the range of hyperparameters we tested. We also found that the learning rate we used for cell-type classification for the default VAE was lower than the original learning rate from [1] (we adjusted this to improve reconstruction) and that a larger learning rate improves downstream performance. We have adjusted this parameter and updated Figure 2 and Table 1. NEMO still substantially outperforms the VAE baselines and the linear NEMO model is still significantly better than the end-to-end fine-tuned VAE. 
We thank the reviewers for this suggested analysis.\\n\\n[1] Beau, Maxime, et al. \\\"A deep-learning strategy to identify cell types across species from high-density extracellular recordings.\\\" bioRxiv (2024).\"}" ] }
10DtLPsdro
Factor Graph-based Interpretable Neural Networks
[ "Yicong Li", "Kuanjiu Zhou", "Shuo Yu", "Qiang Zhang", "Renqiang Luo", "Xiaodong Li", "Feng Xia" ]
Comprehensible neural network explanations are foundations for a better understanding of decisions, especially when the input data are infused with malicious perturbations. Existing solutions generally mitigate the impact of perturbations through adversarial training, yet they fail to generate comprehensible explanations under unknown perturbations. To address this challenge, we propose AGAIN, a factor graph-based interpretable neural network, which is capable of generating comprehensible explanations under unknown perturbations. Instead of retraining like previous solutions, the proposed AGAIN directly integrates logical rules by which logical errors in explanations are identified and rectified during inference. Specifically, we construct the factor graph to express logical rules between explanations and categories. By treating logical rules as exogenous knowledge, AGAIN can identify incomprehensible explanations that violate real-world logic. Furthermore, we propose an interactive intervention switch strategy rectifying explanations based on the logical guidance from the factor graph without learning perturbations, which overcomes the inherent limitation of adversarial training-based methods in defending only against known perturbations. Additionally, we theoretically demonstrate the effectiveness of employing factor graph by proving that the comprehensibility of explanations is strongly correlated with factor graph. Extensive experiments are conducted on three datasets and experimental results illustrate the superior performance of AGAIN compared to state-of-the-art baselines.
[ "interpretable neural network", "factor graph", "perturbation", "explanation rectification", "graph learning" ]
Accept (Poster)
https://openreview.net/pdf?id=10DtLPsdro
https://openreview.net/forum?id=10DtLPsdro
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yVHcWXyGXW", "yNPjTsL3E7", "wdg3xi3bp5", "sZ7Lubrx1p", "sA85SklUNj", "pOuEvTbptE", "pCGvMcJek0", "nsPWX5z2ZK", "lnpjRcGF8K", "k9k9NgZzoy", "jprPCB89ry", "hj45P1IbS3", "eMB4mVPLa7", "bjv9hRlctd", "b0QZxDvye5", "YyM1uZifMl", "YuSirtbYFT", "VzX28yIBJX", "Vo6s8V2I6S", "TSL8DvVUQy", "S9Ia9S9fhM", "S4cDJPCBrd", "QkP3dmJLFH", "M0kFKjOwtD", "Jah5vzB3l1", "IYF4Y11mKa", "GsyTV5iWa7", "GOAgS16QMF", "FbXM8jw9E4", "CTQ7HMGSLi", "Ap5e0zA8Nh", "9aidJT9B1c", "9HDodeK4K0", "8E3wXXrj9e", "86AZ3oFsct", "7zTejQ1OVJ", "70rqxVplLp", "1Fxqu4qPSy" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732644687597, 1732609679724, 1733146839833, 1732619592782, 1732293306758, 1732109717244, 1730619565552, 1730720714590, 1730167468313, 1732111090015, 1732644579175, 1737523824843, 1732589467752, 1732614581700, 1732293370309, 1732722151877, 1732670689469, 1732110583212, 1732293203211, 1732715816887, 1732616222112, 1734417669860, 1732110527240, 1732644413161, 1732762999079, 1733178111023, 1732208305952, 1730707492558, 1732679309986, 1732110789582, 1732109991980, 1732550445991, 1732518201924, 1730747970145, 1732875177241, 1732718542445, 1732762855380, 1733219603811 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_XLZi" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_SU9s" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_XLZi" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_YnPb" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_YnPb" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_XsjD" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_PFRk" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_SU9s" ], [ "ICLR.cc/2025/Conference/Submission7228/Area_Chair_FdRQ" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_PFRk" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_XLZi" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_PFRk" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_PFRk" ], [ "ICLR.cc/2025/Conference/Submission7228/Reviewer_XsjD" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7228/Authors" ], [ "ICLR.cc/2025/Conference/Submission7228/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to follow-up comment by Reviewer SU9s\", \"comment\": \"Thank you very much for your time and effort in reviewing our paper. Your insightful suggestions greatly help us improve the quality of our paper. It is truly valuable for us to have such a meaningful discussion with you. If there are any other issues remaining, we are pleased to address them further.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi \\uff08Part 4\\uff09\", \"comment\": \"Dear reviewer XLZi,\\n\\nRegarding Question 1 of your Official Comment, we\\u2019d like to provide more detailed clarification. \\n\\nWe have already put the results of E-ACC and P-ACC for one real dataset into the main paper **(Line 438-466)**. The E-ACC and P-ACC for the other datasets are placed in the Appendix **(Line 1254-1330)**. The results indicate that AGAIN's performance remains superior in evaluations using standard metrics. \\n\\nWe apologize for the insufficient clarification earlier.\\n\\nThank you again for your time and effort in reviewing our paper and look forward to further discussions with you.\"}", "{\"title\": \"Summary of Revisions\", \"comment\": \"We sincerely appreciate the valuable feedback from all reviewers, which has significantly improved our work. In response, we have uploaded a revised version of the paper with substantial updates to address all the concerns and suggestions. Below, we provide a summary of the revisions made:\\n\\n**1. Baseline addition:** Based on comments from reviewer XLZi, we added 6 baselines, including 4 typical methods based on knowledge integration and 2 methods based on perturbation defense.\\n\\n**2. Computational efficiency analysis:** We added the experimental analysis of the computational efficiency of AGAIN. (Appendix D.2).\\n\\n**3. 
Ablation analysis:** To enhance the evaluation on the validity of the factor graph, we added the ablation analysis of the factor graph. (Appendix D.1)\\n\\n**4. Comparison of intervention strategies:** To evaluate the validity of intervention strategies, we added comparison experimental analysis and time complexity analysis of intervention strategies. (Appendix D.3)\\n\\n**5. Comparisons of E-ACC and P-ACC:** We list the results of the E-ACC and P-ACC comparisons of AGAIN with all baselines on the three datasets. (Table 3, Table 4, Table 9, Table 10, Table 11, Table 12)\\n\\n**6. Implementation details of adversarial attacks:** We provide implementation details of the adversarial attacks employed in our paper. (Appendix E)\\n\\n**7. Weight estimation:** We provide implementation details of the learning process on factor weights. (Appendix C.5)\\n\\n**8. Paragraph modification and section reorganisation:** We have thoroughly reviewed the manuscript and revised misleading or unclear sentences and paragraphs. We added the definition of adversarial attacks in Section 3.\\n\\n**9. Figure adjustment:** We revised Figure 3 for clarity of the concept intervention process. We revised Figure 7 to eliminate the issue of overlapping elements in the figure.\\n\\nWe believe these updates address the reviewers' concerns and substantially improve the quality of the paper. We thank all reviewers for their constructive feedback and support.\"}", "{\"comment\": \"I would like to express my gratitude to the authors for their thorough work and the effort they put into addressing my concerns in this rebuttal. As a result, I have raised both my score and my confidence in the paper.\\n\\nWhile most of my concerns have been satisfactorily addressed, I still have one remaining issue regarding the adversarial attacks employed. 
The attacks reported in your response are fine, and I agree these adversarial attacks are not standard, as they aim to change the explanation rather than decrease the model's confidence. For this reason, however, I believe it is important that these attacks are presented in the revised version of the main paper. The name of the attack, along with a reference to the paper proposing it, should be included in the main text (the equation as well, if space permits; otherwise, it can be placed in the appendix).\\n\\nI apologize if I have overlooked these details. If the authors could add this information or point out where it has been included, I would be happy to further raise my score.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi \\uff08Part 2\\uff09\", \"comment\": \"**3.**\\n\\n(1)This is because the perturbations discussed in this paper only disturb the explanations (concepts) and not the results of task prediction, and therefore the P-ACC (task predictive accuracy) metric hardly changes under different perturbations. This also illustrates that the effectiveness of AGAIN in improving the comprehensibility of explanations is not reflected in whether or not the P-ACC metrics change, but primarily in changes in the E-ACC and LSM metrics. The reason we still report P-ACC is to demonstrate that AGAIN did not sacrifice task predictive accuracy in order to improve the quality of explanations.\\n\\n(2)We would like to emphasize that most research in the interpretability domain mainly focuses on defending against perturbations that affect the explanation without changing the prediction [6]. This focus is due to the fact that, when the predictions are incorrect, the pursuit of explanations becomes less meaningful [7]. This is because ambiguity arises only when correct predictions are paired with incorrect explanations. 
At the same time, we would also like to emphasize that such perturbations are already widespread and widely confirmed, and are not mentioned for the first time in this paper. For example, [5] claims that \\u201cAn attacker can easily disrupt concepts without changing the prediction label.\\u201d A simple way to achieve this is to optimize an objective function to maximize the difference in concept activation while ensuring that the final prediction remains unchanged. This objective function converges with only a few optimization steps. The attacker can then search the solution space for perturbations that meet the condition. As a result, attacks based on these perturbations are easy to implement and widespread.\\n\\n**4.**\\n\\nAdversarial attacks that disrupt the explanation do differ from standard adversarial attacks in their goals. Here, we provide 3 perturbation examples to illustrate the specific implementation details of adversarial attacks:\\n\\n(1)Erasure perturbation\\n\\nThe goal of the erasure perturbation is to subtly remove concepts from an explanation without changing the results of the task prediction. In practice, for CBM, we typically have a pre-set threshold a for determining whether a concept is an activated concept. Specifically, for a sample x, if $h_c^{(n)} (x)-a > 0$, then the nth concept is an activated concept. Let $N$ denote a set of concepts that the attacker wishes to remove. 
To remove the existence of these concepts, the attacker's goal is as follows:\\n\\n\\\\begin{array}{l}\\nMax{\\\\rm{ }}\\\\sum\\\\limits_{n \\\\in N} {(\\\\mathbb{I}[a - {h_c}^{(n)}(x + \\\\delta )] - \\\\mathbb{I}[a - {h_c}^{(n)}(x)])} \\\\\\\\\\ns.t.{\\\\rm{ argmax }}{h_y}\\\\left( {{h_c}\\\\left( {x + \\\\delta } \\\\right)} \\\\right) = {\\\\rm{argmax }}{h_y}\\\\left( {{h_c}\\\\left( x \\\\right)} \\\\right)\\n\\\\end{array}\\n\\nwhere $\\\\mathbb{I}()$ denotes the indicator function, $h_c()$ is the concept predictor, and $h_y()$ is the category predictor (task predictor). $\\\\delta$ is the learnable perturbation.\\n\\n(2)Introduction perturbation\\n\\nThe goal of Introduction perturbation is to allow the existence of irrelevant concepts without modifying the task prediction results. The goal of the attacker is as follows:\\n\\n\\\\begin{array}{l}\\nMax{\\\\rm{ }}\\\\sum\\\\limits_{n \\\\in N} {(\\\\mathbb{I}[{h_c}^{(n)}(x + \\\\delta ) - a] - \\\\mathbb{I}[{h_c}^{(n)}(x) - a])} \\\\\\\\\\ns.t.{\\\\rm{ argmax }}{h_y}\\\\left( {{h_c}\\\\left( {x + \\\\delta } \\\\right)} \\\\right) = {\\\\rm{argmax }}{h_y}\\\\left( {{h_c}\\\\left( x \\\\right)} \\\\right)\\n\\\\end{array}\\n\\n(3)Confounding perturbation\\n\\nThe goal of Confounding perturbation is to simultaneously remove relevant concepts and introduce irrelevant concepts. The goal of the attacker is the sum of the above two goals.\\n\\nExamples of the above three perturbations have been implemented by [10] with good attack results. This objective function converges with only a few optimization steps. The attacker can then search the solution space for perturbations that meet the condition.\\n\\n**5.**\\n\\nTo further address your concerns about fairness, we have added these experimental results to the main paper **(Line 378-411)**. We have followed your comment that \\u201cSome of these methods have been already employed to defend against adversarial attacks, such as [3-4]\\u201d. 
\\u201c[6] has shown in the context of concept-based models that it can learn logic rules at training time and use them at inference time to defend against adversarial attacks.\\u201d. We have compared the methods suggested by the reviewer to specifically defend against adversarial perturbations: DKAA [8], MORDAA [9], and LEN [10]. The performance of AGAIN remains optimal compared to these three methods.\"}", "{\"title\": \"Response to Reviewer XsjD\", \"comment\": \"**Weaknesses 1:**\\n\\nThank you for your comments. We understand your concern about the reliability of our assumption. Actually, no. This assumption is realistic. Predictions are erroneous when they are affected by perturbations. We\\u2019d like to emphasize that most studies in the interpretability domain mainly focus on defending against perturbations that affect the explanation without changing the predictions [2]. This focus is due to the fact that, when the predictions are incorrect, the pursuit of explanations becomes less meaningful [3]. This is because ambiguity arises only when correct predictions are paired with incorrect explanations. Regarding your concern, we\\u2019d like to give a more detailed explanation of this situation:\\nFor concept bottleneck structures, attackers can generate perturbations that only interfere with concepts but not predictions. For example, [1] claims that \\u201cAn attacker can easily disrupt concepts without changing the prediction label.\\u201d A simple way to achieve this is to optimize an objective function to maximize the difference in concept activation while ensuring that the final prediction remains unchanged. This objective function converges with only a few optimization steps. The attacker can then search the solution space for perturbations that meet the condition. As a result, attacks based on these perturbations are easy to implement and widespread. \\n\\n**Weaknesses 2:**\\n\\nExternal knowledge is typically expressed as logical rules. 
However, these rules are often discrete. Factor graphs can incorporate these discrete rules, enabling the detection and correction of wrong explanations. Known logic rule sets can only handle strictly deterministic reasoning (either established or unestablished). This limitation makes purely logic-based reasoning inaccurate, as real-world external knowledge is often uncertain or fuzzy. To accurately detect erroneous interpretations, it is essential to reason with uncertainty about this external knowledge. Factor graphs enable this by leveraging rule weight learning and conditional probability estimation, thereby enhancing detection accuracy. \\nIn contrast, while a collection of known logic rules can also detect erroneous explanations, its detection accuracy may be limited due to the absence of uncertainty reasoning. To validate this, we have included a set of ablation experiments in the Rebuttal Revision **(Line 1119-1133)**, comparing the effectiveness of factor graphs with purely logic rule sets. The experimental results demonstrate that factor graphs significantly outperform methods that rely solely on logic rules.\\n\\n**Weaknesses 3:**\\n\\nThe explanation of Figure 3 is insufficient. There is no reference to Figure 4.\\nThank you for pointing out these issues. We have revised the manuscript based on your suggestions. You may refer to **Line 209** and **Figure 3**. \\n\\n**Questions 1:**\\n\\nConstructing a bipartite complete graph is a straightforward way. But the overhead of constructing a bipartite complete graph is the same as AGAIN. After constructing the bipartite complete graph, external knowledge still needs to be injected by setting weights based on logical rules. If logical rules are completely omitted and weights are learned solely from data samples, the graph fails to improve the comprehensibility of the explanation, as it lacks external knowledge necessary for detecting and correcting incorrect concepts. 
Therefore, to ensure the comprehensibility of the explanation under unknown perturbations, we must construct the factor graph using logical rules.\\n\\n**Questions 2:**\\n\\nEach factor is defined as a potential function that performs logical operations based on different rules, which can be categorized into coexistence and exclusion operations. We explained this point in the paper **(Line 216-221)** and apologize for any confusion caused by our initial explanation. Additional clarification has been included in the Rebuttal Revision **(Line 213-215)**.\\n\\n**Questions 3:**\\n\\nThank you for your suggestion. We have made additions in the Rebuttal Revision of the paper **(Line 1097)**.\\n\\n**Questions 4:**\\n\\nAttributional training focuses on retraining the model with adversarial samples to maximize the similarity between the model's explanations and the explanatory labels. Therefore, attributional training can be considered a type of adversarial training. we have made adjustments in the Rebuttal Revision **(Line 358)**.\\n\\n[1] Understanding and enhancing robustness of concept-based models. AAAI, 2023.\\n\\n[2] Adversarial attacks and defenses in explainable artificial intelligence: A survey. Information Fusion, 2024.\\n\\n[3] Interpretation of neural networks is fragile. AAAI, 2019.\\n\\n[4] Enhanced regularizers for attributional robustness. AAAI, 2021.\"}", "{\"summary\": \"This paper proposes AGAIN, a neural network model that generates comprehensible explanations under unknown perturbations by integrating logical rules directly during inference, rather than relying on adversarial training. Using a factor graph to identify and rectify logical inconsistencies, AGAIN demonstrates superior interpretability and robustness compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper presents an innovative method, AGAIN, that combines factor graphs with concept-level explanations to improve model interpretability under unknown perturbations.\\n2. The authors evaluate AGAIN across multiple datasets and baseline models, providing a broad view of its effectiveness.\\n3. The paper is clear and well-structured.\", \"weaknesses\": \"1. The factor graph requires predefined logical rules, which could be challenging to construct or generalize across different domains.\\n2. The algorithm\\u2019s process of exploring all possible intervention options to find the optimal solution could create computational overhead.\", \"questions\": \"1. The authors employed black-box perturbations to test robustness. Adding input noise could provide more convincing evidence.\\n2. I am concerned about the potential impact of large factor graphs on computational efficiency. I would like to see an analysis on this problem.\\n3. Currently, the algorithm explores all intervention options to select the optimal one. Is it possible to employ a simplified strategy, such as a heuristic or greedy algorithm, to reduce computational load?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces AGAIN (fActor GrAph-based Interpretable Neural Network), which generates comprehensible explanations for neural network decisions even under unknown perturbations. Unlike traditional adversarial training methods, AGAIN incorporates logical rules through a factor graph to identify and rectify explanatory logical errors during inference. The model is validated on three datasets: CUB, MIMIC-III EWS, and Synthetic-MNIST, demonstrating superior performance compared to existing baselines\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Good presentation and structure** The paper is well-structured and easy to read. 
The mathematical definition of the proposed method is clear and well-defined.\", \"**Nice experimental campaign**: Extensive experiments on three datasets (CUB, MIMIC-III EWS, and Synthetic-MNIST) demonstrate the superior performance of AGAIN over the compared baselines, although these results hold mainly for a metric (LSM) that has been created ad-hoc.\"], \"weaknesses\": \"## Major Issues\\n\\n- **Novelty, Related work and Compared methods**: the main issue with the current paper is that it only considers methods injecting knowledge into the models by means of factor graphs. However, the integration of knowledge into model predictions has been fully studied under different point of views: probabilistic (e.g., DeepProblog [1] and all its variants), logic constraints (see the survey of Giunchiglia et al. [2]). Also, it is not the first method defending against adversarial attack with knowledge and without employing adversarial training. Some of these methods have been already employed to defend against adversarial attacks, such as [3-4]. [5] is a survey entirely dedicated to enhancing interpretability and adversarial robustness with prior knowledge. [6] has shown in the context of concept-based models that it can learn logic rules at training time and use them at inference time to defend against adversarial attacks. This is also reflected in the experimental campaign that is extensive but does not consider any methods injecting prior knowledge to defend against adversarial attacks. The only compared methods are CBMs or adversarial-trained versions of the same models. \\n\\n\\n- **Paper positioning and Preliminaries**: the method provides explanations and a defence mechanism that is based on concept predictions; thus, it applies only to concept-based models. Most of the compared methods also belongs to this niche. While this is fully acceptable, explicit references to concept-based models only appears in Section 3-4. 
Therefore, I think it should state earlier that this is the focus of the paper, as most of the related work mentioned in the paper does not focus on concept-based explanations. Furthermore, there is no explicit mentions to concept-based models in the preliminaries. The \\u201cInterpretable Neural Network\\u201d paragraph should include citations to this literature and explain concept-based models.\\n\\n## Minor Issues\\n- P.2 \\u201c[\\u2026] even if the adversarial samples are available, retraining is only effective for at most one perturbation type\\u201d I think this statement is quite strong, and although it may have been proved for some attacks in Tan & Tian, 2023, I don\\u2019t think it is a general assumption that we can make. I think this sentence should be soften to \\u201cretraining is effective only for few perturbation types at the same time\\u201d. \\n- P.2 \\u201c[\\u2026] to ensure the expectation of the potential function is in its maximum value\\u201d. It is not clear at this point what is the potential function. Consider rephrasing this sentence.\\n- P.2 \\u201cThe explanations that are further regenerated align with the exogenous logic.\\u201d Not clear in this case what are the exogenous factors. \\n- P.2-3: The term \\\"defenses against comprehensibility\\\" seems a bit misleading. It implies that the goal is to prevent explanations from being understandable, which is not the case. Instead, the focus is on defending the comprehensibility of explanations against perturbations. \\u201cdefenses of comprehensibility\\u201d colud be a more appropriate definition.\\n\\n\\n[1] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt: DeepProbLog: Neural Probabilistic Logic Programming. NeurIPS 2018: 3753-3763\\n\\n[2] Giunchiglia, E., Stoian, M. C., & Lukasiewicz, T. (7 2022). Deep Learning with Logical Constraints. In L. D. 
Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22 (pp. 5478\\u20135485). doi:10.24963/ijcai.2022/767\\n\\n[3] Yin, M., Li, S., Cai, Z., Song, C., Asif, M. S., Roy-Chowdhury, A. K., & Krishnamurthy, S. V. (2021). Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. In proceedings of the IEEE/CVF international conference on computer vision (pp. 7858-7867).\\n\\n[4] Melacci, Stefano, et al. \\\"Domain knowledge alleviates adversarial attacks in multi-label classifiers.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence 44.12 (2021): 9944-9959.\\n\\n[5] Mumuni, Fuseini, and Alhassan Mumuni. \\\"Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning.\\\" Cognitive Systems Research (2023): 101188.\\n\\n[6] Ciravegna, Gabriele, et al. \\\"Logic explained networks.\\\" Artificial Intelligence 314 (2023): 103822.\", \"questions\": [\"Can the authors provide a comparison against one of the methods injecting knowledge (prior or learnt)?\", \"Scalability: The two limitations that have been reported (domain knowledge changes, correct prediction categories) are non-negligible. How could this method be extended to face these limitations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors propose a novel method called AGAIN to generate comprehensible explanations under unknown perturbations by employing a factor graph approach. The method presents a significant contribution to the field, addressing an important gap with a new perspective. The paper is well-structured, and the authors have provided thorough theoretical justifications and experimental analyses. 
These analyses effectively demonstrate the superiority of the proposed method over existing ones.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper offers a well-motivated and clear presentation of a unique approach. The use of a factor graph to handle perturbations is innovative and highly relevant to the research community. Furthermore, the authors have provided sufficient theoretical support and empirical evidence to justify the effectiveness of their approach.\", \"weaknesses\": \"1. Clarification on Figure 3:\\n In Figure 3, does the final output include hat{c}_{re} and prediction? Does the hat{c}_{re} mean the interpretation? Additionally, the figure lacks clarity on how hc is generated, which is crucial for understanding the method. Adding details on hc generation would make the figure more comprehensive.\\n \\n2. Comparison Metrics:\\n The authors compare their method with all baselines using the LSM metric. However, they only compare their method with CBM in terms of accuracy (P-ACC and E-ACC). It would be beneficial to extend the accuracy comparison to include all baselines to provide a more complete evaluation of the method\\u2019s performance.\\n \\n3. Figure 7 Interpretation and Readability:\\n In Figure 7, for each example, the authors provide two bar charts. Does the left bar chart represent the initial interpretation, and the right bar chart represent the interpretation combined with the factor graph? Clarification on this aspect would enhance the understanding of the figure. Additionally, some symbols and text in Figure 7 are overlapping.\\n \\n4. Inconsistency in Line 463 and Table 5:\\n The authors mention in line 463 that their method is compared with two baselines on the Synthetic-MNIST dataset. However, Table 5 lists four baselines. Furthermore, \\\"ProbCBM\\\" in Table 5 should be corrected to \\\"ProCBM\\\" for consistency. 
It is recommended that the authors proofread the paper to eliminate such inconsistencies and typographical errors.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer YnPb\", \"comment\": \"We appreciate your valuable feedback, which are insightful and can improve the quality of our work to a great extent. Definitely, you point out some issues in our paper, giving us inspiration on how to revise the manuscript. In response to your comments, we have provided detailed answers and revised the manuscript accordingly.\\n\\n**Weaknesses 1:**\\n\\nWe have made additions in the Rebuttal Revision of the paper **(Figure 3)**. The final output include $\\\\hat{c}_{re}$ and prediction. $\\\\hat{c}_re$ is the corrected explanation.\\n\\n**Weaknesses 2:**\\n\\nFollowing your comments, we revised the manuscript to provide more experimental results about accuracy comparison. We have made additions in the Rebuttal Revision of the paper **(Line 1242-1330 and Line 438-465)**.\\n\\n**Weaknesses 3:**\\n\\nWe appreciate your constructive suggestions. Yes, the left bar chart represent the initial interpretation, and the right bar chart represent the interpretation combined with the factor graph. We revised the Rebuttal Revision of the paper **(Line 506)**.\\n\\n**Weaknesses 4:**\\n\\nWe apologize for these typos. We have made additions in the Rebuttal Revision of the paper **(Line 087 and Line 312)**.\"}", "{\"title\": \"Response to follow-up comment by Reviewer YnPb\", \"comment\": \"Thank you very much for your time and effort in reviewing our paper. 
If there are any other issues remaining, we are pleased to address them further.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer XLZi,\\n\\nWe really appreciate your suggestions and comments, which are highly valuable to us. We would like to kindly follow up to ensure there is sufficient time for any further clarifications or concerns you might have. Please let us know if there is anything further we can do to assist or elaborate on.\\n\\nThank you once again for your time and effort in reviewing our paper and engaging in further discussions with us.\\n\\nBest regards,\\n\\nThe authors of submission 7228\"}", "{\"comment\": \"Thanks for the response and update! These do not affect my original overall review and I keep the original rating.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi (Part 3)\", \"comment\": \"**References:**\\n\\n[1] Semi-supervised Concept Bottleneck Models. ICML 2024.\\n\\n[2] Concept bottleneck models. ICML 2020.\\n\\n[3] Addressing leakage in concept bottleneck models. NeurIPS 2022.\\n\\n[4] Harnessing Prior Knowledge for Explainable Machine Learning: An Overview. SaTML 2023.\\n\\n[5] Understanding and enhancing robustness of concept-based models. AAAI 2023.\\n\\n[6] Adversarial attacks and defenses in explainable artificial intelligence: A survey. Information Fusion, 2024.\\n\\n[7] Interpretation of neural networks is fragile. AAAI 2019.\\n\\n[8] Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. CVPR 2021.\\n\\n[9] Domain knowledge alleviates adversarial attacks in multi-label classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.\\n\\n[10] Understanding and enhancing robustness of concept-based models. AAAI 2023.\"}", "{\"comment\": \"Thank you for your response. 
My concerns have been addressed, and I would like to increase the scores accordingly.\"}", "{\"comment\": \"This reminds me of another question. How are the \\\"logical rules\\\" obtained in this work?\"}", "{\"title\": \"Response to Reviewer PFRk (Part 2)\", \"comment\": \"**References:**\\n\\n[1] Label-free concept bottleneck models. ICLR.\\n\\n[2] Discover-then-name: Task-agnostic concept bottlenecks via automated concept discovery. ECCV.\\n\\n[3] Incremental Residual Concept Bottleneck Models. CVPR.\\n\\n[4] Semi-supervised Concept Bottleneck Models. ICML.\\n\\n[5] Understanding and enhancing robustness of concept-based models. AAAI.\\n\\n[6] Concept bottleneck models. ICML.\\n\\n[7] Addressing leakage in concept bottleneck models. NeurIPS.\\n\\n[8] Interactive concept bottleneck models. AAAI.\\n\\n[9] The Caltech-UCSD Birds-200-2011 dataset.\\n\\n[10] Few-shot lifelong learning. AAAI 2021.\\n\\n[11] Navigating Real-World Partial Label Learning: Unveiling Fine-Grained Images with Attributes. AAAI 2024.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi (Part 1)\", \"comment\": \"Thank you for your prompt replies. The comments you have raised help us to improve the quality of our paper.\\nBased on your comments, we have revised the Rebuttal Revision and provided detailed responses below.\\n\\n**1.**\\n\\n(1) We did not limit our comparison to LSM alone; we also evaluated standard CBM metrics, including E-ACC and P-ACC. These metrics are widely adopted in studies related to CBMs [1-3]. Due to space constraints, descriptions of all metrics, including the standard ones, were placed in the appendix, not just the LSM.\\n\\n(2) The LSM evaluates the degree to which concept-level explanations satisfy predefined logical rules. If a rule holds under the current concept prediction, the corresponding potential function for that rule has a value of 1; otherwise, it is 0. 
We compute the weighted ratio of the potential functions with a value of 1 to all potential functions in the factor graph. The LSM is defined as the mean of these weighted ratios across all the test samples.\\n\\n(3) We compute the LSMs of other methods based solely on their generated concept activations. These activations are assigned to our factor graph, which is then used to compute the LSM. Notably, other methods do not need to construct a separate factor graph for this process. To ensure experimental fairness, all methods utilize the same factor graph during the LSM computation phase.\\n\\n**2.**\\n\\nThank you for your further comment.\\n\\n(1) E-ACC and P-ACC denote the predictive accuracy of concepts and the predictive accuracy of tasks, respectively. In short, E-ACC is the accuracy of the explanation and P-ACC is the accuracy of the final prediction. In the datasets, each sample contains task prediction labels and concept prediction labels. Task prediction labels are stored as category indexes, and concept prediction labels are stored as binary vectors. Each element of the vector indicates whether the corresponding concept is active (i.e., whether the concept exists in the sample). Therefore, we calculate the similarity of labels and predictions to compute the E-ACC and P-ACC. In fact, we have already described this in the paper **(Line 1054-1062)**, but we would like to clarify the reviewer's concerns by providing more details. \\n\\n(2) When there is no perturbation in the environment, concept predictive accuracy is positively correlated with task predictive accuracy. In CBMs, the input of the category predictor is the concepts. Therefore, higher concept predictive accuracy leads to higher category predictive accuracy as well. 
However, in the adversarial perturbation setting, an attacker can tamper with several concepts that do not affect the final prediction, resulting in a significant reduction in concept accuracy while the prediction accuracy remains almost unchanged. Moreover, LSM is positively correlated with concept predictive accuracy because concept labels typically satisfy most critical logical rules.\\n\\n(3) In fact, we made considerable effort to fit the comparison results for all metrics into the main paper. However, due to space constraints, we had to put the results of the comparison between E-ACC and P-ACC (this contains 6 tables) in the appendix. We will readjust the layout of the camera-ready version so that we can put the results of the E-ACC and P-ACC comparisons into the main paper. Even so, we would like to emphasize that the comprehensibility of explanations naturally depends on the extent to which they satisfy human perception (the logical rules of the real world) [4]. Thus, the results of the LSM comparisons are a straightforward demonstration of the advantages of AGAIN in enhancing the comprehensibility of explanations.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer XsjD,\\n\\nWe really appreciate your effort in reviewing our paper; your suggestions are highly valuable to us. As the rebuttal phase is nearing its end, we want to kindly follow up to ensure there is sufficient time for any further clarifications or concerns you might have. Please let us know if there is anything further we can do to assist or elaborate on.\\n\\nThank you once again for your time and effort in reviewing our paper.\\n\\nBest regards,\\n\\nThe authors of submission 7228\"}", "{\"comment\": \"Thank you for your response and the update! 
I will maintain the original rating.\"}", "{\"metareview\": [\"The paper proposes AGAIN (fActor GrAph-based Interpretable Neural Network), a method that uses factor graphs to correct logical errors in concept-based explanations caused by unknown perturbations. By integrating predefined logical rules between concepts and categories, the method identifies and rectifies inconsistencies in the outputs of concept-based neural networks. The approach demonstrates its effectiveness through experiments on three datasets\\u2014CUB, MIMIC-III EWS, and Synthetic-MNIST\\u2014outperforming existing baselines in generating comprehensible explanations under perturbations.\", \"Strengths\", \"The method enables both the detection and correction of logical errors within a single framework.\", \"AGAIN achieves superior interpretability compared to other concept-based neural networks, even when perturbations are unknown.\", \"The mathematical framework is well-defined, and the paper is clear and well-structured overall.\", \"Extensive experiments on three datasets demonstrate the method's superior performance compared to existing baselines.\", \"The examples provided are intuitive, and the three-stage framework is reasonable.\", \"Weaknesses\", \"The method assumes explanations change under perturbations without affecting predictions, which may not hold true for concept bottleneck models.\", \"The reliance on predefined logical rules can be limiting, as constructing such rules could be challenging or domain-specific.\", \"The advantage of using a factor graph over simpler methods (e.g., directly detecting inconsistencies) is not clearly justified.\", \"The paper does not compare AGAIN with other methods that integrate prior knowledge for adversarial defense (e.g., DeepProblog).\", \"Scalability issues arise from manually annotated concepts, large factor graphs, and computational overhead from exploring all intervention options.\", \"Some sections, such as implementation details and 
explanations of figures (e.g., Figure 3 and Figure 7), lack clarity.\", \"The datasets used may not reflect real-world adversarial scenarios, limiting the method's general applicability.\", \"Most concerns have been addressed by the authors during the rebuttal period.\"], \"additional_comments_on_reviewer_discussion\": \"This paper ended up with ratings 8, 6, 6, 6, 5. The only negative reviewer\\u2019s major concerns are \\u201crelying on pre-defined concepts and logics.\\u201d The authors clarified that \\u201chow to automatically extract rules and concepts is not the focus of this paper but rather another problem worth investigating,\\u201d which I tend to agree to. There are a number of works that rely on pre-defined concepts. I therefore recommend accepting this paper.\"}", "{\"title\": \"Response to Reviewer PFRk (Part 1)\", \"comment\": \"**Weaknesses 1:**\\n\\nWe fully understand your concern about manual annotations. However, this is widely considered as a limitation of datasets, rather than of CBMs [2]. CBM itself does not require an annotation process; therefore, AGAIN simply utilizes the well-established concept labels available in datasets. For concept annotation of datasets, many studies have already designed plug-ins to automatically annotate concepts specifically for datasets without concept labels [1-4]. These plug-ins use a multimodal-based approach to automatically extract concepts from samples and have achieved excellent performance. These plugins are sufficient to provide data support for CBMs training.\\n\\n**Weaknesses 2:**\\n\\nWe apologize for any confusion Section 4.2 may cause. We have made an adjustment to Section 4.2 in the Rebuttal Revision **(Line 230-246)**. The phrase 'reasons about a conditional probability' is a typographical error, and $G$ refers to a graph.\\n\\n**Weaknesses 3:**\\n\\n(1) The datasets selected in this paper reflect adversarial scenarios in practice. 
For example, CUB has been a standard dataset for evaluating the prediction accuracy and interpretability of CBMs under adversarial attacks [5-8]. \\n\\n(2) AGAIN is applicable to real-world situations. AGAIN was evaluated on real datasets capable of modeling real-world situations. Please allow us to give some existing examples to show that these datasets are capable of modeling real situations, verifying the applicability of our model:\\n\\n* Mazumder et al. [10] argue that CUB can effectively evaluate \\u201cthe performance of deep models in real life situations\\u201d. \\n\\n* Jiang et al. [11] considered the CUB dataset as a \\u201cfine-grained dataset in real-world applications\\u201d, and argued that evaluating on CUB \\\"can contribute to advancement of the proposed approach in real-world scenarios\\\". \\n\\n* As presented in [9], \\\"the within-class variance in the CUB dataset is large due to variations in lighting, background, and extreme pose variations (e.g., flying birds, swimming birds, and perched birds partially obscured by branches)\\\". These characteristics provide a strong simulation of real-world situations.\\n\\n(3) Real-world adversarial scenarios can be summarized into two types: 1) sensor disruption: an attacker can remotely disturb data sensors to perturb the data signals fed to the model (e.g., disrupting a patient's e-health metrics); 2) data tampering: an attacker can directly modify the data by adding a perturbation and re-feeding it back to the model (e.g., putting a disruptive texture on road signage to misdirect the in-vehicle camera). The real datasets we use fully cover the above two realistic adversarial scenarios. Specifically, the perturbation-injected samples of the MIMIC-III EWS dataset can simulate the sensor signals interfered with by the attacker in Scenario 1. 
The samples of the CUB dataset injected with perturbation can simulate the disruptive texture imposed on the image by the attacker in Scenario 2.\\n\\n**Questions:**\\n\\nFollowing your comments, we revised the manuscript to provide more details about probability estimation. Please refer to **Line 230-246** in the Rebuttal Revision of the paper.\\n\\nFirst, after each variable (concept and category) in $\\\\mathcal{G}$ is assigned a value, boolean values are output by the potential functions of all factors. These boolean values indicate whether the assignments of concepts and categories satisfy the logical rules represented by the potential functions. Therefore, the weighted sum of all potential functions quantifies the extent to which concept assignments satisfy the logic rules in $\\\\mathcal{G}$.\\nThen, we seek to obtain the likelihood of the current concept assignment occurring, conditional on the known categories and logic rules. We quantify this likelihood by computing a conditional probability using the weighted sum of potential functions. We consider all possible concept assignments and compute the expectation for the current concept assignment. This expected value is taken as the conditional probability, which is then used to detect whether concept activations are perturbed.\\nFor illustrative purposes, we provide an example. Suppose there are concepts $A$ and $B$. The current concept assignment is {1,0}, denoting $A=1$ (active) and $B=0$ (inactive). We iterate through all four possible assignments: {1,0}, {0,1}, {1,1}, {0,0}. We compute the weighted sum of the potential functions for each of the four cases and compute the expectation of the potential function for {1,0}. 
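This enumeration can be sketched in a few lines of code (an illustrative toy only: the two potential functions and their weights below are hypothetical stand-ins, not the rules or implementation used in the paper):

```python
from itertools import product

# Hypothetical potential functions over a binary assignment (A, B); each
# returns 1 if its logical rule is satisfied, else 0. These toy rules are
# illustrative stand-ins, not the rules used in the paper.
potentials = [
    lambda a: int(a[0] == 1),            # rule: concept A should be active
    lambda a: int(not (a[0] and a[1])),  # rule: A and B are mutually exclusive
]
weights = [1.0, 1.0]

def weighted_sum(assignment):
    """Weighted sum of potential-function values for one concept assignment."""
    return sum(w * phi(assignment) for phi, w in zip(potentials, weights))

def conditional_probability(assignment, n_concepts=2):
    """Normalize the assignment's weighted sum against all 2^n possible
    binary assignments, giving its likelihood under the encoded rules."""
    total = sum(weighted_sum(a) for a in product((0, 1), repeat=n_concepts))
    return weighted_sum(assignment) / total

p = conditional_probability((1, 0))  # the example assignment {A=1, B=0} -> 0.4
```

Under these toy rules, the assignment {1,0} accounts for 2 of the 5 total weighted rule satisfactions, so its normalized likelihood is 0.4; an unusually low value for an observed assignment would flag it as potentially perturbed.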
This expectation is the conditional probability that concept assignment $\\{1,0\\}$ occurs conditionally on the known categories and logic rules.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi\", \"comment\": \"Thank you very much for your kind reply and raising your score and your confidence. We are glad to hear that our rebuttal has addressed your concerns and questions. Your thoughtful suggestions have greatly helped us improve the quality of our paper.\\n\\n* We have incorporated the names and descriptions of these adversarial attacks, along with references, into the Rebuttal Revision of the main paper **(Line 177-185)**. Additionally, we have placed a more detailed implementation containing descriptions of the formulas in the appendix **(Line 1310-1354)**.\\n\\n* In fact, in the original version of the main paper, we also summarized existing adversarial attack methods against explanations in the Related Work Section **(Line 101-123)**. \\n\\nThanks again for your comments. Hope we have well addressed your concerns. If there are any other issues remaining, we are pleased to address them further.\"}", "{\"title\": \"Response to follow-up comment by Reviewer XLZi\", \"comment\": \"Thank you very much for raising your score from 6 to 8. Your insightful suggestions greatly help us improve the quality of our paper. It is truly valuable for us to have such a meaningful discussion with you.\"}", "{\"comment\": \"After going through the paper details with rebuttal information, I think this paper clearly does not reach the ICLR acceptance bar. Relying on pre-defined concepts and logics is the major concern. The approach is not novel or practically scalable. 
Another minor problem is that the paper's presentation still needs to go through some major revision before being considered publishable.\"}", "{\"comment\": \"I thank the authors for taking the time to answer my questions and produce further experiments.\\n\\nHowever, I still have some questions:\\n1) Why do you make comparisons in terms of LSM only? This is not a standard metric and it is introduced only in the appendix. I think this is a **major issue** in the current paper that I did not notice during the first reading. Can you provide more explanations regarding this metric, in particular for how you compute it for other methods? You say that LSM considers the weighted sum of potential functions within the factor graph, but how can you compute it for the compared methods that do not provide a factor graph?\\n2) Can you provide more explanations about the other two metrics P-ACC and E-ACC? Can you provide some correlations w.r.t. standard CBM metrics like task accuracy and concept accuracy? Why are the results for the other methods for these metrics only reported in the appendix? \\n3) Why are the results for the P-ACC similar for all methods? What does it mean that the perturbations do not affect the outcome? Normal adversarial attacks are indeed those that modify the outcome of a model. \\n4) Particularly if you are considering specific attacks that are not the standard ones, can you provide more examples of the perturbations you consider, in terms of the adversarial attack employed to discover them?\\n5) While I thank you for taking the time to test some of the suggested methods, since you have put these results only in the Appendix, I still do not find your comparison fair enough. 
The methods that you compare with have not been studied to be consistent against adversarial perturbations, while the suggested ones are.\"}", "{\"summary\": \"This paper proposes AGAIN (fActor GrAph-based Interpretable Neural Network), a new approach to maintain comprehensible neural network explanations when inputs are affected by unknown perturbations. AGAIN builds factor graphs from explanations, and integrates logical rules through the graphs. The system detects adversaries by identifying violations of real-world logic, and uses an interactive intervention strategy to fix incorrect explanations. Tested on three datasets (CUB, MIMIC-III EWS, and Synthetic-MNIST), AGAIN outperformed existing methods in generating comprehensible explanations under both known and unknown perturbations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel and interesting idea that uses the logical correctness of explanation to detect and defense against noises and adversaries.\\n2. The three stage framework is reasonable to me. \\n3. The examples used in the paper are intuitive.\", \"weaknesses\": \"1. The concept bottleneck model is not easy to scale, as it requires manual annotations of concepts.\\n2. The implementation details of Section 4.2 is not very clear (the introduction is too conceptual). For example, is $\\\\mathcal{G}$ a graph or a model? At the beginning I thought $\\\\mathcal{G}$ is a graph, but in Line 221 it \\\"reasons about a conditional probability\\\".\\n3. The datasets used in this work are not very strong. I doubt if the work is applicable to real-world situations. At least, the datasets used cannot reflect adversarial scenarios in practice.\", \"questions\": \"How is the reasoning conducted on G? How is the probability estimation implemented in Equation 1? 
Please provide more details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to follow-up comment by Reviewer PFRk\", \"comment\": \"Thank you for your comments.\\n\\nOur logical rules are predefined. Specifically, for the CUB dataset, we used the table containing the correlations between concepts and categories, which was provided with the original dataset and annotated by ornithologists, to extract logical rules [1].\\nFor the MIMIC-III EWS dataset, we directly utilized the medical concepts and logical rules developed by [2]. These medical concepts and logical rules are obtained by medical experts who analyze the vital signs of samples.\\nFor the synthetic dataset, following [3], logical rules for handwritten digit concepts and categories are specified and constructed.\\n\\nIn fact, in the original version of the paper, we had already included the rule extraction process for each dataset in the **Appendix C.1** **(Line 964-1021)**.\\n\\nThank you very much for your time and effort in reviewing our paper. Hope we have well addressed your concerns. If there are any other issues remaining, we are pleased to address them further.\\n\\n[1] Concept bottleneck models. ICML 2020.\\n\\n[2] Addressing leakage in concept bottleneck models. NeurIPS 2022.\\n\\n[3] Probabilistic concept bottleneck models. ICML 2023.\"}", "{\"title\": \"Response to Reviewer SU9s\", \"comment\": \"**Weaknesses 1:**\\n\\nThis is indeed a limitation, and we thank you for your valuable comments. In fact, not only do factor graphs require predefined logic rules; all methods based on external knowledge integration inevitably require predefined knowledge. Therefore, how to automatically extract knowledge from the external environment is a separate and important problem. We have already discussed this limitation in the conclusion. 
**(Line 537-539)**\\n\\n**Weaknesses 2:**\\n\\nIn theory, the computational overhead of AGAIN's intervention is indeed high. However, in practice, the number of interventions is usually no more than 100, because erroneous concepts are typically few in number. For example, in the CUB dataset, the number of incorrect concepts is usually only 1-10 [1]. Intervention operations are performed on a small number of concepts, as these erroneous concepts are localized during the detection phase. Even so, reducing the computational overhead of interventions is a valuable topic for discussion. Additionally, we have added a set of comparison experiments **(Line 1173-1208)** to address Question 3.\\n\\n**Question 1:**\\n\\nThank you for your comment. Indeed, the process of injecting black-box perturbations into input data is by nature adding input noise.\\n\\n**Question 2:**\\n\\nFollowing your comments, we revised the Rebuttal Revision to provide more details about computational efficiency analysis **(Line 1143-1171)**.\\n\\n**Question 3:**\\n\\nThese strategies can reduce overhead in theory, but their practical payoff is not as significant. For further details, please refer to the experimental analysis in the Rebuttal Revision **(Line 1173-1208)**.\\n\\n[1] Understanding and enhancing robustness of concept-based models. AAAI.\"}", "{\"title\": \"Response to Reviewer XLZi\", \"comment\": \"**Weaknesses Major Issues 1:**\\n\\nThank you for your comment. Here, we\\u2019d like to highlight:\\n\\n(1) In this paper, we focus on generating comprehensible explanations under unknown perturbations instead of injecting knowledge into the model. To address this problem, we require an effective way to inject external knowledge. We demonstrate in a supplemental ablation analysis **(Line 366-375)** that knowledge injected through factor graphs is more helpful in generating comprehensible explanations compared to other knowledge integration methods. 
After injecting external knowledge, we design an error detection mechanism and a concept intervention strategy to identify and correct wrong concept activations, thereby improving the comprehensibility of explanations.\\n\\n(2)Although AGAIN also injects external knowledge, it has a distinctly different research goal and research question compared to [1-6]. Specifically, [1-4] focus on constraining the model to output correct predictions, rather than ensuring the model produces correct explanations under adversarial attacks. As a result, the model may still produce incorrect explanations even when it predicts correctly. [5] highlights that external knowledge can enhance interpretability, but it does not explain how the model ensures interpretability when explanations are disturbed by unknown perturbations. Similarly, LEN [6] learns rules between concepts and categories to correct predicted categories, but it does not address the correction of concepts. Therefore, we believe that none of these studies tackle the issue discussed in our paper.\\n\\n(3)You raised concerns about the sufficiency of the comparison methods, which suggests that other readers may be similarly confused. Therefore, we have added experiments to compare the performance differences between AGAIN and the methods listed by you, focusing on knowledge integration. This experiment was included in the Rebuttal Revision of the paper **(Line 366-375)**. The results show that the knowledge injected by the existing methods relies solely on forward reasoning to correct predictions and cannot reverse the conditional probabilities of the explanations based on those predictions. As a result, these methods lack the ability to correct explanations. In this regard, factor graphs offer an irreplaceable advantage.\\n\\n**Weaknesses Major Issues 2:**\\n\\nWe appreciate your constructive suggestions. Based on your suggestions, we have added concept-based models to the related work and preliminaries sections. 
These additions can be found in the Rebuttal Revision of the paper **(Line 117-121 and Line 140-145)**.\\n\\n**Weaknesses Minor Issues:**\\n\\nWe have made additions in the Rebuttal Revision of the paper **(Line 056, Line 082, Line 083, and Line 103)**.\\n\\n**Question 1:**\\n\\nWe added experiments to compare the performance differences between AGAIN and knowledge-based integration methods. These experiments were included in the Rebuttal Revision of the paper **(Line 366-375)**. \\n\\n**Question 2:**\\n\\nWe plan to extend AGAIN to overcome the above limitations from two aspects:\\n\\n(1) Domain knowledge changes: we will design modules to automatically mine logical rules, such as automatically constructing knowledge graphs, mining logical rules from knowledge graphs, and adaptively learning external knowledge from different scenarios.\\n\\n(2) Correct prediction categories: we will enhance the logical association between concepts to reduce the sensitivity of external knowledge to prediction categories.\\n\\n[1] DeepProbLog: Neural Probabilistic Logic Programming. NeurIPS.\\n\\n[2] Deep Learning with Logical Constraints. IJCAI.\\n\\n[3] Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. CVPR.\\n\\n[4] Domain knowledge alleviates adversarial attacks in multi-label classifiers. TPAMI.\\n\\n[5] Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning. Cognitive Systems Research.\\n\\n[6] Ciravegna, Gabriele, et al. Logic Explained Networks. Artificial Intelligence.\"}", "{\"title\": \"Response to follow-up comment by Reviewer PFRk\", \"comment\": \"Thank you very much for your response to our rebuttal. 
We are pleased to provide further clarification.\\n\\n* The mainstream paradigm for automatic concept annotation is to leverage large language models (LLMs) such as GPT-3 to generate class-specific concepts and vision-language models (VLMs) such as CLIP to learn the mapping from inputs to concepts in an attribute-label-free manner. In fact, this technique can be applied to most datasets [1]. Thus, AGAIN enables training and inference on most datasets without original concept labels.\\n\\n* Thank you for your insightful comment. We revised and refined Sec 3, Sec 4.1, 4.2. We provided a more detailed version of the preliminary about CBM (We think by \\u201cPGM\\u201d you mean \\u201cCBM\\u201d), and explained how CBM generates concept-level explanations. Specifically, the CBM is trained on data points $(x, C, y)$, where the input $x$ is labeled with both concepts $C$ and the target $y$. $C=\\\\{c_0,...,c_i\\\\}$ is a set of binary values, where the $i$-th element in the set denotes whether or not $x$ contains the $i$-th human-specified concept, with $c=1$ denoting containment and $c=0$ denoting non-containment. The CBM consists of two components: the concept predictor and the category predictor. The concept predictor receives the input $x$ and predicts concepts $\\\\hat{C}$. The category predictor receives predicted concepts $\\\\hat{C}$ and predicts the target $y$. The CBM takes the concepts $\\\\hat{c}$ predicted at inference as the explanation of the model. $\\\\hat{c}$ answers which human-specified concepts are contained in input $x$. These concepts directly determine the final prediction $\\\\hat{y}$. \\nWe will incorporate these descriptions into the camera-ready version.\\n\\nYour suggestion is valuable to us. We're pleased to have a further discussion with you if you still have any concerns or questions.\\n\\n[1] Discover-then-name: Task-agnostic concept bottlenecks via automated concept discovery. ECCV.\"}", "{\"comment\": \"1. 
I am more curious about the \\\"automatically annotated concepts\\\" you mentioned in your rebuttal. This would be more fascinating than using established concepts.\\n2. The current version of Sec 3, Sec 4.1, 4.2 are still not good. My suggestions are as follows. (1) Make the current writing more concise. Try to avoid relying too much on ChatGPT in your writing, especially in parts that involve math or notation. (2) Provide a more detailed version of the preliminary about PGM, as this is the fundamental model in your work. (3) Make it more clear how PGM provides explainable predictions based on the preliminary.\\n\\nI can reduce my confidence to 3. But I am not going to increase my score. I want to see other reviewers' comments.\"}", "{\"summary\": \"The paper proposed a method using factor graphs to correct errors in concept-based explanations caused by perturbations.\\nSpecifically, it constructs a factor graph using predefined logical rules between concepts or between concepts and categories. \\nThis graph helps identify logical errors in the output from concept-based interpretable neural networks by evaluating the likelihood of these errors. \\nAdditionally, by leveraging the factor graph, it is possible to correct these logical errors in the explanations. \\nExperimental comparisons on three datasets demonstrate that the proposed method outperforms existing concept-based approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The proposed method enables both the detection of logical errors and the correction of these errors within a single framework.\", \"Compared to other concept-based interpretable neural networks, the proposed method achieves higher comprehensiveness in explanations, regardless of whether the perturbations are known or unknown.\"], \"weaknesses\": [\"While the proposed method assumes that explanations change due to perturbations without affecting predictions, this seems unrealistic. 
Particularly in interpretable neural networks with a concept bottleneck structure, as assumed in this study, changes in the concepts outputted by the neural network would naturally lead to changes in predictions, which undermines this assumption.\", \"The proposed method requires predefined logic rules between concepts and categories. If these rules are already known, wouldn\\u2019t it be possible to detect inconsistencies between concepts and predictions without the need for the factor graph? The advantage of using a factor graph is unclear.\", \"As noted in the minor comments below, there is room for improvement in the writing.\"], \"minor_comments\": [\"The explanation of Figure 3 is insufficient.\", \"There is no reference to Figure 4.\"], \"questions\": [\"Defining the logic rule set seems expensive. Would it be difficult to construct a factor graph by connecting concepts and categories with a bipartite complete graph and estimating the weights $w_i$?\", \"How do you distinguish between coexistence and exclusion in the factor graph?\", \"It would be better to describe the specific estimation algorithm of $w_i$ in the Appendix.\", \"Line 336-337: What is \\\"attributional training\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer PFRk\", \"comment\": \"Dear reviewer PFRk,\\n\\nWe really appreciate your effort in reviewing our paper, your suggestions are highly valuable to us. As the rebuttal phase is nearing its end, we would like to kindly follow up to ensure there is sufficient time for any further clarifications or concerns you might have. 
Please let us know if there is anything further we can clarify or elaborate on.\\n\\nThank you once again for your time and effort in reviewing our paper.\\n\\nBest regards,\\n\\nThe authors of submission 7228\"}
To solve this new challenge, we directly utilize logical rules (predefined or extracted) to detect and correct explanation errors. Therefore, **AGAIN is certainly novel in terms of improving the comprehensibility of explanations**. Further, there are many approaches to automatically extract external knowledge or logical rules from datasets [2,3]. Similarly, there are many approaches to automatically annotate concepts for datasets [4,5,6,7]. These methods can provide effective external knowledge support and training data support for AGAIN. Thus, **the significance of AGAIN would not be diminished because it utilizes predefined rules**. Furthermore, we already provided the analysis on the scalability and computational efficiency of AGAIN (Appendix D.2). The experimental results show that the scalability of AGAIN is good.\\n\\n* We appreciate your valuable feedback on presentation, which is insightful and can improve the quality of our work to a great extent. We will thoroughly review the final manuscript and revise misleading or unclear sentences and paragraphs to improve our presentation. \\n\\nThanks again for your comments. Hope we have well addressed your concerns.\\n\\n[1] Harnessing Prior Knowledge for Explainable Machine Learning: An Overview. IEEE SaTML2023.\\n\\n[2] Simre: Simple contrastive learning with soft logical rule for knowledge graph embedding. Information Sciences 2024.\\n\\n[3] Collaborative artificial intelligence system for investigation of healthcare claims compliance. Scientific Reports 2024.\\n\\n[4] Label-free concept bottleneck models. ICLR.\\n\\n[5]Discover-then-name: Task-agnostic concept bottlenecks via automated concept discovery. ECCV.\\n\\n[6]Incremental Residual Concept Bottleneck Models. CVPR.\\n\\n[7]Semi-supervised Concept Bottleneck Models. ICML.\"}" ] }
107ZsHD8h7
Autoformulation of Mathematical Optimization Models Using LLMs
[ "Nicolás Astorga", "Tennison Liu", "Yuanzhang Xiao", "Mihaela van der Schaar" ]
Mathematical optimization is fundamental to decision-making across diverse domains, from operations research to healthcare. Yet, translating real-world problems into optimization models remains a formidable challenge, often demanding specialized expertise. This paper formally introduces the concept of *autoformulation*---an automated approach to creating optimization models from natural language descriptions for commercial solvers. We identify the three core challenges of autoformulation: (1) defining the vast, problem-dependent hypothesis space, (2) efficiently searching this space under uncertainty, and (3) evaluating formulation correctness (ensuring a formulation accurately represents the problem). To address these challenges, we introduce a novel method leveraging *Large Language Models* (LLMs) within a *Monte-Carlo Tree Search* framework. This approach systematically explores the space of possible formulations by exploiting the hierarchical nature of optimization modeling. LLMs serve two key roles: as dynamic formulation hypothesis generators and as evaluators of formulation correctness. To enhance search efficiency, we introduce a pruning technique to remove trivially equivalent formulations. Empirical evaluations across benchmarks containing linear and mixed-integer programming problems demonstrate our method's superior performance. Additionally, we observe significant efficiency gains from employing LLMs for correctness evaluation and from our pruning techniques.
[ "Large Language Models", "optimization modeling" ]
Reject
https://openreview.net/pdf?id=107ZsHD8h7
https://openreview.net/forum?id=107ZsHD8h7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zEM9AF4HYU", "yUfLkfnE7k", "vMUVFJVW2z", "uhXY2H25aJ", "g5ab4zKTMZ", "g0Un3h2VXa", "ZVVA3QazwT", "X2AncvMGOY", "WBg6Soiet6", "Tu1MW3fwap", "PL4VubIKfo", "LYYpdWE5x5", "JuoD9bcu7D", "HOpSjIZImM", "CluUOQUUjp", "CMWlD9E1yF", "7n4gZ85t9u", "5cC7O4Zx93", "4FpRbDqgaZ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732186091617, 1732185393677, 1732185942889, 1737524064557, 1732185102183, 1730648867779, 1732554181600, 1732185706454, 1732947201056, 1730009681223, 1732185326546, 1732607455581, 1732186419876, 1734586256286, 1729794106544, 1731124125383, 1732185613556, 1732917422183, 1732185510017 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Reviewer_U3Rw" ], [ "ICLR.cc/2025/Conference/Submission10599/Reviewer_mbTn" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Reviewer_Utyj" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ "ICLR.cc/2025/Conference/Submission10599/Area_Chair_mtQv" ], [ "ICLR.cc/2025/Conference/Submission10599/Reviewer_mbTn" ], [ "ICLR.cc/2025/Conference/Submission10599/Reviewer_e8hb" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10599/Reviewer_Utyj" ], [ "ICLR.cc/2025/Conference/Submission10599/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer mbTN (1/2)\", \"comment\": \"*We appreciate the reviewer\\u2019s detailed and thoughtful evaluation and positive feedback.*\\n\\n---\\n\\n### [P1] Autoformulation objective\\n\\nThank you for raising these important concerns. Allow us to clarify our original rationale, then outline our planned revisions to improve the connection between our theoretical framework and methodology.\\n\\n**Rationale of probabilistic definition.** The probabilistic framework (`Eq 2`) serves as a generalized formulation that:\\n1. Captures the inherent uncertainty in translating natural language into mathematical/computational models\\n2. Explicitly decouples mathematical formulation from computational implementation, recognizing their distinct challenges\\n\\n**Connecting definition to method.** Our method directly instantiates this objective (`Eq 2`):\\n* $p_\\\\phi$ implements a tree-search model exploring possible formulations, where $\\\\phi$ represents the parameters that are updated through MCTS backpropagation: the value function $V(s)$ and visitation counts $N(s)$. Formally, for the set of all nodes $\\\\mathcal{N}$, $\\\\phi=\\\\lbrace{(V(s), N(s)) | s \\\\in\\\\mathcal{N}\\\\rbrace}$\\n* $p_\\\\psi$ is realized through our custom-designed deterministic parser, tailored to our mathematical model representation\\n* $Q(\\\\cdot)$ implements the dual evaluation of formulation correctness (based on LLM comparative scoring) and optimality gap (`Eq 4`)\\n\\n**Proposed revisions.** So far, we have only provided clarifications. While our probabilistic framework is theoretically sound, your review has helped us realize that the current formulation can be confusing for the reader, obfuscating the core connection between the problem definition and methodology. 
To better align with our method's focus on search, we propose revising `Eq 2` to:\\n\\n$(m^*, c^*) \\\\in \\\\text{argmax}_{m \\\\in \\\\mathcal{M}, c \\\\in \\\\mathcal{C}} ~ Q(m, c; d)$\\n\\nThis revised formulation more explicitly connects our search-based approach while maintaining generality.\\n\\n**Distribution over computation models.** Lastly, we wanted to clarify that the second transformation ($p_\\\\psi$), in principle, presents several important challenges that can introduce uncertainty. Most notably, this is reflected in the choice of solvers and their configurations, for example, the configuration of hyperparameters for cutting plane algorithms. Our probabilistic formulation anticipates these challenges and provides a framework for future research to systematically address them, even though we made simplifying implementation choices (deterministic parsing) in this first exploration of autoformulation.\\n\\n**Actions taken:** Following your feedback, we have revised the formulation of `Eq 2` to improve clarity and expanded the discussion of computational transformation challenges in `L195`.\"}", "{\"title\": \"Response to Reviewer e8hb (2/3)\", \"comment\": \"---\\n### [P2] Optimality gap and computational efficiency\\n\\nWe appreciate you raising this point, which allows us to highlight why optimality gap and computational efficiency are both formulation and solver dependent. In the table below, we describe the impact of both **formulation** and **solver choice** on **optimality gap** and **computational efficiency** for different types of optimization problems.\\n\\n\\n| Problem type | Factor | Impact on optimality gap | Impact on computational efficiency |\\n|---|---|---|---|\\n| **Type I** \\u00a0 (Originally Convex) | **Formulation** | **Minimal impact:** any correct (equivalent) formulation can achieve global optimality | **Medium impact:** choice of variable representation (e.g. 
structure-preserving formulations) can improve solution time |\\n| \\u00a0| **Solver** | **Minimal impact:** most commercial solvers can achieve similar optimality gaps | **Medium impact:** specialized solvers for specific problem structures (LP, QP, SOCP) can be faster |\\n| **Type II** \\u00a0 (Non-convex but Convexifiable) | **Formulation** | **High impact:** correct, convexified reformulation enables achievable global optimality | **High impact:** reformulation complexity affects solution time, generally convex reformulations are solved faster |\\n| \\u00a0| **Solver** | **Medium impact:** solver ability to handle reformulated structures affect solution quality | **High impact:** solver must efficiently handle the specific structure of reformulation |\\n| **Type III** \\u00a0 (Non-convex requiring Relaxation) | **Formulation** | **High impact:** quality of relaxation directly affects optimality gap and bounds tightness | **Medium impact:** relaxation complexity affects solution time, involving trade-offs between relaxation tightness and computational efficiency |\\n| \\u00a0| **Solver** | **High impact:** solver abilities on relaxed problem affects solution quality | **Medium impact:** solving relaxed problems may require specialized solvers, although general purpose solvers are roughly comparable |\\n\\n**Key observations:**\\n1. For Type I problems, formulation mainly affects computational efficiency rather than optimality.\\n2. For Type II problems, correct reformulation is crucial for both optimality and efficiency.\\n3. For Type III problems, both formulation and solver choices significantly impact the trade-off between optimality and efficiency.\\n\\n**Empirical evidence:** To support this analysis empirically, we compare the impact of both formulation and solver choice (including 8 solvers) on optimality and solution time. These results are included in the revised `App G`, and we will briefly summarize them here:\\n1. 
**Impact of formulation:** We observed that convex reformulation results in high-quality solutions with different convex solvers consistently. In particular, a convex reformulation had a 10,000x smaller optimality gap and 1,000x faster solution time than a general-purpose solver on the non-convex original formulation.\\n2. **Impact of solver:** Using a specialized convex solver resulted in 1,000x faster solution time than a general-purpose solver.\\n\\nThis empirically confirms the importance of both formulation and solver on optimality gap and computational efficiency.\\u00a0\\n\\n**Actions taken:** Thank you again for this comment, which has helped us improve the paper with further clarifications and discussions. We have updated `App G` with empirical analysis of the impact of formulation and solvers on solution quality. We have also included the table above into `App E`.\\n\\n---\"}", "{\"title\": \"Response to Reviewer Utyj\", \"comment\": \"*We appreciate the reviewer\\u2019s thoughtful evaluation and positive feedback.*\\n\\n---\\n\\n### [P1] Real-world utility\\n\\nThank you for raising this important point. Our work focuses on autoformulation\\u2014an automated approach for translating natural language problem descriptions into diverse sets of mathematical and computational models. While our technical focus is on developing this capability, it is crucial to understand both its immediate practical value and long-term potential across different user groups.\\u00a0\\n\\n**Practical utility:** The benefits of autoformulation vary across different user groups: (1) For *domain experts without OR expertise* (e.g. business owners, healthcare administrators, engineers), it provides an accessible entry point to optimization techniques without requiring extensive OR training. (2) For *OR practitioners*, it holds the potential to streamline model development by automating routine formulation tasks, allowing greater focus on problem understanding and solution analysis. 
Much like how software engineers leverage coding LLMs to enhance productivity, autoformulation can serve as both an automation and ideation tool, facilitating rapid exploration of alternative formulations.\\n\\n**Our technical contributions:** While autoformulation technology is still emerging, our work makes several fundamental contributions at the intersection of OR and ML:\\u00a0\\n* Formal definition of autoformulation as a search problem, with clear characterization of key challenges.\\n* Novel framework combining MCTS with LLM-based hypothesis generation for systematic exploration under uncertainty.\\n* Tailored pruning methods using SMT solvers to eliminate redundant formulations, enhancing search efficiency.\\n* LLM-based evaluation approach for assessing formulation correctness through comparative evaluation.\\n\\nOur empirical results on IndustryOR, which contains challenging real-world problems, demonstrate significant improvements over existing approaches. Notably, it achieved a $10\\\\%$ improvement over ORLM (an LLM finetuned for optimization modeling), validating the practical potential of our framework.\\n\\n**Looking ahead.** While we acknowledge the complexities of real-world optimization modeling (elaborated in [P2]), we see parallels with the evolution of LLM-based coding technologies, which progressed from simple function completion to full application development between 2021 and 2024. We anticipate autoformulation capabilities will follow a similar trajectory of rapid advancement, and our work aims to contribute to the foundations for these future developments.\\n\\n---\\n\\n### [P2] Real-world complexity\\n\\nAgain, thank you for raising this important consideration. 
We acknowledge that real-world optimization modeling extends beyond the translation of problem descriptions into mathematical/computational models, and understanding these complexities helps identify which aspects can be effectively automated.\\n\\nOur autoformulation framework addresses a specific, yet crucial challenge: translating problem descriptions into mathematical and computational models. While this represents just one component of the broader modeling process, solving it alone is important for the reasons mentioned in [P1]. More broadly, the complete modeling process typically involves:\\n1. Information gathering from diverse stakeholders, often requiring multiple iterations and integration of implicit domain knowledge/conventions (i.e. 'tribal knowledge');\\n2. Handling complex problem characteristics such as stochasticity, time-varying dynamics, and large-scale variables;\\u00a0\\n3. Rigorous validation against real-world data, including sensitivity analysis of modeling assumptions and verification of edge cases;\\u00a0\\n4. Continuous communication between technical and business stakeholders to ensure practical utility.\\n\\nWe would also like to use this opportunity to give an example where autoformulation can already play a large role in this process. The optimization problems in engineering systems are relatively 'well-defined'. A concrete example is wireless communications, where there are widely accepted requirements and metrics for measuring system performance (e.g. data rate, delay, energy consumption), commonly encountered decision variables, and the system under optimization is also relatively well understood (derived from physics or by construction of the system). 
In such domains, autoformulation could play a more comprehensive role in the modeling pipeline.\\n\\n\\n**Actions taken:** We have included a discussion of real-world OR modeling complexities in `App H`.\\n\\n---\\n*We hope the reviewer\\u2019s concerns are addressed and they will consider updating their score. We welcome further discussions.*\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"_We thank the reviewers for their constructive comments and their commitment to the review process._\\n\\n---\\n\\nWe are encouraged that reviewers found our methodology novel, particularly in \\\"utilizing LLM both as hypothesis generators and evaluators\\\" (**e8hb**, **U3Rw**), enabling crucial \\\"verification of partial problems\\\" (**U3Rw**). The integration of MCTS with LLMs was recognized for its specific improvements \\\"to fit the optimization scenario\\\" (**mbTn**), while our SMT-based pruning technique was highlighted for its ability to \\\"eliminate redundant, trivially equivalent formulations\\\" and \\\"improve search efficiency\\\" (**e8hb**, **U3Rw**).\\n\\nReviewers appreciated the paper's clarity, noting its \\\"good writing, easy to follow\\\" presentation (**mbTn**) with a \\\"clear and compelling description of the motivation\\\" (**Utyj**). 
The technical contribution was strengthened by \\\"sufficient experimental details\\\" (**mbTn**) and \\\"extensive experimental comparison\\\" (**Utyj**), demonstrating \\\"strong experimental results compared to baselines\\\" (**U3Rw**).\\n\\nWe have also taken the reviewers\\u2019 feedback into account and made the following key changes to improve the paper:\\n\\n* Relocated `Sec 2.2 Categorization of Problem Difficulty` to `App E` to improve flow and presentation\\n* Extended analysis of partial formulation evaluations (`App D.5`)\\n* Included empirical investigation on the impact of formulation and solver configuration on optimality gap and solution time (`App G`)\\n\\nWe have also uploaded a revised manuscript, where these changes are highlighted in teal for easier identification. We sincerely thank the reviewers for their valuable feedback on strengthening our work and remain open to further suggestions.\\n\\n---\\nWith thanks,\\n\\nThe Authors of #10599\", \"title\": \"Global Response\"}", "{\"summary\": \"This paper introduces a framework to automate the creation of mathematical optimization models from natural language descriptions. This process is essential in operations research but traditionally requires significant expertise. There are three main challenges for autoformulation: defining a vast hypothesis space, navigating this space efficiently, and ensuring formulation correctness. To address these, the authors integrate LLMs within an MCTS framework, using LLMs to generate potential formulations and evaluate correctness. They also apply a pruning technique with SMT solvers to eliminate redundant formulations. Experiments on benchmarks for linear and mixed-integer programming show this approach is both accurate and efficient, potentially making optimization modeling more accessible.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
Leveraging LLMs for both hypothesis generation and evaluation enables verification of partial problems.\\n2. Using MCTS to efficiently navigate the large hypothesis space, along with introducing the SMT pruning procedure, saves computational resources and focuses on unique, meaningful formulations.\\n3. Strong experimental results compared to baselines.\", \"weaknesses\": \"1. The same LLM is used for both hypothesis generation and verification, which introduces a high correlation between generation and evaluation stages. This could limit the objectivity and diversity of the verification.\\n\\n2. Lacks a clear explanation of how the LLM is guided to explore the hypothesis space with sufficient diversity.\\n\\n3. It would be beneficial to provide more insight into partial problem formulation at intermediate steps. Improved verification of partial models could enhance accuracy and reduce runtime for complex problems.\", \"minor_issue\": \"The font size in the figures is too small\", \"questions\": \"1. In Section 5.2, paragraph (2) \\u2018Getting Local Scores,\\u2019 how does the DFS expand the tree? Does it expand the child with the highest prior score first, or does it expand a random model generated by the LLM?\\n\\n2. How should Figure 5 be interpreted? Section 5.4 doesn\\u2019t seem directly related.\\n\\n3. How is exploration of the hypothesis space ensured? Figure 4 shows that pruning is effective, but does this imply that the LLM fails to generate diverse partial formulations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I acknowledge that I have read the response from the authors, and appreciate their improvement on this paper. In general, I don't think this is a bad paper. The presentation is clear, the experiment is sufficient, thus I would give a high score if this is a submission to some non-top/application-oriented venues. 
What really concerns me is that the paper is too \\\"standard\\\", which is not bad for real-world engineering practice, but may be an issue for top conferences like ICLR. I try not to say something like \\\"lack of novelty\\\" in my review, and the authors did make some efforts, as their response said, but most of the concepts and techniques are familiar from OR textbooks and other LLM papers. In my very personal opinion, the only major thing that is not so standard is that the \\\"computational representation\\\" stage is represented as a probabilistic model. I know that computational representation can be tricky, and different representations can impact efficiency, especially for large-scale problems, so a model that produces optimal computational representations can be a strength and really interests me. However, the paper does not address this in the method section, which is a pity.\\n\\nOverall, my opinion of this paper is pretty neutral. I won't be bothered at all if it gets accepted into ICLR.\"}
Intuitively, this reveals that deeper nodes contain more complete formulation information, enabling more accurate evaluation. Earlier components show weaker correlations due to interdependencies between modeling elements. For instance, evaluating decision variables in isolation is challenging without understanding their role in objectives and constraints. This increased uncertainty at earlier stages reflects the inherent difficulty in assessing partial formulations without full context.\\n\\nThese findings underscore the importance of hierarchical search strategies that maintain diverse exploration paths, particularly in early stages where evaluation signals are weaker.\\n\\n**Action taken:** We included additional analysis on partial evaluation in `App D.5`.\\n\\n\\n---\\n\\n### Questions\\n\\n* **Feedback on figures.** Thank you for this suggestion! We have improved the figures to enhance readability.\\n* **Q1.** The DFS approach proceeds as follows: (1) samples 10 child nodes at each step, (2) applies SMT pruning to remove trivial equivalences, (3) ranks the remaining nodes, and (4) retains up to three highest-scoring children for further exploration. While the search traverses through the highest-scored child node, we note that for the ablation study (in `Fig 3`), the exploration order does not affect the results, as we analyze the complete tree.\\n* **Q2.** `Fig 5` illustrates the search evolution of our MCTS relative to the number of rollouts, highlighting our method's ability to benefit from additional exploration (more iterations) to discover more correct solutions. We have revised the manuscript to clarify this point.\\n* **Q3.** Please see our response [P2] above.\\n\\n---\\n\\n*We hope the reviewer\\u2019s concerns are addressed and they will consider updating their score. 
We welcome further discussions.*\"}", "{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer **Utyj**,\\n\\nWe sincerely appreciate your thorough engagement with our work and the concrete feedback that helped strengthen the paper. We are pleased that our responses addressed your comments and led to your recommendation for acceptance.\\n\\nRegarding your final suggestion, we have recently added an extended discussion of LLM+MCTS approaches and highlighted our novel contributions in App B. We will also include a condensed version of this analysis in the Related Work section.\\n\\n\\nWith thanks,\\n\\nThe Authors of #10599\"}", "{\"summary\": \"This paper identifies three core challenges: defining the vast, problem-dependent hypothesis space, efficiently searching this space under uncertainty, and evaluating the correctness of formulations.\\n\\nTo tackle these issues, the authors propose a novel method that leverages Large Language Models (LLMs) within a Monte-Carlo Tree Search framework. LLMs function as both hypothesis generators and evaluators of formulation correctness, while a pruning technique is employed to eliminate trivial equivalent formulations. 
\\n\\nEmpirical evaluations demonstrate that this approach significantly improves the efficiency and accuracy of formulating optimization models for linear and mixed-integer programming problems, making it accessible to users without deep optimization expertise.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper provides a clear and compelling description of the motivation behind the research, along with a well-articulated explanation of the proposed method.\", \"It includes an extensive experimental comparison that effectively demonstrates the performance and advantages of the proposed approach relative to existing methods.\"], \"weaknesses\": [\"In my experience with Operations Research, many papers focus on formulating new problems into linear programming or mixed-integer linear programming. These papers typically begin with a clear textual definition of the problem, followed by rigorous proofs, given the critical applications involved. However, I find that this work may not be particularly useful for practitioners in the field, as they still need to undertake similar formulation efforts themselves. The primary benefit seems to lie in providing students with examples for their coursework rather than advancing practical applications for researchers and professionals.\", \"In line 128, the quote from the textbook stating, \\\"Once this formulation is done, solving the problem is ... (almost) technology,\\\" highlights a distinction from the problem formulation presented in this paper. 
The textbook emphasizes identifying the context of the problem, extracting the decision variables, and compiling all relevant real-world constraints, which cannot be adequately captured through mere textual descriptions.\"], \"questions\": \"Could you clarify how the developed system will be utilized by domain scientists or in other potential use cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer e8hb (1/3)\", \"comment\": \"*We appreciate the reviewer\\u2019s thoughtful evaluation and positive feedback.*\\n\\n---\\n\\n### [P1] Problem definition scope\\n\\nThank you for this thoughtful comment. Please allow us to clarify several points about our problem definition and its relationship to our proposed method.\\n\\n**Generalized definition.** We introduced a generalized problem definition for autoformulation, as this is a relatively new (and promising) direction at the intersection of ML and optimization modeling. Our framework consists of three key components that together address the autoformulation objective (`Eq 2`).\\n\\n| **Component** | **Description** | **Our method** |\\n|---|---|---|\\n| Problem description $\\\\rightarrow$ mathematical model ($p_\\\\phi$) | Transformation of natural language requirements into formal mathematical formulation | MCTS-based search with LLMs as hypothesis generators and evaluators |\\n| Mathematical model $\\\\rightarrow$ computational model ($p_\\\\psi$) | Transformation of mathematical formulation into solver-compatible code | Custom-developed deterministic parser |\\n| Evaluation metrics ($Q$) | Measure of formulation quality, e.g. 
optimality gap, formulation correctness, and computational efficiency | Dual evaluation approach combining solver feedback (optimality) and LLM assessment of formulation correctness |\\n\\n**Value of general framework.** Our proposed method represents one possible instantiation of this broader framework, addressing all three components while focusing primarily on the first component, which we identified as the most challenging and pressing frontier. The generality and completeness of our problem definition serve several important purposes:\\n1. Clearly delineates the multiple challenges that need to be addressed in this emerging field\\n2. Facilitates systematic identification of research opportunities, such as developing LLM-based techniques for solver configuration or specialized models for evaluating formulation correctness\\n\\nWe would appreciate any further feedback you may have on this point.\\u00a0\\n\\n\\n\\n---\"}", "{\"title\": \"Thank you and to summarize\", \"comment\": \"Dear Reviewer mbTn,\\n\\nThank you again for taking the time to review our paper and providing thoughtful and candid feedback that has improved our work. We appreciate your recognition of our paper's improved clarity and experimental rigor, while understanding that perspectives may vary.\\n\\n---\\n\\n### [Comment 1] On the perceived 'standard' nature of our work\\n\\nWhile some aspects of our methodology may appear familiar at first glance, our work makes significant novel contributions to an **emerging challenge**: *automating optimization model formulation, which remains a critical bottleneck despite decades of advances in solving algorithms*. This problem has substantial real-world impact, as enhancing formulation efficiency can lead to significant cost savings and broader access to optimization tools.\", \"we_want_to_emphasize_that_our_technical_contributions_are\": \"1. 
A formal framework for autoformulation that identifies distinct problem classes and their challenges, enabling systematic analysis of formulation automation and guiding solution development.\\n2. A novel method introducing components specific to optimization modeling absent in related literature: **structured hierarchical search space**, **SMT-based pruning**, and **comparative formulation correctness evaluation** (as outlined in our previous response).\\u00a0\\nOur work moves beyond simply applying LLM+MCTS to optimization problems. Instead, it introduces a principled approach, grounded in our formal framework, that combines symbolic reasoning (through SMT solvers), structured exploration (via MCTS and hierarchical decomposition), and neural language models in ways specifically designed for mathematical optimization.\\n\\nWe argue that developing novel methods, even those sharing high-level parallels with existing approaches, to solve important practical problems represents a valuable scientific contribution\\u2014particularly when such development demands innovative design choices and demonstrates measurable improvements in performance. Our state-of-the-art results on challenging benchmarks demonstrate the effectiveness of these innovations.\\n\\n\\n### [Comment 2] On the second transformation\\n\\nWe deeply appreciate your insight regarding computational representations and their impacts on solver efficiency, particularly in large-scale problems. While our paper primarily focuses on the first transformation\\u2014translating problem descriptions into mathematical formulations\\u2014this scope was chosen deliberately, as we found this to be the **most challenging frontier in automating optimization modeling**. 
Our empirical analysis showed that while computational representation is crucial, the primary bottleneck currently lies in correctly formulating mathematical models from problem descriptions.\\n\\nThe probabilistic framing of computational transformations in our work opens interesting research directions, including automated solver selection and configuration optimization. These aspects, though not fully explored in the current paper, align with our **broader goal of providing a comprehensive framework for automated optimization modeling**, to pave the way for continued innovation in this domain.\\u00a0\\n\\n\\n---\\n### In Closing\\n\\nThank you again for your thoughtful feedback and for recognizing our efforts. We believe our work makes meaningful contributions both **methodologically**\\u2014through novel components for LLM+MCTS integration\\u2014and **theoretically**\\u2014by providing a foundational framework for automated optimization modeling. We hope these clarifications have helped illustrate the depth and significance of these advances. We remain excited about the research directions this work enables.\\n\\n\\nSincerely,\\n\\nThe Authors of #10599\"}", "{\"title\": \"Response to Reviewer mbTN (2/2)\", \"comment\": \"---\\n\\n### [P2] Comparison to related works\\n\\nWe appreciate the suggestion of relevant related works. Let us clarify the distinctive aspects of our approach at multiple levels:\\n\\n**Novel problem.** Our work addresses automating mathematical optimization modeling\\u2014a significant real-world challenge where formulation errors can lead to suboptimal resource allocation, inefficient operations, and costly business decisions. One of our main contributions lies in formally defining and characterizing the unique challenges of this problem.\\n\\n**Problem-driven method design.** Our use of MCTS stems directly from the problem structure of autoformulation, not vice versa. 
The hierarchical nature of optimization modeling and the uncertainty in optimal formulations naturally motivate MCTS as the most suitable search framework.\\n\\n**Novel methodological components.** Our MCTS framework introduces three key innovations specifically tailored for autoformulation:\\n1. **Structured hierarchical search:** We leverage the inherent structure of optimization modeling to decompose the search space. Unlike conventional MCTS approaches, which assume fixed search spaces, our hierarchical organization of search spaces both reduces search complexity and increases formulation diversity.\\n2. **SMT-based pruning:** Our empirical analysis revealed that $80\\\\%$ of generated formulation hypotheses are trivially equivalent (`Fig 4`). By integrating SMT solvers to prune these redundant formulations, we achieve a 400x improvement in search efficiency, avoiding exponential growth in search complexity.\\n3. **Comparative formulation evaluation:** We introduce novel pairwise comparative evaluation for assessing formulation correctness, which is distinct from the standard approach where LLMs evaluate solutions in isolation. 
This comparative framework enables more reliable preference-based evaluation to improve search efficiency.\\n\\n**Empirical impact.** Our method achieves state-of-the-art performance, finding $10\\\\%$ more accurate formulations than ORLM (a model specifically fine-tuned for OR formulation) on the challenging IndustryOR dataset, demonstrating that our architectural innovations translate to real-world performance gains.\\n\\n**In summary.** While LLM+MCTS is indeed an active research area, our work's novelty lies in addressing a **novel and impactful problem** with distinct technical challenges through problem-driven **methodological innovations** that achieve significant **empirical improvements**.\\n\\n**Action taken:** In the interest of space, we provide an in-depth comparison with suggested related works in `App B` of the revised manuscript.\\u00a0\\n\\n---\\n\\n### Questions\\n\\n1. **Q1:** Please see our response [P1].\\n2. **S1:** Please see our response [P2].\\n3. **S3:** Thank you for this interesting suggestion. Fine-tuning an autoformulator with process-reward models (PRM) is indeed promising, as it could enable data-driven improvement of hierarchical modeling while systematically capturing domain expert knowledge. We expect that the primary challenge would lie in curating a dataset with step-wise (or component-wise) rewards to train the PRM, which requires deep domain expertise for accurate labeling of intermediate modeling decisions. One potential approach to address this challenge is using outcome-based labels that can be redistributed to derive stepwise rewards [1]. We have expanded on this promising direction for future work in `L512`.\\n\\n[1] Luo, L., Liu, Y., Liu, R., Phatale, S., Lara, H., Li, Y., Shu, L., Zhu, Y., Meng, L., Sun, J. and Rastogi, A., 2024. Improve Mathematical Reasoning in Language Models by Automated Process Supervision. 
arXiv preprint arXiv:2406.06592.\\n\\n---\\n*We hope the reviewer\\u2019s concerns are addressed and they will consider updating their score. We welcome further discussions.*\"}", "{\"metareview\": [\"The paper considers a relevant problem of autoformalization but currently falls below par for the ICLR conference for the following reasons:\", \"The paper uses a straightforward combination of existing ideas, each of which is widely studied in the literature now. For example, there is a lot of work on combining tree search with LLMs. This paper is mostly about applying existing techniques to a new problem. This is not necessarily bad because applying existing methods to new domains can be valuable, particularly when addressing real-world challenges, but the current work primarily demonstrates these techniques on standard benchmarks. The broader claims about presenting a general framework lack enough technical validation.\", \"It would have been interesting to see any challenges that arise when applying these existing techniques to this problem and how they are addressed. For example, evaluating candidate formulations using LLMs is widely studied as \\u201cLLM-as-a-Judge\\u201d. It is presented as a gold standard solution but that is far from true.\", \"The paper has a disconnect between the problem formulation and actual implementation. Reviewers mbTn and e8hb make similar points in their reviews.\", \"Please consider incorporating comments from the reviewers in the next cycle to improve the paper.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewers mbTn and e8hb both noted the disconnect between the problem formulation and the actual implementation. 
Reviewers also had concerns that the paper claims more general contributions than the ideas it actually implements.\"}", "{\"summary\": \"This paper proposes an MCTS-enhanced LLM approach to transform descriptions of certain types of optimization problems into formal optimization models consisting of (1) variables, (2) objective, (3) equality constraints and (4) inequality constraints. The MCTS has a depth of 4 corresponding to the four components of an optimization model. When expanding a node, LLMs are used to generate a set of candidate formulations, and trivially identical formulations are pruned by SMT solvers to improve efficiency. The expanded nodes will be assigned a prior by an LLM-based ranking evaluation. The reward at the terminal node is a combination of LLM evaluation and whether optimality is reached by the solver. Experimental results show that the proposed approach outperforms other non-MCTS approaches.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Good writing, easy to follow.\\n2. Specific improvements on MCTS to fit the optimization scenario (merging trivially equivalent formulations, including solver signals in rewards, etc.)\\n3. Sufficient experimental details.\", \"weaknesses\": \"1. The rationality of the definition of \\\"autoformulation\\\" is not sufficiently backed by the main part of the paper. The connection between the problem setting and the proposed method is not very clear. For the definition, two transformations are highlighted (mathematical formulation model $p_\\\\phi$ and computational representation model $p_\\\\psi$), and \\\"autoformulation\\\" is defined to jointly optimize $(\\\\phi, \\\\psi)$ so that the expectation of a quality function $Q$ is maximized. This problem setting is a bit different from other \\\"LLM+OR\\\" papers. 
However, in the main part of the paper, no clear effort is devoted to optimizing parameters, neither $\\\\phi$ nor $\\\\psi$ (GPT-4 is used without fine-tuning). I feel that the main objective of this paper is still the same as other papers: proposing a $P_{\\\\phi\\\\text{-LLM}}^\\\\text{improved}$ that empirically performs better than a vanilla use of $P_{\\\\phi\\\\text{-LLM}}$. So I am a bit confused about why such a definition is proposed. Specifically, it seems unneeded to have a probabilistic model $p_\\\\psi$ with complex joint optimization if the computational representation step can simply be done by a deterministic parser (line 241) or commercial packages (line 194).\\n2. There is already a large body of literature on combining LLMs with MCTS to solve general or domain-specific problems (see arXiv:2309.17179, arXiv:2402.03289, arXiv:2409.09584, arXiv:2406.07394, arXiv:2402.08147 for some examples), in both the training and inference stages, which may unfortunately raise the bar for another \\\"LLM+MCTS\\\" paper to appear at top conferences. It is good to have LLM+MCTS applied to OR problems, but more unique features might be required to distinguish this paper. This paper has put some effort into this (see Strength 2), but apart from these adaptations, the whole LLM+MCTS pipeline seems quite standard, and it is only applied at the inference stage.\\n3. While LLMs contain general knowledge of optimization, the proposed method is limited to (reformulated) convex problems (Types I and II in this paper).\", \"questions\": \"Questions:\\n1. Why have a probabilistic model $p_\\\\psi$ with complex joint optimization in Section 2.1? (See Weaknesses 1 for details)\", \"suggestions\": \"1. Highlight the relation and difference between this paper and other works combining LLMs with MCTS.\\n2. Align the problem definition and the methods. If the quality of the computational representation is not sufficiently addressed in this paper, the problem definition need not be so complex. 
(In my opinion, a method that fits the current problem definition would be, for example, bi-level fine-tuning of two LLMs $p_\\\\phi$ and $p_\\\\psi$ to maximize $Q$.)\\n3. It may be worth fine-tuning the model with MCTS-based self-enhancement in the future (check arXiv:2309.17179 for an example).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
I do not see why optimality gap and computational efficiency are added to the evaluation metric. They are solver dependent and are not the responsibility of the formulation.\\n3. The authors defined Types I to III problems, but I do not see how they are related to the main contribution of this paper.\\nOverall, I feel that the concept proposed in this paper is too broad and distracts readers from the main contribution of this paper.\", \"questions\": \"1. Can you justify why optimality gap and computational efficiency, which depend on the solver, are also included in the evaluation metric?\\n2. What is the relationship of Types I to III problems to the rest of this paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Ranking-based scoring of partial formulations significantly outperforms uniform scoring baselines (`Fig 3`), demonstrating the evaluation protocol's capacity to meaningfully differentiate between candidate expansions.\\n2. Comparative evaluation of complete formulations shows strong correlation ($\\\\rho=0.48, p<0.001$) with ground-truth correctness, indicating robust evaluation despite using the same LLM.\\n\\n**Future direction.** This challenge reflects a broader open question in LLM research. Recent works have offered tentative evidence that LLMs have the potential to evaluate their own generations and iteratively improve [1, 2]. Our framework could be extended through dedicated generation and evaluation models, fine-tuned evaluators trained on expert feedback, or debiasing techniques using ensemble/mixture-based approaches.\\n\\n**Actions taken:** We have included a detailed discussion of these considerations and future research directions in `L515`.\\n\\n\\n[1] Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K. and Yao, S., 2024. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36.\\n\\n[2] Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang, Y. and Gupta, S., 2024. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36.\\n\\n---\\n\\n### [P2] Diversity in exploration\\n\\nWe appreciate this question. As identified in `Sec 2.1`, efficient exploration of the vast hypothesis space is a key challenge in autoformulation that our method aims to explicitly address.\\n\\n**Encouraging diversity.** Our method incorporates multiple complementary mechanisms to ensure diverse exploration:\\n1. 
**Hierarchical decomposition** structures the search across four levels (variables, objective function, equality, and inequality constraints), enabling focused exploration within each decomposed component rather than searching for entire formulations at once.\\n2. **MCTS framework** with UCT scoring dynamically balances exploration-exploitation, adaptively guiding the search towards promising or unexplored regions of the hypothesis space.\\n3. **SMT-based pruning** removes trivially equivalent formulations, ensuring computational resources are devoted primarily to exploring functionally distinct solutions.\\n\\n**Experimental results.** The efficacy of our method design is evidenced by our empirical results.\\u00a0\\n* `Fig 4` demonstrates the diversity of generated formulations at each hierarchical level, particularly highlighting how SMT pruning promotes exploration of functionally distinct paths while preventing exponential growth in redundant search efforts.\\u00a0\\n* Additionally, `Fig 5` (further discussed in subsequent responses) illustrates that additional MCTS iterations consistently uncover new, correct formulations, confirming effective exploration of the solution space rather than redundant sampling or search.\\n\\n**Meaningful diversity.** A core premise of our work is leveraging LLMs as dynamic hypothesis generators that assign probability mass over plausible mathematical formulations. While our experimental results validate this assumption by demonstrating LLMs' ability to generate diverse formulations, the search process faces a key challenge: trivial variations due to mathematical equivalences. This underscores the importance of our pruning mechanism in eliminating uninformative diversity and focusing the search on meaningful variations.\"}", "{\"title\": \"reply\", \"comment\": \"Thanks for addressing my concerns. 
I'm happy to improve my rating.\\n\\nAdditionally, I would encourage the authors to add a paragraph in related work on using MCTS and LLMs to solve a diverse set of problems.\"}", "{\"title\": \"Response to Reviewer e8hb (3/3)\", \"comment\": \"---\\n\\n### [P3] Type I-III problems\\n\\n\\nWe appreciate this constructive feedback. For any autoformulation method, the nature of the optimization problem under consideration presents notably different challenges. We aimed to encapsulate these nuances in the categorization of problem types:\\n\\n**Method design implications.** Different problem types require distinct autoformulation strategies. For Type I problems, the main challenge is ensuring correct variable/constraint identification. For Type II problems, the autoformulator must recognize opportunities for equivalent convexified reformulations. For Type III problems, the autoformulator should identify appropriate relaxation strategies that trade off optimality against efficiency.\\n\\n**Evaluation implications.** Problem types inform the metrics used to assess autoformulation quality. Evaluation of Type I problems will focus on formulation correctness. Type II introduces additional emphasis on convexity and optimality. Type III must evaluate relaxation quality and recovery effectiveness.\\n\\n**Small tweaks.** However, we agree that the current placement of the problem categorization in `Sec 2.2` can affect readability and flow. To improve our paper organization and maintain focus on our main contributions, we will move `Sec 2.2` to the appendix, while retaining a brief overview in the main text.\\u00a0\\n\\n**Actions taken:** We have moved `Sec 2.2` to the appendix; this reorganization will help readers focus on our primary contributions while retaining important conceptual foundations for future work in autoformulation.\\n\\n---\\n### Questions\\n1. Please see our response in `[P2]`\\n2. 
Please see our response in `[P3]`\\n\\n---\\n*We hope the reviewer\\u2019s concerns are addressed and they will consider updating their score. We welcome further discussion.*\"}" ] }
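The SMT-based pruning that the exchange above keeps returning to — collapsing trivially equivalent formulation hypotheses before the search expands them — can be illustrated with a toy sketch. This is not the paper's implementation: a simple canonicalization of linear constraints stands in for the actual SMT equivalence check, and the data layout (coefficient dicts, operator, right-hand side) is a hypothetical choice for illustration only.

```python
def canonical(coeffs, op, rhs):
    # Canonical key for a linear constraint: drop zero coefficients and sort
    # the terms, so reordered-but-identical hypotheses map to the same key.
    # A real SMT check would also catch rescaled or algebraically rearranged
    # equivalents; this toy version handles only the trivial cases.
    return (tuple(sorted((v, c) for v, c in coeffs.items() if c != 0)), op, rhs)

def prune_equivalent(hypotheses):
    # Keep one representative per equivalence class, preserving first-seen order,
    # so the search spends its budget on functionally distinct formulations.
    seen, kept = set(), []
    for coeffs, op, rhs in hypotheses:
        key = canonical(coeffs, op, rhs)
        if key not in seen:
            seen.add(key)
            kept.append((coeffs, op, rhs))
    return kept

hypotheses = [
    ({"x": 2, "y": 3}, "<=", 10),
    ({"y": 3, "x": 2, "z": 0}, "<=", 10),  # same constraint, reordered + zero term
    ({"x": 1, "y": 1}, "<=", 5),
]
print(len(prune_equivalent(hypotheses)))  # 2 distinct constraints survive
```

The point mirrors the rebuttal's `Fig 4` argument: when most generated hypotheses are trivial variants, deduplicating before node expansion keeps the search tree from growing on redundant branches.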
0zmHFyZwkA
Hierarchical Graph Learners for Cardinality Estimation
[ "Zixuan Yi", "Sami Abu-El-Haija", "Yawen Wang", "Yannis Chronis", "Yu Gan", "Michael Burrows", "Carsten Binnig", "Bryan Perozzi", "Fatma Ozcan" ]
Cardinality estimation -- the task of estimating the number of records that a database query will return -- is core to performance optimization in modern database systems. Traditional optimizers used in commercial systems use heuristics that can lead to large errors. Recently, neural network based models have been proposed that outperform the traditional optimizers. These neural network based estimators perform well if they are trained with large numbers of query samples. In this work, we observe that data warehouse workloads contain highly repetitive queries, and propose a hierarchy of localized on-line models to target these repetitive queries. At the core, these models use an extension of Merkle-Trees to hash query plans, which are directed acyclic graphs. The hash values can divisively partition a large set of graphs into many sets, each containing few (whole) graphs. We learn an online model for each partition of the hierarchy. No upfront training is needed; on-line models learn as the queries are executed. When a new query comes, we check the partitions it is hashed to, and if no local model along the hierarchy is sufficiently confident, we fall back onto a default model at the root. Our experimental results show that not only do our hierarchical on-line models perform better than the traditional optimizers, they also outperform neural models, with robust error rates at the tail.
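The abstract's core mechanism — extending Merkle-Trees to hash plan DAGs — amounts to recursive structural hashing with memoization over shared sub-plans. A minimal sketch of the idea follows; the node/label layout and the choice to sort child digests (making commutative operators order-insensitive) are illustrative assumptions, not the paper's exact scheme.

```python
import hashlib

def plan_hash(node, labels, children, memo=None):
    # Merkle-style hash of the plan DAG rooted at `node`: each node's digest
    # hashes its operator label together with its children's digests.
    # Shared sub-plans are hashed once via `memo`. Child digests are sorted
    # here so commutative operators hash identically regardless of input
    # order (an illustrative choice; order-sensitive operators would keep
    # the original child order instead).
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    kids = sorted(plan_hash(c, labels, children, memo) for c in children.get(node, []))
    digest = hashlib.sha256(repr((labels[node], kids)).encode()).hexdigest()
    memo[node] = digest
    return digest

# Two plans joining the same scans in opposite order hash identically, so
# repeated queries of the same shape would land in the same partition.
labels = {1: "Join", 2: "Scan(A)", 3: "Scan(B)", 4: "Join", 5: "Scan(B)", 6: "Scan(A)"}
children = {1: [2, 3], 4: [5, 6]}
print(plan_hash(1, labels, children) == plan_hash(4, labels, children))  # True
```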
[ "Cardinality Estimation", "Many small models", "Graph Hash", "Group-by-template", "Fast Learning" ]
Reject
https://openreview.net/pdf?id=0zmHFyZwkA
https://openreview.net/forum?id=0zmHFyZwkA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qzdXDrUHvI", "ngZydXawFN", "dvcLFtzpoB", "afrM3LRThy", "X9hs3tTptq", "QKs0nnZQHB", "LULEX6V8P7", "IlZxKgBupp", "E1Od067AY7", "7jjt8gi5Gk", "3adylqXcqK" ], "note_type": [ "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730472053356, 1732164065118, 1734697517304, 1731994267715, 1730674654717, 1730591520721, 1731994713312, 1731995125537, 1732417835182, 1737523699583, 1731995189668 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5334/Reviewer_2Nam" ], [ "ICLR.cc/2025/Conference/Submission5334/Reviewer_3TiU" ], [ "ICLR.cc/2025/Conference/Submission5334/Area_Chair_rrKA" ], [ "ICLR.cc/2025/Conference/Submission5334/Authors" ], [ "ICLR.cc/2025/Conference/Submission5334/Reviewer_3TiU" ], [ "ICLR.cc/2025/Conference/Submission5334/Reviewer_5CiV" ], [ "ICLR.cc/2025/Conference/Submission5334/Authors" ], [ "ICLR.cc/2025/Conference/Submission5334/Authors" ], [ "ICLR.cc/2025/Conference/Submission5334/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5334/Authors" ] ], "structured_content_str": [ "{\"summary\": \"To address the issue of repeated queries in cardinality estimation, this paper proposes an on-line cardinality estimation method. Unlike traditional cardinality estimation approaches that rely on a single model, this study introduces a hierarchical cardinality estimation framework. Queries are categorized into three levels, with different structural classifications applied at each level. For each level, distinct estimator models are trained based on various classification templates, and evaluations are conducted hierarchically. Additionally, these models utilize an extension of Merkle-Trees to hash directed acyclic graph (DAG) query plans. 
Finally, an ensemble learning method is used to statistically aggregate the results and produce the final cardinality estimates. Compared to traditional cardinality estimators and the query-based cost estimation method MSCN, this approach achieves superior results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. Hierarchical cardinality estimation methods have significant advantages for cardinality estimation methods in the presence of a large number of repeated queries. They allow for the training of separate cardinality estimators for each query type, thereby saving training costs and improving the accuracy of cardinality estimation.\\nS2. The method presented in this paper demonstrates strong cardinality estimation performance, and it also exhibits good convergence stability through hierarchical training.\\nS3. Compared to traditional cardinality estimators, the method proposed in this paper achieves faster convergence speed with lower overhead, making it suitable for practical industrial applications.\", \"weaknesses\": \"W1. This paper employs only the Q-Error as an evaluation metric, which can assess the stability of the cardinality estimator but does not provide an intuitive measure of its accuracy. The addition of mean absolute error (MAE) and relative prediction error (RPE) would allow for a more comprehensive evaluation of the accuracy of different cardinality estimators.\\nW2. This paper compares only with two relatively outdated query-based cardinality estimation methods, MSCN and MSCN+. It should include a broader variety and greater number of baseline methods by introducing more advanced cardinality estimation approaches. Adding comparisons with data-driven cardinality estimation methods or experiments against paradigmatic methods like QueryFormer would enhance the analysis.\\nW3. The experimental workload in this paper lacks clarification regarding query redundancy. 
It should include comparative experiments under different workloads. Additionally, experiments on cardinality estimation with lower query redundancy should be added to provide a more comprehensive evaluation.\", \"questions\": \"Q1. The addition of mean absolute error (MAE) and relative prediction error (RPE) would allow for a more comprehensive evaluation of the accuracy of different cardinality estimators.\\nQ2. It should include a broader variety and greater number of baseline methods by introducing more advanced cardinality estimation approaches. Adding comparisons with data-driven cardinality estimation methods or experiments against paradigmatic methods like QueryFormer would enhance the analysis.\\nQ3. It should include comparative experiments under different workloads. Additionally, experiments on cardinality estimation with lower query redundancy should be added to provide a more comprehensive evaluation.\\nQ4. Could you clarify what specific models are used at each level of cardinality estimation in this paper? This detail is not adequately explained in the manuscript.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the responses. All of them are reasonable. I raised the score by 1 point.\\nI understand that the P-error-based evaluation requires a lot of effort.\"}", "{\"metareview\": \"This paper proposes a workload-driven approach for cardinality estimation for workloads containing repetitive queries. The reviewers agree the problem is important and that the hierarchical cardinality estimation methods are effective. However, there are also concerns on the motivation, related work, experiments, and presentation. While some of the comments were addressed during the discussion period, others seem to be only partially addressed with ongoing experiments. 
Overall, I think the paper can be improved and should go through another round of reviewing.\", \"additional_comments_on_reviewer_discussion\": \"I will focus on the discussion on experiments. First, the integration with P-Error was suggested by reviewer 3TiU and was later considered to require too much effort, but the authors also mention to reviewer 5CiV that they are performing this experiment. Second, the comparison with DeepDB seems critical as the current baselines are considered relatively outdated (MSCN and MSCN+). While the authors explain DeepDB is orthogonal to the proposed method, both reviewers 5CiV and 2Nam do not seem convinced. Also saying that some results will be available if the paper is accepted is not convincing either. Hence, the discussion on experiments seems largely inconclusive to me.\"}", "{\"title\": \"Thank you for your reviews!\", \"comment\": \"Dear reviewers,\\n\\n\\nWe took our time to craft the response, as your feedback pushed us to re-do our experimental setup, fix typos in the paper, and discuss some more related work. We uploaded the new rebuttal PDF. We highlight significant changes in blue.\\n\\nOur response here addresses common concerns. Additionally, we respond to each reviewer, often referencing this very response.\\n\\nGiven your feedback, we have done the following:\\n\\n## Re-design Experiments\\n\\n1. We run another data-driven baseline: DeepDB[1]. We run the DeepDB authors' code (we only transform the input format) and show the comparisons on two datasets. DeepDB outperforms our method on the **multijoin-accidents** dataset but underperforms on the **multijoin-stackoverflow** dataset (numbers are at the end of this comment).\\n2. We remove the \\\"imperfect admission\\\" experiments since reviewers suggested that they are not fair (each method is potentially evaluated on a different set of queries). Instead, we evaluate all methods on the intersection of queries they are all able to process in Table 3 of the revised PDF. 
This shows that when the query pattern is known, our methods outperform the baselines. This then motivates the hierarchy experiments (use our models for repetitive queries) -- Table 2 of the revised PDF. In most practical scenarios, only the Postgres estimator can always produce an estimate (as it can handle arbitrary queries and does not require training); that is why we use it at the root in our experiments.\\n3. We are now reporting two additional metrics (absolute error and relative error). Unfortunately, integration with P-Error requires more work, because it needs to invoke the cardinality estimator at each subquery, and additionally invoke the Postgres query planner and cost estimator during evaluation. We are working on it, and we plan to have the numbers in our final version.\\n4. We added an ablation study on different repetition rates in Fig 4 of the revised PDF.\\n5. We replaced the experiment figure (now Fig 5 in the Appendix) with a ``cumulative plot'', i.e., the errors when using this-many-prior-observations-or-fewer (versus the old one: using exactly this-many-observations), as the old one was noisy: at high example counts, the number would come from one (or a couple) of predictions.\\n\\n## Motivation\\n\\nWe want to emphasize that our contribution learns from queries (online) by grouping similar ones. **Our work must sit on top** of another cardinality estimator (be it a deep net or a heuristic-based one). The familiar queries can be intercepted by our hierarchy, and the unseen queries can fall back onto a more general estimator. Since Postgres can always process any query and gives decent results [with no training!], we use it as the default fall-back. Since our method does **not** stand on its own [it needs a fallback for novel templates], comparing it against many baselines is somewhat orthogonal to our contribution.\\n\\n## DeepDB Comparison\\n\\nWe report Q-errors (at the 50th and 90th percentiles) and also median absolute and relative errors, for two multijoin datasets.
We train on 100,000 sample rows per table -- training takes several hours per dataset. We are queueing runs for the remainder of the datasets, and we should have all numbers reported by camera-ready (if the paper gets accepted).\\n\\n\\n| | | Postgres | MSCN+ | DeepDB | Ours |\\n|---------------|----------------|----------|--------|-----------|-----------|\\n| multijoin-stackoverflow | Q-error @50 | 3.8 | 2.0 | 3.9 | **1.05** |\\n| | Q-error @90 | 149 | 10.99 | 1.9e4 | **2.19** |\\n| | Relative Error | 0.74 | 0.50 | 0.74 | **0.05** |\\n| | Absolute Error | 4.2e4 | 9.7e3 | 2.7e5 | **30.3** |\\n| multijoin-accidents | Q-error @50 | 1.73 | 2.2 | **1.02** | 1.09 |\\n| | Q-error @90 | 11.04 | 8.31 | **1.17** | 3.31 |\\n| | Relative Error | 0.42 | 0.54 | **0.02** | 0.08 |\\n| | Absolute Error | 6.0e7 | 3.1e7 | **1.4e6** | 4.0e6 |\\n\\n\\n[1] Hilprecht, Benjamin, et al. \\\"DeepDB: Learn from Data, not from Queries!\\\", VLDB 2020.\"}", "{\"summary\": \"This paper proposes a workload-driven approach for cardinality estimation aimed at workloads containing repetitive queries. The proposal utilizes multiple templatizers to derive hierarchical cardinality estimation models, taking a query plan tree as input and obtaining data at different granularities. It employs general predictors like PostgreSQL to perform cardinality estimation at the lowest level.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"S1: The evaluation experiments used six types of workloads, including up to five-table join queries, and achieved a better Q-error than MSCN (CIDR 2019). It was confirmed that the fine-grained model, H1, achieved the fastest convergence in training and the highest accuracy.\", \"S2: Since machine learning-based methods often underperform in high-percentile cases, the fallback mechanism is beneficial in practice.
As shown in Table 4, the deeper hierarchical fallback mechanism achieves a better Q-error.\", \"S3: As a query-driven approach, the featurizer\\u2019s ability to characterize predicates is useful.\"], \"weaknesses\": [\"W1: In addition to Q-error, P-error should also be evaluated, as it is becoming a standard metric.\", \"W2: As shown in Table 3, H1 demonstrates the highest performance, so a structure of H1 -> PostgreSQL seems optimal, making the multiple hierarchy setup (in Equation 15) unnecessary. The authors should clarify the benefit of using multiple hierarchy levels.\", \"W3: Although the proposal adopts a workload-driven approach, recent trends favor data-driven or hybrid approaches. Notably, data-driven approaches have the advantage of robustness for unknown queries. Combining a workload-driven approach with data-driven methods could enhance accuracy in cases where prediction errors are large. While a workload-driven approach has the advantage of faster inference time, it is not obvious that a workload-driven approach alone is useful.\", \"W4: In Section 2.5, the statement \\\"All graphs whose templates are isomorphic share the same model\\\" is based on graph isomorphism as defined in Definition 1, relying on edge information only. The templatizer, H1, thus does not use graph attribute information. However, Section 3.1 states, \\\"Hence, query graphs found in the same H1 template differ only by the constant values,\\\" which appears contradictory.\", \"W5: Section 3.3 includes a calculation for the hash size, such as s(#T1), but according to Equation 6, the hash length is fixed at h, so the case distinction in Equation 10 does not seem valid.\", \"W6: Although Cardbench is used as the evaluation benchmark, it was proposed in an arXiv paper and is not yet a standard benchmark.
It would be better to use the widely accepted Join Order Benchmark or provide a strong justification for using Cardbench.\"], \"questions\": [\"It is unclear how the plan tree is constructed. For example, is it correct to interpret that (A ⋈ B) ⋈ C and A ⋈ (B ⋈ C) are non-isomorphic plans?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a supervised cardinality estimation method that enhances accuracy for workloads with repetitive queries (i.e., queries repeated in templates with only the constant parameters changed) by leveraging hierarchical, online localized models. This approach transforms each SQL query into a Directed Acyclic Graph (DAG) representation, groups isomorphic DAGs into the same partition, and trains an online model for each partition as queries are executed. Grouping is conducted hierarchically at multiple levels of granularity, ranging from fine-grained (i.e., queries varying only in constant terms are placed in the same partition) to coarse-grained (e.g., queries varying only in constants, operator types, and column names are placed in the same partition). During runtime, given a query, the method begins with the fine-grained model and moves to coarser-grained models until it finds a confident model for the query. If no suitable model exists, it defaults to a learned or traditional model. Using the CardBench benchmark, the authors demonstrate that this method yields more accurate cardinality estimates compared to competitors such as PostgreSQL and MSCN.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"S1. The authors tackle an important issue in analytical databases: cardinality estimation for repetitive queries.\\n\\nS2. The authors propose a cardinality estimator, which leverages hierarchical localized models that are trained online.\", \"weaknesses\": [\"W1.
The Experiments section needs to be improved.\", \"The authors should compare their method with state-of-the-art (SOTA) data-driven cardinality estimation methods such as [1]. Currently, the comparison is limited to MSCN, a query-driven method proposed in 2019 [2], and PostgreSQL, a traditional estimator.\", \"The \\u201cImperfect Admission Experiments\\u201d in Section 4 seems unfair, as the q-error percentiles of each method are reported on different query subsets: PostgreSQL is evaluated on the entire query set, MSCN on a subset excluding disjunctions, while the proposed method seems to be evaluated on a subset of simple queries (e.g., repetitive queries varying only in constant terms).\", \"The authors should report end-to-end time (including both planning and query execution time), as shown in [1,3], along with its breakdown for further clarity.\", \"Reporting training time and model size would provide further insights into the method\\u2019s practical feasibility.\", \"The authors should clarify why the q-error is higher in certain intervals with larger training sample sizes in Figure 4.\", \"The experimental setup requires a more thorough explanation, particularly regarding how query repetitiveness was generated and its extent.\", \"W2. Motivation is inadequate.\", \"The authors should explain why existing learned estimators struggle with repetitive workloads to highlight the necessity for their proposed method.\", \"The authors should clarify why a hierarchical model structure is effective in improving the accuracy of cardinality estimation.\", \"W3. The presentation needs to be improved.\", \"The process of generating a DAG from a query needs further explanation. If the DAG refers to a query plan, the authors should specify which query plan is used, as multiple plans can exist for a single query.\", \"There are some undefined terms, such as d_{\\\\psi} in Section 2.\", \"There are inconsistent notations throughout the paper. 
For instance, \\u201cquery graph\\u201d and \\u201cquery plan\\u201d are used interchangeably.\", \"Numerous typos appear throughout the paper, such as \\u201cgeoping\\u201d and \\u201chases\\u201d in Section 5.\", \"W4. There are some misstatements regarding existing work\", \"The authors seem to overstate the limitations of existing methods. They claim that \\u201cNN-based estimators perform well if they are trained with large amounts of query samples,\\u201d which is true specifically for query-driven learned estimators, not all NN-based methods.\", \"The authors state that \\u201c50% of the real world clusters have more than 90% queries repeated in templates (only changing the constant parameters),\\u201d citing [4]. However, according to [4], the correct value is 80%, not 90%.\", \"[1] Kim, Kyoungmin, et al. \\\"Asm: Harmonizing autoregressive model, sampling, and multi-dimensional statistics merging for cardinality estimation.\\\" Proceedings of the ACM on Management of Data 2.1 (2024): 1-27.\", \"[2] Kipf, Andreas, et al. \\u201cLearned cardinalities: Estimating correlated joins with deep learning.\\u201d In Biennial Conference on Innovative Data Systems Research, 2019.\", \"[3] Wu, Ziniu, et al. \\\"FactorJoin: a new cardinality estimation framework for join queries.\\\" Proceedings of the ACM on Management of Data 1.1 (2023): 1-27.\", \"[4] van Renen, Alexander, et al. \\\"Why tpc is not enough: An analysis of the amazon redshift fleet.\\\" Proceedings of the VLDB Endowment 17.11 (2024): 3694-3706.\"], \"questions\": \"Please refer to W1-W4.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"* **W1**: We looked into P-error. It quantifies the impact of cardinality estimation on the real metric (cost of query execution). However, its implementation requires some time. 
It seems we have to invoke the Postgres query planner twice, once using cardinality hints (of subgraphs) from a model, and once again feeding the ground truth, giving two plans $P(C^E)$ and $P(C^T)$. We should then run the Postgres query-plan cost estimator twice, once on $(P(C^E), C^T)$ and again on $(P(C^T), C^T)$, and compute the ratio of the two -- crucially feeding the ground-truth cardinalities $C^T$ to both cost-estimator invocations. This makes sense, as it shows the bottom-line metric, i.e., quantifying the slowness introduced by inaccurate cardinality estimates, as compared to the optimal [achievable by the query planner].\\n\\nImplementing P-error implies we have to (1) integrate with the Postgres query planner and plan cost estimator and (2) annotate cardinalities at subqueries (as you mention, this can be retrieved during execution, and our online learner can absorb that info). Alternatively, we can integrate with https://github.com/Nathaniel-Han/End-to-End-CardEst-Benchmark, which we found upon searching per your review. We will do our best to implement this by the camera-ready deadline, as it can greatly improve our paper.\\n\\n* **W2**: While $H_1$ is indeed the best hierarchy, it only captures 45\\\\%-75\\\\% of the queries [as many queries differ only in their constant parameter value], and $H_2$, $H_3$, ..., are designed to capture the remaining ones [e.g., queries on the same table but with a different column predicate], as much as possible, though we cannot capture more than 26% with any $H_i$ on the datasets.\\n\\nIn addition to the experiments comparing $H_1 \\\\rightarrow H_2 \\\\rightarrow H_3 \\\\rightarrow Postgres $ VS $H_2 \\\\rightarrow H_3 \\\\rightarrow P$ VS $H_3 \\\\rightarrow P$, we have additional experiments now also comparing $H_1 \\\\rightarrow P$ and $H_1 \\\\rightarrow H_2 \\\\rightarrow P$. Generally, $H_1 \\\\rightarrow H_2 \\\\rightarrow H_3 \\\\rightarrow P$ seems to be the strongest.
See updated Table 2.\\n\\n* **W3**: Although the proposal adopts a workload-driven approach, recent trends favor data-driven or hybrid approaches. Notably, data-driven approaches have the advantage of robustness for unknown queries. Combining a workload-driven approach with data-driven methods could enhance accuracy in cases where prediction errors are large. While a workload-driven approach has the advantage of faster inference time, it is not obvious that a workload-driven approach alone is useful.\\n\\nWe understand that our approach does not cover unseen queries. Our goal is to improve the error rate of frequently run queries (which is about ~80% of workloads). That is why we fall back to a general model as the default. In our experiments we used Postgres, but this model can be any of the learned cardinality models as well (query- or data-driven). Specifically, one possible future work is to replace the *root model* with a data-learned model (e.g., a GNN), which we allude to in the *fifth line of the Conclusion*. This could combine the advantages of both worlds (no restriction on the input query, yet the ability to specialize per workload).\\n\\n* **W4**: We sincerely apologize. Our Definition also associates node features. We had a typo in the definition $f^{(v)} = z^{(\\\\pi_v)}$ that is now corrected to $f^{(v)} = f^{(\\\\pi_v)}$ [we somehow missed that earlier, during a rename exercise $z \\\\rightarrow f$]. We checked: the Appendix hashing algorithm uses the correct symbol $f$.\\n\\n* **W5**: We sincerely apologize (again!). We (mistakenly) omitted the notation $s(.)$ in the submitted version. We fixed it now by adding:\\n \\nwhere the history size $s(T_i)$ equals the number of observations that hash to ${T_i}$, \\\\textit{i.e.}, the height of the matrices $\\\\mathbf{X}_i^{[T_i]}$ and $\\\\mathbf{Y}_i^{[T_i]}$.\\n\\n(it is marked in blue)\\n\\n* **W6**: We use CardBench due to practical advantages.
First, they open-source their query generator (with instructions on how to re-generate the data). We modified their query generator to repeat patterns (but we remove duplicate queries) to represent common workloads where queries repeat.\\n\\nThe JOB benchmark uses IMDB, which has a license not compatible with our corporate policies. However, our workload generator follows patterns similar to JOB.\", \"q\": \"It is unclear how the plan tree is constructed.\\n\\nExcellent point. We have been relying on the query parser of CardBench to convert a SQL statement into a query graph. While CardBench is consistent (the graph is determined by the order of the join conditions), we may be splitting equivalent queries into different buckets. We think the graph can be canonicalized (e.g., a super-node can absorb all join nodes). Worst case, it is OK to have equivalent queries in different buckets [potentially this can cause a degradation of the metrics]. We will do our best to have this aspect reflected in the paper and in our final implementation by camera-ready, if our paper is accepted.\"}", "{\"comment\": [\"## W1\", \"We are adding baselines based on the reviews. We went with the data-driven approach DeepDB. However, we added [1] to the related work section.\", \"Unfairness of eval: Due to this, we re-designed our experimental setup, as discussed in the common Comment above.\", \"We plan to add P-error (we already started working on it!), which gives some quantification of the improvement in end-to-end time.\", \"The time for our methods to run -- pop features out of the query graph, hash it, look up the model, and run it -- is fast: each feature pop + graph hash averages 2.3 milliseconds. Each inference step involves 3 hashes (one per templatization strategy) [i.e., total hash time per graph = 7 milliseconds on avg]. The inference time is negligible (e.g., microseconds) and the cost to train is model-dependent.
For example, we can train 15,000 linear regression models in under 5 seconds -- thanks to the fast implementation of numpy's pinv. For decision forests, we use xgboost -- it takes around half a second to train (slow!). However, we will be moving to TensorFlow Decision Forests (TFDF), as we heard they are orders of magnitude faster. **We will have the exact timing numbers in the main paper** by the camera-ready (if the paper gets accepted), as we would like to invoke the fingerprint in C++ to yield further advantages, and also test the speed of TFDF. This speed is what lets us call our method \\\"online\\\".\", \"Fig 4: now moved to the appendix and became Fig 5. We improved it. It used to be: the Q-error percentile (on the y-axis) measured **at** a history size **equal** to the x-axis. That plot was very noisy. For instance, for a large history size (e.g., >100), there may be only one (or a couple of) template(s) with that many observations, and therefore the Q-error estimate would come from one (or a couple of) observation(s). Now it is a cumulative plot: it measures the Q-error for amounts of history up to the x-axis value (e.g., Curve(10) = Q-error when there are <= 10 examples). The curve looks less noisy. However, for coarser templatization, it does not clearly converge with more observations (due to multimodality, e.g., query predicates on different columns can group into the same bucket) -- thanks for the constructive feedback.\", \"We now offer more details in the Experiment **Datasets** section and the **Repetition Rate** section in the Ablation Studies. In short, we modified the workload generator to allow the query predicates to have more constants to choose from. In this way, the repetition rate is close to 90% across different datasets (i.e., the repetition rate number reported in [4]).
We also show in the ablation studies section that the accuracy generally improves when we introduce more repetition.\", \"## W2\", \"We added it in the main response.\", \"Importantly, our online learning setup avoids the need for a complex training data pipeline, which is the biggest barrier for practical ML-based cardinality estimation techniques. While NN models, whether workload-driven or data-driven, often perform well, they can still produce significant errors (q-errors > 3), which can have severe consequences. By focusing on frequently occurring query patterns, we can significantly reduce their q-error to below 2.\"], \"title\": \"Part 1 / 2\"}", "{\"title\": \"Part 2 / 2\", \"comment\": [\"## W3\", \"We use a logical query plan to create the query graph (or DAG). We used CardBench to generate the logical query plans; it does not apply any transformations to the query plan, but our methods would work with any optimizer.\", \"For the same exact SQL string we will produce the same logical query plan. A join (B join C) and (A join B) join C will result in two different logical query plans, and hence will not produce the same DAG. However, your review (and another reviewer's) is making us think about canonicalizing the graph (e.g., making a super-node that absorbs all join nodes). Nonetheless, as-is, our method has some false negatives (i.e., one equivalent query might be partitioned into multiple buckets), and this could be hurting the performance. In the final version, we will either fix this in code **or** mention it in the paper (we will try to go for fixing the implementation first).\", \"We added text (in blue): where $d_{\\\\psi} \\\\in \\\\mathbb{Z}_+$ is the dimensionality of the extracted feature -- thank you.\", \"We now consistently use \\\"query graph\\\" -- thank you! We feel this is better aligned with the ICLR audience.
However, if this paper gets rejected, we will consider renaming it to \\\"query plan\\\" and then submitting to a database venue (especially since the DB folks we talk to are all thrilled by this work).\", \"We fixed a lot of typos. We will give the paper more thorough reads before we post it anywhere (e.g., camera-ready or arXiv). Thank you!!\", \"## W4\", \"We now compare with data-driven neural methods. It seems that we outperform in one case and they outperform in another (for 2 datasets). However, training takes hours (using the code open-sourced by the authors of DeepDB).\", \"We acknowledge that [4] reports that \\\"50% of database clusters have 80% of queries as 1-to-1 repetitions.\\\" However, this statistic refers to *exact query repetition*, where the entire query string is identical to a previous query. In our work, we focus on a broader notion of query repetition, leveraging query templates. As highlighted in Figure 5(c) of [4], 50% of clusters exhibit more than 90% query repetition within templates over a one-week period. This indicates a high degree of structural similarity among queries, even if they differ in parameter values.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"W1+Q1: We added Absolute Error and Relative Error in Table 2 and Table 3.\\nWe added absolute error and relative error to the paper (see the metrics at the start of Section 4, and also experiment Tables 2 and 3) -- thanks a ton! Indeed, this makes the experiments more thorough.\\n\\nW2+Q2: We are adding DeepDB instead of QueryFormer (see results on top), as it is a data-driven approach and complementary to MSCN. Nonetheless, as discussed in the Motivation in the main comment to all reviewers (above), we think it is orthogonal to our method to compare it with baselines, as our method **needs a baseline, at the root node, to fall back on, for novel queries**.\\n\\nW3+Q3: We improved the wording in the paper. Real workloads (e.g., per the Redshift paper) have repetitions in query patterns.
Nonetheless, your proposed experiments make absolute sense.\\n\\nWe added more details in the **Datasets** part of the experiments section and in the ablation studies. Our difference from the Cardbench dataset is that we modified the workload generator to allow the query predicates to have more constants to choose from. All multijoin datasets we created fixed the constant sample size at 10, resulting in a repetition rate near 90% (the number reported in the Redshift paper). Additionally, we have conducted a small sweep on the sample size (i.e., [1, 3, 10]) for multijoin-stackoverflow. See Fig 4 in the ablation studies. It shows that accuracy generally improves when we introduce a higher repetition rate. And even when the repetition rate is low, our history-based learner still shows promising results.\", \"q4\": \"In the ablation studies section, we showed results for different models. We found that Gradient-Boosted Decision Trees (GBDT) are consistently strong, and we used this type of model at each level of the hierarchy.\"}
0ziGSo4uWp
TimeCAT: Hierarchical Context-Aware Transformer with Dynamic Grouping for Time Series Forecasting
[ "Yun Cheng" ]
Transformer-based models have achieved significant success in time series forecasting by modeling global dependencies through self-attention mechanisms. However, these models often rely on fixed patch settings with locality constraints, tokenizing time series into spatially connected sub-series. This approach can hinder the capture of semantic relationships and lead to computational inefficiencies, especially when dealing with long sequences with complex temporal dependencies. In this work, we introduce \textbf{TimeCAT}—a \underline{Time} series \underline{C}ontext-\underline{A}ware \underline{T}ransformer that dynamically groups input sequences into semantically coherent groups, enabling efficient modeling of both local and global dependencies. By appending group and global tokens, TimeCAT facilitates fine-grained information exchange through a novel \emph{Context-Aware Mixing Block}, which utilizes self-attention and MLP mixing operations. This hierarchical approach efficiently models long sequences by processing inputs in structured contexts, reducing computational overhead without sacrificing accuracy. Experiments on several challenging real-world datasets demonstrate that TimeCAT achieves consistent state-of-the-art performance, significantly improving forecasting accuracy and computational efficiency over existing methods. This advancement enhances the Transformer family with improved performance, generalization ability, and better utilization of sequence information.
[ "Time Series", "Context-Aware", "Transformer" ]
https://openreview.net/pdf?id=0ziGSo4uWp
https://openreview.net/forum?id=0ziGSo4uWp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mspMTxLXhG", "XQq9PGr1Lt", "WxdrlDn8Eo", "9rATKSKulW" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731444291181, 1730644927786, 1730577277783, 1730908358831 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6979/Authors" ], [ "ICLR.cc/2025/Conference/Submission6979/Reviewer_RrDg" ], [ "ICLR.cc/2025/Conference/Submission6979/Reviewer_h2kY" ], [ "ICLR.cc/2025/Conference/Submission6979/Reviewer_gtQK" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"TimeCAT is designed to address the limitations of existing Transformer-based models that struggle with capturing complex temporal dependencies and suffer from computational inefficiencies, particularly with long sequences. The core innovation of TimeCAT is its dynamic grouping mechanism, which segments input sequences into semantically coherent groups, enabling efficient hierarchical mixing at different levels of context. This approach facilitates the modeling of both local patterns within groups and global trends across the entire sequence.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. TimeCAT's hierarchical structure, with its focus on intra-group, inter-group, and global interactions, provides a comprehensive framework for capturing both local and global temporal patterns. This multi-scale modeling is a significant advancement in the field.\\n2. The paper demonstrates a substantial reduction in computational complexity by applying self-attention within groups rather than across the entire sequence. This efficiency gain is crucial for handling longer sequences and larger datasets.\", \"weaknesses\": \"1. The main experiments in the article are limited. It might be worth considering adding short-term experiments and incorporating new datasets. 
For example, there are many new datasets available here: https://huggingface.co/datasets/Salesforce/lotsa_data.\\n2. How does the model\\u2019s performance change as the input length increases?\\n3. Wouldn\\u2019t using the downsampled version of x to perform group division result in information loss?\\n4. The spacing between the image and the title is too large.\", \"questions\": \"1. The writing of the article needs improvement. In Equation 3, what does g' mean?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript introduces a framework that employs a hierarchical context-aware transformer, TimeCAT, designed to dynamically group time series patches and efficiently capture dependencies. TimeCAT was compared with recent state-of-the-art methods, and the results demonstrate promising accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed TimeCAT model was evaluated against benchmarks, and its accuracy shows promising results compared to recent state-of-the-art baselines. The Context-Aware Mixing Block enhances information exchange. In particular, the grouping mechanism allows the transformer to capture dependencies between highly related patches, thereby eliminating unnecessary computations between less related patches and improving efficiency. This mechanism demonstrates potential for effectively extracting information from Multivariate Time Series (MTS) data.\", \"weaknesses\": \"The hyperparameter path size, \\\\(P\\\\), is crucial for capturing temporal dependencies and reducing computational complexity. This manuscript notes that in scenarios where \\\\(P\\\\) is significantly greater than \\\\(G\\\\), increasing \\\\(G\\\\) leads to higher computational savings. However, a detailed discussion on the size of \\\\(P\\\\) is absent. 
A parameter sensitivity study regarding \\\\(P\\\\), along with a table similar to Table 3 to highlight the computational savings, would strengthen the manuscript. Additionally, providing a specific example that compares the complexity reduction in terms of parameters, FLOPs, and memory would be beneficial.\\n\\nThe clarity of the context-aware mixing workflow could be improved by detailing the dimensions of all intermediate tensors.\\n\\nWhile the efficiency of TimeCAT is highlighted as a key contribution, the manuscript does not compare it with other efficient transformer-based MTS forecasting baselines. This comparison would provide a more comprehensive evaluation of TimeCAT's performance and efficiency.\", \"questions\": \"I have a few suggestions which may improve the quality of the paper:\\n(a) Please revise Section 3 to enhance its readability.\\n(b) Include a qualitative comparison to demonstrate improvements in computational complexity in terms of parameters, FLOPs, and memory.\\n(c) I recommend including transformer-based baselines that aim to improve efficiency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes TimeCAT, a transformer-based model for time-series forecasting that leverages hierarchical dependencies among fixed-length patches, dynamically grouped patches, and a global token. Specifically, TimeCAT first forms dynamic groups of input patches, then captures both fine-grained and coarse-grained information through hierarchical dependencies using Context-Aware Mixing Blocks. Experimental results demonstrate that TimeCAT outperforms previous models on several time-series forecasting benchmarks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-motivated.\\n2. 
The proposed model, TimeCAT, outperforms prior models in several time-series forecasting benchmarks.\", \"weaknesses\": [\"1. The presentation quality of this paper is unsatisfactory.\", \"The method is not clearly described throughout the paper. The descriptions are overly wordy, with many unnecessary and confusing notations.\", \"For instance, from group ratios $\\\\mathbf{r} \\\\in \\\\mathbb{R}^G$, how are group indices obtained, and how are they optimized? Such indices are often assigned through discretization, making it unclear how they can be optimized using gradient-based methods. In particular, it is unclear how to compute gradients for $\\\\mathbf{W}\\\\_g$ and $\\\\mathbf{b}\\\\_g$. Additionally, as group size is calculated using the ceiling function, the optimization of the embedding $\\\\mathbf{l}\\\\_{s_i}$ is also unclear.\", \"Other confusing notations are as follows: What are $\\\\mathbf{g}'\\\\_{n,i}$ in Eq(3) and $\\\\tilde{\\\\mathbf{g}}\\\\_{n,i}$ in Eq (7)? Is $RD$ a single hyperparameter or a product of two hyperparameters, $R$ and $D$?\", \"Notational consistency is also an issue, with symbols such as $\\\\textbf{X}$ vs $X$ and $x$ vs $\\\\tilde{x}$ vs $\\\\hat{x}$. The frequent use of accents and subscripts/superscripts could easily confuse readers.\", \"2. Claims are not well supported.\", \"The paper emphasizes the importance of the dynamic grouping mechanism. However, the experiments show that $G=2$ is sufficient to achieve good results. This very small number of groups does not adequately demonstrate the necessity of grouping for solving the problem, as all groups may still be too coarse.\", \"Additionally, the grouping is determined by a single linear transformation of the input matrix, which raises doubts about the quality of the grouping.\", \"Why does training become unstable without Eq (12) and (15)? 
Furthermore, there is no ablation study to justify the inclusion of this gradient detachment technique.\", \"What is the rationale for the order of operations in the context-aware mixing block? One could simply apply self-attention across all tokens to capture intra-group and inter-group dependencies. If $G$ is as small as used in this paper (e.g., $G=2$), the computational complexity is not significantly greater than that of TimeCAT. The hierarchical design lacks both a clear explanation and experimental validation.\", \"Why does Figure 5 highlight the effectiveness of the grouping strategy? For example, I cannot see any alignment between Figures 5(a) and 5(b), and the t-SNE plots in Figures 5(c)-(e) show no meaningful pattern. Why should distinct clusters of global tokens reflect effective separation of variables and high-level interactions? A more detailed explanation would be helpful.\"], \"questions\": [\"How is $\\\\tilde{X}$ downsampled to obtain $\\\\tilde{X}'$?\", \"Time-series forecasting experiments are often sensitive to hyperparameters, such as learning rates and batch sizes. How were these parameters chosen?\", \"Could you provide a comparison of the actual training times for the models listed in Table 2?\", \"Could you include results for cases when $G=1$ and when $G$ is large (e.g., $G=16$, $G=32$)?\", \"There are many unnecessary horizontal spaces between equations and sentences (e.g., L222, L226, and L301). These should be removed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0zZEbHLTwf
DeepFDM: A scientific computing method for Neural Partial Differential Equation (PDE) operators
[ "Patrick Chatain", "Michael Rizvi-Martel", "Guillaume Rabusseau", "Adam Oberman" ]
Solving Partial Differential Equations (PDE) has long been a critical challenge in many scientific and engineering domains. Recently, neural networks have shown great promise in solving PDEs by learning solution operators from data, offering a flexible and adaptive alternative to traditional numerical solvers. Despite these advancements, there is still a need for systematic benchmarking of neural operator methods against conventional approaches and for the development of datasets representing diverse distributions for robust evaluation. In this paper, we introduce DeepFDM, a benchmark method for learning PDE solution operators based on numerical PDE solvers. DeepFDM leverages the structure of the PDE, in order to achieve better accuracy and generalization compared to neural solvers. It is designed as a solver for a specific class of PDEs and not as a replacement for neural solvers. Moreover, because DeepFDM learns the coefficients of the PDEs, it offers inherent interpretability. We also introduce a principled method for generating training and test data for PDE solutions, allowing for a quantifiable measure of distribution shifts. This method provides a structured approach to evaluate the out-of-distribution (OOD) performance of neural PDE operators. Our work sets a foundation for future comparisons of neural operator methods with traditional scientific computing approaches, providing a rigorous framework for performance benchmarking, at the level of the data and at the level of the neural solver.
[ "Partial Differential Equations", "neural operators", "solution operators", "interpretable models", "out of distribution", "dataset shift", "physical models" ]
https://openreview.net/pdf?id=0zZEbHLTwf
https://openreview.net/forum?id=0zZEbHLTwf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wREqeRha82", "tY8Hp4hOjF", "rvt495g8vh", "9qWHTguyBa", "0snuCi0JzR" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1731103716374, 1730553527449, 1730429793033, 1730477515353, 1732658536823 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8080/Reviewer_ubKg" ], [ "ICLR.cc/2025/Conference/Submission8080/Reviewer_23Du" ], [ "ICLR.cc/2025/Conference/Submission8080/Reviewer_cXmE" ], [ "ICLR.cc/2025/Conference/Submission8080/Reviewer_NALU" ], [ "ICLR.cc/2025/Conference/Submission8080/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The manuscript considers the problem of benchmarking neural PDE solvers and analysis of the robustness with respect to the diverse distributions. The authors propose DeepFDM, a benchmark method, and the procedure to generate train/test data for benchmarking with quantified shifts in distributions. The main idea of the DeepFDM method is to represent finite difference approximation of a particular type of PDEs through a convolutional neural network that parameterizes variable coefficients. Therefore, given a ground-truth input/output pair, such a model fits the target coefficients over the used grid. The second part of the work is the procedure to generate data with a controlled distribution shift that helps evaluate the trained model's robustness to input data out of the distribution where training data was generated. In experiments, DeepFDM shows better robustness to input data distribution shifts for the broad classes of equations than competitors while requiring fewer trainable parameters.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Fair benchmarking of the neural PDE solvers and evaluation of their robustness to the input data is very important for understanding the current state of this field and identifying the gaps in the current SOTA methods.\\n2. 
The proposed DeepFDM method provides more accurate predictions than competitors.\n3. The benchmarking procedure is well-described and could be used in other works for evaluation of new neural PDE solvers.\", \"weaknesses\": \"1. The main weakness of this work is that the authors combined two different contributions in a single study: a dataset generation procedure for benchmarking neural PDE solvers and a DeepFDM method for fitting the PDE coefficients.\\n2. The idea of parameterizing the finite-difference method via CNN is not new and has already appeared in other works, such as the smoothing operator in the multigrid method: https://arxiv.org/abs/2102.12071 \\n3. The proposed method's scalability is not discussed or compared with competitors.\\n4. The presentation of the problem statement is confusing since the authors start not from the inverse problem of coefficient reconstruction but from the solution reconstruction problem.\", \"questions\": \"1. Why do the lines in Fig. 5 start from different initial points? It looks like the authors use different initializations, which is unfair for comparison.\\n2. What is the motivation for using Hellinger distance, not the KL divergence, for example? KL also admits closed-form for the distance between multivariate Gaussians.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper compares the performance of an inverse design method combined with a numerical solver against neural PDE solvers. The authors assume they are given data generated from a known PDE family but with unknown, spatially dependent coefficients. In this problem setting, the paper proposes to compare neural PDE solvers against numerical simulators. Since the PDE parameters are unknown, they are estimated by minimizing the difference between the output of a differentiable numerical solver and the given data. 
The experiments show that the proposed model converges quicker than the considered neural PDE solvers and achieves lower errors both on in- and out-of-distribution data.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"A novel way of comparing neural PDE solvers against numerical methods, which is in a sense more fair to the neural PDE solvers (if the neural PDE solvers are trained on real-word data).\", \"weaknesses\": \"1. The paper is not very clear in the type of problem it approaches. The writing could be improved to make the definition of the problem more easy to understand.\\n2. The usual setup in neural PDE solvers is that the PDE parameters are known. In this setting, neural PDE solvers have already been compared to numerical methods. The authors could better motivate their specific choice of problem definition (i.e., unknown, spatially-dependent PDE parameters).\\n3. The method is only a useful baseline if the neural PDE solver is trained on real-world data. When the neural PDE solver is used as a surrogate for a numerical solver, the PDE parameters would be known (since they would have been used to generate the training data).\\n4. There is no inference time evaluation. Faster inference is one of the main reasons for utilizing a neural PDE surrogate instead of a numerical method like the one considered in the paper.\\n5. Many experimental and model details are missing (see questions).\", \"questions\": \"1. How did you condition the neural PDE solvers on the coordinate-dependent PDE parameters? By adding the spatial coordinates to the solver inputs?\\n2. How did you create the spatially varying PDE parameters? Using the same method as generating the initial condition?\\n3. What did your data look like exactly? You mentioned 1D and 2D problems. Which PDE in Tab. 1 is 1D, which is 2D? How large was the dataset? How large were the spatial and temporal resolutions?\\n4. What learning rate did you use? 
What optimizer? Did you train the models autoregressively or with 1-step errors only?\\n5. How did you introduce the distribution shift? Did you increase or decrease the standard deviation of the PDE parameters? How many basis functions N did you use in the beginning? Did all of them have the same standard deviation?\\n6. Why is the Hellinger distance between the parameters generating the initial conditions a good measure for the distribution shift?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The goal of this work is to design a benchmark method for learning PDE solution operators based on numerical PDE solvers. The authors proposed DeepFDM, which focuses on one class of PDEs and takes advantage of the structure of PDEs. DeepFDM learns the coefficients of the PDEs, and distribution shifts using the Hellinger distance are quantified. The results are compared with FNO, U-Net, and ResNet.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper explores benchmarking neural operator methods and OOD performance of neural PDE operators, which is meaningful in scientific computing.\\n\\nFor one family of PDEs, it provides a neural network based solver with coefficients inferred.\", \"weaknesses\": [\"The motivation of choosing the known family of time dependent PDEs with periodic boundary conditions, and bounded coefficients is not clear to me. The problem setup seems very restricted, and the method is only applicable to learn from initial conditions.\", \"It should be made clear that if the method is data-driven/known PDE, PDE solver/operator learning in the beginning. According to my understanding, the PDE needs to be known to use the finite differences solver. 
DeepFDM learns an operator from the initial condition for a specific family of PDEs to the solution at the next time step, and then iteratively solves for a longer time. The problem setup should be more rigorous.\", \"The literature review in Section 2.1 is not well organized or well-written. The papers of PDE discovery, PINN and operator learning are mentioned without a focus. Some claims are not correct and language is vague. For example, \\u201cLu et al. (2019) propose the DeepONet architecture, which learns PDE solution operators. However, in this case, the PDE is fully known and the PDE residual is included in the loss.\\u201d It is not correct. There is no PDE known in vanilla data-driven DeepONet. The authors may refer to Physics-informed DeepONet. \\u201cNeural PDE operators aim to learn to solve a given PDE from data, without assuming that the form of the PDE is known.\\u201d This claim conflicts with the above point.\", \"One main issue is that it's not fair to compare DeepFDM with FNO, U-Net, and ResNet, since the PDE structure is known and of course it can perform better than pure data-driven methods. This makes the results not convincing.\", \"There is an existing paper on distribution shift quantification: M. Zhu, H. Zhang, A. Jiao, G. E. Karniadakis, & L. Lu. Reliable extrapolation of deep neural operators informed by physics or sparse observations. Computer Methods in Applied Mechanics and Engineering, 412, 116064, 2023.\", \"Some notations are not clearly defined. For example, m in the dataset, A, A* and \\\\hat{A}.\", \"There are no metrics for coefficient fields if the author considers solving the inverse problem.\", \"I don't see Appendix B in the manuscript.\", \"\\\"In this case, the solution is generated on a higher resolution grid, and then coarsened (upsampled).\\\" It should be \\\"downsampled\\\".\"], \"questions\": [\"Could you explain your definition of the benchmarking method? 
What makes DeepFDM a benchmarking method for PDE operator learning?\", \"Do you want to focus on the inverse problem or forward problem? Could you explain how you make sure it is fair to compare with FNO, U-Net, and ResNet?\", \"Could you provide a detailed description on datasets and training process?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents DeepFDM, a benchmark approach for learning PDE solution operators that combines the structure of traditional PDE solvers with neural networks. By leveraging the strengths of scientific computing, DeepFDM offers interpretability and aims to enhance accuracy and generalization within a specific class of PDEs, rather than acting as a replacement for neural PDE solvers.\\n\\nWhile DeepFDM is designed specifically for certain types of PDEs, it shows limited generalization to other PDE classes, reducing its applicability in diverse scenarios. Additionally, the paper lacks a detailed analysis explaining why DeepFDM outperforms traditional methods, which weakens the justification of its advantages. Providing a rigorous theoretical analysis with established approaches would strengthen the work and clarify the specific benefits of DeepFDM in terms of accuracy and generalization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper highlights a need for benchmarking in PDE solutions, pointing out that while neural networks can flexibly solve a variety of equations, there\\u2019s limited systematic comparison with established numerical methods. 
This motivation is well-founded, especially in scientific and engineering fields that demand rigorous performance metrics.\\n\\n DeepFDM seems to target both in-distribution (ID) and out-of-distribution (OOD) performance, providing a structured method for generating training and test data that reflects distribution shifts. This contribution is valuable since robust OOD performance is crucial for practical applications.\", \"weaknesses\": \"DeepFDM is presented as a benchmark method; however, its applicability is limited to a specific class of partial differential equations (PDEs). The paper does not sufficiently discuss how this restriction affects DeepFDM's generalizability, particularly in scenarios that require flexibility across various forms of PDEs, such as nonlinear PDEs and complex boundary conditions.\\n\\nFor instance, in the case of hyperbolic equations with shock locations, finite difference methods (FDM) may struggle to accurately capture the discontinuities inherent in these solutions. This limitation could significantly impact the performance and reliability of DeepFDM when applied to a broader range of PDE types.\\n\\nPlease explicitly state the objectives and justify the choice of comparison methods in the context of those objectives. More specifically,\\nWhy do the authors compare DeepFDM to both neural networks like ResNet and Unet, as well as neural operators like FNO? It\\u2019s unclear whether the authors aim to solve individual instances of PDEs or to learn a solution operator.\", \"questions\": \"You need to compare your methods fairly with both ResNet and Unet, as well as FNO, since they represent different categories\\u2014traditional neural networks versus neural operators.\\n\\nPlease discuss the trade-offs between finite difference and automatic differentiation in your specific context, and to provide justification for the choice of FD and AD. 
There is considerable evidence that automatic differentiation outperforms FD in terms of training loss.\\n\\nCan you explain why DeepFDM doesn't show oscillation in Fig. 5, unlike the other methods?\\nHow does the computational cost and training time of DeepFDM compare to the other approaches?\\nGiven that Fig. 5 doesn't show significant improvement, can you clarify what advantages DeepFDM offers in terms of training dynamics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0zRuk3QdiH
Multi-Shot Character Consistency for Text-to-Video Generation
[ "Yuval Atzmon", "Rinon Gal", "Yoad Tewel", "Yoni Kasten", "Gal Chechik" ]
Text-to-video models have made significant strides in generating short video clips from textual descriptions. Yet, a significant challenge remains: generating several video shots of the same characters, preserving their identity without hurting video quality, dynamics, and responsiveness to text prompts. We present Video Storyboarding, a training-free method to enable pretrained text-to-video models to generate multiple shots with consistent characters, by sharing features between them. Our key insight is that self-attention query features (Q) encode both motion and identity. This creates a hard-to-avoid trade-off between preserving character identity and making videos dynamic, when features are shared. To address this issue, we introduce a novel query injection strategy that balances identity preservation and natural motion retention. This approach improves upon naive consistency techniques applied to videos, which often struggle to maintain this delicate equilibrium. Our experiments demonstrate significant improvements in character consistency across scenes while maintaining high-quality motion and text alignment. These results offer insights into critical stages of video generation and the interplay of structure and motion in video diffusion models.
[ "text to video", "subject consistency", "video personalization", "motion alignment", "feature injection", "extended attention" ]
Reject
https://openreview.net/pdf?id=0zRuk3QdiH
https://openreview.net/forum?id=0zRuk3QdiH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcgfc5wmC2", "qtbMnozxWx", "mpCqIkFeOC", "hPioPdmbcl", "gwFmgIKKo0", "c8gVutEJNj", "bjjm2QKEmF", "YkJWWnNazS", "W3p6yERPxw", "NHa4VahDdo", "JTUW38O4H0", "HGxXRfV53v", "FMKpxoRPTw", "9CTacnv3OE", "4uqYdnPexW", "3faHCMRKBU", "1yxfR3mS4l", "1YZsUpCnoM" ], "note_type": [ "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733310556009, 1737523447209, 1730293000528, 1733000793183, 1730291146803, 1733000501715, 1732487575288, 1732796431513, 1733022551150, 1732485107273, 1732485351282, 1733166613902, 1734343058347, 1732484914137, 1730626291734, 1732690291799, 1730657786204, 1732484501879 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_yj9C" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_ntRv" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_kLfg" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Area_Chair_QrUP" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_kLfg" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_ntRv" ], [ "ICLR.cc/2025/Conference/Submission1316/Reviewer_e8uT" ], [ "ICLR.cc/2025/Conference/Submission1316/Authors" ] ], "structured_content_str": [ "{\"title\": \"A 
final note\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe sincerely thank you for your valuable time and insightful feedback, which has greatly benefited our work.\\n\\nIn a final note, we wish to emphasize that 3 of 4 reviewers found our approach to be \\u201c**sound**\\u201d, offering \\u201c**novel insights**\\u201d into the representation of motion, structure and identity (ntRv, kLfg) and commended our \\u201c**novel two-phase query injection strategy**\\u201d (yj9C, kLfg, ntRv). \\n\\nIn a broader context, for the first time, our work enables consistent multi-shot video generation, combined with motion-preservation. Notably, the baseline alternatives exhibit very limited performance, both qualitatively and quantitatively, which renders them largely ineffective in this task. Empirically, users had 2-4x preference over the alternatives (66-79%). Additionally, our new experiments with T2V-Turbo-V2 demonstrated that limitations such as video quality and motion issues diminish as our approach scales to stronger models.\\n\\nWe are confident that our work contributes to the dialogue in the field, advancing the understanding of motion, structure and identity in diffusion models,.\\n\\nThank you for considering our work.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a method called \\\"Video Storyboarding\\\" to generate multi-shot videos with consistent characters across different scenes while preserving motion quality.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Framewise Subject-Driven Self-Attention to maintain consistency without compromising motion\\n2. Novel two-phase query injection strategy to balance identity preservation and motion quality\\n3. Adaptation of refinement feature injection to both conditional and unconditional denoising steps\", \"weaknesses\": \"1. 
Approach limited to short video clips, unsure how it would scale to longer videos\\n2. Balancing identity preservation and motion quality is still challenging, with potential tradeoffs\", \"questions\": \"Overall, this is a well-designed and rigorously evaluated method that represents a significant advancement in generating coherent multi-shot video sequences. The authors have done a commendable job in addressing the complex challenge of maintaining character consistency while preserving natural motion.\", \"some_questions_for_the_authors\": \"1. Have you explored any strategies to further improve the balance between identity preservation and motion quality? Are there other techniques beyond query injection that could be investigated?\\n2. How do you envision this approach scaling to longer video sequences? What additional challenges might arise, and how could the method be adapted to handle them?\\n3. The user study results showed that the ConsiS Im2Vid baseline achieved the highest set consistency among the baselines. Can you comment on the strengths of this approach and how it might be combined or compared with your Video Storyboarding method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up\", \"comment\": \"Dear Reviewer kLfg,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work.\\n\\nWe hope you had an opportunity to review our response from November 24. In it, we included comparisons with VSTAR, as you suggested, demonstrated results with concise prompts and multiple characters, and provided additional context for our consistency results.\\n\\nAre there any other concerns or questions we can help address? We would be happy to provide further clarification.\\n\\nThank you,\\n\\nThe authors\"}", "{\"summary\": \"This paper aims to generate multi-shot videos with consistent characters in a zero-shot manner. 
They claim that there is a trade-off between preserving character identity and video dynamics, thereby designing a two-phase approach, Q-preservation and Q-Flow, to balance the two respects.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. **Training-Free Approach for Subject Consistency Across Shots**: This work offers a training-free method for generating subjects with consistent identity across varying scenarios and shot transitions, which is valuable for practical applications where maintaining coherence in subject appearance is essential.\\n\\n2. **Novel Insights on Self-Attention Query Features**: The authors provide fresh insights into the role of self-attention query features, demonstrating that these features effectively capture both motion and identity.\\n\\n3. **Query-Preservation and Q-flow Techniques**: By preserving query features during early denoising and applying a tokenflow-inspired approach to select keyframes, the method achieves partial injection of query features to adjacent frames. Although it draws heavily from ConsisStory and TokenFlow, this approach has demonstrated effectiveness in enhancing subject consistency and motion dynamics to a certain extent.\", \"weaknesses\": \"1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather than cross-attention.\\n\\n2. **Poor Writing and Project Organization**: The paper's writing and the project page's layout hinder comprehension, making it difficult for readers to follow the key contributions the authors intend to convey.\\n\\n3. 
**Minimal Improvement over Baseline Models**: The generated video storyboarding results appear similar to those produced by existing video generation baselines like Videocrafter2 or TokenFlow encoder, with little noticeable difference in output quality.\\n\\n4. **Lack of Motion Dynamics**: The method demonstrates limited motion dynamics. In most video segments, the objects remain static, and in every case, the object consistently occupies the center of the frame, resulting in rigid, uninspired visuals.\\n\\n5. **Overclaiming the Benchmark**: The authors\\u2019 claim of establishing a benchmark based on a dataset of only 30 videos, each containing 5 video shots, is unsubstantiated. This dataset is insufficiently sized and lacks diversity, with evaluations limited to character consistency and dynamic degree, providing a narrow view that does not comprehensively assess the model's capabilities.\", \"questions\": \"1. **Inconsistencies in Object Appearance and Action**: In the ablation study on query preservation (`ablation_q`), inconsistencies persist. For example, in the first video, the right hand of the Muppet character appears red, while in the third shot, it is not. Additionally, although the Muppet is intended to perform aerobics in a Sesame Street setting, it merely flicks its hand briefly, failing to convey the intended action sequence.\\n\\n2. **Static Object Issues in ConsiStory Component Ablation Study**: In the ablation study on ConsiStory components for video generation, the rabbit character intended to surf, train, and ride a bike appears mostly static in the first and third shots. This raises the question of whether these issues stem from limitations in the base model\\u2019s dynamic capabilities. 
If so, would using models with stronger dynamic performance, such as Dynamic Crafter or CogVideo, potentially improve motion consistency and address these static object limitations?\\n\\nIf the video dynamic problem is addressed, I am willing to increase my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up\", \"comment\": \"Dear Reviewer e8uT,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work.\\n\\nWe hope you had an opportunity to review our response from November 24. In it, we provided additional evaluation metrics, described how we improved the paper's clarity, clarified how Figure 4 and L375 support our key insight about query features, and elaborated on our paper's novelty. We also addressed your questions about generation quality.\\n\\nAre there any other concerns or questions we can help address? We would be happy to provide further clarification.\\n\\nThank you,\\n\\nThe authors\"}", "{\"title\": \"General response to reviewers\", \"comment\": \"We thank all the reviewers for their useful and insightful feedback. We are encouraged that the reviewers found our work to offer \\u201c**novel insights**\\u201d into structure and motion (ntRv, kLfg) and commended our \\u201c**novel two-phase query injection strategy**\\u201d (yj9C, kLfg, ntRv). The reviewers appreciated our \\u201c**valuable**,\\u201d tuning-free problem setup (**all**) and found our work to be \\u201c**sound**\\u201d (e8uT, kLfg, yj9C), while acknowledging our method as \\u201c**effective**\\u201d (ntRv), with \\u201c**significant improvements**\\u201d (kLfg), \\u201c**outperform baseline**\\u201d (e8uT) and being \\u201c**rigorously evaluated**\\u201d (yj9C).\\n\\n\\nOur work has benefited tremendously from your feedback. 
Below are the main modifications to the manuscript (colored blue in the updated PDF):\\n\\n1. **Stronger pretrained model, T2V-Turbo-V2**\\n\\n Following Reviewer ntRv\\u2019s suggestion, we tested our method with T2V-Turbo-V2 [Li et al. NeurIPS 2024], a state-of-the-art video model with enhanced motion capabilities. Results (Appendix: Table 1, Figure 8) show a threefold improvement in dynamic degree (from 20 to 62) while maintaining text alignment and subject consistency.\\n\\n2. **Expanded Experimental Results and Qualitative Demonstrations**\\n\\n - **VSTAR Baseline Comparisons**: We added qualitative and quantitative results comparing our method with VSTAR (Appendix: Table 1, Figure 11), as suggested by Reviewer kLfg.\\n - **Semantic Alignment Metrics**: New results include text-similarity and subject-consistency scores (Appendix: Table 1), as suggested by Reviewers (e8uT, ntRv).\\n - **Additional Qualitative Results**: We demonstrated our method\\u2019s ability to handle general subject categories like \\u201cwoman\\u201d or \\u201crabbit\\u201d (Appendix: Figure 9) and multiple subjects (Appendix: Figure 10), as suggested by Reviewer kLfg.\\n\\n3. **Improved Clarity**: \\n\\n We refined the paper\\u2019s writing, as suggested by Reviewers (e8uT, ntRv). Specifically, we added a dedicated *Notations Section* to define key terms, and revised the methods section to align with ConsiStory's three-step structure. Technical derivations and complex details were moved to the appendix to ensure the main text focuses on core challenges and solutions.\\n\\n\\nWe look forward to addressing any remaining questions from the reviewers and continue engaging in further discussion. If you find our response satisfactory, we kindly ask you to consider raising your rating in recognition of our core contributions.\"}", "{\"comment\": \"Thank you for your feedback. 
We appreciate your thorough evaluation of our work and the opportunity to clarify our objectives and results.\\n \\nWe wish to emphasize that the issues you mentioned\\u2014missing backgrounds (streets/tracks), lack of realism, unnatural motion, and the limited duration of shots\\u2014are inherent limitations of the base T2V-Turbo-V2 model, which our study does not aim to address. Our primary objective is to enhance subject consistency across different shots, rather than directly improving visual quality or realism.\\n\\nWe demonstrated that our approach can provide more significant improvements when applied to a stronger baseline model that inherently addresses one of these concerns. The additional experiment with T2V-Turbo-V2, as you requested, shows that when a more advanced base model is available, our method indeed enhances subject consistency in conjunction with the improvements stemming from that model (motion dynamics in this case).\\n\\nMoreover, we have introduced a new section on our project website titled \\u201cBackground details in dynamic video-models\\u201d, where you can observe that the issues you mentioned stem from the vanilla T2V-Turbo-V2, rather than from our method. In these results, you can view our generation results alongside those from the base T2V-Turbo-V2 model. In these comparisons, we use hyperparameters specifically tuned to preserve detail, which enables our approach to better maintain the background and detail quality of the base model while maintaining subject consistency.\\n\\nThank you for considering these points. We hope this clarifies our contributions and the potential of our method in conjunction with future advancements in base models. \\n\\n* The project page was updated before the deadline for updating the paper / results\"}", "{\"comment\": \"Thank you for the detailed reply. I have carefully read your response and the experimental results, and I have decided to maintain my original score. 
The main reasons are that the method's performance is not satisfactory: (1) Consistency is still relatively poor on simple objects, such as the woman in Figure 9. (2) The generated video movements are either too subtle or do not match the text description, such as the \\\"Feeding\\\" in Figure 10.\"}", "{\"title\": \"Thank you for the review and support!\", \"comment\": \"Thank you for your thoughtful feedback, insightful questions, and support of our work. We greatly value your perspectives on scaling to longer videos, balancing identity preservation and motion quality, and combining methods like ConsiS Im2Vid with our approach. Below, we provide detailed responses to your questions.\\n\\n\\n* **How do you envision this approach scaling to longer video sequences? What additional challenges might arise, and how could the method be adapted to handle them?**\\n In long video sequences, subjects undergo viewpoint variations, causing their appearance to evolve due to perspective changes. Extended attention mechanisms, while useful for maintaining consistency, often promote a single dominant viewpoint. This suppresses viewpoint diversity and hinders the ability to handle natural perspective shifts over time.\\n\\n One way to address this is to apply extended attention selectively, instead of connecting frames in a non-specific manner. For this, the video can be divided into shorter temporal segments, and attention is extended only across segments detected to share similar viewpoints. This approach may maintain subject consistency within segments while respecting viewpoint diversity across the sequence.\\n\\n* **Have you explored any strategies to further improve the balance between identity preservation and motion quality? Are there other techniques beyond query injection that could be investigated?**\\n Yes, there is definitely room for improvement in balancing identity preservation and motion quality. 
One approach we explored, though it did not yield significant improvements, was a training-free rank-one update to the network weights by copying average features from anchor videos, inspired by [Bau et al. ECCV 2020].\\n\\n Several potential strategies for improvement include leveraging temporal attention layers to link features across video shots, such as copying temporal features from one shot to another. Alternatively, generating a single long video, as done in VSTAR [Li et al. 2024], could help maintain stronger identity preservation. However, the challenge lies in creating clear-cut transitions between scenes, avoiding the scene-transition style of VSTAR videos, while ensuring alignment with text prompts.\\n\\n Another avenue is a training-based approach, such as those explored in works like [Zeng et al. CVPR 2024], which could allow for deeper optimization of the balance between identity and motion dynamics.\\n\\n References:\\n\\n [Bau et al.] Rewriting a Deep Generative Model, ECCV 2020\\n\\n [Li et al.] Generative Temporal Nursing for Longer Dynamic Video Synthesis, preprint 2024\\n\\n [Zeng et al.] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation, CVPR 2024\\n\\n* **Comment on the strengths of ConsiS Im2Vid baseline approach and how it might be combined or compared with your Video Storyboarding method?**\\n The strength of the ConsiS Im2Vid baseline lies in its ability to generate video shots from a set of consistent reference images. However, it has key limitations: (1) it lacks a mechanism to enforce cross-shot consistency, leading to identity variations, even when conditioned on images with consistent subjects, and (2) it lacks text-prompt control, resulting in static or misaligned motion.\\n\\n Combining Im2Vid with our Video Storyboarding method is an excellent idea that could lead to a robust hybrid approach. Framewise-SDSA and feature refinement could enhance subject consistency across shots. 
That said, the challenge of injecting motion remains, as Im2Vid lacks text-prompt control. This could be addressed by generating a non-consistent video using a text-to-video model and applying an inversion technique to extract and inject motion activations into the Im2Vid framework. Alternatively, combining our approach with a pretrained model conditioned on both image and text (image+text-to-video) could enable simultaneous control over subject identity and motion dynamics.\"}", "{\"title\": \"Thank you for the review!\", \"comment\": \"Thank you for your detailed feedback and thoughtful suggestions, as well as your willingness to increase your score based on improvements. Your comments on motion dynamics, novelty, and clarity have been instrumental in refining our work. In particular, your suggestion to evaluate our method using a stronger video model inspired us to conduct new experiments with T2V-Turbo-V2, yielding significantly enhanced motion dynamics. Below, we address your comments in detail.\\n\\n* **Lack of Motion Dynamics .. do these issues stem from limitations in the base model\\u2019s dynamic capabilities. If so, would using models with stronger dynamic performance \\u2026 If the video dynamic problem is addressed, I am willing to increase my score.**\\n\\n Following the reviewer's request, we applied our method to T2V-Turbo-V2 [Li et al., NeurIPS 2024], a recent state-of-the-art video model released in October 2024 that offers significantly larger motion dynamics and improved video quality. Results from this experiment\\u2014both qualitative (Figure 8, Appendix) and quantitative (Table 1, Appendix)\\u2014show that combining our approach with Turbo-V2 produces consistent subjects while achieving much more dynamic visuals. Specifically, the dynamic degree **improves threefold**, from 20 to 62, while maintaining the same level of text alignment. 
This improvement is also evident in the qualitative examples the reviewer asked about, featuring the rabbit and Muppet characters (see Figure 8, Appendix and the online website). There, our method combined with Turbo-V2 demonstrates noticeably enhanced motion dynamics compared to its application with VideoCrafter2.\\n\\n The static motion in Ours+VideoCrafter2 results reflects the limitations of the underlying pretrained model. However, our method still achieves a better consistency-dynamics balance than naive consistency approaches, as shown in the ablation studies (Figures 4 and 5) and the user study.\\n\\n [Li et al.], T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback, NeurIPS 2024\\n\\n\\n* **Novelty.**\\n\\n Our key novelties are twofold: (1) we demonstrate that query features (Q) encode both motion and identity\\u2014*this is our main insight*. This insight directly motivates (2) our two-phase query injection mechanism, which is specifically designed to address the challenges arising from this observation, particularly in preserving identity without compromising motion. For that, we adopt components from prior work in non-trivial ways to form a cohesive solution.\\n\\n Furthermore, our work is the first to address this problem setup for video generation and the unique challenges it presents. Indeed, we also introduce additional technical improvements, such as the frame-wise SDSA and the use of estimated clean image (pred-x0) for mask generation, rather than relying on internal network activations used in prior methods.\\n\\n* **Clarity.**\\n\\n We thank the reviewer for this valuable feedback. 
We have improved the paper's clarity through:\\n\\n - **Added Notations Section**: A dedicated Notations section defining key terms (e.g., $Q_v$, $Q_c$, $Q_f$) to reduce ambiguity.\\n\\n - **Aligned Structure**: Revised the methods section to mirror ConsiStory's three steps using clear bold headers, making connections to prior work explicit: (1) SDSA, (2) Layout Diversity, (3) Refinement Injection.\\n\\n - **Focused Content**: Technical derivations and complex details were moved to the appendix, while the main text emphasizes core challenges before presenting the solutions.\\n\\n The project page has also been updated for better navigation.\\n\\n* **Improvement over Baseline Models.**\\n\\n While we acknowledge that the method is not perfect and there is room for improvement in future work, we emphasize that it already achieves significant advancements over existing baselines. For instance, its consistency is rated higher by 66% of users compared to the best baseline and by 79% of users compared to the base model, all while maintaining competitive text-motion alignment.\\n\\n* **Overclaiming the Benchmark.**\\n\\n\\n We revised and downplayed the benchmark claim to reflect the scope of our evaluation. Our evaluation, based on 150 generated videos (see L431, L471), provides distinguishable error bars and ensures robust evaluation results. In response to this comment, we also report additional semantic alignment scores, including \\u201ctext-similarity\\u201d (referred to as \\u201coverall-consistency\\u201d by VBench) and VBench\\u2019s \\u201csubject-consistency,\\u201d which measures the similarity between frames within the same video shot using DINO (see Table 1 in the Appendix).\"}", "{\"comment\": \"Thank you for taking the time and effort to review our work and provide thoughtful feedback. We value your insights and have carefully considered your comments.\\n\\nWe acknowledge the concern about motion subtlety. 
Our initial model was indeed limited in this aspect due to its reliance on VideoCrafter2 (e.g., see VideoCrafter2 in Figure 2 - \\\"playing Wii\\\" / \\\"playing w. trees\\\", Figure 11 - athlete). Following another reviewer's suggestion, we experimented with T2V-Turbo-V2, achieving more dynamic visuals while maintaining consistency (Table 1-appendix, Figure 8). These new results are also shown on our website under \\\"Video Storyboarding with a stronger pretrained model, T2V-Turbo-V2\\\". \\n\\nIn the broader context, our experiments with T2V-Turbo-V2 demonstrate how our approach scales to stronger base models. This enables motion preservation during consistent multi-shot video generation, which is particularly relevant as text-to-video synthesis continues to evolve.\\n\\nRegarding consistency, we acknowledge that the method is not perfect. However, for the first time, it enables consistent multi-shot video generation with measurable, notable improvements over baseline methods, both qualitatively and quantitatively. Notably, the baseline methods exhibit very limited performance, which renders them largely ineffective in this task. Empirical evidence from our user study reveals that participants preferred our outputs twice as often as those from the best baseline model (66%) and four times as often as those from the base model (79%).\\n\\nThank you for considering these points. 
We hope this clarifies our contributions and the potential of our method in conjunction with future advancements in base models.\"}", "{\"metareview\": \"This paper presents a training-free approach to ensure character consistency and motion adherence in generating multi-shot video sequences.\\n\\nThe paper received mixed ratings (5, 5, 5, 8) after rebuttal, with key concerns revolving around clarity of writing (e8uT, ntRv), novelty of the method (e8uT, ntRv), and quality of results (e8uT, kLfg, ntRv).\\n\\nIn the post-rebuttal discussion, reviewers acknowledged that the revised version improved clarity. However, concerns about the novelty and quality of results remained. Specifically:\\n- The motivation of \\u201cquery features encode both motion and identity\\u201d was interesting, but was not convincingly demonstrated through the results. \\n- The proposed method offered only incremental innovations compared to ConsiStory and TokenFlow.\\n- Despite including T2V-Turbo, the results failed to address issues with motion consistency, video fidelity, and multi-shot consistency.\\n\\nThe AC agrees with the majority of the reviewers and recommends rejecting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The paper received mixed ratings (5, 5, 5, 8) after rebuttal, with key concerns revolving around clarity of writing (e8uT, ntRv), novelty of the method (e8uT, ntRv), and quality of results (e8uT, kLfg, ntRv).\\n\\nIn the post-rebuttal discussion, reviewers acknowledged that the revised version improved clarity. However, concerns about the novelty and quality of results remained. Specifically:\\n- The motivation of \\u201cquery features encode both motion and identity\\u201d was interesting, but was not convincingly demonstrated through the results. 
\\n- The proposed method offered only incremental innovations compared to ConsiStory and TokenFlow.\\n- Despite including T2V-Turbo, the results failed to address issues with motion consistency, video fidelity, and multi-shot consistency.\\n\\nThe reviewer yj9C with a positive score submitted short comments and did not participate in any discussion.\"}", "{\"title\": \"Thank you for the review!\", \"comment\": \"Thank you for your constructive feedback and thoughtful suggestions, which have helped us improve the scope of our work. We appreciate your detailed comments on expanding baseline comparisons, exploring concise prompts, and evaluating multiple characters, as they helped us strengthen both our experiments and presentation. In particular, your points regarding VSTAR and broader evaluations led us to incorporate new results, which we believe address your concerns. Below, we respond to your comments in detail.\\n\\n* **Experiments: The paper misses some important baseline methods, e.g., VSTAR[1]. The paper only provides several comparison samples.**\\n\\n\\n We first note that VSTAR's code was released on October 10, after the ICLR submission deadline. However, in response to the reviewer\\u2019s request, we have now included both qualitative and quantitative comparisons to the VSTAR baseline. Please see the results in the Appendix (Table 1, Figure 11 and the online website). Overall, while VSTAR produces large motion dynamics, it struggles with prompt-specific control, often resulting in entire videos misaligning with text descriptions. Since it achieves consistency through continuous video generation, VSTAR is better suited for scene transitions rather than independent video shots. \\n\\n Additionally, regarding the scope of comparisons, our user study and automated metrics were conducted with 150 generated videos (see L431, L471). 
This sample size provides distinguishable error bars, ensuring robust evaluation results.\\n\\n* **Would a more concise description impact character consistency? For example \\u2026 \\\"A woman\\\".**\\n \\n Following the reviewer\\u2019s request, we have added qualitative results demonstrating our method\\u2019s ability to handle general subject categories (see Appendix Figure 9 and the online website). These examples show that our approach can successfully generate consistent videos for broad subject types like \\\"woman\\\" and \\\"rabbit,\\\" indicating strong performance even with concise, superclass-level prompts.\\n\\n Overall, a more concise description generally increases the gap between our method and the baselines, as it amplifies variation between shots in the baseline methods. Note that we intentionally adopted prompts with a complexity level comparable to consistent-character image generation methods, such as Consistory. For example, prompts like \\\"Cinematic, middle-aged female athlete\\\" are not overly complex, especially when compared to the highly detailed descriptions used in works like Sora (e.g., \\u201cA stylish woman walks down a Tokyo street... She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick.\\u201d or \\u201cthe 30-year-old space man wearing a red wool knitted motorcycle helmet\\u201d). \\n\\n* **How the proposed method performs with multiple characters.**\\n\\n Following the reviewer\\u2019s request, we have added qualitative results demonstrating the capability to handle multiple subjects (see Appendix Figure 10 and the online website). By incorporating two subjects in the prompt of the zero-shot mask, our approach can consistently render multiple characters within the same scene, as illustrated by examples featuring girl-owl and boy-teddy bear pairs.\\n\\n* **Quality of consistency results .. 
in Fig.3, the color and style of clothes change across different video shots.**\\n\\n We acknowledge that the method is not perfect and that there is room for improvement in future work. However, it is worth noting that in some cases, changes in color and style, such as the girl wearing different attire while surfing, align with the context of the action and user expectations. A rigid preservation of clothing or style across all shots might not always be desirable.\\n\\n That said, our method already provides significant improvements over existing baselines. For instance, its consistency is rated higher by 66% of users compared to the best baseline and by 79% of users compared to the base model, all while maintaining competitive text-motion alignment.\"}", "{\"summary\": \"This paper presents Video Storyboarding, a training-free method that enhances pre-trained text-to-video models to generate multiple shots with consistent characters while maintaining high video quality and responsiveness to text prompts. By leveraging self-attention query features that capture motion and identity, the method addresses the trade-off between character consistency and video dynamics through a novel query injection strategy. Experimental results show significant improvements in character consistency and motion quality, offering insights into the video generation process and the interplay of structure and motion in diffusion models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a training-free method to ensure charactoer consistency and motion adherence in producing multi-shot video sequences.\\n2. This paper presents a two-phase query injection strategy to balance encoding motion and identity.\\n3. A benchmark and evalution protocol are proposed to evaluate consistency of video generation.\", \"weaknesses\": \"1. 
The conducted experiments are not comprehensive, including two aspects: (a) The paper only provides several comparison samples. (b) The paper misses some important baseline methods, e.g., VSTAR[1]\\n2. Although the purpose of the paper is to maintain consistency of characters across different video clips, the results are not particularly good. For example, in Fig.3, the color and style of clothes change across different video shots.\\n\\n[1] Li, Yumeng, William Beluch, Margret Keuper, Dan Zhang, and Anna Khoreva. \\\"VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis.\\\" arXiv preprint arXiv:2403.13501 (2024).\", \"questions\": \"1. The paper only demonstrates the performance of a single character across different videos, and the reviewer is curious about how the proposed method performs with multiple characters.\\n2. The prompt in the paper provides overly detailed descriptions of the character. Would a more concise description impact character consistency? For example, replace the \\\"Cinematic, middle-aged female athlete\\\" in Fig.8 with \\\"A woman\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and additional experiments. While T2V-Turbo-V2 improves motion dynamics, I noticed issues like background removal (e.g., missing streets during skating in Sesame Street and train tracks while surfing on a train in Rabbit Toy). The characters and story still lack realism, and challenges in multi-shot character-consistent story generation remain, such as duration, natural motion, and story richness. I will maintain my original score.\"}", "{\"summary\": \"This paper targets the problem of character consistency in text-to-video generation. The authors propose a training-free method to solve this problem. 
They find that the query in the attention encodes the information of both motion and identity, which leads to the trade-off between motion dynamics and identity consistency. The experimental model used is VideoCrafter2. To solve the trade-off problem, they propose a new query injection method. Specifically, they share features between different video clips. Then, they replace the Q (query) with those from the original generation (to maintain motion from the original generation). After that, they leverage the flow map from vanilla keyframes to guide the Q injection. Their results achieve character consistency while keeping the original motion dynamics and text alignment. The text alignment is evaluated via user study. The overall metrics for evaluation are three aspects: motion degree, id consistency, and motion text alignment.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The method is tuning-free and does not require any further training.\\n2. The results outperform baseline methods. Some of the provided visual results look good.\", \"weaknesses\": \"1. The paper is relatively hard to follow regarding the details of the method part.\\n2. Lack of novelty: 1. The id-preserving mechanism is built upon SDSA. The SDSA is adopted from ConsiStory [Tewel et al. 2024] with two minor modifications: (1) the attention does not attend to all frames from different clips, but to one single frame from each clip; (2) the mask estimation uses ClipSeg, rather than being estimated from the cross attention. 2. The motion preservation leverages TokenFlow [Geyer et al. 2023] to inject the motion based on the flow from original keyframes. Thus, the method is like an A+B combination with some minor modifications.\\n3. The key insight \\\"self-attention query features (Q) encode both motion and identity\\\" lacks experimental results to demonstrate it.\\n4. 
The results are not perfect, e.g., inconsistent hairstyles in the 3rd row of Figure 1.\\n5. The evaluation does not contain the overall video generation quality and the qualitative semantic alignment scores.\\n6. Minor format issues like inconsistent figure references: Figure 1 and Fig. 4; and strange line breaks at line 244 and line 291.\", \"questions\": \"1. Does the overall generation quality decrease after the proposed method?\\n2. How does the motion quality change after the proposed method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the review!\", \"comment\": \"Thank you for your valuable feedback and thoughtful insights. They have greatly helped us refine the clarity, novelty, and evaluation of our work. We appreciate your recognition of our key insight that \\u201cquery features encode both motion and identity.\\u201d We hope that our revisions, including clearer explanations of our method, additional experiments, and expanded evaluations, address your concerns effectively. Below, we respond to your points in detail.\\n\\n\\n* **Novelty**\\n\\n Our key novelties are twofold: (1) as the reviewer highlighted in point #3, the insight that \\\"query features (Q) encode both motion and identity.\\\" This key observation directly inspired (2) our two-phase query injection mechanism, specifically designed to address the challenges arising from this observation, particularly in preserving identity without compromising motion. We respectfully disagree with the notion that our method is merely an \\\"A+B combination with minor modifications.\\\" While the method builds on components from prior work, they are adapted in non-trivial ways to form a cohesive solution.\\n\\n Furthermore, our work is the first to address this problem setup for video generation and the novel challenges it presents. Indeed, we also introduce additional technical improvements, such as the frame-wise SDSA and the use of estimated clean image (pred-x0) for mask generation, rather than relying on internal network activations used in prior methods.\\n\\n* **Clarity of the method part.**\\n\\n We thank the reviewer for this valuable feedback. We have improved the paper's clarity through:\\n\\n - **Added Notations Section**: A dedicated Notations section defining key terms (e.g., $Q_v$, $Q_c$, $Q_f$) to reduce ambiguity.\\n\\n - **Aligned Structure**: Revised the methods section to mirror ConsiStory's three steps using clear bold headers, making connections to prior work explicit: (1) SDSA, (2) Layout Diversity, (3) Refinement Injection.\\n\\n - **Focused Content**: Technical derivations and complex details were moved to the appendix, while the main text emphasizes core challenges before presenting the solutions.\\n\\n* **The key insight \\\"self-attention query features (Q) encode both motion and identity\\\" lacks experimental results to demonstrate.**\\n\\n Please see the experiment in Figure 4 and the discussion around L375. We demonstrate that injecting the query features from the vanilla videos results in strong motion preservation. At the same time, it causes a *loss of subject identity* (e.g., the Muppet\\u2019s color change), clearly indicating that the Q tokens encode both motion and identity information.\\n\\n* **The evaluation does not contain the overall video generation quality and the qualitative semantic alignment scores.**\\n\\n In response to this comment, we now provide additional semantic alignment scores, including \\u201ctext-similarity\\u201d (referred to as \\u201coverall-consistency\\u201d by VBench) and VBench's \\u201csubject-consistency,\\u201d which measures the similarity between frames within the same video shot using DINO (see Table 1 in the Appendix). We also note that our video generation quality scores are similar to those of the underlying pretrained model (see discussion in L474), which is why we focused on presenting two key metrics \\u2014 multi-shot set consistency and dynamic degree \\u2014 that are more distinguishable relative to the pretrained model.\\n\\n* **Results are not perfect, e.g., inconsistent hairstyles.**\\n\\n We acknowledge that the method is not perfect and there is room for improvement in future work. However, we highlight that our method already provides a significant improvement when compared to existing baselines. For instance, its consistency is rated higher by 66% of users compared to the best baseline and by 79% of users compared to the base model, all while maintaining competitive text-motion alignment.\\n\\n* **Does the overall generation quality decrease after the proposed method?**\\n\\n No, the overall generation quality does not decrease. Only the generated motion is somewhat reduced (e.g., the dynamic degree) when using the VideoCrafter2 pretrained model (see Figure 6). Following Reviewer ntRv's comment, we included additional results using another SoTA pretrained video model (T2V-Turbo-V2). In this case, the dynamic degree is only slightly affected (see Table 1 in the appendix).\\n\\n* **How does the motion quality change after the proposed method?**\\n\\n According to the user study (Figure 7, right), 55% of our generated motions were judged to be of similar or better quality compared to the vanilla model.\"}" ] }
0zGvf2yRMQ
MeshGen: Generating PBR Textured Mesh with Render-Enhanced Auto-Encoder and Generative Data Augmentation
[ "Zilong Chen", "Yikai Wang", "Wenqiang Sun", "Feng Wang", "Yiwen Chen", "Huaping Liu" ]
In this paper, we present MeshGen, an advanced image-to-3D pipeline designed to generate high-quality 3D objects with physically based rendering (PBR) textures. Existing methods struggle with issues such as poor auto-encoder performance, limited training datasets, misalignment between input images and 3D shapes, and inconsistent image-based PBR texturing. MeshGen addresses these limitations through several key innovations. First, we introduce a render-enhanced point-to-shape auto-encoder that compresses 3D shapes into a compact latent space, guided by perceptual loss. A 3D-native diffusion model is then established to directly learn the distribution of 3D shapes within this latent space. To mitigate data scarcity and image-shape misalignment, we propose geometric alignment augmentation and generative rendering augmentation, enhancing the diffusion model's controllability and generalization ability. Following shape generation, MeshGen applies a reference attention-based multi-view ControlNet for image-consistent appearance synthesis, complemented by a PBR decomposer to separate PBR channels. Extensive experiments demonstrate that MeshGen significantly enhances both shape and texture generation compared to previous methods.
[ "3D Generation", "Texture Generation" ]
https://openreview.net/pdf?id=0zGvf2yRMQ
https://openreview.net/forum?id=0zGvf2yRMQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kMbSDKfbMX", "PpuACGlX4P", "KuquoCGXFy", "DMPFpnuC2i", "8Y5qZxwMsw", "0VsinFVRXp" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731652551245, 1730864089624, 1730585734965, 1730815528402, 1730688113969, 1730460988938 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission707/Authors" ], [ "ICLR.cc/2025/Conference/Submission707/Reviewer_EXPc" ], [ "ICLR.cc/2025/Conference/Submission707/Reviewer_UCjb" ], [ "ICLR.cc/2025/Conference/Submission707/Reviewer_JCJb" ], [ "ICLR.cc/2025/Conference/Submission707/Reviewer_6QQ6" ], [ "ICLR.cc/2025/Conference/Submission707/Reviewer_rYth" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper studies the problem of recovering 3d and some material properties of the objects studied from an image (or from multiple images, or from multiple images and their normal maps. Which of these is correct was not clear from the paper).\\nExtensive experimentation was performed to optimize the terms of various loss functions in order to give high-quality visual results of the re-rendered captured shapes. Extra care was taken to ensure that physically-based rendering of the captured surfaces allowed for the captured objects to be rendered under different lighting conditions.\\n\\nThe paper operated within a subset of the Objaverse dataset, consisting of 35k multi-view images.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The final images do indeed look better than the rendered comparison images, for the images shown.\", \"weaknesses\": \"There aren't numerical results given for the system performance. 
Thus, the reader is constantly questioning, \\\"are these results cherry picked\\\"?\", \"the_paper_is_written_for_an_audience_of_researchers_operating_in_the_same_sub_field\": \"people reconstructing 3d, training and testing of Objaverse dataset images. I'm not in that set of researchers (and ICLR readers generally won't be) and many aspects of the paper were unclear to me, see the questions below.\\nThis is an engineering paper, a paper showing how to tweak parameters\\nto achieve slightly better results in a very crowded field. As I read\\nthe paper, I kept asking myself, \\\"what do I learn from this?\\\" and I\\nrarely came up with an answer to that question. The message is,\\nextensive parameter tweaking results in slightly better performance.\\nI don't feel that's a message that we need to convey to the ICLR\\naudience.\", \"my_concerns_with_the_paper\": \"(1) There's no high-level story presented, no obvious set of take-aways that the reader learns.\\n(2) The paper doesn't present the work in a way that's accessible to readers outside of this particular subfield.\\n(3) Quantitative performance evaluations are not given, just lots of thumbnail images. This is unsatisfying, and not persuasive, since the reader wonders about bad results not being shown. If the results are indeed a random selection of the system outputs, please say so.\\n(4) Generalization beyond the one dataset trained on was relegated to one figure in the appendix. Same with failure cases.\", \"questions\": \"please tell me the line number where you state: what the input is to the system, ie, how many views are assumed to be input? Are surface normals also assumed to be input?\\nIf you both train and test on the Objaverse dataset, then why can't you report more quantitative measures of performance? 
Presenting small thumbnails as the research output leaves the reader wondering if the results we're viewing are just the examples that happened to work well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a PBR-textured mesh reconstruction method from a single-view image.\\n\\nThe model has several components: a render-enhanced auto-encoder, an image-to-shape diffusion model with augmentations, and image-conditioned PBR texture generation.\\n\\nCompared to previous methods, the paper shows enhanced and fine-grained textured mesh generation results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a thorough literature survey of previous studies, analyzes the weaknesses of these methods, and presents a concrete training model and pipeline.\\n\\nThe qualitative results are improved compared to the previous methods, especially for the fine-grained details, and represent the given input view images well. The human head result in Fig. 5 is promising, because it is out of the domain of the object dataset (especially for Objaverse). It would be better to add more human head (out-of-domain) results.\\n\\nThe paper can deal with PBR textures, which is essential for practical applications.\", \"weaknesses\": \"The figures of the pipeline and modules are hard to understand and less intuitive because of heavy abbreviation, especially Fig. 1, Fig. 2, and Fig. 4.\\n\\nIt is hard to understand the whole pipeline process of the paper\\u2019s modules. In particular, for Fig. 1, the reviewer wonders what the connection is between (a) the render-enhanced auto-encoder and (b) the image-to-shape diffusion model with tailored augmentation. It is nice to have a well thought out and fleshed out design, but it is hard to understand the connectivity of the entire module.\\n\\nFor Fig. 
4 and Fig. 7, it is better to denote which is the paper\\u2019s method (ours).\\n\\nThe paper lacks a discussion of limitations.\", \"questions\": \"What are the detailed training recipes in terms of training dataset, GPU specs, and training time?\\n\\nThe paper needs to show qualitative results on unseen evaluation datasets, such as the Google Scanned Objects (GSO) dataset, to provide a stronger rationale for the improved results. Only the qualitative result of the ablation study is given.\\n\\nFor the geometric alignment augmentation, the reviewer wonders why it is an augmentation. In L. 261-263, the authors say that \\u201cwe select one view from multi-view images as the condition and rotate the point cloud\\u2019s azimuth to align the object\\u2019s orientation with the selected image as the target\\u201d. This seems to be just training with an image corresponding to each multi-view angle for a given point cloud, so it is hard to understand why this is an augmentation. The reviewer wonders whether training to correspond to multi-view images for a given 3D point cloud is not an existing method for multi-view learning, and what augmentation adds that is different.\\n\\nFor the multi-view PBR decomposition, are the generated images consistent between the different PBR components of the same view image, and across the multi-view images?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
The experiments validate the effectiveness of the proposed designs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper is easy to read.\", \"The results in all figures look good when compared with existing methods, especially for the geometric details.\"], \"weaknesses\": [\"My concerns include:\", \"It lacks training details, especially about the data. What is the scale of the training 3D models? Is it the same as MeshLRM, MeshFormer, and CraftsMan? If not, the comparison with them may not be fair.\", \"It lacks quantitative analysis of the image-to-shape models. Currently, only some selected examples are shown, which is not enough to support the claim of SOTA accuracy.\", \"The paper is not well-motivated. What kind of issues does this paper aim to address? This is not clear to me.\", \"It lacks technical insight. The render-based perceptual loss, the proposed new augmentation strategies, the attention-based texturing pipeline, etc., are all engineering methods. I believe these can improve the performance, but they do not bring the community new insights.\"], \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"The authors introduce a solution to the image-to-3D generation problem. 
Instead of predicting a textured mesh, they design a native 3D generation approach, with a separate PBR texturing stage.\", \"The 3D generation stage consists of training a render-enhanced point-to-shape auto-encoder; they chose a triplane representation instead of the previously used vector set for rendering-efficiency reasons.\", \"Following the autoencoder training stage, they employ a diffusion UNet on top of h-stacked triplane features, with cross-attention on input-image DINOv2 features.\", \"For texture prediction, the authors train a multiview ControlNet, applied on top of Zero123++, to predict the multiview shaded renders, with another Instruct-pix2pix decomposer to separate them into PBR materials.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors provide a comprehensive image-to-3D pipeline (notably, based on 3D diffusion), which performs on par with or better than LRMs, and far better than other native 3D methods.\", \"The paper is well-structured, clear, and easy to follow.\", \"A key strength is that the authors correctly point out a common issue with 3D latent diffusion models, where the outputs often look symmetric. They visually prove that geometric alignment augmentation is a well-suited solution for this problem. This is an original and significant contribution of the paper.\", \"Generative rendering augmentation is shown to be an effective augmentation pipeline in practice. This idea, to my knowledge, is original.\"], \"weaknesses\": [\"The paper lacks any quantitative evaluation, particularly in two key areas:\", \"Autoencoder quality: this could be benchmarked against models like 3DShape2VecSet.\", \"(Biggest weakness) 3D reconstruction quality (geometry only): without quantifiable metrics, it's difficult to assess the quality claims. 
Comparisons with recent LRM papers such as TripoSR and Stable Fast3D on datasets like GSO or OmniObject3D would be beneficial.\", \"The proposed render-enhanced autoencoder feels more like a combination of existing methods (e.g., 3DShape2VecSet + render-based loss, a technique used in prior works like DMV3D) rather than a novel contribution.\", \"I find the justification for ray-based regularization questionable. The paper mentions that\", \"> render loss alone leads to severe floaters\", \"Isn't that what the BCE loss on occupancy is meant to address? If used correctly, BCE should perform as well as, if not better than, ray-based regularization.\", \"The generative rendering augmentation seems like a training trick to artificially boost dataset diversity. While this may improve performance, it could complicate future comparisons. I'd recommend reporting metrics without this augmentation for a clearer evaluation.\", \"Finally, the texturing pipeline appears to be a technical application of existing ideas and seems more complementary to the paper\\u2019s core contribution.\", \"In summary, while the paper lacks significant core novelty, it presents a well-executed combination of techniques for the image-to-3D problem with native 3D diffusion models.\"], \"questions\": \"* Please include quantitative comparisons with the latest LRM papers (e.g., TripoSR, Stable Fast3D) on datasets like GSO or OmniObject3D.\\n\\nI understand that you're training a native 3D diffusion model, which is inherently more complex and resource-intensive compared to LRMs. 
It would be helpful to discuss whether your results are constrained by resource limitations, and to what extent techniques like rendering augmentation were necessary to achieve your results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel pipeline, termed MeshGen, designed for the generation of high-quality 3D meshes with physically based rendering (PBR) textures from a single image.\\nDuring the geometry generation stage, the authors first utilize a render-enhanced auto-encoder to encode 3D meshes into a compact latent space. Subsequently, an image-to-shape diffusion model is trained, incorporating geometric alignment and generative rendering augmentation to address challenges related to image-shape misalignment and the model's generalization capability.\\nIn the texture generation stage, the paper establishes a reference attention-based multi-view generator, which is subsequently followed by a PBR decomposer to extract PBR components, along with a UV-space inpainter to complete the rendering of occluded areas.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Generative rendering augmentation appears to hold significant promise.\\n\\n2. The results of geometry generation demonstrate satisfactory performance relative to available open-source non-commercial methods.\\n\\n3. The outcomes of PBR material generation are both compelling and credible, particularly the examples of metal objects presented in the appendix.\", \"weaknesses\": \"1. Robustness of PBR material decomposition? During the texture generation stage, the method initially produces multi-view shaded images, which are subsequently decomposed into their PBR components. However, the variability in light and shadow effects within these shaded images can be substantial. 
I am particularly interested in the robustness of the PBR decomposer. Specifically, I am curious to know whether the decomposer can effectively manage scenarios involving more intricate lighting conditions.\\n\\n2. PBR material generation results on more complex metal objects? The paper has demonstrated promising PBR material generation results on various metal objects, including a teapot and a roaster. I am intrigued by the potential outcomes of PBR generation on more complex metal objects, such as a game asset axe or a detailed representation of Iron Man.\", \"questions\": \"1. Geometry comparison with commercial products? I acknowledge that lower mesh quality, attributable to the constraints of high-quality data and computational resources, is a reasonable compromise. However, I am intrigued by your rationale for deeming the alignment termed \\\"symmetry\\\" as unnecessary.\\n\\n2. PBR material generation from scratch V.S. PBR material decomposition from shaded images? The paper presents a promising approach involving multi-view RGB image generation and subsequent multi-view RGB-to-PBR decomposition. Concurrently, an alternative methodology exists for generating albedo, metallic, and roughness attributes from scratch, as exemplified by the HyperHuman Rodin (CLAY). I am curious about your decision to opt for PBR material decomposition over this alternative technique. Additionally, I am interested in your assessment of the strengths and weaknesses of these two techniques (discussion without qualitative results is acceptable since HyperHuman Rodin is not open-source).\\n\\n3. More details about reference attention? Although I am not familiar with the reference attention, I am quite positively impressed by it. Could you please provide more detail?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0yvZm2AjUr
Monitoring Latent World States in Language Models with Propositional Probes
[ "Jiahai Feng", "Stuart Russell", "Jacob Steinhardt" ]
Language models (LMs) are susceptible to bias, sycophancy, backdoors, and other tendencies that lead to unfaithful responses to the input context. Interpreting internal states of LMs could help monitor and correct unfaithful behavior. We hypothesize that LMs faithfully represent their input contexts in a latent world model, and we seek to extract these latent world states as logical propositions. For example, given the input context ``Greg is a nurse. Laura is a physicist.'', we aim to decode the propositions WorksAs(Greg, nurse) and WorksAs(Laura, physicist) from the model's internal activations. To do so we introduce _propositional probes_, which compositionally extract lexical concepts from token activations and bind them into propositions. Key to this is identifying a _binding subspace_ in which bound tokens have high similarity (Greg $\leftrightarrow$ nurse) but unbound ones do not (Greg $\not\leftrightarrow$ physicist). Despite only being trained on linguistically simple English templates, we find that propositional probes generalize to inputs written as short stories and translated to Spanish. Moreover, in three settings where LMs respond unfaithfully to the input context---prompt injections, backdoor attacks, and gender bias--- the decoded propositions remain faithful. This suggests that LMs often encode a faithful world model but decode it unfaithfully, which motivates the search for better interpretability tools for monitoring LMs.
[ "Interpretability", "Language models", "AI Safety" ]
Accept (Spotlight)
https://openreview.net/pdf?id=0yvZm2AjUr
https://openreview.net/forum?id=0yvZm2AjUr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tvBlmbBhFY", "mmIupWw0ql", "f6bOjNS0OY", "e9VxtPK1Ai", "TyfoDpgwYI", "Ljk66HnS0C", "ImN0pSIItQ", "EOaTOurwDP", "C0aoMYqT2J", "AOfKv0aoEX", "8zmmnFkNka", "3ekkPiaIfN", "39DxiuvLk0", "1QyDcOhVvi", "0TX7qGvlig" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732665727132, 1730630233930, 1732313707771, 1732317360409, 1732636806922, 1737523619283, 1732319161690, 1732591297937, 1732563241250, 1730659961933, 1730415935612, 1730563126010, 1734487072277, 1732314612121, 1732320526857 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_RZaJ" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_JBEz" ], [ "ICLR.cc/2025/Conference/Submission4104/Authors" ], [ "ICLR.cc/2025/Conference/Submission4104/Authors" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_zN2i" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4104/Authors" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_JBEz" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_vPtB" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_RZaJ" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_vPtB" ], [ "ICLR.cc/2025/Conference/Submission4104/Reviewer_zN2i" ], [ "ICLR.cc/2025/Conference/Submission4104/Area_Chair_YE3P" ], [ "ICLR.cc/2025/Conference/Submission4104/Authors" ], [ "ICLR.cc/2025/Conference/Submission4104/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to authors\", \"comment\": \"Thank your for your detailed responses, and additions to the paper which do increase its value in my eyes. 
I have raised my overall score from 6 to 8.\"}", "{\"summary\": \"This paper proposes a method for finding a low-dimensional linear subspace of an LM's activation space which, so the main claim of the paper, encodes binding information.\\n\\nWhile *binding* is a very broad and complex concept (see Greff et al., 2020, https://arxiv.org/abs/2012.05208 for a recent overview), in this paper binding refers to the process by which textual mentions of two or more entities and attributes that participate in a certain relation are bound together into a more abstract representation of that relation. For example, understanding the text \\\"Alice lives in Laos\\\" entails recognizing \\\"Alice\\\" and \\\"Laos\\\" as mentions of entities and then forming an abstraction of their relation that can be expressed as a proposition like LivesIn(Alice, Laos). On the representational level in a neural network this requires creating internal representations of the entities in question, and then performing some transformation on those representations that signals to subsequent layers that these two representations are \\\"bound together\\\".\\n\\nThe main hypothesis of the paper is that this transformation can be seen as a function that takes two entity representations x and y as input and outputs their \\\"binding strength\\\" F(x, y), i.e., a score that quantifies whether the two entities are bound or not. Assuming that F is bilinear in the entity representations x and y, the authors propose a method to estimate F via its Hessian matrix H. If the binding subspace is low-dimensional, then F and the Hessian H should be low rank, which motivates the authors to analyze the rank-k truncated SVD of H. By measuring the efficacy of interchange interventions as a function of k, the authors find that a k=50 dimensional subspace mediates binding, i.e., when manipulating activations in this subspace model output changes accordingly. For example, given the input \\\"Alices lives in Laos. 
Bob lives in Peru.\\\" one can make the LM say \\\"Bob lives in Laos\\\" by intervening on activations in this low-dimensional subspace.\\n\\nHaving developed the machinery to probe an LM for internal representations of propositions, the paper demonstrates several use cases for analyzing discrepancies between the model's internal representations and its output, finding cases in which the model appears to internally represent a proposition but generates output that is inconsistent with it.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality:** The paper proposes a novel method for identifying a low-dimensional subspace which appears to causally mediate binding behavior in language models. Compared to the rather indirect evidence seen in prior work (Feng & Steinhardt, 2023), the present submission directly identifies this subspace, which results in a much greater degree of manipulability and interpretability.\\n\\n**Quality:** The method is well-motivated (\\u00a75.2) and -- at least on the data used -- works well empirically (\\u00a76.1). The qualitative analysis (Figures 5, 7, 8 and related discussion) nicely illustrates similarity in the low-dimensional dimensional subspace, as well as the limitations of the method.\\n\\n**Clarity:** The paper is structured well, is clearly written and flows naturally.\\n\\n**Significance:** Binding is generally believed to be an essential component in the formation of internal/situation/world models. As such, any progress towards understanding if/how language models perform binding on a representational level is an important contribution.\", \"weaknesses\": \"**Edit after author response:** The authors thoroughly addressed all issues by conducting experiments that disentangle position, order, and \\\"true\\\" binding. I've raised my scores accordingly: soundness 2 -> 3, overall rating 6 -> 8\\n\\n---\\n**Weaknesses before revision below. 
These issues are now sufficiently addressed in the appendix of the revised manuscript:**\\n\\nThe paper does not do enough to rule out an alternative, simpler hypothesis that could potentially explain the results. Concretely, it appears possible that, due to the highly regular nature of the data, the low-dimensional subspace claimed to encode binding information primarily encodes positional information or order information. The running example \\\"Alice lives in Laos. Bob lives in Peru.\\\" has highly regular structure with fixed positional offsets between person and country names, so it is conceivable that the proposed method actually identifies a \\\"position subspace\\\" or \\\"entity/attribute order subspace\\\" and that interchange interventions claimed to modify binding information in fact modify positional or order information. The paper takes two steps in the direction of ruling out this alternative explanation, but I do not believe that they are sufficient:\\n1. Using an LM to rewrite the template-generated texts into short story-like paraphrases. My concern here is that it is unclear how much of the original regularity remains in the paraphrases and how variations in the paraphrases relate to probe performance in Table 2. Since the probe's exact-match performance on the paraphrases is much lower than on the template-based data, it is possible that the probe works best on the paraphrases that are structurally closer to the templates and drops as the paraphrases become more varied and creative. An additional analysis looking at, say, probe performance as a function of token distance between entities and attributes in the paraphrases could provide evidence for or against position being encoded in the identified low-dimensional subspace.\\n2. A qualitative comparison in which position and order are varied (\\\"parallel\\\" setting in Figure 5, coreference examples in Figure 7). 
While encouraging, these are only a few qualitative examples of representational similarities. Here, systematic, quantitative experiments would go a long way towards ruling out alternative explanations. Data for such experiments could be relatively easily generated by varying position and order in the templates, e.g., \\\"Alice lives currently and has always lived in Laos. Bob lives in Peru\\\", which varies the token distance between the bound arguments, or \\\"Alice lives in Laos. Peru is where Bob lives.\\\", which swaps the order of arguments. If the authors can show that the subspace mediates binding in a similar manner, this would make a much stronger justification for calling it a \\\"binding subspace\\\".\", \"questions\": \"My main concern and suggestions on how to alleviate it are given in the weaknesses. If the authors can present evidence that helps rule out the positional/order explanation I'm more than happy to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
We share your belief that identifying binding in representation space is an important contribution.\", \"the_main_weakness_you_highlighted_was_that\": [\"\\\"The paper does not do enough to rule out an alternative, simpler hypothesis that could potentially explain the results.\\\" Specifically, you are concerned that instead of the true binding vector, the binding subspace could be capturing spurious information such as position or order.\", \"Based on your suggestions, we conducted a thorough quantitative investigation (Appendix I). Our main finding is that the extracted binding subspace is not influenced by position, but captures both the true binding vector and the order so that performance is degraded in a particular ordering of names and countries. However, despite the binding subspace's partial susceptibility to order, the propositional probes still outperform prompting in the adversarial settings.\", \"Here, we briefly summarize the key experiments, and leave the details to Appendix I.\", \"We create templates of increasing distance between the first name and country, and found that propositional probes perform consistently well. We therefore conclude our binding subspace is not influenced by position.\", \"We create 5 templates corresponding to different orderings of names and countries. These 5 orderings include the one you proposed \\\"Alice lives in Laos. Peru is where Bob lives\\\", which we call the __reversed__ ordering.\", \"4 of the 5 orderings, including the proposed __reversed__ ordering, show consistent performance.\", \"1 particular ordering shows a degradation in the probe's performance. We call this ordering the __nested__ ordering, and it looks like this: \\u201cAlice and Bob are friends. The latter lives in Germany. The former lives in France.\\u201d\", \"This degradation is significant but not total. 44% of the time, the probe still returns the correct propositions. 50% of the time, the probe assigns both countries to the same name. 
If the binding subspace had been capturing only order information, but not binding, we would expect the propositions to be wrong 100% of the time. In contrast, the results are more consistent with the interpretation that the binding subspace captures both binding and spurious features corresponding to order.\", \"However, if we apply the nested ordering to the adversarial settings (prompt injection, dataset poisoning, and gender bias), our probes still outperform prompting.\", \"There are signs that the language model itself may conflate order and binding. In an alternate binding template that looks like \\\"Alice, unlike Bob who lives in Germany, lives in France.\\\", we find that prompting fails catastrophically because the model thinks that both the first name and the second name in the context are \\\"Alice\\\".\", \"The degradation to propositional probes can be ameliorated by a small change to the composition algorithm: constraining the proposed propositions to have distinct entities.\", \"Overall, we are grateful for your suggestions. Our experiments based on your suggestions imply that binding may be entangled with order, and our methods can be further improved by attempting to disentangle them. Nonetheless, our methods at present are still robust enough to outperform prompting in the adversarial settings.\"]}
We are happy you felt that the \\\"method is likely to be very useful to the interpretability community\\\".\\n\\n> Results are limited to one model.\\n\\nWe have run the experiments in the main paper on the Llama-2-13b-chat model, and put the results in a new appendix J. Broadly speaking, the results are very similar to the Tulu-2-13b model we used (which may not be surprising, because they are both finetuned from the same base model.)\\n\\n> I noticed that the number of values of k that were evaluated is not super high, why is that? Is Hessian-based algorithm computationally intensive?\\n\\nTo compute the Hessian requires $d$ backward passes, where $d$ is the dimension of residual stream. For most open weight models this is around 1000-8000, which takes a few hours to run on A100 gpus. In principle the backward passes can be batched, but our attempts to use the torch functional APIs to achieve this have not been successful.\\n\\n> Separate probes are learned for each domain, but every domain contains only one predicate, do you have any sense of how well propositional probes might generalize across predicates? Is there a good way to quantify how different the probes for each domain are?\\n\\nAt present, our results in Table 2 (right) show that of the four domains, (names, foods, countries, occupations), names probes generalize the best from the synthetic training data to paraphrased and translated data, whereas food probes generalize the worst. One reason why this might be the case is that representations for food items might be different in different contexts, whereas proper nouns like names are more straightforward. For example, 'apple' could connote the fruit, the tech company, or even New York City. 
In our paper, we quantify the performance of the probes with their generalization behavior from simple synthetic dataset to diverse paraphrased datasets, but there could be more systematic ways of comparing domain probes that directly study their representations.\\n\\n> Do you think anything be gleaned from the singular values of H? Do they correlate with the accuracies in Figure 4 at all?\\n\\nYes, we think the singular values are important. The accuracies in Fig. 4 are obtained by sorting the singular values in descending order, and then taking the top $k$ singular vectors as the intervention subspace. If we were to sort in a random order, and take the top $k$ singular vectors (i.e. taking $k$ random vectors), we would have 0% accuracy. \\n\\nWe also took a look at the magnitudes of the singular values. The singular values decay rapidly: the highest singular value is as high as ~250, and decays to ~10 by the 50-th vector, to ~1 by the 200-th, and to ~0.1 by the 1000-th. Since there are 4096 hidden dimensions, most of the singular values (> 3/4) are less than one-hundredth of the 50-th singular value, which is our cut off for the binding subspace.\"}", "{\"title\": \"convincing experiments, raised scores\", \"comment\": \"Thank you for conducting these experiments in such short time! I think they're very convincing and your interpretation of the results makes sense. I've raised my scores accordingly.\"}", "{\"comment\": \"Thank you for your response, this clarified my question about your synthetic dataset creation!\"}", "{\"summary\": \"The paper studies how LLMs might encode propositions stated in the context, like \\\"Greg is a nurse. 
Laura is a physicist.\\\", by looking at the activations associated with the Greg/nurse tokens, and trying to identify \\\"propositional probes\\\" through a \\\"binding subspace\\\" of these vectors which are aligned when the proposition holds.\\n\\nThey use a synthetic set with 4 domains (people, countries, foods, occupations), each with a set of non-overlapping entities (from 14 to 60 per domain). They define a somewhat heuristic \\\"domain identifier\\\" probe to pick up tokens associated with each entity, and then (main novelty) use a low-rank Hessian approach to identify these binding subspaces.\\n\\nThere is analysis for how effective these subspaces are in changing the binding between entities (e.g., to make Greg the physicist after the context above, when answering a question like \\\"Therefore, Greg's occupation is\\\"). The conclusion is that it \\\"works\\\" to some extent, but with caveats, especially when the context gets more complicated (going from 2 to 3 entities). In addition to testing on the synthetic contexts, there is an LLM generated variant (PARA) that turns the sequence of propositions into an actual story format, and one that translates this story into Spanish (TRANS). There is non-trivial carry over of the effect to these cases. There are also comparisons to other probing baselines.\\n\\nFinally, they also test on some \\\"adversarial\\\" cases: 1) Prompt injection (encourage the model to answer wrongly), 2) Backdoor attacks (model fine-tuned to do badly in Spanish), 3) Gender bias (measure amount of gender bias in output probabilities for stereotypical vs anti-stereotypical occupation assignments). 
In all three cases they find the propositional probes are more faithful to the underlying \\\"true\\\" propositions vs the actual model output.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Although I am not very familiar with this field, the method for identifying the binding subspaces seems quite novel, and potentially will provide useful insights into model behavior.\\n\\nThe task setup, although very synthetic in nature, has the PARA and TRANS variants which make it a potentially fruitful testing ground for these kinds of questions.\\n\\nThe general topic of understanding mechanisms and world states inside of LLMs is both interesting and important.\", \"weaknesses\": \"The \\\"domain probes\\\" to classify tokens into domain seem quite heuristic (using the mean vector of all the entities in the domain), and it seems like there could be some evaluation to see how it works (e.g., is it always the \\\"obvious\\\" tokens, like the \\\"Alice\\\" token is the name token?).\\n\\nSome of the decisions in the binding space design seem quite arbitrary, like \\\"For tractability, we parameterize x and y so that they are shared across layers\\\". Maybe it would then be better to just focus on a few layers? But it's perhaps fine to leave that for future investigation.\\n\\nFor the Prompt Injection setting (instructing the model to \\\"Always answer the opposite.\\\"), it's hard to say what a \\\"correct\\\" output should be, in fact the prompting method should probably \\\"ideally\\\" always be \\\"wrong\\\". 
So saying \\\"prompting performs worse\\\" is a bit confusing, although it's still an interesting result that the probing outputs are virtually unchanged.\", \"questions\": \"Suggestion for Table 1: Make it clearer that the (P) and (FT) columns correspond to specific adversarial settings\\n\\nIt would be interesting with some more error analysis for what breaks down when the subspace hypothesis fails, to get insights into the potential for these methods to scale to more complex settings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method to extract latent world states in the form of propositional probes. They form predicate-argument-argument triples for multiple domains. They propose a method based on a Hessian-based algorithm in order to identify the binding subspace. They evaluate the propositional probes in both standard and adversarial settings. For the adversarial setting they find that the propositional probes stay more faithful to the input.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and comes with multiple contributions. The contributions include the use of propositional probes, the definition of the hessian-based algorithm and the confirmation of two hypotheses - that propositions can be decided from internal activations and that these propositions are faithful to the input context.\", \"weaknesses\": [\"I would have liked to see a stronger focus on the adversarial experiments, in the paper. 
In particular, a deeper analysis on why probes remain faithful and how backdoor attacks and prompt injection could be prevented using your method.\", \"The synthetic dataset setup seems very simplistic and could have been made more true to real life use, such as by using paragraphs of existing texts and extracting propositions from them.\"], \"questions\": [\"Why did you go from proposition to text and not the other way around: use existing text (from the wild) and generate propositions from it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new probing technique, called propositional probes. Such probes are functions with two arguments both of which are language model representations. When applied to entities in the probe's domain, the output of the function is a symmetric metric which is expected to be high if the corresponding tokens are bound.\\nA Hessian-based algorithm is proposed to find a low-rank bilinear propositional probe. The algorithm starts with a way to query the language such that giving the correct answer depends on the ability to identify if entities are bound. In the paper's experiments, the language model is asked to repeat some relational information provided in-context (e.g. which country does entity0/entity1 live in). However, the representations of the two entities are set to their midpoint, such that the Hessian reveals how the representations would have to change in order to accurately represent their binding. After the Hessian is calculated, SVD is applied and only the top k-dimensional subspace is kept. \\nTo evaluate this algorithm, 'interchange interventions' are performed where the positions in the identified subspace of two (out of three) entity representations are swapped. When the model is queried, it reports the 'wrong' entity with close to perfect accuracy for some values k. 
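The "SVD, keep the top-k subspace, then swap within it" procedure summarized above can be sketched in a few lines of numpy. This is a toy illustration under our own simplified assumptions, not the paper's implementation:

```python
import numpy as np

def top_k_subspace(H, k):
    # SVD of the (Hessian-derived) matrix; the top-k right singular vectors
    # span the candidate binding subspace
    _, _, Vt = np.linalg.svd(H)
    return Vt[:k]  # (k, d) orthonormal basis

def interchange(z1, z2, B):
    # Swap the coordinates of z1 and z2 inside span(B); everything
    # orthogonal to the subspace is left untouched
    p1, p2 = B @ z1, B @ z2
    return z1 + B.T @ (p2 - p1), z2 + B.T @ (p1 - p2)

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 8))        # toy stand-in for the Hessian
B = top_k_subspace(H, 2)
z1, z2 = rng.normal(size=8), rng.normal(size=8)
a, b = interchange(z1, z2, B)      # a now carries z2's in-subspace coordinates
```

After the swap, each vector keeps its original component outside the subspace, which is what lets the intervention change the binding while leaving other content intact.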
The binding strength is also visualized for some example inputs.\\nFurther evaluations demonstrate that the probe match prompting performance in ordinary setting, and outperform it in adversarial settings.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"A new type of probe and corresponding algorithm to construct them are presented, this method is likely to be very useful to the interpretability community. Propositional probes will allow probing LLMs for the ability to represent entities as standing in certain relations to one another.\", \"The subspace identified is clearly shown to be causally implicated.\", \"Probes are shown to outperform prompting in adversarial setups.\"], \"weaknesses\": [\"Side effects of the interventions are not investigated, it would be great to evaluate how much performance is/isn't lost, as an indication of how precise the interventions are.\", \"Results are limited to one model.\"], \"questions\": [\"I noticed that the number of values of k that were evaluated is not super high, why is that? Is Hessian-based algorithm computationally intensive?\", \"Separate probes are learned for each domain, but every domain contains only one predicate, do you have any sense of how well propositional probes might generalize across predicates? Is there a good way to quantify how different the probes for each domain are?\", \"Do you think anything be gleaned from the singular values of H? Do they correlate with the accuracies in Figure 4 at all?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method to interpret language models by extracting logical propositions that they claim represent the internal state of the model, by discovering bindings in a low-dimensional linear subspace of the model. 
The subspace discovery algorithm and the propositional probes were evaluated independently; the latter was done in standard and adversarial settings. While the model can generate falsehoods, these propositions may still hold; this was shown to hold for different adversarial settings.\\n\\n**Strengths**: The idea is novel and the algorithm can be, in principle, generalized to extract other phenomena, including more complex propositional logic. Experimental results in the standard settings were sound and convincing.\\n\\n\\n**Weaknesses**: Like some of the reviewers, I was a little underwhelmed and baffled by the experimental findings in the adversarial setting: the model does engage in \\\"deceitful\\\" behavior despite having a \\\"correct\\\" internal world representation which makes it somewhat antithetical to the original hypothesis of the paper. For instance, what are the implications when models exhibit behavior unfaithful to its internal state? Does this mean that the internal world representation is weak?\\n\\n**Reason for acceptance**: See strengths; contributions of this work other than the adversarial settings are quite interesting and potentially useful, and the work has its merits.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers' objections to the work included the synthetic nature of the task setup - this was addressed by the authors by motivating their work from the use of minimal pairs common in theory of mind experiments. Some issues regarding design choices were also pointed out which the authors resolved by presenting additional experiments. There were also some issues with the adversarial setting; the authors' response seems to have addressed most reviewers' objections. 
Regardless, I agree with the reviewers with other strengths of this work.\"}", "{\"comment\": \"Thanks for the thoughtful review; we\\u2019re happy that you found the investigation into world states \\u201cinteresting and important\\u201d, and appreciated the novelty of the Hessian method for identifying the binding subspace. We\\u2019d like to provide some clarification around some of the weaknesses you raised.\\n\\n> The \\\"domain probes\\\" to classify tokens into domain seem quite heuristic (using the mean vector of all the entities in the domain), and it seems like there could be some evaluation to see how it works (e.g., is it always the \\\"obvious\\\" tokens, like the \\\"Alice\\\" token is the name token?).\\n\\nOur domain probes are a multi-class generalization of the difference-in-means probes, which some previous works have argued are more robust than standard linear probes trained with logistic regression. For example, Belrose (2024) analyzes some theoretical properties of these probes.\\n\\nAlthough not the main focus of the work, we conducted some analysis into the domain probe. Using Grad-CAM style saliency maps (appendix F), we can estimate the extent to which the activations at each layer and token position contribute towards the correct answer, and we found that indeed it is usually the \\u201cobvious\\u201d token that matters, and when the key word contains multiple tokens, it is the last token that matters. The importance of the last token is also previously discovered in the knowledge editing setting (Meng et al, 2022). \\n\\n> Some of the decisions in the binding space design seem quite arbitrary, like \\\"For tractability, we parameterize x and y so that they are shared across layers\\\". Maybe it would then be better to just focus on a few layers? But it's perhaps fine to leave that for future investigation.\\n\\nThis is a good observation. This design choice, and others, are motivated by the circuits responsible for solving binding. 
We have updated Appendix C to motivate parameterizing $x$ and $y$ across layers. At a high level, the binding vectors for entities are accessed at different layers than the binding vectors for attributes. In principle you can choose to localize the specific layers relevant for entities and attributes, and then perform the injection for those layers \\u2014 we expect this idea to further improve the accuracy of the binding subspace, at the cost of having more complexity and hyperparameters to tune.\\n\\n> For the Prompt Injection setting (instructing the model to \\\"Always answer the opposite.\\\"), it's hard to say what a \\\"correct\\\" output should be, in fact the prompting method should probably \\\"ideally\\\" always be \\\"wrong\\\". So saying \\\"prompting performs worse\\\" is a bit confusing, although it's still an interesting result that the probing outputs are virtually unchanged.\\n\\nWe can see why this is confusing. There are two different ways of framing this that can justify answering the wrong answer as the undesired output. First, from the security point of view, prompt injections are malicious parts of the input that poisons the input context so that the model stops performing as we expect. For example, consider an LLM agent browsing the web on a user's behalf. If it encounters a piece of text that asks it to start misleading the user, and follows the instruction to mislead the user, then we would ideally want some way to monitor the LLM agent when this happens. \\n\\nThe second way of framing it is that we are interested in latent world models in language models. Perhaps, despite being instructed to lie, the model will still first form a coherent world state internally, and then modify its output based on this world state. Our results show exactly this.\\n\\n\\nBelrose, Nora. \\\"Diff-in-means concept editing is worst-case optimal: Explaining a result by Sam Marks and Max Tegmark, 2023.\\\" URL https://blog. eleuther. 
ai/diff-in-means (2024).\\n\\nMeng, Kevin, et al. \\\"Locating and editing factual associations in GPT.\\\" Advances in Neural Information Processing Systems 35 (2022): 17359-17372.\"}", "{\"comment\": \"Thank you for the thoughtful review. We are happy that you found our paper \\\"well-written\\\" and that it \\\"comes with multiple contributions\\\".\\n\\n> Why did you go from proposition to text and not the other way around: use existing text (from the wild) and generate propositions from it?\\n\\nOur main intuition is that going from structured representations (propositions) to unstructured representations (text) is easier than in the opposite direction. Further, there are some practical challenges involved in generating propositions from text-in-the-wild:\\n\\n- Not every piece of internet text fits the confines of the 4 domains that we have\\n- Labels will not be balanced: probes can potentially cheat, e.g. by predicting the nationality of people from their names.\\n\\nThis overall paradigm is related to the use of \\\"minimal-pairs\\\" in designing psychology experiments, which has been used in creating ML benchmarks such as EWOK (Ivanova, et al, 2024) and BigToM (Gandhi et al, 2024).\\n\\nHowever, we agree that if propositional probes are scaled to sufficiently diverse domains, text-in-the-wild can be a good downstream evaluation; it is still important to use a synthetic, balanced dataset as the main evaluation.\\n\\n\\nGandhi, Kanishk, et al. \\\"Understanding social reasoning in language models with language models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\nIvanova, Anna A., et al. \\\"Elements of World Knowledge (EWOK): A cognition-inspired framework for evaluating basic world knowledge in language models.\\\" arXiv preprint arXiv:2405.09605 (2024).\"}" ] }
0ydseYDKRi
Beyond The Rainbow: High Performance Deep Reinforcement Learning On A Desktop PC
[ "Tyler Clark", "Mark Towers", "Christine Evers", "Jonathon Hare" ]
Rainbow Deep Q-Network (DQN) demonstrated that combining multiple independent enhancements could significantly boost a reinforcement learning (RL) agent’s performance. In this paper, we present "Beyond The Rainbow" (BTR), a novel algorithm that integrates six improvements from across the RL literature into Rainbow DQN, establishing a new state-of-the-art for RL using a desktop PC, with a human-normalized interquartile mean (IQM) of 7.6 on Atari-60. Beyond Atari, we demonstrate BTR's capability to handle complex 3D games, successfully training agents to play Super Mario Galaxy, Mario Kart, and Mortal Kombat with minimal algorithmic changes. Designing BTR with computational efficiency in mind, agents can be trained using a high-end desktop PC on 200 million Atari frames within 12 hours. Additionally, we conduct detailed ablation studies of each component, analyzing the performance and impact using numerous measures.
[ "Reinforcement Learning", "Computational Efficiency", "High Performance", "Atari", "Value-Based", "DQN", "Rainbow DQN", "BeyondTheRainbow" ]
Reject
https://openreview.net/pdf?id=0ydseYDKRi
https://openreview.net/forum?id=0ydseYDKRi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v1WiPWt0d2", "tXR5y79lam", "n305KnKCUK", "n2ymH0tJy9", "krl5yAJHlw", "hSH6DMDlnQ", "h9atyS49s1", "fvrKtkwj3a", "eMqZvmvmrU", "a5PhNkxjec", "Ye9ygbwEFC", "XGHb5FOKvh", "X1TasXDBhJ", "VT2cNw0lJm", "SfRPykv48M", "STSpn2rFox", "RqAxShh7Tr", "OngB2gtLEY", "MG5Lc5YTt4", "KYHxMyMfwl", "JFOMmtQCqC", "BQfsk5Z0Zv", "6WFI2f1z1g", "6Evl6oQLiN", "2GXD8XYajz" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732566982214, 1731679632215, 1732209821646, 1732635642835, 1733142806100, 1731680149699, 1732201090749, 1731938434833, 1732292955505, 1732535778861, 1732959247593, 1730695902068, 1732535495363, 1737523637664, 1731681576237, 1730480514507, 1731679339852, 1730566164320, 1734380378754, 1733134971781, 1732535416259, 1731751838850, 1730662708019, 1731681145461, 1732535325926 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_CULH" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_GWAQ" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_2n46" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_GWAQ" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_LxNJ" ], [ 
"ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_2n46" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_GWAQ" ], [ "ICLR.cc/2025/Conference/Submission4398/Area_Chair_QzYL" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_CULH" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_GWAQ" ], [ "ICLR.cc/2025/Conference/Submission4398/Reviewer_CULH" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ], [ "ICLR.cc/2025/Conference/Submission4398/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the changes. I had not realized how so many of the results are based on a single seed, this almost makes the results in the paper meaningless I'm afraind.\\nWould it be possible to update the figures in the appendix to use the median of 3 seeds + a tolerance interval for the shaded area? Also I don't believe you define $\\\\epsilon$-actions in the paper.\\nI still find the paper to have potential, but things like Figure 1, B1, and F8 are again almost meaningless without the use of multiple seeds\"}", "{\"comment\": \"Thank you for your insightful review, which succinctly outlined our paper\\u2019s contributions, results, and analysis.\\n\\nIn answer to where the novelty in this work comes from, we identify three sources:\\n - The process of choosing, integrating, and testing components together is a time-consuming process that requires diligence. In addition to the six improvements included in BTR, several more promising extensions were considered and investigated, though dismissed (see Appendix I for more detail). In publishing the paper, we hope this will prevent future researchers from recomputing the same tests conducted in this paper. 
While integrating these components, we found it necessary to re-tune hyperparameters, especially when using vectorization with components designed to be used with a single actor (e.g., batch size, learning rate, and frequency of updating the target network).\\n- We analyze the components across several measures beyond performance, demonstrating their different and sometimes competing effects on action gaps, policy churn, and observational noise. Such analyses have been completed previously, though not across as many measures or as varied improvements.\\n- We demonstrate BTR beyond the standard testing suite to Wii games, showcasing BTR\\u2019s continued strength as an algorithm outside of standard research testing environments. This includes games with far more graphically intensive and challenging domains than what any other algorithm (with similar compute resources) has been shown to handle.\\n\\nOn the challenge of integrating these components, we found it technically complex to simultaneously support Impala, Spectral Normalization, Maxpooling, Dueling Networks, Noisy Networks, and IQN, which has never before been attempted. An additional complexity was implementing a replay buffer that supports vectorization, N-Step TD Learning and Prioritization in a memory-efficient way. This approach stores each frame individually rather than all four frames together, reducing memory overhead from 28GB to 7GB in total for a million Atari observations. 
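The memory saving comes from storing each 84x84 uint8 frame once and rebuilding the 4-frame stacks by index at sample time (a million stacked observations cost ~28GB; individual frames ~7GB). A rough illustrative sketch of the idea, our own rather than the released code:

```python
import numpy as np

class FrameBuffer:
    """Store each frame once; rebuild 4-frame stacks by index at sample time.

    A naive buffer of stacked uint8 observations stores every frame ~4 times;
    indexing into single frames avoids that. Episode boundaries and circular
    wraparound edge cases are ignored here for brevity.
    """

    def __init__(self, capacity, frame_shape=(84, 84), stack=4):
        self.frames = np.zeros((capacity,) + frame_shape, dtype=np.uint8)
        self.capacity, self.stack, self.idx = capacity, stack, 0

    def add(self, frame):
        self.frames[self.idx % self.capacity] = frame
        self.idx += 1

    def get_obs(self, t):
        # stack the `stack` most recent frames ending at time t
        ids = [max(0, t - o) % self.capacity
               for o in range(self.stack - 1, -1, -1)]
        return self.frames[ids]

buf = FrameBuffer(capacity=1000)
for t in range(10):
    buf.add(np.full((84, 84), t, dtype=np.uint8))
obs = buf.get_obs(9)  # stacks frames 6, 7, 8, 9
```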
To help future researchers and hobbyists, we open-source our code.\\n\\nWe are happy to further elaborate on any of these at your request.\"}", "{\"comment\": \"Thank you for your response.\\n\\nI think you can keep figure 4 as is, it is just somewhat hard to parse (although maybe a simplified version in the main text alongside the full one in the appendix could be an option).\\n\\nI also appreciate the comparison with PQN, and I think it would be beneficial to add it to the paper (in the appendix would be fine)\", \"i_think_the_following_are_still_big_problems\": \"1. Having only one seed is quite problematic, and the aggregation method is nonstandard. This was not made clear in the paper (at least to my reading).\\n2. No baselines for the Wii games is not great: Having at least something is necessary to compare the proposed algorithm against.\\n3. I think it would still be beneficial to compare against PPO, but a more reasonable number of seeds is a higher priority.\"}", "{\"comment\": \"While Figure 1 does use a single seed, we would like to point out that this is an average of 60 tasks, and even our lower 95% confidence intervals substantially outperform other popular algorithms such as Rainbow DQN (see Figure B2). Furthermore, we are currently running additional seeds, and will have 3 completed seeds before the final submission. We will additionally run 3 seeds for Figure F8, however have removed claims about this Figure as we don\\u2019t believe they are key to the paper. Given this, we don\\u2019t significantly rely on any results we are yet to produce.\\n\\nThank you for pointing out the issue with \\u03f5-actions, we will add this in the section 2.2.\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s insistence on high-quality empirical data and are running two more BTR seeds to validate our initial results. 
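For readers skimming the thread: the IQM quoted throughout is a 25% trimmed mean over per-game human-normalized scores, and rliable-style confidence intervals come from a percentile bootstrap. A minimal numpy sketch (not the rliable implementation):

```python
import numpy as np

def iqm(scores):
    # interquartile mean: drop the top and bottom 25%, average the rest
    s = np.sort(np.asarray(scores, dtype=float))
    cut = int(0.25 * s.size)
    return s[cut:s.size - cut].mean()

def bootstrap_ci(scores, n_boot=2000, seed=0):
    # percentile bootstrap over tasks, 95% interval
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    stats = [iqm(rng.choice(scores, size=scores.size, replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])

game_scores = np.arange(1.0, 11.0)   # toy per-game normalized scores
point = iqm(game_scores)             # 5.5
lo, hi = bootstrap_ci(game_scores)
```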
For this second seed, we have run 38 games so far, with the newly computed confidence intervals for these games, as shown in the table below.\\n\\n| Algorithm | IQM | Lower 95% CI | Upper 95% CI |\\n|-------------------|-------|--------------|--------------|\\n| BTR (2 seeds) | 5.480 | 5.049 | 5.928 |\\n| Rainbow (5 seeds) | 1.533 | 1.500 | 1.568 |\\n| DQN (5 seeds) | 0.871 | 0.842 | 0.901 |\\n\\nThe reason for this significant change in the confidence interval is how CI is computed over a single vs multiple seeds. For a single seed, task bootstrapping describes an algorithm\\u2019s sensitivity to tasks, compared to a larger unknown population of tasks. Computing CIs over seeds, however, simply compares how the performance is likely to change over multiple runs.\\n\\nAdditionally, this second seed outperforms our first seed with IQMs of 5.908 and 5.534. In conclusion, we have shown that BTR outperforms Rainbow to statistical significance, therefore alleviating the concerns of our reviewers.\"}", "{\"comment\": \"Thank you for your diligent and detailed response, which summarizes the paper\\u2019s strengths and raises productive criticisms.\\n\\n**Details of Adaptive Maxpooling**\\n\\nThanks for spotting this. We will update the paper to clarify Adaptive Maxpooling and how it works. \\n\\nAdaptive Maxpooling is identical to the standard maxpooling layer, except the output shape is an argument with the kernel size and stride automatically adjusting for different input resolutions, enabling support for different resolutions with no algorithmic change or needed learnable parameters. 
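To illustrate, here is a pure-numpy mimic of that behaviour, following PyTorch's documented floor/ceil window rule (`nn.AdaptiveMaxPool2d` implements this natively); treat it as a sketch rather than the exact kernel:

```python
import numpy as np

def adaptive_max_pool2d(x, out_hw):
    """Pure-numpy mimic of adaptive max pooling over a 2D array.

    The output grid is fixed; window boundaries stretch with the input
    (start = floor(i*H/out), end = ceil((i+1)*H/out)), so any input
    resolution maps to the same (out_h, out_w) shape.
    """
    H, W = x.shape
    out_h, out_w = out_hw
    y = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        h0, h1 = (i * H) // out_h, -((-(i + 1) * H) // out_h)  # floor, ceil
        for j in range(out_w):
            w0, w1 = (j * W) // out_w, -((-(j + 1) * W) // out_w)
            y[i, j] = x[h0:h1, w0:w1].max()
    return y

# Same 6x6 output regardless of input resolution
a = adaptive_max_pool2d(np.random.rand(20, 20), (6, 6))
b = adaptive_max_pool2d(np.random.rand(33, 47), (6, 6))
```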
Our code simply uses the PyTorch Adaptive Maxpooling 2D layer with a maxpool size of (6, 6) as a drop-in replacement for standard maxpooling.\\n\\n**Citations and comparison against other work**\\n\\nIn an earlier draft, we included a comparison against Schmidt and Schmied, 2021 in Figure 4, however, after some investigation found that they used a \\u201clife information signal\\u201d when evaluating their results. As demonstrated in Appendix J, this significantly impacts performance and is not recommended for evaluation by Machado et al., 2018 [1]. Because of this, we decided not to include these results in the main paper as we believed it would confuse readers as to the reason for the performance difference, however, we did still include a comparison in Appendix A.\\n\\nWith regard to your requested citations, we think these are strong relevant additions to the paper and have added them in Sections 3.1 and 3.2, respectively.\\n\\n**Adam\\u2019s Epilson Formula**\\n\\nThe parameter `1.95e-5` for Adam\\u2019s epsilon comes from Schmidt and Schmied, 2021. We will add a reference to this. While the value was correct, we apologize for a minor mistake in our formula, which should have been `0.005 / batch_size`, as opposed to `0.05 / batch_size`. \\n\\n**BTR\\u2019s hyperparameters on different domains**\\n\\nFor new domains, e.g., the Wii games, we did not do any further hyperparameter tuning, keeping the same set across Atari, ProcGen, and Wii games. We wouldn\\u2019t be surprised if further improvements could be made to individual environments with further tuning, but we aim to produce a single algorithm that works across many domains, which is in line with DQN\\u2019s original motivation. In terms of the most important parameters, we found the learning rate, discount rate, and N (from N-Step TD learning) to be the most significant. \\n\\n**Using A High-End Desktop PC**\\n\\nWe agree about specifying desktop components and will specify them in the introduction. 
We would like to note that research servers with an Nvidia A100, a four-year-old GPU model, cost over \\\\\\\\$10,000 each, compared to an equivalent high-end desktop used in this research, which costs less than \\\\\\\\$5,000 in total. Additionally, with future consumer graphics cards, we anticipate the cost of building similar desktops will decrease, making this research more accessible.\\n\\n**Ethical issue with regards to using Wii Games**\\n\\nOn the copyright-based and emulator question for using Wii-based games, we view this as similar to the use of Atari games in RL research, where the ROMs are similarly under Atari's copyright, though they are not viewed as an ethical issue for research. Likewise, we have bought the ROMs for the respective games used and are not distributing the ROMs with the research, so we do not believe this is an issue. If the reviewer is unsatisfied with this response, could they clarify in what way they believe there is an ethical issue here that does not exist for RL research using the Atari games? Furthermore, we have discussed using the Dolphin Emulator with the developers, who have cleared us for use in research.\\n\\n[1] Machado, Marlos C., et al. \\\"Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents.\\\" Journal of Artificial Intelligence Research 61 (2018): 523-562.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"I want to thank the authors for taking the time to respond to my questions and concerns.\\n\\nI want to start by clarifying my point about Atari-5. The point of that paper is that if you evaluate on their 5 environments, **and then perform their regression procedure** you can evaluate a method's median on Atari-60 with reasonable accuracy. It is not that the IQM, or any other metric evaluated on that subset meaningfully tracks performance on the whole set or is a good comparison. 
The fact that the authors are aware of that work, but do not process their results in accordance with its recommendations is strange to me. In addition to this, you can use rliable to estimate the error **in that regression** by computing bootstrapped confidence intervals. This is a much more effective way of performing this ablation and it's strange the authors didn't do that in the first place.\\n\\nI think that results that only use 1 seed are also incredibly difficult to trust and hence don't think the current procedure is good enough. This means that they don't have a meaningful comparison to another method anywhere in their paper. To make my position extremely clear, **I am not asking that the authors run multiple seeds on atari-60 necessarily**, I know that running such experiments takes a lot of time without significant compute. I would also accept, for example, evaluating their method using Atari-10 and the **correct regression procedure**.\\n\\nThe wii games experiments really are not meaningful without a baseline in my opinion. Given the widespread success of RL at solving game environments, its unclear how impressive these results are without a relevant baseline.\\n\\nI think that adjusting the dormant neurons to measure the fraction that *actually* have zero activation across the whole batch would be a meaningful experiment, but I think that the 0.1 threshold is problematic.\\n\\nOn the topic of the state-of-the-art claim, I concede that *if the issues with all the experiments were fixed and the values stayed the same*, it would be possible to make some form of SOTA claim like that.\\n\\n\\nI am not willing to raise my score given the serious issues with the results in this paper. As a brief summary:\\n- Figure 1 only uses 1 seed\\n- Figure 2 also only uses 1 seed\\n- Figure 3 has no baseline\\n- Figure 4 reports unnormalised and unaggregated Atari results. 
\\n- Table 3 has no error bars\\n- Figure 6 also has no error bars\\n\\nIn short, there isn't a single figure in this paper that uses error bars and aggregates performance correctly. The authors in their response have quoted the IQM on Atari-5, which also isn't an acceptable way of measuring performance.\\n\\nEvery figure in this paper would have to change for me to accept it at this conference. It's just not suitable in its current form. I'm also surprised that the other reviewers have not raised similar issues to those in my review, given the extensive evaluation problems in this paper.\"}", "{\"comment\": \"Apologies for missing out on a couple of your original points, we will respond to those here:\\n\\nWe think it would be beneficial to convert Figure 4 to use human-normalized scores as is standard. Furthermore, we also agree that Figure 4 is a little overcrowded, though we wish to include all the ablation options and a couple of comparisons. If the reviewer believes it will help the figure, we are happy to remove the four comparisons (Rainbow DQN with Impala, Rainbow DQN, IQN and DQN) from the main paper and move this comparison to an Appendix. \\n\\nOn the error bars, due to our limited compute resources, we were not able to run multiple seeds for BTR on Atari-60 or ProcGen before submission, though we have three seeds for Atari-5. We are currently running these additional experiments and will update the figures using the standard practice however the time required will mean that this will be finished after the discussion period. We will however give an example of what this will look like by providing a figure in our PDF of Atari-5 with the updated error bars.\"}", "{\"comment\": \"Thank you for the concise feedback, however, we are alarmed that the reviewer is unwilling to change their score even if we address the comments about the figures. 
Below, we summarize how we will update each figure to address the reviewer's concerns in an updated version of the paper that we aim to submit on Monday.\\n\\nOn the regression procedure of Atari-5, we understood this; however, we were using this subset not to make claims about the possible Atari-55 median performance but rather as a small subset for testing new extensions/improvements. Comparing our data from training BTR on Atari-55 to the values predicted by the Atari-5 regression procedure, we find that it wildly overpredicts with a median score of 9.20 [8.11,10.23] (95% confidence intervals) compared to the true value of 3.92 for one seed; a relative error of 134.72%.\", \"we_identify_two_problems_with_using_the_regression_procedure\": [\"1. It predicts the Median and not IQM (which is standard) scores.\", \"2. BTR is significantly outside the regression data\\u2019s distribution, in particular with Phoenix where BTR achieves a HNS score 46.7 times better than Rainbow (Dopamine) (56.55 compared to 1.21, respectively).\", \"This huge gap in predicted vs true performance makes us skeptical of using the regression procedure to make strong claims about BTR\\u2019s performance.\", \"Could the reviewer clarify if they still believe we should discuss/include the regression procedure within our paper despite the 134.72% error relative to our known Atari-55 results? We believe that an actual training result across 1 seed (although we are currently running up to 3 seeds) is better than a predicted result which is wildly inaccurate for BTR.\", \"**Figure 1**: We are training BTR for three seeds currently for Atari-60 as we don\\u2019t believe that the regression procedure will provide accurate results as discussed above.\", \"**Figure 2**: Has now been updated to use two seeds and RLiable, however, for the final version, we will include three seeds. 
From these two seeds, taking their best runs throughout training (as is commonly done in full results tables), they achieve IQMs of 0.73 and 0.79, respectively.\", \"**Figure 3**: For our Mario Kart Environment, we now provide Rainbow DQN as a baseline. For 120 million frames, we find Rainbow averages 3.8 compared to BTR\\u2019s 17.5 at the same point in training (single seed). Following the discussion period, we can also provide this baseline for the other two environments.\", \"**Figure 4**: Has been updated to show human-normalized scores on all individual games. We include a sixth plot of IQM across the 5 games, not using the regression procedure. To reiterate, we are not claiming that this predicts the scores over the whole suite; rather, it provides an easy-to-interpret average across the five-game suite, which is not skewed by outliers such as Phoenix.\", \"**Table 2**: (The reviewer notes Table 3 but we believe this is a typo) now has confidence intervals over seeds, however, we have to put this in the appendix (with a note in the main paper caption) due to the column width of the new table.\", \"**Figure 6**: Due to high storage demands (saving every model, for every ablation, every million frames, for every seed), we were only able to store the models for a single seed and thus are unable to update the figure. We have, however, updated the caption of the figure to make this clear.\"]}", "{\"comment\": \"Please see our updated PDF and list of changes. We believe we have alleviated your concerns regarding Figures 1, 2, 3, 4, 6 and Table 2. Furthermore, we included box plots and performance profiles using RLiable in appendix B.\", \"below_are_some_key_points_we_think_may_interest_you\": [\"We found that even with 1 and 2 seeds respectively, our results outperform other algorithms to 95% confidence, both in Atari and Procgen.\", \"We agree Figure 6 did not provide adequate evidence to claim improved plasticity, and thus removed these claims and the figure. 
We still believe the Figure holds some value, so moved it to Appendix E with a note that it uses a single seed.\", \"We use 3 seeds for Table 2, and provide 95% confidence intervals in the appendix (due to table width restrictions). While some of these figures have fairly wide confidence intervals, the claims we have about munchausen and IQN affecting action gaps and policy churn are statistically significant. Regarding our claim that maxpooling helps when evaluating performance under different altered environment regimes, while some figures have fairly large errors, we still find that across 3 seeds, BTR outperforms BTR no maxpooling, compared to the epsilon=0 setting where no maxpooling significantly outperforms BTR. Given this, we believe our claim is not unreasonable.\", \"We thank you for your comments which have improved this work to be at the highest standard of RL. Given that we have fully addressed these concerns, we sincerely hope you are willing to raise your score to reflect this.\"]}", "{\"comment\": \"Thank you for your response and improvements to the paper. I will maintain my score of 6, primarily due to the significance of the work. I still advocate for acceptance.\"}", "{\"summary\": \"This paper introduces Beyond The Rainbow (BTR), a novel reinforcement learning (RL) algorithm that enhances Rainbow DQN by integrating six key improvements. The BTR algorithm is computationally efficient, capable of training powerful agents on a standard desktop computer within a short time. Experimental results show that BTR outperforms state-of-the-art RL algorithms on both the Atari-60 and Procgen benchmarks. Additionally, BTR can handle training agents for challenging levels in complex, modern games. Finally, this paper includes a comprehensive ablation study to analyze the performance and impact of each component within the BTR algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
This paper is well-written and well-organized. The ideas are clear and could be easily understood\\n2. The experiments are comprehensive and the results are strong. As shown in Section 4, the proposed BTR algorithm could greatly outperform state-of-the-art baselines in two classic benchmarks and handle three hard and complex modern games with a desktop PC.\\n3. The paper includes extensive ablation studies and experimental data. Section 5 presents a detailed analysis of the performance and impact of each component of the BTR algorithm, providing readers with insights into the sources of the algorithm's performance gains. Additionally, the authors include complete experimental results and settings in the appendix, helping to clarify any potential confusion or misunderstanding for readers.\", \"weaknesses\": \"1. The BTR integrates six improvements from existing RL literature to Rainbow DQN. While the algorithm demonstrates strong performance, its novelty might appear limited. Could you further clarify the novelty of this work? Or specifically, could you briefly discuss if there is any challenges in integrating these existing improvements into the BTR algorithm?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Please see our updated PDF and list of changes.\", \"regarding_your_previous_comments\": [\"While Atari-60 uses one seed, we are in the process of running another 2 seeds, and we already outperform other algorithms such as Rainbow DQN with over 95% confidence.\", \"We have now provided a baseline for Mario Kart Wii. 
Before the final version, we are also running this for Mortal Kombat and Super Mario Galaxy.\", \"We have added a Table to Appendix A with a comparison against PQN (the same Table as the previous comment).\", \"We now provide a Figure in Appendix B with a comparison against PPO, using results from Cleanba [1].\", \"For Figure 4, we now include an IQM plot to help interpret the figure.\", \"[1] Huang, Shengyi, et al. \\\"Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform.\\\" The Twelfth International Conference on Learning Representations. 2023.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank you for understanding our core motivation for this work and appreciating what it achieved.\\n\\n**Error Bars**\\n\\nIn this work, we computed the error bars using the evaluation episode variance rather than as proposed in [1] with the seed variance (for the mean evaluation performance). Using [1], for the Atari-5, we trained BTR across 3 seeds, finding IQM scores of 7.77, 8.22, and 7.86. However, due to time and computing limitations, we only trained BTR for 1 seed on Atari-60 and ProcGen, meaning we can\\u2019t use [1] for error bars. If the reviewer agrees, we will train BTR for Atari-60 and ProcGen with three seeds, allowing us to update the error bars. However, due to the computational and time requirements of these experiments, we could not provide these results within the discussion period though they would be ready before the camera-ready version deadline.\\n\\nRegarding Figure 2, we did not intend to claim that we achieved state-of-the-art performance compared to prior work. Instead, we claim that we achieved similar performance as the prior state-of-the-art with significantly fewer resources, in particular, walltime and learnable parameters. 
We will change our wording on Line 083 from \\u201coutperformed\\u201d to \\u201csimilar\\u201d and Line 274 from \\u201cexceed\\u201d to \\u201csimilar\\u201d. Does this address the reviewer's concern? \\n\\n**Comparison of prior work on Wii Games**\\n\\nWhen demonstrating BTR in these new environments, we aimed to showcase BTR\\u2019s capability in games that are not normally considered in RL research. Therefore, we didn\\u2019t compare to the prior works shown in Figure 1 that achieved significantly lower scores than BTR. If the reviewer believes this is a crucial comparison to include, we are willing to train a Rainbow DQN agent for the Wii games to include in Figure 3. \\n\\n**Ablation Study**\\n\\nWe are happy to use human-normalized performance to measure each ablation\\u2019s performance and will update Figure 4. Regarding the regression procedure, we are not making claims about the performance of all Atari games from these ablations, rather, we used Atari-5 as a benchmark to better discriminate each ablation\\u2019s performance without needing to train on all Atari 60 environments to find similar results. Does this answer the reviewer\\u2019s concern?\\n\\n**Dormant Neurons**\\n\\nWe really appreciate this comment as we ourselves are skeptical of dormant neurons as a measurement. However, we included it as it has become a popular measure in the literature [1,2,3]. Additionally, in Appendix F, we plot a histogram of activations showcasing the range of activations used by the agent presenting a better overview of the agent\\u2019s activations. If you feel these results should not be included, we will remove them at your request.\\n\\n**Claims of the state-of-the-art on a desktop PC**\\n\\nWe recognise that this is a difficult claim to make; however, after surveying the literature, we believe, to the best of our knowledge, that it is true. 
Part of the reason we make this claim is that few algorithms achieve greater performance than BTR, and those that do, in their reported training requirements, make it clear that it would not be possible to train their agent on a desktop PC. In Table 3, we survey three top-performing algorithms, listing their claimed resources used and identifying several reasons for preventing them from being a fair comparison to a desktop PC: the required resources with numerous CPUs / GPUs, excessive training time over 5 days or more recently GPU VRAM to train a model (commercial GPU have significantly less VRAM than research-grade GPUs). Is the reviewer aware of any training algorithm that could reasonably compete with our claim? \\n\\n[1] Sokar, Ghada, et al. \\\"The dormant neuron phenomenon in deep reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] Xu, Guowei, et al. \\\"Drm: Mastering visual reinforcement learning through dormant ratio minimization.\\\" arXiv preprint arXiv:2310.19668 (2023).\\n\\n[3] Qin, Haoyuan, et al. \\\"The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization.\\\" The Thirty-eighth Annual Conference on Neural Information Processing Systems.\"}", "{\"summary\": \"The authors present Beyond the Rainbow (BTR), an algorithm combining advances in Q-learning based approaches to Atari developed since Rainbow.\\n\\nThe authors train their agent on Atari, Procgen and 3 games which aren't well-established benchmarks in RL (Super Mario Galaxy, Mario Kart and Mortal Kombat). They run ablations on their method and demonstrate that the Impala architecture contributes the most to their method's performance. 
They also demonstrate that vectorization of the environment is key to the faster runtime of their algorithm.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper has a number of positive points:\", \"The core idea of trying to achieve strong performance using Q-learning on a desktop PC has significant merit and would constitute an interesting contribution\", \"The introduction of new games to evaluate on is interesting and the games chosen would make good potential benchmarks.\", \"The paper is easy to follow and clearly written\"], \"weaknesses\": \"I slightly feel for the authors of this paper. I would like to be able to commend the paper on its empirical results, or the performance on the new environments, but the results are not presented scientifically enough for me to do that, and so I can't recommend acceptance at this venue.\", \"to_make_my_objections_more_concrete\": [\"In Figure 2, the authors claim that their method outperforms the red baseline, but this is plainly not the case from the plot. The error bars so significantly overlap the red line there is no way this result is significant.\", \"The authors do not aggregate their results in accordance with recommended practice [1]. Although they use the inter-quartile mean, they do not provide bootstrapped confidence intervals to estimate errors and do not seem to provide errors in their baseline results. This issue appears in Figures 1 and 2. As far as I know, the authors do not state what the error bars in Figures 1 and 2. If the plotted error bars are standard\", \"While the evaluation of their method on new games is nice, I can't take any information away from this without even a semblance of a baseline. Training an RL policy on Wii games has no intrinsic scientific value -- it is only by contextualisation of a baseline that this would be a compelling result. 
Similarly, the authors provide no error bars in this domain.\", \"Figure 4 again because of the way the results were processed provides almost no information. Atari-5 [2] provides a way to estimate the median given performance on those 5 games. But this is only after the application of a regression procedure. Without the application of this summary metric, it is just not clear what to take away from these results. This figure does not even present human normalised scores, as is standard. This Figure should therefore be replaced by a plot of the regressed median for Atari-5 with bootstrapped confidence intervals. The authors can use rliable [1] for this.\", \"Again, the analysis in Section 5.2 *should* be compelling and interesting reading, but it's just not done thoroughly enough. Figure 6 is presented without error bars and so are the results in Table 2 and the IQM in Table 3. It's just not possible to believe the authors' conclusions on their work without any estimates of error.\", \"Additionally, the authors use dormancy [3], but set a threshold of 0.1. Although resetting dormant neurons was shown to improve performance, neurons with a small activation are not in themselves a problem! A neuron followed by a ReLU activation that always outputs 0 is not learning, which clearly constitutes a problem, but a neuron that outputs a small value is still perfectly plastic. The dormancy results therefore also aren't a proxy for any form of plasticity.\", \"The authors make multiple claims about their method being \\\"state-of-the-art for a desktop PC\\\" (or similar). These should be removed from the paper as they are just impossible to verify. Even as an expert, I do not know the hardware that every paper ran experiments on and whether it would be possible to run it on a desktop PC, and it is not a claim that can be clearly backed-up. 
I note that the authors did not do all their experimentation on a desktop PC, but only claim that their method can run on one effectively.\"], \"questions\": \"See weaknesses.\\n\\n\\nOverall, this work is just not good enough in its current format. I recommend that the authors fix the presentation of the results, especially adding error bars and effective aggregation using a tool like rliable. Given the significant problems with every figure and table in the main body of the paper, this work is not good enough for this venue in its current form and would require wholesale changes to fix that.\\n\\n[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice. Agarwal et al. Neurips 2021.\\n\\n[2] Aitchison, Matthew, Penny Sweetser, and Marcus Hutter. \\\"Atari-5: Distilling the arcade learning environment down to five games.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[3] Sokar, Ghada, et al. \\\"The dormant neuron phenomenon in deep reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all the reviewers for their responses and the time and effort spent on the reviews, recognising the value of the paper\\u2019s motivations and empirical results. Furthermore, we appreciate the insights and constructive criticism that we will use to continue improving the paper.\\n\\nTowards the end of the discussion period, we will provide an updated PDF of the paper, along with a summary of the changes made.\"}", "{\"summary\": \"This paper combines several different RL improvements to a single algorithm, with a focus on high performance with reasonable computational requirements. 
In doing so, they find that their approach achieves a new SoTA (not including recurrent methods), while being able to be run on a desktop machine in under a day.\\n\\nThey analyse the factors that led to this performance in detail through several ablations.\\n\\nOverall, this paper makes rainbow/dqn-type methods more accessible to non-industry labs\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper gives RL researchers a way to do pretty well in atari without expending too significant computational resources.\", \"They perform ablations on their individual changes to identify what helps and what has the most effect on performance/walltime. This is quite useful.\", \"I am not giving this a lower score because I think making RL more accessible is worthwhile, and this paper takes a step towards this, and further analyses many of these independent components to see what their effect is. I am not giving a higher score because I think the paper's significance does not warrant it.\"], \"weaknesses\": [\"To me it is unclear if 12 hours is for all games or just 1.\", \"I wonder how this fits in with the recent trend of hardware-accelerated RL (see e.g., Stoix/PureJaxRL/Gymnax and all of the hardware-accelerated environments). Does that line of work better achieve the goal of making RL more accessible? In that setting, the environment is often run entirely on the GPU, leading to hundreds or thousands of times speedups.\"], \"questions\": [\"The 12 hour number/other timings, is that the total time it takes to train BTR on a single game or on all 57 games?\", \"It seems like you made quite a few hyperparameter choices (e.g. how often to update, etc.) Do you use the same values for each domain?\", \"What is the shaded area in the plots? If it is standard deviation it seems like the proposed BTR algorithm is very inconsistent across seeds. 
Could you elaborate please/maybe provide results for individual seeds?\", \"Figure 3 does show that you can apply your approach to other games, which is great. I would really like to see some point of comparison, however, to act as a reference point. For instance, run vanilla PPO or DQN or Rainbow as a baseline.\", \"Why is fig 4 using raw score as the y-axis, as opposed to e.g. normalised?\", \"Figure 4 is somewhat hard to follow as there are so many lines and it seems like most of them overlap quite a lot.\", \"Is it feasible to run rainbow with vectorisation? This is not that crucial, it just seems like something obvious to run given figure 5, where vectorisation is the main speedup factor.\", \"Table 2: Would be nice to have another method, e.g. rainbow or DQN to act as a reference point.\", \"One recent work that seems to have a similar purpose is \\\"Simplifying Deep Temporal Difference Learning\\\" (https://arxiv.org/pdf/2407.04811). It seems like they use vectorisation as well to achieve large speedups. More importantly, however, is that they primarily use JAX---which is becoming increasingly common in RL, and is reducing computational load significantly/making RL more accessible to compute-poor labs/institutions. 
Could you please comment on a few things\", \"How does this paper's score compare to yours?\", \"How does the walltime compare to yours?\", \"What do you see as the benefits/disadvantages of this hardware accelerated paradigm compared to the more classic approach you are taking?\", \"I know it is not usual in these types of papers but I would really appreciate a PPO comparison, both in terms of walltime and performance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes Beyond The Rainbow (BTR) an RL algorithm combining multiple recent advancements into a single method with the single aim of outperforming on the Atari benchmark. There is a fairly high bar for this type of work to be accepted, given that the Atari environments are no longer near the forefront of our field and it has been reported multiple times that experimental results in this domain do not necessarily translate. 
Indeed, the lack of experimental rigor is ultimately the issue that inclines me to recommend rejection for this paper, in particular the lack of seeds.\", \"additional_comments_on_reviewer_discussion\": \"There was a reasonable discussion, the reviewers on the negative side reiterated their views due to the lack of experimental rigor.\"}", "{\"comment\": \"It's not clear from the picture that your lower 95% confidence interval substantially outperforms Rainbow (from the figure they seem very close in fact), which only makes it clearer that without the 3 seeds results we can't be particularly sure whether it's indeed capable of consistently outperforming Rainbow.\\nI like the results of the paper and many of the experiments, but I still don't believe the experimental methodology is up to the standard expected at Neurips, I would recommend the authors study \\\"Empirical Design in Reinforcement Learning\\\" in depth and apply its methods to make the paper strong before re-submitting it to another conference.\\nGiven all that I will keep my score\"}", "{\"comment\": \"Please see our updated PDF and list of changes.\", \"the_points_which_specifically_address_your_comments_are_listed_below\": [\"We now provide more detail in the description of Maxpooling.\", \"We now specify \\u201chigh-end desktop\\u201d in both the abstract and introduction\", \"We have now updated the Adam epsilon formula to `0.005 / batch_size`.\", \"Given that we have addressed your concerns, we hope you raise your score accordingly, as you mention in your original comment.\"]}", "{\"comment\": \"Thank you for your prompt response! 
I will provide a more in depth reply in a few days, but in the meantime:\\nA few points in my original review were not responded to?\\n\\nIf you could provide an untuned rainbow with vectorisation comparison that would be stellar.\\n\\nLikewise a PPO comparison would be much appreciated if you have compute.\\n\\nCould you please also comment on why you provide error bars wrt episodes? To the best of my knowledge this is not standard practice.\"}", "{\"summary\": \"The paper presents a variant of Rainbow that adds further architectural and algorithmic improvements to improve not only the agent's score but also to increase its training speed to around 3x what has been previously reported, while running it on top-notch consumer hardware. Finally the authors also show that their improved version of rainbow can deal with modern games with complex graphics and physics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The presentation is overall clear, the methodology is sound, and the results are compelling. Both extensive use of ablations, and the connection to other important metrics related to pathologies in Deep RL algorithms are an example that more papers should follow.\\nThe appendices are also data rich, showing ablations' performances on each of ALE's 60 games, and even having one appendix about things that were tried but did not lead to improvements in performance, which may help others not repeat the experiments.\", \"weaknesses\": \"1. Adaptive Maxpooling is never defined. It's not a common layer in reinforcement learning and it's never defined in the paper, in fact skimming (Schmidt and Schmied, 2021) that layer is also not defined, I believe this is the only seious weakness in the paper's presentation, but still I believe it is a serious weakness (though hopefully the authors can fix it and so I can increase their grade).\\n2. 
There are at least 2 relevant citations missing, \\\"Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective\\\" when talking about Spectral Normalisation, and \\\"On the consistency of hyper-parameter selection in value-based deep reinforcement learning\\\" when talking about the need for tuning Deep RL hyperparameters and the benefits of using layer norm between dense layers.\\n3. I believe it's slightly misleading to not specify \\\"a high-end PC\\\" when talking about the kind of machine that can run the algorithm in 12 hours (4090 RTXs are quite expensive, and i9s are Intel's high-end consumer line)\\n4. I believe a more direct comparison with Schmidt and Schmied, 2021 is warranted, given its foundational importance to the paper.\\n5. Using only 3 seeds while having a large increase in the number of tuned hyperparameters weakens the validity of the results as explained in \\\"Empirical Design in Reinforcement Learning\\\", though at the same time the analysis of metrics beyond simply the score and the extensive use of ablations help.\", \"questions\": \"1. What exactly is adaptive maxpooling? Would it be possible to add a description of it with either an equation, pseudo-code, or diagram?\\n2. Where did the formula 0.05/batch_size for Adam's epsilon come from?\\n3. 
The final algorithm has a considerable number of hyperparameters, would it be possible to discuss a bit which ones are the most important to tune should someone try to apply this algorithm to a new domain?\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"I am just slightly worried about the use of somewhat modern Nintendo games as RL environments through the use of emulators, is the use of emulators for research legal?\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for this detailed review with productive and interesting questions.\\n\\n**Computation and using hardware accelerated environments**\\n\\nWe considered using hardware-accelerated environments, which provide several advantages to researchers, particularly in parallelisation and training speed. However, it is not clear how to hardware-accelerate many real-world problems, e.g., Wii games tested in this work. As we are interested in expanding and investigating RL in a broad variety of challenging environments, we focused on classical CPU-based environments. \\n\\nWe also want to clarify that 12 hours is the time to train an agent for a single environment.\\n\\n**Different Domains**\\n\\nYes, as stated in Section 4.2 (lines 285 to 287), we use the same algorithm hyperparameters for all domains (Atari, Procgen and Wii games). The only thing we changed is the input resolution (140x114 from 84x84) as Dolphin Emulator\\u2019s native resolution is much higher, and the number of parallel environments as we found Dolphin was too memory-expensive for more than 4 on a single 64Gb desktop. To counteract this, we took 16 steps (in all 4 environments) before performing an update, which gives us a similar effect to taking 64 steps in parallel, meaning that no additional tuning is required. 
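The trade-off just described (fewer parallel emulator instances, more steps per instance between updates) reduces to a one-line calculation; this is our own sketch with hypothetical naming, not code from the paper:

```python
def steps_per_update(target_parallel_envs: int, actual_envs: int) -> int:
    """Steps each of `actual_envs` environments should take before an
    update so that the update sees as many fresh transitions as
    `target_parallel_envs` environments each stepping once."""
    assert target_parallel_envs % actual_envs == 0
    return target_parallel_envs // actual_envs

# The Wii-game setting described above: 4 Dolphin instances
# standing in for the usual 64 parallel environments.
assert steps_per_update(64, 4) == 16
```

In this form, 4 emulator instances taking 16 steps each feed an update the same number of fresh transitions as 64 environments stepping once, which is why no additional hyperparameter tuning was needed.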
\\n\\n**Large Shaded Areas on Plots**\\n\\nThe shaded areas are the standard deviation across all evaluation episodes and seeds. For example, with 100 evaluation episodes and 3 seeds, the shaded area is the standard deviation over all 300 episodes. This tends to be high when performing evaluations due to the surprisingly stochastic nature of atari games (i.e., sticky-actions, no-op starts and long episode lengths), increasing the score variance in a single episode.\\n\\nBTR\\u2019s IQM on Atari-5 for each seed is 7.77, 8.22, and 7.86\\n\\n**Rainbow with Vectorization**\\n\\nAdding vectorization to Rainbow is certainly possible, however, it would require extensive tuning to provide a fair and unbiased comparison as BTR\\u2019s parameters were tuned to use vectorization (e.g., learning rate, batch size and how often to replace the target network). We are, however, still happy to provide an untuned comparison if you believe this is important. \\n\\n**Comparison to \\u201cSimplifying Deep Temporal Difference Learning\\u201d (PQN)**\\n\\nWe think this is a really interesting comparison, and thank you for raising it. Sadly, PQN does not report their full results for 200M frames and importantly uses the life information parameter that alters the environment to provide more regular termination signals, which has been found to significantly improve training. This makes it an unfair comparison to previous works that do not use it. In Appendix J, we evaluate BTR with and without life information so we can provide a fair comparison. See the table below. Additionally, we note that even without life information, at 200 million, BTR still achieves a higher IQM performance at 7.42 than PQN for 400 million with 3.86 across the Atari-5 environments.\\n\\nAtari-5 IQM and per-game Scores. For individual games, Human-Normalized scores are reported, with the raw score in brackets. 
\\n\\n| Game | BTR (with life info, 200M frames) | PQN (with life info, 400M frames) |\\n|----------------|-----------------------------------|------------------------------------|\\n| Inter-Quartile Mean | **14.02** | 3.86|\\n| BattleZone | **13.53** (473,580)| 1.51 (54,791) |\\n| DoubleDunk | **14** (23.0) | 6.03 (-0.92) |\\n| NameThisGame | **4.59** (28,710) | 3.18 (20,603) |\\n| Phoenix | **89.95** (583,788) | 38.79 (252,173)|\\n| Qbert | **14.54** (193,428) | 2.37 (31,716)|\\n| Walltime (A100) | 22 Hours (PyTorch (non-compiled) + gymnasium async) | **2 Hours** (JAX + envpool) |\\n\\nBeyond performance, the largest difference is in wall time for several reasons. First, we use Gymnasium async rather than EnvPool, which is 1.6x faster [1]. Second, we do not use PyTorch compile or cudagraphs, which have both recently been shown to significantly speed up reinforcement learning code which are the equivalent of `jax.jit` [2]. Incorporating them should reduce BTR\\u2019s walltime. Third, using a large replay buffer will mean that off-policy algorithms like BTR will be slower from repeatedly accessing RAM than on-policy algorithms like PQN or PPO that can be stored in GPU VRAM. On-policy and off-policy algorithms tradeoff sample efficiency and walltime, which we believe the performance impact is worthwhile for BTR. \\n\\n\\nIt is noteworthy that some of BTR\\u2019s improvements could be combined with PQN to improve performance, particularly in relation to the neural network architecture. \\nWe agree that a comparison against PPO in terms of performance and walltime would be interesting, and we will add one in the appendices of our paper.\\n\\n[1] Weng, Jiayi, et al. \\\"Envpool: A highly parallel reinforcement learning environment execution engine.\\\" Advances in Neural Information Processing Systems 35 (2022): 22409-22421.\\n\\n[2] https://github.com/pytorch-labs/LeanRL\"}", "{\"comment\": [\"Thank you again to all the reviewers for their time and diligence. 
In the updated paper we have addressed all the reviewers' concerns, now proving statistically significant performance improvements over even more algorithms. In publishing, we hope this research will be able to increase the wider accessibility of high performance RL.\", \"Following the reviewer\\u2019s comments, we have made the following changes, which can be found in our updated PDF.\", \"We have updated all relevant figures in the paper to use 95% confidence intervals in accordance with the current research standards from RLiable. For Figure 2 and Table 2, we now use 2 and 3 seeds, respectively.\", \"For Figure 3 with the Wii games, we have added a baseline for Mario Kart with Rainbow DQN, achieving a score of 3.4 at 155 million frames compared to BTR\\u2019s score of 54 at the same point.\", \"We have added a more detailed description of Maxpooling and citations for Spectral Normalization and Hyperparameter tuning, as requested by Reviewer CULH.\", \"We have clarified \\u201cHigh-end\\u201d PC in both the abstract and introduction, as requested by Reviewer CULH.\", \"We have updated Appendix C2 with the correct formula for Adam\\u2019s Epsilon.\", \"In Appendices A and B, we added comparisons to PQN and PPO as requested by Reviewer GWAQ.\", \"Figure 4 now includes a comparison to Rainbow + Vectorization, as requested by Reviewer GWAQ.\", \"Appendix B includes the Optimality Gap, Performance Profile and Box Plots for BTR from RLiable to be at the highest standard of RL research.\", \"Figure 6 has been moved to the appendix, and some claims about plasticity have been removed due to concerns raised by Reviewer 2n46.\", \"Additionally, we are still running BTR for Atari-60 for an additional two seeds. Due to the confidence interval statistics, we expect this to significantly shrink the confidence intervals in Figure 1 to match closer to the error bars in Figures 2 and 3. 
Lastly, we will also provide Rainbow DQN comparisons for all Wii games.\", \"We look forward to hearing any more comments regarding the updated PDF.\"]}" ] }
0yXqV8VJKi
Understanding Complexity in VideoQA via Visual Program Generation
[ "Cristobal Eyzaguirre", "Igor Vasiljevic", "Achal Dave", "Jiajun Wu", "Rares Andrei Ambrus", "Thomas Kollar", "Juan Carlos Niebles", "Pavel Tokmakov" ]
We propose a data-driven approach to analyzing query complexity in Video Question Answering (VideoQA). Previous efforts in benchmark design have largely relied on human expertise to construct challenging samples. In this work, we experimentally demonstrate that humans struggle to accurately estimate which questions are hard to answer for machine learning models. Our alternative, automated approach takes advantage of recent advances in code generation for visual question answering. In particular, we use generated code complexity as a proxy for the question complexity and demonstrate that it indeed shows a much stronger correlation with the models' performance, compared to human estimates. We then present a novel algorithm for estimating question complexity from code. It identifies fine-grained primitives which correlate with the hardest questions. These human-interpretable results lead to a number of discoveries about the key sources of complexity for VideoQA models. Finally, we extend our approach to generate complex questions for a given set of videos. This allows us to automatically construct a new benchmark, which is 1.9 times harder for VideoQA methods than existing manually designed datasets.
[ "video understanding", "codegen" ]
Reject
https://openreview.net/pdf?id=0yXqV8VJKi
https://openreview.net/forum?id=0yXqV8VJKi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z7uvIZ2eZt", "z21frTEhOw", "y68KVxSBGR", "xy3QNREEUn", "x3I90rCBYC", "wmJpLeIMFi", "vO5tIhsQ5z", "s2SlECrxEW", "rwIbBGPrSn", "rgi9pknNpd", "rFFBIpSQPo", "oyEFiEW8y1", "m9qOljynUX", "jiMdbfZOgC", "iJ1I9fiava", "hJroW8bZyK", "gTrIhfxXbf", "cfAQ2M1BWu", "Zg4I6yiHeG", "YlugzL6PIn", "UjleWFd4Be", "Qj43gXfenf", "P9ZP0HJmMI", "MDi4CKcCDT", "ATrFvT1zov", "5TD3jgk0mS", "20TuZoxDap", "16WLqn2FcK" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732741099033, 1734716567024, 1733171764908, 1732255735255, 1730586997699, 1733171987785, 1732741157468, 1733273938857, 1732256037280, 1733274449298, 1730777925845, 1733198080139, 1730709395910, 1732256644948, 1732256682969, 1732741427665, 1732255699543, 1737523450596, 1730610389424, 1733087217180, 1733275121546, 1733024276540, 1733172723994, 1733203699567, 1732582351747, 1733172815795, 1732603647951, 1732256251072 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Area_Chair_JsVJ" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_Hrka" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1394/Reviewer_RWYy" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_Hrka" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_8Wrw" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_EhtB" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_Hrka" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_8Wrw" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_EhtB" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_Hrka" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ], [ "ICLR.cc/2025/Conference/Submission1394/Reviewer_8Wrw" ], [ "ICLR.cc/2025/Conference/Submission1394/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer 8Wrw,\\n\\nWe sincerely appreciate your thoughtful feedback and are glad that some of your concerns have been addressed by our response. Please note that we have already demonstrated that our approach generalizes to different CodeGen methods in Section 10.3, as well as to a variety of VideoQA models in Table 1. Following your request and Reviewer Hrka\\u2019s recommendation, we now report results on MVBench below, as well as in Section 10.4 in the updated manuscript. 
\\n\\nThese results complete the robustness analysis of our approach, clearly demonstrating that it can scale to new methods and datasets, and validating its broader applicability. We hope this additional analysis resolves the reviewer\\u2019s concern and would be happy to answer additional questions in the remaining discussion period.\\n\\n| | InternVideo | SeViLA ZS | Tarsier | VideoChat2 | LLaVA-NeXT |\\n|:-------------------------|------------:|-----------:|----------:|-----------:|------------:|\\n| **Lines of Code** | 0.0441471 | 0.0950738 | 0.125403 | 0.10273 | 0.0919019 |\\n| **Cyclomatic Complexity**| 0.129945 | 0.0963568 | 0.0614128 | 0.0521084 | 0.0447285 |\\n| **CodePlexity (Ours)** | **0.413361** | **0.330262** | **0.444389** | **0.299086** | **0.274255** |\"}", "{\"metareview\": \"This paper introduces a new method for evaluating question complexity in Video Question Answering by leveraging the programs generated by ViperGPT. The authors analyze these code outputs and use them to assess the difficulty of questions in a VQA triplet. To achieve this, they propose CodePlexity, an algorithm that processes the generated code through Abstract Syntax Trees (AST), identifies valid subtrees, and scores them based on their correlation with challenging questions. While the reviewers agree that the paper is well-written and using ASTs to evaluate visual program code is a promising direction, the reviewers raised several points weaknesses of the paper: In particular, the core point of using ViperGPT as the proxy for the evaluation of question complexity, given ViperGPTs relatively low performance, as well as the robustness of the proposed approach.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Hrka had a fruitful discussion on their raised points of generalizability beyond ViperGPT and its use to compare models, which lead to them increasing their score to a marginal acceptance with low confidence. 
Reviewer EhtB raised concerns in particular about the core point of confounding currently challenging questions for models, as those might reflect only limitations in current models/ViperGPT and may not be more challenging in general. While the authors recognise this and state (also in a direct message to the AC) that their main contribution is a \\"data-driven framework for systematically identifying and analyzing model-specific challenges in VideoQA\\", the AC agrees with the reviewer that this is an issue -- as benchmarks are taken as-is and this might lead to a static dataset that's limited not by inherent difficulty, but by the ViperGPT/RVT model.\"}", "{\"comment\": \"Dear Reviewer 8Wrw,\\n\\nThank you for your follow-up and for being open to changing your initial impression of our paper. We are glad that our responses addressed all of your major concerns. Your questions and suggestions helped us improve the manuscript and clarify important aspects of our work. Please feel free to reach out if any additional feedback comes to mind.\"}", "{\"title\": \"Response to Reviewer RWYy\", \"comment\": \"Thank you for your detailed review and suggestions. We respond to your questions individually below.\\n\\n---\\n**Q1: Do the one-hot subtree representations limit the metric\\u2019s performance?**\\n\\nUsing one-hot encoding for subtree representations does not significantly limit metric performance because it is effectively similar to using counts in most practical scenarios. The likelihood of a subtree appearing multiple times in a program\\u2019s AST is minimal (except for leaf nodes). For a subtree to appear more than once, the same code logic would need to be repeated, which is uncommon. Therefore, the difference is negligible, and one-hot encoding effectively captures the necessary structural information without compromising metric quality.\\n\\n---\\n**Q2: How does the code generation model affect the results?**\\n\\nThank you for this excellent question. 
The choice of code generation model is indeed an important factor in our methodology. To investigate this, we re-ran our experiments using the recent RVP [1] CodeGen approach as a substitute for ViperGPT. Preliminary results, included in Figure 19 of the updated manuscript, demonstrate that using a more advanced CodeGen model enhances the predictive power of code-based metrics for estimating question complexity.\\n\\nThese results suggest that the performance of CodePlexity can benefit from advancements in code generation models, as they can provide richer and more accurate program representations. While this introduces some variability depending on the underlying model, the general approach remains robust and adaptable. We are in the process of completing a detailed analysis and will include the finalized results in the camera-ready version of the paper.\\n\\nThis highlights the extensibility of our method and its potential for further improvement as code generation models continue to evolve.\\n\\n---\\n**Q3: Is CodePlexity truly measuring question difficulty?**\\n\\nWe acknowledge that CodePlexity is not a definitive complexity metric for VideoQA and has limitations, including its indirect consideration of video content. Our key observation is that generated code complexity (not \\u201ccode generation complexity\\u201d) is strongly correlated with underlying question complexity for existing VideoQA models. In particular, we experimentally demonstrate in Section 4.2 that CodePlexity exhibits **superior predictive power** compared to other automatic metrics and even human evaluations. Hence, our approach is not \\\"primarily measuring code generation complexity\\u201d.\\n\\n---\\n**Q4: How would our method generalize to fundamentally different VideoQA models?**\\n\\nWe thank the reviewer for this insightful observation. It is true that most modern VideoQA approaches leverage large visual-language models in one way or another. 
That said, the models evaluated in our work are quite diverse: ATP only trains a frame selection module and is based on CLIP, ViperGPT is a Visual Programming approach that leverages a suite of modules (CLIP, GPT3, BLIP, XVLM), VIOLET is a multimodal transformer, which is **trained from scratch** on the Merlot dataset, and SeViLA is a 2-stage video model fine-tuned from a VLM. \nIn response to the reviewer\u2019s suggestion, we have added two additional models:\n- HGA [2]: based on Graph Neural Networks, this model combines visual and textual features (from BERT) using a Graph Reasoning Network. It was the state of the art on NExT-QA before ATP.\n- Tarsier [3]: the current state of the art on NExT-QA, based on Video-LLaVA [4].\nThe results are reported below, as well as in Table 1 in the manuscript. As you can see from the results, our question complexity metric indeed generalizes well to these disparate models.\n\nIf the reviewer has a concrete suggestion of a VideoQA approach which is fundamentally different from all the models listed above, we would be happy to evaluate it as well.\n\n| | SeViLA | ViperGPT | ATP | VIOLET | HGA | SeViLA ZS | InternVideo | Tarsier |\n|-|-|-|-|-|-|-|-|-|\n| **Dependency Tree Depth**| 12.9 | 7.9 | 11.1 | 15.9 | 7.4 | 13.5 | 17.7 | 10.1 |\n| **GPT-4** | 9.6 | 8.9 | 11.6 | 5.8 | 7.8 | 14.6 | 13.9 | 10.8 |\n| **BERT** | 12.5 | 6.0 | 18.3| 17.3 | 7.7 | 14.3 | 21.1 | 10.8 |\n| **Lines of Code** | 16.4 | 15.3 | 14.2 | 12.0 | 9.9 | 16.2 | 17.5 | 14.4 |\n| **Cyclomatic Complexity**| 18.2 | 14.2 | 18.7 | 15.9 | 8.9 | 17.2 | 24.2 | 16.7 |\n| **CodePlexity (Ours)** | 26.7 | 21.3 | 21.0| 15.8 | 14.1 | 25.6 | 26.6 | 24.9 |\n\n---\n[1] Ge, J. & Subramanian, S. & Shi, B. & Herzig, R. & Darrell, T. Recursive visual programming. In ECCV\u201924\n\n[2] Jiang, P., & Han, Y. (2020, April). Reasoning with heterogeneous graph alignment for video question answering. 
In Proceedings of the AAAI Conference on Artificial Intelligence \\n\\n[3] Wang, J., Yuan, L., Zhang, Y., & Sun, H. (2024). Tarsier: Recipes for training and evaluating large video description models. arXiv preprint arXiv:2407.00634.\\n\\n[4] Lin, Bin, et al. \\\"Video-llava: Learning united visual representation by alignment before projection.\\\" arXiv preprint arXiv:2311.10122(2023).\"}", "{\"summary\": \"This paper explores a data-driven method for evaluating question complexity for Video Question Answering (VQA) by collecting the visual programs generated by ViperGPT. Then, they analyze the code outputs and leverage them to assess the difficulty of the questions in a VQA triplet.\\nGiven the output code from the Visual Programming module (ViperGPT), they propose CodePlexity, an algorithm that parses the generated code via Abstract Syntax Trees (AST), and later identifies valid subtrees. These subtrees are then scored by correlating subtrees with difficult questions -- subroutines that hurt the VQA model performance. The authors use NExT-QA as the benchmark to analyze their proposed metric. \\nGiven this metric, they also propose a pipeline to generate difficult question-answer pairs for a set of videos from MOMA, ActivityNet (+Entities/Captions), Action Genome, and Charades. \\nFinally, they compare the results of models SeViLA-ZS, InternVideo and VIOLET on their proposed new dataset (CodePlex-QA) against NExT-QA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is easy to read, the method is easy to follow and the qualitative examples help to illustrate the motivation of their work.\", \"Leveraging Visual Programming outputs as a proxy for assessing the difficulty of a given task is an underexplored domain -- in software engineering, an active area of study is the correlation between the code complexity and difficulty of the task. 
This is an open problem, and this paper proposes an interesting generalization of this task.\", \"Decomposing the output code from the VP module via ASTs, and identifying subtrees that decrease performance via scoring, may give an interesting angle for generating an interpretable pipeline to analyze subprocesses that might be hurting the models' performance. In principle, this looks like an interesting approach and a viable metric for VP-based methods.\"], \"weaknesses\": \"- Visual Programming (VP) is an interpretable framework for several visio-linguistic tasks, however, VP may output very simple code for a quite complex task, yielding false positive or false negative hard cases, and VP might not be reliable -- for a very complex task, the output code could be simple; the LLM might regard it as a simple task, but it's actually the opposite\\nIn addition, VP falls short against end-to-end models (e.g., ViperGPT acc. is 60% vs SeViLa (finetuned) acc. is 73.8%) -- given its underperformance, it is hard to justify the use of its outputs as a proxy for evaluation of complexity. It is also hard to justify using VP for measuring task complexity and then evaluating on end-to-end models.\\n\\n- Disregarding visual inputs: \\\"Text-based metrics (above) perform worse than the code-based ones (below), and our approach demonstrates the highest correlation with the models\\u2019 performance.\\\" -- this completely ignores the visual modality, which seems problematic.\\n\\n- The authors claim: \\\"In summary, we discovered that VideoQA methods struggle with fine-grained temporal reasoning and lack spatio-temporal, object-centric representations. 
This is in accord with prior studies\\" [3] -- however, those studies regard both visual and language modalities for assessing temporal reasoning in video QA, giving high importance to the video part.\n\n- Human evaluations: Figure 6: We ask human annotators to provide the relative ordering of three provided questions\naccording to the estimated complexity of answering the question about an unseen video -- how to correctly assess the complexity of the task without looking at the video?\n\n- Experimental setup:\nBaselines [1] focus on grammar and text only. BERT and GPT-4 also focus on text only.\n\n- Generalizability: ViperGPT is the VP module used in this paper, however, VP has significantly progressed. Different methods leverage multiple LLMs, there has been extensive problem decomposition and iteration that impact the code outputs. Further experiments with other VP approaches might be required to ensure generalization. Similarly, the proposed metric and dataset are only compared against NExT-QA. Other benchmarks might be necessary to validate the proposed metric and dataset (e.g., MVBench [2] compiles a collection of datasets (with subsets of samples from a diverse range of sources), which includes spatio-temporal analysis, spatial action, object, position, scene, count, attribute and cognition).\n\n- For dataset generation, the authors compare their pipeline with [4] -- however, an important step for EgoSchema is the manual curation of the whole dataset in the final step -- details for this step are not further detailed in this paper.\nFurthermore, as a comparison, EgoSchema consists of over 5000 human curated multiple choice question answer pairs, spanning over 250 hours of real video data -- significantly larger than the 1981 samples -- what is the length of each video sample?\n\n\n[1] Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 
Learning the curriculum with bayesian optimization for task-specific word representation learning. In ACL, 2016. \n\n[2] Li, Kunchang, et al. \\"Mvbench: A comprehensive multi-modal video understanding benchmark.\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[3] Shyamal Buch, Crist\u00f3bal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. Revisiting the \u201cvideo\u201d in video-language understanding. In CVPR, 2022.\n\n[4] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. EgoSchema: A diagnostic benchmark for very long-form video language understanding. In NeurIPS, 2023.\", \"questions\": \"- SeViLA \\"leverages a single image-language model (BLIP2) to tackle both temporal keyframe localization and question answering on videos\\" -- how do the findings in this paper generalize to models that take multiple video frames for VQA (e.g., VideoChat2 [5], Video-LLaVA [6])?\n\n- NExT-QA uses videos from VidOR [7] -- CodePlex-QA uses videos from MOMA, ActivityNet (+Entities/Captions), Action Genome, and Charades. Why not use the same videos, or at least the same source video dataset (VidOR)?\n\n[5] Li, Kunchang, et al. \\"Mvbench: A comprehensive multi-modal video understanding benchmark.\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[6] Lin, Bin, et al. \\"Video-llava: Learning united visual representation by alignment before projection.\\" arXiv preprint arXiv:2311.10122 (2023).\n\n[7] Xindi Shang, Donglin Di, Junbin Xiao, Yu Cao, Xun Yang, and Tat-Seng Chua. Annotating objects and relations in user generated videos. 
In ICMR, pages 279\\u2013287, 2019\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Hrka,\\n\\nThank you for your thoughtful response and for engaging in this discussion with an open mind. We deeply value your feedback and your willingness to reconsider aspects of our work.\", \"we_would_like_to_emphasize_a_few_key_points_from_our_rebuttal_that_directly_address_your_remaining_concerns\": \"1. We do show that **code-based metrics generalize beyond ViperGPT** in Figure 19 by replacing it with the very recent RVP [1] visual programming model. This experiment demonstrates that using a more advanced visual programming model improves the predictive power of code-based metrics, reinforcing the argument that code serves as a robust proxy for question complexity in VideoQA.\\n\\n2. We do explicitly evaluate the effect of the correctness of the generated code on the predictive power of our metric in Table 4. While the correlation between models\\u2019 performance and metric value improves when code is correct, **CodePlexity consistently outperforms baselines in robustness to code generation errors**. This underscores its practicality even when errors occur.\\n\\n3. We do evaluate how data sources influence dataset complexity in Figure 18 and have committed to including your specific suggestion in the camera-ready version. Importantly, this is a **secondary issue** that does not challenge our central hypothesis: code complexity is a strong proxy for question complexity in VideoQA.\\n\\n4. 
We are running non-code-based baselines from Table 1 on MVBench right now and will report the results as soon as possible, but wanted to post this discussion earlier to give the Reviewer time to respond.\n\nTogether, these and other results in our paper substantiate the central claim of this work: **code complexity correlates strongly with question complexity in VideoQA**. Intuitively, code complexity is effective at this task because it is a proxy for the complexity of the algorithm that is implicit in the question. Crucially, our work makes no claim that code-based metrics are optimal or that they capture all aspects of complexity in this domain. Rather, we position CodePlexity as a promising and novel approach that broadens the scope of how complexity can be measured. We do compare to all other text-based complexity metrics in the literature, and the results speak for themselves.\n\nWe kindly ask the reviewer to consider that papers proposing new and unexpected hypotheses\u2014even if not yet flawless\u2014often have the greatest potential to drive meaningful progress within the community, in contrast to works that merely confirm widely held beliefs. We hope the reviewer will give our paper the chance to contribute in this way.\n\n[1] Ge, J. & Subramanian, S. & Shi, B. & Herzig, R. & Darrell, T. Recursive visual programming. In ECCV\u201924\"}", "{\"comment\": \"Dear Reviewer EhtB,\\n\\nAs promised above, we now report the results of the evaluation of ViperGPT on CodeplexQA below, as well as in Table 2 in the updated manuscript. As you can see, our dataset is indeed significantly more challenging than NExT-QA for visual programming methods, as well as for more traditional methods. This result further demonstrates the generality of our approach. 
We look forward to continued discussion and would be happy to answer any additional questions.\\n\\n| Dataset | Tarsier | SeViLA ZS | Viper | InternVideo | VIOLET | Random |\\n|--------------|---------|-----------|--------|-------------|--------|--------|\\n| NExT-QA | 70.9% | 64.2% | 60.0% | 50.9% | 37.7% | 20.0% |\\n| ATP-Hard | 59.8% | 54.9% | 51.8% | 24.6% | 25.4% | 20.0% |\\n| CodeplexQA | 52.5% | 43.7% | 45.8% | 29.9% | 27.6% | 20.0% |\"}", "{\"comment\": \"Dear Reviewer Hrka,\\n\\nThank you once again for your thoughtful feedback and for your willingness to engage with our work in such depth. We truly appreciate your recognition of the potential impact of our approach and your increased rating of our submission.\\n\\nAs promised, we are providing results for the non-code-based baselines from Table 1 evaluated on MVBench below. In particular, we report the heuristic-based Dependency-Tree-Depth and data-driven BERT baselines. Evaluating the GPT-4 baseline on MVBench is taking more time, but we commit to including it in the camera ready version of the paper. Overall, we observe similar trends to those in Table 1. Here are the main findings:\\n\\n1. **Data-driven metrics outperform heuristic-based ones:** As anticipated, metrics that leverage data to model complexity (BERT and CodePlexity) show a clear advantage over manually designed heuristics, which lack the flexibility to capture the challenges posed by VideoQA.\\n2. **Our CodePlexity metric outperforms all alternatives:** The results affirm that CodePlexity, as a data-driven, code-based metric, is not only robust but also provides the most accurate estimation of question complexity. 
Its ability to correlate strongly with model performance across diverse datasets further validates its utility.\n\n| | InternVideo | SeViLA ZS | Tarsier | LLaVA-NeXT | VideoChat |\n|:---------------------|--------------:|------------:|----------:|-------------:|------------:|\n| **Dependency Tree Depth** | 6.5 | 6.2 | 19.7 | 12.2 | 16.5 |\n| **BERT** | 25.6 | 11.5 | 18.8 | 21.5 | 21.1 |\n| **Lines of Code** | 4.4 | 9.5 | 12.5 | 9.2 | 10.3 |\n| **Cyclomatic Complexity** | 13.0 | 9.6 | 6.1 | 4.5 | 5.2 |\n| **CodePlexity (Ours)** | **41.3** | **33.0** | **44.4** | **27.4** | **29.9** |\n\nWe are encouraged by your acknowledgment of the novelty and promise of this research direction. Thank you for giving our work the opportunity to be evaluated fairly and for supporting contributions that aim to broaden the scope of methodologies in this field.\"}", "{\"title\": \"Response to Reviewer 8Wrw\", \"comment\": \"Thank you for your detailed review and suggestions. We address your comments individually below.\\n\\n---\\n**W1: All evaluations are conducted on NextQA**\\n\\nWe thank the reviewer for pointing out this limitation. To evaluate the generalizability of our method, following the suggestion of Reviewer Hrka, we are now running our analysis on the very recent MVBench dataset. Please appreciate the effort involved in this, as it essentially requires re-running most of the experiments in the paper on a new, large-scale dataset. We expect the experiments to finish before the 26th of November and will post the results here as soon as possible.\\n\\n---\\n**W2: How do errors in code generation affect CodePlexity?**\\n\\nWe thank the reviewer for bringing up this under-explored aspect of our method. We have analyzed the performance of CodePlexity on questions which ViperGPT answers correctly (i.e. the generated code is correct), vs. the questions on which ViperGPT fails, and show the results here and in Table 4 in the updated manuscript. 
As you can see, while the correlation between models\\u2019 performance and predicted metric value is consistently higher when the code is correct for all code-based metrics, CodePlexity is a lot more robust to code generation errors than the baselines. Please also see our response to Reviewer RWYy above, where we show that using more advanced code generation models improves the predictive power of code-based complexity metrics.\\n\\n|Metric|Result|SeViLA|ViperGPT|ATP|VIOLET|HGA|SeViLA ZS|InternVideo|Tarsier|\\n|-|-|-|-|-|-|-|-|-|-|\\n|Lines of Code|Correct|0.1373|---|0.1747|0.1455|0.0712|0.1654|0.2022|0.1475|\\n||Incorrect|0.1245|---|0.0540|0.0656|0.0735|0.0756|0.0831|0.0696|\\n|Cyclomatic Complexity|Correct|0.1702|---|0.2128|0.1930|0.0649|0.1739|0.2825|0.1634|\\n||Incorrect|0.1351|---|0.1118|0.0881|0.0664|0.0973|0.1388|0.1071|\\n|CodePlexity|Correct|0.2608|---|0.3128|0.3178|0.0867|0.2095|0.2877|0.1950|\\n||Incorrect|0.2810|---|0.2041|0.2542|0.1087|0.1839|0.1700|0.1857|\\n\\n---\\n**W3: The sample size for the human study is relatively small.**\\n\\nThank you for your observation. While we agree that a larger sample size is generally preferable, we argue that our sample size of 150 questions is consistent with established practices in the literature. For instance, in the original BLEU metric study by Papineni et al. (2002), human evaluations were conducted on 250 sentence pairs [1]. Similarly, in machine learning research, Ranzato et al. (2015) assessed their reinforcement learning model on 100-200 test cases for specific NLP tasks [2]. In linguistics, Koehn et al. (2007) conducted human evaluations of translations on sets of 150-300 sentences [3]. Given this precedent, we believe our sample size is reasonable and sufficient to draw meaningful conclusions.\\n\\n---\\n**W4: Does merging co-occurring subtrees risk missing important patterns?**\\n\\nWe thank the reviewer for pointing out this important detail. 
To clarify, subtrees are only merged when they always co-occur in cases where one is a descendant of the other. This means their one-hot encodings are identical across all programs, ensuring that merging does not lose unique patterns. Since the co-occurrence is absolute, any pattern found in one subtree will also exist in the other. We have updated Section 7.1 of the Appendix to include this discussion.\\n\\n---\\n**W5: Does manual filtering of generated questions introduce a selection bias?**\\n\\nOur filtering process is very straightforward and as bias-free as possible. In particular, we only remove questions where the automatically generated answer is wrong, or the question cannot be answered from the video (for example, when the LLM refers to an actor by their annotated *ID* instead of a visual attribute). There is no manual filtering of questions beyond that.\\n\\n---\\n[1] Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. *Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)*, 311\\u2013318. [https://doi.org/10.3115/1073083.1073135](https://doi.org/10.3115/1073083.1073135)\\n\\n[2] Ranzato, M., Chopra, S., Auli, M., & Zaremba, W. (2016). Sequence level training with recurrent neural networks. *arXiv preprint arXiv:1511.06732*. [https://doi.org/10.48550/arXiv.1511.06732](https://doi.org/10.48550/arXiv.1511.06732)\\n\\n[3] Koehn, P., Och, F. J., & Marcu, D. (2003). Statistical phrase-based translation. *Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT)*, 48\\u201354. [https://doi.org/10.3115/1073445.1073462](https://doi.org/10.3115/1073445.1073462)\"}", "{\"comment\": \"Dear Reviewer EhtB,\\n\\nThank you for your detailed feedback. 
We are glad that our response addressed some of your concerns, but regret that you did not leave us enough time for a thorough discussion. We would like to use this opportunity to clarify a potential misunderstanding of our contributions and address your final concern.\\n\\nOur work is not about creating a fixed metric or dataset but about introducing a **data-driven framework** for systematically identifying and analyzing model-specific challenges in VideoQA. Rather than proposing a universal measure of \\\"inherent complexity,\\\" we focus on uncovering **common failure modes** across diverse state-of-the-art models. These shared failure patterns often reflect fundamental challenges in the field and offer critical insights into where progress is most needed.\\n\\nWhile you suggest that focusing on questions \\\"complex for existing models\\\" risks introducing biases, we argue that identifying such failure modes is **essential to addressing any biases** already present in current methods. By highlighting where models falter\\u2014particularly in areas like temporal reasoning and multi-frame dependencies\\u2014our approach provides actionable insights to guide model development and dataset design.\\n\\nUltimately, our framework is **adaptable and iterative**: as new models and datasets emerge, it can be reapplied to surface fresh challenges, automatically generate benchmarks that incorporate these challenges, and sustain the momentum of progress. This adaptability makes it **more future-proof than any heuristic-based definition** of \\u201ctrue complexity\\u201d crafted by human experts. Indeed, all the efforts to propose such a definition have not stood the test of time [4, 21, 33]. 
We believe that our approach provides a novel perspective on question complexity analysis in VideoQA and will ultimately help address the fundamental challenges in this domain.\"}", "{\"summary\": \"This paper introduces an approach to analyzing and generating complex questions for Video QA. The authors propose the use of generated code complexity as a metric to evaluate question difficulty. They introduce \\\"CodePlexity,\\\" a novel algorithm that analyzes generated code's structure and content to identify patterns that make questions challenging for ML models. This allows the creation of a new VideoQA dataset named \\\"CodePlex-QA\\\" which features more complex questions than existing datasets without relying on human expertise.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel and interesting approach using code complexity to evaluate question complexity.\", \"The paper offers interesting insights into the differences in human evaluation, text-based and code-based evaluation.\", \"The experiment results show clear empirical evidence to support the claims.\"], \"weaknesses\": \"Please see the question section below\", \"questions\": [\"The paper represents subtrees in question code using simple one-hot encoding without additional processing, which might ignore the frequency of subtrees and their structural relationships. Additionally, this representation can be very sparse. How does this affect CodePlexity's performance?\", \"To what extent does the choice of code generation model affect the method's results? Would using different code generation models lead to identifying different patterns of complexity, and how might this impact the method's consistency?\", \"Since the method evaluates difficulty without considering visual information, there might be cases where code generation is challenging due to question ambiguity, but the task would be straightforward with visual context. 
Is CodePlexity truly measuring question difficulty, or is it primarily measuring code generation complexity?\", \"Most models evaluated in the paper are large video-language models built upon language model architectures. Since these models share similar language model foundations with the code generation approach, they might inherently struggle with the same types of questions that are difficult for code generation. Does this architectural similarity explain their correlated performance? How well would the method generalize to fundamentally different architectures that might employ different reasoning mechanisms?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I really appreciate the discussion and summary of the updates and revised version. Thank you for pointing out the results on Figure 19, I think the authors have a valid point. I also agree with the authors regarding new research directions, and this indeed is an interesting one. Thus, I'm increasing my overall rating.\"}", "{\"summary\": \"This paper proposes a novel approach to analyzing and measuring question complexity in VideoQA by leveraging generated code complexity. The authors demonstrate that code-based metrics correlate better with model performance compared to human judgment and introduce CodePlexity, an algorithm for estimating question complexity. They use this to identify challenging patterns in VideoQA and automatically generate a new benchmark dataset, CodePlex-QA, which proves to be significantly more challenging than existing datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea to measuring question complexity using code generation is well-motivated\", \"The ablation studies and analysis are well-designed\"], \"weaknesses\": [\"All experiments are conducted on a single dataset (NExT-QA) for analysis. 
This leads to limited evaluation across different types of VideoQA tasks.\", \"The authors should discuss in more detail how errors of code generation might affect the complexity metrics\", \"The human evaluation study uses only 150 questions, which is a relatively small sample\", \"We are concerned that the merging of subtrees that \\\"always co-occur\\\" could potentially miss important patterns in edge cases\", \"The manual filtering process for the generated dataset (removing 12% of questions) could introduce selection bias\"], \"questions\": [\"Why do the authors not validate the proposed approach on other VideoQA datasets besides NExT-QA?\", \"How do errors in code generation impact the complexity metrics?\", \"Are there any important patterns that might be missed by your current subtree merging approach?\", \"What specific criteria were used to manually filter out the 12% of questions?\"], \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Hrka\", \"comment\": \"Thank you for your detailed review and suggestions. We address your comments individually below (comment 2 of 2).\\n\\n---\\n**Q1: How do the findings in the paper generalize to models that take multiple frames as input?**\\n\\nSeViLA takes multiple video frames as input; the only model that does not is ATP. SeViLA also performs frame selection, while the other models (except ATP) do not. We have additionally evaluated Tarsier [1], a recent multi-frame Video LLM based on LLaMA, which is the SoTA model on NextQA and is architecturally equivalent to Video-LLaVA. As shown in the table below, Tarsier also struggles with questions identified as complex by CodePlexity. 
We have updated Table 1 in the manuscript with this result.\\nAdditionally, Tarsier has been added to the results in Table 2 and Figure 3 of the manuscript, which also show that the findings in the paper generalize to Tarsier. Namely, that code-based metrics outperform their text-based counterparts, and that our approach, CodePlexity, further improves upon them.\\n\\n| | SeViLA | ViperGPT | ATP | VIOLET | HGA | SeViLA ZS | InternVideo | Tarsier |\\n|--------------------------|--------|----------|-------|--------|------|-----------|-------------|---------|\\n| **Type** | Train | Train | Train | Train | Val | Val | Val | Val |\\n| **Dependency Tree Depth**| 12.9 | 7.9 | 11.1 | 15.9 | 7.4 | 13.5 | 17.7 | 10.1 |\\n| **GPT-4** | 9.6 | 8.9 | 11.6 | 5.8 | 7.8 | 14.6 | 13.9 | 10.8 |\\n| **BERT** | *12.5* | *6.0* | *18.3*| *17.3* | 7.7 | 14.3 | 21.1 | 10.8 |\\n| **Lines of Code** | 16.4 | 15.3 | 14.2 | 12.0 | 9.9 | 16.2 | 17.5 | 14.4 |\\n| **Cyclomatic Complexity**| 18.2 | 14.2 | 18.7 | 15.9 | 8.9 | 17.2 | 24.2 | 16.7 |\\n| **CodePlexity (Ours)** | *26.7* | *21.3* | *21.0*| *15.8* | **14.1** | **25.6** | **26.6** | **24.9** |\\n\\n---\\n**Q2: Why not use VidOR instead of MOMA+ActivityNet+Charades for CodePlexQA**\\n\\nThank you for raising this point. While VidOR provides a valuable dataset for video-related tasks, we chose not to use it for the following reasons:\\n1. **Limited variety:** Videos in VidOR lack diversity, both in terms of visual content and scenarios. This limits the range of complex, challenging questions that can be generated.\\n2. **Short video duration:** Most videos in VidOR are relatively short, which makes it difficult to create and evaluate questions requiring fine-grained temporal reasoning or extended contextual understanding.\\n3. **Older source material:** VidOR is sourced from YFCC, a collection of videos from Flickr dating back to 2016. 
While useful, these older videos may not represent the richness or variability seen in more contemporary datasets.\\n\\nTo address these limitations, we opted to use videos from three different sources: MOMA, ActivityNet, and Action Genome. This allows us to leverage a wider variety of video types, durations, and content, ensuring that our benchmark better captures the complexity and diversity of real-world scenarios.\\n\\n---\\n[1] Wang, J., Yuan, L., Zhang, Y., & Sun, H. (2024). Tarsier: Recipes for training and evaluating large video description models. arXiv preprint arXiv:2407.00634.\\n\\n[2] Ge, J. & Subramanian, S. & Shi, B. & Herzig, R. & Darrell, T. Recursive visual programming,. In ECCV\\u201924\"}", "{\"title\": \"Response to Reviewer Hrka\", \"comment\": \"Thank you for your detailed review and suggestions. We address your comments individually below (comment 1 of 2)\\n\\n---\\n**W1: CodePlexity is not the perfect metric for evaluating question complexity in VideoQA**\\n\\nWe wholeheartedly agree. Our question complexity metric based on visual program analysis has limitations, including not always predicting the correct complexity score (indeed, it achieves an mPEG score of only ~25/100). It is, however, the best automatic metric that exists at the moment, and is also significantly more accurate than humans (see analysis in Section 4.2). Importantly, it provides human-interpretable and actionable insights into the key sources of complexities of existing models, allowing us to automatically construct a new challenging VideoQA benchmark. While we acknowledge that our approach does not address all challenges of VideoQA, we believe it represents a significant and valuable step forward for the field.\\n\\n---\\n**W2: Our analysis does not consider video information**\\n\\nWe acknowledge that our approach focuses on analyzing question complexity independently of video information. 
However, we emphasize that asking complex questions about videos is a long-standing challenge. The first benchmark for video-language reasoning was introduced in Rohrbach et al., CVPR'15, and was followed by dozens of attempts to **manually** design questions that cannot be answered from a single frame, which were largely unsuccessful [4, 21, 33]. Our method takes a **data-driven** approach instead, discovering sources of complexity directly from the data, which is both novel and complementary to existing methods.\\n\\nAdditionally, as discussed in Section 9.1 of the appendix, video and text-based complexities are composable. This validates the study of question complexity in isolation. Combining CodePlexity with accurate video complexity metrics could yield an even more comprehensive evaluation in future work. Hence, our study of question complexity in VideoQA is still a valuable and valid research direction.\\n\\n---\\n**W3: Both human and algorithmic baselines also only consider text information**\\n\\nIt is indeed true that our baselines, including human evaluations, focus solely on text information. This is a deliberate choice, as our goal is to evaluate question complexity in isolation from the video. While humans may not perfectly estimate video question complexity based on text alone, their judgments still provide informative estimates. Comparing baselines with access to the same information ensures fair evaluations of our metric.\\n\\n---\\n**W4: Please report more recent VP methods and more recent datasets**\\n\\nWe reran the analysis with the recommended RVP model [2] and show that code-based complexity functions based on it also correlate negatively with model performance. Results are shown in Figure 19 of the Appendix and demonstrate that using a more advanced CodeGen model enhances the predictive power of code-based metrics for estimating question complexity. 
This highlights the extensibility of our method and its potential for further improvement as code generation models continue to evolve.\\n\\nIn addition, we are now re-running our experiments on MVBench. Please appreciate the effort involved in this, as it essentially requires re-running most of the experiments in the paper on a new, large-scale dataset. We expect the experiments to finish before the 26th of November and will post the results here as soon as possible.\\n\\n---\\n**W5: Please provide additional details and statistics of the proposed CodePlexQA benchmark**\\n\\nWe provide the requested details here and added them to Section 8 of the Appendix. The resulting dataset has an average of 2.40 questions per video. The duration of each video ranges from approximately 3 seconds to 10 minutes with an average video duration of about 1.5 minutes. This diverse range of video lengths is desirable as it is conducive to generating a wide variety of questions.\"}", "{\"comment\": \"Dear Reviewer Hrka,\\n\\nWe sincerely appreciate your thoughtful feedback and are glad that you found our explanations helpful. We address your remaining concerns below.\\n\\n---\\n\\n**Q1: Robustness of our approach**\\n\\nWe have already demonstrated that our approach is robust to the CodeGen method being used in Section 10.3 as well as to a variety of VideoQA models in Table 1. Following the reviewer\\u2019s request, we now report results on MVBench below, as well as in Section 10.4 in the updated manuscript. \\n\\nFirstly, as you can see from Figure 20 in the manuscript, ViperGPT code complexity correlates strongly with question complexity for a variety of VideoQA models on this dataset. Secondly, as shown in the table below and in Table 5 of the manuscript, our proposed CodePlexity metric is far superior to the naive code complexity metrics on MVBench. 
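As a side note for readers of this thread: the agreement scores reported in these tables compare a scalar complexity metric against per-question model performance. The paper's exact scoring function is not specified here, but a minimal sketch of the general idea (a plain Pearson correlation on invented per-question data) looks like this:

```python
def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-question data: a predicted complexity score for each
# question, and whether some model answered it correctly (1) or not (0).
scores = [1.2, 3.4, 0.8, 5.1, 2.2, 4.0]
correct = [1, 0, 1, 0, 1, 0]

r = pearson(scores, correct)
print(round(r, 3))  # negative: higher predicted complexity coincides with failures
```

A strongly negative value indicates that the score tracks model failures; this is only an illustration of the general idea, not the paper's evaluation code.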
\\n\\nThese results complete the robustness analysis of our approach, clearly demonstrating that it can scale to new methods and datasets, and validating its broader applicability. We hope this additional analysis resolves the reviewer\\u2019s concern and would be happy to answer additional questions in the remaining discussion period.\\n\\n| | InternVideo | SeViLA ZS | Tarsier | VideoChat2 | LLaVA-NeXT |\\n|:-------------------------|------------:|-----------:|----------:|-----------:|------------:|\\n| **Lines of Code** | 0.0441471 | 0.0950738 | 0.125403 | 0.10273 | 0.0919019 |\\n| **Cyclomatic Complexity**| 0.129945 | 0.0963568 | 0.0614128 | 0.0521084 | 0.0447285 |\\n| **CodePlexity (Ours)** | **0.413361** | **0.330262** | **0.444389** | **0.299086** | **0.274255** |\\n\\n\\n---\\n**Q2: Comparison of NextQA and CodePlexQA is not fair due to different video sources**\\n\\nThank you for clarifying this point. We acknowledge that multiple factors contribute to CodePlex-QA being more challenging than NExT-QA, and we agree that differing video sources play a role. We do provide a more detailed analysis of some of these factors in Figure 18 in the manuscript, which demonstrates that selecting samples based solely on their CodePlexity scores from NExT-QA already produces a benchmark that is significantly more challenging. This highlights the efficacy of our metric, independent of the diversity introduced by alternative data sources and automatic question generation.\", \"we_would_like_to_re_itterate_the_key_difference_between_next_qa_and_codeplex_qa\": \"the former is **100% manually** constructed, whereas our pipeline is **automatic**, only requiring a single manual filtering stage to remove incorrectly generated question-answer pairs. In our opinion, it would have been impressive if such an automated pipeline simply matched a human-expert-curated benchmark in terms of complexity. 
As our results show, it significantly surpasses the widely adopted NExT-QA.\\n\\nWe will include results with VidOR as a data source for constructing CodePlex-QA in the camera ready version of the paper to provide a stricter comparison (getting them by the discussion deadline is, unfortunately, not feasible). However, we believe that the broader diversity and temporal richness of the datasets we selected more effectively demonstrate the generalizability and scalability of our approach.\"}", "{\"title\": \"General response\", \"comment\": \"We sincerely thank the reviewers for their thoughtful feedback and valuable suggestions. We have carefully addressed each comment individually and would like to take this opportunity to summarize the contributions of our work and clarify its scope.\\n\\nIt is hard to define what constitutes a good review in a few words, but one of the best definitions we have heard goes as follows: \\u201cA good review does not judge a paper based on what it is **NOT** (i.e. what a reviewer wishes it was), but rather based on what it **IS** (i.e. the value it brings to the community)\\u201d. We hope the reviewers and the AC can agree with this simple definition.\\n\\nWith this in mind, here are a few things our work is **NOT**:\\n1. Our metric does not capture the \\u2018inherent complexity of questions\\u2019 in VideoQA (Reviewer EhtB). Instead, it quantifies question complexity for existing models.\\n2. Our metric does not represent all the aspects of complexity in VideoQA (Reviewers RWYy, Hrka), focusing solely on question complexity instead. \\n\\nWe never claim any of these contributions in the paper. \\n\\nIn contrast, here are a few things our work **IS**:\\n1. It provides a novel perspective on question complexity analysis to the community that has struggled to achieve progress [4, 21, 33], and is in need of fresh insights.\\n2. 
It proposes a practical and operationalizable metric for estimating question complexity in VideoQA, which outperforms all the alternatives and indeed is more effective than humans at this task (see our analysis in Section 4.2).\\n3. It provides an automatic pipeline for generating challenging questions based on the proposed metric, which is used to construct a new VideoQA benchmark.\\n\\nWe humbly argue that these contributions make our work a meaningful addition to the VideoQA research community.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this paper, the author claims that the questions considered difficult by humans are different from the questions that are actually difficult for current VideoQA models. Therefore, the authors propose a new metric, CodePlexity, to evaluate the difficulty of a VideoQA question. This metric is based on recent visual programming methods, where a visual program is generated for a question and its complexity is evaluated. Based on this metric, the authors found that most models struggle with questions where multiple frames are involved and frame order needs to be taken into consideration. Then, based on the metric, the authors propose CodePlex-QA and claim that it is 1.9 times harder than existing benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a novel metric to evaluate the complexity of the VideoQA problem and also proposes a CodePlex-QA that is more difficult for current VideoQA models. The idea of leveraging code to develop a new metric for question difficulty analysis is interesting.\\n2. The paper conducts thorough experiments on testing the correlation of the metric.\", \"weaknesses\": \"1. The criteria for what makes a question complex or challenging for models\\u2014especially compared to human intuition\\u2014seem speculative. 
Without more rigorous validation, it\\u2019s unclear whether the proposed complexity measures truly capture inherent question difficulty or just reflect the limitations of current VideoQA models. Also, the idea of identifying questions that are \\u201ceasy for humans but hard for machines\\u201d is ambiguous. It seems plausible that any difference in difficulty may be more a result of model architecture and training rather than the intrinsic complexity of the question itself.\\n2. Visual programming is a natural way (and probably the best way) to address the CodePlex-QA task. The authors didn't report how well the recent visual programming methods (VisProg, ViperGPT, CodeVQA), especially those addressing complex logical questions (e.g., RVP [1]), address the task.\\n3. The comparison between Next-QA and CodePlex-QA (Table 2) is not convincing enough, as previous works have shown that Next-QA has easy questions [2]. How does CodePlex-QA compare with the Next-QA ATP-Hard split?\\n\\n[1] Recursive Visual Programming\\n[2] Revisiting the \\\"Video\\\" in Video-Language Understanding\", \"questions\": \"1. What are the results of the recent visual programming methods on this task?\\n2. How complex is CodePlex-QA compared with the Next-QA ATP-T/ATP-C split?\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I really appreciate the additional experiments and explanations.\\n\\nHowever, the main issue with this work is that it is built on the assumption that the proposed CodePlexity metric is a good proxy to evaluate video-language data complexity. In Table 5 for example, the baselines are lines of code and cyclomatic complexity, which seems insufficient. 
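For context, the two baseline metrics named here are simple static properties of the generated program. A minimal sketch of how they are commonly computed (using Python's `ast` module; the ViperGPT-style example program and its `video`/`frames` API are invented for illustration):

```python
import ast

def lines_of_code(src: str) -> int:
    # Count non-empty, non-comment lines of the generated program.
    return sum(1 for ln in src.splitlines()
               if ln.strip() and not ln.strip().startswith("#"))

def cyclomatic_complexity(src: str) -> int:
    # McCabe-style approximation: 1 + number of decision points in the AST.
    decisions = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(n, decisions) for n in ast.walk(ast.parse(src)))

# Invented program for a question like "Is there a person in the video?"
program = '''
def answer(video):
    frames = video.frames()
    for f in frames:
        if f.contains("person"):
            return "yes"
    return "no"
'''

print(lines_of_code(program))          # 6
print(cyclomatic_complexity(program))  # 3 (base 1 + for-loop + if-branch)
```

Both collapse a program to a single count and ignore *which* operations it performs, which is consistent with the observation that they are coarse proxies compared to subtree-level analysis.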
\\nWhile this paper provides some hypotheses that visual program generation code quality might correlate with the complexity of VideoQA datasets, all metrics are dependent on ViperGPT, and the generated CodePlex-QA is built from more challenging sets of videos. \\nFurthermore, prior work has shown that, in general, LLMs struggle when generating code that can effectively interpret visual elements [1] and text-only instructions [2]. If the proxy is not reliable, is it valid to use it as an oracle to evaluate for complexity? This is a very promising direction, but in light of the current state of this paper and the lack of stronger empirical evidence, I am keeping my original rating of 5 and lowering my confidence to 3. \\n\\n[1] Kaixin Li, Yuchen Tian, Qisheng Hu, Ziyang Luo, Zhiyong Huang, and Jing Ma. 2024. MMCode: Benchmarking Multimodal Large Language Models for Code Generation with Visually Rich Programming Problems. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 736\\u2013783, Miami, Florida, USA. Association for Computational Linguistics.\\n\\n[2] Jiang, Juyong, et al. \\\"A Survey on Large Language Models for Code Generation.\\\" arXiv preprint arXiv:2406.00515 (2024).\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe deeply appreciate your thoughtful feedback and constructive engagement, which helped to improve our paper. Below, we summarize our core contributions and highlight the results provided during the rebuttal process that validate and expand upon these contributions.\\n\\n### **Key Contributions:**\\n\\nOur work introduces a **data-driven framework for estimating question complexity in VideoQA**, consisting of:\\n\\n1. **CodePlexity Metric:** A practical, interpretable, and adaptable metric that assesses question complexity using visual program generation.\\n2. 
**Insights Into Model Limitations:** Identification of shared failure modes across state-of-the-art VideoQA models, offering actionable insights to guide future research.\\n3. **Benchmark Generation:** A methodology for creating datasets, such as CodePlex-QA, that systematically surface questions challenging for current VideoQA models.\\n\\nOur framework does not propose a fixed definition of \\u201ctrue complexity\\u201d but instead offers a systematic, adaptable tool for evaluating model-specific challenges in VideoQA. By identifying failure modes common to diverse models, we provide actionable insights to guide future research. As new datasets and models emerge, our framework can be reapplied to identify fresh challenges, ensuring sustained progress in the field.\\n\\n---\\n\\n### **Key Insights from Rebuttal Results:**\\n\\nDuring the rebuttal period, we validated the robustness and generalizability of our approach:\\n\\n1. **Diverse Code Generation Models:** We evaluated our framework with the recent RVP CodeGen model in place of ViperGPT. This analysis demonstrated that using a more advanced visual programming model improves the predictive power of code-based metrics, reinforcing the argument that code serves as a robust proxy for question complexity in VideoQA.\\n2. **Wide Range of VideoQA Architectures:** By testing CodePlexity on diverse VideoQA models such as HGA and Tarsier, we confirmed its applicability across a wide range of architectural approaches. These results reaffirm that our metric captures shared challenges across fundamentally different model designs.\\n3. **Generalization to New Datasets:** We extended our evaluation to MVBench, a recent dataset with unique characteristics. On MVBench, CodePlexity showed strong correlation with question complexity across multiple models, including LLaVA-NeXT and VideoChat2 (state-of-the-art on MVBench). 
Our metric outperformed a wide array of code-based and language-based complexity metrics, demonstrating its effectiveness across diverse data sources.\\n4. **Robustness to Code Generation Errors:** We further analyzed the impact of code generation errors, showing that CodePlexity remains robust and outperforms other metrics even when the generated code contains inaccuracies. This highlights its practicality in real-world scenarios where code generation may not be flawless.\", \"our_rebuttal_reinforces_the_central_claim_of_this_work\": \"**CodePlexity is a robust, generalizable, and practical tool for advancing VideoQA research**. By systematically identifying shared challenges and providing a foundation for generating new benchmarks, our framework offers a scalable path toward tackling foundational challenges in VideoQA.\\n\\nWe hope the reviewers will recognize the strengthened contributions and validated applicability of our work in guiding future advancements in this domain.\"}", "{\"comment\": \"Thank you to the authors for their responses. I really appreciate the additional experiments on MVBench. Therefore, I am slightly leaning towards acceptance and will increase my score.\"}", "{\"comment\": \"Dear Reviewer EhtB,\\n\\nWe hope this message finds you well. We wanted to kindly remind you that the reviewer response deadline is at midnight today. We would greatly appreciate it if you could let us know whether your concerns have been addressed or if there are any additional questions or suggestions we can assist with.\"}", "{\"comment\": \"Thank you for the experiments on visual programming based approaches. It addresses some of my concerns. However, I am not convinced of having a set of questions targeted as \\\"complex for existing models\\\" and emphasizing them for future research. 
This distinction could potentially reflect biases inherent in current research methods rather than offering new insights.\\nMore importantly, focusing on these questions might inadvertently introduce new biases rather than addressing the truly challenging problems in video understanding. For this reason, I will maintain my current score.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I thank the authors for the detailed explanations.\\n\\nOne big concern shared by other reviewers is the robustness of the proposed approach. In particular, results in the NextQA-ATP-Hard split help to shed some light on this issue. \\n\\nMy question regarding using videos from MOMA, ActivityNet, ActivityNet-Entities, and ActivityNet-Captions comes from the fact that comparing the NExT-QA benchmark and the proposed CodePlex-QA might be unfair. As the authors pointed out, VidOR has several limitations, including: limited variety, short video duration, and older source material. However, NExT-QA is composed of 6,000 VidOR videos. It seems that the benefits from CodePlex-QA may come from the video sources and not the proposed approach/metric. Generating CodePlex-QA from VidOR would yield a slightly better/fairer comparison.\"}", "{\"comment\": \"Dear Reviewer RWYy,\\n\\nWe hope this message finds you well. We wanted to kindly remind you that the reviewer response deadline is at midnight today. We would greatly appreciate it if you could let us know whether your concerns have been addressed or if there are any additional questions or suggestions we can assist with.\"}", "{\"comment\": \"Thank you to the authors for their responses. Some of my concerns have been addressed. However, I share other reviewers' concern about the generalizability of the proposed approach. The authors should validate their proposed approach on other popular VideoQA datasets.\"}", "{\"title\": \"Response to Reviewer EhtB\", \"comment\": \"Thank you for your detailed review and suggestions. 
We address your comments individually below.\\n\\n---\\n**W1: CodePlexity doesn't capture \\u2018the inherent complexity of the question\\u2019**\\n\\nPlease note that the concept of \\\"inherent question complexity\\\" is not well-defined and that our work does not claim to capture it. Instead, our primary objective is to propose a practical and operationalizable metric, CodePlexity, to evaluate the relative difficulty of VideoQA questions for existing models. Our findings highlight a crucial gap: questions that appear straightforward to humans can be disproportionately challenging for current VideoQA architectures due to their inherent design limitations. CodePlexity addresses this gap by serving as a bridge between human intuition and model-specific difficulty.\\n\\n---\\n**W2: Visual programming methods are not evaluated in the paper**\\n\\nWe thank the reviewer for pointing this out. We are currently evaluating ViperGPT on CodeplexQA and will post the results as soon as the evaluation finishes.\\n\\n---\\n**W3: Comparison with NextQA-ATP-Hard split is not provided**\\n\\nThis is a great point! We provide the requested evaluation below, as well as in Table 2 in the manuscript. Critically, ATP-Hard uses **ground truth** labels in Next-QA to select questions on which a proportion of models from an ensemble of 10 VideoQA models fails. In contrast, our approach only requires the question itself to determine its complexity. \\n\\nDespite this **oracle-like** nature of ATP-Hard, CodeplexQA is substantially more challenging for the top performing models and approximately equally hard for the least effective VIOLET baseline. ATP-Hard is somewhat more challenging than CodeplexQA for InternVideo because this baseline is finetuned from CLIP and the samples in ATP-Hard were selected by seeing where CLIP-based models fail (i.e. this dataset represents an upper bound in terms of complexity for CLIP-based models). 
These results strongly support the effectiveness of both our complexity metric and of our automatic approach for generating challenging VideoQA benchmarks. \\n\\n| Dataset | Tarsier | SeViLA ZS | InternVideo | VIOLET | Random |\\n|---------------|---------|-----------|-------------|--------|--------|\\n| NExT-QA | 70.9% | 64.2% | 50.9% | 37.7% | 20.0% |\\n| CodeplexQA | 52.5% | 43.7% | 29.9% | 27.6% | 20.0% |\\n| ATP-Hard | 59.8% | 54.9% | 24.6% | 25.4% | 20.0% |\"}" ] }
0yVP49SDg0
Mamba-HMIL: Hierarchical Multiple Instance Learning via State Space Model for Whole Slide Image Diagnosis
[ "Mengkang Lu", "Tianyi Wang", "Qiwei Fu", "Qingjie Zeng", "ZhaoyangLIU", "shu minglei", "Yong Xia" ]
Multiple instance learning (MIL) has been widely employed for gigapixel whole slide image (WSI) classification. Existing MIL methods, however, often fail to align with the clinical practice of pathologists, who typically scrutinize WSIs at varied scales and compare local regions from a global perspective. Given that WSIs usually boast immense dimensions peppered with large regions not pertinent to diagnosis, we propose a novel hierarchical multiple instance learning method based on the state space model, called Mamba-HMIL, for WSI classification. Mamba-HMIL consists of three primary modules to enhance the performance of MIL. First, the hierarchical feature extractor harvests features across diverse scales. Second, for capturing the correlation among patches, the state space model demonstrates robust modeling capabilities, and a Mixture of Experts (MoE) module is employed to stabilize SSM training. Third, the adaptive selection model strives to reduce redundancies by focusing on disease-positive regions. We evaluate Mamba-HMIL on two WSI subtype datasets (TCGA-NSCLC and TCGA-RCC) and two WSI survival datasets (TCGA-BRCA and TCGA-BLCA). Our results suggest that Mamba-HMIL outperforms existing MIL methods on both WSI tasks. Our code will be made publicly available.
[ "Whole Slide Images", "Hierarchical Multiple Instance Learning", "State Space Model" ]
https://openreview.net/pdf?id=0yVP49SDg0
https://openreview.net/forum?id=0yVP49SDg0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vI6nTpKpeR", "jeu0VmNhLN", "eyNuNbJCPb", "IRK3G3B6hf", "CBFNc6qHrn" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731052795689, 1731658892426, 1731040813317, 1730671444254, 1730687018248 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1404/Reviewer_UNQv" ], [ "ICLR.cc/2025/Conference/Submission1404/Authors" ], [ "ICLR.cc/2025/Conference/Submission1404/Reviewer_b2nH" ], [ "ICLR.cc/2025/Conference/Submission1404/Reviewer_xc6o" ], [ "ICLR.cc/2025/Conference/Submission1404/Reviewer_XdkQ" ] ], "structured_content_str": [ "{\"summary\": \"This work presents Mamba-HMIL, a hierarchical MIL method leveraging state space modeling for weakly-supervised tasks in computational pathology. Mamba-HMIL includes several components: (1) state-space modeling (Mamba), (2) Mixture of Experts (MoE) blocks, and (3) sequence fusion / adaptive fusion blocks. Mamba-HMIL is evaluated on cancer subtyping and survival prediction tasks, and compared with relevant baselines in the literature (ABMIL, CLAM, DSMIL, HIPT, and Mamba extensions to MIL). Ablation experiments for each component are performed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, the experimental design is comprehensive and thorough. Mamba-HMIL is compared against many-to-all relevant baselines in the literature, from simple permutation-invariant pooling baselines (ABMIL, CLAM-MB), to Transformer MIL architectures that learn token dependencies (TransMIL, HIPT), to direct extensions of Mamba applied to MIL architectures (Mamba+ABMIL, Mamba+DSMIL, Mamba+CLAM-MB) as well as Mamba-MIL. Evaluation on survival prediction is also appreciated and validates the strength of Mamba-HMIL in learning context-aware features for understanding the tumor microenvironment. 
Good attention-to-detail to the survival prediction baselines in evaluating recent SOTA early multimodal fusion architectures like MCAT and PIBD. This study also presents good depth of experiments, in not only ablating the components of Mamba-HMIL (MoE, SF/AF), but also validating other components in MIL including different pretrained encoders (ResNet-50 vs UNI) and hierarchical feature extraction (10X, 20X, 10X+20X).\", \"While direct extensions of Mamba are not a technical novelty, Mamba-HMIL has good performance gains and consistently achieves the best performance across all tasks. These performance gains are on top of comparisons to MIL architectures with Mamba extensions, which suggests that the architecture modifications are not ad hoc.\", \"Interestingly, unimodal Mamba-HMIL outperforms many multimodal fusion comparisons, including Mamba+CLAM-SB, PORPOISE, and MCAT. This is a good finding that should be highlighted more.\"], \"weaknesses\": [\"Though method seems strong, many of the components of Mamba-HMIL itself are either not novel or not studied enough to demonstrate why we see strong improvement in performance. I would like to understand how these components \\\"stabilize SSM training\\\". It is not clear how SSM training is unstable when direct extension of Mamba works quite well for all MIL architectures across all tasks.\", \"I cannot review the code for this submission. As many of the contributions are empirical, it would be valuable to validate the contributions of this work empirically before acceptance.\", \"Is it possible to visualize token-to-token interactions by Mamba-HMIL, besides attention weights from global attention pooling? How do token interactions change across different Mamba layers?\", \"There are several outstanding typos in this work. There is missing-or-extra spacing after Mamba-HMIL on many lines. There are many misspellings like \\\"Propoise\\\". 
In the data description of survival prediction on Line 327, TCGA-LUSC instead of TCGA-BLCA is written. Many citations are missing and marked as ?.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents hierarchical multiple instance learning using a state space model for whole slide image diagnosis. The method proposes to use several components such as a hierarchical feature extractor, the state space model, and a mixture of experts. The experiments are performed on two datasets.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper addresses an important problem of WSI diagnosis\", \"weaknesses\": \"This paper is poorly written, has no contribution and is just the combination of different components without any clear motivation and reasoning. I would request the authors to clearly explain the reason behind choosing each component of the approach and re-write the paper for better clarity.\", \"questions\": \"No questions. The paper needs to be improved significantly.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes MambaHMIL, a variant of MambaMIL which uses Hierarchical feature extraction, Mixture of Experts and adaptive fusion of sequences to improve performance over existing MIL baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper discusses the use of state space models for MIL with applications in pathology. 
While there is some prior work - MambaMIL which discusses state space models for MIL, the authors expand on it by incorporating multi-resolution feature aggregation, MoE and adaptive selection of sequences.\\nThe authors compare against several existing MIL models to evaluate the approach across two common WSI level benchmark problems - subtyping and survival prediction.\", \"weaknesses\": \"The paper doesn't provide good motivation on why the specific additions/design choices made are relevant in the context of the problems and why they help. It primarily feels the authors did extensive hyper-parameter selection on the datasets and it's unclear how these parameters or choices generalize. There isn't much discussion into why such parameters could be optimal for the dataset or the problem, which makes it challenging to come away with clear take-aways.\\nFor example how does Mamba help with performance and how does it compare with Attention based aggregation. The authors do show some comparison against HIPT, TransMIL here but it's hard to compare and put these in context without discussing #params.\\nSimilarly there are some ablations adding Mamba to existing MIL models but there isn't much discussion on how it's helping and if the improvements are just due to additional params.\\nHow does multi-resolution help and why does adding more resolutions hamper performance?\\nWhat are the different sequences and what is the relevance of their fusion/aggregations.\\n\\nIt also doesn't give clear details on how some of these are implemented. Added some of these in the questions section below.\\nThe authors mention MoE was added to stabilize training, but it's unclear what the instability was and how it improved stability.\\nIt's also unclear how much MoE helped as there are no ablations with/out MoE. 
\\n\\nThe details around SSM, mixture of experts and adaptive selection are also not described clearly, with no clear equations to describe the formulation.\", \"questions\": [\"For Hierarchical feature extractor\", \"How are the features from different resolutions aggregated? Is it addition or concatenation?\", \"How does the number of parameters change with the number of resolutions? Is the number of model parameters controlled for when comparing performance?\", \"Given this is one of the key contributions, the lack of discussion or detail around this makes it hard to put results in context.\", \"How are the different sequences of instances (s, s2, .. sn) generated for input to Mamba?\", \"How is the visualization in Figure 3 generated? There is no reference to the figure anywhere in the paper.\", \"It's unclear what GAS aggregation is; there is no reference or explanation about it.\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a state-space model-based hierarchical multiple instance learning (Mamba-HMIL) approach for cancer diagnosis using whole slide images (WSI), which involves three stages. In the first stage, hierarchical encoders are used to extract multiscale features. In the second stage, a state-space model (SSM) aggregates features to assess correlations among instances. The third stage introduces an adaptive selection module that filters out disease-negative patches prior to classification. The proposed method was evaluated on four public datasets for subtype classification and survival prediction, where it was benchmarked against existing approaches.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"No notable strengths were identified in this work.\", \"weaknesses\": \"1. **Lack of novelty**: This work appears to be a straightforward combination of existing methods. 
The hierarchical encoder is similar to DSMIL [1], and the Mamba architecture and mixture of experts (MoE) module are identical to previously established designs. The adaptive selection (AS) module consists only of an MLP layer and a Sigmoid function. Additionally, this is not the first application of Mamba for WSI analysis. Overall, this approach lacks substantial innovation.\\n\\n2. **Insufficient comparison with existing SSM methods**: The paper does not provide a thorough comparison with similar state-space model-based approaches, such as Vim4Path [2] and MamMIL [3]. \\n\\n3. **Writing quality**: The manuscript appears to lack careful proofreading. For example, there are confusing question marks on Lines 67 and 297.\\n\\n[1] Li, Bin, Yin Li, and Kevin W. Eliceiri. \\\"Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning.\\\" In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 14318-14328. 2021.\\n\\n[2] Nasiri-Sarvi, Ali, Vincent Quoc-Huy Trinh, Hassan Rivaz, and Mahdi S. Hosseini. \\\"Vim4Path: Self-Supervised Vision Mamba for Histopathology Images.\\\" In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6894-6903. 2024.\\n\\n[3] Fang, Zijie, Yifeng Wang, Ye Zhang, Zhi Wang, Jian Zhang, Xiangyang Ji, and Yongbing Zhang. \\\"Mammil: Multiple instance learning for whole slide images with state space models.\\\" *arXiv preprint arXiv:2403.05160* (2024).\", \"questions\": \"1. Can the authors clarify the motivation behind this work, given that multiple studies have already applied Mamba to WSI analysis?\\n\\n2. The performance of Mamba-MIL and HIPT on NSCLC is inconsistent with the results reported in the original papers, where both achieved an AUC above 0.95. 
I did not verify all baseline methods in this paper, but the authors should thoroughly review the experimental results and explain why the baseline methods underperformed significantly compared to the original studies.\\n\\n3. In Section 4.4, the authors compare Mamba-HMIL to ABMIL with an ImageNet-pretrained encoder to evaluate the effectiveness of the Mamba Block. Is this a fair comparison?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0yTf37PXcH
Improving Multi-modal Large Language Model through Boosting Vision Capabilities
[ "Yanpeng Sun", "Huaxin Zhang", "Xinyu Zhang", "Qiang Chen", "Nong Sang", "Gang Zhang", "Jingdong Wang", "Zechao Li" ]
We focus on improving visual understanding capability to boost vision-language models. We propose \textbf{Arcana}, a multimodal language model, which introduces two crucial techniques. First, we present Multimodal LoRA (MM-LoRA), a module designed to enhance the decoder. Unlike traditional language-driven decoders, MM-LoRA consists of two parallel LoRAs -- one for vision and one for language -- each with its own parameters. This disentangled parameter design allows for more specialized learning in each modality and better integration of multimodal information. Second, we introduce the Query Ladder adapter (QLadder) to improve the visual encoder. QLadder employs a learnable ``\textit{ladder}'' structure to deeply aggregate the intermediate representations from the frozen pretrained visual encoder (e.g., CLIP image encoder). This enables the model to learn new and informative visual features while retaining the powerful capabilities of the pretrained visual encoder. These techniques collectively enhance Arcana's visual perception power, enabling it to leverage improved visual information for more accurate and contextually relevant outputs across various multimodal scenarios. Extensive experiments and ablation studies demonstrate the effectiveness and generalization capability of our Arcana.
[ "Multi-modal Large Language Model", "Boosting Vision Capabilities", "Multi-modal Lora", "Ladder Adapter" ]
https://openreview.net/pdf?id=0yTf37PXcH
https://openreview.net/forum?id=0yTf37PXcH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wDC4ybGNpg", "riW5jxkTkh", "q5rTgrnwfp", "pJmRSP3GCq", "nJSEQJIb4u", "n8tlPb6fDu", "mTJwvXPgVT", "jQJXMInTmP", "hLLglO1RY9", "gGjJmI7bx7", "c6wms9T8PC", "bUbJUzkmbf", "ZeWHv7gZk0", "YqK2KqBNzW", "X0Z7oyyCKp", "WVVW5X8qph", "WEaMyVb3nz", "VAqrnONsZd", "QwNkPTLire", "QCXKHV93lJ", "NYpexxbJao", "NEwHik6xyR", "N2Xluhim80", "MdOIiVgQ30", "LwtKrmmIiw", "HsmWCptt30", "HKZ2lWqReK", "GskAI0tkb9", "F1AbJdhK2M", "BbPDsoocQa", "9ZQd5VrE8Y", "6Rvr7sI2mH", "435G0f6Dfq", "1Lla0ZLpAh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732351993621, 1733129718911, 1732686849089, 1732100080377, 1733057347675, 1730720023351, 1730746806762, 1730967535119, 1732114874605, 1732444114848, 1733216601810, 1733133326886, 1732097548610, 1730279646749, 1732097796181, 1732457193059, 1733130259510, 1730713117443, 1732098218703, 1732687715054, 1733059888503, 1732533913303, 1732098427104, 1732258072283, 1733068866496, 1732097215172, 1732097048757, 1732097444421, 1732096824495, 1732608867768, 1737616177116, 1732097863910, 1732096753697, 1732098962132 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_5dnK" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_17q4" ], [ 
"ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_1Gww" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_cUZA" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_5dnK" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_7gKo" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_7gKo" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_17q4" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_7gKo" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_7gKo" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Reviewer_7gKo" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ], [ "ICLR.cc/2025/Conference/Submission3532/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer 7gKo\", \"comment\": \"Thanks a lot for your time and feedback. Below are address all raised concerns of the paper.\\n\\n---\\n\\n**Q1**. 
Although the basic innovation behind MM-LoRA and P-LoRA is similar, I generally recognize your claim on the flexibility of MM-LoRA since it has configurable \\u03b2 and \\u03b3. While, my concern is about the ability of generalization on other types of MLLM (beyond simply using a larger base LLM), i.e., would the \\u03b2 and \\u03b3 selected on Arcana still work for those other MLLMs?\\n\\n**A1**. Thank you for your feedback. To address your concern about the generalization of MM-LoRA\\u2019s \\u03b2 and \\u03b3 parameters, we conducted experiments with MM-LoRA on another MLLM, LLaVA-Next.\\n\\n| Method | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:----------:|:---------:|:-------:|------|:-------:|:--------:|\\n| LLaVA-NeXT+LoRA | 69.2 | 63.5 | 86.2 | 66.8 | 62.1 |\\n| LLaVA-NeXT+MM-LoRA | **70.1** | **63.9** | **86.8** | **67.3** | **62.6** |\\n\\nThe results demonstrate that MM-LoRA remains effective on LLaVA-Next without modifying the \\u03b2 and \\u03b3 values. This experiments show that MM-LoRA\\u2019s approach generalizes well to other MLLMs. We will include these findings in the revised paper to further highlight MM-LoRA\\u2019s robustness and versatility. Thank you again for emphasizing this important point.\\n\\n---\\n\\n**Q2**. I think some empirical or theoretical explanation on 1) ablating the initialization way of X_q and 2) why learnable X_q can enhance visual representations are necessary.\\n\\n**A2**. 1. **the initialization way of X_q**\\n\\nThank you for your valuable feedback. 
To address your concern about ablation studies on X_q initialization, we conducted experiments comparing random initialization with instruction-based initialization inspired by Q-Former.\\n\\n| X_q initialization Method | ScienceQA | TextVQA | POPE | MMBench | SEED | \\n|:--------------:|:----------------:|:---------:|:--------:|:--------:|:--------:|\\n| Random | 71.0 | **58.8** | **86.6** | 66.3 | 60.5 |\\n| Use instruction tokens for initialization | **71.3** | 58.5 | 86.4 | **66.7** | **61.2** |\\n\\nThe results show that instruction-based initialization outperforms random initialization on most benchmarks. This is because early alignment of X_q with textual features helps the model integrate semantic information more effectively, enabling better task-specific reasoning. In contrast, random initialization, though simpler, does not leverage prior knowledge, which can limit the model\\u2019s ability to capture meaningful visual-semantic relationships. Theoretically, instruction-based initialization provides X_q with a structured starting point aligned with relevant semantic features, leading to faster convergence and more efficient learning of task-specific representations. Random initialization, on the other hand, starts from a neutral state and requires more iterations to learn meaningful patterns.\\n\\nThese findings highlight that while random initialization is simple, it is not optimal. Instruction-based initialization helps the model better align with semantic content, resulting in improved performance. We will clarify these points in the revised paper to better explain the impact of different initialization strategies.\\n\\n2. **The learnable X_q\\uff1a**\\n\\nThank you for raising this important point. Below, we provide both empirical and theoretical explanations for why learnable X_q enhances visual representations. To explore this, we conducted experiments comparing the impact of making X_q learnable versus keeping it frozen (non-trainable). 
\\n\\n| Method | X_q | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:---------:|:----------------:|:---------:|:--------:|:--------:|:--------:|:--------:|\\n| Q-Ladder | freeze | 70.3 | 58.0 | 86.2 | 65.5 | 59.8 |\\n| Q-Ladder | learnable | **71.0** | **58.8** | **86.6** | **66.3** | **60.5** |\\n\\nOur experiments consistently showed that learnable X_q outperforms frozen X_q across various tasks, demonstrating its ability to enhance visual representations. Unlike static, frozen X_q, which cannot adapt to the diverse and evolving patterns in the data, learnable X_q actively interacts with different layers of the visual encoder to refine its representations. This flexibility allows the model to better aggregate relevant features and align them with downstream tasks, ultimately achieving superior performance.\\n\\nThese findings underscore the importance of learnable X_q. We believe this adaptability is a key factor in improving visual representations. Thank you again for your constructive feedback, which has helped us clarify this critical aspect of our work.\"}", "{\"title\": \"Response to reviewer cUZA - Follow up\", \"comment\": \"Thank you for your valuable comments and suggestions! We have conducted additional experiments and analyses based on your feedback. We hope we have addressed your concerns, but if there are any further points that need clarification or improvement, we would greatly appreciate your guidance and will make the necessary revisions.\"}", "{\"comment\": \"Thank you for the response! My concerns have been largely resolved, so I am revising my rating from \\u201c5: Marginally below the acceptance threshold\\u201d to \\u201c6: Marginally above the acceptance threshold.\\u201d\"}", "{\"title\": \"Keep my rating.\", \"comment\": \"Thanks for the response! My concerns are well addressed by the authors. 
Thus I keep my rating as \\\"8: accept, good paper\\\".\"}", "{\"title\": \"Response to reviewer 7gKo\", \"comment\": \"Thank you for your constructive feedback. We have conducted additional experiments to address the concerns you raised.\\n\\n---\\n\\n**Q1**. The effectiveness of MM-LoRA when simultaneously training the whole LLM\\n\\n**A1**. Thank you for the reviewers' thoughtful feedback. Regarding the effectiveness of MM-LoRA in simultaneously training the entire LLM, we conducted systematic experiments on LLaVA, comparing the performance of two methods: 1) fine-tuning only the LLM; and 2) simultaneously fine-tuning the LLM and MM-LoRA.\\n\\n| method | \\u03b2 | \\u03b3 |ScienceQA|SEED|MMBench|TextVQA|POPE|**avg.**|\\n| --- | --- | --- |--- | --- | --- |--- | --- | --- |\\n| LLaVA |- | - |66.8|58.6|64.3|58.2|85.9|66.8|\\n| +MM-LoRA |1 | 0 |68.1 | 61.1 | 64.4 | 58.5| 86.2 | 67.7 |\\n| +MM-LoRA |0.75 | 0.25 |69.0 | 61.2 | 65.0 | 58.3| 86.2 | 67.9 |\\n| +MM-LoRA |0.5 | 0.5 |68.6| 61.2 | 64.6 | 58.3| 86.2 | 67.8 |\\n| +MM-LoRA |0.25 | 0.75 |69.1 | 61.3 | 64.7 | 58.5| 86.2 | **68.0** |\\n| +MM-LoRA |0 | 1 |69.0 | 61.3 | 64.4 | 58.6| 86.1 | 67.9 |\\n\\n\\nThe results show that simultaneously fine-tuning both the LLM and MM-LoRA significantly improves model performance, outperforming the method of fine-tuning only the LLM across ScienceQA, TextVQA, MMBench, POPE, and SEED benchmarks. The best performance was achieved when \\u03b2 and \\u03b3 were set to 0.25 and 0.75, respectively. 
These findings demonstrate that MM-LoRA is indeed effective in the current LVLM training paradigm (which typically involves training the entire LLM) and contributes positively to performance enhancement.\"}", "{\"summary\": \"This work aims to enhance the visual capabilities of MLLMs through two main contributions: (1) the introduction of a MM-LoRA for the LLM decoder, and (2) the development of a Query module for visual encoders.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It is well known that MLLMs often exhibit limitations in their visual capabilities, and this work addresses this important issue. Additionally, the paper is well-written and easy to follow.\", \"weaknesses\": [\"The proposed method leverages additional learning parameters to enhance the visual capabilities of MLLMs. Recent studies (e.g., LLaVA-Next, VILA-1.5, Qwen-VL-2) have shown that simply improving image resolution using various (*any resolution*) techniques is a straightforward and effective way to address this issue. I am skeptical that the proposed method will achieve performance comparable to these AnyRes approaches, particularly on tasks requiring high resolution. The proposed method appears limited by the visual encoder, despite the incorporation of additional LoRA modules.\", \"The focus of this study is on the visual capability of MLLMs. However, only one ViT is examined, and there are no ablations on different ViTs. This raises doubts about the generalizability of the proposed approach.\", \"The improvements from the proposed method should be evaluated based on the ablation studies, rather than relying on Table 1 and 2, as the model Arcana reported in Table 1 and 2 is trained on a combination of large datasets (comparing to LLaVA-1.5 presented in Table 1 and 2). 
However, it is important to note that only a limited selection of four benchmarks is presented in ablations.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper seeks to improve visual understanding capability of vision language model. It introduces two components to enhance the capacity of the VLM: Multimodal (MM) -LoRA and Query Ladder. The MM-LoRA increases the capacity of the decoder by introducing low-rank adaptable matrices separately for the vision and language modalities. The QLadder increases the capacity of the encoder by incorporating learnable tokens at the input. Overall, this approach shows benefits of individual components and also competitive performance across MM/Language benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The identified problem of the lack of strong visual capabilities (e.g. detection, localization, color-comprehension etc.) in current vision language models is interesting and worth studying\", \"It's also interesting to see the need for modality specific adaptation\", \"The paper is easy to comprehend and well supported by various block diagrams\"], \"weaknesses\": [\"The summary section (line 519-520) mentions \\\"achieving notable performance improvements even with limited data resources\\\". However, the problem of limited data sources is not convincing. For instance, given that LLM\\u2019s and Visual-Encoders are trained with web-scale data, it\\u2019s not clear how and why would data be limited. Perhaps, the authors want to focus on specific domains (say, medical) where curating data might be difficult due to privacy concerns. 
But, for the kind of problems mentioned in the paper (detection, localization, color-comprehension), it's not clear why data is limited.\", \"The paper lacks explanations for why components like LORA and QLadder should improve visual capabilities like detection, localization, color-comprehension. While the attention visualization (line 469-482) demonstrates the effect of these components on visual-token-attention, it\\u2019s not clear why that itself should improve performance. Further, some of the statements like \\u201cpromotes cooperation between different modalities\\u201d (line 478) and \\u201cenriches the visual information\\u201d (line 481) are not corroborated with any intuition or experiments.\", \"The contributions of the proposed components are unclear. For instance, the benefit of LORA for limited-data-adaptation has been well studied in the past (e.g. [1]). The importance of introducing additional visual tokens to visual encoders has also been shown in [2]. In light of the prior works, the paper should more clearly distinguish its technical contributions.\", \"Are the benefits of Qladder/MM-LORA consistent across scales? If we increase the scale of LLM and Visual Encoder, will Qladder/MM-LORA still show benefits?\", \"Miscellaneous\", \"Is the beta gamma ratio study consistent across a range of LORA ranks (say 64 - 1024)? Here it was set to 256.\", \"Why was LORA applied only to linear layers?\", \"In qualitative evaluations (Fig. 5), comparisons should be made with other models to clearly show qualitative gains from using Qladder/MM-LORA\", \"[1] https://arxiv.org/abs/2106.09685\", \"[2] https://arxiv.org/abs/2309.16588\"], \"questions\": \"What was the main problem that was being addressed? Was it limited data adaptation, was it visual capabilities? 
If it was just visual capabilities, how does LORA or a few-learnable tokens based adaptation compare against scaling up?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes Arcana, a multi-modal large language model (MLLM) designed to improve visual perception capabilities. Arcana introduces two key techniques: MM-LoRA and QLadder. MM-LoRA enables separate vision and language pathways, reducing modality interference, while QLadder enhances the visual encoder's ability to capture fine-grained visual details. Extensive experimentation across benchmarks like VQAv2 and TextVQA demonstrates Arcana\\u2019s improvement over existing MLLMs in both zero-shot and fine-tuning scenarios, highlighting its capacity for accurate visual reasoning and multi-modal alignment\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and structured\", \"weaknesses\": \"The structural innovations of MM-LoRA and QLadder are not sufficiently solid, as the design does not appear to specifically address identified issues such as color recognition, object counting, small object understanding, and spatial location.\", \"questions\": \"In terms of motivation, the paper aims to resolve MLLM visual perception issues such as color recognition, object counting, small object understanding, and spatial location. 
However, the structural designs of QLadder and MM-LoRA do not seem specifically tailored to address these problems, leading to the impression that performance improvements may stem from data rather than a well-targeted structural design, making the explanation of the results appear somewhat forced.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for supporting the acceptance of our paper\", \"comment\": \"Thank you for your response. We\\u2019re very encouraged that our rebuttal basically addressed your concerns and appreciate your support for the paper\\u2019s acceptance.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your great efforts on the further reply.\\n\\nI may have another question: I'm wondering about the effectiveness of MM-LoRA if we simultaneously train the whole LLM during pre-training or SFT. Because we usually use LoRA just in low-resource cases, and in many cases, training the whole LLM performs much better than training LoRA on downstream benchmarks. I am curious whether you have explored this point. \\n\\nMaybe the time constraints do not allow any time-consuming experiments, but any insights would be appreciated.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewers, as the discussion period ends on Dec 2nd at midnight AoE, we are eager to ensure that all the questions have been thoroughly resolved. We hope that our responses have adequately addressed your concerns. 
Your feedback is invaluable to us, and we would greatly appreciate it if you could take a moment to provide a final rating and feedback.\"}", "{\"title\": \"Response to reviewer 7gKo - Follow up\", \"comment\": \"Thank you for your valuable feedback!\\n\\nWe would like to highlight a few things to avoid misunderstanding.\\n\\n- Fine-tuning all LLM parameters and LoRA fine-tuning are two ways of using an LLM in multimodality models. LoRA fine-tuning is more efficient and incurs less training cost.\\n\\n- The similar results of fine-tuning the LLM alone and fine-tuning both the LLM and MM-LoRA parameters do not influence the contribution of our paper. The reason is that our goal is to improve the original LoRA by extending it to MM-LoRA, and this effectiveness is verified in our paper. \\n\\nWe would appreciate it if you could double-check. Looking forward to your further comments and discussions.\"}", "{\"title\": \"Response to reviewer 1Gww (2/2)\", \"comment\": \"**Q3**. The improvements from the proposed method should be evaluated based on the ablation studies, rather than relying on Tables 1 and 2, as the model Arcana reported in Tables 1 and 2 is trained on a combination of large datasets (compared to LLaVA-1.5 presented in Tables 1 and 2). However, it is important to note that only a limited selection of four benchmarks is presented in ablations.\\n\\n**A3**. Thank you for your comments! Here\\u2019s our response to this issue:\\n\\nThe experimental results in Table 1 and Table 2 compare our method with mainstream approaches (e.g., LLaVA-1.5, Arcana), which use different dataset combinations. These tables aim to highlight the competitiveness of our method in multimodal tasks.\\n\\nIn the ablation experiments, we used a unified dataset to ensure fairness and focus on evaluating the effectiveness of Q-Ladder and MM-LoRA. 
This approach aligns with standard experimental practices [1][2][3][4], where baseline methods are used for comparison, and variables are controlled in ablation studies to isolate the contribution of the proposed method.\\n\\nWe understand and appreciate your concern about the limited number of benchmark datasets in the ablation study. To address this, we will include results from all benchmark datasets tested in the ablation study in the Appendix, offering a more comprehensive validation of our method\\u2019s effectiveness across different tasks. \\n\\nThank you again for your valuable feedback! We will update the relevant content accordingly.\\n\\n[1] Ye et al. mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CVPR, 2024.\\n\\n[2] Li, Zhang, et al. Monkey: Image resolution and text label are important things for large multi-modal models. CVPR, 2024.\\n\\n[3] Bai, Jinze, et al. Qwen-vl: A frontier large vision-language model with versatile abilities. Arxiv preprint, 2023.\\n\\n[4] Tong, Shengbang, et al. Eyes wide shut? exploring the visual shortcomings of multimodal llms. CVPR, 2024.\"}", "{\"summary\": \"This paper proposes a new MLLM named Arcana, mainly offering two improvements for boosting model comprehension of vision information. The first one is MM-LoRA, which learns two separate sets of LoRA parameters for vision and text tokens respectively, aiming to decouple the learning spaces of different modalities and better integrate the multi-modal knowledge. The other one is Q-Ladder; compared with Q-Former, it selects the vision features of different layers in ViT as the key/value vectors for different layers of Q-Ladder, instead of only using the last-layer vision token features. 
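As a concrete illustration of the MM-LoRA idea summarized above, here is a minimal, hypothetical sketch (function and variable names, and the beta/gamma rank split, are our assumptions, not the authors' code): one frozen linear projection is shared by all tokens, while two separate low-rank branches are routed by token modality.

```python
import numpy as np

def mm_lora_forward(x, is_vision, W, lora_v, lora_t, scale=1.0):
    """One MM-LoRA-style linear layer (illustrative sketch only).

    x         : (seq, d_in) token features
    is_vision : (seq,) bool mask, True for vision tokens
    W         : (d_in, d_out) frozen pretrained weight
    lora_v    : (A_v, B_v) vision branch, shapes (d_in, r_v) and (r_v, d_out)
    lora_t    : (A_t, B_t) language branch, shapes (d_in, r_t) and (r_t, d_out)

    The total rank r would be split as r_v = beta * r and r_t = gamma * r,
    giving each modality its own learning space on top of the shared weight.
    """
    A_v, B_v = lora_v
    A_t, B_t = lora_t
    base = x @ W                           # frozen path, shared by both modalities
    delta_v = (x @ A_v) @ B_v * scale      # vision-specific low-rank update
    delta_t = (x @ A_t) @ B_t * scale      # language-specific low-rank update
    return base + np.where(is_vision[:, None], delta_v, delta_t)
```

With both B matrices zero-initialized (standard LoRA practice), the layer starts out identical to the frozen projection, and training moves only the modality-specific branches, which is what decouples the two learning spaces.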
The experiments include evaluations on VQA benchmarks, multi-modal benchmarks, and language benchmarks, with some ablation studies and further explorations.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is quite easy to follow. People can quickly grasp the core design and the underlying motivation of the proposed two improvements.\\n2. The presentation is quite ok for me.\\n3. The proposed method has little impact on the efficiency and the memory cost.\", \"weaknesses\": \"1. I think the proposed MM-LoRA is greatly inspired by some previous works like P-LoRA [1] in InternLM-XComposer2, visual-only modules in mPLUG-Owl2 [2] and CogVLM [3], which somehow reduces the novelty of MM-LoRA. The authors should explain the differences between MM-LoRA and these methods, along with some experiments on effectiveness and efficiency to prove the necessity of MM-LoRA.\\n\\n2. The baselines listed in Tables 1, 2 are relatively old. I notice Arcana adopts ShareGPT4V data for training, but its benchmark performance does not seem as good as the ShareGPT4V 7B model. So it is recommended to include some more advanced baseline MLLMs. \\n\\n3. It seems that the hyper-parameters introduced by MM-LoRA and Q-Ladder are not so robust and can easily affect the model performance. The authors choose the best hyper-parameters according to the ablation results. So do these hyper-parameters still work for different base LLMs or architectures?\\n\\n\\n[1] Dong et al. InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Models. Arxiv preprint 2401.16420, 2024.\\n\\n[2] Ye et al. mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CVPR, 2024.\\n\\n[3] Wang et al. CogVLM: Visual Expert for Pretrained Language Models. Arxiv preprint 2311.03079, 2024.\", \"questions\": \"1. 
Compared with Q-Former, why does the proposed Q-Ladder not require an additional stage for alignment with the vision encoder?\\n\\n2. Is X_q in Q-Ladder a set of learnable tokens? Why not use instruction tokens for initialization, as done in Q-Former?\\n\\n3. In the visualizations, it\\u2019s difficult to conclude that (b) demonstrates more attention on vision tokens compared to (a). But interestingly, it mainly appears that (b) has more sink tokens [1]. \\n\\n4. In Table 4, why are the Q-Ladder results on the 13B model absent?\\n\\n\\n[1] Xiao et al. Efficient Streaming Language Models with Attention Sinks. ICLR, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No need for this.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 17q4 (1/2)\", \"comment\": \"Thanks a lot for your time and feedback. Below we address all raised concerns of the paper.\\n\\n---\\n\\n**Q1**. There is a lack of comparison with the latest open-source VLMs: LLaVA-OneVision, Qwen2-VL, InternVL2, etc. While these methods may use higher-quality training data and achieve stronger results, it is essential for readers to be aware of the current SoTAs. You may also explain why direct comparisons may not be feasible. It is acceptable for a research paper to fall short of SoTA results due to data quality differences, but these results should still be presented for context.\\n\\n**A1**. Thank you for the reviewer\\u2019s valuable feedback! We fully understand your concern regarding comparisons with the latest open-source VLM models (such as LLaVA-OneVision, Qwen2-VL, InternVL2, etc.). Below is our response:\\nThese latest open-source VLM models have indeed achieved stronger results, supported by high-quality training data. 
However, it is important to note that the large amounts of high-quality training data used by these methods (such as proprietary or domain-specific data) have not been made publicly available, making it unfeasible to directly replicate these methods and perform a fair comparison.\\n\\nNevertheless, to provide a more comprehensive context, we will include performance reports of these latest methods in the revised version, clearly outlining their data advantages and how they differ from our approach in the discussion. Additionally, our research focuses on exploring how structural improvements (such as Q-Ladder and MM-LoRA) can enhance visual perception abilities under the same data conditions, rather than solely relying on the expansion of data scale.\\n\\nThank you again for this important suggestion! We will include the relevant information in the updated version to help readers better understand the positioning and contributions of both current state-of-the-art methods and our approach.\\n\\n---\\n\\n**Q2**. MMVP is crucial for demonstrating visual capability, but only QLadder is ablated on the MMVP benchmark. Why not conduct an ablation of MM-LoRA on MMVP as well? This would provide stronger support for the claims.\\n\\n**A2**. Thank you for the reviewer\\u2019s suggestion! Your proposal to perform ablation studies on MM-LoRA using the MMVP benchmark is very important, and we fully understand that this would provide a more comprehensive validation of the effectiveness of our method.\\n\\nIn the initial version, we primarily focused on ablation experiments for Q-Ladder on the MMVP benchmark to highlight its direct contribution to visual perception abilities. 
However, to further support our research conclusions, we have also conducted additional experiments to evaluate the performance of MM-LoRA on the MMVP benchmark.\\n\\n| Method | add visual tokens | MMVP | POPE | MMBench | TextVQA |\\n|:---------:|:-----------------:|:----:|:----:|:-------:|:-------:|\\n| baseline | - | 24.0 | 85.9 | 64.3 | 58.2 |\\n| +MOF [1] | 256 | 27.1 | 86.2 | 60.1 | 56.5 |\\n| +Q-Ladder | 64 | 27.6 | 86.5 | 66.3 | 58.8 |\\n| +MM-LoRA | - | 26.9 | 86.4 | 65.7 | 58.3 |\\n\\nThe experimental results show that MM-LoRA not only significantly improves performance in multi-modal tasks (such as language generation and dialogue) but also demonstrates a strong supporting role in visual tasks.\\n\\nIn the revised version, we will include these experimental results and integrate them into the ablation study section to further showcase the synergistic effect of MM-LoRA and Q-Ladder in enhancing visual perception capabilities. Thank you again for your thoughtful suggestion. We will ensure that the analysis in this aspect is more comprehensive in the updated paper.\\n\\n[1]Tong, Shengbang, et al. Eyes wide shut? exploring the visual shortcomings of multimodal llms. CVPR, 2024.\\n\\n---\\n\\n**A3**. Was the visual encoder tuning in Table 7 conducted at the pre-training or instruction fine-tuning stage?\\n\\n**Q3**. Thank you for the reviewer\\u2019s question! In the experiments presented in Table 7, we ensured that all visual encoder adjustments (tuning) were validated during both the pre-training and instruction fine-tuning stages. This means that, whether in the model's pre-training phase or the instruction fine-tuning phase, we applied the same adjustment strategy to ensure consistency and fairness in the experimental results.\\n\\nThis setup is designed to comprehensively evaluate the impact of visual encoder fine-tuning on model performance across different training stages and further validate the effectiveness of our proposed method. 
Thank you again for your attention, and we will clearly outline this experimental detail in the revised version!\"}", "{\"title\": \"Response to reviewer 7gKo\", \"comment\": \"Thanks for your valuable response. Below, we address the concerns you raised.\\n\\n---\\n\\n**Q1**. I may have another question: I'm wondering about the effectiveness of MM-LoRA if we simultaneously train the whole LLM during pre-training or SFT. Because we usually use LoRA just in low-resource cases, and in many cases, training the whole LLM performs much better than training LoRA on downstream benchmarks. I am curious whether you have explored this point.\\n\\n**A1**. Thank you for your insightful question. In fact, training both the entire LLM and MM-LoRA during pretraining or SFT is indeed an interesting direction worth exploring. Compared to traditional LoRA methods, MM-LoRA introduces additional parameter spaces for both visual and linguistic information. We hypothesize that this approach could lead to some performance improvements. The intuition behind this is that, compared to fine-tuning only the LLM, training both the LLM and MM-LoRA together introduces more trainable parameters, allowing the model to more effectively capture both general knowledge and modality-specific features, thereby enhancing performance on multimodal tasks. We plan to conduct further experiments to validate this conclusion.\"}", "{\"title\": \"Response to reviewer cUZA - Follow up\", \"comment\": \"Thank you for your valuable feedback! We have further addressed the concerns you raised in the paper. Specifically, in methods that include any-resolution techniques (e.g., LLaVA-NeXT), we conducted additional experiments to further demonstrate the effectiveness of Q-Ladder. 
Additionally, we performed ablation experiments with different visual encoders, which further validate the effectiveness of Q-Ladder.\\n\\nWe have provided further explanations in these experiments to clarify the points you mentioned. We sincerely hope that these additions address your concerns and look forward to your feedback.\"}", "{\"summary\": \"This paper proposes a novel Multi-modal Large Language Model (MLLM) called Arcana, designed to enhance visual understanding capabilities. It introduces two key components: Multimodal LoRA (MM-LoRA) and the Query Ladder adapter (QLadder). MM-LoRA consists of two parallel LoRAs (one for vision and one for language) to disentangle the modalities and enhance their specialized capabilities. QLadder aggregates intermediate representations from the visual encoder, further boosting the visual abilities of the MLLM. Experimental results demonstrate that Arcana outperforms previous MLLM baselines (e.g., LLaVA-1.5, mPLUG-Owl2, etc.) on visual question answering and multi-modal conversation benchmarks. Notably, the ablation study shows that QLadder significantly improves MMVP performance, which requires strong vision capabilities.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The presentation and writing are clear and easy to follow. Figure 1 in the introduction effectively illustrates the background, motivation, and main results of this paper.\\n\\n2. Tables 1 and 2 show that Arcana achieves better performance than previous MLLM baselines (e.g., LLaVA-1.5, mPLUG-Owl2, etc.) on visual question answering and multi-modal conversation benchmarks.\\n\\n3. The ablation studies in Tables 4 and 5 clearly validate the effectiveness of MM-LoRA and QLadder.\\n\\n4. The ablation study demonstrates that QLadder significantly improves MMVP performance, which requires robust visual capabilities. In Table 6, adding QLadder boosts MMVP performance by 3.6%.\", \"weaknesses\": \"1. 
There is a lack of comparison with the latest open-source VLMs: LLaVA-OneVision, Qwen2-VL, InternVL2, etc. While these methods may use higher-quality training data and achieve stronger results, it is essential for readers to be aware of the current SoTAs. You may also explain why direct comparisons may not be feasible. It is acceptable for a research paper to fall short of SoTA results due to data quality differences, but these results should still be presented for context.\\n\\n2. MMVP is crucial for demonstrating visual capability, but only QLadder is ablated on the MMVP benchmark. Why not conduct an ablation of MM-LoRA on MMVP as well? This would provide stronger support for the claims.\", \"questions\": \"1. Was the visual encoder tuning in Table 7 conducted at the pre-training or instruction fine-tuning stage?\\n\\n2. Have you tried adding LoRA to the visual encoder as well?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 7gKo (1/3)\", \"comment\": \"Thanks a lot for your time and feedback. Below we address all raised concerns of the paper.\\n\\n---\\n\\n**Q1**. I think the proposed MM-LoRA is greatly inspired by some previous works like P-LoRA [1] in InternLM-XComposer2, visual-only modules in mPLUG-Owl2 [2] and CogVLM [3], which somehow reduces the novelty of MM-LoRA. The authors should explain the differences between MM-LoRA and these methods, along with some experiments on effectiveness and efficiency to prove the necessity of MM-LoRA.\\n\\n**A1**. Thank you for the reviewer\\u2019s valuable question! Indeed, the design of MM-LoRA was inspired by works such as P-LoRA, mPLUG-Owl2, and CogVLM. However, these methods still have limitations in multi-modal tasks, while MM-LoRA introduces a novel approach to address these challenges. 
We will explain in detail the differences between MM-LoRA and these methods in the following sections, and provide corresponding experimental results to demonstrate its effectiveness and necessity.\\n\\n| Method | Specific Approach | Parameter Allocation Strategy | Relationship of Vision and Language in LLM | Adjustment Mechanism | Flexibility | Decoder Training Components and Parameters | Innovations/Shortcomings |\\n|-----------------------|----------------------------------------------|-------------------------------------------|----------------------------------------------------|---------------------------------------------|------------------------------------|-----------------------------------|-----------------------------------------|\\n| **mPLUG-Owl2** | Self-Attention with KV modality separation | No explicit difference, same parameter allocation | Partial decoupling | No | Low | **LLM+KV; > LLM parameter** | **Innovation**: Simple and effective KV decoupling; **Shortcoming**: Lacks exploration of modality importance differences |\\n| **InternLM-XComposer2** | No | Adds extra visual LoRA module | Modality coupling | No adjustment mechanism | Low | **LLM+visual LoRA; > LLM parameter** | **Innovation**: Introduction of visual LoRA; **Shortcoming**: Vision and language modalities remain coupled, prone to mutual interference |\\n| **CogVLM** | Two LLMs handle two modalities separately | Equal parameters for vision and language modalities | Complete decoupling | No | Low | **Visual LLM+ Language LLM; > >LLM parameter** | **Innovation**: Full modality decoupling; **Shortcoming**: Does not consider modality importance differences, and training parameter count is huge |\\n| **Arcana** | MM-LoRA | Uses beta, gamma to control different modality learning spaces | Complete decoupling | Predefined beta, gamma | High | **MM-LoRA; < LLM parameter** | **Innovation**: Explores modality importance through fixed ratios, optimizing resource allocation; **Advantage**: More 
efficient and flexible |\\n\\nSpecifically, mPLUG-Owl2 only decouples the Key-Value (KV) in the Self-Attention mechanism but does not deeply explore the varying importance of visual and language modalities in multimodal tasks. InternLM-XComposer2 introduces Visual LoRA to enhance visual representation. However, during training, the LLM decoder remains involved, keeping the visual and language modalities coupled, which may lead to feature competition. In contrast, CogVLM adopts a simplified version of MM-LoRA, implementing a fully decoupled strategy where the same number of parameters are allocated separately to train the visual and language modalities. \\n\\nWhen beta and gamma are set to 0.5, MM-LoRA's design is similar to CogVLM's full decoupling strategy. However, experiments show that this is not the optimal configuration. The importance of the visual and language modalities is not balanced in multimodal tasks. By adjusting the values of beta and gamma, the learning space can be more effectively allocated, thus improving performance. Comparative experimental results indicate that the model performs significantly better with configurations such as beta = 0.25 and gamma = 0.75 compared to beta = 0.5 and gamma = 0.5. The specific experimental results are shown in the table below.\\n| beta | gamma | ScienceQA | TextVQA | MMBench | SEED |\\n|:----:|:-----:|:---------:|:--------:|:--------:|:--------:|\\n| 0 | 1 | 65.8 | 51.2 | 56.4 | 58.7 |\\n| 0.75 | 0.25 | 68.6 | 58.7 | 63.3 | 61.8 |\\n| 0.5 | 0.5 | 70.1 | 58.4 | 64.3 | 61.9 |\\n| 0.25 | 0.75 | **71.2** | **58.7** | **64.5** | **62.4** |\\n| 1 | 0 | 70.1 | 57.9 | 65.4 | 61.2 |\"}", "{\"title\": \"Thanks for raising your score\", \"comment\": \"Thanks for raising your score! We\\u2019re very encouraged that our rebuttal basically addressed your concerns and appreciate your support for the paper's acceptance.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thanks for your response. 
It seems that MM-LoRA offers only marginal improvements compared to the settings of 'beta=1, gamma=0' or 'beta=0, gamma=1', as results on these benchmarks typically vary within a range of +0.5 or -0.5. My concerns regarding the innovation and effectiveness of MM-LoRA have not been fully addressed, so I have decided to maintain my initial position.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We appreciate all reviewers' valuable comments. We were wondering if our responses have addressed your concerns. Please let us know if you have additional questions. Thank you!\"}", "{\"title\": \"Response to reviewer 7gKo (2/3)\", \"comment\": \"**Q2**. The baselines listed in Tables 1, 2 are relatively old. I notice Arcana adopts ShareGPT4V data for training, but its benchmark performance does not seem as good as the ShareGPT4V 7B model. So it is recommended to include some more advanced baseline MLLMs.\\n\\n**A2**. Thank you for your insightful comment. We understand your concern regarding the baselines used in Tables 1 and 2. It\\u2019s true that the models listed are relatively older. However, our approach primarily leverages MM-LoRA, which, while effective, might not match the performance of fully fine-tuned models in some cases. 
\\n\\nTo address this, we have conducted experiments comparing ShareGPT4V trained with LoRA against our model, and the results are as follows: \\n| Method | MME | MMBench |SEED (all/Image)|MM-Vet|VQAv2|GQA|\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Arcana |1476.5|66.9|62.6/68.4|34.8|79.2|61.6|\\n| Arcana* |1520.9|67.4|63.2/69.4|34.4|79.5|61.8|\\n| ShareGPT4V-7B-LoRA|1501.2|66.6|61.9/67.6|33.7|79.3|62.1|\\n\\nThese results help highlight the distinct advantages and trade-offs between our approach and the full fine-tuning methods, particularly in terms of efficiency and scalability, while still achieving competitive performance.\\n\\nWe appreciate the opportunity to clarify this, and we will include more advanced baseline MLLMs in future experiments to provide a more comprehensive evaluation.\\n\\n---\\n\\n**Q3**. It seems that the hyper-parameters introduced by MM-LoRA and Q-Ladder are not so robust and can easily affect the model performance. The authors choose the best hyper-parameters according to the ablation results. So do these hyper-parameters still work for different base LLMs or architectures?\\n\\n**A3**. Thank you for raising the insightful question regarding the robustness of the hyper-parameters introduced by MM-LoRA and Q-Ladder. To address this issue, we conducted additional experiments to evaluate their generalizability across different base LLMs and visual encoders.\\n\\n1. **Validating MM-LoRA with Different LLMs**\\n\\nWe replaced the base LLM with a 13B model to test the effectiveness of MM-LoRA. 
Using the optimal hyperparameters identified in our study, the results are shown in the table below.\\n\\n| Method | Visual encoder | LLM | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:--------:|:--------------:|:---:|:---------:|:--------:|----------|----------|----------|\\n| baseline | VIT-L | 13B | 71.2 | 60.2 | 86.7 | 68.5 | 61.3 |\\n| +MM-LoRA | VIT-L | 13B | **71.5** | **60.7** | **86.8** | **68.8** | **62.9** |\\n\\nWe found that, compared to the baseline, the introduction of MM-LoRA still leads to performance improvements across multiple benchmarks. This confirms that MM-LoRA remains effective across different language model architectures.\\n\\n2. **Validating Q-Ladder with Different Visual Encoders**\\nWe tested Q-Ladder with several alternative visual encoders, including **Siglip**, **CLIP-L**, and **CLIP-H**, while keeping the default hyperparameters unchanged.\\n\\n| Method | Visual Encoder | Image resolution | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:---------:|:--------------:|:----------------:|:---------:|:--------:|:--------:|:--------:|:--------:|\\n| baseline | CLIP-VIT-L | 336*336 | 69.1 | 58.2 | 86.4 | 64.1 | 58.1 |\\n| +Q-Ladder | CLIP-VIT-L | 336*336 | **71.0** | **58.8** | **86.6** | **66.3** | **60.5** |\\n| baseline | CLIP-VIT-H | 224*224 | 67.8 | 53.5 | 83.7 | 63.1 | 58.4 |\\n| +Q-Ladder | CLIP-VIT-H | 224*224 | **68.8** | **53.8** | **83.9** | **63.6** | **58.8** |\\n| baseline | Siglip | 384*384 | 70.6 | 62.4 | 86.0 | 65.9 | 62.1 |\\n| +Q-Ladder | Siglip | 384*384 | **71.1** | **62.9** | **86.3** | **66.8** | **62.5** |\\n\\nThe results demonstrate that the introduction of Q-Ladder still improves the model's performance across multiple benchmarks. This highlights the robustness of Q-Ladder to variations in encoder architectures. 
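As a rough, hypothetical sketch of the Q-Ladder mechanism evaluated above (single-head attention, with assumed names, shapes, and projections shared across steps; not the authors' code): a small set of learnable query tokens is refined step by step, where step l cross-attends to the frozen ViT's layer-l features as keys/values, so intermediate visual detail is aggregated rather than only the last-layer features.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def qladder_forward(queries, vit_layer_feats, Wq, Wk, Wv):
    """Refine learnable queries against a ladder of ViT layer features.

    queries         : (n_q, d) learnable query tokens
    vit_layer_feats : list of (n_patch, d) features, one per selected ViT layer
    Wq, Wk, Wv      : (d, d) projections (shared across steps here for brevity)
    """
    q = queries
    d = q.shape[-1]
    for feats in vit_layer_feats:                    # one ladder step per ViT layer
        attn = softmax((q @ Wq) @ (feats @ Wk).T / np.sqrt(d))
        q = q + attn @ (feats @ Wv)                  # residual cross-attention update
    return q  # extra visual tokens passed to the LLM alongside the patch tokens
```

Because the ViT itself stays frozen, only the queries and the small projection matrices would be trained, which is consistent with the paper's claim that Q-Ladder adds visual tokens at little extra cost.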
These findings confirm that the hyperparameters identified through ablation studies are robust and effective across various base LLMs and visual encoders, validating the adaptability of our approach. We will incorporate these results into the revised manuscript to further enhance its clarity and completeness.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for your detailed response! There are still a few critical points that hinder me from giving the acceptance:\\n\\n**1. Concerns on MM-LoRA**\\n\\nAlthough the basic innovation behind MM-LoRA and P-LoRA is similar, I generally recognize your claim on the flexibility of MM-LoRA since it has configurable $\\\\beta$ and $\\\\gamma$. While, my concern is about the ability of generalization on other types of MLLM (beyond simply using a larger base LLM), i.e., would the $\\\\beta$ and $\\\\gamma$ selected on Arcana still work for those other MLLMs? \\n\\n**2. Concerns on X_q in Q-Ladder**\\n\\nI think some empirical or theoretical explanation on 1) ablating the initialization way of X_q and 2) why learnable X_q can enhance visual representations are necessary.\"}", "{\"title\": \"Response to reviewer 7gKo\", \"comment\": \"Thank you for your feedback! We would like to clarify that the above experiments were conducted following the experimental setup suggested by the reviewers. The results show that varying \\u03b2 and \\u03b3 has minimal impact on the performance of jointly fine-tuning the LLM and MM-LoRA, which aligns with our expectations. Since MM-LoRA introduces only about 300 M parameters, its impact is relatively limited compared to fine-tuning a 7B LLM. **However, the goal of this experiment differs from the core perspective of our paper.**\\n\\nThe core claim of MM-LoRA is to provide independent learning spaces for visual and linguistic modalities to achieve modality decoupling, which has been shown to be beneficial. 
In the joint training experiment suggested by the reviewer, the visual and language modalities are still coupled during LLM training, which goes against our original design. **In the context of full fine-tuning, achieving this goal requires designing separate LLM decoders for each modality, which significantly increases model size and incurs prohibitive training costs.** To address this, we proposed MM-LoRA as an extension of LoRA, leveraging the following advantages:\\n\\n**1**. Lower training costs: Compared to full fine-tuning, LoRA-based training under the DeepSpeed ZeRO-3 strategy reduces GPU memory usage by 50% and shortens training time by 30%. Given our limited computational resources, LoRA-based improvements are more practical.\\n\\n**2**. Stronger scalability: By adjusting \\u03b2 and \\u03b3, MM-LoRA allows control over the learning space of each modality, enabling a direct comparison of the importance of modality-specific knowledge in the decoder.\\n\\nThe experimental results in the paper clearly demonstrate that providing independent learning spaces for different modalities effectively facilitates modality decoupling, and this design has a substantial positive impact on improving model performance.\\n\\nWe hope this statement addresses your concerns regarding the above experimental results and provides clarification. Please let us know if further explanation is needed!\"}", "{\"title\": \"Response to reviewer cUZA (3/3)\", \"comment\": \"**Q5**. Why was LORA applied only to linear layers?\\n\\n**A5**. Thank you for the reviewer\\u2019s question! In our method, LoRA is applied only to the linear layers within the Transformer architecture, which is due to the structural characteristics of the Transformer model. In Transformers, the majority of learnable parameters are concentrated in the linear layers (e.g., fully connected layers), particularly within the self-attention mechanism and feedforward networks. 
Other components, such as the attention weight matrices and normalization layers, typically do not contain a large number of learnable parameters or have relatively small parameter sizes, making them less suitable for the application of LoRA.\\n\\n---\\n\\n**Q6**. In qualitative evaluations (Fig. 5), comparisons should be made with other models to clearly show qualitative gains from using Qladder/MM-LORA\\n\\n**A6**. Thank you for the reviewer\\u2019s suggestion! We fully understand your perspective. To more clearly demonstrate the advantages of Q-Ladder and MM-LoRA in qualitative evaluations, we plan to add a comparison with other models in Fig. 5. This will help visually showcase the qualitative improvements of our method in visual tasks. We will update this section in the revised version to better present the advantages of these modules. We appreciate your valuable feedback!\\n\\n---\\n\\n**Q7**. Why should MM-LoRA and QLadder improve performance?\\n\\n**A7**. Thank you for the question. MM-LoRA and Q-Ladder improve performance by addressing key challenges in multimodal learning. MM-LoRA decouples the visual and language modalities, allowing each to specialize in its respective space, reducing feature competition and improving the quality of the learned representations for both modalities. Q-Ladder enhances the visual encoder by aggregating intermediate representations from the frozen pretrained visual encoder, enabling the model to capture more detailed and relevant visual features while retaining the encoder's powerful capabilities. Experimental results in the ablation study demonstrate that these components contribute to better performance on multimodal tasks.\"}", "{\"title\": \"Response to reviewer cUZA (2/3)\", \"comment\": \"**Q3**. Are the benefits of Qladder/MM-LORA consistent across scales? If we increase the scale of LLM and Visual Encoder, will Qladder/MM-LORA still show benefits?\\n\\n**A3**. Thank you for the reviewer\\u2019s question! 
To validate the performance of Q-Ladder and MM-LoRA across different model scales, we conducted a series of experiments to explore whether these modules continue to deliver significant performance improvements with larger LLMs and visual encoders.\\n\\nFirst, we scaled the LLM from 7B to 13B to verify whether MM-LoRA remains effective as the model size increases. The experimental results are as follows:\\n| Method | Visual encoder | LLM | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:--------:|:--------------:|:---:|:---------:|:--------:|----------|----------|----------|\\n| baseline | VIT-L | 7B | 69.1 | 58.1 | 86.4 | 63.8 | 60.1 |\\n| +MM-LoRA | VIT-L | 7B | **71.2** | **58.7** | **86.5** | **64.8** | **61.5** |\\n| baseline | VIT-L | 13B | 71.2 | 60.2 | 86.7 | 68.5 | 61.3 |\\n| +MM-LoRA | VIT-L | 13B | **71.5** | **60.7** | **86.8** | **68.8** | **62.9** |\\n\\nIt can be observed that, despite the increase in LLM size, MM-LoRA still improves the model's performance across multiple benchmarks. This indicates that the design of MM-LoRA provides consistent benefits across LLMs of different scales.\\n\\nNext, we upgraded the visual encoder from ViT-L to ViT-H to further explore the benefits of Q-Ladder and MM-LoRA with larger-scale visual encoders. 
The experimental results are as follows:\\n| Method | Visual encoder | Image Resolution | LLM | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:---------:|:--------------:|:-----------------:|:---:|:---------:|:--------:|:--------:|:--------:|:--------:|\\n| baseline | VIT-L | 336*336 | 7B | 69.1 | 58.1 | 86.4 | 64.1 | 58.1 |\\n| +Q-Ladder | VIT-L | 336*336 | 7B | **71.0** | **58.8** | **86.6** | **66.3** | **60.5** |\\n| baseline | VIT-H | 224*224 | 7B | 67.8 | 53.5 | 83.7 | 63.1 | 58.4 |\\n| +Q-Ladder | VIT-H | 224*224 | 7B | **68.6** | **53.8** | **83.9** | **63.6** | **58.8** |\\n\\nThe experimental results also demonstrate that these modules continue to enhance model performance with larger-scale visual encoders. It is important to note that due to the resolution limitation of the open-source ViT-H in the CLIP model, which is restricted to 224x224, we could only conduct experiments at this resolution. Nevertheless, even under these conditions, Q-Ladder and MM-LoRA still exhibited significant advantages.\\n\\nIn conclusion, the experiments show that Q-Ladder and MM-LoRA consistently deliver substantial performance improvements across different scales of LLMs and visual encoders, validating the effectiveness of their structural design in large-scale models.\\n\\n---\\n\\n**Q4.** Miscellaneous: Is the beta-gamma ratio study consistent across a range of LoRA ranks (say 64-1024)? Here it was set to 256.\\n\\n**A4**. Thank you for the reviewer\\u2019s question! Regarding the beta-gamma ratio, we set the rank of LoRA to 256 in our paper based on preliminary experimentation, aiming to balance model performance and computational efficiency. However, we recognize that this ratio may vary with different LoRA ranks.\\n\\nTo address your query, we further investigated how the beta-gamma ratio changes with LoRA ranks of 256 and 512. The results show that the ratio remains stable across these ranks, with only slight variations. 
Below are the experimental results:\\n\\n| RANK | beta | gamma | ScienceQA | TextVQA | MMBench | SEED |\\n|:----:|:----:|:-----:|:---------:|:--------:|:--------:|:--------:|\\n| 256 | 0 | 1 | 65.8 | 51.2 | 56.4 | 58.7 |\\n| 256 | 0.75 | 0.25 | 68.6 | 58.7 | 63.3 | 61.8 |\\n| 256 | 0.5 | 0.5 | 70.1 | 58.4 | 64.3 | 61.9 |\\n| 256 | 0.25 | 0.75 | **71.2** | **58.7** | **64.5** | **62.4** |\\n| 256 | 1 | 0 | 70.1 | 57.9 | 65.4 | 61.2 |\\n| 512 | 0 | 1 | 66.1 | 52.3 | 55.8 | 59.4 |\\n| 512 | 0.75 | 0.25 | 69.2 | 57.8 | 64.2 | 62.0 |\\n| 512 | 0.5 | 0.5 | 70.1 | 57.3 | 63.1 | 61.4 |\\n| 512 | 0.25 | 0.75 | **71.0** | **58.2** | **64.4** | **62.7** |\\n| 512 | 1 | 0 | 70.3 | 57.9 | 63.9 | 62.4 |\\n\\nThe experiments show that the beta-gamma ratio demonstrates strong stability, remaining largely unaffected by changes in rank size.\\n\\nWe are currently conducting additional experiments to further validate the stability of the beta-gamma ratio, especially under a broader range of rank settings, and to explore its potential impact on model performance. Relevant results will be included in the revised version. We greatly appreciate your patience and suggestions!\"}", "{\"title\": \"Response to reviewer 1Gww (1/2)\", \"comment\": \"Thank you for your valuable feedback! Below, we address all of the concerns raised about the paper.\\n\\n---\\n\\n**Q1**. The proposed method leverages additional learning parameters to enhance the visual capabilities of MLLMs. Recent studies (e.g., LLaVA-Next, VILA-1.5, Qwen-VL-2) have shown that simply improving image resolution using various (any resolution) techniques is a straightforward and effective way to address this issue. I am skeptical that the proposed method will achieve performance comparable to these AnyRes approaches, particularly on tasks requiring high resolution. The proposed method appears limited by the visual encoder, despite the incorporation of additional LoRA modules.\\n\\n**A1**. Thank you for the reviewer\\u2019s comments! 
We understand your concerns and greatly appreciate your suggestions regarding the AnyRes method. Below is our detailed response to this issue:\\n\\nFirst, it is indeed true that increasing image resolution is a direct and effective approach to enhancing visual capabilities. However, this method typically requires training the visual encoder at high resolutions, which can result in significant computational overhead. Additionally, simply increasing resolution does not address specific task-related challenges in visual representations (such as fine-grained perception or improved modality alignment), which are the key problems that Q-Ladder and MM-LoRA aim to tackle.\\n\\nRegarding the models you mentioned, VILA-1.5 and Qwen-VL-2, we note that their data has not been open-sourced, making it difficult to directly replicate their results. To more clearly demonstrate the applicability of Q-Ladder within the AnyRes method, we conducted experiments within the publicly available LLaVA-Next framework and combined it with the AnyRes technique to validate the effectiveness of Q-Ladder.\\n\\n| Method | Visual Encoder | LLM | ScienceQA | TextVQA | POPE | MMBench | SEED-img |\\n|:----------:|:--------------:|:---------:|:---------:|:-------:|------|:-------:|:--------:|\\n| LLaVA-NeXT | CLIP-VIT-L | Vicuna-7B | 70.1 | 64.9 | 86.5 | 67.4 | 70.2 |\\n| +Q-Ladder | CLIP-VIT-L | Vicuna-7B | 71.0 | 65.6 | 87.4 | 68.8 | 70.7 |\\n\\nThe experimental results show that the introduction of Q-Ladder significantly enhances model performance in multi-modal tasks. Even in the high-resolution AnyRes setting, it demonstrates its unique advantage in performance improvement. This indicates that the design of Q-Ladder not only makes full use of the capabilities of existing visual encoders but also further enhances model performance on top of the resolution enhancement method.\\n\\n---\\n\\n**Q2**. The focus of this study is on the visual capability of MLLMs. 
However, only one ViT is examined, and there are no ablations on different ViTs. This raises doubts about the generalizability of the proposed approach.\\n\\n**A2**. Thank you for the reviewer\\u2019s comments! We fully understand your concern regarding the generalization ability of the method across different visual encoders. To address this, we conducted further experiments using a variety of visual encoders, including CLIP-ViT-L, CLIP-ViT-H, and SigLIP encoders, to assess the applicability and robustness of our proposed method. The experimental results are as follows:\\n\\n| Method | Visual Encoder | Image resolution | ScienceQA | TextVQA | POPE | MMBench | SEED |\\n|:---------:|:--------------:|:----------------:|:---------:|:--------:|:--------:|:--------:|:--------:|\\n| baseline | CLIP-VIT-L | 336*336 | 69.1 | 58.2 | 86.4 | 64.1 | 58.1 |\\n| +Q-Ladder | CLIP-VIT-L | 336*336 | **71.0** | **58.8** | **86.6** | **66.3** | **60.5** |\\n| baseline | CLIP-VIT-H | 224*224 | 67.8 | 53.5 | 83.7 | 63.1 | 58.4 |\\n| +Q-Ladder | CLIP-VIT-H | 224*224 | **68.8** | **53.8** | **83.9** | **63.6** | **58.8** |\\n| baseline | Siglip | 384*384 | 70.6 | 62.4 | 86.0 | 65.9 | 62.1 |\\n| +Q-Ladder | Siglip | 384*384 | **71.1** | **62.9** | **86.3** | **66.8** | **62.5** |\\n\\nThe experimental results demonstrate that both Q-Ladder and MM-LoRA significantly improve model performance across various benchmarks, regardless of whether the visual encoder is CLIP-ViT-L, the higher-capacity CLIP-ViT-H, or the SigLIP encoder with different training strategies. This strongly indicates that our method exhibits good adaptability and generality across different types and scales of visual encoders.\"}", "{\"title\": \"Response to reviewer cUZA (1/3)\", \"comment\": \"Thank you for your valuable feedback! Below, we address all of the concerns raised about the paper.\\n\\n---\\n\\n**Q1**. 
The summary section (line 519-520) mentions \\\"achieving notable performance improvements even with limited data resources\\\". However, the problem of limited data sources is not convincing. \\n\\n**A1**. Thank you for your comment. We agree that the challenge of limited data might not be immediately clear. In the context of Multimodal Language Models (MLLMs), the key challenge lies in the **alignment** of different modalities (vision and language)[1][2]. While pretrained models benefit from large-scale data, aligning these modalities for specific tasks requires significant amounts of **Supervised Fine-Tuning (SFT)** data. Previous work has demonstrated that fine-tuning on high-quality, task-specific data is essential for optimizing the interaction between vision and language, especially for tasks like localization, color recognition, and detection.\\n\\nWhat we mean by \\\"limited data resources\\\" is specifically the scarcity of **SFT data** needed to align these modalities effectively for multimodal tasks. Annotating such data is often time-consuming and expensive, particularly in specialized domains. Our approach focuses on optimizing model structure\\u2014through components like **Q-Ladder** and **MM-LoRA**\\u2014to improve performance even when such fine-tuning data is limited.\\n\\nIn summary, while large-scale datasets are used for pretraining, the bottleneck for multimodal tasks is the availability of sufficient SFT data for alignment, which our method helps address.\\n\\n[1] Ye et al. mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CVPR, 2024.\\n\\n[2] Liu, Haotian, et al. Visual instruction tuning. Neurips, 2024.\\n\\n---\\n\\n**Q2**. The contributions of the proposed components are unclear. For instance, the benefit of LORA for limited-data-adaptation has been well studied in the past (e.g. [1]). The importance of introducing additional visual tokens to visual encoders has also been shown in [2]. 
In light of the prior works, the paper should more clearly distinguish its technical contributions.\\n\\n**A2**. Thank you for the reviewer\\u2019s feedback! We understand your concerns regarding the similarities between our method and existing studies. Below, we will clarify our technical contributions to distinguish our approach from prior work.\\n\\nFirst, regarding the application of LoRA in limited-data adaptation, although LoRA has been extensively studied, our MM-LoRA module introduces a fundamental distinction. MM-LoRA employs a multi-modal parameter decoupling design, which introduces separate LoRA modules for the vision and language modalities, enabling each modality to learn dedicated parameters. This approach promotes more efficient multi-modal information fusion.\\n\\nThis design not only enhances the independent learning capabilities of each modality but also effectively strengthens their interaction. In contrast, previous LoRA implementations primarily focus on single-modality adaptation and do not achieve such decoupling and fusion. To validate this, we compared the performance of LoRA and MM-LoRA within MLLM models. The experimental results are as follows:\\n| Method | TextVQA | ScienceQA | MMBench | MME |\\n|:--------:|:-------:|:---------:|:-------:|:----:|\\n| LoRA | 58.1 | 69.1 | 63.8 | 1460 |\\n| +MM-LoRA | **58.7** | **71.2** | **64.8** | **1500** |\\n\\nSecond, regarding the introduction of visual tokens, while studies such as \\\"Vision Transformers Need Registers\\\" suggest that adding visual tokens can enhance the performance of visual encoders, the innovation of the Q-Ladder module goes beyond this. Q-Ladder introduces a learnable \\\"ladder\\\" structure that deeply aggregates intermediate-layer features from frozen pre-trained visual encoders (e.g., CLIP).\\n\\nThis structure preserves the generalization capabilities of the pre-trained encoder while further improving task-specific performance. 
Unlike simply adding visual tokens, Q-Ladder emphasizes the effective integration of intermediate-layer features from the visual encoder, thereby enhancing the model's capacity for fine-grained visual tasks. Additionally, we compared the performance of Q-Ladder with the prompt-based approach from [2]. The experimental results are as follows:\\n| Method | TextVQA | ScienceQA | MMBench | SEED |\\n|:---------:|:-------:|:---------:|:-------:|:----:|\\n| baseline | 58.2 | 69.1 | 64.3 | 58.6 |\\n| +prompt | 58.1 | 69.4 | 64.1 | 59.1 |\\n| +Q-Ladder | **58.8** | **71.2** | **66.3** | **61.3** |\\n\\nIn summary, the key distinctions of our work compared to existing studies lie in the multi-modal parameter decoupling design and the introduction of the ladder structure. The combination of these two innovations enables our method to achieve significant performance improvements in multi-modal tasks under limited data conditions.\"}", "{\"title\": \"More experimental results are expected\", \"comment\": \"Thanks for following up. Since the discussion period has been extended, I am curious about the effectiveness of MM-LoRA when simultaneously training the whole LLM. I want to know if MM-LoRA is truly effective in current LVLM training (usually training the whole LLM). It is suggested to ablate different $\\\\beta$ and $\\\\gamma$ choices on benchmarks such as ScienceQA, TextVQA, MMBench, MME, SEED.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to reviewer 17q4 (2/2)\", \"comment\": \"**Q4**. Have you tried adding LoRA to the visual encoder as well?\\n\\n**A4**. Thank you for the reviewer\\u2019s question! To address your inquiry, we have indeed conducted comparative experiments by incorporating LoRA and Q-Ladder into the visual encoder. 
The experimental results are as follows:\\n\\n| Method | ScienceQA | POPE | MMBench | TextVQA |\\n|:---------:|:---------:|:--------:|:--------:|:--------:|\\n| baseline | 69.1 | 85.9 | 64.3 | 58.2 |\\n| +LoRA | 69.5 | 85.7 | 65.9 | 58.1 |\\n| +Q-Ladder | **71.0** | **86.5** | **66.3** | **58.8** |\\n\\nThe experimental results show that Q-Ladder outperforms the direct application of LoRA in the visual encoder in terms of performance.\\n\\nWe believe that this result is due to Q-Ladder's approach of aggregating intermediate representations from the frozen pre-trained visual encoder without altering its original output representations. In contrast, when LoRA is applied directly to the visual encoder, it adjusts the output representations, which may disrupt the original capabilities of the pre-trained model. The structural design of Q-Ladder allows it to retain the strong representational power of the pre-trained model while adding new visual feature learning, leading to superior performance compared to the direct application of LoRA.\\n\\nWe will describe this comparative experiment in more detail in the revised version to further demonstrate the advantages of Q-Ladder over LoRA in the visual encoder. Thank you again for your suggestions and attention!\"}", "{\"title\": \"Response to reviewer 5dnK\", \"comment\": \"Thank you for your valuable feedback! Below, we address all of the concerns raised about the paper.\\n\\n---\\n\\n**Q1**. In terms of motivation, the paper aims to resolve MLLM visual perception issues such as color recognition, object counting, small object understanding, and spatial location. However, the structural designs of Q-Ladder and MM-LoRA do not seem specifically tailored to address these problems. \\n\\n**A1**. Thank you for the reviewer's feedback! Our research is motivated by the limitations of MLLMs in multimodal feature alignment, visual information representation, and cross-modal interaction, which hinder their visual perception capabilities. 
To highlight these issues, we introduce specific examples such as color recognition, object counting, small object understanding, and spatial position awareness, emphasizing their prevalence in real-world scenarios and the necessity of addressing them.\\n\\nTo overcome these limitations, we designed two modules, Q-Ladder and MM-LoRA, to enhance the model\\u2019s adaptability in complex visual tasks through structural improvements:\\n\\n1. **MM-LoRA** introduces two parallel LoRA modules tailored to the visual and language modalities, respectively, enabling modality-specific parameter learning. This design adapts to the unique characteristics of each modality and facilitates efficient integration of multimodal information.\\n\\n2. **Q-Ladder** incorporates a learnable \\\"ladder\\\" structure to deeply aggregate intermediate representations of frozen pretrained visual encoders (e.g., CLIP image encoders). This approach preserves the general visual representation capabilities while addressing the shortcomings of pretrained models in specific tasks, thereby improving the visual representation ability of MLLMs.\\n\\nTo demonstrate the effectiveness of Q-Ladder and MM-LoRA in tasks such as color recognition, object counting, small object understanding, and spatial position perception, while minimizing the influence of data quality differences, we conducted experiments using a unified dataset. These experiments were evaluated based on subclass metrics from MMBench and MME Benchmark. 
The results indicate the following:\\n\\n| Method | MME(Color) | MME(Count) | MME(Position) | MMB(Localization) | MMB(FP-C) | MMB(FP-S) |\\n|:---------:|:----------:|:----------:|:-------------:|:-----------------:|:---------:|:---------:|\\n| baseline | 169.0 | 160.0 | 123.3 | 42.6 | 67.9 | 52.5 |\\n| +Q-Ladder | 172.0 | 164.0 | 131.4 | 44.4 | 69.1 | 54.2 |\\n| +MM-LoRA | 171.0 | 166.0 | 129.7 | 43.2 | 68.7 | 53.3 |\\n\\nFP-C represents Fine-grained Perception (Cross Instance), and FP-S represents Fine-grained Perception (Single Instance). The introduction of Q-Ladder and MM-LoRA significantly improves MLLM performance across tasks including color recognition, counting, position estimation, localization, and fine-grained perception. Combined with the ablation studies presented in the paper, the results intuitively demonstrate the effectiveness of these modules in addressing the limitations of MLLMs\\u2019 visual perception abilities, while also validating the rationality and generality of their structural design.\\n\\n---\\n\\n**Q2**. leading to the impression that performance improvements may stem from data rather than a well-targeted structural design, which appears somewhat forced into explaining the results.\\n\\n**A2**. We understand the concern about whether the performance improvements stem from the dataset rather than the structural design. To validate the genuine effectiveness of Q-Ladder and MM-LoRA, **we ensured that all ablation studies in the paper were conducted on a unified dataset to guarantee experimental fairness.**\\n\\nSpecifically, we verified the independent contributions of the modules through the following approaches:\\n\\n1. **Ablation Study**: We incrementally added Q-Ladder and MM-LoRA to the baseline and observed performance changes across multiple benchmarks, clearly identifying the independent contributions of each module.\\n\\n2. 
**Unified Dataset**: All experiments used the same dataset to avoid inconsistencies related to data quality or quantity.\\n\\n3. **Multi-Dimensional Evaluation**: We evaluated performance across diverse benchmarks (e.g., MMBench, MME, TextVQA) using metrics such as color recognition, object counting, and localization. The results consistently showed that the structural designs of Q-Ladder and MM-LoRA led to significant improvements under identical data conditions.\\n\\nThe results demonstrate that the performance gains are due to the structural innovations of Q-Ladder and MM-LoRA, not merely the dataset, highlighting their effectiveness in enhancing the visual capabilities of MLLMs.\"}", "{\"title\": \"Response to reviewer 7gKo (3/3)\", \"comment\": \"**Q4**. Compared with Q-Former, why does the proposed Q-Ladder not require an additional stage for alignment with the vision encoder?\\n\\n**A4**. Thank you for your question. The design philosophy of Q-Ladder differs from that of Q-Former. While Q-Former focuses on aligning the visual encoder\\u2019s features, Q-Ladder aims to enhance the visual representation by supplementing the original features of the visual encoder. This approach eliminates the need for an additional alignment stage, resulting in a simpler and more efficient model.\\n\\nFor modality alignment, we adopt the design principles of mainstream methods like LLaVA and QwenVL, integrating multimodal information through a streamlined fusion mechanism. This allows us to achieve similar alignment results while preserving the strengths of the visual encoder\\u2019s feature representation, ultimately improving overall performance.\\n\\nIn summary, Q-Ladder achieves effective multimodal integration without requiring an extra alignment stage, offering a lightweight and efficient solution. We will further clarify this design choice in the revised manuscript.\\n\\n---\\n\\n**Q5**. Is X_q in Q-Ladder a set of learnable tokens? 
Why not use instruction tokens for initialization, as done in Q-Former?\\n\\n**A5**. Thank you for your question. To clarify, X_q in Q-Ladder refers to a set of learnable tokens designed to enhance visual representations, rather than performing modality alignment directly at the visual encoder stage. Unlike Q-Former, which initializes with instruction tokens, Q-Ladder focuses on improving visual features rather than transforming the visual encoder's output for language model adaptation.\\n\\nHere\\u2019s why we chose not to use instruction tokens for initialization:\\n\\n1. **Preserving Visual Encoder Independence** \\n Q-Ladder aims to enhance visual features without altering the output distribution of the visual encoder. The modality alignment is handled at the fusion stage, as seen in other approaches like LLaVA and QwenVL.\\n\\n2. **Flexibility and Adaptability** \\n Using learnable tokens allows Q-Ladder to easily adapt to various visual encoders and tasks without relying on specific initialization strategies, offering broader applicability.\\n\\n3. **Improved Visual Feature Representation** \\n Our experiments show that learnable tokens can adjust autonomously during training, leading to a deeper integration with the visual encoder\\u2019s output and enhancing visual feature expression.\\n\\nIn summary, Q-Ladder uses learnable X_q tokens to enhance visual representations efficiently, rather than aligning the visual encoder\\u2019s output directly, offering flexibility and stronger performance across tasks. We will expand on this in the revised manuscript.\\n\\n---\\n\\n**Q6**. In the visualizations, it\\u2019s difficult to conclude that (b) demonstrates more attention on vision tokens compared to (a). But interestingly, It mainly appears that (b) has more sink tokens [1].\\n[1] Xiao et al. Efficient Streaming Language Models with Attention Sinks. ICLR, 2024.\\n\\n\\n**A6**. Thank you for your insightful comment. 
You are correct in observing that (b) exhibits more sink tokens compared to (a). However, this phenomenon primarily occurs in the early layers of the decoder. While sink tokens appear in the initial layers, the overall performance of (b) remains superior to (a). This performance improvement is attributed to the decoupling of the visual and language learning spaces in MM-LoRA, which allows for more focused and efficient learning in each modality. This design significantly enhances the model\\u2019s ability to integrate multimodal inputs, leading to better task performance. Therefore, although sink tokens are present in the early layers, they do not negatively impact the model\\u2019s overall performance in later stages.\\n\\n---\\n\\n**Q7**. In Table 4, why the Q-Ladder results on 13B model are absent?\\n\\n**A7**. Thank you for raising this question. We believe the reviewer is referring to Table 6 rather than Table 4, as Table 4 mainly pertains to the ablation study of MM-LoRA. In the original experiments, the results for Q-Ladder on the 13B model were not presented, mainly due to resource constraints.\\n\\nWe highly appreciate the reviewer's suggestion and have conducted new experiments using the 13B model. The updated results are as follows:\\n\\n| Method | add visual tokens | MMVP | POPE | MMBench | TextVQA |\\n|:---------:|:-----------------:|:----:|:----:|:-------:|:-------:|\\n| baseline | - | 24.7 | 85.9 | 67.7 | 61.3 |\\n| +MOF [1] | 256 | 28.0 | 86.3 | 61.6 | 55.3 |\\n| +MOF [1] | 576 | 31.2 | 86.7 | 65.4 | 58.7 |\\n| +Q-Ladder | 64 | 32.7 | 86.5 | 68.3 | 61.6 |\\n\\nThe experimental results show that Q-Ladder outperforms MOF on the 13B model. We will update these results in the revised manuscript and provide a detailed comparative analysis for the 13B model.\\n\\nWe will ensure that the revised manuscript comprehensively presents all experimental results.\"}" ] }
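The MM-LoRA mechanism discussed throughout the responses above (two parallel LoRA branches, one per modality, attached to a frozen linear layer) can be sketched in a few lines of numpy. This is an illustrative sketch rather than the authors' implementation: the routing by a per-token modality mask, and the reading of \u03b2 and \u03b3 as a split of a total rank budget R between the two branches, are assumptions made here for illustration.

```python
import numpy as np

def lora_delta(x, A, B, alpha):
    # Standard LoRA update for a linear layer: (x @ A^T) @ B^T, scaled by alpha / rank.
    rank = A.shape[0]
    return (x @ A.T @ B.T) * (alpha / rank)

def mm_lora_linear(x, visual_mask, W, lang_branch, vis_branch, alpha=16.0):
    """Frozen linear layer plus two parallel, modality-specific LoRA branches.

    x:            (n_tokens, d_in) hidden states
    visual_mask:  (n_tokens,) bool, True where a token comes from the image
    lang_branch / vis_branch: (A, B) low-rank factor pairs, one pair per modality
    """
    y = x @ W.T  # frozen pretrained weight, shared by both modalities
    delta = np.where(
        visual_mask[:, None],
        lora_delta(x, *vis_branch, alpha),   # visual tokens -> vision LoRA
        lora_delta(x, *lang_branch, alpha),  # text tokens   -> language LoRA
    )
    return y + delta

# Hypothetical rank split: beta and gamma partition a total rank budget R.
# (Which coefficient governs which modality is not stated in the thread,
# so the assignment below is arbitrary.)
d_in, d_out, R = 8, 8, 16
beta, gamma = 0.25, 0.75
r_a, r_b = int(beta * R), int(gamma * R)

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))
# B factors start at zero, so the adapters are a no-op before any training.
lang_branch = (rng.normal(size=(r_a, d_in)), np.zeros((d_out, r_a)))
vis_branch = (rng.normal(size=(r_b, d_in)), np.zeros((d_out, r_b)))

x = rng.normal(size=(5, d_in))
visual_mask = np.array([True, True, False, False, False])  # 2 image + 3 text tokens
y = mm_lora_linear(x, visual_mask, W, lang_branch, vis_branch)
```

With the B factors initialized to zero (standard LoRA practice), `y` equals the frozen layer's output `x @ W.T`; training then moves only the low-rank factors of whichever branch a token is routed through, which is the "independent learning spaces for different modalities" property the responses describe.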
0y3hGn1wOk
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset
[ "Yingzi Ma", "Jiongxiao Wang", "Fei Wang", "Siyuan Ma", "Jiazhao Li", "Jinsheng Pan", "Xiujun Li", "Furong Huang", "Lichao Sun", "Bo Li", "Yejin Choi", "Muhao Chen", "Chaowei Xiao" ]
Machine unlearning has emerged as an effective strategy for forgetting specific information in the training data. However, with the increasing integration of visual data, privacy concerns in Vision Language Models (VLMs) remain underexplored. To address this, we introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting. Specifically, we formulate the VLM unlearning task via constructing the Fictitious Facial Identity VQA dataset and apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels. In terms of evaluation, since VLM supports various forms of ways to ask questions with the same semantic meaning, we also provide robust evaluation metrics including membership inference attacks and carefully designed adversarial privacy attacks to evaluate the performance of algorithms. Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance, with significant trade-offs between model utility and forget quality. Furthermore, our findings also highlight the importance of privacy attacks for robust evaluations. We hope FIUBench will drive progress in developing more effective VLM unlearning algorithms.
[ "Machine Unlearning", "Vision Language Model", "Privacy" ]
Accept (Poster)
https://openreview.net/pdf?id=0y3hGn1wOk
https://openreview.net/forum?id=0y3hGn1wOk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qsmKVeaAGv", "pFClKzSF2k", "oCs07BC83U", "nE4EuRikMz", "lFbg13oPBc", "klJg7tFhPJ", "jFxs8TtY5t", "eT2jg3X32A", "X9MRAYMebc", "SwOeJZ062X", "SWRCOadIeT", "RCEHSuNHoN", "NY1An0Ivo5", "KtacqZ2hlE", "JYe1zWQoqX", "E1NIFHv6hs", "DaX8BiORUL", "7CXb6klqmj", "6X96A3DCIh", "5g4e0ikHNq", "3bpgOMypW6" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732382036630, 1737524166570, 1732901769718, 1732724357788, 1734441678189, 1732724384696, 1733260449303, 1732382160583, 1732919968690, 1733195130374, 1729876263218, 1730776053394, 1730707728721, 1732474528766, 1730774772450, 1730655732784, 1732382962887, 1732724328336, 1732474516276, 1732382234775, 1733223502609 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_ndqf" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Area_Chair_6DmT" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_ndqf" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_ndqf" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_1Bmo" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_3WKf" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_9p6D" ], [ 
"ICLR.cc/2025/Conference/Submission12100/Reviewer_cfPb" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Authors" ], [ "ICLR.cc/2025/Conference/Submission12100/Reviewer_9p6D" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 1Bmo04\", \"comment\": \"Thanks for your comments. We\\u2019d like to address your concerns as follows.\\n\\n**Q1: Details and purpose of K-means algorithm in dataset construction.**\\n\\nWe apologize for any confusion caused by the statement regarding \\u201c400 sampled faces\\u201d in lines 165\\u2013172. We clarify that the face images are sampled from Part 4 of the SFHQ dataset, which consists of 125,754 high-quality 1024x1024 curated face images. These images were generated using \\\"inspiration\\\" images sampled from the Stable Diffusion v2.1 text-to-image generator with various face portrait prompts. Therefore, we clustered these 125,754 images into 400 clusters and selected the central face of each cluster to construct our benchmark. This approach ensures diversity among the evaluation candidate faces in our dataset and simplifies the face feature learning process. The details of the K-means clustering process have been included in Appendix C of the revised version of the paper.\\n\\n\\n**Q2: Concerns about small datasets and the usage of synthetic faces instead of real faces.**\\n\\n\\nOur dataset pairs each face with 20 QA questions, resulting in 8,000 VLM question-answer pairs. This amount of data is sufficient for evaluating the unlearning problem. 
In comparison, the previous TOFU unlearning benchmark for LLMs [1] involves only 200 authors, highlighting that our dataset offers significantly more data for analysis.\\n\\nAccording to our benchmark construction process, we create a profile for each facial image and generate 20 corresponding QA pairs based on these profiles. These profiles may include sensitive information such as health records and criminal histories. If real facial images were used, associating them with their actual information (e.g., health records) would undoubtedly lead to personal information leaks. On the other hand, if fictitious information were created for these real facial images, it could be mistakenly associated with actual individuals, leading to misunderstandings and stigmatization. Moreover, such associations could violate privacy protection regulations (e.g., IRB) and raise ethical concerns. Therefore, we chose to use synthetic facial images.\\n\\nTo calculate the distribution difference between synthetic faces and real faces, we randomly selected 5,000 synthetic, celebrity, and private facial images from the SFHQ dataset, the CelebA dataset [2], and the human faces dataset [3]. Since most celebrity faces are enhanced by makeup and other alterations, they may not accurately represent the distribution of real-world individual faces. Therefore, we also included the human faces dataset, a comprehensive facial dataset covering diverse creeds, races, and age groups. We chose to measure the distribution differences between the different face sets using Fr\\u00e9chet Inception Distance (FID) [4] with CLIP features. The results are as follows:\\n\\n| FID (CLIP) | **Synthetic** | **Private** | **Celebrity** |\\n|-----------|-----------|---------|-----------|\\n| **Synthetic** | 0.00 | 19.71 | 29.86 |\\n| **Private** | 19.71 | 0.00 | 27.51 |\\n| **Celebrity** | 29.86 | 27.51 | 0.00 |\\n\\nCLIP features, commonly used by VLMs, represent the visual features of images. 
Our observations reveal that the distribution difference (FID) between Synthetic and Private faces is smaller than that between Private and Celebrity faces when measured using CLIP features. This indicates that, for VLMs, the difference between synthetic faces and real faces is even smaller than the variability within the distribution of real faces.\\n\\n\\nFinally, we also want to point out that our benchmark focuses on robustly evaluating the effectiveness of unlearning algorithms within the Right to be Forgotten setting. Any image-paired question-answering datasets where VLMs lack prior knowledge can be applied for this evaluation.\\n\\n[1] Maini, Pratyush, Zhili Feng, Avi Schwarzschild, Zachary Chase Lipton, and J. Zico Kolter. TOFU: A Task of Fictitious Unlearning for LLMs. In ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models.\\n\\n[2] Large-scale CelebFaces Attributes (CelebA) Dataset\\n\\n[3] https://www.kaggle.com/datasets/ashwingupta3012/human-faces\\n\\n[4] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you to the author for clarifying some of the questions I raised in my comments.\\n\\nHowever, I still have the following concerns.\", \"for_q1\": \"The authors state in their response that \\\"In real-world scenarios, since VLMs are trained on massive corpora of data from the web, it is unavoidable that they may memorize and reproduce some sensitive or private data. \\\"\\nIs there any reference or experiment that can substantiate this claim? In practice, sensitive data are subject to anonymization before model training, so theoretically, personal information should not be incorporated into the model. In Section 3.1, the experiments shown in Table 1 also indicate that the two MLLMs involved obtained extremely low GPT scores. 
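For concreteness, the FID numbers discussed above reduce to the Fréchet distance between two Gaussians fitted to the feature embeddings of each image set. A minimal sketch of that computation, assuming precomputed (N, D) feature matrices (e.g., CLIP or Inception V3 features) rather than the authors' exact evaluation script:

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two (N, D) feature arrays."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets yield an FID of zero, and the distance grows as the two distributions separate, which is why a Synthetic-Private FID below the Private-Celebrity FID supports the authors' argument.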
This suggests that \\\"the VLMs do not possess knowledge of the correct answers prior to fine-tuning.\\\" (line 351 in the updated pdf)\\nI am not sure whether existing MLLMs (whether open-source or closed-source) indeed memorize some sensitive or private data.\", \"for_q4\": \"1. In Figure 4, there are instances where a decrease in Forget Quality is accompanied by a decrease in Model Utility, which seems counterintuitive. Could you provide some explanations for this phenomenon?\\n2. I would like to understand if the following procedure for generating the Model Utility-Forget Quality curve would be more reasonable:\\n(Note: The author's team does not need to conduct new experiments.)\\nGenerally, as the unlearning algorithm is executed, Forget Quality should gradually improve while Model Utility tends to decline. One might select appropriate model checkpoints at steps such that Forget Quality is sampled at regular intervals (e.g., recording values of 1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, and 0.1) and then report the Model Utility of the corresponding checkpoint at each Forget Quality score.\\n\\nI cannot determine the sampling rationale for the current Figure 4. The data points on both the horizontal and vertical axes appear to be randomly sampled instead of being evenly spaced.\\n\\nIn summary, I believe this is an inspiring work, but it has two significant shortcomings: (1) The motivation is still somewhat inadequately articulated. (2) In the main experiment, the experimental design appears somewhat unreasonable, as it does not account for the common practice of balancing the two objectives (Forget Quality and Model Utility).\"}", "{\"comment\": \"Dear Reviewer cfPb03,\\n\\nAs the discussion period deadline approaches, we would like to kindly inquire whether we have adequately addressed the concerns raised in your review. 
If any remaining points require further clarification, please know that we are eager to provide thorough responses to any additional questions or comments you may have.\\n\\nWe greatly value your constructive feedback and appreciate your efforts in helping us improve the quality of our manuscript. Thank you once again for your time and consideration.\\n\\nBest regards, The Authors\"}", "{\"metareview\": \"The paper introduces FIUBENCH, a benchmark for evaluating unlearning algorithms in Vision-Language Models (VLMs) under the \\\"Right to be Forgotten\\\" setting, using synthetic facial data and a two-stage evaluation pipeline. There are also some weaknesses of this paper, including the small dataset size, reliance on synthetic faces with potential artifacts, and unclear rationale for some experimental design choices. Despite these limitations, the paper\\u2019s novelty and relevance to a growing privacy challenge justify its acceptance, as it lays a foundation for future research in VLM unlearning.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about the dataset's small size, reliance on synthetic faces, unclear clustering methodology, and robustness of evaluation metrics. The authors clarified the use of synthetic data to address ethical concerns, demonstrated its distribution similarity to real faces via FID scores, and justified the dataset size by referencing comparable benchmarks. They provided additional experimental results with more models, detailed evaluation settings, and introduced a utility-forget quality curve for better clarity. While some concerns about motivation and experimental design remained, the authors\\u2019 thorough responses and added analyses strengthened the paper's validity, justifying its acceptance.\"}", "{\"comment\": \"Dear Reviewer ndqf25,\\n\\nAs the discussion period deadline approaches, we would like to kindly inquire whether we have adequately addressed the concerns raised in your review. 
If any remaining points require further clarification, please know that we are eager to provide thorough responses to any additional questions or comments you may have.\\n\\nWe greatly value your constructive feedback and appreciate your efforts in helping us improve the quality of our manuscript. Thank you once again for your time and consideration.\\n\\nBest regards, The Authors\"}", "{\"comment\": \"**Q4: why is the Fr\\u00e9chet Inception Distance (FID) between the synthetic and private faces smaller than the FID between private and celebrity faces when measured using CLIP**\\n\\n\\nWe argue that since the FID metric essentially calculates the Fr\\u00e9chet distance between two feature distributions (Gaussian distributions), the result showing a smaller FID between synthetic and private faces compared to the FID between private and celebrity faces suggests that, in the features extracted by CLIP, synthetic faces are closer to private faces. This could be because celebrity faces are mostly of middle-aged or young adults, often with heavy makeup, which causes more noticeable feature differences in CLIP. It is notable that almost all VLMs use CLIP features to generate visual tokens. Furthermore, we also tested using features from Inception V3 to compute the FID metric, with the following results:\\n\\n| Inception V3 | | | |\\n|-----------|-----------|---------|-----------|\\n| | Synthetic | Private | Celebrity |\\n| Synthetic | 0.00 | 51.55 | 52.82 |\\n| Private | 51.55 | 0.00 | 49.46 |\\n| Celebrity | 52.82 | 49.46 | 0.00 |\\n\\n\\nThe results show that there is little difference in the visual features extracted by Inception V3 among the three groups. This suggests that using synthetic faces can be applicable to real-world scenarios. 
In fact, many face recognition systems now incorporate synthetic faces as training data to improve performance [1,2,3].\\n\\n[1] IDiff-Face: Synthetic-based Face Recognition through Fizzy\\nIdentity-Conditioned Diffusion Models\\n\\n[2] Digi2Real: Bridging the Realism Gap in Synthetic Data Face Recognition via Foundation Models\\n\\n[3] AnyFace++: A Unified Framework for Free-style Text-to-Face Synthesis and Manipulation\\n\\n\\n**Q5: include a broader range of models in the evaluation** \\n\\nWe would like to clarify that the motivation of our paper is to fairly evaluate the effectiveness and robustness of various unlearning algorithms, while also highlighting the challenges associated with applying unlearning techniques to VLMs. Therefore, what we should consider is testing different unlearning algorithms, rather than testing many VLMs. In this field, many influential previous works have only tested two models (VLMs or LLMs) while we have tested 3 VLMs with different LLMs. For example, TOFU tested Llama-2-7B and Phi-1.5B [4]; Textual Unlearning tested LLaVA-1.5-7B and LLaVA-v1.6-7B [5]; MLLMU-Bench tested LLaVA-1.5-7B and Idefics-2-8B [6]; LLM unlearning tested OPT-1.3B, OPT-7B, and Llama-2-7B [7]. Based on this, we believe that the reviewer's suggestion is not reasonable.\\n\\n[4] TOFU: A Task of Fictitious Unlearning for LLMs (COLM 2024)\\n\\n[5] Can Textual Unlearning Solve Cross-Modality Safety Alignment? (EMNLP 2024)\\n\\n[6] Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench\\n\\n[7] Large Language Model Unlearning (ICLR 2024)\"}", "{\"title\": \"Response to Reviewer 3WKf04\", \"comment\": \"Thanks for your comments. We\\u2019d like to address your concerns as follows.\\n\\n**Q1: Concerns about the appropriateness of the setting with forget set and retain set.**\\n\\nThis setting is appropriate and it is commonly-used setting for unlearning, referring to previous work [1, 2]. 
As outlined in Section 2.3, after fine-tuning the VLMs on the entire dataset, which includes both the retain and forget sets, we apply the unlearning algorithm exclusively to the forget set. This setting effectively evaluates whether the model can precisely forget the sensitive information in the forget set while preserving the utility derived from the personal information in the retain set.\\n\\nAdditionally, our benchmark does not aim to directly forget individual samples. Instead, it focuses on forgetting question-answer pairs that explicitly contain specific sensitive information, such as medical conditions and hospitalization records, as illustrated in Figure 1.\\n\\n**Q2: Rationale for selecting the 5% forgetting proportion.**\\n\\n \\nOur paper introduces a range of forgetting difficulties, from easy to hard, based on the forgetting proportion. We present a 5% forgetting proportion in our main table as it represents a moderate setting. As shown in Figure 3, we also provide results for 1% and 10% forgetting proportions, representing easier and harder settings, respectively. Besides, these three forgetting proportions are also common practices in previous work [1, 2].\\n\\n\\n[1] Maini, Pratyush, Zhili Feng, Avi Schwarzschild, Zachary Chase Lipton, and J. Zico Kolter. TOFU: A Task of Fictitious Unlearning for LLMs. In ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models.\\n\\n[2] Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench\"}", "{\"comment\": \"Thank you for your thoughtful question. We would like to address your concerns as follows:\\n\\n**Q7: References for our motivation**\\n\\nWe agree with your point that \\\"sensitive data are typically anonymized before model training\\\" to reduce the risk of private information being included in the pretraining data. 
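For readers unfamiliar with the baselines evaluated under this setting, the forget-set objectives can be summarized in a few lines. This is a hedged sketch of the GA (gradient ascent) and GD (gradient difference) objectives as commonly defined in the unlearning literature, not the benchmark's exact implementation; `model` is assumed to return an object exposing a `.loss` attribute, as Hugging Face causal-LM wrappers do:

```python
def unlearning_loss(model, forget_batch, retain_batch=None, method="ga"):
    """GA: ascend (i.e., negate) the loss on the forget set.
    GD: GA on the forget set plus the ordinary loss on the retain set,
    trading off forgetting against preserved utility."""
    loss = -model(**forget_batch).loss
    if method == "gd" and retain_batch is not None:
        loss = loss + model(**retain_batch).loss
    return loss
```

Because the GA term is unbounded below, training on it can diverge, which is one reason such methods need early stopping or a retain-set counterweight.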
However, to our knowledge, we would like to emphasize that there is no conclusive evidence showing that anonymization techniques can fully eliminate all sensitive or private information from large-scale datasets used for training LLMs or VLMs. Many works have pointed out that the issues of private information memorization and leakage still exist in LLMs [1,2,3,4] and VLMs [5]. As a result, it is necessary to explore additional strategies to ensure the removal of such data. Furthermore, a recent study [6] has shown that VLMs can still make privacy-infringing inferences from previously unseen combinations of text and images. This suggests that even in the absence of explicit private information in the pretraining data, VLMs' powerful inference abilities may still pose privacy risks.\\n\\n[1] Counterfactual Memorization in Neural Language Models\\n\\n[2] Quantifying Memorization Across Neural Language Models\\n\\n[3] Scalable Extraction of Training Data from (Production) Language Models\\n\\n[4] SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation\\n\\n[5] Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench\\n\\n[6] Private Attribute Inference from Images with Vision-Language Models\\n\\n\\n\\n**Q8: Score in Table 1**\\n\\nWe would like to clarify that we created a synthetic dataset consisting of face image + private information pairs to ensure that these sensitive data were not included in the pretraining process, which explains the low GPT scores observed in Table 1. Afterward, we injected these synthetic private data into the VLMs through fine-tuning and then applied unlearning algorithms to remove them. This approach allowed us to fairly evaluate the effectiveness and robustness of various unlearning algorithms, while also highlighting the challenges associated with applying unlearning techniques to VLMs (**this is our main motivation, as mentioned in Q1**). 
It's important to note that the removal of private information is only one of the potential applications of unlearning methods in VLMs and is not our primary motivation.\\n\\n\\n\\n**Q9: Explanation about Figure 4**\\n\\nThank you very much for your careful observation. First, we would like to clarify that for each point in Figure 4, we perform uniform sampling by saving checkpoints every 6 steps. We believe the counterintuitive behavior you mention likely comes from the Preference Optimization method, as it shows a \\\"zig-zag\\\" trajectory in the two-dimensional plane. Specifically, in the latter phase of unlearning, model utility does not decrease as forget quality increases; instead, it follows this fluctuating pattern. We think this occurs because this method minimizes the likelihood of ground truth predictions on the forget set, while also accessing the retain set during loss computation, attempting to balance the losses between the two sets. However, this balance is unstable. Similar phenomena appear in other works, such as Figures 6, 26, and 27 in TOFU [7].\\n\\nOf course, we agree that the approach you suggest, where model utility is evaluated at the same forget quality score, would be an ideal strategy. However, it is difficult to implement. This is because the decline in forget quality is not a uniform process\\u2014it may start with a sharp decrease and then level off more gradually. Moreover, the rate of decline differs across different unlearning algorithms. Therefore, even if we evaluate forget quality at each step, it is impossible to obtain regular intervals (e.g., recording values of 1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, and 0.1).\\n\\n[7] TOFU: A Task of Fictitious Unlearning for LLMs\"}", "{\"comment\": \"Thank you for your detailed rebuttal. 
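As a middle ground between the reviewer's evenly spaced proposal and fixed-interval checkpointing, one can log (step, forget quality, model utility) at every saved checkpoint and afterwards report, for each target forget-quality value, the nearest logged checkpoint. A minimal sketch with a hypothetical helper, not code from the paper:

```python
def nearest_checkpoints(history, targets):
    """history: list of (step, forget_quality, model_utility) tuples logged at
    each saved checkpoint. Returns, for each target forget-quality value, the
    record whose forget quality is closest, so curves from different unlearning
    algorithms can be compared at (approximately) matched forget quality."""
    return {t: min(history, key=lambda rec: abs(rec[1] - t)) for t in targets}
```

This sidesteps the non-uniform decline of forget quality: targets need not be hit exactly, only approximated by the closest saved checkpoint.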
I appreciate your efforts in addressing most of my concerns.\\nI\\u2019ve decided to keep my original rating since further refinement may help this paper achieve the best presentation.\"}", "{\"summary\": \"This paper, based on the Right to be Forgotten setting, defines a VLM unlearning scenario that is closer to real-world use cases. The primary contributions are as follows:\\n1. Formalizing the VLM unlearning tasks. It emphasizes that VLM unlearning algorithms should focus on sensitive information linked to images rather than the visual attributes themselves.\\n2. Defining a two-stage evaluation pipeline with the Fictitious Facial Identity VQA dataset. In the first stage, personal information is injected into the VLM, and in the second stage, the unlearning algorithm is performed.\\n3. Providing various metrics for robust evaluation in terms of forget quality and model utility.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Based on the \\\"Right to be Forgotten\\\" setting, this paper defines a new VLM unlearning scenario closer to real-world use cases.\\nThe writing is clear.\", \"weaknesses\": \"1. The rationale behind the research motivation requires further substantiation (Q1).\\n2. Some experimental details are not clearly described or potentially contain errors. (Q2, Q3, Q6). \\n3. The analysis of the experimental results does not sufficiently consider the characteristics of unlearning algorithms, namely, the trade-off of model utility and forget quality for unlearning algorithms (Q4). \\n4. Some metrics appear to lack robustness when the VLM category changes (Q5).\", \"questions\": \"1. My first question is whether it is necessary to store individual private information within VLMs in real-world use cases. In practical applications, the best approach is to separate individual private information from the VLM and use retrieval-augmented generation (RAG) techniques to provide appropriate responses. 
Under such techniques, the Right to be Forgotten can be easily ensured by deleting individual private information from the relevant databases. The authors need to further elaborate on the motivation for their research.\\n2. In line 290, it is described that the score range for GPT-Eval is 0-1, but in Table 1 and Table 2, there are scores that exceed this range. This appears to be an error.\\n3. Line 366 mentions that \\u201cearly stopping based on training loss\\u201d is employed. Are the results reported in Table 2 based on this early stopping setting? What criteria are used for early stopping? I would like to know more details.\\n4. Unlearning algorithms involve a trade-off between model utility and forget quality. It might be necessary to compare the forget quality scores at fixed model utility levels, or vice versa. Alternatively, plotting a Model Utility-Forget Quality curve could be more informative. In fact, in Figure 3, representing the effectiveness of the unlearning algorithm with a single point is also unreasonable; a Model Utility-Forget Quality curve would likely be a more appropriate choice.\\n5. On certain metrics, the VLM category significantly affects the model utility and forget quality of an unlearning algorithm. Why is this the case? Comparing the performance of the unlearning algorithm with the Retain Model reveals many such instances: (1) The difference between the Retain Model and GA for LLaVA-Phi-mini-3B is 42.4 (93.7 v.s. 50.6), whereas, for LLama-3.2-Vision-Instruct-11B, the difference is 84.5 (88.8 v.s. 4.30). (2) The difference between the Retain Model and KL for LLaVA-Phi-mini-3B is -29.8 (12.3 v.s. 42.1), whereas, for LLama-3.2-Vision-Instruct-11B, the difference is -1.5 (12.2 v.s. 13.7). The significant impact of the VLM category on certain metrics raises the question of whether these metrics can provide robust testing results. Please provide a more detailed discussion on this matter.\\n6. 
In line 464, it is stated that you \\\"finally decided to fine-tune the parameters from LLM.\\\" However, from Table 3, it is evident that the MIA of $E_{x_3}$ is the highest among the four fine-tuning strategies. This choice seems to lack sufficient justification.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an unlearning benchmark for Vision Language Models (VLMs) under the Right to be Forgotten setting. After defining the VLM unlearning tasks, this benchmark assigns a two-stage evaluation pipeline with a newly proposed Fictitious Facial Identity VQA dataset. The proposed benchmark offers a comprehensive evaluation by computing both forget quality and model utility, with further assessment under membership inference attack and adversarial privacy extraction. Another contribution of the work is its evaluating four baseline unlearning algorithms, which indicates that none of them achieve good unlearning performance considering both model utility and forget quality. In addition, the divergent performance of Preference Optimization with and without membership inference attacks underscores the importance of privacy attacks for robust evaluations. This benchmark is good to foster the community\\u2019s further research on developing better unlearning methods for VLMs under the setting of Right to be Forgotten.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper addresses an important ethics problem of AI, i.e., to fulfill the right to be forgotten for VLM, which is underexplored relatively. To my understanding, few works have been done in the literature.\\n\\n2. The benchmark, together with the evaluation metrics, is validated, especially via the assessment of four baseline unlearning algorithms. 
The results imply that existing unlearning algorithms are far from being mature when considering both model utility and forget quality. \\n\\n3. The benchmark is good to foster the community\\u2019s further research on developing better unlearning methods for VLMs.\", \"weaknesses\": \"I am confused by the dataset constructed. As described in Line 165~172, 400 faces are sampled, which are then divided into 400 clusters by using K-means algorithm. How can 400 faces clustered into 400 clusters?\\n\\nAnd, to me, a dataset with 400 faces is relatively small for evaluating the unlearning problem. I am also not convinced why only synthetic faces are used for this evaluation. Is there any difference between real faces and synthetic faces for this evaluation purpose?\", \"questions\": \"Please reply my concerns mentioned in the Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces FIUBENCH, a benchmark designed to evaluate unlearning algorithms for Vision Language Models (VLMs) under the Right to be Forgotten setting. FIUBENCH includes a Fictitious Facial Identity VQA dataset and a two-stage evaluation pipeline to control information exposure levels. To handle VLMs\\u2019 ability to process semantically similar queries, FIUBENCH incorporates robust evaluation metrics, including membership inference and adversarial privacy attacks. 
Initial results on four baseline algorithms show limitations in unlearning performance, with trade-offs between model utility and forget accuracy.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper systematically examines forgetting in Vision Language Models and introduces FIUBENCH, a new benchmark for robust evaluation of unlearning algorithms.\\nThe paper is well-written and easy to understand.\", \"weaknesses\": \"The proposed benchmark uses a forget set and a retain set to assess the forgetting quality and model utility of unlearning algorithms. However, is this setting appropriate? In my view, the privacy concerns in Vision Language Models are more about forgetting specific sensitive information, such as identity or email, rather than simply forgetting individual samples.\\nThe forget set is limited to 5% of the total dataset, comprising only 20 images. Could you explain the rationale behind selecting this specific proportion? How was this number determined?\", \"questions\": \"Please refer to the Strengths and Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9p6D04\", \"comment\": \"**Q3: More VLMs for evaluations.**\\n\\n\\nTo further illustrate the effectiveness of our benchmark, we incorporate Idefics2 [4] for evaluations. Idefics2 is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. Its underlying LLM is Mistral-7B-v0.1. In total, we tested three VLMs, each differing in size and LLM architecture (Mistral-7B-v0.1, LLama-3.2-Vision-Instruct-11B, and LLaVA-Phi-mini-3B). 
The results for Idefics2 are summarized as follows:\\n\\n\\n| Method | Model Utility | | | Forget Quality | | |\\n|--------|---------------|-------|-------------|----------------|--------------|--------|\\n| | Rouge-L | GPT | Truth Ratio | KS-Test | Exact Match | MIA |\\n| Retain | 88.67 | 82.90 | 77.33 | 0.00 | 12.52 | 11.55 |\\n| GA | 3.88 | 0.32 | 40.62 | **-0.54** | 1.89 | **14.46** |\\n| GD | 22.20 | 8.40 | 57.51 | -1.40 | 5.20 | 16.26 |\\n| KL | **75.48** | 38.00 | **65.89** | -12.86 | 28.43 | 68.03 |\\n| PO | 64.01 | **40.77** | 62.88 | -5.81 | **0.28** | 52.76 |\\n\\n\\n\\nThe results indicate that methods like GA and GD, which maximize the likelihood of ground truth predictions, exhibit significantly better forgetting quality. However, this comes at the cost of a substantial drop in model utility. This is because, during the unlearning process, the loss continues to decrease without bounds (sometimes even dropping below -50). Although we applied an early stopping strategy (stopping training when the loss falls below -20), over-unlearning still occurs (see our responses to reviewer ndqf25: Q3 and Q5 for more details). On the other hand, the KL method, constrained by the Kullback-Leibler loss, avoids over-unlearning. Similar observations apply to LLaVA-Phi and LLama-Vision.\\n\\nTo better compare the forgetting quality of different unlearning methods at the same level of model utility, we included a model utility-forget quality curve figure in Appendix D.4 of the revised version of our paper. 
We believe you will find more of the details you\\u2019re looking for there.\\n\\n\\n[1] Large-scale CelebFaces Attributes (CelebA) Dataset\\n\\n[2] https://www.kaggle.com/datasets/ashwingupta3012/human-faces\\n\\n[3] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium\\n\\n[4] https://huggingface.co/HuggingFaceM4/idefics2-8b\"}", "{\"summary\": \"This paper introduces Facial Identity Unlearning Benchmark (FIUBENCH), a VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting. Moreover, FIUBENCH further incorporates membership inference attacks and adversarial privacy extraction to robustly evaluate unlearning performance, testing whether the private information is unlearned even under attacks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tUnlike unlearning in LLMs, which primarily focuses on forgetting sensitive text information, unlearning in VLMs extends to both images and text. This paper formalizes VLM unlearning as the task of unlearning private image and text-paired information.\\n2.\\tTo study privacy under the Right to be Forgotten scenario, a two-stage evaluation pipeline with Fictitious Facial Identity VQA dataset is proposed.\", \"weaknesses\": \"1.\\tThis paper proposes FIUBENCH, a VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting, which is interesting. However, the effectiveness of this benchmark is unclear.\\n2.\\tSince the faces are generated by StyleGAN2, it is necessary to evaluate the distance between the generated face distribution and the real one. From the figure 1, the synthetic face images seem different from the real faces. 
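For reference, the KS-Test column follows TOFU's recipe: compare the distribution of forget-set truth ratios under the unlearned model against the distribution under the retain model with a two-sample Kolmogorov-Smirnov test. A minimal sketch; the paper may report a transformed statistic (e.g., a log p-value), so treat this as illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def forget_quality(truth_ratios_unlearned, truth_ratios_retain):
    """Two-sample KS test between forget-set truth-ratio distributions.
    A high p-value means the unlearned model is statistically
    indistinguishable from a model never trained on the forget set."""
    stat, p_value = ks_2samp(truth_ratios_unlearned, truth_ratios_retain)
    return p_value
```

Under this reading, methods like GA that aggressively erase the forget set push the two distributions together (high p-value), while methods like KL that preserve utility may leave them distinguishable.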
Will it hurt the evaluations on the Vision Language models.\\n3.\\tFor the experiments, you\\u2019d better involve more Vision Language models for evaluations.\", \"questions\": \"My major concerns lie in the effectiveness of the proposed benchmark and the experiments. If you can well address these problems, I am happy to improve my rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This submission touches an important privacy-related topic in vision language model - the forgetting of specified content, e.g. unlearning. To evaluate the performance of unlearning, the authors construct a Facial Identity Unlearning Benchmark (FIUBENCH), with protocol and several methods as baseline. This is an interesting work. From the reviewer's point of view, this is still a preliminary work in considering of dataset size, and the methods of evaluation and face generation. However, it is worthwhile to continue to advance it to become a consensus for the community.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A novel dataset, Facial Identity Unlearning Benchmark (FIUBENCH) is constructed which can support the evaluation of the \\u2018Right to be Forgotten\\u2019 in Visual Language Model. Possible privacy risks are avoided through fictitious facial images. This submission is well written and easy to follow. The protocol provides settings to support different kinds of evaluations, including membership inference attack and adversarial privacy attack, etc.\", \"weaknesses\": \"The database size is too small to fit the task of evaluate the performance of unlearning in the wild. Although the fictitious facial images avoid the privacy risk, the synthetic images bring some flaws, such as artifacts in style, and these could become an unexpected feature for recognition. 
In addition, the database has taken some action to \u2018Filtering out similar faces with K-means\u2019, which leads to an imposed environment for face recognition, and makes the evaluation far from the real-world case.", "questions": "As mentioned in weakness.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "5", "code_of_conduct": "Yes"}", "{\"title\": \"Response to Reviewer cfPb03\", \"comment\": \"Thanks for your comments. We\u2019d like to address your concerns as follows.\n\n**Q1: Database size too small.**\n\nOur dataset pairs each face with 20 QA questions, resulting in 8,000 VLM question-answer pairs. This amount of data is sufficient for evaluating the unlearning problem. In comparison, the previous TOFU unlearning benchmark for LLMs [1] involves only 200 authors, highlighting that our dataset offers significantly more data for analysis.\n\n\n**Q2: Evaluation is far from the real-world case (artifacts in style and imposed K-means filtering).**\n\n\nThe purpose of our paper is not to evaluate the potential privacy issues in the real world. Instead, our benchmark aims to simulate the Right to be Forgotten setting and robustly evaluate the effectiveness of different unlearning algorithms. In fact, any image-paired question-answering datasets where VLMs lack prior knowledge can be applied for this evaluation. For easy understanding of VLM unlearning, we still choose to use the synthetic face dataset.\n\nDue to ethical, legal, and privacy concerns and the regulatory constraints of the IRB, we do not use datasets with real faces. However, we still demonstrate that our synthetic faces are similar to real ones by showing that they have comparably low distances. \n\nHere we randomly selected 5,000 synthetic, celebrity, and private facial images from the SFHQ dataset, the CelebA dataset [1] and the human faces dataset [2]. 
Since most celebrity faces are enhanced by makeup and other alterations, they may not accurately represent the distribution of real-world individual faces. Therefore, we also included the human faces dataset, a comprehensive facial dataset covering diverse creeds, races, and age groups. We chose to measure the distribution distances between different face sets using Fr\u00e9chet Inception Distance (FID) [3] with CLIP features, which are commonly used by VLMs to represent the visual features of images. The results are as follows:\n\n\n| CLIP | | | |\n|-----------|-----------|---------|-----------|\n| | Synthetic | Private | Celebrity |\n| Synthetic | 0.00 | 19.71 | 29.86 |\n| Private | 19.71 | 0.00 | 27.51 |\n| Celebrity | 29.86 | 27.51 | 0.00 |\n\n\nOur observations reveal that the distribution difference (FID) between Synthetic and Private faces is smaller than that between Private and Celebrity faces when measured using CLIP features. This indicates that, for VLMs, the difference between synthetic faces and real faces is even smaller than the variability within the distribution of real faces.", "we_use_k_means_filtering_for_two_main_reasons": "(1) To ensure the diversity of synthetic faces. Since people have all kinds of faces in real life, we used the K-means filtering method to ensure that these 400 synthetic face images include a diverse range of facial types, closely approximating real-world scenarios. 
This approach demonstrates that VLMs can retain a variety of facial features along with their corresponding private information (shown in Table 1).\\n\\n(2) Reducing similar images improves finetuning efficiency. We present in the table below the impact of using randomly selected synthetic face images versus synthetic face images obtained through K-means filtering on the learning stage.\\n\\n| | Kmeans | Kmeans | Random | Random |\\n|------------------|---------|--------|---------|---------|\\n| | Rouge-L | GPT | Rouge-L | GPT |\\n| LLaVA-Phi-3-mini | 93.30 | 85.80 | 83.79 | 73.89 |\\n\\n\\nThe results show that, under the same condition of fine-tuning for 10 epochs, using more diverse synthetic face images enables VLMs to memorize relevant information more accurately.\\n\\n\\n\\n\\n[1] Maini, Pratyush, Zhili Feng, Avi Schwarzschild, Zachary Chase Lipton, and J. Zico Kolter. TOFU: A Task of Fictitious Unlearning for LLMs. In ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models.\"}", "{\"comment\": \"Dear Reviewer 9p6D04,\\n\\nAs the discussion period deadline approaches, we would like to kindly inquire whether we have adequately addressed the concerns raised in your review. If any remaining points require further clarification, please know that we are eager to provide thorough responses to any additional questions or comments you may have.\\n\\nWe greatly value your constructive feedback and appreciate your efforts in helping us improve the quality of our manuscript. Thank you once again for your time and consideration.\\n\\nBest regards,\\nThe Authors\"}", "{\"title\": \"Response to Reviewer 9p6D04\", \"comment\": \"Thanks for your comments. 
We\\u2019d like to address your concerns as follows.\\n\\n**Q1: Effectiveness of the benchmark.**\", \"we_demonstrate_the_effectiveness_of_our_benchmark_through_the_following_points\": \"Our benchmark effectively establishes a VLM unlearning environment aligned with the 'Right to Be Forgotten' framework. As demonstrated in Section 3.2, it achieves this with minimal prior knowledge of the pre-trained model, evidenced by the GPT scores below 0.01. Additionally, the high GPT score exceeding 80 after fine-tuning indicates that the VLMs successfully acquire fictitious knowledge during the initial learning phase. These findings confirm that our benchmark accurately simulates the 'Right to Be Forgotten' scenario by precisely controlling the source of information and the exposure levels of the dataset's knowledge prior to unlearning, particularly for rarely occurring personal information. \\n\\nAdditionally, our benchmark evaluates several baseline unlearning methods for VLM unlearning, revealing significant trade-offs between forgetting quality and model utility in current baseline algorithms. It also shows that the alignment-based method (Preference Optimization) fails to effectively remove knowledge from VLMs when subjected to our robust evaluation framework.\\n\\nTo further demonstrate the effectiveness of our benchmark, we also plotted a Model Utility-Forget Quality curve in Appendix D.4 (Figure 4) in the revised version of our paper. The results reveal that the GD method is the most effective unlearning algorithm when maintaining 60% of original model performance. Meanwhile, the KL method achieves the highest level of forgetting when retaining 80% of model performance. 
However, this method only manages to forget 60% of face-related private knowledge, highlighting the urgent need for more effective unlearning algorithms tailored to vision-language models (VLMs).\\n\\n**Q2: Concerns about synthetic faces hurting the evaluation.**\\n\\nTo ensure that synthetic faces would not hurt the evaluation, we calculate the distribution distance between synthetic faces and real faces. Here we randomly selected 5,000 synthetic, celebrity, and private facial images from the SFHQ dataset, the CelebA dataset [1] and the human faces dataset [2]. Since most celebrity faces are enhanced by makeup and other alterations, they may not accurately represent the distribution of real-world individual faces. Therefore, we also included the human faces dataset, a comprehensive facial dataset covering diverse creeds, races, and age groups. We choose to measure the distribution distances between different face sets using Fr\\u00e9chet Inception Distance (FID) [3] with CLIP features, which are commonly used by VLMs to represent the visual features of images. The results are as follows:\\n\\n\\n| CLIP | | | |\\n|-----------|-----------|---------|-----------|\\n| | Synthetic | Private | Celebrity |\\n| Synthetic | 0.00 | 19.71 | 29.86 |\\n| Private | 19.71 | 0.00 | 27.51 |\\n| Celebrity | 29.86 | 27.51 | 0.00 |\\n\\n\\nOur observations reveal that the distribution difference (FID) between Synthetic and Private faces is smaller than that between Private and Celebrity faces when measured using CLIP features. This indicates that, for VLMs, the difference between synthetic faces and real faces is even smaller than the variability within the distribution of real faces.\\n\\nFinally, we also want to point out that our benchmark focuses on robustly evaluating the effectiveness of unlearning algorithms within the Right to be Forgotten setting. 
Any image-paired question-answering dataset where VLMs lack prior knowledge can be used for this evaluation.\"}", "{\"title\": \"Response to Reviewer ndqf25\", \"comment\": \"Thanks for your comments. We\\u2019d like to address your concerns as follows.\\n\\n\\n**Q1: Research motivation for the necessity to store individual private information within VLMs.**\\n\\nFirst, we need to clarify that our benchmark is not limited to evaluating how to remove the private information stored within VLMs. Instead, our benchmark focuses on robustly evaluating the effectiveness of unlearning algorithms within the Right to be Forgotten setting. Any image-paired question-answering dataset where VLMs lack prior knowledge can be used for this evaluation (such as using some fictional creature images). \\n\\nAdditionally, we strongly agree with the approach of separating individual private information from VLMs and using the RAG method to enhance responses. However, in real-world scenarios, since VLMs are trained on massive corpora of data from the web, it is unavoidable that they may memorize and reproduce some sensitive or private data. Therefore, unlearning provides a way to protect private data and remove sensitive information after training.\\n\\n\\n\\n**Q2: Score range for GPT-Eval.**\\n\\n\\nWe apologize for the lack of clarity in our description of the presented results. We multiplied the GPT scores by 100 to calculate a reasonable average with the other metrics (e.g., Rouge-L, Accuracy, Truth Ratio) of model utility. We have also added a corresponding explanation in lines 290-291 of the revised version of our paper.\\n\\n\\n**Q3: Details about the early stopping setting.**\\n\\nAll results in Table 2 are based on the early stopping setting. We provide the hyperparameters used for the unlearning stage in Appendix D.1. For each unlearning algorithm, our initial setup involves fine-tuning the model for 8 epochs. 
However, for the GA and GD methods, their losses are negative, and as the training steps increase, the loss can become extremely small (even less than -50). When the loss reaches this level (-50), the models completely collapse and do not output any tokens for any question, which is obviously unreasonable. Therefore, we set the early stopping criterion as follows: when the loss exceeds -20, the model stops training. This ensures that the unlearned model remains usable and does not become entirely non-functional. For KL and PO methods, we don't apply the early stopping setting.\\n\\n\\n\\n**Q4: Model Utility-Forget Quality curve to compare the forget quality scores at fixed model utility levels.**\\n\\nThanks for your constructive comments! We have plotted a Model Utility-Forget Quality curve in Appendix D.4 (Figure 4) in the revised version of our paper. The results reveal that the GD method is the most effective unlearning algorithm when maintaining 60\\\\% of original model performance. Meanwhile, the KL method achieves the highest level of forgetting when retaining 80\\\\% of model performance. However, this method only manages to forget 60\\\\% of face-related private knowledge, highlighting the urgent need for more effective unlearning algorithms tailored to vision-language models (VLMs).\\n\\n\\n\\n\\n**Q5: Concerns about the robustness of evaluation metrics.**\\n\\n\\nThank you very much for your detailed observation. The reason for this result lies in the fact that different VLMs exhibit varying degrees of unlearning under the same early stopping settings:\\n\\nFirst, it is important to clarify that LLaVA-Phi-mini-3B and LLama-3.2-Vision-Instruct-11B are two completely different VLMs. They have distinct LLMs, scales, vision encoders, and training data. Therefore, we believe that controlling their unlearning degree solely based on a fixed loss threshold (e.g., -20) is overly simplistic. 
Instead, we suggest that using the model utility-forget quality curve you mentioned could partially address this issue.\\n\\nSpecifically, in Table 2, we can observe that for the GA algorithm, LLama-Vision demonstrates a significantly higher degree of unlearning, which causes its model utility to drop to an almost unusable level. However, LLama-Vision (**-0.2; 93.2 vs. 93.4**) achieves better forget quality compared to LLaVA-Phi (**-0.4; 92.9 vs. 93.3**). Therefore, the results show that LLama-Vision has traded lower model utility for better forget quality, which is reasonable. For the KL algorithm, LLama-Vision also exhibits a significantly higher degree of unlearning -> LLama-Vision: **-1.5 (12.2 vs. 13.7)** > LLaVA-Phi: **-29.8 (12.3 vs. 42.1)**. However, its model utility drops significantly more: LLama-Vision **-32.9 (52.4 vs. 85.3)** compared to LLaVA-Phi **-10.4 (78.2 vs. 88.6)**. \\n\\nTherefore, we argue that our metrics can provide robust testing results.\\n\\n\\n**Q6: The choice for fine-tuning the parameters from LLM in the ablation study.**\\n\\nWe apologize for the writing error in our paper. In fact, we finally selected Ex4, which finetunes the parameters of the projector and LLM. We have revised it in our edited version.\"}", "{\"comment\": \"Thank you for your detailed response. Some of my concerns have been addressed, but I still have a few questions regarding Q2. Since the generated images differ significantly from real faces, why is the Fr\\u00e9chet Inception Distance (FID) between the synthetic and private faces smaller than the FID between private and celebrity faces when measured using CLIP? Additionally, for Q3, given the availability of many recent vision-language models, it would be beneficial to include a broader range of models in the evaluation and provide a more in-depth analysis. Overall, I maintain my original rating.\"}" ] }
0xUEBQV54B
Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
[ "Bradley Brown", "Jordan Juravsky", "Ryan Saul Ehrlich", "Ronald Clark", "Quoc V Le", "Christopher Re", "Azalia Mirhoseini" ]
Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit the amount of compute to only one attempt per problem. Here, we explore inference compute as another axis for scaling, using the simple technique of repeatedly sampling candidate solutions from a model. Across multiple tasks and models, we observe that coverage – the fraction of problems that are solved by any generated sample – scales with the number of samples over four orders of magnitude. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws. In domains like coding and formal proofs, where answers can be automatically verified, these increases in coverage directly translate into improved performance. When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-Coder-V2-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-sample state-of-the-art of 43%. In domains without automatic verifiers, we find that common methods for picking from a sample collection (majority voting and reward models) plateau beyond several hundred samples and fail to fully scale with the sample budget.
[ "Inference-Time Compute", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=0xUEBQV54B
https://openreview.net/forum?id=0xUEBQV54B
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ondawrRMSu", "c0kl5Ofar9", "ACOpynxHdh", "4S2pEpXraF", "20yiXJG8lv", "1WU2aHd5br" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1730765151056, 1737523613338, 1730554569875, 1730522328432, 1734978465832, 1731432150320 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4004/Reviewer_zJ2q" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4004/Reviewer_W6hC" ], [ "ICLR.cc/2025/Conference/Submission4004/Reviewer_NSZC" ], [ "ICLR.cc/2025/Conference/Submission4004/Area_Chair_CBd7" ], [ "ICLR.cc/2025/Conference/Submission4004/Reviewer_feio" ] ], "structured_content_str": [ "{\"summary\": \"This work considers the question of scaling inference compute through repeated sampling. The authors find that repeated sampling can greatly improve the performance on reasoning/code tasks, particularly when there is an external verifier that can be used to check the result. At a high level, this work targets an interesting research question and the methodology looks sound. However, parts of the paper are poorly written/confusing (particularly in explicating the experimental setup + results with/without the verifier), and could benefit from some major revisions. I am willing to raise my score if the authors address my questions/concerns.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"While it isn't surprising that repeated sampling improves performance, the authors have quantified this phenomenon in a precise way. In particular, they find a relatively clean scaling relation between the coverage and number of samples.\", \"There is good diversity in the models and datasets used, from more \\\"agentic\\\" tasks like SWE-bench to standard math/code reasoning.\"], \"weaknesses\": [\"The authors should include a few more details about the experimental setup or streamline the existing writing. 
For example, one may have chosen a couple of different approaches for how to incorporate the verifier: the first is an iid setup, where the verifier is just used to pick the final answer out of several attempts. The second is where the model is provided some verifier signal (e.g., is the answer right/wrong?) between attempts, and asked to review its reasoning trace. Both scenarios are interesting, but there were some experimental choices made in the paper that bear some justification.\", \"The improved performance with repeat sampling is not meaningful in and of itself; if you have enough attempts, the fraction that you will get right will of course go up. However, it may be useful for the reader to show an example of repeat sampling where the model gets the right answer after a small number of samples -- which part of the reasoning trace gets \\\"fixed\\\" between samples typically? Can we characterize the mistakes that the models make better?\", \"I am a bit unclear on the relevant baseline here. How should we compare spending the inference compute on sampling (versus, say, doing X-of-thought approaches)?\", \"A common assumption in the literature is that verification is easier than generation. Interestingly, this work seems to show that while non-oracle verification can provide some wins, the improvement is overall quite small. It would be interesting to dig into this question a bit more, and study some explicit examples where the reward model (or majority vote) fails. How much of the lack of improvement is due to the inherent noisiness of the reward model? 
Can the authors characterize the failure mode here a bit more precisely (is non-oracle verification almost as hard as generation in this case)?\"], \"questions\": \"(See above)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a systematic evaluation of repeated sampling for LLM generation across models and benchmark problems considering both oracle verifiers (pass@k) and reward models to select answers. The paper finds smooth scaling behavior with number of samples across all settings with oracle verifiers and proposes a parametric form for the inference-time scaling laws, even showing that repeated sampling from smaller models can outperform FLOP-matched sampling from larger models. Finally, the paper finds that there remains a large gap between the performance of oracle verifiers and reward models or other mechanisms like majority voting.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a clear, rigorous study of an important and basic topic in LLM inference. The paper tests many different benchmarks and base models and finds the main results to be robust. A lot of these may exist in prior work, but not in a coherent and systematic way as presented here.\\n\\n2. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. The paper needs to be careful about claims of novelty. The paper does discuss some related work in the intro, but could be more clear that existing work has made many of these observations. The main difference is that this paper gives a more systematic analysis. For example alphacode [1] and others have scaling figures that look almost identical (Fig 6 in the alphacode paper), and [2] has similar figures about scaling samples with reward models.\\n\\n2. 
I think the scaling laws presentation is missing the somewhat basic discussion that this is totally the expected behavior for computing pass@k for independent Bernoulli samples, no LLMs needed. For example, try running this numpy code and you will reproduce the same kinds of scaling law curves on a log scale:\\n```\\nimport numpy as np\\n\\np = 0.001\\nT = 10000\\nn_trials = 1000\\n\\nsamples = np.random.random((n_trials, T)) < p\\npasses = np.cumsum(samples, axis=1) >= 1\\npass_rate = np.mean(passes, axis=0)\\n```\\nOf course this is a simplification since there should be a different value of p for each problem in the test set, but the basic idea is that when you take independent samples of a Bernoulli variable, this is the expected behavior. There is likely a closed form for this simple process. It would probably make sense to model this directly rather than fitting a heuristic scaling law.\\n\\n3. It is not clear how the presented scaling laws could be useful. The usual usefulness of pre-training scaling laws is to predict the optimal model size at a FLOP budget beyond those used for training. This kind of extrapolation is not tested in the paper. Moreover, the scaling laws are different for each specific model/task combination and seem cheap to estimate, so it is not clear how prescriptive they are.\\n\\n4. The stated conclusions about verifiers seem to be too strong given that the paper only uses one reward model that is not particularly tailored to the problems studied. The off-the-shelf RM is chosen for performance on reward bench reasoning, but to my understanding this benchmark is mostly about humaneval-style coding and then the RM is being applied on math reasoning problems. For example, [2] trains task-specific RMs and does not observe the same sort of saturation.\\n\\n5. Some discussion of related work is missing when discussing how to improve verifiers in the future work directions. 
For example, [3] proposes an objective for reward modeling to allow LMs to evaluate themselves with a linear model on their representations and [4] uses models to evaluate themselves.\\n\\n[1] Li, Yujia, et al. \\\"Competition-level code generation with alphacode.\\\" Science 378.6624 (2022): 1092-1097.\\n\\n[2] Lightman, Hunter, et al. \\\"Let's verify step by step.\\\" arXiv preprint arXiv:2305.20050 (2023).\\n\\n[3] Li, Kenneth, et al. \\\"Q-Probe: A Lightweight Approach to Reward Maximization for Language Models.\\\" arXiv preprint arXiv:2402.14688 (2024).\\n\\n[4] Yuan, Weizhe, et al. \\\"Self-rewarding language models.\\\" arXiv preprint arXiv:2401.10020 (2024).\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the potential of scaling inference compute of LLMs. The authors show that, by generating repeated samples from language models, we can increase the \\u201ccoverage\\u201d, i.e., the fraction of problems that are solved by any generated sample. The improved coverage directly translates to better performances when an oracle verifier is available. The author also conduct experiments in the setting without oracle verifiers and find that the performances plateau quickly given repeated samples.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The research problem is interesting and well-motivated. Scaling up training compute has led to remarkable success in deep learning. It is important to consider scaling inference compute.\", \"The paper is easy to read.\", \"The evaluation covers 5 different benchmark datasets and show consistent trends.\"], \"weaknesses\": [\"One main claim, **scaling inference compute through repeated sampling leads to large improvements in coverage, seems trivial**. I believe that this fact is generally known in the community. 
Empirically, it has been observed in prior works like [1,2]. Mathematically, it is a simple consequence of equation 1. It is easy to prove that pass@$k$ monotonically increases with $k$ as long as there exists some $C_i>0$. The scaling curves are simply numerical calculations of equation 1 (correct me if this is not the case!). Thus, the novelty of this analysis seems limited.\", \"The paper **lacks in-depth analysis on scaling laws**. In Section 3, the proposed formula (equation 2) is simply adopted from the GPT-4 technical report. Then the \\u201ccurve fitting\\u201d directly fits the power law to the curve generated by a _known formula_ (i.e., equation 1). I believe that meaningful scaling laws should be distilled from experimental observations. It\\u2019s not clear to me what the insight of fitting some curves is when the underlying formula is already known.\", \"I like the analysis on domains without automatic verifiers since it is a more realistic setting than the experiments based on oracle verifiers. However, this section is very short and **lacks a deeper exploration of verifiers**. The conclusions here are based on experiments with a single existing 8B verifier. Strengthening this analysis with verifiers of varying sizes and other well-known verification approaches (e.g., process supervision [3]) would significantly enrich this section.\", \"Minor issues: Line 394: $k$ should be italic.\", \"[1] Program Synthesis with Large Language Models\", \"[2] Competition-level code generation with AlphaCode\", \"[3] Let\\u2019s Verify Step by Step\"], \"questions\": \"Please refer to the **weakness** part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies scaling laws for LLMs as a function of the inference time budget. 
It makes empirical observations that coverage improves with increasing inference time budget and can be modeled with an exponentiated power law. For domains with automatic verifiers (e.g., code generation), the paper shows significant improvement in performance with increased inference time budget. For domains without automatic verifiers, the paper shows that performance plateaus after a few hundred samples due to an inaccurate verifier.\\n\\nThe reviewers liked the experimental findings, acknowledging that most of the findings are known to the community. The only non-trivial contribution is the scaling curve and its claimed predictive power. However, two reviewers noted that this part is not well done and the analysis is somewhat insufficient. I agree with this observation.\\n\\nTherefore, I'm recommending rejection of this paper and strongly encourage the authors to improve the scaling curve and its analysis based on the feedback from reviewers for resubmission.\", \"additional_comments_on_reviewer_discussion\": \"Summarized in meta review.\"}", "{\"summary\": \"The paper studies scaling laws for a new axis of compute for LLMs: inference time compute. They empirically find that coverage (pass@N) improves with the number of inference time samples log-linearly and can be modeled with an exponentiated power law.\\nIn domains like coding, where automatic verifiers exist, i.e., verification is much easier than generation, the performance improves from 15.9% with one sample to 56% with 250 samples, outperforming the single-sample state-of-the-art of 43%. 
In domains lacking automatic verifiers, the authors observe that typical approaches for selecting from a set of IID sampled responses, such as majority voting and reward models, reach a performance plateau after several hundred samples and do not effectively scale with an increasing sample budget, and thus the performance on these problems is bottlenecked by the accuracy of the verifier.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents an extensive analysis of coverage on math and coding tasks and plots empirical scaling laws (and fitted ones) on multiple model families like Gemma, Pythia and Llama. One of the most interesting trends to note is that models from the same family only affected the offset of the log-linear plot, and not the slope. This is different from typical scaling laws for pre-training loss seen in prior works.\", \"The authors also extend analysis to inference time compute, and show that sometimes drawing multiple samples from a smaller (and less capable) model is more compute-efficient (in terms of FLOPs), compared to a single sample from a larger and more capable model. This analysis can be immediately used to reduce the inference time for models deployed in practice, without any loss in performance, at least when a reliable verifier is available.\", \"The analysis in Figure 6,7 is particularly compelling to put more effort in improving verification, since it suggests that most test-time methods for reasoning/safety etc., are not failing due to lack of coverage, but more due to the inaccuracy of the verifier.\"], \"weaknesses\": [\"While the coverage analysis expands on different models and tasks, it does not model the effect of other common parameters of interest like temperature, top-K etc.\", \"Some recent works like (Snell et al. 2024, Setlur et al. 2024) show that beam search against an automated verifier improves compute efficiency, with a fixed beam size. 
Having an analysis in this direction of compute use would make the paper stronger.\", \"While the authors fit scaling laws for coverage, it would be much more useful to see if scaling laws can also be fit for the setting with trained verifiers, that may not be perfect, accounting for the error in the verifier. Currently it is unclear how the size/data used for the trained verifier affects the compute scaling laws for best-of-n inference.\", \"In general, the paper presents interesting results for best-of-N inference, but they are quite narrow and expected. Broadening the laws to other forms of compute usage, or identifying how the \\\"learnability\\\" of a verifier affects these laws can help to make the work more complete.\", \"[1] Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (C. Snell, J. Lee, K. Xu, A. Kumar)\", \"[2] Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning (A. Setlur, C. Nagpal, A. Fisch, X. Geng, J. Eisenstein, R. Agarwal, A. Agarwal, J. Berant, A. Kumar)\"], \"questions\": [\"Maybe I missed discussion on this, but is there any attempt to also parameterize the coverage scaling law in terms of the model size?\", \"Can the authors compare their work with parallel work from Snell et al. 2024 (Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters)? In particular it would be nice to see discussion on whether the exponentiated power law is also reasonable for fitting scaling curves generated by other forms of test-time compute methods, like sequential corrections explored by Snell et al.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0x8wWloW2O
OracleMamba: A Dynamic Market-Guided and Time State Selection Framework for Robust Stock Prediction
[ "Song-Li Wu" ]
Stock price prediction is a complex challenge due to the inherent volatility of financial markets and the influence of diverse factors such as macroeconomic conditions, capital flows, and market sentiment. Recent joint stock forecasting models focus on extracting temporal patterns from individual stock price series and combining them to model stock correlations. However, these models face two critical limitations: first, in long-term predictions, they retain both informative and excessive states, amplifying noise and increasing complexity; second, in short-term predictions, they prioritize market indices and technical indicators, neglecting the real-time influence of market sentiment, which can drive price movements independent of traditional indicators. While state space models (SSMs) like Mamba improve efficiency and capture long-distance relationships, they still underperform compared to Transformer-based models. To address these challenges, we propose OracleMamba, a novel framework that integrates a dynamic market-guided module for short-term forecasting and a SelectiveMamba module for long-term forecasting. The dynamic market-guided module fuses objective market data and subjective sentiment analysis to enhance short-term prediction accuracy. The SelectiveMamba module efficiently captures both spectral and temporal features using a 3D scan mechanism, which extracts and filters key signals from the time-series data. By integrating spectral features to identify market rhythms and temporal features to track price movements over time, the SelectiveMamba module reduces noise and preserves critical information for long-term forecasts. This framework significantly improves both model efficiency and accuracy, outperforming existing approaches across real-world stock prediction tasks.
[ "deep learning", "time series" ]
Reject
https://openreview.net/pdf?id=0x8wWloW2O
https://openreview.net/forum?id=0x8wWloW2O
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vsJu3dPzDz", "trELFnqDOa", "t3D39nmPmG", "lMcQx0LNi9", "keYcqCSoK2", "imvvFszFp9", "93Jvj7jq8H" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "comment", "decision", "official_review" ], "note_created": [ 1730557602337, 1730608648479, 1730603484203, 1734316125772, 1742372426454, 1737523419758, 1730731193717 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission868/Reviewer_GKGo" ], [ "ICLR.cc/2025/Conference/Submission868/Reviewer_T7mb" ], [ "ICLR.cc/2025/Conference/Submission868/Reviewer_Ap4Z" ], [ "ICLR.cc/2025/Conference/Submission868/Area_Chair_78bY" ], [ "ICLR.cc/2025/Conference/Submission868/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission868/Reviewer_VN4H" ] ], "structured_content_str": [ "{\"summary\": \"This paper is the first to introduce Mamba for stock price forecasting, focusing on efficiently combining various short- and long-term factors necessary for stock prediction. The study aims to efficiently integrate time-series data and text data, such as analyst reports, through the dynamic market-guided module, while combining short- and long-term contexts through the SelectiveMamba module. The model was tested on Chinese stock market data, with results showing that it outperforms baseline models. However, the paper lacks a thorough literature review on stock price forecasting. Only three baseline models are referenced for comparison, two of which are studies published at least three years ago. 
Additionally, while the authors state they used scraped analyst reports, they fail to specify the sources of the reports, the authors of these reports, or the number of reports used, which raises concerns about the credibility of their experiment and data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"According to the authors, this is the first study to apply Mamba to stock price forecasting (although this claim is questionable due to the lack of a detailed literature review).\", \"weaknesses\": \"The authors claim that incorporating sentiment analysis, such as analyst reports, into stock price forecasting is novel, but many studies have already pursued this approach. Moreover, multiple studies within stock price forecasting have also explored combining short- and long-term contexts. Therefore, the two main contributions claimed by the authors are not particularly new, and the lack of a comprehensive search and mention of previous studies is a significant oversight.\\n\\nFurthermore, the fact that testing was limited to the Chinese market decreases the reliability of the experimental results, and the lack of detailed information on the analyst reports, which are crucial data, also presents serious issues with the study\\u2019s credibility and reproducibility.\", \"questions\": [\"Many studies have already combined sentiment analysis with stock price forecasting, so why is there no mention of these studies in this paper?\", \"The approach of combining short- and long-term contexts has also been widely studied within the context of stock price forecasting and has been presented at major ML/AI conferences. 
Why wasn\\u2019t this mentioned, and what advantages does this study have compared to those prior works?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a Mamba-based framework for stock return prediction by leveraging financial market data, such as stock prices, market indices, and market sentiment.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors proposed a novel 3D scanning mechanism for analyzing financial market information\\n2. The problem to be solved is well formulated\", \"weaknesses\": \"1. The data used in the paper is not clearly specified\\n2. The motivation behind the SSM design is not well explained\\n3. The experiment lacks comparisons with some important baselines\", \"questions\": \"The market state encoding section lacks sufficient details to justify the use of market state information:\\n1. The authors failed to describe the subjective context thoroughly. There is only a vague mention of analysts' reports and financial documents scraped from unspecified platforms. It is unclear what these documents are, their market coverage, their frequency, and the volume of data involved. This information is crucial. For example, if only quarterly updated earnings reports from some public companies are used, how do they align with daily updated stock prices? Additionally, how noisy is this data? The current version of the paper is problematic and does not justify the use of market context.\\n2. It is unclear how the GPT-O1 model is used to convert these textual data into sentiment. The prompts used are not described. The authors did not specify what the sentiment results look like. Are they presented as sentiment scores, sentiment labels, etc.?\\n3. The authors should more clearly specify the role of the experts. 
Do they mean that the experts are the ones who wrote the documents, or are they directly involved in the analysis?\\n4. It is unclear how sectors and regions are processed and embedded in the data or how they are used. The authors seem to only mention these aspects without integrating them into their analysis.\", \"the_tsss_structure_has_several_issues\": \"1. Why are B and C input-independent? This is more like a basic SSM structure that is time-invariant, while Mamba is designed to be an improvement on such structures.\\n2. What is $s$ in the calculation of DSE?\\n3. The design of DTE and DSE lacks explanation, and it is unclear how they reflect the benefits claimed by the authors in the relevant section.\", \"experiment_section\": \"1. The setup for the comparison methods is unclear. Are these methods using the same data as the proposed model, or are they only using stock price data?\\n2. Many SOTA time series forecasting models are not included in the comparison, such as DLinear, NLinear, Autoformer, Fedformer, PatchTST, etc.\\n3. Why was the vanilla Transformer model used for comparison when there are many Transformer-based models specifically designed for time series tasks?\\n4. Comparisons between the proposed model and Mamba-based models (or Mamba itself) are needed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The article includes a dynamic market-guided module and the SelectiveMamba module, effectively addressing the challenges posed by noisy data. It introduces Mamba into stock price forecasting and employs a 3D scan method. By integrating market sentiment, the article incorporates subjective factors. 
Experiments demonstrate its performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The article introduces Mamba into stock price forecasting.\\n\\n2.The article proposes a 3D scan method to capture interactions across the dimensions of time, stock, and market state.\\n\\n3.The method proposed in the article performs well in experiments.\", \"weaknesses\": \"1. The article lacks novelty and sufficient discussion on related work:\\n\\n a) While using GPT to process market sentiment is indeed innovative, there have been many previous works that utilized text data, including the use of pre-trained language models as in [1].\\n\\n b) It is difficult to see the relationship between the Time-Spectral method in the article and stock data. It appears to merely apply a time series method to stock data without analyzing how the Time-Spectral method aids stock prediction or offers any special improvements for it. Additionally, frequency methods are also common in time series analysis, as in [2].\\n\\n c) Inter-stock correlations have been used in many previous articles, as in [3].\\n\\n2. The article claims that the model can handle noisy data, but I do not see how or why noisy data can be addressed, or what improvements have been made compared to previous methods. Why could previous deep learning methods not handle noisy data while the current one can? Furthermore, noise in stock data is often caused by random Brownian motion, which the article does not analyze or explain theoretically.\\n\\n3. The article mentions the application of the Mamba model, but the method section does not clearly indicate where it is used for those unfamiliar with the Mamba model. Nor does it explain how the Mamba model contributes to stock prediction.\\n\\n4. The baselines are relatively weak, with only two methods specifically tailored for stock data.\\n\\n5. The article does not provide the code.\\n\\n\\n[1] Yang, Linyi, et al. 
\\\"Numhtml: Numeric-oriented hierarchical transformer model for multi-task financial forecasting.\\\"\\u00a0Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 10. 2022.\\n\\n[2] Liu, Shengzhong, et al. \\\"FOCAL: Contrastive learning for multimodal time-series sensing signals in factorized orthogonal latent space.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a036 (2024).\\n\\n[3] Cheng, Rui, and Qing Li. \\\"Modeling the momentum spillover effect for stock prediction via attribute-driven graph attention networks.\\\"\\u00a0Proceedings of the AAAI Conference on artificial intelligence. Vol. 35. No. 1. 2021.\", \"questions\": \"Please refer to the \\\"Weaknesses\\\" for your response, especially the first three points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper for the first time proposed to introduces Mamba into stock price forecasting, which is novel. The problem is well formulated and the paper is clearly presented. Experiment results show the effectiveness of the proposed model. However, the reviewers have also raised many issues that the paper needs to be further improved, including the limited novelty, the insufficient experiment and discussions, and the unclear explanation on the motivation.\", \"additional_comments_on_reviewer_discussion\": \"There is no author rebuttal and reviewer discussion.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper presents OracleMamba, a new stock prediction framework designed to integrate both short-term and long-term market dynamics using a dynamic market-guided module and a SelectiveMamba module. 
The model aims to address the limitations of previous joint forecasting models by effectively balancing short-term market volatility and long-term trends. OracleMamba uses a comprehensive market-guided gating mechanism that fuses market sentiment and objective market indicators to enhance prediction accuracy, while the SelectiveMamba module captures spectral and temporal features to reduce noise and extract key signals from market data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents the first work that introduces Mamba into stock price forecasting, which could be promising for the development of this subdomain.\\n2. The integration of a market-guided module for short-term forecasting and a SelectiveMamba module for long-term stability represents a novel hybrid approach to stock prediction. \\n3. The paper is written with good clarity and thus is easy to follow.\", \"weaknesses\": \"1. A key concern is the lack of clarity regarding whether the additional information used to enhance OracleMamba is also used for baseline comparison. It is also unclear how much these features specifically contribute to OracleMamba's performance. If baseline models do not incorporate the same information, the comparison may be unfair.\\n2. The exact data point length in CSI300 and CSI800 for model training is not specified. Is the data daily-based or hourly-based? Given that Mamba-based methods are used, a longer financial data sequence might provide an advantage.\\n3. Since the paper adopts a Mamba-based solution, it is crucial to evaluate the computational cost, including memory usage and runtime efficiency.\", \"questions\": \"Please see the weakness for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0wmfzWPAFu
Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity
[ "Eduard Gorbunov", "Nazarii Tupitsa", "Sayantan Choudhury", "Alen Aliev", "Peter Richtárik", "Samuel Horváth", "Martin Takáč" ]
Due to the non-smoothness of optimization problems in Machine Learning, generalized smoothness assumptions have been gaining a lot of attention in recent years. One of the most popular assumptions of this type is $(L_0,L_1)$-smoothness (Zhang et al., 2020). In this paper, we focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods. In particular, we derive improved convergence rates for Gradient Descent with (Smoothed) Gradient Clipping and for Gradient Descent with Polyak Stepsizes. In contrast to the existing results, our rates do not rely on the standard smoothness assumption and do not suffer from the exponential dependency on the initial distance to the solution. We also extend these results to the stochastic case under the over-parameterization assumption, propose a new accelerated method for convex $(L_0,L_1)$-smooth optimization, and derive new convergence rates for Adaptive Gradient Descent (Malitsky and Mishchenko, 2020).
[ "generalized smoothness", "first-order optimization", "convex optimization", "Polyak stepsizes", "gradient clipping", "adaptive optimization", "acceleration" ]
Accept (Poster)
https://openreview.net/pdf?id=0wmfzWPAFu
https://openreview.net/forum?id=0wmfzWPAFu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wZ2Z47pXDY", "wQYEBjVuA2", "tZNMa6WojV", "rLHN1Ksbt4", "kRhSyGVh9G", "jBw5ZQrgOd", "igPE83FbGP", "iBFNcxPxiP", "hoNNIwwnXm", "hFtpT9CJE4", "gpxOCTGMU8", "gisGW5EBB4", "giC7vCndpw", "gUqyDRXqrx", "gO9nKnTjYm", "et4ZrGaMYX", "e1i8sNQmeP", "akBmgagUSz", "a4sWFNRU1y", "Y8hKfCqiZB", "XJ5hTytham", "W72EEv0aPN", "SHOlIWFqR8", "Q4gCKHn5fe", "KEmJkzp7El", "IrswG0mixZ", "GVAP5Sm74K", "7UI21CyU0e", "7M6eOzt6VY", "6t2gdes7gv", "67NRiza5t8", "4VVoZ0huPW", "1vdD8PPYX9", "0ykgJ0S3OH", "0onDHfKU4c" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732100354520, 1730399228329, 1732209864997, 1732638129974, 1732520254509, 1730673322759, 1732520867242, 1732563871710, 1732520529597, 1732792812426, 1732491086245, 1732815858014, 1733626716075, 1732100571800, 1732610212559, 1732540875554, 1730704611079, 1732101380453, 1732491169183, 1732100213571, 1732100894065, 1733256395718, 1732101225757, 1732100482592, 1732100265212, 1729994572874, 1732491162123, 1732490970145, 1730667935687, 1737523469884, 1732504604760, 1732490872914, 1732653834578, 1732101946479, 1733159775438 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_iTbo" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_4rMB" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_pFMj" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Area_Chair_MLFq" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Area_Chair_MLFq" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_RdU1" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_RdU1" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_pFMj" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_4rMB" ], [ "ICLR.cc/2025/Conference/Submission1810/Area_Chair_MLFq" ], [ "ICLR.cc/2025/Conference/Submission1810/Area_Chair_MLFq" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_iTbo" ], [ "ICLR.cc/2025/Conference/Submission1810/Area_Chair_MLFq" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ], [ "ICLR.cc/2025/Conference/Submission1810/Authors" ], [ "ICLR.cc/2025/Conference/Submission1810/Reviewer_jtK8" ] ], "structured_content_str": [ "{\"title\": \"Respose to Reviewer pFMj\", \"comment\": \"We thank the reviewer for a very positive evaluation of our work. 
Below, we address the reviewer\\u2019s questions and comments.\\n\\n>**The paper is missing the conclusion and experiments sections. However, the experiments are provided in the appendix.**\\n\\nWe thank the reviewer for the comment. Following the request, we moved the discussion of the related works on gradient clipping and Polyak stepsizes to Appendix A, moved Figure 1 to the main part, and added a conclusion to the paper. All changes are highlighted in red. \\n\\n>**The assumptions for stochastic problem is restrictive; however, the derived results are new and better than previous ones**\\n\\nThank you for the comment, but the result is indeed non-trivial. A generalization of deterministic results is challenging, since the analysis involves cases when either $\\|\\| \\nabla f(x^k) \\|\\| \\geq L_0 / L_1$ or $\\|\\| \\nabla f(x^k) \\|\\| < L_0 / L_1$; for the stochastic case we need to consider the norm of the stochastic gradient instead. Dealing with the norm of the stochastic gradient is challenging due to the need to take the expectation (see lines 2025-2035). But we think our analysis of this challenge is already interesting and brings value. The generalization that does not require the over-parameterization assumptions is an interesting direction for future research. \\n\\n>**In the last part of the equation (15), should it be N+1 - T instead of N+1?**\\n\\nWe believe that the bound is correct. We derive the final bound in (15) by maximizing it with respect to $T$; please see the further details in Appendix D (lines 1258-1275).\\n\\n>**Why is it so crucial to choose $\\gamma = 1/4$? In Theorem 6.1, to achieve a rate better than in (25)? Can it be relaxed?**\\n\\nWe chose $\\gamma = 1/2$ for the sake of simplicity and ease of presentation, but the original paper (Malitsky & Mishchenko, 2020) also provides a more complicated analysis allowing larger $\\gamma$. We follow the simplicity motivation while setting $\\gamma = 1/4$. 
We reduce $\\\\gamma$ to get an extra term \\\\frac{1}{2}\\\\|\\\\|x^{k+1}-x^k\\\\|\\\\|^2 appearing while introducing $\\\\Sigma_{k+1} = \\\\frac{1}{2} \\\\sum_{i=0}^{k}\\\\| \\\\|x^{i+1} - x^{i}\\\\|\\\\|^2$ to the potential (lines 1663-1667).\\nHaving $\\\\Sigma_{k+1}$ to be bounded for all $k$ is crucial for obtaining convergence. But any $\\\\gamma <= 1/2 - \\\\delta$, for any $\\\\delta > 0$ keeps the same results up to the $\\\\delta$ factor in the final rate for the simple approach.\\nWe believe that larger stepsizes ($\\\\gamma = 1/2$) lead to faster convergence, but $\\\\gamma = 1/4$ gives a better tradeoff between practical and theoretical performance.\"}", "{\"summary\": \"This paper focuses on analyzing optimization methods for convex problems under $(L_0,L_1)$-smoothness settings. It provides improved convergence rates for the Gradient Descent (GD) method with Gradient Clipping and the Gradient Descent method with Polyak Stepsizes. It introduces a new accelerated method based on the Similar Triangles Method and provides new convergence rates for the Adaptive Gradient Descent Method. Finally, it extends the analysis to the stochastic case in overparametrized settings.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The analysis in this work does not rely on the L-smooth assumption, which was required in previous works. In this way, the work proves a convergence rate for the Gradient Descent (GD) method with Gradient Clipping and the Gradient Descent method with Polyak Stepsizes, with the dominant part having a smaller constant depending on $L_0$.\\n\\n2. This work proposes a new variant of the Similar Triangles Method that accelerates GD.\\n\\n3. This work provides a faster convergence rate for the Adaptive Gradient Descent Method in $(L_0,L_1)$-smoothness settings compared to the locally L-smooth setting.\", \"weaknesses\": \"1. A new acceleration of GD is proposed. 
It would be supportive to add more remarks to highlight the theoretical merits of this acceleration compared with the STM in (Gasnikov & Nesterov, 2016). Additionally, it would be more supportive to numerically compare its performance with STM in (Gasnikov & Nesterov, 2016).\\n\\n2. Example 1.3 considers a logistic function with L2 regularization. However, $f(x)$ is not related to L2. It would be better to specify where the L2 regularization is.\\n\\n3. The discussion after Theorem 7.1 (line 521) claims that the probability must be smaller than $\\\\frac{8nL_1^2\\\\|x^0-x^*\\\\|^2}{\\\\eta\\\\nu(N+1)}$. It would be clearer to explain why the probability should be smaller than this value.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your clarifications. I will adjust my score, leaning more towards acceptance.\"}", "{\"comment\": \">I am confused as to why the authors are getting so defensive and refusing to read the review comment literally. Nowhere did I say that all your results are local --- all that I said was, wherever there's an exponential dependence in the bound, that particular result can be viewed as being local in spirit (obviously, because in a small enough region, the contribution of the exponential term will be small). Yet, the authors keep insisting on their not being local, which I find baffling.\\n\\nWe are afraid that there is a misunderstanding between us and the reviewer. To resolve this misunderstanding, we kindly ask the reviewer to provide a definition of \\\"local result\\\". In our responses, we relied on the standard definition: a result is called local if it holds only for initialization that is sufficiently close to the optimum. **According to this definition, all our results are not local**. 
Moreover, we kindly ask the reviewer to give precise references to the results the reviewer is referring to. We emphasize that in the original review and in the subsequent responses, the reviewer did not specify the results that are discussed. We also highlight that we are explicit in the results and all the discussions: whenever the result contains exponential factors, we show it explicitly (see Theorems 5.1 and 6.1 and discussions after them). \\n\\n>Obviously, that was the whole question. Also, I'm not speaking of literally out of the box. But just consider the trivialmost setting, where one has from convexity: $f_k - f^\\\\ast \\\\leq \\\\langle f_k', x_k - x^\\\\ast \\\\rangle$, to which one can apply Cauchy-Schwarz, and bound in terms of $\\\\|\\\\| \\\\nabla f(x_k) \\\\|\\\\|$ and $\\\\|\\\\| x_k - x^\\\\ast \\\\|\\\\|$. Of course, getting a bound on the latter distance is not easy, otoh, it shows already that upto a diameter term, convexity translates a small gradient into a function suboptimality. Now, after working a bit harder, one should be able to make more thorough connections to Chen et al's work. \\n\\nIf we continue the derivation as the reviewer suggested, assume that $\\\\|\\\\| x_k - x^\\\\ast \\\\|\\\\| \\\\leq D$, and apply Theorem 2 from Chen et al (2023) with $\\\\alpha = 1$ and $\\\\beta = 1$ (note that it is the only possible choice for $\\\\beta$ when $\\\\alpha=1$), then we get \\n$$\\n\\\\frac{1}{N}\\\\sum\\\\limits_{k=0}^{N-1}(f(x_k) - f^\\\\ast) \\\\leq \\\\frac{2L_0 D(f(x_0) - f^\\\\ast)}{N\\\\varepsilon} + \\\\frac{D\\\\varepsilon}{2}\\n$$\\nfor normalized GD, where $\\\\varepsilon > 0$. In the original paper, $\\\\varepsilon$ is the target accuracy. 
However, one can optimize it in the above bound and get $\\\\varepsilon_{\\\\text{opt}} = 2\\\\sqrt{\\\\frac{L_0(f(x_0) - f^\\\\ast)}{N}}$ and\\n$$\\n\\\\frac{1}{N}\\\\sum\\\\limits_{k=0}^{N-1}(f(x_k) - f^\\\\ast) \\\\leq \\\\frac{4D\\\\sqrt{L_0(f(x_0) - f^\\\\ast)}}{\\\\sqrt{N}}.\\n$$\\nThe above rate is $\\\\sqrt{N}$-worse than the leading term in the results we obtained for $(L_0,L_1)$-GD and GD-PS. We believe that $1/\\\\sqrt{N}$ differs significantly from $1/N$ rate.\\n\\n>For example, we know that the gradient of an L-smooth convex function is cocoercive, i.e., $\\\\langle f'(x) - f'(y), x-y \\\\rangle \\\\geq \\\\frac{1}{L}\\\\|\\\\| x-y\\\\|\\\\|^2$. Using this we can even directly control $\\\\|\\\\| x-y \\\\|\\\\|$ in terms of $\\\\|\\\\| \\\\nabla f(x_k) \\\\|\\\\|$, etc.\\n\\nThe inequality the reviewer is referring to is **coercivity**, and it is equivalent to **$\\\\frac{1}{L}$-strong convexity** (see Theorem 2.1.9 from [1]). Coercivity does not follow from convexity and smoothness. **Cocoercivity** is $\\\\langle \\\\nabla f(x) - \\\\nabla f(y), x-y \\\\rangle \\\\geq \\\\frac{1}{L}\\\\|\\\\| \\\\nabla f(x) - \\\\nabla f(y) \\\\|\\\\|^2$, and it is indeed equivalent to convexity and smoothness (see Theorem 2.1.5 from [1]). Although it can be used to bound $\\\\|\\\\| x_k - x^\\\\ast \\\\|\\\\|$ for smooth problems, it does not hold for $(L_0,L_1)$-smooth problems. Therefore, it cannot be directly applied in our case (though a variant of this inequality holds in our case as well -- see inequality (11) in our paper).\\n\\n>All the points I made in the review were supposed to be helpful and constructive, it's a pity that the authors viewed them not as such, and doubled-down on repeating their claims. It is possible, I'm not seeing something the authors are seeing, but I feel they do need to address all the points carefully.\\n\\nWe never claimed that we do not see the reviewer's comment as helpful and constructive. 
In our responses, we carefully and respectfully addressed all the reviewer's comments: we highly value the reviewer's feedback and their time, and we are committed to applying all necessary changes that will improve our work according to the reviewer. **We kindly ask the reviewer to let us know about the comments that remain unaddressed.**\\n\\n---\", \"references\": \"[1] Yurii Nesterov. Introductory lectures on convex optimization: A basic course. 2004\"}", "{\"comment\": \"As I noted in the review, any bounds involving exponentials can be viewed **in spirit** as local results; the authors are free to disagree and interpret the bounds whichever way they want, I am just expressing here what I believe would be a **valid takeaway for anybody who wants to use the bound** for either understanding the algorithm, or for developing subsequent theory of their own.\"}", "{\"summary\": \"The paper focuses on convex $(L_0, L_1)$-smooth optimization and answers important open questions in this domain. In particular, new convergence rates are derived for the gradient method with smoothed clipping and Polyak stepsizes, improving existing results. The best-known convergence rate is derived for $(L_0, L_1)$-STM. Also, new results proved for AdGD and a stochastic case under the additional assumption on a shared minimizer. The statements in the paper are clear, and compared with existing results in the literature, the proofs are correct.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. Paper improved existing convergence results for gradient methods with (smoothed) clipping and Polyak stepsizes.\\n2. New convergence results for AdGD under $(L_0, L_1)$-smooth assumption.\\n3. Proposed $(L_0, L_1)$-STM recovers the best-know convergence rate for accelerated methods without additional knowledge on $R_0$ and $f(x^0) - f^*$.\", \"weaknesses\": \"1. The paper is missing the conclusion and experiments sections. 
However, the experiments are provided in the appendix.\\n2. The assumptions for stochastic problem is restrictive; however, the derived results are new and better than previous ones.\", \"questions\": [\"In the last part of the equation (15), should it be N+1 - T instead of N+1?\", \"Why is it so crucial to choose $\\\\gamma= 1/4$? In Theorem 6.1, to achieve a rate better than in (25)? Can it be relaxed?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> We are not aware of any technique allowing trivially adapts the results of the nonconvex case to the convex case, but we believe that a trivial adaptation does not lead to tight results.\\n\\nThe question is not about tight results, but for stronger positioning of the current paper it would be **valuable** to answer this question with mathematical precision. In particular, the zeroth order adaptation (that requires no work), is simply the stationarity result of the nonconvex case also applies to the convex one. The next step of adaptation would be good to put into the motivation part / related work part of the paper, so that is is clear how big a jump from those results is the present work (including, some simple, non-tight results that arise).\"}", "{\"comment\": \"We thank the reviewer for participating in the discussion with us. Below, we address the reviewer's remaining concerns/comments.\\n\\n>As I noted in the review, any bounds involving exponentials can be viewed in spirit as local results; the authors are free to disagree and interpret the bounds whichever way they want, I am just expressing here what I believe would be a valid takeaway for anybody who wants to use the bound for either understanding the algorithm, or for developing subsequent theory of their own.\\n\\nWe respectfully disagree with the takeaway suggested by the reviewer. 
In particular, the results presented in our Theorems 3.1, 4.1, and 7.1 are **independent of any exponential terms** and **hold globally**, i.e., the starting point can be arbitrarily far from a solution. Therefore, it is incorrect to say that our results are local.\\n\\n>The reason I raised the flag about (Zhang et al 2020b) is to make sure that the narrative of the current paper is more accurate. The authors should be a bit more careful in citing related work precisely, otherwise, anybody else who reads this work (or people who read Chen et al) will believe that the possibility of handling only once differentiable functions was first considered by Chen et al, which seems to be not precise, even though, the credit for properly working things out, as well as introducing more refined notions of smoothness belongs to them.\\n\\nWe added the proper reference to the introduction. Please, see our revised version (the end of the second paragraph and footnote 3 on page 3).\\n\\n>In its current version (and even from the authors' reply here), the paper seems to claim greater novelty for dropping twice differentiability.\\n\\nWe never claimed any novelty in dropping twice differentiability.\\n\\n>The question is not about tight results, but for stronger positioning of the current paper it would be valuable to answer this question with mathematical precision. In particular, the zeroth order adaptation (that requires no work), is simply the stationarity result of the nonconvex case also applies to the convex one. The next step of adaptation would be good to put into the motivation part / related work part of the paper, so that is is clear how big a jump from those results is the present work (including, some simple, non-tight results that arise).\\n\\nIn the non-convex case, the convergence is characterized by the norm of the gradient, while in the convex case, the convergence is measured by a stronger criterion - function value sub-optimality, i.e., $f(x^N) - f(x^\\\\ast)$. 
These two performance criteria are not directly comparable for convex problems, and a small gradient norm does not necessarily imply small function sub-optimality. To illustrate this, consider a Huber loss function $f:\\\\mathbb{R} \\\\to \\\\mathbb{R}$ with parameter $\\\\varepsilon$ defined as follows:\\n\\n$$\\nf(x) = \\\\begin{cases} \\\\frac{1}{2}x^2,& \\\\text{if } x\\\\in [-\\\\varepsilon, \\\\varepsilon],\\\\\\\\\\\\\\\\ \\\\varepsilon\\\\left(|x| - \\\\frac{1}{2}\\\\varepsilon\\\\right),& \\\\text{otherwise}. \\\\end{cases}\\n$$\\n\\nThis function is convex and $L$-smooth with $L=1$. Moreover, for any $x\\\\in\\\\mathbb{R}$ we have $|f'(x)| \\\\leq \\\\varepsilon$. However, the function sub-optimality $f(x) - f(x^\\\\ast)$ is unbounded. Therefore, in general, analysis in the non-convex case cannot be easily adapted to the convex case. More precisely, the proofs from Chen et al. (2023) use inequality (7) as a starting point, while our proofs are based on the careful analysis of $\\\\|\\\\|x^k - x^\\\\ast \\\\|\\\\|$. Moreover, Chen et al. (2023) consider normalized versions of GD and SPIDER, while we do not study normalization. Next, the analysis from Chen et al. (2023) does not consider cases when $\\\\|\\\\| \\\\nabla f(x^k) \\\\|\\\\| \\\\geq L_0 / L_1$ and $\\\\|\\\\| \\\\nabla f(x^k) \\\\|\\\\| < L_0 / L_1$ separately and also their analysis does not show two-stage behavior of the considered methods. Therefore, our analysis cannot be seen as an extension of the analysis from Chen et al. (2023).\\n\\n**We believe that we addressed all the reviewer's concerns. Therefore, we kindly ask the reviewer to reconsider their score.**\"}", "{\"comment\": \"The reason I raised the flag about (Zhang et al 2020b) is to make sure that the narrative of the current paper is more accurate. In its current version (and even from the authors' reply here), the paper seems to claim greater novelty for dropping twice differentiability. 
The authors should be a bit more careful in citing related work precisely, otherwise, anybody else who reads this work (or people who read Chen et al) will believe that the possibility of handling only once differentiable functions was first considered by Chen et al, which seems to be not precise, even though, the credit for properly working things out, as well as introducing more refined notions of smoothness belongs to them.\"}", "{\"comment\": \"We thank the reviewer for the feedback.\\n\\n**Comparison with other works.** Following the reviewer's request, we updated the paper: in the revised version, we discuss the difference between our proofs and the ones from (Koloskova et al., 2023; Takezawa et al., 2024; Chen et al., 2023) in Appendix D.1. \\n\\n**Exponential factors.** We thank the reviewer for the clarification. However, in the current version, we explicitly show all exponential factors. We are afraid that mentioning locality in the non-conventional sense may confuse the reader. Therefore, we decided to keep the discussions related to the exponential terms as they are now, but we are open to suggestions from the reviewer regarding the improvement of these discussions.\\n\\n**If the reviewer has further comments/questions/concerns, we are committed to addressing them promptly. If all concerns are resolved, we kindly ask the reviewer to reconsider their score.**\"}", "{\"comment\": \"Dear Reviewer jtK8,\\n\\nThe author discussion phase will be ending soon. The authors have provided detailed responses. Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission?\\n\\nThanks.\\n\\nArea Chair\"}", "{\"comment\": \"We thank the reviewer for the response and the acknowledgment of the importance of removing the dependence on $L$ from the bounds. 
Below, we would like to provide further clarifications.\\n\\n**Complexity bounds for $(L_0,L_1)$-STM and AdGD.** We agree with the reviewer (as we also explicitly mention in the text) that the tightness of the derived bounds for $(L_0,L_1)$-STM and AdGD is an open question. However, to the best of our knowledge, this question was not raised in prior works. Though Li et al. (2024) also derive accelerated rates (for a different method), they do not raise the question of the optimality of the derived results in the case of $(L_0,L_1)$-smoothness. Moreover, we are not aware of any prior convergence results for AdGD or any other parameter-free method under $(L_0,L_1)$-smoothness. Our analysis of $(L_0,L_1)$-STM and AdGD illustrates the importance and non-triviality of the open questions we formulated in the text. We believe such observations are also important for the community. We also highlight that we see the results from Sections 3, 4, and 7 as the main results of this paper, so our contribution is not limited to formulating those open questions.\\n\\n**Lemma 2.2.** This lemma indeed resembles existing results. However, we do not see this lemma as the main contribution of our paper.\\n\\n**Contributions.** We would like to highlight the significance of the derived results.\\n\\n- The main results of our paper are Theorems 3.1 and 4.1. As the reviewer acknowledged, these results remove the dependence on $L$ from the bounds (and also the need to assume $L$-smoothness), which is an important improvement. In the revised version of our paper, we also explain the technical differences between our proofs and existing ones (see Appendix D.1).
We believe that the presented bounds and proof techniques are sufficiently novel given the importance of the considered setup (generalized smoothness) and the importance of deriving tight convergence bounds without unnecessary assumptions, which is one of the ultimate goals in Optimization.\\n\\n- The results of Theorems 7.1 and 7.2 are new and provide important generalizations of the derived bounds for $(L_0,L_1)$-GD and GD-PS to the stochastic case. We also emphasize that our proofs are quite non-standard (see lines 2004-2051 \\u2013 we are not aware of similar tricks used in prior works on stochastic optimization).\\n\\n- Although we agree with the limitations of our results for $(L_0,L_1)$-STM and AdGD, our analysis of these methods and the fact that we brought the community's attention to the open questions about the accelerated and parameter-free methods for $(L_0,L_1)$-smooth problems might be very important and useful for the community.\\n\\n**We kindly ask the reviewer to provide their opinion on the above clarifications. If the reviewer has any further concerns/questions, we are happy to address them as soon as possible.**\"}", "{\"metareview\": \"This paper studies optimization algorithms for strongly convex and (L0,L1)-smooth functions. They derive improved convergence rates for multiple algorithms, including Gradient Descent with Gradient Clipping, Gradient Descent with Polyak stepsizes, and Adaptive Gradient Descent. This work fills a gap in the analysis of convex (L0,L1)-smooth functions. The main weakness is that there are not many practical examples that are (L0,L1)-smooth but not C2.\", \"additional_comments_on_reviewer_discussion\": \"I gave a lower weight to Reviewer pFMj who gave score 8 to this paper.
This is because the review of Reviewer pFMj does not provide enough details, especially on the technical parts.\\n\\nReviewer jtK8 and the authors had a long and back-and-forth debate over whether an exponential factor in the complexity should be called a \\u201clocal\\u201d result. In my opinion, no matter who is correct, this is really a minor issue and is just about how to explain this factor. Here, Reviewer jtK8 does not criticize the value of the contribution but just would like the authors to fairly claim their contribution and better position this work. I think no reviewers raised any critical issues. Two reviewers increased their scores after rebuttal.\"}", "{\"title\": \"Response to Reviewer jtK8: Part 2\", \"comment\": \">**The authors' motivation for studying AdGD should as a result be further enhanced: it seems inherently unsuitable to use under the generalized smoothness assumptions since AdGD does not utilize clipping.**\\n\\nIndeed, Zhang et al. (2020) noticed that $(L_0, L_1)$-smoothness is sufficient for the study of clipping algorithms. The stepsize selection rule from Malitsky and Mishchenko (2019) states that \\n$$\\\\lambda_{k-1}\\\\sqrt{1 + \\\\frac{\\\\lambda_{k-1}}{\\\\lambda_{k-2}}} \\\\geq \\\\lambda_k \\\\geq \\\\frac{\\\\gamma\\\\|\\\\|x^k-x^{k-1}\\\\|\\\\|}{\\\\|\\\\|\\\\nabla f(x^{k})-\\\\nabla f(x^{k-1})\\\\|\\\\|} \\\\geq \\\\frac{\\\\gamma}{L_0 + L_1 \\\\|\\\\|\\\\nabla f(x)\\\\|\\\\|},\\n$$\\nwhere the latter expression represents the form of a stepsize for Normalized Gradient Descent (NGD). Given that NGD is closely related to Clipped-SGD (Zhang et al. 2019)[Section 3.1 GRADIENT DESCENT ALGORITHMS], the stepsize from Malitsky and Mishchenko (2019) can be interpreted as a form of adaptive clipping.\\n\\nZhang, Jingzhao, et al. \\\"Why gradient clipping accelerates training: A theoretical justification for adaptivity.\\\" arXiv preprint arXiv:1905.11881 (2019).\\n\\nZhang, Bohang, et al.
\\\"Improved analysis of clipping algorithms for non-convex optimization.\\\" Advances in Neural Information Processing Systems 33 (2020): 15511-15521.\\n\\n>**And the result for the stochastic case (Section 7) requires a common minimizer for all the stochastic components (Assumption 4). Although such an assumption is also used in some works, this assumption is not that weak, and renders the results less applicable.**\\n\\nWe partially agree with the reviewer that this assumption may be restrictive and give a proper discussion about this in lines 477-484. In line 479, we state that it is a typical assumption for over-parameterized models and provide a number of references. We would like to note that over-parameterized models have become increasingly common in modern applications, driven by the emergence of larger models like GPT, BERT, and their derivatives year after year, making overparameterization the norm. Supporting this, we highlight that over-parameterized networks are widely studied, as evidenced by the references provided below. However, we agree that the generalization that does not require the over-parameterization assumptions is an interesting direction for future research, but out of scope of our current work. \\n\\nZhang, Guodong, James Martens, and Roger B. Grosse. \\\"Fast convergence of natural gradient descent for over-parameterized neural networks.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\nKong, Zhifeng. \\\"Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent.\\\" International Journal of Computer and Information Engineering 14.6 (2020): 166-177.\\n\\nLi, Yuanzhi, and Yingyu Liang. \\\"Learning overparameterized neural networks via stochastic gradient descent on structured data.\\\" Advances in neural information processing systems 31 (2018).\\n\\nArora, Sanjeev, et al. 
\\\"Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\nAllen-Zhu, Zeyuan, Yuanzhi Li, and Yingyu Liang. \\\"Learning and generalization in overparameterized neural networks, going beyond two layers.\\\" Advances in neural information processing systems 32 (2019).\\n\\n\\n>**Can a version of the method that is somewhat more agnostic of the knowledge of L0 and L1 be designed, since right now this knowledge is quite necessary for selecting the correct step sizes?**\\n\\nIn fact, the problem of knowing $L_0$ and $L_1$ is identical to the standard L-smooth case. It is enough to tune only one parameter $\\\\hat L$, s.t. $\\\\hat L \\\\geq L_0$ and $\\\\hat L \\\\geq L_1$; in the standard case, we tune $\\\\hat L \\\\geq L$. In other words, in practice, it is enough to take a sufficiently large constant $\\\\hat L$. However, in the case of $(L_0, L_1)$-smoothness, the constants themselves can be much smaller.\\n\\nFurthermore, we wish to highlight that GD-PS is agnostic to $L_0$ and $L_1$; the only required parameter, $f(x^\\\\ast)$, is often known in many applications, such as overparameterized models. Even when $f(x^\\\\ast)$ is unknown, there are techniques that can adapt to its value, e.g., see (Hazan and Kakade, 2019).\\n\\nHazan, Elad, and Sham Kakade.
\\\"Revisiting the Polyak step size.\\\" arXiv preprint arXiv:1905.00313 (2019).\\n\\nFinally, our result for AdGD of Malitsky and Mishchenko (2019) does not require knowledge of any parameters while recovering the standard GD rates.\"}", "{\"comment\": \">> As I noted in the review, any bounds involving exponentials can be viewed in spirit as local results; the authors are free to disagree and interpret the bounds whichever way they want, I am just expressing here what I believe would be a valid takeaway for anybody who wants to use the bound for either understanding the algorithm, or for developing subsequent theory of their own.\\n> We respectfully disagree with the takeaway suggested by the reviewer. In particular, the results presented in our Theorems 3.1, 4.1, and 7.1 are independent of any exponential terms and hold globally, i.e., the starting point can be arbitrarily far from a solution. Therefore, it is incorrect to say that our results are local.\\n\\nI am confused as to why the authors are getting so defensive and refusing to read the review comment literally. Nowhere did I say that **all your results are local** --- all that I said was, *wherever there's an exponential dependence in the bound, that particular result can be viewed as being local in spirit* (obviously, because in a small enough region, the contribution of the exponential term will be small). Yet, the authors keep insisting on their not being local, which I find baffling.\\n\\n>In the non-convex case, the convergence is characterized by the norm of the gradient, while in the convex case, the convergence is measured by a stronger criterion - function value sub-optimality, i.e., $f(x^N) - f(x^\\\\ast)$. These two performance criteria are not directly comparable for convex problems, and a small gradient norm does not necessarily imply small function sub-optimality.\\n\\nObviously, that was the whole question. Also, I'm not speaking of a literal out-of-the-box adaptation.
But just consider the most trivial setting, where one has from convexity: $f_k - f^* \\\\le \\\\langle f_k', x_k-x^*\\\\rangle$, to which one can apply Cauchy-Schwarz, and bound in terms of $\\\\|\\\\nabla f(x_k)\\\\|$ and $\\\\|x_k-x^*\\\\|$. Of course, getting a bound on the latter distance is not easy; on the other hand, it shows already that up to a *diameter term*, convexity translates a small gradient into a function suboptimality. Now, after working a bit harder, one should be able to make more thorough connections to Chen et al's work. For example, we know that the gradient of an L-smooth convex function is cocoercive, i.e., $\\\\langle f'(x)-f'(y), x-y\\\\rangle \\\\ge \\\\frac{1}{L}\\\\|x-y\\\\|^2$. Using this we can even directly control $\\\\|x_k-x^*\\\\|$ in terms of $\\\\|f'(x_k)\\\\|$, etc. \\n\\nAll the points I made in the review were supposed to be helpful and constructive; it's a pity that the authors viewed them not as such, and doubled down on repeating their claims. It is possible that I'm not seeing something the authors are seeing, but I feel they do need to address all the points carefully.\"}", "{\"comment\": \"Thank you for your response, and apologies for my late reply. I have four follow-up remarks that follow.\\n\\n> **The importance of improved complexities.**\\n\\nI generally agree with your point that the assumption of $L$-smoothness is somewhat undesirable for analyses under generalized smoothness, even though the constant $L$ only appears in the non-dominant term of the iteration complexity. The example you raised is a bit extreme, since loss functions are usually not in the form of powers. The real issue is that $L$ can be thousands of times larger than $L_0$ and $L_1$ in practice. Nonetheless, it is apparently better to explicitly remove the dependence on $L$, and I recognize your efforts in this direction.\\n\\n> **The complexities of accelerated methods.**\\n\\nAs you remarked after Theorem 3.1, the result of Li et al.
(2024) is not satisfactory due to its dependence on $\\\\Vert\\\\nabla f(x^0)\\\\Vert$, which scales up to $L_0 R_0 \\\\exp(L_1 R_0)$ according to Lemma 2.1. The current complexities of accelerated methods still leave an open question regarding the possibility of more refined results (without exponential dependence), which makes the current results less strong, although the remaining question may be highly non-trivial. I think your modification of AdGD is interesting, yet the result still does not make an improvement over the best-known complexity.\\n\\n> **Interpretation of Lemma 2.2.**\\n\\nAfter further checking the revised version and comparing it with the literature, I now understand that Lemma 2.2 has not been previously introduced. However, I must note that the difference is not particularly significant. As you mentioned, Equation (8) differs from Koloskova et al. (2023) due to the difference in the definition of $(L_0,L_1)$-smoothness. Similarly, cocoercivity and monotonicity of the gradient have been previously introduced (Li et al., 2023), and thus your Equation (10) seems like a natural extension. Moreover, since the requirement in Equation (9) is equivalent to $\\\\Vert x-y\\\\Vert\\\\leq \\\\nu/L_1$, your Equations (10) and (11) still utilize local properties, which is similar to prior studies.\\n\\n> **Overall contribution.**\\n\\nIn my opinion, the paper is generally good in presentation and soundness, but the significance of the results is still marginal in terms of novel bounds. Indeed, the authors provide a substantial amount of analysis for different algorithms and extensions, and I appreciate your efforts. If the open problem that you raised is addressed, I will certainly be inclined to accept the paper. However, the present contribution still does not convince me to assign a higher rating.\"}", "{\"summary\": \"This paper analyzes the iteration complexities of several algorithms targeted at convex $(L_0,L_1)$-smooth optimization.
In comparison to previous works, the authors focus on a more refined analysis, including the elimination of dependence on the smoothness parameter $L$ (although it is not the dominant term) for variants of Gradient Descent, as well as improvements on the naive adaption of Adaptive Gradient Descent for generalized smoothness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"This paper covers existing methods for convex $(L_0,L_1)$-smooth optimization, specifically Gradient Descent with Smoothed Clipping and Polyak Stepsizes. It also provides analyses of Similar Triangles Methods and Adaptive Gradient Descent under the setting of generalized smoothness. The results give an overview of existing and adapted methods.\", \"weaknesses\": \"My major concern is about the significance of the results. To be specific, for the first two variants of Gradient Descent, this paper only improves the non-dominant $\\\\mathcal{O}(\\\\sqrt{1/\\\\varepsilon})$ term of iteration complexity, which is inconsequential for small $\\\\varepsilon$, i.e., when finding a solution with good quality. For the adapted versions of Similar Triangles Method and Adaptive Gradient Descent, the iteration complexity is in the form of $\\\\mathcal{O}(\\\\sqrt{L_0L_1A\\\\exp(L_1A)A^2/\\\\varepsilon^2})$, where $A$ equals either $R_0$ or $D$, which generally aligns with a specification of Li et al. (2024). Thus, the overall contribution in terms of novel theoretical guarantees appears quite limited to me at this stage.\", \"questions\": \"**Typos and minor suggestions:**\", \"line_203\": \"Since you cite the conference version rather than the arXiv version of the paper, please refer to it as \\\"Proposition 1\\\".\\n\\nLine 383-387 / Inequality (20-21): I suggest that the authors avoid breaking the entire inequality into two labels, which can cause ambiguity as seen in the second step of (73). 
Using an underbrace might be a better option.\", \"line_419\": \"\\\"$\\\\varepsilon$-solution\\\".\", \"line_1095\": \"a multiplier of $\\\\exp(\\\\eta)$ is missing in the last term.\\n\\n**Questions:**\\n\\n* As I mentioned in the Weaknesses section, are there any practical examples (either theoretical or experimental) that can justify the importance of the *improved* complexities?\\n* For Theorem 3.1, it appears that you integrate the analyses from Li et al. (2024) and Koloskova et al. (2023), substituting the normalization term $\\\\ell(G)$ in $\\\\eta$ with something related to $\\\\|\\\\nabla f(x^k)\\\\|$, so that the sequence enjoys the monotonic properties and can be analyzed in two cases. Could you elaborate on the intuition behind this approach?\\n* You mention new technical results for $(L_0,L_1)$-smooth functions in Section 1.3. Could you specify these results for reference? \\n\\n\\n\\nHaochuan Li, Jian Qian, Yi Tian, Alexander Rakhlin, and Ali Jadbabaie. Convex and non-convex optimization under generalized smoothness. In Advances in Neural Information Processing Systems 36, 2024.\\n\\nAnastasia Koloskova, Hadrien Hendrikx, and Sebastian U Stich. Revisiting gradient clipping: Stochastic bias and tight convergence guarantees. In Proceedings of the 40th International Conference on Machine Learning, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 4rMB\", \"comment\": \">**The analysis of the current paper is mostly based on Asymmetric and Symmetric (L0,L1) smoothness, which can hold for functions that are not twice differentiable. This is good since this class covers the original class of functions in (Zhang et al., 2020b), where twice differentiability is a must. However, which meaningful classes of functions satisfy Asymmetric/Symmetric (L0,L1) smoothness while not C2?
I found that all the examples presented are C2, and thus it seems that the use of Asymmetric and Symmetric (L0,L1) smoothness is not necessary, which considerably reduces the importance of the current paper. Note that examples for L-smooth functions that are not C2 are diverse, so it is reasonable to use the gradient Lipschitz condition instead of the stronger one, bounded Hessian. I do not think it is the case for (L0,L1) smoothness.**\\n\\n\\nIndeed, we consider a more general assumption, including the Hessian-free condition given by (Zhang et al. 2020b) [Corollary A.4], which is equivalent to the standard Hessian-based assumption (inequality (5) of our work, i.e., C2).\\nThe lack of examples does not necessarily reduce the importance. [1] provides various plots which indicate a complicated dependence of the Hessian norm on the gradient norm. Having a more complicated model may lead to a more general assumption on smoothness. So we believe that our results also create value for the community by analyzing a more general assumption, adaptive methods, and introducing stochasticity.\\n\\n[1] Zhang, Jingzhao, et al. \\\"Why gradient clipping accelerates training: A theoretical justification for adaptivity.\\\" arXiv preprint arXiv:1905.11881 (2019).\\n\\n>**Now assume that the function is C2. Lemma 2.1 shows that (L0,L1) smoothness implies (not equivalent) equation (7). In comparison with Lemma 2.2 (Vankov et al., 2024) below, their equation (2.2) seems to be a sharper inequality and in fact is an equivalent condition. Based on this, I suspect that the results obtained by the paper under review are not as tight as claimed by the authors, especially when compared with (Vankov et al., 2024).**\\n\\nFirst of all, it is worth mentioning that (Vankov et al., 2024) appeared on arXiv after the submission deadline, so a direct comparison is not appropriate. \\n\\nBut as we stated above, we consider a more general assumption, which includes the assumption made in (Vankov et al., 2024).
Moreover, to the best of our knowledge, lower bounds have not yet been obtained for the set of considered assumptions. So, it is challenging for us to say that our results or the results of (Vankov et al., 2024) are optimal for the corresponding set of assumptions. Moreover, for a sufficiently large number of iterations, even the constant factors are almost identical. For $(L_0,L_1)$-GD we obtain the constant $2/\\\\nu$, which is almost the same as the constant $4$ for GD of (Vankov et al., 2024), since $\\\\nu = e^{-\\\\nu}$ and $0.56 < \\\\nu < 0.57$, so we argue that the assumption we made is not too restrictive.\\n\\n>**The convergence theory of algorithms for $(L0,L1)$-smooth function does not explain why they are better than standard gradient descent. For example, can the authors explain why standard GD performs worse than designated algorithms for $(L0,L1)$-smooth function in solving logistic regression?**\\n\\nAppendix B considers some examples of functions satisfying the Hessian-based assumption (5), which is covered by the symmetric assumption we used in our paper. $(L_0,L_1)$-smoothness can be reduced to the case of the standard smoothness by bounding the gradient norm by (24), which leads to a pessimistic constant. Standard gradient descent performs constant-stepsize steps, and the stepsize is inversely proportional to that pessimistic constant, so it is very small.\\n$(L_0,L_1)$-GD allows the stepsize to increase as the gradient norm vanishes. The method takes large steps, thus achieving faster convergence.\\n\\nRegarding logistic regression, we consider the logistic loss function in Example B.3, which is expected to share properties with the average of logistic functions. Although we are not aware of an explicit formula for $L_0$ and $L_1$ for the average of logistic functions, one can note that in practice, the Hessian norm can indeed depend on the gradient norm, as shown in Figure 2.
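To make the stepsize gap concrete, here is a purely illustrative numeric sketch (not an experiment from the paper): it minimizes the one-dimensional function $f(x) = x^{10}$ (the $n=5$ case of Example 1.1), comparing standard GD with the stepsize $1/L$, where $L$ is the smoothness constant on the initial ball, against a gradient-dependent stepsize in the spirit of $(L_0, L_1)$-GD. The factor $2$ in the stepsize, the stopping threshold, and the iteration cap are our own illustrative choices; the exact constants in the paper may differ.

```python
# Purely illustrative sketch (not the paper's experiment): f(x) = x**(2n), n = 5.
n = 5
x0 = 10.0
L0, L1 = 2 * n, 2 * n - 1                      # (L0, L1)-smoothness constants of x**(2n)
L = 2 * n * (2 * n - 1) * x0 ** (2 * n - 2)    # smoothness constant on the ball |x| <= x0

def grad(x):
    return 2 * n * x ** (2 * n - 1)

def iterations_to_eps(stepsize, eps=1e-6, max_iters=200_000):
    """Run x_{k+1} = x_k - stepsize(x_k) * f'(x_k) until f(x_k) <= eps."""
    x = x0
    for k in range(max_iters):
        if x ** (2 * n) <= eps:
            return k
        x -= stepsize(x) * grad(x)
    return max_iters

gd = iterations_to_eps(lambda x: 1.0 / L)  # standard GD: tiny constant stepsize 1/L
# Gradient-dependent stepsize in the spirit of (L0, L1)-GD (illustrative constants):
clipped = iterations_to_eps(lambda x: 1.0 / (2 * (L0 + L1 * abs(grad(x)))))
print(gd, clipped)
```

With these values, the constant stepsize $1/L$ is dictated by the worst-case curvature on the initial ball and becomes far too small once the gradient shrinks, while the gradient-dependent stepsize grows as the gradient vanishes, reflecting the two-phase behavior described above.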
Thus, GD performs worse than other methods since GD uses smaller step sizes corresponding to the worst-case smoothness constant.\"}", "{\"comment\": \"Thank you for your clarifications. I agree with the rebuttal and keep my initial rating.\"}", "{\"title\": \"Response to Reviewer RdU1: Part 1\", \"comment\": \">**My major concern is about the significance of the results. To be specific, for the first two variants of Gradient Descent, this paper only improves the non-dominant $\\\\mathcal{O}(\\\\sqrt{1/\\\\varepsilon})$ term of iteration complexity, which is inconsequential for small $\\\\varepsilon$, i.e., when finding a solution with good quality.**\\n\\n>**As I mentioned in the Weaknesses section, are there any practical examples (either theoretical or experimental) that can justify the importance of the improved complexities?**\\n\\nWe emphasize that our results hold without the $L$-smoothness assumption, while the existing results for Clipped GD and GD with Polyak Stepsize do require standard L-smoothness, and, in particular, the mentioned $\\\\mathcal{O}(\\\\sqrt{1/\\\\varepsilon})$ term is proportional to $\\\\sqrt{L}$. Of course, one can show that for small enough stepsize parameters, the methods do not escape the ball centered at the solution with radius $\\\\|\\\\| x^0 - x^\\\\ast \\\\|\\\\|$. However, the smoothness constant is dependent on the size of this ball. This means that the $\\\\mathcal{O}(\\\\sqrt{1/\\\\varepsilon})$ term can be the leading one even for very small values of $\\\\varepsilon$. \\nTo illustrate this, consider the function from Example 1.1: $f(x) = \\\\|\\\\| x \\\\|\\\\|^{2n}$. As we show in Appendix B, it is $(2n, 2n-1)$-smooth, but it is not smooth for $n \\\\geq 2$ on the whole space. However, on the ball centered at $0$ and radius $\\\\|\\\\| x^0 \\\\|\\\\|$ the smoothness constant equals $2n(2n-1)\\\\|\\\\| x^0 \\\\|\\\\|^{2n-2}$.
For example, if $n=5$ and $\\\\|\\\\| x^0 -x^\\\\ast \\\\|\\\\| = 10$, then $L_0 = 10, L_1 = 9$ and the smoothness constant on the mentioned ball equals $L = 9\\\\cdot 10^9$. Plugging these values into the bound $\\\\mathcal{O}(\\\\max\\\\{L_0R_0^2/\\\\varepsilon, \\\\sqrt{R_0^4 LL_1^2/\\\\varepsilon}\\\\})$ shown by (Koloskova et al., 2023; Takezawa et al., 2024), we observe that the second term is dominant even for small values of $\\\\varepsilon = 10^{-6}$. In contrast, the leading term in our bound $\\\\mathcal{O}(\\\\max\\\\{L_0R_0^2/\\\\varepsilon, L_1^2 R_0^2\\\\})$ is the first one and it is significantly smaller than the $\\\\mathcal{O}(\\\\sqrt{R_0^4 LL_1^2/\\\\varepsilon})$ term even for reasonably small values of $\\\\varepsilon$. Moreover, as one can see from Lemma 2.1, the smoothness constant on the described ball is proportional to $\\\\exp(L_1\\\\|\\\\| x^0 - x^\\\\ast \\\\|\\\\|)$ in the worst case, i.e., it can be exponentially large.\\nWe also highlight that our result for $(L_0, L_1)$-GD provides a refined characterization of the method\\u2019s behavior: we prove that the norm of the gradient is non-increasing and that the method\\u2019s convergence has two clear phases (large gradient and small gradient regimes). Previous results do not provide such a detailed characterization.\\nConsidering all of these aspects, we believe that our results make a noticeable and significant improvement over the previous ones.\\n\\n>**For the adapted versions of Similar Triangles Method and Adaptive Gradient Descent, the iteration complexity is in the form of $\\\\mathcal{O}(\\\\sqrt{L_0L_1A\\\\exp(L_1A)A^2/\\\\varepsilon^2})$, where $A$ equals either $R_0$ or $D$, which generally aligns with a specification of Li et al. (2024).**\\n\\nIndeed, as we explain in lines 409-416, our result for $(L_0, L_1)$-STM matches the result derived by Li et al.
(2024) in the case of $(L_0, L_1)$-smooth problems, though we consider a different method, which is of interest in its own right. However, Adaptive Gradient Descent (AdGD) is not considered by Li et al. (2024), and, in particular, our Theorem 6.1 gives a result of a different form. To achieve this, we modify the proof of AdGD significantly \\u2013 see the proof of Theorem G.2 in Appendix G.2. This new approach to the analysis of AdGD allows us to show a better bound than the one that directly follows from $(L_0, L_1)$-smoothness and existing analysis by Malitsky & Mishchenko (2020) (see more details in lines 424-450 and the discussion after Theorem 6.1).\\n\\n>**Thus, the overall contribution in terms of novel theoretical guarantees appears quite limited to me at this stage.**\\n\\nWe hope that our above responses clarify the novelty of the derived results. Moreover, we also highlight that in Section 7, we provide the results for the stochastic extensions of $(L_0, L_1)$-GD and GD-PS. These results are novel, and the proof technique is also new for stochastic optimization, e.g., see lines 2004-2051 \\u2013 we are unaware of similar tricks used in prior works on stochastic optimization.\\n\\n>**You mention new technical results for $(L_0,L_1)$-smooth functions in Section 1.3. Could you specify these results for reference?**\\n\\nLemma 2.2 is new and provides several useful inequalities. In particular, we are not aware of analogs of inequality (10) in the literature on generalized smoothness.
Moreover, in the revised version, we expanded the discussion about the relation to the known analogs of inequalities (8) and (11) from (Koloskova et al., 2023; Li et al., 2024a).\"}", "{\"title\": \"Response to Reviewer jtK8: Part 3\", \"comment\": \">**Ultimately, a high-level takeaway of the work (which is likely also a takeaway that follows from the Chen paper if one could quickly bridge their stationarity to function suboptimality here) seems to be that the results are ultimately of a local nature. What I mean by that is that due to the unavoidable dependence of the type $\\\\exp(L_1 D)$, the terms involving that in the bound will be large, unless $L_1 D$ is sufficiently small, and hence, in spirit the results can be viewed as showing that the GD methods studied in the paper are good but only locally. This aspect is not a criticism of the paper, but just a comment on how one can typically interpret bounds that involve exponentials.**\\n\\nWe respectfully disagree with the reviewer that our result is of a local nature. For $(L_0, L_1)$-GD we have a clear characterization of two-stage convergence and the leading term in the rate is independent of $L_1$ and any exponentially large terms (inequality (15)). Furthermore, for GD-PS we also show better results without requiring any change of the method. Finally, our analysis for AdGD does not involve these stages at all, and does not exhibit a local nature. \\n\\n>**Do the results really offer a significant improvement over Li et al.
2024a, and they argue that these quantities could potentially be exponentially large in the worst case (Line 408-420)---But in the reviewer's opinion, assuming these initial quantities to be constants is not a very strong assumption, whereas in comparison, the authors' acceleration result depends on exp(L1), which seems to be less favorable.**\\n\\n\\nWe thoroughly discuss and explain in detail the improvements on the results of (Li et al., 2024a) in lines 135-154, 157-165, 286-290, 326-329, 386-398, 836-837. We added lines 221-224 for additional discussion. \\nWe would like to briefly summarize it here.\\n\\n- constant $\\\\ell$ can be much larger than $L_0$ and $L_1$, \\n- $\\\\ell \\\\sim L_0(1 + 2L_1R_0 \\\\exp(L_1 R_0))$ in the worst case, and the derived complexity is not optimal\\n- our bound does not depend on $f(x^0) - f(x^*)$ and on any bound for $\\\\|\\\\nabla f(x^k)\\\\|$ including $\\\\|\\\\nabla f(x^0)\\\\|$ which is of order $L_0R_0\\\\exp(L_1 R_0)$\\n\\nAssuming initial quantities to be constants does not lead to a good bound. For example, trivial bound (24) reduces $(L_0, L_1)$-smoothness to the standard smoothness with a pessimistic constant, having double exponential of $L_1D$.\\nWe further improve the results of (Li et al., 2024a) by removing the $L_1$ dependent factors from $\\\\ell$ in the worst case, e.g.\\nour bound for GD is $\\\\mathcal O (\\\\max [ \\\\frac{L_0 R_0^2}{\\\\varepsilon}, L_1^2R_0^2])$ vs. (Li et al., 2024a) which is $ \\\\mathcal O (\\\\frac{\\\\ell R_0^2}{\\\\varepsilon})$ where $\\\\ell \\\\sim L_0(1 + 2L_1R_0 \\\\exp(L_1 R_0))$ in the worst-case. We improve the result by a factor of $1 + 2L_1R_0 \\\\exp(L_1 R_0)$ which is significantly larger in the worst case. \\n\\nRegarding our results for the accelerated method, the reviewer fairly noticed an extra term $\\\\exp(L_1D)$ in contrast to others' results. In fact (Li et al., 2024a) have the same issue; we have the discussion in lines 386-398. 
Our analysis faced a challenge bounding the term (20), discussed in lines 371-377.\\nHowever, in experiments we see that $(L_0, L_1)$-STM shows better performance than $(L_0, L_1)$-GD. And we believe that the term (20) can be bounded. We think that it would be an important contribution to bound the term, and decided to put this challenge into the main part.\"}", "{\"comment\": \"We thank the reviewer for the reply and for the suggestions regarding the improvement of the presentation. We promise to address these comments in the final version if our paper gets accepted. In particular:\\n\\n- In addition to the current clarifications in the introduction, we will elaborate on the contribution of Zhang et al. (2020a) in Section 1.3, i.e., we will add that Zhang et al. (2020a) provide a generalized version of $(L_0,L_1)$-smoothness.\\n\\n- We will add explicitly to the Conclusion section (Section 8) that our rates for $(L_0,L_1)$-STM and AdGD have exponential factors of $\\\\exp(L_1\\\\|\\\\| x^0 - x^\\\\ast \\\\|\\\\|)$ and $\\\\exp(2L_1D)$ respectively, meaning that their influence is significant unless $L_1 = 0$ or the starting point is close enough to the solution, i.e., the results are local in spirit.\\n\\nThese modifications will require a few lines of additional space to fit the main part into ten pages. We plan to get this extra space by merging the statements of Theorems 7.1 and 7.2 and making formulas (1) and (22) inline formulas.\\n\\nOverall, we believe that **the mentioned modifications are minor**: they change neither the flow of the paper nor the results. We believe we also addressed all other concerns raised by the reviewer. **Therefore, we kindly ask the reviewer to reconsider their score.**\"}", "{\"title\": \"Response to Reviewer iTbo\", \"comment\": \">**A new acceleration of GD is proposed. It would be supportive to add more remarks to highlight the theoretical merits of this acceleration compared with the STM in (Gasnikov & Nesterov, 2016). 
Additionally, it would be more supportive to numerically compare its performance with STM in (Gasnikov & Nesterov, 2016).**\\n\\nWe have a detailed comparison of our accelerated GD method with the STM from [Gasnikov & Nesterov, 2016]. As highlighted in lines 362-364, our algorithm differs from the original STM in line 6 of Algorithm 3. Moreover, as discussed in lines 408-409, we recover the convergence guarantees of the original STM from Theorem 5.1 with $L_1 = 0$. \\n\\nHowever, we agree that it would be valuable to include experiments to evaluate the empirical performance of the proposed method. We will add these experiments in the camera-ready version.\\n\\n\\n>**Example 1.3 considers a logistic function with L2 regularization. However, f(x) is not related to L2. It would be better to specify where the L2 regularization is.**\\n\\nThank you for pointing this out. This is indeed a typo. The problem in example 1.3 is a simple logistic function (without L2 regularization). We fixed this typo in the revised version of our manuscript. \\n\\n>**The discussion after Theorem 7.1 (line 521) claims that the probability must be smaller than \\u2026\\u2026. It would be clearer to explain why the probability should be smaller than this value.**\\n\\nThank you for your question. We agree that due to space constraints, we were unable to include all the necessary details. For clarity, we added Remark H.1 in line 2146.\\n\\nWe welcome the reviewer to check the revised version of our manuscript for the clarifications.\"}", "{\"title\": \"Response to Reviewer jtK8: Part 1\", \"comment\": \">**From a quick skim of the literature, it seems that Zhang et al noted in passing that twice differentiability could be dropped at the expense of some more analysis, but the full details of the differentiable case were worked out in subsequent work. 
This aspect is not clear from how the current work cites related work, and should be fixed.**\\n\\nIndeed, (Zhang et al, 2020b) provides an equivalent assumption for $(L_0, L_1)$ smoothness, not involving the Hessian (avoiding twice differentiability). This assumption is very similar to the one we use in our work, which originates from (Chen et al., 2023). We explicitly refer to this in Lemma 2.1, see line 188. We also added a remark in line 197. See also the discussion in lines 826-837 about other notions of generalized smoothness.\\nOur assumption covers the Hessian one (so the assumption we use is more general), but, to the best of our knowledge, the assumption equivalent to the Hessian one can improve our results only up to a constant factor, preserving exponentials in their form, see line 197.\\n\\nWe kindly ask the reviewers to let us know whether our revised version addresses their concern. \\n\\n>**While the related work section is overall fairly good, a more precise statement about the results of Chen et al is needed, especially because that work introduced some of the key technical tools too, and studied the nonconvex case; in particular, it would be worth noting what happens if one trivially takes their nonconvex results, and tries to adapt them to the convex case (by boosting stationarity to function suboptimality using the current assumptions). Also, their slightly more general $\\\\alpha$-version of Assumption 3 could be noted.**\\n\\nWe respectfully and explicitly refer to the contribution of (Chen et al., 2023) in Lemma 2.1 (see lines 185-187), in the related work section, and in the extra related work in the appendix. Indeed, our results utilize their assumption. We also discuss the results of the $\\\\alpha$-version of the assumption in Appendix A. \\n\\nWe are not aware of any technique that trivially adapts the results of the nonconvex case to the convex case, but we believe that a trivial adaptation does not lead to tight results. 
\\n\\nWe welcome the reviewer to check the revised version of our manuscript for the improved statement of the (Chen et al., 2023) results in Appendix A. All the changes are highlighted in red. \\n\\n\\n>**Section 5 on acceleration could be deferred to the appendix or noted in passing, and more discussion given to the exponential terms in the bound, which are tantamount to saying that essentially no practical acceleration happens, even though the technical result itself is interesting to note. Similarly, the bounds arising in Section 6 should be discussed a bit more, because due to the central assumption of the paper, ultimately the pessimistic exponential terms in D arise.**\\n\\nRegarding Section 5, our bound for the accelerated method indeed follows from the pessimistic bound on the gradient norm, which leads to the effective smoothness given by the inequality (24). It actually means an accelerated convergence with a pessimistic bound on the smoothness constant. Despite the challenge of proving a better bound due to the term (20), in experiments we see that $(L_0, L_1)$-STM shows a certain acceleration. We also conjecture that the term (20) can be bounded better. We believe this is an interesting open question, and we want to highlight it in the main part.\\n\\nRegarding Section 6, we also present a pessimistic bound (25) based on (24). Then, in lines 432-435, we point out that the constant in the bound can be huge and then provide the improved result, avoiding an additional exponent. We believe that the further discussion in lines 450-463 is quite comprehensive since it provides an example in which the improved analysis can lead to a convergence rate up to 10^5 times faster; see line 451. However, we would greatly appreciate any suggestions from the reviewer to enhance this discussion.\"}", "{\"title\": \"Response to Reviewer RdU1: Part 2\", \"comment\": \">**For Theorem 3.1, it appears that you integrate the analyses from Li et al. (2024) and Koloskova et al. 
(2023), substituting the normalization term $\\\\ell(G)$ in $\\\\eta$ into something related to $\\\\|\\\\|\\\\nabla f(x^k)\\\\|\\\\|$, so that the sequence enjoys the monotonic properties and can be analyzed in two cases. Could you elaborate on the intuition behind this approach?**\\n\\nOur proof follows the one from (Koloskova et al., 2023), which follows the analysis of standard GD. Then, similarly to (Koloskova et al., 2023), we consider two possible situations: either $\\\\|\\\\| \\\\nabla f(x^k) \\\\|\\\\| \\\\geq L_0 / L_1$ or $\\\\|\\\\| \\\\nabla f(x^k) \\\\|\\\\| < L_0 / L_1$. In the second case, the gradient is small, and $L_1$-term in $(L_0, L_1)$-smoothness is dominated by $L_0$-term, i.e., in inequality (8), the denominator of the left-hand side is $\\\\mathcal{O}(L_0)$. In this case, the method behaves as standard GD for $L$-smooth problems with $L = \\\\mathcal{O}(L_0)$. However, when $\\\\|\\\\| \\\\nabla f(x^k) \\\\|\\\\| \\\\geq L_0/L_1$, the $L_1$-term in $(L_0, L_1)$-smoothness is the leading one and this is the crucial difference between our proof and the one obtained by Koloskova et al. (2023): to handle this case, we use (8) and show that such situations lead to the decrease of $\\\\|\\\\| x^{k+1} - x^\\\\ast \\\\|\\\\|^2$ by some positive constant $\\\\nu\\\\eta / (8L_1^2)$. In contrast, Koloskova et al. (2023) use traditional smoothness, which leads to worse complexity, as we explained in our first response. Moreover, Lemma 3.2 shows that the gradient norm is non-increasing along the trajectory of the method \\u2013 similar to standard GD (e.g., see Lemma C.3 from [1]).\\n\\nNext, Li et al. (2024a) show that under $(r,\\\\ell)$-smoothness, the function is locally smooth, and thus, the standard analysis of GD for smooth problems can be applied under the assumption that the stepsize is sufficiently small. 
However, this approach leads to a worse dependency on $L_0$ and $L_1$ than we have in our bounds for $(L_0, L_1)$-GD, as we explain in lines 147-153.\\n\\n\\n>**Typos and minor suggestions**\\n\\nWe thank the reviewer for spotting the typos and for the suggestions. We incorporated all of them in the revised version. All the changes are highlighted in red. \\n\\n\\u2014\\nReferences\\n\\n[1] Gorbunov et al. Extragradient Method: O (1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity. AISTATS 2022\"}", "{\"summary\": \"The paper presents a study of gradient methods for solving optimization problems involving an (L0,L1)-smooth objective function, which was first introduced in (Zhang et al, 2020b). The analysis in the current paper is devoted fully to the convex and strongly convex case, where the convergence analysis of several methods is proposed, both in the deterministic and stochastic setting. However, there are some major issues that need to be addressed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, and thus easy to follow. The paper has some significant theoretical contributions for the existing algorithms. There are also new algorithms being adapted from the classical smoothness to the new (L0,L1)-smoothness.\", \"weaknesses\": \"1. The analysis of the current paper is mostly based on Asymmetric and Symmetric (L0,L1) smoothness, which can hold for functions that are not twice differentiable. This is good since this class covers the original class of functions in (Zhang et al, 2020b), where twice differentiability is a must. However, which meaningful classes of functions satisfy Asymmetric/Symmetric (L0,L1) smoothness while not being C2? I found that all the examples presented are C2, and thus it seems that the use of Asymmetric and Symmetric (L0,L1) smoothness is not necessary, which considerably reduces the importance of the current paper. 
Note that examples of L-smooth functions that are not C2 are diverse, so it is reasonable to use the gradient Lipschitz condition instead of the stronger one, bounded Hessian. I do not think this is the case for (L0,L1) smoothness.\\n\\n2. Now assume that the function is C2. Lemma 2.1 shows that (L0,L1) smoothness implies (not equivalent) equation (7). In comparison with Lemma 2.2 (Vankov et al., 2024) below, their equation (2.2) seems to be a sharper inequality and in fact is an equivalent condition. Based on this, I suspect that the results obtained by the paper under review are not as tight as claimed by the authors, especially when compared with (Vankov et al., 2024). \\n\\n3. The convergence theory of algorithms for (L0,L1)-smooth functions does not explain why they are better than standard gradient descent. For example, can the authors explain why standard GD performs worse than algorithms designed for (L0,L1)-smooth functions in solving logistic regression?\\n\\nReference.\\nD Vankov, A Rodomanov, A Nedich, L Sankar, SU Stich, Optimizing (L0,L1) - Smooth Functions by Gradient Methods, https://arxiv.org/abs/2410.10800\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer iTbo,\\n\\nThe author discussion phase will be ending soon. The authors have provided detailed responses. Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission?\\n\\nThanks.\\n\\nArea Chair\"}", "{\"comment\": \"Dear Reviewer pFMj,\\n\\nThe author discussion phase will be ending soon. 
Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission?\\n\\nThanks.\\n\\nArea Chair\"}", "{\"summary\": \"This paper takes a closer look at $(L_0,L_1)$-smoothness for the setting of convex optimization. There, it derives more fine-grained convergence rate guarantees than existing work, while discussing extensions to accelerated, stochastic, and certain adaptive settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This article fills a gap in the analysis of convex L0-L1 problems that are differentiable, and claims to do that without having to resort to small stepsizes as in the recent work of Li et al. The work is reasonably clearly written, and is easy to follow. Some aspects that could do with more discussion are the two-phase nature of GD (with the $>, < L_0/L_1$), and what that may mean when compared with other work (does the same happen in the nonconvex case for instance? how does it relate to bounds from the work of Li et al for instance?)\", \"weaknesses\": [\"From a quick skim of the literature, it seems that Zhang et al noted in passing that twice differentiability could be dropped at the expense of some more analysis, but the full details of the differentiable case were worked out in subsequent work. This aspect is not clear from how the current work cites related work, and should be fixed.\", \"While the related work section is overall fairly good, a more precise statement about the results of Chen et al is needed, especially because that work introduced some of the key technical tools too, and studied the nonconvex case; in particular, it would be worth noting what happens if one trivially takes their nonconvex results, and tries to adapt them to the convex case (by boosting stationarity to function suboptimality using the current assumptions). 
Also, their slightly more general $\\\\alpha$-version of Assumption 3 could be noted.\", \"Section 5 on acceleration could be deferred to the appendix or noted in passing, and more discussion given to the exponential terms in the bound, which are tantamount to saying that essentially no practical acceleration happens, even though the technical result itself is interesting to note. Similarly, the bounds arising in Section 6 should be discussed a bit more, because due to the central assumption of the paper, ultimately the pessimistic exponential terms in D arise.\", \"The authors' motivation for studying AdGD should, as a result, be further enhanced: it seems inherently unsuitable to use under the generalized smoothness assumptions since AdGD does not utilize clipping. And the result for the stochastic case (Section 7) requires a common minimizer for all the stochastic components (Assumption 4). Although such an assumption is also used in some works, this assumption is not that weak, and renders the results less applicable.\"], \"questions\": [\"Can a version of the method that is somewhat more agnostic of the knowledge of L0 and L1 be designed, since right now this knowledge is quite necessary for selecting the correct step sizes?\", \"Ultimately, a high-level takeaway of the work (which is likely also a takeaway that follows from the Chen paper if one could quickly bridge their stationarity to function suboptimality here) seems to be that the results are ultimately of a *local nature*. What I mean by that is that due to the unavoidable dependence of the type $\\\\exp(\\\\|x-y\\\\|)$, the terms involving that in the bound will be large, unless $\\\\|x-y\\\\|$ is sufficiently small, and hence, in spirit the results can be viewed as showing that the GD methods studied in the paper are good but only locally. 
This aspect is not a criticism of the paper, but just a comment on how one can typically interpret bounds that involve exponentials.\", \"Do the results really offer a significant improvement over Li et al. 2024a (_Convex and Non-convex Optimization Under Generalized Smoothness_). The results provided by Li et al. 2024a for GD and NAG also do not rely on the assumption of L-smoothness, and their acceleration results for NAG **do not** have a dependence on exp(L1). The authors claim that the advantage of their results is that they do not require dependencies on $\\\\Vert \\\\nabla f(x_0) \\\\Vert$ and $f(x_0) - f^*$, as in Li et al. 2024a, and they argue that these quantities could potentially be exponentially large in the worst case (Line 408-420)---But in the reviewer's opinion, assuming these initial quantities to be constants is not a very strong assumption, whereas in comparison, the authors' acceleration result depends on exp(L1), which seems to be less favorable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for your clarifications. My concerns have been addressed except the experimental part. I will keep my initial rating.\"}", "{\"comment\": \"Dear Reviewer RdU1,\\n\\nThe author discussion phase will be ending soon. The authors have provided detailed responses. Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission? \\n\\nThanks.\\n\\nArea Chair\"}", "{\"comment\": \">We are afraid that there is a misunderstanding between us and the reviewer. 
To resolve this misunderstanding, we kindly ask the reviewer to provide a definition of \\\"local result\\\"\\n\\nI have repeated again and again that \\\"in spirit\\\" the results can be interpreted as local, and the reason I called it \\\"in spirit\\\" is that it is not literally, formally like a usual local result. For completeness, let me reiterate what I've been saying in each response, but with a concrete pointer. Lines 174 and 179 of the paper note in summary upper bounds that have the terms $\\\\exp(L_1 R_0)$, and $\\\\exp(L_1D)$, respectively. These terms are exponentially large, and thus unless $R_0$ is small, or $D$ is small, these terms will dominate. Having the need to have small $R_0$ and/or $D$ is what I am calling **local** in spirit. Of course, the authors can say `these are just constants', hence I did not formally call out the dependence to be local. \\n\\n**Side remark:** If we as an optimization community are a bit more critical about these constants, then one will see that actually this style of bounds can be even more problematic, especially say if one is trying to make a strong statement in terms of bit complexity, but that is not the topic of this paper, so I did not raise it.\\n\\n>If we continue the derivation as the reviewer suggested, assume that $|| x_k - x^\\\\ast || \\\\leq D$, and apply Theorem 2 from Chen et al (2023) with $\\\\alpha = 1$ and $\\\\beta = 1$ (note that it is the only possible choice for $\\\\beta$ when $\\\\alpha=1$), then we get $$ \\\\frac{1}{N}\\\\sum\\\\limits_{k=0}^{N-1}(f(x_k) - f^\\\\ast) \\\\leq \\\\frac{2L_0 D(f(x_0) - f^\\\\ast)}{N\\\\varepsilon} + \\\\frac{D\\\\varepsilon}{2} $$ for normalized GD, where $\\\\varepsilon > 0$.......\\n\\nThank you, this precisely proves the point I was trying to make. Namely, as noted in my previous message, the **trivialmost** approach already gives some bound, albeit not a tight one. But noting such a thing makes the paper's positioning stronger, and not weaker. 
Perhaps a less trivial approach may already give something a bit stronger, but perhaps to reach the final result of the present paper one has to work a bit harder. So my point was (in all comments so far), that the authors should comment precisely on how far one gets with Chen et al's nonconvex approach, and then use that to justify the need for the present paper to exist. \\n\\n>The inequality the reviewer is referring to is coercivity..\\n\\nSorry, I made a typo when I wrote that, and you are right about the switched terminology. And yes, your (11) is a version of the **co-coercivity** -- and it itself is also **local in spirit** (unavoidably so, as you noted yourself in L198). However, the whole point of what I was trying to express there was that on top of mere convexity, another **simple approach** would be to use co-coercivity, to get a somewhat tighter bound. But I made a typo, which derailed the discussion a bit (and added some junk in my comment). Nevertheless, if one tries to follow this route of extending gradient norm bounds using something beyond just convexity, one lands at co-coercivity as one choice, but to use co-coercivity one needs to then develop an analog of co-coercivity, which brings us to your bound (11), and fits in with enhancing the narrative by connecting to Chen et al's work more directly.\"}", "{\"title\": \"General comment to all reviewers\", \"comment\": \"We thank the reviewers for their feedback and time. In particular, we appreciate that the reviewers acknowledged the multiple strengths of our work. Specifically, they write that\", \"the_paper\": [\"is well-written (Reviewer 4rMB)\", \"makes a technical contribution (Reviewer jtK8)\", \"improves and derives fine-grained existing convergence results (Reviewers jtK8 and pFMj)\", \"provides new convergence results for AdGD (Reviewer pFMj)\", \"We have updated our manuscript, **highlighting the changes in red**. 
Additionally, we have addressed the reviewers' questions, comments, and concerns in individual responses, **referencing line numbers from the revised manuscript**. We remain committed to promptly addressing any further questions or comments and are happy to engage in back-and-forth discussions as needed.\"]}", "{\"comment\": \"I hope the authors have also made their presentation of related work *more precise* as per the comments from my review and other reviews.\\n\\nAnd, I do not think that **acknowledging** that bounds involving $\\\\exp(D)$ factors are *local in spirit* would confuse anybody. At least, that's the most obvious takeaway for me from those bounds. Pedagogically, such a statement could find home in a \\\"Limitations / Discussion\\\" section without hurting exposition, and while enhancing transparency. \\n\\nThanks for plowing through the intense discussion this paper ended up raising (across all reviews).\"}" ] }
0whx8MhysK
Influence-Guided Diffusion for Dataset Distillation
[ "Mingyang Chen", "Jiawei Du", "Bo Huang", "Yi Wang", "Xiaobo Zhang", "Wei Wang" ]
Dataset distillation aims to streamline the training process by creating a compact yet effective dataset for a much larger original dataset. However, existing methods often struggle with distilling large, high-resolution datasets due to prohibitive resource costs and limited performance, primarily stemming from sample-wise optimizations in the pixel space. Motivated by the remarkable capabilities of diffusion generative models in learning target dataset distributions and controllably sampling high-quality data tailored to user needs, we propose framing dataset distillation as a controlled diffusion generation task aimed at generating data specifically tailored for effective training purposes. By establishing a correlation between the overarching objective of dataset distillation and the trajectory influence function, we introduce the Influence-Guided Diffusion (IGD) sampling framework to generate training-effective data without the need to retrain diffusion models. An efficient guided function is designed by leveraging the trajectory influence function as an indicator to steer diffusions to produce data with influence promotion and diversity enhancement. Extensive experiments show that the training performance of distilled datasets generated by diffusions can be significantly improved by integrating with our IGD method and achieving state-of-the-art performance in distilling ImageNet datasets. Particularly, an exceptional result is achieved on the ImageNet-1K, reaching 60.3\% at IPC=50. Our code is available at https://github.com/mchen725/DD_IGD.
[ "Dataset Distillation", "Dataset Condensation", "Diffusion Model", "Guided Diffusion Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=0whx8MhysK
https://openreview.net/forum?id=0whx8MhysK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ysupuxtERR", "wivitUtgvw", "wfSVT4S077", "wUt0YmfoA2", "uAogHGDZdO", "quqFr4hvMR", "nepBLIhDmT", "iWp8Wkzfob", "c1gosz0AvO", "bU08H9LmFM", "amWvJgooJW", "amWWc6OLDJ", "a2l7vRXj9p", "YxX6DAb1bx", "YM9IDkPvTM", "XpI4pZStoE", "WkS0wAhBIg", "QPTZAWsVof", "Ps5c1OZ2KR", "Nh0PlpXmf5", "LxkBvRy3M4", "LvYa9cdntA", "IkOUadUSd3", "Ghh70wiyd3", "FVs9vETLbX", "FAEfCOaRF3", "EEbhPomtRJ", "DKc0NM20RQ", "CCsEM9oYDB", "9JEnxHCRn3", "7M2iG6MTk7", "74tNsFliw2", "6cG4ICRbWb", "3MSSJBsmS3", "22yxZOO0oA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732592784403, 1732113061293, 1732113632845, 1732521280321, 1732545399325, 1732813012961, 1732773645317, 1730730334055, 1732601104067, 1732554275352, 1730637878245, 1732113086552, 1732113575425, 1732113509456, 1732521148931, 1732113037714, 1732113475733, 1730516811050, 1733048069681, 1733137459167, 1732761213388, 1732394755917, 1732287889293, 1732521046013, 1737524031493, 1730527681567, 1732549368830, 1732521104314, 1732547098260, 1733123642937, 1732791108165, 1734561759174, 1730194580357, 1732113595652, 1732521205629 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_hMXA" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10187/Reviewer_Qttw" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_Xr47" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_rhZM" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_Xr47" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_hMXA" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_rhZM" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_brCT" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_rhZM" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_rhZM" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Area_Chair_JvuM" ], [ "ICLR.cc/2025/Conference/Submission10187/Reviewer_Qttw" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ], [ "ICLR.cc/2025/Conference/Submission10187/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks\", \"comment\": \"Thanks for the rebuttal and my questions have been largely answered. 
I maintain my score.\"}", "{\"comment\": \"**Q3(@Weakness3&4): The paper should compare with other recent methods using pre-trained diffusion models.**\\n\\n**A3:** Thank you for the valuable suggestion and for introducing the insightful related work [1]. Although [1] was not originally proposed for the DD setting, it aims to improve the training efficacy of synthetic data generated by diffusion models. We also identified a related work, D$^4$M [2]. Together with our baseline method Minimax, **these three works propose using distribution-matching-like objectives to fine-tune diffusion models**. Here, we follow RDED\\u2019s evaluation protocol commonly used in dataset distillation (involving predicted soft labels) to compare our guided-diffusion methods with these three approaches on ImageNet-1K.\\n| IPC | [1] | D$^4$M | Minimax | DiT-IGD | Minimax-IGD |\\n|:---:|:----:|:-------:|:---------:|:-------:|:-----------:|\\n| 10 | 44.5 | 27.9 | 44.3 | 45.5 | **46.2** |\\n| 50 | 59.4 | 55.2 | 58.6 | 59.8 | **60.3** |\\n\\nThe results demonstrate that **our guided-diffusion methods continue to achieve superior performance**, supporting our claim that **introducing informative guidance is essential for applying diffusion models in dataset distillation**. We also observe that among the three distribution-matching-like fine-tuning methods, [1] achieves the best performance. We will include these evaluations in the later revised version of the paper and plan to explore integrating [1] with our influence-guided generation in future work.\\n\\n[1] Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, Bo Zhao. Real-Fake: Effective Training Data Synthesis Through Distribution Matching. ICLR 2024\\n\\n[2] Duo Su, Junjie Hou, Weizhi Gao, Yingjie Tian, Bowen Tang. D4M: Dataset Distillation via Disentangled Diffusion Model. CVPR 2024\", \"title\": \"Response to Reviewer Qttw (2/2)\"}", "{\"comment\": \"Thank you for your instructive feedback and valuable suggestions. 
Below are our responses to your questions.\\n\\n---\\n**Q1(@Weakness1): The introduction lacks a clear explanation of what \\\"influence\\\" entails or why it is used for guidance. Adding a figure to illustrate the motivation or differences from previous methods could be helpful.**\\n\\n**A1:** **We have uploaded a revised version of the paper incorporating your constructive feedback**. In lines 59-80 of the Introduction, we added **a brief explanation of the influence function** and reorganized the text to highlight its role in addressing the inherent challenges posed by the abstract nature of our training-effective condition for diffusion generation. Additionally, in the newly added Figure 5, **we provide an intuitive illustration of the IGD sampling framework**, contrasting it with the vanilla diffusion sampling method.\\n\\nWe respectfully look forward to your feedback on the updated content.\\n\\n---\\n**Q2(@Weakness2): How does the proposed training-free diffusion framework for dataset distillation differ from existing training-free approaches in AIGC?**\\n\\n**A2:** This is a key question related to our motivation for proposing the influence-guided paradigm for diffusion generation. In common diffusion-based AIGC applications, users primarily control data generation through text prompts (line 134). While this enables some degree of content specification, **systematically defining a diverse set of high-quality prompts** to effectively guide diffusion models in generating both diverse and training-effective data for dataset distillation (DD) **remains an abstract challenge without a structured optimization methodology**.\\n\\nTo address these challenges, we identify the influence function as an effective metric that measures the compatibility of generated data with generalized training-effective conditions. 
Building on this insight, **our proposed IGD method is the first to systematically optimize data influence to generate high-quality surrogate datasets for DD tasks**.\n\n---\n**Q3(@Weakness3): The experiments only report results on ImageNet, without including results on classic datasets such as CIFAR-10 and CIFAR-100.**\n\n**A3:** We provide a comparison of our DiT-IGD method with other state-of-the-art DD methods designed for distilling high-resolution, large-scale datasets, including RDED [1] and SRe$^2$L [2], on CIFAR-10 as a reference. \n| IPC | SRe$^2$L | RDED | DiT-IGD |\n|:---:|:------:|:-------:|:-------:|\n| 10 | 29.3 | **37.1** | 35.8 |\n| 50 | 45.0 | 62.1 | **63.5** |\n\nOur method achieves comparable performance with RDED on CIFAR-10. However, for the primary focus of large-scale DD tasks, our method attains significantly better performance, as shown in Tables 2-3 and our response A1@Reviewer-brCT.\n\n[1] Peng Sun, Bei Shi, Daiwei Yu, Tao Lin. On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm. CVPR 2024\n\n[2] Zeyuan Yin, Eric P. Xing, Zhiqiang Shen. Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective. NeurIPS 2023\n\n---\n**Q4(@Question1): What is the relationship between the two proposed guidance and the \\\"train-free\\\" concept? Notably, even when these components are excluded from Equation 9, good results are still achieved.**\n\n**A4:** In this work, we frame the DD problem as learning a training-effective conditional distribution of the authentic distribution, thereby addressing the distribution shift issues encountered by earlier DD methods (line 108). As mentioned in **A2**, addressing this problem requires tackling the abstract nature of a generalized training-effective condition.
To systematically optimize data influence and generate high-quality surrogate datasets for DD tasks, we introduce two effective guidance terms to steer the diffusion sampling process, providing influence promotion and diversity enhancement.\n\nWe respectfully disagree with the statement that \\\"good results are still achieved even when these guidance components are excluded.\\\" Our comparative results in Tables 1 and 2 over ImageNet datasets show **significant improvements over the two baselines**, DiT and Minimax, when integrating our influence guidance and deviation guidance, **as also acknowledged by reviewers brCT, hMXA, and Qttw**. For instance, IGD enhances the average performance of DiT by 6.6% and provides a 5.1% boost for Minimax on ImageWoof when IPC \\u2265 50. Furthermore, our ablation study in Table 5 shows the contributions of the two guidance components.\", \"title\": \"Response to Reviewer rhZM (1/1)\"}", "{\"comment\": \"Dear Reviewer rhZM,\\n\\nThank you for your continued engagement and valuable feedback on our submission! Below is a summary of our responses to your further questions:\\n\\n- Our method achieves comparable lossless performance to DATM on CIFAR-10.\\n- Further clarifications and revisions regarding the \\\"training-free\\\" term are provided.\\n\\nWe would greatly appreciate it if you could review our additional responses and let us know if you have further questions or concerns. We look forward to your further feedback!\\n\\nBest Regards\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks for the detailed response and the new experiments. I believe the paper can be further enhanced after the revision.\"}", "{\"comment\": \"Dear Reviewer Xr47,\\n\\nThank you for your thoughtful review!
We appreciate your insightful questions, which have inspired us to reflect on future directions in dataset distillation.\n\nTo the best of our knowledge, **all current DD methods require generating a new distilled dataset for each unseen task-specific dataset**. However, your question raises an important point: similar to transfer learning for downstream tasks, is it possible to leverage knowledge from previously distilled datasets to improve distillation efficiency or performance on new tasks? We plan to explore this idea in future work.\n\nWe also acknowledge that achieving scalability for complex generative tasks is challenging but valuable. Unlike classification, which benefits from clear decision boundaries, improving the data efficiency of generative models remains an open problem. A recent work [1] has proposed a coreset selection method based on distribution alignment, which shows moderate performance on smaller datasets. However, the scalability of such methods for more complex tasks remains an open challenge.\n\nWe share your concern about distilled datasets not matching full dataset performance, which is a common issue in methods including ours and the state-of-the-art baselines we compared, e.g., RDED and SRe$^2$L, which are based on model-inverted information. While these methods can achieve remarkable performance at lower IPCs, a gap remains due to limitations in diversity and marginal case coverage. As we noted earlier, addressing this gap will be a focus of our future work.\n\nThank you once again for your insightful feedback, which has greatly contributed to the direction of our ongoing and future work. If you have any further thoughts or concerns regarding our work or the future of this field, we would be grateful for your continued guidance.\n\nBest Regards.\n\n[1] Yize Li, Yihua Zhang, Sijia Liu, Xue Lin. Pruning then Reweighting: Towards Data-Efficient Training of Diffusion Models.
CoRR abs/2409.19128 (2024)\"}", "{\"title\": \"I confirm I have read the rebuttals\", \"comment\": \"Dear authors,\\n\\nI confirm that I have read your rebuttals carefully. My concern is still about the method's generalizability. I acknowledge the importance of the research direction, yet some concerns remain.\\n\\nI am concerned that the datasets are generated to fit only one dataset and one task. If so, the application of this work is very limited. Whenever a new task comes, the data distillation needs to be done once, which is actually time-consuming, even more so than the training time of the task with full data. Classification does not take much time to train, but for more expensive tasks such as generative tasks, it is not clear whether the distilled dataset can work. This will limit the application of the work.\\n\\nEven in the case that it is specifically designed for one task, it still cannot achieve SOTA without the constraint on the number of instances.\"}", "{\"summary\": \"This paper addresses the challenges of dataset distillation, which aims to create compact yet effective datasets that can substitute for training on larger original datasets. Existing methods often face limitations when dealing with large, high-resolution datasets due to high resource costs and suboptimal performance, largely caused by sample-wise optimizations in the pixel space. To overcome these challenges, the authors propose framing dataset distillation as a controlled diffusion generation task, leveraging the capabilities of diffusion generative models to learn target dataset distributions and generate high-quality data tailored for training.\\n\\nThe authors introduce the Influence-Guided Diffusion (IGD) sampling framework, which generates training-effective data without retraining the diffusion models.
This is achieved by establishing a connection between the goal of dataset distillation and the trajectory influence function, using this function as an indicator to guide the diffusion process toward promoting data influence and enhancing diversity. The proposed IGD method is shown to significantly improve the training performance of distilled datasets and achieves state-of-the-art results in distilling ImageNet datasets. Notably, the method reaches an impressive performance of 60.3% on ImageNet-1K with IPC (Images Per Class) set to 50.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is easy to read and understand.\\n2. IGD appears to be superior to existing diffusion model-based approaches.\", \"weaknesses\": \"1. In the introduction, the authors introduce the concept of Influence-Guided without clearly explaining what \\\"Influence\\\" entails or why it is used for guidance. The motivation is not well established. While Figure 1 effectively shows performance, adding an additional subfigure to illustrate the motivation or highlight differences from previous methods might be more valuable.\\n\\n2. The primary contribution of the authors is the proposal of a train-free diffusion framework for Dataset Distillation. While train-free approaches are common in the AIGC field, how does the proposed method differ from existing ones?\\n\\n3. The experiments only report results on ImageNet, without including results on classic datasets such as CIFAR-10 and CIFAR-100.\", \"questions\": \"As shown in Table 5, the main contributions of the authors include the proposed influence guidance and deviation guidance. What is the relationship between these contributions and the \\\"train-free\\\" concept? 
Notably, even when these components are excluded from Equation 9, good results are still achieved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer hMXA,\\n\\nThank you very much for your positive feedback on our responses! We especially appreciate your attention to the details and the mathematical formulation of our paper. The revisions based on your feedback certainly enhanced the clarity of the submission.\\n\\nBest regards.\"}", "{\"comment\": \"Dear Reviewer rhZM,\\n\\nThank you very much for the effort and continued attention you have given to our submission!\\n\\nIn Table 1, we have presented **two representative DD methods**, e.g., Distribution Matching (DM) and IDC, **primarily designed for distilling small datasets but can also distil larger datasets**, such as 10-class ImageNet subsets, with acceptable computational resources. However, both methods demonstrated poor performance. Moreover, given that these methods require tens of hours to distil even 10-class subsets of ImageNet, we found it impractical and unnecessary to include them for the distillation of ImageNet-1K in Table 2.\\n\\nAdditionally, while trajectory matching-based methods like DATM can achieve superior performance on small datasets such as CIFAR, **their computational and time costs are prohibitive for distilling 224\\u00d7224 ImageNet datasets**. 
These costs primarily stem from:\n\n- Constructing an expert trajectory pool requires training dozens of surrogate models on the full dataset, which is extremely time-consuming for large datasets.\n- The trajectory-matching loss used to optimize distilled data involves unrolling the computation graph of multiple gradient descent steps, which is highly memory-intensive (\\u2265 100 GB) and time-consuming.\n\nDue to these challenges, we considered employing these methods for distilling high-resolution datasets to be intractable and impractical. Given the limited time remaining in the discussion phase (less than 48 hours), we respectfully emphasize **the difficulty of supplementing such a benchmark at this stage**, as well as **its divergence from the focus of our submission**. \n\nIf our responses have addressed most of your concerns regarding our submission, we would respectfully request your kind consideration of increasing the rating. We truly appreciate your positive feedback on our submission!\n\nBest regards.\"}", "{\"summary\": \"The work proposes a guidance scheme for dataset distillation with two main contributions. The first is to do gradient matching between sampled data and the training data, and the second is to add diversity constraints among samples inside a class. The experimental results show clear improvement over other baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation is reasonable\\n2. The paper is well written\\n3. The performance is significant\", \"weaknesses\": \"1. The performance is still far from the full dataset.\\n2. Lack of diversity measurement experiments\\n3. The design of equation (7) lacks clarification.\\n4. The application of the work seems not to be flexible. From my understanding, for each architecture, a separate one-time distillation will be needed.
Is it possible to have a one-time distillation and use the distilled dataset to validate across models? I can see Table 4 for the robustness between models, yet the performance is not the same as for the models used for guidance. This raises a concern about practical application due to the computational burden.\", \"questions\": \"1. The performance is still very far from the original datasets. What is the least IPC to achieve similar performance with full data?\\n2. Which experiments show an improvement in diversity? The diversity should be measured in terms of FID/Recall values.\\n3. The equation (7) uses cosine similarity instead of product; is it purely due to experimental results or based on some other hypothesis?\\n4. How will the work be performed on different tasks apart from classification?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive feedback. We address your questions in the following responses.\\n\\n---\\n**Q1(@Weakness1): There are writing issues in the math part.**\\n\\n**A1:** Here are responses to the issues you raised:\\n\\nI. *It is unclear how the derivation is transferred from stepwise (Eq4) to epochwise (Eq5).*\", \"answer\": \"Thank you very much for your attention to detail and for pointing out these issues. First, we update the notation for the conditional distribution from $p(x|C)$ to $p(x|\\\\textit{condition})$ to avoid confusion. Second, we correct the missing bold formatting. Third, we clearly indicate that $D(.)$ represents the decoder function and that $u_i=D(z_i)$. **All these revisions are updated in the current revised version of our paper**.\\n\\n---\\n**Q2 (@Weakness2): The proposed method seems to have high computational and storage costs, particularly for gradient (L294) and similarity calculations (Eq. 8).
Detailed costs should be provided.**\n\n**A2:** Thank you for the suggestion. We respectfully clarify that our method does not require high computational resources to generate surrogate datasets. All experiments conducted on ImageWoof, ImageNette, and ImageNet-1K can be completed using a single RTX 4090 with **nearly 16 GB peak memory usage**. Regarding storage costs, peak usage occurs before applying our proposed gradient-similarity-based checkpoint selection algorithm to retain representative checkpoints. This involves storing $E+1$ model checkpoint parameters, but after selection, peak storage is reduced to approximately $2K$ checkpoints. With ConvNet-6 as the surrogate in our default implementation, **this costs nearly 295 MB for storing all $51$ checkpoints' parameters**, **nearly 24 MB for storing the $4$ selected checkpoints**, and **nearly 24 MB for storing the corresponding averaged gradients**. Below, we provide a detailed analysis.\n\nFirst, as outlined in L290-L295, we obtain $E$ checkpoints trained on $\\\\mathcal{T}$ and retain $K$ representative checkpoints for subsequent influence guidance calculation (Eq. 7). As shown in Table 6, our checkpoint selection algorithm allows for only 4 or 5 representative checkpoints to achieve good performance. Moreover, based on our complexity analysis in Appendix A.3, the computational cost of this selection algorithm is comparable to training the surrogate model for $E$ epochs. Before starting generation for a specific class $c$, we calculate the average gradient $\\\\bar{\\\\nabla}_{\\\\theta} \\\\ell_c(X_c ; \\\\theta_e^{\\\\mathcal{T}})$ across each of the $K$ retained checkpoints for each class. This process is similar in computational load to training the model for $K$ epochs on class $c$ of the original dataset and storing $K$ corresponding averaged gradients. All these calculations are performed offline before data generation.
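To make the offline step described above concrete, here is a minimal sketch of per-class gradient averaging over retained checkpoints, together with a cosine-similarity influence score. It uses a toy linear model with an MSE loss; the function names, model, and loss are our illustrative assumptions, not the paper's actual implementation (which uses ConvNet-6 checkpoints and the Eq. 7 guidance):

```python
import numpy as np

def mse_grad(W, X, Y):
    """Gradient of the mean-squared-error loss w.r.t. the weight matrix W."""
    n = X.shape[0]
    return 2.0 / n * X.T @ (X @ W - Y)

def class_avg_gradient(checkpoints, X_c, Y_c):
    """Offline step: average the class-c training gradient over the K
    retained checkpoints and flatten it for later similarity tests."""
    grads = [mse_grad(W, X_c, Y_c).ravel() for W in checkpoints]
    return np.mean(grads, axis=0)

def influence_score(W, x_gen, y_c, avg_grad):
    """Cosine similarity between a generated batch's gradient and the
    stored class-average gradient -- the guidance signal sketched here.
    Assumes both gradients are nonzero."""
    g = mse_grad(W, x_gen, y_c).ravel()
    return g @ avg_grad / (np.linalg.norm(g) * np.linalg.norm(avg_grad))
```

With a single checkpoint and a generated batch identical to the class-c training batch, the score is 1 by construction, matching the intuition that the guidance rewards gradient alignment with the original data.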
\\n\\nAs for the diversity guidance, **the similarity calculation for all generated samples is not computationally demanding** because it is performed in the latent code space of the diffusion model rather than the original image space. **The dimension of the latent code is only 4096**.\\n\\nMore evaluations of the data generation time of our IGD methods can be found in our response A4 to the Reviewer brCT. \\n\\n---\\n**Q3 (@Weakness3): I would suggest revising the statement that this method is \\\"training-free\\\".**\\n\\n**A3:** Thank you for highlighting the need for greater clarity regarding this statement. The use of \\\"training-free\\\" was meant to emphasize our proposition that introducing informative guidance is effective for leveraging a well-trained diffusion model in DD tasks. However, we acknowledge that the term \\\"training-free\\\" may be ambiguous. Therefore, we remove the term in line 082 and revise \\\"training-free guidance\\\" in line 137 to \\\"guided-diffusion generation\\\".\", \"title\": \"Response to Reviewer hMXA (1/1)\"}", "{\"comment\": \"Thank you for your positive feedback! Your valuable question and suggestions regarding the evaluation of distribution diversity led new insights on supporting our influence-guided strategy for dataset distillation (DD). Below are our repones to your questions.\\n\\n---\\n**Q1(@Question1): The performance is still very far from the original datasets. What is the least IPC to achieve similar performance with full data?**\\n\\n**A1:** As a reference, our Minimax-IGD method achieves **90.8% test accuracy** on a ResNetAP-10 model with IPC=400 (**approximately 31% of the original dataset size**) on the ImageNette dataset under RDED's evaluation protocol. This shows a 3.8% performance gap compared to the test accuracy on the full original dataset (94.6%). 
Further increasing the IPC only results in marginal improvements.\n\nEmpirically, the 3.8% error is largely due to \\\"hard samples\\\" that also result in relatively high test loss for a model trained on the full dataset, indicating these are marginal instances within the authentic data distribution. Due to the inherent objective of denoising diffusion models, synthetic data tends to be sampled from high-probability regions, resulting in poor coverage of these marginal instances. Moreover, as suggested by the definition of the averaged gradient $\\\\bar{\\\\nabla}_{\\\\theta} \\\\ell_c(X_c ; \\\\theta_e^{\\\\mathcal{S}})$ (line 180), the influence guidance tends to contribute less to the influence promotion over marginal instances. Addressing this limitation will be a key focus of our future research. \n\n---\n**Q2(@Question2): Which experiments show an improvement in diversity? The diversity should be measured in terms of FID/Recall values.**\n\n**A2:** In Section 4.5 and **Figure 3**, we provided t-SNE visualizations comparing the distributions of data generated by two baseline methods (DiT and Minimax) and our IGD-based methods with IPC=100. The figure shows that integrating **IGD enhances diversity and alignment** with the original dataset, supported by lower Wasserstein distances to the original dataset.\n\nOur further experiments, following your suggestion, reveal that \\\"**focusing solely on diversity or simple alignment with the original dataset is insufficient for optimal effectiveness in DD scenarios**\\\".
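As a concrete, hypothetical illustration of the nearest-neighbor coverage measure used in the comparison that follows, the sketch below counts an original feature vector as covered if any surrogate feature lies within a distance threshold; the feature space (Inception V3 in our experiments) and the threshold value are placeholders here:

```python
import numpy as np

def coverage(original_feats, surrogate_feats, threshold):
    """Fraction of original feature vectors whose nearest neighbor in the
    surrogate set lies within `threshold` (Euclidean distance)."""
    covered = 0
    for f in original_feats:
        dists = np.linalg.norm(surrogate_feats - f, axis=1)
        if dists.min() <= threshold:
            covered += 1
    return covered / len(original_feats)
```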
We compare the **FID scores** and **coverage** of surrogate datasets (IPC=100) generated for ImageWoof by different methods, \\n| Metric | DiT | DiT-IGD | Minimax | Minimax-IGD | Random |\\n|:---------------------------:|:-------:|:-------:|:--------:|:-----------:|:-------:|\\n| **FID** | 81.1 | 75.9 | 80.1 | 76.4 | **54.1**|\\n| **Coverage (%)** | 65.4 | 68.1 | 66.5 | 67.2 | **72.3**|\\n| **Accuracy (%)** | 62.3 | 70.6 | 67.4 | **72.1** | 63.6 |\\n\\nCoverage was assessed based on whether each original data point had a nearest neighbor in the surrogate dataset within a given threshold (e.g., 300 in the Inception V3 latent space). For fairness, we excluded data selected by the Random method from the original dataset during coverage calculation. \\n\\nFrom the results, although **the randomly selected dataset has the lowest FID and highest coverage, its performance was the worst**. Similarly, while Minimax-IGD has worse FID and coverage than DiT-IGD, it performed better. These findings suggest that our diversity-constraint influence-guided objective is a more effective measure for DD than relying solely on distribution alignment.\", \"title\": \"Response to Reviewer Xr47 (1/2)\"}", "{\"comment\": \"**Q3(@Weakness4): Can IGD be used with other efficient diffusion sampling strategies like DPM solvers?**\\n\\n**A3:** Following your advice, we achieved a **significant 50% reduction in average sampling time** by applying the DPM solver to our IGD method (from 8.2 s to 4.3 s on an RTX 4090).\\n\\nWe used the official implementation of the DPM solver with the default 20 sampling steps. Notably, we observed **negligible performance degradation** or even **slight improvement** with fewer sampling steps. 
Below, we compare the average performance using DDIM with 50 steps and the DPM solver with 20 steps for distilling ImageNette with IPC=50:\\n| Solver | DiT | DiT-IGD | Minimax | Minimax-IGD |\\n|:--------:|:-------:|:-------:|:-------:|:-----------:|\\n| DDIM-50 | 75.4 | 80.9 | 77.7 | 82.1 |\\n| DPM-20 | 74.1 (&darr;1.3) | **81.9 (&uarr;1.0)** | 76.4 (&darr;1.3) | 80.5 (&darr;1.6) |\\n\\nWe will include and further extend this useful supplementary evaluation in the later revised version of our paper and introduce the corresponding implementation with the DPM solver in our released code.\\n\\n---\\n**Q4(@Weakness2&5): The time consumption for training surrogate checkpoints and the generation time of IGD against other methods like RDED should be compared.**\\n\\n**A4:** Thank you for pointing out the need for greater statement clarity and your constructive suggestions. First, we want to clarify that our method's \\\"training-free\\\" claim means we do not require retraining the diffusion model, as done in Minimax. **Rather than purely focusing on efficiency**, we regard our influence-guided method as **paving a new way for using diffusion for DD tasks by designing effective guidance to improve training efficacy**.\\n\\nRegarding your suggestion to compare time consumption with SOTA methods like RDED and model-inversion-based methods, all require a well-trained surrogate model for generating synthetic data and predicting soft labels. For example, to distil the ImageNet datasets, these methods need to train a ResNet model on the entire original dataset for over 100 epochs. In contrast, our method only requires training a simpler model, such as ConvNet, for 50 epochs.\\n\\nFor instance generation time, we acknowledge that **RDED is inherently more efficient than diffusion-based methods for dataset distillation**. RDED uses a strategy similar to core-set selection methods to choose informative patches from real images and generate data by stitching them together. 
This enables RDED to create a surrogate dataset with IPC=50 for ImageNette/ImageWoof in just a few minutes. By comparison, our method takes approximately 69 minutes with DDIM-50 and 35 minutes with DPM-20 when generating all classes without using parallel workers. \n\nHowever, RDED\\u2019s reliance on core-patch selection also limits its ability to synthesize new content for the distilled data, whereas our guided diffusion method can. This is reflected in our superior performance, especially in settings where only hard labels are used. Therefore, we believe exploring guided diffusion methods remains a promising direction for dataset distillation.\", \"title\": \"Response to Reviewer brCT (2/2)\"}", "{\"comment\": \"Dear Reviewer brCT,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our submission. As the discussion deadline approaches, we would like to provide a summary of our responses and updates:\\n\\n- Leveraging the DPM solver achieves a **50% reduction in sample generation time** with **negligible performance degradation**.\\n- Our method still performs better than RDED in Table 1's comparison.\\n- Insensitivity is observed when using the Swin Transformer as a surrogate.\\n- A detailed time comparison with RDED is reported.\\n\\nWould you mind checking our responses and confirming if you have any additional questions? We welcome any further comments and discussions!\\n\\nBest Regards\"}", "{\"comment\": \"Thank you very much for your positive feedback! We greatly appreciate your insightful question regarding the distribution of the generated data, which led us to discover new insights on applying our guided-diffusion method for dataset distillation. Below, we address your questions in sequence.\\n\\n---\\n**Q1(@Weakness1): The performance sensitivity to hyper-parameters should be examined, including how to conduct the hyper-parameter search.**\\n\\n**A1:** Thank you for your valuable suggestion.
**We have already analyzed the impact of the influence factor $k$ and deviation factor $\\\\gamma$ in Appendix A.5**, titled \\\"Parameter Analysis\\\", and in **Figure 5** of the original paper. The optimal values for these factors were determined through a grid search. As shown in Figure 5, our DiT-IGD method is relatively sensitive to changes in both factors, which supports our motivation for adding a diversity constraint to the influence guidance. In contrast, since the baseline Minimax has already strengthened diversity through fine-tuning, our Minimax-IGD primarily relies on adjusting the influence factor $k$.\n\nAs for the guided range parameters, we drew on insights from previous guided diffusion work (line 266) to determine them. This led us to search for the optimal range within steps $[25,50]$. For the length of the guided range, we tested values of 10, 15, and 20 steps, adjusting with a stride of 5 within $[25,50]$. Setting the length to 15 produced the best and most stable performance among various groups of $k$ and $\\\\gamma$. Ultimately, we discovered that a guided range of $[30,45]$ achieved the best results. We provide a comparison for DiT-IGD on ImageWoof with IPC=100 as a reference.\n\n| Range | [25, 40] | [30, 45] | [35, 50] |\n|:--------------:|:--------:|:--------:|:--------:|\n| Acc (%) | 69.9 | **70.6** | 67.2 |\n\n---\n**Q2(@Weakness2): Whether the new method causes the collapse of the distribution of the generated data, though with the deviation guidance.**\n\n**A2:** In Section 4.5 and **Figure 3**, we provided t-SNE visualizations comparing the distributions of data generated by two baseline methods (DiT and Minimax) and our IGD-based methods with IPC=100.
The figure shows that integrating **IGD enhances diversity and alignment** with the original dataset, supported by lower Wasserstein distances to the original dataset.\n\nOur further experiments reveal that \\\"**focusing solely on diversity or alignment with the original dataset is insufficient for optimal effectiveness in DD scenarios**\\\". We compare the **FID scores** and **coverage** of surrogate datasets (IPC=100) generated for ImageWoof by different methods:\n\n| Metric | DiT | DiT-IGD | Minimax | Minimax-IGD | Random |\n|:---------------------------:|:-------:|:-------:|:--------:|:-----------:|:-------:|\n| **FID** | 81.1 | 75.9 | 80.1 | 76.4 | **54.1**|\n| **Coverage (%)** | 65.4 | 68.1 | 66.5 | 67.2 | **72.3**|\n| **Accuracy (%)** | 62.3 | 70.6 | 67.4 | **72.1** | 63.6 |\n\nCoverage was assessed based on whether each original data point had a nearest neighbor in the surrogate dataset within a given threshold (e.g., 300 in the Inception V3 latent space). For fairness, we excluded data selected by the Random method from the original dataset during coverage calculation. \n\nFrom the results, although **the randomly selected dataset has the lowest FID and highest coverage, its performance was the worst**. Similarly, while Minimax-IGD had worse FID and coverage than DiT-IGD, it performed better. These findings suggest that our diversity-constraint influence-guided objective is a more effective measure for DD than relying solely on distribution alignment.\", \"title\": \"Response to Reviewer Qttw (1/2)\"}", "{\"comment\": \"Thank you for your insightful questions and constructive suggestions. Specifically, we followed your suggestion to leverage the DPM solver in our method, which **reduced sample generation time by 50%** with **negligible performance degradation**. We answer your questions below.\\n\\n---\\n**Q1 (@Weakness1): The compared methods, such as the latest RDED, are missing in Table 1.**\\n\\n**A1:** Thank you for pointing this out.
We compare our method with RDED and a state-of-the-art model-inversion-based method, CDA [1], on ImageNette under the two evaluation protocols below: \n\n**Hard-label** evaluation protocol (our default):\n| Test (IPC=50) | CDA | RDED | DiT-IGD | Minimax-IGD |\n|:-----------:|:----:|:----:|:-------:|:-----------:|\n| ConvNet-6 | 37.5 | 65.2 | 80.9 | **82.3** |\n| ResNetAP-10 | 37.9 | 75.2 | 81.0 | **82.3** |\n| ResNet-18 | 38.5 | 75.5 | 81.0 | **82.0** |\n\n**Soft-label** evaluation protocol (RDED's & CDA's default):\n| Test (IPC=50) | CDA | RDED | DiT-IGD | Minimax-IGD |\n|:-----------:|:----:|:----:|:-------:|:-----------:|\n| ConvNet-6 | 78.6 | 81.1 | **87.8** | 87.0 |\n| ResNetAP-10 | 80.8 | 83.2 | 86.6 | **87.0** |\n| ResNet-18 | 81.8 | 85.0 | 87.6 | **87.8** |\n\nThe results indicate that **our method achieves superior performance under both evaluation protocols**, despite being primarily designed for the hard-label protocol by default.\n\n[1] Zeyuan Yin, Zhiqiang Shen. Dataset Distillation via Curriculum Data Synthesis in Large Data Era. Transactions on Machine Learning Research, 2024.\n\n---\n**Q2(@Weakness3): How robust is the influence guidance calculation across different architectures?**\n\n**A2:** **We have already evaluated the impact of using different-size models as surrogates for influence guidance calculation in Section 4.3**, titled \\\"Cross-Architecture Robustness of Influence Guidance,\\\" **and in Table 4**. The experimental results show that the effectiveness of influence guidance is insensitive to the choice of surrogate models.\n\nYour suggestion to use a Swin Transformer, i.e., a non-CNN architecture, as a surrogate is valuable for a more comprehensive robustness evaluation.
As a reference, we provide below **the cross-architecture performance using Swin Transformer checkpoints to compute influence guidance** for generating an IPC=50 synthetic dataset over ImageNette:\\n| Test Model | DiT-IGD | Minimax-IGD |\\n|---------------|---------|-------------|\\n| ConvNet-6 | 78.3 | 79.2 |\\n| ResNetAP-10 | 80.6 | 80.8 |\\n| ResNet-18 | 79.6 | 80.2 |\\n\\nFrom the results above and Table 4, we observe: 1) **the influence guidance effectiveness is generally insensitive to the surrogate model used**; and 2) **using a more complex model as the surrogate tends to slightly underperform compared to simpler models**. Notably, the second observation aligns with findings from gradient-matching-based or training-trajectory-matching-based DD methods, which also utilize gradient information. We hypothesize that complex, high-performance models as surrogates might inject less generalizable \\\"short-cut\\\" features into the synthetic data, leading to reduced performance.\", \"title\": \"Response to Reviewer brCT (1/2)\"}", "{\"summary\": \"This paper works on dataset distillation by generating the distilled dataset using diffusion models guided by an influence function. In the implementation, two guidance terms are used. One is to increase the similarity between the gradient of the generated sample and the average gradient of the original training samples. The other is to decrease the similarity between generated samples. Experiments are conducted on ImageNette and ImageWoof.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of using guided diffusion to generate samples for distillation is an interesting application of diffusion models.\\n\\n2. The proposed two guidance terms, i.e., increasing the gradient similarity and sample diversity, are well-motivated, simple, and intuitive. \\n\\n3. The paper is well written and presented in general.\\n\\n4. 
In the experiments, the proposed method achieves better performance and shows effectiveness. Comprehensive ablation studies and analyses are provided.\", \"weaknesses\": [\"1. There are a lot of writing issues in the math part:\", \"It is unclear how the derivation is transferred from stepwise (Eq4) to epochwise (Eq5).\", \"C is defined twice, on L096 and L110.\", \"In Sec 2.2, some z's are bold and some are not.\", \"In L132, D is not clearly defined.\", \"In Eq5, theta_e and theta_E are not clearly defined.\", \"2. The proposed method seems to have a high computational cost. Both computing and storage costs are high for the gradient calculation in L294. The similarity calculation with respect to all generated samples is also high in Eq8. The computing and storage costs should be clearly provided and analysed in all the experiment sections.\", \"3. I doubt the statement that this method is training-free. I agree that it is training-free as commonly understood in the diffusion community. But there is still a lot of training effort here. It is only training-free given all the checkpoints, stored gradients, and pre-trained diffusion models. I would suggest revising this statement.\"], \"questions\": \"Please refer to W1 and W2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer rhZM,\\n\\nThank you very much for your proactive attitude and continued engagement with our submission! We deeply appreciate your constructive suggestions, which have significantly helped improve our work.\\n\\nIn response to your previous comment titled \\\"Last Question,\\\" we have added **a benchmark evaluating our method and other high-resolution-oriented methods on smaller datasets (CIFAR-10 & CIFAR-100)**. We have **included this valuable benchmark in Appendix A5** of the revised paper. 
Together with our method's strong performance on ImageNet datasets, this suggests that our approach provides a unified solution effective across both low-resolution and high-resolution datasets.\\n\\nWith the discussion period extended, we look forward to any further valuable feedback you may have on the additional evaluation. Any other suggestions or thoughts on future directions for this field from you are also invaluable to us!\\n\\nBest Regards.\"}", "{\"comment\": \"Dear Reviewer rhZM,\\n\\nWe are truly grateful for your kind recognition of our submission and the additional content! Your constructive feedback and suggestions have significantly improved our work!\\n\\nWe deeply appreciate your thorough review and proactive attitude to enhancing the paper, both of which are invaluable to the research community.\\n\\nWishing you all the best!\\n\\nBest Regards.\"}", "{\"title\": \"Supplement Benchmark Testing High-Resolution-Oriented Methods on Small Datasets\", \"comment\": \"Dear Reviewer rhZM,\\n\\nAs highlighted in our previous responses, **adapting most low-resolution-oriented DD methods to distil high-resolution datasets presents scalability issues**. We found it intractable to report their performance on ImageNet datasets evaluated in our submission. For reference, we have provided benchmarks for other state-of-the-art baselines (including two low-resolution-oriented methods DM and IDC) in Tables 1 and 2, which have been largely well-received by the other reviewers.\\n\\nWe agree with your valuable suggestion to **test high-resolution-oriented methods on small datasets** to better assess the generalizability of the methods. 
As per your recommendation, we have established a comprehensive benchmark comparing the performance of ConvNet using state-of-the-art high-resolution-oriented methods, including SRe$^2$L [1] and RDED, on CIFAR-10 and CIFAR-100 at various IPCs.\\n\\n**CIFAR-10:**\\n| Method | DM | DATM | SRe$^2$L | RDED | DiT-IGD |\\n|---------|------|-------|-------|-------|---------|\\n| IPC 50 (1%)| 63.1 | 76.1 | 43.2 | 68.4 | 66.8 |\\n| IPC 500 (10%)| 74.3 | 83.5 | 55.3 | 78.1 | 82.6 |\\n| IPC 1000 (20%)| 79.2 | 85.5 | 57.1 | 79.8 | 84.6 |\\n\\n**CIFAR-100:**\\n| Method | DM | DATM | SRe$^2$L | RDED | DiT-IGD |\\n|----------|------|-------|-------|-------|---------|\\n| IPC 10 (2%)| 29.7 | 47.2 | 24.5 | 46.4 | 45.8 |\\n| IPC 50 (10%)| 43.6 | 55.0 | 45.2 | 51.5 | 53.9 |\\n| IPC 100 (20%)| 47.1 | 57.5 | 46.6 | 52.6 | 55.9 |\\n\\n\\nThe results show that our method outperforms SRe$^2$L and RDED in most scenarios. Additionally, it achieves nearly lossless performance similar to DATM (e.g., at a 20% compression ratio). These findings, combined with our method's outstanding performance on ImageNet datasets, suggest that **our approach is a unified solution that performs well across low-resolution and high-resolution datasets**.\\n\\nThank you for your constructive suggestion! We have included these evaluations in Appendix A5 of the current revised paper. We would appreciate your feedback on the benchmark evaluation we have provided.\\n\\nBest Regards.\\n\\n[1] Zeyuan Yin, Eric P. Xing, Zhiqiang Shen. Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective. NeurIPS 2023\"}
Below, we provide additional discussion on these two questions.\\n\\n---\\n**Further discussion on Q3(@Weakness3):**\\n\\nWe would like to respectfully recall that the significant progress of early DD frameworks in distilling small datasets like CIFARs was acknowledged (lines 40-44). However, the limitations identified in lines 45-50 (e.g., cost increase with data dimension or high-frequency features) **restrict their applicability to more practical, high-resolution, large-scale datasets** (e.g., 224\\u00d7224 ImageNets), which is the focus of our research (line 301). These observations motivated us to leverage diffusion models to generate **near-real but training-effective** synthetic data.\\n\\nThank you for mentioning DATM, a representative training trajectory matching DD method that achieves lossless performance on CIFARs. While DATM significantly outperforms our method and RDED (which aims to generate near-real distilled data) at IPC=10 for CIFAR-10, its test performance of 66.8% still has a significant gap compared to using the full dataset (e.g., 84.8%). More importantly, the core contribution of DATM is to emphasize the use of late trajectory matching to **generate hard samples with fewer distilled features (closer to real images)** for **lossless distillation** when IPC is high.\\n\\nDue to limited time during the rebuttal phase, we continue to use CIFAR-10 as a reference here. The following results demonstrate that **our method achieves approximately lossless performance with a 20% compression ratio (IPC=1000), comparable to DATM**:\\n\\n| Test | ConvNet | ResNet-18 | VGG-11 |\\n|:-------:|:-------:|:---------:|:-------:|\\n| DATM | 85.5 | 87.2 | 84.6 |\\n| DiT-IGD | 84.6 | 85.8 | 84.0 |\\n\\nImplementation details will be released later in our official code.\\n\\n\\n---\\n**Further discussion on Q4 (@Question1):**\\n\\nWe apologize for not fully understanding your concern earlier. 
The term \\\"training-free\\\" was meant to emphasize the practicality of introducing informative guidance with a well-trained diffusion model in DD tasks, rather than enhancing distribution alignment through retraining, as done in Minimax.\\n\\nInspired by the remarkable ability of denoising diffusion models to learn a parameterized distribution that approximates the authentic distribution (line 144), a natural extension is to define **a conditional distribution aligned with specific task requirements**. A line of work known as \\\"training-free guided-diffusion generation\\\" based on energy-based models (EBM) (line 138) has verified the effectiveness of this strategy for guiding diffusion models to meet requirements that are hard to be described by text (e.g., segmentation-guided generation). This motivated us to adopt guided-diffusion frameworks to steer the diffusion model in generating data under training-effective conditions. Furthermore, the improvement achieved by our influence-guided generation over the Minimax method also verifies that **our method can effectively complement DD frameworks like Minimax which require retraining the diffusion model**.\\n\\nWe acknowledge that the term \\\"training-free\\\" may have caused ambiguity. To prevent any misunderstanding, we have removed the term in line 082 and revised \\\"training-free guidance\\\" in line 137 to \\\"guided-diffusion generation\\\" in the updated version of the paper.\\n\\nWe hope this response addresses your concern and further clarifies our motivation. If any questions remain, we would appreciate any specific suggestions or further questions you may have to help improve our clarity.\"}", "{\"title\": \"Further Discussion\", \"comment\": \"I appreciate the authors' response. While some of my comments were addressed, others, in my view, require further discussion.\\n\\n1. Regarding Q3, the authors provided results only on the CIFAR-10 dataset. 
The results indicate that with IPC=10, the performance of DATM is 66.8, whereas IGD achieves only 35.8. This demonstrates a significant performance gap, raising concerns about the efficiency of IGD. \\n\\n2. My concern persists regarding why the method is considered **train-free** and how it aligns with the target domain. Additionally, the relationship between performance improvement and being train-free remains unclear. I believe the authors may not have fully understood my main concern. While I acknowledge that ablation experiments in every paper aim to demonstrate the validity of the approach in terms of performance, this is not the aspect I am focusing on. \\n\\nFurther clarification on these points would be appreciated.\"}", "{\"comment\": \"Dear Reviewer Qttw,\\n\\nWe are truly grateful for your constructive feedback and recognition of our work! Below is a summary of our responses to your questions:\\n\\n- A kind reminder of the hyper-parameter analysis in Appendix A.5 is given, and a detailed search process for the guided range is introduced.\\n- Additional experiments regarding data diversity and distribution coverage are discussed.\\n- Comparisons with recent approaches using pre-trained diffusion models are included, demonstrating the continued superior performance of our method.\\n\\nIf there are any additional questions or thoughts, we welcome further discussion!\\n\\nBest Regards\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes Influence-Guided Diffusion (IGD) for dataset distillation. IGD solves the problem of poor performance and high resource costs of existing methods at high resolution. IGD proposes a training-free sampling framework which can be used in pretrained diffusion models to generate training-effective data. 
Extensive experiments show IGD achieves state-of-the-art performance in distilling ImageNet datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. IGD is a training-free framework that can be easily used with any pretrained diffusion model.\\n2. The problem this paper aims to solve is clear, and the proposed methods address data influence and the diversity constraint theoretically.\\n3. The performance improvement of IGD applied to DiT and the Minimax-finetuned DiT is obvious.\\n4. The ablation study is adequate, including all proposed methods and hyperparameters.\", \"weaknesses\": \"1. In Table 1, compared methods are missing, such as the latest method RDED [1] mentioned in Section 4.1.\\n2. Although the proposed method IGD is training-free for diffusion models, it requires training a model to collect the surrogate checkpoints used in Eq. 7. The time consumption should be listed, as the paper emphasizes efficiency.\\n3. The model used in Eq. 7 is ConvNet-6. If we change the model to a bigger one, such as a Swin Transformer, will the performance be better? Or is this model choice relatively insensitive?\\n4. Can IGD be used with other efficient diffusion sampling strategies like DPM solvers [2]?\\n5. The generation time should be compared between IGD and other methods like the current SOTA RDED [1].\", \"reference\": \"[1]. Sun P, Shi B, Yu D, et al. On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 9390-9399.\\n\\n[2]. Lu C, Zhou Y, Bao F, et al. Dpm-solver: a fast ode solver for diffusion probabilistic model sampling in around 10 steps[J]. 
Advances in Neural Information Processing Systems, 2022, 35: 5775-5787.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Qttw,\\n\\nThank you very much for your positive feedback on our responses and additional content! Reported comparisons with recent approaches using pre-trained diffusion models will be included in the revised version of the paper.\\n\\nWe greatly appreciate your positive support of our submission!\\n\\nBest regards.\"}", "{\"comment\": \"Dear Reviewer hMXA,\\n\\nThank you for your positive assessment of our work. We sincerely appreciate your thoughtful comments and constructive feedback on our submission! Below, we briefly summarize our key responses:\\n\\n- The derivation from stepwise to epochwise in Eq. (4) to Eq. (5) is clarified.\\n- Several notational issues in the math part are resolved.\\n- The computational and storage costs with a detailed analysis are reported.\\n- Ambiguous statements regarding the method being \\\"training-free\\\" are revised.\\n\\nCould you please review our responses and let us know if you have any further questions? Your feedback is invaluable to us!\\n\\nBest Regards\"}", "{\"title\": \"Last question\", \"comment\": \"Most of my questions have been addressed by the authors. However, one remaining point raises concerns about the development of the DD field. Specifically, there appear to be two dominant categories of methods: (1) DATM, which demonstrates strong performance on small datasets (commonly IPC = 10, 50), and (2) high-resolution-oriented methods, which have gained traction more recently. 
While some of these high-resolution methods essentially combine a single high-resolution image with multiple lower-resolution images, these two approaches seem fundamentally contradictory. This leads to my main concern: how do methods like DATM perform on high-definition (HD) datasets? Conversely, methods tailored for high-resolution data often struggle with lower-resolution datasets, and most of these approaches lack reported results on small-scale datasets. Is there a unified method capable of achieving strong performance across both low-resolution and high-resolution datasets?\\n\\nI would encourage the authors to construct a more comprehensive benchmark that incorporates both low-resolution and high-resolution datasets, as this would significantly enhance the generalizability and robustness of the evaluation. If such a benchmark is developed, I will be inclined to increase my overall score.\"}", "{\"title\": \"Good benchmark\", \"comment\": \"Thanks for the authors' response, and I have raised my score.\"}", "{\"title\": \"Summary of Major Revisions in the Updated Submission based on Reviewer Feedback\", \"comment\": \"We deeply appreciate the reviewers for their thoughtful and constructive comments, which have greatly contributed to improving our work. We have thoroughly revised the paper to address the reviewers' concerns, as outlined in our responses to their questions. Below is a summary of the major revisions in the current submission.\\n\\n**Summary of Major Revisions:**\\n1. We added **an intuitive illustration**, contrasting it with the vanilla diffusion sampling method in **Figure 5**. (A1@Reviewer rhZM)\\n2. In lines 59-80, we included a brief explanation of the trajectory influence function and reorganized the text to emphasize its role in achieving training-effectiveness in diffusion generation. (A1@Reviewer rhZM)\\n3. 
We included **additional comparisons on CIFAR-10 and CIFAR-100 in Appendix A5** to demonstrate the versatility of our method in distilling smaller datasets. (Last Question@Reviewer rhZM)\\n4. We added **comparisons with recent diffusion-based approaches in Appendix A6**, showing the continued superior performance of our method. (A3@Reviewer Qttw)\\n5. We evaluated our method **using the DPM solver for diffusion generation in Appendix A7**, demonstrating a 50% reduction in sample generation time with negligible performance degradation. (A3@Reviewer brCT)\\n6. We expanded **the analysis of distribution diversity and coverage in Appendix A8 using FID and coverage metrics**. (A2@Reviewer Xr47)\\n7. We addressed several notational issues and removed the term \\\"training-free\\\" in line 082 as well as revised \\\"training-free guidance\\\" in line 137 to \\\"guided-diffusion generation\\\" to avoid ambiguity. (A1&A3@Reviewer hMXA)\\n\\nWe sincerely appreciate the reviewers for their insightful feedback, which has greatly contributed to refining our work. The revisions, including additional analyses, new baselines, and expanded experiments, have significantly improved the clarity and comprehensiveness of the paper. We welcome any further discussions or feedback and are happy to provide additional clarifications or materials as needed.\"}", "{\"metareview\": \"The paper introduces a new method called Influence-Guided Diffusion for dataset distillation. IGD leverages a training-free sampling framework with pretrained diffusion models to generate training-effective data, incorporating gradient matching and diversity constraints for improved performance. The method demonstrates state-of-the-art results on ImageNet distillation.\\n\\nThe authors have made substantial revisions to enhance the clarity and depth of the paper. 
Key updates include the addition of an intuitive illustration comparing their method to the standard diffusion sampling approach, as well as a reorganization of the text to highlight the role of the trajectory influence function in improving training effectiveness. Additional experimental results on CIFAR-10 and CIFAR-100 were incorporated to demonstrate the method\\u2019s versatility, alongside comparisons with recent diffusion-based approaches. These revisions enhance the comprehensiveness and clarity of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers were generally satisfied with the authors' responses. Reviewers highlighted challenges like scalability, task-specific dataset generation, and performance gaps, which could be common issues in dataset distillation, not unique to this paper. Despite these inherent challenges, the paper makes valuable advancements and meaningful contributions to the field.\"}", "{\"summary\": \"This paper proposed a guided diffusion generation method for the dataset distillation problem. The trajectory influence and deviation guidance are introduced into the vanilla diffusion process for generating synthetic samples as efficient training data. The results on ImageNet and its subsets demonstrate improvements over the baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Strengths:\\n\\n1.\\tThe paper is easy to follow. The motivation of importance-guided synthesis is clear, and the method is well presented. \\n\\n2.\\tThe new idea is reasonable and neat, bridging diffusion-based generative models and importance-based sample selection.\\n\\n3.\\tThe results are promising. The performance improvements on challenging datasets are remarkable. 
Extensive ablation studies, cross-architecture validation, and visualization are implemented.\", \"weaknesses\": \"Weaknesses:\\n\\n1.\\tThere are several hyper-parameters in the algorithm, such as the influence factor, deviation factor, scale, guided range, etc. The sensitivity of performance to these hyper-parameters should be studied. How do the authors search the hyper-parameters?\\n\\n2.\\tI have a concern about whether the new method causes a collapse of the distribution of the generated data, even with the deviation guidance. \\n\\n3.\\tSince DiT and the VAE pre-trained on huge datasets are utilized for generating training samples on small datasets, this naturally brings advantages over traditional dataset distillation methods. Hence, more recent methods that also use pre-trained diffusion models should be compared against. \\n\\n4.\\tThere are also some similar works that improve the efficiency of diffusion-model-generated training samples, such as [1], which should also be discussed in the paper.\\n\\n[1] Real-Fake: Effective Training Data Synthesis Through Distribution Matching, ICLR 2024.\", \"questions\": \"Please address the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q3(@Question3): Equation (7) uses cosine similarity instead of dot product; is it purely due to experimental results or based on some other hypothesis?**\\n\\n**A3:** As stated in lines 205-207, we replace the dot product with cosine similarity primarily to stabilize the magnitude of the influence guidance signal. Together with the dynamic scale factor $\\\\rho_t$ defined in Eq. 8, **this replacement allows our method to achieve good performance with minimal tuning of the influence factor $k$** when using different surrogate models or diffusion samplers (as reported in Table 4 and our response A3@Reviewer-brCT).\\n\\nMoreover, since Eq. 
7 involves checkpoint parameters retained from different training stages, we empirically observed that using the dot product for gradient similarity causes loss from earlier checkpoints to dominate the influence guidance. Cosine similarity alleviates this issue effectively.\\n\\n---\\n**Q4(@Question4): How will the work be performed on different tasks apart from classification?**\\n\\n**A4:** As defined in Section 2.1, our work currently focuses on dataset distillation for image classification tasks. Given the limited time during the rebuttal stage, we respectfully provide evaluations centred on our primary focus and comparisons with related work of similar scope. We recognize the potential of this question and plan to explore the applicability of our approach to other tasks in future work.\\n\\n---\\n**Q5(@Weakness4): Is it possible to have a one-time distillation and use that distilled datasets to validate across models?**\\n\\n**A5:** Thank you for raising this important question, which relates to a fundamental criterion in DD research. The community often refers to this as **the cross-architecture or unseen-architecture generalization capability of distilled datasets**. This is one of our key motivations for introducing the guided-diffusion paradigm, allowing us to formulate the DD problem as learning a training-effective conditional distribution of the authentic distribution, thereby mitigating the distribution shift issues faced by earlier DD methods (line 108).\\n\\nAs noted in the implementation details (line 317), our method **uses ConvNet-6 as the surrogate model for calculating influence guidance during one-time distillation by default**. The results reported in Tables 1-3 are based on this default setting. 
These results demonstrate the strong unseen-architecture generalization of our one-shot synthetic dataset across both CNN and Transformer models.\", \"title\": \"Response to Reviewer Xr47 (2/2)\"}", "{\"comment\": \"Dear Reviewer Xr47,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our submission, as well as your positive feedback on our work! Below is a summary of our responses and updates:\\n\\n- A discussion regarding the least IPC needed for the full dataset performance is provided.\\n- Experiments measuring diversity with FID and distribution coverage have been included.\\n- The rationale behind using cosine similarity in Equation (7) is further clarified.\\n- The feasibility of using a one-time distilled dataset across unseen models is elucidated.\\n\\nWe would greatly appreciate it if you could check our responses and let us know if there are any additional questions. Your feedback is invaluable to us!\\n\\nBest Regards\"}" ] }
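As an aside on the cosine-versus-dot-product point in A3 above: the magnitude-dominance effect is easy to check numerically. The vectors below are invented for illustration (this is not Eq. 7 itself); an early checkpoint with a large gradient norm swamps the dot-product score, while cosine similarity weighs all checkpoints comparably.

```python
import numpy as np

def grad_similarity(sample_grad, checkpoint_grads, use_cosine=True):
    """Sum the similarity between a sample's gradient and the average
    training gradient stored at each surrogate checkpoint."""
    total = 0.0
    for g in checkpoint_grads:
        s = np.dot(sample_grad, g)
        if use_cosine:
            # Normalizing away magnitudes keeps any one checkpoint
            # (typically an early, large-gradient one) from dominating.
            s /= np.linalg.norm(sample_grad) * np.linalg.norm(g)
        total += s
    return total

sample = np.array([1.0, 0.0])
early = np.array([100.0, 0.0])  # early checkpoint: large gradient norm
late = np.array([0.0, 1.0])     # late checkpoint: small gradient norm
```

With the dot product, the early checkpoint contributes 100 and the late one 0, so the score is effectively the early-stage loss alone; with cosine similarity both contribute on the same scale, matching the stabilization argument in A3.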
0wfmHoKQX6
Replicate and Quantize: A Plug-and-Play Strategy for Load Balancing in Sparse Mixture-of-Experts LLMs
[ "Zijie Liu", "Jie Peng", "Zirui Liu", "Kaixiong Zhou", "Tianlong Chen", "Zhaozhuo Xu" ]
While the rapid increase in the number of model parameters brings significant benefits to the development of large language models (LLMs), it also raises computational costs. To tackle this difficulty, the sparse mixture-of-experts (SMoE) model was introduced, addressing LLM scaling by activating only a subset of experts per input. How to leverage the knowledge of multiple experts therefore becomes an important topic. In the most extreme scenario, employing a balanced expert allocation system yields a time saving of $n$ times compared to utilizing only a single expert. Thus, in this paper we (1) systematically analyze the performance and functionality of each expert; (2) introduce a metric, based on this observation, to fill the gap in evaluating load balance for the sparse mixture-of-experts (SMoE) model; and (3) propose a dynamic plug-and-play strategy that is both training-free and near-lossless, effectively resolving the load balancing problem, in contrast to previous works that focused on training strategies.
[ "mixture-of-experts;load balance" ]
Reject
https://openreview.net/pdf?id=0wfmHoKQX6
https://openreview.net/forum?id=0wfmHoKQX6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wINebkHDs2", "w3wMLjAknZ", "tml4q2JZrv", "ra3YjLpJvZ", "r1SFVbXx3e", "mXkCXIefwG", "lpB48EdcGw", "fA4QPUIgGC", "cobcsfPHfv", "bNivfsAbmz", "aQpQIHzb9X", "XWx94ZsWnO", "VnF9TqwRmp", "VJYErBwO4X", "Rt2G8MdWPv", "RXvAPFVoX3", "QgVuIeD0Zp", "OxCkv8VOBs", "KJT7ocNEw7", "IHOSZgMZvj", "G00tzGp78B", "ER9LSMdpQR", "2QzpEpGblv" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732249189634, 1729842768200, 1732249819375, 1732919027563, 1730545845151, 1732674693652, 1732332950201, 1732249471457, 1733294957724, 1733859209849, 1730633951102, 1733295039470, 1730515684871, 1732516623456, 1732909600046, 1733212615094, 1737523516720, 1732672633565, 1732908931890, 1732946408497, 1733295023223, 1732249732723, 1732624088427 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_5vEp" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_CGsx" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_Dy55" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Area_Chair_j2x2" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_UpDR" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_Dy55" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_5vEp" ], [ 
"ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_UpDR" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_Dy55" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Authors" ], [ "ICLR.cc/2025/Conference/Submission2654/Reviewer_CGsx" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewers for the comments and suggestions to improve our paper. Please see the following clarification.\\n\\n## [W1&Q1: Implementation Details - Sure!]\\n## Hyperparameters\\n\\n| Parameter | Value |\\n|-----------------------|------------|\\n| **Learning Rate** | 5e-5 |\\n| **Train Epochs** | 10 |\\n| **Training Batch Size** | 8 |\\n| **Eval Batch Size** | 16 |\\n| **Weight Decay** | 0.01 |\\n| **Optimizer** | AdamW |\", \"quantization_techniques\": \"For Switch Transformer they load in float32 so we use the half quant with the float16\\nFor Llama and Deepseek which load in float16, so we used 8 bit quantization refer (https://arxiv.org/abs/2208.07339)\\n\\n## [W2&Q2: Additional ablation studies to demonstrate the individual contributions of the replication and quantization - Sure!]\\nWe provide an ablation study of replication and quantization as follows.\\n\\n\\n| Model | Hellaswag | MMLU | PIQA | Truthful QA | Winogrande |\\n|------------------------------------|----------------------|--------------------|--------------------|--------------------|-------------------|\\n| **Switch 8** | | | | | |\\n| Only Quant Less-Important Experts | 0.2795 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5751 \\u00b1 0.0115 | 0.3605 \\u00b1 0.0110 | 0.5138 \\u00b1 0.014 |\\n| Replicate and Quant replicate one | 0.2749 \\u00b1 0.0045 | 0.2498 \\u00b1 0.0037 | 0.5811 \\u00b1 0.0115 | 0.3635 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| 
Quant All Experts | 0.2641 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5490 \\u00b1 0.0116 | 0.3775 \\u00b1 0.0112 | 0.5185 \\u00b1 0.014 |\\n| R&Q | 0.2763 \\u00b1 0.0045 | 0.2522 \\u00b1 0.0037 | 0.5832 \\u00b1 0.0115 | 0.3640 \\u00b1 0.011 | 0.4917 \\u00b1 0.0141 |\\n| **Switch 16** | | | | | |\\n| Only Quant Less-Important Experts | 0.2820 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5577 \\u00b1 0.0116 | 0.3914 \\u00b1 0.0114 | 0.4964 \\u00b1 0.0141 |\\n| Replicate and Quant replicate one | 0.2864 \\u00b1 0.0045 | 0.2490 \\u00b1 0.0036 | 0.5490 \\u00b1 0.0116 | 0.3669 \\u00b1 0.0112 | 0.483 \\u00b1 0.014 |\\n| Quant All Experts | 0.2595 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5582 \\u00b1 0.0116 | 0.3721 \\u00b1 0.0110 | 0.5091 \\u00b1 0.0141 |\\n| R&Q | 0.2872 \\u00b1 0.0045 | 0.2495 \\u00b1 0.0036 | 0.5501 \\u00b1 0.0116 | 0.3694 \\u00b1 0.0112 | 0.4854 \\u00b1 0.014 |\\n\\n\\n\\nThe replication is for load balancing, and it maintains the model performance.\\nQuantizing the less important experts allows us to fit the total memory budget while maintaining the overall model performance.\\n\\n## [Q3: Studies on different levels of model sparsity - We include SMoEs with various sparsity.]\\n\\nWe have conducted experiments in the paper with the following models of different sparsity.\\n\\n**Switch Transformer 8 experts (Sparsity 1/8):**\\nEach layer has 8 experts in total; only 1 expert is chosen for each token.\\n\\n**Switch Transformer 16 experts (Sparsity 1/16):**\\nEach layer has 16 experts in total; only 1 expert is chosen for each token.\\n\\n**Llama MoE 8 experts (Sparsity 2/8):**\\nEach layer has 8 experts in total; 2 experts are chosen for each token.\\n\\n**Deepseek MoE (Sparsity 1+3/63):**\", \"total_has_64_experts_in_each_layer\": \"63 isolated experts and 1 shared expert.\\u00a0 In each routing, the system will select three experts from the 63 isolated experts, always using the shared expert. 
Therefore, our strategy is exclusively applied to the 63 isolated experts.\"}", "{\"summary\": \"This work provides a simple yet effective strategy for load balancing in MoE-based LLMs. Specifically, the authors first find the most heavy expert and the less important experts, and (a) replicate and quantize most heavy experts, (b) quantize less important experts. In experiments, the authors have deployed the proposed method on 4 MoE models, achieving comparable results with more balanced load among experts. In conclusion, the proposed model is sound and easy to deploy, while more in-depth evaluations and analyses should be conducted.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1)\\tThe proposed idea is simple and sound.\\n2)\\tOverall, this work is well-organized and easy to follow.\\n3)\\tThe authors have tested on various MoE base models.\", \"weaknesses\": \"1)\\tThe central idea of this work is the replicate and quantize strategy. Firstly, an appropriate ablation study should be conducted to verify the effectiveness of both strategies on the most heavy experts and less important experts. Secondly, whether the selections of most heavy experts and less important experts are essential? If we further quantize other experts while maintaining the activated parameters, what will be the results?\\n2)\\tIn Section 3.4, the authors merely give the importance score/load results on one task PIQA. Does this phenomenon also exist in other tasks and other MoE blocks? 
The authors are suggested to give more quantitative indicators (e.g., correlation coefficient on different settings) to support their claims.\\n3)\\tIn Related work, an in-depth analyses on other load balance methods (and why they do not work well in the experiments) should be given.\\n4)\\tIn experiments, although the authors claimed that \\u201cbefore that, we have tried to use the different tuning strategies to adjust the router mechanism to solve the load imbalance issues, Clearly, it does not work as we expected, and the part of the strategies emplifies the imbalanced distribution among the different experts\\u201d, which strategies are used and the corresponding results should be given. Currently, only using the raw setting as baseline is not sufficient.\\n5)\\tThe experimental details are insufficient. For instance, the details of adopting the proposed method on DeepSeekMoE should be given. DeepSeekMoE adopts shared and specialized experts, and whether the shared experts are also used for replicate? Moreover, it also multiplies the number of experts, which shares the similar idea of the \\u201creplicate\\u201d heavy expert part in this work.\\n6)\\tThe actual inference speed and cost should be given. Do all comparisons share the same activated parameters in inference?\\n7)\\tTypos, e.g., Page2, missing reference in the first paragraph.\\n8)\\tThe scalability of the proposed method is encouraged to be evaluated or discussed.\\n\\nAfter rebuttal, I raise the voting to 5.\", \"questions\": \"Refer to the questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for the comments and suggestions to improve our paper. 
Please see the following clarification.\\n\\n## [W1: ablation study - Sure]\\nWe provide an ablation study of replication and quantization as follows.\\n\\n\\n| Method | Hellaswag | MMLU | PIQA | Truthful QA | Winogrande |\\n|-----------------------------|---------------------|---------------------|--------------------|--------------------|--------------------|\\n| **Switch 8 (only quant)** | 0.2795 | 0.2295 | 0.5751 | 0.3605 | 0.5138 |\\n| **Replicate and quant replicate one** | 0.2749 \\u00b1 0.0045 | 0.2498 \\u00b1 0.0037 | 0.5811 \\u00b1 0.0115 | 0.3635 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| **Quant ALL** | 0.2641 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5490 \\u00b1 0.0116 | 0.3775 \\u00b1 0.0112 | 0.5185 \\u00b1 0.0140 |\\n| **Ours** | 0.2763 \\u00b1 0.0045 | 0.2522 \\u00b1 0.0037 | 0.5832 \\u00b1 0.0115 | 0.3640 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| **Switch 16 (only quant)** | 0.2820 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5577 \\u00b1 0.0116 | 0.3914 \\u00b1 0.0114 | 0.4964 \\u00b1 0.0141 |\\n| **Replicate and quant replicate one** | 0.2864 \\u00b1 0.0045 | 0.2490 \\u00b1 0.0036 | 0.5490 \\u00b1 0.0116 | 0.3669 \\u00b1 0.0112 | 0.4830 \\u00b1 0.0140 |\\n| **Quant ALL** | 0.2595 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5582 \\u00b1 0.0116 | 0.3721 \\u00b1 0.0110 | 0.5091 \\u00b1 0.0141 |\\n| **Ours** | 0.2872 \\u00b1 0.0045 | 0.2495 \\u00b1 0.0036 | 0.5501 \\u00b1 0.0116 | 0.3694 \\u00b1 0.0112 | 0.4854 \\u00b1 0.0140 |\\n\\n## [W2: Does this phenomenon also exist in other tasks and other MoE blocks? - Yeah]\\nIn Figure 2, our goal is to demonstrate the existence of two distinct dimensions among the experts, which represent our observation framework rather than a phenomenon inherent to the experts themselves. These dimensions provide a structured way to quantify and analyze the functionality of the experts. 
While PIQA serves as a representative example, we observed that this two-dimensional framework can be applied consistently across other tasks and MoE blocks. \\n\\n## [W3: Related Work in load balance - Sure]\\nCurrent load-balancing methods focus on the training stage: existing SMoE models try to mitigate their load imbalance by introducing an imbalance loss or other mechanisms while training the model. Moreover, some recent works focus on GPU utilization rather than the load balance of each expert.\\n\\n## [W4: Tuning strategies - we have shown their results in Table 1]\\n\\n## [W5: The details of adopting the proposed method on DeepSeekMoE should be given - Sure]\", \"the_deepseek_moe_total_has_64_experts_in_each_layer\": \"63 isolated experts and 1 shared expert.\\u00a0 In each routing, the system will select three experts from the 63 isolated experts, always using the shared expert. Therefore, our strategy is exclusively applied to the 63 isolated experts.\\n\\n## [W6: Do all comparisons share the same activated parameters in inference? - Yes, they share the same activated parameters during inference]\\n\\n## [W7: Format - Thank you for the reminder. We have updated the paper and will post a revised version with a summary of revisions.]\\n\\n## [W8: The scalability of the proposed method is encouraged to be evaluated or discussed. - Sure]\\nIn Figure 3, Panels (a) and (b) illustrate the load balance scores across multiple timesteps under two distinct routing strategies. Panel (a) demonstrates the case where the system utilizes cumulative information aggregated from all prior timesteps, providing a holistic approach to decision-making. 
Conversely, Panel (b) focuses on a scenario where only information from the immediately preceding timestep is leveraged, showcasing a more localized decision-making approach.\\n\\nThe purpose of this setup is to distinguish the most important \\\"heavy-hitter\\\" experts from the less important ones. These decisions depend directly on the input data at each timestep. To mimic how data arrives in a real-world streaming scenario, we divided the MMLU dataset into 10 parts and set the timestep count to 10. This approach allows us to simulate the flow of data over time and study how it affects the routing strategy in a manageable and realistic way.\"}", "{\"comment\": \"Hello Reviewer Dy55,\\n\\nWe hope you had a wonderful Thanksgiving! We are checking in on our previous comment to see if you've had a chance to read it, and we'd be grateful if you could take a moment to review it.\\n\\nWe address your concerns in our rebuttal.\\n\\n* The inference time will decrease, and the memory usage will remain unchanged.\\n* We will release our simulation code depending on the final decision.\\n* Paper [7] focuses on the training strategy.\\n* We clarify our metric\\u2014the overlap between the most important experts and heavy-hitter experts does not limit our metric's ability to preserve accuracy, as shown in our Table 3\\u2014and we conducted the experiments you mentioned here.\\n* We demonstrated a more pronounced load imbalance in a larger batch, and our strategy can still effectively address this phenomenon.\\n\\n| Model & Configuration | Hellaswag | MMLU | PIQA | Truthful QA | Winogrande |\\n|----------------------------------|----------------------|---------------------|---------------------|---------------------|---------------------|\\n| **Switch 8** | | | | | |\\n| Quant All Experts | 0.2641 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5490 \\u00b1 0.0116 | 0.3775 \\u00b1 0.0112 | 0.5185 \\u00b1 0.014 |\\n| R&Q | 0.2763 \\u00b1 0.0045 | 0.2522 \\u00b1 0.0037 | 0.5832 \\u00b1 
0.0115 | 0.3640 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| Quant All Model | 0.2927 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5952 \\u00b1 0.0115 | 0.3706 \\u00b1 0.0111 | 0.5091 \\u00b1 0.0141 |\\n| **Switch 16** | | | | | |\\n| Quant All Experts | 0.2595 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5582 \\u00b1 0.0116 | 0.3721 \\u00b1 0.0110 | 0.5091 \\u00b1 0.0141 |\\n| R&Q | 0.2872 \\u00b1 0.0045 | 0.2495 \\u00b1 0.0036 | 0.5501 \\u00b1 0.0116 | 0.3694 \\u00b1 0.0112 | 0.4854 \\u00b1 0.014 |\\n| Quant All Model | 0.2768 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5539 \\u00b1 0.0116 | 0.3726 \\u00b1 0.0112 | 0.4901 \\u00b1 0.014 |\\n\\nBatch Size 32\\n| Task | R & Q | Raw |\\n|----------------|--------------|---------------|\\n| GSM8K | 3.1689 | 3.4523 |\\n| Truthful QA | 2.2348 | 3.2967 |\\n| Winogrande | 1.5515 | 2.7249 |\\n| Hellaswag | 1.7974 | 2.5058 |\\n| MMLU | 2.7664 | 3.9897 |\\n| PIQA | 3.6789 | 3.8401 |\\n\\nIf you have any updates or thoughts, we\\u2019d greatly appreciate your feedback. Please let us know if there\\u2019s anything we can clarify or assist with.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper introduces a plug-and-play approach (R&Q) for addressing load imbalance in Sparse Mixture-of-Experts models to improve computational efficiency without retraining. R&Q literally replicates heavily used experts in a quantized form and quantizes less important experts to maintain memory efficiency. Minimal impact on performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. No retraining is required.\\n\\nS2. Near-original accuracy (at least on classification / MCQ tasks)\\n\\nS3. I like how they distinguished between heavy-hitter experts and important experts, which could easily be confused as the same. They also conducted experiments to show that these concepts are distinct, although there is some correlation between them.\", \"weaknesses\": \"W1. 
Weak presentation of algorithms and figures. Some figures lack any caption or explanation. In Algorithm 1, the choice of variable names is awkward and confusing. For example, l(x), count(expert_chosen), argmax(expert_num), EC, etc., need to be clarified with better names.\\n\\nW2. Weak baseline. The baseline experiments were only conducted within their framework (ours vs. random vs. heavy-hitter). They lack comparisons with other techniques that address load balancing. \\n\\nW3. Weak empirical analysis on computational efficiency gain. While their experiments show that R&Q improves load balancing compared to naive techniques, they don't demonstrate how this improvement directly translates to reduced inference latency. This is critical because the use of quantization could often slow down inference.\\n\\nW4. Weak empirical analysis on more challenging tasks, such as generation tasks (e.g., perplexity, code generation, MT-Bench, etc.).\", \"questions\": \"Q1. Error on Page 6, line 288: Are the X-axis and Y-axis labels inverted?\\n\\nQ2. Should R&Q identify heavy-hitter and important experts for each individual task, or can the identified experts be reused across tasks? The motivation behind this question is that heavy-hitters may vary depending on task characteristics. For example, experts 1-3 might be heavy-hitters for task A, while different experts could be heavy-hitters for task B.\\n\\nQ3. While resolving load imbalance could theoretically improve computational efficiency, how does R&Q empirically achieve this efficiency gain? Could it actually slow down inference latency due to the quantized experts? I\\u2019m asking this because the experiment section lacks an empirical analysis of memory and latency improvements. A strong answer to this question would require empirical results.\\n\\nQ4. 
Would R&Q maintain performance on more challenging tasks, such as generation tasks (e.g., perplexity, code generation, MT-Bench, etc.)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for the comments and suggestions to improve our paper. Please see the following clarification.\\n\\n| Model & Configuration | Hellaswag | MMLU | PIQA | Truthful QA | Winogrande |\\n|----------------------------------|----------------------|---------------------|---------------------|---------------------|---------------------|\\n| **Switch 8** | | | | | |\\n| Quant All Experts | 0.2641 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5490 \\u00b1 0.0116 | 0.3775 \\u00b1 0.0112 | 0.5185 \\u00b1 0.014 |\\n| R&Q | 0.2763 \\u00b1 0.0045 | 0.2522 \\u00b1 0.0037 | 0.5832 \\u00b1 0.0115 | 0.3640 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| **Switch 16** | | | | | |\\n| Quant All Experts | 0.2595 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5582 \\u00b1 0.0116 | 0.3721 \\u00b1 0.0110 | 0.5091 \\u00b1 0.0141 |\\n| R&Q | 0.2872 \\u00b1 0.0045 | 0.2495 \\u00b1 0.0036 | 0.5501 \\u00b1 0.0116 | 0.3694 \\u00b1 0.0112 | 0.4854 \\u00b1 0.014 |\\n## [W5: Inference Performance: Sure]\\nOur method ensures memory usage does not increase even when duplicating new experts. After duplication, we quantize the new experts and further quantize less-important ones to half precision. \\n\\nRegarding inference speed, in each MoE layer, the inference time is determined by the slowest expert, which handles the largest number of tokens. Below, we provide an example to illustrate this behavior.\\n\\n**Premise:**\\n\\n- **Number of Experts:** 4 \\n- **Number of Tokens:** 80 \\n- **Processing Capacity per Expert:** Each expert processes \\\\( \\\\frac{80}{4} = 20 \\\\) samples per unit time (T). 
\\n\\n---\\n\\n**Original Routing Results:**\\n\\n- **Experts' Token Allocation:** [18, 30, 25, 7] \\n- **Inference Time (Slowest Expert):**\", \"expert_1_handles_30_tokens\": \"\\\\[\\n $\\\\frac{30}{20} = 1.5 \\\\, T$\\n \\\\]\\n\\n---\\n\\n**After Rerouting & Quantization (R & Q):**\\n\\n- **Experts' Token Allocation:** [18, [17, 13], 25, 7] \\n- **Inference Time (Slowest Expert):**\", \"expert_2_handles_25_tokens\": \"\\\\[\\n $\\\\frac{25}{20} = 1.25 \\\\, T$\\n \\\\]\\n\\nBy applying our method, the inference time is reduced from 1.5 T to 1.25 T, a saving of 0.25 T.\\n\\n\\n## [W8: Code Release - Sure]\\nThank you for your question. Our current implementation is simulation code. Depending on the final decision for the paper, we plan to release the simulation code along with detailed documentation to support reproducibility.\\n\\n## [W1: Paper [7]]\\nIt focuses on the training strategy and duplicates the expert onto another free GPU to relieve this issue during training.\\n\\n## [W2: batch-level eval - yeah, we have conducted experiments on it]\\n| Task | Batch Size 1 | Batch Size 32 |\\n|----------------|--------------|---------------|\\n| GSM8K | 1.9709 | 3.4523 |\\n| Truthful QA | 1.4956 | 3.2967 |\\n| Winogrande | 1.5261 | 2.7249 |\\n| Hellaswag | 1.4182 | 2.5058 |\\n| MMLU | 1.5405 | 3.9897 |\\n| PIQA | 1.5770 | 3.8401 |\\n\\n## [W4 & Q1: Two Metrics]\\nIn this method, we observe that some experts handle significantly more tokens than others, leading to load imbalance during the inference stage. Instead of randomly redistributing tokens to other experts, which might compromise accuracy, we propose a strategy called \\\"replicating the heavy-hitter expert.\\\" This involves creating a duplicate of the heavy-hitter expert, with tokens randomly assigned to one of the replicas. Ideally, this approach reduces the workload by half. However, it increases memory usage. 
To address this, we quantize both the replicated expert and the less important experts to half-precision, maintaining the overall memory footprint. Our findings show that half-precision quantization effectively preserves accuracy. Randomly quantizing any expert could negatively impact real-world performance or specific tasks, so we prioritize quantizing the less important experts instead.\"}", "{\"comment\": [\"I thank the authors for their detailed rebuttal. While some of my concerns have been addressed, the most critical issues remain unresolved, which are W5 and W8, respectively.\", \"W5: The most important metrics, which are the inference performance of the proposed system (memory consumption, latency, hardware utilization, and so on), are not studied in this work. These metrics are crucial for justifying the significance of reducing load imbalance. Without measurements of these quantities, it is difficult to assess the practical improvements brought by the proposed strategy and its impact on the field. I strongly encourage the authors to outline a concrete plan for evaluating these metrics and, if feasible, provide preliminary results.\", \"W8: The authors have not provided any code or reproducibility statement. The official implementations of both Llama-MoE and DeepSeek-MoE, when deployed with Huggingface Transformers, only support native tensor parallelism. The lm-eval-harness framework also do not provide an expert parallelism implementation. Which open-sourced MoE framework are you building upon? Did you implement the framework from scratch?\"], \"other_questions_include\": [\"W1. I appreciate the authors\\u2019 effort in expanding the review of related works. And I think one work that is closely related is [7]. The Dynamic Shadowing Strategy in Section 4.1 of [7] appears quite similar to the proposed replicate strategy, aside from the quantization component. 
The strategy does not have any training-only component, thus can be applied to inference scenarios. I encourage the authors to clearly differentiate their approach from [7] to establish the novelty of their contribution. The author should also discuss [6] in depth to answer the question in W4 & Q1 (see below).\", \"W2. Hardware-level metrics are directly impacted by batch-level statistics, making dataset-level metrics less relevant in this context. I would expect the load imbalance score to increase when evaluated at the batch level, which would enhance your argument.\", \"W4 & Q1. The data provided in the paper does not convincingly demonstrate the significance of the difference between the two metrics. As stated on line 096, only the most important expert (which happens to be the heavy-hitter) and the least important experts are quantized in the experiments. The overlap between heavy-hitter and most important expert limits the proposed metric's ability to make a substantial difference in preserving accuracy. A more compelling example illustrating where these experts diverge would strengthen the argument. Additionally, the paper should address the broader question: what is the accuracy loss when the model is fully quantized? From [6], it seems like quantizing all the experts only incurs a negligible accuracy loss.\", \"[7] He, Jiaao, et al. \\\"Fastermoe: modeling and optimizing training of large-scale dynamic pre-trained models.\\\" Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. 2022.\"]}", "{\"comment\": \"We thank the reviewers for the comments and suggestions to improve our paper. Please see the following clarification.\\n\\n## [W1: Weak presentation of algorithms and figures. Some figures lack any caption or explanation. - Thanks! We will complement them! 
]\\n\\n### Algorithm 1: Identify Heavy-Hitter Experts\\n**Purpose**: This algorithm identifies the most frequently selected \\\"heavy-hitter\\\" experts for each Mixture-of-Experts (MoE) layer based on input tokens and their routing.\\n\\n#### Inputs:\\n- **Tokens**: `input_tokens` - A list of input tokens.\\n- **Experts**: `num_experts` - The number of experts in each MoE layer.\\n- **Layers**: `num_layers` - The number of MoE layers.\\n- **Token Count**: `num_tokens` - The number of tokens to be processed.\\n\\n#### Outputs:\\n- **Heavy Experts**: `heavy_hitters` - A list where each entry corresponds to the most selected expert in each layer.\\n\\n---\\n\\n#### Algorithm:\\n\\n\\n1. Initialize: heavy_hitters \\u2190 list of size `num_layers`\\n\\n2. For each layer `layer_index` in range `num_layers` do:\\n\\n a. Initialize: expert_selection \\u2190 empty list\\n\\n b. For each token `token` in `input_tokens` do:\\n\\n i. selected_expert \\u2190 route_token_to_expert(token, layer_index)\\n\\n ii. Append selected_expert to expert_selection\\n\\n c. Compute expert_frequencies \\u2190 count_frequency(expert_selection)\\n\\n d. heavy_hitter \\u2190 find_expert_with_max_frequency(expert_frequencies)\\n\\n e. Store heavy_hitter in heavy_hitters[layer_index]\\n\\n3. Return heavy_hitters\\n\\n## [Q1: Error on Page 6, line 288: Are the X-axis and Y-axis labels inverted? - Thanks]\\nChange to \\u201cAs shown in Figure 2, the Y-axis represents the workload of an expert by the number of allocated tokens, while the X-axis displays the inverse importance score of an expert (see Definition 3.3). \\u201c\\n\\n## [Q2: Should R&Q identify heavy-hitter and important experts for each individual task, or can the identified experts be reused across tasks? - Of course]\\nThey should be identified for each task; as you mentioned, each expert may have a different performance. 
Moreover, we have evidence that for each task, we need to utilize only 0.1% of the data to identify the heavy-hitter experts and less important experts. We then apply the R&Q strategy to the remaining data, focusing on those experts who continue to perform well.\\n\\n## [W2: Weak baseline. - The baseline you mentioned is the one for choosing the less-important experts, not the one for load balance; this one should refer to Table 1.]\\nCurrent load-balancing methods focus on the training stage: existing SMoE models try to mitigate their load imbalance by introducing an imbalance loss or other mechanisms while training the model. So in our Table 3, the raw load balance scores are from models already equipped with their load-balancing methods.\\n\\n## [W3 & Q3: Weak empirical analysis on computational efficiency gain. - The quantization did slow down inference]\\nTheoretically, our strategy will not increase the memory because when we duplicate the heavy-hitter experts, we also quantize the duplicated one to half precision, and then we quantize the less important ones to half precision.\\n\\n## [W4 & Q4: Updated the more challenging tasks - Sure!]\\n## Results on other generation tasks:\\n## TruthfulQA Results\\n\\n| Model | Type | bleu_acc | bleu_diff | rougeL_acc |\\n|----------------|------|----------|-----------|------------|\\n| **DeepSeek MoE** | RQ | 0.3133 | -7.7523 | 0.2876 |\\n| | Raw | 0.3121 | -7.7476 | 0.2901 |\\n| **Llama MoE** | RQ | 0.2925 | -8.1597 | 0.2546 |\\n| | Raw | 0.2913 | -8.5101 | 0.2436 |\\n\\n## Load Balance Results\\n\\n| Dataset | Type | Value | Type | Metric | Value |\\n|------------------------|------|-----------------------------|------|-------------------|---------|\\n| **codexglue_code2text**| Raw | 2.6119107948506235 | Raw | smoothed_bleu_4 | 1.5517 |\\n| | RQ | 1.9426032317889572 | RQ | | 1.5463 |\\n| **coqa** | Raw | 2.0412320534522754 | Raw | f1 | 0.0104 |\\n| | RQ | 1.6487805751851754 | RQ | | 0.0106 |\\n| 
**wikitext** | Raw | 2.090017361111111 | Raw | byte_perplexity | 17.2096 |\\n| | RQ | 1.6235243055555557 | RQ | | 17.2826 |\\n| | | | Raw | bits_per_byte | 4.1051 |\\n| | | |RQ| | 4.1112 |\"}", "{\"title\": \"A summary of rebuttal\", \"comment\": [\"We sincerely thank the reviewers and area chairs for their thoughtful feedback and engaging discussions on this paper. To support clearer summarization and discussion, we have included a concise, one-step overview of our work.\", \"## **Background, Motivation, and Contributions**\", \"### **Task Motivation**\", \"The inference performance of Sparse Mixture-of-Experts (SMoE) models is hindered by load imbalance, where certain \\\"heavy-hitter\\\" experts process disproportionately more tokens, leading to increased latency and resource inefficiency. While existing strategies attempt to address this at the training stage, they often fail to generalize during inference.\", \"### **Our Contributions**\", \"1. **Novel Load Balancing Strategy**:\", \"Introduced **Replication and Quantization (R&Q)** to alleviate load imbalance and maintain model accuracy:\", \"**Replicate Heavy-Hitter Experts**: Tokens are dispatched to any of the heavy-hitter experts and their replicas.\", \"**Quantization**: Heavy-hitter replicas are quantized to half precision, and less important experts are also quantized, maintaining memory constraints.\", \"2. **Load Imbalance Score**:\", \"Proposed a novel metric to evaluate load distribution across experts during inference, addressing a critical gap in prior research.\", \"3. **Empirical Results on Multiple Models**:\", \"Demonstrated consistent improvements in inference efficiency across various models, including Switch Transformers, LlaMa MoE and DeepSeek MoE.\", \"Provided comprehensive studies on models with varying sparsity configurations and routing strategies.\", \"----------\", \"## **Reviewers\\u2019 Feedback**\", \"### **Positive Feedback**\", \"1. 
**Comprehensive experiments**:\", \"Reviewer Dy55: \\\"The author conducted experiments on various tasks and model types, creating a comprehensive overview of the impact of the proposed strategy on the performance of the model.\\\"\", \"Reviewer 5vEp: \\\"The authors have tested on various MoE base models.\\\"\", \"Reviewer CGsx: \\\"Near-original accuracy (at least on classification / MCQ tasks)\\\"\", \"2. **Novel Insights**:\", \"Reviewer CGsx: \\\"I like how they distinguished between heavy-hitter experts and important experts, which could easily be confused as the same. They also conducted experiments to show that these concepts are distinct, although there is some correlation between them..\\\"\", \"Reviewer UpDR: \\\"The \\\"Replicate and Quantize\\\" strategy is a novel approach that dynamically addresses load imbalance in SMoE models without requiring extensive retraining.\\\"\", \"Reviewer Dy55: \\\"This paper studies an interesting problem, which is the load imbalance of MoE LLMs in inference scenarios.\\\"\", \"3. **Easy to implement**:\", \"Reviewer UpDR: \\\"The proposed strategy is plug-and-play, making it easy to integrate with existing models and practical for real-world applications.\\\"\", \"Reviewer CGsx: \\\"No retraining is required.\\\"\", \"Reviewer 5vEp: \\\"The proposed idea is simple and sound.\\\" \\\"Overall, this work is well-organized and easy to follow.\\\"\", \"### **Remaining Concerns**\", \"1. **Scalability**:\", \"Reviewer CGsx: \\\"Weak empirical analysis on more challenging tasks\\\"\", \"Reviewer UpDR: \\\"How does your method perform under different levels of model sparsity and varying numbers of experts in the SMoE models?\\\"\", \"Reviewer 5vEp: \\\"The scalability of the proposed method is encouraged to be evaluated or discussed.\\\"\", \"2. 
**Inference Efficiency**:\", \"Reviewer CGsx: \\\"Requested further clarification on how R&Q affects inference speed and memory usage.\\\"\", \"Reviewer Dy55: \\\"The most important metrics, which are the inference performance of the proposed system (memory consumption, latency, hardware utilization, and so on), are not studied in this work. These are the most important reasons one would like to reduce the load imbalance.\\\"\", \"3. **Detailed Ablation Study**:\", \"Reviewer UpDR: \\\"Could you conduct additional ablation studies to demonstrate the individual contributions of the replication and quantization components in your proposed method?\\\"\", \"Reviewer 5vEp: \\\"an appropriate ablation study should be conducted to verify the effectiveness of both strategies on the most heavy experts and less important experts\\\"\", \"----------\"]}", "{\"metareview\": \"The submission presents a load-balancing strategy for mixture of experts models involving quantization of experts. The reviewers indicated that the submission was borderline or below the acceptance threshold, with a majority indicating that the submission should be rejected. Reviewer Dy55 indicates that the empirical evaluation is incomplete. Reviewer CGsx indicates that the connection between the design choices and speedups is inadequately explained. Reviewer 5vEp was appreciative of the additional explanations, but still felt that the contribution was not significant enough in its current form to recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer discussion was active on all parts, with substantial responses from the authors, which were appreciated by the reviewers. 
The reviewers have suggested these additional explanations be included in future revisions of the work should it be submitted to another conference in the future.\"}", "{\"summary\": \"This paper introduces a novel strategy called \\\"Replicate and Quantize\\\" for addressing load balancing issues in Sparse Mixture-of-Experts (SMoE) models. The authors systematically analyze the performance and functionality of each expert and introduce a metric to evaluate load balance. They propose a dynamic plug-and-play strategy that is both trainingless and near-lossless, effectively resolving load balancing problems by replicating heavily used experts with lower-bit quantized versions and quantizing the least important experts to fit within the memory budget. Empirical results demonstrate that this approach significantly reduces load imbalance with minimal impact on model performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The \\\"Replicate and Quantize\\\" strategy is a novel approach that dynamically addresses load imbalance in SMoE models without requiring extensive retraining.\\n\\ufeff\\n2) The proposed strategy is plug-and-play, making it easy to integrate with existing models and practical for real-world applications.\", \"weaknesses\": \"1) The paper lacks detailed implementation specifics, such as the exact quantization methods and hyperparameters used.\\n\\n2) There is a need for more extensive ablation studies to isolate and demonstrate the contributions of the replication and quantization components individually.\", \"questions\": \"1) Can you provide more detailed implementation details, including the specific quantization techniques and hyperparameters used in your experiments, to facilitate reproducibility?\\n\\n2) Could you conduct additional ablation studies to demonstrate the individual contributions of the replication and quantization components in your proposed method?\\n\\n3) How does your method 
perform under different levels of model sparsity and varying numbers of experts in the SMoE models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### **On Ablation Study**:\\n\\n| Model & Configuration | Hellaswag | MMLU | PIQA | Truthful QA | Winogrande |\\n|----------------------------------|----------------------|---------------------|---------------------|---------------------|---------------------|\\n| **Switch 8** | | | | | |\\n| Only Quant Less-Important Experts | 0.2795 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5751 \\u00b1 0.0115 | 0.3605 \\u00b1 0.0110 | 0.5138 \\u00b1 0.014 |\\n| Replicate and Quant replicate one | 0.2749 \\u00b1 0.0045 | 0.2498 \\u00b1 0.0037 | 0.5811 \\u00b1 0.0115 | 0.3635 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| Quant All Experts | 0.2641 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5490 \\u00b1 0.0116 | 0.3775 \\u00b1 0.0112 | 0.5185 \\u00b1 0.014 |\\n| R&D | 0.2763 \\u00b1 0.0045 | 0.2522 \\u00b1 0.0037 | 0.5832 \\u00b1 0.0115 | 0.3640 \\u00b1 0.0110 | 0.4917 \\u00b1 0.0141 |\\n| Quant All Model | 0.2927 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5952 \\u00b1 0.0115 | 0.3706 \\u00b1 0.0111 | 0.5091 \\u00b1 0.0141 |\\n| **Switch 16** | | | | | |\\n| Only Quant Less-Important Experts | 0.2820 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5577 \\u00b1 0.0116 | 0.3914 \\u00b1 0.0114 | 0.4964 \\u00b1 0.0141 |\\n| Replicate and Quant replicate one | 0.2864 \\u00b1 0.0045 | 0.2490 \\u00b1 0.0036 | 0.5490 \\u00b1 0.0116 | 0.3669 \\u00b1 0.0112 | 0.483 \\u00b1 0.014 |\\n| Quant All Experts | 0.2595 \\u00b1 0.0044 | 0.2295 \\u00b1 0.0035 | 0.5582 \\u00b1 0.0116 | 0.3721 \\u00b1 0.0110 | 0.5091 \\u00b1 0.0141 |\\n| R&D | 0.2872 \\u00b1 0.0045 | 0.2495 \\u00b1 0.0036 | 0.5501 \\u00b1 0.0116 | 0.3694 \\u00b1 0.0112 | 0.4854 \\u00b1 0.014 |\\n| Quant All Model | 0.2768 \\u00b1 0.0045 | 0.2295 \\u00b1 0.0035 | 0.5539 \\u00b1 0.0116 | 
0.3726 \\u00b1 0.0112 | 0.4901 \\u00b1 0.014 |\\n\\n----------\\n\\n### **Why This Work Matters**\\n\\n1. **Practical Load Balancing Strategy for Inference**:\\n - Significantly reduces inference time while maintaining accuracy and preserving memory usage.\\n \\n2. **Foundational for Future Research**:\\n - Introduces the novel _Load Imbalance Score_ metric, providing a valuable tool for future studies on SMoE model efficiency.\\n \\n3. **Adaptable and Scalable**:\\n - Demonstrates practical applicability across diverse models, tasks, and real-world streaming data scenarios.\\n\\n\\n----------\\n\\n### **Conclusion**\\n\\nWe hope the above overview provides our AC and reviewers with a concise way to navigate the large amount of information on this page. \\nFurther, we sincerely hope that our appreciation of simple but effective design, as well as our novel observations and insights into load imbalance in SMoE models, can be shared with you and our fellow scholars in this long-overlooked but important field of SMoE models.\\n\\nSincerely, \\n\\nPaper Authors\"}", "{\"summary\": \"This paper proposes Replicate and Quantize, an inference-time strategy that aims to mitigate load imbalance on Mixture-of-Expert based Large Language Models. The author claimed that there exist differences between heavy-hitters and important experts in MoE models, and proposed to 1) quantize the least important expert for resource savings, and 2) replicate a quantized version of heavy hitters to reduce load imbalance. 
Results show that the authors' strategy improved load imbalance without substantially reducing performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper studies an interesting problem, which is the load imbalance of MoE LLMs in inference scenarios.\", \"The author conducted experiments on various tasks and model types, creating a comprehensive overview of the impact of the proposed strategy on the performance of the model.\"], \"weaknesses\": [\"The paper's claim on its novelty, that the load imbalance of MoE models has not been studied for inference scenarios, is wrong. Plenty of research has focused on the inference scenario, such as [1], [2], [3], [4] and [5], and many of them have provided a thorough characterization of the workload already. Quantization of the MoE model has also been studied in [6]. The author should conduct **a more thorough review of the existing literature** and discuss how this work differs from these existing ones.\", \"The Load Imbalance Score defined in Sec 3.1 is an ill-defined metric. The overall load in a dataset is not directly related to the inference performance of the MoE model. It is the load in a certain batch that would have a major impact on the robustness of the model (preventing OOM errors) and latency (of all-to-all communications).\", \"Algorithm 1 seems to be unnecessary. The search is quite straightforward.\", \"The authors' proposition, that the heavy-hitters are not necessarily the most important expert, seems to be *refuted* by the presented data in Fig. 2. See questions below.\", \"The most important metrics, which are the inference performance of the proposed system (**memory consumption, latency, hardware utilization, and so on**), are not studied in this work. 
These are the most important reasons one would like to reduce the load imbalance.\", \"Line 216, \\\"Wanda metric\\\" has been referenced, but only formally defined on line 237.\", \"The paper is not well formatted. For example:\", \"The citations are not correctly adapted to the ICLR format. e.g. Line 107 -- Jacobs et al. Jacobs et al..\", \"Missing spaces \\\",or\\\" on line 115.\", \"Missing reference on line 120.\", \"Missing spaces \\\".For\\\" on line 406.\", \"The authors have not provided any code or reproducibility statement.\", \"[1] Huang, Haiyang, et al. \\\"Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference.\\\" arXiv preprint arXiv:2303.06182 (2023).\", \"[2] Gale, Trevor, et al. \\\"Megablocks: Efficient sparse training with mixture-of-experts.\\\" Proceedings of Machine Learning and Systems 5 (2023): 288-304.\", \"[3] Kong, Rui, et al. \\\"Serving MoE Models on Resource-constrained Edge Devices via Dynamic Expert Swapping.\\\" arXiv preprint arXiv:2308.15030 (2023).\", \"[4] Li, Jiamin, et al. \\\"Accelerating distributed MoE training and inference with lina.\\\" 2023 USENIX Annual Technical Conference (USENIX ATC 23). 2023.\", \"[5] Hwang, Ranggi, et al. \\\"Pre-gated moe: An algorithm-system co-design for fast and scalable mixture-of-expert inference.\\\" 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA). IEEE, 2024.\", \"[6] Kim, Young Jin, Raffy Fahim, and Hany Hassan Awadalla. \\\"Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness.\\\" arXiv preprint arXiv:2310.02410 (2023).\"], \"questions\": [\"Fig. 2 shows that the most important expert is the expert that receives the most tokens. It seems like it is rejecting, instead of confirming the authors' proposition, that the heavy-hitters are not necessarily the most important expert. 
Wouldn't quantizing expert 3, the heavy hitter in this case, lead to performance degradation?\", \"How does the router adapt to the case where the most important expert is replicated? Will it evenly distribute its tokens to each GPU device?\", \"How are the experts loaded on the GPU? Are the other experts completely unaffected?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"After rebuttal\", \"comment\": \"I have read all author rebuttals. The authors have answered many of my questions with further experiments.\\nI will raise my rating to 5, and suggest that these revisions be included in the future version.\"}", "{\"comment\": \"Hello Reviewer CGsx,\\n\\nWe hope you had a wonderful Thanksgiving! We are checking in on our previous comment to see if you've had a chance to read it, and we'd be grateful if you could take a moment to review it.\\n\\nWe address your concerns in our rebuttal.\\n* **Applying our strategy will not increase memory usage.**\\n* **The inference time will decrease, as we show in our load balance results table, and we also provide an example to show how it works.**\\n\\n\\nIf you have any updates or thoughts, we\\u2019d greatly appreciate your feedback. Please let us know if there\\u2019s anything we can clarify or assist with.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Thank you\", \"comment\": \"Thanks for the detailed response! I have adjusted my score given the reviews and the replies to reflect these improvements.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Our method ensures memory usage does not increase even when duplicating new experts. 
After duplication, we quantize the new experts and further quantize less-important ones to half precision.\\n\\nRegarding inference speed, in each MoE layer, the inference time is determined by the slowest expert, which handles the largest number of tokens. Below, we provide an example to illustrate this behavior.\\n\\n---\\n\\n### Example for Each MoE Layer\\n\\n**Premise:**\\n\\n- **Number of Experts:** 4 \\n- **Number of Tokens:** 80 \\n- **Processing Capacity per Expert:** Each expert processes $\\frac{80}{4} = 20$ tokens per unit time (T). \\n\\n---\\n\\n**Original Routing Results:**\\n\\n- **Experts' Token Allocation:** [18, 30, 25, 7] \\n- **Inference Time (Slowest Expert):** Expert 1 handles 30 tokens: $\\frac{30}{20} = 1.5 \\, T$\\n\\n---\\n\\n**After R & Q:**\\n\\n- **Experts' Token Allocation:** [18, [17, 13], 25, 7] \\n- **Inference Time (Slowest Expert):** Expert 2 handles 25 tokens: $\\frac{25}{20} = 1.25 \\, T$\\n\\n---\\n\\n### Result:\\n\\nBy applying our method, the inference time is reduced, resulting in a speed-up of $0.25 \\, T$.\\n\\n---\"}", "{\"comment\": \"Hello Reviewer UpDR,\\n\\nWe hope you had a wonderful Thanksgiving! We are checking in on our previous comment to see if you've had a chance to read it, and we'd be grateful if you could take a moment to review it.\\n\\nWe address your concerns in our rebuttal.\\n* **Provide clear implementation details.**\\n* **Add ablation studies to demonstrate the individual contributions of the replication and quantization.**\\n* **Clarify that the models tested in our experiments have different levels of sparsity.**\\n\\nIf you have any updates or thoughts, we\\u2019d greatly appreciate your feedback. Please let us know if there\\u2019s anything we can clarify or assist with.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": [\"Happy Thanksgiving, and thank you for your efforts in addressing my earlier comments. 
Unfortunately, I find that the response has again left my key concerns unaddressed and, in some cases, has reinforced my doubts about the paper. Below, I outline the specific issues that remain unresolved:\", \"W5: The primary benefit of the proposed strategy hinges on its ability to deliver tangible improvements in real-world systems. However, the results presented, such as the load imbalance score, are preliminary and speculative. For a high-impact venue like ICLR, we expect the authors to develop at least a minimum viable product and provide substantial experimental evidence supporting their claims. The proposed strategy introduces potential complications that could hinder practical application:\", \"Randomly assigning tokens destined for the same expert to different GPUs could cause memory fragmentation and increase memory allocation overhead. Additionally, this approach may necessitate more complex all-to-all communication patterns, which must be taken care of in implementation.\", \"Uneven distribution of experts across GPUs may lead to bottlenecks, as experts with heavier workloads may introduce overheads that negate potential gains.\", \"The type casting and mixed-precision processing proposed could further introduce computational overhead.\", \"To convincingly establish the benefits of this work, it is crucial for the authors to implement the full mechanism and address these practical challenges experimentally.\", \"The illustrative example presented in the paper contains notable flaws:\", \"1. In the example, tokens for Expert 1 are split into two batches: 17 and 13. It is unclear where the batch of 13 tokens is directed. If it is sent to GPU0, the total workload becomes 31 tokens, exceeding the original maximum of 30 tokens. Since expert quantization requires pre-allocation, such decisions cannot be deferred until the gating function is applied.\", \"2. 
Previous research, as well as my own observations, indicate that the time required to process tokens does not scale linearly with the token count. This is especially evident when the number of tokens per expert is small, as is the case here.\", \"W8: The lack of a real implementation exacerbates concerns regarding the work's practical advantages. Notably, official implementations of Llama-MoE and DeepSeekMoE on Huggingface promote tensor parallelism, effectively balancing loads and eliminating expert imbalance issues. Since these models do not rely on expert parallelism, the problem addressed by the proposed strategy does not arise in these cases. Without further justification, the relevance of this work remains questionable.\", \"W1: As I mentioned in my previous comment, the proposed \\\"replicate strategy\\\" bears strong similarity to the \\\"Dynamic Shadowing Strategy\\\" described in Section 4.1 of [7], differing primarily in the inclusion of quantization, which is also a common strategy (see below for my concerns on quantization). Given that the strategy lacks training-specific components, it can already be applied to inference scenarios. Changing the application domain from training to inference does not, in my view, constitute sufficient novelty for an ICLR paper.\", \"W4 & Q1: The authors state that \\\"Randomly quantizing any expert could negatively impact real-world performance or specific tasks, so we prioritize quantizing the less important experts instead.\\\" However, this claim is not substantiated with evidence in the paper. Furthermore, [6] reports contradictory findings on a related task, making it difficult to accept this assertion without additional justification.\"]}", "{\"comment\": \"## **Our Response to Concerns**\\n\\n### **On Scalability**:\\n\\n- **1. 
Add more challenging tasks**:\\n\\nTruthfulQA generation results and other challenging tasks:\\n\\n| Model | Type | bleu_acc | bleu_diff | rougeL_acc |\\n| ---------------- | ---- | -------- | --------- | ---------- |\\n| **DeepSeek MoE** | RQ | 0.3133 | -7.7523 | 0.2876 |\\n| | Raw | 0.3121 | -7.7476 | 0.2901 |\\n| **Llama MoE** | RQ | 0.2925 | -8.1597 | 0.2546 |\\n| | Raw | 0.2913 | -8.5101 | 0.2436 |\\n\\n| Dataset | Type | Value | Type | Metric | Value |\\n|------------------------|------|-----------------------------|------|-------------------|---------|\\n| **codexglue_code2text**| Raw | 2.6119107948506235 | Raw | smoothed_bleu_4 | 1.5517 |\\n| | RQ | 1.9426032317889572 | RQ | | 1.5463 |\\n| **coqa** | Raw | 2.0412320534522754 | Raw | f1 | 0.0104 |\\n| | RQ | 1.6487805751851754 | RQ | | 0.0106 |\\n| **wikitext** | Raw | 2.090017361111111 | Raw | byte_perplexity | 17.2096 |\\n| | RQ | 1.6235243055555557 | RQ | | 17.2826 |\\n| | | | Raw | bits_per_byte | 4.1051 |\\n| | | | RQ | | 4.1112 |\\n\\n- **2. Clarify the previous experimental settings**\\n\\tWe have conducted experiments in the paper with the following models at different sparsity levels.\\n\\t**Switch Transformer 8 experts (Sparsity 1/8):**\\n\\tA total of 8 experts in each layer; only 1 expert is chosen for each token.\\n\\t**Switch Transformer 16 experts (Sparsity 1/16):**\\n\\tA total of 16 experts in each layer; only 1 expert is chosen for each token.\\n\\t**Llama MoE 8 experts (Sparsity 2/8):**\\n\\tA total of 8 experts in each layer; 2 experts are chosen for each token.\\n\\t**Deepseek MoE (Sparsity 1+3/63):**\\n\\tA total of 64 experts in each layer: 63 isolated experts and 1 shared expert. In each routing, the system will select three experts from the 63 isolated experts, always using the shared expert. Therefore, our strategy is exclusively applied to the 63 isolated experts.\\n- **3. 
Scalability in streaming data**\\n In Figure 3, Panels (a) and (b) illustrate the load balance scores across multiple timesteps under two distinct routing strategies. Panel (a) demonstrates the case where the system utilizes cumulative information aggregated from all prior timesteps, providing a holistic approach to decision-making. Conversely, Panel (b) focuses on a scenario where only information from the immediately preceding timestep is leveraged, showcasing a more localized decision-making approach.\\n\\n The purpose of this setup is to distinguish the most important \\\"heavy-hitter\\\" experts from the less important ones. These decisions depend directly on the input data at each timestep. To mimic how data arrives in a real-world streaming scenario, we divided the MMLU dataset into 10 parts and set the timestep count to 10. This approach allows us to simulate the flow of data over time and study how it affects the routing strategy in a manageable and realistic way.\\n\\n### **On Inference Efficiency**:\\nOur method ensures memory usage does not increase even when duplicating new experts. After duplication, we quantize the new experts and further quantize less-important ones to half precision. \\n\\nRegarding inference speed, in each MoE layer, the inference time is determined by the slowest expert, which handles the largest number of tokens. 
Below, we provide an example to illustrate this behavior.\\n\\n\\t### Example for Each MoE Layer\\n\\n\\tPremise:\\n\\n\\t- Number of Experts: 4\\n\\t- Number of Tokens: 80\\n\\t- Processing Capacity per Expert: Each expert processes (80 / 4 = 20) tokens per unit time (T).\\n\\n\\t---\\n\\n\\tOriginal Routing Results:\\n\\n\\t- Experts' Token Allocation: [18, 30, 25, 7]\\n\\t- Inference Time (Slowest Expert): Expert 1 handles 30 tokens: 30 / 20 = 1.5 T\\n\\n\\t---\\n\\n\\tAfter Rerouting & Quantization (R & Q):\\n\\n\\t- Experts' Token Allocation: [18, [17, 13], 25, 7]\\n\\t- Inference Time (Slowest Expert): Expert 2 handles 25 tokens: 25 / 20 = 1.25 T\\n\\n\\tResult:\\n\\n\\tBy applying our method, the inference time is reduced, resulting in a speed-up of 0.25 T.\\n\\n---\"}", "{\"comment\": \"We thank the reviewers for the comments and suggestions to improve our paper. Please see the following clarification.\\n\\n## [Q1: The paper's claim on its novelty, that the load imbalance of MoE models has not been studied for inference scenarios, is wrong. - We will discuss those papers as follows]\\n\\n---\\n### [Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference.](https://openreview.net/pdf?id=stXtBqyTWX)\\n**Focus**: Load balance for each GPU, not the experts. They have not released their code after acceptance by NeurIPS 2024, and the description in the paper is too vague for us to replicate their methods.\\n\\n### [MEGABLOCKS: EFFICIENT SPARSE TRAINING WITH MIXTURE-OF-EXPERTS](https://arxiv.org/pdf/2211.15841)\\n\\n**Focus**: Sparse training, not inference.\\n\\n---\\n\\n### [SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget](https://arxiv.org/pdf/2308.15030)\\n\\n**Focus**: Inference on edge devices with memory constraints. Dynamically load, activate, and swap experts during inference. 
The approach is taken from the systems side.\\n\\n---\\n\\n### [Accelerating Distributed MoE Training and Inference with Lina](https://www.usenix.org/conference/atc23/presentation/li-jiamin)\\n\\n**Focus**: Both training and inference. \\n**Load Balancing**: If an expert is overloaded, another expert from the top-k set (on the same device) is selected.\\n\\n---\\n\\n### [Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference](https://arxiv.org/pdf/2308.12066)\\n\\n**Focus**: Adds a pre-gating mechanism, which modifies the model architecture. \\n\\n---\\n\\n### [Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness](https://arxiv.org/pdf/2310.02410)\\n\\n**Focus**: Memory optimization via quantization. They mainly conduct thorough experiments to show that the expert weights are more robust than expected. \\n\\n---\\n## [W2: The Load Imbalance Score defined at Sec 3.1 is an ill-defined metric. - It's good ]\\nWe measure the load distribution across experts for the entire dataset because a batch is typically a uniform sample from the dataset. Therefore, using the dataset-level measurement provides a reasonable approximation.\\n\\n## [W4 & Q1: Fig. 2 shows that the most important expert is the expert that receives the most tokens. - This observation about the experts' inner function does not affect our strategy.]\\nIn Figure 2, our goal is to demonstrate the existence of two distinct dimensions among the experts, highlighting that their functionality can be analyzed from these two different perspectives. While it is true that in some cases, the most important expert may also be the heaviest one, this does not prevent our method from quantizing the less important experts and replicating the heavier ones. 
While certain experts may excel in multiple aspects, this does not negate the presence of distinct functional dimensions that allow us to manage and optimize the experts accordingly.\\n\\n## [W5 & W6: Thanks for your suggestion, we will modify it!]\\n\\n## [Q2: How does the router adapt to the case where the most important expert is replicated? Will it evenly distribute its tokens to each GPU device? - Randomly and Not necessary]\\nWhen the router adapts to the case where the most important expert is replicated, it randomly assigns tokens to one of the replicated experts for processing. \\n\\nBased on our experiments, splitting the workload of a heavy-hitter expert between just two replicas significantly alleviates the load imbalance. It is not necessary to distribute the workload across all GPUs to achieve this improvement.\\n\\n## [Q3: How are the expert loaded on the GPU? Are the other experts completely unaffected? - Not affected]\\nIn the quantized version, the compressed weights are loaded into the GPU as low-bit representations. This allows the quantized experts to occupy significantly less memory, enabling their efficient deployment on the GPU. \\n\\nOther experts are unaffected by this process as their weights remain unchanged and continue to operate in their original precision.\"}", "{\"comment\": \"Thank you to the authors who provided additional context. It sounds good that the load balancing helps with more complicated tasks. However, it's still not empirically clear how their method actually translates to speedups or memory efficiency, which is crucial to this technique's usability.\"}" ] }
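The latency arithmetic that recurs throughout this thread — each MoE layer's inference time is set by its most-loaded expert, and replicating the heavy hitter shortens it — can be sketched in a few lines of Python. This is our illustrative reconstruction of the rebuttal's example, not the authors' code; the 17/13 split and the capacity of 20 tokens per unit time (T) come directly from that example.

```python
def layer_latency(loads, capacity=20.0):
    """Latency of one MoE layer (in units of T): set by the most-loaded expert."""
    return max(loads) / capacity

# Original routing from the rebuttal example: the 30-token expert dominates.
before = [18, 30, 25, 7]

# After R&Q: the heavy hitter's 30 tokens are split between the original
# expert (17) and its quantized replica (13), so the 25-token expert
# now sets the pace.
after = [18, 17, 13, 25, 7]

print(layer_latency(before))  # 1.5 (T)
print(layer_latency(after))   # 1.25 (T)
```

Under this simplification the speed-up is 0.25 T, matching the rebuttal; a real deployment would also need to account for the non-linear per-token costs and communication overheads that Reviewer Dy55 raises.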
0wQCSXJbwt
Temporal-Difference Variational Continual Learning
[ "Luckeciano Carvalho Melo", "Alessandro Abate", "Yarin Gal" ]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks. This adaptability allows them to respond to potentially inevitable shifts in the data-generating distribution over time. However, in Continual Learning (CL) settings, models often struggle to balance learning new tasks (plasticity) with retaining previous knowledge (memory stability). Consequently, they are susceptible to Catastrophic Forgetting, which degrades performance and undermines the reliability of deployed systems. Variational Continual Learning methods tackle this challenge by employing a learning objective that recursively updates the posterior distribution and enforces it to stay close to the latest posterior estimate. Nonetheless, we argue that these methods may be ineffective due to compounding approximation errors over successive recursions. To mitigate this, we propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations, preventing individual errors from dominating future posterior updates and compounding over time. We reveal insightful connections between these objectives and Temporal-Difference methods, a popular learning mechanism in Reinforcement Learning and Neuroscience. We evaluate the proposed objectives on challenging versions of popular CL benchmarks, demonstrating that they outperform standard Variational CL methods and non-variational baselines, effectively alleviating Catastrophic Forgetting.
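The recursion this abstract refers to can be written out as follows — a sketch of the standard VCL update in common notation, which is our rendering rather than the paper's exact formulation:

```latex
% Standard VCL: task t maximizes the ELBO while staying close to the
% latest posterior estimate, and the recursion then repeats.
q_t \;=\; \arg\max_{q}\;
  \mathbb{E}_{\theta \sim q}\big[\log p(\mathcal{D}_t \mid \theta)\big]
  \;-\; \mathrm{KL}\big(q(\theta) \,\|\, q_{t-1}(\theta)\big)
```

As the abstract describes it, the proposed objectives instead regularize toward several earlier estimates $q_{t-1}, \dots, q_{t-n}$, so that an error in any single approximate posterior cannot dominate subsequent updates.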
[ "continual learning", "online variational inference", "temporal-difference learning" ]
Reject
https://openreview.net/pdf?id=0wQCSXJbwt
https://openreview.net/forum?id=0wQCSXJbwt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yxJrVE1rmG", "yQZbkNHTqn", "ZrZU2MhK9S", "PQKw4c912g", "KTGXBLWZ7X", "G9IamZ5K2T", "3n2kwiT7W0" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "official_review", "official_comment", "decision" ], "note_created": [ 1729641945458, 1730704604967, 1730311505245, 1734738493535, 1730715010732, 1732790344454, 1737523566496 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3274/Reviewer_awe2" ], [ "ICLR.cc/2025/Conference/Submission3274/Reviewer_DJ93" ], [ "ICLR.cc/2025/Conference/Submission3274/Reviewer_vxsa" ], [ "ICLR.cc/2025/Conference/Submission3274/Area_Chair_cchu" ], [ "ICLR.cc/2025/Conference/Submission3274/Reviewer_Z6hC" ], [ "ICLR.cc/2025/Conference/Submission3274/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper focuses on mitigating the issue of cumulative error accumulation in variational continual learning due to relying on a single posterior from the past task. The paper formulates n-Step KL-VCL, which allows for regularizing network updates using past n posteriors. In doing so, it formulates the likelihood term to integrate replay samples from past n tasks. Furthermore, it proposes TD($\\\\lambda$)-VCL, which connects variational continual learning with TD methods from reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper makes a significant contribution by drawing on Temporal-Difference methods to mitigate error accumulation in variational continual learning. Thus proposed formulation allows the regularization using past $n$ posteriors and incorporation of a replay buffer for previous $n$ tasks into the principled framework of variational continual learning.\\n2. The experiments show a performance boost compared to the baselines. The propositions and their proofs further enhance the strength of this work. \\n3. 
The paper is well-organized and easy to follow. The authors provide a thorough analysis of their method on benchmark datasets, along with sensitivity analysis of hyper-parameters.\", \"weaknesses\": \"1. One major weakness is that the benchmarks include small-scale MNIST variants (permuted MNIST and single-headed MNIST/not-MNIST tasks) only.\\n2. The benchmarks are constrained to the task-incremental learning, where the task identifier is provided during prediction. The paper's claim of effort to raise the standards for evaluating continual learning is not strong, as recent works commonly focus on the more challenging class-incremental learning setting, which doesn't require task identifiers for prediction.\\n\\nI would be happy to raise the score if these weaknesses and the following questions are addressed.\", \"questions\": \"1. As most recent works on Bayesian continual learning [1,2] experiment with CIFAR and tiny ImageNet, it would be interesting to see the results when applied to such relatively more complex datasets.\\n2. Since the proposed method incorporates a replay buffer, it would be interesting to see how it compares in a class-incremental learning setting against replay-based methods like ER [3].\\n\\n[1] Kumar, A., Chatterjee, S., Rai, P. (2021). Bayesian Structural Adaptation for Continual Learning. In Proceedings of the 38th International Conference on Machine Learning (pp. 5850\\u20135860). PMLR.\\n\\n[2] Thapa, J., Li, R. (2024). Bayesian Adaptation of Network Depth and Width for Continual Learning. In Forty-first International Conference on Machine Learning.\\n\\n[3] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc\\u2019Aurelio Ranzato. Continual learning with tiny episodic memories. 
arXiv preprint arXiv:1902.10486, 2019.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces TD-VCL, aiming to mitigate Catastrophic Forgetting in continual learning (CL) by using a variational framework inspired by reinforcement learning\\u2019s temporal-difference (TD) methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is well-written and easily understandable.\", \"weaknesses\": \"This work builds on an earlier approach to variational continual learning. While applying a temporal modification to the variational objective to mitigate model drift is intuitive, and drawing a connection to reinforcement learning is conceptually interesting, this work and its benchmarks feel largely disconnected from recent advances in continual learning. Had this work been published six years ago, it might have been more impactful, but recent developments have rendered variational models less relevant due to their limitations in scalability and stability.\\n\\nThe experiments are confined to benchmarks like PermutedMNIST, SplitMNIST, and SplitNotMNIST\\u2014datasets that are relatively simple and fall short of reflecting real-world continual learning challenges. More recent works typically include larger and more complex datasets such as CIFAR-100 and ImageNet, which would provide a more realistic evaluation of the method.\\n\\nAdditionally, the paper\\u2019s evaluation lacks comparisons to newer, stronger baselines in the field. While standard VCL and its variants are included, recent advanced methods, such as ALTA, DER, and L2P, are absent. 
This omission raises questions about the practical relevance and competitiveness of the proposed method.\", \"questions\": \"To improve the impact of the method, the authors could consider building on more recent models and benchmarks or even integrating connections to neuroscience, potentially aligning the method more closely with the evolving landscape of continual learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors build on the Variational Continual Learning (VCL) framework, a Bayesian CL approach where posterior distributions are updated recursively, highlighting the compounding effect of VCL and error accumulation due to the objective depending on the posterior of the immediately previous task. To address this, the paper proposes two main solutions. First, they introduce an n-step KL regularization objective, which incorporates multiple past posterior estimates. This approach reduces the impact of individual errors and enhances the overall reliability of the model. Additionally, the authors draw parallels between their approach and temporal-difference (TD) methods from reinforcement learning \\u2013 no experiment in RL though. They suggest that integrating concepts from TD learning can further improve learning outcomes by providing a more robust way to handle updates. The proposed methods were validated through experiments against standard VCL techniques and non-variational baselines, using well-known CL benchmarks. The paper also presents detailed theoretical insights to validate the claims made. The results showed improved performance, effectively mitigating the problem of catastrophic forgetting. This research offers valuable insights into developing more robust continual learning frameworks by combining variational inference with temporal-difference learning mechanisms. 
It would be more interesting to see the results with the larger model on complex datasets\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Addressed a Potential Gap in Current Bayesian Continual Learning**:\\n The proposed method effectively addresses the issue of Catastrophic Forgetting by utilizing multiple past posterior estimates, which helps to dilute the impact of individual errors that could compound over time.\\n\\n2. **Enhanced Learning Objectives**: \\n By integrating n-Step KL regularization, the model can leverage a broader context from previous tasks, leading to improved performance in continual learning scenarios compared to standard Variational Continual Learning (VCL) methods.\\n\\n3. **Single-Step Optimization**: \\n Unlike some existing methods that require complex two-step optimizations or replay mechanisms, this approach simplifies the learning process by naturally incorporating replay into the learning objective.\", \"weaknesses\": \"## Key Points of Consideration\\n\\n### 1. Dependence on Hyperparameter Tuning\\n- **Effectiveness Contingency**: The performance of n-Step KL regularization is heavily dependent on the appropriate setting of its hyperparameters. \\n\\n### 2. Increased Computational Complexity\\n- **Robustness vs. Overhead**: While utilizing multiple past estimates can enhance robustness, it may introduce significant computational overhead, particularly in resource-limited environments.\\n- **Training and Inference Time**: It is essential to report training and inference times, as Bayesian models are generally slower compared to deterministic counterparts.\\n\\n### 3. Assumption of IID Tasks\\n- **Real-World Applicability**: The framework operates under the assumption that tasks are independent and identically distributed (IID). This assumption may not hold in many real-world scenarios, potentially limiting the framework's applicability.\\n\\n### 4. 
Potential for Bias in Estimates\n- **Impact of Biased Estimates**: If earlier posterior estimates are significantly biased, they could adversely affect the learning target, even with proposed mitigation strategies.\n\n### 5. Scalability of the Bayesian Framework\n- **Applicability Limitations**: Focusing on a Bayesian approach may restrict applicability to other models or frameworks that do not align with Bayesian principles. The framework may struggle with complex datasets exhibiting multiple distribution shifts, such as CIFAR10/100 and ImageNet, especially when utilizing larger architectures like ResNets and ViTs.\n\n### 6. Limited Experiments\n- **Validation Scope**: The framework has only been validated on MNIST and its variations and compared solely with the VCL paper. There are other prominent Bayesian continual learning works based on Mean-Field Variational Inference (MFVI), such as UCB [1], UCL [2], and Bayesian Structural Adaptation [3]. It would be beneficial to evaluate these frameworks after applying dilation techniques.\n- **Lack of Analysis**: The main section claims contributions, but there is a lack of empirical analysis in the results section for RL.\n\n## Contribution to Literature\nDespite its limitations, the work presents a valuable contribution to the existing literature on continual learning.\n\n## Questions for Further Clarification\n1. **Learning Strategy**: For SplitMNIST and SplitNotMNIST, which learning strategy was employed? Was it Task-Incremental Learning (TIL) or Class-Incremental Learning (CIL)?\n2. **Re-weighting Posteriors**: What is the intuition behind re-weighting the posteriors with KL-divergence to mitigate error accumulation? What are the implications when \\( n = t \\)?\n3. 
**Exemplar-Free Setting**: How does the framework perform in an exemplar-free setting?\n\nI will be happy to increase the score if the authors show empirical validation that the framework is scalable to larger models and complex datasets.\n### References\n[1] Ahn, Hongjoon, et al. \\\"Uncertainty-based continual learning with adaptive regularization.\\\" Advances in neural information processing systems 32 (2019).\n\n[2] Ebrahimi, Sayna, et al. \\\"Uncertainty-guided continual learning in Bayesian neural networks\\u2013Extended abstract.\\\" Proc. IEEE Conf. Comput. Vis. Pattern Recognition (CVPR). 2018.\n\n[3] Kumar, Abhishek, Sunabha Chatterjee, and Piyush Rai. \\\"Bayesian structural adaptation for continual learning.\\\" International Conference on Machine Learning. PMLR, 2021.\", \"questions\": \"### Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper draws links between new Variational Continual Learning methods and Temporal-Difference methods.\n\nThis meta-review is relatively short as all reviewers agreed that the paper requires more experiments to verify the claims made in the paper (MNIST-scale experiments are not enough now, even if they were when the original VCL paper came out years ago). The authors wrote a short response acknowledging this limitation in the current version (and some ways to improve clarity / reduce misunderstandings), and look forward to a future version incorporating the reviewers' feedback.\", \"minor_point\": \"Reviewer Z6hC also brings up that the accuracies seem low. Looking into this, I agree that the accuracies are surprisingly low, e.g., for Permuted MNIST and VCL (Table 1). Nguyen et al. (2018) have higher accuracies, so this might be worth looking into / actively addressing reasons for in a future version of the paper. Swaroop et al. 
(2019) improve VCL performance further, as does Generalised VCL (Loo et al., 2020).\", \"additional_comments_on_reviewer_discussion\": \"The authors understandably chose not to engage in a detailed rebuttal, and I look forward to a future version of the paper.\"}", "{\"summary\": \"In this paper, the authors proposed a new version of variational continual learning (VCL) which combines an n-step regularization loss with temporal difference. The n-step loss considers all posteriors and log likelihoods before n steps, and the distribution that minimizes the n-step loss can cover all n tasks. As an improved version, TD($\\\\lambda$)-VCL uses the weighted sum of the log likelihood and KL regularization, and controls the weights using $\\\\lambda$. In the experiments, TD($\\\\lambda$)-VCL achieves better performance than other baselines on variations of the MNIST experiments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Strengths\n\n1. In VCL or VCL variants, the KL regularization loss is formulated using only the posterior distribution on the previous task. In this paper, however, the scheme of using all n posteriors from the previous n steps has a strong advantage for tackling catastrophic forgetting.\", \"weaknesses\": \"Weaknesses\n\n1. To minimize Eq.(8), we should store both the memory buffer and the posterior distributions on previous tasks. However, I think that this scheme takes large memory and is highly inefficient. Most of the VCL variants (UCL[1] or UCB[2]) only store the posterior distribution of the previous task and also outperform VCL and other baselines. \n\n2. The authors should include other baselines ([1], [2], and other regularization-based CL methods). In the PermutedMNIST or Split MNIST experiment, the overall accuracy is too low. In [1] and [2], they achieve much better performance than the proposed methods without using a large amount of memory. 
Therefore, I think the contribution of TD($\\lambda$)-VCL is too weak.\n\n3. To strengthen the effectiveness of TD($\\lambda$)-VCL, I think experiments using a CNN architecture with larger datasets should be carried out. I think algorithms that are applied only in small-scale scenarios do not have any advantage these days.\n\n\n\n[1] Ahn et al., Uncertainty-based Continual Learning with Adaptive Regularization, NeurIPS 2019\n\n[2] Ebrahimi et al., Uncertainty-guided Continual Learning with Bayesian Neural Networks, ICLR, 2020\", \"questions\": \"Already mentioned in the weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your reviews\", \"comment\": \"Dear reviewers,\n\nWe would like to extend our sincere gratitude for your time and feedback to improve our paper. We understand that the reviewers agreed that our work requires more empirical validation, particularly on more complex benchmarks and in comparison against other baselines. Despite our efforts, we did not have enough time to implement and run all the experiments that we believed would address the concerns raised. We also noticed some misunderstandings of the adopted evaluation setup, which would also require further clarifications in the paper content.\n\nGiven these circumstances, we decided not to engage in the rebuttal discussion without these required changes. Nonetheless, we would like to leave this message here to thank the reviewers for their time and useful feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
0vtftmYQGV
SNAP-TTA: Sparse Test-Time Adaptation for Latency-Sensitive Applications
[ "Hyeongheon Cha", "Dong Min Kim", "Taesik Gong", "Hye Won Chung", "Sung-Ju Lee" ]
Test-Time Adaptation (TTA) methods use unlabeled test data to dynamically adjust models in response to distribution changes. However, existing TTA methods are not tailored for practical use on edge devices with limited computational capacity, resulting in a latency-accuracy trade-off. To address this problem, we propose SNAP-TTA, a sparse TTA framework that significantly reduces adaptation frequency and data usage, delivering latency reductions proportional to adaptation rate. It achieves competitive accuracy even with an adaptation rate as low as 0.01, demonstrating its ability to adapt infrequently while utilizing only a small portion of the data relative to full adaptation. Our approach involves (i) Class and Domain Representative Memory (CnDRM), which identifies key samples that are both class-representative and domain-representative to facilitate adaptation with minimal data, and (ii) Inference-only Batch-aware Memory Normalization (IoBMN), which leverages representative samples to adjust normalization layers on-the-fly during inference, aligning the model effectively to changing domains. When combined with five state-of-the-art TTA algorithms, SNAP-TTA maintains the performances of these methods even with much-reduced adaptation rates from 0.01 to 0.5, making it suitable for edge devices serving latency-sensitive applications.
[ "Test-Time Adaptation", "Unsupervised Domain Adaptation" ]
Reject
https://openreview.net/pdf?id=0vtftmYQGV
https://openreview.net/forum?id=0vtftmYQGV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCCkKgGusI", "yPVx6LNDoH", "wUz2uhSuS4", "tUYNzx3zBh", "o4GeibAdTB", "mmfv28nc51", "l8ZBda6LRI", "jy12l4aIMA", "jswUeM4u8v", "ilf9bFgZue", "fubmHeGjLK", "dCB3ifHqtX", "cwtqOfVcBL", "cIJdyMWtfF", "XEyGW6CVaT", "U4XpKuNh9a", "TmIu66SXbn", "RgISKjHWqT", "P5wPIbQRXA", "NWpTW01vEZ", "M4IF6033cO", "Lr48BrcxVM", "IQuTxn6rDH", "Gv1SNoFyl0", "FPAmfMsVne", "EonLrZVkwH", "EG7kFD64yW", "DSGZbq7nya", "7vPkme7yNX", "7m79ELFgeH", "76861ApNBr", "5pr2vllZ8Z", "5bSOEmjhui", "4GSdRyoCFw" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732258581481, 1732477737638, 1732268869071, 1732268235214, 1732861203245, 1732255816270, 1732259035481, 1732269222922, 1732266873400, 1732511725349, 1732760442081, 1732862274363, 1732390067491, 1730711472658, 1732861249019, 1732564310037, 1732804803023, 1730741225955, 1732478562470, 1732857357597, 1732267863923, 1732745350388, 1730118844468, 1732267222375, 1730706465877, 1737524305173, 1735027826033, 1732513147635, 1732267602759, 1732903028354, 1732440890486, 1732257272153, 1732773916381, 1732259723022 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_sYH2" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_sYH2" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_BUbi" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_sYH2" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_vuQK" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_v9DY" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_v9DY" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_BUbi" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_vuQK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14267/Area_Chair_Lo5f" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_sYH2" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_vuQK" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ], [ "ICLR.cc/2025/Conference/Submission14267/Reviewer_sYH2" ], [ "ICLR.cc/2025/Conference/Submission14267/Authors" ] ], "structured_content_str": [ "{\"title\": \"Responses to Reviewer v9DY (Part 2)\", \"comment\": \">Q1: Some results for latency and performance metrics on mobile or embedded systems would be helpful, to further validate the method\\u2019s effectiveness and robustness.\\n\\nThank you for suggesting the inclusion of latency and 
performance metrics on diverse mobile and embedded systems. In our study, the primary benchmarking was conducted using the widely-adopted edge device, Raspberry Pi 4. To further validate the effectiveness and robustness of SNAP-TTA, we **extended** our performance analysis to include other popular edge devices: **Raspberry Pi Zero 2W** and **NVIDIA Jetson Nano**. The results of these additional evaluations, now detailed in the *Appendix E.3*, consistently demonstrate the significant efficiency gains of SNAP-TTA over the original TTA across various resource-constrained platforms. Specifically, the latency measurements for CoTTA **across the three edge devices** (below table) reveal a **substantial reduction in latency compared to fully adapting** (Original TTA). These findings highlight the remarkable efficiency and robustness of SNAP-TTA on diverse devices.\\n| Methods | Latency on JetsonNano (s) | Latency on RPi4 (s) | Latency on RPiZero2w (s) |\\n|----------------|---------------------------|---------------------|--------------------------|\\n| Original TTA | 13.18 | 41.77 | 622.28 |\\n| **+ SNAP-TTA** | **2.61 (-80.2%)** | **4.93 (-88.2%)** | **92.01 (-85.2%)** |\\n___\\n>Q2-1: Some in-depth analysis of specific limitations would be helpful, such as how memory overhead might impact performance on resource-constrained devices.\\n\\nWe appreciate your request for a more in-depth analysis of limitations and potential trade-offs. SNAP-TTA introduces minimal memory overhead as it requires only (1) the memory buffer in Class and Domain Representative Memory (CnDRM) for storing representative samples, including both feature statistics (mean and variance) and (2) the statistics required for Inference-only Batch-aware Memory Normalization (IoBMN). 
Therefore, for a batch size $B$, the total memory overhead can be expressed as: $B \\\\times \\\\left( \\\\text{Image Size} + 2 \\\\times \\\\text{Feature Dimension} \\\\times \\\\text{Bytes per Value} \\\\right) + \\\\text{Feature Dimension} \\\\times \\\\text{Bytes per Value} \\\\times 2. $\\n\\nThen, in the case of ResNet18 on CIFAR, the total memory overhead is calculated as **only 116 KB**. Also on resource-constrained device Raspberry Pi 4, benchmarking results on ResNet50 and ImageNet-C show that SNAP-TTA incurs **negligible peak memory usage overhead (<1.8%)** compared to original TTA algorithms. Additionally, by reducing backpropagation frequency, SNAP-TTA **lowers average memory consumption**, enabling more flexible memory allocation for multitasking on edge devices. We have added both details of these memory overhead tracking results and theoretical analysis in *Appendix E.4*. \\n| | **Average Mem** | **(MB)** | **Peak Mem** | **(MB)** | **Mem Overhead (MB)** |\\n|---------|:---------------:|:------------:|:------------:|:--------:|-----------------------|\\n| Methods | Original TTA | **SNAP-TTA** | Original TTA | SNAP-TTA | **SNAP - Original** |\\n| Tent | 764.24 | **751.35** | 822.93 | 828.46 | **5.52 (0.67%)** |\\n| CoTTA | 1133.52 | **1099.64** | 1211.21 | 1227.99 | **16.78 (1.13%)** |\"}", "{\"title\": \"Thank you for responding to our rebuttal\", \"comment\": \"We sincerely thank you for taking the time to read our rebuttal and for your thoughtful comments. Also, we are pleased that our clarifications and improvements have helped our paper meet your acceptance threshold.\\n\\nWe agree that verifying SNAP-TTA on Cortex-M MCUs would require specialized libraries and adaptations, which we recognize as an important avenue for future work. 
Combining SNAP-TTA with recent advances in memory- and computation-efficient backpropagation for quantized DNNs on Cortex-M MCUs is a promising direction, and we appreciate you bringing this connection to our attention.\\n\\nThank you again for your valuable feedback and consideration. We would welcome any additional suggestions or questions you may have.\"}", "{\"title\": \"Responses to Reviewer BUbi (Part 1)\", \"comment\": \"We sincerely appreciate the time and effort you have devoted to reviewing our work and offering such valuable feedback. Below, we have addressed each of your points in detail.\\n___\\n>W1: The claimed contribution of the paper is that SNAP can make existing TTA algorithms more latency efficient and suitable for edge devices. However, this is only demonstrated in Table 4 for one algorithm (STTA) and one target device (Raspberry Pi 4). All other experiments focus only on accuracy. And while it is an important and valuable contribution to properly demonstrate that SNAP does not reduce the effectiveness of the TTA algorithms it is applied to, I think the evaluation overall fails to adequately demonstrate the claimed contribution of latency reduction across various edge devices.\\n\\nThank you for your thoughtful feedback and for identifying areas where our evaluation could be clarified. We would like to elaborate on a key aspect of our work to ensure clarity: **STTA (Sparse TTA)** is not a single algorithm but a **generalized adaptation protocol that selectively skips certain batches to meet latency constraints**. Without this protocol, full adaptation using original TTA methods would result in significantly higher latencies, as illustrated in *Figure 1*.\\n\\nIn *Table 4*, we demonstrated how SNAP-TTA reduces latency while maintaining accuracy when applied to five SOTA TTA algorithms (Tent, CoTTA, EATA, SAR, and RoTTA). 
While our primary experiments used the Raspberry Pi 4 as a representative edge device, the lightweight nature of SNAP-TTA ensures its compatibility across a variety of hardware platforms, as it introduces minimal additional memory and computational overhead (*Appendix E.4*).\\n\\nTo further address your concern, we **have added latency tracking experiments on additional edge devices (total 3)**, including **NVIDIA Jetson Nano** and **Raspberry Pi Zero 2W** on *Figure 7, Appendix E.3*. These results confirm that SNAP-TTA remains effective and compatible across different hardware. Specifically, the latency tracking results in the table below for CoTTA on **three edge devices demonstrate a substantial reduction in latency with SNAP-TTA** compared to fully adapting with the original TTA approach. These findings highlight the remarkable efficiency and robustness of SNAP-TTA on diverse devices.\\n| Methods | Latency on Jetson Nano (s) | Latency on RPi4 (s) | Latency on RPi Zero2w (s) |\\n|----------------|---------------|---------------------|--------------------------|\\n| Original TTA | 13.18| 41.77| 622.28|\\n| **+ SNAP-TTA** | **2.61 (-80.2%)**| **4.93 (-88.2%)** | **92.01 (-85.2%)**|\\n\\nWe hope this expanded evaluation addresses your concern about the demonstration of SNAP-TTA\\u2019s latency reduction across a broader range of devices.\\n___\\n>Q1: What are the lower limits of the proposed approach? For example, would SNAP enable TTA on microcontroller units (MCUs) such as Cortex-M MCUs?\\n\\nThank you for your insightful question. **Yes, SNAP-TTA is compatible with microcontroller units (MCUs)**, including Cortex-M MCUs, and can effectively enable TTA on such resource-constrained devices. 
Its feasibility depends on whether the specific model and TTA algorithm are suitable for the MCU, but the key point is that the additional overhead introduced by SNAP-TTA is minimal. The approach relies on two lightweight components that are both computationally simple and easy to implement on MCUs: a memory buffer in Class and Domain Representative Memory (CnDRM) for storing representative samples and feature statistics, and Inference-only Batch-aware Memory Normalization (IoBMN) for efficient inference. These components are designed with simplicity in mind, requiring minimal processing and memory overhead.\n\nTo address the computational limitations of MCUs, SNAP-TTA **adjusts the adaptation rate** to enable sparse adaptation, **significantly reducing the number of backpropagation passes** while prioritizing the most informative samples (via CnDRM) and ensuring efficient inference (via IoBMN). This makes it particularly well-suited to devices with limited resources. For a detailed memory usage analysis, please refer to the response to Q2, *Appendix E.4 and E.5*. We hope this provides further clarity on the adaptability of SNAP-TTA to MCUs.\"}", "{\"title\": \"Responses to Reviewer vuQK (Part 3)\", \"comment\": \">W5: In Table 6 for ImageNet-C, only the Tent method is compared, ignoring other methods, which could provide more comprehensive and convincing results.\n\nWe appreciate your feedback on the previous Table 6. We fully agree that including other methods would make the results more comprehensive and convincing. Therefore, we **have additionally evaluated SNAP-TTA\\u2019s performance** not only with the Tent algorithm but also with **EATA and SAR on ViT-base** models. These additional results, presented in *Table 5*, demonstrate that SNAP-TTA consistently achieves higher accuracy gains across all algorithms, further validating its effectiveness and versatility when applied to transformer-based models. 
Detailed explanations of implementations of SNAP-TTA on ViT are in *Appendix F.3*. We sincerely appreciate your interest in this aspect of our work and hope the additional details in these sections comprehensively address your concerns.\\n| Methods| Gau. | Shot | Imp. | Def. | Gla. | Mot. | Zoom | Snow | Fro. | Fog | Brit. | Cont. | Elas. | Pix. | JPEG | Avg. |\\n|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|\\n| EATA| 20.12 | 21.52 | 21.40 | 20.90 | 23.42 | 15.71 | 18.00 | 16.12 | 28.35 | 22.24 | 35.97 | 11.33 | 19.78 | 20.22 | 19.99 | 21.00 |\\n| **+ SNAP-TTA** | **40.74** | **43.22** | **43.11** | **40.63** | **44.59** | **51.58** | **50.63** | **54.77** | **58.32** | **61.5** | **73.91** | **33.85** | **60.19** | **63.35** | **63.01** | **52.23** |\\n| SAR|21.45 | 23.02 | 23.17 | 23.67 | 24.64 | 15.98 | 14.62 | 7.70 | 31.49 | 8.94 | 41.33 | 6.82 | 17.35 | 22.39 | 22.49 | 20.34 |\\n| **+ SNAP-TTA** | **37.59** | **38.27** | **36.78** | **38.58** | **39.99** | **49.00** | **45.77** | **43.96** | **56.61** | **59.96** | **73.02** | **19.69** | **54.30** | **61.16** | **61.85** | **47.77** |\\n___\\n>W6: In the experiments, it is not clear how the number of participating samples is controlled to meet the adaptation rate. Is it through adjusting the tau conf hyperparameter? Also, it is not described how other compared methods meet the adaptation rate.\\n\\nThank you for raising this question. First of all, **\\u2018tau conf\\u2019 doesn\\u2019t affect the sample number**, it\\u2019s an easy threshold that works in the first step in a multi-stage process of CnDRM designed to prioritize informative samples *(Algorithm 1)*. To clarify, **the number of participating samples for the adaptation is keeping the CnDRM memory size consistent with the initial batch size**. 
For example, with a batch size of 16 and an adaptation rate (AR) of 0.1, our method processes 160 streaming test samples but only uses 160\\u00d70.1=16 samples (i.e., one batch) for model updates. While it is possible to use more samples per update, we deliberately chose this setup to align with real-world edge device constraints, ensuring minimal latency and memory overhead. Thus, while the number of samples per model update remains fixed (matching the batch size), the total number of samples used for updates across all data streams is proportional to the adaptation rate. This consistent number of adaptation samples minimizes memory and latency overhead during backpropagation, which is particularly important for edge-device applications.\\n___\\n>W7: The description of lines 10-15 of the algorithm in the paper is relatively brief, considering its importance for the proposed method. More detailed explanation in the paper would assist readers in understanding.\\n\\nThank you for pointing out the brevity of the description for lines 10\\u201315 of the algorithm in the paper. These lines describe the implementation of a prediction-balanced method for removing the domain-centroid farthest sample from memory. Since implementation details were not the main contribution of our work, we opted for a conceptual explanation rather than a detailed one. To elaborate further, the memory management mechanism tracks the number of stored samples for each prediction class. When attempting to store a new sample, the algorithm operates as follows: \\n- If the prediction of the new sample belongs to the class with the highest number of stored samples, it removes the sample in that class that is farthest from the domain centroid and replaces it with the new one.\\n- Otherwise, it removes the farthest sample from the class with the highest number of stored samples overall. 
\\n\\nTo improve clarity and assist readers in understanding this process, **we have added more detailed explanations** (via comments) in *Algorithm 1*. We hope this additional context ensures that the mechanics of the algorithm are better communicated.\"}", "{\"title\": \"Response to Reviewer sYH2 (Part 1)\", \"comment\": \"Thank you for taking the time to review our response and for providing your thoughtful suggestions. We deeply appreciate your insights and agree with the points you raised.\\n>However, the key question is that efficiency/latency is coupled with the adaptation rate. A low adaptation rate leads to improved latency for all methods. As such, if you compare efficiency/latency with fully update baselines (in Figure 1 of the original manuscript), it is equally important to compare performance with fully update baselines for SNAP-TTA across various adaptation rates in all tables.\\nIn some tables, such as Table 3 (ImageNet-C, commonly used in TTA) in the original paper, the results for fully update baselines are missing. Although Table 1 includes results for an adaptation rate of 1.0, the differences among baselines on CIFAR-10 are relatively minor. As a result, the manuscript gives me an impression of an unfair comparison overall.\\n\\nFollowing your suggestion, **we have included a latency column in the main tables (*Table 1* and *Table 2*) for each adaptation rate and baseline**, including fully adaptive settings, in our most recent revision. 
Additionally, **we have incorporated both accuracy and latency comparisons with full adaptation in *Table 2*** to ensure fairness, similar to *Table 3*.\\n\\nMoreover, **we have updated Figure 4 to include accuracy alongside latency**, ensuring that no main table or figure in the paper now lacks a fully adaptation baseline for comparison as follow your recommendation.\\n>For all latency comparison figures, I strongly recommend including efficiency comparisons across different adaptation rates.\\n\\nTo follow your recommendations about efficiency comparisons across different adaptation rates, **we have extended latency comparison between SNAP-TTA and Original TTA on additional Adaptation Rates(AR) 0.05 and 0.3 for three edge devices (Original Figure 7)**. The results tables are below. They demonstrate that SNAP-TTA consistently reduces latency in rates proportional to adaptation rate, regardless of the adaptation algorithm and edge device. Since the PDF update deadline has passed while running this experiment, we will include these results in the final draft of the paper.\\n\\n*Table A. Additional Latency Measurements (AR=1, 0.3, 0.1, 0.05) on Jetson Nano*\\n| Methods | AR=1 | AR=0.3 | AR=0.1 | AR=0.05 |\\n|:---:|:---:|:---:|:---:|:---:|\\n| Tent | 2.57 | 1.97 (-23.51%) | 1.35 (-47.62%) | 1.19 (-53.75%) |\\n| EATA | 2.52 | 1.90 (-24.70%) | 1.33 (-47.22%) | 1.19 (-52.79%) |\\n| SAR | 5.15 | 2.87 (-44.29%) | 1.60 (-68.94%) | 1.32 (-74.28%) |\\n| RoTTA | 5.24 | 2.91 (-44.46%) | 1.62 (-69.13%) | 1.32 (-74.81%) |\\n| CoTTA | 13.18 | 6.13 (-53.46%) | 2.61 (-80.22%) | 1.82 (-86.19%) |\\n\\n*Table B. 
Additional Latency Measurements (AR=1, 0.3, 0.1, 0.05) on Raspberry Pi 4*\n| Methods | AR=1 | AR=0.3 | AR=0.1 | AR=0.05 |\n|:---:|:---:|:---:|:---:|:---:|\n| Tent | 4.78 | 3.54 (-26.09%) | 3.09 (-35.45%) | 2.35 (-50.87%) |\n| EATA | 5.68 | 3.52 (-38.00%) | 2.87 (-49.45%) | 2.31 (-59.28%) |\n| SAR | 9.45 | 4.88 (-48.34%) | 2.98 (-68.41%) | 2.54 (-73.16%) |\n| RoTTA | 12.07 | 4.95 (-58.97%) | 2.94 (-75.62%) | 2.91 (-75.91%) |\n| CoTTA | 41.77 | 11.80 (-71.76%) | 4.93 (-88.19%) | 3.64 (-91.29%) |\n\n*Table C. Additional Latency Measurements (AR=1, 0.3, 0.1, 0.05) on Raspberry Pi Zero 2W*\n| Methods | AR=1 | AR=0.3 | AR=0.1 | AR=0.05 |\n|:---:|:---:|:---:|:---:|:---:|\n| Tent | 34.96 | 24.67 (-29.42%) | 25.06 (-28.32%) | 17.07 (-51.16%) |\n| EATA | 50.72 | 27.01 (-46.75%) | 28.43 (-43.93%) | 17.00 (-66.48%) |\n| SAR | 74.64 | 47.79 (-35.96%) | 29.56 (-60.40%) | 18.64 (-75.02%) |\n| RoTTA | 154.88 | 86.54 (-44.13%) | 44.08 (-71.54%) | 22.44 (-85.51%) |\n| CoTTA | 622.28 | 228.03 (-63.36%) | 92.01 (-85.21%) | 39.22 (-93.70%) |"}", "{\"title\": \"Global Response\", \"comment\": \"Dear Reviewers and Meta-Reviewers,\n\nWe sincerely thank you for your thoughtful and constructive feedback. Your suggestions have been invaluable in refining our work, and we deeply appreciate the time and effort you dedicated to reviewing our paper. We have carefully addressed all points and incorporated the necessary improvements in the revised version.\n\nWe are pleased that reviewers appreciated:\n- The **novelty** and **practicality** of SNAP-TTA for Sparse Test-Time Adaptation (STTA), achieving **high accuracy with significant latency reduction** across challenging benchmarks. [v9DY, sYH2, BUbi, vuQK]\n- The **technical soundness** of the components of SNAP-TTA for handling domain shifts effectively. [v9DY]\n- The plug-and-play design of SNAP-TTA, offering seamless integration with existing TTA methods and improved **efficiency**. [sYH2]\n- The **strong empirical evidence**, demonstrating broad adaptability across diverse datasets, adaptation rates, and TTA algorithms. [BUbi, vuQK, sYH2]\n\nWe also sincerely thank the reviewers for their valuable suggestions, which helped us identify areas for improvement. In response to this feedback, we have made several major revisions and additions (marked in blue in the paper):\n- **Clarified the statistical significance** of performance gains in terms of efficiency in sparse adaptation scenarios and provided additional latency-performance trade-off evaluations. *(Section 4)* [v9DY]\n- **Extended latency tracking** experiments to include **multiple edge devices** (Raspberry Pi Zero 2W, NVIDIA Jetson Nano) to demonstrate robustness across hardware. *(Appendix E.3)* [v9DY, BUbi, vuQK]\n- Added **detailed memory usage analysis** to highlight the negligible overhead of SNAP-TTA and its **compatibility with memory-efficient methods** like MECTA. *(Appendix E.4 and E.5)* [v9DY, sYH2, vuQK, BUbi]\n- Provided **additional experiments on transformer-based models** (e.g., ViT-base) to demonstrate SNAP-TTA\u2019s versatility. *(Table 5)* [sYH2, vuQK]\n- **Enhanced explanations** of CnDRM\u2019s memory balancing mechanism for better clarity. *(Algorithm 1)* [vuQK]\n- Provided additional experiments and analysis of SNAP-TTA under **continual domain shift** scenarios, demonstrating its adaptability and robustness in dynamic real-world scenarios. *(Appendix F.2)* [v9DY]\n\nWe believe these revisions address the reviewers' concerns and further strengthen our work. Thank you for your constructive feedback and recognition of our contributions.\n\nBest regards,\nThe Authors"}", "{\"title\": \"Responses to Reviewer v9DY (Part 3)\", \"comment\": \">Q2-2: How SNAP-TTA handles highly dynamic data distributions in real-world applications?\n\nTo address highly dynamic data distributions in real-world applications, we tested SNAP-TTA in a continuous domain adaptation scenario. The results in the table below illustrate that the **adaptive domain centroid effectively tracks continuous distribution changes**, ensuring sustained performance improvements. Moreover, the application of SNAP-TTA to the CoTTA algorithm yielded notable results: even with minimal adaptation rates (e.g., 0.1 or 0.05), SNAP-TTA performed slightly better than fully adapted models, demonstrating its reliability under challenging conditions. We have added the detailed analysis and result table in *Appendix F.2*.\n| Adaptation Rate| Method | Gau.|Shot| Imp. | Def.|Gla.| Mot.|Zoom|Snow|Fro.|Fog|Brit. | Cont.| Elas. | Pix. 
| JPEG | Avg.|\\n|------|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|\\n| 1 | CoTTA (Full) | 13.19 | 13.42 | 13.16 | 11.74 | 11.74 | 22.84 |34.62 | 31.47 | 30.29 | 44.22 | 62.45 | 14.96 | 40.68 | 45.25 | 36.61 | 28.44 |\\n| 0.1 | CoTTA | 10.99 | 12.21 | 11.54 | 11.28 | 11.13 |22.08 |34.80 | 30.69 | 29.45 | 43.87 | 61.92 | 12.76 | 40.03 | 44.99 | 36.43 | 27.61 |\\n| | **+ SNAP-TTA** | **15.19** | **15.97** | **15.91** | **13.94** | **14.18** | **24.76** | **36.50** | **32.61** | **31.76** | **46.14** | **63.60** | **15.60** | **42.17** | **46.77** | **38.08** | **30.21** |\\n| 0.05 | CoTTA | 11.04 | 12.25 | 11.73 | 11.62 | 11.25 | 22.05 | 34.89 | 30.73 | 29.50 | 44.09 | 61.87 | 12.87 | 40.15 | 45.06 | 36.53 | 27.71 |\\n| | **+ SNAP-TTA** | **15.20** | **15.89** | **15.93** | **13.81** | **14.15** | **24.74** | **36.68** | **32.51** | **31.71** | **46.11** | **63.48** | **15.73** | **42.20** | **46.69** | **38.05** | **30.19** |\\n___\\n>Q2-3: Additionally, there is no discussion on potential trade-offs between latency reduction and accuracy under different conditions.\\n\\nRegarding the trade-offs between latency reduction and accuracy, adjusting the adaptation rate to reduce latency directly limits the number of samples used for model updates. This limitation makes it more difficult to respond to both skewed data distributions and rapidly changing domains. Consequently, **as latency is reduced, there is a corresponding trade-off with adaptation accuracy compared to full adaptation**. This trade-off is particularly pronounced in scenarios with frequent domain shifts. However, SNAP-TTA mitigates these through its innovative mechanisms, including the Class and Domain Representative Memory (CnDRM) based on moving domain centroids and Inference-only Batch Memory Normalization (IoBMN). 
These features enable SNAP-TTA to achieve significant latency reductions while maintaining high domain adaptation performance, as demonstrated in *Table 6-11 and 21*. The results highlight **SNAP-TTA's ability to simultaneously reduce latency and improve accuracy in diverse scenarios**.\\n___\\n>Q3: The combined CnDRM+IoBMN method performs best, but the contribution of each component is not discussed. A brief explanation of how they work together would improve clarity. The table 5 only shows results at an adaptation rate of 0.1, the authors can mention that the complete data is in appendix.\\n\\nThe combined use of CnDRM and IoBMN is central to the success of our method in Sparse Test-Time Adaptation (STTA), and we appreciate the opportunity to clarify how these components work together.\\n- CnDRM (Class-Domain Representative Memory) ensures that only the most informative and representative samples are selected for adaptation during sparse updates. This targeted selection reduces noise and maximizes the utility of limited adaptation opportunities, addressing the inherent challenges of STTA where updates occur infrequently.\\n- IoBMN (Inference-only Batch-aware Memory Normalization) complements CnDRM by mitigating potential mismatches between the stored memory statistics (derived from adaptation batches) and the current inference data. IoBMN dynamically adjusts normalization by blending stable, representative statistics from memory with recent inference batch data. This ensures robust and adaptive normalization, even when adaptation is skipped for several batches.\\n\\n**The synergy between CnDRM and IoBMN ensures that the model remains both adaptive and robust**, even under sparse adaptation conditions, as demonstrated in our ablation study *(Table 4 and 12-16)*. We have added these detailed explanations in *Section 3.2 and 4*. 
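For intuition, the normalization blending described above can be sketched for a single scalar statistic as follows. This is a simplified, illustrative snippet only: the shrinkage threshold `lam` and the exact blending rule are stand-ins we chose for illustration, not the actual IoBMN implementation.

```python
def soft_shrink(x, lam):
    # Standard soft-shrinkage: ignore small deviations entirely, and keep
    # large deviations shifted toward zero by `lam`.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def blend_stats(mem_mean, batch_mean, lam=0.25):
    # Adjust the stored memory statistic toward the current inference-batch
    # statistic only when the deviation is large enough to suggest a real
    # distribution shift; small fluctuations leave the memory value intact.
    return mem_mean + soft_shrink(batch_mean - mem_mean, lam)

print(blend_stats(0.0, 0.1))   # 0.0  (small deviation ignored)
print(blend_stats(0.0, 0.75))  # 0.5  (large deviation partially followed)
```

The design intent this mimics: when adaptation is skipped for several batches, stored statistics stay stable, yet a genuine shift in the inference stream still pulls the normalization statistics toward the new distribution.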
We hope this explanation provides the clarity you requested, and we are happy to elaborate further if needed.\"}", "{\"title\": \"Responses to Reviewer BUbi (Part 2)\", \"comment\": \">Q2: How memory intensive is the approach? There seem to be some mechanisms in place to keep memory requirements fixed (line 264 ff), but could memory, i.e. RAM, availability still become a bottleneck of the approach on edge systems?\\n\\nSNAP-TTA introduces minimal memory overhead as it requires only (1) the memory buffer in Class and Domain Representative Memory (CnDRM) for storing representative samples, including both feature statistics (mean and variance) and (2) the statistics required for Inference-only Batch-aware Memory Normalization (IoBMN). Therefore, for a batch size $B$, the total memory overhead can be expressed as: $B \\\\times \\\\left( \\\\text{Image Size} + 2 \\\\times \\\\text{Feature Dimension} \\\\times \\\\text{Bytes per Value} \\\\right) + \\\\text{Feature Dimension} \\\\times \\\\text{Bytes per Value} \\\\times 2. $\\n\\nThen, in the case of ResNet18 on CIFAR, the total memory overhead is calculated as **only 116 KB**. Also on resource-constrained device Raspberry Pi 4, benchmarking results on ResNet50 and ImageNet-C show that SNAP-TTA incurs **negligible peak memory usage overhead (<1.8%)** compared to original TTA algorithms. Additionally, by reducing backpropagation frequency, SNAP-TTA **lowers average memory consumption**, enabling more flexible memory allocation for multitasking on edge devices. We have added both details of these memory overhead tracking results and theoretical analysis in *Appendix E.4*. 
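As a quick sanity check of the ~116 KB figure above, the overhead formula can be evaluated with concrete sizes. The sizes used below (batch size 16, uint8-stored 3x32x32 CIFAR images, 512-dimensional ResNet18 features, 4-byte float32 statistics) are our assumptions for illustration; they reproduce the quoted figure but are not stated explicitly in the paper.

```python
def snap_tta_memory_overhead_bytes(batch_size, image_bytes, feature_dim,
                                   bytes_per_value):
    # CnDRM buffer: B samples, each an image plus per-sample mean/variance
    # feature statistics (2 vectors of length feature_dim).
    cndrm = batch_size * (image_bytes + 2 * feature_dim * bytes_per_value)
    # IoBMN: one extra (mean, variance) pair over the feature dimension.
    iobmn = 2 * feature_dim * bytes_per_value
    return cndrm + iobmn

overhead = snap_tta_memory_overhead_bytes(
    batch_size=16,
    image_bytes=3 * 32 * 32,  # CIFAR image stored as uint8 (assumption)
    feature_dim=512,          # ResNet18 penultimate feature size
    bytes_per_value=4,        # float32 statistics
)
print(overhead, overhead / 1024)  # 118784 bytes = 116.0 KB
```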
\n| Methods | Average Mem, Original TTA (MB) | Average Mem, SNAP-TTA (MB) | Peak Mem, Original TTA (MB) | Peak Mem, SNAP-TTA (MB) | Peak Mem Overhead (SNAP - Original, MB) |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| Tent | 764.24 | **751.35** | 822.93 | 828.46 | **5.52 (0.67%)** |\n| CoTTA | 1133.52 | **1099.64** | 1211.21 | 1227.99 | **16.78 (1.13%)** |\n\nFurthermore, SNAP-TTA can **integrate with memory-efficient methods like MECTA**[1] and can independently address latency concerns, enabling broader applicability for edge devices where both memory and latency efficiency are required. To demonstrate this synergy, we have added additional analysis and results in *Appendix E.5*. The table below shows the classification accuracy and peak memory usage for Tent+MECTA and EATA+MECTA configurations with and without SNAP-TTA. Integrating SNAP-TTA with Tent+MECTA **improves accuracy while reducing peak memory usage by approximately 30\\%** compared to the baseline. Similarly, SNAP-TTA boosts the accuracy of EATA+MECTA while maintaining an efficient memory footprint.\n| Method | Accuracy (%) | Max Memory (MB) | Reduced Memory (%) |\n|:---:|:---:|:---:|:---:|\n| Tent | 35.21 | 6805.26 | - |\n| + MECTA | 37.62 | 4620.25 | 32.10 |\n| **+ MECTA + SNAP-TTA** | **39.52** | **4622.12** | **32.08** |\n| EATA | 35.55 | 6541.02 | - |\n| + MECTA | 41.41 | 4512.38 | 31.01 |\n| **+ MECTA + SNAP-TTA** | **42.86** | **4535.44** | **30.66** |\n\nWe sincerely thank the reviewer for highlighting this perspective, and we believe our approach contributes meaningfully to addressing these pressing challenges in edge-device TTA deployment.\n\n**_References_**\n\n[1] Hong, Junyuan, et al. \"Mecta: Memory-economic continual test-time model adaptation.\" International Conference on Learning Representations. 
ICLR, 2023.\\n___\\n>Q3: I am a bit confused about the hyperparameter \\\"adaptation rate\\\": Is this parameter specifically implemented by SNAP or is it implemented by the underlying TTA algorithms? I was wondering because, for example, in Table 1 the accuracy for the TTA algorithms without SNAP-TTA also decreases at lower adaptation rates.\\n\\nWe appreciate your question and would like to clarify the concept of the \\\"adaptation rate\\\" and its implementation. **The adaptation rate is NOT the parameter specifically implemented by SNAP-TTA**. It is a **general concept that affects the frequency of updates and determines how sparsely adaptation occurs**. Therefore, in Table 1, the accuracy of TTA algorithms without SNAP-TTA decreases at lower adaptation rates because fewer updates result in a significant degradation when applying sparse TTA naively. This phenomenon highlights the challenge of sparse adaptation, which SNAP-TTA is designed to address. By introducing CnDRM and IoBMN, SNAP-TTA effectively mitigates the performance degradation associated with lower adaptation rates, ensuring efficient and reliable sparse adaptation even under stringent resource constraints. \\n\\nWe hope this explanation clarifies the distinction and emphasizes SNAP-TTA\\u2019s role in addressing this critical issue.\"}", "{\"title\": \"Responses to Reviewer sYH2 (Part 2)\", \"comment\": \">Q1: I am somewhat confused about the latency differences between, Tent, EATA, SAR and SNAP, all of which are sample selection-based methods. Compared to Tent, EATA does not reduce latency because, this is because in the EATA\\u2019s code, even filtered samples are still used in back-propagation, despite halving the number of samples involved in adaptation. However, in SNAP, latency is reduced. If this reduction is due to engineering optimizations, the same should ideally apply to EATA and SAR for a fair comparison. 
If not, the comparison could be seen as unfair.\\n\\nThank you for your thoughtful feedback and for the opportunity to clarify the latency differences between SNAP-TTA, Tent, EATA, and SAR. I\\u2019d like to take this opportunity to clarify the distinctions and address your concerns. \\n\\nFirst, it is important to note that in all evaluations presented in the paper, SNAP-TTA was not directly compared to EATA or SAR in terms of latency. Instead, the comparisons were made between the original versions of SOTA methods and their enhanced versions, where SNAP-TTA was integrated. This approach ensures that differences in engineering optimizations did not lead to unfair comparisons. \\n\\n**SNAP-TTA uniquely adopts a strategic batch-skipping mechanism to adapt sparsely, significantly reducing the number of backpropagation steps and thus lowering latency**. This method fundamentally differs from EATA and SAR, which focus on filtering samples for loss computation but still perform backpropagation at the batch level. As only a small number of samples are filtered out in these methods, they end up conducting backpropagation on most batches, leading to considerable latency overhead. Additionally, it is worth emphasizing that neither EATA nor SAR was specifically designed with latency reduction as a primary objective. Even under ideal conditions (e.g., without the PyTorch limitations or based on theoretical analysis in their respective papers), these methods still involve backpropagation on at least 50% of the batches, inherently constraining their ability to reduce latency.\\n\\nIn contrast, SNAP-TTA prioritizes latency efficiency, **achieving robust performance while utilizing only 10% or, in some cases, as little as 1%** of the samples for adaptation. This efficiency underscores SNAP-TTA\\u2019s suitability for latency-sensitive applications, particularly in meeting strict Service Level Objectives (SLOs). 
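To make the contrast concrete, here is a toy sketch of the batch-skipping idea (illustrative only; the real scheduler and sample selection are more involved): adaptation runs on every k-th batch, so the number of backward passes scales with the adaptation rate, whereas per-sample filtering still backpropagates on most batches.

```python
def backprop_steps(num_batches, adaptation_rate):
    """Count batches that trigger a backward pass when adapting only on
    every k-th batch, with k derived from the adaptation rate."""
    k = max(1, round(1 / adaptation_rate))
    return sum(1 for b in range(num_batches) if b % k == 0)

# Sparse adaptation cuts backward passes roughly in proportion to the rate:
print(backprop_steps(1000, 1.0))   # 1000 (full adaptation)
print(backprop_steps(1000, 0.1))   # 100
print(backprop_steps(1000, 0.05))  # 50
```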
We hope this clarifies the distinctions between these methods and highlights SNAP-TTA\\u2019s complementary contributions to the field.\\n___\\n> Q2: Another area of confusion is that, based on my experience, EATA generally outperforms Tent and SAR under standard settings. However, the authors\\u2019 results show SAR and Tent performing better than EATA, which contradicts my observations. Could the authors provide further clarification on this?\\n\\nWe sincerely appreciate your insightful observation regarding EATA's performance under standard settings. Indeed, **EATA generally achieves comparable or superior results to Tent and SAR under a standard Adaptation Rate(AR) 1 (Top of *Table 1*)**, aligning with your observations. However, the other experimental results on lower adaptation rates were conducted in sparse adaptation scenarios, where the characteristics of EATA's performance can differ slightly.\\n\\nIn sparse TTA settings, **EATA's reliance on low-entropy samples for adaptation becomes a limitation**. These samples often exhibit **low loss values, providing insufficient gradient updates**, and as a result, they become less informative for domain adaptation. This behavior leads to a **more pronounced performance drop for EATA compared to Tent**, which directly uses the current test batch for adaptation. These challenges in sparse TTA scenarios underline the need for a robust method like SNAP-TTA, which ensures effective adaptation even under such constraints.\\n\\nAdditionally, we would like to acknowledge that EATA's performance, as well as that of other methods, is influenced by various hyperparameters such as batch size, learning rate, and other algorithm-specific settings. While we made our best effort to perform hyperparameter tuning within a reasonable scope, our primary focus in this study was not to compare the absolute performances of SOTA TTA algorithms against each other. 
Instead, the objective was to **demonstrate the effect of applying or not applying SNAP-TTA across these algorithms**. To this end, we focused on unifying the hyperparameters across algorithms and tuning them to a level that achieves reasonable convergence, ensuring a fair baseline for comparison *(Appendix B.1)*.\n\nWe humbly request your understanding on this matter and emphasize that our intent was to **highlight the relative benefits introduced by SNAP-TTA rather than to claim definitive rankings among the existing algorithms under all settings.** We hope this explanation clarifies the observed differences and provides a clearer context for interpreting our results."}", "{\"title\": \"Follow-up from reviewer\", \"comment\": \"Thanks for the authors\u2019 response. I appreciate that CnDRM operates in parallel with memory-optimization methods like MECTA and can be incorporated with MECTA.\n\n### **Unfair comparison**: \n\nRegarding **efficiency comparison**, the authors compare SNAP-TTA (under the sparse setting, i.e., every $k$-th batch) with prior methods like **TENT and EATA (under the fully update setting, i.e., every batch)**. However, for **performance comparisons** across all tables, the authors compare SNAP-TTA with baselines including **TENT and EATA (under the sparse setting ($k$-batch updates)**. This comparison is unfair. While I understand the motivation to adopt the sparse setting to improve average efficiency, this paper risks misleading both the reviewers and readers into believing that SNAP-TTA achieves both higher efficiency and higher performance than baselines. Actually, when baselines are evaluated under the same sparse setting, SNAP-TTA does not demonstrate superior efficiency over them. 
The efficiency is the same when the CnDRM size equals the batch size.\n\nTo ensure a fair comparison, if the efficiency is evaluated with baselines in the *fully update setting*, the performance comparisons should also be under the *fully update setting*. In the main tables, the results for all baselines in their original *fully update setting* need to be included.\n\nMoreover, how does the memory size of CnDRM affect the performance of SNAP-TTA? Given that saving samples may raise privacy concerns, this aspect should be carefully addressed."}", "{\"comment\": \"Thank you for carefully going through our response. We are glad to hear it eased some of your concerns. Please let us know if there is anything we can do to completely resolve your remaining concerns."}", "{\"comment\": \"Thank you for your response. The paper, particularly the comparisons, is much clearer now. I will increase my score. Please also update the tables in the Appendix to include latency comparisons in the future."}", "{\"comment\": \"Thank you for responding to my concerns about the memory efficiency and latency of your approach on edge devices. While I find it very interesting to hear that you believe your approach is feasible for MCUs, it is unfortunate that you were unable to provide results for an actual Cortex-M based MCU (Jetson, RPi4 and RPi Zero2w are all Cortex-A based as far as I know). 
However, I understand that such additional experiments may not have been in the cards given the limited time during the rebuttal, and given that the code you shared relies on python/pytorch and does not appear to be easily executable on a platform that cannot run Linux or similar.\\n\\nBesides that, I would be interested to hear how your approach, and in particular the adjustable adaptation rate, relates to or could be combined with some of the recent developments discussed regarding memory and computation efficient backpropagation and on-device training of quantized DNNs on Cortex-M based MCUs [1, 2].\\n\\nNevertheless, with the additional clarifications and additions made to the paper, it now passes my personal acceptance threshold.\\n\\n[1] Lin, Ji, et al. \\\"On-device training under 256kb memory.\\\" Advances in Neural Information Processing Systems (2022).\\n\\n[2] Deutel, Mark, et al. \\\"On-Device Training of Fully Quantized Deep Neural Networks on Cortex-M Microcontrollers.\\\" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2024).\"}", "{\"summary\": \"This paper addresses the problem of test-time adaptation for out-of-distribution generalization. To reduce the adaptation rate and improve the overall latency of TTA, the authors propose a SNAP framework that selects partial samples for adaptation. Experimental results highlight the potential of the proposed method. However, I still have several concerns as outlined below.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The design of the SNAP method is well-motivated and reasonable from the technical perspective.\\n\\nThe proposed approach is a plug-and-play module that can be integrated with existing TTA methods to reduce adaptation steps and enhance efficiency. 
\\n\\nExperimental results underscore the effectiveness of the proposed method.\", \"weaknesses\": \"On edge devices, the most critical factor in determining whether a TTA method is feasible is actually peak memory usage, as highlighted by MECTA [A]. While this work does reduce the number of adaptation steps, it does not decrease peak memory usage. In this sense, the primary motivation for applying the proposed method to edge devices may be misplaced.\\n\\n[A] MECTA: Memory-Economic Continual Test-time Adaptation\", \"questions\": \"I am somewhat confused about the latency differences between, Tent, EATA, SAR and SNAP, all of which are sample selection-based methods. Compared to Tent, EATA does not reduce latency because, this is because in the EATA\\u2019s code, even filtered samples are still used in back-propagation (due to limitations in PyTorch), despite halving the number of samples involved in adaptation. However, in SNAP, latency is reduced. If this reduction is due to engineering optimizations, the same should ideally apply to EATA and SAR for a fair comparison. If not, the comparison could be seen as unfair.\\n\\nAnother area of confusion is that, based on my experience, EATA generally outperforms Tent and SAR under standard settings. However, the authors\\u2019 results show SAR and Tent performing better than EATA, which contradicts my observations. Could the authors provide further clarification on this?\\n\\nDoes the proposed method reduce latency for a single batch or does it show an average improvement over multiple batches?\\n\\nLastly, would the proposed method be effective for transformer-based models, such as ViT-base?\\n\\nI strongly encourage the authors to move Table 1 to the Appendix and provide additional results on ImageNet-C with various adaptation rates in the main paper, as the CIFAR-10 results are less critical and not sufficiently convincing. 
Currently, Table 1 occupies nearly an entire page, which I feel could be better utilized for more impactful content.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer sYH2 (Part 2)\", \"comment\": \">Lastly, for results with varying learning rates, could you provide additional experiments under different adaptation rates and include more baselines? Addressing these points comprehensively is pretty important, and I would like to raise my score accordingly.\\n\\nFollowing your suggestion, **we have conducted additional experiments not only for an adaptation rate of 0.3 but also for 0. and 0.1**, extending the baselines to **include Tent, CoTTA, and EATA**. The results of these experiments are included in the tables below. **By comparing the best accuracy across various learning rates, SNAP-TTA even outperforms fully adaptive settings in most scenarios.** Since the PDF update deadline has passed while running this experiment, we will include these results in the final draft of the paper.\\n\\n*Table D. ImageNet-C Gaussian Noise, Adaptation rate 0.5*\\n| LR | Tent(Full) | Tent(STTA) | Tent+SNAP | CoTTA(Full) | CoTTA(STTA) | CoTTA+SNAP | EATA(Full) | EATA(STTA) | EATA+SNAP |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| 2e-3 | 2.31 | 4.16 | 6.68 | 13.31 | 12.03 | 14.58 | 0.36 | 0.48 | 0.69 |\\n| 1e-3 | 4.54 | 10.19 | 16.37 | 13.18 | 11.98 | 14.63 | 1.31 | 1.36 | 22.11 |\\n| 5e-4 | 10.22 | 18.43 | **28.36** | 13.15 | 11.95 | **15.17** | 21.96 | 13.97 | 25.42 |\\n| 1e-4 | **27.03** | **25.24** | 28.05 | 13.12 | **11.99** | 15.16 | **29.42** | **28.62** | **30** |\\n| 5e-5 | 26.34 | 22.62 | 26.32 | **13.34** | 12.1 | 14.93 | 29.37 | 27.3 | 28.76 |\\n\\n*Table E. 
ImageNet-C Gaussian Noise, Adaptation rate 0.3*\\n| LR | Tent(Full) | Tent(STTA) | Tent+SNAP | CoTTA(Full) | CoTTA(STTA) | CoTTA+SNAP | EATA(Full) | EATA(STTA) | EATA+SNAP |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| 2e-3 | 2.31 | 7.04 | 13.69 | 13.31 | 11.88 | 14.67 | 0.36 | 0.59 | 0.75 |\\n| 1e-3 | 4.54 | 16.13 | 27.63 | 13.18 | 11.86 | 14.68 | 1.31 | 0.95 | 24.35 |\\n| 5e-4 | 10.22 | **24.96** | **29.95** | 13.15 | 11.85 | 15.11 | 21.96 | 20.96 | 27.72 |\\n| 1e-4 | **27.03** | 23.63 | 26.60 | 13.12 | 11.74 | **15.26** | **29.42** | **27.35** | **29.48** |\\n| 5e-5 | 26.34 | 20.94 | 24.87 | **13.34** | **11.92** | 14.85 | 29.37 | 26.07 | 27.90 |\\n\\n*Table F. ImageNet-C Gaussian Noise, Adaptation rate 0.1*\\n| LR | Tent(Full) | Tent(STTA) | Tent+SNAP | CoTTA(Full) | CoTTA(STTA) | CoTTA+SNAP | EATA(Full) | EATA(STTA) | EATA+SNAP |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| 2e-3 | 2.31 | 18.06 | 27.41 | 13.31 | 10.93 | 14.8 | 0.36 | 1.86 | 9.59 |\\n| 1e-3 | 4.54 | **25.46** | **31.12** | 13.18 | 10.93 | 14.73 | 1.31 | 2.86 | 24.95 |\\n| 5e-4 | 10.22 | 24.71 | 28.01 | 13.15 | 10.92 | **15.18** | 21.96 | 18.76 | **28.09** |\\n| 1e-4 | **27.03** | 22 | 26.21 | 13.12 | **11.74** | 15.13 | **29.42** | **22.43** | 26.1 |\\n| 5e-5 | 26.34 | 16.72 | 19.31 | **13.34** | 10.92 | 14.76 | 29.37 | 20.32 | 23.28 |\\n\\nOnce again, thank you for your kind and detailed feedback. Your suggestions have been invaluable in improving the quality and rigor of our work. Please let us know if there is anything further we can do to completely address your remaining concerns.\"}", "{\"title\": \"Response to Reviewer sYH2 (Follow-up questions)\", \"comment\": \"Thank you very much for taking the time to read our rebuttal and your thoughtful follow-up questions. 
We deeply appreciate your feedback and would like to provide further clarification.\n>To ensure a fair comparison, if the efficiency is evaluated with baselines in the fully update setting, the performance comparisons should also be under the fully update setting. In the main tables, the results for all baselines in their original fully update setting need to be included.\n\nThank you for pointing this out, and we apologize for the confusion caused. To clarify, the efficiency of SNAP-TTA refers to achieving significantly lower latency than fully adaptive methods while maintaining comparable or superior accuracy. \n\nWe also clarify that we did include fair accuracy comparisons with **fully update settings** across all evaluation scenarios (*Table 1, Table 6-Table 11*), and provided dedicated analyses comparing accuracy with full adaptation in *Table 3* and *Figure 5*. We have just revised the captions of the tables to clarify this setup.\n\nThe key findings of our work are two-fold: (1) **traditional algorithms experience accuracy drops under sparse adaptation**, and (2) **SNAP-TTA mitigates these drops while retaining latency benefits, thereby boosting efficiency**. The baselines and structure of the main tables were designed to highlight both aspects together. By leveraging class and domain representative samples, SNAP-TTA significantly narrows the trade-off between efficiency and accuracy in sparse adaptation, even outperforming fully adaptive methods in certain cases (e.g., *Table 3*).\n___\n>Moreover, how does the memory size of CnDRM affect the performance of SNAP-TTA? Given that saving samples may raise privacy concerns, this aspect should be carefully addressed.\n\nAs you pointed out, increasing memory size might introduce challenges such as privacy risks. 
We have conducted additional experiments on the ImageNet-C gaussian noise using the Tent + SNAP-TTA (adaptation rate 0.3) on batch size 16, varying the memory size to evaluate its effect on performance. The results, presented in the table below, show that **increasing memory size does not yield significant performance gains**. This outcome highlights that SNAP-TTA prioritizes storing representative samples, ranked by their proximity to class and domain centroids, which leads to a saturation point in the informativeness of the stored samples. Therefore, to minimize computational overhead while maintaining a plug-and-play structure for easy adaptation, we have designed the memory size to align with the batch size. This ensures efficient adaptation while addressing the potential risks of storing excessive samples. We have added this discussion in the revised paper *Appendix F.4*.\\n| Memory Size | Accuracy (%) |\\n|:---:|:---:|\\n| 16 (Base) | 26.60 |\\n| 32 | 28.44 |\\n| 64 | 28.89 |\\n| 128 | 28.60 |\\n___\\n>One more question: how do you set the learning rate for SNAP-TTA + baseline methods?\\nIn sparse settings, the number of learning iterations for the baselines is limited, indicating that the updates are very insufficient. In this case, have you considered increasing the learning rate for the baselines? Would this lead to improved performance?\\n\\nWe initially set the same learning rate across the adaptation rates to ensure a fair comparison. Agreeing with your comment that the updates might be insufficient for sparse settings, we have conducted additional experiments with varying learning rates, as shown below (evaluated on ImageNet-C gaussian noise via Full/Sparse TTA with adaptation rate 0.3). The results show that higher learning rates improve accuracy for both baselines and SNAP-TTA. Despite this, **the best performance of the naive baseline remains below that of full adaptation** even with a larger learning rate. 
In contrast, **SNAP-TTA surpasses full adaptation, achieving higher accuracy**. However, applying such a high learning rate to full adaptation causes model collapse. Therefore, we selected a stable learning rate that ensures model convergence across all adaptation rates. We have added this result and discussion in *Appendix F.5*.\n| Learning Rate | Tent (Full) | Tent (Sparse TTA) | Tent + SNAP-TTA |\n|:---:|:---:|:---:|:---:|\n| $2 \\times 10^{-3}$ | 2.31 | 7.04 | 13.69 |\n| $1 \\times 10^{-3}$ | 4.54 | 16.13 | 27.68 |\n| $5 \\times 10^{-4}$ | 10.22 | **24.96** | **29.95** |\n| $1 \\times 10^{-4}$ | **27.03** | 23.63 | 26.60 |\n| $5 \\times 10^{-5}$ | 26.34 | 20.94 | 24.87 |\n\nThank you again for your valuable feedback and consideration. We would welcome any additional suggestions or questions you may have."}", "{\"title\": \"More comments\", \"comment\": \"The rebuttal partially addressed some of my concerns. I still think the applications of the proposed methods are limited. I am raising the score from 3 to 5."}", "{\"summary\": \"This paper introduces SNAP-TTA, a sparse Test-Time Adaptation (STTA) framework designed for latency-sensitive applications on resource-constrained edge devices. Traditional TTA methods dynamically adjust models using unlabeled test data to handle distribution shifts, but they often incur high computational costs and latency, making them impractical for real-time edge environments. SNAP-TTA addresses these challenges by introducing two key components: (i) Class and Domain Representative Memory (CnDRM), which selects class-representative and domain-representative samples to enable effective adaptation with minimal data, and (ii) Inference-only Batch-aware Memory Normalization (IoBMN), which corrects feature distribution shifts during inference without additional training. 
By combining SNAP-TTA with five state-of-the-art TTA algorithms, the paper demonstrates that SNAP-TTA achieves significant latency reductions (up to 87.5%) while maintaining competitive accuracy. Experimental results on benchmarks such as CIFAR10-C and ImageNet-C show SNAP-TTA's superior performance in edge settings, making it suitable for real-world, latency-sensitive applications.

**Soundness:** 3 **Presentation:** 3 **Contribution:** 2

**Strengths:**
1. This paper addresses the challenge of achieving high adaptation accuracy while maintaining computational efficiency in Sparse Test-Time Adaptation (STTA), where updates rely on only a small subset of data.
2. SNAP-TTA demonstrates improved classification accuracy across adaptation rates (0.01 to 0.5) compared to baseline TTA methods on CIFAR10-C, CIFAR100-C, and ImageNet-C. At an adaptation rate of 0.1, SNAP-TTA reduces latency by up to 87.5% while mitigating accuracy loss, validating its effectiveness in STTA.
3. IoBMN combines memory statistics from domain-representative samples with current inference batch statistics, using a soft shrinkage function to balance them. This dynamic normalization adjustment during inference effectively addresses domain shift, ensuring model adaptability and performance stability.

**Weaknesses:**
1. The reliance on a fixed confidence threshold in CnDRM may limit adaptability across varying data distributions and could lead to suboptimal sampling.
2. In Table 5, accuracy differences between methods are small and no statistical analysis is provided, making it unclear whether these differences are significant (see detailed comment 4).

**Questions:** I have the following comments:

1. Results for latency and performance metrics on mobile or embedded systems would help to further validate the method's effectiveness and robustness.

2.
Some in-depth analysis of specific limitations would be helpful, such as how memory overhead might impact performance on resource-constrained devices and how SNAP-TTA handles highly dynamic data distributions in real-world applications. Additionally, there is no discussion of potential trade-offs between latency reduction and accuracy under different conditions.

3. The combined CnDRM+IoBMN method performs best, but the contribution of each component is not discussed. A brief explanation of how they work together would improve clarity. Table 5 only shows results at an adaptation rate of 0.1; the authors could mention that the complete data is in the appendix.

**Flag for ethics review:** No ethics review needed. **Rating:** 6 **Confidence:** 2 **Code of conduct:** Yes
___
**Responses to Reviewer vuQK (More questions)**

Thank you very much for taking the time to read our rebuttal and for your thoughtful follow-up questions. We deeply appreciate your feedback and would like to provide further clarification.
> For the rebuttal of W1, could the authors provide the memory footprint of the baseline without any adaptation (i.e., only forward propagation)?

We have added a Source (forward-only) row to the table below. It covers the cache for the model, optimizer, and inference.

| Method | Accuracy (%) | Max Memory (MB) | Reduced Memory (%) |
|--------------------------|:------------:|:---------------:|:------------------:|
| Source (forward-only) | 18.15 | 1766.38 | - |
| Tent | 35.21 | 6805.26 | - |
| + MECTA | 37.62 | 4620.25 | 32.10 |
| **+ MECTA + SNAP-TTA** | **39.52** | **4622.12** | **32.08** |
| EATA | 35.55 | 6541.02 | - |
| + MECTA | 41.41 | 4512.38 | 31.01 |
| **+ MECTA + SNAP-TTA** | **42.86** | **4535.44** | **30.66** |
___
> For the rebuttal of W2, in my understanding, you mean the implementation would skip some batches and only perform backpropagation in some batches?
I found that line 19 of Algorithm 1 says adaptation occurs every k batches. In this sense, I think the proposed method is not friendly for latency-sensitive applications, since latency would increase every k batches. Since latency-sensitive applications often require a very stable inference latency in every batch, the proposed method seems unsuitable for latency-sensitive applications. Or could the authors show some applications for which the proposed method is suitable?

You are correct that the current implementation adapts every k batches, which introduces periodic latency spikes. This reflects practical constraints of the PyTorch framework, where backpropagation must occur as a single block. However, we would like to emphasize that **these spikes are not an inherent limitation of the main idea behind SNAP-TTA** and can be mitigated by **distributing the computational cost (e.g., backpropagation) across batches**. For example, while a fixed model is used for inference on the skipped batches, the single backpropagation step for adaptation could be split into smaller portions and executed incrementally during the k batches. At the end of the k batches, the accumulated updates are applied seamlessly, ensuring that no individual batch experiences a significant latency spike. This approach **retains the benefits of sparse adaptation while smoothing latency, aligning with the 'average latency per batch' focus of our current work**. Although not implemented in the present study, this strategy could be an extension of SNAP-TTA for applications with stringent latency requirements.

Even in its current implementation, SNAP-TTA is suited to applications where occasional latency spikes are acceptable as long as the method ensures overall adaptability and efficiency.
For instance, in **real-time video analytics tasks such as wildlife monitoring or environmental surveillance**, the ability to adapt to **gradual changes in lighting or weather conditions** is often more important than maintaining perfectly consistent latency for every frame. Similarly, in **adaptive anomaly detection** for industrial systems, where **periodic updates help refine detection over time**, minor and infrequent delays are not critical to the system's overall effectiveness. These scenarios highlight contexts where adaptability and robust performance across dynamic environments outweigh the need for absolute latency stability, making SNAP-TTA a practical choice despite its periodic latency variations. While we acknowledge the limitations of the current implementation, **our findings demonstrate the feasibility of sparse adaptation and its potential to enable efficient test-time adaptation in latency-sensitive scenarios**.
___
> For the rebuttal of W6, regarding the hyperparameter k in line 19 of Algorithm 1: the adaptation rate is 1/k, right?

Yes, your understanding is correct.
___
> Some more questions: if the adaptation rate is 0.1 (the setting in Table 2), would the baseline EATA exploit only 10% of the samples for adaptation? And would it further remove some samples from these 10%, leading to fewer than 10% of samples being used for adaptation?

You are correct. In that scenario, **10% of the total samples are used for adaptation**, but **the number of samples ultimately contributing to the loss calculation during adaptation is less than 10%**, because EATA performs an additional step that selects only reliable samples based on entropy. Note that this is not imposed by our sparse adaptation framework.

We hope this answers your questions.
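To make the k-batch schedule and its adaptation rate of 1/k concrete, below is a minimal, self-contained sketch of such a sparse adaptation loop (all names are hypothetical and not from our actual implementation; `sample_filter` stands in for EATA-style entropy-based reliability selection, and `adapt_fn` stands in for the backpropagation update):

```python
# Minimal sketch of a sparse TTA schedule (hypothetical names, not the
# actual SNAP-TTA code). With interval k, adaptation runs on 1 of every
# k batches, i.e., an adaptation rate of 1/k.

def sparse_tta_stream(batches, k, adapt_fn, infer_fn, sample_filter=None):
    """Infer on every batch; trigger one adaptation step every k-th batch."""
    outputs, adapt_steps = [], 0
    for i, batch in enumerate(batches):
        outputs.append(infer_fn(batch))      # inference happens on all batches
        if (i + 1) % k == 0:                 # sparse adaptation step
            samples = batch
            if sample_filter is not None:    # e.g., EATA-style reliability filter
                samples = [s for s in samples if sample_filter(s)]
            adapt_fn(samples)                # single update from this batch
            adapt_steps += 1
    return outputs, adapt_steps

# Example: 100 batches of 16 samples, k=10 -> adaptation rate 0.1.
batches = [[i + 0.1 * j for j in range(16)] for i in range(100)]
used_for_loss = []
outputs, adapt_steps = sparse_tta_stream(
    batches, k=10,
    adapt_fn=used_for_loss.extend,
    infer_fn=len,
    sample_filter=lambda s: s < 50.0,        # stand-in "reliable sample" test
)
# 10 of the 100 batches trigger adaptation (rate 0.1); with the extra
# filter, fewer than 10% of all 1600 samples actually reach the loss.
```

Without `sample_filter`, exactly 10% of the samples (160 of 1600) would reach `adapt_fn`; the extra filter mirrors how EATA's entropy-based selection further reduces that fraction, independently of the sparse schedule itself.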
Thank you again for your valuable feedback, and please don't hesitate to let us know if there are follow-up questions.
___
Thank you for taking the time to review our response and for thoughtfully reconsidering your score. We truly appreciate your feedback and would be happy to address any remaining concerns you may have. Please let us know if there is anything further we can do to fully address your concerns.
___
**Responses to Reviewer vuQK (Part 2)**

> W2: It is unclear whether the proposed method reduces the delay per batch or the average delay (adaptation occurs once every several batches, as shown in Figure 1). If it is the latter, its effectiveness for latency-sensitive applications may be limited, as the inference delay could increase significantly every several batches.

Thank you for raising this important point. Our current implementation reduces average latency by adapting sparsely across multiple batches, as shown in *Figure 1*. This approach reflects practical constraints of our PyTorch implementation, which applies adaptation at discrete intervals. Conceptually, **SNAP-TTA can also distribute backpropagation steps proportionally across batches, avoiding delay spikes while maintaining low latency**. This is an implementation-specific limitation and does not affect the core concept. In future work, we plan to explore dynamic strategies to further optimize latency-sensitive applications. Thank you for your valuable feedback.
___
> W3: The method reduces the cost of backpropagation by filtering samples to decrease the inference latency. However, EATA also uses a similar strategy, but in Figure 2 the delay of EATA is the same as that of Tent, and the delay of SAR is inconsistent with the results reported in its original paper.

Thank you for your insightful observation regarding the relationship between backpropagation cost, sample filtering, and latency reduction.
SNAP-TTA employs a novel approach that significantly reduces backpropagation steps by processing only a small fraction of samples (as low as 10%) while maintaining competitive accuracy. **This distinguishes it from EATA, which filters samples for the loss computation but does not skip backpropagation entirely**. As shown in our experiments and supported by the original EATA paper, the number of backpropagation steps, a key factor influencing latency, is not significantly reduced in EATA, leading to delays comparable to Tent in the previous Figure 2 (current *Figure 4*). Additionally, components added to EATA, such as the computation of sample-filtering criteria and Fisher regularization, require additional calculations, which increase latency on the CPU. This makes the delay gap between the two appear smaller.

Regarding the discrepancies in the SAR latency results, the variation arises from differences in evaluation setups. Our evaluations were conducted on a CPU platform, while the original SAR paper used a GPU-based setup. Despite this, **the overall trends observed in our results align with SAR's original findings, where SAR exhibits slightly higher latency than Tent and EATA**. We appreciate the reviewer's feedback and hope these clarifications address the concerns while emphasizing the unique contributions of SNAP-TTA in achieving substantial latency reductions.
___
> W4: The paper could compare the inference latency in Tables 1, 2, and 3.

Thank you for the valuable suggestion to include latency comparisons in the previous Tables 1, 2, and 3. These tables focus on illustrating the consistent accuracy improvements of SNAP-TTA under sparse adaptation settings.
To maintain clarity and avoid overloading the tables with information, we chose to **highlight latency reductions separately in *Table 4* (revised as *Table 3*)**, where both the latency reductions and the accuracy gaps of SNAP-TTA (at an adaptation rate of 0.1) are detailed relative to the original TTA methods. We believe this separation ensures a clear presentation of both accuracy and latency performance without redundancy. To further address your feedback, we have added more detailed latency reduction results in Appendix E.3, Figure 7, tested on additional edge devices (**Raspberry Pi Zero 2W** and **NVIDIA Jetson Nano**) for readers who wish to explore this aspect more thoroughly. Specifically, the latency tracking results in the table below for CoTTA on **three edge devices demonstrate a substantial reduction in latency with SNAP-TTA compared to fully adapting with the original TTA approach**. These findings highlight the remarkable efficiency and robustness of SNAP-TTA on diverse devices.

| Methods | Latency on Jetson Nano (s) | Latency on RPi4 (s) | Latency on RPi Zero 2W (s) |
|----------------|:---:|:---:|:---:|
| Original TTA | 13.18 | 41.77 | 622.28 |
| **+ SNAP-TTA** | **2.61 (-80.2%)** | **4.93 (-88.2%)** | **92.01 (-85.2%)** |

We hope this approach balances clarity and depth while addressing your concern, and we sincerely appreciate your thoughtful feedback.
___
I appreciate the authors' detailed response.
It addresses some of my concerns, so I will maintain the acceptance rating.
___
**Summary:** The authors propose a sparse test-time adaptation (TTA) framework, which they call SNAP, that improves the latency-accuracy trade-off of existing TTA algorithms to enable practical use of TTA on edge devices. To this end, the authors propose "CnDRM", a method for identifying "important" samples for training based on class- and domain-representative sampling, and "IoBMN", a method for mitigating the effects of domain shifts on the model's internal feature distributions.

**Soundness:** 3 **Presentation:** 3 **Contribution:** 2

**Strengths:**
- The method is promising in that, at least on a Raspberry Pi 4 and when used together with STTA, SNAP provides a significant reduction in latency, as shown in Table 4, while maintaining accuracy comparable to using STTA alone.
- The authors show empirically that SNAP works well with a number of different TTA algorithms (TENT, CoTTA, EATA, SAR, RoTTA) and with different adaptation rates on different datasets (CIFAR10-C, CIFAR100-C, ImageNet-C).

**Weaknesses:**
- The claimed contribution of the paper is that SNAP can make existing TTA algorithms more latency-efficient and suitable for edge devices. However, this is only demonstrated in Table 4 for one algorithm (STTA) and one target device (Raspberry Pi 4). All other experiments focus only on accuracy. And while it is an important and valuable contribution to properly demonstrate that SNAP does not reduce the effectiveness of the TTA algorithms it is applied to, I think the evaluation overall fails to adequately demonstrate the claimed contribution of latency reduction across various edge devices.

**Questions:**
- What are the lower limits of the proposed approach? For example, would SNAP enable TTA on microcontroller units (MCUs) such as Cortex-M MCUs?
- How memory-intensive is the approach?
There seem to be some mechanisms in place to keep memory requirements fixed (line 264 ff.), but could memory (i.e., RAM) availability still become a bottleneck of the approach on edge systems?
- I am a bit confused about the hyperparameter "adaptation rate": is this parameter specifically implemented by SNAP, or is it implemented by the underlying TTA algorithms? I was wondering because, for example, in Table 1 the accuracy of the TTA algorithms without SNAP-TTA also decreases at lower adaptation rates.

**Flag for ethics review:** No ethics review needed. **Rating:** 6 **Confidence:** 3 **Code of conduct:** Yes
___
**Responses to Reviewer sYH2 (Part 3)**

> Q3: Does the proposed method reduce latency for a single batch, or does it show an average improvement over multiple batches?

Our current implementation focuses on demonstrating the conceptual and practical benefits of SNAP-TTA by evaluating its average latency improvement across multiple batches. This stems from limitations in our PyTorch-based implementation, which handles sparse adaptation uniformly across batches. Conceptually, however, **SNAP-TTA can also reduce latency for a single batch by proportionally distributing backpropagation updates across adaptation intervals while performing inference in between**. This is an implementation detail that does not alter the core concept or its effectiveness. In future work, we plan to explore more dynamic adaptation strategies by incorporating device availability and batch-level overhead monitoring to further optimize latency.
___
> Q4: Lastly, would the proposed method be effective for transformer-based models, such as ViT-base?

Thank you for your question and for raising the applicability of our method to transformer-based models such as ViT-base. We are pleased to confirm that **SNAP-TTA is indeed effective for transformer-based architectures**.
To provide further evidence of this, we have **additionally evaluated** SNAP-TTA's performance not only with the Tent algorithm but also with **EATA and SAR on ViT-base models**. These additional results, presented in *Table 5*, demonstrate that SNAP-TTA consistently achieves higher accuracy across all algorithms, further validating its effectiveness and versatility on transformer-based models. Detailed explanations of the SNAP-TTA implementation on ViT are given in *Appendix F.3*. We sincerely appreciate your interest in this aspect of our work and hope the additional details in these sections comprehensively address your concerns.

| Methods | Gau. | Shot | Imp. | Def. | Gla. | Mot. | Zoom | Snow | Fro. | Fog | Brit. | Cont. | Elas. | Pix. | JPEG | Avg. |
|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| EATA | 20.12 | 21.52 | 21.40 | 20.90 | 23.42 | 15.71 | 18.00 | 16.12 | 28.35 | 22.24 | 35.97 | 11.33 | 19.78 | 20.22 | 19.99 | 21.00 |
| **+ SNAP-TTA** | **40.74** | **43.22** | **43.11** | **40.63** | **44.59** | **51.58** | **50.63** | **54.77** | **58.32** | **61.5** | **73.91** | **33.85** | **60.19** | **63.35** | **63.01** | **52.23** |
| SAR | 21.45 | 23.02 | 23.17 | 23.67 | 24.64 | 15.98 | 14.62 | 7.70 | 31.49 | 8.94 | 41.33 | 6.82 | 17.35 | 22.39 | 22.49 | 20.34 |
| **+ SNAP-TTA** | **37.59** | **38.27** | **36.78** | **38.58** | **39.99** | **49.00** | **45.77** | **43.96** | **56.61** | **59.96** | **73.02** | **19.69** | **54.30** | **61.16** | **61.85** | **47.77** |
___
> Q5: I strongly encourage the authors to move Table 1 to the Appendix and provide additional results on ImageNet-C with various adaptation rates in the main paper, as the CIFAR-10 results are less critical and not sufficiently convincing.
Currently, Table 1 occupies nearly an entire page, which I feel could be better utilized for more impactful content.

Thank you for the suggestion. We appreciate the importance of including results for ImageNet-C with various adaptation rates in the main paper to better illustrate the method's impact. While Table 1 was originally designed to highlight consistent gains across various SOTA algorithms under STTA settings, we recognize that presenting ImageNet-C results in the main text could provide stronger support for our contributions. In response to your feedback, we have revised the manuscript as follows:
- We replaced *Table 1* in the main paper with results for ImageNet-C across the major adaptation rates (0.3, 0.1, and 0.05).
- We condensed the CIFAR-10-C and CIFAR-100-C results into concise tables (formerly *Tables 2 and 3*), keeping them in the main paper *(Table 2)*.
- Expanded results, including the detailed breakdowns for CIFAR-10-C, CIFAR-100-C, and ImageNet-C, remain in the appendix for readers who seek comprehensive details *(Appendix C)*.

We hope this revision enhances the clarity and impact of the paper.
___
**Summary:** This paper focuses on Test-Time Adaptation (TTA) for edge devices with limited computational capacity. The authors propose SNAP-TTA, a sparse TTA framework with two key components, Class and Domain Representative Memory (CnDRM) and Inference-only Batch-aware Memory Normalization (IoBMN), aiming to reduce model adaptation frequency and data usage while maintaining accuracy.

**Soundness:** 2 **Presentation:** 2 **Contribution:** 2

**Strengths:** The proposed SNAP-TTA framework addresses the latency-accuracy trade-off issue of existing TTA methods for edge devices in some cases.
It reduces latency while achieving competitive accuracy, as demonstrated by extensive experiments on multiple benchmarks and through integration with several existing TTA algorithms.

**Weaknesses:**
- In the background section, the mention of applications like real-time health monitoring for IoT edge devices may not be entirely appropriate, as these devices often have extremely limited memory.
- With limited memory, backward propagation and gradient descent are difficult or even impossible on these devices. In this sense, memory should perhaps be prioritized over latency as the primary concern.
- It is unclear whether the proposed method reduces the delay per batch or the average delay (adaptation occurs once every several batches, as shown in Figure 1). If it is the latter, its effectiveness for latency-sensitive applications may be limited, as the inference delay could increase significantly every several batches.
- The method reduces the cost of backpropagation by filtering samples to decrease the inference latency. However, EATA also uses a similar strategy, yet in Figure 2 the delay of EATA is the same as that of Tent, and the delay of SAR is inconsistent with the results reported in its original paper.
- The paper could compare the inference latency in Tables 1, 2, and 3.
- In Table 6 for ImageNet-C, only the Tent method is compared, ignoring other methods, which could provide more comprehensive and convincing results.
- In the experiments, it is not clear how the number of participating samples is controlled to meet the adaptation rate. Is it through adjusting the $\tau_{conf}$ hyperparameter? Also, it is not described how the other compared methods meet the adaptation rate.
- The description of lines 10-15 of the algorithm in the paper is relatively brief, considering its importance for the proposed method.
More detailed explanation in the paper would assist readers in understanding.

**Questions:** N/A

**Flag for ethics review:** No ethics review needed. **Rating:** 5 **Confidence:** 5 **Code of conduct:** Yes
___
**Paper Decision**

**Decision:** Reject

**Metareview:** This paper proposes a sparse test-time adaptation (TTA) framework called SNAP-TTA, which aims to address the latency-accuracy trade-off in existing TTA methods when applied to edge devices. The main benefit of SNAP-TTA is reducing latency. The motivation of the paper is reasonable, and it can be seamlessly integrated into existing TTA methods. However, the main issues lie in some of the experimental setups and the overall contribution. Some reviewers raised concerns about memory usage during experiments and compatibility across different hardware scenarios, and the authors' experimental additions and analyses did not fully address these concerns. The AC reviewed the paper and all discussions and believes that the study still needs improvement.

**Additional comments on reviewer discussion:** The common concerns raised by several reviewers include the robustness of the proposed method across different edge devices, memory usage, and the lack of Transformer-based experiments. Although the authors provided analysis and explanations, they did not fully address the reviewers' concerns.
___
One more question: how do you set the learning rate for SNAP-TTA + baseline methods?

In sparse settings, the number of learning iterations for the baselines is limited, indicating that the updates are very insufficient. In this case, have you considered increasing the learning rate for the baselines? Would this lead to improved performance?
___
**Responses to Reviewer vuQK (Part 1)**

We sincerely appreciate the time and effort you have devoted to reviewing our work and offering such valuable feedback.
Below, we have addressed each of your points in detail.
___
> W1: In the background section, the mention of applications like real-time health monitoring for IoT edge devices may not be entirely appropriate, as these devices often have extremely limited memory. With limited memory, backward propagation and gradient descent are difficult or even impossible on these devices. In this sense, memory should perhaps be prioritized over latency as the primary concern.

We appreciate the reviewer's observation about the significance of memory constraints on edge devices. It is indeed true that limited memory can make backpropagation and gradient descent challenging, especially for extremely resource-constrained devices. However, recent advancements in memory-efficient algorithms and models [1-3] have significantly mitigated this issue. **These methods have enabled backpropagation-based TTA to achieve competitive performance even on devices with highly restricted memory.**

Despite these advancements in addressing memory challenges, **another equally critical barrier remains: adaptation latency.** Most SOTA TTA methods [4-8] **still face significant hurdles in real-world latency-sensitive applications** due to their high adaptation latency, even when memory concerns are resolved. For example, many SOTA TTA methods require substantial computational resources for operations like backpropagation, augmentation, and ensembling, which makes them impractical for maintaining the inference frame rates required by latency-sensitive edge applications.

Furthermore, latency concerns are not limited to edge devices. With the growing speed and volume of data streams, latency-sensitive applications across various domains demand efficient adaptation strategies. Notably, **a recent study [9] has highlighted the latency issues in TTA** and proposed practical evaluation strategies for TTA algorithms, but **no solutions have been provided yet**.
Our work specifically targets this underexplored issue in on-device TTA research by proposing SNAP-TTA, which addresses latency without sacrificing accuracy.

Finally, we emphasize that **SNAP-TTA's memory overhead is negligible (<1.8%)**, as the detailed analysis added in *Appendix E.4* shows. Furthermore, **SNAP-TTA can integrate with memory-efficient methods like MECTA** and can independently address latency concerns, enabling broader applicability for edge devices where both memory and latency efficiency are required. To demonstrate this synergy, we have added additional analysis and results in Appendix E.5. The table below shows the classification accuracy and peak memory usage for the Tent+MECTA and EATA+MECTA configurations with and without SNAP-TTA. Integrating SNAP-TTA with Tent+MECTA improves accuracy while **reducing peak memory usage by approximately 30%** compared to the baseline. Similarly, SNAP-TTA boosts the accuracy of EATA+MECTA while maintaining an efficient memory footprint.

| Method | Accuracy (%) | Max Memory (MB) | Reduced Memory (%) |
|------|:-------:|:----------:|:-----:|
| Tent | 35.21 | 6805.26 | - |
| + MECTA | 37.62 | 4620.25 | 32.10 |
| **+ MECTA + SNAP-TTA** | **39.52** | **4622.12** | **32.08** |
| EATA | 35.55 | 6541.02 | - |
| + MECTA | 41.41 | 4512.38 | 31.01 |
| **+ MECTA + SNAP-TTA** | **42.86** | **4535.44** | **30.66** |

**_References_**

[1] Hong, Junyuan, et al. "MECTA: Memory-economic continual test-time model adaptation." International Conference on Learning Representations. ICLR, 2023.

[2] Song, Junha, et al. "EcoTTA: Memory-efficient continual test-time adaptation via self-distilled regularization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2023.

[3] Jia, Hong, et al. "TinyTTA: Efficient test-time adaptation via early-exit ensembles on edge devices." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
NeurIPS, 2024.

[4] Wang, Dequan, et al. "Tent: Fully test-time adaptation by entropy minimization." International Conference on Learning Representations. ICLR, 2021.

[5] Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2022.

[6] Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International Conference on Machine Learning. ICML, 2022.

[7] Niu, Shuaicheng, et al. "Towards stable test-time adaptation in dynamic wild world." International Conference on Learning Representations. ICLR, 2023.

[8] Yuan, Longhui, Binhui Xie, and Shuang Li. "Robust test-time adaptation in dynamic scenarios." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2023.

[9] Alfarra, Motasem, et al. "Evaluation of test-time adaptation under computational time constraints." International Conference on Machine Learning. ICML, 2024.
___
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Your insightful feedback and constructive suggestions have greatly enhanced the quality of our work, and we are especially pleased to hear that the revisions have clarified the comparisons, leading to an increased score.

Thank you as well for recommending the inclusion of latency comparisons in the Appendix tables.
We will ensure this improvement is reflected in the final update.

We would be grateful for any additional suggestions or questions you may have.

Best regards,

Authors.
___
**Some more questions**

For the rebuttal of W1, could the authors provide the memory footprint of the baseline without any adaptation (i.e., only forward propagation)?

For the rebuttal of W2, in my understanding, you mean the implementation would skip some batches and only perform backpropagation in some batches? I found that line 19 of Algorithm 1 says adaptation occurs every k batches. In this sense, I think the proposed method is not friendly for latency-sensitive applications, since latency would increase every k batches. Since latency-sensitive applications often require a very stable inference latency in every batch, the proposed method seems unsuitable for latency-sensitive applications. Or could the authors show some applications for which the proposed method is suitable?

For the rebuttal of W6, regarding the hyperparameter k in line 19 of Algorithm 1: the adaptation rate is 1/k, right?

---

Some more questions: if the adaptation rate is 0.1 (the setting in Table 2), would the baseline EATA exploit only 10% of the samples for adaptation? And would it further remove some samples from these 10%, leading to fewer than 10% of samples being used for adaptation?
___
**Responses to Reviewer v9DY (Part 1)**

We sincerely appreciate the time and effort you have devoted to reviewing our work and offering such valuable feedback. Below, we have addressed each of your points in detail.
___
> W1: The reliance on a fixed confidence threshold in CnDRM may limit adaptability across varying data distributions and could lead to suboptimal sampling.

Thank you for your thoughtful feedback regarding the use of a fixed confidence threshold in CnDRM.
We understand that a static threshold may limit adaptability across varying data distributions, potentially leading to suboptimal sampling in certain scenarios.

To clarify, the confidence threshold in our framework serves primarily as a safeguard to filter out highly noisy samples, particularly in the unsupervised domain adaptation setting, where pseudo-label reliability can vary. It is not the sole sampling criterion but rather an initial step in a multi-stage process designed to prioritize informative samples *(Algorithm 1)*, so most samples typically surpass this easy threshold.

Additionally, **fixed thresholds are commonly used in SOTA TTA works to establish a baseline for evaluating new approaches** [1-4]. We followed this standard to emphasize the feasibility of our method as an initial investigation into improving sample efficiency. That said, we fully agree that dynamically adapting the threshold based on data characteristics could further enhance the adaptability and performance of CnDRM. We have included this consideration as a promising direction for future research in Section 5. We sincerely appreciate your valuable suggestion and will strive to address this aspect in greater depth in subsequent work.

**_References_**

[1] Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International Conference on Machine Learning. ICML, 2022.

[2] Niu, Shuaicheng, et al. "Towards stable test-time adaptation in dynamic wild world." International Conference on Learning Representations. ICLR, 2023.

[3] Gong, Taesik, et al. "NOTE: Robust continual test-time adaptation against temporal correlation." Advances in Neural Information Processing Systems. NeurIPS, 2022.

[4] Gong, Taesik, et al. "SoTTA: Robust test-time adaptation on noisy data streams." Advances in Neural Information Processing Systems.
NeurIPS, 2024.\\n___\\n>W2: In table 5, accuracy differences between methods are small, without statistical analysis, making it unclear if these differences are significant (In Detailed comments 4)\\n\\nThank you for highlighting the need for statistical analysis in the ablation table. While the accuracy differences may appear small, **the gains of each component of SNAP-TTA are consistent across very diverse experimental setups**, including integration with TTA algorithms, adaptation rates, and datasets (all variations are in *Appendix D, Tables 12-16*). This consistency strongly supports the effectiveness of our proposed components, CnDRM and IoBMN.\\n\\nFurthermore, while the absolute accuracy improvements might seem modest, SNAP-TTA\\u2019s primary focus lies in optimizing latency-sensitive applications. In this context, the crucial performance metric is not just accuracy but the efficiency achieved through sparse adaptation compared to fully adaptive TTA. To directly address the statistical significance of these results, Table 3 provides a detailed analysis comparing SNAP-TTA with naive sparse TTA. Specifically, at an adaptation rate of 0.1, SNAP-TTA achieves an average **latency reduction of 60%** relative to full adaptation while maintaining a **minimal accuracy drop of only 1.1%** *(Table A)*. This **significant efficiency gain** demonstrates the practical value of our approach, even when absolute accuracy differences appear small.\\n\\n*Table A.
Statistical analysis comparing latency and accuracy of SNAP-TTA with fully adaptive TTA methods across various state-of-the-art TTA algorithms.*\\n| | **Latency (s)** | | **Accuracy (%)** | |\\n|---------|:---------------:|:-----------------:|:----------------:|:-----------------:|\\n| Methods | Original TTA | **SNAP-TTA** | Original TTA | **SNAP-TTA** |\\n| Tent | 3.97 | **2.20 (-44.0%)** | 80.43 | **78.95 (-1.8%)** |\\n| CoTTA | 71.68 | **8.96 (-87.5%)** | 78.00 | **78.83 (+1.1%)** |\\n| EATA | 3.93 | **2.18 (-44.6%)** | 81.56 | **78.61 (-3.6%)** |\\n| SAR | 5.75 | **2.30 (-60.1%)** | 79.05 | **78.06 (-1.2%)** |\\n| RoTTA | 5.93 | **2.25 (-62.0%)** | 77.00 | **77.07 (+0.1%)** |\"}", "{\"comment\": \"I know that you report the results of baseline methods under the fully updated setting in some tables, as an adaptation rate of 1 corresponds to the fully updated setting, so there is no need for clarification.\\n\\nHowever, the key question is that efficiency/latency is coupled with the adaptation rate. A low adaptation rate leads to improved latency for all methods. As such, if you compare efficiency/latency with fully updated baselines (in Figure 1 of the original manuscript), it is equally important to compare performance with fully updated baselines for SNAP-TTA across various adaptation rates in all tables.\\n\\nIn some tables, such as Table 3 (ImageNet-C, commonly used in TTA) in the original paper, the results for fully updated baselines are missing. Although Table 1 includes results for an adaptation rate of 1.0, the differences among baselines on CIFAR-10 are relatively minor. As a result, the manuscript gives me an impression of an unfair comparison overall.\\n\\nI understand and accept that your method achieves better performance under various sparse adaptation rates. 
However, to ensure fairness, both latency and performance should be compared under the same adaptation rate and with clear descriptions (as in Table 1 in the revised paper), to avoid the risk of misleading reviewers and readers. For all latency comparison figures, I strongly recommend including efficiency comparisons across different adaptation rates.\\n\\nLastly, for results with varying learning rates, could you provide additional experiments under different adaptation rates and include more baselines? Addressing these points comprehensively is pretty important, and I would like to raise my score accordingly.\"}", "{\"title\": \"Responses to Reviewer sYH2 (Part 1)\", \"comment\": \"We sincerely appreciate the time and effort you have devoted to reviewing our work and offering such valuable feedback. Below, we have addressed each of your points in detail.\\n___\\n>W1: On edge devices, the most critical factor in determining whether a TTA method is feasible is actually peak memory usage, as highlighted by MECTA. While this work does reduce the number of adaptation steps, it does not decrease peak memory usage. In this sense, the primary motivation for applying the proposed method to edge devices may be misplaced.\\n\\nWe appreciate the reviewer\\u2019s thoughtful comment on the critical importance of peak memory usage for edge devices. We agree that memory constraints are a key consideration, and we acknowledge that SNAP-TTA does not directly address peak memory usage compared to methods such as MECTA. However, our work focuses on a complementary and equally critical bottleneck: **adaptation latency**.\\n\\nWhile MECTA and similar approaches have made significant strides in mitigating memory constraints, the issue of high adaptation latency in TTA methods has become increasingly prominent. 
Most SOTA TTA algorithms [1\\u20135] involve computationally intensive processes, such as backpropagation, augmentation, and ensembling, which render them impractical for latency-sensitive applications on edge devices that require strict inference frame rates. Recent studies [8\\u20139] have begun emphasizing the growing need to address adaptation latency for edge devices, though concrete guidelines or frameworks for latency-focused TTA are still lacking. **Specifically, a recent study [6] has highlighted the latency issues in TTA and proposed practical evaluation strategies for TTA algorithms, but no solutions have been provided yet**.\\n\\nSNAP-TTA fills this gap by providing **the first general strategy to reduce adaptation latency significantly while maintaining performance**. Unlike other efficient TTA approaches [7,8] that rely on custom algorithms or specialized model structures, SNAP-TTA integrates seamlessly with existing SOTA TTA algorithms. This allows it to retain their benefits while making them practically deployable on edge devices by reducing latency without introducing substantial accuracy trade-offs.\\n\\nFurthermore, **SNAP-TTA can be integrated with the existing memory-efficient TTA method MECTA** [9]. By combining these approaches, we can address both **peak memory usage** and **latency concerns**, enabling broader applicability across various edge environments. To demonstrate this synergy, we have added additional analysis and results in Appendix E.5. The table below shows the classification accuracy and peak memory usage for Tent+MECTA and EATA+MECTA configurations with and without SNAP-TTA. Integrating SNAP-TTA with Tent+MECTA improves accuracy, while **reducing peak memory usage by approximately 30\\\\%** compared to the baseline. 
Similarly, SNAP-TTA boosts the accuracy of EATA+MECTA while maintaining an efficient memory footprint.\\n| Methods| Accuracy (%) | Max Memory (MB) | Reduced Memory (%) |\\n|--------------------------|:------------:|:---------------:|:------------------:|\\n| Tent | 35.21 | 6805.26 | - |\\n|+ MECTA |37.62 | 4620.25 |32.10 |\\n| **+ MECTA + SNAP-TTA** |**39.52** | **4622.12** | **32.08** |\\n| EATA| 35.55 | 6541.02 |- |\\n| + MECTA | 41.41 | 4512.38 | 31.01 |\\n| **+ MECTA + SNAP-TTA** | **42.86** |**4535.44** | **30.66** |\\n\\n**_References_**\\n[1] Wang, Dequan, et al. \\\"Tent: Fully test-time adaptation by entropy minimization.\\\" International Conference on Learning Representations. ICLR, 2021.\\n\\n[2] Wang, Qin, et al. \\\"Continual test-time domain adaptation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2022.\\n\\n[3] Niu, Shuaicheng, et al. \\\"Efficient test-time model adaptation without forgetting.\\\" International Conference on Machine Learning. ICML, 2022.\\n\\n[4] Niu, Shuaicheng, et al. \\\"Towards stable test-time adaptation in dynamic wild world.\\\" International Conference on Learning Representations. ICLR, 2023. \\n\\n[5] Yuan, Longhui, Binhui Xie, and Shuang Li. \\\"Robust test-time adaptation in dynamic scenarios.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2023.\\n\\n[6] Alfarra, Motasem, et al. \\\"Evaluation of Test-Time Adaptation Under Computational Time Constraints.\\\" International Conference on Machine Learning. ICML, 2024.\\n\\n[7] Song, Junha, et al. \\\"Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, 2023.\\n\\n[8] Jia, Hong, et al. \\\"TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices.\\\" The Thirty-eighth Annual Conference on Neural Information Processing Systems. 
NeurIPS, 2024.\\n\\n[9] Hong, Junyuan, et al. \\\"Mecta: Memory-economic continual test-time model adaptation.\\\" International Conference on Learning Representations. ICLR, 2023.\"}" ] }
0vMLqSdsKW
A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns
[ "Vibhhu Sharma", "Shantanu Gupta", "Nil-Jana Akpinar", "Zachary Chase Lipton", "Liu Leqi" ]
As recommender systems become widely deployed in different domains, they increasingly influence their users’ beliefs and preferences. Auditing recommender systems is crucial as it not only ensures the improvement of recommendation algorithms but also provides ways to assess and address ethical concerns surrounding them. In this work, we view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics. Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them—notably, the lack of metrics for auditing user agency while accounting for the multi-step dynamics of the recommendation process. We leverage our framework and propose two classes of such metrics: future- and past-reachability and stability, that measure the ability of a user to influence their own and other users’ recommendations, respectively. We provide both a gradient-based and a black-box approach for computing these metrics, allowing the auditor to compute them under different levels of access to the recommender system. Empirically, we demonstrate the efficacy of methods for computing the proposed metrics and inspect the design of recommender systems through these proposed metrics.
[ "recommender systems", "causality", "evaluation", "auditing", "machine learning" ]
Reject
https://openreview.net/pdf?id=0vMLqSdsKW
https://openreview.net/forum?id=0vMLqSdsKW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sW7D1FDelG", "sFkDOgvXPC", "n6OaEGtomH", "ksLcEU89WT", "gqhybUXrra", "dB16BVYmpY", "TCfKmXci3A", "SOV7sbLojL", "SAicdsc6d4", "QAwG4AEilr", "ON2L44n1J2", "MPrcnbjWpZ", "MDPemmXdWa", "IZRctCtCaI", "EPhbMVA7PM", "CDAt7jKlBU", "Bidll62tLO", "5ytKGUBfEe", "4jiW5fB7ss", "22nX5h2A6l" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730694519273, 1732430071376, 1732506734188, 1730865604415, 1734636434638, 1729931002932, 1733084415940, 1732429937857, 1732428339094, 1732428734142, 1733085150917, 1733213934682, 1733283492363, 1732428033432, 1733283438044, 1737523875605, 1730361880179, 1732429359917, 1732429050897, 1732429641500 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_FqFb" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_f6Sk" ], [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_WsZQ" ], [ "ICLR.cc/2025/Conference/Submission7929/Area_Chair_oYS8" ], [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_f6Sk" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_FqFb" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7929/Reviewer_C1ia" ], [ 
"ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ], [ "ICLR.cc/2025/Conference/Submission7929/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a unified causal framework for auditing recommender systems, specifically to address ethical concerns such as user agency, stability, and reachability. It categorizes auditing metrics from a causal perspective and introduces two key metrics, past- and future-reachability, and stability, which measure a user\\u2019s ability to influence recommendations. The empirical studies evaluate the metrics on different recommender models, highlighting the trade-offs between user influence on recommendations and system stability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The causal approach offers a novel way to address ethical issues, providing a structured method for defining and calculating user-centric metrics.\\n\\nOffering both gradient-based and black-box methods for metric computation enables broader application\", \"weaknesses\": \"The framework\\u2019s reliance on specific causal assumptions and models,this may reduce its generalizability across diverse recommender systems.\\n\\nThe paper lacks a discussion about the differences between recommendation systems.\", \"questions\": \"What are the differences and impacts of applying this model to various recommendation models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer 4 (continued)\", \"comment\": \"**Q5-2**: \\\"The current experiments focus on analyzing existing models within the proposed framework but do not clarify why this framework or these metrics are more valid than existing auditing methods. 
Additional experiments, such as straightforward case studies, are needed to further validate the framework.\\\"\\n\\n**Our Response**:\\nWe note the general deficiencies we see with existing auditing metrics in Section 3.2, primarily that they are association-based. Specifically, we cite metrics like diversity and popularity that are defined over an observational distribution.\\nIn Section 4, we elaborate on how the notion of user agency ties into ethical concerns associated with recommender systems: primarily the formation of filter bubbles that can serve to only reinforce biases and insulate users from opposing viewpoints, and the possibility of adversaries having an outsized ability to manipulate the recommender system. In Section 2, we mention how studying user agency requires answering causal questions pertaining to how a recommender would respond to interventions made in the user\\u2019s behavior over time, which cannot be answered by association-based metrics alone.\", \"we_also_talk_about_the_deficiencies_of_existing_interventional_metrics_in_section_2\": \"they are often limited in the kind of interventions they consider, with few works tapping into new types of interventions and ethical concerns. In addition, most are one-step metrics, ignoring the recommender-user interaction dynamics over time.\\nOverall, to our knowledge, this is the first work that concretely proposes a causal framework for auditing recommender systems, and allows for a general procedure for defining auditing metrics (Section 3.2) besides the ones we define in Section 4.\\n\\n**Q6**: \\\"What is the relationship between the proposed metrics and recommendation performance? Does a stronger recommendation model perform better according to these metrics?\\\"\\n\\n**Our Response**:\\nWe argue that the proposed metrics act as a separate measure to evaluate the \\u2018strength\\u2019 of a recommender system. 
A recommender system that performs extremely well on traditional accuracy/recall-style evaluation metrics may not perform well on the auditing metrics we define. For instance, these traditional evaluation metrics cannot directly detect whether or not a user is in a filter bubble since this would require an interventional perspective. A recommender system with such an issue can still perform highly on these metrics as the user is sucked deeper into the bubble and is unaware that the bubble even exists since they are insulated from other items.\\nA strong recommender system should perform highly on these traditional metrics in addition to the new metrics we propose, which value the capacity of the recommender system to promote user agency.\\n\\n**Q7**: \\\"The metric comparisons in Figure 3 are described but lack corresponding explanations. For instance, why do some items show 'a user\\u2019s recommended list is either heavily affected by the actions of an adversary or is minimally affected by them'?\\\"\\n\\n**Our Response**:\\nFigure 3 has 2 sets of plots. The reachability plots are scatterplots where each dot represents a (user, item) pair. Some (user, item) pairs have higher base reachability (x-coordinate) than others, just by virtue of the initial match between the user and item as perceived by the recommender system. Each (user, item) pair then also has a different max-reachability (y-coordinate), because of the differences in final match between the user and item as perceived by the recommender system after the primal perturbation.\\n\\nSimilarly, the stability plots are histograms of stability values for each (user, adversary) pair. 
Some adversaries have a greater effect on the recommendations of some target users than other adversaries; for instance, if a user very similar to the target user changes their ratings, the recommendations of the target user are more likely to change than if someone whose preferences have little correlation with the preferences of the target user changes their ratings. We also see that this distribution is mostly bimodal, implying that for these recommender systems on this dataset, for each user, every other user can broadly either be an effective adversary or an ineffective adversary.\\n\\n**Q8**: \\\"Is the time horizon parameter in the experimental parameters equivalent to k in Definitions 4.1 and 4.2? If not, how is k set in the experiments?\\\"\\n\\n**Our Response**:\\nYes, the time-horizon parameter refers to \\u2018k\\u2019 in Definitions 4.1 and 4.2.\"}", "{\"comment\": \"Thanks for the authors' response. While some of my concerns have been addressed, there are still unresolved issues that need further clarification:\\n\\n### Response to **A1-1**\\nYou seem to have misunderstood **Q1-1**. As you mentioned, \\\"the item recommended to the user at the next time step ($A_{i,t+1}$) depends on the rating the user gives in the preceding timestep ($O_{i,t}$),\\\" but in your Equation 1, you only consider:\\n$\\n\\\\mathbb{P}(A_{i,t+k} = j \\\\mid do(O_{i,t+k-1} = f(A_{i,t+k-1})), do(O_{i,t+k-2} = f(A_{i,t+k-2})), \\\\cdots)\\n$\\nThis does not account for the effect of $do(O_{i,t+k-2} = f(A_{i,t+k-2}))$ on $A_{i,t+k-1}$. That is, after the intervention $O_{i,t+k-2}$, $A_{i,t+k-1}$ remains in the state of an un-changed recommendation. My question in **Q1-1** was: why did you not consider the impact of $do(O_{i,t+k-2} = f(A_{i,t+k-2}))$ on $A_{i,t+k-1}$?\\n\\n### Response to **A1-2**\\nBased on **A1-2**, can I understand that **future reachability** considers the effect of $do(O_{i,t+k-2} = f(A_{i,t+k-2}))$ on $A_{i,t+k-1}$, whereas **past reachability** does not? 
If so:\\n1. Why is this distinction not reflected in Equation 1?\\n2. What is the practical significance of **past reachability**? Why analyze the expected reachability of an item under unchanging recommendations? What does this imply in real-world scenarios?\\n\\n### Response to **A2**\", \"my_question_is\": \"when $k>1$, do Equation 5 and the formulas in Section 5.2 need to be adjusted? Or would this require retraining to evaluate the metrics? I would like the authors to explain this in detail.\\n\\n### Response to **A3**\\nI understand that you made assumptions to facilitate analysis. However, I want to know how significant the impact of these assumptions would be on practical auditing. Previous works using similar assumptions do not necessarily imply that these assumptions will not affect the auditing process.\\n\\n### Response to **A5-1**\\nI understand that your method is not limited to specific datasets. However, my concern is that some of the conclusions you draw in the experimental section, such as \\u201cAs we increase $\\\\beta$, or decrease the stochasticity of the system, user recommendations tend to become more stable,\\u201d are data-dependent. If consistent results cannot be achieved across multiple datasets, these conclusions will be hard to accept. As other reviewers have also raised this point, I strongly encourage you to include results from additional datasets.\\n\\n---\\n\\nConsidering the authors\\u2019 response, I am willing to raise my score for this paper. However, as the above issues, particularly **Q1-1**, **Q1-2**, and **Q5-1**, have not been adequately addressed, I am unable to provide a positive score for this submission.\"}", "{\"summary\": \"The paper presents a unified causal framework for auditing recommender systems with focus on user agency. The authors make three main contributions:\\n1. A general causal framework that formalizes interventional and counterfactual metrics for auditing recommender systems.\\n2. 
Two novel classes of metrics - reachability and stability, to measure user agency while accounting for recommendation dynamics.\\n3. Efficient computational methods for these metrics under different levels of access to the recommender system.\\n\\nThe framework is evaluated empirically using both matrix factorization and recurrent neural network based recommenders, showcasing interesting trade-offs between stability and reachability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The technical claims and methodology are very well-supported. The causal framework is rigorously developed with clear mathematical formulations. The empirical evaluation is comprehensive, with well-designed ablation studies showing impact of various stochasticity levels, time horizon lengths and model architecture choices.\\n\\n2. Novel formalization of reachability and stability metrics presented capture both immediate and long-term effects, handle multi-step recommendation dynamics and account for both user and adversary perspectives.\\n\\n3. The paper is generally well-written and logically structured. The causal framework is presented clearly with helpful examples.\", \"weaknesses\": \"1. The assumption of static user/item embeddings during gradient computation could be better justified. Additional experiments showing impact of this simplification would be valuable.\\n\\n2. The empirical evaluation focuses on movie recommendations - testing on other domains (e.g. social media, e-commerce, etc.) would strengthen the framework's generalizability claims.\\n\\n3. The choice of distance metrics for stability measures (L2 distance) could be better justified. Adding discussion of metric sensitivity to adversarial perturbations and analysis of the relationship between local and global notions of reachability would be useful.\\n\\n4. 
The paper presents limited discussion of computational complexity and scalability analysis, particularly for large-scale recommender systems. The paper could analyze how the methods scale with number of users, items and time horizon.\", \"questions\": \"1. How does the computational complexity scale with the number of users, items and time horizon? What are recommended approaches for large-scale recommender systems?\\n\\n2. What are the practical implications of assuming static embeddings during gradient computation? How would the results change with full retraining?\\n\\n3. Could the framework be extended to handle more complex recommendation scenarios like slate recommendations or contextual bandits?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a causal framework for auditing recommender systems with a focus on ethical considerations such as user agency, stability, and reachability. It introduces two novel classes of metrics: future- and past-reachability and stability, which measure a user\\u2019s influence over their own recommendations and others\\u2019. The authors also propose two computational approaches, gradient-based and black-box, for calculating these metrics. Empirical evaluations demonstrate the utility of these metrics using matrix factorization and recurrent recommender network models, revealing insights into the trade-offs between stability and reachability.\\n\\nThe paper has several strengths. It tackles an important and underexplored problem in auditing recommender systems from a causal perspective, offering a framework that has the potential to generate new insights into user-centric evaluation metrics. The proposed metrics are novel, and the framework itself is methodologically rigorous and well-motivated. 
Additionally, the authors provide detailed explanations and experiments, improving the paper\\u2019s clarity and reproducibility.\\n\\nHowever, the paper has notable weaknesses that undermine its potential contribution. First, the motivation for focusing on user agency as an ethical concern, while mentioned, lacks a compelling connection to practical scenarios and actionable insights. The authors do not sufficiently justify the real-world impact of their metrics, especially when tied to the ethical concerns they aim to address. Second, the experimental evaluation is narrow in scope, relying solely on the ML-1M dataset, which limits the generalizability of the findings across diverse recommendation domains. Reviewers consistently raised concerns about the lack of validation on other datasets, as this significantly impacts the credibility of the conclusions. Additionally, the framework relies on several assumptions, such as static user and item embeddings, that simplify the computational challenges but reduce the practical applicability of the methods for real-world, large-scale systems. This gap between theoretical contributions and practical implications is a significant drawback. Finally, there are ambiguities in the definitions of the proposed metrics (e.g., the relationship between past and future reachability), and some experimental results lack sufficient explanation or justification.\\n\\nThe most critical reasons for the recommendation to reject this paper are the lack of experimental diversity, insufficient justification of the practical implications of the metrics, and the significant gap between theoretical assumptions and real-world applicability. 
While the paper demonstrates a well-developed theoretical framework, its limitations in empirical validation and practical relevance prevent it from making a strong case for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors engaged actively with the reviewers, addressing several points of criticism. Reviewer WsZQ appreciated the rigor of the framework but raised concerns about the assumption of static embeddings and the lack of experiments on diverse datasets. While the authors provided theoretical justifications for the assumptions and emphasized the domain-agnostic nature of their framework, they did not conduct additional experiments on other datasets, which remained a major concern. Reviewer FqFb raised concerns about the generalizability of the framework and the lack of a discussion on differences across recommendation models. The authors clarified that the framework is theoretically applicable across models, but the empirical evaluation did not adequately substantiate this claim. Reviewer C1ia questioned the motivation and real-world implications of the metrics, as well as specific experimental results, such as contradictions in stability metrics. The authors provided detailed responses but did not fully resolve the underlying concerns about practical relevance and experimental scope. Finally, Reviewer f6Sk highlighted ambiguities in metric definitions and concerns about oversimplifications in experimental setups. While the authors offered explanations for some of these points, critical issues such as dataset limitations and the impact of simplifying assumptions on practical auditing remained inadequately addressed.\\n\\nIn weighing these discussions, it became evident that while the authors made efforts to clarify their theoretical contributions, they did not sufficiently address the reviewers\\u2019 primary concerns regarding empirical validation, generalizability, and practical relevance. 
These unresolved issues ultimately outweighed the strengths of the paper, leading to the decision not to accept this submission.\"}", "{\"summary\": \"In this work, the authors adopt a causal perspective on recommender system auditing and present a general method for defining auditing metrics. Within this overarching causal auditing framework, they categorize existing audit metrics. Leveraging their framework, they propose two types of metrics: future-/past-reachability and stability, which respectively measure a user's ability to influence recommendations for themselves and for other users. Additionally, they introduce a gradient-based method and a black-box method for calculating these metrics, allowing auditors to assess them at various levels of system access. Empirically, the authors demonstrate the effectiveness of their proposed metrics and use them to examine the design of recommender systems.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Auditing recommender systems is a highly meaningful area of study, and the paper contributes valuable insights.\", \"The article is well-written and clearly articulated, making complex concepts accessible.\", \"It provides methods for auditing from both white-box and black-box perspectives, catering to different levels of system access.\"], \"weaknesses\": [\"**W1**: **Ambiguity in Definitions**: The definitions in the article lack detailed explanations, which may lead to ambiguity. For example:\", \"**Q1-1**: In Definitions 4.1 and 4.2, the authors consider only the intervention on $O_{i,t}$ without accounting for its effect on $A_{i,t+1}$. Why was this setting chosen?\", \"**Q1-2**: Are Definitions 4.1 and 4.2 consistent? Specifically, does past-$k$ at time $t+k$ equal future-$k$ at time $t$? 
It would be helpful if the authors could address this question both intuitively and formally.\", \"**W2**: **Limited Analysis Scope**: The analysis in Section 5 is confined to $k=1$, representing only a special case of the broader definitions provided.\", \"**Q2**: Please describe how the corresponding white-box and black-box methods would operate when $k > 1$.\", \"**W3**: **Practical Applicability Concerns**: There is a gap between the theoretical propositions and practical scenarios.\", \"**Q3**: Proposition 5.1 requires fixing item embeddings, while Proposition 5.2 requires fixing user embeddings. Since these conditions are difficult to meet in real recommender systems, how does this gap affect practical auditing?\", \"**W4**: **Lack of Experimental Rationale**: Certain experimental setups lack clear justification.\", \"**Q4-1**: Section 6.1 mentions different policies for future and past metrics. Why was this setup chosen? Please explain the rationale behind this decision.\", \"**W5**: **Incomplete Experimental Validation**:\", \"**Q5-1**: The use of a single dataset limits the experimental scope and generalizability of the findings.\", \"**Q5-2**: The current experiments focus on analyzing existing models within the proposed framework but do not clarify why this framework or these metrics are more valid than existing auditing methods. Additional experiments, such as straightforward case studies, are needed to further validate the framework.\"], \"questions\": [\"In addition to **Q1** to **Q5** mentioned in the Weaknesses, I have several other questions:\", \"**Q6**: What is the relationship between the proposed metrics and recommendation performance? Does a stronger recommendation model perform better according to these metrics?\", \"**Q7**: The metric comparisons in Figure 3 are described but lack corresponding explanations. 
For instance, why do some items show \\\"a user\\u2019s recommended list is either heavily affected by the actions of an adversary or is minimally affected by them\\\"?\", \"**Q8**: Is the time horizon parameter in the experimental parameters equivalent to $k$ in Definitions 4.1 and 4.2? If not, how is $k$ set in the experiments?\", \"I am happy to engage in further discussion, and if these issues are addressed, I am willing to reconsider the score.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer f6Sk\", \"comment\": \"**Response to \\u201cResponse to A1-1\\u201d**:\\nYour interpretation written in A1-2 is correct. For interventional metrics (the future-facing metrics), the item rated by the user in the next time step is directly the item recommended to the user. The recommendation in subsequent timesteps depends on the rating given in the previous timestep and therefore considers the impact of the do-intervention that you have mentioned. On the other hand, for counterfactual metrics (the past-facing metrics), the items the user rates are fixed to be the actual items they rated in the past (the user\\u2019s true history), and aren\\u2019t affected by the user\\u2019s intervention in the previous timestep. Note that the recommender system is still updated after every interaction.\\n\\n**Response to \\u201cResponse to A1-2\\u201d**:\\nYour understanding is correct. Comparing Equation 1 and Equation 2, the difference is that in Equation 1, the user\\u2019s choice follows the recommendation dynamics, which allows A_{i, t+k-1} to be influenced by previous outcomes. 
The recommender system is updated (through updating the user vector) after every interaction the user has, and so the item the user is recommended at time t+k-1 depends on the rating the user gives to the item recommended to them at time t+k-2 (this is the do-intervention that you mention). Meanwhile, in Equation 2, the recommendation history is set to be the user\u2019s factual history and the users are only allowed to change their feedback for them. We specify this in the manuscript where after Definition 4.1, we point out that the intervention $f_{t'}(A_{i,t + t'- 1})$ would affect the distribution of $A_{i,t + t'}$, the recommendations in the next time step, in the case of future reachability.\n\nThe practical significance of past reachability is that it helps answer the crucial question: \"Could a user have received different recommendations in the present if they had rated their historically viewed items differently?\" Under Definition 4.4, we provide a brief explanation of the differences between past and future facing metrics, and we reiterate the utility of past-facing metrics mentioned there here: \u201cPast-/counterfactual metrics focus on how user behavior (e.g., the items a user chose to rate) contributes to the recommendations they receive in the present. For example, consider a social media user who primarily receives recommendations for cat videos in the present. Counterfactual/past metrics help us understand whether the narrowness in the recommendations can be attributed to the recommendation system, or the user\u2019s behavior in the past, which imply vastly different conclusions in terms of user agency. 
If engaging with cat videos unfavorably in the past would have led to more diverse recommendations in the present, the observed narrow recommendations do not imply a violation of user agency.\\u201d\\nWe noted in the earlier reply how it is necessary to fix an item selection rule (otherwise, the user could just select the item they have to reach), and setting these items to be the actual sequence of items the user rated is a natural choice since they reflect the preferences of the user to a large extent. An intuitive distinction with future-facing metrics is that for past-facing metrics, the user\\u2019s choice of item essentially reflects their true preference (since it was an item they actually selected), while for future-facing metrics, the user\\u2019s choice of item reflects their preference as perceived by the recommender system. Both points of view offer us distinct, but related, insights into user agency.\"}", "{\"title\": \"Reply to Reviewer 4 (continued)\", \"comment\": \"**Q3**: Proposition 5.1 requires fixing item embeddings, while Proposition 5.2 requires fixing user embeddings. Since these conditions are difficult to meet in real recommender systems, how does this gap affect practical auditing?\\\"\\n\\n**Our Response**:\\nFirst, we clarify that the definitions of the proposed metrics don\\u2019t require specific assumptions on how the recommender is trained (i.e., not requiring the recommender system to have fixed item embeddings). Second, it is worth noting that obtaining the reachability and stability metrics involves solving a challenging optimization problem. Inherently, it is a bilevel optimization problem: the inner optimization problem is to retrain the recommender system, and the outer optimization problem is with respect to users\\u2019 own ratings (or other users\\u2019 ratings). In general, bilevel optimization is an active research area and can be quite difficult to solve [1]. 
To simplify this optimization problem both algorithmically and computationally, we choose to have fixed item embeddings when obtaining reachability, similar to assumptions made in [2,3,4]. If one were to solve the full bilevel optimization problem (e.g., retraining the recommender system entirely) for example when auditing the system over a longer time horizon, one may adopt black-box optimization methods similar to the one discussed in Sec 5.2.\n\n[1] B. Colson, et al. \"An overview of bilevel optimization.\" In Annals of operations research (2007).\n[2] Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, and Alex Beutel. Measuring recommender system effects with simulated users. CoRR, abs/2101.04526, 2021. URL https://arxiv.org/abs/2101.04526.\n[3] Mihaela Curmei, Sarah Dean, and Benjamin Recht. Quantifying availability and discovery in recommender systems via stochastic reachability. In Proceedings of the 38th International Conference on Machine Learning, pp. 2265\u20132275. PMLR, 2021.\n[4] Sarah Dean, Sarah Rich, and Benjamin Recht. Recommendations and user agency: The reachability of collaboratively-filtered information. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* \u201920, pp. 436\u2013445, New York, NY, USA, 2020. Association for Computing Machinery.\n\n**Q4-1**: \"Section 6.1 mentions different policies for future and past metrics. Why was this setup chosen? Please explain the rationale behind this decision.\"\n\n**Our Response**:\nWe justify our choice for a different selection rule for future facing metrics in Appendix C. Future-k metrics are more computationally intensive as they require optimizing k\u00d7|V| parameters to account for all possible item trajectories. However, we made this more tractable by using a deterministic (top-1) recommendation policy rather than stochastic sampling. 
We detail the effects in Appendix G.3.\\n\\n**Q5-1**: \\\"The use of a single dataset limits the experimental scope and generalizability of the findings.\\\"\\n\\n**Our Response**:\\nWe agree that additional experiments would provide for a valuable comparison. However, we would like to clarify that one of our key contributions is providing a framework for quantitatively assessing user agency metrics. The framework applies to a broad class of recommender systems and is domain agnostic. The framework requires only three fundamental components to define a causal metric: (1) an intervention, (2) an outcome, and (3) a functional that maps the outcome to a real value. These components are abstract and can be instantiated in any application domain. While we demonstrate the framework's utility through movie recommendations, the mathematical formulation itself makes no domain-specific assumptions.\\n\\n\\n(continued in next comment)\"}", "{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We thank you for appreciating the rigor of our work:\\n\\n\\\"The technical claims and methodology are very well-supported. The causal framework is rigorously developed with clear mathematical formulations. The empirical evaluation is comprehensive, with well-designed ablation studies showing impact of various stochasticity levels, time horizon lengths and model architecture choices.\\\"\\n\\nWe address your concerns individually as follows.\\n\\n**Weakness 1**: \\\"The assumption of static user/item embeddings during gradient computation could be better justified. Additional experiments showing impact of this simplification would be valuable.\\\"\\n**Question 2**: \\\"What are the practical implications of assuming static embeddings during gradient computation? 
How would the results change with full retraining?\"\n\n**Our Response**:\nFirst, we clarify that the definitions of the proposed metrics don\u2019t require specific assumptions on how the recommender is trained (i.e., not requiring the recommender system to have fixed item embeddings). Second, it is worth noting that obtaining the reachability and stability metrics involves solving a challenging optimization problem. Inherently, it is a bilevel optimization problem: the inner optimization problem is to retrain the recommender system, and the outer optimization problem is with respect to users\u2019 own ratings (or other users\u2019 ratings). In general, bilevel optimization is an active research area and can be quite difficult to solve [1]. To simplify this optimization problem both algorithmically and computationally, we choose to have fixed item embeddings when obtaining reachability, similar to assumptions made in [2,3,4]. If one were to solve the full bilevel optimization problem (e.g., retraining the recommender system entirely) for example when auditing the system over a longer time horizon, one may adopt black-box optimization methods similar to the one discussed in Sec 5.2.\n\n[1] B. Colson, et al. \"An overview of bilevel optimization.\" In Annals of operations research (2007).\n[2] Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, and Alex Beutel. Measuring recommender system effects with simulated users. CoRR, abs/2101.04526, 2021. URL https://arxiv.org/abs/2101.04526.\n[3] Mihaela Curmei, Sarah Dean, and Benjamin Recht. Quantifying availability and discovery in recommender systems via stochastic reachability. In Proceedings of the 38th International Conference on Machine Learning, pp. 2265\u20132275. PMLR, 2021.\n[4] Sarah Dean, Sarah Rich, and Benjamin Recht. Recommendations and user agency: The reachability of collaboratively-filtered information. 
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* \u201920, pp. 436\u2013445, New York, NY, USA, 2020. Association for Computing Machinery.\n\n**Weakness 2**: \"The empirical evaluation focuses on movie recommendations - testing on other domains (e.g. social media, e-commerce, etc.) would strengthen the framework's generalizability claims.\"\n\n**Our Response**:\nWe agree with your suggestion that additional evaluation could strengthen the framework\u2019s generalizability claims. However, we would like to clarify that our framework's theoretical foundations are domain-agnostic by design. The framework requires only three fundamental components to define a causal metric: (1) an intervention, (2) an outcome, and (3) a functional that maps the outcome to a real value. These components are abstract and can be instantiated in any application domain. One of our key contributions is providing a framework for quantitatively defining and assessing user agency metrics; the framework applies to a broad class of recommender systems and is domain agnostic. While we demonstrate the framework's utility through movie recommendations, the mathematical formulation itself makes no domain-specific assumptions.\n\n(response continued in next comment)\"}", "{\"title\": \"Reply to Reviewer 1 (continued)\", \"comment\": \"**Weakness 3**: \"The choice of distance metrics for stability measures (L2 distance) could be better justified. Adding discussion of metric sensitivity to adversarial perturbations and analysis of the relationship between local and global notions of reachability would be useful.\"\n\n**Our Response**:\nWe have used Hellinger Distance as the distance metric for stability plots. We note that the definition of stability allows for any metric that measures the distance between two probability distributions. 
In Appendix C, we specifically show how using L2 distance, and other metrics sharing some similar properties, can make computing past-stability a quasi-convex problem where the solution can be easily obtained by simply checking boundary points of the domain, justifying its use for the paper.\\n\\nBoth reachability and stability are already defined as an optimization problem that finds the optimal perturbation for a specific objective. For reachability, we solve for the best perturbation a user can make to maximize their probability of reaching a desired item. For stability, we find the optimal perturbation an adversary can make to maximize the distance between a user's initial and final recommendation distributions. During the audit, this is the only perturbation that takes place - the recommender system then evolves according to its natural dynamics. Therefore, analyzing sensitivity to additional adversarial perturbations is redundant, as we've already characterized the maximum impact possible within our well-defined perturbation space.\\n\\nRegarding the relationship between local and global reachability - our framework actually captures both through different parameter settings. The past-k and future-k reachability metrics can examine both local effects (when k=1 or with small rating changes) and global effects (with larger k values or more substantial rating modifications). This flexibility allows auditors to examine both incremental changes and more dramatic shifts in recommendation patterns.\\n\\n**Weakness 4**: \\\"The paper presents limited discussion of computational complexity and scalability analysis, particularly for large-scale recommender systems. The paper could analyze how the methods scale with number of users, items and time horizon.\\\"\\n**Question 1**: \\\"How does the computational complexity scale with the number of users, items and time horizon? 
What are recommended approaches for large-scale recommender systems?\\\"\\n\\n**Our Response**:\\nAs mentioned, our methods scale with three key factors: number of users (n), items (m), and time horizon (k). For both past-k metrics, we optimize k parameters (one per timestep). Future-k metrics are more computationally intensive as they require optimizing k\\u00d7|V| parameters to account for all possible item trajectories. However, we made this more tractable by using a deterministic (top-1) recommendation policy rather than stochastic sampling. We detail the effects in Appendix G.3. In practice, the metrics can be computed on sampled subsets of users and items for large-scale systems while still providing meaningful audit insights.\\n\\n**Question 3**: \\\"Could the framework be extended to handle more complex recommendation scenarios like slate recommendations or contextual bandits?\\\"\\n\\n**Our Response**:\\nYes, the framework can be extended to handle the more complex recommendation scenarios mentioned in the question. The general framework itself (Section 3) does not depend on specifics on the recommendation algorithms. Operationalizing for computing the metrics is the part that relies on these specifics. For instance, in slate recommendation, now instead of computing the probability of the item under consideration being the recommended item (for reachability), we will be optimizing the probability of the item being part of the slate of recommended items instead.\"}", "{\"title\": \"Reply to Reviewer f6Sk\", \"comment\": \"**Response to \\\"Response to A2\\\"**:\\n\\nThe expressions in G.1 and G.2 are the corresponding equations to the first equation in Section 5 (before Section 5.1) for 'k' taking values other than 1. These equations essentially tell us the gradient terms we need to compute (or need access to). 
\\n\\n\\nFor k=1, this term is:\\n\\n\\n$\\\\nabla_{f_1} \\\\mathbb{P}(A_{i, t+1} = j | \\\\text{do}(O_{i,t} = f_1(A_{i,t})))$\\n\\n\\nWhile for k=2 (for example), these terms, as shown in Section G.1, are:\\n\\n\\n$\\\\nabla_{f_1} \\\\sum_{a_{i,t+1}} \\\\left[ P(A_{i,t+1} = a_{i,t+1} | \\\\text{do}(O_{i,t} = f_1(a_{i,t}))) \\\\cdot P(A_{i,t+2} = j | \\\\text{do}(O_{i,t+1} = f_2(a_{i,t+1})), \\\\text{do}(O_{i,t} = f_1(a_{i,t}))) \\\\right]$\\n\\n\\nand\\n\\n\\n$\\\\nabla_{f_2} P(A_{i,t+2} = j | \\\\text{do}(O_{i,t+1} = f_2(a_{i,t+1})), \\\\text{do}(O_{i,t} = f_1(a_{i,t})))$\\n\\n\\nafter simplifying\\n\\n\\n$\\\\nabla_{f_1} E_{A_{i,t}, A_{i,t+1}} \\\\left[ P(A_{i,t+2} = j | \\\\text{do}(O_{i,t} = f_1(A_{i,t})), \\\\text{do}(O_{i,t+1} = f_2(A_{i,t+1}))) \\\\right]$\\n\\n\\nand\\n\\n\\n$\\\\nabla_{f_2} E_{A_{i,t}, A_{i,t+1}} \\\\left[ P(A_{i,t+2} = j | \\\\text{do}(O_{i,t} = f_1(A_{i,t})), \\\\text{do}(O_{i,t+1} = f_2(A_{i,t+1}))) \\\\right]$ respectively.\\n\\n\\nEquation 5 simply gives us the probability of item 'j' being recommended to user 'i'. In the stochastic setting we consider, this is proportional to $\\\\exp(\\\\beta \\\\mathbf{p}_i^\\\\top \\\\mathbf{q}_j)$, where $p_i$ is the user vector corresponding to 'i' after the specified perturbation has been made, and $q_j$ is the item vector corresponding to 'j'.\\n\\n\\nEquation 5 hides away all the complexity within $p_i$ and $q_j$. For reachability, the user vector $p_i$ is updated after every interaction. We specify this under equation 5 where we note that these user and item vectors are the updated user and item vectors depending on the intervention. For k=1, this update is after only one interaction. For k=n, this update is after 'n' interactions. 
So the relation in Equation 5 is valid for all k, only $p_i$ or $q_j$ will change depending on the specified do-condition, so this would take the form:\n\n\n$\\mathbb{P}(A_{i, t+k} = j | \\text{do}(O_{i,t} = f_1(A_{i,t})), \\ldots, \\text{do}(O_{i,t+k-1} = f_k(A_{i,t+k-1}))) \\propto \\exp(\\beta \\mathbf{p}_i^\\top \\mathbf{q}_j)$\n\n\nWhere $p_i$ is now the updated user vector after simulation according to the specified do-conditions.\n\n\nSimilarly, in Section 5.2, for other values of 'k', the do-condition will be different as specified in Definition 4.1:\n\n\n$\\mathbb{P}(A_{i, t+k} = j | \\text{do}(O_{i,t} = f_1(A_{i,t})), \\ldots, \\text{do}(O_{i,t+k-1} = f_k(A_{i,t+k-1})))$\n\n\nbut this probability will still be given by Equation 5, with the user vector $p_i$ correspondingly updated after running the simulation according to the specified do-condition.\n\n\n**Response to \"Response to A3\"**:\n\n\nAs you point out, these assumptions facilitate the analysis, providing clean theoretical results to obtain the metrics. For practical auditing, with no assumption on how the model is updated, our proposed methodology for obtaining these metrics still works (i.e., solving the optimization problems in equation 1,2,3,4. Section 5 shows how this looks for equation 1 with k=1, and the specific gradient term we need to compute. Similarly, Section 5.2 shows how this is done for black-box access). We only use the assumptions for the experiments in the paper because of relatively simpler implementations.\nFor practical auditing, with the current approach of only updating one vector, we only foresee a potential negative impact for recommender systems that are retrained after every few interactions, and even then, only if successive models differ significantly from each other. 
\\nSince practical recommender systems are not fully retrained after every single interaction, and only after fixed intervals, the auditing procedure is still valid between these intervals and accounts for small changes brought about by every interaction without the need for full retraining, albeit with some noise.\\n\\nWe are actively working to conduct the additional experiments mentioned in **\\\"Response to A5-1\\\"** and aim to include the results in the final version.\"}", "{\"comment\": \"Thank you for the rebuttal. I would like to keep my current score.\"}", "{\"title\": \"Follow-up comment by Authors\", \"comment\": \"Dear Reviewer C1ia, we hope our rebuttal has addressed your concerns. We value your feedback and would appreciate any further questions or thoughts before the end of the rebuttal period.\"}", "{\"title\": \"Summary of Primary Conceptual Contributions of our work\", \"comment\": \"We are very thankful for the reviewers' feedback and wanted to outline the key conceptual contributions of our paper to clarify common questions raised by some reviewers.\\n\\nWe note that while much attention has been given to auditing various ethical concerns of recommender systems, there has been comparatively little work conducted on measuring user agency. This gap is particularly notable given the rich line of qualitative work emphasizing the importance of user agency [1]. User agency is a user\\u2019s power over their own recommendations compared to recommendations being driven by external forces like other users\\u2019 behaviors or algorithmic profiling [2]. It can be compromised in various ways, several of which we target with the metrics we propose. \\nTwo primary ethical concerns associated with the violation of user agency that we study here are the existence of filter bubbles and susceptibility to adversarial attacks. 
Both pose important ethical threats, as filter bubbles reinforce biases and restrict opposing viewpoints while adversarial attacks essentially game the recommender system to fulfill an objective.\n\nWe illustrate how existing metrics used to audit recommender systems are either 'association-based' and cannot audit user agency, or make assumptions that are not in line with the true dynamics of the recommender system. The unified causal framework that we propose and the two classes of metrics we introduce resolve these issues.\n\nIn summary,\n\n1) We provide a general causal framework for defining new causal metrics and categorizing existing\nmetrics for auditing recommender systems in a principled manner (Section 3)\n\n2) Using our proposed framework, we develop two classes of metrics for measuring user agency while\naccounting for the dynamics of the recommendation process: past- and future-reachability and\nstability (Section 4). We provide effective ways to compute the metrics, allowing the auditor to\nhave different levels of access to the systems (Section 5).\n\n3) Empirically, we investigate two common classes of recommender systems in terms of our proposed\nuser agency metrics and found that higher stochasticity in a recommender system will help with\nstability but harm reachability (Section 6).\n\n[1] Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi. Recommender systems and their ethical\nchallenges. AI & SOCIETY, 35(4):957\u2013967, Feb 2020a.\n\n[2] Katja de Vries. Identity, profiling algorithms and a world of ambient intelligence. Ethics and\nInformation Technology, 12(1):71\u201385, January 2010.\"}", "{\"title\": \"Follow-up comment by Authors\", \"comment\": \"Dear Reviewer WsZQ, we hope our rebuttal has addressed your concerns. 
We value your feedback and would appreciate any further questions or thoughts before the end of the rebuttal period.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this work, authors pay attention to recommender system auditing from a causal perspective, and point out the lack of metrics for auditing user agency for the recommendation process. Therefore, two metrics are proposed, including future- and past-reachability and stability, which can measure the impact of users on their own and other users. To calculate these metrics, the authors also design a gradient-based and a black box approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1- This paper provides comprehensive details on the background of the problem.\n\nS2- The authors give detailed experiment settings which improves the reproducibility.\", \"weaknesses\": \"W1-The motivation of this paper is not quite clear. For example, what\u2019s the actual relationship between user agency and ethical concerns?\n\nW2-The experiments are only conducted on ML-1M, which are insufficient to explain the universality of the conclusions since the recommendation scenarios are diverse. Experiments on at least one dataset from other recommendation scenarios are needed.\n\nW3- In Figure 3 for the distribution of past instability values, for MF, Past-5 shows lower proportion of 0.0 than Past-1, but for RRN, Past-5 presents higher proportion of 0.0 than Past-1. Could you please explain the reason for this contrary result?\", \"questions\": \"Please see them in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank you for your feedback. Below we address your concerns individually.\n\n**Weakness 1**: \"The motivation of this paper is not quite clear. 
For example, what\\u2019s the actual relationship between user agency and ethical concerns?\\\"\\n\\n**Our Response**:\\nWe elaborated on how user agency connects to ethical concerns for recommender systems in Section 4 (L195 - L207). While, to the best of our knowledge, little work on quantifying user autonomy and agency has been conducted, there is a rich line of qualitative work emphasizing the importance of user agency in recommender systems. These works conceptualize user agency as a user\\u2019s power over their own recommendations vs recommendations being driven by external forces like other users' behaviors or algorithmic profiling [2]. User agency can be compromised in various ways, several of which we target with the metrics we propose:\\n\\n-First, recommender systems can enforce filter bubbles that restrict users from diverse content feeds and amplify biases [1]. Looking at this problem from the context of the reachability metric we propose, we consider scenarios where certain groups of items are unreachable for a user in spite of them taking actions specifically in order to attempt to reach these items. This implies the existence of a filter bubble, which could be harmful. Without counterfactual and interventional metrics as the ones we proposed, it is hard to attribute the effect of the algorithm versus the effect of user's own behaviors on creating filter bubbles, which motivates our causal definitions of reachability.\\n\\n-Second, recommendation algorithms can be vulnerable to strategic behaviors and adversarial attacks that alter recommendations for unrelated users [1]. For example, consider content duplication attacks on e-commerce platforms [3]. In this setting, providers game the recommendation system by duplicating item listings with little to no changes to maximize the probability of recommendation. 
Maintaining user agency over their own recommendations in this scenario requires stability in the recommendations (e.g., recommendations of users cannot be manipulated easily), motivating our definitions of counterfactual and interventional stability metrics. As such, user agency ties directly to the core ethical concerns of recommender systems.\n[1] Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi. Recommender systems and their ethical challenges. AI & SOCIETY, 35(4):957\u2013967, February 2020.\n[2] K. De Vries. Identity, profiling algorithms and a world of ambient intelligence. In Ethics and information technology (2010).\n[3] M. Frobe, et al. The effect of content-equivalent near-duplicates on the evaluation of search engines. In European Conference on Information Retrieval (2020).\n\n**Weakness 2**: \"The experiments are only conducted on ML-1M, which are insufficient to explain the universality of the conclusions since the recommendation scenarios are diverse. Experiments on at least one dataset from other recommendation scenarios are needed.\"\n\n**Our response**:\nWe would like to clarify that our framework's theoretical foundations are domain-agnostic by design. The framework requires only three fundamental components to define a causal metric: (1) an intervention, (2) an outcome, and (3) a functional that maps the outcome to a real value. These components are abstract and can be instantiated in any application domain. One of our key contributions is providing a framework for quantitatively assessing user agency metrics. The framework applies to a broad class of recommender systems and is domain agnostic. 
While we demonstrate the framework's utility through movie recommendations, the mathematical formulation itself makes no domain-specific assumptions.\n\n**Weakness 3**: \"In Figure 3 for the distribution of past instability values, for MF, Past-5 shows lower proportion of 0.0 than Past-1, but for RRN, Past-5 presents higher proportion of 0.0 than Past-1. Could you please explain the reason for this contrary result?\"\n\n**Our response**:\nThe main conclusion we draw regarding instability is that allowing malicious actors (adversaries) to exist on the platform for longer periods leads to more harm as they are allowed more opportunities to manipulate the recommender system. This observation is echoed in both plots: for the RRN case, mean instability values for Past-5 Instability are still larger than those for Past-1 Instability because of additional concentration away from 0, which is largely absent for Past-1 Instability.\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We thank you for acknowledging the novelty of our work:\n\n\"The causal approach offers a novel way to address ethical issues, providing a structured method for defining and calculating user-centric metrics.\"\n\nBelow we address your concerns.\n\n**Weakness 1**: The framework\u2019s reliance on specific causal assumptions and models, which may reduce its generalizability across diverse recommender systems.\n\n**Our Response**:\nMaking causal assumptions is necessary since the metrics we define are causal and identifying/estimating causal effects requires us to make certain assumptions. We want to emphasize that the framework we propose **does not** rely on specific assumptions about the recommender systems it is used to audit. 
The causal graph in Figure 1 is general for all recommender systems as all recommender systems involve providing recommendations, collecting (possibly empty) user feedback, and then using the user-recommendation history to determine future recommendations. Our causal graph captures this, laying out the dependency among the factors involved in this recommendation process, **without** assuming anything about the recommendation algorithm itself.\n\nThe sequential dependency of these factors can be viewed as causal assumptions. However, the dependency of these factors can only be structured as presented in Figure 1, since in reality these factors arise in a sequential order. That is, a user can only provide feedback on a particular recommendation after a recommendation is given to them,\nand the next-stage recommendation can only depend on past (possibly empty) user feedback instead of future ones.\n\n**Weakness 2**: \"The paper lacks a discussion about the differences between recommendation systems.\"\n**Question 1**: \"What are the differences and impacts of applying this model to various recommendation models?\"\n\n**Our Response**:\nAs we noted above, our general causal auditing framework applies to all recommender systems. To illustrate the usage of our proposed metrics, we evaluated two types of recommender systems empirically: a Matrix Factorization based recommender and a Recurrent Recommender Network. In our analysis, we find that the Matrix Factorization based recommender promotes greater user-item reachability but has less user-adversary stability as compared to its RNN based counterpart. 
While the operationalization details may vary slightly between different recommender systems (as detailed in Section 5), the framework itself (Section 3) and the metrics we define (Section 4) are model-agnostic.\"}", "{\"title\": \"Reply to Reviewer 4\", \"comment\": \"We thank the reviewer for acknowledging the importance of our work.\\n\\n\\\"Auditing recommender systems is a highly meaningful area of study, and the paper contributes valuable insights.\\\"\\n\\nBelow we address your concerns individually.\\n\\n**Q1-1**: \\n\\n**Our response**:\\nIn definition 4.1 (future reachability), the user is constrained to only rate the items that are recommended to them, but is allowed to assign any rating. Here, the item recommended to the user at the next time step ($A_{i,t+1}$) depends on the rating the user gives in the preceding timestep ($O_{i,t}$). Therefore, in this case, even though the intervention is only on the user action, the items the user rates in successive timesteps are dependent on these interventions.\\nThe decision to have the user simply rate the item that is directly recommended to them at each time step rather than having them choose the item to rate was to create a more concrete tie-in with the role of the recommender system itself in this whole process. By exclusively rating items that are recommended to them, we are able to see if the user can \\u2018reach\\u2019 an item simply by interacting with the recommender system. If the user was allowed to rate arbitrary items, this would mean the user can directly rate/reach the item to be reached too, which would be contrary to the problem itself.\\n\\nIn definition 4.2 (past reachability), we fix the items the user is to rate to be the same items the user rated in the past. This again avoids the issue mentioned above and the items the user themselves chose to rate in the past are indicators of the user\\u2019s preference. This is a retrospective look at user agency, hence this metric is counterfactual. 
We ask the question: could the user have rated the items they already rated in a different manner and been able to reach some target item?\\n\\n**Q1-2**: Are Definitions 4.1 and 4.2 consistent? Specifically, does past-k at time t+k equal future-k at time t? It would be helpful if the authors could address this question both intuitively and formally.\\\"\\n\\n**Our response**:\\nThe difference between the two metrics, as detailed in the previous answer is in the choice of items the user rates in the \\u2018k\\u2019 timesteps before reachability is computed. In the case of future reachability, the user accepts the item recommended to them at each timestep and rates it, while in the case of past reachability, the user rates the same items they rated in the past.\\n\\nWe elaborate on how these metrics differ and how they have their own importance under Definition 4.4 in the main paper.\\n\\n**Q2**: \\\"Please describe how the corresponding white-box and black-box methods would operate when k>1.\\\"\\n\\n**Our Response**:\\nWe elaborated on this in Appendix G.1 and G.2, where we write out this expression for additional values of \\u2018k\\u2019.\\n\\n(continued in next comment)\"}" ] }
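The counterfactual past-reachability question discussed in the record above can be made concrete with a brute-force sketch: hold the set of past items fixed, sweep alternative rating assignments, and check whether any assignment would lead the recommender to surface the target item. Everything below is illustrative — the toy recommender is a stand-in, not the paper's MF or RRN models.

```python
from itertools import product

# Toy stand-in for a recommender: scores each item by summing ratings of
# "nearby" item ids. Purely illustrative -- not the paper's MF or RRN models.
def toy_recommend(past_items, past_ratings, n_items=6, top_k=2):
    scores = [0.0] * n_items
    for item, rating in zip(past_items, past_ratings):
        for offset in (-1, 0, 1):  # pretend adjacent ids are similar items
            j = item + offset
            if 0 <= j < n_items:
                scores[j] += rating
    return sorted(range(n_items), key=lambda j: -scores[j])[:top_k]

def past_reachable(past_items, target, rating_values=(1, 2, 3, 4, 5)):
    """Counterfactual past-k reachability: could some alternative rating
    assignment over the same past items have surfaced the target item?"""
    for ratings in product(rating_values, repeat=len(past_items)):
        if target in toy_recommend(past_items, list(ratings)):
            return True
    return False

print(past_reachable([0, 1], target=0), past_reachable([0, 1], target=5))  # -> True False
```

The exhaustive sweep is exponential in k, which is consistent with the authors pointing to Appendix G for how the white-box and black-box operationalizations handle k > 1.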
0vKokoPKTo
Towards Geometry Problems Solving Employing GPT-4 Vision with Few-Shot Prompting: An Empirical Study of What Matters
[ "Xiuliang Duan", "Dating Tan", "Liangda Fang", "Quanlong Guan", "Yuyu Zhou", "Xiujie Huang", "Zhiguo Gong", "Changqin Huang" ]
A few demonstrations ("few-shot prompting") can significantly improve the ability of Large Language Models (LLMs) in mathematical reasoning, including geometry problem solving (GPS). GPT-4 Vision (GPT-4V), as a leading example of LLMs, also demonstrates significant improvements. This tremendous achievement is mainly attributed to prompting methods like "Chain-of-Thought" and "Program-of-Thought," which leverage the in-context learning ability of the model combined with few-shot prompting to solve new problems. Despite the success of these prompting methods, it remains poorly understood what the GPT-4V model learns from the demonstrations that leads to improved performance. In this paper, we evaluated the answering accuracy of GPT-4V with 2-shot prompting on five geometric problem datasets and conducted a series of detailed analyses. Firstly, through ablation experiments with valid and invalid demonstration examples, we found that the model’s performance improvement is not due to the quality of the demonstration, but rather to the input format, output format, and logic and structure of the demonstration. Secondly, by analyzing the reasoning and computational requirements of geometric problems, and verifying experimental results, we found that GPS tasks emphasize reasoning ability more than computational power. Finally, our analysis of various prompt methods revealed that existing approaches are not effective at improving model performance concerning problem length and geometric shape. Therefore, specialized prompt methods could be designed to enhance the model's performance in these aspects, or fine-tuning the model by adding problem data with longer lengths or mixed geometric shapes could optimize its performance. Overall, developing an LLM that fully adapts to GPS tasks represents a key research direction. The source code and data will be made available in a GitHub repository.
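As a concrete illustration of the 2-shot chain-of-thought setup evaluated in the abstract above, a prompt can be assembled from (problem, worked solution) pairs followed by the query. The demonstration texts and formatting here are invented placeholders, not the paper's actual prompts.

```python
def build_few_shot_prompt(demos, query):
    """Concatenate worked demonstrations ahead of the query problem,
    leaving the final 'Solution:' open for the model to complete."""
    parts = [f"Problem: {p}\nSolution: {s}" for p, s in demos]
    parts.append(f"Problem: {query}\nSolution:")
    return "\n\n".join(parts)

demos = [
    ("In right triangle ABC, AB = 3, BC = 4, angle B = 90 degrees. Find AC.",
     "By the Pythagorean theorem, AC = sqrt(3^2 + 4^2) = sqrt(25) = 5. Answer: 5."),
    ("A circle has radius 2. Find its circumference.",
     "Circumference = 2 * pi * r = 2 * pi * 2 = 4*pi. Answer: 4*pi."),
]
prompt = build_few_shot_prompt(demos, "A square has side length 6. Find its diagonal.")
print(prompt.count("Problem:"))  # -> 3 (two demonstrations plus the query)
```

Swapping the worked natural-language solutions for executable code snippets would turn the same scaffold into a "Program-of-Thought" prompt.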
[ "Large Language Models", "Mathematical Reasoning", "Geometry Problem Solving", "Prompting Methods" ]
https://openreview.net/pdf?id=0vKokoPKTo
https://openreview.net/forum?id=0vKokoPKTo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vF2xeZWbUa", "ejMElFqzh6", "UvHKL1l9NB", "MvpjFuDXSX", "1suqa8Hq3v" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730887438129, 1730698307421, 1729525070467, 1734087992300, 1730567817919 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6698/Reviewer_Hhu7" ], [ "ICLR.cc/2025/Conference/Submission6698/Reviewer_mnyL" ], [ "ICLR.cc/2025/Conference/Submission6698/Reviewer_HL6A" ], [ "ICLR.cc/2025/Conference/Submission6698/Authors" ], [ "ICLR.cc/2025/Conference/Submission6698/Reviewer_zVK4" ] ], "structured_content_str": [ "{\"summary\": \"The paper examines the use of GPT-4V for solving geometry problems through few-shot prompting, assessing how input/output formats, prompt structures, and different reasoning strategies impact performance. It explores two prompting types, Chain-of-Thought and Program-of-Thought, and analyzes their effectiveness across various datasets. Findings suggest that the model\\u2019s performance is influenced more by prompt structure than by the validity of demonstrations. Furthermore, reasoning abilities are highlighted as more essential than computation power for geometry problem-solving.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The reasoning task for LLM is an intriguing and promising area. The authors approach this problem from the perspective of GPS, which presents a fresh and valuable perspective.\\n2. The experimental evaluation is comprehensive: multiple datasets and prompt types (e.g., CoT, PoT) are used.\", \"weaknesses\": \"1.\\tI am confused about the motivation for why we need to answer the three questions the paper is asking (line 55-67). Do these questions really contribute to our understanding of LLM? I feel there is a logical gap between the purpose of understanding LLM and these specific questions. 
For instance, the question \\u201care valid demonstrations important?\\u201d is more about performance tweaks, rather than about actually providing insights on how LLMs work for GPS problems. I cannot directly connect having answers to these performance-related questions to the underlying working schemes of LLMs.\\n2.\\tThe findings in the paper seem trivial to me. The answers, such as the necessity of valid demonstrations in input/output format, are not surprising. To me, what would actually be interesting is seeing cases where the LLMs can handle bad demonstrations.\\n3.\\tThere is almost zero algorithmic/technical contribution to this paper. It\\u2019s just a bunch of prompts, which any solid paper would have as an ablation study.\\n4.\\tThe writing quality needs to be significantly improved. For instance, lines 74-79 are very vague and poorly explained. There is a lack of scientific rigor in lines 193-195. The claims in line 353-354 are confusing. The conclusion in lines 353\\u2013354 regarding the importance of input-output formats lacks clear support from preceding paragraphs.\\n5.\\tThe experimental analysis is weak. Fig. 2 does not demonstrate significant differences across settings, making it difficult to extract meaningful conclusions from the results. How the datasets are sampled from the original one is not explained. Some claims are misleading: in line 378, the claim that an average domain knowledge score exceeding 1.5 reflects the involvement of extensive domain knowledge actually suggests the opposite of what is written. 
Additionally, the notion that the number of digits relates to computational requirements is confusing and inaccurate: it should indicate the precision rather than the computational requirement.\", \"questions\": \"1.\\tPlease use $``$ rather than $\\u2019\\u2019$ across the paper\\n2.\\tPlease use \\\\citep rather than \\\\cite\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the factors that impact the ability of GPT-4 Vision on geometry problem solving (GPS) ability. Experiments are conducted to examine GPT-4\\u2019s behavior under various controlled settings. Based on the result, the paper draws the conclusion that (1) the correctness of the demonstrations does not impact the model\\u2019s performance; (2) Chain-of-thought outperforms program-of-thought methods, as GPS does not require much computational power from the code-writing; (3) GPT-4V is better at solving problems of shorter description and that concerning simpler shapes, both of which indicate the problem complexity.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The study reveals some behaviors of LLMs that can potentially motivate future research.\", \"weaknesses\": \"1. The study of the GPT-4V under valid or invalid few-shot demonstrations in Section 5 is not straightforward to back up the claim \\u201cmodel\\u2019s improvement from few-shot demonstrations is due to input format, output format, and logic and structure of the demonstration\\u201d appearing throughout the paper. The study only shows that the overall performance does not degrade by using the demo of the specific invalidity mode in the paper, which is a valid solving process for a wrong problem as shown in Appendix A, compared to using valid demonstrations. It is unclear if it is because \\u201cthe input format, output format, and logic and structure\\u201d is learnt. 
If there are no demonstrations, is the input format, output format, or logic in the output wrong? If the model is provided with demonstrations with wrong logic (e.g., perimeter = AB * BC * AC), will the model still achieve good performance? The paper can clarify those questions by showing zero-shot GPT-4 (no demonstrations) performance in Figure 2 as a comparison, qualitatively comparing the model behavior with no/correct/wrong demonstrations, and testing more invalidity modes of the demonstrations.\\n\\n2. While the study in Section 6 shows an interesting result that Program-of-thought (PoT) is outperformed by the Chain-of-thought (CoT), meaning that the code-writing is not suitable for the GPS problem, the analysis of the reason is not convincing. Specifically, the claim is that there are two reasons for this phenomenon: (1) PoT is better than CoT at solving complex arithmetic calculations, but the GPS task does not require much computation; (2) Reasoning in language is better than reasoning in code. In this case, what is the performance gap between the two categories of the methods under different calculation complexity and reasoning complexity measurement on the problem level instead of the dataset level? This result would provide stronger support for the claims.\", \"questions\": [\"1. Is Figure 2 the result with one-shot or two-shot demonstration? Section 5.2 claims it is with one-shot, but Section 6.2 claims it is with two-shot. Besides, it is written in Section 6.2 that \\u201c...two different background colors represent different prompting methods: the white\\u2026, the gray\\u2026\\u201d. But in Figure 2, there is no distinct background color.\", \"2. What are the \\u201cinvalid reasoning\\u201d and \\u201cinvalid computation\\u201d demonstrations phrased in Section 6.2? Based on the Appendix A, the invalid demonstrations for either method category are the same, but in language and code format, respectively. 
When in the code format, it seems they are still valid in computation (no calculation error) but invalid in reasoning (good demo for a wrong problem)?\", \"3. There are also some writing issues in the paper that might mislead the readers:\", \"Figure 1, Program-of-Thought method, there is a mismatch between p1 and C1 on the left. \\u201cThe shorter base is 6 ft\\u201d in p1, but it is set to 2 in C1.\", \"Section 3.2, paragraph 2. \\u201c...few-shot demonstration <pk, Ck>...\\u201d. \\u2018k\\u2019 should be the subscript.\", \"Section 6.1, paragraph 1. \\u201cIn appendix E, we further refined the distribution of\\u2026\\u201d. The choice of word \\u201crefined\\u201d is misleading, as it seems Appendix E merely collects the percentage of problems over different knowledge numbers?\", \"Section 6.2, paragraph 1. \\u201cFor example, the RP method \\u2026 improved the accuracy by 22.3% compared to the PAL method \\u2026\\u201d, \\u201cimproved\\u201d should be \\u201coutperforms\\u201d as these are two irrelevant methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the geometry problem. It observes that the model's performance gain is not due to the quality of the demonstration but to the input format, output format, and logic of the demonstration. Moreover, this analysis finds that specialized prompt methods and find-tuning of the model can optimize its performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper targets a hard and interesting mathematical problem called geometry problems. And it compares two series of SOTA promising methods: chain of thoughts and program of thought. It claims that LLMs often \\\"draw a dipper with a gourd as a model\\\" (Ln 194).\\n\\n2. A wide range of problems are studied, and various analysis experiments are performed.\\n\\n3. 
The related works are carefully reviewed, and this analysis paper is well-motivated.\", \"weaknesses\": \"1. This paper does not propose a specific method to overcome the claimed issues of LLMs, and it does not provide much insight into resolving such problems. In fact, the CoT and PoT methods are well known to the community and are already used in daily life. So it's not clear where the novelty is.\\n\\n2. Invalid demonstrations would definitely deteriorate the performance of GPT, and this analysis is not very useful. What's more concerning is whether the few-shot demonstrations are useful. Note that OpenAI-o1 discourages the use of few-shot demonstrations (refer to their official website).\\n\\n3. Some claims are quite vague, and the findings cannot support the conclusion. For example, Ln433 concludes that the GPS task requires a small amount of computation. Ln420 suggests that \\\"the method of enhancing reasoning ability is more effective than computation.\\\" However, it is unclear whether the proposed computation is the optimal choice for the LLMs. More likely, it is the **choice or implementation of CoT / PoT** that hinders model performance, while increasing computation (e.g., making the LLM larger) should significantly improve it.\\n\\n4. Figure 4 is quite noisy and no meaningful conclusions can be drawn from this figure. Also, it's expected that increasing problem length would deteriorate the accuracy. This analysis does not lead to significant discoveries.\", \"questions\": \"1. It seems that samples are of the same magnitude as the original dataset, why not test on the full dataset? It is not clear how the data is sampled from the original dataset.\\n\\n2. Why not apply the OpenAI-4o model? It is unclear whether the results of this study hold for the newest OpenAI model.\\n\\n3. How do you test accuracy given answers? Are they multi-choice questions?\\n\\n4. 
In Ln 514, why does the use of prompting methods have nothing to do with the improvement of answering accuracy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper investigates the impact of few-shot prompting methods on enhancing the performance of GPT-4V in solving geometry problem-solving tasks, proposing three key research questions.\\n\\nThe authors first investigate whether valid demonstrations are essential for performance, concluding that prompt structure and logic are more influential than correctness. They then examine whether reasoning (CoT) or computation (PoT) methods are superior for GPS tasks, finding that reasoning-based prompts generally yield better results. Finally, they analyze the influence of various prompting methods on the problem length and the geometric shapes, which all demonstrate minor improvements.\\n\\nThis study suggests that tailored prompting could further optimize GPS performance, paving the way for future research directions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Enhancing GPT's ability to solve GPS problems through few-shot prompting is a highly significant topic.\\n2. The paper is clear and well-structured. It provides a thorough discussion of three key research questions.\\n3. The paper makes intriguing discoveries: (1) The model\\u2019s performance improvement is not due to the quality of the demonstration, but rather to the input format, output format, and the logic and structure of the demonstration; (2) GPS tasks emphasize reasoning ability more than computational power; (3) Specialized prompting methods could be designed to enhance the model\\u2019s performance. 
These findings have the potential to inspire new research directions.\", \"weaknesses\": \"1. The motivation for studying GPS tasks is not clearly articulated; the authors do not clarify what makes these types of problems uniquely challenging or valuable for research.\\n2. If I understand correctly, the first research question in the paper has already been thoroughly discussed in previous work. See my questions below.\\n3. The analysis and discussion of some experimental results are not sufficiently clear or rigorous. See my questions below.\", \"questions\": \"1. Regarding the first research question, is the difference between the findings of this paper and Wang et al. [1] merely a matter of testing problems?\\n2. Why does the paper only compare chain-of-thought and program-of-thought? How about tree-of-thought [2] or graph-of-thought [3]?\\n3. For the reasoning part of the second question, how did you calculate the domain knowledge accounts for each problem? Was it done manually or automatically?\\n4. Why do you classify problems involving more than two domain knowledge accounts as complex reasoning, while stating that the vast majority of problems, which involve less than three-digit arithmetic, require only a small amount of computation? How do you objectively define what is a high demand for reasoning or computation?\\n5. What is invalid computation? Could it be that the better performance compared to \\\"invalid computation\\\" (as mentioned in Q1) is due to invalid reasoning providing a standard input-output format, rather than an intrinsic difference between reasoning and computation?\\n6. For the third research question, what is the significance of analyzing which range of problem lengths yields the optimal answering accuracy?\\n7. 
Could the authors clarify the meaning of \\\"the problem length is unrelated to the method with or without prompting, but only to the model\\u2019s ability to understand semantic information\\\"?\\n\\n\\n[1] Wang B, Min S, Deng X, et al. Towards understanding chain-of-thought prompting: An empirical study of what matters. \\\\\\n[2] Yao S, Yu D, Zhao J, et al. Tree of thoughts: Deliberate problem solving with large language models. \\\\\\n[3] Besta M, Blach N, Kubicek A, et al. Graph of thoughts: Solving elaborate problems with large language models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
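Several of the reviews above question how answering accuracy varies with problem length (e.g., the Figure 4 discussion and question 6). That kind of bucketed breakdown is straightforward to compute; the records below are invented for illustration.

```python
from collections import defaultdict

def accuracy_by_length(records, bucket_size=20):
    """Bucket (problem_length, is_correct) pairs by length and report
    per-bucket answering accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for length, correct in records:
        bucket = (length // bucket_size) * bucket_size
        totals[bucket] += 1
        hits[bucket] += int(correct)
    return {b: hits[b] / totals[b] for b in sorted(totals)}

records = [(15, True), (18, False), (35, True), (42, False), (43, False)]
print(accuracy_by_length(records))  # -> {0: 0.5, 20: 1.0, 40: 0.0}
```

Reporting per-bucket sample counts alongside accuracy would also address the reviewers' concern that noisy buckets (few problems per length range) make conclusions hard to draw.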
0uRc3CfJIQ
ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization
[ "Chen Bo Calvin Zhang", "Zhang-Wei Hong", "Aldo Pacchiano", "Pulkit Agrawal" ]
Reward shaping is critical in reinforcement learning (RL), particularly for complex tasks where sparse rewards can hinder learning. However, choosing effective shaping rewards from a set of reward functions in a computationally efficient manner remains an open challenge. We propose Online Reward Selection and Policy Optimization (ORSO), a novel approach that frames the selection of shaping reward function as an online model selection problem. ORSO automatically identifies performant shaping reward functions without human intervention with provable regret guarantees. We demonstrate ORSO's effectiveness across various continuous control tasks. Compared to prior approaches, ORSO significantly reduces the amount of data required to evaluate a shaping reward function, resulting in superior data efficiency and a significant reduction in computational time (up to 8×). ORSO consistently identifies high-quality reward functions outperforming prior methods by more than 50% and on average identifies policies as performant as the ones learned using manually engineered reward functions by domain experts.
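The abstract frames shaping-reward selection as an online model selection problem. A minimal sketch of that loop, using explore-then-commit (ETC, one of the simpler selectors the paper ablates alongside D3RB, Exp3, UCB, and EG) and toy train/evaluate hooks that stand in for real RL training:

```python
import math

def etc_reward_selection(n_candidates, budget, explore_rounds, train_step, evaluate):
    """Explore-then-commit over candidate shaping rewards: give every
    candidate a few short training windows, then spend the remaining
    budget on the candidate whose policy scores best on the task reward."""
    utility = [0.0] * n_candidates
    spent = 0
    for i in range(n_candidates):              # explore phase
        for _ in range(explore_rounds):
            train_step(i)
            spent += 1
        utility[i] = evaluate(i)               # utility under the TASK reward
    best = max(range(n_candidates), key=utility.__getitem__)
    while spent < budget:                      # commit phase
        train_step(best)
        spent += 1
    return best, evaluate(best)

# Toy dynamics: each candidate's policy approaches a different ceiling.
steps = [0, 0, 0]
ceilings = [0.2, 0.5, 1.0]
best, score = etc_reward_selection(
    3, budget=30, explore_rounds=3,
    train_step=lambda i: steps.__setitem__(i, steps[i] + 1),
    evaluate=lambda i: ceilings[i] * (1 - math.exp(-0.5 * steps[i])),
)
print(best)  # -> 2 (the candidate with the highest ceiling wins the budget)
```

The adaptive selectors the paper actually uses (e.g., D3RB) replace the fixed explore phase with a balancing rule, which is what yields the regret guarantees claimed in the abstract; this sketch only conveys the interleaved select-train-evaluate structure.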
[ "Reinforcement Learning", "Reward Design", "Reward Selection" ]
Accept (Poster)
https://openreview.net/pdf?id=0uRc3CfJIQ
https://openreview.net/forum?id=0uRc3CfJIQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zarjG6H4D0", "y8u5ACc13C", "xaay2sNuPR", "tGcRg2r3oA", "s7GyK7BgKm", "r2ecBwrQA8", "qOF6Asqcca", "m48RalZU1y", "jxv6CfqixH", "hnrmYwPmpo", "dRNXuDdxYn", "c7yrTKrkHk", "bxrAHOgnQb", "alQ4FpzQsm", "ZnEKjd61Xx", "ZWHplOKPoT", "ZDo5FndtKp", "Z0mSV85ycC", "Yj4nlOc4Yc", "YY2kYpH1cW", "YW99EYzX9Y", "WJmlNYNo2s", "VpEw2ebTL6", "U9RaoKtiXV", "TXpOJg8U2u", "SV9wRlSjDX", "QpqvGhXVGB", "ODtUEucXK3", "LRId7s6d0W", "JrT7EAsw0j", "JLPqordvC8", "GdpudWUVqQ", "GBi21WKn0b", "EJBkTd1sca", "DYLMdZKoQH", "AULpOPRndo", "9rlMTAHM7q", "8gTvLQzFxj", "5uFhYADIrM", "418SkZ9qQt", "2pIbIbPbFQ", "2PBD1kO1lK", "0ivAa79TI8", "0FMTHTL70g" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732335977729, 1732647162905, 1733214246583, 1732746444251, 1730506085982, 1732225319698, 1732563813101, 1730719580713, 1732705168983, 1732224083220, 1732746198929, 1732736108268, 1732647147039, 1732699022668, 1732225553302, 1730637955157, 1732225866043, 1732696881024, 1732746370440, 1732225634746, 1732521703709, 1733214204121, 1732696642733, 1732225801581, 1732683839329, 1732225662922, 1732487925161, 1732746422175, 1732222266759, 1732696897738, 1730274766564, 
1732225848964, 1733214258633, 1732746266201, 1732221959155, 1737524210683, 1730264920417, 1732669090508, 1734739844928, 1732562165532, 1732631134922, 1731557446816, 1732224026269, 1732225336697 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_YU7z" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_C5vj" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_t4Xg" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_DEh7" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_C5vj" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_knxR" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_DEh7" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_YU7z" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_DEh7" ], [ 
"ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_YU7z" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_t4Xg" ], [ "ICLR.cc/2025/Conference/Submission12721/Area_Chair_qHTm" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_knxR" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_sCCd" ], [ "ICLR.cc/2025/Conference/Submission12721/Reviewer_sCCd" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ], [ "ICLR.cc/2025/Conference/Submission12721/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the authors' reply. I think it would be complicated to train a lot of reward candidates (e.g. K=96), and switching reward functions during training would be unstable. I suggest considering regression to get new rewards instead of selection.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your thoughtful and positive feedback on our paper. Your insights were incredibly helpful in refining our work. We hope that our responses to your comments have addressed your concerns and clarified the contributions of our study.\\n\\nIf there are any remaining questions or areas where you feel further clarification is needed, we would be happy to provide additional details or engage in further discussion. We hope that our responses demonstrate the merits of the paper, and we kindly ask if you would consider revisiting your evaluation in light of these updates.\\n\\nThank you again for your time and effort in reviewing our submission. 
We greatly value your perspective and look forward to your final decision.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear reviewer,\\n\\nas the discussion period is coming to an end, we would be grateful for any further feedback on our earlier response. Please let us know if you have any questions or need clarification. Otherwise, we kindly ask you to consider adjusting your score.\\n\\nWe truly value your insights and participation in the review process.\\n\\nKind regards,\\n\\nThe authors\"}", "{\"comment\": \"> The abstract claims that \\\"While shaping rewards have been introduced to provide additional guidance, selecting effective shaping functions remains challenging and computationally expensive.\\\" However, this overlooks many methods that do not require explicit shaping function selection. Numerous algorithms exist that automate the generation of reliable reward models. Although the experiments in the paper explore the importance of reward selection and compare various selection strategies, the primary objective of the paper is\\u00a0*reward design*. To fully demonstrate its effectiveness, comparisons with SOTA automated reward shaping methods would be necessary.\\n> \\n\\nWe note that we have compared with LIRPG in a previous comment [here](https://openreview.net/forum?id=0uRc3CfJIQ&noteId=U9RaoKtiXV), which is a widely adopted method. Our results show that ORSO with an LLM generator can significantly outperform this baseline. We redirect the reviewer to the linked response to avoid additional repetitions on the thread of responses.\\n\\nWe hope our comments clarify the questions and concerns raised by the reviewer. We would be more than happy to further engage in discussion and clarify any remaining questions. 
We truly believe that the efficiency gains of ORSO can be useful for the broader research community, in particular for those who do not have large computational resources.\"}", "{\"summary\": \"This paper introduces ORSO, a method that aims to increase the efficiency of the reward shaping selection process by casting it as an automated online model selection problem.\", \"orso_is_a_general_algorithmic_framework_that_implements_the_following_steps\": \"(i) a reward function generation method is used to provide a set of candidate shaping reward functions, (ii) a selection algorithm is used to select a shaping reward function, (iii) an RL algorithm is used to train the policy associated with the selected shaping reward function for a set amount of iterations, (iv) the trained policy is then evaluated against the task reward function and the utility is used to update the parameters of the selection algorithm. This process is repeated until a predefined computational budget is exhausted.\\nWhile the components within the ORSO framework are modular and exchangeable, this work uses (i) an LLM based generator as the reward function generation method, (ii) PPO as the RL algorithm, (iii) D3RB, as the reward function selection algorithm (ablations are additionally conducted with Exp3, D3RB,UCB, ETC, and EG).\\nExperiments are conducted across tasks of varying difficulty, 3 budgets, and 3 reward functions sets.\\nResults indicate that the ORSO performance in terms of task return scales with budget, and is comparable - and can surpass - that of human defined shaping reward functions; additionally ORSO is twice as fast as a prior shaping reward function selection method (EUREKA).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Claims: (1) ORSO reduces both time and computational costs by more than half compared to earlier methods, making reward design accessible to a wider range of researchers. 
(2) By formalizing the reward design problem and providing a theoretical analysis of ORSO\\u2019s regret when using the D3RB algorithm, we also contribute to the theoretical understanding of reward design in RL.\", \"The work proposes an original formulation of the shaping reward function selection process, viewing it as an online model selection problem.\", \"The regret based analysis provides a clear and intuitive way to monitor ORSO's performance and efficiency gains.\", \"Thanks to the problem formulation, the method is kept elegant in its simplicity and largely amounts to the application of existing approaches into a unified framework.\", \"Results on a variety of tasks and domains against available baselines support the efficiency and performance claims.\"], \"clarity\": [\"The writing is clear, the paper is well structured, and appropriate context is provided to the reader.\"], \"significance\": [\"This work tackles the issue of reward design for RL. This has been and continues to be one of the most significant challenges keeping RL from widespread successful deployment in the real world.\"], \"weaknesses\": [\"Contribution:\", \"Ultimately, ORSO searches over a set of shaping reward functions. While the framework is simple and elegant, to my understanding, it ultimately relies on and is limited by the performance of the \\\"shaping reward function generator\\\".\", \"ORSO is only benchmarked against methods for which a performant human-engineered reward function can be defined. 
Impact would be higher if the method could generalize beyond these settings.\", \"ORSO in its current form does not seem to offer the flexibility to deal with changing / annealing of shaping reward functions throughout training, a common technique in the reward shaping literature.\"], \"questions\": [\"Unique contributions could be made clearer and explicitly called out in the introduction.\", \"Lines 54-59: I understand you are highlighting unique challenges compared to standard multi-arm bandit settings, yet ORSO uses the same selection algorithms typically used to solve such multi-arm bandit problems. The paper could be clearer in defining exactly which components of ORSO are key to addressing the unique challenges presented.\", \"I recommend expanding on the various resampling strategies, if more than one was tried out, and their impact on performance, as this seems to be a key ingredient to the method's success.\", \"I would recommend adding the synthetically generated best-performing shaping reward functions for each task to appendix E. Are the reward functions sensible to the human reader? This has implications on how well these shaping reward functions could be further refined by human experimenters, and possibly give insights on their logical soundness.\", \"Also, was any constraint, structure, or human knowledge beyond that imposed when prompting the generation of such rewards, or could the prompt arguably be generated programmatically (if so, I recommend just stating it - the code base is not available during the review process to verify)? 
While not directly related to the ORSO contribution, this is arguably important to showcase as ORSO heavily relies on the existence of an automated way of generating reward functions without human priors.\", \"Please look at the weaknesses section and help clarify if any can be addressed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifications on ORSO (Part 1)\", \"comment\": \"We thank the reviewer for the comments and the insightful feedback provided. We address the points below.\\n\\n**Weaknesses**\\n\\n- W1\\n \\n > Ultimately, ORSO searches over a set of shaping reward functions. While the framework is simple and elegant, to my understanding, it ultimately relies on and is limited by the performance of the \\\"shaping reward function generator\\\".\\n > \\n \\n While it is true that ORSO\\u2019s performance relies on the set of reward functions given by the generator, our practical implementation includes an iterative improvement resampling step that allows the algorithm to improve the set of candidate reward functions. An alternative generator could also be devised. For instance, if the required reward components are known, such a generator could sample different weights for the reward components from a uniform distribution over plausible ranges. For example, let the reward function \\\\(r(s, a)\\\\) be a weighted sum of known components \\\\(r_1(s, a), r_2(s, a), \\\\ldots, r_k(s, a)\\\\), with weights \\\\(\\\\boldsymbol{w} = [w_1, w_2, \\\\ldots, w_k]\\\\). ORSO could then operate on this sampled set (our experiments show that ORSO works well even with large reward sets). Then the results could be used to update a posterior distribution over the parameter weights. 
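As an illustrative sketch of this alternative generator (hypothetical code; the component functions `r_components` and the weight ranges are assumptions, not part of ORSO's released implementation):

```python
import numpy as np

# Hypothetical sketch: build K candidate reward functions by sampling
# component weights uniformly over plausible ranges. The component
# functions and their ranges are assumed to be given by the designer.
def make_candidates(r_components, ranges, K, seed=0):
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(K):
        w = np.array([rng.uniform(lo, hi) for (lo, hi) in ranges])
        # Each candidate is r(s, a) = sum_i w_i * r_i(s, a)
        def reward(s, a, w=w):
            return sum(wi * r(s, a) for wi, r in zip(w, r_components))
        candidates.append((w, reward))
    return candidates

# Toy usage: two components, K = 4 candidate reward functions
r_components = [lambda s, a: -abs(s), lambda s, a: -a * a]
candidates = make_candidates(r_components, [(0.0, 1.0), (0.0, 0.1)], K=4)
```

ORSO would then select among `candidates` online, and the weights of well-performing candidates could inform the next round of sampling.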
Repeating this process would allow the framework to explore any reward function space defined by its components.\\n \\n- W2\\n \\n > ORSO is only benchmarked against methods for which a performant human-engineered reward function can be defined. Impact would be higher if method could generalize beyond these settings.\\n > \\n \\n We would love to test ORSO on tasks where no human-designed reward function is available. If the reviewer has specific continuous control tasks and implementation in mind, we would be happy to explore them. However, this may fall outside the discussion timeframe, though we could incorporate these suggestions in future work.\\n \\n- W3\\n \\n > ORSO in its current form does not seem to offer the flexibility to deal with changing / annealing of shaping reward functions throughout training, a common technique in reward shaping literature.\\n > \\n \\n While we have not implemented shaping reward annealing in this version, this feature could be easily added. Since the selection algorithms track the frequency of each reward function being chosen, shaping rewards could be dynamically re-weighted, e.g., inversely proportional to the selection count. We encourage the reviewer to check our codebase following the publication of ORSO, as we plan to incorporate such variants to enhance the framework\\u2019s utility for the community.\"}", "{\"comment\": \"We thank the reviewer for their thorough review and valuable feedback. We are glad the clarifications regarding Assumption 4.2 and the monotonicity in the experimented environments were helpful.\\n\\nWe agree that extending the discussion on these points, as well as addressing the impact of incorrect or redundant reward functions, would enhance the presentation of the paper. 
We will incorporate these improvements in the final version.\\n\\nThank you again for your constructive comments and for supporting the paper\\u2019s acceptance.\"}", "{\"summary\": \"This paper introduces Online Reward Selection and Policy Optimization (ORSO), an approach that defines reward selection as an online model selection problem. The approach uses exploration strategies to identify shaping reward functions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Soundness\\n======\\nThe approach is generally sound. \\n\\t\\nSignificance & Related work\\n=========\\nThe paper presents an in-depth related work section, and well-defined preliminaries (note the redundancy of Section 2) that lead to demonstrations of several results. \\n\\nExperimentation\\n=========\\nThe paper presents an in-depth ablation analysis of the performance of various selection algorithms.\\n\\nPresentation\\n=========\\nThe paper is well written.\", \"weaknesses\": \"Soundness\\n======\\nIt is unclear in the experiments in Fig 2 what \\u2018human-level performance\\u2019 or \\u2018human-designed reward function\\u2019 is and how it is defined/computed. Note that the proof for D1 needs to be rewritten for clarity to show the base case and inductive hypothesis, should proof by induction still be the chosen approach.\\n\\t\\nExperimentation\\n=========\\nThe paper presents an in-depth ablation analysis of the performance of various selection algorithms; however, the impact of poorly chosen task rewards needs to be analysed. 
\\n\\nPresentation\\n=========\\nPresentation is good, as above, Section 2 is too short and redundant.\", \"questions\": [\"What is human designed reward and how it is computed?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for providing detailed responses to my follow-up questions and offering further clarification. I believe the key point emphasized by the authors is that ORSO focuses on reward selection rather than reward generation. However, clearly defining the scope of problems that can be addressed (e.g., robotics with available environmental descriptions/codes) based on the source of reward generation remains important.\\n\\nAdditionally, in terms of reward selection, ORSO still relies on existing model selection strategies (Line 4 of Algorithm 1), and its performance is strongly dependent on the specific model selection algorithm employed. Given the limited contribution of simply applying various existing model selection algorithms to the domain of reward model selection, I will maintain my current score (below the acceptance threshold).\"}", "{\"title\": \"Addressing Weaknesses and Questions on Theoretical and Experimental Contributions (Part 2)\", \"comment\": \"**Questions**\\n\\n- Q1\\n \\n > The proposed method generates a set of candidate reward functions for the online selection phase. Does having K reward function candidates mean that the number of candidates is fixed? If all these candidates are not suitable or not the best, what is the solution?\\n > \\n \\n This is a great question. Our method includes an iterative improvement step that allows for refinement of the reward functions (proposal of new candidates). 
Furthermore, our experimental results demonstrate that even with a fixed set of reward functions, ORSO with D3RB performs well, provided the reward function set is large enough (our Ant experiments in Section 5.3 show that ORSO with K=48 or K=96 can still outperform human-designed performance). The large candidate set makes it highly probable to include effective reward functions.\\n\\n- Q3\\n \\n > Figure 4 shows the normalized cumulative regret for different selection algorithms. The manuscript mentioned that ORSO\\u2019s regret can become negative, indicating that it finds reward functions that outperform the human baseline. The minimum value is zero in Figure 4; I didn\\u2019t observe negative values.\\n > \\n \\n Thank you for this observation. The **instantaneous regret** can indeed become negative when ORSO identifies reward functions that outperform the human baseline. This causes the **cumulative regret** to decrease, but it does not necessarily drop below zero in the normalized cumulative regret plots.\\n \\n- Q5\\n \\n > The number of references is small, and more recent articles on reward shaping can be added.\\n > \\n \\n Thank you for this suggestion. While the manuscript includes foundational and relevant works, we will expand the related work section to include additional recent papers on reward shaping. If there are specific references the reviewer believes are important, we welcome the input.\\n \\n\\nWe hope that our clarifications and the proposed additional experiments sufficiently address the reviewer\\u2019s questions and concerns. Given the theoretical results, comprehensive experimental evaluation, and practical relevance of ORSO, we believe our work makes a meaningful contribution to the field of reward design in reinforcement learning. We kindly ask the reviewer to reconsider their evaluation of the manuscript. 
Thank you for your thoughtful consideration.\"}", "{\"title\": \"Additional Comment on Theoretical Contribution\", \"comment\": \"We would also like to add that we provide a novel analysis of D3RB, which is in stark contrast with the regret guarantees of the original paper. Namely, our guarantees depend on the true regret coefficients rather than the monotonic ones.\\n\\nTherefore, even though there is no guarantee that the learned policy will achieve the same optimal behavior as the optimal policy for the task reward (which might not be achievable because of the task reward not being amenable to optimization), we guarantee that ORSO with D3RB will eventually select and train with the optimal reward function within the set.\"}", "{\"comment\": \"Dear authors,\\nThanks for thoroughly addressing my feedback. I am satisfied with the responses and with the proposed modifications. I will update my score accordingly.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your thoughtful and positive feedback on our paper. Your insights were incredibly helpful in refining our work. We hope that our responses to your comments have addressed your concerns and clarified the contributions of our study.\\n\\nIf there are any remaining questions or areas where you feel further clarification is needed, we would be happy to provide additional details or engage in further discussion. We hope that our responses demonstrate the merits of the paper, and we kindly ask if you would consider revisiting your evaluation in light of these updates.\\n\\nThank you again for your time and effort in reviewing our submission. 
We greatly value your perspective and look forward to your final decision.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"We thank the reviewer once again for their valuable feedback and constructive suggestions, and we are happy to hear their score indicates support for accepting our paper.\"}", "{\"title\": \"Clarifications and Additional Insights on ORSO's Reward Generation, Complexity, and Experimental Results (Part 1)\", \"comment\": \"We thank the reviewer for the comments and suggestions. We clarify the weaknesses and questions below.\\n\\n**Writing**\\n\\nReward generation can be seen either as part of ORSO or as a separate step. If one has a set of generated reward functions, then ORSO can simply be used to jointly select a performant reward function and train a policy with it. On the other hand, if there is no reward function provided beforehand, one can also see the generation as part of ORSO. Algorithm 4 provides the full pseudo-code with generation and iterative improvement of the reward functions, where a new set of reward functions is generated when the most selected reward function has been used to train for a maximum number of iterations.\\n\\n**Complexity**\\n\\nIt is true that we need to maintain K separate policies. In our setting, since the policies are simple MLPs, this does not create much overhead. We note that the naive approach (given a set of reward functions, train a policy until convergence for each reward function and then pick the best policy) still needs to instantiate K policies.\\n\\n**Performance Guarantees**\\n\\n> How do you ensure consistency in performance when there is no guarantee that the proposed objective aligns with the base reward function?\\n> \\n\\nOur performance guarantees are with respect to the set of reward functions used for selection. 
While potential reward shaping could be applied in ORSO to ensure optimal policy invariance guarantees, we did not incorporate this because empirical evidence suggests that potential shaping often does not improve performance substantially in practice (as shown in [1]). Specifically, applying potential shaping to human-designed rewards frequently leads to suboptimal outcomes. Instead, we focus on an online selection process that chooses the reward function to train on based on its performance on the base reward function, ensuring that the chosen reward function maximizes the base reward. Empirically, we observe that with a sufficiently diverse set of candidate rewards, it is likely that one will perform similarly to or better than a human-designed reward when evaluated using the original task reward.\\n\\n**Experiments**\\n\\n- E1\\n \\n > The paper does not compare with established reward shaping methods, such as those by Ng et al. (1999), Zou et al. (2019), Zheng et al. (2018), Sorg et al. (2010), and Gupta et al. (2023). Including these comparisons would strengthen the experimental evaluation.\\n > \\n \\n We note that methods like [2] are complementary to ORSO, meaning that one can use ORSO to propose shaped reward functions and then apply other shaping methods on top to further improve the performance of such reward functions.\\n \\n We chose to compare ORSO-designed reward functions with LIRPG because it is one of the most widely adopted methods for reward design in reinforcement learning. We provide experimental results with LIRPG on the Ant task. LIRPG jointly trains a policy and learns an intrinsic shaping reward, such that the intrinsic reward leads to higher extrinsic reward.\\n \\n In each experiment, we use the task reward function, the human-designed reward function, and the reward function selected by ORSO as the extrinsic reward for LIRPG, respectively. 
We run each experiment for 5 random seeds and report the mean base environmental reward achieved by training with each method, along with 95% confidence intervals.\\n \\n | Method | Without LIRPG | With LIRPG |\\n | --- | --- | --- |\\n | No Design | 4.67 +/- 0.84 | **5.73 +/- 1.08** |\\n | Human | **9.84 +/- 0.30** | 10.02 +/- 0.30 (*) |\\n | ORSO | **11.09 +/- 0.68** | 11.51 +/- 0.45 (*) |\\n - **LIRPG cannot design better rewards than ORSO**: When LIRPG is applied to the task reward, it results in lower performance compared to using the human-designed or ORSO-selected rewards.\\n - **LIRPG as a complementary method**: We emphasize that LIRPG can complement ORSO. By applying LIRPG to reward functions selected by ORSO (which have already undergone shaping), LIRPG may help learn an additional function that aids the agent in optimizing ORSO-designed rewards.\\n \\n The bolded entries have undergone one stage of reward shaping, while the entries in the table above marked with (*) have undergone two stages of reward design. First, a performant reward function was obtained from a human designer or from ORSO (both outperforming LIRPG on the task reward). Then, given the good quality of the reward, we show that we can apply LIRPG on such reward functions for some marginal improvement. We only provide such results for completeness. We note that the evaluation of each policy is done with respect to the task reward function (No Design).\"}", "{\"summary\": \"This paper studies automated reward shaping by posing it as an online reward selection problem. Instead of multiple training runs to observe the impact of different shaping functions, this work aims to identify the best one within a fixed time budget. To this aim, the authors develop ORSO, a regret-minimizing approach that utilizes multi-armed bandits where a candidate shaping function is an arm. More specifically, ORSO uses the D3RB algorithm to select an arm. 
Upon selection of an arm, ORSO trains a policy corresponding to the said arm for a fixed number of iterations and then evaluates the policy with respect to the task rewards. The paper provides regret guarantees, assuming that a learner monotonically dominates all learners and its average performance increases monotonically.\\n\\nThe paper evaluates a practical implementation of ORSO in continuous control tasks with varying complexity whose rewards are either sparse or unshaped. The experimental results show that ORSO is faster than an LLM-based reward design method, can surpass human-designed rewards, and performs better as the budget increases. The authors also provide an ablation study for different arm selection strategies and different numbers of candidate shaping functions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well-written. The motivation is clearly explained, and the problem and assumptions are well described. Moreover, given the assumptions, the proposed approach and its theoretical guarantees are clear.\", \"The experimental set-up makes sense, and the evaluated baselines allow us to see how reward design is critical, humans can be sub-optimal at it, and naive attempts are prone to fail.\", \"The experimental results clearly showcase its advantages in comparison to the baselines. In addition, the ablation study evaluates the impact of different components of ORSO and provides detailed insights.\"], \"weaknesses\": [\"I urge the authors to move the related work, at least the most relevant parts, to the main document.\", \"Assumption 4.2 seems limiting. A discussion of why the assumptions are viable or how they are needed would strengthen the paper's arguments. It would be even better to explain their role in causing the contrast with the regret guarantees in Pacchiano et al. 
(2023).\", \"As the quality of candidate shaping functions plays an important role, an ablation study to understand the impact of wrong/redundant candidates would help the reader understand the limitations of ORSO.\"], \"questions\": [\"In what cases would the monotonicity assumption be violated? Do the environments in the experimental set-up violate or obey the assumption? How would ORSO handle such violations?\", \"Future work mentions exciting directions. Since the naive approach is failing, how likely is a VLM-based reward design method to fail?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reward Shaping Comparison, Reward Function Generation, and Evaluation Clarifications (Part 3)\", \"comment\": \"We hope these clarifications address the concerns raised. If clarifications and additional experiments resolve your concerns, we would be grateful if you could consider raising your score to reflect the improvements.\\n\\n**References**\\n\\n[1] Zheng, Zeyu, Junhyuk Oh, and Satinder Singh. \\\"On learning intrinsic rewards for policy gradient methods.\\\" Advances in Neural Information Processing Systems 31 (2018).\\n\\n[2] Cheng, Xuxin, et al. \\\"Expressive whole-body control for humanoid robots.\\\"\\u00a0*arXiv preprint arXiv:2402.16796*\\u00a0(2024).\\n\\n[3] Margolis, Gabriel B., and Pulkit Agrawal. \\\"Walk these ways: Tuning robot control for generalization with multiplicity of behavior.\\\"\\u00a0*Conference on Robot Learning*. PMLR, 2023.\\n\\n[4] Bates, Elizabeth, Vasilios Mavroudis, and Chris Hicks. \\\"Reward Shaping for Happier Autonomous Cyber Security Agents.\\\"\\u00a0*Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security*. 2023.\\n\\n[5] Ng, A. Y., Harada, D., & Russell, S. 
(1999).\\u00a0*Policy invariance under reward transformations: Theory and application to reward shaping.*\\u00a0In ICML.\\n\\n[6] Cheng, Ching-An, Andrey Kolobov, and Adith Swaminathan. \\\"Heuristic-guided reinforcement learning.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a034 (2021): 13550-13563.\"}", "{\"comment\": \"# Part 1/2\\n\\nWe thank the reviewer for the additional comments and thinking points. We address the concerns below.\\n\\n> (a) seems the reward function needs much task-specific knowledge. [\\u2026] how does the LLM know this kind of information (like which dimensions refer to which metric), or is it given in the prompts?\\n> \\n\\nWe appreciate the reviewer's comment. Regarding the necessary task-specific knowledge, we do not manually provide this information in the prompt, but we do provide the observation space. While the LLM finds that some coordinates correspond to velocity and position, there might be many candidates where this is not the case.\\n\\nAs noted, while our approach leverages LLMs to generate candidate reward functions, we do not claim this as the primary contribution. Instead, our focus is on an efficient selection process that can handle diverse reward function candidates.\\n\\n> can we just learn some weights of the different components, instead of \\\"generating a lot of candidate reward functions and choosing one from them\\\"?\\n> \\n\\nWhile weight learning is a valid approach, knowing what reward function components to include is often non-trivial. Take the Ant environment example:\\n\\n```python\\nheight_diff = torch.abs(torso_position[:, 2] - target_height)\\n```\\n\\nIt is not clear whether one should use absolute difference, squared difference, or another metric. This still has to be decided and designed.\\n\\n> (b) How about some more complex environments, like Atari? 
Given the pixel observations, does it still work to generate Python reward functions?\\n> \\n\\nWe acknowledge the current limitation with pixel-based environments. Our method is most effective in state-based environments, where reward function generation is more straightforward.\\n\\nHowever, we emphasize that our core contribution is the ORSO selection process. Regardless of how reward functions are generated\\u2014whether through LLMs, domain experts, or other methods\\u2014ORSO provides an efficient mechanism for identifying the most promising candidates.\\n\\n> This is where my main concern comes from: in the LLM-generated reward functions, could it be possible that the reward functions make the agent stay in some local optimum and then lead to sub-optimal or non-consistent policies?\\n> \\n\\nThis is a valid concern. Suboptimal reward functions can indeed emerge, and there is no guarantee that LLM-generated reward functions are optimal with respect to the task reward.\\n\\nHowever, interestingly, a \\\"suboptimal\\\" reward function (i.e., one whose global optimum is not the same as the task reward function\\u2019s) might still outperform a theoretically optimal one (the task reward function) if the latter is challenging to optimize. Many existing works [1, 2, 3, 4, 5, 6, 7] use reward shaping without optimality guarantees with respect to the base reward function, but achieve impressive performance in real-world applications.\\n\\nOur approach provides a systematic way to explore and validate different reward formulations, and we empirically show that with enough sampling and the iterative improvement step, one can obtain reward functions that, when trained with, yield policies that outperform those trained directly with the task reward function (with performance measured by the task reward function).\\n\\nRegarding the 3-state MDP example, we would like to note that the setting we considered is the infinite-horizon MDP. 
Therefore, a better comparison would be between $t_1 : s_1 \\to s_2 \\to s_3 \\to s_3 \\to \\dots$ and $t_2 : s_1 \\to s_2 \\to s_2 \\to s_3 \\to \\dots$, where we assume that $s_3$ is an absorbing state. In this case, one would have consistent results between the human-designed and the task reward functions. Either way, the evaluation is performed with the base reward function as specified in the previous response, so this problem would not arise: a reward function that highly rewards the agent when trained with, but leads to low task reward, would be discarded.\\n\\n**References**\\n\\n[1] Liu, Minghuan, et al. \\\"Visual whole-body control for legged loco-manipulation.\\\"\\u00a0*arXiv preprint arXiv:2403.16967*\\u00a0(2024).\\n\\n[2] Margolis, Gabriel B., et al. \\\"Rapid locomotion via reinforcement learning.\\\"\\u00a0*The International Journal of Robotics Research*\\u00a043.4 (2024): 572-587.\\n\\n[3] Margolis, Gabriel B., and Pulkit Agrawal. \\\"Walk these ways: Tuning robot control for generalization with multiplicity of behavior.\\\"\\u00a0*Conference on Robot Learning*. PMLR, 2023.\\n\\n[4] Lee, Joonho, et al. \\\"Learning quadrupedal locomotion over challenging terrain.\\\"\\u00a0*Science robotics*\\u00a05.47 (2020): eabc5986.\\n\\n[5] Ma, Yecheng Jason, et al. \\\"Eureka: Human-level reward design via coding large language models.\\\"\\u00a0*arXiv preprint arXiv:2310.12931*\\u00a0(2023).\\n\\n[6] Ha, Huy, et al. \\\"Umi on legs: Making manipulation policies mobile with manipulation-centric whole-body controllers.\\\"\\u00a0*arXiv preprint arXiv:2407.10353*\\u00a0(2024).\\n\\n[7] Kaufmann, Elia, et al. \\\"Champion-level drone racing using deep reinforcement learning.\\\"\\u00a0*Nature*\\u00a0620.7976 (2023): 982-987.\"}", "{\"comment\": \"We thank the reviewer for the additional comments and feedback. 
However, we respectfully disagree with some of the statements.\\n\\n> Thanks to the authors for providing detailed responses to my follow-up questions and offering further clarification. I believe the key point emphasized by the authors is that ORSO focuses on reward selection rather than reward generation. However, clearly defining the scope of problems that can be addressed (e.g., robotics with available environmental descriptions/codes) based on the source of reward generation remains important.\\n> \\n\\nWe note that the only component that requires access to the environment code is the generation process with LLMs. As the reviewer highlighted (\\u201dI believe the key point emphasized by the authors is that ORSO focuses on reward selection rather than reward generation.\\u201d), ORSO mainly focuses on effective and efficient selection. This part does not require access to the code. Indeed, selection can be applied to any form of reward function, as is also stated in Section 3.1, \\u201cReward Generation\\u201d of the paper.\\n\\n> Additionally, in terms of reward selection, ORSO still relies on existing model selection strategies (Line 4 of Algorithm 1), and its performance is strongly dependent on the specific model selection algorithm employed. Given the limited contribution of simply applying various existing model selection algorithms to the domain of reward model selection, I will maintain my current score (below the acceptance threshold).\\n> \\n\\nWe disagree that the contribution of ORSO is limited. As stated in our previous response, we show that ORSO yields significant gains in terms of necessary compute compared to EUREKA (up to **16x fewer GPUs needed** to achieve the same performance in the same time). 
Because all our experiments can be run on a single commercial GPU (e.g., a 3090 Ti or even a 2080 Ti), this will open the possibility for researchers with smaller computational budgets to quickly iterate on their experiments, which we believe to be an important contribution.\\n\\nMoreover, the connection between model selection techniques and reward design in RL has not been explored before, and our experimental results show that we gain in efficiency while maintaining effectiveness. We provide a novel analysis of D3RB, which is in stark contrast with the regret guarantees of the original paper. Namely, our guarantees depend on the true regret coefficients rather than the monotonic ones.\"}
Additionally, it is unclear if resampling is an integral part of ORSO or a separate process.\\n > \\n \\n The details of the iterative resampling process and its frequency are discussed in Appendix F.\\n \\n- E3\\n \\n > In the experiments, performance is evaluated against human-designed rewards, but the actual evaluation should ideally be based on the baseline reward. This raises questions about the metrics used for evaluation. In Figure 2, the upper bounds should correspond to\\u00a0**No Design**, as the objective should be to assess performance against the MDP\\u2019s base reward.\\n > \\n \\n We would like to clarify that the evaluation is always based on the original task reward. The human-designed reward functions are heavily engineered reward functions, such that policies trained on them will achieve high original task reward (indeed, human-designed reward functions are strict improvements over the original task rewards). This is the reason for the red line being higher than the gray line. The question Figure 2 is answering is \\u201cHow long does it take to design reward functions that perform as well as human-designed reward functions when evaluated with the original task reward?\\u201d Directly optimizing for the \\u201cNo Design\\u201d reward does not achieve optimal policies because these rewards can be sparse or poorly shaped, leading to a particularly hard optimization problem.\\n \\n- E4\\n \\n > The reward generation process seems to require access to code-level details of the MDP, which may not be feasible in cases where the environment is not code-based. Discussion of this limitation would improve transparency regarding the method\\u2019s applicability.\\n > \\n \\n This is correct. The generator used in our work requires a simulator for the environment and its code, which is common in robot learning, the application analyzed in the experiments in this paper. As mentioned above, the generation can be seen either as part of ORSO or as a separate step. 
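For illustration, a minimal sketch of running the selection loop over a fixed set of reward functions (hypothetical code: a simple epsilon-greedy selector stands in for D3RB, and `train_policy` / `evaluate_task_reward` are placeholders for the RL training and task-reward evaluation steps):

```python
import random

# Hypothetical sketch of ORSO-style selection over a fixed candidate set.
# An epsilon-greedy selector replaces D3RB purely for illustration.
def select_reward(reward_fns, budget, train_policy, evaluate_task_reward,
                  epsilon=0.2, seed=0):
    rng = random.Random(seed)
    K = len(reward_fns)
    policies = [None] * K            # one policy per candidate reward
    scores = [0.0] * K               # latest task-reward evaluation
    for _ in range(budget):
        if rng.random() < epsilon:
            i = rng.randrange(K)     # explore a random candidate
        else:
            i = max(range(K), key=scores.__getitem__)  # exploit best so far
        # train the selected candidate's policy for one chunk of iterations,
        # then evaluate it against the original task reward
        policies[i] = train_policy(policies[i], reward_fns[i])
        scores[i] = evaluate_task_reward(policies[i])
    best = max(range(K), key=scores.__getitem__)
    return best, policies[best]
```

The key property mirrored here is that candidates are always scored with the task reward, so a shaped reward that trains well but yields low task reward is eventually abandoned.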
If one is working with a non-code-based environment, multiple reward functions can still be instantiated and ORSO can be applied to such a pre-defined set of reward functions.\\n \\n\\n**Questions**\\n\\n- Q1\\n \\n > ORSO functions only as a selection algorithm, correct? Reward generation isn\\u2019t part of the algorithm itself?\\n > \\n \\n As stated above, the two components can be decoupled. When using the iterative improvement step, the generation can be seen as part of ORSO. Instead, if one already has a set of candidate rewards, ORSO can be used on the fixed set directly.\"}", "{\"comment\": \"Thanks for the authors' response. I have some questions I want to discuss:\\n\\n1. Regarding Q1(a), on the specific forms of these reward functions: thanks for updating the detailed reward functions in the appendix. I have two main concerns:\\n(a) It seems the reward function needs much task-specific knowledge, like:\\n```\\nvelocity = root_states[:, 7:10] # [vx, vy, vz]\\ntorso_position = root_states[:, 0:3] # [px, py, pz]\\n```\\nHow does the LLM know this kind of information (like which dimensions refer to which metric), or is it given in the prompts? If we still need to provide this prior knowledge to the LLM, I am not sure why we would not directly use a human-designed reward function. For example, if we already know that `[7:10]` represents the velocity, and suppose my final target is to let the ant be as fast as possible, then why not just write out this reward function?\\n\\nMoreover, I see the final reward is the weighted sum of different components:\\n```\\nreward = forward_reward + balance_reward + angle_penalty - action_penalty_scaled + survival_bonus\\n```\\nCould we just learn some weights for the different components, instead of \\\"generating a lot of candidate reward functions and choosing one from them\\\"?\\n\\n(b) How about some more complex environments, like Atari? Given the pixel observations, does it still work to generate Python reward functions?\\n\\n2. 
Regarding the authors' claim:\\n\\n> This is more obvious if we consider an MDP with n states $s_1,\\\\dots,s_n$ with task reward function (0, 0, \\u2026, 0, 1), i.e., zero everywhere, except for the n-th state. This reward function is clearly sparse and hard to optimize. A good human-designed reward function could look like (1/n, 2/n, \\u2026, (n-1)/n, n), which still leads the agent towards the n-th state but provides more guidance during training.\\n\\nBut this still leads to non-consistent optimal policies; let's take a three-state example: $s_1,s_2,s_3$, and two reward schemes: $R_1: s_1 =0, s_2=0, s_3=1$, and $R_2: s_1 =1/3, s_2=2/3, s_3=3/3=1$ per the authors' claim. Then we consider the following trajectory: $t_1: s_1 \\\\rightarrow s_2 \\\\rightarrow s_3$, which we know clearly is the optimal trajectory under the environmental rewards. Then compute the returns under the two reward schemes ($\\\\gamma=0.9$):\\n\\n$$\\nreturn_1(t_1) = 0 + 0.9 * 0 + 0.9^2 * 1 = 0.81\\n$$\\n\\n$$\\nreturn_2(t_1) = 1/3 + 0.9 * 2/3 + 0.9^2 * 1 \\\\approx 1.74\\n$$\\n\\nWhile we further consider the trajectory $t_2: s_1 \\\\rightarrow s_2 \\\\rightarrow s_2 \\\\rightarrow s_3$, then:\\n\\n$$\\nreturn_1(t_2) = 0 + 0.9 * 0 + 0.9^2 * 0 + 0.9^3 * 1 = 0.729\\n$$\\n\\n$$\\nreturn_2(t_2) = 1/3 + 0.9 * 2/3 + 0.9^2 * 2/3 + 0.9^3 * 1 \\\\approx 2.20\\n$$\\n\\nWe can see: $return_2(t_2) > return_2(t_1)$ but $return_1(t_2) < return_1(t_1)$, which means that under the human-designed reward scheme, the optimal policy changes (actually, staying at $s_2$ longer can get higher returns). This is where my main concern comes from: in the LLM-generated reward functions, could it be possible that the reward functions make the agent stay in some local optimum and then lead to sub-optimal or non-consistent policies?\\n\\n3. 
Regarding the novelty and contribution, from the paper's statements at around Line 309, and to my knowledge, ORSO mainly modified EUREKA (Ma et al., 2024) from selecting and testing one-by-one to initializing all candidates and testing in parallel, which is not a big improvement, but naturally trading space for time.\\n\\n4. Lastly, I encourage the authors to compare ORSO with at least one more related baseline. As for now, the study comparing ORSO with different sources of reward functions is good enough, but no other auto-reward-function-generation algorithms are compared; they also show good performance on sparse-reward envs and reward design.\"}", "{\"comment\": \"Dear reviewer,\\n\\nAs the discussion period is coming to an end, we would be grateful for any further feedback on our earlier response. Please let us know if you have any questions or need clarification. Otherwise, we kindly ask you to consider adjusting your score.\\n\\nWe truly value your insights and participation in the review process.\\n\\nKind regards,\\n\\nThe authors\"}", "{\"comment\": \"We thank the reviewer for the thoughtful response and for motivating their score. However, we would like to respectfully disagree with some of the points raised by the reviewer.\\n\\n> The method does not guarantee the optimality of the policy with respect to the base reward (theoretically), which makes it difficult to see how the approach will be practically useful in real-world applications.\\n> \\n\\nLack of theoretical optimality does not hinder practicality in real-world applications. While it is true that the method does not theoretically guarantee optimality with respect to the base reward, our empirical evaluation shows substantial and statistically significant improvements in performance when training policies with the rewards selected by ORSO. Additionally, it is worth noting that optimality of the reward function does not necessarily translate to empirical performance gains. 
Many existing works [1, 2, 3, 4, 5, 6, 7] use reward shaping without optimality guarantees with respect to the base reward function, but achieve impressive performance in real-world applications.\\n\\n> While access to environment code is feasible in robotics domains, it is generally not true for other scenarios. If the paper specifically targets robotics domains, this focus should be made clear in the writing, as the current framing suggests a more general-purpose method for reward shaping.\\n> \\n\\nAs the reviewer pointed out, the generation is not the main contribution of our paper, and it is the only part of the algorithm that requires access to the environment code. The ORSO framework can be applied to any form of reward function. The choice of using LLMs to generate reward functions is motivated by the success of such methods in EUREKA and the ability to interpret the reward functions (compared to methods such as LIRPG, where the intrinsic reward is a black-box model).\\n\\n> Methods such as LIRPG and other reward-shaping approaches typically do not require maintaining multiple copies of policies. While the proposed method benefits significantly from parallelizing the search space, it seems to have prohibitive requirements, including access to parallel simulators/environments.\\n> \\n\\nThis is a great observation. ORSO does indeed require additional memory to maintain multiple policy networks. However, this is generally not a prohibitive computational requirement. In Figure 3 of the updated PDF, we show that ORSO has significant (up to **16x fewer GPUs needed** to achieve the same performance in the same time) gains in terms of necessary compute compared to EUREKA. All our experiments can indeed be run on a single commercial GPU (e.g., a 3090 Ti or even a 2080 Ti).\\n\\nWe also believe access to parallel simulators/environments is not an unreasonable requirement. 
Simulators such as Isaac Gym, gymnax, and MuJoCo MJX are now widely adopted and available to the community. Moreover, GPU parallelization of environments is not necessary for ORSO.\\n\\nWe hope the additional clarifications will highlight the impact of our proposed framework. We are happy to engage in further discussions with the reviewer and clarify any additional questions they may have.\\n\\n**References**\\n\\n[1] Liu, Minghuan, et al. \\\"Visual whole-body control for legged loco-manipulation.\\\"\\u00a0*arXiv preprint arXiv:2403.16967*\\u00a0(2024).\\n\\n[2] Margolis, Gabriel B., et al. \\\"Rapid locomotion via reinforcement learning.\\\"\\u00a0*The International Journal of Robotics Research*\\u00a043.4 (2024): 572-587.\\n\\n[3] Margolis, Gabriel B., and Pulkit Agrawal. \\\"Walk these ways: Tuning robot control for generalization with multiplicity of behavior.\\\"\\u00a0*Conference on Robot Learning*. PMLR, 2023.\\n\\n[4] Lee, Joonho, et al. \\\"Learning quadrupedal locomotion over challenging terrain.\\\"\\u00a0*Science robotics*\\u00a05.47 (2020): eabc5986.\\n\\n[5] Ma, Yecheng Jason, et al. \\\"Eureka: Human-level reward design via coding large language models.\\\"\\u00a0*arXiv preprint arXiv:2310.12931*\\u00a0(2023).\\n\\n[6] Ha, Huy, et al. \\\"Umi on legs: Making manipulation policies mobile with manipulation-centric whole-body controllers.\\\"\\u00a0*arXiv preprint arXiv:2407.10353*\\u00a0(2024).\\n\\n[7] Kaufmann, Elia, et al. \\\"Champion-level drone racing using deep reinforcement learning.\\\"\\u00a0*Nature*\\u00a0620.7976 (2023): 982-987.\"}
If the authors could compare ORSO with some representative reward shaping algorithms (such as [1][2][3][4]), it would better showcase its advantages.\\n > \\n \\n These papers collectively discuss various approaches to reward shaping and intrinsic motivation in reinforcement learning, with a focus on improving learning efficiency in sparse-reward environments through methods like exploration-guided rewards (ExploRS), learning intrinsic rewards, self-supervised reward shaping, and random network distillation (RND). \\n \\n We note that methods like [1] are complementary to ORSO, meaning that one can use ORSO to propose shaped reward functions and then apply other shaping methods on top to further improve the performance of such reward functions.\\n \\n We chose to compare ORSO-designed reward functions with LIRPG because it is one of the most widely adopted methods for reward design in reinforcement learning. We provide experimental results with LIRPG on the Ant task. LIRPG jointly trains a policy and learns an intrinsic shaping reward, such that the intrinsic reward leads to higher extrinsic reward.\\n \\n In each experiment, we use the task reward function, the human-designed reward function, and the reward function selected by ORSO as the extrinsic reward for LIRPG, respectively. We run each experiment for 5 random seeds and report the mean base environmental reward achieved by training with each method, along with 95% confidence intervals.\\n \\n | Method | Without LIRPG | With LIRPG |\\n | --- | --- | --- |\\n | No Design | 4.67 +/- 0.84 | **5.73 +/- 1.08** |\\n | Human | **9.84 +/- 0.30** | 10.02 +/- 0.30 (*) |\\n | ORSO | **11.09 +/- 0.68** | 11.51 +/- 0.45 (*) |\\n - **LIRPG cannot design better rewards than ORSO**: When LIRPG is applied to the task reward, it results in lower performance compared to using the human-designed or ORSO-selected rewards.\\n - **LIRPG as a complementary method**: We emphasize that LIRPG can complement ORSO. 
By applying LIRPG to reward functions selected by ORSO (which have already undergone shaping), LIRPG may help learn an additional function that aids the agent in optimizing ORSO-designed rewards.\\n \\n The bolded entries have undergone one stage of reward shaping, while the entries in the table above marked with (*) have undergone two stages of reward design. First, a performant reward function was obtained from a human designer or from ORSO (both outperforming LIRPG on the task reward). Then, given the good quality of the reward, we show that we can apply LIRPG on such reward functions, yielding some marginal improvement. We only provide such results for completeness. We note that the evaluation of each policy is done with respect to the task reward function (No Design).\\n \\n **LIRPG does not help if the extrinsic reward function is too sparse.** We also test LIRPG on sparse-reward manipulation tasks, such as the Allegro Hand. However, LIRPG does not provide any improvement over the environmental reward as the reward function is \\u201ctoo sparse.\\u201d This agrees with the experimental results in Figure 7 of [1], where the authors show that increasing the sparsity of the feedback (every 10, 20, or 40 steps) can decrease the performance of LIRPG.\\n \\n- 1 (a)\\n \\n > What are the specific forms of these reward functions? Are they related to the observations, features, and/or pre-defined reward components? Can these generated rewards capture all the necessary aspects to define effective rewards?\\n > \\n \\n The reward functions are generated as Python code. The inputs of the reward functions are environment observations and agent actions. 
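For illustration, here is a minimal, hypothetical sketch of the form such a generated reward function can take (this is not one of the actual generated functions; the observation layout and component weights are assumptions, loosely following the snippet quoted by the reviewer):

```python
# Hypothetical sketch of an LLM-generated reward function.
# Single-environment, plain-Python version; actual generated code is vectorized.
# Assumed observation layout (as in the reviewer's quoted snippet):
#   root_state[0:3]  -> torso position [px, py, pz]
#   root_state[7:10] -> linear velocity [vx, vy, vz]

def compute_reward(root_state, action):
    vx = root_state[7]                                   # forward velocity
    pz = root_state[2]                                   # torso height

    forward_reward = vx                                  # encourage moving forward
    balance_reward = 1.0 if pz > 0.3 else 0.0            # reward staying upright
    action_penalty = 0.01 * sum(a * a for a in action)   # discourage large torques
    survival_bonus = 0.5                                 # constant per-step bonus

    # Weighted sum of components, mirroring the form discussed above.
    return forward_reward + balance_reward - action_penalty + survival_bonus
```

The selection problem then amounts to choosing among many such candidate functions, each differing in its components and weights.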
We report some reward functions in the Appendix of the updated PDF file.\\n \\n- 1 (b)\\n \\n > If the optimal reward function is not included in the generated candidates, how does ORSO ensure that the final optimized policy is indeed optimal?\\n > \\n \\n The regret guarantee we provide is with respect to the optimal reward function in the set of candidate functions. While it is true that ORSO would not achieve a high task reward if the set does not contain an optimal one, we find that in practice this rarely happens. Practically, in order to find a performant reward function, we need sampling and iterative improvement. Thanks to ORSO\\u2019s efficiency, we can explore a large set of functions within a limited budget, which leads to a higher probability of sampling the optimal reward function from the generator.\"}", "{\"comment\": \"Thanks for the authors' response. Many of my concerns have been addressed.\\n\\nRegarding the \\\"regression\\\" approach, I note from Appendix E.1 that most reward functions created by LLMs are parsing the observation space to extract various pieces of information, designing reward components, and then summing them with weights. If all candidate reward functions are of similar form, rather than having the LLM generate K reward functions and simultaneously maintaining K policies for selection, I wonder if it would be more flexible to let the LLM/human only specify the components and then learn the corresponding weights (a zero weight indicates the absence of a reward component). This is just an idea of mine and not a request for the authors to implement during the rebuttal period.\\n\\nAfter carefully reading the revised paper and other reviewers' comments, I decided to maintain my score for the following reasons:\\n\\n1. The LLM-generated reward functions have certain limitations and lack controllability. 
The LLM-generated reward functions are constrained by the LLM's understanding of the environment or the prior knowledge provided by humans. As shown in the examples from Appendix E.1, the LLM appears to have a clear understanding of what each element in the observation space represents. However, in many real-world environments, such as those with image-based observations, such detailed understanding is difficult to obtain. This limitation restricts the generalizability of ORSO.\\n\\n2. In my view, the contribution of the paper is somewhat incremental. The method for generating candidate reward functions closely follows existing work (e.g., EUREKA). Similarly, the selection strategies employed are also existing algorithms (e.g., ETC, EG, UCB, EXP3, and D3RB). Simply combining these two parts and demonstrating improved performance over EUREKA without selection strategies does not constitute a sufficiently novel contribution.\\n\\n3. The abstract claims that \\\"While shaping rewards have been introduced to provide additional guidance, selecting effective shaping functions remains challenging and computationally expensive.\\\" However, this overlooks many methods that do not require explicit shaping function selection. Numerous algorithms exist that automate the generation of reliable reward models. Although the experiments in the paper explore the importance of reward selection and compare various selection strategies, the primary objective of the paper is *reward design*. To fully demonstrate its effectiveness, comparisons with SOTA (within the last five years) automated reward shaping methods would be necessary.\\n\\nGiven these considerations, I have decided to maintain my score. 
I would like to thank the authors again for their response.\"}", "{\"title\": \"Clarifications and Additional Insights on ORSO's Reward Generation, Complexity, and Experimental Results (Part 3)\", \"comment\": \"- Q2\\n \\n > Does the paper simply apply the D^3RB algorithm, or does it introduce theoretical improvements? This aspect is somewhat unclear.\\n > \\n \\n We use the D3RB algorithm, for which we present guarantees under different assumptions. The convergence result presented in our manuscript (dependence on true regret coefficient) is different from the one presented in the original D3RB paper (dependence on the monotonic regret coefficient). The intuition behind the guarantees provided in our paper is that the true regret coefficient dependence implies that even if the optimal reward function has a \\u201cslow start\\u201d, it can still be selected and trained on and will achieve similar regret to running only the optimal.\\n \\n- Q3 + Q4\\n \\n > As I understand it, there\\u2019s no guarantee that the optimal policy obtained with the selected reward aligns with the optimal policies for the base reward (i.e.,\\u00a0**No Design**), correct?\\n > \\n \\n > Could you clarify the evaluation metrics? The ideal benchmark should be based on the base reward, as that\\u2019s ultimately the reward we aim to optimize.\\n > \\n \\n Yes, while there is no guarantee that the policies learned with the selected shaping reward perfectly align with the optimal policy for the task reward (base reward / No Design), our approach explicitly selects shaping rewards that maximize the task reward, and our experiments show significant performance gains over the No Design baseline. The \\u201cselection reward\\u201d in our framework measures the task reward achieved by each candidate reward function, ensuring that the selected shaping reward leads to improvements in the original task reward. 
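To make this selection mechanism concrete, here is a minimal toy sketch (our assumption-laden illustration, not the paper's implementation): "training a policy with candidate reward k for N iterations and evaluating it on the task reward" is stubbed by a noisy sample around a fixed, hypothetical per-candidate mean, and plain UCB — one of the selection strategies compared in the paper alongside ETC, EG, EXP3, and D3RB — picks which candidate to train next:

```python
import math
import random

random.seed(0)

# Toy stand-in for "train with candidate reward k for N iterations, then
# evaluate the resulting policy on the task reward". The means below are
# hypothetical, not values from the paper.
TRUE_TASK_REWARD = [0.2, 0.5, 0.9]   # candidate 2 is the best shaping reward

def train_and_evaluate(k):
    return TRUE_TASK_REWARD[k] + random.gauss(0.0, 0.05)

def ucb_select(num_rounds):
    K = len(TRUE_TASK_REWARD)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, num_rounds + 1):
        if t <= K:
            k = t - 1                # pull each candidate once first
        else:                        # then pick by optimistic estimate
            k = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = train_and_evaluate(k)    # observed task reward = selection feedback
        counts[k] += 1
        sums[k] += r
    best = max(range(K), key=lambda i: sums[i] / counts[i])
    return best, counts

best, counts = ucb_select(500)
```

The key point mirrored here is that the feedback driving selection is always the original task reward, never the candidate's own shaped reward.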
As a clarification, the task reward is always used as the evaluation metric to assess the quality of policies and their corresponding candidate rewards.\\n \\n- Q5\\n \\n > The terminology around \\u201ceffective budget\\u201d seems confusing. Based on the proposed algorithms, the effective budget for environment interactions should be\\u00a0TN\\u00a0rather than\\u00a0T, since each iteration assumes running the algorithm for at least\\u00a0N\\u00a0steps to yield a final policy. Could you clarify this? This also seems to affect Figure 1\\u2014does preferred reward selection occur after\\u00a0N\\u00a0iterations or a single one? As I understand it, each reward would at-least have to be evaluated\\u00a0N\\u00a0times, hence making a minimum budget of\\u00a0KN, right?\\n > \\n \\n We appreciate the reviewer\\u2019s comment and apologize for any confusion caused by the terminology. In **Algorithm 1**, \\\\(T\\\\) refers to the number of selection steps, and the total number of iterations is indeed \\\\(T \\\\times N\\\\), where $N$ is the number of training iterations a reward function is trained on before selecting another one. To clarify the notion of \\u201cbudget\\u201d and the allocation of iterations, we provide a more structured explanation below:\\n \\n - Budget: In our context, the budget refers to the total number of PPO iterations allowed for training\\n - Number of Selection Steps: $T$\\n - Number of Training Iterations per Selection Step: $N$\\n - Number of Iterations to Train Baselines: `n_iters`\\n \\n In our experiments, we fix the total budget to be a fixed multiple of `n_iters` (the number of iterations used to train the baselines, i.e., task reward function and human-designed reward function) for each task and ablate the choice of the multiple. That is, we have `total_iterations_budget = n_iters x B = T x N`, where we ablate the choice of $B$ with $B \\\\in \\\\{5, 10, 15\\\\}$. 
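For instance, plugging in the Ant values used in this response (`n_iters = 1500`, together with the experimental split `N = n_iters / 100`), the bookkeeping works out as follows (a sanity-check sketch, with `B = 10` chosen from the ablated set):

```python
# Budget bookkeeping for the Ant example (B = 10 picked from {5, 10, 15}).
n_iters = 1500           # iterations used to train each baseline reward
B = 10                   # budget multiplier
N = n_iters // 100       # training iterations per selection step
T = 100 * B              # number of selection steps

total_budget = n_iters * B           # total PPO iterations
assert T * N == total_budget         # T x N accounts for the same budget
```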
We choose `N = n_iters / 100`, so that $T = 100 \\\\times B$.\\n \\n We note that we can arbitrarily choose values like `1e6` iterations for the budget, but this alone does not provide insight into the relative cost compared to training with the baseline reward functions.\\n \\n Regarding **Figure 1**, it is a schematic illustration intended to highlight the trade-off between allocating budget to suboptimal versus optimal reward functions when the budget is limited. The figure depicts an extreme case where \\\\(N = \\\\texttt{n\\\\_iters}\\\\) and \\\\(T < 2 \\\\times \\\\texttt{n\\\\_iters}\\\\). However, this situation does not occur in our experiments. For example, in the **Ant** task, we use \\\\(\\\\texttt{n\\\\_iters} = 1500\\\\) and set \\\\(N = 15\\\\).\\n \\n\\nWe hope these clarifications address the concerns raised. If clarifications and additional experiments resolve your concerns, we would be grateful if you could consider raising your score to reflect the improvements.\\n\\n**References**\\n\\n[1] Cheng, Ching-An, Andrey Kolobov, and Adith Swaminathan. \\\"Heuristic-guided reinforcement learning.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a034 (2021): 13550-13563.\\n\\n[2] Zheng, Zeyu, Junhyuk Oh, and Satinder Singh. \\\"On learning intrinsic rewards for policy gradient methods.\\\" Advances in Neural Information Processing Systems 31 (2018).\"}", "{\"comment\": \"We thank the reviewer for the quick response. We addressed the instability in choosing among many reward functions in the response to Reviewer DEh7's comments (https://openreview.net/forum?id=0uRc3CfJIQ&noteId=GdpudWUVqQ). Moreover, we show in our experiments with K=48 and K=96 that this instability does not occur. 
The additional table provided in the review above serves to show that ORSO indeed selects reward functions within one CI of the optimal one.\\n\\nRegarding the suggestion to consider regression for obtaining new rewards instead of selection, we appreciate the idea but would like to better understand the intended approach.\\n\\nWe would also ask that if the previous comments clarified the reviewer\\u2019s questions, they would consider increasing the score accordingly. Thank you again for your time and feedback.\"}", "{\"comment\": \"We thank the reviewer for their response. We would like to address the concerns raised by the reviewer.\\n\\n> Regarding the \\\"regression\\\" approach, I note from Appendix E.1 that most reward functions created by LLMs are parsing the observation space to extract various pieces of information, designing reward components, and then summing them with weights. If all candidate reward functions are of similar form, rather than having the LLM generate K reward functions and simultaneously maintaining K policies for selection, I wonder if it would be more flexible to let the LLM/human only specify the components and then learn the corresponding weights (a zero weight indicates the absence of a reward component). This is just an idea of mine and not a request for the authors to implement during the rebuttal period.\\n> \\n\\nWe thank the reviewer for clarifying the regression approach. This is indeed an interesting approach for settings in which the possible reward components are known and the candidate rewards are all in the same form and would be exciting to explore as a future direction. However, we would like to note that one might not know all possible reward components or listing all possible components might not be feasible as there might be numerous possible transformations applied to, for example, the distance component. 
We further elaborate on this point [here](https://openreview.net/forum?id=0uRc3CfJIQ&noteId=Z0mSV85ycC).\\n\\n> The LLM-generated reward functions have certain limitations and lack controllability. The LLM-generated reward functions are constrained by the LLM's understanding of the environment or the prior knowledge provided by humans. As shown in the examples from Appendix E.1, the LLM appears to have a clear understanding of what each element in the observation space represents. However, in many real-world environments, such as those with image-based observations, such detailed understanding is difficult to obtain. This limitation restricts the generalizability of ORSO.\\n> \\n\\nWe agree that LLM-based generation is not the answer to reward generation in general. However, we would like to point out that many real-world applications [1, 2, 3] are non-image-state-based. Moreover, as the reviewer pointed out, generation is not the main contribution of our paper, which is the only part of the algorithm that requires access to the environment code. The ORSO framework can be applied to any form of reward function. The choice of using LLMs to generate reward functions is motivated by the success of such methods in EUREKA and other similar works.\\n\\n> In my view, the contribution of the paper is somewhat incremental. The method for generating candidate reward functions closely follows existing work (e.g., EUREKA). Similarly, the selection strategies employed are also existing algorithms (e.g., ETC, EG, UCB, EXP3, and D3RB). Simply combining these two parts and demonstrating improved performance over EUREKA without selection strategies does not constitute a sufficiently novel contribution.\\n> \\n\\nWhile ORSO is similar to EUREKA (Ma et al., 2024), our work extends beyond a simple modification. 
In Figure 3 of the updated PDF, we show that ORSO has significant (up to **16x fewer GPUs needed** to achieve the same performance in the same time) gains in terms of necessary compute compared to EUREKA. All our experiments can indeed be run on a single commercial GPU (e.g., a 3090 Ti or even a 2080 Ti). This opens the possibility for researchers with smaller computational budgets to quickly iterate their experiments.\\n\\nMoreover, the connection between model selection techniques and reward design in RL has not been explored before, and our experimental results show that we gain in efficiency while maintaining effectiveness. We provide a novel analysis of D3RB, which is in stark contrast with the regret guarantees of the original paper. Namely, our guarantees depend on the true regret coefficients rather than the monotonic ones.\\n\\n**References**\\n\\n[1] Margolis, Gabriel B., et al. \\\"Rapid locomotion via reinforcement learning.\\\"\\u00a0*The International Journal of Robotics Research*\\u00a043.4 (2024): 572-587.\\n\\n[2] Margolis, Gabriel B., and Pulkit Agrawal. \\\"Walk these ways: Tuning robot control for generalization with multiplicity of behavior.\\\"\\u00a0*Conference on Robot Learning*. PMLR, 2023.\\n\\n[3] Lee, Joonho, et al. \\\"Learning quadrupedal locomotion over challenging terrain.\\\"\\u00a0*Science robotics*\\u00a05.47 (2020): eabc5986.\"}
Assumption 4.2 and Monotonicity\", \"In our experimental settings, we observed that monotonicity holds in most cases\", \"The few violations we observed (visible in Figures 17 and 18) occurred with alternative selection strategies, not with D3RB\", \"These cases likely represent situations where the selection strategy committed to a suboptimal reward function, while the optimal reward function still exhibited monotonic behavior\", \"The convergence result presented in our manuscript (dependence on true regret coefficient) is different from the one presented in the original D3RB paper (dependence on the monotonic regret coefficient). The intuition behind the guarantees provided in our paper is that the true regret coefficient dependence implies that even if the optimal reward function has a \\u201cslow start\\u201d, it can still be selected and trained on and will achieve similar regret to running only the optimal one.\", \"3. Impact of Wrong/Redundant Reward Functions\", \"While a direct analysis of wrong/redundant reward functions would be ideal, it would be computationally prohibitive given our large search space\", \"However, our analysis of varying K (number of reward functions) serves as a useful proxy:\", \"The results in the appendix show ORSO's robustness to K given sufficient budget\", \"On the other hand\", \"With small K, naive selection can evolve quickly but risks converging to and evolving suboptimal rewards\", \"With large K, naive selection may explore more options and allow evolution to find optimal rewards; however, this will lead the naive selection algorithm to spend a significant amount of time and compute on suboptimal reward functions initially\", \"4. VLM-based Reward Design\", \"In some initial experiments, we tested the possibility of using VLMs to evaluate the behavior of trained agents and remove the need for the manually specified task reward function. 
We however observed that, at the time of the experiments, VLMs struggled to evaluate behaviors correctly and would hallucinate most of the time. We believe that as VLMs improve, this can be an exciting direction to explore.\", \"We thank the reviewer again for their careful reading and constructive feedback that has helped us identify areas where we can strengthen the paper's presentation and discussion. We incorporate these clarifications in the revised version.\"]}", "{\"comment\": \"# Part 2/2\\n\\n> Regarding the novelty and contribution, from the papers' statements at around Line 309, and my basic understanding, the OSRA mainly modified the EUREKA (Ma et al., 2024) from selecting and testing one-by-one to initializing all candidates and testing in parallel, which is not a big improvement, but naturally trading space for time.\\n> \\n\\nWhile ORSO is similar to EUREKA (Ma et al., 2024), our work extends beyond a simple modification. In Figure 3 of the updated PDF, we show that ORSO has significant (up to **16x fewer GPUs needed** to achieve the same performance in the same time) gains in terms of necessary compute compared to EUREKA. The connection between model selection techniques and reward design in RL has not been explore before and our experimental results show that we gain in efficiency, while maintaining effectiveness.\\n\\n> Lastly, I encourage the authors to compare ORSO with at least one more related baseline, as for now, the study of comparing ORSO with different sources of reward functions is good enough, but no other auto-reward-function-generation algorithms are compared, they also show good performance on sparse-reward envs and reward design.\\n> \\n\\nWe appreciate the suggestion and are committed to enhancing the experimental section. We are open to and happy to incorporate additional baseline comparisons in the final version. 
We note that we have compared with LIRPG (https://openreview.net/forum?id=0uRc3CfJIQ&noteId=U9RaoKtiXV), which is a widely adopted method. Our results show that ORSO with an LLM generator can significantly outperform this baseline.\\n\\nWe want to emphasize that our primary goal is not to propose a new reward generation algorithm, but to introduce a more efficient and effective method of reward function selection.\\n\\nWe hope our responses address the reviewer\\u2019s comments. We are happy to engage in further discussions and clarify any remaining questions.\"}", "{\"summary\": \"The paper proposed an Online Reward Selection and Policy Optimization (ORSO) algorithm for reinforcement learning. ORSO pre-generates some candidate reward functions by linearly combining reward components or via an LLM. While learning, ORSO dynamically evaluates which candidate reward function can lead to better policy optimization, then selects the optimal candidate to guide the learning process.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The approach is easy to implement and effective at selecting reward functions; it also shows fast convergence in terms of both computational time and sample efficiency.\", \"weaknesses\": \"1. As this method follows a \\\"pre-define and select one\\\" paradigm, the final optimal performance that ORSO can achieve heavily depends on how good the pre-generated candidate reward functions are.\\n2. The author states that this is a reward shaping approach, but the paper doesn't compare it with any reward shaping or reward selection baselines. If the authors could compare ORSO with some representative reward shaping algorithms (such as [1][2][3][4]), it would better showcase its advantages.\\n\\n[1] Devidze, Rati, Parameswaran Kamalaruban, and Adish Singla. 
\\\"Exploration-guided reward shaping for reinforcement learning under sparse rewards.\\\" Advances in Neural Information Processing Systems 35 (2022): 5829-5842.\\n\\n[2] Zheng, Zeyu, Junhyuk Oh, and Satinder Singh. \\\"On learning intrinsic rewards for policy gradient methods.\\\" Advances in Neural Information Processing Systems 31 (2018).\\n\\n[3] Memarian, Farzan, et al. \\\"Self-supervised online reward shaping in sparse-reward environments.\\\" 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.\\n\\n[4] Burda, Y., Edwards, H., Storkey, A., and Klimov, O. (2018). Exploration by random network distillation. In International Conference on Learning Representations.\", \"questions\": \"1. Regarding Weakness 1, about candidate reward function generation, As described in section 5.1.2, the candidate reward functions are directly generated through LLM, Could you clarify:\\n\\n(a) What are the specific forms of these reward functions? Are they related to the observations, features, and/or pre-defined reward components? Can these generated rewards capture all the necessary aspects to define effective rewards?\\n\\n(b) If the optimal reward function is not included in the generated candidates, how does ORSO ensure that the final optimized policy is indeed optimal?\\n\\n2. the policy optimizes based on a given candidate reward function, would this make it difficult to ensure that the policy is optimizing the task's original objective (the environmental reward function defined by the MDP)?\\n\\n3. Frequent switching of reward functions may lead to significant shifts in the policy's learning objectives. For instance, in a maze task, if reward function #1 focuses on avoiding obstacles, while reward function #2 focuses on resource collection, switching between these two may lead to inconsistent learning targets. Would this cause instability in the learning process?\\n\\n4. 
I'm unclear about the evaluation metric in the experiments, specifically, in Figure 2 (left), it shows performance as a percentage of the human-designed reward function. In Section 5.1.1, the paper states, \\\"No design is with the task reward function r for each MDP\\\". I assume this refers to the original environmental reward function, which should be the primary objective the agent aims to optimize. However, in Figure 2 (left), the \\\"No design\\\" baseline is around half of the human-designed reward (I assume this figure reports cumulative rewards under each baseline's own reward function). This seems unfair and could introduce bias for deviating from the MDP\\u2019s original task objective. \\n\\nFor example, suppose the MDP provides rewards of $0, 0, 1$ for states $s_1, s_2, s_3$ (only 3 states). A human-designed reward function might assign $0, 1, 1$ for $s_1, s_2, s_3$. Consequently, the cumulative reward under the human-designed reward function would be higher, and it also proposes new targets (both $s_2$ and $s_3$ are equally important). From my understanding, the performance should be evaluated consistently on the original MDP reward (the true objective), meaning that the \\\"No design\\\" case should actually serve as an upper bound.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reward Shaping Comparison, Reward Function Generation, and Evaluation Clarifications (Part 2)\", \"comment\": \"1. Q1\\n \\n > The policy optimizes based on a given candidate reward function, would this make it difficult to ensure that the policy is optimizing the task's original objective (the environmental reward function defined by the MDP)?\\n > \\n \\n This is correct. If the candidate reward function does not align with the original environmental reward, the learned policy would not be optimal with respect to the original task. 
Reward candidates that, when optimized for, lead to low original environmental reward are discarded by our selection algorithm, and those that lead to higher environmental reward will be chosen more often. Reward design involves aligning auxiliary reward functions with the task reward, especially when the task reward is sparse or difficult to optimize. This is typical in robotics [2, 3], cybersecurity [4], and more. Approaches like potential-based reward shaping [5] can provably preserve optimality; however, they have been shown to not work very well in practice [6]. That said, as shown in the additional LIRPG experiments above, such methods can help improve base reward functions marginally.\\n\\n2. Q2\\n \\n > Frequent switching of reward functions may lead to significant shifts in the policy's learning objectives. For instance, in a maze task, if reward function #1 focuses on avoiding obstacles, while reward function #2 focuses on resource collection, switching between these two may lead to inconsistent learning targets. Would this cause instability in the learning process?\\n > \\n \\n In the implementation, this behavior does not occur because a separate policy is instantiated for each reward function, as detailed in Algorithm 1, line 2. Consequently, each policy is updated only as frequently as its corresponding reward function is selected.\\n \\n3. Q3\\n \\n > I'm unclear about the evaluation metric in the experiments, specifically, in Figure 2 (left), it shows performance as a percentage of the human-designed reward function. In Section 5.1.1, the paper states, \\\"No design is with the task reward function r for each MDP\\\". I assume this refers to the original environmental reward function, which should be the primary objective the agent aims to optimize. However, in Figure 2 (left), the \\\"No design\\\" baseline is around half of the human-designed reward (I assume this figure reports cumulative rewards under each baseline's own reward function). 
This seems unfair and could introduce bias for deviating from the MDP\\u2019s original task objective.\\n > \\n \\n All evaluations in our experiments are with respect to the MDP\\u2019s task reward $r$. Because the human-designed reward functions were specifically designed so that training on those would improve task reward, we normalize results such that the policy trained with the human-designed reward function is at 1.0. The \\u201cNo Design\\u201d line being lower than the \\u201cHuman\\u201d line means that training with the original task reward achieves lower performance than training with the human-designed reward function, as measured by the original environmental reward function. This is exactly the objective of reward design: we aim to find reward functions that, when optimized, will lead to better performance with respect to the original task reward (because the designed reward is more amenable to optimization). In Figure 2 (left), the \\\"No design\\\" baseline \\u2014 using the task reward function alone \\u2014 achieves approximately half the performance of the human-designed reward. This demonstrates the difficulty of optimizing directly for the task reward and highlights the benefit of effective reward design in facilitating better optimization.\\n \\n- Regarding the example MDP, we would like to reiterate that the evaluation is done with respect to the original task reward. Therefore, if one were to evaluate using the human-designed reward function (0, 1, 1), an agent that equally visits states $s_2$ and $s_3$ would be optimal, but would not be optimal if we evaluate using the task reward (0, 0, 1). Therefore, in our evaluation, the human-designed reward function would not be considered a \\u201cgood\\u201d one. On the other hand, a reward function of the form (0, 1, 2) would be \\u201cgood\\u201d as it provides more guidance compared to (0, 0, 1) and the optimal strategy is still to visit state 3 as often as possible. 
This is more obvious if we consider an MDP with $n$ states $s_1, \\\\ldots, s_n$ with task reward function (0, 0, \\u2026, 0, 1), i.e., zero everywhere, except for the n-th state. This reward function is clearly sparse and hard to optimize. A good human-designed reward function could look like (1/n, 2/n, \\u2026, (n-1)/n, n), which still leads the agent towards the n-th state but provides more guidance during training.\"}", "{\"comment\": \"Dear reviewer,\\n\\nas the discussion period is coming to an end, we would be grateful for any further feedback on our earlier response. Please let us know if you have any questions or need clarification. Otherwise, we kindly ask you to consider adjusting your score.\\n\\nWe truly value your insights and participation in the review process.\\n\\nKind regards,\\n\\nThe authors\"}", "{\"comment\": \"We thank the reviewer for their thoughtful feedback and for revisiting their evaluation of our work. We greatly appreciate the updated score and your confidence in our submission. Your comments and suggestions have been invaluable in improving the clarity and quality of our paper.\"}", "{\"title\": \"Clarifications on Rewards, Proofs, and Presentation\", \"comment\": \"We thank the reviewer for their thoughtful feedback and for highlighting the strengths of our work, including the soundness of our approach, the in-depth related work section, and the detailed ablation analysis. Below, we address the specific questions and concerns raised.\\n\\n- **Human-Designed Reward Function**\\n \\n The term *human-designed reward function* refers to reward functions manually created by domain experts who implemented the environments used in our experiments. These experts designed the rewards to reflect task objectives based on their domain knowledge. Details on how each reward function (task reward and human-designed reward) is defined can be found in Appendix E (Reward Functions Definitions). We note that the plots always plot the task reward. 
When **Human** is indicated, it means \\u201cperformance of a policy trained with the human-designed reward function on the task reward.\\u201d\\n \\n- **Clarification on Proof for Lemma D1**\\n \\n We appreciate the reviewer pointing out that the proof for D1 could benefit from additional clarity. We have updated the proof of Lemma D1 with the following structure in the updated PDF.\\n \\n **Base Case ($t=1$)**\\n \\n At $t=1$, for all algorithms $i \\\\in [K]$:\\n \\n - $\\\\widehat{d}^i_1 = d_{\\\\min}$ (by initialization)\\n - $n_1^i = 1$ if $i$ is the first algorithm chosen, 0 otherwise\\n - Therefore $n_1^i \\\\leq n_1^{i_\\\\star} + 1$ holds\\n \\n **Inductive Step**\", \"inductive_hypothesis\": \"Assume that for some $t \\\\geq 1$:\\n \\n - $\\\\widehat{d}^{i_\\\\star}_{t-1} = d_{\\\\min}$\\n - $n_{t-1}^i \\\\leq n_{t-1}^{i_\\\\star} + 1$ for all $i \\\\in [K]$\\n \\n We need to show these properties hold for $t$. Let $i_t = i_\\\\star$. When $\\\\mathcal{E}$ holds, the LHS of D$^3$RB's misspecification test satisfies [proof as is].\\n \\n Combining these inequalities shows the misspecification test will not trigger, thus:\\n \\n 1. $\\\\widehat{d}^{i_\\\\star}_t$ remains at $d_{\\\\min}$\\n 2. For all $i \\\\in [K]$, $n_t^i \\\\leq n_t^{i_\\\\star} + 1$ continues to hold\\n \\n This finalizes the proof.\\n \\n- **Impact of Poorly Chosen Task Rewards**\", \"regarding_the_concern_about_analyzing_the_impact_of_poorly_chosen_task_rewards\": \"this is fundamentally a task specification problem [1]. In any reinforcement learning framework, a misspecified task reward \\u2014 whether used with ORSO or another method \\u2014 will inevitably lead to optimization towards unintended objectives. While this highlights a broader challenge in reward design, it is not unique to ORSO.\\n \\n- **Presentation**\\n \\n We agree that Section 2 might be a bit redundant for RL experts. We provided this section to help non-experts with the necessary notation. 
We provide a more thorough presentation of online model selection preliminaries in the appendix.\\n \\n\\nWe hope these changes, together with the strengths the reviewer already highlighted, will increase your confidence in our work. If these revisions resolve your concerns, we would be grateful if you could consider raising your score to reflect the improvements. We appreciate your time and constructive feedback, which has helped us refine our submission.\\n\\n**References**\\n\\n[1] Agrawal, Pulkit. \\\"The task specification problem.\\\"\\u00a0*Conference on Robot Learning*. PMLR, 2022.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This manuscript proposes the Online Reward Selection and Policy Optimization (ORSO) algorithm to frame shaping reward selection as an online model selection problem. It automatically identifies promising shaping reward functions, balancing exploration and exploitation with provable regret guarantees. The ORSO method significantly improves sample efficiency and reduces computational time compared to traditional methods that fully evaluate each shaping reward function.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea is a little simple but effective.\", \"weaknesses\": \"Some experiments and theories can be added.\", \"questions\": \"1. The proposed method generates a set of candidate reward functions for the online selection phase. Does having K reward function candidates mean that the number of candidates is fixed? If all these candidates are not suitable or not the best, what is the solution?\\n\\n2. The experiments compared the performance of policies trained using No Design, Human, Naive Selection, and ORSO to show the superiority of ORSO. However, the impact of selecting different reward functions for the ORSO algorithm on experimental results has not been analyzed. 
If possible, please provide relevant experiments to demonstrate the experimental differences caused by selecting different reward functions.\\n\\n3. Figure 4 shows the normalized cumulative regret for different selection algorithms. The manuscript mentioned that ORSO\\u2019s regret can become negative, indicating that it finds reward functions that outperform the human baseline. However, the minimum value is zero in Figure 4; I didn\\u2019t observe any negative values.\\n\\n4. There is a similar paper, ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization, published in ARLET 2024. What is the difference between these two works? Has the ARLET paper been cited?\\n\\n5. The number of references is small, and more recent articles on reward shaping can be added.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"I thank the authors for their comments and revision. I will retain my score.\"}", "{\"metareview\": \"This paper proposes a reward design solution method that uses automatically generated candidate reward functions (from an LLM) and optimizes a policy for each reward function. The method then dynamically selects which reward function is likely to be the best one under the original reward function. The algorithm iterates on training and evaluating each candidate reward function for a fixed amount of time. The algorithm is not applicable to every problem, as noted by one reviewer, and makes a strong assumption that good candidate reward functions can be generated. However, the paper does demonstrate that it can solve the problem it set out to. Thus, I recommend this paper for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"There were several discussions with the reviewers. One reviewer (sCCd) maintained their score and raised concerns about the method's applicability. 
However, I do not believe these concerns are cause for a rejection. In one case, the reviewer raised their score to an eight, leading to a sufficiently positive reception of the paper to consider acceptance. While the paper is not perfect, there do not appear to be major flaws warranting rejection.\"}", "{\"comment\": \"Thank you for your response. The clarifications on Assumption 4.2 and whether monotonicity holds in the experimented environments are helpful.\\n\\nExtending the discussions in the main document about these clarifications and the impact of wrong/redundant reward functions would improve the presentation.\\n\\nI will keep my score as is towards acceptance.\"}", "{\"comment\": \"Thank you to the authors for their detailed responses and clarifications. After careful consideration, I would like to maintain my current score.\", \"the_primary_reasons_for_this_decision_are_as_follows\": \"1. The method does not guarantee the optimality of the policy with respect to the base reward (theoretically), which makes it difficult to see how the approach will be practically useful in real-world applications.\\n2. While access to environment code is feasible in robotics domains, this is generally not the case in other scenarios. If the paper specifically targets robotics domains, this focus should be made clear in the writing, as the current framing suggests a more general-purpose method for reward shaping.\\n3. Methods such as LIRPG and other reward-shaping approaches typically do not require maintaining multiple copies of policies. 
While the proposed method benefits significantly from parallelizing the search space, it seems to have prohibitive requirements, including access to parallel simulators/environments.\\n\\nRegarding the point about reward generation being part of ORSO, I do not believe that the way it is currently proposed and written supports the claim that reward generation is an integral part of the method.\\n\\nAdditionally, I resonate with the points raised by reviewer DEh7 and agree with their assessment. For these reasons, I would like to keep my score unchanged.\\n\\nThank you again for your thoughtful engagement with the feedback.\"}", "{\"summary\": [\"The paper employs the data-driven online model selection algorithm D^3RB (Pacchiano et al., 2023) to choose between candidate reward functions for reinforcement learning, where these candidates are generated through an LLM. It replaces the naive reward selection from Ma et al. (2023) with D^3RB, enabling more efficient online selection among candidate rewards. A simple example shows how D^3RB helps prevent budget exhaustion by avoiding over-allocation to a single option. The paper further evaluates the algorithm\\u2019s effectiveness on Isaac Gym (Makoviychuk et al., 2021), comparing baseline and human-designed rewards. Ablations also explore various bandit exploration strategies, including UCB, EXP3, ETC, EG, and Naive.\", \"**References**\", \"Pacchiano, A., Dann, C., & Gentile, C. (2023). *Data-driven regret balancing for online model selection in bandits.* arXiv preprint arXiv:2306.02869.\", \"Ma, Y. J., Liang, W., Wang, G., Huang, D.A., Bastani, O., Jayaraman, D., Zhu, Y., Fan, L., & Anandkumar, A. (2023). *Eureka: Human-level reward design via coding large language models.* arXiv preprint arXiv:2310.12931.\", \"Makoviychuk, V., Wawrzyniak, L., Guo, Y., Lu, M., Storey, K., Macklin, M., Hoeller, D., Rudin, N., Allshire, A., & Handa, A., et al. (2021). 
*Isaac Gym: High performance GPU-based physics simulation for robot learning.* arXiv preprint arXiv:2108.10470.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper addresses a significant issue in reinforcement learning, as sparse rewards indeed pose challenges, and reward shaping can provide dense feedback, potentially accelerating learning. The writing is clear, with well-presented graphs and experiments, and a well-defined motivation. The paper also successfully applies a recent online model selection algorithm, demonstrating in the proposed benchmark how the algorithm can aid in selecting the appropriate reward functions with limited budget and can scale effectively with increased budget and candidate reward functions.\", \"weaknesses\": \"**Writing**: In Section 3.1, ORSO is presented as an effective and efficient method for reward function design. However, my understanding is that ORSO simply selects a reward function from a set of candidates, with the reward generation handled separately. If this is correct, it would be helpful if the paper clearly defined ORSO\\u2019s components, specifying whether reward generation is part of ORSO or assumed to be user-provided. Clarification here would improve understanding.\\n\\n**Complexity**: The proposed method requires maintaining $K$ separate policies for each reward, which could create substantial memory overhead. Standard approaches in reward shaping often aim to achieve similar benefits while maintaining only a single policy, highlighting a potential drawback in scalability.\\n\\n**Performance Guarantees**: Traditional reward shaping studies typically ensure that the shaped reward aligns with a base reward function. Here, there is no guarantee that the proposed objective aligns with the base reward, leaving the outcome quality solely dependent on the reward generation process. 
This could raise concerns about consistency in performance.\\n\\n**Experiments**:\\n\\n1. The paper does not compare with established reward shaping methods, such as those by Ng et al. (1999), Zou et al. (2019), Zheng et al. (2018), Sorg et al. (2010), and Gupta et al. (2023). Including these comparisons would strengthen the experimental evaluation.\\n \\n2. Iterative resampling appears to be essential for obtaining high-quality reward functions, as it involves reinitializing policies and refining rewards over iterations. However, the paper lacks discussion on the resampling process's challenges, frequency, and impact on performance. Additionally, it is unclear if resampling is an integral part of ORSO or a separate process.\\n \\n3. In the experiments, performance is evaluated against human-designed rewards, but the actual evaluation should ideally be based on the baseline reward. This raises questions about the metrics used for evaluation. In Figure 2, the upper bounds should correspond to **No Design**, as the objective should be to assess performance against the MDP\\u2019s base reward.\\n \\n4. The reward generation process seems to require access to code-level details of the MDP, which may not be feasible in cases where the environment is not code-based. Discussion of this limitation would improve transparency regarding the method\\u2019s applicability.\\n \\n\\n**References**\\n\\n- Ng, A. Y., Harada, D., & Russell, S. (1999). *Policy invariance under reward transformations: Theory and application to reward shaping.* In ICML.\\n \\n- Zou, H., Ren, T., Yan, D., Su, H., & Zhu, J. (2019). *Reward shaping via meta-learning.* arXiv preprint arXiv:1901.09330.\\n \\n- Zheng, Z., Oh, J., & Singh, S. (2018). *On learning intrinsic rewards for policy gradient methods.* Advances in Neural Information Processing Systems.\\n \\n- Sorg, J., Lewis, R. L., & Singh, S. (2010). 
*Reward design via online gradient ascent.* Advances in Neural Information Processing Systems.\\n \\n- Gupta, D., Chandak, Y., Jordan, S. M., Thomas, P. S., & da Silva, B. C. (2023). *Behavior Alignment via Reward Function Optimization.* arXiv preprint arXiv:2310.19007.\", \"questions\": \"The weaknesses section raised some key questions, which I\\u2019ll summarize here for clarity:\\n\\n1. ORSO functions only as a selection algorithm, correct? Reward generation isn\\u2019t part of the algorithm itself?\\n \\n2. Does the paper simply apply the D^3RB algorithm, or does it introduce theoretical improvements? This aspect is somewhat unclear.\\n \\n3. As I understand it, there\\u2019s no guarantee that the optimal policy obtained with the selected reward aligns with the optimal policies for the base reward (i.e., **No Design**), correct?\\n \\n4. Could you clarify the evaluation metrics? The ideal benchmark should be based on the base reward, as that\\u2019s ultimately the reward we aim to optimize.\\n \\n5. The terminology around \\u201ceffective budget\\u201d seems confusing. Based on the proposed algorithms, the effective budget for environment interactions should be $TN$ rather than $T$, since each iteration assumes running the algorithm for at least $N$ steps to yield a final policy. Could you clarify this? This also seems to affect Figure 1\\u2014does preferred reward selection occur after $N$ iterations or a single one? As I understand it, each reward would at least have to be evaluated $N$ times, hence making a minimum budget of $KN$, right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Addressing Weaknesses and Questions on Theoretical and Experimental Contributions (Part 1)\", \"comment\": \"We thank the reviewer for the feedback. 
We address the points below.\\n\\n**Weaknesses**\\n\\n> Some experiments and theories can be added.\\n> \\n\\nWe believe the current manuscript provides a comprehensive theoretical framework and sufficient experimental validation. Theoretical contributions include the provable regret guarantees for reward selection, and our experiments span diverse tasks, ablations on different selection strategies, reward function set size, and budget. To help us better address your concern, could you please identify specific gaps in either our theoretical analysis or experimental validation that you feel need addressing? We would be happy to take the suggestions into consideration to improve the contribution of our work.\\n\\n**Questions**\\n> The experiments compared the performance of policies trained using No Design, Human, Naive Selection, and ORSO to show the superiority of ORSO. However, the impact of selecting different reward functions for the ORSO algorithm on experimental results has not been analyzed. If possible, please provide relevant experiments to demonstrate the experimental differences caused by selecting different reward functions.\\n> \\n\\nThank you for the suggestion to analyze different reward functions in detail. However, running full training on each reward function would be computationally prohibitive, which is precisely the challenge our method aims to address.\\n\\nTo demonstrate the effectiveness of our approach, we provide results for the Ant task. Specifically, we trained a policy for each of the 96 reward functions used in Figure 5 (Figure 6 in the updated PDF). The table reports the mean task reward with 95% confidence intervals over five seeds. Rewards are ordered from best to worst, with those within one confidence interval of the best reward italicized (up to and including reward ID 54). 
Bolded values indicate the reward functions selected by ORSO + D3RB across the seeds we ran.\\n\\nOur results show that ORSO consistently selects reward functions within one confidence interval of the best reward. The occasional selection of different reward functions can be attributed to the inherent stochasticity in reinforcement learning training.\\n\\n| Reward ID | Value (+/- CI) |\\n| --- | --- |\\n| 34 | ***10.24 +/- 0.36*** |\\n| 18 | *10.01 +/- 0.63* |\\n| 71 | *9.98 +/- 0.37* |\\n| 79 | *9.88 +/- 0.73* |\\n| 21 | ***9.77 +/- 0.22*** |\\n| 94 | *9.70 +/- 0.38* |\\n| 66 | ***9.67 +/- 0.27*** |\\n| 81 | ***9.55 +/- 0.80*** |\\n| 70 | ***9.51 +/- 0.63*** |\\n| 37 | *9.46 +/- 0.55* |\\n| 33 | *9.34 +/- 0.83* |\\n| 95 | *9.27 +/- 0.33* |\\n| 47 | *9.24 +/- 0.68* |\\n| 63 | *9.21 +/- 0.76* |\\n| 54 | *9.20 +/- 0.80* |\\n| 80 | 9.16 +/- 0.25 |\\n| 62 | 8.88 +/- 0.37 |\\n| 38 | 8.81 +/- 0.45 |\\n| 49 | 8.81 +/- 0.77 |\\n| 35 | 8.69 +/- 1.07 |\\n| 5 | 8.61 +/- 0.56 |\\n| 52 | 8.35 +/- 1.43 |\\n| 67 | 8.32 +/- 0.85 |\\n| 46 | 8.30 +/- 0.89 |\\n| 68 | 8.20 +/- 1.22 |\\n| 75 | 8.09 +/- 0.40 |\\n| 84 | 8.05 +/- 1.25 |\\n| 85 | 7.77 +/- 0.97 |\\n| 72 | 7.64 +/- 1.27 |\\n| 55 | 7.43 +/- 1.46 |\\n| 20 | 7.26 +/- 0.18 |\\n| 23 | 7.26 +/- 1.12 |\\n| 86 | 7.15 +/- 0.42 |\\n| 36 | 7.06 +/- 0.68 |\\n| 91 | 6.93 +/- 1.45 |\\n| 1 | 6.50 +/- 1.17 |\\n| 31 | 6.36 +/- 0.80 |\\n| 61 | 6.06 +/- 0.93 |\\n| 19 | 5.78 +/- 1.43 |\\n| 25 | 5.67 +/- 1.41 |\\n| 48 | 5.65 +/- 1.54 |\\n| 59 | 5.59 +/- 1.02 |\\n| 26 | 5.50 +/- 0.89 |\\n| 60 | 5.47 +/- 1.17 |\\n| 44 | 5.47 +/- 1.36 |\\n| 40 | 5.34 +/- 1.73 |\\n| 73 | 5.33 +/- 1.77 |\\n| 0 | 5.30 +/- 1.38 |\\n| 56 | 5.05 +/- 0.62 |\\n| 39 | 4.91 +/- 0.64 |\\n| 74 | 4.88 +/- 0.49 |\\n| 30 | 4.83 +/- 0.78 |\\n| 78 | 4.83 +/- 0.35 |\\n| 6 | 4.76 +/- 0.91 |\\n| 2 | 4.69 +/- 0.92 |\\n| 28 | 4.66 +/- 1.21 |\\n| 8 | 4.65 +/- 0.44 |\\n| 16 | 4.57 +/- 1.08 |\\n| 29 | 4.53 +/- 0.81 |\\n| 65 | 4.44 +/- 0.62 |\\n| 50 | 4.23 +/- 1.94 |\\n| 58 | 3.89 +/- 0.43 
|\\n| 53 | 3.86 +/- 0.44 |\\n| 32 | 3.79 +/- 0.73 |\\n| 22 | 3.74 +/- 0.54 |\\n| 3 | 3.48 +/- 1.66 |\\n| 69 | 3.22 +/- 0.42 |\\n| 4 | 3.18 +/- 0.42 |\\n| 88 | 3.18 +/- 0.43 |\\n| 64 | 3.12 +/- 0.16 |\\n| 9 | 3.11 +/- 0.39 |\\n| 17 | 3.10 +/- 0.15 |\\n| 93 | 3.02 +/- 0.21 |\\n| 14 | 2.99 +/- 0.52 |\\n| 45 | 2.89 +/- 0.29 |\\n| 83 | 2.72 +/- 0.82 |\\n| 27 | 2.50 +/- 0.72 |\\n| 10 | 2.15 +/- 0.43 |\\n| 57 | 1.69 +/- 0.80 |\\n| 7 | 1.67 +/- 1.01 |\\n| 82 | 1.03 +/- 0.35 |\\n| 42 | 0.63 +/- 0.80 |\\n| 41 | 0.37 +/- 0.30 |\\n| 43 | 0.33 +/- 0.16 |\\n| 76 | 0.22 +/- 0.08 |\\n| 89 | 0.22 +/- 0.07 |\\n| 24 | 0.21 +/- 0.03 |\\n| 11 | 0.21 +/- 0.08 |\\n| 12 | 0.19 +/- 0.14 |\\n| 15 | 0.19 +/- 0.14 |\\n| 92 | 0.14 +/- 0.07 |\\n| 77 | 0.13 +/- 0.04 |\\n| 13 | 0.05 +/- 0.00 |\\n| 51 | 0.05 +/- 0.00 |\\n| 87 | 0.05 +/- 0.02 |\\n| 90 | 0.00 +/- 0.00 |\"}", "{\"title\": \"Clarifications on ORSO (Part 2)\", \"comment\": \"**Questions**\\n\\n- Q1\\n \\n > Lines 54-59: I understand you are highlighting unique challenges compared to standard multi-arm bandit settings, yet ORSO uses the same selection algorithms typically used to solve such multi-arm bandit problems. The paper could be clearer in defining exactly which components of ORSO are key to address the unique challenges presented.\\n > \\n \\n The main difficulty of using classical MAB algorithms lies in the non-stationarity and statefulness of each learner (candidate reward function and corresponding policy) in ORSO. Algorithms such as UCB provide convergence guarantees if the utility of each arm is stationary. However, it is clear that as one trains using a reward function the task reward achieved by the corresponding policy will change. 
We integrate algorithms for online model selection (D3RB), which are designed to deal with stateful learners, into our ORSO implementation and show that such methods indeed overcome some of the problems encountered by MAB algorithms (e.g., they tend to commit to seemingly optimal reward functions early on and do not explore other reward functions enough). Therefore, the key component for successful selection is D3RB, which can deal with stateful learners.\\n \\n- Q2\\n \\n > I recommend expanding on the various resampling strategies, if more than one was tried out, and their impact on performance as this seems to be a key ingredient to the method's success.\\n > \\n \\n We have reported the resampling strategy in the Appendix, where we also discuss the pros and cons of different resampling strategies. We do not have statistically significant evidence that the resampling strategy in our experiments is optimal, nor do we claim so. However, we observed in some early experiments that the chosen strategy (we resample one half of the new set of reward functions by conditioning the reward generation on the code of the previous best reward function, and the other half is generated from scratch as if it was the first time the reward function set was generated) was a good tradeoff between greedily improving the best reward function for all new rewards and always resampling from scratch.\\n \\n- Q3\\n \\n > I would recommend adding the synthetically generated best-performing shaping reward functions for each task to appendix E. Are the reward functions sensible to the human reader? This has implications on how well these shaping reward functions could be further refined by human experimenters, and possibly give insights on their logical soundness.\\n > \\n \\n This is a great suggestion. 
We have reported the code for the best reward functions in the Appendix in the updated PDF.\\n \\n- Q4\\n \\n > Also, was any constraint, structure, human knowledge beyond the imposed when prompting the generation of such rewards or could the prompt be arguably generated programmatically (if so, I recommend just stating it - the code base is not available during the review process to verify)? While not directly related to the ORSO contribution, this is arguably important to showcase as ORSO heavily relies on the existence of an automated way of generating reward functions without human priors.\\n > \\n \\n In our experiments, we do not add any particular human knowledge in addition to simply exposing the observation space and some useful environment variables available in the environment definition. We will make sure to clarify this in the manuscript. We would also like to add that if human prior is available (e.g., we might know which reward components are important for quadruped locomotion \\u2014 xy velocity tracking, yaw velocity tracking, action penalty, body height, etc. \\u2014 but do not know how to weigh them), we can use a Bayesian way to update the distribution of each coefficient as detailed in the response to the first Weaknesses bullet point. This would indeed be an interesting future experiment\"}" ] }
0uFTqvQhML
MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes
[ "Ruiyuan Gao", "Kai Chen", "Zhihao Li", "Lanqing HONG", "Zhenguo Li", "Qiang Xu" ]
While controllable generative models for images and videos have achieved remarkable success, high-quality models for 3D scenes, particularly in unbounded scenarios like autonomous driving, remain underdeveloped due to high data acquisition costs. In this paper, we introduce MagicDrive3D, a novel pipeline for controllable 3D street scene generation that supports multi-condition control, including BEV maps, 3D objects, and text descriptions. Unlike previous methods that reconstruct before training the generative models, MagicDrive3D first trains a video generation model and then reconstructs from the generated data. This innovative approach enables easily controllable generation and static scene acquisition, resulting in high-quality scene reconstruction. To address the minor errors in generated content, we propose deformable Gaussian splatting with monocular depth initialization and appearance modeling to manage exposure discrepancies across viewpoints. Validated on the nuScenes dataset, MagicDrive3D generates diverse, high-quality 3D driving scenes that support any-view rendering and enhance downstream tasks like BEV segmentation. Our results demonstrate the framework's superior performance, showcasing its transformative potential for autonomous driving simulation and beyond.
[ "controllable 3D scene generation", "3D gaussian splatting", "autonomous driving" ]
https://openreview.net/pdf?id=0uFTqvQhML
https://openreview.net/forum?id=0uFTqvQhML
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ti2bl7A429", "KFg447ICID", "FmRSs36acg", "4ivOQ4D0Et", "1F8nufznjc" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730876694200, 1730116956516, 1731581208946, 1729024810513, 1730591612371 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6681/Reviewer_T2U1" ], [ "ICLR.cc/2025/Conference/Submission6681/Reviewer_vviZ" ], [ "ICLR.cc/2025/Conference/Submission6681/Authors" ], [ "ICLR.cc/2025/Conference/Submission6681/Reviewer_s5XG" ], [ "ICLR.cc/2025/Conference/Submission6681/Reviewer_wYwF" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel approach for 3D street scene generation, with a strong emphasis on multi-condition controllability, including BEV (Bird\\u2019s Eye View) maps, 3D objects, and text descriptions. The method involves first training a video generation model, followed by scene reconstruction using deformable Gaussian splatting (DGS). This two-step approach improves the quality and temporal consistency of the generated scenes, making it particularly beneficial for data augmentation in downstream tasks. 
Validation on the nuScenes dataset highlights the method\\u2019s strengths in both controllability and scene reconstruction quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents impressive visualization results, with generated scenes that are virtually indistinguishable from real-world counterparts.\\n\\nIt introduces an innovative generation-first, reconstruction-later pipeline, which simplifies both scene control and data acquisition, offering a more streamlined approach to 3D scene synthesis.\\n\\nThe deformable Gaussian splatting (DGS) method significantly enhances the quality of both generated and reconstructed views, demonstrating robust performance in complex autonomous driving environments.\\n\\nThe method provides high controllability through multi-level signals, including BEV maps, 3D bounding boxes, and text descriptions, enabling precise and flexible scene generation.\", \"weaknesses\": \"The method occasionally struggles with generating intricate objects, such as pedestrians, and detailed texture areas, like road fences, which can affect the realism of the scenes in certain contexts.\\n\\nThe experiments are conducted solely on the nuScenes dataset, which includes 700 training and 150 validation clips. Although widely used, this dataset may not fully capture the complexity of real-world environments, raising concerns about the method\\u2019s generalizability to more diverse and challenging scenarios.\\n\\nThe scholarship could be improved by referencing recent advancements in street-view generation, such as SCP-Diff: Photo-Realistic Semantic Image Synthesis with Spatial-Categorical Joint Prior [ECCV 2024]. 
This would help position the proposed approach more clearly within the current state of the field.\", \"questions\": \"see weakness box.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript introduces MagicDrive3D, a novel approach for controllable 3D street scene generation. This method divides the generation process into two distinct stages. In the first stage, a conditional generation model is trained to produce multi-view video sequences from the perspective of an ego car. The authors enhance the existing MagicDrive framework by encoding the relative pose with respect to the first frame and using these encodings as conditions for the network. In the second stage, the focus shifts to reconstruction, where the generated data is used to reconstruct the 3D scene. The authors propose several improvements to the 3DGS in terms of spatial location priors, modeling, and loss functions, specifically tailored for street view scene reconstruction. Experimental results demonstrate the effectiveness of each proposed component.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-structured and straightforward to understand.\\n2. The concept of breaking down 3D scene generation into a sequential multi-view generative stage followed by a static reconstruction stage, utilizing two distinct representations that have proven effective in their respective areas, is particularly intriguing.\\n3. The ablation studies demonstrate a significant improvement over the selected baselines (3DGS and LucidDreamer).\", \"weaknesses\": \"1. The performance on test views is not particularly strong. As noted in the manuscript, the PSNR on novel views in both test settings is below 22. 
While this work does advance the field of scene generation, it is not yet suitable for practical applications, such as generating synthetic data for end-to-end autonomous driving policy training.\\n2. The manuscript lacks a comparison with key baselines during the reconstruction phase, specifically Street Gaussians [A].\\n3. Have you attempted a long-term rollout of video diffusion models? If such a long-term rollout were conducted (like Vista [B]), would the two-stage scene generation pipeline still perform effectively?\\n\\n\\n[A] Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting\\n[B] Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability\", \"questions\": \"Please see the section of weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper presents MagicDrive3D, a novel framework for controllable 3D street scene generation. The framework combines geometry-free video generation with geometry-focused reconstruction using 3DGS. MagicDrive3D allows for multi-condition control including BEV maps, 3D objects, and text descriptions, enabling the generation of diverse and high-quality 3D street scenes. It also improves downstream tasks like BEV segmentation and supports realistic scene simulations for applications such as autonomous driving.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed framework supports controllable scene generation using BEV maps, 3D bounding boxes, and text descriptions, which enhances its applicability in tasks like autonomous driving simulations.\\n2. 
The introduction of deformable 3D GS effectively addresses local dynamics and exposure discrepancies, ensuring better scene generation quality.\", \"weaknesses\": \"1. MagicDrive3D is composed of two parts: a video generation model and a 3DGS to recover 3D scenes from images. Both were proposed in previous works, so while the paper shows technical improvements, its overall novelty remains limited.\n2. The comparison in Table 2 is only made with vanilla 3D-GS, yet there are several other dynamic 3D-GS methods for road scenes (e.g., PVG[1], StreetGaussian[2]) that should also be considered for comparison.\n\n[1] Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering\n[2] Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting\", \"questions\": \"I have some doubts about the quality of the generated night scene in Figure 1, as it's blurry and doesn\\u2019t clearly convey a night setting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MagicDrive3D, a new framework for controllable 3D street scene generation useful for view synthesis. The framework supports multi-condition control, including BEV road maps, 3D object bounding boxes, and text descriptions. The proposed framework MagicDrive3D first trains a video generation model and then reconstructs from the generated data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"[S1: Significance] The paper addresses an important problem in the field of computer vision: controllable 3D scene generation. 
The proposed method has the potential to be used in a variety of applications, including autonomous driving simulation, virtual reality, and video gaming.\"], \"weaknesses\": [\"[W1] The technical contributions of pose conditioned video generation and its relation in the framework is not clearly stated.\", \"[W1.1] According to Figure 2, it looks like the video generator works without conditioning on input camera images. If that is the case, the reviewer would like to understand what\\u2019s the benefit of feeding the video generated multi-view data to Stage 2 compared to using ground-truth data? Based on my understanding, the exposure discrepancy across multi-views and dynamic objects in the generated data will pose the same challenge to Stage 2 (vs. ground-truth camera images).\", \"[W1.2] If the proposed video generator works without conditioning on camera input images, please explain the steps that generate row (e) in Figure 8. In Figure 8, it is clear that the proposed system is able to take camera images as input and apply style transfer on top.\", \"[W1.3] The reviewer cannot find any videos in supplementary material, which is usually the hard requirement for accepting a video generation paper. The reviewer feels video results are still required for this paper, as it highlights video generation as one important step compared to existing work in 3D street view generation.\", \"[W2] The paper\\u2019s claim that Magic3D is the first to achieve controllable 3D street scene generation using a common driving dataset (Line 91-92) is questionable. For example, controllable 3D street scene generation has been achieved in Panoptic Neural Fields [NewRef1] on KITTI dataset. In another example, as discussed in Section 5.1 of BlockNeRF [NewRef2], 3D street scene generation has also been achieved on the single-capture subset (open-sourced) called San Francisco Mission Bay Dataset. 
Please discuss the relevant work in the main text and compare against them for novel view synthesis (show quantitative metrics).\", \"[W2.1] The reviewer would recommend to conduct a more sophisticated literature review. For example, this paper also missed prior work that shares similar motivation but on object reconstruction from driving videos using a generative model GINA-3D [NewRef3].\", \"[W3] Important details regarding the FVD and FID metrics are missing. As Nuscenes dataset is relatively small, the reviewer would like to understand how many images or 16-frame video clips have been used in computing the metrics. How do you construct the real videos and generated videos (on what conditions). This is an important factor to decide whether the metrics reported in Table 2 are valid or not.\", \"[W3.1] In the field of image and video generation, it is known that FID and FVD are good but not perfect. Certain adversarial artifacts can lead to unexpected changes to FID and FVD. Please consider using FID-DINOv2 [NewRef4] and FVD-VideoMAEv2 [NewRef5] as alternative metrics.\", \"[W4] While one focus of the paper is on controllable generation, the reviewer cannot find enough details on different controllable signals. It would be good to develop quantitative metrics to measure the accuracy of control and provide more diverse examples of scene editing. 
This could include user studies to assess the usability and effectiveness of the control mechanisms.\", \"[W5] The paper focuses on 3D street view synthesis but the reviewer cannot find 3D visualizations in the supplementary materials.\", \"References\", \"[NewRef1] Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation, Kundu et al., In CVPR 2022.\", \"[NewRef2] Block-NeRF: Scalable Large Scene Neural View Synthesis, Tancik et al., In CVPR 2022.\", \"[NewRef3] GINA-3D: Learning to Generate Implicit Neural Assets in the Wild, Shen et al., In CVPR 2023.\", \"[NewRef4] Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models, Stein et al., In NeurIPS\\u201923.\", \"[NewRef5] On the Content Bias in Fr\\u00e9chet Video Distance, Ge et al., In CVPR 2024.\"], \"questions\": \"Please address the concerns raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0tXmtd0vZG
Enhancing Decision-Making of Large Language Models via Actor-Critic
[ "Heng Dong", "Kefei Duan", "Chongjie Zhang" ]
Large Language Models (LLMs) have achieved significant advancements in natural language processing tasks, yet they encounter challenges in complex decision-making scenarios that require long-term reasoning and alignment with high-level objectives. This paper introduces a novel gradient-free LLM-based Actor-Critic framework, termed LAC, which addresses these limitations by integrating both action generation and action evaluation mechanisms. Our approach employs two distinct critics: a language-based critic that provides context-sensitive feedback and a value-based critic that offers quantitative assessments of expected long-term rewards. This dual-critic architecture enhances decision-making by leveraging the complementary strengths of both critics, enabling contextually appropriate and more robust action selection. Additionally, we propose a gradient-free policy improvement method that reduces computational overhead, facilitating efficient updates to the actor’s policy without the complexities of gradient backpropagation. We validate the effectiveness of LAC across diverse environments that cover both high-level action space (ALFWorld) and low-level action space (BabyAI-Text), demonstrating its superior performance compared to existing state-of-the-art methods. Our method outperforms other state-of-the-art baselines using the same 7B/8B open-source LLMs and even exceeds a strong baseline ReAct using GPT-4 in most settings. Our findings highlight the efficacy and generality of the dual-critic Actor-Critic framework in enhancing LLM-based decision-making.
[ "Large Language Models", "Decision-Making", "Actor-Critic" ]
Reject
https://openreview.net/pdf?id=0tXmtd0vZG
https://openreview.net/forum?id=0tXmtd0vZG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zor1iQnpGu", "z6XDyNsG0L", "yq9qwVTrO0", "xe07ApslGr", "wypFbnS8yv", "wU7TjpAby7", "oXm7K0b8z7", "o6I5QJ7gxx", "myCb0MFTXr", "mqSS28HYin", "lncLBOUZJW", "kAnebdg0Kd", "iU9Y0j8aiX", "hOAFgUT98l", "g83Yp6YaqA", "eBKykU7Zsw", "Vmb3gEver7", "VRrWxZ2V8m", "UatvOYs3mh", "ReaFqVZI4k", "RQc26gIlvc", "PzGICgiC3H", "Lpefnh6voY", "L1rFuOuEAU", "HjlC2FLBeI", "FU2DilQlub", "E758hNXgNG", "4jT0T4cnsA" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review" ], "note_created": [ 1732661842985, 1732676828948, 1732498405979, 1732498612151, 1732689636125, 1730711382846, 1732999372155, 1732497859003, 1732998693970, 1732498845954, 1729643037050, 1731167695731, 1732651953598, 1734749375301, 1732661920200, 1732498052454, 1732919076606, 1732999595984, 1732998557666, 1732499125745, 1732498337373, 1732498711000, 1732499056061, 1732999669239, 1732498933973, 1732643091459, 1737523665517, 1730712479812 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_thbr" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_Xk7a" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_thbr" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4850/Reviewer_Xk7a" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_KnwK" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_FHde" ], [ "ICLR.cc/2025/Conference/Submission4850/Area_Chair_Q5mg" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Authors" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_KnwK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4850/Reviewer_FHde" ] ], "structured_content_str": [ "{\"title\": \"Follow-Up on Rebuttal for Your Review\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your time and effort in reviewing our work. We have provided detailed clarifications and experimental results in our rebuttal to address the issues and concerns raised in your comments.\\n\\nIf our response satisfactorily resolves your concerns, we kindly ask if you could reconsider your evaluation of our work. Should you have any additional questions or comments, we would be happy to engage in further discussions to ensure all aspects are addressed.\\n\\nThank you again for your thoughtful review and support.\\n\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Reviewer thbr Response\", \"comment\": \"Thank you for conducting additional experiments on the WebShop domain. Could the authors clarify how actions were chosen to be \\\"judged\\\" when the action space is large? 
Was a subset of permissible actions sampled at the beginning?\n\nOverall, I still believe that the paper makes a positive contribution and maintain my score. I do find the results between LAC and ReAct to be rather close for more sophisticated models, though. They also appear to be weaker than the results obtained by LATS, which, to my knowledge, currently achieves state-of-the-art in this domain [1]. If the authors could also show a substantial benefit of LAC over LATS, I would consider further raising my score. \n\n[1] https://arxiv.org/pdf/2310.04406\"}", "{\"title\": \"New Experimental Results and Clarifications of the Questions (Part 2)\", \"comment\": \"> **Question 3**: How does ReAct, combined with a value critic, perform?\n\n**A**: As indicated in Table 1 above, ReAct combined with a Value-Critic corresponds to `LAC w/o lang-critic`. The performance results for this configuration are presented in Figure 4 of the initial submission (Section 5.3). In brief, `LAC w/o lang-critic` consistently underperforms compared to the full LAC in both AlfWorld and BabyAI benchmarks across various base models. However, it generally outperforms ReAct, highlighting the effectiveness of our value-critic.\n\n> **Question 4**: How does ReAct, combined with a language critic, perform?\n\n**A**: Similarly, ReAct combined with a Lang-Critic corresponds to `LAC w/o value-critic`. The results for this configuration are also shown in Figure 4 of the initial submission (Section 5.3). In summary, `LAC w/o value-critic` underperforms compared to the full LAC across various base models in the AlfWorld and BabyAI benchmarks. Nonetheless, it typically outperforms ReAct alone, indicating the effectiveness of our lang-critic.\n\n> **Question 5**: Is the prior \\pi_{LLM} the same LLMs used for Q_{LLM} for computing the critic values?\n\n**A**: No, they are not identical. 
$\pi_{LLM}$ is based on the original model, while $\mathcal{Q}_{LLM}$ utilizes a fine-tuned version of the same model, adapted using several trajectories. The fine-tuning details are provided in Appendix B.2.\n\n> **Question 6**: How does language critic + value critic perform? (Essentially LAC without updating the action distribution, instead using the critic values to choose an action)\n\n**A**: According to Table 1 above, the configuration of the language critic combined with the value critic corresponds to `Value-critic only`. The performance results for this setup are detailed in Figure 4 of the initial submission. In summary, `Value-critic only` consistently underperforms full LAC, demonstrating the effectiveness of our gradient-free policy improvement component. \n\n> **Weakness 1**: The terminology used in the paper could be more precise, particularly in relation to terms commonly found in the reinforcement learning literature.\n\n**A**: Thank you for your feedback. If you could specify which terms you find imprecise, it would greatly help us improve the clarity of our paper.\n\nThanks again for your efforts and insightful comments! We hope our clarification addresses your concerns and sincerely appreciate it if you could re-evaluate our work. Any further feedback and discussions are much appreciated.\"}
The paper relies on [1] as the only paper that discusses actor-critic methods for LLM agents, but many important related works are missing. Even PPO is commonly considered an actor-critic method, where a critic estimates the V function to reduce variance for policy gradient estimation. Thus, many prior works that use PPO for LLM agents should be considered as actor-critic methods (e.g. [2]). Retroformer [3] can also be considered to be an actor-critic method where the critic is a natural language-based critic. Other works also applied value-based actor-critic methods to LLM agent tasks (e.g. [4]).\n\n**A**: Thank you for your insightful feedback. However, our paper primarily addresses enhancing decision-making performance in LLMs with few samples, a context that often contrasts with traditional actor-critic with LLM methods that require large datasets [2,3,4]. We have also recognized the importance of discussing traditional actor-critic with LLM methods and included several works [3,8,9] in the Related Work of our initial submission. We thank you for your suggestions and will incorporate citations [2,4] to provide a more comprehensive overview of actor-critic methods in the context of LLMs in the revised version of the paper.\"}", "{\"title\": \"Thanks for your rebuttal\", \"comment\": \"Thank the authors for their rebuttal. My following concerns still remain:\n\n### The novelty of language and value critic compared to prior works\nIt seems from the authors' response that the main difference of the language critic from naive chain-of-thought is that naive chain-of-thought reflects on the current state while the language critic reflects on the previous state and action results. This does not seem to be enough of a contribution, as the only difference appears to be whether the observation space contains the previous states and actions or not. 
For the value critic, it seems that the main difference from constrained decoding is that the value critic uses Monte Carlo estimations of the Q function instead of actually training a parametric Q function. This practice is also commonly adopted when it is unreliable to train a Q function (e.g. https://arxiv.org/pdf/2406.14532). It is unclear to me what the challenges are to combine those two critics, and simply combining two common practices is not novel enough for a paper that claims to come up with a new method.\n\n### The scope of experiments \nWhile the novelty is limited, the domains where experiments are conducted in this paper seem to be limited too. Alfworld and babyAI are no longer the most exciting applications on which state-of-the-art models are competing. Even WebShop is on the more contrived side compared to more recent benchmarks, where each task can be completed by a sequence of searching, choosing an object, and clicking [buy now]. Since this paper does not perform gradient-based optimizations, it might be worth showing impressive results in more realistic and up-to-date benchmarks such as WebArena (https://arxiv.org/abs/2307.13854) and OS World (https://arxiv.org/abs/2404.07972). For the level of complexity of the method proposed, more impressive results are expected on more challenging benchmarks.\n\n### The computation analysis experiment misses important details\nWhile the authors present additional results of computation analysis in Section 5.4, it seems to be a red flag to me as many important details are missing. E.g. What is the base model used here, are they the same for all methods? What is the hardware for this to be tested on, is there any necessary parallelism? How is each metric calculated? Why does ReAct have more steps but much less time per task? Does \"steps per task\" take into account the costs of estimating Q(s,a) with trajectory rollouts? 
\n\nTherefore, I maintain my original score and suggest that the paper would benefit from more explanations on the challenges of combining language and value critic, more experiments on the most up-to-date challenging benchmarks, and more clarifications and re-writing of the experiments section.\"}", "{\"summary\": \"The paper proposes a new way to tune LLMs to behave as agents, by combining the initial action probabilities with predictions by a language and value critic. The language critic adds natural language feedback to various candidate actions, and the value critic uses search to assign a value (or probability of success) to those actions. The method achieves impressive empirical performance on popular benchmarks such as ALFWorld against actor-only and critic-only baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clearly written and tackles an important problem, as many applications of LLMs rely on them to behave as long-term agents rather than simply generate responses.\n\nThe method is sensible, using the language critic to refine the action space, then combining predictions by the value critic with the base LLM policy via a simple perturbation of initial action probabilities. \n\nFinally, the empirical results are impressive, outperforming previous state-of-the-art techniques such as ReAct by a large margin.\", \"weaknesses\": \"Though the method is impressive, I have some concerns about its generalizability.\n\nNamely, the authors only consider tasks with small action spaces, where evaluating each action individually is tractable. In many more realistic tasks such as dialogue, I imagine that the action space would be more open-ended and am unsure how to adapt the proposed method. \n\nIn addition, the tasks considered rely on being able to simulate future trajectories with high fidelity, which may be harder to do in more complex environments. 
Specifically, it is likely much harder to faithfully predict trajectories when the agent is engaging with another human rather than simply moving around in a static environment.\n\nFinally, the value critic currently only works for tasks with binary outcomes (success or failure).\", \"questions\": \"Overall, I think the paper makes a valuable contribution. However, I do think there are several areas that the authors could address to further strengthen it:\n\n(1) How would the method change when the task has a potentially infinite action space (such as in dialogue)?\n\n(2) Have the authors experimented with tasks with more expressive outcomes/rewards? I am curious if both critics can still behave well when there are more nuanced outcomes than just success or failure. \n\n(3) While the benchmarks are well-known, I think they are perhaps not representative of tasks that people might actually want LLMs to accomplish. For example, it would be interesting to see the authors evaluate on tasks considered in the GDP-Zero paper such as donation solicitation [1].\n\n[1] https://arxiv.org/abs/2305.13660\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions (Part 3)\", \"comment\": \"### **Question 2:** The scope of experiments.\n\n> **Question 2.1:** While the novelty is limited, the domains where experiments are conducted in this paper seem to be limited too. Alfworld and babyAI are no longer the most exciting applications on which state-of-the-art models are competing. Even WebShop is on the more contrived side compared to more recent benchmarks, where each task can be completed by a sequence of searching, choosing an object, and clicking [buy now]. 
\\n\\n**A**: We respectfully disagree with this claim.\\n\\n(1) AlfWorld and BabyAI remain widely used benchmarks in the research community, as evidenced by their inclusion in multiple recent studies [6,7,8,9]. These environments provide a valuable foundation for evaluating the capabilities of state-of-the-art LLM agent methods in simulated settings that mimic real-world household decision-making processes.\\n\\n(2) Although the WebShop environment may appear straightforward for humans, it presents significant challenges for state-of-the-art LLM agents. As indicated in Table 9 below, our method (LAC) outperforms several existing approaches, yet it still falls short of human-level performance:\\n\\n**Table 9: SOTA performance on WebShop benchmark**\\n\\n|                                 | Success Rate | Reward     |\\n| ------------------------------- | ------------ | ---------- |\\n| LAC (ours), Gemma-7B            | 46%          | 0.7237     |\\n| LATS [10], Gemma-7B             | 39%          | 0.6313     |\\n| LATS [10], GPT-3.5              | 38%          | 0.7590     |\\n| Auto-GPT [11], GPT-3.5          | 23%          | 0.5282     |\\n| Auto-GPT [11], GPT-4            | 32%          | 0.6155     |\\n| Expert Human, results from [10] | **59.6%**    | **0.8210** |\\n\\n\\n> **Question 2.2:** Since this paper does not perform gradient-based optimizations, it might be worth showing impressive results in more realistic and up-to-date benchmarks such as WebArena ([2] https://arxiv.org/abs/2307.13854) and OS World ([3] https://arxiv.org/abs/2404.07972). For the level of complexity of the method proposed, more impressive results are expected on more challenging benchmarks.\\n\\n**A**: Thank you for your valuable suggestion regarding the use of advanced benchmarks such as WebArena [2] and OS World [3]. 
While these benchmarks are indeed challenging and valuable for assessing complex decision-making systems, we believe the benchmarks used in our study are well-suited to demonstrate the strengths and contributions of our method.\\n\\nOur primary focus is to enhance the decision-making capabilities of smaller open-source LLMs (e.g., 7B/8B models). The benchmarks we selected, such as ALFWorld, BabyAI-Text, and WebShop, provide controlled and widely-recognized environments for evaluating sequential reasoning and policy improvement. These settings allow us to clearly isolate and quantify the impact of our proposed method, as demonstrated by significant performance gains over strong baselines.\\n\\nWe recognize that benchmarks like WebArena and OS World involve additional complexities, including diverse observation modalities and scaling trajectory rollouts. These benchmarks present significant challenges even for the most advanced closed-source models. For instance, GPT-4 achieves an end-to-end task success rate of only 11.70% in WebArena, compared to the human benchmark of 78.24% [2], and a success rate of 12.24% in OS World, compared to 72.36% for humans [3]. These results illustrate the difficulty of these tasks and underscore the gap that remains, even for leading models.\\n\\nWhile these benchmarks are undoubtedly valuable, we believe that the current benchmarks sufficiently validate the key contributions of our work. Future exploration of more complex settings like these benchmarks would require significantly more computational resources and model scaling, which were beyond the scope of this work. Nevertheless, the demonstrated improvements in decision-making on our chosen benchmarks highlight the broad potential of our method for diverse applications.\"}", "{\"title\": \"Clarifications of the Questions (Part 1)\", \"comment\": \"Thanks for the valuable comments and helpful suggestions. 
Here we provide detailed explanations for your questions.\\n\\n> **Weakness 1**: The first kind is due to the nature of academic work, which is resource constrained (small teams; limited compute). This induces a set of \\\"easy\\\" criticisms such as \\\"insufficient experimental validation\\\" or \\\"excessive focus on the small sample regime\\\".\\n\\n**A**: Thank you for your understanding regarding the constraints of academic work. While we acknowledge the limitations of resources, our study demonstrates that even with few samples and small models, it is possible to achieve performance that outperforms state-of-the-art LLMs.\\n\\nTo further validate the generalizability of our method, we have conducted new experiments using the WebShop benchmark [1], which simulates a web browsing task with more complex and infinite action spaces. In this scenario, an agent is required to purchase a product based on specific instructions (e.g. \\\"I need a long clip-in hair extension which is natural looking, and price lower than 20.00 dollars\\\") through web interactions (e.g. search \\\"long clip-in hair extension\\\", click buttons such as \\\"[item ID]\\\" or \\\"back to search\\\"). Within this context, the 'search' and 'click' actions can indeed lead to an unbounded set of potential actions, as the agent can continuously refine its queries and selections based on dynamic web results. \\n\\nWe present the detailed results in Figure 7 of the revised paper, and we also show some results in the following Table 1 and Table 2. We found that our method, LAC, consistently outperforms other baselines in terms of both success rate and final reward across various base models. 
This demonstrates the generalizability of our method in handling more complex and infinite action spaces.\\n\\n**Table 1: Success rate comparison in WebShop benchmark**\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------ | ------------ | -------- | ---------- | ---------- |\\n| LAC (ours)   | **32%**      | **46%**  | **39%**    | **32%**    |\\n| ReAct        | 15%          | 35%      | 37%        | 30%        |\\n| RAP          | 19%          | 28%      | 28%        | 26%        |\\n\\n**Table 2: Final reward comparison in WebShop benchmark**\\n| Reward     | CodeLlama-7B | Gemma-7B   | Llama-3-8B | Mistral-7B |\\n| ---------- | ------------ | ---------- | ---------- | ---------- |\\n| LAC (ours) | **0.5840**   | **0.7237** | **0.6733** | **0.6299** |\\n| ReAct      | 0.5042       | 0.6332     | 0.6445     | 0.6159     |\\n| RAP        | 0.5545       | 0.6048     | 0.6215     | 0.5594     |\\n\\n> **Question 1**: Authors mention fine-tuning under a limited budget (e.g., 18 trajectories), following the specification on lines 1048-1057. However, what is the source of these trajectories? Are they human annotated trajectories? Trajectories sampled from the zero-shot architecture, perhaps conditioned on task success? How are the special tokens which indicate positive/negative judgment produced for the fine-tuning data?\\n\\n**A**: The 18 trajectories used for fine-tuning are human-annotated and sourced from the training split of our benchmarks, including both successful and failed trajectories to ensure balance. Each can be annotated in about 10-20 minutes, allowing us to maintain a manageable budget.\\n\\nThe special tokens indicating positive and negative judgments for the fine-tuning data are also human-annotated. 
Specifically, we assigned the following labels based on the relevance of actions to the final goal:\\n\\n- Positive judgment like \\\"GOOD\\\": assigned to actions deemed necessary for achieving the final goal.\\n- Negative judgment like \\\"BAD\\\": assigned to actions that were determined to be useless or incorrect in the context of reaching the goal.\\n- Other judgments like \\\"UNKNOWN\\\": assigned to actions that could not be evaluated as either good or bad based on the trajectory history.\\n\\nWe have included this information in Appendix C.3 of our paper for further reference, where we detail the annotation process and the criteria used for assigning these labels.\\n\\n> **Question 1.1**: Under what policy (action distribution) is the future simulator generating? If the reweighted action distribution becomes divergent from the prior, will the future simulator be invalidated?\\n\\n**A**: The future trajectories are generated based on the original actor $\\\\pi_{LLM}$. While this may lead to some divergence of the reweighted action distribution from the prior, our method mitigates this risk by incorporating a KL-divergence constraint in Equation 4 in our manuscript, which helps to prevent the new actor from deviating too far from the original actor. \\n\\n---\\n\\n**References**\\n\\n[1] Yao, Shunyu, et al. \\\"Webshop: Towards scalable real-world web interaction with grounded language agents.\\\" Advances in Neural Information Processing Systems 35 (2022): 20744-20757.\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions (Part 2)\", \"comment\": \"> **Question 1.2:** For the value critic, it seems that the main difference from constrained decoding is that the value critic uses Monte Carlo estimations of the Q function instead of actually training a parametric Q function. This practice is also commonly adopted when it is unreliable to train a Q function (e.g. 
[1] https://arxiv.org/pdf/2406.14532).\\n\\n**A**: We also respectfully disagree with the claim about the value-critic. \\n\\n(1) Our value-critic employs a unique Q-value estimation method described in Eq.3, which leverages the internal beliefs of LLMs about success and failure to compute the Q-value. This approach is distinct from the methodology presented in [1], which does not incorporate our specific design.\\n\\n(2) Though the KL-regularized RL objective is a common practice for balancing conflicting goals (as seen in methods like DPO [5] and Constrained Decoding [4]), the critical aspect lies in how each term in the objective is defined. Our value-critic accounts for long-term consequences, resulting in more stable value-based evaluations derived from the LLMs' internal beliefs regarding success and failure.\\n\\nTo further demonstrate the strengths of our value-critic, we conducted additional experiments comparing the performance of `LAC` and `LAC w/ direct evaluation` on the WebShop benchmark.\\n\\n- `LAC`: Our original method. It generates value-based evaluations by extracting LLMs' internal beliefs of success and failure as described in Eq.3.\\n- `LAC w/ direct evaluation`: We prompt the LLMs to directly output the probability of success $p(y=+1)$ while keeping all other components unchanged. The Q-value is then calculated as $\\log\\frac{p(y=+1)}{1-p(y=+1)}$.\\n\\nThe results, presented in Tables 7 and 8, show that our `LAC` method outperforms `LAC w/ direct evaluation` in terms of success rate and reward across most base models. 
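For intuition, the log-odds Q-value described above can be sketched in a few lines of Python. This is a hedged illustration with hypothetical names, not the paper's actual implementation; in particular, it shows why a non-informative success belief of $p(y=+1)=0.5$ yields a Q-value of zero, which leaves the actor's action distribution untouched when exponentiated:

```python
import math

def log_odds_q(p_success: float, eps: float = 1e-6) -> float:
    """Hypothetical sketch: Q-value as the log-odds of the model's
    belief that the task will succeed, log(p / (1 - p))."""
    p = min(max(p_success, eps), 1.0 - eps)  # clamp away from 0 and 1
    return math.log(p / (1.0 - p))

# A non-informative belief (p = 0.5) gives Q = 0, so exp(Q) = 1 and the
# reweighting step leaves the actor's action probabilities unchanged.
print(log_odds_q(0.5))  # 0.0
print(log_odds_q(0.9))  # ~2.197 (log 9)
```

The clamping parameter `eps` is our own addition to keep the log-odds finite at extreme beliefs.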
Analysis of the resulting trajectories revealed that `LAC w/ direct evaluation` often produces a non-informative success probability (e.g., $p(y=+1)=0.5$), leading to ineffective improvements in policy.\\n\\n\\n**Table 7: Success rate comparison of `LAC` and `LAC w/ direct evaluation` in WebShop benchmark**\\n\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------------------ | ------------ | -------- | ---------- | ---------- |\\n| LAC (ours) | 32% | **46%** | **39%** | **32%** |\\n| LAC w/ direct evaluation | **38%** | 42% | 22% | 24% |\\n\\n**Table 8: Final reward comparison of `LAC` and `LAC w/ direct evaluation` in WebShop benchmark**\\n\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------------------ | ------------ | ---------- | ---------- | ---------- |\\n| LAC (ours) | **0.5840** | **0.7237** | **0.6733** | 0.6299 |\\n| LAC w/ direct evaluation | 0.5636 | 0.6975 | 0.6453 | **0.6333** |\\n\\n\\n\\n> **Question 1.3:** It is unclear to me what the challenges are to combine those two critics, and simply combining two common practices is not novel enough for a paper that claims to come up with a new method.\\n\\n**A**: We would like to clarify that our approach is not a combination of the critics. Instead, LAC represents a novel integration of the actor and dual-critics in a cohesive manner. The primary challenge we address is the seamless integration of the actor with both critics. 
Specifically, the novelty lies in:\\n\\n- LAC provides a **synergistic framework** where the critics are designed to complement each other\\u2014the language critic captures contextual feedback from its history with interpretability and qualitative reasoning, while the value critic evaluates predicted future trajectories and ensures quantitative alignment with long-term outcomes.\\n\\n- LAC's novel gradient-free optimization strategy harmonizes their outputs and seamlessly leverages both historical performance and future predictions to improve the actor's policy effectively.\"}", "{\"title\": \"Additional Experimental Results and Clarifications to Other Questions (Part 3)\", \"comment\": \"> **Weakness 3**: The writing of the paper, and in particular the motivation (see above) and the experiment section can be improved. Section 5.2 and 5.3 only state that the proposed methods are better than baselines and other ablations without investigating into the reason of the gap. E.g. Why is LAC so much better than ReAct/RAP, and is there any finding of comparing the performance of LAC with different base models. The experiment section does not provide such necessary analysis information.\\n\\n**A**: We thank the reviewer for the suggestions and we add more discussions to explain the performance gaps. \\n\\n- **LAC vs. ReAct/RAP**: LAC's superior performance stems from its balanced integration of the strengths of both actor and critic. While actor-only (e.g., ReAct) methods excel in short-term actions, they often struggle with long-term reasoning. In contrast, critic-only (e.g., RAP) methods conduct explicit reasoning but might mispredict future trajectories and lead to even worse action selection occasionally compared with actor-only methods. LAC addresses these limitations by balancing the actor's action generation and the critic's evaluation. We have provided illustrative examples for AlfWorld and BabyAI in Figure 1 and Figure 13, respectively. 
In summary, while actor-only and critic-only methods make mistakes at different time steps, our LAC can select the correct action. \\n\\n- **LAC with Different Base Models**: Regarding the performance of LAC with different base models, we highlight two key findings: (1) Our method is general and can be adapted to various base models, and (2) stronger base models, such as Gemma-7B, demonstrate higher performance when integrated with our approach. However, due to the incomplete public availability of training details for these base models, further in-depth analysis will require additional investigation.\\n\\nWe will include these discussions in our paper to improve its overall clarity and rigor. Thank you for highlighting these areas for improvement.\\n\\n> **Weakness 4 & Weakness 5**: (W4:) It is unclear how generalizable and computationally efficient the proposed method is. In particular, it seems that the method can only be applied to tasks with a finite action space and it is unclear if the method can generalize to realistic tasks with unbounded action space such as web browsing. (W5:) The tasks used in this work are more on the simple side. It would be interesting to see if the proposed method can work in more challenging tasks such as web browsing, coding, minecraft etc.\\n\\n**A**: We have systematically exhibited the computational cost of our method and baselines in Figure 5 of the paper. In brief, although our method does not have the lowest cost per task, when considering both the success rate and the cost, our approach is the most cost-effective.\\n\\nTo show the generalizability of our method, we have conducted new experiments using the WebShop benchmark [10], which simulates a web browsing task with a potentially infinite action space. In this scenario, an agent is required to purchase a product based on specific instructions (e.g. 
\\\"I need a long clip-in hair extension which is natural looking, and price lower than 20.00 dollars\\\") through web interactions (e.g. search \\\"long clip-in hair extension\\\", click buttons such as \\\"[item ID]\\\" or \\\"back to search\\\"). Within this context, the 'search' and 'click' action can indeed lead to an unbounded set of potential actions, as the agent can continuously refine its queries and selections based on dynamic web results. \\n\\nWe represent the detailed results in Figure 7 of the revised paper, and we also show some results in Table 1 and Table 2 below. We found that our method, LAC, consistently outperforms other baselines, in terms of both success rate and final reward across various base models. This demonstrates the robustness of our method in handling more complex and unbounded action spaces.\\n\\n**Table 1: Success rate comparison in WebShop benchmark**\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------ | ------------ | -------- | ---------- | ---------- |\\n| LAC (ours) | **32%** | **46%** | **39%** | **32%** |\\n| ReAct | 15% | 35% | 37% | 30% |\\n| RAP | 19% | 28% | 28% | 26% |\\n\\n**Table 2: Final reward comparison in WebShop benchmark**\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ---------- | ------------ | ---------- | ---------- | ---------- |\\n| LAC (ours) | **0.5840** | **0.7237** | **0.6733** | **0.6299** |\\n| ReAct | 0.5042 | 0.6332 | 0.6445 | 0.6159 |\\n| RAP | 0.5545 | 0.6048 | 0.6215 | 0.5594 |\"}", "{\"summary\": \"This paper introduces LLM-based Actor Critic framework (LAC) to improve decision-making capabilities of LLM agents through an integration of the actor and the critic. LAC makes use of two different critics including a language critic that provides contextual information and a value critic that provides more quantitative information. 
The paper also proposes a gradient-free policy improvement approach using two critics without incurring costly backpropagation processes. The effectiveness of LAC is demonstrated in AlfWorld and BabyAI-Text, and it even surpasses GPT-4 with ReAct.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper addresses an important and relevant problem of improving decision-making capabilities of LLM agents.\\n\\nIt is nice that policy improvement can be achieved without incurring costly gradient updates and loss backpropagation.\\n\\nThe paper shows improvements on two popular benchmarks including AlfWorld and BabyAI.\", \"weaknesses\": \"The motivation that \\\"these methods typically adopt either an actor only or critic only approach\\\" (line 42-43) misses many related works. The paper relies on [1] to be the only paper that discusses actor critic methods for LLM agents but many important related works are missing. Even PPO is commonly considered as an actor-critic method, where it has a critic that estimates the V function to reduce variance for policy gradient estimation. Thus, many prior works that use PPO for LLM agents should be considered as actor-critic methods (e.g. [2]). Retroformer [3] can also be considered to be an actor-critic method where the critic is a natural language based critic. Other works also applied value-based actor critic methods to LLM agent tasks (e.g. [4]).\\n\\nThe novelty of the method is limited. The language critic and value critic are two main proposals in this paper. However, the language critic is relatively simple and can be considered as a direct use of CoT [5] where the agent is asked to generate thoughts reflecting on the previous round actions before taking actions. 
The objective of the value critic is also similar to constrained decoding [6] that has been widely used in the alignment domain without the need of performing gradient updates on models.\\n\\nThe writing of the paper, and in particular the motivation (see above) and the experiment section can be improved. Section 5.2 and 5.3 only state that the proposed methods are better than baselines and other ablations without investigating into the reason of the gap. E.g. Why is LAC so much better than ReAct/RAP, and is there any finding of comparing the performance of LAC with different base models. The experiment section does not provide such necessary analysis information.\\n\\nIt is unclear how generalizable and computationally efficient the proposed method is. In particular, it seems that the method can only be applied to tasks with a finite action space and it is unclear if the method can generalize to realistic tasks with unbounded action space such as web browsing.\\n\\nThe tasks used in this work are more on the simple side. It would be interesting to see if the proposed method can work in more challenging tasks such as web browsing, coding, minecraft etc.\\n\\n[1] Controlling large language model-based agents for large-scale decision-making: An actor-critic approach\\n[2] Large language models as generalizable policies for embodied tasks\\n[3] RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION\\n[4] ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL\\n[5] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models\\n[6] Controlled Decoding from Language Models\", \"questions\": \"37: long-time horizon -> long horizons\", \"151_152\": \"\\\"Due to the auto-regressive nature of LLM, it does not do reasoning and planning explicitly.\\\" This seems controversial. Chain-of-thought/o1 are also auto-regressive decoding, but arguably they have some reasoning in them. 
Same with 152-154, \"Accordingly, LLM\nwith actor-only methods often struggles with complex tasks that require multiple steps of planning and reasoning\".\n\nPlease see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Authors propose a self-reflecting flow architecture for multi-step problem solving which is informed by classical RL concepts.\nOn AlfWorld and BabyAI-text environments it exhibits good performance.\n\nAlthough components of the architecture have been previously proposed, the combination is sensible.\n* \"Lang-critic\": this component reflects upon the goal and the history and augments the prompt for the actor component.\n* \"actor\": proposes new actions given the goal, the history, and the lang-critic augmentation.\n* \"rollout simulator\": given a goal, history, and action: simulates the next few steps (under what policy?)\n* \"value-critic\": given a goal, history, proposed action, and simulated future under this action: estimate the likelihood of task completion\n\nMultiple actions are sampled from the actor at each round, scored by the value critic, the distribution is reweighted via exponential-log-odds, and then the greedy action is selected.\n\nThere are some ablation studies to provide insight into the importance of the components, and comparisons to classical RL techniques.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"In a zero-shot context, the architecture and results are facially reasonable, given there are numerous prior results indicating large foundation models can exhibit improved performance when composed via a self-reflecting architecture, and given prior art that reasonable synthetic rollouts can improve value estimation analogous to chain-of-thought for possible futures.\n\nOn balance, despite the poor exposition around 
fine-tuning, I believe readers would net benefit from exposure to this paper because of intriguing concepts such as: 1) exponential log-odds reweighting of a prior action distribution is superior to policy gradient for sculpting the action distribution in the small-sample regime [line 375 vs. line 833]; 2) foundational [rather than component or architecture specific] fine-tuning induces end-to-end improvement when composed [lines 1058-1066].\", \"weaknesses\": \"This paper has two kinds of weaknesses.\\n\\nThe first kind is due to the nature of academic work, which is resource constrained (small teams; limited compute). This induces a set of \\\"easy\\\" criticisms such as \\\"insufficient experimental validation\\\" or \\\"excessive focus on the small sample regime\\\". I believe both authors and readers are well-aware of practical constraints, so this reviewer will not weigh such concerns heavily.\\n\\nThe second kind is insufficient description to allow the reader to understand what was done. Specifically, the weakest parts of the paper are all related to the impact of fine-tuning, which is not sufficiently described (see questions). Authors could improve both the intelligibility and the impact of this paper via more detail.\", \"questions\": \"1. Authors mention fine-tuning under a limited budget (e.g., 18 trajectories), following the specification on lines 1048-1057. However, what is the source of these trajectories? Are they human annotated trajectories? Trajectories sampled from the zero-shot architecture, perhaps conditioned on task success? How are the special tokens which indicate positive/negative judgment produced for the fine-tuning data? It is not clear, and in the small-sample regime, these details are critical.\\n 1. Related: Under what policy (action distribution) is the future simulator generating? If the reweighted action distribution becomes divergent from the prior, will the future simulator be invalidated?\\n1. 
Authors argue equation (6) on line 375 is a more sample-efficient way to incorporate labeled trajectory information than conventional policy gradient via the equation on line 833. However, the value-critic on line 833 does not take simulated rollouts as a parameter. Perhaps this is a typo? Otherwise the comparison is invalid.\\n1. In Figure 8, it is unclear why LAC+fine-tuned-actor underperforms LAC. The lack of commentary raises reader suspicion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I acknowledge the author's response and thank them for addressing the weaknesses and questions I raised. I will maintain my score.\"}", "{\"metareview\": \"This paper presents an approach to improve decision-making in LLMs. Given a current state and goal, the approach first generates language feedback on previous actions which is then used, together with a value critic, to reweight the actor's action probabilities.\", \"the_main_concerns_with_this_paper_are\": \"(i) lack of novelty and (ii) experiments being presented on older, less challenging benchmarks rather than more recent, challenging ones like WebArena. The novelty argument comes down to the fact that multiple papers now use the different pieces present in the described approach. There is a large body of work that uses LLMs to generate actions for decision-making, then there are LLM reflection papers that generate language feedback using the LLM and use it to improve the model, and finally, there are papers that generate reward values using LLMs. This concern doesn't bother me too much as it can be non-trivial to assemble these ideas into a product. Implementation matters a lot, as I think the ML community has learned in the last few years. However, I agree with reviewer Xk7a that one would have expected results on more challenging domains. 
I think WebShop is a good domain that the authors have added during the discussion period, but results on challenging domains like WebArena would have helped a lot.\\n\\nI do like the results here and the authors have done a good job overall in presenting more baselines in the discussion. The novelty part doesn't bother me too much. That said, I am leaning towards a weak rejection for now with a strong encouragement to submit again with results on more challenging domains. I think the paper is close to an accepted state.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers discussed this paper thoroughly. The main concerns that came up were:\\n\\n1. Reviewers KnwK and Xk7a raised concerns about weak experiments: particularly the lack of challenging domains. Authors added experiments on a new domain called WebShop and got positive results. In response, reviewer Xk7a argued that WebShop is somewhat contrived and that experiments on OSWorld or WebArena would be better. While I appreciate that the authors added a new domain, I do think including a more common and challenging domain like WebArena would have been nice.\\n\\n2. Comparisons with other approaches such as LATS (reviewer thbr) and variations of the algorithms (reviewer FHde). Authors present positive results against LATS and variations. Overall, I think this makes the approach promising.\\n\\n3. There were concerns about novelty as past works have studied the different pieces in this paper. I am okay with the approach not being novel but this makes achieving positive results on challenging domains more necessary.\\n\\nOverall, I think the authors addressed concerns in (2) and I think lack of novelty in itself isn't that important in my assessment. But I think further evaluations are necessary to ensure that the approach works generally.\"}", "{\"title\": \"Follow-Up on Rebuttal for Your Review\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your time and effort in reviewing our work. 
We have provided detailed clarifications and experimental results in our rebuttal to address the issues and concerns raised in your comments.\\n\\nIf our response satisfactorily resolves your concerns, we kindly ask if you could reconsider your evaluation of our work. Should you have any additional questions or comments, we would be happy to engage in further discussions to ensure all aspects are addressed.\\n\\nThank you again for your thoughtful review and support.\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Clarifications of the Questions (Part 2)\", \"comment\": \"> **Question 2**: Authors argue equation (6) on line 375 is a more sample efficient way to incorporate labeled trajectory information than conventional policy gradient via the equation on line 833. However the value-critic on line 833 does not take simulated rollouts as a parameter. Perhaps this is a typo? Otherwise the comparison is invalid.\\n\\n**A**: We thank the reviewer for pointing this out. The value-critic on line 833 also takes the simulated rollouts as input in practice. We have corrected this typo in the revised version of our paper. \\n\\n> **Question 3**: In figure 8, it is unclear why LAC+fine-tuned-actor underperforms LAC. The lack of commentary raises reader suspicion.\\n\\n**A**: Compared to LAC, the underperformance of `LAC w/ fine-tuned-actor` arises from its tendency to overfit the training trajectories. This overfitting causes the actor to favor actions that are more frequent in the dataset, potentially leading to suboptimal action selection. \\n\\nFor example, in the AlfWorld training dataset, the action \\\"take an apple from X\\\" occurs frequently. After fine-tuning, the actor may disproportionately generate this action, even when it is irrelevant to the current goal. One case is that the current goal is to \\\"heat some egg and put it in the garbage can\\\". 
When the agent sees an \\\"apple 2\\\" in \\\"fridge 1\\\", it generates and selects an irrelevant action \\\"take apple 2 from fridge 1\\\", which does not align with the task.\\n\\nThis tendency towards overfitting arises because the complexity of the policy function, which maps states $s$ to actions $a$, often exceeds that of the critic. The policy often has to capture a wide variety of potential actions for each state, particularly in complex environments. However, the quite limited training dataset in our setting restricts its ability to generalize effectively, resulting in memorization of specific actions rather than flexible decision-making. In contrast, our critic, which includes a world model for rollout and an evaluation function, focuses on capturing more predictable dynamics of the environment and simpler evaluation criteria. This typically requires simpler mappings than those needed for the policy, thus avoiding overfitting.\\n\\nWe have added this clarification in the revised version of our paper to address any ambiguity.\\n\\nWe hope these revisions and clarifications adequately address your concerns. Thank you for your valuable feedback.\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions\", \"comment\": \"Thank you for your timely feedback! Here we provide new experimental results and extra clarifications to your questions.\\n\\n> **Question 1:** Could the authors clarify how actions were chosen to be \\\"judged\\\" when the action space is large? Were a subset of permissible actions sampled at the beginning?\\n\\n**A**: We do not manually sample a subset of permissible actions at the beginning. Instead, we allow the actor to sample a subset of candidate actions with higher probabilities and then apply gradient-free policy improvement to refine the action distribution. 
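As a rough sketch of this gradient-free refinement (hypothetical names and a hypothetical temperature `alpha`; this is not the exact update in the paper, which follows its Eq. 6): candidate actions sampled from the actor are reweighted by exponentiated critic values, and the highest-weight action is selected greedily, matching the "exponential log-odds reweighting" characterization in the reviews:

```python
import math

def refine_and_select(candidates, prior_probs, q_values, alpha=1.0):
    """Hypothetical sketch of gradient-free policy improvement:
    reweight the actor's candidate probabilities by exp(alpha * Q)
    and pick the highest-posterior action.

    candidates:  action strings sampled from the actor
    prior_probs: the actor's probabilities for those candidates
    q_values:    value-critic scores for each candidate
    """
    weights = [p * math.exp(alpha * q) for p, q in zip(prior_probs, q_values)]
    total = sum(weights)
    posterior = [w / total for w in weights]
    best = max(range(len(candidates)), key=lambda i: posterior[i])
    return candidates[best], posterior

# A low-prior candidate with a high critic value can overtake the prior favorite.
action, post = refine_and_select(["action_a", "action_b"], [0.7, 0.3], [-1.0, 1.0])
print(action)  # "action_b"
```

Here `alpha` plays the role of the inverse temperature implied by a KL-regularized objective: larger `alpha` trusts the critic more, while `alpha = 0` recovers the actor's prior.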
This strategy enables us to focus on actions with the highest potential, allowing us to efficiently navigate the large action space while ensuring effective decision-making.\\n\\n\\n> **Question 2:** I do find the results between LAC and ReAct to be rather close for more sophisticated models though. \\n\\n**A**: We appreciate your observation. However, it's important to note that the performance of LAC and ReAct is influenced by a variety of factors of LLMs beyond model complexity, including reasoning, modeling, and planning capabilities. For instance, as shown in the results for WebShop and AlfWorld, the `final reward` or `success rate` gap between LAC+Llama-3-8B and ReAct+Llama-3-8B is more significant than the gap between LAC+Mistral-7B and ReAct+Mistral-7B, even though Llama-3-8B is more sophisticated than Mistral-7B. This suggests that the disparity in performance is not solely a function of the sophistication of the models used.\\n\\n\\n> **Question 3:** They also appear to be weaker than the results obtained by LATS, which to my knowledge, currently achieves state-of-the-art in this domain [1]. If the authors could also show a substantial benefit of LAC over LATS, I would consider further raising my score.\\n\\n**A**: Thanks for your constructive suggestion. We present a detailed performance comparison between LAC and LATS in Table 3 and Table 4 below. Our results demonstrate that LAC consistently outperforms LATS across all evaluated base models on the WebShop benchmark. Notably, LAC combined with Gemma-7B surpasses the performance of LATS with GPT-3.5 in terms of Success Rate and achieves comparable performance in terms of Reward. 
These findings underscore the effectiveness of LAC in enhancing the decision-making capabilities of large language models.\\n\\n\\n**Table 3: Success rate comparison between LAC and LATS in WebShop**\\n\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| :------------------------------- | :----------- | :------- | :--------- | :--------- |\\n| LAC (ours) | **32%** | **46%** | **39%** | **32%** |\\n| LATS (reported: 38%, w/ GPT-3.5) | 14% | 39% | 37% | 27% |\\n| ReAct | 15% | 35% | 37% | 30% |\\n| RAP | 19% | 28% | 28% | 26% |\\n\\n\\n**Table 4: Final reward comparison between LAC and LATS in WebShop**\\n\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| :-------------------------------- | :----------- | :--------- | :--------- | :--------- |\\n| LAC (ours) | **0.5840** | **0.7237** | **0.6733** | **0.6299** |\\n| LATS (reported: 75.9, w/ GPT-3.5) | 0.4924 | 0.6313 | 0.6521 | 0.6063 |\\n| ReAct | 0.5042 | 0.6332 | 0.6445 | 0.6159 |\\n| RAP | 0.5545 | 0.6048 | 0.6215 | 0.5594 |\\n\\nThanks again for your prompt feedback. We hope our extra explanations and experimental results address your concerns and we would appreciate it if you could re-evaluate our paper. If you have any further questions or concerns, please feel free to let us know. \\n\\n\\n---\\n\\n**References**\\n\\n[1] Zhou, Andy, et al. \\\"Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models.\\\" Forty-first International Conference on Machine Learning.\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions (Part 4)\", \"comment\": \"### **Question 3:** The details of the computation analysis experiment.\\n\\n> **Question 3.1:** While the authors present additional results of computation analysis in Section 5.4, it seems to be a red flag to me as many important details are missing. E.g. 
What is the base model used here, are they the same for all methods?\\n\\n**A**: All results in Section 5.4 are based on the Llama3.1-8B model, and the same base model was used for all methods to ensure consistency.\\n\\n> **Question 3.2:** What is the hardware for this to be tested on, is there any necessary parallelism?\\n\\n**A**: All experiments were conducted on a single Nvidia A100 GPU with 80GB of video memory, supported by a machine with 100GB of RAM. To ensure a fair comparison across our experiments, we utilized the same version of the HuggingFace Transformers library [12], which employs consistent parallelism techniques for accelerating inference. \\n\\n> **Question 3.3:** How is each metric calculated? \\n\\n**A**: Each metric is calculated as follows:\\n\\n- `Steps per Task`: This metric quantifies the number of actions executed for each task. It specifically counts the actions taken by our method and the baseline methods, excluding any planning or reasoning steps that may precede those actions.\\n- `Time Used per Task`: This metric measures the total time taken to execute each task, from initiation to completion or termination. It encompasses all time costs associated with the task, including execution time as well as any time spent on reasoning and planning.\\n- `Token Cost per Task`: This metric captures the total number of tokens utilized by each method during the execution of a task. It includes tokens consumed for both action generation and any planning activities that occur during the task execution.\\n\\n> **Question 3.4:** Why does ReAct have more steps but much less time per task? \\n\\n**A**: As we explained above, the `Steps per Task` metric counts only the execution steps involved in completing a task, while the `Time Used per Task` metric encompasses the total time required, including both thinking (reasoning and planning) and execution phases.\\n\\nReAct does not perform explicit planning, thus having significantly less thinking time. 
However, it can result in more execution steps as it may take longer to complete the task or may terminate when the maximum time step is reached. Thus, ReAct exhibits more steps but less overall time per task.\\n\\n> **Question 3.5:** Does \\\"steps per task\\\" take into account the costs of estimating Q(s,a) with trajectory rollouts?\\n\\n**A**: No. As mentioned above, `Steps per Task` counts how many actions are actually executed for each task. The planning or reasoning steps of our method and baselines are not taken into account for this metric.\\n\\nThanks again for your prompt feedback. We hope our extra explanations and experimental results have made the novelty of our work more clear to you. We would appreciate it if you could re-evaluate our paper. If you have any further questions or concerns, please feel free to let us know.\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions (Part 1)\", \"comment\": \"### **Question 1:** The novelty of language and value critic compared to prior works.\\n\\n> **Question 1.1:** It seems from the response of the author that the main difference of the language critic from naive chain-of-thought is that naive chain-of-thought reflects on the current state while the language critic reflects on the previous state and action results. It does not seem to make enough contributions as it seems that the only difference is whether the observation space contains the previous states and actions or not.\\n\\n**A**: We respectfully disagree with the claim about the novelty of lang-critic. Though the lang-critic draws inspiration from CoT, it is not trivial to determine what the reflections should be. The lang-critic generates judgments based on previous actions and their outcomes, whereas CoT uses arbitrary thought generation without a structured focus on past mistakes. 
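The judgment-then-action loop just described could look roughly as follows. This is a sketch only: `generate` is a hypothetical stand-in for an LLM completion call, and the prompt wording is invented, not taken from the paper:

```python
def lang_critic_step(generate, history):
    """One decision step in which the lang-critic first judges the previous
    action and its outcome, and the actor then generates the next action
    conditioned on that judgment (instead of on free-form CoT thoughts)."""
    judgment = generate(
        "Judge the previous action and its outcome:\n" + "\n".join(history))
    action = generate(
        "\n".join(history) + "\nJudgment: " + judgment + "\nNext action:")
    return judgment, action
```

The key design point, as the authors argue, is that the actor's next-action prompt explicitly contains the critic's judgment of past mistakes rather than unconstrained intermediate thoughts.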
The lang-critic is specially designed for decision-making tasks and offers two advantages over naive CoT: \\n\\n(1) Lang-critic can improve the policy directly by avoiding previous mistakes when the actor is conditioned on the lang-critic's generation.\\n\\nTo substantiate this claim, we conducted experiments comparing the performance of `actor w/ lang-critic` and `actor w/ CoT` on the WebShop benchmark. Here are the details of these two variants:\\n\\n- `actor w/ lang-critic`: We remove all components of LAC except the actor and lang-critic, which are left unchanged. Specifically, at each step, after observing the action results, the lang-critic first generates judgments on previous actions, and then the actor selects the next action based on those judgments.\\n- `actor w/ CoT`: We remove all components of LAC except the actor. Additionally, we equip the actor with CoT by adding \\\"Let's think step by step\\\" to the prompt. Specifically, at each step, before choosing the next action, the CoT prompting component first outputs arbitrary thoughts that may help solve the task.\\n\\nAs shown in Table 3 and Table 4 below, `actor w/ lang-critic` consistently surpasses `actor w/ CoT` across most base models in terms of both Success rate and Reward. 
By analyzing the results, we found that `actor w/ CoT` may make the same mistake repeatedly and get stuck on it, while `actor w/ lang-critic` can largely avoid previously seen mistakes.\\n\\n**Table 3: Success rate comparison of `actor w/ lang-critic` and `actor w/ CoT` in WebShop benchmark**\\n\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| -------------------- | ------------ | -------- | ---------- | ---------- |\\n| actor w/ lang-critic | **21%** | **46%** | **39%** | **31%** |\\n| actor w/ CoT | **21%** | 20% | 31% | 20% |\\n\\n\\n**Table 4: Final reward comparison of `actor w/ lang-critic` and `actor w/ CoT` in WebShop benchmark**\\n\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| -------------------- | ------------ | ---------- | ---------- | ---------- |\\n| actor w/ lang-critic | **0.5739** | **0.6564** | **0.6556** | **0.6288** |\\n| actor w/ CoT | 0.5520 | 0.5347 | 0.6379 | 0.4671 |\\n\\n\\n(2) Lang-critic can be seamlessly integrated with value-critic, which helps the value-critic generate more accurate value-based evaluations. \\n\\nTo further evaluate this integration, we compare the performance of `LAC` and `LAC w/ CoT` on the WebShop benchmark. The details of the two methods are as follows:\\n\\n- `LAC`: Our original method.\\n- `LAC w/ CoT`: We replace the lang-critic component of LAC with CoT and keep other components unchanged.\\n\\nWe show the results in Table 5 and Table 6 below. Our method `LAC` consistently outperforms `LAC w/ CoT` in terms of Success rate and Reward across all evaluated base models. This is because, without lang-critic's judgment on previous steps, the value-critic may output inaccurate value-based estimations, hindering the policy improvement phase. 
\\n\\n**Table 5: Success rate comparison of `LAC` and `LAC w/ CoT` in WebShop benchmark**\\n\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------ | ------------ | -------- | ---------- | ---------- |\\n| LAC (ours) | **32%** | **46%** | **39%** | **32%** |\\n| LAC w/ CoT | 27% | 29% | 33% | 24% |\\n\\n**Table 6: Final reward comparison of `LAC` and `LAC w/ CoT` in WebShop benchmark**\\n\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ---------- | ------------ | ---------- | ---------- | ---------- |\\n| LAC (ours) | **0.5840** | **0.7237** | **0.6733** | **0.6299** |\\n| LAC w/ CoT | 0.5734 | 0.5270 | 0.6515 | 0.5101 |\\n\\nIn summary, the lang-critic's focused evaluations of past actions provide critical advantages over the simple reflections of CoT, leading to improved performance in decision-making tasks.\"}", "{\"title\": \"Additional Experimental Results and Clarifications to Other Questions (Part 2)\", \"comment\": \"> **Question 3**: While the benchmarks are well-known, I think they are perhaps not realistic of tasks that people might actually want LLMs to accomplish. For example, it would be interesting to see the authors evaluate on tasks considered in the GDP-Zero paper such as donation solicitation [1].\\n\\n**A**: Thank you for suggesting an evaluation on more realistic tasks. While we were unable to include results on donation solicitation as discussed in [3] due to time constraints, we did conduct experiments on WebShop [2], which we believe represents a more realistic task that LLMs might be expected to perform. A brief description of this benchmark and the corresponding results are provided in the responses above. Our proposed method, LAC, consistently outperforms other baselines in terms of both success rate and final reward across various base models. These results highlight the effectiveness of our approach in addressing more complex and realistic tasks. 
We hope the experiments on WebShop [2] address your concerns.\\n\\nWe hope these revisions and clarifications adequately address your concerns. Thank you for your valuable feedback and insightful suggestions.\\n\\n---\\n\\n**References**\\n\\n[1] Yu, Xiao, Maximillian Chen, and Zhou Yu. \\\"Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning.\\\" The 2023 Conference on Empirical Methods in Natural Language Processing.\\n\\n[2] Yao, Shunyu, et al. \\\"Webshop: Towards scalable real-world web interaction with grounded language agents.\\\" Advances in Neural Information Processing Systems 35 (2022): 20744-20757.\\n\\n[3] Yu, Xiao, Maximillian Chen, and Zhou Yu. \\\"Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning.\\\" arXiv preprint arXiv:2305.13660 (2023).\"}", "{\"title\": \"New Experimental Results and Clarifications of the Questions (Part 1)\", \"comment\": \"Thanks for your comments and valuable suggestions. Here we provide detailed explanations for your questions. The detailed experimental results are provided in the revised version of our paper.\\n\\n> **Question 1**: Is it reasonable to summarize the algorithm differences in the following manner: ReAct includes reasoning + actor, Critic only includes future trajectory + actor, and Lang-critic includes evaluation + actor?\\n\\n**A**: For better clarification, we first provide a summary of our method\\u2019s variants in Table 1 below. 
Our approach integrates the actor with dual-critics: the Lang-Critic, which provides language-based evaluations of actions, and the Value-Critic, which offers value-based assessments of candidate actions that will be used to refine the policy via gradient-free policy improvement as described in Eq.6.\\n\\nRegarding the differences:\\n\\n- ReAct includes reasoning + actor, but it does not include critics so we classify it into `Actor-only` methods.\\n - The term `Critic-only` is misleading if interpreted as including future trajectory + actor. `Critic-only` refers to the methods that solely rely on value-based evaluations of actions for decision-making while ignoring the actor's prior knowledge of action distribution, although they also use the `actor` to sample candidate actions.\\n - Similarly, characterizing `Lang-critic` as including evaluation + actor is inaccurate. The `Lang-Critic` is a component of our LAC that specifically provides language-based evaluations of actions.\\n\\n**Table 1: Summary of our method's variants.**\\n| Method | Actor | Lang-Critic | Value-Critic |\\n| -------------------- | ------- | ----------- | --------------------------------------------- |\\n| LAC | $\\surd$ | $\\surd$ | $\\surd$ |\\n| LAC w/o rollout | $\\surd$ | $\\surd$ | $\\surd\\times$ (w/o rollout) |\\n| Value-critic only | $\\surd$ | $\\surd$ | $\\surd\\times$ (w/o policy improvement, Eq.6) |\\n| LAC w/o value-critic | $\\surd$ | $\\surd$ | $\\times$ |\\n| LAC w/o lang-critic | $\\surd$ | $\\times$ | $\\surd$ |\\n| ReAct | $\\surd$ | $\\times$ | $\\times$ |\\n\\n> **Weakness 2 & Question 2**: (W2:) The paper is missing important baselines needed to understand the performance gain claims. (Q2:) Why does the value critic require a future trajectory, and how does it perform without future trajectories?\\n\\n**A**: The `Value-Critic` relies on future trajectory predictions to improve the accuracy of its evaluations. 
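A minimal sketch of this rollout-based evaluation follows. The helpers are hypothetical: `simulate` stands in for the world-model rollout that predicts the future trajectory an action leads to, and `evaluate` for the value-based scoring of that trajectory; both would be LLM calls in practice:

```python
def rank_by_rollout(simulate, evaluate, state, candidates):
    """Score each candidate action by rolling it out with a (hypothetical)
    world model and evaluating the predicted trajectory, then return the
    best-scoring (action, score) pair. Higher score = better long-term outcome."""
    scored = [(a, evaluate(simulate(state, a))) for a in candidates]
    return max(scored, key=lambda pair: pair[1])
```

This is the intuition behind why removing rollouts (`LAC w/o rollout`) hurts: without the simulated trajectory, `evaluate` only sees the immediate state-action pair.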
By predicting future trajectories, the critic considers long-term consequences and evaluates actions more effectively, which ultimately leads to better decision-making.\\n\\nFor a full comparison, we have conducted a new experiment for `LAC w/o rollout`, in which the `Value-Critic` generates value-based evaluations without future trajectory predictions. We present the detailed results in Figures 10 and 11 of the revised paper, and we also show some results in Table 2 and Table 3 below. The results show that `LAC w/o rollout` consistently underperforms compared to the full LAC across various base models. This finding emphasizes the importance of future trajectory predictions for accurate evaluations.\\n\\n**Table 2: More ablation studies in AlfWorld benchmark.**\\n| Method | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| -------------------- | ------------ | ---------- | ---------- | ---------- |\\n| LAC | **79.10%** | **84.33%** | **77.61%** | **79.10%** |\\n| LAC w/o rollout | 59.70% | 74.63% | 31.34% | 64.67% |\\n| Value-critic only | 47.01% | 64.69% | 66.42% | 66.42% |\\n| LAC w/o value-critic | 71.64% | 72.39% | 58.21% | 76.12% |\\n| LAC w/o lang-critic | 41.79% | 70.15% | 47.76% | 56.72% |\\n| ReAct | 20.15% | 54.48% | 30.60% | 33.83% |\\n\\n**Table 3: More ablation studies in BabyAI benchmark.**\\n| Method | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| -------------------- | ------------ | -------- | ---------- | ---------- |\\n| LAC | **46%** | **76%** | **66%** | **70%** |\\n| LAC w/o rollout | 44% | 60% | 64% | 50% |\\n| Value-critic only | 38% | 64% | 56% | 62% |\\n| LAC w/o value-critic | 34% | 52% | 52% | 38% |\\n| LAC w/o lang-critic | 32% | 60% | 48% | 54% |\\n| ReAct | 42% | 48% | 42% | 26% |\"}", "{\"title\": \"Additional Experimental Results and Clarifications to Other Questions (Part 2)\", \"comment\": \"> **Weakness 2**: The novelty of the method is limited. 
The language critic and value critic are two main proposals in this paper. However, the language critic is relatively simple and can be considered as a direct use of CoT [5] where the agent is asked to generate thoughts reflecting on the previous round actions before taking actions. The objective of the value critic is also similar to constrained decoding [6] that has been widely used in the alignment domain without the need of performing gradient updates on models.\\n\\n**A**: We respectfully disagree with the assessment regarding the novelty of our contributions. Below we address the points raised and clarify the distinct contributions of our work relative to the cited methods.\\n\\n - **Novelty of the Language Critic**: (1) While it is true that our language critic draws inspiration from techniques like Chain-of-Thought (CoT) prompting [5], our approach goes beyond merely generating intermediate reflections. Specifically, the language critic in our framework evaluates prior actions using natural language feedback and dynamically integrates this feedback into the actor\\u2019s decision-making process through in-context learning. This enables the agent to not only reflect on previous decisions but also refine its subsequent action space in a structured manner. (2) Unlike CoT, which focuses on reasoning for static problem-solving, our language critic is explicitly designed for sequential decision-making tasks, where the evaluation of actions must adapt to the evolving context of the task. This extension is critical in domains requiring iterative feedback loops and real-time adjustments, as demonstrated in our benchmarks (Section 5.2 and 5.3 of the paper). \\n - **Novelty of the Value Critic**: (1) While constrained decoding [6] provides a mechanism for aligning generated outputs with desired constraints, our value critic operates differently in both its objectives and implementation. 
Constrained decoding typically enforces constraints at the output level, often without evaluating long-term sequential outcomes. In contrast, our value critic explicitly estimates the expected cumulative rewards of actions by leveraging LLMs' internal belief probabilities, coupled with a gradient-free optimization framework. (2) The introduction of trajectory rollouts within our value critic further distinguishes it from constrained decoding. By simulating future outcomes of candidate actions, our approach provides a robust mechanism to estimate long-term action-value relationships, ensuring both contextual alignment and quantitative optimization. As shown in the ablation studies (Section 5.3), this integration leads to a significant performance boost over value-only or heuristic methods.\\n - **Integration of Dual Critics with Policy Improvement**: The most significant novelty of our framework lies in the synergistic integration of the language critic and value critic and enabling gradient-free policy improvement. While each critic independently enhances decision-making, their combined use enables a holistic evaluation of actions, balancing qualitative insights (language critic) with quantitative optimization (value critic). This dual-feedback mechanism is unique to our framework and has not been addressed in prior works such as CoT or constrained decoding.\\n\\nWe believe that these distinctions substantiate the novelty of our proposed methods and their significant contributions to LLM-based sequential decision-making. Thank you for pointing out this opportunity to clarify our contributions.\"}", "{\"title\": \"Additional Experimental Results and Clarifications to Other Questions (Part 1)\", \"comment\": \"Thanks for the valuable comments and helpful suggestions. Here we provide additional experimental results and detailed explanations for your questions. 
The detailed experimental results are provided in the revised version of the paper.\\n\\n> **Weakness 1 & Question 1**: (W1:) Namely, the authors only consider tasks with small action spaces, where evaluating each action individually is tractable. In many more realistic tasks such as dialogue, I imagine that the action space would be more open-ended and I am unsure how to adapt the proposed method. (Q1:) How would the method change when the task has a potentially infinite action space (such as in dialogue)?\\n\\n**A**: Thanks for the insightful suggestion. We have conducted new experiments using the WebShop benchmark [2], which presents a scenario with a potentially infinite action space. This benchmark requires an agent to purchase a product based on specific instructions (e.g. \\\"I need a long clip-in hair extension which is natural looking, and price lower than 20.00 dollars\\\") through web interactions (e.g. search \\\"long clip-in hair extension\\\", click buttons such as \\\"[item ID]\\\" or \\\"back to search\\\"). Within this context, the 'search' and 'click' actions can indeed lead to an unbounded set of potential actions, as the agent can continuously refine its queries and selections based on dynamic web results. \\n\\nWe present the detailed results in Figure 7 of the revised paper, and we also show some results in Table 1 and Table 2 below. We found that our method, LAC, consistently outperforms other baselines in terms of both success rate and final reward across various base models. 
This demonstrates the robustness of our method in handling more complex and open-ended action spaces.\\n\\n**Table 1: Success rate comparison in WebShop benchmark**\\n| Success Rate | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ------------ | ------------ | -------- | ---------- | ---------- |\\n| LAC (ours) | **32%** | **46%** | **39%** | **32%** |\\n| ReAct | 15% | 35% | 37% | 30% |\\n| RAP | 19% | 28% | 28% | 26% |\\n\\n**Table 2: Final reward comparison in WebShop benchmark**\\n| Reward | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |\\n| ---------- | ------------ | ---------- | ---------- | ---------- |\\n| LAC (ours) | **0.5840** | **0.7237** | **0.6733** | **0.6299** |\\n| ReAct | 0.5042 | 0.6332 | 0.6445 | 0.6159 |\\n| RAP | 0.5545 | 0.6048 | 0.6215 | 0.5594 |\\n\\n> **Weakness 2**: In addition, the tasks considered rely on being able to simulate future trajectories with high fidelity, which may be harder to do in more complex environments. Specifically, it is likely much harder to faithfully predict trajectories when the agent is engaging with another human rather than simply moving around in a static environment.\\n\\n**A**: We acknowledge the challenge of accurately predicting future trajectories in complex environments. To mitigate this issue, our method incorporates a KL-divergence constraint in Equation 4, balancing the actor's prior knowledge with the critic's evaluations, rather than solely relying on the critic's evaluations. This approach reduces reliance on potentially inaccurate future predictions.\\n\\n> **Weakness 3 & Question 2**: (W3:) Finally, the value critic currently only works for tasks with binary outcomes (success or failure). (Q2:) Have the authors experimented with tasks with more expressive outcomes/rewards? I am curious if both critics can still behave well when there are more nuanced outcomes than just success or failure.\\n\\n**A**: Thank you for your valuable feedback. 
The WebShop benchmark discussed above also includes expressive rewards in the range of [0, 1]. In scenarios where the purchased product only partially satisfies requirements, the final reward is a value between 0 and 1. The results of the final reward in Table 2 above show that LAC outperforms other baselines across various base models, demonstrating that it can effectively adapt to more nuanced outcomes beyond binary success or failure.\"}", "{\"title\": \"New Experimental Results and Clarifications to Other Questions (Part 5)\", \"comment\": \"**References**\\n\\n[1] Setlur, Amrith, et al. \\\"RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold.\\\" arXiv preprint arXiv:2406.14532 (2024).\\n\\n[2] Zhou, Shuyan, et al. \\\"WebArena: A Realistic Web Environment for Building Autonomous Agents.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[3] Xie, Tianbao, et al. \\\"Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments.\\\" arXiv preprint arXiv:2404.07972 (2024).\\n\\n[4] Mudgal, Sidharth, et al. \\\"Controlled Decoding from Language Models.\\\" Forty-first International Conference on Machine Learning.\\n\\n[5] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[6] Verma, Mudit, Siddhant Bhambri, and Subbarao Kambhampati. \\\"Do Think Tags Really Help LLMs Plan? A Critical Evaluation of ReAct-Style Prompting.\\\" Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning. NeurIPS Workshop (2024).\\n\\n[7] Shinn, Noah, et al. \\\"Reflexion: Language agents with verbal reinforcement learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[8] Carta, Thomas, et al. 
\\\"Grounding large language models in interactive environments with online reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[9] Chen, Wentse, et al. \\\"Fine-tuning LLM Agents with Retrospective In-Context Online Learning.\\\" Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning. NeurIPS Workshop (2024).\\n\\n[10] Zhou, Andy, et al. \\\"Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models.\\\" Forty-first International Conference on Machine Learning.\\n\\n[11] Yang, Hui, Sifu Yue, and Yunzhong He. \\\"Auto-gpt for online decision making: Benchmarks and additional opinions.\\\" arXiv preprint arXiv:2306.02224 (2023).\\n\\n[12] Wolf, Thomas, et al. \\\"Transformers: State-of-the-art natural language processing.\\\" Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations. 2020.\"}", "{\"title\": \"Additional Experimental Results and Clarifications to Other Questions (Part 4)\", \"comment\": \"> **Question 1**: 151-152: \\\"Due to the auto-regressive nature of LLM, it does not do reasoning and planning explicitly.\\\" This seems controversial. Chain-of-thought/o-1 are also auto-regressive decoding, but arguably they have some reasoning in them. Same with 152-154, \\\"Accordingly, LLM with actor-only methods often struggles with complex tasks that require multiple steps of planning and reasoning\\\".\\n\\n**A**: There might be some misunderstandings. Our statement refers specifically to the actor-only method, which directly outputs selected actions ($a\\\\sim \\\\pi_{LLM}(\\\\cdot|g,h)$ as defined in the Preliminary Section). This method does not involve explicit reasoning or planning. In contrast, chain-of-thought and o1 incorporate reasoning processes, which distinguishes them from the actor-only method.\\n\\nThanks again for your efforts and insightful comments! 
We hope our clarification addresses your concerns and sincerely appreciate it if you could re-evaluate our work. Any further feedback and discussions are much appreciated.\\n\\n---\\n\\n**References**\\n\\n[1] Zhang, Bin, et al. \\\"Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach.\\\" ICLR 2024 Workshop on Large Language Model (LLM) Agents.\\n\\n[2] Szot, Andrew, et al. \\\"Large language models as generalizable policies for embodied tasks.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n\\n[3] Yao, Weiran, et al. \\\"Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[4] Zhou, Yifei, et al. \\\"ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL.\\\" Forty-first International Conference on Machine Learning.\\n\\n[5] Wei, Jason, et al. \\\"Chain-of-thought prompting elicits reasoning in large language models.\\\" Advances in neural information processing systems 35 (2022): 24824-24837.\\n\\n[6] Mudgal, Sidharth, et al. \\\"Controlled Decoding from Language Models.\\\" Forty-first International Conference on Machine Learning.\\n\\n[7] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[8] Carta, Thomas, et al. \\\"Grounding large language models in interactive environments with online reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[9] Tan, Weihao, et al. \\\"True Knowledge Comes from Practice: Aligning Large Language Models with Embodied Environments via Reinforcement Learning.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[10] Yao, Shunyu, et al. 
\\\"Webshop: Towards scalable real-world web interaction with grounded language agents.\\\" Advances in Neural Information Processing Systems 35 (2022): 20744-20757.\\n\\n[11] Hao, Shibo, et al. \\\"Reasoning with Language Model is Planning with World Model.\\\" NeurIPS 2023 Workshop on Generalization in Planning.\\n\\n[12] Yao, Shunyu, et al. \\\"ReAct: Synergizing Reasoning and Acting in Language Models.\\\" The Eleventh International Conference on Learning Representations.\"}", "{\"title\": \"I have read your response\", \"comment\": \"WebShop results appear net positive. I maintain my score but find the revisions helpful (e.g., typo fix mentioned in another response).\\n\\n> A: The future trajectories are generated based on the original actor \\n. While this may lead to some divergence of the reweighted action distribution from the prior, our method mitigates this risk by incorporating a KL-divergence constraint in Equation 4 in our manuscript, which helps to prevent the new actor from deviating too far from the original actor.\\n\\nDoes this limit the amount of policy improvement that is possible?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"Although large language models (LLMs) have shown impressive capabilities in natural language processing tasks, they struggle with complex reasoning tasks. One common approach to tackle this issue is to train LLMs using reinforcement learning (RL); however, RL methods have several drawbacks. The authors propose a novel gradient-free Actor-Critic framework based on LLMs to overcome these limitations. This new framework includes an actor and a critic component, distinguishing it from previous approaches. 
Notably, the Critic is designed to provide feedback in either language or numerical form to help update the actor.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The ability to help LLMs reason in complex decision-making scenarios is a very important task.\", \"The structure of the paper was well organized and easy to follow, aside from some terminology framing issues.\", \"The paper effectively demonstrates how different types of actor feedback\\u2014reasoning, evaluation, and future trajectory prediction\\u2014affect downstream performance.\"], \"weaknesses\": [\"The terminology used in the paper could be more precise, particularly in relation to terms commonly found in the reinforcement learning literature.\", \"The paper is missing important baselines needed to understand the performance gain claims.\"], \"questions\": [\"Is it reasonable to summarize the algorithm differences in the following manner: ReAct includes reasoning + actor, Critic only includes future trajectory + actor, and Lang-critic includes evaluation + actor?\", \"Why does the value critic require a future trajectory, and how does it perform without future trajectories?\", \"How does ReAct, combined with a value critic, perform?\", \"How does ReAct, combined with a language critic, perform?\", \"Is the prior \\\\pi_{LLM} the same LLMs used for Q_{LLM} for computing the critic values?\", \"How does language critic + value critic perform? (Essentially LAC without update action distribution, instead using the critic values to choose an action)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0tMcsHsHgQ
Towards Undistillable Models by Minimizing Conditional Mutual Information
[ "Linfeng Ye", "Shayan Mohajer Hamidi", "Renhao Tan", "EN-HUI YANG" ]
A deep neural network (DNN) is said to be undistillable if, used as a black-box input-output teacher, it cannot be distilled by knowledge distillation (KD) to train a student model so that the distilled student (called the knockoff student) outperforms the student trained alone with label smoothing (the LS student) in terms of prediction accuracy. To protect the intellectual property of DNNs, it is desirable to build undistillable DNNs. To this end, it is first observed that an undistillable DNN may have the trait that each cluster of its output probability distributions in response to all sample instances with the same label should be highly concentrated, to the extent that each cluster corresponding to each label should ideally collapse into one probability distribution. Based on this observation, and by measuring the concentration of each cluster in terms of conditional mutual information (CMI), a new training method called the CMI minimized (CMIM) method is proposed, which trains a DNN by jointly minimizing the conventional cross entropy (CE) loss and the CMI values of all temperature-scaled clusters across the entire temperature spectrum. The resulting CMIM model is shown, by extensive experiments, to be undistillable by all tested KD methods existing in the literature. That is, the knockoff students distilled by these KD methods from the CMIM model underperform the respective LS students. In addition, the CMIM model is also shown to perform better than the model trained with the CE loss alone in terms of its own prediction accuracy.
[ "Nasty teacher", "Knowledge distillation", "Intellectual property protection" ]
https://openreview.net/pdf?id=0tMcsHsHgQ
https://openreview.net/forum?id=0tMcsHsHgQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wnIRXv3qMX", "kC0xPHvDFJ", "iv4ukjLPHk", "gq64q9rzVt", "dYCqrrrFNG", "cuOlJok4hO", "b6jIGwKpAJ", "YVVHjNRpzG", "YC1VloQzZ8", "T7IsTlTdA4", "Ra3dqJtAq6", "NxHZHTm7pF", "NVr4uqXFvo", "JTTwnWsrXH", "Av4JOS5PZw", "8iDFlh28Hf", "8NLDiea8Vf", "7ZftYBdp2g", "2vqRhJ9f3H", "2TFAjn0iWq", "1sY3cAxVJg", "10a1ojoAuQ", "0nwEq8UAdI", "0eGZF0D2CR" ], "note_type": [ "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732719119278, 1732826853714, 1738169108994, 1730728712594, 1733241480335, 1730667998702, 1732718436935, 1732722566090, 1733294376433, 1732720980384, 1732722223339, 1730791440499, 1733203007628, 1732720374880, 1732719222082, 1733242633525, 1733290427882, 1730323469300, 1733217838582, 1732718303019, 1733203702789, 1732780473614, 1732721622717, 1732722620678 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_UcAd" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_aDtW" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_x3fp" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_nSpi" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_x3fp" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Reviewer_nSpi" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ], [ "ICLR.cc/2025/Conference/Submission1976/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer UcAd (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback. We also appreciate the positive comments regarding the intuitiveness of our idea, the thoroughness of our experiments, and the organization of the paper. Please find our responses to your comments in the following.\\n\\n> ### Weakness 1 (part a): The discussions of the proposed method\\u2019s limitations are missing.\\n\\n**Ans:** We have revised the conclusion section of the paper to explicitly address some of the method's limitations. Thank you for bringing this into our attention.\\n\\n> ### Weakness 1 (part b): The proposed method collapses the logits so that each class\\u2019s output is highly concentrated (as shown in Fig.2), the teacher model might become overly confident in its predictions. This can lead to poor calibration and deteriorate generalization capability on out-of-distribution (OoD) data. Therefore, more settings and evaluations on the protected teacher model\\u2019s performance beyond prediction accuracy are necessary.\\n\\n**Ans:** We would like to clarify the distinction between \\\"*highly concentrated output clusters*\\\" and \\\"*overly confident predictions*\\\". A highly concentrated output cluster **does not necessarily** imply that the model produces overly confident predictions. 
This is because the clusters can be concentrated around points that are not close to one-hot labels (the corners of the probability simplex). As a result, the model can have concentrated outputs without being overly confident. These are two separate concepts.\\n\\nOur experiments support this distinction: the proposed method creates more concentrated output clusters, yet the accuracy of the model improves on the held-out test dataset compared to models trained with the conventional cross-entropy (CE) loss. This observation aligns with findings in [1], where it was demonstrated that training DNNs to produce highly concentrated output clusters can enhance their test accuracy.\\n\\nRegarding out-of-distribution (OoD) data, we acknowledge the importance of such evaluations; however, the primary focus of this work is on training undistillable DNNs. While extending the evaluation to OoD scenarios is a valuable future direction, it is beyond the current scope of this study.\\n\\n> ### Weakness 2: The proposed method involves multiple hyperparameters, e.g., such as the number of power samples $N$ and the range $[0,\\\\beta]$, but the ablation studies on hyperparameter sensitivity are missing. For example, have you assessed how these hyperparameters impact the model\\u2019s undistillability and accuracy? If some parameters are particularly influential, could you highlight those findings?\\n\\n**Ans:** In response to your comment, we have conducted detailed ablation studies for the key hyperparameters, including the number of power samples $N$ and the range $[0, \\\\beta]$, to assess their impact on the model\\u2019s undistillability and accuracy. The results of these studies have been included in the appendix (Appendix L) to provide a comprehensive understanding of the sensitivity and influence of these parameters. 
We hope this addition addresses your concern and strengthens the paper.\\n\\n> ### Weakness 3: The proposed method introduces computation overhead, but the comparisons of computational costs are missing. The method introduces extra computation for minimizing CMI and performing multiple transformations, what is the relative computational cost of training a CMIM-protected model compared to a standard model and other protection methods? As we can see, the experiments are primarily conducted on small datasets, with very limited testing on the larger ImageNet dataset. \\n\\n**Ans:** We have added another section in the appendix (Appendix K) which discusses the computational overhead of CMIM. The experimental results are comprehensive and extensive. In summary, CMIM increases the training time by approximately 20\\\\%.\\n\\nAdditionally, we note that among the existing KD protection methods in the literature, only CMIM (our method) and ST are scalable to larger datasets like ImageNet. This scalability is due to the significant computational complexity of the other benchmark methods, which limits their applicability to smaller datasets. We hope this clarification and the added discussion in the appendix address your concern.\"}", "{\"title\": \"Authors' Response to Comment by Reviewer nSpi\", \"comment\": \"Thank you for taking the time to review our response and providing additional feedback.\\n\\n1. **Code Repository Link:** We sincerely apologize for the oversight in the uploaded PDF, which included an outdated link to our repository. Please find the correct link below:\\n\\n [https://anonymous.4open.science/r/CMIM-CBCA](https://anonymous.4open.science/r/CMIM-CBCA)\\n\\n We regret any inconvenience or confusion this may have caused. We will update the repository for the camera-ready version of the paper.\\n\\n2. **Ablation Study on Parameter $\\\\omega$:** Thank you for pointing out the missing ablation study on $\\\\omega$. 
We have conducted a detailed analysis of $\\\\omega$ using VGG-16 and SNV2 as the teacher-student pair on the CIFAR-100 dataset. \\n\\n In this experiment, we examined the impact of the power coefficient $\\\\omega$ on the knockoff student's performance, keeping $\\\\beta = 2$ and $N = 25$ fixed while varying $\\\\omega$. The results are summarized in the table below:\\n\\n | Value of $\\\\omega$ | 1 | 2 | 5 | 10 | 15 | 20 | 25 | 30 | 40 | 50 | 100 | 200 |\\n |-------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n | Knock-off Student Accuracy | 72.65 | 72.67 | 72.55 | 72.56 | 72.55 | 72.52 | 72.53 | 72.52 | NaN | NaN | NaN | NaN |\\n\\n **Observations:** \\n - When $\\\\omega > 30$, the simulations often result in NaN values due to excessively large exponent values. \\n - At $\\\\omega = 25$, the knockoff student's accuracy reaches its minimum, effectively approximating the behavior observed when $\\\\omega = \\\\infty$. \\n\\nWe hope this addresses your concerns. \\n\\nWe would also like to reiterate the significance of the contribution of the paper again. To the best of our knowledge, our method is the **only one** in the literature capable of training undistillable DNNs that remain robust against a wide range of knowledge distillation (KD) methods. This represents a major advancement in the field.\\n\\nIf you have any further questions or require additional clarification, please do not hesitate to let us know. Lastly, **we kindly ask you to consider raising your score to reflect the impact of this contribution.**\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"To protect the intellectual property (IP) of pretrained DNNs (teachers), the authors propose a method to prevent student models from using knowledge distillation (KD) to mimic the teacher models\\u2019 behavior. 
Specifically, they focus on a black-box scenario where the student model can only access the inputs and output logits of the teacher model. The proposed conditional mutual information minimization (CMIM) method constrains the output probability distributions of the teacher model so that each cluster associated with a label is highly concentrated (highly peaked around a single label). Intuitively, this eliminates the inter-class information from the teacher model's output logits such that the student model receives no more information than the labels themselves, thereby protecting the IP of the teacher model.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea is intuitive and with sufficient details.\\n\\n2. The benchmark defense and knowledge distillation (attack) methods are exhaustive in the experiments.\\n\\n3. The paper is well-organized and easy to follow.\", \"weaknesses\": \"1. The discussions of the proposed method\\u2019s limitations are missing. The proposed method collapses the logits so that each class\\u2019s output is highly concentrated (as shown in Fig.2), the teacher model might become overly confident in its predictions. This can lead to poor calibration and deteriorate generalization capability on out-of-distribution (OoD) data. Therefore, more settings and evaluations on the protected teacher model\\u2019s performance beyond prediction accuracy are necessary.\\n\\n2. The proposed method involves multiple hyperparameters, e.g., such as the number of power samples $N$ and the range $[0,\\\\beta]$, but the ablation studies on hyperparameter sensitivity are missing. For example, have you assessed how these hyperparameters impact the model\\u2019s undistillability and accuracy? If some parameters are particularly influential, could you highlight those findings?\\n\\n3. The proposed method introduces computation overhead, but the comparisons of computational costs are missing. 
The method introduces extra computation for minimizing CMI and performing multiple transformations, what is the relative computational cost of training a CMIM-protected model compared to a standard model and other protection methods? As we can see, the experiments are primarily conducted on small datasets, with very limited testing on the larger ImageNet dataset.\", \"questions\": \"Please see the Weaknesses. Other questions:\\nThe proposed method seems to be limited to a single-label classification setting. Can the method potentially extend to regression or multi-label classification, where outputs are continuous or to predict multiple classes simultaneously? Can the method be adapted to protect the IP of the state-of-the-art models, e.g., LLMs, CLIP, and Diffusion models, which require it most?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed and constructive feedback. We appreciate the time you have taken to clarify your perspective and for recognizing the value of our work.\\n\\n1. Regarding the relationship between label smoothing and entropy:\\n We acknowledge and agree with your clarification that the term minimizing the cross-entropy between a uniform distribution and the DNN output distribution can indeed be interpreted as a confidence penalty term. Our intention was to emphasize that label smoothing introduces a regularization effect, rather than equating it directly with maximizing entropy. \\n\\n2. On the relationship between entropy and the concentration of output clusters:\\n We understand your concern and appreciate the nuanced point you are raising. While we agree that techniques like label smoothing can lead to more concentrated output clusters in practice, our statement was intended to highlight that increasing the entropy of the outputs does not *necessarily* lead to such concentration under all conditions. 
To address your comment, we will clarify this statement in the revised manuscript to reflect that there is a complex relationship between entropy and cluster concentration, influenced by the specific regularization technique employed.\\n\\n3. Writing quality and experimental analysis:\\n We are grateful for your constructive criticism regarding the writing quality and the need for further experimental analysis. In response, we plan to carefully revise the manuscript to improve its overall clarity and coherence. \\n\\n4. Rebuttal impact:\\n We are pleased to see that our additional experiments addressing the comparison with label smoothing have partially alleviated your concerns. We will include the extended analysis and discussion in the revised paper to provide a more comprehensive understanding of the trade-offs offered by our method.\\n\\nThank you once again for your thoughtful review and for increasing your score. Your feedback has been invaluable in helping us identify areas for improvement and strengthening the presentation of our work.\"}", "{\"summary\": \"The authors tackle the topic of *defending* trained models from getting *stolen* through knowledge distillation. They investigate when the teacher models are *undistillable* by knowledge distillation and introduce the CMIM method to train teachers to concentrate the predicted probability vectors in close clusters to minimize the information available for distillation. 
They theoretically introduce and support their method and empirically test the procedure as well as other *defence* techniques against multiple *attacking* techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The topic of undistillable models is highly relevant, particularly given the growing online prevalence of large closed-source models.\", \"The paper is mostly well-written with mostly appropriately supported claims.\", \"The authors provide a nice balance between theoretical and empirical results.\"], \"weaknesses\": [\"L040: Missing reference to theoretical paper by Borup and Andersen (2021), \\u201cEven your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation,\\u201d NeurIPS.\", \"*\\\"An insight is provided that in order for a DNN to be undistillable, it is desirable for the DNN to possess the trait that each cluster of the DNN\\u2019s output probability distributions corresponding to each label is highly concentrated to the extent that all probability distributions within the cluster more or less collapse into one probability distribution close to the one-hot probability vector of that label.\\u201d* Although I am unable to provide a reference, the fact that a concentrated probability distribution is uninformative, and thus desirable for avoiding distillation, is, to the best of my knowledge, common knowledge in the field, and should not be considered a contribution of this paper.\", \"Replacing $\\\\hat{Y}$ with $\\\\hat{Y}^\\\\alpha$ in the MI and CMI is simply replacing a variable in a function; the change itself is not inherently innovative and does not warrant a notion of \\\"contribution\\\". 
However, the implications of doing so may hold some importance.\", \"Section 4.1 appears redundant, as the extension follows naturally by substituting variables (see also comment above).\", \"Variance estimates in Appendix J reveal that multiple cases deemed \\\"undistillable\\\" in Table 1 do not definitively qualify as such.\", \"Table 2 mistakenly labels \\\"RSP\\\" as \\\"RSG\\\" and \\\"RSD.\\\"\", \"Omitting CE from Table 1 limits insight into the distillability of the standard training procedure, and it is unclear how much better the proposed method is than the simplest baseline.\", \"(Minor) When introducing notation, some notation is used before it is introduced. Consider reordering this section so that no notation convention is used before it has been introduced.\", \"Equation (3): avoid using $\\\\times$ if it solely represents normal multiplication.\", \"L403: \\\"intensive\\\" -> \\\"extensive.\\\"\", \"L520-L524 could be rephrased, as the current phrasing appears redundant and confusing.\"], \"questions\": [\"Please elaborate on the computational requirements of doing the proposed alternating optimization, where the optimization of the Q\\u2019s is done over multiple minibatches (I assume for each alternating step)?\", \"Proving a negative result is challenging empirically; the inability to surpass LS with tested KD methods does not necessarily imply that no viable methods exist or that the models trained were optimally configured. Without theoretical proof of undistillability, the results can unfortunately only be seen as the current state, as new (or already existing and untested) methods for KD might render the claims of this paper invalid soon. Please elaborate on why the results should be considered sufficient to prove a negative result.\", \"Table 1: A concerning number of results are reported as less than 10, which is either incorrect/faulty reporting or potential issues with collapsed training. 
If collapsed training runs, this supports the concern above (about negative results), and the authors should investigate and elaborate clearly on this.\", \"What happens if $\\\\alpha = 1$? Setting $\\\\alpha > 1$ naturally forces the simplex to be more concentrated in the corners, but this post-transform is not applicable to the probabilities a knockoff-student would train on.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer x3fp (2/2)\", \"comment\": \">### Weakness 3: The paper's finding that the teacher model trained with the proposed method achieves higher accuracy is not surprising. The mechanism by which the proposed method operates is similar to that of label smoothing, which is known to enhance accuracy. The authors might refer to the paper \\\"When does label smoothing help\\\" to understand that label smoothing can also produce the feature compression effect as shown in Figure 2 of this manuscript. The enhancement in accuracy due to the proposed method is expected and aligns with the effects of label smoothing, which is not a novel discovery in the field.\\n\\n**Ans:** Thank you for the comment. We would like to clarify that the primary focus of this paper is not on increasing the accuracy of DNNs but on developing a method to train undistillable DNNs. While many existing methods in the literature can enhance a DNN\\u2019s accuracy, they do not address the critical challenge of making DNNs undistillable. \\n\\nOur approach is the first in the literature that effectively trains undistillable models robust against a wide range of existing KD methods. The improvement in accuracy observed in our results is a by-product of our method and not its primary goal. This improvement arises from the unique properties of our approach rather than replicating the effects of label smoothing or similar techniques. 
\\n\\nWe believe this distinction highlights the novelty and importance of our contribution to the field.\\n\\n\\n\\n-------------------\\n**References:**\\n\\n[1] Yang, E. H., Hamidi, S. M., Ye, L., Tan, R., \\\\& Yang, B. (2023). Conditional Mutual Information Constrained Deep Learning for Classification. arXiv preprint arXiv:2309.09123.\"}", "{\"title\": \"Reply to Reviewer nSpi (2/3)\", \"comment\": \"> ### Weakness 3 (Writing, part 2):\\n\\n**Ans:** Thank you for your feedback. The concept of clusters in the output probability space of DNNs is well-established in the literature, where the response of DNNs to samples from different classes naturally forms clusters, each corresponding to a different class [1,2,3]. The same terminology, \\\"cluster\\\", is commonly used in these references and aligns with our usage.\\n\\nTherefore, while we acknowledge the need for clear communication, we believe the sentences in the abstract are consistent with the established terminology in the field.\\n\\n> ### Weakness 3 (Writing, part 3): In several locations there seems to be confusion in the citation format or whether a citation is justified at all (e.g., lines 72, 80)\\n\\n**Ans:** Thank you for pointing this out. We have reviewed and corrected the citation format throughout the paper to ensure consistency and accuracy in the revised version. We appreciate your attention to detail and for bringing this to our attention.\\n\\n> ### Weakness 3 (Writing, part 4): The link for the code doesn't work.\\n\\n**Ans:** Thank you for noting this issue. Immediately after submission, we realized that the original link breached the anonymity requirement of the paper, so we promptly removed it. In the revised version, we have included a new link that adheres to the anonymity guidelines. We apologize for the oversight and appreciate your understanding.\\n\\n> ### Weakness 3 (Writing, part 5): Table 1 is very busy. 
The authors should consider breaking it down into several tables/figures.\\n\\n**Ans:** Thank you for your suggestion. While we acknowledge that Table 1 is dense, it has been carefully structured to ensure readability. The table is already partitioned into three sections, each corresponding to a different dataset, which helps organize the information clearly.\\n\\nFurther partitioning, such as dividing it by KD methods (columns) or defense methods (rows), would not necessarily simplify the presentation. On the contrary, it might fragment the data and make cross-referencing between methods more difficult, potentially leading to confusion. We believe the current structure strikes a balance between comprehensiveness and readability. \\n\\n> ### Weakness 4: The authors claim that their training method makes the network undistillable, but it is validated only empirically. No formal proof is given. This is not an actual weakness since I acknowledge that giving such proof is hard and perhaps even impossible. Hence, it would be beneficial to discuss the limitations of the work in general and the empirical validation specifically. Perhaps adding a section on potential failure cases or datasets/methods where CMIM might not hold would help to provide a more balanced perspective on the method's applicability.\\n\\n**Ans:** Thank you for this valuable suggestion. We acknowledge that providing a formal proof of undistillability is extremely challenging, and our claim is validated solely through empirical evidence. 
While we believe our comprehensive experiments demonstrate the robustness of CMIM across a wide range of KD methods and datasets, we agree that discussing potential limitations would add balance and clarity to the paper.\\n\\nIn the revised version of the paper, we have added a discussion of our work's limitations to the conclusion section, including acknowledging the reliance on empirical validation, the absence of a formal theoretical proof, and the possibility that future KD methods or specific datasets might expose vulnerabilities in CMIM.\\n\\nWe believe this addition provides a more balanced perspective on the method's applicability and encourages further exploration in this area. Thank you for highlighting this important point.\"}", "{\"comment\": \"Dear Reviewer x3fp,\\n\\nThank you for your thoughtful feedback and for the opportunity to address the points raised in your review.\\n\\nAs we discussed, through comprehensive experiments, we have demonstrated that CMIM is the only method in the literature that makes the teacher model undistillable across a broad range of logit-based KD methods while simultaneously improving the teacher\\u2019s accuracy. \\n\\nWhile label smoothing can indeed enhance output cluster compactness to some extent under specific power transform factors (temperatures), it does not guarantee compactness across all possible factors. To further strengthen this claim, we conducted additional experiments, which are detailed below. \\n\\n\\n1. We tested label smoothing with a smaller smoothing factor and observed that, in some cases, label smoothing can increase the model's CMI value (decrease compactness). We also report the entropy of the output probability vectors. 
\\n\\n| | CE | LS ($\\\\alpha=0.001$) | LS($\\\\alpha=0.005$) | LS($\\\\alpha=0.007$) |\\n|----------|-------|--------------------|-------------------|-------------------|\\n| CMI | 0.071 | 0.071 | 0.076 | 0.070 | \\n| Entropy | 0.0246 | 0.0373 | 0.0470 | 0.0692 | \\n| Accuracy | 77.81 | 77.81 | 77.82 | 77.83 |\\n\\n2. We applied MKD to distill a teacher trained with a label smoothing factor of 0.5. Once again, we found the LS-trained teacher to be distillable.\\n\\n|ResNet50 (LS 0.5)| LS | MKD |\\n|----------|-------|--------------------|\\n| VGG11 | 71.94 | 72.04 | \\n\\nWe kindly request your consideration for an increased score if all your concerns have now been fully addressed.\"}", "{\"title\": \"Reply to Reviewer aDtW (2/3)\", \"comment\": \"> ### Weakness 5: Variance estimates in Appendix J reveal that multiple cases deemed \\\"undistillable\\\" in Table 1 do not definitively qualify as such.\\n\\n**Ans:** Thank you for highlighting this concern. Upon reviewing the variance estimates in Appendix J, we acknowledge that certain cases, such as the (RN50, VGG11) pair on CIFAR-100, might suggest that the CMIM-trained model could be rendered distillable under naive statistical interpretations. For example, when the DIST method is applied to the CMIM model, the accuracy is reported as $71.86 \\\\pm 0.28$, which could potentially exceed the LS student accuracy of $71.94 \\\\pm 0.09$ if variance is simply added to the mean.\\n\\nHowever, this approach of directly comparing mean values with added variances does not provide an accurate or fair assessment of undistillability. To address this, we conducted a more rigorous analysis where, across five different seeds, we compared the accuracy of the knock-off Student (VGG11 in this case) trained via label smoothing and the DIST method applied to RN50 trained with CMIM. 
\\n\\nThe results of this comprehensive comparison are summarized in the following table, which demonstrates that the CMIM model achieves undistillability across all different seeds. We hope this detailed clarification and additional data alleviate the concerns raised regarding the robustness of our findings. \\n\\n| | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 |\\n|------|--------|--------|--------|--------|--------|\\n| LS | 72.28 | 71.83 | 72.06 | 72.25 | 71.53 |\\n| MCMI | 72.01 | 71.74 | 71.93 | 72.12 | 71.27 |\\n\\n> ### Weakness 6: Table 2 mistakenly labels \\\"RSP\\\" as \\\"RSG\\\" and \\\"RSD.\\\"\\n\\n**Ans:** Thank you for catching this typo. We have corrected the labeling error in Table 2, replacing \\\"RSG\\\" and \\\"RSD\\\" with the correct term \\\"RSP\\\" in the revised version of the manuscript. We appreciate your attention to detail.\\n\\n> ### Weakness 7: Omitting CE from Table 1 limits insight into the distillability of the standard training procedure, and it is unclear how much better the proposed method is to the simplest baseline.\\n\\n**Ans:** Thank you for your observation. By definition, the baseline for evaluating distillability and undistillability should be the accuracy of the student model trained using the label smoothing (LS) method. This is because it is well-established in the literature that the accuracy obtained using the LS method typically exceeds that achieved using CE (at least this is the case for the models tested in our paper). Therefore, including CE results in Table 1 would not provide any additional valuable insight into the distillability comparison, as LS already represents a stronger and more relevant baseline. We hope this explanation clarifies our rationale.\\n\\n> ### Weakness 8: When introducing notation, some notation is used before it is introduced. Consider reordering this section so that no notation convention is used before it has been introduced.\\n\\n**Ans:** Thank you for pointing this out. 
We have revised the relevant sections to ensure that all notation is introduced before it is used. \\n\\n> ### Weakness 9: Equation (3): avoid using $\\\\times$ if it solely represents normal multiplication. \\\\\\\\L403: \\\"intensive\\\" $\\\\to$ \\\"extensive.\\\" \\\\\\\\L520-L524 could be rephrased, as the current phrasing appears redundant and confusing.\\n\\n**Ans:** Thank you for identifying these issues. We have addressed all of them in the revised version of the paper. Specifically: \\n\\n- Equation (3) has been revised so that $\\\\times$ is no longer used to denote ordinary multiplication. \\n\\n- The word \\\"intensive\\\" on L403 has been corrected to \\\"extensive.\\\"\\n\\n- Lines 520\\u2013524 have been rephrased to eliminate redundancy and improve clarity.\\n\\nWe appreciate your careful review and feedback.\\n\\n\\n> ### Question 1: Please elaborate on the computational requirements of doing the proposed alternating optimization, where the optimization of the Q\\u2019s is done over multiple minibatches (I assume for each alternating step)?\\n\\n**Ans:** \\nThank you for your question. To address this, we have added a new section in the appendix (Appendix K) that provides a detailed explanation of the computational requirements for the proposed alternating optimization. \\n\\nIn summary, CMIM increases the training time by approximately 20\\\\%, as the optimization of \\\\( Q \\\\)'s involves additional computations over multiple minibatches during each alternating step. We believe this is a reasonable trade-off given the significant benefits of the method. We encourage you to refer to Appendix K for a more comprehensive discussion of these requirements.\"}", "{\"title\": \"Reply to Reviewer nSpi (1/3)\", \"comment\": \"Thank you for appreciating our work and for noting that our proposed approach seems to be the only one that makes the network not distillable on all benchmarks. Also, thank you for your time reading our paper. 
Please find our responses to your concerns in the following.\\n\\n> ### Weakness 1: I believe that further justification, evidence, or analysis (theoretical or empirical) is required to relate the approximation of the second term in the objective to the original one (as $\\\\omega$ was taken to be a finite number). There is some discrepancy that needs to be settled as eventually instead of maximizing over $\\\\mathbf{\\\\alpha}$ (which makes sense), averaging is done over multiple values. Also, can you please share what values of $\\\\omega$ were used in the paper? I didn't find this information.\\n\\n**Ans:** \\nThank you for raising this important point. As stated in Theorem 4.1, when $ \\\\omega \\\\to \\\\infty $, the second term in the objective function becomes equivalent to the RHS of Eq. (20). We kindly ask the reviewer to verify the derivation provided in the appendix, which outlines the steps leading to this approximation. For a sufficiently large value of $ \\\\omega $, we achieve the approximation stated in Eq. (21). Additionally, the integral in Eq. (21) is approximated by the summation provided in Eq. (22).\\n\\nIn our experiments, we set $ \\\\omega = 25 $, which we found to be a sufficiently large value to effectively mimic the behavior of $ \\\\omega \\\\to \\\\infty $. It is worth noting that setting $ \\\\omega $ to very large numbers, such as 1000, can induce numerical issues (e.g., overflow) in Python programming, making it impractical for computation.\\n\\nTo address the impact of $ \\\\omega $ comprehensively, we have included an ablation study in the revised version of the paper's appendix (Appendix L). This study examines the effect of different values of $ \\\\omega $ on the results, providing further evidence and justification for our choice. 
We hope this additional analysis clarifies our approach and resolves any concerns.\\n\\n> ### Weakness 2 (Experimental section, part 1): I find the improvements over competing methods, and in particular label smoothing, marginal in most cases. I acknowledge that label smoothing does not adhere to the requirement of a network being undistillable, but it gets quite close to it. I am not convinced whether the minor improvements over it really make a difference in practice. Perhaps additional analysis or experiments in other scenarios can demonstrate the practical significance of your method over label smoothing.\\n\\n**Ans:** Thank you for your observation. We respectfully argue that the improvement over label smoothing (LS) is not marginal when viewed in the correct context. While it is true that for **some** KD methods, the accuracy of the knockoff student trained on DNNs protected by LS and CMIM might appear close, the critical distinction lies in the undistillability of the DNNs: no existing KD method can render a DNN trained by CMIM distillable, whereas DNNs trained using LS can be rendered distillable by certain KD methods. In other words, there exists at least one KD method that makes LS trained DNNs distillable.\\n\\n> ### Weakness 2 (Experimental section, part 2): This may be a criticism in general for defense methods in this domain and not specifically for this paper. It seems that the evaluation is done under the assumption that the student model has access to the input only ($\\\\mathbf{x}$). How likely is that setup? In my opinion, a more realistic setup is distilling a model based on a new dataset altogether. I believe that a comparison in this setting will be much more informative.\\n\\n**Ans:** Thank you for your insightful comment. We agree that exploring the distillation process when the student model is trained on a dataset different from the one used to train the teacher model would be an interesting and more realistic scenario. 
However, it is not immediately clear whether conventional KD methods would be as effective in such cases, as the knowledge transfer process might be hindered by the domain gap between the two datasets. \\n\\nThe investigation of how KD operates under these conditions could indeed open a new avenue of research and expand the current understanding of defense methods in this domain. We appreciate your suggestion and recognize it as a potential direction for future work.\\n\\n> ### Weakness 3 (Writing, part 1): Some of the sentences are too long which makes it hard to understand at first pass (e.g., the first sentence of the abstract and the sentence in lines 46-50).\\n\\n**Ans:** Thank you for pointing this out. We have revised and restructured the sentences mentioned, including the first sentence of the abstract and the one in lines 46\\u201350, by breaking them into smaller, more concise sentences.\"}", "{\"summary\": \"This paper introduces a defense method against knowledge distillation (KD) attacks, where the goal is to avoid the undesired usage of the outputs of deep neural networks (DNNs) by making them undistillable. The authors propose a training method that aims to minimize the conditional mutual information (CMI) across all temperature-scaled clusters, resulting in a model that cannot be effectively distilled by existing KD methods. The CMIM model is shown to be undistillable through extensive experiments on CIFAR-100, TinyImageNet, and ImageNet datasets, while outperforming state-of-the-art methods and even improving upon the conventional cross-entropy (CE) loss in terms of prediction accuracy.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The paper's goal is to prevent the misuse of models and serves a partial privacy protection technique, which is significant for the reliable use of AI models.\\n\\n2. 
The paper provides both theoretical and empirical evidence to demonstrate the benefits of the proposed method in enhancing the undistillability of models.\", \"weaknesses\": \"1. The writing quality of the paper could be enhanced.\\n\\n2. From a methodological perspective, the overall contribution of the paper is somewhat limited. The paper utilizes an existing metric, CMI, to measure the compactness of model outputs and aims to enhance model undistillability by maximizing this compactness metric. This approach appears too trivial and straightforward. It is not clear how this method fundamentally differs from directly employing a maximum entropy term or label smoothing technique to increase output concentration. Moreover, in the field of machine learning, particularly in computer vision, numerous loss functions have been studied to enhance model output compactness, such as the Large-Margin-Softmax-based methods.\\n\\n3. The paper's finding that the teacher model trained with the proposed method achieves higher accuracy is not surprising. The mechanism by which the proposed method operates is similar to that of label smoothing, which is known to enhance accuracy. The authors might refer to the paper \\\"When does label smoothing help\\\" to understand that label smoothing can also produce the feature compression effect as shown in Figure 2 of this manuscript. 
The enhancement in accuracy due to the proposed method is expected and aligns with the effects of label smoothing, which is not a novel discovery in the field.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer UcAd,\\n\\nThank you again for reviewing our work and providing your valuable insights.\\n\\nWith the rebuttal deadline fast approaching, we kindly urge you to share your feedback on our responses to your comments at your earliest convenience. If you have any additional concerns or questions, please let us know. We are committed to addressing all your points comprehensively and are happy to provide further clarifications or details as needed. Otherwise, we kindly ask you to consider raising the score if our responses have resolved your concerns.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\nThe Authors\"}", "{\"title\": \"Reply to Reviewer aDtW (1/3)\", \"comment\": \"We thank the reviewer for their valuable feedback and for acknowledging the relevance of our work on undistillable models in the context of the increasing prevalence of large closed-source models. We also appreciate the recognition of the quality of our writing and the balance we have achieved between theoretical and empirical results. Please find our responses to your concerns in the following.\\n\\n> ### Weakness 1: L040: Missing reference to theoretical paper by Borup and Andersen (2021), \\u201cEven your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation,\\u201d NeurIPS.\\n\\n**Ans:** Thank you for pointing this out. We have included the missing reference to the paper. We appreciate your attention to detail. 
\\n\\n> ### Weakness 2: \\\"An insight is provided that in order for a DNN to be undistillable, it is desirable for the DNN to possess the trait that each cluster of the DNN\\u2019s output probability distributions corresponding to each label is highly concentrated to the extent that all probability distributions within the cluster more or less collapse into one probability distribution close to the one-hot probability vector of that label.\\u201d Although, I am unable to provide a reference, that a concentrated probability distribution is uninformative, and thus desirable to avoid distillation is, to the best of my knowledge, common knowledge in the field, and should not be considered a contribution of this paper.\\n\\n**Ans:** Thank you for your comment. It appears there is a misunderstanding regarding the concept discussed in our paper. Our approach focuses on making the **clusters** of output probabilities corresponding to each class more concentrated. This is fundamentally different from the notion of a \\\"concentrated probability distribution\\\", which we have not employed or referenced in our work. \\n\\nIf you are aware of any method in the literature that explicitly addresses the concentration of output probability clusters, we would appreciate it if you could point it out. To the best of our knowledge, [1] is the first paper to introduce and formalize this concept, making it a key novelty of our work. We hope this clarification resolves any confusion regarding the contributions of our method.\\n\\n> ### Weakness 3: Replacing $Y$ with $\\\\hat{Y}$ in the MI and CMI is simply replacing a variable in a function; the change itself is not inherently innovative and warrant a notion of \\\"contribution\\\". However, the implications and importance of doing so may hold some importance.\\n\\n**Ans:** \\nWe would like to clarify that the innovation of this paper does not stem from simply \\\"replacing $ Y $ with $ \\\\hat{Y} $\\\" in the MI and CMI formulations. 
Rather, our key contribution lies in the novel approach of measuring the concentration of output clusters for different values of the power transform $ \\\\alpha[Y] $. To achieve this, we introduce the concept of $ \\\\mathrm{I}(X; \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $, which quantifies this concentration in a meaningful way.\\n\\nFurthermore, calculating $ \\\\mathrm{I}(X; \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $ is fundamentally different from calculating $ \\\\mathrm{I}(X; \\\\hat{Y} \\\\mid Y) $. Specifically, as shown in [1], the calculation of $ \\\\mathrm{I}(X; \\\\hat{Y} \\\\mid Y) $ relies on the Markov chain relationship $ Y \\\\to X \\\\to \\\\hat{Y} $. However, in our case, $ Y $, $ X $, and $ \\\\hat{Y}^{\\\\alpha[Y]} $ no longer form a Markov chain, as $ \\\\alpha[Y] $ explicitly depends on $ Y $. This shift introduces significant challenges, as minimizing CMI under a non-Markovian setting has not been explored in the literature before. Addressing this challenge is a novel and important contribution of our paper, which we believe should be acknowledged.\\n\\n> ### Weakness 4: Section 4.1 appears redundant, as the extension follows naturally by substituting variables (see also comment above). \\n\\n**Ans:** Thank you for your comment. We respectfully disagree with the assertion that Section 4.1 is redundant. As detailed in our response to the earlier point, the extension presented in Section 4.1 is not a straightforward substitution of variables. Instead, it addresses the significant conceptual and computational challenges introduced by the dependency of $ \\\\alpha[Y] $ on $ Y $, which breaks the Markov chain assumption ($ Y \\\\to X \\\\to \\\\hat{Y} $) typically used to calculate $ \\\\mathrm{I}(X; \\\\hat{Y} \\\\mid Y) $. \\n\\nSection 4.1 provides a rigorous framework for calculating $ \\\\mathrm{I}(X; \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $ under these non-Markovian conditions. 
This methodological innovation is critical to the proposed approach and addresses an unexplored area in the literature, as no prior work has dealt with minimizing the CMI in such settings. We hope this explanation underscores the necessity and value of Section 4.1 in the paper.\"}", "{\"title\": \"Reply to Reviewer UcAd (2/2)\", \"comment\": \"> ### Questions: The proposed method seems to be limited to a single-label classification setting. Can the method potentially extend to regression or multi-label classification, where outputs are continuous or to predict multiple classes simultaneously? Can the method be adapted to protect the IP of the state-of-the-art models, e.g., LLMs, CLIP, and Diffusion models, which require it most?\\n\\n**Ans:** Thank you for these thought-provoking questions. The concept of undistillable DNNs, as introduced in [1], is relatively new. The primary goal of this paper is to demonstrate that it is possible to train undistillable DNNs in conventional single-label classification tasks. Due to the scope of this work, it is not feasible to extend the methodology to a wide range of learning tasks within the same paper.\\n\\nAdapting the method to multi-label classification, regression, or protecting the intellectual property of state-of-the-art models such as LLMs, CLIP, and Diffusion models presents unique challenges and would require tailored approaches. These extensions are valuable directions for future work, and we are eager to explore them in subsequent research.\\n\\n\\n-------------\\n**References:**\\n\\n[1] Yang, E. H., \\\\& Ye, L. (2024). Markov knowledge distillation: make nasty teachers trained by self-undermining knowledge distillation fully distillable. In European Conference on Computer Vision. Springer (Vol. 3).\"}", "{\"comment\": \"Dear Reviewer UcAd,\\n\\nThank you again for your thoughtful review and valuable feedback on our work.\\n\\nIn our response, we have carefully addressed all the concerns raised in your review. 
With the rebuttal deadline rapidly approaching, we kindly request your feedback on our responses at your earliest convenience. If you have any additional questions or concerns, please do not hesitate to let us know. We are fully committed to ensuring all your points are thoroughly addressed and are happy to provide further clarifications or details as needed.\\n\\nWe greatly appreciate your time and consideration.\\n\\nBest regards,\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer\\u2019s UcAd,\\n\\nThank you for your detailed and constructive feedback on our submission, which has significantly improved the quality of our work. We have carefully addressed all the concerns raised in your review and provided detailed responses to each point.\\n\\nIf possible, we kindly request a re-evaluation of the score, considering that the issues highlighted have been thoroughly resolved. \\nWe truly appreciate your time and effort in reviewing our work and are grateful for your consideration.\\n\\nBest regards.\"}", "{\"summary\": \"The paper presents a method for protecting black-box models against knowledge distillation by student models. The authors define a DNN as distillable if a student model learned from its output outperforms the same model learned from ground truth labels. The proposed objective for the teacher model consists of a standard CE loss and a regularization term based on a tempered conditional mutual information (CMI) between the input and network predictions. Its aim is to make the output of the network close to one-hot encoding. Since the proposed objective is not tractable, through a series of approximations the authors propose a tractable objective. 
The authors demonstrate their method on CIFAR-100, TinyImageNet, and ImageNet using various teacher and student models, and against other baseline methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel objective based on conditional mutual information that includes optimizing over a power-transformed probability distribution.\", \"Approximating the intractable terms of the objective is original, although I am not sure that it is justified. I validated Theorem 4.1.\", \"I am not an expert in this field, but the experimental part seems very comprehensive in terms of datasets, student and teacher networks, defense strategies, and compared methods.\", \"The proposed approach seems to be the only one that makes the network not distillable on all benchmarks.\"], \"weaknesses\": [\"I believe that further justification, evidence, or analysis (theoretical or empirical) is required to relate the approximation of the second term in the objective to the original one (as $\\\\omega$ was taken to be a finite number). There is some discrepancy that needs to be settled as eventually instead of maximizing over $\\\\mathbf{\\\\alpha}$ (which makes sense), averaging is done over multiple values. Also, can you please share what values of $\\\\omega$ were used in the paper? I didn't find this information.\", \"Experimental section:\", \"I find the improvements over competing methods, and in particular label smoothing, marginal in most cases. I acknowledge that label smoothing does not adhere to the requirement of a network being undistillable, but it gets quite close to it. I am not convinced whether the minor improvements over it really make a difference in practice. 
Perhaps additional analysis or experiments in other scenarios can demonstrate the practical significance of your method over label smoothing.\", \"This may be a criticism in general for defense methods in this domain and not specifically for this paper. It seems that the evaluation is done under the assumption that the student model has access to the input only ($\\\\mathbf{x}$). How likely is that setup? In my opinion, a more realistic setup is distilling a model based on a new dataset altogether. I believe that a comparison in this setting will be much more informative.\", \"The paper has some writing issues in my opinion:\", \"Some of the sentences are too long which makes it hard to understand at first pass (e.g., the first sentence of the abstract and the sentence in lines 46-50).\", \"Some sentences are not clear until properly explained in the paper. For instance, \\\"cluster of its output probability distributions in response\", \"to all sample instances\\\" or \\\"cluster corresponding to each label should ideally collapse into one probability distribution\\\", both are in the abstract.\", \"In several locations there seems to be confusion in the citation format or whether a citation is justified at all (e.g., lines 72, 80)\", \"The link for the code doesn't work.\", \"Table 1 is very busy. The authors should consider breaking it down into several tables/figures.\", \"The authors claim that their training method makes the network undistillable, but it is validated only empirically. No formal proof is given. This is not an actual weakness since I acknowledge that giving such proof is hard and perhaps even impossible. Hence, it would be beneficial to discuss the limitations of the work in general and the empirical validation specifically. 
Perhaps adding a section on potential failure cases or datasets/methods where CMIM might not hold would help to provide a more balanced perspective on the method's applicability.\"], \"questions\": [\"The paper seems to rely heavily on Yang et al. (2023). What is the technical novelty of this paper besides including the power-transformed elements?\"], \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply.\\n\\nI apologize if my previous statement caused any confusion. What I mentioned is that \\\"Label smoothing is equivalent to adding a confidence penalty term to the original cross-entropy\\\". I did not equate label smoothing with maximizing entropy, because as you mentioned, the cross-entropy between the uniform distribution and the DNN output distribution is NOT equal to the entropy of the DNN output distribution. However, you cannot deny that the term which minimizes the cross-entropy between the uniform distribution and the DNN output distribution can be considered as a confidence penalty term.\\n\\nFurthermore, what I said was that \\\"the author's statement that increasing the entropy of outputs does not directly relate to the concentration of output clusters is somewhat wrong.\\\" My point is that increasing the entropy of outputs is related to the concentration of output clusters (not equivalent, hence the example you provided at the end does not address my question), because techniques like label smoothing lead to a concentration of output clusters. Therefore, it is an oversimplification to say that \\\"increasing the entropy of outputs does not directly relate to the concentration of output clusters\\\".\\n\\nHowever, the author's experimental results partially address my concern regarding the comparison with LS. 
The additional experiments show that the proposed method can indeed achieve a better trade-off compared to the simple LS.\\n\\nBased on the overall rebuttal, I would like to increase the score to 5, but I still believe that the paper can be significantly improved in the overall writing quality and experimental analysis, even though the phenomenon of undistillable models analyzed in this paper is somewhat interesting.\"}", "{\"title\": \"Reply to Reviewer x3fp (1/2)\", \"comment\": \"We appreciate the reviewers recognizing the significance of our work in preventing the misuse of models and contributing to partial privacy protection. We also thank the reviewers for acknowledging the theoretical and empirical contributions of our work in demonstrating the effectiveness of our proposed method in enhancing the undistillability of models.\\nPlease find our responses to your comments in the sequel.\\n\\n>### Weakness 1: The writing quality of the paper could be enhanced.\\n\\n**Ans:** Thank you for pointing out the need for improving the writing quality. We have revised several lengthy and complex sentences throughout the paper to enhance clarity and make it more reader-friendly. Additionally, we will make further editorial refinements as needed following the paper\\u2019s acceptance.\\n\\n>### Weakness 2 (part a): From a methodological perspective, the overall contribution of the paper is somewhat limited. The paper utilizes an existing metric, CMI, to measure the compactness of model outputs and aims to enhance model undistillability by maximizing this compactness metric. This approach appears too trivial and straightforward.\\n\\n**Ans:** We appreciate the reviewer\\u2019s comments and would like to clarify the unique contributions of our work.\\n\\nFirst, our paper is the only one to propose a method for training undistillable DNNs capable of resisting a wide range of knowledge distillation (KD) methods. 
We invite the reviewer to identify any existing approaches in the literature that achieve this specific goal. Given the novelty and practical importance of our work, we firmly believe that our contributions should not be considered trivial.\\n\\nSecond, while we leverage the concept of CMI from [1], the way it is calculated in our work significantly differs from how it is calculated in [1]. In [1], the calculation of $ I(X, \\\\hat{Y} \\\\mid Y) $ is based on the assumption of a Markov chain $ Y \\\\to X \\\\to \\\\hat{Y} $. However, in our work, we measure the compactness of clusters using $ I(X, \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $, where the term $ \\\\hat{Y}^{\\\\alpha[Y]} $ explicitly depends on $ Y $, breaking the Markov chain assumption. Consequently, calculating $ I(X, \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $ poses additional challenges. We address these challenges by proposing a novel method for calculating this metric, as detailed in Section 4.1.\\n\\nFinally, we introduce a novel training method called the CMI minimization method. This approach trains a DNN by jointly minimizing the cross-entropy loss and the CMI values of all power-transformed clusters. This joint minimization process is non-trivial and requires careful formulation, as discussed in the paper. \\n\\nThese elements collectively establish the methodological contributions of our work and demonstrate its significance.\\n\\n>### Weakness 2 (part b): It is not clear how this method fundamentally differs from directly employing a maximum entropy term or label smoothing technique to increase output concentration. 
Moreover, in the field of machine learning, particularly in computer vision, numerous loss functions have been studied to enhance model output compactness, such as the Large-Margin-Softmax-based methods.\\n\\n**Ans:** We appreciate the reviewer\\u2019s comments and would like to clarify the distinction between our method and existing approaches.\\n\\nWhile there are methods in the literature that aim to enhance the intra-class compactness of DNN output clusters, this alone is **insufficient** to achieve undistillability. Our work is the first to propose a method specifically designed to train undistillable DNNs that remain robust against KD techniques.\\n\\nRegarding the methods mentioned by the reviewer:\\n\\n1. Label Smoothing and Large-Margin-Softmax-based methods: These approaches may lead to more concentrated clusters. However, this is **inadequate** for ensuring compactness under power transformations, a key aspect required to achieve undistillability. As a result, models trained with these methods may still be vulnerable to KD techniques. \\n\\n2. Maximum Entropy Term: Increasing the entropy of outputs merely reduces the model's confidence, making its predictions less certain. This does not directly relate to or contribute to the concentration of output clusters, which is the cornerstone of our approach for achieving undistillability.\\n\\nOur method fundamentally differs by addressing these limitations and incorporating a novel strategy to ensure compactness under power transformations, which is critical for training undistillable DNNs. We hope this clarification highlights the unique contributions of our work.\"}
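As a concrete illustration of the label-smoothing discussion above (an editor's sketch, not code from the paper or the review): label smoothing with factor $\epsilon$ mixes the one-hot target with the uniform distribution, and the resulting cross-entropy loss decomposes into a one-hot cross-entropy term plus a uniform-distribution cross-entropy term (the "confidence penalty"). The class count `K`, factor `eps`, and output distribution `p` below are arbitrary illustrative values:

```python
import numpy as np

K = 5            # number of classes (illustrative)
eps = 0.1        # label-smoothing factor (illustrative)
y = 2            # true label
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # an example model output distribution

one_hot = np.eye(K)[y]
uniform = np.full(K, 1.0 / K)
q_ls = (1 - eps) * one_hot + eps * uniform       # label-smoothed target

# Full label-smoothing cross-entropy ...
ce_ls = -np.sum(q_ls * np.log(p))
# ... equals (1 - eps) * one-hot CE plus eps * CE(uniform, p),
# the confidence-penalty term.
decomposed = (1 - eps) * (-np.log(p[y])) + eps * (-np.sum(uniform * np.log(p)))
assert np.isclose(ce_ls, decomposed)
```

Note that the penalty term here is the cross-entropy between the uniform distribution and the model output, which is a different quantity from the entropy of the output itself.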
We are eager to ensure we have thoroughly addressed your comments and are happy to provide any additional clarifications. Otherwise, we kindly ask you to consider raising the score if our responses have resolved your concerns.\\n\\nThank you for your consideration.\\n\\nBest regards,\\nThe Authors\"}", "{\"comment\": \"I thank the authors for the answers. Perhaps I am missing something, but in the current version of the paper the code link still leads to a page with a 404 error. Also, I didn't find in Appendix L the ablation over $\\\\omega$ (only over $\\\\alpha$ and $\\\\beta$).\"}", "{\"title\": \"Reply to Reviewer aDtW (3/3)\", \"comment\": \"> ### Question 2: Proving a negative result is challenging empirically; the inability to surpass LS with tested KD methods does not necessarily imply that no viable methods exist or that the models trained were optimally configured. Without theoretical proof of undistillability, the results can unfortunately only be seen as the current state, as new (or already existing and untested) methods for KD might render the claims of this paper invalid soon. Please elaborate on why the results should be considered sufficient to prove a negative result.\\n\\n**Ans:** Thank you for raising this important point. Providing a theoretical proof of undistillability is a significant challenge and falls beyond the scope of this work. The primary aim of our paper is to demonstrate empirically that our approach achieves undistillability against all existing KD attack methods in the literature. Importantly, we also show that our method succeeds where all existing defense methods have failed.\\n\\nWhile we acknowledge that the possibility of future KD methods rendering our approach vulnerable cannot be entirely ruled out, the empirical evidence presented here demonstrates the robustness of our approach against the current state of the art. 
We believe this comprehensive evaluation across all known KD methods provides strong evidence of the effectiveness of our method, even if it does not constitute a formal proof of a negative result.\\n\\n> ### Question 3: Table 1: A concerning number of results are reported as less than 10, which is either incorrect/faulty reporting or potential issues with collapsed training. If collapsed training runs, this supports the concern above (about negative results), and the authors should investigate and elaborate clearly on this.\\n\\n**Ans:** We would like to clarify that the reported numbers in Table 1 are correct and have been thoroughly validated through extensive experiments. The results indicate that all methods in the literature, except CMIM (our proposed method), fail to successfully render the model undistillable.\\n\\nThe low values simply illustrate that certain KD methods (e.g., method DKD) are unable to extract and utilize information from a teacher model defended by another method (e.g., method MAD or APGP when DKD is used as the underlying KD method). \\n\\nWe hope this clarification resolves any concerns regarding the reported results.\\n\\n> ### Question 4: What happens if $\\\\alpha = 1$? Setting $\\\\alpha > 1$ naturally forces the simplex to be more concentrated in the corners, but this post-transform is not applicable to the probabilities a knockoff-student would train on.\\n\\n**Ans:** Thank you for this insightful question. As shown by [2], logit temperature scaling with temperature $ T $ is mathematically equivalent to applying a power transform to the output probability distribution with power $ \\\\alpha = \\\\frac{1}{T} $. This relationship is discussed in the introduction of our paper.\\n\\nWhen $ \\\\alpha = 1 $, it corresponds to $ T = 1 $, meaning no temperature scaling is applied, and there is no additional smoothing or sharpening of the probability vectors. 
In this case, the output probabilities remain unchanged.\\n\\nAs noted by the reviewer, when $ \\\\alpha > 1 $ (equivalently $ T < 1 $), the power transform makes the output probabilities more peaky, pushing them closer to the corners of the simplex. \\n\\n-------\\n**References**\\n\\n[1] Yang, E. H., Hamidi, S. M., Ye, L., Tan, R., \\\\& Yang, B. (2023). Conditional Mutual Information Constrained Deep Learning for Classification. arXiv preprint arXiv:2309.09123.\\n\\n[2] Zheng, K., \\\\& Yang, E. H. (2024). Knowledge distillation based on transformed teacher matching. arXiv preprint arXiv:2402.11148.\"}", "{\"title\": \"Reply to Reviewer nSpi (3/3)\", \"comment\": \"> ### Questions: The paper seems to rely heavily on Yang et al. (2023). What is the technical novelty of this paper besides including the power-transformed elements?\\n\\n**Ans:** \\nThank you for your question. While our work builds upon the framework introduced by [3], the concept of CMI deployed in our paper is fundamentally different. Specifically, the CMI formula used in [3] to calculate $ I(X, \\\\hat{Y} \\\\mid Y) $ is based on the assumption that $ X, \\\\hat{Y}, Y $ form a Markov chain as $ Y \\\\to X \\\\to \\\\hat{Y} $.\\n\\nIn contrast, our work introduces the concept of power-transformed clusters, where we calculate the concentration of these clusters using $ I(X, \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $. Here, $ \\\\hat{Y}^{\\\\alpha[Y]} $ explicitly depends on $ Y $, and as a result, the Markov chain assumption no longer holds. This introduces significant challenges in the calculation of $ I(X, \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $, which are not addressed in previous works.\\n\\nTo overcome these challenges, we propose a novel method for calculating $ I(X, \\\\hat{Y}^{\\\\alpha[Y]} \\\\mid Y) $, as detailed in Section 4.1. 
This methodological innovation represents a key technical contribution of our paper, setting it apart from prior work and extending the applicability of CMI to non-Markovian settings.\\n\\n\\n------\\n**References:**\\n\\n[1] Tsekouras, G. E., \\\\& Tsimikas, J. (2013). On training RBF neural networks using input\\u2013output fuzzy clustering and particle swarm optimization. Fuzzy Sets and Systems, 221, 65-89.\\n\\n[2] Tao, S. (2019). Deep neural network ensembles. In Machine Learning, Optimization, and Data Science: 5th International Conference, LOD 2019, Siena, Italy, September 10\\u201313, 2019, Proceedings 5 (pp. 1-12). Springer International Publishing.\\n\\n[3] Yang, E. H., Hamidi, S. M., Ye, L., Tan, R., \\\\& Yang, B. (2023). Conditional Mutual Information Constrained Deep Learning for Classification. arXiv preprint arXiv:2309.09123.\"}" ] }
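As an aside on the equivalence invoked in the Question 4 reply above (temperature scaling with $T$ is the same as a power transform with $\alpha = 1/T$, as attributed to [2] there): the identity is easy to check numerically. This is an illustrative sketch only; the logits below are placeholder values, not data from the paper.

```python
import numpy as np

# Illustrative logits; any values work, these are not from the paper.
z = np.array([2.0, 0.5, -1.0])
T = 4.0            # temperature used for logit scaling
alpha = 1.0 / T    # corresponding power for the probability transform

def softmax(v):
    e = np.exp(v - v.max())  # shift by max for numerical stability
    return e / e.sum()

# Route 1: temperature-scale the logits, then take the softmax.
p_temp = softmax(z / T)

# Route 2: take the softmax first, then apply p -> p^alpha / sum(p^alpha).
p = softmax(z)
p_pow = p**alpha / (p**alpha).sum()

print(np.allclose(p_temp, p_pow))  # True: the two routes coincide
```

Both routes evaluate to $e^{z_i/T} / \sum_j e^{z_j/T}$, which is why no choice of logits can separate them; with $\alpha > 1$ (i.e., $T < 1$) either route sharpens the distribution toward a simplex corner, as noted in the reply.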
0tIiMNNmdm
Limitations of measure-first protocols in quantum machine learning
[ "Casper Gyurik", "Riccardo Molteni", "Vedran Dunjko" ]
In recent times, there have been major developments in two distinct yet connected domains of quantum information. On the one hand, substantial progress has been made in so-called randomized measurement protocols. Here, a number of properties of unknown quantum states can be deduced from surprisingly few measurement outcomes, using schemes such as classical shadows. On the other hand, significant progress has been made in quantum machine learning. For example, exponential advantages have been proven when the data consists of quantum states and quantum algorithms can coherently measure multiple copies of input states. In this work, we aim to understand the implications and limitations of combining randomized measurement protocols with quantum machine learning, although the implications are broader. Specifically, we investigate quantum machine learning algorithms that, when dealing with quantum data, can either process it entirely using quantum methods or measure the input data through a fixed measurement scheme and utilize the resulting classical information. We prove limitations for quantum machine learning algorithms that use fixed measurement schemes on the input quantum states. Our results have several implications. From the perspective of randomized measurement procedures, we show limitations of measure-first protocols in the average case, improving on the state-of-the-art which only focuses on worst-case scenarios. Additionally, previous lower bounds were only known for physically unrealizable states. We improve upon this by employing quantum pseudorandom functions to prove that a learning separation also exists when dealing with physically realizable states, which may be encountered in experiments. From a machine learning perspective, our results are crucial for defining a physically meaningful task that shows fully quantum machine learning processing is not only more efficient but also necessary for solving certain problems. 
The tasks at hand are also realistic, as the algorithms and proven separations hold when working with efficiently preparable states and remain robust in the presence of measurement and preparation errors.
[ "quantum machine learning", "machine learning", "learning separation" ]
Reject
https://openreview.net/pdf?id=0tIiMNNmdm
https://openreview.net/forum?id=0tIiMNNmdm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zw1pPQFzZs", "x8hMPAAzCv", "wiRBgRwzlL", "u3e8tyWIl6", "tmssFoAHLw", "rBx71Rrg5W", "p8A68vv63t", "p4ghPtXOtV", "myKo20I5hX", "mQKDrwqCFq", "lScWH1zXKy", "j4phFoAenH", "gP4vH2S8UK", "dUJoZtgI0k", "cseQ5uMTlv", "cOLFaWVCNU", "acUMC6rIzn", "Y2ACqI1ZQV", "Sh60xaDYs7", "R4ADFgScmy", "QOnxr6eest", "OPmYCG5qCm", "Lfi3XOXG5z", "JeOI0b25Tv", "IfqiYF5eou", "I08Kz61fxv", "HxbRWh5zJQ", "FpKhcuf7V7", "FRIry6LLlP", "BoptHHplEI", "8irNF0Aoyn", "4fZFF3gVAB", "12Ofja37Ic", "0UxDnPuHKY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731929983062, 1732790243072, 1733311095223, 1733155306450, 1732790549726, 1732790770908, 1731931240220, 1731930656342, 1731929159059, 1731930054321, 1731929852480, 1732706645430, 1730619829365, 1731930805378, 1732703791013, 1732362404445, 1730088066883, 1731929197319, 1732679803886, 1737523654060, 1733144958088, 1733990880401, 1731931325723, 1732790601597, 1730773762556, 1732539203948, 1732790071581, 1732790149567, 1731930862983, 1731930300689, 1731930142818, 1731929562032, 1731929628699, 1732678714396 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Area_Chair_HTLL" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_Mm3W" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_RYau" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_N6yC" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_N6yC" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_Mm3W" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_RYau" ], [ "ICLR.cc/2025/Conference/Submission4664/Area_Chair_HTLL" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_RYau" ], [ "ICLR.cc/2025/Conference/Submission4664/Reviewer_RYau" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Authors" ], [ "ICLR.cc/2025/Conference/Submission4664/Area_Chair_HTLL" ] ], "structured_content_str": [ "{\"title\": \"Addressing \\\"Other comments in Official Review of Submission4664 by Reviewer RYau\\\" (Part 2)\", \"comment\": \"*- Eq (8): how can x appear as an unbound function variable on the LHS and as a bound variable on the RHS?*\\n\\nWe thank the reviewer for finding this notational error on our part. 
x should not appear on the RHS. We fixed it in the revised version of the paper.\\n\\n*- Line 370: the use of Aaronson et al. is the very heart of the whole paper, yet this is not reproduced anywhere in the paper*\\n\\nFor completeness, we now include a description of Aaronson\\u2019s U_x in the Appendix.\\n\\n*- Line 434: please remind me what non-uniform means in this context.*\\n\\nNon-uniform here means there does not need to be an efficient Turing machine that on input n outputs the quantum circuit for instance size n.\\n\\n*- Eq (9): the notation of dot and unfilled function brackets is not defined/explained.*\\n\\nThank you for the remark. We explained the notation below Eq. 9.\\n\\n*- Lines 475-480: is unclear, especially given that there are supposed to be different random functions for each data-point.*\\n\\nEach data point corresponds to a specific function. The distinguisher evaluates the performance of the MF protocols based on the given function. If the MF protocols were effective on pseudorandom states (PRS), there would be a noticeable difference in performance: poor performance when the functions are truly random and good performance when they are pseudorandom. Since such a difference cannot occur, the MF protocols will also fail on pseudorandom functions. We further clarify the point in the main text too.\\n\\n*- Eq (11): if g_i(x) is of size n+1, doesn't this change the learning problem in unclear-ways, not least because the dimensions no longer match (2n + 1 != 2n + 2)*\\n\\nThis change only affects the length of the output bitstring: instead of $(x, y, b) \\\\in \\\\{0,1\\\\}^{2n+1}$, it becomes $(g_i(x), y, b) \\\\in \\\\{0,1\\\\}^{2n+2}$. This increase of the length of the bitstring by 1 has no further consequences for the learning problem. 
In particular, the $(y,b)$ variables still follow the same probability distribution determined by the POVM $\\\\Lambda_x$, so the learning problem remains fundamentally the same.\\n\\n*- Eq (29): \\\\pi_x(f) -> \\\\pi_x(f^{(k)})*\\n\\nWe thank the reviewer for finding this typo, we fixed it in the revised version.\\n\\n*- Line 787: is query access -> has query access*\\n\\nWe thank the reviewer for finding this error, we fixed it in the revised version.\"}", "{\"title\": \"Further explanation of the naturalness and capacities of the learning algorithms\", \"comment\": \"Next, allow us to clarify some misunderstandings regarding the naturalness and capacities of the learning algorithms and the manner in which the task reveals performance separations between them. We hope to address these points and resolve any remaining concerns.\\n\\n*Reviewer: If we are given \\\\(x\\\\) but the MF algorithm is not allowed to use it, then what can the MF algorithm do at all? What is the best MF attempt?*\\n\\nWe thank the reviewer for this insightful question, which led us to reflect on the exposition of our result and prompted improvements in the revised version of the manuscript (see Appendix D).\\nTo clarify, consider an MF protocol defined by $(M, A)$, where $A$ corresponds to the blue box in Fig. 1 and $M$ corresponds to the yellow box in Fig. 1. It is important to emphasize that the learning algorithm $A$ in the MF protocol can utilize $x$ -- whether $x$ is provided directly via the data or learned via the $g_i$. However, the key restriction lies in the measurement strategy $M$, which is responsible for compressing the quantum phase states into a succinct classical description. This $M$ must remain independent of $x$ and always perform the same operations for all $x$.\\nOne might naturally ask whether relaxing this restriction to allow $M$ to depend on $x$ would improve performance. 
However, if $M$ were allowed to read and consequently have its measurements/steps depend on $x$, then it could simulate any fully quantum (FQ) protocol, and no separation between MF and FQ protocols would remain. This restriction is therefore crucial to defining the MF protocol in a meaningful way and preserving the separation between MF and FQ protocols.\\nNext, let us consider two examples that aim to illustrate the potential power and capabilities of MF protocols, as well as precisely how they are permitted to utilize the $x$ obtained from the dataset.\\n\\n**APPROACH 1:** \\nIn this approach, the measurement strategy $M$ employs the classical shadow method from [1] to obtain a classical representation of the input quantum states. The learning algorithm $A$ then uses the data $\\\\hat{T}_x$ to determine $x$ and outputs a hypothesis $h_x$ as follows: $h_x$ uses the classical shadow $\\\\hat{\\\\rho}^*$ (Fig. 1) to estimate the probabilities of outcomes corresponding to $\\\\Lambda_x$. Based on these estimates, $h_x$ generates samples from the estimated probability distribution of $\\\\Lambda_x$. While classical shadows can, in principle, be used to approximate the required distributions, this would require exponentially many copies of the quantum state due to the exponential precision needed to achieve small total variation distance. Crucially, our main result shows that no improvement to the classical shadow protocol can make it feasible to obtain a succinct classical description of the phase states that is sufficient for the ML task at hand.\\n\\n**APPROACH 2:** \\nIn this approach, the measurement strategy $M$ attempts to identify a polynomially-sized description of the circuit $U$ that approximately prepares the input phase state (i.e., $U|0\\\\rangle \\\\langle 0|U^\\\\dagger \\\\approx \\\\rho$). 
The learning algorithm $A$ again uses the data $T_x$ to determine $x$ and outputs a hypothesis $h_x$ that works as follows: $h_x$ utilizes the circuit description of $U$ provided by $M$ to prepare the phase state and then implements the measurement $\\\\Lambda_x$. However, while pseudorandom phase states can be prepared by polynomial-sized circuits, our results -- using tools from the HM problem and pseudorandom quantum states -- prove that no measurement strategy $M$ can efficiently infer the circuit that prepares these states. While this protocol might initially appear efficient, our results highlight fundamental obstacles that prevent it from succeeding. \\n\\nIn both approaches, we emphasize a clear delineation of dependencies on $x$: the behaviour of $M$ is entirely independent of $x$ (e.g., $M$ consistently runs the classical shadow protocol or attempts to find a preparation circuit, regardless of the specific $x$). However, the learning algorithm $A$ and the hypothesis $h_x$ it produces are allowed to depend on $x$, as $A$ learns from the data and obtains $x$. \\n\\n[1] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. \\\"Predicting many properties of a quantum system from very few measurements.\\\" Nature Physics 16.10 (2020): 1050-1057.\"}", "{\"title\": \"General response\", \"comment\": \"Dear reviewers, dear Area Chairs,\\n\\nWe sincerely appreciate the invaluable feedback provided by the reviewers. Their insightful comments have not only allowed us to clarify key aspects of our work but also to enhance it significantly in the final revised version.\\n\\nIn particular, we have made the following improvements to the manuscript:\\n\\n1) We clarified how our results could, in principle, extend to a broader scenario where the advice states are ground states of local Hamiltonians rather than phase states. 
While we emphasize this point in the Conclusion, we also underscore here that our work represents the first rigorous separation between measure-first (MF) and fully-quantum (FQ) protocols in a supervised learning task.\\n\\n2) To further illustrate the relevance of our findings, we connected them to existing literature highlighting the significant power of MF protocols. Additionally, we expanded the manuscript with Appendix D, which includes two illustrative examples of MF learning protocols. These examples showcase the strong capabilities enabled by the MF learning algorithm, and in discussing their limitations, we underline how our work significantly advances beyond the hidden matching (HM) problem and Aaronson\\u2019s results [1]. We respectfully believe that we have thoroughly expanded on the quality and significance of the machine learning task, the naturalness and capacities of the learning algorithms, and how the task highlights performance distinctions between them, both in our responses and the revised manuscript.\\n\\n3) We made several adjustments throughout the manuscript to improve readability and enhance the clarity of the text.\\n\\n\\nWe hope this summary effectively conveys our dedicated efforts to refine the manuscript in response to the reviewers' valuable feedback. 
Once again, we are truly grateful to the reviewers for their input.\\n\\n\\n[1]: Aaronson et al., A Qubit, a Coin, and an Advice String Walk Into a Relational Problem, 2023\"}", "{\"title\": \"Response to RYau\", \"comment\": \"We thank the reviewer for their reply and for pointing out the two minor corrections.\\n\\n---\\n\\n**Reviewer**: *pg 16: \\\"leading A_f to most of the time output 0\\\" \\u2192 surely it should be 50/50?*\\n\\n---\", \"our_response\": \"At the risk of being repetitive, we emphasize that our primary interest lies in exploring the fundamental limits of \\\"measure-first protocols\\\" within the context of supervised learning\\u2014a question that we believe remains meaningful and highly relevant to the QML community. We have demonstrated an impossibility result for a well-defined class of problems involving efficiently generatable quantum states and have provided detailed reasoning on why and how such classical approaches fail. Furthermore, through these discussions, we clarified the motivations for framing the task as we did, along with the boundaries of our definitions and the broader implications.\\n\\nWe recognize that the reviewer may feel that alternative formulations (e.g., generative tasks or compression-decompression tasks) might have framed the problem differently or addressed other related questions. While we respect this perspective, we believe our chosen focus and scope are valid and important, particularly given the separation we aim to show between measure-first and fully quantum protocols. 
At this stage, we also believe we have addressed any misunderstandings or concerns about the soundness of our results and do not see how they could justifiably merit a low score for soundness.\\n\\nThat said, we respect the reviewer\\u2019s final judgment on the suitability of our work for ICLR, which we continue to see as an appropriate venue given the relevance of the topic, the technical content, and the improvements made throughout this exchange. Regardless of the decision, we are deeply grateful to the reviewer for their detailed feedback and for raising insightful points that have helped significantly improve both the clarity and rigor of our work.\"}", "{\"title\": \"Addressing specific justifications\", \"comment\": \"Finally, allow us to address the justifications provided by the reviewer in their bullet-pointed list:\\n\\n**Reviewer:**\\n- *The authors themselves state: \\\"Importantly, the classical description cannot be tailored to the specific concept being learned; otherwise, the learning task could be effectively solved at the measurement stage, negating the need for further analysis.\\\" This is a great way to state my reservation. My position is that, of course, the classical description should be allowed to be tailored to the specific concept being learned! This is what learning is all about, tailoring responses to inputs. Therefore, if I insist that this is a minimum power that should be granted to the MF algorithm, then the authors already agree that this 'negat[es] the need for further analysis.'\\\"*\", \"our_response\": \"We greatly appreciate the reviewer\\u2019s detailed feedback, as it helped us identify and clarify potential points of confusion. Upon reflection, we agree that Figure 1 in the original manuscript could lead to misinterpretation. To address this, we will revise the figure in a future updated version of the paper to make the protocol clearer, which we hope will resolve the reviewer\\u2019s concerns. 
Allow us to explain the changes we plan to implement.\\n\\nIn the previous version of Figure 1, an arrow was drawn to indicate that $T_x$ and $\\\\rho^*$ were inputs to the measurement strategy $M$. This was indeed misleading for two reasons: \\n\\n1. It could give the impression that $T_x$ (training data) and $\\\\rho^*$ (deployment phase data) are simultaneously accessible to $M$, whereas they pertain to distinct phases of the protocol. \\n\\n2. More importantly, $M$ in the MF protocol does not receive the labels that allow one to recover $x$; it only processes the quantum states $\\\\rho$ from $T_x$. The labels $x$, or partial information about $x$ through the $g_i$, are accessed exclusively by the learning algorithm $A$.\\n\\nIn the updated figure, we will explicitly highlight this distinction by clearly separating the roles of $M$ and $A$ and which of these has access to the labels $x$ (or partial information about $x$ through the $g_i$).\\n\\nTo further clarify: it was always intended that the training data includes $x$ (or partial information about it). We hope that this updated explanation, along with the revisions to Figure 1 and the illustrative examples discussed earlier, resolves any misunderstandings and provides a clearer picture of the protocol structure.\"}", "{\"title\": \"Response to comment regarding soundness of Theorem 3\", \"comment\": \"We thank the reviewer for raising their concern, which has given us the opportunity to further clarify the correctness of our proof.\\n\\nWe firmly believe that Theorem 3 is true in its current form and also that the proof is correct, barring one potential misunderstanding that we clarify next. We acknowledge that this misunderstanding may have arisen due to our phrasing used in the proof and we clarify this below and in the updated version of the manuscript. 
Specifically, we currently write: \\n\\n*\\\"Bob performs steps (2)\\u2212(3)...[and] step (6).\\\"*\\n\\nA more accurate phrasing would be:\\n\\n*\\\"Bob performs steps (2)\\u2212(3)...[and] **the first part** of step (6).\\\"* \\n\\nTo clarify further, Bob only executes the portion of step (6) where they output the sample $(x, y, b)$. Importantly, Bob does not verify whether $x \\\\in R_f(y)$ for correctness, as such verification is unnecessary for the HM problem. \\n\\nAllow us to reiterate the key reasoning behind our proof. \\n\\nSince our proof of non-learnability is by contradiction, we begin by assuming that the learning task is $(\\\\epsilon, \\\\delta, p)$-MF learnable on pseudorandom phase states for $(1-\\\\epsilon) \\\\cdot (1-\\\\delta) \\\\cdot p > c$, where $c > 7/8$. We then show that this assumption leads to a contradiction. \\n\\n1. In the first part of our proof, we demonstrate that under this assumption, the distinguisher algorithm will output $1$ with probability $> c$ for all pseudorandom phase states. \\n2. At this point, there are two possible scenarios: \\n - The distinguisher also outputs $1$ with probability $> c$ over truly random phase states. \\n - The distinguisher outputs $1$ with probability $\\\\leq c$ over truly random phase states. \\n3. We show that the first scenario leads to a contradiction with the hardness of the HM problem. 
Specifically, if the distinguisher outputs $1$ with probability $> c$ for all truly random phase states, we construct a protocol for the HM problem with success probability $> 7/8$\\u2014a result that has been proven to be impossible in the literature.\\n\\nThe protocol for the HM problem is as follows: Bob performs steps (2)\\u2212(3), Alice performs steps (4)\\u2212(5), and then sends $\\\\hat{\\\\psi}_f$ to Bob, who executes the *first part of step (6)* to output the sample $(x, y, b)$.\\nNow supposing the distinguisher outputs $1$ with probability $> c$ over truly random phase states, then this implies that the pair $(y, b)$ sampled by Bob is correct for the HM problem with probability $> 7/8$, which contradicts the hardness of the HM problem. \\n\\nThus, we establish that the distinguisher must output $1$ with probability $\\\\leq c$ for all truly random phase states. This implies that the distinguisher behaves strictly differently on pseudorandom phase states versus truly random phase states, violating the pseudorandomness assumption. \\n\\nWe hope this explanation adequately addresses the reviewer\\u2019s concerns and clarifies the correctness of the proof. Thank you again for the opportunity to elaborate on this crucial aspect of our work.\\n\\nThe reviewer raises the following concern:\\n\\n> \\\"Theorem 2 and 3 refer to the untrainability of MF for $\\\\epsilon \\\\cdot \\\\delta > c$ and $\\\\epsilon \\\\cdot \\\\delta \\\\cdot p > c$. Is this supposed to be $(1-\\\\epsilon) \\\\cdot (1-\\\\delta) > c$ and $(1-\\\\epsilon) \\\\cdot (1-\\\\delta) \\\\cdot p_{succ} > c$?\\\" \\n\\nWe thank the reviewer for pointing this out, as this is indeed the correct formulation of the theorems. Fortunately, this does not affect the validity of the theorems or the correctness of the proofs. 
We have updated the manuscript to reflect the correct formulation.\"}", "{\"title\": \"Addressing \\\"Questions/Weaknesses in Official Review of Submission4664 by Reviewer N6yC\\\"\", \"comment\": \"*-Is the proved separation just an instance of [S. Aaronson et al., 2023], where FBQP/poly represents classical shadow-based algorithms (including measure multiple states by Bell measurement).*\\n\\nWe thank the reviewer for this insightful question. While our results are related to those of Aaronson et al. (2023), we would like to emphasize that they are not merely an instance of their work. Aaronson's results rely heavily on the fact that quantum advice states require an exponentially large description. In contrast, a key aspect of our work is that the separation between measurement-first protocols and fully quantum protocols holds even for quantum states that can be efficiently prepared on a quantum device, which admits a polynomially-sized description in the form of the circuit preparing it. This distinction is crucial because, without it, the separation would lack relevance to real-world quantum machine learning applications. With this consideration, we provide compelling evidence that such separations can extend to a broader range of learning tasks, as long as the states involved are sufficiently \\\"rich.\\\" Inspired by this question, we have added a brief explanation of this significant difference in the revised version of the manuscript. \\n\\nFurthermore, to bring our separation closer to practical machine learning scenarios, we employ proof strategies involving Yao's principle to specialize our result to the average case, and we also account for potential noise in the data. 
\\nWe would also like to refer the reviewer to Part 2 of our general response to reviewer RYau for even more details.\\n\\n*- The worst-to-average reduction seems very natural; if my understanding is correct: the classical shadow methods may fail for every x (as shown in Eq.~7), due to the fact that FQBP/qpoly != FBQP/poly*\\n\\nWe thank the reviewer for this comment. We would like to note that the result from Aaronson would imply that for every f there is *at least one* x on which classical methods (e.g., shadows) must fail. Critically, in our average case reduction we prove a broader result: for every x, there is at least a fraction of f\\u2019s for which a classical method must fail. We believe this is not straightforward per-se and requires proof-techniques relying on tools such as Yao's lemma.\\n\\n*- The main contribution of this paper relies on the Theorem~2. I know the proof idea is there, but the proof details in the Appendix B is not very easy to follow.*\\n\\nWe apologize for any difficulties the reviewer may have encountered in following our proof. If possible, could you kindly highlight any specific points that were unclear, so we can provide further clarification or elaboration? Nonetheless, we would like to emphasize that we believe the claim is genuinely nontrivial and subtle, and as such, it is unlikely that a much more streamlined proof could be provided without sacrificing essential rigor or clarity. \\n\\n*- It would be helpful to provide the explicit construction of U_x (also the learned measurement operator).*\\n\\nWe thank the reviewer for this comment. We provided the explicit construction in the revised version of the paper.\\n\\n*- The problem is quite artificial (alghough Y. Liu et al. Nat. Phy. 
2021 is still an artifical construction); it would be perfect if the phase states can be substituted to other more practical quantum state (such as ground state or thermal state of a physical system).*\\n\\nWe appreciate the reviewer\\u2019s insight and would like to clarify that although the proof in our current work is somewhat artificial, we strongly believe that the claim is much more broadly applicable. We conjecture that a formal proof of separation for a more general class of states, such as the ground state of a sufficiently complex local Hamiltonian, is achievable. However, this would require the development of new proof techniques, which goes beyond the scope of our current work.\\n\\nIn this regard, we refer the reviewer to Part 2 of our addressing of the Questions/Weaknesses in the review of Reviewer Mm3W. To briefly reiterate, building on the work of Aaronson et al. (2023), we speculate that phase states could be replaced by the ground states of sufficiently complex local Hamiltonians. This would allow us to extend our separation result between measurement-first protocols and fully quantum protocols to more physically relevant problems, such as predicting the ground state properties of local Hamiltonians. While this is indeed an exciting direction, it would require further investigation and would likely constitute a separate study.\\n\\nFurthermore, we note that pseudo-random phase states have recently been demonstrated to be realizable as Hamiltonian phase states (see [arXiv:2410.08073]), which are states that can be prepared by time evolution under a sparse Ising Hamiltonian. 
This brings us closer to the physical realization of such states and reinforces the broader relevance of our separation result.\"}", "{\"title\": \"Addressing \\\"Questions/Weaknesses in Official Review of Submission4664 by Reviewer Mm3W\\\" (Part 2)\", \"comment\": \"*- Do the authors consider the separation to hold in many natural learning problems that people are actively working on? Could the authors comment on whether the community should consider most problems to be addressable using measurement-first protocols? If not, could the authors comment on what families of problems one should expect fully-quantum protocols to be much more powerful than measurement-first protocols? Providing a few concrete examples of widely-studied quantum machine learning problems where they expect their results might be relevant would also be useful in this context.*\\n\\nWe thank the reviewer for this insightful question. Indeed, recent years have seen significant results where measurement-first protocols provide surprisingly strong theoretical guarantees in various physically-relevant tasks, such as learning phases of gapped Hamiltonians (e.g., see [1,2]). Our result, however, highlights that there are learning tasks where a fully quantum protocol is essential. In particular, our results indicate that ML problems related to the hard problems in (F)BQP\\\\qpoly are the learning problems where fully-quantum protocols are required.\\n\\nAs a consequence of this, we would like to draw the reviewer\\u2019s attention to an interesting connection of our result with more physically relevant tasks. According to an important result by [3], any quantum computation that can be solved with polynomial-sized quantum state advice can also be solved using a different quantum circuit with advice derived from the ground state of a local Hamiltonian. 
Formally, this implies that any task in BQP\\qpoly can be tackled within BQP using a quantum state from the ground state of a local Hamiltonian as advice. Based on this, we speculate that learning problems where the input data is the ground state of a sufficiently complex local Hamiltonian might exhibit a separation between measure-first and fully quantum protocols. Although we recognize the distinctions between relational and decisional problems, and we acknowledge that rigorously proving such learning separations involves significant complexities, we believe a promising direction for exploring the limitations of measure-first protocols in physically relevant tasks would be to investigate the learning properties of local Hamiltonian ground states. This is an intriguing avenue for future research, though it would likely require a dedicated, separate project.\\n\\nWe already touched upon this in the previous version of the conclusion, but for completeness, we have expanded the conclusion section in the revised version to provide a more in-depth discussion on this matter.\\n\\n[1] Huang, Hsin-Yuan, et al. \\\"Provably efficient machine learning for quantum many-body problems.\\\" Science 377.6613 (2022).\\n\\n[2] Huang, Hsin-Yuan, et al. \\\"Quantum advantage in learning from experiments.\\\" Science 376.6598 (2022).\\n\\n[3] Scott Aaronson and Andrew Drucker. \\\"A full characterization of quantum advice.\\\" Proceedings of the forty-second ACM symposium on Theory of computing.\"}", "{\"title\": \"General response to \\\"Official Review of Submission4664 by Reviewer RYau\\\" (Part 1)\", \"comment\": \"We thank the reviewer for their thoughtful feedback and appreciate their acknowledgment of our understanding of the various techniques employed in our proof. While we are naturally disappointed that the reviewer finds our results \\\"unsurprising,\\\" we believe we can understand how one might arrive at this opinion. 
At first glance, it may appear that we simply compare a stronger model with a weaker one and then demonstrate the expected result\\u2014that they differ. However, we respectfully argue that the situation is far more nuanced, and we would like to take this opportunity to clarify why we believe our results are more surprising than they may initially seem.\\n\\nThe primary critique seems to be that our measure-first (MF) protocol, as defined, is too \\u201chamstrung\\u201d (which we take to mean deliberately weakened) in comparison to the fully-quantum (FQ) protocol. While we understand this perspective, we would like to emphasize that recent advancements in the field have demonstrated that establishing a clear separation between these two protocols is far more challenging than it might initially seem.\\n\\nTo illustrate, it is worth beginning with the remarkable successes in the literature on shadow tomography and classical shadows. These breakthroughs have led leading researchers in the field to suggest that, in many cases, one can \\u201cmeasure first, ask questions later.\\u201d Indeed, a statement of this nature made during a conference explicitly inspired us to investigate the matter further.\\n\\nReflecting on the general evidence supporting this sentiment, MF protocols have shown surprisingly strong capabilities in a range of physical learning tasks. For instance, in a recent milestone work on predicting properties of gapped Hamiltonians using a variant of an MF protocol [2], it was shown that an algorithm employing a fixed measurement procedure (i.e., classical shadows of ground states [1]) could accurately predict numerous observables on the ground states of Hamiltonians.\\n\\nSimilarly, we would like to draw the reviewer\\u2019s attention to another recent work by experts in the field [3]. 
This study demonstrated that for many learning tasks with quantum input data, classical convolutional neural networks, provided with classical shadow representations of quantum states as input (i.e., an MF protocol), achieved performance levels comparable to quantum neural networks (i.e., an FQ protocol). In fact, the authors explicitly raised the question of whether any task could conclusively demonstrate a clear separation between classical and quantum approaches.\\n\\nTo further contextualize this, consider our setting in quantum machine learning. For example, if in our task the labels corresponded to expectation values of local observables, there would be no separation between MF and FQ protocols due to the provable guarantees of classical shadows [1]. Thus, in light of this substantial body of evidence, we argue that a provable difference between MF and FQ protocols is far from obvious to expect. Moreover, the challenge of identifying a task that does exhibit such a difference is not straightforward.\\n\\nWe hope this clarification inspires the reviewer to appreciate our perspective and see further value in our work. Even if the final outcome aligns with what one might naively expect, it is important to note that recent (and often surprising) results in the field suggest otherwise.\", \"references\": \"[1] Huang, Hsin-Yuan, et al. \\\"Provably efficient machine learning for quantum many-body problems.\\\" Science 377.6613 (2022).\\n\\n[2] Huang, Hsin-Yuan, et al. \\\"Quantum advantage in learning from experiments.\\\" Science 376.6598 (2022).\\n\\n[3] Bermejo, Pablo, et al. \\\"Quantum convolutional neural networks are (effectively) classically simulable.\\\" arXiv preprint arXiv:2408.12739 (2024).\"}", "{\"title\": \"Concluding remarks\", \"comment\": \"We sincerely thank the reviewer for their detailed comments and suggestions, which have greatly helped us to better articulate the significance of our work. 
In our reply, we have addressed several points of concern and clarified key aspects of our results. Specifically, we have provided a broader context for the \\\"measure first, ask questions later\\\" paradigm to demonstrate the nontrivial nature of our separation results. We have also explained why our reduction is both meaningful and distinct from a mere restatement of known results, particularly in the context of efficiently preparable states and real-world machine learning scenarios.\", \"We hope that the explanations provided in our reply may lead the reviewer to a more favorable opinion of our work, as we have clarified that\": \"(i) while the result might initially appear unsurprising, the broader context of the power of MF protocols underscores that establishing such a separation is far from trivial, and (ii) we emphasize that our result is not merely a restatement of the classical intractability of the hidden matching problem but represents a significant contribution that bridges multiple foundational concepts in quantum machine learning. Thank you for your careful consideration, and we hope these clarifications will help in re-evaluating the merit of our work.\"}", "{\"title\": \"Addressing \\\"Other comments in Official Review of Submission4664 by Reviewer RYau\\\" (Part 1)\", \"comment\": \"*- Lines 157-159 are unclear*\", \"Currently it says\": \"\\u201cDirectly applying these techniques to our learning setting that is focused on learning a $2^n$-outcome POVM by estimating the probability of each possible measurement outcome $j \\\\in \\\\{1,2,...,2^n\\\\}$ requires exponential precision $\\\\epsilon$, achievable only with an exponential amount of resources.\\u201d\\n\\nTo provide more clarity, we changed it to:\\n\\n\\u201cAs we want to learn the distribution associated with a $2^n$-outcome POVM with an $\\\\epsilon$ error in TV, directly applying these techniques to estimate each of the $2^n$ outcomes would require an exponential precision for each. 
In their upper bound estimates, this results in the requirement of an exponential number of samples.\\u201d\\n\\nWe hope this is clearer.\\n\\n*- Lines 205-210 claim a significant difference between this work and Jerbi et al.'s, but given the above characterization it is not clear that this is the case.*\\n\\nIn Jerbi et al., the target is a quantum state $U|\\\\psi\\\\rangle$, and the goal is to learn the unitary $U$. Measurements are performed on the evolved state $U|\\\\psi\\\\rangle$, rather than on the initial state $|\\\\psi\\\\rangle$ alone. This distinction is critical and highlights a fundamental difference between the two works.\\n\\n*- Line 229 introduces f', why the prime?*\\n\\nWe thank the reviewer for noticing this typo; we removed the prime.\\n\\n*- \\\\pi_x is not anywhere clearly connected to \\\\Lambda_x*\\n\\n$\\\\pi_x$ is a probability distribution over the bitstring $(x, y, b) \\\\in \\\\{0, 1\\\\}^{2n+1}$. Specifically, the probability distribution for the $(y,b)$ variables corresponds directly to the outcome of applying the POVM $\\\\Lambda_x$ on the input state $|\\\\psi_f\\\\rangle$. This establishes a clear link between $\\\\pi_x$ and $\\\\Lambda_x$. We also highlight this in Section 3.1 below Definition 7.\\n\\n*- z and x are used inconsistently/confusingly in multiple places*\\n\\nThis issue has been resolved by clarifying the notation. We now denote $j$ as the orthonormal basis of the POVM, and $z$ as the label for the quantum states, specifically $z = (x, y, b)$.\\n\\n*- Eq (3) and Eq (5): the training data should not include x, perhaps g_i(x) is somehow meant (however see below)?*\\n\\nWe highlight in Appendix A that there are multiple scenarios to consider. In particular, $g_i(x)$ can be any function of $x$ that \\\"leaks\\\" information about $x$. One particularly simple example is $g_i(x) = x$, which we have chosen to go with in the main text. 
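As a side note, the exponential-precision point discussed above (learning the distribution of a $2^n$-outcome POVM to fixed total-variation error) can be illustrated numerically: at a fixed sample budget, the empirical TV error over $2^n$ outcomes grows quickly with $n$. The toy simulation below uses a uniform outcome distribution and an arbitrary budget, purely as an illustration (it is not our actual POVM):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_tv_error(n_bits, n_samples, rng):
    """Sample uniformly over 2**n_bits outcomes and return the total-variation
    distance between the empirical and the true (uniform) distribution."""
    d = 2 ** n_bits
    counts = np.bincount(rng.integers(0, d, size=n_samples), minlength=d)
    return 0.5 * np.abs(counts / n_samples - 1.0 / d).sum()

budget = 2000                                   # fixed measurement budget
tv_small = empirical_tv_error(4, budget, rng)   # 2^4  = 16 outcomes
tv_large = empirical_tv_error(10, budget, rng)  # 2^10 = 1024 outcomes
# At a fixed budget, the TV error degrades sharply as the number of bits grows.
```

Keeping the TV error below a constant therefore requires a sample budget growing with the number of outcomes, i.e., exponentially in $n$.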
We kindly refer to Appendix A for further details.\\n\\n*- Eq (3) and Eq (5): (y, b) ~ \\\\pi_x and not (x,y,b)*\\n\\nGiven how $\\\\pi_x$ is defined, the correct notation is indeed $(x,y,b)\\\\sim\\\\pi_x$. However, note that only $(y,b)$ follows the probability distribution given by the POVM $\\\\Lambda_x$ on the input states $|\\\\psi_f\\\\rangle$.\\n\\n*- Line 285: perhaps \\\\Tilde{\\\\pi}_x should be \\\\Tilde{\\\\pi}_{T_x} since the x dependence is surely only through T_x.*\\n\\nOf course, the measure-first protocol (and the fully-quantum one too) only receives information about $x$ through $T_x$. However, we feel it would overcomplicate the notation to put $\\\\Tilde{\\\\pi}_{T_x}$ instead of $\\\\Tilde{\\\\pi}_x$.\"}", "{\"comment\": \"Dear Authors,\\n\\nPlease take a close look at the replies from Reviewer RYau. In particular, further clarifications were requested; please try to respond accordingly. Thanks.\\n\\nBest wishes,\\nAC\"}", "{\"summary\": \"This paper investigates a fundamental question in quantum machine learning (QML): whether quantum advantages can persist when quantum data is first measured and converted to classical information before processing. The authors establish a formal separation between two types of QML protocols:\\n\\n1. \\\"Measure-first\\\" protocols: Those that first measure quantum states using a fixed measurement strategy (though possibly complex and coherent) before processing\\n\\n2. \\\"Fully-quantum\\\" protocols: Those that can process quantum states coherently and maintain quantum data throughout the entire learning & deployment process\\n\\nThe main contribution is proving that there exists a learning task involving quantum measurements where measure-first protocols provably require exponentially more data than fully-quantum protocols, even when restricted to efficiently preparable quantum states. 
This is shown by constructing a specific learning problem based on the Hidden Matching problem and quantum pseudorandom states.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses a fundamental question about the nature of quantum advantages in machine learning and provides concrete evidence that some quantum tasks inherently require maintaining quantum states throughout processing.\", \"The proofs are rigorous and combine multiple techniques from quantum computing, cryptography, and communication complexity.\", \"Unlike previous work, the separation holds even for average-case performance (not just worst-case), efficiently preparable quantum states (not just arbitrary states), and scenarios with experimental noise and errors.\"], \"weaknesses\": [\"While the paper discusses robustness to noise theoretically, no numerical simulations or experimental results are provided to demonstrate the practicality of the proposed protocols.\", \"While the separation is proven rigorously, it relies on a somewhat artificial learning task constructed specifically to demonstrate the separation. It would be valuable to understand if similar separations exist for more natural learning problems.\", \"The paper focuses on a specific type of quantum learning problem involving Hidden Matching. It remains unclear how broadly these limitations of measure-first protocols apply to other quantum learning scenarios.\"], \"questions\": \"Given the theoretical claims regarding noise robustness for the separations established in this work, could the authors add a numerical experiment showcasing the separation under noisy settings? For example, it would be beneficial to simulate your protocols with realistic noise models for near-term quantum devices. 
It would also be useful to see how the separation between measure-first and fully-quantum protocols changes as noise increases.\\n\\nThe main result (Theorem 1) is stated in terms of an existence statement. Could the authors provide a more concrete description of the task that leads to the separation between \\\"measurement-first\\\" vs \\\"fully quantum\\\" methods?\\n\\nDo the authors consider the separation to hold in many natural learning problems that people are actively working on? Could the authors comment on whether the community should consider most problems to be addressable using measurement-first protocols? If not, could the authors comment on which families of problems one should expect fully-quantum protocols to be much more powerful than measurement-first protocols for? Providing a few concrete examples of widely-studied quantum machine learning problems where they expect their results might be relevant would also be useful in this context.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concluding remarks\", \"comment\": \"We sincerely thank the reviewer for their thoughtful comments and suggestions. In response, we have made several important clarifications that we believe strengthen our work. First, we have provided a more concrete description of the task leading to the separation between \\\"measurement-first\\\" and \\\"fully quantum\\\" methods, clarifying its connection to Theorem 1, and we have made cosmetic revisions to improve the clarity and readability of the presentation. Additionally, we have expanded the discussion regarding the relevance of our results to natural quantum learning problems. 
We elaborated on cases where fully-quantum protocols are essential, especially in learning problems related to (F)BQP\\qpoly, and highlighted a promising avenue for future research: the potential for a separation in learning tasks involving the ground states of sufficiently complex local Hamiltonians. We hope that these clarifications, along with the more detailed conclusion, will give the reviewer a clearer understanding of the broader applicability of our results. With these revisions and clarifications, we hope the value and novelty of our work are now more apparent, and we hope the reviewer might reconsider their evaluation of our paper.\"}", "{\"comment\": \"I have had time to revisit my comment about Line 810 and the authors' response stating:\\n\\n>there is no \\\"Bob\\\" present in this context. \\nand\\n>we note that the distinguisher is provided with knowledge of both f and x, and can thus efficiently check whether y is in $R_f(x)$.\\nand\\n>the distinguisher algorithm itself is not situated within the communication complexity setup of the HM problem.\\n\\nHowever, this surely can't be the case, because in the paper the authors state:\\n\\\"In particular, if Eq. 31 does not hold, then one can construct a one-way communication protocol for HM\\\"\\n\\nIn other words, HM is situated in the proof via proof by contradiction. That is, if (30) holds (by assumption, which will be shown to produce a contradiction), we check the two cases for (31): if (31) holds then we break PRF and if (31) does not hold *we break HM*. In particular, if (31) does not hold, we enter the *HM* setting and use the measure-first protocol to break HM. 
As the authors state:\\n\\\"Bob perform steps (2) \\u2212 (3)...[and] step (6)\\\".\\nBob does *not* have access to *f* in HM (I do see that he does have access to x here, but only because he has to generate Tx^M, which is unrelated to my concern that x should not be part of the training data in the ML task setting).\\n\\nIn conclusion, I now worry that Theorem 3 is not sound in its current form.\\n----------------------------------------------------------------------------------\\n\\nFinally, I have a minor new request for clarification:\\n\\nTheorems 2 and 3 refer to the untrainability of MF for \\\\epsilon . \\\\delta > c and \\\\epsilon . \\\\delta . p > c. Is this supposed to be (1-\\\\epsilon) . (1-\\\\delta) > c and (1-\\\\epsilon) . (1-\\\\delta) . p_succ > c?\\n\\nPlease do address my previous and above concerns before the end of the discussion period.\"}", "{\"comment\": \"Thanks to the authors for their response, which addressed my questions and concerns.\"}", "{\"summary\": \"In the paper entitled \\\"Limitations of Measure-First Protocols in Quantum Machine Learning,\\\" the authors constructed a learning task based on quantum phase states, which provides a separation between the full quantum protocol and classical-shadow-based protocols in terms of sample complexity. From my understanding, the construction is fundamentally based on the fact that $FBQP/qpoly\\\\neq FBQP/poly$, while the authors claimed that they successfully achieved the worst-to-average case reduction. In summary, this work theoretically studied the differences between two popular quantum machine learning paradigms, which advances our understanding of quantum models to some extent.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper clearly demonstrates its main results and the comparison to related work.\\n2. 
The authors rigorously proved the separation of sample and running time by implementing a polynomial reduction to the Hidden Matching problem, which provides new insights in the field of quantum machine learning.\", \"weaknesses\": \"Here are some weaknesses and questions on this paper.\\n1. Is the proved separation just an instance of $FBQP/qpoly \\\\neq FBQP/poly$ [S. Aaronson et al., 2023], where $FBQP/poly$ represents classical shadow-based algorithms (including measuring multiple states by Bell measurements)?\\n2. The worst-to-average reduction seems very natural; if my understanding is correct: the classical shadow methods may fail for every $x$ (as shown in Eq.~7), due to the fact that $FBQP/qpoly \\\\neq FBQP/poly$.\\n3. The main contribution of this paper relies on Theorem~2. I know the proof idea is there, but the proof details in Appendix B are not very easy to follow.\", \"questions\": \"1. It would be helpful to provide the explicit construction of U_x (also the learned measurement operator).\\n2. The problem is quite artificial (although Y. Liu et al., Nat. Phys. 2021, is still an artificial construction); it would be perfect if the phase states can be substituted with other, more practical quantum states (such as the ground state or thermal state of a physical system).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response to \\\"Official Review of Submission4664 by Reviewer RYau\\\" (Part 2)\", \"comment\": \"For different reasons, we must respectfully disagree with the assessment that our results are essentially a restatement of the intractability of the Hidden Matching (HM) problem. While we completely agree that we reduce the intractability of a learning problem to a lower bound in communication complexity to derive our result, we do not see this as a weakness per se, as the reduction itself is far from trivial. 
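To make the communication-complexity object concrete: in the Hidden Matching problem, Alice holds $x \in \{0,1\}^n$ and Bob holds a perfect matching $M$ on $[n]$; Bob must output a tuple $(i, j, b)$ with $\{i,j\} \in M$ and $b = x_i \oplus x_j$. One quantum message $|\psi_x\rangle = n^{-1/2}\sum_i (-1)^{x_i}|i\rangle$ suffices, since measuring in the basis $\{(|i\rangle \pm |j\rangle)/\sqrt{2} : \{i,j\} \in M\}$ always yields a valid tuple. A minimal numerical sketch (the choice of $n$ and the matching here are arbitrary illustrations, not part of our construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = rng.integers(0, 2, size=n)               # Alice's hidden string
matching = [(0, 1), (2, 3), (4, 5), (6, 7)]  # Bob's perfect matching on [n]

# Alice's single quantum message: |psi_x> = n^{-1/2} sum_i (-1)^{x_i} |i>
psi = (-1.0) ** x / np.sqrt(n)

# Bob measures in the basis {(|i> + (-1)^b |j>)/sqrt(2) : {i,j} in M, b in {0,1}}
outcomes = []
for i, j in matching:
    for b in (0, 1):
        vec = np.zeros(n)
        vec[i], vec[j] = 1.0, (-1.0) ** b
        vec /= np.sqrt(2)
        prob = float(vec @ psi) ** 2
        if prob > 1e-12:
            outcomes.append((i, j, b, prob))

# Every outcome that can occur satisfies the HM promise b = x_i XOR x_j,
# so a single log(n)-qubit message solves HM with certainty.
for i, j, b, _ in outcomes:
    assert b == (x[i] ^ x[j])
```

The point of the lower bound is that no short *classical* message can replicate this, which is what the reduction transfers to the learning setting.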
In fact, we would argue that reducing to a known separation is not inherently a limitation\\u2014after all, many foundational separations in quantum computing rely on reductions to established claims.\\n\\nThe reviewer\\u2019s statement appears to frame this as a shortcoming, and we interpret this to mean that the reduction is considered straightforward or uninteresting. On this point, we must respectfully disagree. Our results specifically address physical quantum states that are efficiently preparable on quantum hardware. It is important to note that the HM problem arises in the context of communication complexity, and its lower bounds do not directly apply in scenarios where states are efficiently preparable. In particular, if the quantum state can be efficiently prepared, the preparation circuit itself can be treated as a message in the communication protocol, thereby bypassing the standard lower bounds.\\n\\nTo address this challenge, we rigorously demonstrated that the learning problem remains classically intractable even for efficiently preparable states by leveraging pseudo-random states and imposing a time efficiency requirement on the learner\\u2014a standard consideration in machine learning contexts. This additional step is crucial because, without it, the separation would have no meaningful relevance to real-world cases. With it, however, we provide compelling evidence that such separations extend more broadly, as long as the states under consideration are sufficiently \\u201crich\\u201d.\\n\\nFurthermore, due to our focus on a machine learning scenario, there are additional key aspects that set our work apart from the traditional HM problem: our analysis emphasizes both average-case correctness and robustness to noise, which are critical considerations in practical learning tasks that are not considered in the HM problem. 
Covering all these common ML considerations in our result required us to apply all the various techniques of the sub-fields listed out by the reviewer in the \\u201cstrengths\\u201d section.\\n\\nIn summary, we believe that the reduction we employ is not only nontrivial but also highly relevant to practical contexts, offering insights beyond what a straightforward restatement of the HM problem would provide. We hope this explanation helps clarify our perspective.\\n\\nWe are also surprised by the score of 1 for soundness (which we understand as technical correctness and absence of mistakes), as we are not aware of any errors in our proofs, and the reviewer has not identified any specific issues (we find that the \\u201cunclear statements or mistakes\\u201d listed by the reviewer were mostly typos and things we could\\u2019ve explained more clearly).\"}", "{\"comment\": \"I would like to thank the authors for their responses.\\nTheir responses have addressed my questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Summative Response\", \"comment\": \"Thank you for the detailed incorporation of the feedback.\", \"First, let me offer another two minor corrections\": [\"pg 16: \\\"leading A_f to most of the time output 0\\\" -> surely it should be 50/50?\", \"pg 16: \\\"where they obtain the a sample ...\\\" -> \\\"where they obtain a sample ...\\\"\", \"Right, so I think we are getting closer to understanding each other.\", \"The learning task is to learn x. So the data point can't just be f (as stated in the response). Perhaps it is helpful to think that the data is (g(x), f) and the label is (y, b). 
Yes f is part of the data-point but its role IS to scramble x AND IT IS NOT LEARNABLE, since it is a completely random part of the data AND more importantly, a new *random* f is generated at deployment.\", \"As we agree, for the learning to be non-trivial, g cannot be the identity, so the whole work should be presented with a non-trivial g from the outset (the identity case should not even be considered if designing a meaningful ML task is the goal).\", \"The real result is that efficient classical-compression of quantum data is not possible. But this is HM and the authors attempts at turning it into a learning problem is not convincing. This can be seen in the two example MF strategies: for the classical shadows example it needs exponential data at *deployment* (under the version where x is given); for the circuit-learning perspective compression-DECOMPRESSION is the task which again is not a learning task from the traditional supervised ML point of view. It could be seen as a GENERATIVE task, but then, the separation between MF and fully-quantum is due to the difference between polynomial-sized classical (and therefore not actually about MF) and quantum random seed data.\", \"My suggestion going forward is to rewrite this interesting work as an extension of HM (if this has not already been done in the literature) or if the authors really want to connect it to ML then somehow present it as a generative ML task (which now that I think about it is related to the other reviewer's connection to complexity classes with advice).\"]}", "{\"metareview\": \"This paper studied limitations for quantum machine learning algorithms that use fixed measurement schemes on the input quantum states. This is motivated by both recent advances in randomized measurement protocols as well as machine learning for quantum states. 
In particular, the authors proved limitations of measure-first protocols in the average case, improving on the state of the art, which only focuses on worst-case scenarios.\\n\\nThere were adequate discussions during the rebuttal, in particular between the authors and Reviewer RYau. Technical content was clarified and the paper was subsequently improved by the authors. However, the current results amount more to an extension of the hidden matching problem and a contribution to complexity theory (worst-case to average-case), and their nature is more that of a quantum information problem than a machine learning problem. The paper is also purely theoretical, and another reviewer also agreed that its practical applications are limited. Considering all of this, the decision is to reject this paper at ICLR 2025.
Regarding the artificial nature of the problem, we have highlighted how our separation could extend to more physically relevant quantum states, like ground states of local Hamiltonians, though this requires new techniques beyond the scope of the current work.\\n\\nWe hope these revisions and clarifications address your concerns and improve the clarity of our arguments. We trust that these changes will help in reconsidering the evaluation of our work. Thank you again for your thoughtful feedback.\"}", "{\"title\": \"Concluding paragraph\", \"comment\": \"We thank the reviewer once again for their detailed feedback and insightful questions, which have allowed us to better articulate the contributions and implications of our work. In particular, we addressed misunderstandings regarding the roles of $M$ and $A$ in MF protocols, clarified how learning occurs within the MF framework, and explained why $M$ must remain independent of $x$ to maintain a meaningful separation from FQ protocols. We also provided examples of MF strategies to illustrate their capabilities and the challenges they face, highlighting the non-triviality of the learning task and the connection to fundamental limitations in quantum-to-classical data compression. We hope these clarifications resolve the reviewer\\u2019s concerns and provide a clearer understanding of our work\\u2019s significance. We respectfully ask the reviewer to reconsider their evaluation in light of these points and hope our responses and explanations lead to a higher opinion of our work.\"}", "{\"summary\": \"The authors design a quantum machine learning task that exhibits a sample and time complexity separation between two classes of quantum algorithms acting on input training data consisting of quantum data and classical labels with the goal of producing a classical description of a quantum circuit that can then be deployed on unseen quantum data to produce samples from a distribution related to the training data. 
The main difference between the two classes of quantum algorithms is that the more powerful class is allowed to process the input quantum-classical training data in \\\"one go\\\", while the weaker class is hamstrung to first turn the quantum data into classical data, one training data-point at a time, without looking at the classical labels before being allowed to process this now-classical data together with the classical labels through a quantum algorithm to produce a classical description of the deployment quantum circuit. At deployment time, the weaker class is then further forced to measure the unseen quantum data before feeding it into the quantum circuit produced at the end of the training phase.\\n\\nUnsurprisingly, there is a complexity separation between the fully quantum algorithm and the hamstrung quantum algorithm which seems to essentially be a restatement of the intractability of the Hidden Matching Communication problem. Finally, the authors claim to extend the complexity separation to efficiently preparable training data sets by using pseudo-random functions.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Originality: The authors venture to create a quantum machine learning task with average-case complexity separations between measurement-first and fully-quantum quantum algorithms.\", \"quality\": \"The authors seem to know the techniques of the sub-field well (including those related to POVMs, HM problem, Yao's principle, QPRFs, ...), as well as other related work (including classical shadows, shadow tomography, ...)\", \"clarity\": \"The authors do try to provide both diagrammatic, high-level and low-level explanations.\", \"significance\": \"The authors do aim to produce a significant result by hoping to shed light on the role of information loss due to measurement when performing quantum machine learning in the average-case setting with realistic training data.\", \"weaknesses\": \"Unfortunately, 
the high aims and strong potential of the \\\"Strengths\\\" section seem not to be met. I believe this is mainly due to the overly restrictive setting of the weakened class of \\\"measurement-first\\\" quantum algorithms. The most confusing and vague constraint is: \\\"that the measurement strategy cannot depend on the specific target concept\\\". Looking at Definitions 5 and 7 for clarity, this seems to be the following constraint on the hamstrung class:\", \"training\": \"Quantum data + classical labels -----[Point-wise measurement]------> M(Quantum data) + classical labels = Classical data + classical labels ----[Quantum Algorithm]----> Classical description of Quantum circuit\", \"deployment\": [\"Quantum data -----[Point-wise measurement]------> M(Quantum data) = Classical data ----[Quantum Algorithm given by circuit above]----> Sample\", \"Therefore, it seems that the main difference between the two classes is that the weaker one actually only operates on classical data during both training and deployment while the stronger one is allowed to operate on quantum data. 
This is a difference between data capacity and not really about a separation between interesting algorithmic classes.\", \"Secondly, there do seem to be quite a few unclear statements or mistakes:\", \"Lines 157-159 is unclear\", \"Lines 205-210 claims a significant difference between this work and Jerbi et al.'s, but given the above characterization it is not clear that this is the case.\", \"Line 229 introduces f', why the prime?\", \"\\\\pi_x is not anywhere clearly connected to \\\\Lambda_x\", \"z and x are used inconsistently/confusingly in multiple places\", \"Eq (3) and Eq (5): the training data should not include x, perhaps g_i(x) is somehow meant (however see below)?\", \"Eq (3) and Eq (5): (y, b) ~ \\\\pi_x and not (x,y,b)\", \"Line 285: perhaps \\\\Tilde{\\\\pi}_x should be \\\\Tilde{\\\\pi}_{T_x} since the x dependence is surely only through T_x\", \"Eq (8): how can x appear as an unbound function variable on the LHS and as a bound variable on the RHS?\", \"Line 370: the use of Aaronson et al. 
is the very heart of the whole paper, yet this is not reproduced anywhere in the paper\", \"Line 434: please remind me what non-uniform means in this context.\", \"Eq (9): the notation of dot and unfilled function brackets is not defined/explained.\", \"Lines 475-480: is unclear, especially given that there are supposed to be different random functions for each data-point.\", \"Eq (11): if g_i(x) is of size n+1, doesn't this change the learning problem in unclear-ways, not least because the dimensions no longer match (2n + 1 != 2n + 2)\", \"Eq (29): \\\\pi_x(f) -> \\\\pi_x(f^{(k)})\", \"Line 787: is query access -> has query access\"], \"questions\": [\"How can my following perceived weakness from above be remedied:\", \"\\\"This is a difference between data and not really about a separation between interesting algorithmic classes.\\\"\", \"Is my understanding correct that the essence of the separation between the two classes boils down to the inability to compress the unseen quantum data during deployment time? What would happen if the measurement restriction during training were modified to allow the measurements to depend on the labels? Or if we must keep them blind of the labels, joint measurements across all data-points were allowed?\", \"Line 289: Why is the \\\"more general\\\" case not studied, when it is arguably much more interesting than the restrictions under consideration?\", \"Theorem 5: What are the assumptions of Yao's principle? 
Are they satisfied?\", \"Line 810: How can Bob output 1 when y is in R_f(x) without knowledge of *f*?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Collected summative response to the sequence of author responses\", \"comment\": \"Thank you for attempting to address my concerns.\", \"first_let_me_step_back_and_make_the_following_orientating_remark\": \"I do acknowledge that the worst-to-average aspect, the analysis of the noisy scenario and the consideration of an efficiently preparable input state take the HM result into new interesting territory.\\n\\nI worry that I may be completely misunderstanding the setting. My current understanding hinges on the fact that the learning task is to learn *x* (which is why I thought it was a typo that x was included in the training data and it is how I understood the necessity of introducing g_i: to make the learning task non-trivial). Once x is learned then U_x and hence \\\\Lambda_x (the POVM) can easily be constructed (a bias that should be afforded to the learning algorithm since the data *has* well defined structure). I have also understood any particular f to be a random \\\"scrambling\\\" mechanism to hide the fixed, to-be-learned x.\\n\\nNow, this paper's aim is to bring insight into the field of *QML*; it is first and foremost an ML paper and should be judged on the quality/non-triviality of the ML task, the naturalness and capacities of the *learning* algorithms and the manner in which the task teases out performance separations between the algorithms. This is the source of my dissatisfaction with the paper: HM is a communication task that I think has unsatisfactorily been recast as a learning problem. However, I realise I am not clear on what the learning task is. I currently think the task should be to learn x. 
If it is not this and it is to turn a blind eye to the known structure of the input data (that x is the only actual unknown data, not the construction steps of U; a familiar cryptographic/data-compression lens on ML: computational intractability/cost not obfuscation is the right adversarial/interesting setting) and to pretend we don't know how the (y,b)'s were constructed, then what is the task? If we are given x but the MF algorithm is not allowed to use it, then what can the MF algorithm do at all? What is the best MF attempt? It would seem that without the task of learning x there is no learning task. There might as well not be any training stage at all. The best MF attempt would then be to only focus on the deployment stage. It seems the MF algorithm must then learn x and f at deployment? To justify this assessment:\\n* The authors themselves state: \\\"Importantly, the classical description cannot be tailored to the specific concept being learned; otherwise, the learning task could be effectively solved at the measurement stage, negating the need for further analysis.\\\" This is a great way to state my reservation. My position is that, of course, the classical description should be allowed to be tailored to the specific concept being learned! This is what learning is all about, tailoring responses to inputs. Therefore if I insist that this is a minimum power that should be granted the MF algorithm, then the authors already agree that this \\\"negat[es] the need for further analysis.\\\"\\n* The left side of Figure 1 itself suggests that the measurement stage (the yellow box) should at least be allowed to simultaneously analyse T_x and \\\\rho^*, which of course does not match the actual restrictions placed on the MF algorithm. 
The MF algorithm should more accurately be drawn with two disconnected yellow boxes, which shows the artificiality and the classicality of MF's setting.\\n\\nIt is my above understanding (that the training data should not contain x, that Bob does not have access to x and the mistakes in Eq 8, the mismatch of dimensions in Eq 11) that led me to assign a score of 1 for soundness. However, now it seems that the mistakes and mismatch were typos or not serious and that it was intended all along for the training data to include x? If this is the case, then yes soundness of the math is fine but now I contend that soundness of the learning task is poor.\"}", "{\"title\": \"General response\", \"comment\": \"We thank the reviewer for their response and for acknowledging that our techniques take the HM result into new and interesting territory.\\n\\nWhile we fully respect the reviewer\\u2019s judgment regarding the quality and non-triviality of the ML task, as well as the naturalness/capacities of the learning algorithms and the manner in which the task teases out performance separations between the algorithms, we believe certain details may have been overlooked. We are grateful for the reviewer\\u2019s detailed outline of their understanding and for raising specific questions about these aspects, as this allows us to directly address areas where potential misunderstandings may have arisen.\"}", "{\"title\": \"Further explanation of the machine learning task\", \"comment\": \"First, allow us to elaborate further on our learning setting and clarify the potential misunderstanding surrounding it. The reviewer correctly notes that the goal is to learn $x$ and reproduce the corresponding measurements, with the $g_i$ functions ensuring the non-triviality of the task. However, the interpretation of the role of the $f$ functions as \\u201crandom scrambling mechanisms\\u201d to obscure $x$ is not accurate. 
Instead, the $f$ functions (or, more precisely, their associated phase states) serve as inputs to the learning problem. To provide a clearer analogy: in supervised learning, the data typically consists of (data point, label) pairs. In our setting, each phase state associated with a given $f$ represents a data point. We apologize for any confusion caused by our notation.\\n\\nThe key challenge in this learning task (i.e., its non-triviality) does not arise from the $f$ functions scrambling $x$, but rather from the fundamental limitations of any measurement strategy to efficiently compress the (pseudorandom) phase states into succinct classical descriptions. Importantly, these phase states *do* admit such a succinct classical description (e.g., the circuit that prepares them), which contains sufficient information to approximately and efficiently recover the measurements associated with all $\\\\Lambda_x$ for a subset of $f$ values (i.e., in the average-case setting). However, our main result demonstrates that no measurement strategy can efficiently compress the phase states into such descriptions.\\n\\nOur primary goal is to use the HM task as a tool to study the limitations of quantum-to-classical data compression in a machine learning context. What distinguishes our approach from previous results on quantum-to-classical data compression limitations is the incorporation of several key considerations ubiquitous in machine learning settings. Specifically, we extend the analysis to the average-case scenario, analyze the robustness of the task under noise, and consider efficiently preparable input states that have succinct representations but are provably infeasible to identify in polynomial time. 
These aspects, as the reviewer acknowledged, take the HM problem into new and interesting territory.\\n\\nIn summary, while the $g_i$ functions ensure the non-triviality of the learning task, the primary barrier lies in the impossibility of efficiently compressing pseudorandom phase states, not in the $f$ functions obscuring $x$. We hope this clarification resolves any misunderstandings and strengthens the reviewer\\u2019s confidence in the contributions of our work.\"}", "{\"title\": \"General response to \\\"Official Review of Submission4664 by Reviewer N6yC\\\"\", \"comment\": \"We thank the reviewer for the time spent on this review. We are particularly happy that they appreciate the soundness of our proofs and the relevance of our result within the field of quantum machine learning.\"}", "{\"title\": \"Addressing \\\"Questions/Weaknesses in Official Review of Submission4664 by Reviewer Mm3W\\\" (Part 1)\", \"comment\": \"*- While the paper discusses robustness to noise theoretically, no numerical simulations or experimental results are provided to demonstrate the practicality of the proposed protocols. Given the theoretical claims regarding noise robustness for the separations established in this work, could the authors add a numerical experiment showcasing the separation under noisy settings? For example, it would be beneficial to simulate your protocols with realistic noise models for near-term quantum devices. It would also be useful to see how the separation between measure-first and fully-quantum protocols changes as noise increases.*\\n\\nWe appreciate the reviewer\\u2019s suggestions for numerical simulations and experimental validation. We would like to mention that we are already collaborating with an experimental team on this front. However, there are significant challenges in conducting such experiments even just in ideal simulations. 
For instance, demonstrating that the separation holds \\\"for all\\\" MF protocols requires identifying the optimal MF protocol for the given task, which is a complex problem. While we have some ideas on how to address this, it is a substantial undertaking that we believe warrants a separate, dedicated project. Moreover, we would like to clarify that this is a theoretical paper, where we provide rigorous theoretical guarantees on the robustness to noise. While investigating the effects of different noise models on our protocols is certainly an interesting direction, we believe it falls outside the scope of this work. In particular, since our primary focus is on establishing the theoretical separation between measure-first and fully-quantum protocols, rather than on noise analysis itself, we have chosen not to include numerical simulations. Investigating other aspects would present a completely different set of challenges and convey a different message, which is outside the scope of our current work.\\n\\n*- The main result (Theorem 1) is stated in terms of an existence statement. Could the author provide a more concrete description regarding the task that leads to the separation between \\\"measurement-first\\\" vs \\\"fully quantum\\\" methods?*\\n\\nWe thank the reviewer for their comment, and we are glad to have the opportunity to improve the presentation of the task that leads to the separation. In the initial manuscript, the task that results in the separation between \\\"measurement-first\\\" and \\\"fully quantum\\\" methods is outlined in Section 3.1, with a high-level overview provided in Section 1.1. 
To enhance clarity, in the revised version of the paper, we explicitly state that the task referenced in Theorem 1 is the same as the one defined in Section 3.1, and we have made several cosmetic changes to improve readability.\"}", "{\"title\": \"General response to \\\"Official Review of Submission4664 by Reviewer Mm3W\\\"\", \"comment\": \"We thank the reviewer for taking the time to provide their feedback. We are pleased that they recognize the relevance of our work within the field and acknowledge our use of diverse proof techniques, which distinguishes our results from previous works.\"}", "{\"title\": \"Addressing \\\"Questions/Weaknesses in Official Review of Submission4664 by Reviewer RYau\\\" (Part 1)\", \"comment\": \"*- How can my following perceived weakness from above be remedied: \\\"This is a difference between data and not really about a separation between interesting algorithmic classes.\\\"*\\n\\nWe agree that our results can be interpreted as demonstrating a separation in solving a learning task based on the type of data provided (classical vs quantum). However, we do not view this as a weakness of our work. On the contrary, the converse claim -- that there is no separation in value -- is precisely at the core of the \\\"measure first, ask questions later\\\" approach, which we set out to investigate and challenge. Addressing this separation is a significant and timely question within the quantum machine learning (QML) community, as highlighted in our general response above. We believe our work contributes to this ongoing discussion by providing a rigorous framework and evidence for when and why such separations emerge. To properly investigate this, it is necessary to study two classes of algorithms: one that can access the full quantum states and another that only receives classical descriptions of these states. 
Importantly, the classical description cannot be tailored to the specific concept being learned; otherwise, the learning task could be effectively solved at the measurement stage, negating the need for further analysis. This constraint underscores the significance and the necessity to study the two \\\"algorithmic classes\\\" (i.e., measure-first vs fully-quantum) as defined in our paper. \\n\\n*- Is my understanding correct that the essence of the separation between the two classes boils down to the inability to compress the unseen quantum data during deployment time? What would happen if the measurement restriction during training were modified to allow the measurements to depend on the labels? Or if we must keep them blind of the labels, joint measurements across all data-points were allowed?*\\n\\nWe agree with the reviewer\\u2019s observation that the separation result stems from the measurement restrictions during deployment. However, we do not view this as a significant limitation of our results. In fact, previous notable works [2,3] have shown that fixed measurement schemes during deployment can still yield highly powerful models, challenging the perceived necessity of quantum computation and information in quantum machine learning (QML). Our primary goal was to examine the limitations of common MF protocols found in the literature, and we demonstrate that a separation exists for these variants of MF protocols. While a very interesting related question lies beyond the scope of our work, exploring how far \\\"quasi\\\"-MF schemes can go is a natural next step. We have also given thought to this direction and have intuition that separations may also be established in more general cases -- e.g., when the deployment involves a fully quantum strategy (rather than a \\\"measure first, ask later\\\" scheme). 
However, proving such separations formally would require significant additional work and novel results, which fall outside the current scope of this paper. Lastly, we note that our separation result holds even when measurements are performed on multiple input states, and we have clarified this point in the revised version of the paper.\\n\\n*- Line 289: Why is the \\\"more general\\\" case not studied, when it is arguably much more interesting than the restrictions under consideration?*\\n\\nWe believe that our current result already makes a significant contribution by demonstrating a learning separation between models that are of particular interest to the community. This addresses what we consider to be the principal and most fundamental question in this context. The more general cases are natural and obvious follow-ups, which we fully intend to explore in future work.\"}", "{\"title\": \"Addressing \\\"Questions/Weaknesses in Official Review of Submission4664 by Reviewer RYau\\\" (Part 2)\", \"comment\": \"*- Theorem 5: What are the assumptions of Yao's principle? Are they satisfied?*\\n\\nWe refer to Yao's principle as stated in, for example, https://nerva.cs.uni-bonn.de/lib/exe/fetch.php/teaching/ws1819/vl-aau/lecturenotes05.pdf. The assumptions are that the variables $\\\\mathbb{A}$ and $\\\\mathbb{X}$ are random variables following a probability distribution over the sets $\\\\mathcal{A}$ and $\\\\mathcal{X}$, respectively. Our setting perfectly matches these conditions.\\n\\n*- Line 810: How can Bob output 1 when y is in R_f(x) without knowledge of f?*\\n\\nThe reviewer appears to be referring to a step in the \\\"distinguisher algorithm,\\\" whose goal is to differentiate between truly random and pseudorandom phase states. However, there seems to be a misunderstanding, as there is no \\\"Bob\\\" present in this context. 
We can understand the source of the confusion, as we do relate the performance of the distinguisher algorithm to the lower bounds of the Hidden Matching (HM) problem. Nevertheless, the distinguisher algorithm itself is not situated within the communication complexity setup of the HM problem.\\n\\nImportantly, the use of pseudorandom states and distinguisher algorithms is a key distinction that sets our result apart from being essentially a restatement of the intractability of the HM problem. Finally, we note that the distinguisher is provided with knowledge of both f and x, and can thus efficiently check whether y is in $R_f(x)$.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThe authors have provided their rebuttal to your comments/questions. Given that we are not far from the end of author-reviewer discussions, it will be very helpful if you can take a look at their rebuttal and provide any further comments. Even if you do not have further comments, please also confirm that you have read the rebuttal. Thanks!\\n\\nBest wishes,\\nAC\"}" ] }
0tAn34IkXI
Flat Posterior Does Matter For Bayesian Model Averaging
[ "Sungjun Lim", "Jeyoon Yeom", "Sooyon Kim", "Hoyoon Byun", "Jinho Kang", "Yohan Jung", "Jiyoung Jung", "Kyungwoo Song" ]
Bayesian neural network (BNN) approximates the posterior distribution of model parameters and utilizes the posterior for prediction via Bayesian Model Averaging (BMA). The quality of the posterior approximation is critical for achieving accurate and robust predictions. It is known that flatness in the loss landscape is strongly associated with generalization performance, and it necessitates consideration to improve the quality of the posterior approximation. In this work, we empirically demonstrate that BNNs often struggle to capture the flatness. Moreover, we provide both experimental and theoretical evidence showing that BMA can be ineffective without ensuring flatness. To address this, we propose Sharpness-Aware Bayesian Model Averaging (SA-BMA), a novel optimizer that seeks flat posteriors by calculating divergence in the parameter space. SA-BMA aligns with the intrinsic nature of BNN and the generalized version of existing sharpness-aware optimizers for DNN. In addition, we suggest a Bayesian Transfer Learning scheme to efficiently leverage pre-trained DNN. We validate the efficacy of SA-BMA in enhancing generalization performance in few-shot classification and distribution shift by ensuring flat posterior.
[ "Bayesian Neural Network", "Bayesian Deep Learning", "Flatness-aware Optimization", "Bayesian Transfer Learning" ]
https://openreview.net/pdf?id=0tAn34IkXI
https://openreview.net/forum?id=0tAn34IkXI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wYgdzX9Yoz", "u5QJAsq5uw", "tFwerVl47c", "oahLzI07nN", "UAPozVokXg" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730688335070, 1731081513840, 1731127081343, 1731554113748, 1730558104602 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2837/Reviewer_tDL5" ], [ "ICLR.cc/2025/Conference/Submission2837/Reviewer_JtUY" ], [ "ICLR.cc/2025/Conference/Submission2837/Reviewer_yjnt" ], [ "ICLR.cc/2025/Conference/Submission2837/Authors" ], [ "ICLR.cc/2025/Conference/Submission2837/Reviewer_ZSvu" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a flatness-aware optimizer for Bayesian model averaging, which applies to a variety of Bayesian inference algorithms. The introduced optimizer SA-BMA is generalized from the SAM optimizer. This paper also has a clear empirical proof of why flatness is important and why existing Bayesian inference methods ignore flatness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. I notice that this is a resubmission paper. Compared with the last version, more analysis on the flatness of the loss landscape and the relations between flatness and general performances are included. I respect the authors' efforts in studying the geometry of loss landscape.\\n\\n2. The empirical analysis using Hessian eigenvalues clearly demonstrates why finding flat modes is important to the overall performance.\\n\\n3. Comprehensive experiments are conducted to demonstrate the effectiveness of SA-BMA.\", \"weaknesses\": \"1. The experiments on real-world datasets are limited to CIFAR10/100. I expect to see results on large-scale dataset like ImageNet to show the scalability of SA-BMA.\\n\\n2. 
Figure 5 may lead to a misunderstanding that PTL and SA-BMA change the loss surface (in the first 2 figures).\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Flat optima have been shown to connect with good generalization in point estimation for neural networks. The authors study flatness for Bayesian neural networks and propose a flatness-seeking optimizer, Sharpness-Aware Bayesian Model Averaging (SA-BMA), for VI. Specifically, the authors first show empirically that (1) BNN's posterior tends to land in a sharper loss region; (2) when making a prediction with MC estimation, using flat samples will result in better performance. Based on the empirical finding, the authors propose a new learning objective for VI, which also accounts for flatness, as well as a Bayesian Transfer Learning scheme for efficient computation for large models. Experiment results have shown SA-BMA can improve generalization in few-shot classification and distribution shift.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The connection between the proposed objective and existing works is well-analyzed\", \"Well written and easy to follow\"], \"weaknesses\": [\"In Bayesian deep learning, in the end we have a distribution; here the authors use the averaged Hessian eigenvalues of different sampled weights as the measurement of flatness. I'm not fully convinced this is a good measurement of flatness over a distribution.\", \"The proposed objective is expensive to train.\"], \"questions\": [\"SAM training is already more expensive than vanilla gradient descent, adding VI on top (now you need to sample to estimate ELBO), won't it be too expensive? 
This begs another question: how much improvement can be gained by using SA-BMA when compared with LA on SAM solution? I see LA implemented in your code but there are no results of LA in the paper. I would be interested to see an experiment comparing LA on SGD solution, LA on SAM solution, and SA-BMA.\", \"How do you ensure VI has been trained successfully? I see in multiple cases VI ends up with higher NLL and ECE than MAP, which seems strange.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a sharpness-aware Bayesian neural network (BNN) to ensure the found modes are flat. A new Bayesian transfer learning scheme is also developed to leverage the pre-trained deep neural networks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper targets the generalization of BNNs, which is an important problem.\", \"The paper provides empirical and theoretical analysis to support the need for flatness in BNNs.\"], \"weaknesses\": [\"The overall goal of the paper is vague. As far as I understand, the proposed method increases the flatness of the variational parameter \\\\theta, not the model parameter w. However, the literature shows flatter w leads to better generalization. There seems to be a gap. The meaning of \\\"flatness in BNNs\\\" is not very clear in the paper.\", \"Previous works have demonstrated the benefits of including flatness in BNNs, e.g. M\\u00f6llenhoff & Khan, 2022, Nguyen et al., 2023, Li & Zhang, 2023. The additional insights offered by Sec 3 are unclear.\", \"It is unclear how Theorem 1 indicates that BNN needs flatness. This theorem basically shows the relationship between the flatness of the weight-averaged model and the flatness of individual models. 
It does not explain the benefits of ensuring flatness in BNNs.\", \"Variational inference (VI) approximates the posterior through an optimization problem. We can naively apply SAM to the objective of VI. The difference and benefit of the proposed objectives in Eq.4 and 5 over this naive version are unclear.\", \"In the experiment section, the proposed method is applied to VI, SWAG, and MCMC. However, it is unclear how the method is compatible with SWAG and MCMC.\", \"The proposed Bayesian transfer learning scheme is a straightforward application to transfer learning. The novelty of this part is low.\"], \"questions\": [\"How to measure the flatness of a BNN, as it involves a set of NNs? Do the authors average the flatness of all samples in Fig. 1 and 2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the anonymous reviewers for their constructive feedback. Unfortunately, given the short rebuttal period, we will not be able to address all of the concerns raised in the reviews. Therefore, after careful consideration, we have decided to take time to more carefully revise our work for another venue and withdraw the current submission. We appreciate the reviewers\\u2019 time and effort, again.\"}", "{\"summary\": \"This paper proposes to use sharpness-aware minimization for Bayesian models. It proposes a framework that can be used with 3 different posterior approximation methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper does propose an interesting combination of lines of work in deep learning; it missed out on evaluating whether this combination is useful in my opinion, though. 
I do see the plots in Figure 2 as a negative result in this way, and think based on this one could have written an interesting paper on flatness-seeking methods approximating Bayesian averages.\", \"weaknesses\": [\"I do not think there is a need for flatness-aware optimization in Bayesian models. That is because Bayesian models are building an average over all models with high likelihood (or posterior likelihood for informative priors). Taking this average will naturally lead to including a lot of models from flat optima, as they are simply wider and thus have more mass (in the prior). This in my opinion is underlined by the experiments in Figure 2b-c, where we can see that by simply using a larger Ensemble, thus approximating the true PPD more closely, we get the same effect as when choosing models that lie in wide optima. I hope I did understand this experiment right and the 30 models that you speak about are 30 samples from your posterior.\", \"One more point on this: I can imagine that this argument does not work well for the particular problems that VI has, as it will always try to find a simple Gaussian distribution that represents the posterior.\", \"The flatness difference in Experiment 5.1 looks marginal at a mere 2x radius of the optimum and a worse likelihood. This toy experiment would be more interesting if both optima had the same likelihood, but one being *much* more narrow.\", \"Your language is missing a lot of articles, but generally feels more like a draft than a paper. I guess you are not a native English speaker, I am neither, so this does not affect the score much for me, but I can recommend you to use LLMs/DeepL to improve your English writing.\", \"The accuracies seen in the experiments seem to be far away from the state of the art for the models, see e.g. 
this torch tutorial https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/\", \"SAM is known to increase the time each step takes; this algorithm should have the same impact. A comparison of performance over time is missing, though.\"], \"questions\": [\"How many posterior samples have you used for Figure 2a?\", \"How large are your Bayesian ensembles for the final experiments, i.e. how many times do you sample from your posterior to approximate the PPD integral?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0tAXMiSufG
BOND: Aligning LLMs with Best-of-N Distillation
[ "Pier Giuseppe Sessa", "Robert Dadashi-Tazehozi", "Leonard Hussenot", "Johan Ferret", "Nino Vieillard", "Alexandre Rame", "Bobak Shahriari", "Sarah Perrin", "Abram L. Friesen", "Geoffrey Cideron", "Sertan Girgin", "Piotr Stanczyk", "Andrea Michi", "Danila Sinopalnikov", "Sabela Ramos Garea", "Amélie Héliou", "Aliaksei Severyn", "Matthew Hoffman", "Nikola Momchev", "Olivier Bachem" ]
Reinforcement learning from human feedback (RLHF) is a key driver of quality and safety in state-of-the-art large language models. Yet, a surprisingly simple and strong inference-time strategy is Best-of-N sampling that selects the best generation among N candidates. In this paper, we propose Best-of-N Distillation (BOND), a novel RLHF algorithm that seeks to emulate Best-of-N but without its significant computational overhead at inference time. Specifically, BOND is a distribution matching algorithm that forces the distribution of generations from the policy to get closer to the Best-of-N distribution. We use the Jeffreys divergence (a linear combination of forward and backward KL) to balance between mode-covering and mode-seeking behavior, and derive an iterative formulation that utilizes a moving anchor for efficiency. We demonstrate the effectiveness of our approach and several design choices through experiments on abstractive summarization and Gemma models.
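The Jeffreys divergence mentioned in the abstract (a linear combination of forward and backward KL) can be sketched numerically. The following toy example for discrete distributions is illustrative only: the `beta` weighting convention and the example distributions are our own assumptions, not taken from the paper.

```python
import math

def kl(p, q):
    # KL(p || q) for discrete distributions given as aligned probability lists
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q, beta=0.5):
    # A Jeffreys-style objective: a convex combination of forward KL(q || p)
    # (mode-covering) and backward KL(p || q) (mode-seeking). The parameter
    # beta trades off the two behaviors; beta=0.5 weighs them equally.
    return beta * kl(q, p) + (1 - beta) * kl(p, q)

p = [0.7, 0.2, 0.1]   # e.g. a policy distribution (hypothetical numbers)
q = [0.5, 0.3, 0.2]   # e.g. a Best-of-N target distribution (hypothetical)
print(jeffreys(p, q, beta=0.5))  # ~ 0.0886
```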
[ "LLM", "Alignment", "RLHF", "Best-of-N" ]
Accept (Poster)
https://openreview.net/pdf?id=0tAXMiSufG
https://openreview.net/forum?id=0tAXMiSufG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLTmC1CmmU", "yh6wouxHVw", "xbZ7CM58dQ", "vmiDeIlgTJ", "rTgltQ6W5r", "rBimtAsa0d", "qhfg9LTnCN", "SenMcMHrWX", "RyCqVtDVsA", "RFy2MSBhht", "MWqxSp120S", "MQe00xdaXJ", "KUJvJg7PKl", "FpQcUjHxIg", "DbvZm68TAN", "7pariekAi0", "5OfU78dPEr", "3OJVsXZZuL" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1730351178270, 1730781974871, 1732621130145, 1732372687597, 1732178672786, 1734551750176, 1732178116166, 1732179306373, 1737523647232, 1732432005764, 1730006218898, 1732179803275, 1732525047566, 1732177797025, 1732598257236, 1730587503158, 1732177631016, 1729066670235 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_X2hV" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_2AHH" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_6VBe" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_NVi9" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Area_Chair_FY7g" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_X2hV" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_rggM" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_rggM" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_6VBe" ], [ "ICLR.cc/2025/Conference/Submission4546/Authors" ], [ "ICLR.cc/2025/Conference/Submission4546/Reviewer_NVi9" ] ], "structured_content_str": [ 
"{\"summary\": \"This paper introduces a distribution matching-based Best-of-N distillation method that simulates the Best-of-N distribution space, while reducing the time overhead of N inferences to just one. Starting from the theoretical distribution of BoN, the authors construct the Iterative BOND algorithm based on Quantile estimation and the choice of Jeffreys Divergence, and further propose the more practically meaningful J-BOND algorithm.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Rigorous Theoretical Analysis: This work rigorously analyzes the distribution characteristics under Best-of-N sampling and establishes its connection with standard RLHF, as well as the specific reward value $r_{BOND}$ under this correlation. This provides a reliable theoretical foundation for the work, rather than being based on naive assumptions.\\n\\n2. Some Degree of Novelty: Although there is some concurrent work, the idea of distilling distributions from Best-of-N is fairly novel and important.\\n\\n3. Consideration of Practical Efficiency: I appreciate the authors' consideration of the practical efficiency of the algorithm. The proposed J-BOND algorithm theoretically has lower sampling complexity, which should increase the efficiency of RLHF.\", \"weaknesses\": \"1. Lack of Important Baselines: Given that the main purpose of the paper is to distill Best-of-N sampling, BoN performance should straightforwardly serve as an important baseline to analyze pros and cons in terms of performance and efficiency. Moreover, other concurrent BoN distillation algorithms [1] should also be considered.\\n\\n2. Lack of Downstream Validation: The main metrics in the paper, such as reward value and KL divergence, cannot be directly equated to the model's performance on downstream tasks. 
For an RLHF method, it is necessary to conduct experiments on downstream tasks and present more intuitive metrics to demonstrate the model's alignment performance.\\n\\n3. Insufficient Experimental Setup: The paper lacks exploration of several issues. For instance, BoN sampling heavily depends on the Reward Model, and the influence of different RMs on the BOND algorithm is not investigated. Additionally, a more nuanced exploration of Jeffreys Divergence with smoother \\u03b2 variations could be included; and the comparison between J-BOND and standard Iterative BOND lacks investigation.\\n\\n[1] Variational Best-of-N Alignment, https://arxiv.org/pdf/2407.06057\", \"questions\": \"1. Can the authors compare with more fundamental baseline methods, such as BoN or other BoN distillation algorithms?\\n\\n2. Can the authors supplement additional experiments, including downstream validation and more ablation studies as discussed in Weakness 3?\\n\\n3. Can the authors prove more clearly the advantages of the BOND algorithm over other Alignment algorithms, in terms of both performance and efficiency, to make the argument more convincing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper is focusing on the RLHF alignment problem, in particular on emulating the Best-of-N distribution which is known to perform very well, but is very costly at inference time (for each prompt it requires drawing N candidate generations from the reference model and selecting the one with highest reward according to a reward model). The authors propose the BOND (Best-of-N Distillation) algorithm designed to force the distribution of generations from the finetuned policy to be close to the Best-of-N distribution, requiring the generation of just a single sample (instead of N). 
To this end, BOND regards the alignment problem as a distribution matching problem and distills the Best-of-N distribution by finetuning the reference policy to imitate the Best-of-N distribution. To stay close to the original reference model, the authors incorporate a KL regularization term that considers both the forward and backward divergence (Jeffrey divergence). In addition, they incorporate Monte-Carlo quantile estimation, and exponential moving anchor, resulting in the J-BOND algorithm. The authors conduct experiments on the abstractive summarization task (XSum dataset) and aligning GEMMA using J-BOND.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper makes multiple contributions, namely theoretical derivation for the Best-of-N distribution and a practical RLHF finetuning algorithm that distills the Best-of-N distribution into a policy which is sample efficient and requires just one single sample at inference time\\n\\nThe authors are making a lot of engineering design choices in their proposed model, and carefully analyze the role of each component in the performance of the proposed algorithm\\n\\nTo regularize the model and ensure it is not steering too far from the reference model (the supervised finetuned policy), the authors use a combination of both forward and reverse KL, namely Jeffrey divergence. 
While the forward KL ensures mode covering behavior, the reverse KL is used for mode seeking behavior; their combination results in better aligned policies that combine the advantages of both divergences\\n\\nApplying BOND recursively (Iterative BOND) improves the sample efficiency of the BOND algorithm and works for very small values of n (2, 4); its reward/KL tradeoff is comparable to the non-iterative BOND while being more sample efficient\\n\\nThe J-BOND algorithm presents better reward/KL trade-off compared to the REINFORCE algorithm with different values of \\\\beta and does not require using a specific regularization strength\\n\\nThe paper is well written, well-motivated, presents theoretically and experimentally sound insights that would benefit the research community\", \"weaknesses\": \"The paper combines a lot of distinct ideas already proposed in previous works - it would be good to actually clearly articulate what the novel contribution is. Besides, the comparison with concurrent works is not very clear, in particular the difference with (Amini et al, 2024), WARM, WARP (Rame et al, 2024).\\n\\nFigure 4 - It would be interesting to see how the performance of Best-of-N compares to the proposed algorithm J-BOND and REINFORCE\\n\\nAlgorithm 1, line 330 - \\\\pi_t is not defined\\n\\nLine 329 - \\\\pi \\\\in Capital \\\\pi -Capital \\\\pi in Algorithm 1 is not defined\\n\\nLine 456 - \\u201ca large previously trained reward model r(.)\\u201d - please provide details\\n\\nLines 481-482 - there are not many details about the hyperparameters of the REINFORCE algorithm \\n\\nThe authors are conducting experiments on Gemma 2B and 7B models, while results are convincing it would be good to see if they hold with other models and other tasks than summarization\", \"questions\": \"How does J-BOND performance compare to (Amini et al, 2024) and other concurrent works?\\n\\nThere are three algorithms discussed in the paper, namely BOND, Iterative-BOND and J-BOND. 
Is it always preferable to use JBOND or do you recommend using each algorithm in particular situations?\\n\\nWill the code be made publicly available to serve the research community?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response. I think to demonstrate the estimator's stability with 32 fresh samples, it would be best to present the corresponding MC simulation variation. The trends in Figure 3 (not Figure 2?) are not reliable unless simulated multiple times. However, your response does add clarity to the paper, and I have adjusted my score accordingly.\"}", "{\"comment\": \"Thank the authors for addressing my questions. I find the analysis proposed by the authors to be interesting and capable of explaining the observed training curves. Although the authors did not provide a more detailed explanation regarding the reward curve, I agree that this depends on the characteristics of both the RM and the initial policy. Despite not comparing all the baselines I listed, I believe the authors' analysis of (iterative) J-BOND and the alignment between theoretic analysis and experiment phenomenon convinces me of the algorithm's effectiveness and its novelty in comparison to other works on distilling BoN distributions. Therefore, I decide to raise my rating and score of the contribution.\"}", "{\"comment\": \"Thank you for reviewing our work and providing valuable feedback. Below, we respond to the main points and questions raised.\\n\\nReward shaping can definitely improve the performance of standard RLHF algorithms, as demonstrated in several recent works. Though, we believe BOND provides a more principled alternative compared to possible reshaping approaches as it only relies on generations\\u2019 rank. 
\nAlso, we note that the superior performance of BOND is not only attributed to optimizing a transformed reward (backward KL component), but also to forward KL minimization and to the iterative distillation procedure.\n\nIn terms of impact on downstream tasks, we believe J-BOND provides two main benefits: 1) because it does not rely on reward values (but only on generations\u2019 rank), the resulting policy should be less prone to reward over-optimization and thus of higher quality. 2) because of the iterative procedure, J-BOND fine tuning is more stable than standard baselines (e.g., Figure 4) and can return a whole spectrum of policies (at different KL divergences) that one could evaluate on the downstream task of interest. As mentioned in the general response, in Appendix B.5 we have reported downstream evaluations of the fine tuned policies showing that J-BOND achieves higher quality (measured via side-by-side comparisons) and significantly higher benchmark scores.\n\nWe found reward baselines to be of high importance (as typical in RL and RLHF literature). Moreover, we have performed additional ablations (see general response) illustrating the effect of Jeffreys divergence in J-BOND. These are reported in Appendix B.3. \nThe results complement the ablation performed in Appendix B.1 and highlight the fact that both forward and backward KL components are crucial for achieving the best reward/KL tradeoffs. \nIntuitively, the forward KL component makes sure the policy covers all the modes of the BoN distribution, while the backward KL makes sure the policy is not over-dispersed and makes it peaky on certain modes.\n\nFinally, we provide answers to the raised questions:\n\n> Would the curves in Figure 4(a) converge to similar spots if you run the algorithms long enough?\n\nIn Figure 4(a), the curves with a fixed N (i.e. $N=4,8,16$) are sublinear and thus will asymptote to different reward values, depending on the fixed $N$.
Instead, the iterative BOND with $n=2$ is a slower version of the iterative BOND with $n=4$ so it will eventually reach the same reward values. That is, unlike BOND with a fixed $N$, iterative BOND improves the rewards continuously.\\n\\n> Do the authors expect that the conclusions would be similar for much larger models?\\n\\nYes, we expect similar conclusions for larger models. Arguably, larger models could be more prone to reward over-optimization while BOND could mitigate this via the robust BoN distillation objective. \\n\\n> pi(x,y) makes it look like a joint distribution; do the authors mean pi(y|x)?\\n\\nYes, we mean the conditional distribution. We\\u2019ll improve the notation accordingly. \\n\\n\\n> 3e-6 means 0.000003; 3e^(-6) means 0.007. Which one are you referring to?\\n\\nThanks for spotting the typo, indeed we mean 3e-6.\\n\\nWe hope the above points clarify all of the Reviewer\\u2019s concerns. We are happy to expand them further if needed.\"}", "{\"metareview\": \"This paper proposes to align an LM based on the best-of-N distribution. This paper is backed up by formal mathematical characterizations (which may be useful for others working with these types of distributions), as well as solid empirical results. There were some concerns with regard to justifying particular choices (e.g., Jeffrey's divergences), but I think this paper is a clear accept.\", \"additional_comments_on_reviewer_discussion\": \"Several reviewers asked for comparison against vBoN (Amini et al. 2024). The authors, to their credit, performed comparison against vBoN during the rebuttal. However, these works are truly concurrent, and as such the comparison was not a factor in my decision.\"}", "{\"comment\": \"Thank you for your thorough review and positive feedback about our work. 
We would like to respond to the points raised.\n\n\nAbout the KL divergence estimates against the BoN distribution (Figure 7 of Appendix B.1), we agree that 32 MC samples introduce significant estimation variance. However, we remark that these are estimated with 32 *fresh* samples at each evaluation step. Each such estimate corresponds to a marker in Figure 2. As expected, there is quite some variance but we found the setup sufficiently informative to show clear trends and separations among the different baselines.\n\nTo complement our results, we have performed additional ablations as detailed in the general response. In particular, in Appendix B.3 we have ablated the impact of different Jeffreys divergences in the (iterative) J-BOND algorithm. The ablation complements the one of Appendix B.1, demonstrating that mixing backward and forward KL is also beneficial to achieve better reward/KL tradeoffs.\nWe thank the reviewer for providing us with the relevant work (Go et al. 2023) which we will cite in our manuscript. We acknowledge that alternative f-divergences can be used \u2013 we see this as an interesting future direction.\n\nFinally, we are happy to follow the Reviewer\u2019s suggestions on making the benefits of our approach clearer in the introduction.\"}", "{\"comment\": \"Thank you for the positive feedback and the constructive comments. We address your concerns below.\n\nAs mentioned in the general response, we have included additional ablations of J-BOND and direct comparisons w.r.t. the concurrent work of (Amini et al. 2024) in Appendix A.5.\", \"quoting_from_the_general_response\": \"In short, unlike J-BOND, vBoN requires fixing a regularization strength in advance (via the parameter $N$) and \u2013moreover\u2013 considers only backward KL minimization which in (Amini et al. 2024) is estimated using several MC samples.
Instead, J-BOND is designed to require only 2 anchor samples, optimizes Jeffreys divergence, and continuously optimizes rewards, achieving better reward/KL pareto solutions.\n\nRegarding comparisons with BoN, in Figure 7 (Appendix B.1) we compute actual KL distances between BOND policies and the BoN distribution. This can be done leveraging our analytical expression of Theorem 1 and using several MC samples to estimate quantiles. The goal is to show that our BOND objective (and its approximate gradients) are effective in terms of distilling the BoN distribution. When moving to Gemma fine tuning experiments, BoN sampling becomes intractable since it would require $N$ on the order of thousands (if not more) to achieve the same rewards as J-BOND and the other RLHF baselines.\", \"to_answer_your_questions\": \"**Q1**: The statement is rather qualitative and abstract, in that it assumes perfect convergence of the BOND operator. However, assuming applying BOND to $\pi_{ref}$ *exactly* outputs the Best-of-$N$($\pi_{ref}$) distribution, then applying BOND again to such a distribution we are turning it into Best-of-$N^2$($\pi_{ref}$), and so on\u2026\n\n**Q2,3&4**: We thank the reviewer for the questions. Indeed, it is quite interesting to analyze the rates of growth of KL, log quantiles, and rewards of J-BOND in comparison with the different baselines. First, we would like to summarize two well-known results about BoN sampling: \n1. The KL between BoN sampling and the reference policy is upper bounded by $\log(N) - (N-1)/N$, i.e. it grows logarithmically with $N$. [Beirami et al. 2024]\n2. The win rate (i.e. $p_\leq(y)$) between BoN sampling and the reference policy is $N/(N+1)$, *cf.* e.g. [Amini et al. 2024].\n\nThe above two points explain the trend curves associated with (iterative) J-BOND: \nFor J-BOND, each step t can be seen as distilling the Best-of-$N^t$ distribution (see **Q1**). Thus, the KL vs. steps $t$ grows as $O(\log(N^t)) = O(t)$, i.e. linearly, while the log quantiles grow sublinearly as $O(N^t/(N^t + 1))$. This is exactly visualized in Figures 3 and 4 and is an indication that, even with the several approximations made, J-BOND metrics are behaving according to theory. In terms of rewards, we believe there is no relationship that can be theoretically drawn, especially because J-BOND optimizes log quantiles and not the rewards directly. The difference between the reward trends of Figure 3 and 4 can simply be attributed to the different tasks (XSum vs. Gemma fine tuning) and reward models.\n\n[Beirami et al. 2024]: Theoretical guarantees on the best-of-n alignment policy\n\n**Q5**: We believe the Reward/KL tradeoff is quite a meaningful metric when it comes to reward overoptimization. Achieving better rewards without incurring too high a KL ensures the policy is not degenerating into exploitative behaviors of the reward model. This is what happens when, e.g. the tradeoff parameter $\beta$ of standard REINFORCE baselines is set too low. On the contrary, the fact that J-BOND displays a better and stable reward/KL pareto curve is a good indication of reward hacking being mitigated. Note that reward hacking is unavoidable after a certain KL due to the reward model becoming out-of-distribution. Finally, we note that J-BOND displaying such a better reward/KL tradeoff is quite interesting and encouraging, given that only log quantiles (i.e. generations\u2019 rank) are optimized by the algorithm and not the actual rewards.\n\nWe hope the above points clarify all of the reviewer\u2019s concerns.
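The two well-known Best-of-N facts quoted in this response (KL bounded by log(N) - (N-1)/N, and a win rate of N/(N+1)) can be sanity-checked numerically for a continuous reward distribution. This sketch is illustrative only and is independent of the paper's code:

```python
import math
import random

random.seed(0)

def bon_win_rate(N, n_mc=200_000):
    # Monte-Carlo estimate of P(best-of-N sample beats one reference sample);
    # for a continuous reward distribution the exact value is N / (N + 1).
    wins = 0
    for _ in range(n_mc):
        best = max(random.random() for _ in range(N))
        if best > random.random():
            wins += 1
    return wins / n_mc

def bon_kl(N):
    # For a continuous reward distribution, KL(Best-of-N || reference)
    # equals log(N) - (N - 1)/N, i.e. the cited bound holds with equality.
    return math.log(N) - (N - 1) / N

print(bon_win_rate(4))  # close to 4/5
print(bon_kl(4))        # log(4) - 3/4 ~= 0.636
```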
We are happy to expand them further if needed.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you to the authors for the additional experiments and explanations; this has addressed most of my concerns.\\n\\nHowever, I am still curious about a few issues: How does the performance of BOND compare to actual Best-of-N sampling? How does BOND compare to other algorithms in terms of Reward Overoptimization? How is BOND's learning affected by the choice of Reward Model, and so on? \\n\\nOf course, I understand that addressing these issues might require more time. If some of these questions are resolved, I will consider raising the score.\"}", "{\"summary\": \"The paper is essentially distilling inference-time best-of-N sampling into training time. Specifically, the authors propose to train the policy to match the best-of-N distribution (which is an analytical form derived by the authors). The distribution matching is done through minimizing a combination of forward and backward KL. The behavior of J-BOND appears better for reward vs. num steps as well as KL(policy || ref policy) vs. num steps, compared to REINFORCE with various beta values.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The motivation of distillation is great and this direction should be deeply explored. The paper is written carefully. The algorithm is clearly written. 
I checked the derivations and they seem correct so far.\\n\\nREINFORCE is a natural baseline and the authors have attempted multiple beta values for the baseline.\", \"weaknesses\": \"Have the authors tried reward shaping techniques for RLHF baseline, e.g., making the reward values more peaky \\u2013 either very large or very small?\\n\\nI\\u2019d appreciate a more comprehensive discussion on how much the authors expect this technique to benefit downstream tasks.\\n\\nIt\\u2019ll be great if the authors can include more discussion on whether the baseline B is important or not in the algorithm.\\n\\nWhat ablations on beta and gamma in Algorithms 2 (balancing among forward KL, backward KL, additional regularization) would likely benefit downstream tasks more? It's still unclear to me why we want to put an equal/large weight on backward KL. More motivation would be nice.\", \"questions\": \"Would the curves in Figure 4(a) converge to similar spots if you run the algorithms long enough?\\n\\nDo the authors expect that the conclusions would be similar for much larger models?\", \"writing\": [\"pi(x,y) makes it look like a joint distribution; do the authors mean pi(y|x)?\", \"3e-6 means 0.000003; 3e^(-6) means 0.007. Which one are you referring to?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the positive feedback and constructive comments.\\n\\nFollowing your suggestions we have performed additional ablations and evaluations, as detailed in the general response. \\nIn particular, we have added: \\n- Comparisons (both theoretical and experimental) with the concurrent work of Amini et al. 
(2024),\\n- Additional ablations demonstrating the impact of Jeffreys divergence (as suggested, with smoother variations of $\\\\beta$) in J-BOND,\\n- Downstream evaluations of the Gemma 7B fine tuned policies, in terms of side-by-side comparisons and popular benchmark scores.\\n\\nWe refer to the general response for an overview on each of the above points.\\n\\nWe hope these address the main concerns raised, but we are happy to expand them further if needed.\"}", "{\"comment\": \"Thank you for being responsive and for the follow-up questions.\\n\\nThe reason why we did not include Best-of-N sampling in our Gemma fine tuning experiments is simply because it would require a prohibitive large N to achieve comparable performance. Indeed, the considered RLHF baselines achieve best performance at a KL >> 50. The KL of BoN sampling grows logarithmically with N and thus \\u2013 although BoN should achieve a better reward/KL tradeoff \\u2013 it is impractical to run for achieving such high rewards.\\nThat said, because BoN sampling is the target distribution of BOND, we have compared against it in our XSum experiments. In Figure 7 (Appendix B.1) we compute actual KL distances between BOND policies and the BoN distribution. This can be done leveraging our analytical expression of Theorem 1 and using several MC samples to estimate quantiles. The decaying KL curves show that our BOND objective and its approximate gradients, are effective in terms of distilling the BoN distribution.\\n\\nIn terms of reward overoptimization, we believe the Reward/KL tradeoff is quite a meaningful metric to compare different fine tuning algorithms: achieving better rewards without incurring a too high KL ensures the policy is not degenerating into exploitative behaviors of the reward model. The fact that J-BOND displays a better and stable reward/KL pareto curve is a good indication of reward hacking being mitigated. 
We believe this is quite interesting and encouraging, given that only log quantiles (i.e. generations\\u2019 rank) are optimized by the algorithm and not the actual rewards.\\nThe above considerations are ultimately validated by the end-to-end performance on side-by-side and zero-shot evals. J-BOND achieving the highest peak performance across the board is an effect of extracting the most out of the used reward model (note that reward hacking is unavoidable after a certain KL due to the reward model becoming out-of-distribution).\\n\\nFinally, BOND has absolutely minimal sensitivity to the used reward model (unlike other methods). This is because the RM is only used to compute ranks across generations, while its absolute values are unused. This makes J-BOND (and its hyperparameters) absolutely agnostic to whatever is the rewards\\u2019 signal range, shifts, and steepness.\\n\\nWe hope the above answers resolve some of the remaining Reviewer\\u2019s concerns.\"}", "{\"comment\": \"We thank the Reviewer for the detailed review and the positive and constructive comments.\\n\\nWe would like to clarify that BOND and iterative BOND are the two main building blocks of J-BOND, which is the ultimate algorithm we recommend for RLHF fine tuning. This is because BOND, as presented in Section 3 has the following main sources of complexity:\\n1. requires prescribing in advance to a fixed value of N\\n2. it does not scale with large N since the forward KL component requires sampling N times from the reference policy)\\n3. requires several per-prompt MC samples for accurate quantile estimation.\\n\\nJ-BOND overcomes challenges (1) and (2) by employing the iterative BOND approach. In addition, it addresses challenge (3) by the crude quantile approximation based on 2 anchor samples.\\n\\n\\nThe challenges above can also serve to illustrate the differences between J-BOND and the concurrent vBoN approach by (Amini et al. 
2024): vBoN does not suffer from (2), since it considers only backward KL divergence, but it crucially suffers from (1) and (3). In particular, (3) makes vBoN not viable for our fine-tuning experiments with conditional generations.\\nAs mentioned in the general response, we have clarified such points in the paper and performed additional ablations to compare J-BOND with a scalable version of vBoN with the same crude quantile estimation (see Appendix A.5 in the updated manuscript). Compared to vBoN, J-BOND does not require prescribing a fixed N in advance and displays a better reward/KL tradeoff. This is attributed to its iterative approach and to the additional forward KL minimization component. \\n\\nFinally, we thank the reviewer for the spotted inconsistencies \\u2013 we will update the paper accordingly. We are planning to release the J-BOND code as well.\"}", "{\"comment\": \"Thank you for the response and the additional ablations (e.g., Appendix B.3). I'm increasing my score to 6 but it'll be great if the paper can include a more detailed discussion on the relationship between reward shaping techniques and BOND (and pros and cons), and whether the two can work together.\"}", "{\"summary\": \"The paper introduces Best-of-N Distillation (BOND), a novel alignment tuning algorithm designed to emulate the Best-of-N sampling method in a more computationally efficient manner. BOND aims to achieve the same high-quality output as Best-of-N sampling without the inference-time computational overhead, by aligning model outputs to match the distribution of the Best-of-N candidates.\\nIn addition, to ensure stability and scalability, the authors introduce an iterative approximation strategy that operates effectively even with a minimal sample size (e.g., 2 or 4). 
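The crude Monte-Carlo quantile estimation discussed in these threads (the BOND reward is the log quantile of a generation's reward under the reference policy, estimated from only a couple of anchor samples) might look like the following sketch. This is not the paper's code; the smoothing constant that avoids log(0) is our own assumption.

```python
import math

def log_quantile_reward(r_y, anchor_rewards):
    # Crude MC estimate of the log quantile log p_<=(y): the log of the
    # fraction of anchor (reference) samples whose reward is <= r(y).
    # Note only the rank of r_y matters, not its absolute value.
    n = len(anchor_rewards)
    count = sum(1 for r in anchor_rewards if r <= r_y)
    # Additive smoothing (our choice) keeps the estimate finite when count == 0.
    return math.log((count + 0.5) / (n + 1))

print(log_quantile_reward(0.9, [0.2, 0.7]))  # high quantile -> reward close to 0
```

With 2 anchor samples, as in J-BOND, the estimate takes only three possible values, which is why the threads above call it a "crude" approximation.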
\nFurther, based on the two loss derivations targeting the forward and reverse KL respectively, the authors leverage the Jeffreys divergence to propose J-BOND, an enhanced algorithm incorporating iterative distribution matching with an exponential moving average (EMA) anchor. J-BOND demonstrates stable training and a superior KL-reward trade-off in experiments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-structured and clearly presents its methodology, with detailed explanations and algorithms that allow readers to follow the progression. From iterative BOND to the addition of KL regularization in Sections 4 and 5, the additional experimental results effectively support these methodological advancements.\nBOND is notable for its originality, offering a practical and computationally efficient alternative to traditional RLHF that achieves a superior KL-reward balance without requiring commitment to a specific regularization level. The work is significant in its potential impact on RLHF practices, as it provides a scalable solution for optimizing performance and efficiency while minimizing trade-offs between KL divergence from the reference distribution and reward maximization.\", \"weaknesses\": \"The paper relies heavily on the Jeffreys divergence without sufficient comparative analysis against alternative divergence metrics. The mode-covering and mode-seeking behaviors the paper mentions are typically only illustrated in low dimensions, e.g. for multimodal distributions in one dimension. An inclusion of other divergence types, especially in the iterative stages, could offer clearer insights into the unique advantages of Jeffreys divergence. Further, relevant literature on divergence measures in alignment tuning should also be cited to contextualize this choice.\n- Go, Dongyoung, et al.
\\\"Aligning language models with preferences through f-divergence minimization.\\\" Proceedings of the 40th International Conference on Machine Learning. 2023.\\n\\nAs the paper discusses the method\\u2019s efficiency, the paper would benefit from explicit comparison of the computational cost saved by BOND relative to traditional Best-of-N sampling, or comparisons with sampling approaches used in RLHF. This would clarify BOND\\u2019s potential advantages in real-world applications.\\n\\nAdditionally, while the paper addresses the challenge of sampling size N through iterative approximation, showing practical advantages like non-saturation compared to non-iterative BOND, this helpful randomness raised in iterative BOND is solely introduced by approximation randomness, which lacks controls or specific directions. This calls into question whether the proposed algorithm genuinely achieves a distilled $\\\\text{Bo}N^M$ distribution. \\nThe substantial difference in $r(y)$ between iterative and non-iterative BOND in Figure3 suggests a potential vulnerability to reward hacking, as discussed in Gao et al.\\n- Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n\\nThe introduction combines related works and problem setup, which could be structured more effectively. Detailed discussions on RLHF and Best-of-N would be more suitable in a separate related works section or could be incorporated into the problem setup. In the introduction, it would be clearer to emphasize the limitations of existing methods and highlight the advantages of the proposed approach over current methods.\", \"questions\": \"In Section 4.2, the authors utilize 32 Monte Carlo samples to estimate the backward and forward KL divergences between the training policy and the reference distribution. 
Given the high dimensionality of these distributions, this sample size seems insufficient to reliably capture the divergence and may introduce substantial estimation variance. A sensitivity analysis showing how the estimator's variance changes with an increasing number of Monte Carlo samples would strengthen the results. Alternatively, using a larger sample size for these estimates could enhance the reliability of the reported divergences.\\n\\nWhile BOND\\u2019s benefits, such as improved KL/reward trade-offs and dynamic regularization, are discussed in the body of the paper, they are not clearly summarized in the introduction or abstract. A brief overview in these sections would effectively communicate BOND\\u2019s main advantages over traditional RLHF approaches, aiding readers in understanding its unique contributions and practical value.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response\", \"comment\": \"We thank all the Reviewers for their valuable reviews and the positive feedback about our work.\\n\\nWhile we are responding to each reviewer individually, we have followed common suggestions and complemented our paper with additional results.\", \"in_particular\": [\"We have compared J-BOND with the concurrent vBoN approach of (Amini et al. 2024) in Appendix A.5. First, we have highlighted the main differences between the two approaches and then compared them experimentally on Gemma 7B. In short, unlike J-BOND but similar to standard RLHF algorithms, vBoN requires fixing a regularization strength in advance (via the parameter $N$) and \\u2013moreover\\u2013 considers only backward KL minimization, which in (Amini et al. 2024) is estimated using several MC samples. 
Instead, J-BOND is designed to require only 2 anchor samples, optimizes the Jeffreys divergence, and continuously optimizes rewards, achieving better reward/KL Pareto solutions.\", \"We have performed additional ablations to demonstrate the impact of using different Jeffreys divergences in J-BOND. These are reported in Appendix B.3 and complement the Jeffreys divergence ablations of Appendix B.1. Utilizing Jeffreys divergence as the objective is not only beneficial for optimizing both forward and backward KL divergences (as shown in Appendix B.1) but also enables J-BOND to achieve better reward/KL tradeoffs.\", \"In Appendix B.5, we have complemented our experiment with downstream evaluations of the Gemma 7B policies fine-tuned by J-BOND and our standard RLHF (REINFORCE) baseline. More specifically, we reported side-by-side comparisons and zero-shot performance on several popular benchmarks. J-BOND achieves higher quality and significantly better scores across the board.\", \"We are happy to provide further clarifications should there be any question.\"]}", "{\"summary\": \"This article models the alignment of LLMs as a distribution-matching problem. Specifically, it considers the distribution induced by Best-of-N sampling as the target distribution and optimizes the Jeffreys divergence with respect to it for balancing the characteristics of forward & backward KL-based optimization. Additionally, this work derives an iterative form, updating the reference model during training.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"1. This work implements the LLM alignment problem through Best-of-N distillation, which can be a sound direction for the development of algorithms in this field.\\n2. This work formulates and discusses the BoN distribution and its relationship with the general RLHF target. \\n3. 
This work proposes to utilize Jeffreys divergence to balance the mode-covering and mode-seeking behavior introduced by forward- and backward-KL optimization.\\n4. This work further integrates their method with EMA techniques and proposes an iterative version.\", \"weaknesses\": \"1. My biggest concern is the absence of direct comparison with closely related works http://arxiv.org/abs/2407.06057, http://arxiv.org/abs/2406.00832. It would be much more convincing if there were a clear comparison like Figure 1 in http://arxiv.org/abs/2407.06057, rather than a simple baseline REINFORCE presented in Figure 7 in this work.\\n2. More discussion about why the BoN distribution is used as the target could be included.\\n3. Other possible additional analyses are discussed in the Questions.\", \"questions\": \"#### Question 1:\\nLine 298, why is the proposed approach \\\"equivalently to distilling a Best-of-$N^M$ of the initial distribution $\\\\pi_{ref}$\\\"? Is this a qualitative or quantitative statement?\\n#### Question 2:\\nFigure 4 seems to show a linear relationship between # steps and r(y) for iterative BOND.\\nFigure 5 seems to show a different kind of trend between $r(y)$ and # steps or $KL(\\\\pi||\\\\pi_{ref})$.\\nFigures 6 and 7 present a log-like trend between the KL and reward similar to the REINFORCE algorithm.\", \"a_similar_trend_is_also_observed_in_http\": \"//arxiv.org/abs/2406.00832 and http://arxiv.org/abs/2407.06057.\", \"as_discussed_in_http\": \"//arxiv.org/abs/2204.05862 and my experience, there can be an approximately linear relationship between $\\\\sqrt{D_{KL}}$ and $r$ for BoN sampling.\\nIt would be interesting if the author could provide some empirical or theoretical intuition about such a relationship.\\nDoes this indicate that even though performing a BoN distribution match, it is still more similar to the general policy-gradient RL algorithm (which may try to match another distribution)?\\n#### Question 3:\\nWhy is there an approximately linear
relationship between #steps and KL, as presented in Figures 5 and 6 for the BOND algorithm?\\nFor the REINFORCE algorithm, it seems to follow quite a different trend.\\n#### Question 4:\\nFigure 4 presents consistent trending between KL and $\\\\log p_{\\\\le}(y)$ across varying $N$ and algorithms, which is quite interesting.\\nIs there any explanation for this phenomenon?\\n#### Question 5:\\nIs there any analysis or comparison of the reward overoptimization of this algorithm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0sr8bS4S2H
AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant
[ "Chengyou Jia", "Minnan Luo", "Zhuohang Dang", "Qiushi Sun", "Fangzhi Xu", "Junlin Hu", "Tianbao Xie", "Zhiyong Wu" ]
Digital agents capable of automating complex computer tasks have attracted considerable attention due to their immense potential to enhance human-computer interaction. However, existing agent methods reveal deficiencies in their generalization and specialization capabilities, especially in handling open-ended computer tasks in real-world environments. Inspired by the rich functionality of the App store, we present AgentStore, a scalable platform designed to dynamically integrate heterogeneous agents for automating computer tasks. AgentStore empowers users to integrate third-party agents, allowing the system to continuously enrich its capabilities and adapt to rapidly evolving operating systems. Additionally, we propose a novel core MetaAgent with the AgentToken strategy to efficiently manage diverse agents and utilize their specialized and generalist abilities for both domain-specific and system-wide tasks. Extensive experiments on challenging benchmarks demonstrate that AgentStore surpasses the limitations of previous systems with narrow capabilities, particularly achieving a significant improvement from 11.21\% to 23.85\% on the OSWorld benchmark, more than doubling the previous results. Comprehensive quantitative and qualitative results further demonstrate AgentStore's ability to enhance agent systems in both generalization and specialization, underscoring its potential for developing the specialized generalist computer assistant. All our codes will be made publicly available.
[ "human-computer interactions", "multi-agent", "multimodal learning" ]
https://openreview.net/pdf?id=0sr8bS4S2H
https://openreview.net/forum?id=0sr8bS4S2H
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gG889bMNWw", "alqhUT8MzX", "Y6bzFGSiyc", "XPeGrrY1an", "7M47fc9f3i" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1729867988715, 1729293142041, 1730237462696, 1730516332133, 1732264760261 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3595/Reviewer_6VM1" ], [ "ICLR.cc/2025/Conference/Submission3595/Reviewer_m1su" ], [ "ICLR.cc/2025/Conference/Submission3595/Reviewer_JEbG" ], [ "ICLR.cc/2025/Conference/Submission3595/Reviewer_tQJj" ], [ "ICLR.cc/2025/Conference/Submission3595/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces AgentStore, a platform designed to integrate and manage a wide variety of digital agents capable of performing specialized tasks on computer systems. This system addresses the limitations of current general-purpose agents, which struggle with complex, open-ended tasks, by using a flexible, scalable approach similar to an app store. AgentStore includes a core MetaAgent that uses a novel AgentToken strategy to dynamically select and manage suitable agents for specific tasks, allowing for collaboration between specialized agents. Experiments show AgentStore's effectiveness on the OSWorld benchmark, significantly outperforming previous systems by more than doubling their success rates on complex tasks. This advancement highlights the potential of AgentStore in developing versatile, specialized assistant systems that improve both user experience and task automation across different environments\\u200b.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. AgentStore enables easy integration of various specialized agents, similar to an app store, allowing the platform to continuously expand its capabilities. This adaptability makes it suitable for handling a broad range of tasks in evolving operating system environments.\\n\\n2. 
The MetaAgent with AgentToken routes tasks to the most suitable agents and can manage collaborative tasks involving multiple agents. This approach significantly enhances task handling by using minimal resources and avoiding frequent model retraining.\\n\\n3. AgentStore achieves a marked improvement on challenging benchmarks like OSWorld, doubling the success rates of prior systems. This demonstrates its capability to handle complex tasks across different software and application domains effectively.\", \"weaknesses\": \"1. The authors claim that their methods \\\"double the performance of previous systems\\\". However, this comparison is not entirely fair, as their approach employs a significantly larger number of agents and incurs substantially higher memory and costs. The paper does not address these additional costs, nor does it include experiments comparing baselines that utilize multiple agents, which would provide a more accurate comparison with the proposed method. I suggest that the authors test multi-agent baselines that use the same group of agents mentioned in the paper.\\n\\n2. While the authors describe their AgentStore as a \\\"generalist\\\" assistant, the evaluation lacks sufficient breadth. The method could be tested on one additional benchmark such as WebArena or Mind2Web to demonstrate generalizability. Both APPAgent and OSWorld-Multi involve fewer than 100 tasks, which is a relatively small number and could allow for manual tuning of the agents to game the evaluation system.\\n\\n3. The presentation of the paper lacks rigor. The introduction uses overly fancy language and falls short of the scientific rigor expected, including imprecise terms such as \\\"stunning results.\\\" Additionally, in Figure 2, the \\\"AgentPool\\\" is illustrated with agents like Sheet Agent, Slide Agent, Web Agent, etc., which are not clearly defined in the paper. 
Please provide an explanation of what each of these agents is and how they are built in the main text or appendix, or revise the figure to present a more accurate representation.\\n\\n4. The related work section is not comprehensive, particularly regarding multi-agent systems. The authors state that previous works \\\"use a fixed number of agents with predefined roles\\\" and that \\\"their agents are usually homogeneous,\\\" but this is inaccurate for many studies, such as \\\"Internet of Agents\\\" and \\\"AutoGen\\\". A review of classical papers in multi-agent systems would also reveal that many incorporate heterogeneous agents, a discussion that the authors have entirely overlooked.\", \"questions\": \"1. I noticed that the number of tasks in the AppAgent paper is higher than those discussed in your paper. Additionally, the accuracy in your paper is reported in increments of \\\"20%,\\\" which makes it less convincing, as I didn't see this in the original paper. Did you select a subset of tasks? Please correct me if I'm wrong.\\n\\n2. Could you make the figures more clear? Currently, there are too many elements, especially in Figure 2, making the figures look cluttered.\\n\\n3. For AgentMatch, you mention a \\\"ground truth set of agents required for successful task completion.\\\" What if multiple different sets could successfully complete the tasks, making it so there's no single ground truth?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"AgentStore is a scalable platform designed to integrate diverse agents to automate operating system tasks dynamically. Through its MetaAgent and AgentToken modules, AgentStore achieves state-of-the-art results on the OSWorld benchmark by enhancing adaptability and task execution efficiency.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. I like the scope of this paper. 
It is necessary to discuss how to scale up by incorporating more evolving agents into one platform.\\n\\n2. The experiments show the SoTA performances in the OSWorld Benchmark, and the performance is strong compared with other baselines. \\n\\n3. The figures are interesting. And the claims are straightforward.\", \"weaknesses\": \"1. I strongly suggest that the authors include at least one additional benchmark. Since the current OSWorld benchmark is relatively new, achieving good results on it may not be fully convincing. Most importantly, it seems that there are many similar benchmarks within the same scope, and incorporating several of them would provide a more comprehensive evaluation.\\n\\n2. Building on the first weakness, it would be helpful if the authors could conduct experiments that explore the generalizability of the model across different benchmarks.\\n\\n3. Is the main contribution of this paper training a model that orchestrates different agents, and prior to this, do you also introduce agents in the AgentStore? If so, I believe the most relevant baselines would be models that train to select APIs or tools, which can be analogous to selecting agents (as they function similarly). It would be beneficial to compare your results with these existing methods.\\n\\n4. Expanding on the 3, I suggest supplementing the experiments using alternative methods to orchestrate the agents within AgentStore. For example, you could compare against RL-based approaches such as [1] GPTSwarm, which orchestrates agents using graphs, or model-based methods like [2] Toolformer, which selects tools from a trained model, and [3] LangChain's ICL-based tool-calling agent.\\n\\n5. 
The time or cost analysis of training and inference is missing and would provide valuable insights.\", \"references\": \"[1] GPTSwarm: Language Agents as Optimizable Graphs.\\\" ICML 2024\\n\\n[2] Toolformer: Language models can teach themselves to use tools.\\\" NeurIPS 2024\\n\\n[3] LangChain: https://python.langchain.com/v0.1/docs/modules/agents/agent_types/tool_calling/\", \"questions\": \"1. Could you provide more details about the term *hybrid* in Table 1? There are no related explanations in the paper, which makes it unclear for the reviewer to understand the exact meaning of *hybrid* in this context.\\n\\n2. Is *Hash Manager* a commonly used term in this context? The connection between your paper and *Hash_RC6*, as mentioned, is unclear. Additionally, the statement *\\\"our method narrows the management scope to a few selected agents, leaving ample context space for detailed documentation of these fixed agents. This design shares similarities with hashing methods\\\"* is unclear and could benefit from further clarification.\\n\\nIt would be appreciated if the authors addressed all of the weaknesses and questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concern for this paper.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method for managing and deploying multiple different agents to achieve computer control tasks. The approach collects a group of agents, AgentStore, each with different capabilities and domain specialties. Each agent has an associated document describing the agent.\\n\\nIn order to deploy the correct agent for a given task, the paper uses \\u201cAgentToken\\u201d, which is a trained embedding for selecting an appropriate agent to deploy for the task. For more complex tasks, the AgentManager can select up to k tasks. The paper demonstrates SoTA performance on OSWorld. 
They also release a dataset, based on OSWorld for tasks that require multiple agents.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper proposes an interesting idea of combining together multiple agents to solve complex tasks. There is a clearly significant engineering effort that went into creating this work, i.e. to create nearly twenty different agents and documents from scratch. The approach achieves impressive performance on a very challenging benchmark.\", \"weaknesses\": [\"The paper may overstate the difficulty of agent selection and potentially understates how much the success depends on designing customized agents for the applications specifically in the benchmark. While MetaAgent with AgentToken is presented as a main contribution, the paper does not conclusively demonstrate its superiority over an ICL baseline.\", \"The reported 49.63% accuracy for ICL with GPT-4o in agent routing seems unusually low. This appears to stem from implementing a simplistic ICL baseline that inefficiently includes entire agent documents in the prompt (as shown in the appendix, though the baseline implementation should be better detailed in the main text). 
A fair comparison would require:\", \"Testing ICL with more concise capability descriptions\", \"Including few-shot examples\", \"Providing concrete examples demonstrating where ICL fails compared to AgentToken\", \"The system's current implementation raises significant scalability concerns:\", \"Custom agents and documentation were developed specifically for each app in the OSWorld dataset\", \"Scaling requires substantial manual effort for each new application:\", \"Collecting demonstrations\", \"Implementing new agents\", \"Writing documentation\", \"The strong performance appears largely attributable to carefully engineered custom agents rather than a scalable automated approach\", \"True scalability would require automated agent generation\", \"The paper lacks sufficient detail on the nearly twenty custom different agents (excluding existing ones e.g., Friday) used in the system. Without these details, it is difficult to assess the effectiveness of the approach. Most concerning is the unclear origin of training demonstrations and their potential overlap with OSWorld test tasks. 
The paper should:\", \"Specify the source of demonstrations\", \"Detail measures taken to prevent data leakage between training and test sets\", \"Discuss how generalization is ensured\", \"There are spelling and grammar errors.\", \"In Figure 1,\\u201dSildeAgent specialize\\u2026\\u201d should be \\u201cSlideAgent specializes\\u2026\\u201d,\", \"In Figure 1, \\u201care required to collaborate system-wide\\u201d should be \\u201care required to collaborate on system-wide\\u2026\\u201d\", \"In the prompts in the Appendix, \\u201cDemostation\\u201d should be \\u201cDemonstration\\u201d\", \"In the prompts in the Appendix, \\u201cTemplete\\u201d should be \\u201cTemplate\\u201d\"], \"questions\": \"Please see weakness for points of clarification desired.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents AgentStore, which allows integrating and dynamically use a range of domain-specific agents (the difference is mainly the base model, how the model is prompted, the action/observation space for each agents).\\nThey train a model, named MetaAgent, to dynamically select the agents and distribute the tasks given current context. \\nPerformance is verified OsWorld.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea of effectively integrating and dynamically use a range of domain-specific agents is interesting.\", \"weaknesses\": \"1. Vague Agent Descriptions: The descriptions of agents in the AgentPool are too insufficient to understand what each agent actually are. The only available information seems to be from Table 6, which provides only names for many agents. What distinguishes each agent, such as SheetAgent from SlideAgent? Is it simply their prompts, or are there other differences?\\n2. 
Over-engineered, Overfitting to OSWorld: Many agents in Table 6 appear optimized for tasks specific to OSWorld, raising doubts about their general applicability. Evaluating the system against broader benchmarks, like GAIA or SWE-Bench, would strengthen the claim of generalist capabilities.\\n3. Scalability Concerns: The claimed scalability of this system is unclear. Will there be contributors creating specialized agents? And can this platform effectively integrate diverse agents? Table 6 shows that 18 of the 20 agents are authored by the team. So it's unclear if this system design can effectively integrate diverse agents found in the wild.\\n4. Missing Key Baselines in AgentToken: The study only presents AgentToken training with tunable embedding layers. It would be valuable to compare performance and efficiency when the entire model is tunable to understand the trade-offs better.\", \"questions\": \"1. How is the AgentStore(FT) baseline constructed? Why is it performing worse than AT?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thanks to all the reviewers for their feedback. Due to time constraints, additional results and analyses require more time to complete. We will continue to improve our work\\uff01\"}" ] }
0spR7wDwBh
A grid world agent with favorable inductive biases
[ "Patrick Hammer", "Peter Isaev", "Andre N. Costa", "Ali Beikmohammadi", "Sindri Magnússon" ]
We present a novel experiential learning agent with causally-informed intrinsic reward that is capable of learning sequential and causal dependencies in a robust and data-efficient way within grid world environments. After reflecting on state-of-the-art Deep Reinforcement Learning algorithms, we provide a relevant discussion of common techniques as well as our own systematic comparison within multiple grid world environments. Additionally, we investigate the conditions and mechanisms leading to data-efficient learning and analyze relevant inductive biases that our agent utilizes to effectively learn causal knowledge and to plan for rewarding future states of greatest expected return.
[ "intrinsic rewards", "inductive biases", "planning", "uncertainty", "deep reinforcement learning", "reinforcement learning" ]
Reject
https://openreview.net/pdf?id=0spR7wDwBh
https://openreview.net/forum?id=0spR7wDwBh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zyQoDdupq3", "zW62Nncfx3", "zVtXdWJVf1", "zNWRymPYPK", "w2whai7k8K", "spK0UGEZqU", "rWaktpyo38", "qOvWfiTMkb", "q1AGKlRfbz", "j0iVayZ53i", "he8lqBjlVA", "ha8lOgUXfg", "dMfPFcj66j", "cIMTEXGnIC", "cEsBUWKQzr", "bvpD9PDdvc", "aSKSyz3BXz", "a3q0c2EHlW", "Z7v1CyAtW3", "Y2mm2yuapC", "XyIjYVpZMa", "XkcXogzJI3", "VxmE3ncZo2", "VXDNokMSbE", "VIfhKU6DUy", "VHKYxWDIR8", "UMA5iBeLUX", "SsUmWKX8D5", "RpDbl9uMUm", "MqJwgM0Ehg", "MjbfZ0wVQj", "MaxUYg2fB7", "MGnYMscK4K", "LWjaiRu21D", "L308qdGf1o", "KRYIpgGBHX", "K3QDQwk40i", "JWJsHnWG8P", "IGA4Vpdodo", "GPVUdqeojO", "FhDSr3ZWRr", "FDvaPUIg9z", "EUl81BkFio", "EBVUVAwKpC", "B8lY4Nnyld", "AJaLG4M6kj", "9ce5s9nNsq", "8wRHuusZoM", "3Fx8voM5IA", "2417c9Oz6c", "0VwqozKu88" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1733095680987, 1732611573357, 1732676674588, 1731610229093, 1733028662220, 1731677100208, 1731682995648, 1732676363676, 1732810234659, 1731609129664, 1732489046596, 1732966244608, 1732553224886, 1732660645902, 1733118367115, 
1732885147751, 1733082300812, 1732589082100, 1732761814040, 1731679233923, 1732745432288, 1737523613931, 1733089522573, 1730681109980, 1730663163451, 1730721847362, 1732494856186, 1732173291703, 1732676749389, 1732694112072, 1732734313981, 1731604209578, 1732735765143, 1730487890420, 1732676475621, 1732703150796, 1732738401171, 1732678016682, 1732676914960, 1733223688882, 1732752187799, 1731555651229, 1730213951508, 1732929002000, 1732736466098, 1732735275608, 1732676816320, 1732589762835, 1733029189829, 1732763562326, 1734581960903 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_Saa1" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_ea49" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_ea49" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_WhYe" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_QixP" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_ea49" ], [ 
"ICLR.cc/2025/Conference/Submission4017/Reviewer_Saa1" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_QixP" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_Saa1" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_WhYe" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_WhYe" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_QixP" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_WhYe" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_yvs6" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_euZ6" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Reviewer_QixP" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Authors" ], [ "ICLR.cc/2025/Conference/Submission4017/Area_Chair_BCJd" ] ], "structured_content_str": [ "{\"comment\": \"> Again the reviewer wants to mention that the revisions have significantly improved the original version of the paper, however some of the main points still remain:\\n\\nThank you for your thoughtful feedback and for highlighting the improvements. 
Your insights and active engagement have been instrumental in refining our work!

> It is not even clear if the authors average over different algorithms here (which they should not). Given that the NACE times are also given per seed and per environment, it is not clear for the reader if and how the different times should be compared.

We regret that this appendix section is not yet fully clear. If the paper is accepted, we promise to address this in the camera-ready version. Besides structural improvements to Appendix F, we will include the individual runtime data for each technique, utilizing the detailed runtime information already available in the OtherLogs folder provided in our contribution's zip file. This clarification will ensure a more precise runtime comparison and better alignment with your suggestions.

---

Dear authors,

thank you for addressing my comments.

As other reviewers have highlighted, I don't see a revised version of the paper addressing our concerns. Even if your work is valuable, for future submissions I suggest you pay more attention to presentation and notation consistency, which have been weak points of your paper.

For these reasons, I won't change my rating - marginally below the acceptance threshold.

---

Thank you for your valuable feedback, which has greatly influenced the revisions to our paper. We have addressed several shortcomings of the initial submission, leading to significant improvements:

1. Updated Results: The noted tables with new results are now included in Section 4.4, along with an extended discussion. This section is fully dedicated to comparing our approach to Dreamer in the context of model-based RL.

2. Improved Motivation: The introduction has been revised to better motivate our approach and clarify its positioning within related works.

3. Broader Generalization: We have expanded on how our ideas could generalize beyond grid worlds in Section 4.4.

We plan further revisions before the deadline and welcome additional feedback.

---

Addressed questions:

Q1: The key differences lie in NACE's compound state representation, which enables it to handle the high-dimensional state space of Minigrid efficiently, the particulars of how uncertainty is estimated, and the agent's intrinsic motivation to seek out the states to which its knowledge applies the least.

Q2: In the state and rule representation section, we mention that the state is an array of cell values. As suggested, we will formalize and clarify this further. In the Minigrid environment, this array is provided for the observation window, and NACE integrates it into its "mental map", which includes both observed cell values and cell values currently outside its view.

Q3: NACE obtains the expected returns for the trajectories within its planning horizon, as specified in the Planner description in Section 4.4, which makes it choose one trajectory over another. Only the first action of the obtained plan is executed; then the cycle repeats.

Addressed weaknesses:

W1: Thank you for this observation; we will add more information about each bias in relation to the techniques used.

W2: The problems we addressed can be modeled as POMDPs. Partial observability was considered in most DRL techniques via LSTMs, as they were previously applied to partially observable Minigrid environments, enabling fair comparison. In NACE, partial observability is handled through its explicit map maintenance.
We believe that addressing partial observability effectively remains an open research question, rather than an inherent limitation of NACE or our comparisons.

W3: Thank you for pointing this out; we will correct this notational oversight to improve clarity.

W4.1: We agree and have explicitly stated that NACE handles grid worlds, which remain challenging for RL methods and have many variations. Extending NACE to work outside of grid worlds (such as to continuous environments) would be an interesting direction for further research, as mentioned in our future work.

W4.2: To our knowledge, RMax depends on an explicit tabular model with discrete, non-structural states, which does not scale to high-dimensional spaces like Minigrid's 2D-array-based observation window. Current literature indicates that DRL methods with intrinsic rewards are SOTA in Minigrid, which is why these methods have been the basis of our comparison, along with more basic DRL algorithms. However, we acknowledge that RMax's optimistic value estimates could be valuable if extended to a DRL model. If you are aware of an RMax extension that could apply here, we would be glad to reference it in our final submission.

Thank you for your valuable review. We hope that our responses clarify the design and contributions of NACE, as well as address any potential misunderstandings regarding its capabilities and limitations. We appreciate your thorough assessment and would be happy to engage further on any remaining questions.

---

Thank you for having re-assessed our paper and for further engaging in valuable discussions!

We appreciate your suggestion to explore extensions beyond grid worlds, which aligns closely with our ongoing work. Specifically, we have been developing a ROS 2 node to adapt our method to the continuous environments faced by mobile robots, which involves:
- Using a SLAM algorithm to maintain an occupancy grid map.
- Downsampling the grid map to match the robot's dimensions upon each map update.
- Applying YOLO for object detection on the camera feed and mapping bounding boxes with depth information to grid cells.
- Providing NACE with this semantic grid so that it operates similarly to its behavior in simulation.
- Sending NACE's movement actions to Nav2 for navigation and MoveIt2 for pick-and-drop operations.

This integration enables NACE to handle navigation, exploration, and pick-and-place operations in real-world scenarios.

We wonder if this extension meets your expectations. In case it does, could we incorporate these results into the appendix for the camera-ready submission, including a link to a working demo?

---

We greatly appreciate the time and effort you invested in reviewing our work. We address your comments and suggestions, starting with your questions:

Q1: Thank you for highlighting this. We plan to include the hyperparameter choices in the final submission. For most of the baseline algorithms, we adopted hyperparameters from their original repositories to ensure fair comparisons.

Q2: We agree that this requires further discussion. Both theoretical considerations (e.g., NACE rules not capturing the statistics of the structure of generated environments) and practical factors (e.g., the length of the planning horizon, which will benefit from a more efficient implementation, and the choice of exploration parameters such as the truth expectation threshold) contribute to NACE's inability to consistently find the optimal policy, which we will elaborate on in the final version.

Addressing weaknesses:

W1: Agreed.
We will expand this section to provide more detailed explanations of each inductive bias and of how, in particular, it is realized in the technique.

W2: We agree and will incorporate a diagram and additional examples in Section 4.3 to clarify the state and rule representations.

W3: Thank you for this suggestion. We will include a causal rule in the final version to illustrate NACE's structure more effectively.

Thank you for your thoughtful review and valuable insights. We appreciate your engagement with our work and look forward to incorporating your suggestions to enhance the final version of our paper.

---

We appreciate your review and the feedback provided! Below, we respond to your observations and outline adjustments.

Questions:

Q1: The Curiosity Model is a key component of the planner in NACE. It operates as a secondary objective in planning, guiding the agent to systematically explore states where its existing knowledge is least applicable. This is quantified through the State Match Value, derived from the match quotients of the rules applied to the cells in the state. During planning, if no action sequence leads to a greater-than-zero expected return, the planner uses the Curiosity Model to identify action sequences that minimize the State Match Value, encouraging the agent to explore less familiar areas. This supports NACE's systematic acquisition of missing causal knowledge.

Q2: Thank you for raising this. In our notation, $c$ represents a cell, $v$ is a value, $a$ denotes an action, and $R(r)$ refers to the reward predicted by a rule $r$. We chose concise lower-case letters to align with the variables' names, but we recognize that this may cause confusion given the multiple variables introduced. We will revise and clarify the notation.

Weaknesses:

Motivation: We acknowledge that grid worlds may not fully reflect the complexity of real-world environments, and therefore may not align with everyone's research interests. However, they are valuable for testing and benchmarking algorithms, particularly in addressing challenges such as partial observability, sequential dependencies, and combinatorial state spaces. Minigrid, in particular, is a widely used benchmark and has been featured in numerous recent publications at ICLR and other leading conferences. Additionally, many SOTA techniques were extensively studied in Minigrid, including those used in our comparisons, further underscoring its relevance and the open challenges it presents. While we recognize that grid worlds are not inherently realistic, their controlled settings enable systematic experimentation and comparison, which aligns with our research goals. We are happy to elaborate on this motivation to provide greater context.

RL in grid worlds: With respect, we do not intend to claim that RL is incapable of solving these tasks. If our wording suggests otherwise, we are happy to revise it. Some of our results demonstrate competent performance from various DRL techniques, which we trained and comprehensively evaluated across the environments, and these results are largely consistent with findings in the literature. However, we acknowledge that some of the weaker results may stem from hyperparameter choices, which we adopted from the repositories cited in the corresponding papers. We appreciate your suggestion to include them, as this will enhance the quality of the final submission.

Missing details: We agree that placing Subsection 4.3 before 4.2 would improve clarity, as the state is not yet defined in the current order. Thank you for noting this oversight.
This adjustment will resolve the issue and improve clarity, particularly for first-time readers engaging with these novel concepts.

Lengthy DRL descriptions: We greatly appreciate this observation, as it provides the space we need to address the other suggestions and further improve the paper.

Lack of novelty: We are optimistic that the novelty of our contribution can be objectively assessed, as our architecture is unique and built on formulations not found in the existing literature. However, we acknowledge the importance of strengthening the Related Works section to better contextualize our approach and highlight its distinctive aspects.

Performance variations: Note that we are reporting results within a maximum of 10^7 time steps. DQN is not learning on MiniGrid-Empty-16x16-v0 in this time frame because the problem space is significantly larger than in MiniGrid-DistShift2-v0. Given additional time steps, DQN would certainly be able to learn MiniGrid-Empty-16x16-v0 as well. For context, we also tested MiniGrid-Empty-6x6-v0, which has a problem size similar to MiniGrid-DistShift2-v0, and DQN converged within 9 * 10^5 time steps. This is expected, as the task is simpler, and it demonstrates how DQN's performance is influenced by the problem size and the time steps allotted.

Undefined notations: As we mentioned in our response to a related question, $c$ represents a cell, $v$ denotes a value, $a$ refers to an action, and $R(r)$ is the reward associated with a rule $r$. Lower-case letters were chosen for alignment with their respective names, but we understand your frustration and will hence spend time improving our notations.

We appreciate the time and effort you dedicated to reviewing our paper. Your feedback has highlighted areas where clarity and structure can be improved, and we are committed to addressing these in the final version. Specifically, we will refine our notations, reorganize sections for better flow, and adjust our explanations to ensure the content is accessible and engaging for readers.

---

Please see the updated version; we plan to revise it even further prior to the deadline.

- Missing details: We fixed the section ordering and enhanced the description of states in the states and rule representation section, which is now 3.1.
- Lengthy DRL descriptions: We have significantly reduced this section, which is now Section 2.1.
- Novelty: We strengthened the introduction and related works (particularly 2.1), and now also compare with an additional technique in Section 4.4.
- Performance variations: Hyperparameter choices are now documented in Appendix D, including the performance influence of NACE's planning horizon.
- Undefined notations: Besides resolving ambiguities in the chosen notations (Sections 3.3, 3.4), we have added a table in Appendix B which describes all notations. Switching from $r$ to $rule$ was a promising suggestion; however, it made the formalizations too lengthy. Consequently, the notation choice is now outlined in the notation description, and we now consistently use capital $R$ only for reward.

---

Thank you for addressing my concerns. I've reviewed the updated manuscript and I appreciate the authors' effort in incorporating the reviewers' concerns. I'll be increasing my score.
Nonetheless, I was wondering if the authors could also clarify how the value estimation of the planner was done? Is it using Monte Carlo estimates (similar to MCTS)? TD? I believe it's relevant to readers to understand these details of the algorithm.

---

Thank you for your thoughtful questions and observations!

Addressing questions:

Q1: Yes it is. We agree it may not be entirely clear from the current wording.
We will make this explicit in the final version and appreciate your feedback.

Q2: The relevant cells are determined by the union of two sets: the change set and the prediction mismatch set. (1) The change set contains cells that have changed from the last time step to the current one. (2) The prediction mismatch set contains cells whose observed value differs from the value predicted for the current time step. While this is already part of the formalization of $w_+(r)$, we can mention it explicitly to enhance clarity.

Q3.1: It is part of the conceptual design to increase computational efficiency by considering only cells that have either changed or ("or" due to the union of the sets in the definition of $w_+(r)$) differ from the prediction as "relevant cells" for rule formation, evidence updating, and selective prediction.

Q3.2: The match quotient measures to what degree the rule preconditions match the observed cell values. It equals 1 only if all the precondition cell value constraints of the rule are satisfied.

Q3.3: It cannot; positive evidence is only obtained when the rule predicts correctly. The misconception might stem from the fact that rule formation only considers cells in the set of changed cells and prediction mismatch cells, which acts as a filter separate from the actual cell values being compared.

Q4: There are usually multiple rules utilized to predict a given state in its entirety. In the simplest case, if the reward predictions of all these rules align with the observed reward, their average will also align, while their sum would overestimate the outcome.

Addressing weaknesses:

W1: While we believe our approach introduces novel formulations not present in the existing literature, we understand the importance of contextualizing it with related work in structured learning. Our Related Works section currently includes comparisons with several relevant methods, but we will add the reference to further highlight distinctions and similarities.

W2: NACE does not rely on a pre-defined model of the grid world. It starts with an empty rule base, building its rules purely from observations, without assumptions about the environment beyond its strong priors. Only after these rules are learned does the agent know how to operate in the environment. The rules are derived directly from the observation arrays provided by the environment, with no predefined notion of the objects involved.

W3: Rules in NACE are based on evidence measures rather than true/false values, providing some tolerance to less precise state representations. We acknowledge that comprehensive studies are needed to demonstrate and quantify this. Regarding lacking strengths, grid world environments remain a challenge in RL for multiple reasons, such as the sequential dependencies you mentioned. We do not consider the use of rules itself a limitation, provided they effectively capture relevant dependencies and enable the agent to learn efficiently and perform competently.

W3.1: True, but it falls outside the scope of the tested domains. Extending the work to learn rules based on absolute coordinate values is reserved for future work, and is not essential for the Minigrid benchmarks.

W3.2: Since the agent can learn multiple rules with AND conditions, it can capture many cases where OR conditions might otherwise be required. For example, ((a OR b) implies x) is logically equivalent to ((a implies x) AND (b implies x)).
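The propositional equivalence invoked above, ((a OR b) implies x) vs. ((a implies x) AND (b implies x)), can be verified by enumerating all truth assignments. The sketch below is a generic illustration, not code from the submission:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is equivalent to (not p) or q.
    return (not p) or q

# Check ((a or b) -> x) == ((a -> x) and (b -> x)) for all 8 assignments.
for a, b, x in product([False, True], repeat=3):
    lhs = implies(a or b, x)
    rhs = implies(a, x) and implies(b, x)
    assert lhs == rhs

print("equivalence holds for all 8 assignments")
```

This is why a rule learner restricted to conjunctive (AND) preconditions loses no expressiveness here: a disjunctive condition can always be split into one conjunctive rule per disjunct.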
As for dynamics involving variable values, this is outside the scope of the Minigrid benchmarks.

W3.3: NACE was specifically designed to handle "agent-external changes" within the observation window of the agent, as hypotheses formed from spurious correlations produce wrong predictions, generating negative evidence.

W4.1: We appreciate your suggestion and will incorporate this change for improved clarity.

W4.2: Thanks for catching this. We will correct the symbol usage and clarify related areas of confusion in the final version.

W4.3: With "Match Ratio", R conflicts with the reward symbol. "Match Goodness" with the symbol G may be an alternative.

W4.4: Thank you for noting these. We will certainly correct these issues.

We are grateful for your thorough and insightful review, which helps us strengthen our work, and we hope our responses have clarified potential misunderstandings!
We would also like to highlight that this is a new technique and the first paper to comprehensively compare it with DRL in grid worlds. We hope these comparisons add value for the research community as well.

---

> I don't see that in the definition of when is updated to from line 266 to line 277 - there is no mention of checking that the value of matches any prediction.

We apologize for the notational oversight and appreciate you pointing it out. Indeed, the equality constraints of the postconditions must hold for positive evidence to increase.

> I said explicit, not fully pre-defined. I am referring to the 2-dimensional array where each value explicitly represents one cell in the grid world (which I guess does have to be pre-defined with knowledge of the size and shape of the grid world, though the details of this appear to be missing in your paper). My concerns about the lack of comparison to model-based deep RL methods still stand.

You are correct that one grid cell corresponds to one cell in the agent's map representation. However, the size and shape of the agent's map do not need to be pre-defined, as the memory array can grow dynamically. This allows the agent to operate without prior knowledge of the world's dimensions.

> What if the agent-external change is non-random and it would be helpful to predict it? It seems that NACE would be very inefficient at learning to predict such changes, because all the rules include an action. Thus, multiple copies of the agent-external change rules would be created for every action the agent might take.

You are correct that a separate rule is required for each action to predict agent-external changes. However, in environments with a limited number (typically fewer than 20) of discrete actions, NACE remains sample-efficient. To explore agent-external changes, World 5 in the provided source package models a Pong-like game in a grid world, where NACE successfully learns to predict the ball's movement.
In future work, NACE could be extended to learn action-independent rules, with such rules accumulating negative evidence from prediction failures whenever the outcome depends on the chosen action.

> How does this work provide useful generalizable knowledge to the research community if it is so anchored to this specific simplistic benchmark, and will require extensive manual modification to work for less toy environments?

In addition to our comprehensive comparison of model-free DRL techniques in grid worlds, our contribution extends beyond the NACE architecture itself to introduce principles that apply to more complex environments:

- The curiosity model, which encourages an agent to seek unfamiliar states where its knowledge is least applicable (via precondition match, differing from traditional predictability measures in the literature), while also maximizing the likelihood of reaching these states.

- The concept of lazily updating representations, focusing primarily on observed changes and prediction mismatches rather than exhaustively updating all representations.

- The principle of compositionality in states and predictions, where each rule predicts only part of a state, and multiple rules combine to predict the full state.

- Count-based evidence accumulation, which efficiently tracks prediction success and failure, considering both the prediction success ratio and the amount of contradictions and confirmations of rules.

> My concerns about the lack of comparison to model-based deep RL methods still stand.

We acknowledge that comparisons with model-based deep RL techniques could be valuable in the future. However, we think that such comparisons are not currently aligned with the state of the art in this domain. Recent advancements have demonstrated significant improvements in sample efficiency through the inclusion of intrinsic rewards, an approach closely related to our own formulation of Curiosity. This guided our selection of comparison techniques, focusing on methods that reflect these advancements.

If you have a specific model-based technique in mind, we would greatly appreciate a concrete suggestion to address your feedback constructively.

> Regarding the responses arguing that X weakness is outside the scope of Minigrid: this says more about the simplicity of Minigrid as a benchmark than the value of the approach.

While we did not propose Minigrid ourselves, we note that it is a widely respected benchmark in the DRL community, valued for addressing critical challenges such as partial observability and sequential dependencies. These characteristics have motivated our investigations into relevant inductive biases, which are not limited to Minigrid environments, even though they are helpful in this domain.

Thank you again for your detailed and valuable feedback!

---

Thank you for having re-evaluated the quality of our contribution; we are pleased to hear that our addressing of your concerns was satisfactory!

In response to your inquiry: Unlike MCTS, which estimates state values by sampling rollouts that prioritize high-reward branches, NACE calculates the expected return over its planning horizon using breadth-first search. Exploring MCTS as an alternative presents a promising direction for addressing more complex environments; thank you for implicitly pointing us toward this. We will also ensure this is clarified further in the camera-ready version, should our paper be accepted.

---

Thank you for your active engagement in the discussion, which we highly appreciate! Please find our responses below:

> What are these representations? Do you mean updating rules?

Indeed, the rules we create and the mechanisms for selectively updating their evidence.

> You have provided no evidence that these principles will still be applicable/helpful in more complex environments. E.g.
> in high-dimensional spaces, it seems likely that preconditions won't match for almost all unvisited states, making this an unhelpful form of curiosity.

We agree that a direct application of our curiosity model in high-dimensional spaces would face challenges. However, this limitation stems not only from our model but also from the need for complementary mechanisms that reduce high-dimensional continuous states to manageable, lower-dimensional discrete representations. Techniques such as ConvNets for object detection and other feature abstraction methods already address similar space-reduction issues in numerous domains, and we expect they can address this concern and enable our principles to generalize to more complex real-world environments.

Importantly, selective processing in NACE enhances computational efficiency in larger grid worlds, assuming most cell values remain static between time steps. However, computational efficiency was not our primary focus.

> The principle of compositionality is also not novel, and it is not surprising that it applies to Minigrid which was designed that way.

While Minigrid states are compositional by design, our method extends this by predicting entire states using competing rules for each cell and applying only the winner. This winner-takes-all approach, based on the Cell Match Value definition, extends beyond grid-based compositions and can generalize to more complex state representations.

> Intrinsic rewards may help sample efficiency, but they can be added to model-based methods too - they are not inherently tied to model-free RL. Sample efficiency is well known to be one of the main advantages of MBRL over model-free RL. Considering that your method is model-based and uses curiosity, the most fitting comparison would be a state-of-the-art model-based deep RL method with the addition of curiosity-based intrinsic rewards. As I mentioned earlier, Dreamer [1] is a well-known state-of-the-art MBRL algorithm.

Thank you for pointing us to this very interesting reference! We hence drafted a comparison with DreamerV2, which fits into the space freed by reducing our lengthy descriptions of the compared techniques in Section 3, as another reviewer suggested.

We now compare the advantages and drawbacks of the explicit state transition (full state) and rule representation (partial state) with those of learning a latent dynamics model. The latter approach, as employed by DreamerV2, offers broader applicability beyond grid worlds and is well suited for domains with high-dimensional state spaces.

At the same time, we show quantitative comparisons for some of the environments, reporting the converged performance (first table) as well as the point at which NACE reaches the performance DreamerV2 converges to (second table):

| Technique | Environment | Average Reward | Time Steps |
|-----|-----|-----|-----|
| NACE | DoorKey-6x6-v0 | 0.93 | 1.00E+05 |
| DreamerV2 | DoorKey-6x6-v0 | 0.89 | 1.00E+05 |
| NACE | Unlock-v0 | 0.86 | 2.50E+06 |
| DreamerV2 | Unlock-v0 | 0.60 | 2.50E+06 |

| Technique | Environment | Average Reward | Time Steps |
|-----|-----|-----|-----|
| NACE | DoorKey-6x6-v0 | 0.89 | 3.00E+02 |
| DreamerV2 | DoorKey-6x6-v0 | 0.89 | 1.00E+05 |
| NACE | Unlock-v0 | 0.60 | 2.70E+02 |
| DreamerV2 | Unlock-v0 | 0.60 | 2.50E+06 |

---

Thank you for your answers and clarifications!

I will be keeping my score the same. While I think the paper has some interesting contributions, I believe it would benefit from addressing these reviews in the manuscript.

---

The described extension could provide such evidence, if it showed that NACE works well compared to existing methods, without requiring extensive modification or tailoring to the new environment.
However, the description of the extension by itself does not constitute evidence, and at this stage in the process it may be inappropriate to propose such a significant extension, and for a reviewer to change their score based on such an extension.

---

For your convenience, the final version addresses additional points of your feedback:
- Integration of DreamerV3 into Section 4 with proper result interpretation, as you suggested.
- Expanded references to model-based techniques in Related Works (Section 2.1).
- Discussion of agent-external changes and non-determinism in Appendix A (Representational Limitations).
- Improved discussion of DreamerV3's and NACE's strengths and weaknesses in Section 4.3.
- Further clarification of notations, with a table of all notations in Appendix C.
- Match quotient renamed to 'M' to avoid confusion with Q-values in Section 3.3.

Thank you for considering our updated version, and we hope this list is helpful!

---

For your convenience, a list of additional changes to address your feedback, as found in the PDF:
- What a timestep exactly consists of has been described in Section 4.
- The inductive biases in NACE have been clarified in Section 3.2.
- Appendix B.5 adds additional discussion of the inductive biases in the DRL techniques.
- The comparison with DreamerV3 in Sections 4.1, 4.2, and 4.3 is coherently integrated into the plots and tables, and is no longer in a separate section.
- The ablation studies in Appendix A have been enhanced, with a reference from the initiating discussion in Section 4.3.

As you can see, we have addressed most of your feedback, which proved very valuable for improving our paper, and we hope you find the latest PDF more coherent and worthwhile for publication than the previous versions.

---

Thank you for your response and for answering my questions/concerns.

A quick thought - one issue with using the symbol $r$ to refer to a rule (although it makes sense) is that in RL the immediate reward is often also $r$. Perhaps a suggestion might be to simply write out $rule$ to make the distinction more clear?

Unfortunately I will still leave my rating unchanged - marginally below the acceptance threshold. Simply because I am unable to see the revised version addressing my (and the other reviewers') concerns with the presentation; a more succinct explanation with clear notation will greatly improve the reading, as well as placing the novelty front and center - emphasizing even more how it addresses the existing gaps in the literature.

---

We updated the PDF further. As promised, DreamerV3 results are now seamlessly integrated into the comparison section, exactly as you desired. Not all runs are plotted to convergence yet, as our HPC runs are still ongoing (SimpleCrossing, Unlock, DoorKey), but they will be complete in our last revision prior to the deadline. Thank you very much for having encouraged us to strengthen our comparison!

---

Thank you for your thoughtful and detailed review! Your feedback has provided us with valuable insights and suggestions for improving our work, and we hope that our responses clarify the design and contributions of NACE.
We address each of your points below:\", \"w1\": [\"In accordance with your feedback, we will integrate more specific descriptions for each inductive bias in 4.1 as follows:\", \"Temporal Locality: NACE builds rules based solely on the current and previous state.\", \"Causal Representation: NACE's rules are structured as \\\"(precondition, action) implies consequence\\\".\", \"Spatial Equivariance: learned rules can be applied at any location.\", \"State Tracking: NACE explicitly keeps track of a bird's-eye view map by recording observations into it, updating the values that are within its observability window.\", \"Attentional Bias: only rules that show a change from the previous to the current time step, or differ from the predicted value, are considered for rule formation, evidence updating, and prediction.\"], \"regarding_ablation_studies\": \"we agree and will include a discussion on this in the main paper, referring to the appendix for more detail. For instance, if the agent cannot track observation locations, its performance often declines after picking up a key when the previously observed door is no longer in view. Also, if rules are tied to the specific locations where they were learned, they fail to generalize across locations, requiring the agent to relearn the same knowledge at each location, which greatly reduces sample efficiency. Furthermore, without the attentional bias using the change and prediction mismatch sets, processing becomes significantly more resource-intensive due to the need to create and update a larger number of rules.\", \"w2\": \"Yes, as in RL framework iterations. The Minigrid environment uses the Gymnasium API, whereby in each timestep the agent gets one observation from the environment, performs one action, and receives the resulting reward given by the environment. We will clarify this in the text.\", \"w3\": \"Thank you for pointing it out. 
NACE's computational complexity depends primarily on the asymptotics of its planning algorithm, which, as we mentioned, is a depth- and breadth-bounded Breadth-First Search. We are happy to explicitly include the asymptotics in the final version for clarity.\", \"w4\": \"The behavior varies across the techniques and environments considered, and the standard deviations are helpful to answer your question. For instance, in the first environment, Minigrid-Empty-16x16, RND's average reward is zero, but its non-zero standard deviation indicates some variation even at the end. DQN, on the other hand, shows both an average reward and standard deviation of zero, with values consistently near zero throughout the run.\", \"w5\": \"Thank you for your excellent input! We ran a new experiment in accordance with your suggestion, with an increased truth expectation threshold of 0.85 for rule usage. We find it makes the system spend longer (due to the 3 confirmations of a rule needed to exceed the threshold) taking explorative actions before exploiting its rule base, which makes the system less greedy (more likely to find optimal ways to obtain reward) and could help improve performance in non-deterministic environments as well.\nAnd as noted in other answers, additional factors contribute to non-optimality: NACE's rule-based learning mechanism does not capture the statistics of the structure of the generated environments that DRL policies can leverage. Additionally, constraints in NACE's prediction horizon and search breadth further contribute to suboptimal performance and would benefit from more efficient implementation. 
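To make the threshold mechanics concrete, here is a minimal sketch of how such a truth-expectation gate could work (a hedged reconstruction, not NACE's verbatim code: we assume NAL-style truth-value formulas with an evidential horizon of k = 1; the constants and exact form in the actual implementation may differ):

```python
# Hedged sketch: NAL-style truth expectation gating rule usage.
# Assumptions (not from the paper): frequency f = w_plus / w,
# confidence c = w / (w + k), evidential horizon k = 1.

def truth_expectation(w_plus: int, w_minus: int, k: float = 1.0) -> float:
    """Expectation in [0, 1]; 0.5 means maximally uncertain."""
    w = w_plus + w_minus
    if w == 0:
        return 0.5  # no evidence yet
    f = w_plus / w   # prediction success ratio of the rule
    c = w / (w + k)  # confidence grows with accumulated evidence
    return c * (f - 0.5) + 0.5

THRESHOLD = 0.85
for confirmations in range(1, 5):
    e = truth_expectation(confirmations, 0)
    print(confirmations, round(e, 3), e > THRESHOLD)
```

Under these assumptions, a rule with only positive evidence reaches an expectation of 0.75 after one confirmation, about 0.833 after two, and 0.875 after three, so it crosses a 0.85 threshold exactly at the third confirmation, consistent with the behavior described above.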
For instance, if NACE's planning depth in MiniGrid-DoorKey-8x8-v0 is restricted to 8, we find that NACE does not succeed within the maximum time steps in about 50% of the episodes, leading to an average reward of only 0.48 and increasing the gap to the optimal solution (0.975) further.\n\nWe are sincerely grateful for your detailed review and thoughtful feedback. Your insights have been instrumental in highlighting areas for improvement and have provided valuable guidance for strengthening our work. Thank you again for your time and engagement. We welcome any further questions or feedback you might have.\"}", "{\"comment\": \"Thanks for the prompt response! We will spend the remaining time incorporating the other reviewers' feedback, trying to get some extra points to get our paper accepted. In case the paper gets accepted, we will ensure that rule counts and associated statistics are incorporated in the camera-ready version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for providing the updated manuscript. The authors added a lot of additional information, and the paper now has double the length of the original submission.\n\nAgain the reviewer wants to mention that the revisions have significantly improved the original version of the paper; however, some of the main points still remain:\n\n- NACE is not able to find optimal or near-optimal policies for all of the experiments. While this is now at least discussed within the paper, this behaviour does not indicate good generalizability to even more complex test cases, as the algorithm is already unable to find the optimal policies for any of the 2D Minigrid environments considered. This significantly limits the contribution of the paper in the reviewer's opinion.\n\n- Having said this, the reviewer acknowledges that NACE shows good results with regard to sample efficiency and outperforms the other algorithms here. 
\\n\\n- Based on the reviewers comments the authors added a section on computational efficiency. While the reviewer is pleased that the authors addressed this comment despite their main focus on sample efficiency, this new section is not very convincing. After the comment that, 'it seems questionable that PPO, A2C, and other algorithms all report a runtime of 1 hour for all runs', the authors added approximately and on average but again no detailed run times were provided. It is not even clear if the authors average over different algorithms here (which they should not). Given that the NACE times are also given per seed and per environment, it is not clear for the reader if and how the different times should be compared.\\n\\nThe reviewer has decided to keep their current score.\"}", "{\"summary\": \"The authors present NACE, a learning agent which uses strong inductive biases, causal reasoning and a causally-informed intrinsic reward to explore more efficiently in grid-world environments. NACE maintains an internal state consisting of a 2D array corresponding to each cell of the grid world, a 1D array to track non-spatial values such as inventory, as well as a set of rules of the form \\u201c(preconditions, action) => consequence\\u201d with counts of associated positive and negative evidence. At each step, it updates the 2D array and calculates which observed cells changed and which did not match their predicted values, uses this evidence to update the set of rules, then plans an action sequence to maximize expected return\\u2013 or if no positive return trajectory is found, then to reach a state with minimum familiarity (average over all cells of how well they match the best fitting rule). Finally, the best-fitting rules are used to predict the cell values of the next state. 
They test on a number of minigrid environments and show that NACE reaches good performance in about 1000 steps, while existing DRL methods take around 1e6-1e7 steps to reach similar performance, although the best methods converge to higher average rewards at the end of training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The sample efficiency results look very good.\\n\\n In general, the writing quality is high. \\n\\nThe Observer and Hypothesizer components of NACE, along with the State Match measure of state familiarity, appear to be quite novel. \\n\\nSuch a method should be quite interpretable - though the authors do not show any of the rules learnt by NACE in the test environments.\", \"weaknesses\": \"The authors do not mention or compare to existing methods for efficient structured learning which capture inductive biases, for example [1]. It is hard to evaluate the work\\u2019s originality given that the authors did not contextualize it among existing related approaches.\\n\\nThough NACE heavily relies on an explicit model of the gridworld, they also do not compare to any explicitly model-based deep RL algorithms such as [2] or [3]\\n\\nThe significance of the contribution seems limited. NACE shares a lot of weaknesses with existing methods- (depends heavily on quality of state representations, would struggle where defining impactful state changes is difficult) - while lacking strengths (adaptable to continuous state spaces or high-dimensional action spaces, theoretical optimality guarantees). It seems limited to very simple rules, and the environments the authors tested on likewise covered a very small number of dynamics- navigating to a goal location with obstacles, and picking up a key to unlock a door to test sequential dependencies. \\n\\n - The authors did not test the ability to develop rules that capture dependencies across space rather than time, e.g. 
the need to flip a switch to unlock a set of doors. In fact, because the precondition constraints are defined on cells\u2019 relative positions to the consequence cell, this method would likely do poorly on this dynamic, since this constraint would be best expressed as a condition on a cell specified by its global position (the switch location). \n\n - The constraints also require the cells to be exactly equal to a certain value, and are limited to cases where all constraints must be satisfied, rather than other conjunctions like Or, which excludes dynamics where values need only be above some threshold or within a set of allowable values (e.g. the Put Near minigrid environment where the agent must place one object near to another object).\n\n - The environments did not contain any stochasticity or objects that can move independently of the agent, e.g. the Dynamic Obstacles environment. A core component of NACE is observing which cells changed at each step and using that to create and update rules- is this method robust to settings where cells change irrespective of the agent\u2019s action?\", \"the_clarity_of_the_paper_has_room_for_improvement\": \"- The cell notation is inconsistent and confusing- the subscript changes between $c$, $c_r$, $c_{t,x,y}$, $c_t$ without any explanation. Different symbols should be used for cell variables than for cell values e.g. in the definition $\\bar{c}:=(c_r=c)$. If the precondition constraints are on cells\u2019 relative positions, there should be notation for that in contrast to the global position notation $c_{t,x,y}$ \n\n - K is used for the number of rules and also the number of equality constraints- consider using a different symbol.\n\n - Some aspects of the method were not fully explained- see the Questions section.\n\n - Should consider using a different notation for the Match Quotient, since Q is usually used for the Q value function in RL. \n\n - Small grammar errors throughout the paper. E.g. 
\\u201cSuch [an] approach\\u201d on line 154, quotation marks are flipped on line 163\\n\\n[1] Tsividis, Pedro A., et al. \\\"Human-level reinforcement learning through theory-based modeling, exploration, and planning.\\\" arXiv preprint arXiv:2107.12544 (2021).\\n\\n[2] Hafner, Danijar, et al. \\\"Mastering diverse domains through world models.\\\" arXiv preprint arXiv:2301.04104 (2023).\\n\\n[3] Sekar, Ramanan, et al. \\\"Planning to explore via self-supervised world models.\\\" International conference on machine learning. PMLR, 2020.\", \"questions\": \"Is the match quotient Q(r,c) defined for cell c being the consequence cell?\\n\\nNew rules are created \\u201cwhen positive evidence is found for the first time\\u201d - but how are the set of precondition equality constraints determined for the new rule? I.e., how does NACE determine which cells are relevant? \\n\\nWhy is positive evidence only counted for a rule if all of the precondition cells changed values and/or didn\\u2019t match the prediction at the last step? Since the precondition is an AND conjunction of many cell values, it is possible only one might need to change for a rule to be activated. And why can the positive evidence count still increase even if the rule fails to predict the outcome?\\n\\nWhy is the predicted reward not the sum, rather than the average, of the reward of each of the N utilized rules? Each rule seems to describe a way to obtain a certain reward, so if multiple rules are satisfied shouldn\\u2019t multiple rewards be obtained?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors designed NACE (Non-Axiomatic Causal Explorer), a learning agent that incorporates a set of inductive biases that the authors consider to be important for an acting agent. 
These include causal relations, temporal locality, spatial equivariance, state tracking, and attentional biases.\nThe design of the agent is based on predicate rules that are proposed by the agent given the observations. The agent then plans to either explore rules (to collect new evidence about the rule) or maximize reward.\nFinally, the authors test this agent in various scenarios of Minigrid and compare it against a wide range of (deep) RL agents. They show that in these particular scenarios, NACE is particularly sample efficient compared to the RL agents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is motivated by the importance of the inductive biases they propose for grid world environments. Thus, the authors proposed to study these by incorporating them all in their agent design. Finally, they show that these biases have a huge effect on sample efficiency.\n2. The paper is mostly well written with some gaps in notation that I had a hard time following (see Questions)\n3. The agent design seems to be novel in the way they instantiate the different biases based on predicate rules.\", \"weaknesses\": \"1. It is clear that NACE beats all the (deep) RL agents. However, given the comprehensive design, it is hard to understand where the benefit comes from. Perhaps ablating the effects of each inductive bias would be a good way to understand its contribution. Moreover, all RL agents considered are used in all experiments, but each of them incorporates different biases that are also present in NACE. Perhaps grouping the RL agents based on the biases would make a clearer point of the importance of each bias.\n2. RL baselines are shown to be less sample efficient. This could be the result of their generality (fewer inductive biases) as claimed. But I\u2019m concerned that it seems that in all these cases the problems violate the Markov assumption, putting all these RL agents at a disadvantage. 
Is there an explicit handling of partial observability? Are there any RNNs/memory involved?\n3. In the formal presentation of the agent, some notation is overloaded (e.g. c for cells, clauses in a rule, c(r) in line 288), which makes some of the method presentation hard to follow. \n4. Although this is stated at the core of the paper, NACE is specifically designed for the grid world considered. It\u2019s unclear how the results would extrapolate to other types of tasks. Also, I think it would be relevant to compare NACE to RMax, at least to discuss its similarities and differences.\", \"questions\": \"1. How does this compare to RMax? It seems to me that it has a similar flavor, in which we observe transitions and the agent explores such transitions until sure.\n2. The formal definition of a cell is missing. I suppose the cell is the value of the 3rd dimension of the state definition.\n3. Is there any value estimation happening? If so, how are you estimating the value function?\n\nMinor comments\n- Planner. Lines 311-314. Unclear wording.\n- Overloading c(r) I think (line 288)\n- Fix notations (use \\citep)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"NACE (Non-Axiomatic Causal Explorer) is a novel experiential learning agent leveraging causal reasoning and intrinsic reward signals to enable more efficient learning within grid world environments. 
The authors compare the proposed method against state-of-the-art RL algorithms, demonstrating its benefit in terms of sample efficiency across many different grid world environments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Novelty: the work brings novelty due to the adoption of a curiosity model based on causal reasoning.\", \"Narration: the paper's narration is well-done and sound, and the work is generally well-written.\", \"Experiments: the experimental campaign is convincing since it considers several state-of-the-art RL algorithms and exploration frameworks. The evaluation metric concerns the sample efficiency of each method, demonstrating NACE's brilliant results.\", \"Supplementary materials: the attached zip file containing NACE's codebase runs easily and smoothly.\"], \"weaknesses\": [\"**Some notations are not very clear.** In particular, the section dedicated to the NACE architecture (section 4.4) leaves some symbols unexplained, such as the observer's sets $M_t^{change}, M_{t}^{observation-mismatched}, M_t^{prediction-mismatched} $, which have been introduced here only in mathematical notation. Still, I would suggest explaining their meaning. The same holds for the function $f_{exp}$, whose usage and term composition are not completely clear.\", \"Apart from the notation, the **intuitions behind the need for some components of the architecture are also not immediately understandable**. I would have rather added an appendix to explain those details more deeply. For example, I would explain the interactions between the different components of the architecture more verbosely, also describing the flow diagram in Figure 1 and the role of each component in natural language, to give an intuition about the maths behind it. Perhaps a pseudocode of the entire algorithm could come in handy.\", \"The main limitation of NACE concerns its applicability, since it is **usable only in deterministic grid world settings**. 
However, the authors highlight possible extensions to more complex problems as future work.\", \"**Experimental setups could have been explained in more detail** in the Appendix, by reporting a more extended description of the presented scenarios, perhaps with the support of the corresponding images (bird's-eye-view maps). Furthermore, the authors could add those scenarios that have not been presented in the main paper but that can be run in the codebase, such as the *soccer world*.\", \"**Hardware employed to run the experiments and time consumption of the framework** not provided.\"], \"questions\": [\"From the learning curves it is evident that NACE is more sample efficient than all the other tested algorithms. However, I would like to ask why it is not able to reach the optimal policy and what the intuition behind this recurrent behavior might be.\", \"Thinking beyond the grid world environment, I would like to ask how this method can work and whether you see limitations and challenges that have to be considered in more complex problems.\", \"Regarding non-deterministic transitions, how can NACE give \\\"system tolerance\\\" as stated in line 294?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your further clarifications.\n\n>The concept of lazily updating representations\n\nWhat are these representations? Do you mean updating rules? \n\n>our contribution extends beyond the NACE architecture itself to introduce principles that apply to more complex environments\n\nYou have provided no evidence that these principles will still be applicable/helpful in more complex environments. E.g. in high dimensional spaces, it seems likely that preconditions won't match for almost all unvisited states, making this an unhelpful form of curiosity. The principle of compositionality is also not novel, and it is not surprising that it applies to Minigrid, which was designed that way. 
\\n\\n>Recent advancements have demonstrated significant improvements in sample efficiency through the inclusion of intrinsic rewards\\n\\nIntrinsic rewards may help sample efficiency, but they can be added to model-based methods too- they are not inherently tied to model-free RL. Sample efficiency is well-known to be one of the main advantages of MBRL over Model-free RL. Considering that your method is model-based and uses curiosity, the most fitting comparison would be a state of the art model-based deep RL method with the addition of curiosity-based intrinsic rewards. As I mentioned earlier, Dreamer [1] is a well known state of the art MBRL algorithm. \\n\\n[1] Hafner, Danijar, et al. \\\"Mastering diverse domains through world models.\\\" arXiv preprint arXiv:2301.04104 (2023).\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response.\\n\\n>positive evidence is only obtained when the rule predicts correctly.\\n\\nI don't see that in the definition of when $w_+(r)$ is updated to $w_+(r)+1$ from line 266 to line 277- there is no mention of checking that the value of $c_t$ matches any prediction.\\n\\n>NACE does not rely on an pre-defined model of the grid world\\n\\nI said explicit, not fully pre-defined. I am referring to the 2-dimensional array where each value explicitly represents one cell in the grid world (which I guess does have to be pre-defined with knowledge of the size and shape of the grid world, though the details of this appear to be missing in your paper). **My concerns about the lack of comparison to model-based deep RL methods still stand.**\\n\\n>W3.3: NACE was specifically designed to handle \\\"agent-external changes\\\" within the observation window of the agent, as hypotheses formed from spurious correlations produce wrong predictions, generating negative evidence.\\n\\nWhat if the agent-external change is non-random and it would be helpful to predict it? 
It seems that NACE would be very inefficient at learning to predict such changes, because all the rules include an action. Thus, multiple copies of the agent-external change rules would be created for every action the agent might take.\", \"regarding_the_responses_arguing_that_x_weakness_is_outside_the_scope_of_minigrid\": \"this says more about the simplicity of Minigrid as a benchmark than the value of the approach. How does this work provide useful generalizable knowledge to the research community if it is so anchored to this specific simplistic benchmark, and will require extensive manual modification to work for less toy-like environments?\"}", "{\"comment\": [\"Please see the updated version; we plan to revise it even further prior to the deadline.\", \"W1: We clarified the inductive biases with regard to NACE in section 3.2.\", \"W2: The appendix now also contains information regarding the hyperparameters and structure of the policies, including the utilization of convolution layers and LSTMs, in Appendix D.\", \"W3: The definition is now explicit and does not overload variables anymore (Hypothesizer in section 3.4).\", \"W4.1: We include a discussion of how the system can be extended to operate beyond grid worlds in section 4.4.\"]}
These claims of applicability to more complex environments would be more convincing if the authors could provide evidence with an empirical demonstration.\n\n>We hence drafted a comparison with DreamerV2\n\nI appreciate the fast turnaround for this, but why did you compare with DreamerV2? I cited DreamerV3 - DreamerV2 is no longer state of the art. I also find it odd and concerning that you ran it on different environments than the ones used for the other deep RL model comparisons- why DoorKey-6x6-v0 instead of DoorKey-8x8-v0? The standard deviations are also missing. Instead of presenting the comparison in two tables, it would be much more informative while also saving space to add the full learning curves to your existing Figures 2-7 along with all the other baselines. I don't think it is necessary or helpful to separate this comparison into a separate section.\"}", "{\"comment\": \"Further update:\n- W1: Ablation studies regarding inductive biases are now present in more detail in Appendix B.\n- W3: Hardware and runtime costs are now included in Appendix F.\n\nThank you again for your valuable feedback, which at this point we have mostly incorporated.\"}", "{\"comment\": \"Thank you for your valuable feedback and your interesting questions!\", \"we_address_each_question_in_the_following_paragraphs\": \"\", \"q1\": \"Besides possible implementation limitations, NACE rules do not exploit the statistics of the generated environment structure, which DRL policies do capture. Additionally, limits on the prediction horizon and search breadth in the planning component of NACE can lead to more greedy, non-optimal behavior, even when relevant knowledge is already learned, as the expected return is calculated over the planning horizon.\nWe appreciate that you noticed the data efficiency properties, which is NACE's core strength. 
Our paper also outlines sample efficiency differences across the compared DRL techniques, which we believe could provide additional value to the research community.\", \"q2\": \"Complexity can increase in various ways. For instance, what NACE delivers is the effective consideration of sequential dependencies, which, as we also demonstrated, significantly increases the training data demand of Deep Reinforcement Learning techniques, even with techniques that involve intrinsic rewards for better sample efficiency. This can also be seen in our paper when going from MiniGrid-Unlock-8x8-v0 to MiniGrid-DoorKey-8x8-v0 (Lines 486-505), which only adds one necessary additional sequential dependency to reach a goal location after the door has been opened with the key. Our DRL results in this regard are also consistent with the results in the literature.\nA key challenge, as you noted, is generalizing beyond grid worlds. We are currently researching NACE's integration with mobile robots in simulation using ROS2, featuring automatic downsampling of occupancy grid maps and continuous action invocation via Nav2. However, this work lies outside the scope of the current paper.\", \"q3\": \"NACE achieves system tolerance through continuous evidence updating. The truth expectation of a rule determines whether it is utilized, and this is a statistical measure that takes into account the prediction success rate as well as how many data points met the rule preconditions. With increasing amounts of collected evidence, the truth expectation measure becomes more stable, making the system consider the statistically most likely outcomes of its actions even when they do not always generate the same outcome.\n\nNow, we address the weaknesses:\", \"w1\": \"Respectfully, we agree in part and recognize that the page limit restricted us from providing extended explanations for some mathematical notations. 
However, we feel that these notations are not entirely \"unexplained\", since the description of the Observer includes a brief sentence about these sets. We will clarify these descriptions for improved readability.\", \"w2\": \"Thank you for this suggestion. We agree that additional explanations would enhance understanding. We will include a more detailed appendix covering interactions among architecture components as well as the pseudo-code of the algorithm, which we already have, but could not fit in the main body of the paper.\", \"w3\": \"With respect, we believe the statement \"usable only in deterministic grid world settings\" is somewhat inaccurate. While we focused on deterministic domains to maintain consistency with the nature of the RL benchmarks used for comparison, our technique is explicitly designed to handle non-determinism via evidence accumulation and updating. Specifically, the $f(r)$ value of a rule reflects its prediction success ratio, which stabilizes (and its truth expectation increases) with accumulated evidence $w(r)$, as noted in our prior response to question 3.\", \"w4\": \"Thank you for your suggestion; we will include experimental setups and additional information in the appendix for the final version.\", \"w5\": \"Thank you for pointing it out; we have hardware information available and we will include it too.\n\nThank you for your detailed review and valuable feedback. We hope to have addressed your questions and suggestions thoroughly, and we are working to incorporate these improvements in the final version of the paper. We appreciate your insights, which will help strengthen our work, and we hope our ideas sparked your interest. 
Please let us know if there are any further questions or if additional clarifications are needed.\"}", "{\"comment\": \"Thank you for your reconsideration and the valuable feedback you provided that allowed us to improve our contribution!\"}", "{\"summary\": \"The paper introduces Non-Axiomatic Causal Explorer (NACE), an agent optimized for grid world environments using causality-informed intrinsic rewards and inductive biases, including temporal and spatial modeling, to achieve data-efficient learning. Unlike most standard RL approaches, which require extensive training data, NACE efficiently learns policies in fewer steps by systematically exploring unfamiliar states. Experiments in MiniGrid scenarios show NACE's superior sample efficiency across various environments. The paper suggests that NACE\\u2019s principles could extend to more complex domains, promising advancements in data-efficient reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors target a very important and interesting question: How to incorporate inductive bias into Reinforcement Learning and increase the data efficiency. Moreover, the method is compared to various other already established algorithms and tested with different examples.\", \"weaknesses\": [\"Unfortunately, the reviewer cannot recommend the paper for publication at ICLR due to the following issues:\", \"The reviewer notes that while NACE\\u2019s systematic exploration of unfamiliar states is highlighted as its primary distinction from other RL methods, the incorporation of additional inductive biases defined in Section 4.1 remains unclear. Could the authors elaborate on how each bias is implemented within NACE\\u2019s framework? 
Additionally, conducting ablation studies on the contribution of each inductive bias would provide valuable insight into their individual impacts on performance.\", \"In the experimental results, the authors present rewards over time steps. Could the authors clarify how time steps are defined in this context? Specifically, are these time steps equivalent to RL framework iterations, with each time step representing the generation and evaluation of a potential solution?\", \"The reviewer suggests that comparing computational costs between algorithms would enhance the study's rigor. The current comparison lacks detail, as one time step in NACE may involve higher computational complexity than in other algorithms.\", \"In many of the RL frameworks tested, rewards remain stagnant for extended periods. If the results were examined at a finer scale, would smaller reward changes become visible, or does the mean reward remain consistently at zero?\", \"After the initial rapid increase in reward, NACE plateaus below the maximum attainable reward across all environments. The reviewer recommends exploring this behavior further and considering modifications to the algorithm that might enhance performance during the latter stages of learning. This could provide insights into whether additional mechanisms could support continued improvement toward optimal rewards.\"], \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Please see the updated version, we plan to revise it even further prior to the deadline.\", \"W1: We significantly improved the textual explanation of the sets in the Observer description in section 3.4.\", \"W2: The flow diagram now comes with a textual description in section 3.4 Observer description. 
Additionally, pseudocode of NACE can now be found in Appendix C.\", \"W3: In Appendix A we already include a basic analysis of the system's ability to handle non-determinism including quantitative analysis in an example environment.\", \"W4: We included an example of the bird view map as well as list the rules learned in this environment in Appendix E. We might also add illustrations of the test environments in the final submission as you suggest.\", \"W5: Exact hardware information will be included in our last revision prior to the deadline, including time consumptions as we measured for running the models. We assume you are fine with it being in the appendix.\"]}", "{\"comment\": \"Thanks for all the work done on the updated version of the paper. Since clarifications have been made and readability improved, I'll update my score to \\\"marginally above the acceptance threshold.\\\"\"}", "{\"comment\": \"Thank you for pointing me in the right direction for this; 50-100 seems reasonable. This may be a less pressing matter, but if accepted, it would be fantastic to provide a clearer picture on the rule count with mean, standard deviation, confidence intervals etc for each experiment task. However, for the time-being, please continue your efforts addressing any concerns the other reviewers might have.\"}", "{\"comment\": \"Thanks for all your work in revising the submission. I looked over the article's changes and it is definitely improved from initial draft. I have updated my score to \\\"marginally above the acceptance threshold\\\". Is there a particular area or statistic referencing the number of rules typically required to solve each of these environments? I may have overlooked this, but I cannot find one. This seems important if NACE is rule-based.\"}", "{\"comment\": [\"Thank you very much for your feedback. We have uploaded the newer version of the paper, incorporating your input. 
We plan to revise it even further prior to the deadline.\", \"W1: How the inductive biases are utilized in NACE is now explained in detail in section 3.2.\", \"W2: The state and rule representation section 3.1 now contains a corresponding diagram of the rule structure\", \"W3: We included an example of the bird view map as well as list the rules learned in this environment, it can be seen in Appendix E.\"]}", "{\"title\": \"Summary\", \"comment\": \"We thank the reviewers for their constructive feedback and active engagement in discussion, which significantly strengthened our contribution.\\n\\nOur submission, \\\"A Grid World Agent with Favorable Inductive Biases,\\\" presents three major contributions:\\n\\n1. A comprehensive comparison of 11 DRL techniques across 6 Minigrid environments, evaluating sample efficiency and performance.\\n2. A detailed discussion on inductive biases that significantly enhance sample efficiency in grid world environments.\\n3. A novel experiential learning agent, NACE, designed for grid-world environments with incorporated inductive biases.\\n\\n**Reviewer Feedback:**\\nReviewers praised the novelty of the formalisms, value of the inductive biases and the comprehensive experimental setup for comparison.\", \"initial_suggestions_for_improvement_included\": [\"Enhanced clarity in notation and conceptual explanations.\", \"Clearer documentation of individual inductive biases and their implementation.\", \"Addressing NACE's convergence and generalizability beyond grid-world environments.\", \"Including comparisons with a model-based RL technique.\", \"Reporting computational costs and hardware specifications for the experiments.\", \"**Author Responses and Revisions:**\"], \"we_provided_detailed_responses_to_each_review_and_made_several_key_revisions\": [\"Expanded explanations of the inductive biases in the main text and appendices.\", \"Improved notation, added diagrams, and reorganized sections for better readability.\", \"Detailed 
discussions on NACE's performance limitations and potential improvements.\", \"Updated the manuscript with computational cost details and hardware specifications.\", \"Included comparisons with DreamerV3, a state-of-the-art model-based RL method.\"], \"impact_and_contribution\": \"While some reviewers had reservations about the general applicability regarding point 3 of our contribution (NACE), \\nthe overall sentiment indicates that the work contributes valuable insights into sample-efficient learning through the included inductive biases. \\nFollowing revisions, four reviewers increased their scores, reflecting the improvements made.\\n\\nWe believe these enhancements substantiate the manuscript\\u2019s contribution. \\nShould the paper be accepted, we plan to further refine it, \\nincluding detailed runtime metrics for each technique in the camera-ready version.\"}", "{\"comment\": \"Thank you for providing the updated manuscript.\\n\\nWhile the revisions have significantly improved the paper, the reviewer finds that many results and sections still feel rushed.\\n\\n- Appendix F: This section lacks detail and clarity. Additionally, it seems questionable that PPO, A2C, and other algorithms all report a runtime of 1 hour for all runs. When introducing a new algorithm, comparing computational costs with existing frameworks is crucial. These comparisons should ideally be conducted on the same (or at least similar) hardware to ensure consistency. However, the authors have run some algorithms on CPUs, others on CPUs + GPUs, and even different CPUs and GPUs, making it difficult to draw meaningful conclusions.\\n\\n- While the new Appendix B adds relevant information, it could benefit from a more quantitative approach. For instance, in B.2, calculating the binomial coefficient seems unnecessary, while in B.3, a figure or additional evidence to support the authors' claims would strengthen the section. 
On a positive note, the reviewer appreciates the study on the planning horizon depth included in another appendix.\\n\\n- Unfortunately, none of the appendix sections are properly referenced in the main paper. Given the considerable length of the appendix, proper referencing is essential to guide readers and maintain clarity.\\n\\n\\nThe reviewer also notes that it has not been further explored why NACE converges to a sub-optimal reward across all test cases, which remains a significant drawback. While the authors provided some explanations in their initial response, this issue has not been adequately resolved. From the reviewer\\u2019s perspective, this limitation should be explicitly discussed in the paper, along with potential solutions.\\n\\nLastly, the new comparison with Dreamer V3 is a valuable addition, but it feels misplaced in a new section rather than being incorporated into the existing plots for consistency and readability.\"}", "{\"summary\": \"The authors propose NACE, a technique to efficiently solve grid world environments and compare this to state-of-the-art deep reinforcement learning algorithms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The experimental results are easy to follow, and the figures are well made.\", \"weaknesses\": \"I find the motivation of the authors' interest in gridworld problems to be lacking, and the testbeds to be simplistic. I am not convinced that RL is unable to solve such simple tasks as is claimed by the authors, and believe this to be due to suboptimal hyperparameters which appear to be missing from the text. The overall presentation of the paper lacks sufficient depth in details to where it is difficult to follow along in a meaningful way with notation left undefined. It is written in a manner not meant to be read by someone seeing this material for the first time. 
For example, Subsection 4.3 should likely be in the beginning of Section 4 or at least before Section 4.2, as it formally defines a rule, what a cell is, that you are doing conjunction, etc. All of this we can assume in 4.3, but for clarity, it should be clearly stated beforehand.\\n\\nSome areas the authors spend too much time explaining - for example, DQN or PPO, and a whole page is dedicated to these algorithms; each algorithm's description/shortcoming should have been reduced to 1-2 sentences (e.g., don't need to define DQN here just get to the point), giving the authors 0.5 page back that could have been used to better explain their contribution. At the end, I am left with a feeling that this is nothing new, I am still unclear how this compares to existing work *that is similar*, and how everything ties together. Also, how can DQN not solve MiniGrid-Empty-16x16-v0 but can solve MiniGrid-DistShift2-v0? This makes me question hyperparameters, because it should have been possible for DQN to randomly discover at least once a path from start to goal and then improve upon it, like is seen in the other more difficult task.\\n\\nMany notations are not defined in 4.4 to the point where the paper is frustrating to read. What are c, v, a, R(r). Is lower-case r rules? Or reward? Why a lower-case r for a set of rules?\", \"questions\": \"1. Where does the Curiosity Model fit into the overall NACE Architecture in 4.4?\\n2. What are c, v, a, R(r). Is lower-case r rules? Or reward? Why a lower-case r for a set of rules?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce NACE, a novel learning agent that utilizes a causality-informed curiosity model to make intelligent hypotheses about causal information in grid world environments. 
NACE is comprised of 4 components: an observer that updates a \\\"bird-view\\\" map of the environment and assesses prediction-observation failures, a hypothesizer that generates new rules, a planner that balances an exploration-exploitation tradeoff for accruing reward and refining hypotheses, and a predictor that models the environment. The authors assess NACE in a variety of environments from the Minigrid library clustered into three relevant groups: stationary environments, dynamic environments, and dynamic environments with sequential dependencies. Although NACE does not always find the optimal policy, its data efficiency is unparalleled by modern DRL algorithms.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Existing RL techniques for solving gridworlds are systematically laid out and elaborated on in Section 3, which makes it easy for the reader to contextualize the work.\", \"Section 4 introducing NACE is concise and well-described.\", \"Section 5 provides compelling results with a comparison to multiple baselines. Figures highlight the salient contributions that the authors attempt to make with NACE: extreme sample efficiency.\", \"The overall prose of the paper is extremely clear.\"], \"weaknesses\": [\"A more thorough discussion of the 5 kinds of inductive biases, including examples, would make them easier to grasp.\", \"A diagram depicting the states and rule representations described in section 4.3 would be useful. Section 4.3 could use more development and examples.\", \"An example of a full set of causal rules for a simple environment would be welcomed.\"], \"questions\": \"1. How were the hyperparameters chosen for the baseline algorithms?\\n2. Why is NACE unable to find the optimal policy? What improvements could be made to enable NACE to do so? 
A case-study on a specific environment would be interesting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. The integration of the DreamerV3 results in section 4 looks good. The discussion of limitations and better contextualization with more relevant related works is also valuable. I think the soundness of the paper has been significantly improved, and thus I raise my score.\\n\\nUnfortunately, I still cannot recommend acceptance since I still do not see sufficient concrete evidence that the proposed methods can actually be useful beyond Minigrid. Currently, I am doubtful that the contribution is sufficient for ICLR.\"}", "{\"comment\": \"Thank you for the score update and for strengthening our contribution!\\n\\n> Is there a particular area or statistic referencing the number of rules typically required to solve each of these environments?\\n\\nWe do not pre-define the particular rules to identify, as they are dynamically created. And they are spatially independent which greatly reduces the amount of constructed rules. In this regard, we observe that most Minigrid environments demand a rule number of about 50, usually less than 100 as now also indicated in the Appendix H.\"}", "{\"comment\": \"**Regarding training time for the feature abstraction methods:**\\n\\nThank you for your insightful comments! We agree that the sample efficiency of methods relying on feature abstraction can be diminished when the abstraction model is trained concurrently with the agent\\u2019s policy. Methods like Dreamer, which leverage gradient-based updates to improve representations dynamically, indeed hold an advantage in this respect. 
We now highlight this point more explicitly in our comparisons to provide a balanced perspective.\\n\\nFurthermore, in many real-world applications, the relevant object types and features are often predefined, with models pre-trained accordingly. Hybrid systems that leverage such pre-training are common in industry; however, we acknowledge that they typically demand large engineering effort.\\n\\n**Regarding comparison with DreamerV3:**\\n\\nThank you for your invaluable feedback! Initially, we compared our approach with DreamerV2 since it is peer-reviewed and has multiple published results in Minigrid, whereas DreamerV3 is currently a pre-print. Also, we encountered challenges with running DreamerV3 on our HPC cluster in Minigrid, as it lacks a standard Gymnasium interface, unlike the techniques we compared against and DreamerV2.\\n\\nHowever, we totally agree with and understand the importance of using the state-of-the-art version, and in the meantime have identified comparable numbers for DreamerV3 in the literature. We have hence included an initial comparison with DreamerV3 in the current version. \\n\\nAdditionally, we have now managed to run DreamerV3 on the environments ourselves on our HPC cluster; however, the runs are not yet fully completed. Once the results are available, we will integrate them into the plots and tables as you suggest, hopefully by the end of this day. Lastly, we will ensure standard deviations are included for completeness and consistency, as is already the case for the preliminary comparison in the current version.\\n\\nThank you for your proactive and constructive feedback, it has helped make our paper stronger! 
We want to let you know that most of your feedback is now integrated, as you can see in the recent revision, and hope it will be sufficient for you to reconsider the rating.\"}", "{\"comment\": [\"Please see the updated version, we plan to revise it even further prior to the deadline.\", \"W1: We include a discussion of the effects of the omission of the inductive biases in Appendix A.\", \"W3: The asymptotics are now described in the main text in the planner description of section 3.4. Hardware setup and performance costs as exported from our runs will be added later today in the appendix.\", \"W5: This is now analyzed in detail, we ran the model with different planning horizon hyperparameter choices to analyze how it affects performance in Appendix A.\"]}", "{\"comment\": \"Thank you for the detailed response, which addresses some of my concerns.\", \"w1\": \"The reviewer appreciates that the authors agree with the feedback. However, the revised version does not include any of the proposed ablation studies or discussions. Given that the inductive bias presented appears to be tailored specifically to the Minigrid environment and may lack general applicability, the reviewer believes a detailed analysis is essential.\", \"w3\": \"Beyond the asymptotic performance, the reviewer would like to see an evaluation of the computational costs of NACE compared to competing algorithms.\", \"w5\": \"The response highlights that NACE's performance is highly sensitive to certain hyperparameter choices, with one sub-optimal choice leading to a significant drop in performance. This dependency warrants further exploration and should be explicitly addressed in the main paper.\\n\\nAdditionally, it appears that none of the changes mentioned in the responses to this or other reviewers have been incorporated into the revised paper. 
As a result, the reviewer is unable to adjust their rating and maintains their original assessment that the paper is marginally below the acceptance threshold.\"}", "{\"title\": \"Just a reminder\", \"comment\": \"Thank you again for your valuable feedback, we kindly ask to respond to our last message before the deadline.\"}", "{\"comment\": [\"We have updated the paper further and polished it significantly according to your feedback.\", \"The appendix sections are now properly referenced in the main paper.\", \"As you suggested we removed the special case of calculating the amount of possible rule preconditions with the binomials.\", \"The sub-optimality which primary cause we identified to lie in representational limitations is now mentioned in the main paper (end of section 4.3) and described in more detail in Appendix A, \\\"Representational Limitations\\\". Additionally as you noticed we clarified how hyperparameters can contribute further to sub-optimal results \\\"Appendix A, Study of Reduced Planning Horizon\\\".\", \"We acknowledge that our models have been ran on different hardware by different authors, with the implementations using different ML software frameworks with different degrees of GPU utilization efficiency. However as it is visible, NACE was running on the slowest hardware (CPU only), yet took the least amount of time. Nevertheless our paper focuses on the sample efficiency rather than computational efficiency of the implementations as we did not implement the techniques we compared with ourselves.\"]}", "{\"metareview\": \"This paper presents NACE (Non-Axiomatic Causal Explorer), a novel experiential learning agent leveraging causally-informed intrinsic reward for grid world environments. While reviewers acknowledged the paper's contribution to sample-efficient learning and thorough empirical validation across multiple environments, significant concerns were raised about the method's fundamental limitations. 
Specifically, NACE fails to achieve optimal performance even in simple grid worlds, shows high sensitivity to hyperparameter choices, and lacks convincing evidence for generalizability beyond grid-world domains. Through extensive revisions, the authors improved notation clarity and added comparisons with state-of-the-art methods like DreamerV3, but core concerns about theoretical foundations and broader applicability remain insufficiently addressed.\\n\\nGiven these substantial limitations and the preliminary nature of the current results, I recommend rejection with encouragement to strengthen the theoretical framework and demonstrate effectiveness in more complex domains.\", \"additional_comments_on_reviewer_discussion\": \"The paper exhibits three critical unaddressed concerns:\\n\\n1. The method's generalizability remains unproven beyond the specific domain of grid worlds, with no empirical evidence supporting its applicability to more complex environments.\\n2. Despite its sample efficiency, NACE consistently converges to sub-optimal policies across all test environments, raising questions about its fundamental limitations and practical utility.\\n3. The empirical validation lacks standardization in computational comparisons and thorough analysis of the method's scalability, particularly in relation to state-of-the-art approaches.\"}" ] }
0sary0UZn5
On the Limitation and Redundancy of Transformers: A Rank Perspective
[ "Zeping Min", "Zhong Li" ]
Transformers have showcased superior performances across a variety of real-world applications, particularly leading to unparalleled successes of large “foundation” models. However, since these models are usually trained on web-scale datasets, the overall computation and memory loads are considerably increasing, calling for more *efficient* methods in machine learning. In this work, we step towards this direction by exploring the architectural limitation and redundancy of Transformers via investigating the ranks of attention score matrices. On one hand, extensive experiments are conducted on various model configurations (model dimensions, heads, layers, etc) and data distributions (both synthetic and real-world datasets with varied sequence lengths), uncovering two key properties: although the attention rank increases with the head dimension $d_h$, as expected, the rank is eventually upper bounded (limitation) and gets saturated (redundancy). We call them the *low-rank barrier* and *model-reduction effect*, respectively. On the other hand, we provide rigorous demonstrations for these observations through a fine-grained mathematical analysis, highlighting (i) a consistent theoretical upper bound ($\approx 0.63n$, $n$: the sequence length) of the attention rank regardless of the head dimension $d_h$, and (ii) a critical position of the rank saturation ($d_h=\Omega(\log n)$). These results shed light on the inductive biases and internal dynamics of Transformers, contributing to the theoretical understanding and assessment of the model capacity and efficiency in practical applications.
[ "Transformers", "self-attention", "low-rank", "redundancy", "model reduction" ]
Reject
https://openreview.net/pdf?id=0sary0UZn5
https://openreview.net/forum?id=0sary0UZn5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xz1xjjGfrn", "xXygIZPzuo", "ud0DxHnynS", "u4Y2UsJIBL", "r8fQioMc2l", "opDZZn92O5", "iKmmrqVJ17", "gOxBxWtGCm", "VaPSfCxqdo", "VXQajqSXvz", "RyAcSMJ8nE", "ONlqsiEQHD", "O4ScgVDkth", "GeIa2AMh0L", "G5H7o6YxhC", "ACkT9TwKjO", "8h11hbsFVa", "6SDD1ubRz2", "37ICDWInV9", "0iYbPW7Upk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732674731379, 1732674692395, 1733168371052, 1732674104098, 1732673725356, 1732673959447, 1732674625370, 1730698869974, 1733294321541, 1730680107971, 1734736434190, 1730515504124, 1732674054351, 1733169208724, 1732673633081, 1732674186491, 1737524153719, 1730028630737, 1733179294693, 1732869488477 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Reviewer_Kafv" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Reviewer_64Hy" ], [ "ICLR.cc/2025/Conference/Submission11916/Area_Chair_GxP9" ], [ "ICLR.cc/2025/Conference/Submission11916/Reviewer_CvzB" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Submission11916/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission11916/Reviewer_VcLz" ], [ "ICLR.cc/2025/Conference/Submission11916/Reviewer_64Hy" ], [ "ICLR.cc/2025/Conference/Submission11916/Reviewer_VcLz" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer VcLz\", \"comment\": \">**Q1**: The results of Sections 3 and 4 discuss transformers with random data, random weights, or both. It is difficult to generalize from that to general transformers trained on real data. In general, I believe it is OK to have a theoretical paper with restricted assumptions. However, the presentation of this work, and specifically the abstract and introduction, gives the impression that the low-rank phenomenon is general and not restricted to random settings. If the focus of the paper is random settings (i.e. random data and/or weights) this should be clearly stated. If the message of the paper is more general, then further evidence should be provided as I discuss later.\\n\\n**A1**: We clarify that although the theoretical results are developed under random settings, the obtained insights (mainly model-reduction effect) generalize to real-world settings. This point has been discussed in the first paragraph of Section 5 (Line 419-427) in the original manuscript, and we have updated corresponding contents for further clarifications in the revised version. \\n\\n>**Q2**: The theoretical result (Theorem 1) is nice, but limited and isn\\u2019t enough in itself for a theoretical paper. The biggest limitation is the assumption of the data. Although the authors justify assuming that the samples are almost orthogonal (e.g. drawn from a Gaussian or uniform on a sphere), the assumption is that they are exactly orthogonal. This allows only for n samples, instead of O(2^n) samples. It seems possible to prove this result for almost orthogonal data. \\n\\n**A2**: Thanks for your suggestion. 
It is a keen observation that almost orthonormality leads to exponentially many data samples (rather than linear for exact orthonormality) due to the Johnson\\u2013Lindenstrauss lemma. We have successfully extended the main theorem to the almost orthonormality setting via approximation procedures and stability/perturbation analysis. See details in ***Section B.1*** in the revised manuscript. \\n\\n>**Q3**: The \\u201creal-world experiments\\u201d in section 5 are done on very small-scale image datasets, CIFAR-10/100 and SVHN. It would be more convincing to do experiments on larger datasets, and specifically text datasets where it is possible to change the embedding dimension, and thus experiment on the effects of changing $n$. \\n\\n**A3**: Thanks for your suggestion. We have added the requested NLP experiments (on the IMDB dataset) and discussions in ***Section D.3 (Figure 15 and 16)*** in the revised manuscript. It turns out that the insights and implications obtained under image settings consistently hold under text settings with varied input sizes. \\n\\n>**Q4**: What would the effect of the rank in Theorem 1 be when having data that is almost orthogonal? \\n\\n**A4**: Please refer to **A2** for details. \\n\\n>**Q5**: Does the rank saturation phenomenon also happen in real datasets when the input dimension n varies? \\n\\n**A5**: Please refer to **A3** for details.\"}
For example, I would be interested to see if the findings on the model-reduction effect could lead to model compression techniques without significant performance loss.\\n\\n**A3**: We respond with two points: \\n- In the first paragraph of Section 5 (Line 425-427) in the original manuscript, we have briefly discussed the principle of hyper-parameters configuration: \\\"In practical applications, one may check the saturation situation of attention ranks before training, and set the optimal number of parameters as where the rank first gets saturated\\\". \\n- Although it is beyond the scope of this work, we are inspired by Reviewer 64Hy for potential developments of this work in model compression techniques. Related references (e.g. KDEformer [3], HyperAttention [4]) aim to study the approximate calculation problem of attention matrices (with direct applications in model compression), with the fundamental approach of reducing the full matrix multiplication to sub-matrix multiplications. These works relate to attention ranks through the size of sub-matrices, which is typically lower bounded by measures depending on (stable) ranks of attention matrices. It would be interesting to further develop these works with the inductive biases established in this work, i.e. explore potentially more efficient algorithms given the low-rank barrier and rank saturation of attention matrices. We have added these references and corresponding discussions in the related work section (***Section A***) in the revised manuscript. \\n\\n\\n**References** \\n\\n[1] Srinadh Bhojanapalli, Chulhee Yun, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Low-rank bottleneck in multi-head attention models. *International Conference on Machine Learning*, pp. 864\\u2013873. PMLR, 2020. \\n\\n[2] Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. *International Conference on Machine Learning*, pp. 
2793\\u20132803. PMLR, 2021. \\n\\n[3] Amir Zandieh, Insu Han, Majid Daliri, and Amin Karbasi. KDEformer: Accelerating transformers via kernel density estimation. *International Conference on Machine Learning*, pp. 40605\\u201340623. PMLR, 2023. \\n\\n[4] Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, and Amir Zandieh. HyperAttention: Long-context attention in near-linear time. In *International Conference on Learning Representations*, 2024.\"}", "{\"title\": \"Newly updated results (continue)\", \"comment\": \"As is suggested by Reviewer VcLz, we further make the following revisions (since a pdf revision cannot be uploaded currently, we list the changes below):\\n1. We merge Theorem 1 and Remark 2 (Section B.1) in the revised manuscript, to form a main theorem with explicitly weaker conditions on inputs. Theorem 1 now becomes: \\nLet the parameters $\\\\mathbf{W} _ q, \\\\mathbf{W} _ k$ be Gaussian random matrices, i.e., the entries of $\\\\mathbf{W} _ q, \\\\mathbf{W} _ k$ are independent $\\\\mathcal{N}(0, 1)$ random variables. Assume that the input sequence $\\\\mathbf{X}$ satisfies $\\\\mathbf{X} \\\\mathbf{X}{^\\\\top} = \\\\mathbf{I} _ n +\\\\mathbf{E}$ with $\\\\mathbf{E} = [E _ {ij}] \\\\in \\\\mathbb{R} ^ {n \\\\times n}$ satisfying $|E _ {ij}| \\\\le \\\\epsilon = o(1/(n ^ {\\\\frac{3}{2}}(d+d _ h)))$ ($\\\\forall i, j \\\\in [n]$, i.e. almost orthonormality). 
Then for any $n \\\\in \\\\mathbb{N} _ +$ appropriately large, $d \\\\ge n$, and $\\\\delta>0$ appropriately small, we have \\n \\\\begin{equation}\\n \\\\mathbb{E} _ {\\\\mathbf{W} _ k, \\\\mathbf{W} _ q}\\n \\\\left[\\\\mathrm{rank} \\n \\\\left(\\\\mathrm{hardmax}\\n \\\\left(\\\\mathbf{X} \\\\mathbf{W} _ q \\\\mathbf{W} _ k^\\\\top \\\\mathbf{X}^\\\\top \\\\right), \\\\delta\\n \\\\right)\\\\right] \\n \\\\le (1 - \\\\exp(-1))n \\n + O(1) \\n \\\\approx 0.63n, \\n \\\\end{equation}\\n where $\\\\mathrm{rank}(\\\\mathbf{A}, \\\\delta)$ equals to the number of singular values (of $\\\\mathbf{A}$) no less than $\\\\delta$ (i.e. numerical rank). \\n\\n2. We will move Section D.3 (Figure 15 and 16) to the main part, right in Section 5.2 (after Figure 4). \\n\\n3. We further add an additional test on the rank saturation of different layers. The results are as follows: \\n\\n- Table 1: Rank saturation in layers (2nd layer). One can observe that given different (input) embedding dimensions ($32, 128, 256, 512$), the attention rank always gets saturated when increasing the head dimension. \\n|Embed Dim|Head Embed Dim|Mean Rank|Std Rank\\n|----|----|----|----|\\n32|2|0.0609|0.0026\\n32|3|0.127|0.0055\\n32|4|0.2032|0.0083\\n32|8|0.2539|0.0137\\n32|16|0.2584|0.0186\\n128|2|0.0591|0.0014\\n128|3|0.132|0.0037\\n128|4|0.2033|0.0063\\n128|8|0.2593|0.0096\\n128|16|0.2667|0.0101\\n256|2|0.0588|0.0018\\n256|3|0.1347|0.0051\\n256|4|0.2105|0.0074\\n256|8|0.261|0.0137\\n256|16|0.2661|0.0143\\n512|2|0.0596|0.001\\n512|3|0.134|0.0036\\n512|4|0.209|0.0061\\n512|8|0.2609|0.0106\\n512|16|0.2612|0.0108\\n\\n- Table 2: Rank saturation in layers (4th layer). 
One can observe that given different (input) embedding dimensions ($32, 128, 256, 512$), the attention rank always gets saturated when increasing the head dimension.\\n |Embed Dim|Head Dim|Mean Rank|Std Rank\\n|----|----|----|----|\\n32|2|0.0518|0.0042\\n32|3|0.1177|0.0038\\n32|4|0.1925|0.0115\\n32|8|0.2584|0.0161\\n32|16|0.267|0.0137\\n128|2|0.0521|0.0023\\n128|3|0.1201|0.002\\n128|4|0.196|0.0049\\n128|8|0.2572|0.0131\\n128|16|0.2527|0.0097\\n256|2|0.0523|0.0021\\n256|3|0.121|0.0034\\n256|4|0.1953|0.0058\\n256|8|0.2677|0.0146\\n256|16|0.2551|0.0081\\n512|2|0.053|0.0021\\n512|3|0.1202|0.0056\\n512|4|0.1946|0.0066\\n512|8|0.2567|0.0134\\n512|16|0.2598|0.0139\\n\\nWe will plot these two tables in the following version and also add them to Section 5.2 (after Figure 4). \\n\\n4. To clarify the setting of theoretical analysis: \\n- In the abstract, line 023-024: \\\"we provide rigorous demonstrations for these observations through a fine-grained mathematical analysis\\\" -> \\\"we provide rigorous demonstrations for these observations under idealized settings through a fine-grained mathematical analysis\\\". \\n- In the introduction, line 099-100: \\\"mathematical estimates are established on the barrier of attention ranks\\\" -> \\\"mathematical estimates are established on the barrier of attention ranks for Transformers with random parameters\\\".\"}", "{\"title\": \"Response to Reviewer 64Hy (continue)\", \"comment\": \">**Q5**: This is smaller weakness, but it is not clear what we can do with this rank-bounded-ness insight (assuming that this upper bound is useful and accurate). It is not clear what problems a transformer is unable to solve because of this rank-boundness, or what problems it would have been able to solve if it was able to have full-rank attention matrices. 
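The numerical rank rank(A, δ) used in the theorem and tables above (the number of singular values no less than δ) is easy to reproduce; a minimal sketch, assuming numpy, applied to a toy hardmax attention matrix in which each row attends to a uniformly random column (an illustrative idealization, not the exact construction of the proof):

```python
import numpy as np

def numerical_rank(a: np.ndarray, delta: float = 1e-6) -> int:
    """rank(A, delta): the number of singular values no less than delta."""
    return int(np.sum(np.linalg.svd(a, compute_uv=False) >= delta))

rng = np.random.default_rng(0)
n = 1000

# Hardmax attention puts a single 1 in each row; when attended columns
# repeat, rows coincide and the matrix becomes rank-deficient.  With the
# attended column drawn uniformly per row, the normalized rank concentrates
# near 1 - exp(-1), matching the bound in Theorem 1.
attn = np.zeros((n, n))
attn[np.arange(n), rng.integers(0, n, size=n)] = 1.0

print(numerical_rank(attn) / n)
```

For this one-hot construction, the numerical rank equals the number of distinct attended columns, which is what the expectation bound counts.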
Can the authors please provide specific examples or hypothetical scenarios where this rank-boundedness might impact transformer performance or capabilities as this would help connect their theoretical results to practical applications.\\n\\n**A5**: Yes, as is discussed in Lines 45-50 in the original manuscript, an important phenomenon called the *low-rank bottleneck* has been uncovered by numerous recent works. Representative examples among them include (but are not limited to)\\n- [1]: The low-rank attention unit cannot represent certain desired contexts. \\n- [2]: Pure attention loses its rank exponentially fast in depth. \\n\\n>**Q6**: It would be good to make the caption of Figure 1 self-sufficient (or point to the part of the text where the complete description is available). Otherwise, having such an introductory figure on page 2 seems a bit confusing. \\n\\n**A6**: Thanks for your suggestion. We have updated this in the revised version. \\n\\n>**Q7**: The \\\"less than or around 0.01\\\" comment in Table 2 caption makes that \\\"log-dependence\\\" argument a bit less convincing. One can alternately argue that for sequence length of 200, we needed more than a linear increase in $d_h$, implying a very different story. \\n\\n**A7**: Certainly, noise is inevitable in the evaluation (even with averaging), since both the data generation and parameter initialization introduce randomness. That is why the standard deviation ($\\\\pm$) is also reported, and a non-strict tolerance (\\\"around\\\" $0.01$) is adopted. The story is not different, since one can observe (i) minor variations around the saturation; (ii) increases of $d_h^*$ (around the saturation) are much slower than those of $n$.\"}
This raises the question of whether the findings generalize to other domains, such as natural language processing or audio processing, where Transformer models are widely used. Including additional experimental results from these domains would be very helpful.\\n\\n**A1**: Thanks for your suggestions. We have added required NLP experiments (on the IMDB dataset) and discussions in ***Section D.3 (Figure 15 and 16)*** in the revised manuscript. It turns out that the insights and implications obtained under image settings consistently hold under text settings with varied input sizes. \\n\\n>**Q2**: The findings are rather interesting. Would they apply to other Transformer models, such as those used in NLP and audio processing tasks? \\n\\n**A2**: Please refer to **A1** for details.\"}", "{\"title\": \"Response to Reviewer 64Hy\", \"comment\": [\">**Q1**: There are many variations of multi-head where the value matrix $\\\\mathbf{V}^{(i)}\\\\in\\\\mathbb{R}^{n \\\\times d_h}$ has the head dimension $d_h$. But usually the key and query matrices do not need to have the same dimensionality as the value matrix. Furthermore, there are versions where $d _ h \\\\times h \\\\ne d _ {\\\\text{model}}$, and $\\\\mathbf{W} _ o$ projects $h \\\\times d _ h \\\\to d_{\\\\text{model}}$. For example $d_h = d_{\\\\text{model}}$. How does the results studied here be affected by these different strategies? Furthermore, there are versions that just vary the value matrix dimension, but the query/key matrix dimensions are not affected by the number of heads. In that case, this analysis is not applicable. In fact, in the empirical evaluation of Figure 4b (which uses a different variation), we are able to go beyond the $0.63$ range with $d_h<5$ and goes up to $0.68$. Different variations of this could allow us to go to full rank (or close to it). In fact, it would seem that, with $h=1$, we should be able to recover the full-rank. 
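The kind of rank-saturation sweep referenced in these experiments can be sketched in a few lines; a minimal illustration, assuming numpy, using a single randomly initialized attention head with a large logit scale to approximate the hardmax regime of the theory (the sizes and constants here are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_attention_rank(n, d, dh, trials=20, seed=0):
    """Mean normalized numerical rank of one softmax attention head with
    Gaussian random weights, averaged over random near-orthonormal inputs."""
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(trials):
        x = rng.standard_normal((n, d)) / np.sqrt(d)
        wq = rng.standard_normal((d, dh))
        wk = rng.standard_normal((d, dh))
        # Large scale factor -> near-hardmax rows, as in the theoretical setting.
        attn = softmax(1e3 * (x @ wq) @ (x @ wk).T / np.sqrt(dh))
        ranks.append(np.linalg.matrix_rank(attn, tol=1e-6))
    return float(np.mean(ranks)) / n

# The rank gains diminish as the head dimension grows (model-reduction effect).
for dh in (1, 2, 4, 8, 16, 32):
    print(dh, round(mean_attention_rank(n=64, d=64, dh=dh), 3))
```

Picking the smallest head dimension at which this curve flattens is the saturation-based configuration check described in the responses.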
Can the authors please explicitly discuss how their analysis might change for these different variations? Alternately, can the authors please justify their choice of focusing on one particular formulation of multi-head attention as this would clarify the scope and limitations of the presented results?\", \"**A1**: Thanks for your detailed comments.\", \"For variations: We would like to clarify that we have investigated both cases of single head (Theorem 1) and multiple heads (Figure 3: $d=d_{\\\\text{model}} = h \\\\times d_h$; Figure 4: $d \\\\ne d_{\\\\text{model}} = h \\\\times d_h$). There are certainly many other variants, but we select these variations with specific purposes:\", \"Single head: It is necessary to start with the most fundamental case.\", \"Multiple heads with $d=d_{\\\\text{model}} = h \\\\times d_h$: It is the most standard and conventional case in practice.\", \"Multiple heads with $d \\\\ne d_{\\\\text{model}} = h \\\\times d_h$: This is required by our goal to study the model reduction effect of Transformers, i.e. to check the architectural redundancy when increasing modeling parameters. Since the scope of this work is on the head dimension $d_h$, we have to vary $d_h$ with other hyper-parameters fixed.\", \"Beyond the range: Note that the $\\\\approx 0.63n$ upper bound (Eq. (4)) is derived under idealizations, and can only serve as an approximation of real-world experiments. This is understandable due to the complexity of ground truth distributions (e.g. images) and large-scale models (with many different modules). Moreover, as is discussed in the first paragraph of Section 5 (Line 421-426) in the manuscript, the primary goal in real-world settings is to investigate the model redundancy issue.\", \">**Q2**: It is not clear how much of this analysis is dependent on the normal distribution assumption on the weights and tokens? 
Consider a case where the $\\\\mathbf{K}=\\\\mathbf{Q}=\\\\mathbf{X}$ with each $\\\\parallel\\\\mathbf{x}_i\\\\parallel_2=1$ and the temperature is low enough that we are doing top-1 attention (which is also what is considered in this paper), then the matrix $\\\\mathbf{\\\\text{Attn}}(\\\\mathbf{X})=\\\\mathbf{I}_n$ which is the $n \\\\times n$ identity matrix, which is full-rank. So this clearly gives a case where the proposed bound is violated. Why is this an implausible case, or conversely, how does the presented analysis subsume this situation? While the identity attention matrix seems too special, one can think of a problem where each token needs to just attend to the token right before it (that is $\\\\mathbf{\\\\text{Attn}}(\\\\mathbf{X})[i,i-1]=1$, $\\\\forall i >1$, leading to an off-diagonal almost full-rank attention matrix. Can the authors please address this specific counterexample and explain how it relates to their assumptions and results?\", \"**A2**: Thanks for your detailed comments. In fact, our bound in Theorem 1 (Eq. (4)) is for $\\\\mathbb{E} _ {\\\\mathbf{W}}$, not for $\\\\sup _ {\\\\mathbf{W}}$ (which is the case you refer to), and bounded $\\\\mathbb{E}\\\\mathbf{z}$ does not imply the boundness of $\\\\sup\\\\mathbf{z}$ ($\\\\mathbf{z}$: random variable). One can easily construct a counterexample even in one dimension: Let $z \\\\sim p(z)=2/z^3$, $z\\\\ge 1$, it is straightforward to verify that $p(\\\\cdot)>0$ and $\\\\int _ 1^\\\\infty p(z) dz=1$ (i.e., $p(\\\\cdot)$ is a probability density), $\\\\mathbb{E} z=\\\\int_1^\\\\infty zp(z) dz=2<\\\\infty$, while $\\\\sup z=\\\\infty$.\"]}", "{\"title\": \"Response to Reviewer CvzB\", \"comment\": [\">**Q1**: Lack of motivation: First of all, why should one care about the rank of the attention matrix? 
While it is interesting to note that the attention matrices display the low-rank barrier and model-reduction effects, it is unclear how these findings directly impact the design or usage of Transformer models in practical applications. The study would benefit from a more explicit motivation linking these theoretical insights to specific challenges in machine learning or computational limitations. In particular, attention ranks do not seem to have a clear relationship with the model performance or expressive power. Have you identified whether the low-rank barrier correlates with any performance metrics? Could the model-reduction effect be leveraged to improve model efficiency?\", \"**A1**: For the rank motivation and connections with practice:\", \"- Matrix rank is a fundamental algebra concept, which reflects the spectrum of certain operators. Its fundamental effect in approximation (expressive power) has been discussed in the paragraph at Line 354-365 in the original manuscript.\", \"- As is discussed in the introduction part of the original manuscript (Line 45-50), an important phenomenon called the *low-rank bottleneck* has been uncovered by numerous recent works for practical Transformers. Representative examples among them include (but are not limited to)\", \"[1]: The low-rank attention unit *cannot represent* certain desired contexts.\", \"[2]: Pure attention loses its rank exponentially fast in depth, leading to rank-1 attention matrices and quite limited expressive power (outputs stay in only one dimension).\", \"For the model reduction effect and performance metrics:\", \"- In the first paragraph of Section 5 (Line 422-427) in the original manuscript, we demonstrate through real-world numerical simulations that \\\"the low-rank saturation of every single head leads to an *inefficiency* issue: Both the attention rank and model performance *consistently* get *marginal enhancements* when increasing parameters, implying the model redundancy\\\". 
\\n- To boost the model efficiency, this aligned saturation in both model performance and attention ranks \\\"gives chances for the optimal configuration of hyper-parameters: In practical applications, one may check the saturation situation of attention ranks before training, and set the optimal number of parameters as where the rank first gets saturated\\\". \\n\\n>**Q2**: Assumptions in theoretical analysis: The theoretical analysis assumes orthonormal input sequences to attention, which may not fully reflect the reality. For example, there are other evidence in the literature suggesting that contexualized token embeddings tend to be anisotropic [Ethayarajh, 2019]. While the authors justify this orthogonal assumption by citing Tian et al. 2024, further discussion on the applicability of the theoretical results in varied real-world scenarios would enhance their robustness. \\nKawin Ethayarajh. How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings. In EMNLP, 2019. \\n\\n**A2**: We argue that assuming orthonormal inputs in the main theorem gives the worst-case justification. To see this, let $\\\\mathbf{A}:=\\\\mathbf{X} \\\\mathbf{W} _ q \\\\mathbf{W} _ k^\\\\top \\\\mathbf{X}^\\\\top$, it is straightforward to show that orthonormal $\\\\mathbf{X}$ leads to the maximal $\\\\mathrm{rank}(\\\\mathbf{A})$. In fact, since the matrix multiplication cannot increase the overall rank, we formally have $\\\\mathrm{rank} _ {\\\\text{low-rank}~\\\\mathbf{X}}(\\\\mathbf{A}) \\\\le \\\\mathrm{rank} _ {\\\\text{full-rank} ~ \\\\mathbf{X}}(\\\\mathbf{A})$. Also, for any full-rank matrix $\\\\mathbf{X}$, it has a QR decomposition $\\\\mathbf{X}=\\\\mathbf{Q}\\\\mathbf{R}$, where $\\\\mathbf{Q}$ is orthonormal and $\\\\mathbf{R}$ is a upper triangular matrix with positive diagonal elements (and hence invertible). 
This leads to an equivalent parameterization $\\\\mathbf{A}=\\\\mathbf{Q} (\\\\mathbf{R} \\\\mathbf{W}_q) (\\\\mathbf{R}\\\\mathbf{W}_k)^\\\\top \\\\mathbf{Q}^\\\\top$.\"}", "{\"summary\": \"This work explores the architectural limitations and redundancy in Transformer models by analyzing the ranks of their attention score matrices. Through extensive experiments across diverse model configurations and data distributions, the authors uncover two key properties: the low-rank barrier and the model-reduction effect. These findings are rigorously supported by a fine-grained mathematical analysis, revealing (i) a consistent theoretical upper bound on the attention rank (0.63n) and (ii) a critical threshold for rank saturation where the hidden dimension h scales as \\u03a9(log n). These results illuminate the inductive biases and internal dynamics of Transformers, deepening our theoretical understanding and enabling better assessment of model capacity and efficiency in practical applications. These insights are particularly valuable for Transformer architecture design and optimization.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This work studies a fundamental problem in Transformer model efficiency. In this paper, the authors present extensive empirical results and rigorous theoretical analysis, offering critical insights into the architectural limitations and redundancy in Transformer models. The findings are highly valuable for designing more efficient Transformer-based architectures.\", \"weaknesses\": \"While the paper presents extensive experiments on various model configurations and data distributions, the evaluation focuses on a few common computer vision datasets (CIFAR-10, CIFAR-100, and SVHN). This raises the question of whether the findings generalize to other domains, such as natural language processing or audio processing, where Transformer models are widely used. 
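A quick numerical check of this reparameterization argument, assuming numpy (the dimensions are arbitrary illustrative choices): a generic full-rank input yields the same pre-softmax logits as its orthonormal QR factor with the weights transformed by R.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dh = 8, 16, 4
x = rng.standard_normal((n, d))      # generic (almost surely full-rank) input
wq = rng.standard_normal((d, dh))
wk = rng.standard_normal((d, dh))

logits = x @ wq @ wk.T @ x.T         # X Wq Wk^T X^T

# X = Q R with Q orthonormal, so the same logits arise from an orthonormal
# input Q with re-parameterized weights R Wq and R Wk.
q, r = np.linalg.qr(x)               # q: (n, n), r: (n, d) since n <= d
logits_equiv = q @ (r @ wq) @ (r @ wk).T @ q.T

print(np.allclose(logits, logits_equiv))   # True
print(np.linalg.matrix_rank(logits))       # at most min(n, dh) = 4 here
```

This also makes the worst-case point concrete: the logit rank is capped by the head dimension regardless of how the input is parameterized.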
Including additional experimental results from these domains would be very helpful.\", \"questions\": \"The findings are rather interesting. Would they apply to other Transformer models, such as those used in NLP and audio processing tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 64Hy\", \"comment\": [\"Thanks for your comments. Regarding your concerns, we would like to further clarify as follows.\", \"1. Literature comparisons\", \"As we have stated in the above rebuttal, Bhojanapalli et al. (2020) shows that low-rank attention unit cannot represent certain desired contexts (with critical \\\"phase transitions\\\" around the *full-rank*), and Dong et al. (2021) proves that pure attention loses its rank exponentially fast in depth (i.e. converging to *1-rank*). Their settings are totally different from this work.\", \"Particularly, these works, also including many other studies, did not explicitly derive **quantitative estimates** on the attention rank, and further show the architectural **redundancy** (in both attention ranks and model performance).\", \"2. Theoretical constructions\", \"We point out that the constructions you refer to are under the \\\"limit\\\" setting (i.e. $\\\\epsilon=0$ with $\\\\epsilon$ as the perturbation tolerance). However, we have extended the results to the \\\"asymptotic\\\" setting (i.e. $\\\\epsilon=o(1)\\\\ne 0$). As is discussed in **A2** in **Response to Reviewer VcLz**, due to Johnson\\u2013Lindenstrauss lemma, almost orthonormality (the asymptotic setting) leads to *exponentially* many \\\"dimensions\\\", rather than the *linear* dimension for exact orthonormality (the limit setting). 
Therefore, this extension is significant and non-trivial.\", \"Particularly, all the independence you refer to appears in the limit setting, but *none of the independence holds in the asymptotic setting* (where Lemma 4 does not hold). That is, the independence in theoretical constructions is only *intermediary*, which is leveraged by further (approximation) analysis to obtain *general results without independence*.\", \"Given that it is difficult to directly analyze the general dependence in theory, we also conducted extensive experiments to verify the theoretical results. For more complex cases, say *more types of data distributions* and *non-i.i.d.* data, *even under real-world settings*, it is observed that similar patterns of both the rank barrier and rank saturation hold (see Figure 2(right), Figure 3(b), Figure 4(b), Figure 15(right) and Figure 16 in the revised manuscript), despite that the independence in theoretical constructions is not provided in these practical cases.\", \"3. Contributions: We kindly remind you (and also other reviewers) of possibly missing core contributions of this work. 
**We emphasize that the main contributions of this submission also include the key ___theory-experimentation consistency___ in ___real-world___ settings**.\", \"As is shown in Figure 3(b), Figure 4(b), Figure 15(right) and Figure 16 in the revised manuscript, it is observed in general that both the low-rank barrier and rank saturation appear in Transformers (with random weights) on real-world datasets.\", \"**More importantly**, as is *jointly* shown in subplots ((a), (b)) of Figure 3, 4, 15, there is significant *alignment* between the attention *rank saturation* and *model performance improvement*: When increasing the head dimension, *similar trends* of *marginal enhancements* appear in *both* the attention rank (of randomly initialized Transformers) and classification accuracy (of trained practical Transformers) on real-world datasets.\", \"This theory-experimentation consistency captures the actual behavior (i.e. architectural redundancy) underlying the attention mechanism.\", \"4. Paradigm: Given the high non-linearity of Transformers and complexity of practical data distributions, it is considerably difficult and also unnecessary to directly analyze every mathematical detail. As is often the case to handle complex systems, here we adopt a two-stage paradigm:\", \"Scale-down: For complicated practical problems, approximately propose an idealized formulation, and then perform rigorous mathematical analysis.\", \"Scale-up: Back to original problems, experimentally verify the \\\"scale-down\\\" results to further justify their validity in general cases.\", \"This paradigm is guaranteed to be reasonable if theoretical insights (1st stage) and experimental phenomena (2nd stage) are *consistent*. As is highlighted in **Point 3**, we have achieved this ultimate goal.\"]}", "{\"summary\": \"This paper focuses on the attention matrix in transformers, and studies the effect of the head dimension on the rank of this attention matrix. 
Under assumptions on the data and the transformer weights, the paper empirically and theoretically highlights that the rank of the $(n \\\\times n)$ attention matrix for $n$-length input sequences is upper-bounded by a quantity close to $0.63n$, and that the attention matrix rank grows with the head dimension $d_h$ but the gain in the attention matrix rank diminishes as the head dimension grows, demonstrating a \\\"diminishing returns\\\" behaviour. This behaviour is demonstrated with vision transformers where the ranks of the attention matrix of the first transformer block are reported as the head dimension is varied.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper considers an interesting topic of analysis, studying the rank of the attention matrix, and how it is limited from above, which puts a limitation on the expressivity of the attention mechanism.\", \"weaknesses\": [\"(W1) In my opinion, the main weakness of this paper is the analysis tools and assumptions utilized for providing the rank upper bounds which are used to make the main claim of this paper that the attention matrices are rank limited. Here are some specific issues:\", \"(W1.1) There are many variations of multi-head where the value matrix $\\\\mathbf{V}^{(i)} \\\\in \\\\mathbb{R}^{n \\\\times d_h}$ has the head dimension $d_h$. But usually the key and query matrices do not need to have the same dimensionality as the value matrix. Furthermore, there are versions where $d_h \\\\times h \\\\not= d_{\\\\text{model}}$, and $W_o$ projects $h \\\\times d_h \\\\to d_{\\\\text{model}}$. For example $d_h = d_{\\\\text{model}}$. How would the results studied here be affected by these different strategies? Furthermore, there are versions that just vary the value matrix dimension, but the query/key matrix dimensions are not affected by the number of heads. In that case, this analysis is not applicable. 
In fact, in the empirical evaluation of Figure 4b (which uses a different variation), we are able to go beyond the $0.63$ range with $d_h < 5$ and go up to 0.68. Different variations of this could allow us to go to full rank (or close to it). In fact, it would seem that, with $h = 1$, we should be able to recover the full-rank. Can the authors please explicitly discuss how their analysis might change for these different variations? Alternately, can the authors please justify their choice of focusing on one particular formulation of multi-head attention as this would clarify the scope and limitations of the presented results?\", \"(W1.2) It is not clear how much of this analysis is dependent on the normal distribution assumption on the weights and tokens? Consider a case where the $\\\\mathbf{K} = \\\\mathbf{Q} = \\\\mathbf{X}$ with each $\\\\|\\\\mathbf{x}_i\\\\|_2 = 1$ and the temperature is low enough that we are doing top-1 attention (which is also what is considered in this paper), then the matrix $\\\\textbf{Attn}(\\\\mathbf{X}) = I_n$ which is the $n\\\\times n$ identity matrix, which is full-rank. So this clearly gives a case where the proposed bound is violated. Why is this an implausible case, or conversely, how does the presented analysis subsume this situation? While the identity attention matrix seems too special, one can think of a problem where each token needs to just attend to the token right before it (that is $\\\\textbf{Attn}(\\\\mathbf{X})[i, i-1] = 1, \\\\forall i > 1$), leading to an off-diagonal almost full-rank attention matrix. Can the authors please address this specific counterexample and explain how it relates to their assumptions and results?\", \"(W1.3) It is odd that while we are studying the effect of the head dimension on the rank of the attention matrix, Theorem 1 has no dependence on $d_h$. This makes the result somewhat odd. 
I think this is an artifact of the assumptions and analysis, which effectively reduces the attention matrix to the case where each row has 1 in one of the $n$ indices at random in a query independent way. This is equivalent to each token sampling uniformly at random with replacement a token out of the $n$ tokens. Thus the expected number of unique tokens attended to in the complete attention matrix (with only one non-zero per row) is equivalent to its rank. Using standard combinatorics arguments, this expected number of unique tokens attended (for sampling with replacement) to will come to $n(1 - (1 - 1/n)^n)$ which approaches the $\\\\approx 0.63n$ bound. While the analysis in the paper is correct, this form of an attention matrix is not useful or interesting for real applications of transformers, and also the head dimension $d_h$ plays no role here, which is different from the motivation of this paper. Can the authors please explicitly discuss this apparent discrepancy and explain how it relates to their overall claims about the effect of head dimension on attention rank? Alternately, the authors can also share (or point to) a finer-grained analysis that directly tie this rank upper bound to the head dimension.\", \"(W1.4) It is not clear why line 363 \\\"Recall that the rows of $\\\\mathbf{X} \\\\mathbf{W}_q \\\\mathbf{W}_k^\\\\top \\\\mathbf{X}^\\\\top = \\\\mathbf{Q} \\\\mathbf{K}^\\\\top$ are independently and identically distributed as $\\\\mathcal{N}(\\\\mathbf{0}_n, \\\\mathbf{K} \\\\mathbf{K}^\\\\top)$\\\" is true. Why is this distribution independent of $\\\\mathbf{Q}$? Similarly, equation (6) seems odd, highlighting that the rows for each query in the attention matrix are distributed identically. 
This query-independence is both odd and counter to the main motivation of transformers that usually have different attention patterns for different rows/queries.\", \"(W2) This is smaller weakness, but it is not clear what we can do with this rank-bounded-ness insight (assuming that this upper bound is useful and accurate). It is not clear what problems a transformer is unable to solve because of this rank-boundness, or what problems it would have been able to solve if it was able to have full-rank attention matrices. Can the authors please provide specific examples or hypothetical scenarios where this rank-boundedness might impact transformer performance or capabilities as this would help connect their theoretical results to practical applications.\"], \"minor_comments\": [\"(C1) It would be good to make the caption of Figure 1 self-sufficient (or point to the part of the text where the complete description is available). Otherwise, having such an introductory figure on page 2 seems a bit confusing.\", \"(C2) The \\\"less than or around 0.01\\\" comment in Table 2 caption makes that \\\"log-dependence\\\" argument a bit less convincing. One can alternately argue that for sequence length of 200, we needed more than a linear increase in $d_h$, implying a very different story.\", \"(C3) While the assumptions on the input are discussed in Remarks 2 and 3, note that the assumptions on the key/query projection matrices seem more restrictive to me, and require appropriate discussion.\"], \"questions\": [\"(Q) Both KDEFormer [A] and Hyperattention [B] seems to also consider the rank of the attention matrix theoretically (among others). However, these references are missing. How is the setup of this current paper positioned against these references?\", \"[A] Zandieh, Amir, et al. [\\\"Kdeformer: Accelerating transformers via kernel density estimation.\\\"](https://proceedings.mlr.press/v202/zandieh23a.html) International Conference on Machine Learning. 
PMLR, 2023.\", \"[B] Han, Insu, et al. [\\\"HyperAttention: Long-context Attention in Near-Linear Time.\\\"](https://openreview.net/forum?id=Eh0Od2BJIM) The Twelfth International Conference on Learning Representations. 2024.\", \"(Q) It is not clear how the rank of the attention matrix is tied to the expressivity of the attention mechanism. Are there existing results that make this connection?\", \"(Q) Given the use of softmax operation in the attention matrix, isn't it expected that the attention matrix will not be full rank? Part of the motivation for schemes like Performers [C], Scatterbrain [2] etc. is this low rank structure. In fact, if we remove the softmax, this linear attention can probably have full rank if the head dimension is large enough but is not desired since the softmax operation is what makes attention work.\", \"[C] Choromanski, Krzysztof Marcin, et al. [\\\"Rethinking Attention with Performers.\\\"](https://openreview.net/forum?id=Ua6zuk0WRH) International Conference on Learning Representations. 2020.\", \"(Q) In Section 3.1, whose rank are we computing? The $\\\\textbf{Attn}^{(i)}(\\\\mathbf{X})$ matrices? 
How are the ranks aggregated across the multiple heads?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There are no ethical concerns in my opinion.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper analyzes limitations of transformer architectures by studying the rank of attention score matrices, limited to the case of random data and/or random weights, showing theoretical upper bounds on attention rank and empirically demonstrating rank saturation effects as model dimensions increase.\\n\\nThe main concerns that remained after feedback were the theoretical analysis relying on overly simplified assumptions, such as the assumption of query-independent attention patterns, and the strong assumptions of random (not real) weights or data respectively.\\n\\nWe hope the detailed feedback from the reviews and discussions helps to strengthen the paper for a future occasion.\", \"additional_comments_on_reviewer_discussion\": \"The author feedback phase was useful and active. Concerns however remained if the work is ready for the high bar of ICLR.\"}", "{\"summary\": \"The manuscript investigates the relationship between head dimensions and the corresponding attention score matrix ranks. The authors identify two phenomena in Transformer attention ranks: (1) an upper bound on attention rank, referred to as the \\\"low-rank barrier,\\\" and (2) the \\\"model-reduction effect,\\\" which denotes the diminishing returns on attention rank when increasing head dimensions beyond a certain threshold. Experiments are provided to validate these findings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a new theoretical perspective on Transformer architecture, particularly in characterizing the limitations of attention ranks, which is insightful for understanding the underlying mechanics of model capacity.\\n\\n2. 
The two phenomena in Transformer attention ranks: (1) an upper bound on attention rank, referred to as the \\\"low-rank barrier,\\\" and (2) the \\\"model-reduction effect,\\\" which denotes the diminishing returns on attention rank when increasing head dimensions beyond a certain threshold, are quite interesting and intriguing.\", \"weaknesses\": \"1. Lack of motivation: First of all, why should one care about the rank of the attention matrix? While it is interesting to note that the attention matrices display the low-rank barrier and model-reduction effects, it is unclear how these findings directly impact the design or usage of Transformer models in practical applications. The study would benefit from a more explicit motivation linking these theoretical insights to specific challenges in machine learning or computational limitations. In particular, attention ranks do not seem to have a clear relationship with the model performance or expressive power. Have you identified whether the low-rank barrier correlates with any performance metrics? Could the model-reduction effect be leveraged to improve model efficiency?\\n\\n2. Assumptions in theoretical analysis: The theoretical analysis assumes orthonormal input sequences to attention, which may not fully reflect the reality. For example, there is other evidence in the literature suggesting that contextualized token embeddings tend to be anisotropic [Ethayarajh, 2019]. While the authors justify this orthogonal assumption by citing Tian et al. 2024, further discussion on the applicability of the theoretical results in varied real-world scenarios would enhance their robustness. \\n\\n3. Limited exploration of practical applications: While the theoretical findings are interesting, the work could benefit from a more explicit discussion of how these insights translate into practice. 
For example, I would be interested to see if the findings on the model-reduction effect could lead to model compression techniques without significant performance loss.\\n\\n\\n\\nKawin Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In EMNLP, 2019.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Alternately, the authors can also share (or point to) a finer-grained analysis that directly tie this rank upper bound to the head dimension.\\n\\n**A3**: We respond with two points: \\n- For independence on $d_h$: This is exactly what the low-rank barrier stands for. In fact, Theorem 1 holds *universally* for any head dimension. The $\\approx 0.63n$ upper bound is derived under certain idealizations, but neither the argument nor its applicability in practice is so limited, as is explained in the following points. \\n- For combinatorics arguments: The standard combinatorics argument is oversimplified, since it is based on the assumptions that (i) the (column) values in attention matrices are identically distributed (leading to the $1/n$ probability to assign $1$); (ii) the rows of attention matrices are independent and identically distributed (leading to the $n$-th power), and combining (i) and (ii) gives the $n(1-(1-1/n)^n)$ quantity. However, the assumptions (i) and (ii) are too strong, imposing a trivial symmetry of indices. In fact, the considered setting is not trivial to tackle. The non-triviality of the current setting is that here we do *not* assume any independence in the attention matrix. Its columns are *dependent*. Although its rows can be proved to be independent due to Lemma 4 and the arguments in Line 859-866 in the original manuscript under the orthonormal inputs assumption, the orthonormality can be relaxed, as is quantitatively analyzed in the newly updated Section B.1, hence no independence of rows can be guaranteed. This definitely raises more technical difficulties, which significantly distinguishes the proof framework from simple symmetry or combinatorics arguments. \\n - Also note that it is not straightforward to derive $n(1-(1-1/n)^n) \\to n (1-1/e)$ as $n \\to \\infty$, since the multiplier $n$ also goes to positive infinity when applying the limit $(1-1/n)^n \\to 1/e$ (i.e., the limit is indeterminate). 
As a technical solution, Lemma 2 in the original manuscript gives the required quantitative characterization. \\n\\n>**Q4**: It is not clear why line 363 \\\"Recall that the rows of $\\mathbf{X} \\mathbf{W}_q \\mathbf{W}_k^{\\top} \\mathbf{X}^{\\top} = \\mathbf{Q} \\mathbf{K}^{\\top}$ are independently and identically distributed as $\\mathcal{N}(0_n, \\mathbf{K}\\mathbf{K}^{\\top})$\\\" is true. Why is this distribution independent of $\\mathbf{Q}$? Similarly, equation (6) seems odd, highlighting that the rows for each query in the attention matrix are distributed identically. This query-independence is both odd and counter to the main motivation of transformers that usually have different attention patterns for different rows/queries. \\n\\n**A4**: The claim holds due to Lemma 4 and the arguments in Line 859-866 in the original manuscript under the orthonormal inputs assumption. Again, the orthonormality can be relaxed, as is quantitatively analyzed in the newly updated Section B.1, hence no independence of rows in attention matrices can be guaranteed. Since the columns of attention matrices are also not required to be independent, our assumptions do not lose generality even compared with practical applications, where the parameters and data in attention matrices are coupled after training and certainly have dependent entries.\"}", "{\"title\": \"Response to Reviewer VcLz\", \"comment\": \"We are very glad to see that our revisions resolve most of your questions, and many thanks for raising the score.\\n\\nFor your follow-up feedback, we have also adopted all of the suggestions in the new section **Newly updated results (continue)** in the global response (since a pdf revision cannot be uploaded currently, we list the changes there to inform all reviewers). 
Again, we are very grateful for these insightful suggestions, which greatly help us to improve this work.\"}", "{\"title\": \"Newly updated results (as suggested)\", \"comment\": \"We sincerely appreciate all reviewers for their insightful and constructive feedback. Besides answering questions in detail to address all reviewers\u2019 comments, we want to summarize and highlight new key results as follows, and all the results are included in the revised manuscript (revised contents in blue).\\n\\n1. Theoretically, we have successfully extended the main theorem to the almost orthonormality setting via approximation procedures and stability/perturbation analysis. See details in ***Section B.1*** in the revised version.\\n2. Experimentally, we have added NLP experiments (on the IMDB dataset) and discussions in ***Section D.3 (Figures 15 and 16)*** in the revised version. It turns out that the insights and implications obtained under image settings consistently hold under text settings with varied input sizes.\"}", "{\"title\": \"Response to Reviewer 64Hy (continue)\", \"comment\": \">**Q8**: While the assumptions on the input are discussed in Remarks 2 and 3, note that the assumptions on the key/query projection matrices seem more restrictive to me, and require appropriate discussion.\\n\\n**A8**: It is true that the main theorem holds for Gaussian distributions of model weights, which can certainly be viewed as an initialization regime. However, this formulation can also be understood as a \\\"capacity\\\" (expressive ability) study of Transformers, but only analyzed for random parameters. In fact, random neural networks are *commonly* studied in theoretical papers even for architectures that are much simpler than Transformers, such as random feature models, neural tangent kernels and reservoir computing. \\n\\n>**Q9**: Both KDEFormer [A] and Hyperattention [B] seems to also consider the rank of the attention matrix theoretically (among others). 
However, these references are missing. How is the setup of this current paper positioned against these references?\\n[A] Zandieh, Amir, et al. \\\"Kdeformer: Accelerating transformers via kernel density estimation.\\\" International Conference on Machine Learning. PMLR, 2023.\\n[B] Han, Insu, et al. \\\"HyperAttention: Long-context Attention in Near-Linear Time.\\\" The Twelfth International Conference on Learning Representations. 2024. \\n\\n**A9**: It seems that both KDEFormer [A] and Hyperattention [B] aim to study the approximate calculation problem of attention matrices, with the fundamental approach of reducing the full matrix multiplication to sub-matrix multiplications. These works relate to attention ranks through the size of sub-matrices, which is typically lower bounded by measures depending on (stable) ranks of attention matrices. It would be interesting to further develop these works with the inductive biases established in this work, i.e., exploring potentially more efficient algorithms given the low-rank barrier and rank saturation of attention matrices. We have added these references and corresponding discussions in the related work section (***Section A***) in the revised manuscript. \\n\\n>**Q10**: It is not clear how the rank of the attention matrix is tied to the expressivity of the attention mechanism. Are there existing results that make this connection? \\n\\n**A10**: Please refer to **A5** for details. \\n\\n>**Q11**: Given the use of softmax operation in the attention matrix, isn't it expected that the attention matrix will not be full rank? Part of the motivation for schemes like Performers [C], Scatterbrain [2] etc is this low rank structure. In fact, if we remove the softmax, this linear attention can probably have full rank if the head dimension is large enough but is not desired since the softmax operation is what makes attention work.\\n[C] Choromanski, Krzysztof Marcin, et al. 
\\\"Rethinking Attention with Performers.\\\" International Conference on Learning Representations. 2020. \\n\\n**A11**: It is true that the softmax operation makes attention work, but the single softmax does not make everything work. Besides your mentioned references involving low-rank structures, there are other issues including sparsity ([3]) and attention sinks ([4]). More importantly, one of the core contributions of this work is the saturation effect consistently occurring in both attention ranks and model performance. This point has been discussed in the first paragraph of Section 5 (Line 421-426) in the manuscript: \\\"For the multiple heads case, we aim to emphasize the *saturation* effect via numerical simulations. That is, despite that one can increase the overall rank by concatenation, the low-rank saturation of every single head still leads to an *inefficiency* issue: Both the attention rank and model performance *consistently* get *marginal enhancements* when increasing parameters, implying the model redundancy.\\\"\\n\\n>**Q12**: In Section 3.1, whose rank are we computing? The $\\\\mathbf{\\\\text{Attn}}^{(i)}(\\\\mathbf{X})$ matrices? How are the ranks aggregated across the multiple heads? \\n\\n**A12**: Yes, we compute the averaged rank over every attention head. For the discussion about concatenation, please refer to **A11** for details. \\n\\n**References** \\n\\n[1] Srinadh Bhojanapalli, Chulhee Yun, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Low-rank bottleneck in multi-head attention models. In *International Conference on Machine Learning*, pp. 864\\u2013873. PMLR, 2020. \\n\\n[2] Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In *International Conference on Machine Learning*, pp. 2793\\u20132803. PMLR, 2021. \\n\\n[3] Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, and Christopher R\\u00e9. 
Scatterbrain: Unifying sparse and low-rank attention approximation. *Advances in Neural Information Processing Systems 34*, 2021. \\n\\n[4] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. Efficient streaming language models with attention sinks. In *International Conference on Learning Representations*, 2024.\"}
random data and/or weights) this should be clearly stated. If the message of the paper is more general, then further evidence should be provided as I discuss later.\\n\\n2) The theoretical result (Theorem 1) is nice, but limited and isn\\u2019t enough in itself for a theoretical paper. The biggest limitation is the assumption of the data. Although the authors justify assuming that the samples are almost orthogonal (e.g. drawn from a Gaussian or uniform on a sphere), the assumption is that they are exactly orthogonal. This allows only for n samples, instead of O(2^n) samples. It seems possible to prove this result for almost orthogonal data.\\n\\n3) The \\u201creal-world experiments\\u201d in section 5 are done on very small-scale image datasets, CIFAR-10/100 and SVHN. It would be more convincing to do experiments on larger datasets, and specifically text datasets where it is possible to change the embedding dimension, and thus experiment on the effects of changing n.\", \"questions\": [\"What would the effect of the rank in Theorem 1 be when having data that is almost orthogonal?\", \"Does the rank saturation phenomenon also happen in real datasets when the input dimension n varies?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"I thank the authors for their detailed responses and the updated results and analyses. Based on these responses and the updated manuscript, this is my understanding of the main contributions of this submission:\", \"(**Contrib 1**) The first contribution is that the paper empirically demonstrates the existence of a low-rank bottleneck in the attention matrices, and some relationship between the head dimension $d_h$ and the attention rank.\", \"However, it is not clear why this is a novel contribution given the existing literature such as Bhojanapalli et al. (2020), Dong et al. 
(2021) and others.\", \"(**Contrib 2**) To the best of my understanding, the results in Theorem 1 (or extended Theorem 2) establish an upper bound on the rank of a single head attention matrix under assumptions of orthogonality among the token embeddings of the input sequence ($\\mathbf{X} \\mathbf{X}^\\top = \\mathbf{I}_n$) and Gaussianity of the attention parameters $\\mathbf{W}_q, \\mathbf{W}_k$, while Appendix B.1 relaxes the orthogonality condition to almost orthogonality. This result (Theorem 1) is utilized to present some \\\"model-reduction effect\\\".\", \"I understand that assumptions are necessary to study the theoretical properties of models. Furthermore, one can potentially justify the Gaussianity and orthogonality assumptions.\", \"However, the moment we arrive at a situation where the (pre-soft/hard-max) attention score matrix row is identically distributed regardless of the query as in equation (28), the analysis seems overly simplified. We arrive at $\\mathbf{K} \\mathbf{q}_i \\sim \\mathcal{N} ( \\mathbf{0}_n, \\mathbf{K} \\mathbf{K}^\\top )$ -- note that here $\\mathbf{q}_i = \\mathbf{W}_q \\mathbf{x}_i$ and the $\\lbrace \\mathbf{x}\\_i \\rbrace\\_{i=1}^n$ are supposed to be orthogonal to each other. So, for a set of orthogonal token embeddings, we end up in a situation where their attention score rows are identically distributed. To me this analysis does not capture the right notion of rank bottleneck.\", \"The assumptions and the resulting behaviour from (28) are utilized to motivate the \\\"model reduction effect\\\" and the results in (5) and (6). However, note that under the conditions of Theorem 1, the right hand side of (5) is zero (or very close to it under approximate orthogonality) unless $i = j$. 
This extends to the result in (6) implying $\\mathbf{e}_i \\mathbf{Q} \\mathbf{K}^\\top / \\sqrt{d_h} \\sim \\mathcal{N}(\\mathbf{0}_n, \\mathbf{I}_n)$, which means that the subsequent rank in (7), (8) does not even depend on $\\mathbf{X}$. So each row in the attention score matrix is just a multivariate normal distribution. While valid, this seems like an overly simplified analysis of the dependence of the rank on the head dimension $d_h$.\", \"Given that the expressivity of attention (compared to existing sequence modeling schemes) for modeling long-term dependencies comes from the fact that the attention pattern is input dependent, it seems that this analysis makes assumptions that lead to a situation where the attention pattern is input independent, and thus very different from what happens in practice.\", \"There is no question that the attention matrix would be low-rank. This is the premise for various efficient low-rank approximations of the attention matrix. However, I do not think this analysis captures the actual behavior underlying the attention mechanism.\", \"I have also read the reviews posted by other reviewers who have scored this paper highly just to make sure that I have not misunderstood the contributions. However, their reviews, responses and discussions do not clarify my technical concerns.\"]}
It can be made explicit as part of the theorem's assumptions.\\n\\n2) Figures 15 and 16 are very good, I think they should be part of the main paper rather than the last page of the appendix. Also, the rank (if I understand correctly) is calculated only for the first layer. It can be beneficial to have another figure representing the rank saturation phenomenon at different layers.\\n\\n3) The abstract and intro are still a bit misleading since it is not clear from them that the theoretical analysis is about random weights. I understand this is discussed in Section 5, but it would be fair to say this much earlier in the paper.\"}" ] }
0sU4myabw1
RapidDock: Unlocking Proteome-scale Molecular Docking
[ "Rafal Powalski", "Bazyli Klockiewicz", "Pawel Dabrowski-Tumanski", "Maciej Jaśkowski", "Łukasz Kuciński", "Piotr Miłoś", "Bartosz Topolski", "Maciej Wiśniewski", "Dariusz Plewczyński" ]
Accelerating molecular docking -- the process of predicting how molecules bind to protein targets -- could boost small-molecule drug discovery and revolutionize medicine. Unfortunately, current molecular docking tools are too slow to screen potential drugs against all relevant proteins, which often results in missed drug candidates or unexpected side effects occurring in clinical trials. To address this gap, we introduce RapidDock, an efficient transformer-based model for blind molecular docking. RapidDock achieves at least a $100 \times$ speed advantage over existing methods without compromising accuracy. On the Posebusters and DockGen benchmarks, our method achieves $52.1$\% and $44.0$% success rates ($\text{RMSD}<2A$), respectively. The average inference time is $0.04$ seconds on a single GPU, highlighting RapidDock's potential for large-scale docking studies. We examine the key features of RapidDock that enable leveraging the transformer architecture for molecular docking, including the use of relative distance embeddings of $3$D structures in attention matrices, pre-training on protein folding, and a custom loss function invariant to molecular symmetries. We make the model code and weights publicly available.
[ "molecular docking", "protein-ligand binding", "transformer", "equivariance", "high-throughput screening", "drug discovery" ]
Reject
https://openreview.net/pdf?id=0sU4myabw1
https://openreview.net/forum?id=0sU4myabw1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vjzWT4Svbc", "salc2kHrfU", "s3fH7LzXIy", "d4qKMjo3bH", "WGTfX56MxP", "Vrqmd7UZ0j", "RCwbf9BrnC", "Kd2C4korwr", "HDJvsK2z5C", "C5dybyUC9e", "3lU8V0jbCb", "1UVOEyNeD5" ], "note_type": [ "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1734679508205, 1730676990998, 1730209998422, 1732634252479, 1732632756960, 1730567462188, 1732632322278, 1732634099602, 1732634861955, 1737524108152, 1730641660837, 1732875848503 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11173/Area_Chair_QDbC" ], [ "ICLR.cc/2025/Conference/Submission11173/Reviewer_KqLH" ], [ "ICLR.cc/2025/Conference/Submission11173/Reviewer_S2uY" ], [ "ICLR.cc/2025/Conference/Submission11173/Authors" ], [ "ICLR.cc/2025/Conference/Submission11173/Authors" ], [ "ICLR.cc/2025/Conference/Submission11173/Reviewer_fj9R" ], [ "ICLR.cc/2025/Conference/Submission11173/Authors" ], [ "ICLR.cc/2025/Conference/Submission11173/Authors" ], [ "ICLR.cc/2025/Conference/Submission11173/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11173/Reviewer_1PMn" ], [ "ICLR.cc/2025/Conference/Submission11173/Reviewer_KqLH" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a rapid docking method based on a Transformer. The model takes as input a ligand and a protein, both of which are tokenized, and their 3D structures are passed through all self-attention layers. The protein is embedded using ESM-2.\\n\\nThe method achieves strong empirical results, and the reviewers appreciated this aspect. Overall, the work is very well executed. The training choices, such as the use of AlphaFold2-generated structures or biasing self-attention with distance, are generally intuitive. 
(Additional ablations would be very useful.)\\n\\nHowever, three out of four reviewers voted for rejection.\\n\\nA significant drawback of the manuscript is its limited novelty. The idea of using deep learning models to directly predict poses and/or docking scores has been explored before, as noted by one of the reviewers. UniMol2 is one such comparison point.\\n\\nAnother major critique focused on the limited evaluation. More detailed analysis of poses or comparisons to more docking baselines were suggested. One reviewer specifically proposed using a time-based split. It would also be useful to consider other routes to accelerating existing docking methods (e.g., distilling DiffDock).\", \"these_lower_level_comments_on_the_evaluation_scheme_reflect_a_broader_issue\": \"the paper does not clearly demonstrate its intended use. Two reviewers expressed doubts about the possibility of proteome-wide docking. For example, it is not fully clear how broadly the method generalizes to novel ligand structures or different protein families, which might be crucial depending on the use case. The paper would benefit from a clearer explanation of the model\\u2019s practical applications, ideally supported by an actual demonstration.\\n\\nAll in all, the paper fell slightly short of the acceptance threshold. While its strong empirical results were acknowledged, it did not fully clarify how the model would be used in practice. At this stage, I am recommending rejection. I hope these comments will be helpful in improving the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about the limited novelty of the Transformer-based architecture, incomplete benchmarking against classical docking tools like AutoDock Vina, and the scalability claims for proteome-wide docking. The authors addressed these by including additional baselines such as SMINA, and clarifying the choice of ligand embeddings and distance matrices. 
While these changes improved the presentation and partially addressed weaknesses, key issues regarding concerns about generalization to diverse protein families (stemming from limitations of the evaluation), novelty and practical validation of proteome-wide docking remained unresolved.\"}", "{\"summary\": \"The authors tackle the problem of proteome-scale docking, the goal of which is predicting the binding pose of a ligand against many thousands of proteins. To do this, they develop an equivariant Transformer model (RapidDock) for dramatically accelerating docking compared to previous diffusion or GNN-based approaches. The model takes various features from the protein and ligand as input, and outputs a prediction of the binding pose of the ligand. The results show that RapidDock achieves 100x faster runtimes than three competing deep learning methods, while retaining equivalent accuracy (except to AlphaFold 3, which is much slower but has much better accuracy).\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Clear presentation of methods and results.\", \"Novel application of transformer architecture to docking, which results in much faster inference.\", \"Reasonable design choices in model. These include the addition of features for ligand atom charges and the use of a pre-trained protein language model. The scaling of the attention vector based on distance also seems well-motivated.\", \"Strong results on two distinct datasets, achieving a better success rate than two competitive deep learning methods at a fraction of the cost. While RapidDock has significantly worse accuracy on Posebusters compared to AlphaFold 3, I do not think this is a negative, because as the authors note AlphaFold 3\\u2019s speed is not suitable for large-scale docking. 
Additionally, my understanding is that AlphaFold-3 performs energy minimization as a post-processing step, which would give it an unfair advantage compared to RapidDock.\", \"Ablations of various model components help show the benefits of each design choice.\"], \"weaknesses\": [\"Motivation behind the problem setting is unclear. The authors address proteome-scale docking because any protein in the human proteome could be a potential *off-target* (a term of art that should probably be included in the paper) of a drug. Thus, docking against all proteins and then predicting affinity in a downstream task would detect these potential off-targets before they are discovered in later preclinical or clinical testing. However, I am not convinced that docking to each protein in the proteome is necessary to detect off-target effects. Pharmacologists often screen a drug against a limited number of safety-relevant proteins (in the hundreds), such as G-protein coupled receptors or ion channels, that are frequent off-targets [1, 2]. This is usually sufficient to detect many clinical issues beforehand, and it is not obvious that just considering more potential off-targets would further reduce the rate of adverse effects occurring (which for example may be due to more complex issues, such as toxicity of metabolic products or on-target unwanted effects). Could the authors provide a better argument for why it is important to screen a drug against all potential protein targets?\", \"[1] Bendels et al. \\u201cSafety screening in early drug discovery: An optimized assay panel.\\u201c J Pharmacol Toxicol Methods 2019.\", \"[2] Peters et al. \\u201cCan we discover pharmacological promiscuity early in the drug discovery process?\\u201d Drug Discovery Today 2012.\", \"Limited baseline comparisons. I think the most important baseline to include would be AutoDock Vina (or another similar docking program). 
Despite not being deep learning-based, Vina is relatively fast and the current state-of-the-art in applied fields. Practitioners conducting large-scale docking would likely use Vina, so including results on this baseline is important. An additional deep-learning based docking model, such as TANKBind, would also improve the strength of the results, but it is not as critical.\", \"Some of the design choices are not well-explained. For example, it is not clear to me why discretized charge embeddings are used for the ligand atoms instead of simply providing the charge scalar as an input. It is also not clear what role the distance bias matrices play.\"], \"questions\": [\"Is there a better way to determine which atoms are rigid and which are flexible? For example, AutoDock Vina determines if bonds are rotatable using a simple chemical definition, which dictates which atoms are flexible and which are not. Just searching through a lot of generated conformations seems like it might miss bonds that only rotate when exposed to external charges.\", \"Is this model potentially applicable to *target fishing*? Target fishing is the process of taking an existing drug compound and evaluating it against a large number of potential proteins to see if it can target them. This can be applied for drug repurposing, which is the use of currently approved drugs against new indications based on previously unknown binding against a new protein. This is potentially a strong application of the proposed method, but I do not see it explicitly mentioned in the paper anywhere.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces RapidDock, a fast and accurate transformer-based model for the\\nblind-docking task. The model predicts interatomic distances. Afterwards, the docked\\npose is reconstructed with the L-BFGS algorithm. 
The authors report a 100x speed-up while\nsimultaneously improving over commonly used deep learning-based docking models on the\nPoseBusters and DockGen datasets. The authors present initial experiments at human\nproteome-scale docking.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"RapidDock is fast and accurate, allowing it to tackle human proteome-scale docking studies. The reported speed could enable its use as an oracle function for other DD-related tasks in future work. Ablations on PoseBusters and DockGen show promising results. The\npaper is well written and gives many insights into the modelling and training.\", \"weaknesses\": \"While the method shows good results on a task relevant to computational biology, there is not enough novelty on the ML side to justify acceptance as a main-track contribution. The authors use a standard transformer, embedding the ligand and proteins with ESM2, and\nemploy cross-attention to the distance matrix. I encourage submission to a domain-specific journal.\", \"questions\": \"Wording:\nL. 30 \u201c \u2026 will revolutionize medicine\u201d is an unsupported statement, please reformulate. It would be better to focus on the obstacles that need to be overcome and how you tackle those (like in\nyour paragraph 2, L. 34 X.).\nIn l.113f. the authors claim equivariance; however, they work with interatomic distances only, making the model invariant.\nThe permutation loss citation is wrong (e.g. l. 228); cite the original paper (Zhu et al. 2022, Direct molecular conformation generation).\nIn my opinion, l.419f. \u201cthe model demonstrates a strong understanding of the physicochemical principles\u201d is a big stretch - how is this supported in your work? 
Accurate and fast prediction of ligand poses in proteins doesn\\u2019t mean that the model understands the physicochemical principles that lead to those poses, and it is debatable whether this is even needed.\", \"open_questions\": \"Since the prediction time is critical to the paper's claims, I would like to see how much time you spend at inference on average in a) pre-processing, b) prediction, c) reconstruction and d) any form of post-processing.\\n\\nIn l.162f., what is the reasoning for choosing 257 buckets? Are there ablations on higher/lower resolution? How does this affect the performance-inference time tradeoffs? Same questions for the charge embeddings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to reviewer KqLH\", \"comment\": \"We are truly thankful for your insightful comments. We are glad that you appreciated our choice to pre-train the model on protein folding and represent the protein and ligand 3D data as inductive biases in the attention matrices. We also think these are the key features of RapidDock. We are also grateful for mentioning the clarity of the presentation.\\n\\nThank you for noting that in modern drug design molecules are screened against a relatively small number of proteins for assessing toxicity. This is precisely our motivation for developing a model that can quickly dock molecules to all proteins. The number of screened proteins nowadays is limited because of computational constraints. We believe this is the reason most drugs fail even the first phases of clinical trials. To confirm it, one needs a full picture of the given molecule\\u2019s effects. For example, a ligand may indirectly cause side effects by yet unknown protein cascades. Or, as you have pointed out, the side effects may be caused by the metabolic products of the drug, which again calls for a fast docking method in particular. 
Apart from toxicity, a quick binding tool can also allow studying potential beneficial effects, such as decreasing inflammation or speeding up proliferations in the metabolome due to binding to specific receptors, etc.\\n\\nWe appreciate your suggesting a comparison to classical methods. We now include a comparison to SMINA in the paper. We note that there are many papers that include comparisons of deep learning methods to classical tools. The reported speeds of the latter methods are, however, consistently slower than those of most deep learning methods. \\n\\nRegarding the charge embeddings, we believe that including \\u201cper token\\u201d numerical values via embeddings is a standard approach, though we admit there are other possible choices. The distance matrices are needed to encode the 3D structure of the protein and the ligand within the transformer in a way that is invariant to translations, reflections, or rotations. The distances describe a protein-protein or ligand-ligand pairwise all-to-all property, so the attention matrix is an appropriate place to include them. We added a clarification of that point in the paper.\\n\\n---\\n\\n### Questions:\\n\\n*Is there a better way to determine which atoms are rigid and which are flexible? For example, AutoDock Vina determines if bonds are rotatable using a simple chemical definition, which dictates which atoms are flexible and which are not. Just searching through a lot of generated conformations seems like it might miss bonds that only rotate when exposed to external charges.*\\n\\nThank you for this question. For sure, the representation is not perfect and could be improved in the future to better capture such aspects. On the other hand, sampling the conformations is implicitly performed with energy weighting. As a result, the distribution of conformations, and therefore the distances obtained, resembles the one found in reality.\\n\\n---\\n\\n*Is this model potentially applicable to target fishing? 
Target fishing is the process of taking an existing drug compound and evaluating it against a large number of potential proteins to see if it can target them. This can be applied for drug repurposing, which is the use of currently approved drugs against new indications based on previously unknown binding against a new protein. This is potentially a strong application of the proposed method, but I do not see it explicitly mentioned in the paper anywhere.*\\n\\nYes, in fact, that is one of the applications the model is well-suited for. We included a mention of target fishing in the paper.\"}", "{\"title\": \"Reply to Reviewer fj9R\", \"comment\": \"Thank you for carefully reading our manuscript and listing multiple strengths of our method. We also believe that our embeddings of the protein and ligand 3D structures are the key to success.\\nWe appreciate your pointing out the need for improving some of the descriptions of technical details in the paper. We tried to improve them.\\n\\nRegarding the differences in terms of the number of parameters, we do not think it is particularly relevant here because in practice only speed and accuracy matter. Our main claim is that a properly designed transformer can perform competitively while being much faster than models based on generative modeling. \\n\\nYour comment about the RMSD metric being imperfect is of course correct. The actual molecule poses can be obtained via post-processing, which we now describe in Appendix A.6. However, we hope that RapidDock will be fine-tuned on downstream tasks directly, or used for obtaining docking embeddings, without forming any poses explicitly. \\n\\nIn Appendix A.7, we now also include examples where RapidDock performs worse than AlphaFold 3. We think a comparison to the most accurate method is sufficient. \\n\\nThank you for referring to ETDock and FeatureDock. 
While these methods use transformers in their pipeline (as do the models in our baseline comparisons), we do not think they are end-to-end blind docking tools fully based on a single transformer, although we admit that such a distinction is not clear-cut. ETDock is composed of several custom modules (Feature Processing, TAMformer, Attention Layer, Message Layer, etc). FeatureDock is not a blind docking tool because the grid around the pocket is input to the model. We agree that these methods should be cited though, and we added a mention of these methods in the paper. \\n\\nFinally, we are confident that the paper does not contain major grammatical errors. We do acknowledge, though, that there may be occasional typos or misplaced punctuation. We placed the Related Work section after Experiments to better describe other methods as they relate to ours.\\n\\n---\\n\\n### Questions:\\n\\n*When comparing runtime in inference mode, did you preprocess the protein, or was the comparison done solely for molecule conformation?*\\n\\nThe protein requires minimal preprocessing (only computing the distances), though ESM embeddings need to be ready. The number of proteins in the human body is limited to only about 20,000, however, and the preprocessing can be done \\u201conce and for all,\\u201d so it can be neglected.\\n\\n---\\n\\n*Did you use the same time-splitting approach as previous methods, such as DiffDock and NeuralPlexer, for the dataset?*\\n\\nNo, because that approach is known to be very leak-prone. See, for example, [1].\\n\\n---\\n\\n*In the statement, \\n\\\"Only the fixed distances across the molecule\\u2019s possible conformations are recorded, and others are denoted by a special value of \\u22121,\\\" \\ncould you clarify why you selected -1 as the special value?*\\n\\nWe wanted to use a value that does not correspond to any distance. This value indicates that: \\n1. 
The part of the first term inside softmax corresponding to those distances should not be modified (see the definition of $S_m$), and \\n2. The distance is larger than the maximum distance of 16 (see the definition of $b(x)$). \\n\\nThe special value could be any negative number. We added a clarification.\\n\\n---\\n\\n*In the RapidDock attention section, you state, \\n\\\"First, we multiply the attention scores corresponding to input pairs with known distances (i.e., ligand-ligand within a rigid part and protein-protein) by a learnable scalar $s_m$, one for each layer $m$.\\\" \\nDid you also apply this to protein-ligand pairs? If not, could you explain why?*\\n\\nThe scalar $s_m$, together with $z_m$, controls how much the final score is affected by the original score and how much by the distance bias. The protein-ligand distances are unknown, so the model should rely solely on the original attention score. That part could be multiplied by another scalar, but the overall effect would be the same because we already have two scalars, $z_m$ and $s_m$. We added a clarification.\\n\\n---\\n\\n*For inference, do you generate one conformation per runtime for each protein-ligand pair?*\\n\\nAll conformations need to be generated at inference.\\n\\n---\\n\\n### References:\\n\\n[1] Li, Jie, Xingyi Guan, Oufan Zhang, Kunyang Sun, Yingze Wang, Dorian Bagni, and Teresa Head-Gordon. \\\"Leak proof PDBBind: A reorganized dataset of protein-ligand complexes for more generalizable binding affinity prediction.\\\" *arXiv preprint arXiv:2308.09639* (2023).\"}", "{\"summary\": \"In this paper, the authors introduce RapidDock, a new approach to molecular docking that leverages a Transformer model. The method includes both ligand atom embeddings and ligand charge embeddings. Protein representations are generated using embeddings from protein amino acids, the ESM-2 PLM, and calculated distance metrics. 
Instead of deep learning, RDKit-based methods such as MMFF and EDKG, construct the rigid distance matrix for molecules. Trained on the PDBBind and BindingMOAD datasets, RapidDock outperforms DiffDock and other open-source methods on the PoseBuster benchmark in % of ligands in RMSD < 2 \\u00c5 metric, delivering at least a 100x increase in inference speed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors demonstrate the need for a faster model by outlining the scalability limitations of previous deep learning models.\", \"Aligned with their motivation, RapidDock shows the ability to perform conformation sampling for molecular docking in GPU inference runtime in approximately one-hundredth of a second per protein-ligand pair.\", \"In benchmarking with PoseBuster, RapidDock achieves the best performance among open-source codes (noting that AlphaFold 3 is not open source), particularly in the percentage of ligands achieving RMSD < 2 \\u00c5.\", \"The paper provides a clear explanation of how ligand and protein modalities are utilized and fed into the Transformer model.\", \"For constructing the ligand distance matrix, the authors use a hybrid approach, incorporating physics-based methods from RDKit, such as MMFF and EDKG.\", \"Additionally, the inclusion of ligand charge embeddings represents another hybrid approach in the model.\", \"For protein embeddings, the authors showcase the effectiveness of using not only pre-trained models but also their custom-trained ESM-2 models.\", \"The Transformer architecture employs a non-autoregressive approach with a full attention mask, introducing a new method for molecular docking by incorporating an attention scaler within the attention mechanism.\", \"The training hyperparameters are shared in detail through comprehensive tables.\", \"**Originality:** RapidDock introduces a unique approach to molecular docking, particularly in terms of preprocessing compared to other 
DL-based methods. The detailed steps in the ligand and protein embedding process highlight its originality, which the authors further validate through an ablation study.\", \"**Significance:** From a large-scale proteomic perspective, RapidDock is highly scalable and significantly faster in runtime compared to other DL-based and search-based molecular docking methods.\"], \"weaknesses\": \"- The code is not shared.\\n- Although the method section claims equivariance, it lacks sufficient explanation on this aspect.\\n- The rationale for using ligand atom charges is not adequately clarified.\\n- It is unclear why non-fixed distances in the molecule's rigid distance matrix are assigned a value of -1.\\n- The annotations for distance bias matrices are insufficiently explained; the annotations appear to be included simply because they work, without detailing why they are effective.\\n- Similarly, the rationale behind RapidDock\\u2019s use of attention and charge embeddings, along with their annotations, is not fully addressed.\\n- The splitting strategy for the training, validation, and test sets is not sufficiently described.\\n- The parameter comparison in benchmarking does not seem fair; DiffDock results are compared with 30 million parameters, while RapidDock has 60 million parameters.\\n- The RMSD metric, commonly used in molecular docking and Structure-Based Drug Design (SBDD), does not always yield bioactively, physically, or chemically plausible structures, as shown by Posebuster[1], PoseCheck[2], PoseBech[3], and CompassDock[4]. 
Including these metrics in the benchmark would strengthen the study.\\n- Appendix A.6 lacks a comparison between DiffDock and NeuralPLexer examples.\\n- The extent of ligand filtering during ligand preparation is not sufficiently discussed.\\n\\n**Quality:** Although the authors claim this is the first use of a Transformer-based model in blind docking, ETDock[5] and FeatureDock[6] have previously used Transformers for molecular docking; however, these methods are not mentioned in the paper. Additionally, the benchmarking is limited to comparisons with only a few popular methods. In the conclusion, the authors state that: \\n\\n>\\\"... the model demonstrates a strong understanding of the physicochemical principles behind forming biological structures,\\\" \\n\\nyet no bioactivity or physicochemical analyses, as discussed earlier, have been conducted to support this claim.\\n\\n**Clarity:** The paper contains numerous grammatical errors that detract from readability and should be carefully revised. Placing the Related Work section after the Experiments section disrupts the flow, and the method annotations are not clearly explained.\\n\\n**Reproducibility:** As the code is not shared, it is currently impossible to test whether it performs as reported. If the code were provided, I would be able to review, test, and reassess my evaluation (including scoring and comments) accordingly.\\n\\n### **References**\\n[1] Martin Buttenschoen, Garrett M Morris, and Charlotte M Deane. Posebusters: Ai-based docking methods fail to generate physically valid poses or generalise to novel sequences. Chemical Science, 15(9):3130\\u20133139, 2024. \\n\\n[2] Charles Harris, Kieran Didi, Arian R Jamasb, Chaitanya K Joshi, Simon V Mathis, Pietro Lio, and Tom Blundell. Benchmarking generated poses: How rational is structure-based drug design with generative models? arXiv preprint arXiv:2308.07413, 2023.\\n\\n[3] Alex Morehead, Nabin Giri, Jian Liu, Jianlin Cheng. 
Deep Learning for Protein-Ligand Docking: Are We There Yet? arXiv preprint arXiv:2405.14108, 2024.\\n\\n[4] Ahmet Sarigun, Vedran Franke, Bora Uyar, Altuna Akalin. CompassDock: Comprehensive Accurate Assessment Approach for Deep Learning-Based Molecular Docking in Inference and Fine-Tuning. arXiv:2406.06841, 2024.\\n\\n[5] Yiqiang Yi, Xu Wan, Yatao Bian, Le Ou-Yang, Peilin Zhao. ETDock: A Novel Equivariant Transformer for Protein-Ligand Docking. arXiv:2310.08061, 2023.\\n\\n[6] Mingyi Xue, Bojun Liu, Siqin Cao, Xuhui Huang. FeatureDock: Protein-Ligand Docking Guided by Physicochemical Feature-Based Local Environment Learning using Transformer. ChemRxiv\", \"questions\": [\"When comparing runtime in inference mode, did you preprocess the protein, or was the comparison done solely for molecule conformation?\", \"Did you use the same time-splitting approach as previous methods, such as DiffDock and NeuralPlexer, for the dataset?\", \"In the statement,\", \"> \\\"Only the fixed distances across the molecule\\u2019s possible conformations are recorded, and others are denoted by a special value of \\u22121,\\\"\", \"Could you clarify why you selected -1 as the special value?\", \"In the RapidDock attention section, you state,\", \"> \\\"First, we multiply the attention scores corresponding to input pairs with known distances (i.e., ligand-ligand within a rigid part and protein-protein) by a learnable scalar $s_m$, one for each layer $m$.\\\"\", \"Did you also apply this to protein-ligand pairs? If not, could you explain why?\", \"For inference, do you generate one conformation per runtime for each protein-ligand pair?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer S2uY\", \"comment\": \"Thank you for your comments. Indeed, we would like to use RapidDock as a kind of oracle function in our future work (or to produce embeddings for other models). 
We are glad you appreciated our efforts to present the ideas in a well-structured way.\\n\\nTo clarify the role of the distance matrices, they are embedded into the self-attention matrices of the encoder (there is no cross-attention). We respectfully disagree with the statement that the embedding of the protein is standard. In fact, this is the key part that enables using the encoder as an end-to-end docking tool for the first time. ESM is an additional external embedding but the precise geometric embedding into the attention matrices is novel. A similar idea has been recently proposed after our submission in Dockformer [1] which is a model for pocket-based docking.\\n\\n---\\n\\n### Questions:\\n\\n*Wording: L. 30 \\u201c \\u2026 will revolutionize medicine\\u201d is an unsupported statement, please reformulate. It would be better to focus on the obstacles that need to be overcome and how you tackle those (like in your paragraph 2, L. 34 X.).*\\n\\nWe changed \\u201cwill\\u201d to \\u201ccould\\u201d. The next sentence strongly supports the claim.\\n\\n---\\n\\n*In l.113f. the authors claim equivariance, however they work with interatomic distances only, making your model invariant.*\\n\\nIt\\u2019s a fair point that the model is (strictly speaking) invariant (though equivariance is often used interchangeably in the field). We changed the wording.\\n\\n---\\n\\n*Permutation loss citation is wrong (e.g. l. 228), cite original paper (Zhu et al. 2022 Direct molecular conformation generation).*\\n\\nThank you for this remark, we fixed the citation to point to the original paper.\\n\\n---\\n\\n*In my opinion, l.419f. \\u201cthe model demonstrates a strong understanding of the physicochemical principles\\u201d is a big stretch - how is this supported in your work? 
Accurate and fast prediction of ligand poses in proteins doesn\\u2019t mean that the model understands the physicochemical principles that lead to those poses, and it is debatable whether this is even needed.*\\n\\nWe toned down the wording.\\n\\n---\\n\\n*Open questions: Since the prediction time is critical to the paper's claims, I would like to see how much time you spend at inference on average in a) pre-processing, b) prediction, c) reconstruction and d) any form of post-processing.*\\n\\nFollowing the authors of all our comparison benchmark models, we do not include pre-processing in runtimes. It is dominated by generating conformations which is a highly parallelizable process and will be negligible for docking to thousands of proteins. We do not include pre-processing times also because comparisons to other methods would be problematic. For example, the reported runtimes of AlphaFold 3 include only the GPU wallclock time. NeuralPLexer reports GPU runtimes only as well. \\nReconstruction is included in runtimes and is on average about two times longer than the transformer forward pass as we mention in A.3. \\nOptional post-processing is now described in the Appendix A.6. We hope, however, that the model can be directly fine-tuned on downstream tasks or will be used to produce docking embeddings for other models, without forming the poses.\\n\\n---\\n\\n*In l.162f., what is the reasoning for choosing 257 buckets? Are there ablations on higher/lower resolution? How does this affect the performance-inference time tradeoffs? Same questions for the charge embeddings.*\\n\\nThank you for this remark. Indeed, these ablations would be desirable. There is no reason inference times would change but performance might indeed be affected. 
The number 257 was chosen for several reasons:\\n\\n- An uneven number was chosen because we added one special embedding for unknown distances.\\n- The resolution must be sufficiently high to capture nuances in the molecule representation, e.g., to distinguish the length of the double carbon bond from single carbon bond (1.5 vs 1.3).\\n- The maximum distance must be sufficient to capture the far-away dependencies interactions between amino acids.\\n\\nWe added some clarifications of these in the paper.\\n\\n---\\n\\n### References:\\n\\n[1] Yang, Zhangfan, Junkai Ji, Shan He, Jianqiang Li, Ruibin Bai, Zexuan Zhu, and Yew Soon Ong. \\\"Dockformer: A transformer-based molecular docking paradigm for large-scale virtual screening.\\\" *arXiv preprint arXiv:2411.06740* (2024).\"}", "{\"title\": \"Reply to Reviewer 1PMn\", \"comment\": \"Thank you for carefully reading our manuscript and providing constructive feedback. We acknowledge the need for generating conformations. However, this is embarrassingly parallel and will be negligible in docking studies with thousands of proteins. None of the baseline models we compare to report pre-processing times.\\n\\nWe are aware that the experiment in Section 4.2 only confirms the quality of the predicted protein-ligand distances. A better experiment would align the entire predicted structure with the bound complex and compute RMSD or LDDT-PLI metrics only then. This is part of our current work.\\n\\n---\\n\\n### Questions:\\n\\n*DiffDock-L (also AlphaFold3 and NeuralPLexer) is a generative approach, so it gives multiple binding poses according to their probabilities. Therefore, its prediction accuracy can be improved by including multiple different poses (Top-n poses) in the evaluation. However, RapidDock is deterministic, so it gives only a single output. 
(...).*\\n\\nWe do not believe that the non-deterministic nature is a shortcoming of our method, especially for high-throughput studies, though various approaches to generating a collection of poses could be considered. The results for other methods were obtained with the parameters recommended by their authors. In particular, DiffDock generates ten poses and chooses the most likely pose via a ranking module.\\n\\n---\\n\\n*Since RapidDock requires conformation searches for each molecule before docking, the authors need to clarify whether the reported computational times include the time for conformer searches (...).*\\n\\nAs mentioned above, the runtimes do not include preprocessing. For studies with thousands of proteins, they should be negligible. Moreover, the comparisons to other methods would be problematic. We followed the authors of our comparison baselines and do not report preprocessing times. For example, the reported runtimes of AlphaFold 3 include only the GPU wallclock time. NeuralPLexer or DiffDock authors also report GPU inference runtimes only. We added a clarification of what runtimes are reported in Appendix A.3.\\n\\n---\\n\\n*The proposed method has been compared with AlphaFold3 and NeuralPLexer. However, the former performs a rigid docking, whereas the latter predicts binding poses only from protein sequences and molecular graphs. (...)*\\n\\nYou are absolutely right to point out that AlphaFold3 and NeuralPLexer perform docking-while-folding, which is a more general task. We write about it in detail in the Related Work section. However, it is particularly important to compare RapidDock to the tools that achieve state-of-the-art accuracy whatever they may be.\\n\\n---\\n\\n*In the reconstruction of ligand location, predicted distances between ligands and also between ligand-protein may not precisely match with the coordinates of a single pose. (...) Does it require a kind of post-processing?*\\n\\nThank you for pointing this out. 
This is a very good point. The results are obtained without any post-processing. Such post-processing can be performed, though, which we now describe in Appendix A.6. However, we believe that in practice \\u2013 being fully based on a transformer \\u2013 the model is well-suited for fine-tuning without explicitly forming the poses.\\n\\n---\\n\\n*The authors greatly emphasize the importance of proteome-wide docking, but this work does not provide any meaningful analysis except the computational time, (...). The authors may add an additional study verifying that the proposed method can indeed provide meaningful results from the proteome-wide docking (...).*\\n\\nDemonstrating the importance of proteome-wide docking is a topic of our current research. It is a fact, however, that the number of screened proteins for assessing toxicity of new drugs nowadays is limited because of computational constraints and most drugs fail even the first phases of clinical trials. We believe that to answer why this is the case, one needs a full picture of the given molecule\\u2019s effects. We think RapidDock is one important step toward enabling that. This approach is already partially utilized in genomics, where similarity of drug effect is measured by the similarity of their expression profiles on a pre-defined set of genes. Demonstrating actual use cases for proteome-wide docking is part of our current work.\\n\\n---\\n\\n*Appendix A.6 shows the examples of 3D structures predicted by RapidDock and AlphaFold3. The authors deliberately selected specific examples where RapidDock outperformed AlphaFold3, while they admit that the latter is far better than the former on average. AlphaFold3 even predicted those structures from the sequence-level information. (...).*\\n\\nWe include such examples in the new version of the paper.\\n\\n---\\n\\n### References\\n\\n[1] Li, Jie, Xingyi Guan, Oufan Zhang, Kunyang Sun, Yingze Wang, Dorian Bagni, and Teresa Head-Gordon. 
\\\"Leak proof PDBBind: A reorganized dataset of protein-ligand complexes for more generalizable binding affinity prediction.\\\" *arXiv preprint arXiv:2308.09639* (2023).\"}", "{\"comment\": [\"# Response to Reviewers\", \"Dear Reviewers,\", \"Thank you for taking the time to read our manuscript and providing invaluable feedback.\", \"We especially appreciate the warm comments regarding the originality of the protein and ligand representations within the transformer. We are also glad that many of you praised the clarity and structure of our manuscript.\", \"We also accept and deeply value your constructive critical remarks. For sure, there are several aspects of the method that need further effort before the proteome-based docking can be fully unlocked. Several points that you raise are topics of our current work. We tried to address as many of them as possible at this stage. The changes to the manuscript are summarized as follows:\", \"## List of Changes\", \"We changed the wording in the first sentence.\", \"We fixed the results obtained for DiffDock-L on Neuralplexer on the Posebusters sets \\u2013 there was an error due to a bug in our processing of that dataset for those models. DiffDock-L\\u2019s results on Posebusters now align better with those reported in their paper. 
We slightly changed the wording accordingly.\", \"We changed \\u201cequivariant\\u201d to \\u201cinvariant\\u201d and improved the explanation of why the model is invariant.\", \"We described the motivation for embeddings of atom charges.\", \"We added explanations on the choice of the special value in the distance bias matrix.\", \"We elaborated on the choice of the number of buckets and maximum value in the $b$ function.\", \"We expanded on our motivation for including the distance information within the attention matrices.\", \"We added an explanation of the role of the scalers in our attention mechanism.\", \"We fixed the reference to the paper that introduced Permutation Loss.\", \"We added a comparison to a classical docking tool, SMINA.\", \"In *Related Work*, we included references to FeatureDock and ETDock.\", \"We changed wording in *Conclusions*.\", \"We added a mention of target fishing in *Conclusions*.\", \"We provided more details about how runtimes were reported in Appendix A.3.\", \"We added a section in the Appendix on the optional post-processing of the ligand.\", \"We extended the illustration of the docked molecules to include examples where AlphaFold 3 outperforms RapidDock.\", \"Thank you once again for your detailed review and suggestions, which have greatly contributed to the improvement of our manuscript.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work proposed a fast and reliable prediction method, RapidDock, for protein-ligand binding poses. Most previous methods employed in this work as base-lines used diffusion models, leading to large computational costs. However, RapidDock is based on a transformer and so much faster than the base-line models. The benchmark studies on PoseBusters and DockGen show that RapidDock is not only fast but also far more accurate than others except AlphaFold3. 
While the performance of the proposed method seems competitive, there are several issues that need to be addressed.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. This work shows the possibility of transformer-based approaches for binding structure predictions, while most previous works are based on diffusion methods.\\n2. The proposed method outperformed the popular base-line, DiffDock-L, in the two benchmark studies, while its computational time for predictions is much faster than those of all base-line models.\", \"weaknesses\": \"1. The proposed method needs to generate 96 molecular conformations for each molecule and analyze the conformers to obtain its distance matrix.\\n2. The experiment in Section 4.2 seems meaningless because it uses holostructures when it predicts binding poses.\\n3. The title and introduction parts emphasize the importance of proteome-wide docking, but this work does not provide any meaning results regarding that. \\n4. Technical details of the proposed method are insufficient.\", \"questions\": \"1. DiffDock-L (also AlphaFold3 and NeuralPLexer) is a generative approach, so it gives multiple binding poses according to their probabilities. Therefore, its prediction accuracy can be improved by including multiple different poses (Top-n poses) in the evaluation. However, RapidDock is deterministic, so it gives only a single output. It is understood that the authors want to emphasize computational speeds, but it seems they also need to discuss the accuracy aspects for a fair comparison.\\n2. Since RapidDock requires conformation searches for each molecule before docking, the authors need to clarify whether the reported computational times include the time for conformer searches or not. \\n3. The proposed method has been compared with AlphaFold3 and NeuralPLexer. 
However, the former performs a rigid docking, whereas the latter predicts binding poses only from protein sequences and molecular graphs. Therefore, the prediction complexity of RapidDock is much lower than that of the baseline models, so the direct comparison between them is less meaningful. The authors need to clarify this fact in the introduction or result sections. The current form may cause undesirable confusion to potential readers.\\n4. In the reconstruction of ligand location, predicted distances between ligands and also between ligand-protein may not precisely match with the coordinates of a single pose. If this is the case, the authors should elaborate more details about this process. Does it require a kind of post-processing?\\n5. The authors greatly emphasize the importance of proteome-wide docking, but this work does not provide any meaningful analysis except the computational time, which can be readily estimated without performing the actual calculations. The authors may add an additional study verifying that the proposed method can indeed provide meaningful results from the proteome-wide docking. Otherwise, they need to tone down their argument from the title and introduction parts. \\n6. Appendix A.6 shows the examples of 3D structures predicted by RapidDock and AlphaFold3. The authors deliberately selected specific examples where RapidDock outperformed AlphaFold3, while they admit that the latter is far better than the former on average. AlphaFold3 even predicted those structures from the sequence-level information. These examples may lead to misunderstanding that RapidDock works better than AlphaFold3. The authors need to provide examples where RapidDock fails while AlphaFold3 succeeds.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your reply, and for running SMINA as a baseline. 
These new results show that RapidDock outperforms both deep learning and classical docking methods, and therefore I increase my score from 5 to 6. RapidDock has strong performance given its short runtime, so I think it could be practically quite useful.\\n\\n> The number of screened proteins nowadays is limited because of computational constraints. We believe this is the reason most drugs fail even the first phases of clinical trials.\\n\\nI'm still not convinced that the reason most drugs fail early clinical trials is that an off-target protein was missed in an earlier stage of development. It would be good if the authors could provide a citation for this claim, or an argument based on data from clinical trials. Otherwise, the motivation behind the problem setting remains unclear to me.\"}
0sJ8TqOLGS
LLM Spark: Critical Thinking Evaluation of Large Language Models
[ "Runing Yang", "Adam Nguyen", "Hoang Anh Just", "Ruoxi Jia", "Ming Jin" ]
Large language models (LLMs) excel in complex tasks but often struggle with inconsistencies in problem framing, a critical skill for real-world scenarios. This paper introduces SPARK, a novel evaluation framework grounded in the Hierarchical Three-Space Theory, to assess LLMs’ ability to identify missing information and challenge flawed problem setups. We propose a general framework to create benchmarks by introducing inconsistencies and misleading cues in diverse question-answering datasets, covering mathematics, science, and reading comprehension. To support robust measurement of critical thinking, we employ two key metrics: problem-solving capability rate and challenge rate. Our experiments with state-of-the-art LLMs reveal their limitations in critical thinking, particularly in recognizing inconsistencies. We also explore mitigation strategies, such as modified prompting and targeted fine-tuning. Furthermore, we conduct comprehensive experiments to investigate how model and problem properties influence critical thinking capabilities in LLMs.
[ "critical thinking", "llm", "problem-solving", "benchmarks" ]
Reject
https://openreview.net/pdf?id=0sJ8TqOLGS
https://openreview.net/forum?id=0sJ8TqOLGS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzB49VN39N", "wSbwKQmD6g", "vXVm87GKPi", "t4bYUZj05H", "sOkPhW7RUb", "oPsdlvX3YB", "kWRAguvllk", "kC8brvbFM8", "h6whBKHMCo", "g3WmgYtuHJ", "fTju6wsw7e", "dwAwAMI5IM", "VUKEcE1cog", "K5UdL1xV6M", "BmLgeLHGAV", "AUSWgOCOhQ", "8zr3nlbEpI", "8g72eIDX0k", "8CvIoNu6GJ", "6sL20G4ycB", "6Haxe4mqrD", "3HP5gm4voQ" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732905912950, 1732905645040, 1730555589501, 1732906475282, 1732514389728, 1732514126758, 1732515036014, 1730201271930, 1737524221186, 1735064806431, 1733161840513, 1732634173720, 1732515185062, 1733161976022, 1730574266395, 1732691927594, 1730598141451, 1732514274552, 1732514540558, 1732515365539, 1732514058113, 1732584357188 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Reviewer_pRhJ" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Reviewer_JpKC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Reviewer_pRhJ" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Reviewer_xYtG" ], [ 
"ICLR.cc/2025/Conference/Submission12877/Reviewer_xYtG" ], [ "ICLR.cc/2025/Conference/Submission12877/Reviewer_Dx3U" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ], [ "ICLR.cc/2025/Conference/Submission12877/Authors" ] ], "structured_content_str": [ "{\"title\": \"Seeking Your Additional Feedback on Paper12877 Revisions\", \"comment\": \"Dear Reviewer JpKC,\\nFollowing our responses to your thoughtful review of Paper12877, we would welcome your perspectives on whether our clarifications have sufficiently addressed your concerns, particularly regarding:\\n- The distinction between hallucination and critical thinking in our framework\\n- Our approach to evaluating real-world inconsistencies through controlled benchmarks\\n- The enhanced evaluation methodology for challenge rates and correctness\\n\\nWith the rebuttal period closing on December 2nd, any additional insights would be valuable in strengthening our manuscript.\\n\\nBest regards, Authors of Paper12877\"}", "{\"title\": \"Looking Forward to Your Feedback on Paper12877 Revisions\", \"comment\": [\"Dear Reviewer Dx3U,\", \"We would appreciate your feedback on our thorough responses to your concerns about Paper12877. 
We have specifically addressed:\", \"Enhanced figure clarity with improved notations and captions\", \"Added substantial details to improve writing and readability\", \"Expanded related work to include Tree-of-Thoughts\", \"Detailed explanation of data contamination mitigation and Game24 testing\", \"Added new evaluation metrics including challenge rates on well-defined questions\", \"Clarified our framework's benchmarking capabilities\", \"Improved reproducibility with detailed implementation documentation\", \"As the December 2nd deadline approaches, your input on whether these revisions adequately address your concerns would be valuable.\", \"Best regards, Authors of Paper12877\"]}", "{\"summary\": \"Based on the three-space theory, this paper presents the SPARK hypothesis and assessment framework on LLM critical thinking. Through the benchmark constructed in the paper, the authors explored the current critical thinking ability of LLM, with the influence of various factors on it, through a large number of experiments, contributing to the assessment and enhancement of LLM's critical thinking.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Focusing on LLM's critical thinking skills, this paper frames the Benchmark by designing inconsistencies in the problem and designing a large number of experiments to explore it.\\n\\nI personally like the idea of this work. 
In my opinion, the main strengths of this work include: \\n1) This paper uses the three-space theory to model LLM's critical thinking ability and explores the reverse proof of the framework, which provides a theoretical basis for critical thinking related research.\\n2) This paper conducts a large number of experiments to explore LLM's critical thinking and its influencing factors from multiple perspectives, which provides a feasible direction for subsequent research.\", \"weaknesses\": \"I do not find significant shortcomings of this work, but only a few minor points to be clarified:\\n1. The critical thinking assessment was designed without taking into account the impact of the model's capabilities. For example, if the model itself cannot understand or does not have knowledge of the question, it is difficult to \\\"criticize\\\" it. This is especially true for multiple-choice questions and smaller models, as shown in Figure 2, where multiple-choice questions have a low percentage of correct answers and most models have a low challenge rate. This may result in an underestimation of their critical thinking, i.e., it is not that they do not have the ability to think this way, but that the questions are beyond their knowledge and ability.\\n2. In terms of assessment metrics, it is best to minimize the use of LLM assessments, which can be costly. For example, for multiple-choice questions, can there be a simpler way of assessing the correctness rate, making the benchmark easier to use?\\n3. The correctness rate is sometimes used for complete problems [line 278] and sometimes for incomplete problems, and is also expressed as \\\"none of the options\\\" in its definition (incomplete problem), which can be confusing when reading the experiments and results.\\n4. Why does gaslighting increase the challenge rate while decreasing the correctness rate? 
If it affects the correctness rate, i.e., the LLM is misled by the gaslighting, shouldn't the model follow the misguidance rather than challenge the reasoning?\\n5. Some of the dots and text in Figures 2, 11, and 12 overlap, which makes them hard to read.\", \"questions\": \"See weaknesses.\\n\\nP.S., I'm really curious how o1 would respond to such problems.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"> **Why choose this set of questions? Why focus on reasoning-focused datasets?**\\n\\nThese datasets span diverse problem types\\u2014including mathematics, reading comprehension, domain-specific science, and story completion\\u2014each designed to evaluate specific problem-solving skills (detailed in Appendix B.1). While these datasets assess different aspects of LLM problem-solving, they share some common elements, enabling us to evaluate our Across-Domain Abstraction Hypothesis about the transferability of critical thinking between similar tasks (Sec 4.6). Furthermore, these datasets provide unique ground truth answers, which makes it convenient to evaluate whether the LLMs incorporate the required knowledge.\", \"we_specifically_focus_on_reasoning_tasks_as_they_align_with_our_definition_of_critical_thinking\": \"the ability to analyze inference steps and update assumptions about problem completeness. 
This focus is crucial because reasoning paths and intermediate steps provide necessary feedback points, allowing us to evaluate whether LLMs exhibit inconsistencies during their inference process.\\n\\n\\n> **What are decoding parameters?**\\n\\nWe provide the decoding parameters used in vLLM in the table in Appendix B.2, along with a link to the documentation.\\n\\n> **Explain checkpoints in Section 4.7\\u2026**\\n\\nWe apologize for the confusing write-up of this paragraph. In Section 4.7, we evaluate the performance of Llama-3.1-8B-Instruct on the challenging mathematical dataset, TAL, under the gaslighting setting. Observing the low correctness rate of the original model on the test TAL dataset, we study how fine-tuning affects the ability of the model. We evaluate fine-tuned models on four different datasets:\\n\\n+ TAL Test dataset with 2000 samples (denoted as llama31_8bin_sft_talen2ktest).\\n+ GSM8K, a mathematical dataset with 8790 samples with step-by-step reasoning (llama31_8bin_sft_gsm8k_ep3).\\n+ Polytope, a mathematical dataset with 42300 samples with more detailed step-by-step reasoning than GSM8K (Llama3.1-8B-Cobalt).\\n+ Helpfulness and Harmlessness (HH) with 150000 samples for human preference learning (llama31_8bin_dpo_hh_150000).\\n\\nWith the first model, we study whether memorizing the test data can help the model be robust to gaslighting. GSM8K and Polytope are general math datasets with solution steps, where the latter is larger and has more in-depth solutions, and we want to evaluate how tuning on general math datasets can make the model less prone to misleading hints. Lastly, we study how fine-tuning with instruction-following preference datasets affects the model\\u2019s critical thinking ability. 
\\n\\n> **Fonts and colors in figures are hard to see, some have confusing legends and annotations (Figure 10)**\\n\\nWe are grateful for your review and have improved the figures to be clear and readable, and moved some information into tables for clarity.\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"> **More evaluation metrics \\u2026**\\n\\nOur previous correctness evaluation focuses exclusively on modified questions, where LLM performance may be constrained by the incompleteness of generative tasks and incorrect options in multiple-choice problems. To enhance the robustness of our evaluation, we have introduced additional correctness criteria focused on assessing whether LLMs possess and utilize the necessary knowledge for clear problems. We implement this through clear, free-form tasks. Specifically, we remove predefined options for multiple-choice problems to convert them to free-form tasks and use clear problem descriptions for existing generative tasks. We evaluate the problem-solving capability using the correctness of both the clear problems and modified questions. If an LLM demonstrates correct knowledge in either scenario, we consider this evidence of proper knowledge acquisition and understanding. The new metric for problem-solving capability offers a more reliable assessment of LLMs' knowledge acquisition. \\n\\nAdditionally, we have improved our critical thinking evaluation metric by incorporating challenge rates on well-defined questions. Consider that for each dataset we have N pairs of well-defined questions and modified questions. Our experimental analysis first examines LLMs' challenge behavior on well-defined questions. Since these questions contain no inconsistencies, any challenges must stem from the model's inherent tendency. We assume this inherent tendency is independent of data inconsistency. To isolate the effect of actual inconsistency detection, we first identify well-defined questions that the LLM does not challenge. 
Let N1 denote the number of unchallenged clear questions, and N2 denote the number of their corresponding modified versions that are challenged. Assume the model's inherent challenge tendency remains absent for the corresponding modified versions. Therefore, when the LLM challenges a modified question in these pairs, we can attribute it solely to successful inconsistency detection. The ratio N2/N1 measures the LLM's true capability to identify problem inconsistencies, controlled for inherent challenge tendency. The detailed explanations are provided in the Appendix of the revised paper.\\n\\n> **More details on figures \\u2026**\\n\\nThank you for your advice. We have made revisions to the visualization of our figures. We have enhanced the clarity of figure captions.\"}", "{\"title\": \"Thank you for your review (2/2)\", \"comment\": \"> **Can we assess o1 performance?...**\\n\\nWe appreciate the reviewer's interest in o1. Our evaluation revealed several practical limitations with this model:\\n\\nThe table shows that o1-preview has poor performance and rarely challenges the prompt, with fewer than 15 challenges per QA format. We suspect this is due to the hidden reasoning tokens. Additionally, the model has incredibly poor correctness rates on our QA_gaslight_both and QA_hidden_correct formats, showing an ~85% drop in correctness.\\n\\n\\nHowever, the model required a high max_completion_tokens setting (10K) to generate consistent responses; otherwise it would return no response. The responses were also fixed at temperature 1, limiting control over generation parameters. 
Additionally, the testing costs were substantial ($100 for 810 prompts).\\n\\n\\nEven though we used only a small number of datapoints, we have already observed that o1-preview can be easily fooled by a misleading hint, following it, as we observe below:\\n\\n| | model | QA_Format | #Errors | #Challenges | #No Challenges | #Correct | #Incorrect |\\n|---:|:---------------|:------------------|---------------:|----------------:|--------------------:|----------------:|------------------:|\\n| 0 | o1-preview.pkl | QA_gaslight_both | 0 | 13 | 257 | 34 | 236 |\\n| 1 | o1-preview.pkl | QA_hidden_correct | 0 | 14 | 256 | 43 | 227 |\\n| 2 | o1-preview.pkl | QA_original | 0 | 1 | 269 | 230 | 40 |\"}
Multiple LLMs are tested, and the experimental results show their limitations in critical thinking abilities.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The framework is grounded in a cognitive theory (the Hierarchical Three-Space Theory)\", \"Extensive experimental results covering multiple domains and tasks demonstrate the limitations of current LLMs in identifying inherent inconsistencies in provided problems.\"], \"weaknesses\": [\"The findings that LLMs lack the ability to identify flaws and often agree with the hallucinations in the given queries are not surprising.\", \"It is unclear whether the modified question can truly capture the problem inconsistencies in the real world. It would be helpful to add a human baseline to see if this task is solvable and aligned.\"], \"questions\": [\"How are the correctness rate and challenge rate calculated? Can a model that always refuses to answer questions obtain the highest challenge rate?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
I will keep my original scores as they are already good enough.\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"> **Findings that LLMs lack the ability to identify flaws are not surprising; agree with hallucination results\\u2026**\\n\\nWe agree that the performance of critical thinking and hallucination have similarities. Here we want to clarify the distinction. Hallucination in LLMs usually refers to the cases when LLMs generate a response that is false, fabricated, or unsupported by the context. Within the 3-space solving theory framework, hallucination manifests in the implementation space. Additionally, one of the causes of hallucination is the lack of knowledge or problem-solving capability. Our work evaluates model correctness to assess problem-solving capability and to compare performance on the modified questions. In this way, we have a clear observation about how problem setup influences LLM behavior when relevant knowledge is available. According to our definition of critical thinking in the introduction section, we consider undesired behaviors, such as selecting the wrong option or hallucinating the final result, to indicate a lack of critical thinking. LLMs may produce inconsistent responses despite having relevant knowledge, as they rigidly adhere to question-answering structures in the model space. A critical agent should be able to recognize and declare when a problem is unsolvable through reasoning. In conclusion, our experimental design deliberately controls for knowledge insufficiency, demonstrating that critical thinking capability reflects a higher-order aspect of LLM behavior, namely how they approach problem-solving fundamentally. This distinguishes it from hallucination. \\n\\n> **Unclear whether the modified question can truly capture the problem inconsistency in the real world\\u2026**\\n\\nWe agree that insufficient information or inconsistency from real-world users is usually more complex than our setting. 
However, our approach using established benchmarks provides a controlled environment to introduce ambiguity and develop reliable automatic evaluation templates for correctness and challenge rate measurement. These benchmarks, each targeting specific problem-solving skills, allow us to systematically investigate which types of tasks elicit critical thinking in LLMs and how task similarity influences performance (Section 4.6). The complexity varies across our datasets: modified GSM8k problems may appear obvious to humans, HotpotQA's implicit condition removal presents more subtle detection challenges, and multiple-choice questions become trivial when ground-truth answers are known. Despite some inconsistencies being relatively straightforward in our problem setup, LLM performance is not satisfactory. This suggests that their performance would likely deteriorate further when faced with more complex or realistic queries where inconsistencies are more implicit. Importantly, our selected datasets span multiple domains and complexity levels, incorporating both common sense reasoning and domain-specific scientific knowledge. \\n\\n> **How are the correctness and challenge rates calculated?...**\\n\\nCorrectness and challenge rate are calculated over the dataset. The correctness rate measures the proportion of responses demonstrating accurate knowledge. The challenge rate measures how often the model questions problem solvability. The automatic evaluation templates are displayed in Appendix D.\\n\\nFor the second question, we have improved our critical thinking evaluation metric by incorporating challenge rates on well-defined questions. Consider that for each dataset we have N pairs of well-defined questions and modified questions. Our experimental analysis first examines LLMs' challenge behavior on well-defined questions. Since these questions contain no inconsistencies, any challenges must stem from the model's inherent tendency. 
We assume this inherent tendency is independent of data inconsistency. To isolate the effect of actual inconsistency detection, we first identify well-defined questions that the LLM does not challenge. Let N1 denote the number of unchallenged clear questions, and N2 denote the number of their corresponding modified versions that are challenged. Assume the model's inherent challenge tendency remains absent for the corresponding modified versions. Therefore, when the LLM challenges a modified question in these pairs, we can attribute it solely to successful inconsistency detection. The ratio N2/N1 measures the LLM's true capability to identify problem inconsistencies, controlled for inherent challenge tendency. The detailed explanations are provided in the Appendix of the revised paper.\"}", "{\"title\": \"Final Day - Paper12877 Feedback\", \"comment\": \"Dear Reviewer Dx3U,\\n\\nAs the rebuttal deadline is today (Dec 2nd), we'd appreciate your feedback on our responses addressing figure clarity, writing improvements, expanded related work, and enhanced evaluation metrics.\\n\\nBest regards,\\nAuthors\"}", "{\"summary\": \"This paper introduces a novel approach to evaluating LLMs' critical thinking in identifying flaws in problem formulation. Grounded in Three-Space Theory, the authors reformulate existing datasets as critical thinking evaluation ones by removing correct answer choices (for multiple-choice QA datasets) or removing necessary conditions (for free-form generation datasets). They assess the \\\"challenge rate\\\"\\u2014the frequency with which LLMs, prompted to detect flaws, correctly identify issues, using GPT-4 for automatic YES/NO judgments. 
To further evaluate the model's robustness to misleading information in the problem formulation, the authors also augment QA datasets with hints (\\\"gaslighting\\\") on correct/incorrect answers or both.\\n\\nExperiment results demonstrate that while some larger LLMs can achieve non-trivial challenge rates ($>50\\\\\\\\%$) on free-form generation tasks only, there remains substantial room for improvement. Notably, the challenge rate does not correlate with model accuracy, and chain-of-thought prompting yields inconsistent effects on both metrics across models and datasets. Although gaslighting increases challenge rates across models, it also reduces accuracy, highlighting LLMs' susceptibility to manipulation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper investigates an interesting evaluation dimension on whether LLM can critique flaws in the prompted problem formulation, complementary to widely-used instruction-following and LLM reasoning benchmarks.\\n\\n2. The experiment results can support the major claims of the paper.\", \"weaknesses\": \"1. **Limited Insight into Findings.** As this is an evaluation-focused paper, deeper analysis and implications of the results are the most important contributions. Many findings are presented as direct observations, often summarized by broad statements like \\\"experiment results are influenced by dataset properties, models, training ...\\\", \\\"[prompting methods] achieves mixed results\\\", or \\\"[model names] are vulnerable to manipulation in prompts\\\". While these findings may hold, they resemble insights from prior work (Section 2) and align with expectations under a well-constructed evaluation framework. Although the paper emphasizes a unique \\\"critical thinking\\\" evaluation, it\\u2019s unclear what additional insights this approach offers beyond previous evaluation works.\\n\\n2. 
**Clarity and Rigor in Experiment Design.** This paper reformulates existing datasets to build a new benchmark that focuses on evaluating LLM critical thinking, but several important experiment details are missing, or not rigorous enough. For example, the implementation of \\\"Missing Information\\\" for non-Quail datasets is not clearly defined, and criteria for identifying and removing \\\"necessary conditions\\\" (to make questions unanswerable) are unspecified. \\nThe validity of the \\\"LLM-as-judge\\\" approach in this new critical thinking evaluation benchmark is not clearly explained, nor are the \\\"held-out datasets\\\" used in evaluations. \\n\\n Additionally, the exclusive use of instruct-tuned models raises questions about claims regarding instruction training effects (e.g., Line 272), as non-instruct-tuned models are not assessed. Including control prompts where no flaws should be detected is also important to investigate potential false positive problems and prevent simple flaw-reporting models from skewing the results. Also, it seems a random baseline (possibly achieving 50% challenge rates) can beat most models in identifying problem formulation flaws, but there is no related explanation and analysis. Further problems are listed in the \\\"Questions\\\" section. \\n\\n3. **Readability and Conciseness of Main Text.** Introducing the new \\\"SPARK\\\" framework and articulating hypotheses is understandably challenging. However, the overall paper organization, especially in Sections 3 and 4, could be improved for readability. The flow from Section 3.1 to Section 3.2 feels disjointed, and hypotheses are discussed in fragments across sections, which complicates their verification for readers. While the reviewer appreciates the efforts of putting an experiment summary in Section 3.4, it lacks grounding in detailed results, making the introduction feel verbose. 
Tightening the organization and streamlining explanations would improve the paper's clarity and coherence.\", \"questions\": \"Major problems are written in Weakness #2, and here are some less severe questions that need clarification or presentation advice:\\n\\n1. Why choose this particular set of questions? Why is there a focus on reasoning-focused datasets? \\n\\n2. What are the decoding parameters for most models used in the experiments? Some datasets and models are sensitive to these hyperparameter decisions, so it should be clarified in the paper. There is no need to seek the best hyperparameter combinations, but for reproduction purposes, it is necessary to know these experiment details. \\n\\n3. How are the checkpoints compared in Section 4.7 different? More importantly, what are they, and how are they related to the analysis in the main text?\\n\\n4. Presentation Advice: The fonts and colors in many figures are hard to read or interpret, and some figures contain confusing legends and annotations (e.g., Figure 10).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the detailed responses\", \"comment\": \"Thanks to the authors for the detailed responses, which clarify some of my confusion (terminology, paper organization, and metrics). I also see the revision brings significant improvement to the paper. However, after reading the responses, I am concerned about the necessity of introducing SPARK as a framework. The authors highlight two main contributions of the framework: (1) in-context learning experiments (Section 4.8) and (2) new experimental controls and analytical angles. While these contributions are valuable, they do not appear to justify the necessity of introducing SPARK. 
Even without the SPARK framework, the paper\u2019s experimental analyses could stand independently, raising questions about the central contribution and framing of the paper. I encourage the authors to reassess the positioning and main contribution of the work. This is not to suggest that SPARK lacks value, but its necessity within the context of this evaluation-focused paper remains unclear.\\n\\nAdditionally, I recommend double-checking the newly introduced content for compatibility with existing sections, as some parts seem hastily added and there are some minor formatting issues. My concerns about the scattered verification of hypotheses are only partially addressed. Finally, the authors should avoid making definitive claims such as, \\\"We can definitively attribute performance degradation to limitations in critical thinking capability rather than knowledge insufficiency.\\\" It is still an open question how models process and utilize injected knowledge, and these conclusions may only hold under the current experiment setup of fine-tuning and out-of-domain evaluation.\\n\\nBased on these concerns, I am inclined to slightly increase my score to acknowledge the authors\u2019 detailed revisions and significant improvements. However, I maintain a negative assessment of the current version. I encourage the authors to revisit the framing and contributions to maximize the paper's impact in future submissions.\"}
They also introduce perturbations to the data to see the changes in the output given by the LLM, evaluate those responses, and try to analyze the behavior of LLMs. While the work is interesting, there are many issues with it, from the writing to the selection of data and evaluation metrics.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The work draws inspiration from cognitive science to come up with the framework\\n\\nThey address an important aspect of LLMs, which is critical thinking\\n\\nMultiple models are considered for the work and comparison \\n\\nMultiple hypotheses are tested in this work.\", \"weaknesses\": \"-> First, most of the figures are poorly inserted; other types of figures could have been chosen, as most of the figures have overlapping data and are hard to interpret.\\n\\n-> The writing is poor; there are too many things and too few details\\n\\n-> While the related work is good, many more works are missing; one of them is \\\"Tree of Thoughts\\\" \\n\\n-> The datasets chosen for this work are diverse and contain many existing datasets, but there is no mention of testing for data contamination, given that the models considered for this work have this data in their training data. Also, the dataset could have been better; I feel there are better datasets like Game24, or the ones mentioned in the related work are more relevant to this work.\\n\\n-> It is mentioned in the abstract that you create benchmarks; I didn't clearly understand exactly what that meant.\\n\\n-> A framework paper should be more detailed such that others can reproduce and compare their work to it, and needs more quantitative results.\\n\\n-> Also, there could have been more evaluation metrics rather than having just two of them and using them to test a variety of hypotheses; this decreases the robustness of the results.\\n\\n-> Most of the figures in the appendix were hard to interpret; more details on them would be
appreciated.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review (1/2)\", \"comment\": \"> **Limited Insight into Findings\\u2026**\\n\\nWhile we acknowledge that some of our analyses and the hypotheses presented in Section 3.2 share similarities with existing work, we argue that our Knowledge and Behavior Conditioning hypothesis offers unique insights into LLM problem-solving processes within the 3-space framework. Our in-context learning experiments (Section 4.8) reveal a unique tension: while in-context examples improve model accuracy across various datasets, they simultaneously decrease the challenge rate. This finding is particularly noteworthy because in-context examples help LLMs better understand questions and more frequently reference correct knowledge\\u2014factors that should make inconsistencies more detectable. Instead, we observe that these examples appear to reinforce the model's belief in the Problem Framing Space, making LLMs more confident in applying familiar solution structures while becoming less likely to question the completeness of problem setups. This observation has significant implications for future prompt design, highlighting the need to balance accuracy with critical thinking capability. \\n\\nFurthermore, we also want to emphasize that our problem definition differs from prior work. By deliberately modifying well-defined problems to create ambiguous or inconsistent questions, we achieve two significant advantages. First, we maintain clear and controllable conditions for understanding the source of inconsistencies. Second, we can definitively attribute performance degradation to limitations in critical thinking capability rather than knowledge insufficiency. 
This is because we specifically assess problem-solving capability against questions where the model demonstrates knowledge competence but is constrained by the problem formulation.\\n\\n\\n> **Clarity and Rigor in Experiment Design\u2026**\\n\\nWe apologize for any confusion regarding the creation of incomplete generative tasks. Let us clarify our methodology for each dataset (this information will be added to Appendix A.2): \\n\\n+ Quail is a reading comprehension dataset and includes questions whose correct answer is \u201cnot enough information\u201d. We directly sample some questions and corresponding paragraphs as incomplete reading comprehension tasks\\n+ GSM8k contains arithmetic problems, where the final answer is calculated from all the numerical conditions provided in the context. We design a reliable template to leverage GPT-4o to rephrase the problem context and remove one provided numerical condition. The detailed template is provided in Appendix A.1\\n+ HotpotQA is a multi-hop reasoning task, requiring information extraction from multiple documents. The dataset provides the indices of related documents and sentences. We create incomplete tasks by removing one relevant document from the required set \\n\\n\\nWe have refined our critical thinking evaluation metric by incorporating challenge rates on well-defined questions. Consider that for each dataset we have N pairs of well-defined and modified questions. Our experimental analysis first examines LLMs' challenge behavior on well-defined questions. Since these questions contain no inconsistencies, any challenges must stem from the model's inherent tendency. We assume this inherent tendency is independent of data inconsistency. To isolate the effect of actual inconsistency detection, we first identify well-defined questions that the LLM does not challenge. Let N1 denote the number of unchallenged clear questions, and N2 denote the number of their corresponding modified versions that are challenged.
Assume the model's inherent challenge tendency remains absent for the corresponding modified versions. Therefore, when the LLM challenges a modified question in these pairs, we can attribute it solely to successful inconsistency detection. The ratio N2/N1 measures the LLM's true capability to identify problem inconsistencies, controlled for inherent challenge tendency. The detailed explanations are provided in the Appendix of the revised paper.\\n\\n> **Readability and Conciseness of Main Text\u2026**\\n\\nWe sincerely appreciate the reviewer\u2019s suggestion to enhance the readability and conciseness of our main text. We have made significant efforts to improve the structure of our work, including removing the unnecessary transition between Sections 3.1 and 3.2 and integrating the experimental proposals with the hypothesis to create a more cohesive narrative. Additionally, we refined the experimental details to support reproducibility and streamlined explanations to enhance the clarity and coherence of our findings.
Our evaluation process involves two key challenges: first, extracting answer-relevant sentences from responses, and second, handling diverse response patterns across different benchmarks. While designing dataset-specific rule-based algorithms is possible, leveraging GPT-4's natural language processing capabilities offers a more efficient and flexible solution. Although this approach incurs higher computational costs, our evaluation templates demonstrate strong alignment with human judgment on our manually validated subset of data.\\n\\n> **Correctness rate is sometimes used in complete problems\\u2026**\\n\\nThank you for highlighting this limitation. We use correctness to assess whether LLMs incorporate the required knowledge for specific tasks. We acknowledge that our initial correctness evaluation template had limitations in assessing problem-solving capabilities for generative tasks, particularly when dealing with incomplete questions that lack ground-truth comparisons. To address this, we've introduced a refined correctness evaluation metric that considers LLM responses to both clear and modified questions. We now consider an LLM capable of solving the problem if it demonstrates correct knowledge or provides correct answers in either the clear or modified question scenarios. We will display the new results in the revised paper.\\n\\n> **Why does gaslighting increase the challenge rate?...**\\n\\nIn our gaslighting experiment, we demonstrate that models can be easily misled to answer questions incorrectly when presented with a misleading hint, even if they initially solved the problem correctly without gaslighting, thus resulting in a significant drop in the correctness rate, as illustrated in Figure 5. Simultaneously, we observe an increase in the challenge rates when gaslighting is introduced. Misleading hints can influence LLMs to select incorrect options. 
When generating inference steps to support their wrong choices, the LLMs produce reasoning paths that contain counterfactual or flawed statements. The increased challenge rate in these cases suggests that when reasoning paths contain obvious errors or contradict common sense, LLMs are more likely to identify inconsistencies and challenge the problem setup or the provided hints. This demonstrates that LLMs exhibit critical thinking capabilities when the implausibility of their inference steps is obvious.\\n\\n> **Text and dots in Figure 2, 11, 12 overlap, which is hard to read ..**\\n\\nFor these figures, we reduced the messy annotations and decided to emphasize more on the information by simplifying them into tables and separating them into different graphs.\"}", "{\"title\": \"Dear Reviewers\", \"comment\": \"Dear Reviewers,\\n\\n\\nWe would like to thank you for your valuable reviews, which have greatly contributed to improving our work. We are delighted that our research has been recognized as addressing a crucial aspect of large language models (Dx3U, JpKC)\\u2014critical thinking supported by the theoretical foundation of the Hierarchical Three-Space Theory (pRhJm, JpKC). Our claims are reinforced by experimental results (xYtG, pRhJ) and highlight promising directions for future research (pRhJ).\\n\\nBased on your suggestions, we have made the following significant improvements to our work:\\n\\n1. We **updated our figures** to **improve clarity and readability**. While our initial focus was on showcasing patterns, we overlooked their accessibility to readers. To address this, we refined plots and converted some figures into tables for better comprehension.\\n2. We **removed redundant information** and reorganized sections by combining related hypotheses. This restructuring eliminates disjointed parts and makes the narrative more cohesive and **easier to follow.**\\n3. 
We **expanded the reproducibility subsection**, ensuring that any reader can replicate our results. Additionally, we **released full code and datasets** to promote transparency and open research.\\n4. We **emphasized the key insights of our findings** and **refined our metrics** to highlight the primary message of our work. These changes also enhance the robustness of our results.\\n5. We **extended the related work section** to include advanced prompting techniques, providing a more comprehensive context for our contributions.\\n\\nFinally, we want to underscore the broader significance of our work. Large language models are vulnerable to subtle changes, making it crucial to assess their critical thinking abilities to improve robustness. By introducing a straightforward framework for evaluating critical thinking metrics, we aim to provide model owners and users with actionable insights into model performance. We hope our work not only addresses this pressing issue but also inspires further research into enhancing the critical thinking and problem-solving capabilities of large language models.\\n\\nWe will be happy to address any further questions!\\n\\nKind Wishes, \\\\\\nAuthors of Paper12877\"}", "{\"title\": \"Thank you for your review (1/2)\", \"comment\": \"> **Figures are poorly inserted\\u2026**\\n\\nThank you for this feedback. We have revised the figure notations and clarified the data points in the captions\\n\\n> **Writing is poor\\u2026**\\n\\nThank you for the reviewer\\u2019s valuable feedback. In response, we have added more details to our findings to provide greater depth and clarity. We have also enhanced the presentation of our results and streamlined the manuscript to improve readability and overall comprehension. We have updated our manuscript to reflect the changes.\\n\\n> **Related work is good but missing\\u2026**\\n\\nThank you for your suggestion. 
We employ Chain-of-Thought prompting to elicit the intermediate reasoning steps from the LLM, allowing us to assess both the model's knowledge accuracy and any inconsistencies between its understanding and the problem constraints. In our revised paper, we will expand the discussion to include other advanced prompting techniques, such as Tree-of-Thought and Graph-of-Thought approaches, as relevant methodological references.\\n\\n> **Data contamination**\\n\\nWhile detecting data contamination poses challenges, we have implemented specific measures to mitigate its effects. Our study primarily examines LLMs' critical thinking rather than problem-solving abilities. For modified multiple-choice questions, memorization of correct answers is less relevant; instead, we analyze how problem setup constraints influence LLM behavior. In modified GSM8k problems, we remove key numerical conditions, making it impossible to derive the exact answer. Thus, even if a model has encountered the original problem, providing the correct result would indicate flawed reasoning rather than desired behavior. Regarding HotpotQA, we specifically excluded questions that GPT-4o could answer correctly without all the necessary documents, ensuring our question pool requires genuine multi-hop reasoning. \\n\\nWe have tested our evaluation framework on Game24. In this setup, LLMs must construct mathematical expressions using basic operations and each provided number exactly once to reach 24. We observed low problem-solving performance across all tested models, with a high frequency of responses indicating problems were unsolvable. Unlike our other datasets, Game24 does not allow for clear parallel versions of questions. It is straightforward to construct number sets that make the task unsolvable; however, replacing a single number creates an entirely new problem. This fundamental change means we cannot guarantee that the modified version maintains comparable complexity to the original.
When LLMs respond that answers cannot be determined for modified questions, it becomes impossible to distinguish between two scenarios: (1) the model's inability to solve the problem, and (2) the model's successful recognition of task inconsistency based on its ability to solve similar problems. Given these inherent limitations in controlling for problem complexity and disambiguating the sources of model behavior, we conclude that Game24 is not suitable for evaluating critical thinking capabilities. We include the experimental results here.\\n\\n> **We create a benchmark\\u2026**\\n\\nWe apologize for any confusion. Our work introduces a framework for creating benchmarks rather than offering a single, fixed benchmark. This framework enables users to assess the robustness of a trained model on their dataset of interest by evaluating its critical thinking abilities and examining how well these capabilities hold up for a given skill.\\n\\n> **Framework paper should be more detailed\\u2026**\\n\\nWe appreciate the advice on reproducing our results. To support this, we have included a link to the vLLM sampling parameters documentation and a table that highlights the parameter changes used in our experiments. In addition, we provide the code and implementation used to generate our results. Furthermore, we have included the templates for automatic evaluation, covering challenge rates, correctness rates, and problem modifications.\"}", "{\"title\": \"We anticipate your feedback!\", \"comment\": \"Dear Reviewers,\\n\\nWith the rebuttal period coming to an end, we would greatly value your additional input. We want to express our sincere gratitude for the time and expertise you've invested in reviewing Paper12877 and helping us enhance its quality. \\n\\nWe would be particularly grateful if you could review our responses and indicate whether they adequately address your concerns, either fully or partially. 
We'd also appreciate knowing if our explanations are moving in a constructive direction. \\n\\nPlease don't hesitate to raise any additional questions or concerns about the paper. **As there is still time until our November 27th deadline, we welcome the opportunity to incorporate any changes that would strengthen the paper further.**\\n\\nBest regards, \\nAuthors of Paper12877\"}" ] }
0rmOx0Ifbf
Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning
[ "Collin Zhang", "Tingwei Zhang", "Vitaly Shmatikov" ]
Recent work showed that retrieval based on embedding similarity (e.g., for retrieval-augmented generation) is vulnerable to poisoning: an adversary can craft malicious documents that are retrieved in response to broad classes of queries. We demonstrate that previous, HotFlip-based techniques produce documents that are very easy to detect using perplexity filtering. Even if generation is constrained to produce low-perplexity text, the resulting documents are recognized as unnatural by LLMs and can be automatically filtered from the retrieval corpus. We design, implement, and evaluate a new controlled generation technique that combines an adversarial objective (embedding similarity) with a "naturalness" objective based on soft scores computed using an open-source, surrogate LLM. The resulting adversarial documents (1) cannot be automatically detected using perplexity filtering and/or other LLMs, except at the cost of significant false positives in the retrieval corpus, yet (2) achieve similar poisoning efficacy to easily-detectable documents generated using HotFlip, and (3) are significantly more effective than prior methods for energy-guided generation, such as COLD.
[ "Dense Retrieval", "Corpus Poisoning", "Adversarial Attack" ]
Reject
https://openreview.net/pdf?id=0rmOx0Ifbf
https://openreview.net/forum?id=0rmOx0Ifbf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yzYNOwi4Ew", "ytL7iW0pPA", "rnKOCtsSce", "ogb5G4dRPk", "oLa3xa6A70", "iFsTmUbR3B", "dUaICyLOZR", "d6ASAopL6O", "a01OIUReO7", "YGmCqKHSGt", "XUttDApCrx", "WxaoIEmbZN", "WZURmaAuvf", "OsZXAwSvMH", "HHmYaEaleC", "As3uSLCTqg" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732504822325, 1732504255568, 1730753145926, 1730572195123, 1732504501093, 1732505313807, 1733171347727, 1732694974616, 1734665948326, 1737523914177, 1730704218529, 1732704756406, 1730481085419, 1730709228342, 1732505001674, 1732503951963 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8508/Authors" ], [ "ICLR.cc/2025/Conference/Submission8508/Authors" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_QPvX" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_NX6r" ], [ "ICLR.cc/2025/Conference/Submission8508/Authors" ], [ "ICLR.cc/2025/Conference/Submission8508/Authors" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_QPvX" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_SPBo" ], [ "ICLR.cc/2025/Conference/Submission8508/Area_Chair_wceg" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_SPBo" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_sCy5" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_sCy5" ], [ "ICLR.cc/2025/Conference/Submission8508/Reviewer_wZCy" ], [ "ICLR.cc/2025/Conference/Submission8508/Authors" ], [ "ICLR.cc/2025/Conference/Submission8508/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> Novelty: comparison with previous beam search based works\\n\\nOur contribution is to show that simply optimizing for low perplexity doesn't produce natural texts. 
We propose a metric to detect these low-perplexity but unnatural texts, and also propose a method to generate actually natural texts.\\nPlease see the \\u201cNew baseline benchmarks\\u201d section in General Response for more information.\\n\\n> Method: soft naturalness score\\n\\nThe purpose of the naturalness score is to guide the optimization of adversarial documents. It might not be well-calibrated but, as our experiments show, it is sufficient to produce adversarial documents that are not detected as unnatural by any currently known method.\\n\\n> LLM naturalness evaluation\\n\\nOur apologies if this was not clear in the original submission, but we use *both* Llama and GPT4o for naturalness evaluation. We have now added Gemma \\u2013 please see \\u201cMultiple LLMs for Naturalness Evaluation\\u201d section in General Response.\\n\\n> Perplexity Filtering\\n\\nThank you for the suggestion! We have added perplexity measurements with three more LLMs, Llama, Gemma, and Qwen. All experiments show that the perplexity distribution of texts produced by AdversarialDecoding has high overlap with normal texts. Please see the \\u201cMultiple LLMs for Perplexity Measurement\\u201d section in General Response.\\n\\n> Threat model\\n\\nIn existing RAG attacks (see related work in our paper), adversarial documents consist of two parts: the sub-document responsible for retrieval (this is what we focus on) and a *separate* sub-document responsible for influencing generation. Our method for generating retrieval sub-documents can thus be combined with existing attacks on RAG and other systems downstream of retrieval. \\nIt is easy to change the optimization objective to maximize the attack success rate of an adversary-chosen prefix + our optimized adversarial text. We have added experiments to demonstrate the feasibility of this attack, setting the prefix to \\u201c[Trigger] is awful\\u201d. 
Please see the \u201cprefix attack\u201d section in General Response.\\n\\n> Multiple Retrievers\\n\\nThanks for your suggestions! We have added evaluations on more retrievers. Please see the \u201cMultiple Retrievers\u201d section in General Response.\\n\\n> prefix\\n\\nSince we are sampling from an LLM, we need to start from some text. For the trigger attack, we use the prefix \u201ctell us a story about [trigger]\u201d as a hint to search for a sentence related to the [trigger]. We show in the GPT4o baseline that simply generating a sentence with this prefix doesn't work. For the no-trigger attack baseline, we use the prefix \u201ctell us a story\u201d to prepare the LLM to generate diverse texts.\\n\\n> HotFlip baseline\\n\\nWe set the number of tokens we use to 32 instead of 50 (used in [5]). HotFlip generates texts with high, extremely abnormal perplexity, which are very easy to detect. By contrast, our objective is stealthiness.\\n\\n> Test query\\n\\nThanks for pointing out a problem in our evaluation. We did not split the datasets because these adversarial texts seldom suffer from overfitting. We fixed the issue and re-ran the evaluation on 1K disjoint queries from the test query set. Here is the updated Table 3:\\n\\n| Method | Top-1 | Top-5 | Top-10 | Top-20 | Top-100 |\\n|----------------------------|-------|-------|--------|--------|---------|\\n| BeamSearchHotflip | 0.00 | 0.01 | 0.01 | 0.02 | 0.05 |\\n| PerplexityLLMOptimizer | 0.08 | 0.20 | 0.27 | 0.37 | 0.59 |\\n| NaturalnessLLMOptimizer | 0.02 | 0.06 | 0.09 | 0.13 | 0.31 |\\n\\n> Presentation\\n\\nThanks for the suggestions! In the next revision, we will improve the presentation.\"}
We show in the GPT4o baseline that simply generating a sentence with this prefix doesn't work. \\n\\nFor the no-trigger attack baseline, we use the prefix \u201ctell us a story\u201d to prepare the LLM to generate diverse texts. For the results, please see the \u201cprefix attack\u201d section in General Response.\\n\\nWe have also added an attack setup in which we spread misinformation by inserting it before our optimized text. Please refer to the \\\"Prefix attack\\\" section in General Response.\\n\\n> end-to-end RAG performance\\n\\nWe work in the same setting as Zhong et al. \u201cPoisoning Retrieval Corpora by Injecting Adversarial Passages\u201d (EMNLP 2023), focusing on retrieval. \\n\\nIn existing RAG attacks (see related work in our paper), adversarial retrieval is separate from adversarial generation. Adversarial documents consist of two parts: the sub-document responsible for retrieval (this is what we focus on) and a separate sub-document responsible for influencing downstream generation. Our method for generating retrieval sub-documents can thus be combined with existing attacks on RAG and other systems downstream of retrieval. \\n\\n> novelty and weighted sum\\n\\nThese papers apply beam search to sentence editing, not generation.\\n\\nAs we show in our BasicAdversarialDecoding setting, beam search alone does not produce natural text. Existing beam search methods produce low-perplexity adversarial documents that are detectable \u2013 using the method proposed in this paper \u2013 as unnatural.\\n\\nOur contributions are new methods for (a) detecting adversarial texts that have low perplexity, yet are unnatural (this includes documents produced by previous beam search methods), and (b) incorporating the naturalness objective into beam search. \\n\\nWe have also added experiments to compare with two recent beam search-based adversarial generation methods.
Please see the \u201cNew baseline benchmarks\u201d section in General Response.\\n\\n> black-box\\n\\nWe use the standard definition of black-box access from the security literature: the attacker can query the model freely to get the outputs, but does not have access to the weights of the model.\\n\\n> llm naturalness evaluator robustness\\n\\nPlease refer to \u201cMultiple LLMs for Naturalness Evaluation\u201d section in General Response.\\n\\n> false negative\\n\\nWe provided true positives. By definition, false negatives = 1 - true positives.\\n\\n> presentation\\n\\nThanks for your suggestions; we will make the changes in the next version.\"}
For example, beam search algorithms have been widely used in adversarial attacks [1][2]. The paper should discuss these related works and emphasize its unique contributions beyond altering the beam search optimization objective.\\n\\n> [1] Zhao et. al., Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm\\n\\n> [2] Liu et. al., A More Context-Aware Approach for Textual Adversarial Attacks Using Probability Difference-Guided Beam Search\\n\\n(4) The paper claims black-box access to the embedding encoder. However, given the assumption that embeddings can be accessed repeatedly, one could calculate gradients numerically, making the black-box claim somewhat overstated.\\n\\n(5) Some other minor issues:\\n- Please use `\\\\citep` and `\\\\citet` in Latex properly.\\n- The paper uses the gendered pronoun \\\"his\\\" for attackers (Line 109), which could be avoided.\\n- The paper contains several grammatical mistakes\\n- Notation definitions lack precision and could be simplified. For example, `P_n`\\u200b is defined as a retrieval corpus but actually represents benign documents. The subscript `n` could be omitted.\", \"questions\": \"(6) Why not use a weighted sum of embedding similarity and perplexity instead of introducing an extra model?\\n\\n(7) Why are only true positives and false positives considered for defense? Would false negatives not be equally important?\\n\\n(8) Is the LLM naturalness evaluator used during the attack aligned with the one used in the evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the vulnerability of modern retrieval systems that rely on embedding similarity. 
The authors highlight that existing adversarial methods, such as HotFlip, generate malicious documents with high perplexity, making them easily detectable through perplexity filtering and large language model (LLM) evaluations. To address this, the paper introduces a novel controlled generation technique called AdversarialDecoding, which simultaneously optimizes for embedding similarity and naturalness using a surrogate LLM. This approach produces adversarial documents that maintain low perplexity and appear natural, effectively evading both perplexity-based and LLM-based detection mechanisms. Experimental results on the MS MARCO dataset demonstrate that AdversarialDecoding achieves high poisoning success rates comparable to traditional methods while significantly reducing the likelihood of detection. Additionally, the study explores the limited transferability of these adversarial documents across different encoders, suggesting potential avenues for developing robust defenses. The research underscores the importance of advancing defensive strategies to safeguard retrieval systems against sophisticated adversarial attacks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces a novel controlled generation method named AdversarialDecoding, which uniquely integrates embedding similarity with a \\\"naturalness\\\" constraint. By leveraging a surrogate large language model (LLM) to compute soft scores, the method simultaneously optimizes for semantic relevance and textual naturalness.\", \"The methodology presented in the paper is robust and meticulously designed. The authors conduct comprehensive experiments using the MS MARCO datasets, and give a lot of ablation studies.\"], \"weaknesses\": [\"The writing in this paper could be **improved a lot**. Firstly, each formula lacks numbering. Additionally, the citation format in lines 251-258 seems off. 
Moreover, line 123 ends with a comma, and line 130 lacks a period. These issues are quite frequent in the article, which suggests a need for more attention to detail.\", \"In some experiments, using llama3.1-8b as both the adversarial decoding model and the naturalness evaluation model could raise concerns about fairness. This is because llama3.1-8b might be biased towards the data it generates itself. Besides, could you explain why you use GPT2 to measure the perplexity of generated adversarial documents rather than GPT3 or other LLMs?\", \"The selected baselines are limited. Hotflip is an early character-level adversarial attack method, but since then, many more effective attack algorithms[1] have been developed, whether at the word, character, or sentence level. These newer methods often result in much higher fluency.\", \"Adding some additional human evaluations would be valuable.\", \"**References:**\", \"[1] https://github.com/thunlp/TAADpapers\"], \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Weaknesses:\\n> transferability\\n\\nAs we show in our paper, the baselines also transfer poorly. Therefore, this limitation is not specific to our method but a generic issue with attacks on encoders. One possible solution is to integrate multiple encoders into the optimization objective.\\n\\n> It depends on LLM and does not consider the situation of LLM hallucination. In addition, the use of LLM needs to consider the efficiency and effectiveness of the attack.\\n\\nTo avoid LLM hallucination, we use multiple LLMs in the evaluation stage (Llama and GPT4o in the original submission; we have now added Gemma 2). Please refer to the \\\"Multiple LLMs for naturalness evaluation\\\" section in General Response.\\n\\nOur method is efficient, taking about 3 minutes to generate one adversarial document. 
Note that all non-trivial attacks on retrieval or RAG require only a single or a few adversarial documents for the entire retrieval corpus. Therefore, the attacker never needs to generate many adversarial documents. We also discuss the efficiency of our method below.\\n\\n> PoisonedRAG comparison and multiple retrievers\\n\\nThank you for the suggestions!\\n\\nPoisonedRAG focuses on poisoning retrieval results for a specific query. We demonstrate how to generate natural-looking adversarial documents for a wide range of queries (same setting as Zhong et al., EMNLP 2023).\\n\\nWe have added experiments with two additional retrievers. Please see the \\u201cMultiple Retrievers\\u201d section in General Response.\\n\\n# Questions:\\n> How to further enhance the attack effect while improving the naturalness of adversarial documents?\\n\\nAs our experiments show, the baseline attack success rate is already high. Enforcing the naturalness constraint inevitably restricts the search space, which limits the maximum achievable ASR. For naturalness, we show that our documents evade all currently known defenses.\\n\\n> For different types of retrieval systems and application scenarios, does this method need to be specifically adjusted?\\n\\nWe can incorporate several different retrieval objectives. Please see our new experiments for other retrievers.\\n\\n> How to better understand and quantify the \\\"naturalness\\\" indicator in order to more accurately evaluate the generated adversarial documents? Is it reasonable to rely solely on perplexity?\\n\\nWe show in our paper that perplexity is not enough to define naturalness. In fact, one of our contributions is an effective method to detect adversarial documents that have low perplexity.\\n\\nThe naturalness score is our proposed optimization objective *in addition to perplexity*. We also propose a systematic way to evaluate naturalness by using multiple LLMs and different prompts. 
\\n\\n> How to consider the hallucination and efficiency problems caused by the auxiliary LLM?\\n\\nPlease see response to the second weakness point above.\"}", "{\"comment\": \"> Dependence on Surrogate LLM\\n\\nOur method is efficient, taking only about 3 minutes to generate one adversarial document. Note that all known attacks on retrieval or RAG require only a single or a few adversarial documents for the entire retrieval corpus. Therefore, the attacker never needs to generate many adversarial documents. \\n\\nWe can further apply KV cache to speed up generation. Here is the running time comparison:\\n\\n| Method | Time (min:sec) |\\n|------- | -------|\\n| No KV Cache | 3:46 |\\n| KV Cache | 1:10 |\\n\\n\\n> Single Prompt Optimization\\n\\nOur evaluation includes experiments showing that generation using a single prompt transfers to other prompts. Moreover, we evaluate generated documents using multiple LLMs (Llama, GPT4o, and Gemma) and multiple prompts. They evade detection by all of them.\\n\\n> Insufficient Evaluation of LLM Detection Evasion\\n\\nThank you for the suggestions. We have added experiments to use more LLMs for naturalness evaluation and perplexity measurements. Please see \\u201cMultiple LLMs for naturalness evaluation\\u201d and \\u201cMultiple LLMs for perplexity measurement\\u201d in General Response.\\n\\n> Generalizability across Different Retrievers\\n\\nThank you for the suggestions. We have added experiments for two new retrievers. 
Please see the \\u201cMultiple Retrievers\\u201d section in General Response.\\n\\n> Naturalness Evaluation Questions\\n\\nPlease see the \\u201cMultiple LLMs for Naturalness Evaluation\\u201d section in General Response.\\n\\n> Table 2 Beam search Question\\n\\nSince the current naturalness score of our adversarial texts is sufficiently high to bypass the defense, we cap the naturalness score objective and only further optimize for cosine similarity when we increase the beam size.\"}", "{\"title\": \"Follow up comments\", \"comment\": \"Thank you for your response. I appreciate the effort in addressing my concerns; however, I remain unconvinced about the practical relevance of the attack settings. While I understand that the work builds on Zhong et al. (EMNLP 2023, Short Paper), I struggle to envision a realistic scenario where this type of attack could pose significant real-world challenges.\\n\\nThere are several issues with the proposed setting. For instance, why would an attacker have control over the sub-documents used in RAG? Furthermore, why is the alignment between the sub-document and the main document for downstream tasks not validated, especially when fluency checks are performed?\\n\\nI encourage the authors to critically evaluate the assumptions and settings adopted from previous work.\\n\\nI will maintain my original score.\"}", "{\"title\": \"Thank you & follow-up comments\", \"comment\": \"Thanks for addressing some of the raised issues; including evaluating perplexity against different models (which does present stronger results), adding the prefix attack and making clarifications. However, there are major issues remain unresolved:\\n\\n1. **Comparison with Fluency-Optimized LLM Attacks.** The proposed comparison with \\u201cbeam-search work\\u201d is a step forward toward highlighting the motivation for involving naturalness in attacks. 
However, the evaluation should be made more precise and fair: currently it compares apples (attacks against LLMs) to oranges (attacks against retrievers). Additionally, it is unclear how similar are the settings of the three attacks (e.g., a different length adversarial suffix is expected to affect the naturalness, as well as other parameters), and it is possible that LLMs refuse to answer some prompts (due to potential harm, as these are originally jailbreak prompts). Finally, presenting attack examples for each evaluated attack (e.g., to showcase the lack of naturalness of existing ones), can further strengthen the paper.\\n2. **Evaluation of Naturalness.** The naturalness metric (Section 6) still lacks a convincing justification. Specifically, I am concerned binary questions to LLMs might bias evaluation. To rule this out, I recommend further validating the findings through querying LLMs with non-binary scores (e.g., 1-10), in addition to a possible online study with human subjects.\\n3. **Evaluation on multiple retrievers.** Validating results on multiple retrievers is a valuable step. However, as only a narrowly scoped experiment is run on the newly added retrievers, it is unknown which of the findings generalize beyond the Contriever model (which, as noted in the original review, is trained on out-of-distribution data).\\n4. **Threat Model.** As mentioned in the original review, it seems that most of the evaluation (besides the new prefix attack) does not adhere to the threat model\\u2019s assumptions. Accordingly, either the threat model needs to be adjusted, or all experiments should be repeated with the prefix attack.\\n5. **HotFlip Baseline.** A clarification of the results in Table 3 would be helpful. The response states that the ASR drop in the HotFlip evaluation is due to optimizing fewer tokens than Zhong et al. 2023, albeit using x100 poisoning rate. However, Zhong et al.\\u2019s results demonstrate 98% ASR, as opposed to the 1% ASR in Table 3. 
Is it possible that a certain defense is applied but not explicitly mentioned? If not, it would be helpful to double-check and discuss what causes unconstrained HotFlip to provide _worse_ results than the proposed fluency-constrained attack.\\n\\nLast, I\\u2019d like to note that I found the response somewhat hard to follow. Particularly, upon reading the response, it is not immediately clear which changes have been already made in the PDF and which will only be made in future revision (and if so where). A clarification about the updates (both past and planned changes) and a diff would be extremely helpful.\"}", "{\"metareview\": \"This work identifies a challenge in earlier retrieval poisoning attacks\\u2014their vulnerability to detection by fluency-based defenses\\u2014and proposes a novel solution. It presents a black-box attack that employs beam search to generate adversarial passages, optimizing for both the retrieval objective and penalizing text perplexity and unnaturalness (as assessed by an auxiliary LLM). The proposed attack achieves performance on par with prior methods while being significantly more resistant to detection by standard fluency-based defenses.\", \"strength\": [\"identification of previous methods limitation and proposed a method inspired by this vulnerability.\", \"Weaknesses (still remain after the rebuttal):\", \"There are several issues with the proposed setting. For instance, why would an attacker have control over the sub-documents used in RAG? Furthermore, why is the alignment between the sub-document and the main document for downstream tasks not validated, especially when fluency checks are performed?\", \"The evaluation lacks fairness, comparing methods for LLMs and retrievers with differing settings, such as adversarial suffix lengths. The absence of attack examples limits the clarity of naturalness evaluation differences.\", \"The reliance on binary prompts for evaluation may introduce bias. 
Non-binary scoring or human evaluation would strengthen the validity of the naturalness claims.\", \"The findings are primarily tested on the Contriever model, which is trained on out-of-distribution data, making generalization uncertain.\", \"The evaluation does not consistently adhere to the stated threat model, particularly for non-prefix attacks. Adjustments or expanded experiments with the prefix attack are needed.\", \"The HotFlip results in Table 3 significantly deviate from previously reported benchmarks, lacking an explanation for this inconsistency.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers of this submission are extremely responsive, but many limitations and weaknesses remain after the rebuttal. I suggest that the authors address these issues before the next submission.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work points to an issue in previous retrieval poisoning attacks\\u2014detectability via fluency-based defenses\\u2014and addresses it by proposing a new method. Specifically, it introduces a black-box attack that uses beam search to generate adversarial passages following both the retrieval objective, and text perplexity and naturalness (i.e., the level of naturalness as judged by an auxiliary LLM) penalties. 
The attack shows comparable performance with prior work, while it is arguably harder to detect by standard fluency-based defenses.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The work identifies\\u2013and clearly states\\u2013a limitation in existing retrieval attacks, and proposes a method to address it.\", \"As the evaluation shows, the proposed attack is harder to detect through the proposed fluency-based detectors (including perplexity and naturalness), while attaining comparable attack success to prior attacks, which further emphasizes the vulnerability of retrieval.\"], \"weaknesses\": \"**Novelty.** The work\\u2019s novelty focuses on the naturalness of adversarial documents generated by the new method. However:\\n\\n* The main novelty of the method\\u2014enriching the objective with LLM logit-based naturalness score (Sec. 5.2)\\u2014lacks a convincing presentation (see more details below) and its current evaluation might be biased (more below), especially in light of the repetitive text shown in qualitative samples.\\n* It was previously shown that the discrete text optimization\\u2019s (e.g., in LLM jailbreaks) trade-off between attack success and text fluency [1] can be mitigated (e.g., [2], [3]). Specifically, similarly to this work, Sadasivan et al. [3] show LM-logit-based beam search to produce fluent text attacks. Thus, it is unclear whether this work offers a significant delta w.r.t. to previous work. \\n\\n**Method.** Since it is introduced as a core contribution, it would be helpful to elaborate on the soft naturalness score component in the method. This, for example, could be done by reporting on a study of alternative scores (if such were considered), or exploring the correlation between the soft naturalness score with naturalness of text.\\n\\n**Threat Model.** It is unclear why an attacker would aim to generate unconstrained documents (potentially meaningless) and promote their retrieval. 
For example, the trigger attack is motivated by the potential of \\u201cspreading adversarial content\\u201d (line 112), although, to my understanding, such content is not necessarily contained in the generated documents.\\n\\n**Evaluation.** As the work\\u2019s contribution focuses on the \\u201cnaturalness\\u201d of the generated documents, it would be helpful to strengthen the evaluation:\\n\\n* **Perplexity Filtering (Sec 7.2).** As GPT2 is a relatively dated and weak LLM (line 329), it would be helpful to additionally calculate documents\\u2019 perplexity using stronger LLMs (e.g., Gemma2-2B or others), and show that the method is robust to such filtering.\\n* **Naturalness Filtering (Sec. 7.3).** It seems that the naturalness evaluation for the non-basic attack (\\u201cAdv\\u201d) is largely done using the same LLM (Llama) used in the attack. A stronger result would be to show the generated documents are robust to naturalness filtering of different, strong, LLMs. Alternatively, one could ask LLMs for a score in a large range (e.g., 1-10), as the current prompt (asking for binary score) could possibly bias the LLM\\u2019s answer. Another option is reporting on a user study of their naturalness.\\n* **Evaluated Model(s).** The paper evaluates the attacks against a __single__ retrieval model (namely, Contriever [4]). It should be noted that the evaluated dataset (MS-MARCO) is out-of-training-distribution for this model (Contriever was not trained on MS-MARCO [4], as opposed to most text encoders), and it was previously observed to be exceptionally vulnerable to such retrieval attacks [5]. Thus, it would be helpful to validate the results on additional models.\\n\\n**Presentation.** Some presentation-related comments and nits:\\n* Sec. 7.3: It would be helpful to state in the text (besides the table caption) that the evaluated attack is trigger attack.\\n* Fig. 
1: The figure would be easier to interpret if the y-axis ticks would match the (pre-log) values from text.\\n* Algorithm 1: As LLM_{logits},LLM_{naturalness} and \\\\lambda are all part of the algorithm parametrization, it would be clearer if these were included in the Input.\\n* Algorithm 1, line 23: Shouldn\\u2019t `k` be `m` (in the comment)?\\n\\n**References:**\\n\\n[1] Baseline Defenses for Adversarial Attacks Against Aligned Language Models; Jain et al. 2023.\\n\\n[2] FLRT: Fluent Student-Teacher Redteaming; Thompson & Sklar, 2024.\\n\\n[3] Fast Adversarial Attacks on Language Models in One GPU Minute; Sadasivan et al., ICML 2024.\\n\\n[4] Unsupervised Dense Information Retrieval with Contrastive Learning; Izacard et al., TMLR 2022.\\n\\n[5] Poisoning Retrieval Corpora by Injecting Adversarial Passages; Zhong et al., EMNLP 2023.\", \"questions\": [\"It seems that the attack algorithm is given a prefix prompt (per line 186), however, unless I missed it, it is not mentioned in the text (Sec. 5). Could you clarify what is the role of this prefix and how it is chosen?\", \"Results in Sec 7.4, Table 3 (e.g., HotFlip, Top-20, 0.01; with 500 passages), seem to contradict those originally reported by Zhong et al. 2023 [5] on the same dataset and model (e.g., Top-20, 98% with 50 passages). It would be helpful to clarify this discrepancy.\", \"In line 243, it is mentioned that the no-trigger attack is tested against 1K queries. Are these disjoint from the 50K attacked set (similar to the trigger attack\\u2019s evaluation)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"comments\", \"comment\": \"Thank you for your response and the additional experiments. I can now see the results of the naturalness evaluation of LLMs with different prompts. However, I still have a few questions:\\n\\n1. 
In the \\\"Multiple LLMs for Naturalness Evaluation\\\" section, for each query, how many samples were generated? Were multiple trials conducted? \\n2. The evaluation prompts still appear semantically similar to me. I would expect to see whether **AdversarialDecoding** can evade detection with prompt less similar to the attack one, such as: *\\\"You are tasked to evaluate the naturalness of the provided text. ... Use the following criteria to assess naturalness: ...\\\"* \\n3. The authors did not address my question in Q1. While the authors stated that *\\\"Table 2 shows that evasion of detection transfers to two other prompts and another LLM,\\\"* the *Adv* column only displays results for Llama. \\n4. Regarding the statement: *\\\"We cap the naturalness score objective and only further optimize for cosine similarity when we increase the beam size,\\\"* where is this step reflected in your method? I could not find a corresponding explanation in the paper.\"}", "{\"summary\": \"The paper addresses the vulnerability of retrieval systems based on embedding similarity to poisoning attacks. The authors demonstrate that previous techniques, such as HotFlip, produce documents that can be easily detected using perplexity filtering. They introduce a new controlled generation technique that combines an adversarial objective (embedding similarity) with a \\\"naturalness\\\" objective based on soft scores from an LLM. The proposed method aims to generate adversarial documents that cannot be automatically detected using perplexity filtering or LLM-based \\\"naturalness\\\" filtering without incurring significant false positives, while maintaining similar poisoning efficacy. 
The authors evaluate their approach on the MS MARCO dataset and show that their method outperforms prior techniques like energy-guided decoding (COLD) and is more effective than HotFlip in generating stealthy adversarial documents.\", \"soundness\": \"3\", \"presentation\": [\"Figure 1 is too large, in my opinion. It might be better to present two figures (e.g., one for trigger attack and one for non-trigger attack) horizontally.\", \"The table's caption should be placed before the table.\"], \"contribution\": \"2\", \"strengths\": [\"The paper presents a novel approach to generating natural adversarial documents for retrieval poisoning. Combining an adversarial objective with a \\u201cnaturalness\\u201d objective based on soft scores from a surrogate LLM is novel. This addresses the limitations of previous methods that produced easily detectable adversarial documents.\", \"The methodology section explains the proposed adversarial decoding method in detail and the \\\"Algorithm 1 Adversarial Decoding\\\" is clear. The experimental setup and results are also presented in a clear and organized manner.\", \"The work is significant as it addresses an important issue in modern retrieval systems. The ability to generate stealthy adversarial documents has implications for the security and integrity of retrieval-augmented generation and other applications that rely on retrieval systems.\"], \"weaknesses\": [\"Methodology:\", \"**Dependence on Surrogate LLM**: The proposed method's reliance on a surrogate LLM for computing the naturalness score has a drawback. It significantly raises the computational cost because computing $s_{natural}$ demands the calculation of the LLMs' output logits, which is more costly than computing the similarity score. This could limit the method's practical application, especially when dealing with large datasets. 
I would expect a runtime comparison between their method and baselines, or to discuss potential optimizations to reduce computational cost.\", \"**Single Prompt Optimization**: Optimizing adversarial documents based on only a single prompt (\\u201cIs this text unintelligible?\\u201d) restricts their robustness.\", \"**Insufficient Evaluation of LLM Detection Evasion**: One of the three \\u201cnaturalness\\u201d filtering prompts (\\u201cIs this text unintelligible?\\u201d) is identical to the attacker's prompt, and the other two are semantically similar. This resembles a \\\"data leakage\\\" situation, in my opinion. The perplexity-based filtering is also the case (the attacker and defender both use GPT-2). I expect a more comprehensive evaluation using a wider variety of prompts and different LLMs to accurately determine the method's ability to evade detection.\", \"**Generalizability across Different Retrievers**: Given the relatively low transferability of adversarial decoding across different retrievers, more experiments on different retrievers are needed to verify the effectiveness of the proposed method.\"], \"questions\": [\"Q1: In line 378, The author stated \\\"Table 2 shows that evasion of detection transfers to two other prompts and another LLM\\\". It is confusing as table 2 does not seem to include the results for other prompts and LLMs. So where is the result for evasion of detection on other prompts and LLMs?\", \"Q2: In experiment setup, The author said \\\"To evaluate 'naturalness' of adversarial and real documents, we prompt GPT-4o and LLaMA-3.1-8B with these prompts\\\". But where is the result of GPT-4o filtering?\", \"Q3: In table 2, at the same threshold, increasing the width of the beam search actually increases the true positive rate of LLM-based naturalness filtering (0.5 -> 0.7), which means more adversarial document is filtered. This is very strange to me. 
In my opinion, increasing the width of beam search should be able to find better solutions (i.e., more stealthy adversarial documents) and therefore less likely to be detected.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the problem of retrieval poisoning in modern retrieval systems. Firstly, it points out the limitations of existing methods (such as HotFlip and COLD) in generating adversarial documents. The documents generated by HotFlip have a relatively high perplexity and are easily detected; while COLD fails to generate useful texts under adversarial constraints. Then, this paper proposes a new controlled generation technique, which combines an adversarial objective (embedding similarity) with a \\\"naturalness\\\" objective calculated based on an open-source surrogate language model (LLM). The generated adversarial documents are difficult to be detected by perplexity filtering or other LLMs without generating a large number of false positives. This method has been evaluated in different scenarios such as trigger attacks and no-trigger attacks, using the MS MARCO dataset. In terms of poisoning efficacy and the naturalness of generated documents, it is superior to previous methods, but still has some limitations, such as poor transferability across encoders and the need for more research on defenses.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed adversarial decoding method is a novel controlled generation technique that comprehensively considers embedding similarity and naturalness and effectively addresses the deficiencies of existing methods.\\n2. Experiments were conducted using the large-scale MS MARCO dataset, comparing different generation methods and considering two scenarios: trigger attacks and no-trigger attacks.\", \"weaknesses\": \"1. 
The transferability of adversarial documents between different encoders is poor, which limits the universality of the method.\\n2. It depends on LLM and does not consider the situation of LLM hallucination. In addition, the use of LLM needs to consider the efficiency and effectiveness of the attack.\\n3. The experiments are not sufficient. The experiment only considers one retriever, Contriever, and other retrievers need to be compared. At the same time, the baselines need to be increased (for example, PoisonedRAG in the references).\", \"questions\": \"1. How to further enhance the attack effect while improving the naturalness of adversarial documents?\\n2. For different types of retrieval systems and application scenarios, does this method need to be specifically adjusted?\\n3. How to better understand and quantify the \\\"naturalness\\\" indicator in order to more accurately evaluate the generated adversarial documents? Is it reasonable to rely solely on perplexity?\\n4. How to consider the hallucination and efficiency problems caused by the auxiliary LLM?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> presentation\\n\\nThanks for the suggestions! In the next revision, we will improve the presentation.\\n\\n> perplexity and naturalness evaluation\\n\\nPlease see \\u201cMultiple LLMs for Perplexity Measurement\\u201d and \\u201cMultiple LLMs for Naturalness Evaluation\\u201d in General Response.\\n\\n> Baseline comparison\\n\\nPlease see \\u201cMultiple LLMs for Naturalness Evaluation\\u201d in General Response.\\n\\n> Human Evaluation\\n\\nIn all existing retrieval and RAG attacks, adversarial documents are added to corpora consisting of thousands or millions of documents. 
Human inspection of every document is not a feasible defense in these scenarios; therefore, we focus on automated defenses.\"}", "{\"title\": \"General Response\", \"comment\": \"# New baseline benchmarks\\n\\nThe state-of-the-art methods for producing fluent adversarial texts are BEAST and FLRT. They produce text that has low perplexity but is detected as unnatural by the detection method proposed in this paper.\\n\\nWe applied the naturalness evaluation metric proposed in this paper to all 10 examples provided by BEAST [1] and all 41 examples provided by FLRT [2]. We then asked GPT4o, Llama-3.1 8B, and Gemma 2 9B these three questions:\\n\\n \\\"Is this text meaningless?\\\"\\n\\n \\\"Is this text gibberish?\\\"\\n\\n \\\"Is this text unintelligible?\\\"\\n\\nThe score for a given document is the total of all LLM-prompt combinations where the answer is \\u201cNo\\u201d, thus scores range from 0 to 9. Here are the scores for BEAST and FLRT texts:\", \"beast\": \"`[5, 0, 0, 1, 0, 2, 0, 0, 0, 0]`\", \"flrt\": \"`[0, 0, 0, 0, 0, 3, 0, 0, 0, 1, 1, 0, 0, 3, 5, 0, 0, 0, 0, 4, 0, 3, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]`\\n\\nThis clearly shows that BEAST and FLRT documents are recognized as unnatural and fail to evade detection. 
By contrast, here are the scores for texts produced by our method:\", \"advdecoding\": \"`[9, 6, 9, 6, 6, 3, 6, 9, 6, 9]`\\n\\n# Multiple Retrievers\\n\\nWe have added experiments with two additional retrievers.\\n\\n| Model | Top-1 | Top-3 | Top-5 | Top-10 | Top-100 |\\n|------------------------------------------|-------|-------|-------|--------|---------|\\n| sentence-transformers/sentence-t5-base | 0.08 | 0.12 | 0.15 | 0.19 | 0.35 |\\n| sentence-transformers/gtr-t5-base | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\\n| facebook/contriever | 0.04 | 0.23 | 0.29 | 0.43 | 0.83 |\\n\\n\\nWhile success rate varies from retriever to retriever, documents generated by our method achieve non-trivial success rates against all of them that enable practical attacks.\\n\\n\\n\\n# Multiple LLMs for perplexity measurement\\n\\nWe have added the perplexity measurement by Llama, Gemma and Qwen. Please see the updated PDF. We have updated section 7.2 and Figure 1 to reflect the changes. We show that our methods evade perplexity detection by all LLMs.\\n\\n\\n# Multiple LLMs for naturalness evaluation\\n\\nIn our naturalness evaluation, we use both Llama-3.1 8B (the LLM we use for generation) and GPT4o (which we do *not* use for generation). We also added evaluation experiments with Gemma 2 9B. 
We use the same method as in the \\u201cnew baseline benchmarks\\u201d to compute the naturalness score.\\n\\n| Query | GPT2BasicAdversarialDecoding | LlamaBasicAdversarialDecoding | AdversarialDecoding |\\n|-------------------|-----------------------------|-------------------------------|---------------------|\\n| spotify | 0 | 0 | 9 |\\n| xbox | 0 | 0 | 6 |\\n| lebron james | 0 | 0 | 9 |\\n| amazon | 0 | 0 | 6 |\\n| iphone | 0 | 0 | 6 |\\n| netflix | 0 | 0 | 3 |\\n| BMW | 0 | 0 | 6 |\\n| Marilyn Monroe | 0 | 0 | 9 |\\n| nfl | 0 | 3 | 6 |\\n| Olympics | 0 | 0 | 9 |\\n\\n\\nThis shows that documents generated by our method achieve high naturalness scores with all three LLM evaluators.\\n\\n\\n\\n# Prefix attack\\nIt is easy to change the optimization objective to maximize the attack success rate of an adversary-chosen prefix + our optimized adversarial text to spread misinformation. We have added experiments to demonstrate the feasibility of this attack, setting the prefix to \\u201c[Trigger] is awful\\u201d.\\n| | Top-1 | Top-3 | Top-5 | Top-10 | Top-100 |\\n|-----------------------|-------|-------|-------|--------|---------|\\n| Adversarial Decoding | 0.21 | 0.29 | 0.36 | 0.46 | 0.75 |\", \"references\": \"[1] Fast Adversarial Attacks on Language Models in One GPU Minute; Sadasivan et al., ICML 2024.\\n\\n[2] FLRT: Fluent Student-Teacher Redteaming; Thompson & Sklar, 2024.\"}" ] }
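The naturalness-scoring scheme described in the General Response above (nine yes/no judgments from 3 LLMs × 3 prompts, counting "No" answers, so scores range 0-9) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: `ask_llm` is a hypothetical placeholder for querying each judge model (Llama-3.1 8B, GPT-4o, Gemma 2 9B), and the model identifiers are assumptions.

```python
# Sketch of the 0-9 naturalness score from the General Response.
# `ask_llm(model, question)` is a hypothetical stand-in that would query a
# judge LLM and return "Yes" or "No"; it is not a real API.

PROMPTS = [
    "Is this text meaningless?",
    "Is this text gibberish?",
    "Is this text unintelligible?",
]
JUDGES = ["llama-3.1-8b", "gpt-4o", "gemma-2-9b"]  # assumed identifiers


def naturalness_score(document, ask_llm):
    """Count LLM-prompt combinations that answer 'No' (i.e., judge the text natural)."""
    return sum(
        1
        for judge in JUDGES
        for prompt in PROMPTS
        if ask_llm(judge, f"{prompt}\n\n{document}") == "No"
    )
```

Under this scheme, a document scoring 9 (as several AdversarialDecoding examples do in the tables above) is judged natural by every LLM-prompt pair, while the BEAST and FLRT examples mostly score 0.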
0rS9o1uKqu
Training-Like Data Reconstruction
[ "Pirzada Suhail", "Amit Sethi" ]
Machine Learning models are often trained on proprietary and private data that cannot be shared, though the trained models themselves are distributed openly assuming that sharing model weights is privacy preserving, as training data is not expected to be inferred from the model weights. In this paper, we present Training-Like Data Reconstruction (TLDR), a network inversion-based approach to reconstruct training-like data from trained models. To begin with, we introduce a comprehensive network inversion technique that learns the input space corresponding to different classes in the classifier using a single conditioned generator. While inversion may typically return random and arbitrary input images for a given output label, we modify the inversion process to incentivize the generator to reconstruct training-like data by exploiting key properties of the classifier with respect to the training data. Specifically, the classifier is expected to be relatively more confident and robust in classifying training samples, and the gradient of the classifier's output with respect to the classifier's weights is also expected to be lower for training data than for random inverted samples. Using these insights, along with some prior knowledge about the images, we guide the generator to produce data closely resembling the original training data. To validate our approach, we conduct empirical evaluations on multiple standard vision classification datasets, demonstrating that leveraging these robustness and gradient properties enables the reconstruction of data semantically similar to the original training data, thereby highlighting the potential privacy risks involved in sharing machine learning models.
[ "Network Inversion", "Interpretability", "Privacy", "Training Data Reconstruction" ]
Reject
https://openreview.net/pdf?id=0rS9o1uKqu
https://openreview.net/forum?id=0rS9o1uKqu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOLa9hWVsw", "uxpqb8k4kU", "rlHhENgiQx", "rZp38Mc9BO", "ZdKOjTZX6O", "YxFhJvrDBN", "W2ifZjptDz", "OpnoQGTv5y", "N4GM97Z0qL", "Mmma84nFuH", "9Nn86PzTQi", "89FcXxMlGd", "6VH8UPwXPC" ], "note_type": [ "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737524281140, 1729501246309, 1729879082373, 1732288036527, 1732372050239, 1732289349734, 1732477019624, 1730376010830, 1734654177686, 1732310375221, 1732544531093, 1730051160902, 1732111676801 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_gHTs" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_bSYE" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_CPfS" ], [ "ICLR.cc/2025/Conference/Submission13775/Authors" ], [ "ICLR.cc/2025/Conference/Submission13775/Authors" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_bSYE" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_CPfS" ], [ "ICLR.cc/2025/Conference/Submission13775/Area_Chair_uVk6" ], [ "ICLR.cc/2025/Conference/Submission13775/Authors" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_CPfS" ], [ "ICLR.cc/2025/Conference/Submission13775/Reviewer_wha5" ], [ "ICLR.cc/2025/Conference/Submission13775/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work introduces a novel approach to reconstruction of training data from ML models using an inversion-based attack. 
The attack is evaluated on a number of CV benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is well-motivated, the presentation is mostly on the positive side and mostly well-written.\", \"weaknesses\": \"The biggest scientific weakness of this work is its impact. We have previously seen dozens of papers on model inversion and data reconstruction using various techniques ranging from inversion networks (e.g. [1, 2]) to gradient reconstruction ([3,4]). Note that these cover collaborative learning in most cases, as authors have already linked the relevant centralised settings in their related work. There is hardly anything new about the approach proposed here or the results obtained in this work. The datasets used here are the basic toy datasets that have previously been inverted multiple times using the techniques I linked above across a very large number of training settings and model architectures (mostly beyond the basic CNN architectures). While novelty is not the only factor that we are looking for when assessing submissions, I can hardly see any additional insights, unexplored work directions or interesting findings either. Almost everything in this work has previously been studied (e.g. the attack method, the use of priors, combination of multiple reconstruction factors etc.) in great detail and I do not see how the community benefits from this work.\", \"in_terms_of_more_addressable_concerns_i_have\": \"the paper is not well-presented given the tight space constraints. With only 9-10 pages, authors should really concentrate on new methods, novel results and discussion. Currently, the introduction (which is in my view really inflated for no particular reason) takes 2 pages. This is not to mention the 2 pages of basic ML vocabulary, where each term has its own paragraph with margins (e.g. you do not need to explain what a cross-entropy loss is at ICLR). 
This adds up to about 4-5 pages of superfluous content. And given my criticism of the novelty and the impact of the work and its findings, this is exactly where this extra space should have gone - to explore the method in more detail, show novel insights etc. There are also 2 diagrams which I find to be relatively similar and they take about a page as well without adding much to the content (i.e. one would suffice, two are too much).\\n\\nWhile this may not be the comment the authors expected to hear, I would encourage them to concentrate on a) extracting as many novel scientific insights from their method as possible and b) restructuring the work so these results are clear to the reader. This would make the paper useful for the community and acceptable for publication. \\n\\n[1] - Usynin, Dmitrii, et al. \\\"Zen and the art of model adaptation: Low-utility-cost attack mitigations in collaborative machine learning.\\\" Proceedings on Privacy Enhancing Technologies (2022).\\n\\n[2] - He, Zecheng, Tianwei Zhang, and Ruby B. Lee. \\\"Model inversion attacks against collaborative inference.\\\" Proceedings of the 35th Annual Computer Security Applications Conference. 2019.\\n\\n[3] - Geiping, Jonas, et al. \\\"Inverting gradients-how easy is it to break privacy in federated learning?.\\\" Advances in neural information processing systems 33 (2020): 16937-16947.\\n\\n[4] - Boenisch, Franziska, et al. \\\"When the curious abandon honesty: Federated learning is not private.\\\" 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P). IEEE, 2023.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a technique called Training-Like Data Reconstruction (TLDR) which can essentially create samples which are similar to the data which a neural network based classifier was trained on. 
Their technique is based on learning a generative model which can take in label encodings and produce outputs in the image space by using up-convolutions. The signal for training the generator comes from the classifier itself. Essentially, the authors come up with a loss function which encourages the generator to produce samples which the pre-trained classifier will classify into the same class as that of the conditional label provided to the generator. The loss function is made up of several regularizations and terms which encourage the generator to produce images which look similar to the training data. Evaluation is done on CNN based classifiers and 4 datasets including Cifar-10 and MNIST.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The main aim of the paper is quite clear to understand. The authors lay out the techniques they use to design their inversion network and explain the three desired properties they would want their generative model to have quite well. If their generative model is able to produce high confidence samples which have some room for perturbation (i.e. some perturbations in the input space do not produce wildly different output distributions by the classifier) and have low gradient norm, then their model has a better likelihood of producing realistic samples. I also liked their use of vector and matrix conditioning techniques which I believe are useful for generating controlled samples from generative models. The authors also provided a very comprehensive literature review in the space of model inversion based attacks which is well appreciated.\", \"weaknesses\": \"The paper suffers from several weaknesses which I believe can be addressed.\\n\\nFirstly, the experimental evaluation is very limited. The authors only perform evaluation on MNIST, FMNIST, SVHN and CIFAR-10. 
These are very simple datasets and we do not know if this technique will be applicable to more complex datasets such as Imagenet or MS-COCO where fine-grained details need to be captured. Secondly, there is no quantitative evaluation of how well the reconstructed images match the original training data. The authors only presented visual samples. Third, evaluation was only done on CNN based classifiers and it would be interesting to know if this technique can perform well on more modern architectures like ViTs.\\n\\nAnother concern I have about the paper is the complexity of the TLDR scheme. In total, there are 9 types of loss functions including KL divergence, Cross entropy, variational losses, feature orthogonality, cosine similarity etc. There is no clear understanding of the impact of each type of loss function and whether they are all necessary. I would have liked to see some ablation studies or theoretical justification for such a complex scheme.\\n\\nFinally, I am concerned about the quality of the reconstructions themselves. CIFAR-10 is the most complex dataset they perform their evaluation on and many of the samples are hard to parse. The inversion scheme does not capture color / contrast very well and my understanding is that this inversion is only done to get a feel for what the training data was and not to be able to steal confidential training data and reuse it.\", \"questions\": \"What exactly is the input to the generator? You discuss four approaches - label, vector, intermediate matrix and vector-matrix conditioning.\\nAdditionally, if you only use vector conditioning, do you simply sample from a Gaussian, softmax it, and then feed this to the Generator?\\nAlso, in Figure 1, the input to the generator seems to be a latent + conditioning vector. What is the source of the latent vector?\\n\\nWhat exactly is the cosine similarity between? It says on Line 319 that \\\"cosine similarity between features of generated images i and j\\\". 
Where are two images coming from?\\n\\nTypos and grammatical errors\\n\\nLine 219 has a typo ' a diverse data distributions'\\n\\nLine 234 - 'we given its simplicity'\\n\\nLine 255 - for a encoding the label\\n\\nLine 240 - learnt off of the labels each representative of the separate classes\\n\\nOther \\n\\nIs the claim on Line 256 an observation you made during your research or a previously known fact from the literature? If so, please cite the relevant literature.\\n\\nDo you think you could use a more powerful generative model i.e diffusion model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comments. My suggestion is to revise the paper based on the feedback received.\\n\\nI did not get any specific reply to my questions (no metrics, no comparison (albeit under different threat model, a comparison would be useful), and unclear attacker's goal).\"}", "{\"title\": \"Discussion Phase\", \"comment\": \"Thanks a lot for your reviews.\\n\\nGeneral Comment.\\\\\\nWe would like to make a distinction here that while reconstructions have been studied in different settings using multiple techniques, we in this paper are looking at an extreme case of trying to reconstruct the training data\\n1. from a SINGLE MODEL,\\n2. without use of any PRE-TRAINING,\\n3. without any auxiliary datasets,\\n4. without any insight of the training process,\\n5. without any information of the gradients during training, and\\n6. without the use of any unobvious priors.\\n\\nOur work is in extension of the cited NeurIPS paper [A] from 2022 that does reconstruction in similar settings\\n1. but only on binary classifiers,\\n2. based only on fully connected layers, and\\n3. trained only on around 500 samples of MNIST & CIFAR.\\n\\nWe in this paper extend this work onto\\n1. CNNs,\\n2. with multi-class classification tasks,\\n3. 
on models trained on over 10000 samples of MNIST, CIFAR, SVHN, FMNIST.\\n\\nHence we would request the paper to be reviewed in the right context.\\n\\nReviewer Specific Comments\", \"response_to_the_weaknesses_identified\": \"1. We have extended experimental evaluation from MNIST & CIFAR-10 in the previous paper to also include FMNIST & SVHN. The approach is fairly general and can be extended to other datasets as well. We will include the experimental results of application of this technique on Tiny-Imagenet in the revised version. Quantitative evaluation of the reconstructed samples with the training samples will be performed using SSIM and included in the revised version. Our approach is fairly general and can easily be extended to other architectures as the reconstructions are done mostly based on the input-output relationships instead of the inner workings of the models. However the reconstructions in CNNs are particularly difficult because of weight sharing, in which the same kernel moves over the entire image. In FC Layers each input pixel is associated with an individual weight that can memorise the training data aiding the reconstruction.\\n\\n2. We have only used four losses in our proposed Network Inversion approach. In reconstruction the same cross entropy and KL divergence are applied on the perturbed images as well, while the other three individually weighted terms constitute the prior loss. A detailed ablation study highlighting the relative importance of each of the individual terms in the Reconstruction Loss will be included in the revised version.\\n\\n3. The reconstructions as mentioned above are performed without any prior knowledge about the data. We have not come across techniques that can do perfect reconstructions of the training data on Fully Connected Layers let alone CNNs. Better reconstructions are just good hyper-parameters away and will be included in the revised version with all experimental details.\", \"questions_answered\": \"1. 
The input to the generator is a latent vector randomly sampled from a normal distribution concatenated with the conditioning vector.\\nIn label conditioning the conditioning vector is an embedding learnt from the integer labels. In vector conditioning the conditioning vector is randomly sampled from the normal distribution and then soft-maxed to represent an input conditioning distribution. In Intermediate Matrix Conditioning, no conditioning vectors are used; instead, the latent vector is unconditionally upsampled to the spatial dimensions of NxN (the number of classes) and then concatenated with the conditioning matrix. Whereas in Vector-Matrix conditioning, conditioning vectors as above are concatenated with the latent vectors in the initial upsampling to NxN spatial dimensions, followed by concatenation with the conditioning matrix. The conditioning matrix is an NxN matrix with a particular row and column set to 1 & rest 0.\\n2. Cosine Similarity and Feature Orthogonality are computed between the features of a batch of generated images in which i & j are two images in a batch. These losses are computed for features of every image with every other image in the batch itself. \\n3. Thanks for identifying the typos, all the errors will be rectified in the revised version.\\n4. The claim that 2D matrices do a better job at conditioning compared to only vectors in case of images is our experimental observation and will be backed by an ablation study done on all the 4 conditioning approaches.\\n5. We are only using a conditioning generator that is trained differently from conventional generative models. However we are considering the use of normalising flows and diffusion models to perform inversion.\\n\\n[A] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks, 2022. 
URL https://arxiv.org/abs/2206.07758.\"}", "{\"title\": \"Discussion Phase\", \"comment\": \"Thanks a lot for your reviews.\\n\\nGeneral Comment\\\\\\nWe would like to make a distinction here that while reconstructions have been studied in different settings using multiple techniques, we in this paper are looking at an extreme case of trying to reconstruct the training data\\n1. from a SINGLE MODEL,\\n2. without use of any PRE-TRAINING,\\n3. without any auxiliary datasets,\\n4. without any insight of the training process,\\n5. without any information of the gradients during training, and\\n6. without the use of any unobvious priors.\\n\\nOur work is in extension of the cited NeurIPS paper [A] from 2022 that does reconstruction in similar settings\\n1. but only on binary classifiers,\\n2. based only on fully connected layers, and\\n3. trained only on around 500 samples of MNIST & CIFAR.\\n\\nWe in this paper extend this work onto\\n1. CNNs,\\n2. with multi-class classification tasks,\\n3. on models trained on over 10000 samples of MNIST, CIFAR, SVHN, FMNIST.\\n\\nReviewer Specific Comments\", \"response_to_the_weaknesses_identified\": \"1. We would like to know of any recommended evaluation metrics that can be used. We will include evaluation using SSIM/PSNR, however they are more suited for cases in which we have the actual ground truth for comparison. Similarly FID/LPIPS are more suited for generative models in which the quality of the generated images is at par with the training data, which is not expected of any reconstruction technique.\\n\\n2. We would like to be referred to any established state of art techniques that perform reconstruction in the above defined settings on CNN for comparison. There is none to the best of our knowledge. \\n\\n3. 
The main goal of TLDR as clearly specified in the paper is to get a clue of what the model was trained on and since we are dealing with classifiers we would like to be able to reconstruct the training data for each class.\\\\\nBalle et al. 2022 & Fredrikson et al., 2015 perform reconstructions in a different setting and we are not trying to do or achieve either.\\\\\nBalle et al. 2022 has information of all the training data except one that they are trying to reconstruct. We don\u2019t have any idea of even a single training sample.\\\\\nFredrikson et al., 2015 uses demographic information about the patient along with the model to predict the genetic markers, which are heavily related. We instead perform reconstructions entirely from the given model weights.\", \"questions_answered\": \"As pointed out above, the related works are not performed in the extreme setting of trying to reconstruct the training data entirely from the model weights. We can still rephrase the sentence as \\u201cWhile these attacks have mostly been demonstrated in controlled settings on over-parameterized models, several studies have also examined scenarios involving complex, multi-class datasets. However, these examinations often rely on additional information, such as priors, pre-trained models, or auxiliary data, highlighting the need for further research to understand the risks in the absence of such supplementary resources.\\u201d\\n\\nWe acknowledge that some data can be memorised and would like to avoid overstating by rephrasing the sentence as \\u201cin under-parameterized models, the likelihood of memorization is significantly reduced compared to over-parameterized models\\u201d.\\n\\n[A] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks, 2022. URL https://arxiv.org/abs/2206.07758.\"}", "{\"comment\": \"I thank the authors for their response and their clarification to my question. 
However, I still hold to my previous concerns regarding the limited evaluation, complexity of the scheme, and unclear goal of the paper. I'd recommend revising this paper as I do not believe it is ready for a conference publication.\"}", "{\"summary\": \"This paper introduces a new method for reconstructing data that resembles the training dataset of an ML model. The method is based on two steps: inversion, where one learns the space corresponding to different classes, and reconstruction.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The related work presents an interesting connection between reconstruction attacks and works from the '90s.\", \"The method is fairly presented.\"], \"weaknesses\": [\"The paper defines no metrics to evaluate the effectiveness of the method. The only results that are shown are reconstructed images.\", \"No empirical comparison is provided against state of the art methods (or any methods, for that matter). Unfortunately, this makes it impossible to judge how better the method is with respect to prior work.\", \"It is unclear what the main goal of the reconstruction attack is. In the literature, there are two: 1) reconstructing one (or more) images that look as close as possible to the original image (e.g., see Balle et al. 2022 as referenced by the authors), and 2) reconstructing images that resembles data from a certain label (e.g., (Fredrikson et al., 2015)). Based on the results (Fig 3-4), it seems to me that the proposed attack is trying to do the former but is achieving the latter.\", \"(Fredrikson et al., 2015) \\\"Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures\\\". 
You may want to include this reference, which predates (Yang et al., 2019) in terms of model inversion attacks.\"], \"questions\": \">While these attacks have been demonstrated in controlled settings, where models are typically over-parameterized or overly simplistic, the risks associated with sharing models trained on large, complex and multi-class datasets are yet been fully explored.\\n\\nSome of the related work you mentioned did actually consider some of these factors. You may want to be more precise here.\\n\\n>For under-parameterized models, where there is no possibility of memorization\\n\\nThis is an overstatement.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers were unfortunately not excited about the paper. In particular there were concerns about the experimental setup, complexity of the scheme, and the scale of reconstruction.\", \"additional_comments_on_reviewer_discussion\": \"The authors could not convince the reviewers to bump up the score.\"}", "{\"title\": \"Discussion Phase\", \"comment\": \"Thanks a lot for your reviews.\\n\\nGeneral Comment.\\\\\\nWe would like to make a distinction here that while reconstructions have been studied in different settings using multiple techniques, we in this paper are looking at an extreme case of trying to reconstruct the training data\\n1. from a SINGLE MODEL,\\n2. without use of any PRE-TRAINING,\\n3. without any auxiliary datasets,\\n4. without any insight of the training process,\\n5. without any information of the gradients during training, and\\n6. without the use of any unobvious priors.\\n\\nOur work is in extension of the cited NeurIPS paper [A] from 2022 that does reconstruction in similar settings\\n1. but only on binary classifiers,\\n2. based only on fully connected layers, and\\n3. 
trained only on around 500 samples of MNIST & CIFAR.\\n\\nWe in this paper extend this work onto\\n1. CNNs,\\n2. with multi-class classification tasks,\\n3. on models trained on over 10000 samples of MNIST, CIFAR, SVHN, FMNIST.\\n\\nReviewer Specific Comments\", \"response_to_the_weaknesses_identified\": \"1. We haven\\u2019t claimed any theoretical correctness or given any theoretical guarantees of our proposed approach in the paper. However the assumptions made with regard to the training data and the classifier are very obvious and empirically well established. The inclusion of these assumptions can clearly be seen to incentivise the generator to generate training-like data instead of random inverted samples.\\n\\n2. We would appreciate any recommendation for an evaluation metric. We are considering using SSIM/PSNR, although they are more suited for cases in which we have the actual ground truth for comparison. In our case, we can compare the reconstructed samples with all the samples in the training data to find their best match.\\n\\n3. While all the experimental details about the models, architecture and hyper-parameters used are included in the code, we will add these details to the revised version of the paper. \\n\\n4. Further a detailed ablation study about the relative importance of each of the individual terms in the Reconstruction Loss will also be included in the revised version.\", \"questions_answered\": \"As mentioned in the paper, the quality of the reconstructed samples starts to degrade as we increase the number of the training samples used for the classifier. We however have been able to achieve faithful reconstructions for models trained on over 10000 samples, which is orders of magnitude larger than the number of samples (500) used in [A].\\n\\n[A] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks, 2022. 
URL https://arxiv.org/abs/2206.07758.\"}", "{\"comment\": \"> We acknowledge that some data can be memorised and would like to avoid overstating by rephrasing the sentence as \\u201cin under-parameterized models, the likelihood of memorization is significantly reduced compared to over-parameterized models\\u201d.\\n\\nOk.\\n\\n> The main goal of TLDR as clearly specified in the paper is to get a clue of what the model was trained on and since we are dealing with classifiers we would like to be able to reconstruct the training data for each class.\\n> As pointed out above, the related works are not performed in the extreme setting of trying to reconstruct the training data entirely from the model weights. We can still rephrase the sentence as \\u201cWhile these attacks have mostly been demonstrated in controlled settings on over-parameterized models, several studies have also examined scenarios involving complex, multi-class datasets. However, these examinations often rely on additional information, such as priors, pre-trained models, or auxiliary data, highlighting the need for further research to understand the risks in the absence of such supplementary resources.\\u201d\\n\\nYou need to be much more precise than that. From the first sentence, it seems that you're trying to achieve model inversion as defined by Fredrikson et al. You should then specify what is different from them (assumptions, methods), and what is similar to them.\\nI would encourage you to re-elaborate your paper based on these recommendations.\"}", "{\"summary\": \"The paper introduces a network inversion approach to reconstruct training-like data from trained models. The paper modifies the inversion process to incentivize the generator to reconstruct training-like data by exploiting several key properties of the classifier with respect to the training data. 
For example, the classifier is expected to be relatively more confident and robust in classifying training samples, and the gradient of the classifier output with respect to the classifier's weights is also expected to be lower for the training data than for the random inverted image.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method extends inversion techniques from simpler architectures to CNNs, demonstrating its potential with complex datasets and classifiers.\\n\\nThe method incorporates a few types of losses including Cross-entropy, KL divergence, cosine similarity, and feature orthogonality losses that may be useful to reconstruct training-like data.\\n\\nDemonstrates the model's effectiveness across multiple datasets, highlighting privacy risks in different scenarios.\", \"weaknesses\": \"The paper assumes that because training data possess these properties, we can use these properties to reconstruct the training data, which may not be theoretically right. For example, A => B does not mean B>A.\\n\\nThe paper also does not have any formal metrics on how successful the reconstruction attack is. 
\\n\\nThe paper also does not provide clear details about the experimental setup like details about the models, dropout rates and so on, and does not discuss details about how successful the attack can be for different types of models and architecture.\\n\\nThe paper may benefit more from detailed discussion about which loss is more important to the reconstruction attack.\", \"questions\": \"Under what conditions are the attacks more successful versus not?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Phase\", \"comment\": \"Thanks a lot for your reviews.\\\\\nThe paper does not seem to have been reviewed in proper context.\\\\\nWhile reconstructions have been studied in different settings using multiple techniques, we in this paper are looking at an extreme case of trying to reconstruct the training data\\n1. from a SINGLE MODEL,\\n2. without use of any PRE-TRAINING,\\n3. without any auxiliary datasets,\\n4. without any insight of the training process,\\n5. without any information of the gradients during training, and\\n6. without the use of any unobvious priors.\\n\\nThe papers referred to here and others in the literature are not entirely related to our approach as they use multiple models, pre-training, auxiliary datasets, gradient information, collaborative learning frameworks.\\\\\nOur work is in extension of the cited NeurIPS paper [A] from 2022 that does reconstruction in similar settings\\n1. but only on binary classifiers,\\n2. based only on fully connected layers, and\\n3. trained only on around 500 samples of MNIST & CIFAR.\\n\\nWe in this paper extend this work onto\\n1. CNNs,\\n2. with multi-class classification tasks,\\n3. 
on models trained on over 10000 samples of MNIST, CIFAR, SVHN, FMNIST\\n\\nThe paper [A] also shows that the weights in the first layer that are individually associated with each input also directly memorise the training data which aids reconstruction in Feed Forward Neural Nets. This memorisation however is not possible in CNNs because of weight sharing, making reconstructions harder.\\n\\nHence I would request the paper to be reviewed in the right context that can help us to improve this work in the revision.\\n\\nFurther we would be happy to address other concerns related to the paper presentation, diagrams, ablation study, evaluation metrics, and also include a detailed description of the proposed method in place of any unnecessary content. Also we are not at all explaining any individual loss function; instead, we are just trying to clarify how they are used in our approach.\\n\\n[A] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks, 2022. URL https://arxiv.org/abs/2206.07758.\"}" ] }
0rACj8JLAL
BOOD: Boundary-based Out-Of-Distribution Data Generation
[ "Qilin Liao", "Shuo Yang", "Bo Zhao", "Ping Luo", "Hengshuang Zhao" ]
Harnessing the power of diffusion models to synthesize auxiliary training data based on latent space features has proven effective in enhancing out-of-distribution (OOD) detection performance. However, extracting effective features outside the in-distribution (ID) boundary in latent space remains challenging due to the difficulty of identifying decision boundaries between classes. This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD), which synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models. BOOD first learns a text-conditioned latent feature space from the ID dataset, selects ID features closest to the decision boundary, and perturbs them to cross the decision boundary to form OOD features. These synthetic OOD features are then decoded into images in pixel space by a diffusion model. Compared to previous works, BOOD provides a more efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. Extensive experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly, achieving a 29.64\% decrease in average FPR95 (40.31\% vs. 10.67\%) and a 7.27\% improvement in average AUROC (90.15\% vs. 97.42\%) on the Cifar-100 dataset.
[ "OOD detection", "Diffusion models", "Training data generation" ]
Reject
https://openreview.net/pdf?id=0rACj8JLAL
https://openreview.net/forum?id=0rACj8JLAL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xMNBVGxm06", "n7djDf2Hll", "m9e9ofArOL", "kNrZLCBjOU", "jVWUnDruYk", "immpp6dORz", "ibPBXWCAs4", "agFXd9ZNNp", "aXUFvWWVWu", "ZRShLKB7B2", "WUerMRpC8g", "RrL9KSZ927", "Q44O1P5iIc", "OyG9JRiZ2g", "KTjVDMiGMb", "HsRPW1STO5", "GoJV9qWveI", "GlMCBC0GkQ", "CeKSg9tMIt", "B9XWolMSHF", "AYV9A4Heof", "AIttnj7rUo", "7P1Eovhg0t", "3kMpUViCUo" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732772984494, 1730424109982, 1732165038196, 1732606314402, 1732772740410, 1730665548144, 1732773264174, 1732164824798, 1732369513926, 1732369531550, 1732164695086, 1734639807609, 1732164726137, 1732581556690, 1732165086056, 1732369495770, 1732773066318, 1731115621590, 1732164801110, 1737524012254, 1732369545553, 1730123652920, 1732164874804, 1732555356709 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_dKDk" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_YNDo" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_Fhg5" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Area_Chair_b6oN" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9891/Reviewer_v1ZR" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_YNDo" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_v1ZR" ], [ "ICLR.cc/2025/Conference/Submission9891/Authors" ], [ "ICLR.cc/2025/Conference/Submission9891/Reviewer_Fhg5" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer dKDk,\\n\\nAs the discussion period draws to a close, we wanted to check if our responses have addressed your concerns adequately. Please let us know if you need any further clarification.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a novel framework, named Boundary-based Out-Of-Distribution data generation (BOOD). It first identifies the features closest to the decision boundary by calculating the minimal perturbation steps imposed on the feature to change the model's prediction. Then, it generates the outlier features by perturbing the identified boundary ID features along with the gradient ascent direction. These synthetic features are then fed into a diffusion model to generate the OOD images, enhancing the model\\u2019s ability to distinguish ID and OOD data. Extensive experiments show the effectiveness of their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.This paper proposes a novel boundary-based method for generating OOD data, leveraging diffusion models to identify ID data closest to decision boundaries and applying an outlier feature synthesis strategy to generate images located around decision boundaries. 
This approach provides high-quality and informative features for OOD detection.\\n2.This paper is technically sound. The ablation experiments, hyperparameter analysis experiments, and visualization experiments are all comprehensive.\\n3.This paper provides a clear and thorough introduction to the proposed methods and algorithmic procedures. The formulas and notations are well-explained, with detailed definitions for all symbols and terms used.\", \"weaknesses\": \"1.One potential drawback is a notation conflict between the additional perturbation steps c (line 287-288) and the earlier use of C for the number of classes. This overlap in symbols could cause confusion, so it might be beneficial to change the symbol for one of these terms to improve clarity.\\n2.In Table 2, the comparison with state-of-the-art (SOTA) methods could be enhanced by including more recent methods from 2024. This would better highlight the advantages and relevance of the proposed approach in the context of the latest advancements.\\n3.A limitation of the hyperparameter sensitivity analysis is that it could benefit from experimenting with a wider range of values to better demonstrate the rationale behind the chosen settings. Additionally, more intuitive visualizations could be provided to clearly illustrate the improvements of the proposed method over previous approaches.\", \"questions\": \"See Weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer v1ZR 1/2\", \"comment\": \"We appreciate the reviewer for providing valuable advice. Below are our responses:\\n\\n> Weakness 1: the perturbation strategy may unintentionally transform the features into in-distribution classes.\\n\\nThank you for providing a reasonable concern. 
To prevent the generated OOD features from transforming into the distribution of other in-distribution classes, we set **a relatively small step size $\\\\alpha$ for minor perturbations** in each iteration when synthesizing OOD features, which **guarantees small deviations** and prevents the synthesized features from entering other distributions. We also employ **small additional perturbation steps $c$** to guarantee the synthesized OOD features will not step into other distributions after crossing the decision boundaries. \\n\\nTo further provide empirical evidence to support our theory, we attach the performance of BOOD on CIFAR-100 with a larger range of $\\\\alpha$ and $c$:\\n\\n| $c$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0 | 12.19 | 97.20 |\\n| 1 | 11.98 | 97.32 | \\n| 2 | 10.67 | 97.42 |\\n| 3 | 11.23 | 97.24 |\\n| 4 | 12.65 | 97.02 |\\n| 5 | 13.91 | 96.84 |\\n\\n| $\\\\alpha$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0.001 | 12.09 | 97.21 |\\n| 0.005 | 12.12 | 97.21 | \\n| 0.015 | 10.67 | 97.42 |\\n| 0.025 | 18.73 | 95.38 |\\n| 0.05 | 19.33 | 95.02 |\\n| 0.1 | 22.57 | 94.79 |\\n\\nFrom the results above and Figure 4 in the paper, we can conclude that **our perturbing strategies are promising**. We also include the tables above in our updated version of the paper. \\n\\n> Weakness 2: ablation studies on alternative perturbation strategies.\\n\\nOur proposed strategy perturbs the identified ID boundary features through **the direction of gradient ascent**, aiming to synthesize OOD features that **are distributed around the decision boundaries**. To gain a deeper insight into the effectiveness of our strategy, we provide additional ablation studies on the different perturbation strategies, including (1) adding **Gaussian noises** to the latent features, (2) **displacing features away from class centroids** and (3) **BOOD**'s perturbation strategy. 
Below are the results:\\n\\n| Method | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| (1) | 18.99 | 95.04 |\\n| (2) | 40.51 | 91.63 |\\n| BOOD | 10.67 | 97.42 |\\n\\nFrom the statistics above, we can conclude that **our perturbation strategies are solid**. We also include this analysis in the updated version of the submission. We will explore more perturbation strategies in our camera-ready version of the submission. \\n\\n\\n> Question 1: the sensitivity of the method to the hyperparameter r.\\n\\nThis is a great point. For pruning rate $r$, we suggest **a relatively mild rate** since too **small $r$** will not have enough features to **generate diverse OOD images**, and **a large $r$** may lead to selection of ID features that **distribute far from the boundaries**. Below is BOOD's performance with different $r$ on CIFAR-100:\\n\\n| r value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 2.5% | 13.45 | 96.84 |\\n| 5% | 12.47 | 97.34 | \\n| 10% | 13.31 | 97.02 |\\n| 20% | 15.88 | 95.68 |\\n\\nWe also attached the table above in the updated paper, please check it!\"}", "{\"comment\": \"Thanks for the very detailed response ! I have carefully checked the response, and most of my concerns have been clarified. Hoping additional experiments will be included in the final draft. Considering the overall quality, I will change the rating to borderline accept.\"}", "{\"comment\": \"We thank reviewer v1ZR for the valuable reply. Below is our response:\\n\\n> It is still unclear why using small step size for minor perturbations and additional perturbation steps can prevent transforming features into other in-distribution classes.\\n\\nGreat question. Firstly, we want to emphasize that large step size $\\\\alpha$ and additional perturbing steps $c$ **do make a difference** in the generated images. By setting $\\\\alpha$ and $c$ to a relatively large number, we will perturb the generated features further from the decision boundaries. 
From **Figure 4** in the paper, we can observe that large $\\\\alpha$ and $c$ might **lead the synthesized OOD images to transform into other classes' distributions**.\\n\\nMoreover, since the prediction confidence produced by the classifier does not fully correlate with the distribution area of a feature, we cannot directly judge whether a feature has transformed into other ID classes by the prediction confidence produced by the classifier at the feature level. **Our perturbation strategies can alleviate this problem to a large extent.** From the experiment results, we can discover that **the control of $\\\\alpha$ and $c$ is effective**.\\n\\nWe also offer **a possible strategy** to better solve this problem. After obtaining the identified ID boundary features, we perturb the features for $c$ steps and calculate the **entropy** of the feature in each step using the following formula:\\n$$\\nH(z_{k}) = -\\\\sum_{i=1}^{n} p_{i}\\\\log_{e}(p_{i})\\n$$\\nwhere $z_{k}$ denotes the synthesized feature at step $k$ ($k \\\\le c$), $n$ denotes the number of classes, and $p_{i}$ denotes the prediction probability for class $i$. The more uniform the distribution of probability is, the higher the entropy will be. When a feature is transformed towards other classes' distributions or distributed in ID areas, the prediction probability for a specific class will become **abnormally high**, resulting in **lower entropy**.\\n\\nFor each original feature, we rank the entropies of its $c$ perturbed versions and select the features whose perturbed samples show the **highest entropies**. These selected features are **more likely to distribute outside the ID area**.\\n\\nWe are optimizing this method and we will conduct more experiments and present it in the camera-ready version. Thank you again for your constructive question.\"}", "{\"summary\": \"This paper proposes BOOD, a method for synthesizing out-of-distribution images that are closer to the boundary, for enhancing OOD detection performance. 
It first learns an image encoder whose feature space aligns with class token embeddings, and leverages it as a cosine classifier. Then it picks the images whose features need the fewest number of perturbation steps in the gradient ascent direction to change the cosine classifier\\u2019s prediction, and generates OOD images from their perturbed features. It then uses the generated OOD images to regularize the training of an OOD classification model. Experimental results show that BOOD outperforms a variety of existing OOD detection approaches on CIFAR-100 and ImageNet-100 as ID data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a new approach for synthesizing out-of-distribution data by performing adversarial perturbation and generating images along the ID boundary. The method is intuitively and technically sound.\", \"Performance-wise, the gain over existing methods is significant on CIFAR-100 as ID. The synthesized images look reasonable visually as boundary cases.\", \"The writing and presentation of the paper are clear.\"], \"weaknesses\": [\"The method seems to be bounded by the capability of the stable diffusion model. In cases where ID data are very distinct from stable diffusion's training distribution, e.g. if the ID data is SVHN or texture, or some other domains like medical imaging, etc., or where the classification is very fine-grained, it is uncertain how effective the method would be.\", \"The performance improvement on CIFAR-100 as ID data is significant but the improvement on ImageNet-100 is only marginal, although both datasets are natural images with 100 classes. This also somewhat raises some uncertainty about how much improvement BOOD can bring over the existing methods in general. 
It may be helpful to include more in-depth discussion or analysis on in which cases BOOD provides significant gains and in which cases its advantage over prior approaches is less obvious.\", \"Minor point - there are several typos in the use of parenthetical vs. textual citations: e.g. L047, L179, L232\"], \"questions\": [\"How necessary is it to synthesize OOD data, as opposed to finding publicly available OOD data and seeing if training with them can generalize to unseen OOD data? How does BOOD compare with methods that use real OOD data for augmentation, such as [1]?\", \"The method seems to involve various different hyperparameters, including pruning rate r, max perturbation iteration K, and regularization weight beta. How are they selected? If one applies BOOD to a new ID dataset, are there guidelines or general rules of how to select them?\", \"Given that generation with diffusion models can be computationally expensive, it would be helpful to see more in-depth analysis on computation-performance tradeoffs (e.g. performance vs. the number of images generated per class).\", \"[1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. \\\"Deep anomaly detection with outlier exposure.\\\"\\u00a0ICLR 2019.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are pleased to know that our reply successfully addressed your concerns. 
Thank you again for your valuable time and constructive feedback!\"}", "{\"title\": \"Response to Reviewer Fhg5 2/2\", \"comment\": \"> Question 2: general guidelines for hyperparameter selection.\", \"our_suggested_guidelines_for_hyperparameter_selection_are_summarized_as_follows\": \"For pruning rate $r$, we recommend implementing **a moderate pruning rate**, as insufficient pruning (small $r$) may **limit the diversity** of generated OOD images (**not enough features**), while excessive pruning (large $r$) risks selecting ID features **proximally distributed to the anchor**.\\n\\n| $r$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 2.5% | 13.45 | 96.84 |\\n| 5% | 12.47 | 97.34 | \\n| 10% | 13.31 | 97.02 |\\n| 20% | 15.88 | 95.68 |\\n\\n\\nOur analyses suggest selecting **a relatively large max iteration number $K$** to ensure comprehensive boundary crossing for most features. While increased iterations do affect computational overhead in boundary identification, **the impact remains manageable**. We present a detailed computational cost and performance analysis across varying $K$ values:\\n\\n| $K$ value | Boundary identification time | FPR95 &#8595; | AUROC &#8593; | \\n|:--:|:--:|:--:|:--:|\\n| 5 | ~9sec | 17.69 | 94.33 | \\n| 50 | ~1.5min | 12.47 | 97.34 | \\n| 100 | ~2.5min | 12.47 | 97.34 | \\n| 200 | ~5min | 12.47 | 97.34 | \\n| 400 | ~10min | 12.47 | 97.34 | \\n\\nFor regularization weight $\\\\beta$: Empirical evidence suggests optimal performance is achieved with **moderate regularization weighting**, as excessive OOD regularization can compromise OOD detection efficiency.\\n\\n| $\\\\beta$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 1.5 | 12.71 | 96.95 |\\n| 2 | 12.78 | 97.15 |\\n| 2.5 | 12.47 | 97.34 | \\n| 3 | 13.1 | 97.02 |\\n\\nFor step size $\\\\alpha$: **A moderate $\\\\alpha$ value** is recommended for boundary features identification and OOD features synthesizing. 
A large $\\\\alpha$ may lead to **large discrepancies** between iterations in the adversarial attack, making the estimated distance to the decision boundaries inaccurate. \\n\\n| $\\\\alpha$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0.001 | 12.09 | 97.21 |\\n| 0.005 | 12.12 | 97.21 | \\n| 0.015 | 10.67 | 97.42 |\\n| 0.025 | 18.73 | 95.38 |\\n| 0.05 | 19.33 | 95.02 |\\n| 0.1 | 22.57 | 94.79 |\\n\\nFor step size $c$: \\nWe suggest a **moderate $c$ value**, since a large $c$ may force the generated OOD features to **step into other classes' distributions**, and a small $c$ may not guarantee that the OOD features are **adequately distant** from the ID boundary. \\n\\n| $c$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0 | 12.19 | 97.20 |\\n| 1 | 11.98 | 97.32 | \\n| 2 | 10.67 | 97.42 |\\n| 3 | 11.23 | 97.24 |\\n| 4 | 12.65 | 97.02 |\\n| 5 | 13.91 | 96.84 |\\n\\nThe above statistics illustrate that **our suggested guidelines for hyperparameter selection are effective**. We have included the new analysis in **Appendix C**. Please check our updated submission of the paper, thanks!\\n\\n> Question 3: analysis on computation-performance tradeoffs.\\n\\nThis is a reasonable question concerning computational cost. As the number of generated OOD images per class increases, the OOD detection performance improves. Below is the performance vs. different numbers of generated images per class on CIFAR-100:\\n\\n| number of images per class | FPR95 | AUROC | ID ACC | \\n|:--:|:--:|:--:|:--:|\\n| 100 | 25.21 | 93.63 | 65.14 | \\n| 500 | 15.83 | 96.1 | 73.18 | \\n| 1000 | 12.47 | 97.34 | 78.17 |\\n\\nWe also provide the computational cost comparison between BOOD and DreamOOD on CIFAR-100 below. The computational costs show **almost no difference**. 
\\n\\n| Computational Cost | Building latent space | OOD features synthesizing | OOD image generation | OOD detection model regularization | Total |\\n|:----------:|:--------:|:-------:|:-------:|:-------:|:-------:|\\n| BOOD | ~0.62h | ~0.1h | ~7.5h | ~8.5h | ~16.72h |\\n| DreamOOD | ~0.61h | ~0.05h | ~7.5h | ~8.5h | ~16.66h |\\n\\nWe have included the comparisons regarding computational cost in **Appendix E** in our updated version of paper, please check them, thanks!\\n\\n> Weakness 3: several typos in the use of parenthetical vs. textual citations.\\n\\nThank you for pointing out the typos! We have already fixed them, please check our updated version of paper.\\n\\n\\nWe feel confident that we have **thoroughly addressed the points you raised**, and we thank the reviewer for taking the time to read our rebuttal and for your positive feedback again. If you still have concerns, we are willing to have discussion with you. \\n\\n\\n[1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. \\\"Deep anomaly detection with outlier exposure.\\\" ICLR 2019.\\n[2] Xuefeng Du, Yiyou Sun, Jerry Zhu and Yixuan Li. \\\"Dream the Impossible: Outlier Imagination with Diffusion Models.\\\" NIPS 2024.\"}", "{\"comment\": \"Dear Reviewer Fhg5:\", \"our_earlier_response_systematically_addressed_your_concerns_from_five_different_perspectives\": \"(1) we explain the reason of the marginal improvement on ImageNet-100, (2) we explain the reason why the capability bounded by stable diffusion should not be our focus, (3) we explain the necessity of synthesizing OOD data and provide comparison analysis between BOOD and a method using real OOD data for augmentation, (4) we provide general guidelines for hyperparameter selection, and (5) we fix the typo of the use of parenthetical citations.\\n\\nWith the discussion period drawing to a close, we want to ensure our responses have addressed your questions adequately. 
Thank you again for your valuable feedback.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer dKDk:\", \"we_have_thoroughly_addressed_your_feedback_across_four_key_dimensions_in_our_previous_response\": \"(1) we fix the notation conflict for the number of classes, (2) we provide comparison between BOOD and a SOTA method, (3) we provide analysis on BOOD's performance with a wider range of hyperparameters, and (4) we provide additional visualizations in Appendix B in our updated PDF.\\n\\nAs we approach the end of the discussion window, please let us know if our responses have fully addressed your questions or if additional clarification would be helpful. Thank you for your thoughtful feedback.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer YNDo 1/2\", \"comment\": \"We thank the reviewer for the feedback and constructive suggestions. Our response to the reviewer\\u2019s concerns is below:\\n\\n> Weakness 1 and Question 1: BOOD may be time-consuming\\n\\nWe appreciate this important inquiry regarding computational efficiency and resource utilization. It is worth noting that **all methodologies in the domain of generative data augmentation utilizing diffusion models necessarily involve image generation processes**. Our main research focus is automatically utilizing diffusion models to generate OOD datasets and improving the OOD detection performance, where we proposed two key procedures: (a) an adversarial perturbation strategy to identify the ID features closest to the decision boundaries precisely, and (b) an OOD feature synthesis strategy to generate outlier features which are distributed around the decision boundaries. **Optimizing the generation cost of diffusion models falls outside our research focus.**\\n\\nIn our analysis, we conducted a comparative study of computational efficiency between BOOD and DreamOOD [1], which is also a framework for generating OOD images through diffusion models. 
We specifically focus on four key processes: (1) the building of the latent space, (2) OOD feature synthesis, (3) the OOD image generation and (4) regularization of the OOD detection model. To provide quantitative evidence, we present below a detailed comparison of computational requirements between BOOD and DreamOOD [1] on CIFAR-100:\\n\\n| Computational Cost | Building latent space | OOD features synthesizing | OOD image generation | OOD detection model regularization | Total |\\n|:----------:|:--------:|:-------:|:-------:|:-------:|:-------:|\\n| BOOD | ~0.62h | ~0.1h | ~7.5h | ~8.5h | ~16.72h |\\n| DreamOOD | ~0.61h | ~0.05h | ~7.5h | ~8.5h | ~16.66h |\", \"we_also_summarize_the_memory_requirements_of_bood_and_dreamood_on_cifar_100_as_below\": \"| Memory requirements | OOD features | OOD images | Total | \\n|:--:|:--:|:--:|:--:|\\n| BOOD | ~7.32MB | ~11.7G | ~11.7G | \\n| DreamOOD | ~2.9G | ~11.67G | ~14.57G |\\n\\nOur empirical evaluation reveals that the differences between these approaches are **not statistically significant**. Thus, our proposed framework **is neither time-consuming nor demanding in memory**.\\n\\n> Weakness 2: need to provide the basis for hyperparameter selection.\\n\\nWe present a detailed analysis of hyperparameter sensitivity, with experiments conducted on the CIFAR-100 dataset. 
Based on our systematic investigation, we propose the following basis for hyperparameter selection:\\n\\nFor pruning rate $r$, we recommend implementing **a moderate pruning rate**, as insufficient pruning (small $r$) may **limit the diversity** of generated OOD images (**not enough features**), while excessive pruning (large $r$) risks selecting ID features **proximally distributed to the anchor**.\\n\\n| $r$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 2.5% | 13.45 | 96.84 |\\n| 5% | 12.47 | 97.34 | \\n| 10% | 13.31 | 97.02 |\\n| 20% | 15.88 | 95.68 |\"}", "{\"metareview\": \"This paper proposes generating anomaly samples near the decision boundary using normal samples. The reviewers\\u2019 assessments were mixed, but upon further consideration, I identified a substantial existing body of literature on anomaly detection\\u2014particularly methods targeting samples near the out-of-distribution boundary\\u2014that the authors did not adequately acknowledge. Moreover, the idea of adversarially creating such samples is not new. Given these factors and the reviewers\\u2019 concerns, I am inclined to recommend rejecting this submission.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns about the insufficient justification for using adversarial attacks to perturb sample features, the clarity of experimental details, and some minor weaknesses. Although the authors attempted to address these issues, not all reviewers engaged with their responses. From my perspective, while the authors\\u2019 answers seem convincing in some respects, I remain troubled by the novelty of their contribution. They claim to be the first to pursue this line of work, despite the existence of extensive related literature. 
As a result, I still have significant reservations about the paper\\u2019s originality.\"}", "{\"title\": \"Response to Reviewer YNDo 2/2\", \"comment\": \"Our analyses suggest selecting **a relatively large max iteration number $K$** to ensure comprehensive boundary crossing for most features. While increased iterations do affect computational overhead in boundary identification, **the impact remains manageable**. We present a detailed computational cost and performance analysis across varying $K$ values:\\n\\n| $K$ value | Boundary identification time | FPR95 &#8595; | AUROC &#8593; | \\n|:--:|:--:|:--:|:--:|\\n| 5 | ~9sec | 17.69 | 94.33 | \\n| 50 | ~1.5min | 12.47 | 97.34 | \\n| 100 | ~2.5min | 12.47 | 97.34 | \\n| 200 | ~5min | 12.47 | 97.34 | \\n| 400 | ~10min | 12.47 | 97.34 | \\n\\nFor regularization weight $\\\\beta$: Empirical evidence suggests optimal performance is achieved with **moderate regularization weighting**, as excessive OOD regularization can compromise OOD detection efficiency.\\n\\n| $\\\\beta$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 1.5 | 12.71 | 96.95 |\\n| 2 | 12.78 | 97.15 |\\n| 2.5 | 12.47 | 97.34 | \\n| 3 | 13.10 | 97.02 |\\n\\nFor step size $\\\\alpha$: **A moderate $\\\\alpha$ value** is recommended for boundary features identification and OOD features synthesizing. A large $\\\\alpha$ may lead to **large discrepancy** between each iteration in adversarial attack, thus making the counting of distance to the decision boundaries not accurate. 
\\n\\n| $\\\\alpha$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0.001 | 12.09 | 97.21 |\\n| 0.005 | 12.12 | 97.21 | \\n| 0.015 | 10.67 | 97.42 |\\n| 0.025 | 18.73 | 95.38 |\\n| 0.05 | 19.33 | 95.02 |\\n| 0.1 | 22.57 | 94.79 |\\n\\nFor step size $c$: \\nWe suggest a **moderate $c$ value**, since a large $c$ may force the generated OOD features to **step into other classes' distributions**, and a small $c$ may not guarantee that the OOD features are **adequately distant** from the ID boundary. \\n\\n| $c$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0 | 12.19 | 97.20 |\\n| 1 | 11.98 | 97.32 | \\n| 2 | 10.67 | 97.42 |\\n| 3 | 11.23 | 97.24 |\\n| 4 | 12.65 | 97.02 |\\n| 5 | 13.91 | 96.84 |\\n\\nThe above statistics illustrate that **our suggested basis for hyperparameter selection is effective**. We have included the new analysis in **Appendix C**. Please check our updated submission of the paper, thanks!\\n\\n> Weakness 3 and Question 2: Further comparative studies on different perturbation strategies.\\n\\nOur proposed strategy perturbs the identified ID boundary features through **the direction of gradient ascent**, aiming to synthesize OOD features that **are distributed around the decision boundaries**. To gain a deeper insight into the effectiveness of our strategy, we provide additional ablation studies on the different perturbation strategies, including (1) adding **Gaussian noises** to the identified latent features, (2) **displacing features away from class centroids** and (3) **BOOD**'s perturbation strategy. Below are the results:\\n\\n| Method | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| (1) | 18.99 | 95.04 |\\n| (2) | 40.51 | 91.63 |\\n| BOOD | 10.67 | 97.42 |\\n\\nFrom the statistics above, we can conclude that **our perturbation strategies are solid**. We also include this analysis in the updated version of the submission. We will explore more perturbation strategies in our camera-ready version of the submission. 
\\n\\n> Weakness 4 and Question 3: Descriptions of the images presented are lacking in the main text.\\n\\nThanks, this is a great point. Please check our updated version of the paper, where we include the descriptions of Figures 2, 3, 4 in the main text in **L215, L252, L454 and L461**.\\n\\nWe believe that our responses have **sufficiently addressed your concerns**. If you have further questions, we are pleased to discuss with you. Thank you again for taking the time to read our rebuttal and your constructive feedback!\\n\\n[1] Xuefeng Du, Yiyou Sun, Jerry Zhu and Yixuan Li. \\\"Dream the Impossible: Outlier Imagination with Diffusion Models.\\\" NIPS 2024.\"}", "{\"comment\": \"Thank the authors very much for their efforts in preparing the response. Sensitivity analysis about r and network details are provided, and the performance on Textures dataset is explained. However, it is still unclear why using small step size for minor perturbations and additional perturbation steps can prevent transforming features into other in-distribution classes. It is a little strange that displacing features away from class centroids produces rather poor results. I tend to keep my original score.\"}", "{\"title\": \"Response to Reviewer v1ZR 2/2\", \"comment\": \"> Question 2: performance gap between NPOS and BOOD on OOD dataset Textures using ImageNet-100 as ID dataset.\\n\\nTextures is a dataset containing textural images in the wild, which has a large discrepancy from the training distribution of Stable Diffusion. NPOS [2] is an OOD detection framework that leverages outlier features synthesized from low-likelihood areas in the latent feature space. We did find the performance gap between NPOS [2] and BOOD on Textures while choosing ImageNet-100 as ID dataset. 
Compared to NPOS, we argue that the performance gap between BOOD and NPOS [2] is because **Stable Diffusion lacks the ability to generate images that are located near Textures' distribution** while using ImageNet-100 as ID dataset. **But** **we want to emphasize that**: for the area of generative data augmentation, **the performance of the frameworks is limited by the capability of diffusion models**. In the following table, we select NPOS [2], BOOD and another generative OOD data augmentation framework DreamOOD [1] for comparison. The three methods are all tested on Textures, using ImageNet-100 as ID dataset. \\n\\n| | Textures| | Average | | \\n|:--:|:--:|:--:|:--:|:--:|\\n| Method | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; |\\n| NPOS [2] | 8.98 | 98.13 | 44.00 | 89.04 |\\n| DreamOOD [1] | 53.99 | 85.56 | 38.76 | 92.02 | \\n| BOOD | 51.88 | 85.41 | **35.37** | **92.44** | \\n\\nFrom the table above, we can discover that both DreamOOD [1] and BOOD have performance gaps compared to NPOS [2] on Textures, indicating that the performance of generative data augmentation is **bounded by the capability of the diffusion model**. However, **BOOD shows superior performance in the average OOD detection results, indicating our framework is promising**. This performance gap might be narrowed with stronger diffusion models in future studies. \\n\\n> Weakness 3: architectures of the image encoder and the OOD classification model.\\n\\nThis is a good point regarding reproducibility. To guarantee the fairness between frameworks for comparison, we choose the **same** architectures of the image encoder and OOD classification model as DreamOOD [1]. We summarize the architectures of the models as below:\\n- **Image encoder**. We employed a ResNet-34 architecture for the image encoder for both CIFAR-100 and ImageNet-100. 
Here's the breakdown:\", \"input_layer\": [\"Initial Conv2d: 3\\u219264 channels, 3\\u00d73 kernel, stride=1, padding=1\", \"BatchNorm2d\", \"ReLU\", \"Main Blocks (using BasicBlock structure):\", \"Layer1: 64\\u219264 channels, 3 blocks\", \"Layer2: 64\\u2192128 channels, 4 blocks, stride=2 at first block\", \"Layer3: 128\\u2192256 channels, 6 blocks, stride=2 at first block\", \"Layer4: 256\\u2192512 channels, 3 blocks, stride=2 at first block\"], \"final_layers\": [\"Adaptive Average Pooling to (1,1)\", \"Flatten\", \"Linear transformation: **512\\u2192768** dimensions (the 768-dimensional features are aligned with the class token embeddings $\\\\Gamma(y)$)\"], \"each_basicblock_contains\": \"+ Conv2d (3\\u00d73) \\u2192 BatchNorm2d \\u2192 ReLU\\n + Conv2d (3\\u00d73) \\u2192 BatchNorm2d\\n + Skip connection (with optional 1\\u00d71 conv if dimensions change)\\n + Final ReLU\\n\\n- **OOD classification model**. We also employed a ResNet-34 architecture for the OOD classification model. Most parts of the architecture are the same as the image encoder, except the final layer: the **Linear transformation** changed from 512\\u2192768 to **512\\u2192100** (the number of classes equals 100). \\n\\nWe have included the above architecture explanations in our updated version of the paper, please check it, thanks!\\n\\n> Weakness 4: error in equation\\n\\nWe apologize for the typo. We have corrected this in our updated version of the paper, please check it.\\n\\nWe trust our responses have **adequately resolved your concerns**. We sincerely thank you for reviewing our rebuttal and for your constructive feedback. If any aspects still need clarification, we are happy to discuss them further.\\n\\n[1] Xuefeng Du, Yiyou Sun, Jerry Zhu and Yixuan Li. \\\"Dream the Impossible: Outlier Imagination with Diffusion Models.\\\" NIPS 2024.\\n\\n[2] Leitian Tao, Xuefeng Du, Jerry Zhu, and Yixuan Li. 
\\\"Non-parametric outlier synthesis.\\\" ICLR 2023.\"}", "{\"comment\": \"Dear Reviewer YNDo:\\n\\nIn our previous response, we have addressed your concerns in four aspects: (1) we provide computational cost of BOOD to prove that BOOD is not time consuming or requires large memory, (2) we provide bases for hyperparameter selection including $\\\\alpha, \\\\beta, c, K$ and $r$, (3) we provide comparative studies on different perturbation strategies, and (4) we include the descriptions of Figures 2, 3, 4 in the main text of updated PDF.\\n\\nAs the discussion period ends soon, we just wanted to check if the response clarified your questions or needs further discussions. Thanks again for your constructive feedback.\\n\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"We are glad to hear that our response helped resolved your questions. Thank you again for your time and constructive feedback!\"}", "{\"summary\": \"The paper proposes a new OOD data generation framework that helps the model to more clearly distinguish ID and OOD data by generating OOD samples near the decision boundary. Specifically, this method identifies ID boundary features by minimizing perturbation steps and generates OOD features near the boundary through gradient ascent. 
Experiments on CIFAR-100 and IMAGENET-100 demonstrate the effectiveness of the proposed algorithm.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.BOOD is the first framework capable of explicitly generating OOD data around the decision boundary, thereby providing informative functionality for shaping the decision boundary between ID and OOD data.\\n\\n2.The paper is easy to follow.\\n\\n3.Experimental results on the CIFAR-100 and IMAGENET-100 datasets show that the BOOD method significantly outperforms existing SOTA methods, achieving substantial improvements.\", \"weaknesses\": \"1.BOOD requires calculating the boundary positions of numerous features and generating images through a diffusion model, which may be computationally time-consuming.\\n\\n2.The hyperparameter in the paper is crucial for synthesizing high-quality OOD features, it is recommended to provide the basis for its selection.\\n\\n3.The adversarial perturbation strategy is an important component, it is recommended to provide a comparative analysis with other perturbation strategies to help readers gain a more comprehensive understanding of the experimental setup.\\n\\n4.Descriptions of the images presented are lacking in the main text.\", \"questions\": \"1.List and compare the actual memory requirements of the proposed model.\\n\\n2.Further comparative studies on different perturbation strategies could be added to help understand the impact of each strategy on the quality of generated data, and to validate the performance variations of the BOOD method under different hyperparameters.\\n\\n3.Provide additional descriptions of Figures 2, 3, and 4 in the main text for a more comprehensive evaluation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Fhg5 1/2\", \"comment\": \"We thank the reviewer for the thorough review! 
Our response to the reviewer\\u2019s concerns is below:\\n\\n> Weakness 1: The method seems to be bounded by the capability of the stable diffusion model.\\n\\nWe acknowledge this important limitation regarding domains significantly divergent from the diffusion model's training distribution (e.g., SVHN, Textures, and medical imaging). But this **constraint is inherent to all methodologies utilizing diffusion-based generative data augmentation**. While future developments in generative modeling may address these limitations, **we emphasize that our primary goal is to leverage diffusion models to generate informative OOD images, thus increasing the OOD detection model's performance.** With **two novel strategies**: **(a)** an adversarial perturbation strategy to precisely identify the ID features closest to the decision boundaries, and **(b)** an OOD feature synthesis strategy to generate outlier features distributed around the decision boundaries, we **successfully achieve state-of-the-art results** on CIFAR-100 and ImageNet-100. \\n\\n> Question 1: Necessity of synthesizing OOD data compared to finding publicly available OOD data, and comparison with methods that use real OOD data for augmentation. \\n\\nGreat point! Finding publicly available OOD data as auxiliary OOD detection training data is feasible, but it has two significant drawbacks: (1) it requires **significant labor and time cost** to label and filter an OOD dataset that does not overlap with the ID data; (2) **it is impossible to collect images distributed outside the data distribution boundary, which cannot be captured in the real world**. BOOD addresses these problems in two aspects: (1) BOOD is an **automatic OOD image generation framework**, which significantly decreases the human labor traditionally involved in creating a new OOD dataset. 
(2) BOOD leverages efficient feature perturbation strategies and a diffusion model to generate images **distributed around the decision boundaries**, **eliminating the issue of being unable to collect OOD images that are unreal**. To conclude, it is necessary to synthesize OOD data for OOD detection training. \\n\\nTo understand the effectiveness of BOOD, we provide a comparison of OOD detection results between BOOD and MSP+OE [1], using CIFAR-100 as the ID dataset. Below are the results:\\n\\n| | SVHN | | Places365 | | LSUN | | Textures | | Average | | \\n|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| Method | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; |\\n| MSP + OE [1] | 42.9 | 86.9 | 49.8 | 86.5 | 57.5 | 83.4 | 54.4 | 84.8 | 51.15 | 85.4 | \\n| BOOD | 3.85 | 99.07 | 47.4 | 90.26 | 3.7 | 98.96 | 7.25 | 98.45 | 15.55 | 95.94 | \\n\\nFrom the table above, we can conclude that our method is superior to the method using real OOD data for augmentation. \\n\\n> Weakness 2: The performance improvement on ImageNet-100 is only marginal.\\n\\nWe thank the reviewer for providing valuable suggestions. In our research, we found that the performance improvement on ImageNet-100 is not as significant as the improvement on CIFAR-100. But our proposed method BOOD **still surpasses the state-of-the-art framework** DreamOOD [2] by 3.39% (35.37% vs. 38.76%) in FPR95 and 0.42% in AUROC (92.44% vs. 92.02%). We suppose that the performance improvement gap is due to two reasons: (1) the distribution of ImageNet-100 is not optimal as an ID dataset to build a latent feature space, and (2) the number of OOD images we generate for ImageNet-100 is not enough to fit the size of the dataset. 
We are still investigating the reasons behind this, and we want to express our appreciation to the reviewer for this useful insight.\"}
Outlier samples can be more easily synthesized from these samples compared to other samples.\", \"Extensive experiments are conducted to validate the effectiveness of the proposed method and core technical components such as the sample selection strategy.\", \"The paper is well written with clear structure and smooth logic, making it easy for readers to understand its ideas and algorithms.\"], \"weaknesses\": \"1. The rationale for using adversarial attacks to perturb sample features remains insufficiently justified. Perturbing features to alter their class identities might unintentionally transform them into samples of other in-distribution classes. To address this concern, the authors should provide theoretical or empirical evidence demonstrating that their perturbation method reliably generates features distinct from existing classes. Additionally, a comparison with alternative perturbation strategies would help clarify the unique benefits of the proposed approach.\\n2. The inquiry into the performance of random feature perturbations, such as adding Gaussian noise or displacing features away from class centroids, is highly relevant. To make this critique more actionable, I recommend requesting an ablation study comparing the proposed perturbation method against these simpler alternatives. Such an analysis would provide concrete evidence of the theoretical and empirical advantages of the method.\\n3. The paper lacks sufficient detail on the architectures of the image encoder and the OOD classification model. For replication purposes, it is essential to include specifics such as the number and type of layers, activation functions, and other relevant parameters. A detailed description of these aspects would significantly enhance the reproducibility of the proposed algorithm.\\n4. There is an error in Equation (2), where the denominator should correctly be '\\\\Gamma(y_j)^Tz'. 
While this observation is helpful, I suggest the authors conduct a thorough review of all equations and mathematical notations throughout the manuscript to ensure accuracy and consistency.\", \"questions\": \"1. Clarification is needed regarding the sensitivity of the method to the hyperparameter r. An exploration of this sensitivity, perhaps through a sensitivity analysis, would provide valuable insights into the robustness and reliability of the proposed approach under varying conditions.\\n2. The method performs significantly worse than NPOS on the OOD dataset Textures, as indicated in Table 2. An explanation for this performance discrepancy would be beneficial. The authors could analyze specific characteristics of the Textures dataset or aspects of their method that may contribute to this outcome.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer dKDk\", \"comment\": \"We thank the reviewer for the feedback and constructive suggestions. Our response to the reviewer\\u2019s concerns is below:\\n\\n> Weakness 1: notation conflict between the additional perturbation steps c and the earlier use of C for the number of classes.\\n\\nThank you for pointing out the ambiguous notation. We have changed the notation for the number of classes to $V$. Please check **L104** and **L107** in our updated version of the paper. \\n\\n> Weakness 2: including more comparisons between BOOD and SOTA methods from 2024. \\n\\nWe provide a comparison between BOOD and **a SOTA method**, FodFoM [1], from ACMMM 2024. FodFoM is a framework that utilizes Stable Diffusion to generate outlier images for OOD detection. 
The results are summarized in the table below:\\n\\n| | SVHN | | LSUN-R | | LSUN-C | | iSUN | | Textures | | Places365 | | Average | |\\n|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|\\n| Method | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; |\\n| FodFoM | 33.19 | 94.02 | 28.24 | 95.09 | 26.79 | 95.04 | 33.06 | 94.45 | 35.44 | 93.38 | 42.30 | 90.68 | 33.17 | 93.78 | \\n| BOOD | 5.70 | 98.31 | 0.10 | 99.94 | 1.70 | 99.32 | 0.10 | 99.94 | 4.35 | 98.95 | 41.20 | 90.30 | 8.86 | 97.79 |\\n\\nCompared to FodFoM, **BOOD demonstrates superior performance**, illustrating its competitiveness.\\n\\n> Weakness 3: experimenting with a wider range of hyperparameter values. \\n\\nWe provide an analysis of BOOD's performance with a wider range of hyperparameters below. \\n\\n| $c$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0 | 12.19 | 97.20 |\\n| 1 | 11.98 | 97.32 | \\n| 2 | 10.67 | 97.42 |\\n| 3 | 11.23 | 97.24 |\\n| 4 | 12.65 | 97.02 |\\n| 5 | 13.91 | 96.84 |\\n\\n| $\\\\alpha$ value | FPR95 &#8595; | AUROC &#8593; |\\n|:--:|:--:|:--:|\\n| 0.001 | 12.09 | 97.21 |\\n| 0.005 | 12.12 | 97.21 | \\n| 0.015 | 10.67 | 97.42 |\\n| 0.025 | 18.73 | 95.38 |\\n| 0.05 | 19.33 | 95.02 |\\n| 0.1 | 22.57 | 94.79 |\\n\\nFrom the results, we conclude that **our previous choices are optimal**. Employing a smaller step size $\\\\alpha$ **facilitates nuanced differentiation** between samples across different distances, thus resulting in **precise boundary identification**. 
Choosing a moderate $c$ guarantees that the synthesized OOD features are **adequately distant from the ID boundaries**, and prevents the OOD features from being distributed in the in-distribution area. We have uploaded the analyses in the new submission of our paper. \\n\\n> Weakness 4: additional visualizations to illustrate the improvements. \\n\\nGood point! We provide more visualizations in **Appendix B, Figure 8** of our updated version of the paper. Please check them, thanks!\\n\\n\\nWe're confident that our explanations have **fully addressed your concerns**. We greatly appreciate your time in reviewing our rebuttal and your feedback. Please don't hesitate to raise any lingering concerns for discussion.\\n\\n[1] Jiankang Chen, Ling Deng, Zhiyong Gan, Wei-Shi Zheng and Ruixuan Wang. \\\"FodFoM: Fake Outlier Data by Foundation Models Creates Stronger Visual Out-of-Distribution Detector.\\\" ACMMM 2024.\"}", "{\"comment\": \"Thank the authors for the rebuttal. I am content with the response and remain positive about this work.\"}
0quBGOPP5V
Deep ECG-Report Interaction Framework for Cross-Modal Representation Learning
[ "Jian Chen", "Xiaoru Dong", "Wei Wang", "Shaorui Zhou", "Lequan Yu", "Xiping Hu" ]
Electrocardiogram (ECG) is of great importance for the clinical diagnosis of cardiac conditions. Although existing self-supervised learning methods have obtained great performance on learning representation for ECG-based cardiac conditions classification, the clinical semantics can not be effectively captured. To overcome this limitation, we proposed a $\textbf{D}$eep $\textbf{E}$CG-$\textbf{R}$eport $\textbf{I}$nteraction ($\textbf{DERI}$) framework to learn cross-modal representations that contain more clinical semantics. Specifically, we design a novel framework combining multiple alignments and feature reconstructions to learn effective cross-modal representation of the ECG-Report, which fuses the clinical semantics of the report into the learned representation. An RME-module inspired by masked modeling is proposed to improve the ECG representation learning. Furthermore, we extend ECG representation learning with a language model to report generation, which is significant for evaluating clinical semantics in the learned representations and even clinical applications. Comprehensive experiments on various datasets with various experimental settings show the superior performance of our proposed DERI.
[ "Multi-modal Representation Learning", "ECG signal", "Report Generation", "Zero-shot Classification" ]
https://openreview.net/pdf?id=0quBGOPP5V
https://openreview.net/forum?id=0quBGOPP5V
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wcxu6AYYm2", "o1VtN6SJaC", "nWgb7DelQw", "kvVxirs415", "jOQMaiLMW5", "hdV0OsL11C", "dxyEIKAaKM", "bXQe6RXocJ", "JjAN6IIZHh", "Hxd1eaZeb9", "GI5vtrja1C", "GEyCyfQmz2", "A09DhpFnft", "1ueJvbkDV0", "169f04DvY3" ], "note_type": [ "official_comment", "comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731835159339, 1732603082259, 1731835993238, 1730625328455, 1731835260707, 1730547048105, 1730046177839, 1731835593158, 1731835755277, 1731835221879, 1730633139417, 1731835430450, 1731835345825, 1731835533979, 1731835838765 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Reviewer_pYTw" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Reviewer_pgv7" ], [ "ICLR.cc/2025/Conference/Submission11016/Reviewer_5ABa" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Reviewer_PSAx" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ], [ "ICLR.cc/2025/Conference/Submission11016/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer PSAx (1)\", \"comment\": \"We appreciate your review and the positive comments regarding our paper. 
We would like to respond to your comments as follows.\\n\\n**Q1.1**: The paper needs rephrasing to improve clarity and readability, especially in the methodology section.\\n\\n**A1.1**: We will improve the clarity and readability of our paper, check and correct all grammatical errors, and improve where the description is vague. We use the expression \\\"provide clinical semantics visually\\\" because clinical reporting information can directly provide clinical semantics for ECG interpretation. \\\"Visually\\\" is used to convey the meaning of \\\"directly\\\", and we will revise it in our revised manuscript. \\n\\nDetails and references for the text encoder and masking/reconstruction will also be provided. Specifically, for the encoders used for ECG and reports, we adopt a randomly initialized 1D ResNet18 and the pre-trained MedCPT Query Encoder. The MedCPT Query Encoder can generate embeddings of biomedical texts for semantic search (dense retrieval), which is suitable for our work. For cross-modal mutual reconstruction, we introduce two transformer decoders to reconstruct the representation of one modality from the other modality (we use the ECG embedding to reconstruct the report embedding and the report embedding to generate the ECG embedding). Detailed parameter settings for the decoder are provided in Section 4.2, lines 312-314. For random masking in the RME-module, given a representation $x \\\\in R^{b,n,c}$, we first generate a random Gaussian noise $m \\\\in R^{b,n}$. The noise values of each sample are sorted in ascending order to obtain an index for each element. We choose the $k$ (determined by the masking ratio) element indices with the highest noise and the lowest noise as the masked feature indices, respectively, to ensure that the two masked representations are different. \\n\\nThe loss functions provided in Equations 1 to 5 of our method adopt the CLIP loss (I), which has been widely used in multi-modal contrastive learning. 
These equations are used to clearly illustrate how we conduct cross-modal representation alignment.\\n\\nFor Page 4, Sec. 3.2, cross-modal decoders are used to reconstruct the representation of each modality with the other modality; in other words, we reconstruct the ECG encoding and the text encoding as shown in Figure 1. Linear projectors $P_{e}$ and $P_{t}$ are used to map them into an alignment space of the same dimension for obtaining the mixed-modal representation. \\n\\nPage 7 lines (360-362) are rephrased as \\u201cThe experimental results of linear probing are provided in Table 1. The \\u2018Random Init\\u2019 in Table 1 represents using the model structure of our proposed DERI to obtain the ECG-specific mix encoding without pre-training. The model is trained on the downstream dataset in a fully supervised manner for ECG classification.\\u201d\\n\\nWe also improve the captions of figures and tables in our paper for better clarity.\\n\\n**Q1.2**: The training approach does not apply to unlabeled ECG in the usual context and necessitates the availability of accurate diagnostic reports by a cardiologist.\\n\\n**A1.2**: Diagnostic reports are used in the pre-training stage of our method. We propose a novel framework containing ECG-Report Multiple Alignment and Cross-Modal Mutual Reconstruction to enable the model to learn a more effective representation of the ECG signal. The reports provide a clinical description of the ECG signal, which contains direct high-level semantics. After pre-training, our model can effectively learn representations of ECG signals that contain high-level semantic insights without diagnostic reports and labels. This is also why DERI has obtained significant improvement over the baselines on public datasets for ECG classification. 
Experiments on report generation also show that our DERI can learn more high-level semantic information than MERL, which only aligns ECG features with report features in a relatively shallow way.\\n\\n**Q1.3**: Does incorporating textual reports in pre-training have an element of supervision?\\n\\n**A1.3**: We believe that in a sense the inclusion of text reports in pre-training does have some element of supervision. The text reports provide additional semantic information to the ECG signal. The clinical information contained in each report actually guides the ECG signal as it represents the professional interpretation of the physician. This means that the model captures specific semantics by aligning the ECG signals with the corresponding reports, and this alignment process is equivalent to some form of supervised learning as the textual content helps the model to identify key features in the signals. However, the doctor's diagnostic report is not exactly equivalent to the category labeling of the ECG signal but is more of an interpretation of the ECG waveform. However specific diseases tend to have multiple waveform states and different diseases can have the same waveform. Therefore we do not consider the introduction of diagnostic reports for representation learning to be supervised learning.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Paper withdrawn during the rebuttal period upon request by author(s).\"}", "{\"title\": \"Response to Reviewer 5ABa (3)\", \"comment\": \"**Q4.6**: About Med-CPT encoder for report generation.\\n\\n**A4.6**: Actually, our method is designed with GPT-2 as the text decoder for report generation. We tried to adopt Med-CPT as one of the decoder baselines since it was trained with more biomedical information. The model has more medically relevant corpus information, which may be beneficial for understanding medical reports. 
Since it is an encoder-only model, we adopt the method of R2Gen (Ref. [A] in EMNLP 2020) and HERGen (Ref. [B]) to conduct report generation, which actually just inherits the Med-CPT corpus vocabulary and tokenizer and then retrains the transformer model on our MIMIC-ECG dataset. The specific process can be found in our code in the supplementary materials or in the GitHub projects of Ref. [A] and Ref. [B]. We don\\u2019t directly use the encoder-only model to conduct report-generation tasks.\\n\\n**Q4.7**: About the prompt embedding.\\n\\n**A4.7**: Given the prompt text of each category, we use the pre-trained **Med-CPT encoder** to obtain the prompt embedding, which is further fine-tuned during the pre-training of our DERI. Med-CPT is an encoder-only architecture, which generates embeddings of biomedical texts that can be used for semantic search (dense retrieval). It has been pre-trained on a biomedical corpus, which makes it suitable for our task.\\n\\n**Q4.8**: About the classification probability and predicted label.\\n\\n**A4.8**: As we clearly illustrated in our paper (lines 274-275 on page 6), we obtain the prompt embedding of each category and the embeddings of the original report and generated report respectively with the trained **text encoder**, rather than using a text decoder. For the CE of report generation, with the help of the text encoder, we can obtain the embeddings, calculate the similarity between them, and then determine the classification results, as in **A4.5**. For zero-shot classification, since the real label of ECG signals in the downstream dataset is provided, we feed the corresponding prompt of the target category of the label to the text encoder to obtain the prompt embedding. 
Then, after our DERI learns the representation of the ECG signals, we calculate the similarity between the learned representation and the target prompt embedding and then apply a sigmoid function to the similarity to obtain the predicted probability of each target category. Finally, we conduct an optimal classification threshold search with the help of a precision-recall curve. Categories with a higher probability than the threshold will be regarded as predicted labels.\\n\\n**Q4.9**: Handling multiple classes in ECGs.\\n\\n**A4.9**: For multi-label ECG signal classification in PTB-XL, our zero-shot classification does not only consider the category with the highest probability, as we illustrate in A4.5. For multi-label ECG classification, our DERI uses the learned representations of ECG signals to calculate the similarity with the prompt embeddings of the targeted categories. After conducting an optimal classification threshold search, all categories above this threshold are considered to be predicted, so our method can effectively solve the problem of multi-label ECG zero-shot classification.\\n\\n**Q4.10**: Discrepancy between prompts and generated reports.\\n\\n**A4.10**: Since there is no corresponding label for the reports in the MIMIC-ECG dataset, we simply regard report classification as a single-class classification, unlike ECG zero-shot classification. On the one hand, without the help of a cardiologist, it is hard to label the reports with multiple conditions. In this case, we choose the category with the highest similarity as the label to be closer to the corresponding category of the report, thus improving the accuracy of the report classification. 
On the other hand, the classification of reports is not the core innovation of our approach; the CE calculation on the generated reports is to better evaluate whether our approach achieves the in-depth ECG-report modal interactions we claim, and thus learns more of the advanced clinical semantic information contained in the reports. The experimental results of our comparison with MERL also prove the effectiveness of our method.\\nMoreover, to better evaluate whether our method can deal with multi-label reports, we conduct multi-label report classification experiments on the PTB-XL dataset. We regard the multiple labels of the ECG signals as the target labels and use them to obtain the targeted prompt embeddings. Then we also calculate the similarity with the prompt embeddings of the targeted categories and conduct an optimal classification threshold search, as we do for multi-label ECG zero-shot classification. The experimental results are shown in A4.5.\\n\\nWe hope our response has addressed all concerns. We would greatly appreciate any further constructive comments or discussions.\"}", "{\"summary\": \"The study introduces DERI (Deep ECG-Report Interaction), a framework designed for cross-modal representation learning from ECG signals and clinical reports to improve the clinical relevance of ECG representations. Traditional ECG self-supervised methods focus on single-modal generative or contrastive learning but fail to capture the deep clinical semantics. DERI addresses these limitations by integrating ECG and report data more deeply.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Deep Cross-Modal Interaction: Unlike previous methods that use shallow alignment between ECG and report features, DERI implements multiple alignments and feature reconstructions, fostering deeper interactions between ECG signals and report data. 
This approach strengthens the representation's ability to capture the clinical context, enhancing both accuracy and relevance for diagnosis.\", \"Potential for Broader Clinical Integration: DERI\\u2019s architecture, designed to integrate additional data types like electronic medical records (EMRs), positions it well for broader application in clinical settings. This flexibility could make DERI a powerful tool for multi-modal clinical analysis in the future.\"], \"weaknesses\": \"In contrast to other modalities such as chest X-rays and pathological diagnoses, electrocardiogram reports have been mainly produced mechanically by diagnostic equipment for many years. Therefore, this study is more likely to be learning of waveform data and its correct labels, rather than two-modal learning of waveform data and its interpretation using natural language.\\nThe scope of this study may be narrower than the general interest of the ICLR main conference.\", \"questions\": \"Is it possible to disclose what the electrocardiogram reports used as training data in this study are specifically like? Was it verified how diverse the content of these reports is in terms of natural language?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer PSAx (3)\", \"comment\": \"**Q1.7**: Does the alignment loss in Equation 1 accommodate the situation where multiple ECGs have similar text reports?\\n\\n**A1.7**: The alignment loss in Eq. 1 regards each ECG signal with its corresponding report as a positive sample pair, while multiple ECGs that have similar text reports will be regarded as negative pairs. 
This is because, although some ECG signals have partially similar reports, the reports as a whole may represent different clinical presentations; treating these signals as positive pairs would constrain the learned representation to the overlapping report elements.\\n\\n**Q1.8**: How does random masking differ from random dropout?\\n\\n**A1.8**: In random masking, specific parts of the input signal embedding are masked. Specifically, after the ResNet encoder, the ECG embedding contains local features ($R^{b \\times n \\times c}$). We conduct random masking along the dimension $n$ to mask several local features and then use the RME-module to enable the model to learn an effective global feature from fewer local features. Masking encourages the model to learn context-aware representations, focusing on understanding relationships within the input. Random dropout removes (i.e., zeroes out) a fraction of the neurons (units) or edges in a network layer during training, but it does not apply this to the input itself; in dropout, neurons are dropped independently at each forward pass. The RME-module designed in our DERI is used to enhance the global feature of the ECG signals, so we adopt random masking rather than random dropout. Our experimental results in the ablation study support the effect of random masking.\\n\\n**Q1.9**: Is the performance for other approaches evaluated by the author or the original work since most models require a hyperparameter optimization for best performance?\\n\\n**A1.9**: The performance results for other approaches are taken from the existing work \\\"Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement (MERL)\\\". However, for report generation, we adopt the same setting for both MERL and our DERI for a fair comparison.\\n\\nWe hope our response has addressed all concerns.
We would greatly appreciate any further constructive comments or discussions.\"}", "{\"summary\": \"The paper proposes the Deep ECG-Report Interaction (DERI) framework, a novel method for cross-modal representation learning that combines ECG with clinical reports. This paper introduces cross-modal alignment for representation learning and an RME module for enhanced ECG learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is well-organized and generally easy to follow. The flow from the motivation behind the DERI framework to the detailed explanation of its architecture, followed by experiments and results, is logical and well-structured. The diagrams, particularly those illustrating the DERI architecture and its training process, are helpful in understanding the complex cross-modal interactions.\\n\\nThe technical descriptions, such as the use of multiple alignments, the RME module, and the integration of language models for report generation, are well-explained.\", \"weaknesses\": \"The paper shows a lack of understanding of related work, with many previous related articles not cited or discussed. The following articles [1-4] need to be added and discussed in the paper.\\n\\nSeveral methods from the referenced articles need to be used as baselines and compared in the experimental section, especially for the ECG report generation part.\\n\\nThere are many similarities between this paper and the article \\\"Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement (MERL),\\\" with a lack of innovation.\\n\\nFor instance, two of the losses used in the paper are almost identical to the ones used in MERL (CLIP loss and mask loss); the paper merely describes them in a different way. The report generation method is also quite similar to many multimodal approaches, such as the BLIP method, and does not represent true innovation.
Moreover, these papers have not been cited.\\n\\nThe downstream task system is also very similar to MERL, except for report generation. However, many report generation baselines are missing from this paper.\\n\\n[1] Wan, Zhongwei, et al. \\\"Electrocardiogram instruction tuning for report generation.\\\" arXiv preprint arXiv:2403.04945 (2024).\\n\\n[2] Li, Jun, et al. \\\"Frozen language model helps ecg zero-shot learning.\\\" Medical Imaging with Deep Learning. PMLR, 2024.\\n\\n[3] Yu, Han, Peikun Guo, and Akane Sano. \\\"Zero-shot ECG diagnosis with large language models and retrieval-augmented generation.\\\" Machine Learning for Health (ML4H). PMLR, 2023.\\n\\n[4] Zhao, Yubao, et al. \\\"ECG-Chat: A Large ECG-Language Model for Cardiac Disease Diagnosis.\\\" arXiv preprint arXiv:2408.08849 (2024).\", \"questions\": \"Please supplement the references and baseline methods [1-4] in the experiments, and fully discuss and compare their advantages, disadvantages, and innovations in the paper.\\n\\nPlease explain the parts that are overly similar to MERL and highlight the points of technological innovation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes the Deep ECG-Report Interaction (DERI) framework to address the lack of clinical semantics in ECG representation learning.\\nBy integrating ECG signals with clinical reports using multi-level alignment strategies, DERI enhances cross-modal learning. It also incorporates a language model for ECG report generation, demonstrating superior performance across multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method aligns multi-level features using contrastive loss, while also performing cross-modal reconstruction by using ECG signals to reconstruct text and text to reconstruct ECG.
This approach enables learning mutual information between the two modalities. The framework is evaluated across multiple datasets and methods for comprehensive assessment. However, the compared baseline methods are limited, and the evaluation metrics are ambiguous.\", \"weaknesses\": [\"The novelty is limited. In Section 3.2, the work uses cross-modal alignment with the original contrastive loss. Section 3.3's approach for cross-modal reconstruction is similar to what was proposed in MRM [1], even though MRM was originally for the image domain. The method in this work closely resembles MRM. Moreover, MRM uses features extracted from masked inputs to reconstruct the other modality, while DERI uses features extracted directly from the original inputs, which reduces the difficulty of the reconstruction task.\", \"There is a lack of baseline and comparison in the report generation task. In MEIT [2], a comprehensive benchmark for ECG report generation is proposed and implemented on the MIMIC-ECG and PTB-XL datasets, both of which are also used in DERI. However, the authors do not compare DERI against any baseline from MEIT and only use GPT-2 as the text decoder, which is outdated, having been released in 2019.\", \"The reproducibility issue is further compounded by the authors' apparent reluctance to share their code.\", \"The report generation task is not implemented on PTB-XL. Since MIMIC-ECG is used for pretraining, evaluating solely on MIMIC-ECG does not sufficiently assess generalizability and robustness, as all the data is seen during pretraining.\", \"The evaluation metric for ECG report generation is lacking. The evaluation metric for clinical efficacy is ambiguous.\", \"[1] Zhou, Hong-Yu, et al. \\\"Advancing Radiograph Representation Learning with Masked Record Modeling.\\\" The Eleventh International Conference on Learning Representations.\", \"[2] Wan, Zhongwei, et al. 
\\\"MEIT: Multi-Modal Electrocardiogram Instruction Tuning on Large Language Models for Report Generation.\\\" arXiv preprint arXiv:2403.04945 (2024).\"], \"questions\": \"- In Appendix Section D, Table 12, the authors include various text models for report generation, including encoder-only models like MedCPT (desgined for text retrieval task). Using encoder-only models for report generation is questionable, as it conflicts with the mainstream approach seen in works such as MEIT [1] and RGRG [2], which typically utilize encoder-decoder or decoder-only models for text generation tasks. Encoder-only models are not designed for generative tasks like report generation, so their inclusion deviates from standard practices.\\n \\n- Regarding the computation of clinical efficacy in Section 4.2, several aspects need clarification: \\n(1) **How is the prompt embedding obtained from the decoder?** If the decoder is used to obtain prompt embeddings, is it based on the representation from the [EOS] token? Clarification is needed on how the embedding is extracted from a decoder-only architecture.\\n(2) **How is the classification probability for categories computed using a text decoder?** Does this refer to the highest probability assigned to the category name token? Some diseases (e.g., \\\"myocardial infarctions\\\") are tokenized into multiple tokens. If this is the case, how is the classification probability determined for multi-token categories?\\n(3) **Handling multiple classes in ECGs**: The PTB-XL dataset shows that ECGs can belong to multiple classes simultaneously. If the authors use only the highest probability for classification, they may be reducing the prediction to a single class, which ignores other relevant conditions. 
Why are additional classes not considered in the evaluation?\\n(4) **Discrepancy between prompts and generated reports**: The method uses a single-class description as the prompt for classification, whereas the generated report may describe multiple conditions associated with the ECG signal. There is a clear gap between the prompt (single class) and the report (multi-class). How is this gap addressed in the evaluation process?\\n\\n[1] Wan, Zhongwei, et al. \\\"MEIT: Multi-Modal Electrocardiogram Instruction Tuning on Large Language Models for Report Generation.\\\" arXiv preprint arXiv:2403.04945 (2024).\\n\\n[2] Tanida, Tim, et al. \\\"Interactive and explainable region-guided radiology report generation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pgv7 (3)\", \"comment\": \"**Continuing from A3.3**:\\nc. The RME-module with random masking designed in our DERI is used to enhance the global feature of the ECG signals. Compared with the random dropout used in MERL, which removes a fraction of the neurons (units) or edges in a network layer during training, our RME-module uses two different random masks to mask part of the local representation of the ECG and introduces an attention mechanism to extract global features from the masked ECG representation. By aligning these two independently masked global representations, our model can more efficiently extract the global features of the ECG for further cross-modal alignment and reconstruction.
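As an illustration of this two-mask idea, here is a minimal NumPy sketch: two independent random masks over the local ECG features, a simple dot-product attention pooling standing in for the RME-module's attention mechanism, and a cosine alignment between the two masked global views. All shapes, the pooling form, and the learnable query are illustrative assumptions, not the actual DERI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(local_feats, mask_ratio, rng):
    """Zero out a random subset of the n local features (unlike dropout, which drops neurons)."""
    b, n, c = local_feats.shape
    keep = rng.random((b, n, 1)) > mask_ratio  # independent mask per sample and position
    return local_feats * keep

def attention_pool(local_feats, query):
    """Aggregate (possibly masked) local features into one global feature per sample.
    Masked positions contribute zero features to the weighted sum."""
    scores = local_feats @ query                              # (b, n)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)             # softmax over positions
    return (weights[..., None] * local_feats).sum(axis=1)     # (b, c)

def cosine_alignment(g1, g2):
    """Mean cosine similarity between the two masked global views."""
    num = (g1 * g2).sum(axis=1)
    den = np.linalg.norm(g1, axis=1) * np.linalg.norm(g2, axis=1)
    return (num / den).mean()

# hypothetical ECG local features from the encoder: batch b=4, n=16 positions, c=8 channels
ecg_local = rng.normal(size=(4, 16, 8))
query = rng.normal(size=8)  # stand-in for a learnable attention query

# two independent random masks -> two masked global representations
g1 = attention_pool(random_mask(ecg_local, 0.3, rng), query)
g2 = attention_pool(random_mask(ecg_local, 0.3, rng), query)
alignment = cosine_alignment(g1, g2)
```

In training, a loss such as `1 - alignment` would encourage the two masked views to agree, pushing the pooled global feature to capture structure that survives either mask.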
\\n\\nComprehensive experiments on various downstream datasets are conducted to evaluate the proposed DERI method, and the experimental results have verified its strong performance.\\n\\n**Q3.4**: For instance, two of the losses used in the paper are almost identical to the ones used in MERL (CLIP loss and mask loss); the paper merely describes them in a different way. The report generation method is also quite similar to many multimodal approaches, such as the BLIP method, and does not represent true innovation. Moreover, these papers have not been cited.\\n\\n**A3.4**: On the one hand, the CLIP loss has shown strong performance in multi-modal learning, as in Ref. [A], Ref. [B], and Ref. [C]. It is widely used for cross-modal representation learning, which is why we use this loss in our DERI. On the other hand, the main innovations of our method, summarized in A3.3, do not lie in the loss function. BLIP also provides an effective method to generate text based on visual encoding, and this paradigm is widely used in clinical report generation, such as R2Gen [D] and HERGen [E]; we have cited these two works in our article.\\n\\n[A] Hafner M, Katsantoni M, K\\u00f6ster T, et al. CLIP and complementary methods[J]. Nature Reviews Methods Primers, 2021, 1(1): 1-23.\\n\\n[B] Zhang, R., Guo, Z., Zhang, W., Li, K., Miao, X., Cui, B., ... & Li, H. (2022). Pointclip: Point cloud understanding by clip. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8552-8562).\\n\\n[C] Fan, L., Krishnan, D., Isola, P., Katabi, D., & Tian, Y. (2024). Improving clip training with language rewrites. Advances in Neural Information Processing Systems, 36.\\n\\n[D] Chen, Z., Song, Y., Chang, T. H., & Wan, X. (2020). Generating radiology reports via memory-driven transformer. arXiv preprint arXiv:2010.16056.\\n\\n[E] Wang, F., Du, S., & Yu, L. (2024).
HERGen: Elevating Radiology Report Generation with Longitudinal Data. arXiv preprint arXiv:2407.15158.\\n\\nWe hope our response has addressed all concerns. We would greatly appreciate any further constructive comments or discussions.\"}", "{\"title\": \"Response to Reviewer 5ABa (1)\", \"comment\": \"**Q4.1**: About the novelty and discussion with MRM.\\n\\n**A4.1**: We highlight our main innovations in A3.3 (rebuttal to Reviewer 3). In addition, MRM is proposed to learn knowledge-enhanced semantic representations of radiographs with a multi-task scheme including radiograph self-completion and report completion. Although MRM introduces image features into the report completion task, it is still essentially a masked report generation task: the image features are injected only to assist the mask prediction of reports. It cannot be regarded as a cross-modal reconstruction task because it mainly completes the report from the masked text representation. Our DERI architecture instead aims to reconstruct each modality's representation from the other, based on the ECG representation and the report representation respectively. This process of cross-modal mutual reconstruction in our DERI is designed to guide the model towards deeper modal interactions, allowing the ECG representation to effectively learn the high-level semantics in the report. In terms of motivation and implementation design, our approach is therefore very different from MRM, not similar to it. On the other hand, although MRM is a masked reconstruction task in which the original representation of a modality is partially obscured, its reconstruction is performed using the representation of the same modality, so this does not mean that its reconstruction process is more difficult than ours. Our method performs cross-modal reconstruction, where the model must overcome the distribution gap between different modalities, which is often harder than reconstructing the original modality.
To solve this problem, we place cross-modal reconstruction after the cross-modal alignment phase to achieve better reconstruction. This effective combination of multimodal alignment and reconstruction helps us dig deeper into the relationship between ECG and report to learn effective representations with more clinical semantics.\\n\\n**Q4.2**: A lack of baseline and comparison in the report generation task, comparison with MEIT, and discussion about GPT-2.\\n\\n**A4.2**: We conducted more experiments on the report generation task, including a comparison with MEIT, and the results are shown in the table below.\\n\\n| | Method | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Our Framework | MERL-DisGPT2 | 48.9 | 44.4 | 41.2 | 37.4 | 55.4 |\\n| | DERI-DisGPT2 | **58.6** |**54.8** | **51.9** |**48.6** | **64.2** |\\n| | DERI-Align-DisGPT2 | 54.1 | 49.8 | 46.6 | 43.2 | 60.1 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [A] | GPT2-Medium | 32.9 | 27.8 | 25.4 | 23.2 | 39.1 |\\n| | GPT2-Large | 43.7 | 39.5 | 35.5 | 32.0 | 48.1 |\\n| | GPT-Neo | 47.4 | 44.9 | 39.8 | 37.3 | 48.6 |\\n| | GPT-NeoX | 46.9 | 45.3 | 41.7 | 39.9 | 55.3 |\\n| | GPT-J | 48.5 | 45.2 | 42.8 | 40.5 | 55.0 |\\n| | BLOOM | 49.1 | 46.2 | 42.7 | 41.5 | 58.0 |\\n| | OPT | 50.2 | 47.7 | 43.1 | 41.8 | 56.8 |\\n| | LLaMA-1 | 51.4 | 48.5 | 46.5 | 43.0 | 58.8 |\\n| | Mistral | 48.6 | 47.5 | 44.6 | 42.1 | 59.1 |\\n| | LLaMA-2\\u2020 | 51.5 | 48.4 | 46.9 | 43.9 | 59.4 |\\n| | Mistral-Instruct\\u2020 | 50.1 | 48.1 | 45.7 | 42.5 | 59.2 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [B] | PTB-XL | 6.5 | - | - | 0.9 | 25.6 |\\n| | ECG-Chat | 15.9 | - | - | 2.3 | 23.9 |\\n| | ECG-Chat-DDP | 32.3 | - | - | 11.2 | 29.9 |\\n\\nAlthough GPT-2 was proposed in 2019, it is still widely used in clinical report generation, as we discussed
in our paper. Ref. [A] (2023) and Ref. [B] (2024) have verified the strong performance of GPT-2 on medical report generation. In addition, compared to LLMs such as LLaMA, GPT-2 requires much fewer computing resources. Our experimental results on PTB-XL against new LLM-based baselines show that our method with GPT-2 obtains the best performance. Therefore, using GPT-2 as the text decoder is not outdated.\"}", "{\"title\": \"Response to Reviewer PSAx (2)\", \"comment\": \"**Q1.4**: Does the pre-training necessitate diagnostic reports for ECGs or can it also utilize ECGs when the reports are not available?\\n\\n**A1.4**: Our method is proposed to conduct deep ECG-report interaction, which learns representations with more effective cardiac information. Therefore, diagnostic reports are necessary for pre-training: without reports, our pre-training method cannot conduct ECG-Report Multiple Alignment or Cross-Modal Mutual Reconstruction. However, after pre-training, our method can utilize ECGs alone in downstream tasks, including classification and report generation.\\n\\n**Q1.5**: Are the reports automatically generated or written by cardiologists?\\n\\n**A1.5**: For the MIMIC-ECG dataset used for pre-training, the reports were written by cardiologists according to Ref. [1]. Moreover, we conduct more experiments on ECG report generation on the PTB-XL dataset, which is not used in pre-training. For the PTB-XL dataset, the reports were generated by a cardiologist or automatically interpreted by the ECG device according to Ref. [2].
Experimental results are shown in the table below:\\n\\n| | Method | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Our Framework | MERL-DisGPT2 | 48.9 | 44.4 | 41.2 | 37.4 | 55.4 |\\n| | DERI-DisGPT2 | **58.6** |**54.8** | **51.9** |**48.6** | **64.2** |\\n| | DERI-Align-DisGPT2 | 54.1 | 49.8 | 46.6 | 43.2 | 60.1 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [1] | GPT2-Medium | 32.9 | 27.8 | 25.4 | 23.2 | 39.1 |\\n| | GPT2-Large | 43.7 | 39.5 | 35.5 | 32.0 | 48.1 |\\n| | GPT-Neo | 47.4 | 44.9 | 39.8 | 37.3 | 48.6 |\\n| | GPT-NeoX | 46.9 | 45.3 | 41.7 | 39.9 | 55.3 |\\n| | GPT-J | 48.5 | 45.2 | 42.8 | 40.5 | 55.0 |\\n| | BLOOM | 49.1 | 46.2 | 42.7 | 41.5 | 58.0 |\\n| | OPT | 50.2 | 47.7 | 43.1 | 41.8 | 56.8 |\\n| | LLaMA-1 | 51.4 | 48.5 | 46.5 | 43.0 | 58.8 |\\n| | Mistral | 48.6 | 47.5 | 44.6 | 42.1 | 59.1 |\\n| | LLaMA-2\\u2020 | 51.5 | 48.4 | 46.9 | 43.9 | 59.4 |\\n| | Mistral-Instruct\\u2020 | 50.1 | 48.1 | 45.7 | 42.5 | 59.2 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [4] | PTB-XL | 6.5 | - | - | 0.9 | 25.6 |\\n| | ECG-Chat | 15.9 | - | - | 2.3 | 23.9 |\\n| | ECG-Chat-DDP | 32.3 | - | - | 11.2 | 29.9 |\\n\\n[1] Gow, B., Pollard, T., Nathanson, L. A., Johnson, A., Moody, B., Fernandes, C., Greenbaum, N., Waks, J. W., Eslami, P., Carbonati, T., Chaudhari, A., Herbst, E., Moukheiber, D., Berkowitz, S., Mark, R., & Horng, S. (2023). MIMIC-IV-ECG: Diagnostic Electrocardiogram Matched Subset (version 1.0). PhysioNet. https://doi.org/10.13026/4nqg-sb35\\n\\n[2] Wagner, P., Strodthoff, N., Bousseljot, R., Samek, W., & Schaeffter, T. (2020). PTB-XL, a large publicly available electrocardiography dataset (version 1.0.1). PhysioNet. 
https://doi.org/10.13026/x4td-x982.\\n\\n**Q1.6**: The strength of the self-supervised pretraining approach is learning general features while incorporating textual reports limits the features to the scope of the reports and thus might limit the capabilities for future tasks outside the information provided in the reports. Can the author demonstrate that the learned features are not limited by the scope and bias of the reports?\\n\\n**A1.6**: We acknowledge that incorporating textual reports into pre-training could, in principle, limit the learned features to the scope and biases of the reports, which might hurt performance on tasks outside that scope. Therefore, we conduct distribution shift experiments on three other datasets without reports, which effectively validate the model's ability to learn representations across different data domains, as shown in Table 2 of our article. These datasets come from different healthcare organizations than the pre-training dataset and can effectively reveal whether the model has developed a preference for certain report content or specific data sources. The stable cross-domain performance of DERI in these experiments suggests that the model has good generalization capability and is not limited to the information in the reports.\"}", "{\"summary\": \"The novel DERI approach enhances cross-modal representation learning by incorporating clinical report generation. The work extends the MERL approach [1], integrating multiple alignments and a report generative approach with a novel latent random masking module (RME). The novelty of the approach lies in not only aligning the ECG and report features but also decoding cross-modal features.
The author demonstrated a performance improvement compared to other SOTA approaches, verified through supervised tasks on unseen data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The work is a natural extension of MERL[1], with more accurate zero-shot classification and the possibility of automatic report generation. The zero-shot classification performance shows significant improvement from [1]. The cross-modal decoders allow the additional capability of automatic report generation utilizing GPT-2 architecture.\", \"weaknesses\": \"The paper needs rephrasing to improve clarity and readability, especially in the methodology section. The training approach is not applicable to unlabelled ECG in the usual context and necessitates the availability of accurate diagnostic reports by a cardiologist. The performance is related to the quality, distribution, and context of these reports and may not extend to novel tasks outside the scope of diagnostic reports.\", \"questions\": \"Methodology\\n\\nDoes incorporating textual reports in pre-training have an element of supervision?\\n\\nDoes the pre-training necessitate diagnostic reports for ECGs or can it also utilize ECGs when the reports are not available?\\n\\nAre the reports automatically generated or written by cardiologists?\\n\\nThe strength of the self-supervised pretraining approach is learning general features while incorporating textual reports limits the features to the scope of the reports and thus might limit the capabilities for future tasks outside the information provided in the reports. 
Can the author demonstrate that the learned features are not limited by the scope and bias of the reports?\\n\\nDoes the alignment loss in Equation 1 accommodate the situation where multiple ECGs have similar text reports?\\n\\nHow does random masking differ from random dropout?\\n\\nIs the performance for other approaches evaluated by the author or the original work since most models require a hyperparameter optimization for best performance?\\n\\nGeneral comments\", \"page_1_line_29\": \"rephrase \\u201cclinical cardiac conditions classification\\u201d.\\n\\nPage 2 lines (61-72): \\u201cSpecifically, the ECG signal and \\u2026. as follows.\\u201d Please rephrase.\", \"page_2_line_78\": \"What is meant by \\u201cwhich can provide clinical semantics visually\\u201d?\", \"page_2_line_91\": \"\\u201ctemporal and spatial correlation ship of ECG signals\\u201d. to \\u201ctemporal and spatial correlations of ECG signals\\u201d.\", \"page_2_line_140\": \"Details and references for the text encoder and masking/reconstruction are missing in the methodology section.\\n\\nPage 4 line (164-194): Please provide references if equations 1 to 5 are derived from existing literature and indicate where there are novel concepts.\\n\\nPage 4 line (202-207): \\u201cWe introduced\\u201d to \\u201cWe introduce\\u201d\\n\\nPage 4 line (202-206): \\u201cConsidering that the textual modality \\u2026 order to provide more textual features\\u201d. Please rephrase to improve clarity and avoid very long sentences.\\n\\nPage 4 lines (206-208): \\u201cAfter completing \\u2026. corresponding report text.\\u201d Please rephrase\\n\\nPage 4 sec 3.2: What is meant by the decoded text and ECGs? Are these the reconstructions of the feature encoding or the corresponding aligned text? If encoding then how is it combined in the mixed-modal representation if the dimensions are different?\\n\\nPage 7 lines (360-362): \\u201cThe experimental results \\u2026. for classification\\u201d. 
Not clear please rephrase.\", \"figures\": \"Figure captions need to be improved.\", \"figure_1\": \"Please explain the figure adequately in the caption.\", \"tables\": \"Table captions should include the supervised task and the metric under observation.\", \"table_7\": \"Please change DERL to DERI.\", \"references\": \"[1] Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, and Rossella Arcucci. Zero-shot ecg classification with multimodal learning and test-time clinical knowledge enhancement. arXiv preprint arXiv:2403.06659, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pgv7 (1)\", \"comment\": \"We appreciate your review and the positive comments regarding our paper. We would like to respond to your comments as follows.\\n\\n**Q3.1**: The paper shows a lack of understanding of related work, with many previous related articles not cited or discussed. The following articles [1-4] need to be added and discussed in the paper (compare their advantages, disadvantages, and innovations in the paper).\\n\\n**A3.1**: Thanks for your valuable comment. We discuss these related works below:\\n\\n**Ref [1]** aims to use instruction prompts to generate reports from ECG signal inputs with LLMs. In contrast, our DERI framework is designed not only for report generation but also for ECG classification: it is used to learn effective representations of ECG signals with more clinical cardiac information. Although we also conduct report generation, it is just one of the ways we verify the learned ECG representation. In addition, our experiments are conducted on 4090 GPUs with limited computing resources. To better compare our method with the MEIT framework in Ref [1], we add report generation experiments on the PTB-XL dataset, which is not used in our pre-training process.
The comparison results are shown in the table below, together with Ref [4].\\n\\n**Ref [2]** proposes a method called METS that simply computes the cosine similarity between the ECG embedding and the report embedding for multi-modal ECG-text SSL pre-training. However, this approach is too simple to learn the cross-modal representation of ECG and reports. It is in fact the predecessor of MERL, with a simpler method and lower results; MERL is the follow-up work by the authors of Ref [2], which is why we did not focus on this paper when selecting baseline methods.\\n\\n**Ref [3]** proposes zero-shot ECG classification based on LLMs and RAG and obtains strong performance on ECG classification. However, this method must first construct a vector database for retrieval, and its performance depends on the quality of that database; the RAG approach also has high storage requirements and high computing costs. In addition, although it is also a multimodal method, it does not use the diagnostic reports corresponding to the ECG signals and cannot extract the high-level semantics in the reports as efficiently as our DERI method. Nevertheless, its use of baseline feature engineering for ECG signals gives us inspiration for further work.
To better compare our method with this method, we compare the classification performance on PTB-XL, as shown in the table below:\\n| | Supervised | Few-shot TNP | Few-shot TNP | Zero-shot RAG | Zero-shot RAG | Zero-shot RAG | Linear Probing DERI | Linear Probing DERI | Linear Probing DERI | Zero-shot | Zero-shot |\\n|--------------|------------|--------------|------------|---------------|------------|---------|---------------------|-------|-------|------------|--------|\\n| PTB-XL-Super | 1D-CNN | LLaMA2-7B | LLaMA2-13B | LLaMA2-7B | LLaMA2-13B | GPT-3.5 | 1% | 10% | 100% | DERI | MERL |\\n| Macro F1 | 66.0 | 35.7 | 34.8 | 61.7 | 62.2 | 66.9 | 65.6 | 70.9 | 72.4 | 55.4 | 53.3 |\\n| Accuracy | 74.8 | 41.7 | 42.2 | 71.4 | 72.6 | 75.7 | 84.3 | 87.2 | 87.6 | 74.9 | 71.9 |\\n\\n**Ref [4]** proposes ECG-Chat, which is a great work for multi-modal ECG learning. We did not discuss it since it was released on arXiv less than two months before our work was submitted to ICLR. It combines the ECG encoder and classification results to construct instructions for an LLM. In addition to the ECG signal, it also uses electronic health records and other information for DSPy and GraphRAG. Compared to the original reports, ECG-Chat can generate structured reports that include medical history, diagnoses, and recommendations. Therefore, to compare our DERI with ECG-Chat, we compare the report generation results on the PTB-XL dataset with these baselines. It should be noted that the baseline results are taken from the original papers.\"}", "{\"title\": \"Response to Reviewer pYTw\", \"comment\": \"We appreciate your review and the positive comments regarding our paper.
We would like to respond to your comments as follows.\\n\\n**Q2.1**: In contrast to other modalities such as chest X-rays and pathological diagnoses, electrocardiogram reports have been mainly produced mechanically by diagnostic equipment for many years. Therefore, this study is more likely to be learning of waveform data and its correct labels, rather than two-modal learning of waveform data and its interpretation using natural language. The scope of this study may be narrower than the general interest of the ICLR main conference.\\n\\n**A2.1**: Our work proposes to learn an effective representation of ECG signals with the help of diagnostic reports. During pre-training, we conduct Deep ECG-Report Interaction to train the model to learn effective representations with more clinical cardiac information. After that, the model can be used to learn representations of ECG signals without reports. ECG classification and report generation are the downstream tasks we conduct to verify the ability of our DERI to learn ECG representations. Therefore, our method is fundamentally a representation learning model, which fits squarely within the scope of the International Conference on Learning Representations.\\n\\n**Q2.2**: Is it possible to disclose what the electrocardiogram reports used as training data in this study are specifically like? Was it verified how diverse the content of these reports is in terms of natural language?\\n\\n**A2.2**: We have provided example samples of the diagnostic reports used as training data in the supplement, including short, medium, and long reports, which are also shown below:\\n\\n**Short report**: sinus bradycardia. prolonged qt interval. borderline ecg.\\n\\n**Medium report**: sinus rhythm. poor r wave progression - probable normal variant. inferior st-t changes may be due to myocardial ischemia.\\n\\n**Long report**: sinus bradycardia. prolonged qt interval.
possible anterior infarct - age undetermined. lateral t wave changes may be due to myocardial ischemia. abnormal ecg.\\n\\nTo verify how diverse the content of these reports is in terms of natural language, we conducted a statistical analysis on the reports of MIMIC-ECG, in which we calculated the count of each word across all reports, as well as the rare word count, TTR, and Herdan\\u2019s C of each report. Since there are 771,693 reports and the average sentence has 14 words, we consider a word rare if it appears in no more than 0.1% of the total number of reports (771). Based on the rare word count, we calculate the TTR and Herdan\\u2019s C of each report as below:\\n\\n $TTR = \\\\frac{Rare Word Count}{Report Word Count}$\\n\\n$Herdan's C = \\\\frac{log(Rare Word Count)}{log(Report Word Count)}$\\n\\nHere, TTR is the ratio of the number of rare words (types) to the total number of words (tokens), reflecting the diversity of the text. A higher TTR indicates higher lexical diversity and less lexical repetition. Herdan's C reflects the lexical diversity of a text by calculating the logarithmic ratio of the number of rare words in the text to the total number of words. The results are shown below:\\n| | Word counts | Rare Word Count | TTR (%) | Herdan's C |\\n|--------|-------------|-----------------|---------|-------------|\\n| Max | 650142.00 | 18.00 | 83.33 | 0.90 |\\n| Min | 1.00 | 0.00 | 0.00 | 0.00 |\\n| Median | 329.00 | 0.00 | 0.00 | 0.00 |\\n| Mean | 12990.21 | 0.09 | 0.52 | 0.01 |\\n| Std. | 48132.65 | 0.50 | 3.34 | 0.05 |\\n\\nWe also find that 818 different words are used in these reports, of which 486 are regarded as rare words. Moreover, 40 words appear only once across all reports, while 110 words appear fewer than 10 times. 
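As a concrete illustration, the two per-report statistics can be computed as sketched below. This is our own minimal sketch, not the analysis code used above: it assumes each report is already tokenized into a word list and that the set of rare words (those occurring in no more than 771 reports) has been precomputed; the function name is illustrative.

```python
import math

def lexical_diversity(report_words, rare_words):
    """Compute TTR and Herdan's C for a single report.

    TTR = rare word count / report word count.
    Herdan's C = log(rare word count) / log(report word count).
    Reports with no rare words (or a single word) yield 0.0,
    matching the zeros for such reports in the summary table.
    """
    n = len(report_words)
    rare = sum(1 for w in report_words if w in rare_words)
    ttr = rare / n if n > 0 else 0.0
    c = math.log(rare) / math.log(n) if rare > 0 and n > 1 else 0.0
    return ttr, c

# Example: 2 rare words out of 4 -> TTR = 0.5, C = log(2)/log(4) = 0.5
ttr, c = lexical_diversity(["sinus", "bradycardia", "prolonged", "qt"],
                           {"prolonged", "qt"})
```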
However, our distribution shift experiments show that our DERI method will not be limited by the scope and the bias of the report used in pre-training since it performs well for samples without reports.\\n\\nWe hope our response has addressed all concerns. We would greatly appreciate any further constructive comments or discussions.\"}", "{\"title\": \"Response to Reviewer pgv7 (2)\", \"comment\": \"**Table following Ref.[4] in A3.1**\\n| | Method | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Our Framework | MERL-DisGPT2 | 48.9 | 44.4 | 41.2 | 37.4 | 55.4 |\\n| | DERI-DisGPT2 | **58.6** |**54.8** | **51.9** |**48.6** | **64.2** |\\n| | DERI-Align-DisGPT2 | 54.1 | 49.8 | 46.6 | 43.2 | 60.1 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [1] | GPT2-Medium | 32.9 | 27.8 | 25.4 | 23.2 | 39.1 |\\n| | GPT2-Large | 43.7 | 39.5 | 35.5 | 32.0 | 48.1 |\\n| | GPT-Neo | 47.4 | 44.9 | 39.8 | 37.3 | 48.6 |\\n| | GPT-NeoX | 46.9 | 45.3 | 41.7 | 39.9 | 55.3 |\\n| | GPT-J | 48.5 | 45.2 | 42.8 | 40.5 | 55.0 |\\n| | BLOOM | 49.1 | 46.2 | 42.7 | 41.5 | 58.0 |\\n| | OPT | 50.2 | 47.7 | 43.1 | 41.8 | 56.8 |\\n| | LLaMA-1 | 51.4 | 48.5 | 46.5 | 43.0 | 58.8 |\\n| | Mistral | 48.6 | 47.5 | 44.6 | 42.1 | 59.1 |\\n| | LLaMA-2\\u2020 | 51.5 | 48.4 | 46.9 | 43.9 | 59.4 |\\n| | Mistral-Instruct\\u2020 | 50.1 | 48.1 | 45.7 | 42.5 | 59.2 |\\n|---------------|--------------------|--------|--------|--------|--------|----------|\\n| Ref [4] | PTB-XL | 6.5 | - | - | 0.9 | 25.6 |\\n| | ECG-Chat | 15.9 | - | - | 2.3 | 23.9 |\\n| | ECG-Chat-DDP | 32.3 | - | - | 11.2 | 29.9 |\\n\\n**Q3.2**: Several methods from the referenced articles need to be used as baselines and compared in the experimental section, especially for the ECG report generation part.\\n\\n**A3.2**: We have provided more baselines for the ECG report generation part as the table in A3.1. 
It should be noted that we compared the results on the PTB-XL dataset since the pre-training process of our DERI is conducted without the PTB-XL dataset.\\n\\n**Q3.3**: There are many similarities between this paper and the article \\\"Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement (MERL),\\\" with a lack of innovation.\\n\\n**A3.3**: MERL conducts simple cross-modal alignment between ECG and report in the feature space to learn representations. However, the interaction between modalities is relatively shallow, which cannot effectively convey the high-level semantics of ECG recordings. Simple alignment only brings the learned ECG representation close to the report representation in the latent space, but it does not effectively learn the cardiac information contained in the report representation. In contrast, our proposed method aims to explore deep modality interaction between ECG signals and clinical reports through the combination of contrastive learning (alignment) and generative learning (reconstruction), which is a substantially different approach from MERL. We designed our framework to enable the model to learn more effective ECG representations that contain more high-level clinical semantics from reports. The innovation of our work can be summarized as follows:\\n\\na. To learn effective ECG representations for cardiac conditions from reports, we propose a novel cross-modal ECG-report framework via multiple feature alignment and mutual feature reconstruction. The combination of alignment and reconstruction at different levels enables the model to conduct deep interaction between ECG and report, which allows the learned representations to contain more effective high-level semantics, integrating features across modalities. This approach strengthens the representation's ability to capture the clinical context, enhancing both accuracy and relevance for diagnosis.\\n\\nb. 
A novel framework combining the ECG encoder with a language model for ECG report generation is proposed in our method, in which the parameters of the encoder are frozen and only the language model is finetuned. On the one hand, our DERI can easily verify whether the learned representation contains high-level semantics from clinical reports. On the other hand, we used smaller models such as GPT2 for report generation and achieved much more accurate results on the PTB-XL dataset than large models such as LLaMA and other methods like RAG (the results can be seen in the table in A3.1). This means that we have designed a report generation framework that requires only a small amount of computing resources to match or exceed LLMs, which require large amounts of computing resources and storage space.\"}", "{\"title\": \"Response to Reviewer 5ABa (2)\", \"comment\": \"**Reference used in A4.2**:\\n\\n[A] Wan, Zhongwei, et al. \\\"Electrocardiogram instruction tuning for report generation.\\\" arXiv preprint arXiv:2403.04945 (2024).\\n\\n[B] Zhao, Yubao, et al. \\\"ECG-Chat: A Large ECG-Language Model for Cardiac Disease Diagnosis.\\\" arXiv preprint arXiv:2408.08849 (2024).\\n\\n[C] Yuan Liu, Songyang Zhang, Jiacheng Chen, Zhaohui Yu, Kai Chen, and Dahua Lin. Improving pixel-based mim by reducing wasted modeling capability. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5361\\u20135372, 2023b.\\n\\n[D] Wang, F., Du, S., & Yu, L. (2024). HERGen: Elevating Radiology Report Generation with Longitudinal Data. arXiv preprint arXiv:2407.15158.\\n\\n**Q4.3**: The reproducibility issue is further compounded by the authors' apparent reluctance to share their code.\\n\\n**A4.3**: This is a serious misunderstanding of our work: we have submitted our core source code in the supplement for reference. 
We will also make all of our code public on GitHub as soon as the article is accepted.\\n\\n**Q4.4**: The report generation task is not implemented on PTB-XL. Since MIMIC-ECG is used for pretraining, evaluating solely on MIMIC-ECG does not sufficiently assess generalizability and robustness, as all the data is seen during pretraining.\\n\\n**A4.4**: Thanks for your valuable comments. We have conducted additional report generation experiments on PTB-XL, and the results are provided in the table above (refer to A4.2). Our method still achieves the best performance among all baselines, demonstrating its strong generalizability and robustness.\\n\\n**Q4.5**: The evaluation metric for ECG report generation.\\n\\n**A4.5**: We evaluate the generated ECG reports from two aspects: Natural Language Generation (NLG) and Clinical Efficiency (CE). \\n\\nFor NLG, we adopt BLEU-n for n-gram overlap evaluation and ROUGE-L, which is based on the longest common subsequence between the original and generated reports.\\n\\nFor CE, we adopt two ways to better evaluate the generated reports. The process of obtaining CE for X-ray report generation requires a pre-trained ChestXRayBERT to classify the original report and the generated report as the true label and predicted label, respectively. However, there is no pre-trained language model for ECG report classification. Therefore, on the one hand, we adopt open-source LLMs, including LLaMA2-7B and Vicuna-7B, as classifiers of the ECG reports. The LLMs are used to classify the original report and the generated report into six given categories: Normal ECG, Myocardial Infarction, ST/T Change, Conduction Disturbance, Hypertrophy, and Others. The category of the original report is regarded as the true label and that of the generated report as the predicted label. We then use these labels to calculate the CE metrics, which are shown in Figure 4 in our paper. 
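For concreteness, once each pair of original and generated reports has been mapped to one of the six categories, the CE computation reduces to standard multi-class precision/recall/F1 over the label pairs. The sketch below is our own illustration, not the evaluation code itself; in particular, the macro averaging shown here is an assumption, since the averaging scheme is not specified above.

```python
def macro_prf1(true_labels, pred_labels, categories):
    """Macro-averaged precision, recall, and F1 over the given categories.

    true_labels: categories assigned to the original reports.
    pred_labels: categories assigned to the generated reports.
    """
    precs, recs, f1s = [], [], []
    for c in categories:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(t == c and p == c for t, p in zip(true_labels, pred_labels))
        fp = sum(t != c and p == c for t, p in zip(true_labels, pred_labels))
        fn = sum(t == c and p != c for t, p in zip(true_labels, pred_labels))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    k = len(categories)
    return sum(precs) / k, sum(recs) / k, sum(f1s) / k
```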
\\n\\nOn the other hand, inspired by the zero-shot classification method for ECG signals, we conduct report zero-shot classification, as illustrated in Appendix D and Figure 6 in our paper. Specifically, we first use Med-CPT to encode the prompts of all categories as label embeddings. Then, to obtain the ground truth, we encode the original report to obtain its embedding and calculate its similarity to all label embeddings. The category with the highest similarity is regarded as the true label of the original report. For the predicted label, we conduct the same process with the generated report. \\n\\nFurthermore, for PTB-XL with annotated labels, we conduct more experiments to verify the CE of the proposed DERI framework. The results are shown below.\\n\\n| Single Label | Method | F1 | PRE | REC |\\n|--------------|--------------------|-------|-------|--------|\\n| Prompt | MERL-DisGPT2 | 22.6 | 27.9 | 21.5 |\\n| | DERI-DisGPT2 | 40.1 | 46.5 | 38.7 |\\n| | DERI-Align-DisGPT2 | 32.0 | 38.1 | 30.0 |\\n|--------------|--------------------|-------|-------|--------|\\n| Multi-label | Method | F1 | Acc | AUC |\\n| Super | MERL-DisGPT2 | 53.3 | 66.0 | 75.7 |\\n| | DERI-DisGPT2 | 56.1 | 72.9 | 76.9 |\\n| | DERI-Align-DisGPT2 | 55.5 | 72.3 | 76.1 |\\n|--------------|--------------------|-------|-------|--------|\\n| Sub | MERL-DisGPT2 | 19.3 | 85.0 | 71.1 |\\n| | DERI-DisGPT2 | 21.1 | 86.5 | 72.7 |\\n| | DERI-Align-DisGPT2 | 19.7 | 85.3 | 72.2 |\\n|--------------|--------------------|-------|-------|--------|\\n| Form | MERL-DisGPT2 | 20.8 | 78.3 | 62.9 |\\n| | DERI-DisGPT2 | 26.5 | 89.0 | 68.0 |\\n| | DERI-Align-DisGPT2 | 24.8 | 84.0 | 66.0 |\\n|--------------|--------------------|-------|-------|--------|\\n| Rhythm | MERL-DisGPT2 | 18.5 | 93.3 | 71.1 |\\n| | DERI-DisGPT2 | 24.1 | 95.2 | 74.5 |\\n| | DERI-Align-DisGPT2 | 23.1 | 94.0 | 73.7 |\"}" ] }
0qrTH5AZVt
ConLUX: Concept-Based Local Unified Explanations
[ "Junhao Liu", "Haonan Yu", "Xin Zhang" ]
With the rapid advancements of various machine learning models, there is a significant demand for model-agnostic explanation techniques, which can explain these models across different architectures. Mainstream model-agnostic explanation techniques generate local explanations based on basic features (e.g., words for text models and (super-)pixels for image models). However, these explanations often do not align with the decision-making processes of the target models and end-users, resulting in explanations that are unfaithful and difficult for users to understand. On the other hand, concept-based techniques provide explanations based on high-level features (e.g., topics for text models and objects for image models), but most are model-specific or require additional pre-defined external concept knowledge. To address this limitation, we propose ConLUX, a general framework to provide concept-based local explanations for any machine learning models. Our key insight is that we can automatically extract high-level concepts from large pre-trained models, and uniformly extend existing local model-agnostic techniques to provide unified concept-based explanations. We have instantiated ConLUX on four different types of explanation techniques: LIME, Kernel SHAP, Anchor, and LORE, and applied these techniques to text and image models. Our evaluation results demonstrate that 1) compared to the vanilla versions, ConLUX offers more faithful explanations and makes them more understandable to users, and 2) by offering multiple forms of explanations, ConLUX outperforms state-of-the-art concept-based explanation techniques specifically designed for text and image models, respectively.
[ "local model-agnostic explanations", "post-hoc XAI", "concept-based XAI" ]
Reject
https://openreview.net/pdf?id=0qrTH5AZVt
https://openreview.net/forum?id=0qrTH5AZVt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzHscb1zto", "m8OhVK5ev9", "m0uhoDfWXe", "gjJmmGzOXt", "YqqDS8Himi", "XdW28Z4CjS", "X7djjfZYat", "Ua1OrxG92q", "QXkUwT7jxT", "NCNCbH8OZb", "LTBGxmLbhN", "Jp8jsNBjQE", "Bo6Gqjpqo6", "9dS5vtKikb", "8F0yQmBuVX", "7HD1Cv5RDO", "0Te9wHW5EH" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732396877439, 1732126173324, 1733226787013, 1733307503667, 1733067810077, 1732124734678, 1730069776320, 1732455356024, 1734735395581, 1732322707711, 1730619808701, 1733222899473, 1732123938467, 1737523468205, 1730699813765, 1733224062169, 1732123849521 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1760/Reviewer_CduZ" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Reviewer_B3KS" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Area_Chair_JaPL" ], [ "ICLR.cc/2025/Conference/Submission1760/Reviewer_B3KS" ], [ "ICLR.cc/2025/Conference/Submission1760/Reviewer_CduZ" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1760/Reviewer_SUB7" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ], [ "ICLR.cc/2025/Conference/Submission1760/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response.\\n\\nRegarding local fidelity, I appreciate your point about concept-level fidelity 
being more intuitive for end-users. However, my concern lies in the potentially drastic nature of the shift introduced by concept-level perturbations compared to feature-level adjustments. This shift could risk disrupting the local neighborhood of the input, which is critical for maintaining fidelity in explanations. Unlike feature-level perturbations, which make smaller adjustments (e.g., changing a single word or pixel), concept-level perturbations often involve broader, higher-order changes that may inadvertently alter the input more significantly. I recommend running controlled experiments that measure fidelity across varying perturbation scales. Additionally, the broader nature of concept-level perturbations could inadvertently inflate recorded responses in metrics like AOPC or coverage.\\n\\nRegarding concept quality, while I understand your concerns about directly using TCAV, the broader idea is to introduce a mechanism to evaluate the relevance and coherence of the concepts extracted by ConLUX. This would help substantiate your claim that these concepts align well with the model's decision-making process across different domains.\", \"for_example_if_used_tcav\": [\"Relevance: TCAV's directional derivative approach could be used to assess how strongly the extracted concepts influence the target model's output. By checking whether the concepts identified by ConLUX align with the decision boundary of the model, you could measure how relevant these concepts are in the context of specific predictions.\", \"Coherence: TCAV can also reveal whether the concepts are consistently meaningful across instances within a domain. 
If certain concepts show a consistent, high relevance score across inputs, this could serve as evidence of their coherence and general applicability.\"]}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thanks for your review!\", \"our_responses_to_your_concerns_are_as_follows\": \"**Typos and Errors:** \\nWe have uploaded our edited paper.\\n\\n**Concept for image data:** \\nAs you said, in our experiments, we use concepts represented as segmentation masks. However, we would like to clarify that this is just one instantiation of ConLUX-generated explanations. ConLUX is a framework that accepts different kinds of concepts, as long as the pre-trained models can obtain concepts and perform the bidirectional mapping between concept level and feature level. For example, it is possible to use more abstract concepts represented by nature language with models like BLIP-2(Li et al., 2023), and perform the perturbations by models like stable diffusion(Rombach et al., 2021).\\n\\n**Concept extraction and perturbation for text data:** \\nIn our text experiments, we employ a workflow similar to TBM (Ludan et al., 2023) to generate concepts. In the TBM paper, the quality of concepts and the ability of large language models (LLMs) to map feature-level data to concept-level representations have been validated through human evaluation. \\nFor perturbation, we propose to perform concept-level perturbation by an LLM, and we admit it's worth verifying the consistency of concept-level perturbation. As LLMs have been proved to be able to check whether an instance satisfies a given concept, we are conducting an experiment to evaluate the consistency of LLM-based perturbations. Specifically, we use LLMs to verify whether the perturbation alters the concept as intended. We will report these results in a subsequent update.\\n\\n**ACE + LIME as a baseline:** \\nWe do not believe ACE + LIME should be considered a baseline for our framework. 
\\n- First, ConLUX is designed as a local explanation technique that takes a target model and an input to generate a local explanation. In contrast, ACE is claimed to be a global explanation method that explains an entire class and requires a set of images from that class to extract concepts.\\n- Second, ACE segments images at multiple resolutions, meaning a concept does not correspond to a specific area of the image but to multiple segments at different resolutions. If we simply use the union of these segments to represent a concept, this can lead to overlapping areas between concepts, making it difficult to perturb a specific concept by simply masking the corresponding area.\\n- Lastly, extending existing local methods to concept-level explanations is a key contribution of our paper. If the issues of concept discovery and perturbation can be addressed without pre-trained large models, then the augmented LIME can be considered a partial instantiation of ConLUX.\"}", "{\"comment\": \"- We have conducted the experiment to validate the faithfulness of LLM perturbation in our text experiments. Specifically, we test whether Llama-3.1, the LLM used in our experiments, can generate sentences with the expected concepts. The results show that Llama-3.1 generates sentences as expected with over 99% accuracy.\\n\\n- We have performed an experiment on a text-generation task. Specifically, we explain the Llama-3.1 model on a text summarization task. We randomly selected 20 sentences from the CNN/Daily Mail dataset, and prompted Llama-3.1 to perform text summarization. Paes et al. (2024) introduce C-LIME and L-SHAP to generate attribution explanations for generative tasks. We augmented these two methods with ConLUX and compared the fidelity of the vanilla explanations with the ConLUX-augmented ones. 
The average AOPC values are shown below:\\n\\n\\n| C-LIME | LIME* | L-SHAP | KSHAP* |\\n|--------|-------|--------|--------|\\n| 0.181 | **0.243** | 0.171 | **0.256** |\\n\\n\\nThese results show that ConLUX can also improve the fidelity of explanations for a text-generation task.\"}", "{\"comment\": \"We run our image experiments on 5000 more images, and the final results are as follows:\\n\\n\\n**Coverage(\\\\%)**\\n\\n| | Anchors | Anchors* | LORE | LORE* |\\n|-------|--------|-------|--------|--------|\\n|YOLOv8| 28.9 | **31.1** | 21.4 | **24.9** |\\n|ViT|25.3| **28.7**|20.9|**23.8**|\\n|ResNet-50|27.2| **31.0** | 20.2 |**29.8**|\\n\\n**Precision(\\\\%)**\\n| | Anchors | Anchors* | LORE | LORE* |\\n|-------|--------|-------|--------|--------|\\n|YOLOv8| 85.2 | **99.0** | 86.4 | **92.0** |\\n|ViT|89.1|**98.6**|90.1|**95.9**|\\n|ResNet-50| 90.5|**99.1**|84.9|**92.4**|\\n\\n**AOPC**\\n\\n| | LIME | LIME* | SHAP | KSHAP* |\\n|-------|--------|-------|--------|--------|\\n|YOLOv8| 0.411 | **0.498** | 0.457 | **0.603** |\\n|ViT|0.501 | **0.643**|0.489|**0.635**|\\n|ResNet-50|0.234| **0.317**|0.243|**0.335**|\\n\\n**Accuracy_a**\\n| | LIME | LIME* | SHAP | KSHAP* |\\n|-------|--------|-------|--------|--------|\\n|YOLOv8| 0.137 | **0.050** | 0.328 | **0.051** |\\n|ViT|0.226|**0.055**|0.246|**0.078**|\\n|ResNet-50| 0.359|**0.126**|0.314|**0.079**|\"}", "{\"title\": \"Response to All Reviewers\", \"comment\": \"Thanks to all the reviewers for your thoughtful feedback. We have uploaded a revised version of the paper to address the main concerns raised:\\n\\n1. **Clarification of Our Contribution (Paragraph starting at line 79):** \\n - We introduce a unified approach to augment various existing local model-agnostic explanation techniques to provide concept-based explanations. This requires the ability to automatically extract concepts and perform a bidirectional mapping between the concept level and the feature level. 
\\n - We show that large pre-trained models can be used to extract concepts and perform bidirectional mapping between the concept level and feature level. Specifically, we find existing works (Ludan et al., 2023[1]; Sun et al., 2023[2]) have utilized large models to extract concepts and conduct feature-to-concept mapping (i.e., determining whether an instance satisfies a specific concept) for specific tasks. We generalize these findings and show that large pre-trained models can also perform concept-to-feature mapping (i.e., generating samples based on changes in concept-level information), thereby enabling the full workflow.\\n\\n2. **Validation of Concept-Level Perturbation (Line 340):** \\n In our text experiments, we use LLMs to perform the perturbation. We have conducted an experiment to validate the faithfulness of this perturbation. Specifically, we test whether Llama-3.1, the LLM used in our experiments, can generate sentences with the expected concepts. The results show that Llama-3.1 generates sentences as expected with over 99% accuracy.\\n\\nYou can refer to our responses below each review for other questions and concerns. We would appreciate your further feedback.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thanks for your review!\", \"our_responses_to_your_concerns_are_as_follows\": \"**Local Fidelity:** \\nWe understand your concerns regarding feature-level local fidelity. In fact, high local fidelity is a key property of a high-quality local explanation. However, we would like to argue that it is not sufficient to stick only to feature-level local fidelity. \\nConcept-based explanations provide concept-level local fidelity. As concept-based explanations capture model behavior more faithfully and are more intuitive, they offer a more natural and convenient way for end-users to understand and utilize the explanations (Sun et al., 2023[1]). 
In other words, users are more inclined to know how the models behave in the concept-level locality.\\n\\n**Quality of Concepts:** \\nWe are unclear about how to use \\\"concept relevance\\\" and \\\"coherence\\\" to measure concept quality. To our knowledge, TCAV is a method for attributing importance to concepts, similar to what ConLUX-augmented LIME and KernelSHAP do. We would appreciate it if you could explain this in detail, and we would be happy to address your concerns further. \\nMoreover, in both our text and image experiments, the discovered concepts have been shown to be human-understandable. We extract concepts by workflows similar to those used in previous methods: TBM (Ludan et al., 2023[2]) for text and EAC (Sun et al., 2023[1]) for images. These prior works have validated through human evaluations that the generated concepts are understandable.\\n\\n**Fairness and Reliability:** \\nWe acknowledge that the capabilities of pre-trained models can influence the quality of explanations. However, this should not be considered a disadvantage of ConLUX. On one hand, our experiments demonstrate that ConLUX achieves state-of-the-art performance. On the other hand, as large models continue to advance, their fairness and reliability will improve, enabling ConLUX to generate explanations of higher quality.\\n\\n**Scale of perturbation:** \\nThe improvement in fidelity metrics does not come from a larger perturbation scale. Consider the image classification experiment in Section 4: ConLUX improves the fidelity metrics, while for both vanilla and ConLUX-augmented methods, the perturbation scale is the same, i.e., from the original image to a fully masked image.\\n\\n\\n[1] Ao Sun, Pingchuan Ma, Yuanyuan Yuan, and Shuai Wang. Explain any concept: Segment anything meets concept-based explanation. (arXiv:2305.10289), May 2023. doi: 10.48550/arXiv.2305.10289. URL http://arxiv.org/abs/2305.10289. 
arXiv:2305.10289 [cs].\\n\\n[2] Josh Magnus Ludan, Qing Lyu, Yue Yang, Liam Dugan, Mark Yatskar, and Chris CallisonBurch. Interpretable-by-design text classification with iteratively generated concept bottleneck. (arXiv:2310.19660), October 2023. doi: 10.48550/arXiv.2310.19660. URL http://arxiv.org/abs/2310.19660. arXiv:2310.19660 [cs].\"}", "{\"summary\": \"The paper proposes a concept based local explanation method ConLUX that is model agnostic. The authors essentially propose a modality specific concept representations of inputs (concept predicates). These representations also readily provide a procedure to perform perturbation on the predicates and subsequently the input. Combining these two, the method is able to augment the traditional model-agnostic approaches to provide concept based explanations. The authors provide experiments on text (sentiment prediction) and images (classification) with multiple black-box models and explanation techniques and essentially show a clear improvement in terms of various forms of fidelity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem setup, what the authors want to solve, and why, is quite clear.\\n2. The core idea of using a concept-friendly representation for input and combining it with black-box explanation methods is simple and its positive implications are easy to see. \\n3. The experiments are reasonably strong. They cover both text, images, and on multiple black-box models, with positive results in all cases. Also, via the existing black-box explainers, the method can generate different types of explanations (attribution, counterfactual etc.)\\n4. While it has its own weaknesses, the proposal to build visual concept predicates is a principled way that could be readily validated by a user if the concept extraction is incorrect. This aspect of simplicity should positively reflect in its application\", \"weaknesses\": \"1. 
Twice the authors describe (Fel et al. 2023) as using external knowledge to learn concepts. To my understanding, this is a wrong description. They propose a unified class of methods based on dictionary learning that is completely unsupervised.\\n2. Typos/Errors: \\n * Rednet (line 448)\\n * line 351 should not be in past tense\\n * Table 3 caption does not correspond to the table content\\n3. The method seems only capable to extract coarse visual concepts. Also it can only admit concepts that can be represented as a segmentation mask. In case of text concept predicates, potential risk of some issues arising from using language models for concept detection and predicate-feature mapping. \\n4. I felt a lack of examples/illustrations of visual explanations and any deeper qualitative insights the authors might have.\", \"questions\": \"1. I am not completely convinced by the strategy of using language models for text concept predicates. While I assume it probably provides the best performance and our generally more than good enough for text-only tasks, they could still be prone to hallucinations in terms of incorrect concept extraction or during generations for perturbation. Did you consider some other method (maybe traditional topic modelling approaches) to validate its concept detection or perturbation outputs. In this sense I like the visual predicates lot more than textual ones.\\n\\n2. I wonder if you considered an ACE (Ghorbani et al.) + LIME method to compare ConLUX against a concept based reference where you use activation space of an external encoder to cluster superpixels instead of the original model. The concepts can be defined as clusters of superpixels, as in original method. 
\\n\\nOverall, comparing the strengths and weaknesses, I find the method to be just sound and strong enough that I would tilt slightly towards acceptance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Regarding your concerns, our responses are as follows:\\n\\n**Local fidelity** \\nWe would like to argue that not all types of explanation methods should guarantee feature-level local fidelity. For concept-based explanations, concept-level local fidelity is the key metric. In fact, any explanation technique that uses predicates and perturbations above the feature level inherently risks a loss in feature-level fidelity. \\nEssentially speaking, the primary goal of using a metric of explanation is to demonstrate that an explanation is useful for its target users. As we mentioned before, end-users are shown to be more inclined to know how the models behave in the concept-level locality. Specifically, they care about how the model responds to changes in concepts rather than individual features. Therefore, we focus on measuring concept-level fidelity for concept-based explanations, as is standard practice in prior works[1-4]. Since end-users do not prioritize output changes caused by feature-level perturbations, a higher feature-level fidelity does not make concept-based explanations more useful. As such, measuring feature-level local fidelity for concept-based explanations is unnecessary.\\n\\n**Fidelity Results & Perturbation Scale:** \\nWe don't think our fidelity results are inflated.\\n- AOPC: In Figure 5, the values along the x-axis represent K%, which indicates the proportion of changed predicates rather than the number of changed predicates. This provides a fair comparison. 
If you believe there is an issue with this metric, could you please provide a specific example to clarify your concern?\\n- Coverage: ConLUX improves both precision and coverage, meaning our explanations better predict the model's behavior across more data and with greater accuracy. This makes our explanations demonstrably superior to the vanilla methods. Could you please explain in detail why you think our explanations might be potentially worse?\\n\\n**TCAV** \\nDirectional derivative is just one method for attributing importance to features or concepts, similar to other techniques like LIME, LRP, or DeepLift. As noted in prior works, these methods are not always faithful [5], so directional derivative should not be used as a metric to evaluate the concepts.\\n\\n**Quality of Concepts** \\nWe think the alignment between our concepts and the target models has already been evaluated through our fidelity experiments. Specifically, higher fidelity in a ConLUX-augmented explanation as a whole indicates that the combination of multiple concepts forming the explanation aligns better with the target model. Additionally, the AOPC curve demonstrates that, for any proportion \\\\( K\\\\% \\\\), the highest-attributed concepts align better with the target model.\", \"regarding_the_two_metrics_you_introduced\": \"- **Concept Relevance:** This metric appears to measure the importance of individual concepts. However, we go beyond mere attribution by also evaluating the quality of this attribution through fidelity experiments. \\n- **Concept Coherence:** We are unclear on the necessity of this metric. For end-users who aim to understand the local behavior of target models or predict their outputs in a specific locality, we didn't notice any practical difference between two local explanations with the same fidelity and understandability but differing in Concept Coherence. Could you please explain why Concept Coherence is critical in this context? 
\\n\\n\\n\\n[1]Fel, Thomas, Victor Boutin, Louis B\\u00e9thune, Remi Cadene, Mazda Moayeri, L\\u00e9o And\\u00e9ol, Mathieu Chalvidal, and Thomas Serre. \\u201cA Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation.\\u201d Advances in Neural Information Processing Systems 36 (December 15, 2023): 54805\\u201318.\\n\\n[2]Sun, Ao, Pingchuan Ma, Yuanyuan Yuan, and Shuai Wang. \\u201cExplain Any Concept: Segment Anything Meets Concept-Based Explanation.\\u201d arXiv, May 17, 2023. https://doi.org/10.48550/arXiv.2305.10289.\\n\\n[3]Fel, Thomas, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, R\\u00e9mi Cad\\u00e8ne, and Thomas Serre. \\u201cCRAFT: Concept Recursive Activation FacTorization for Explainability.\\u201d arXiv, March 28, 2023. https://doi.org/10.48550/arXiv.2211.10154.\\n\\n[4]Zaval, Mounes, and Sedat Ozer. \\u201cImproving the Explain-Any-Concept by Introducing Nonlinearity to the Trainable Surrogate Model.\\u201d arXiv, June 24, 2024. https://doi.org/10.48550/arXiv.2405.11837.\\n\\n[5]Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. \\u201cAxiomatic Attribution for Deep Networks.\\u201d arXiv, June 12, 2017. https://doi.org/10.48550/arXiv.1703.01365.\"}", "{\"metareview\": \"The paper introduces an intriguing method using large pre-trained models for concept discovery and perturbation, but several issues limit its reliability and generalizability. Concept-level perturbations can distort inputs, failing to capture local decision boundaries effectively, especially when modeled as entire objects. The extracted concepts lack evidence of generalization across domains, raising concerns about interpretability. Reliance on pre-trained models introduces biases, with no mitigation strategies proposed. Additionally, LLM-based perturbations in language tasks face fidelity issues, and the framework's sensitivity to prompt selection impacts reproducibility. 
Significant revisions, including finer-grained experiments, robust validation, and bias mitigation, are needed to improve its rigor and reliability.\", \"additional_comments_on_reviewer_discussion\": \"Key weaknesses raised by reviewers:\\n1. Concept Alignment: Lack of explicit human evaluation to confirm discovered concepts align with human-understandable representations.\\n2. Fidelity of Perturbations: Concerns about the reliability of LLM-based perturbations, especially for text data.\\n3. Predicate Selection: LLM-selected predicates may be inconsistent or confusing, impacting explanation quality.\\n4. Prompt Robustness: Framework is sensitive to prompt design, with no exploration of optimal strategies.\\n5. Limited Experiments: Experiments are restricted to specific tasks (sentiment analysis, ImageNet) with insufficient validation.\\n6. Baseline Comparisons: No thorough comparison with concept bottleneck methods or ablation studies.\\n7. Broad Perturbations: Concept-level changes may distort inputs significantly, risking local fidelity.\", \"author_responses\": \"1. Highlighted prior methods validating concept understandability and ongoing validation experiments.\\n2. Preliminary results show high fidelity (>99%) for LLM-based perturbations.\\n3. Argued that self-explanation of predicates is unnecessary and cited prior studies supporting concept understandability.\\n4. Claimed prompt robustness is tied to pre-trained models and will improve as models advance.\\n5. Ongoing experiments aim to address limited validation and expand tasks.\\n6. Justified existing comparisons with state-of-the-art techniques as appropriate for the framework.\\n7. 
Defended broader perturbations as improving concept-level fidelity and user comprehension.\\n\\nWhile the authors address concerns with explanations and ongoing experiments, some key weaknesses, such as robustness and validation, remain unresolved, relying on future work.\"}", "{\"title\": \"Response to Author Rebuttal\", \"comment\": \"Thank you for the response.\\n\\nI do not fully agree with the first two points about ACE+LIME. The use of ACE for this baseline would be mainly for extract concepts. It should be completely reasonable for your method to provide local explanations but assume an initial set of images to source you with concepts. About perturbation too, either you can consider a simpler version with superpixel extraction at a single resolution or while perturbing with concepts you can take union of superpixels associated with any perturbed concept and not in any present concept. \\n\\nHowever, because I agree with your last point I think these concerns of mine are irrelevant. It should be fine if any such system was seen as an instantiation for your method. So it is ok for me if you didn't consider such a system as a baseline but part of your own method. \\n\\nThe idea for using LLM/MLLMs and diffusion models is also interesting, although I don't think there were any experiments showing it. 
\\nWhile this is not a strong enough concern for me, in general the image explanation experiments would be strengthened if you also considered more sophisticated strategies for concept extraction, particularly ones that could extract concepts at a finer scale.\"}", "{\"summary\": [\"The paper introduces ConLUX, a framework designed to enhance model-agnostic explanation methods by transforming traditional feature-level explanations into concept-level ones.\", \"The authors argue that mainstream model-agnostic explanation techniques often provide explanations based on low-level features that don't align well with model decision processes or user understanding, so ConLUX elevates explanations to the concept level.\", \"The framework applies ConLUX to different explanation methods (LIME, Anchors, LORE, and Kernel SHAP) across text and image models.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"ConLUX shows promise in being adaptable to multiple existing explanation methods (LIME, SHAP, Anchors, LORE), broadening its application across varied model types.\", \"ConLUX aims to make model behaviors more intuitive and accessible for end-users, addressing a limitation in feature-based explanations.\"], \"weaknesses\": [\"**Major**\", \"One key weakness of ConLUX, I felt, is that it shifts to concept-level perturbations, which are broader and may disrupt the local fidelity of explanations. Unlike small feature-level adjustments (e.g., word or pixel changes in LIME), concept-level changes can alter the input more drastically, potentially leading to explanations that do not accurately reflect the model\\u2019s behavior around the specific input instance. To investigate whether these concept-level perturbations maintain local fidelity, I suggest running controlled experiments comparing concept-level and feature-level perturbations. 
For example, the authors could measure fidelity loss or gain across a gradient of perturbation scales, allowing for a comparison between the fidelity of feature-level and concept-level explanations.\", \"The paper relies on pre-trained models to extract high-level concepts but does not fully explore whether these concepts are consistently relevant across diverse domains. Variability in concept quality could impact the explanation's reliability. I recommend testing concept quality across datasets from different domains and introducing a metric or using existing ones like TCAV[1] to measure concept relevance and coherence within each domain.\", \"Since ConLUX relies on pre-trained models to extract high-level concepts, it inherits any biases present in these models. This reliance could skew the explanations based on the biases embedded in the pre-trained models, which might limit the fairness and reliability of the generated explanations.\", \"Observed improvement in fidelity metrics, such as AOPC, coverage, and precision, may partly result from the broader concept-level perturbations rather than genuinely enhanced explanation quality. Since larger perturbations at the concept level likely introduce more drastic changes to the model output, they could artificially inflate these scores, making the explanations appear more effective than they might be with finer, feature-level adjustments. To address this, the paper could benefit from controlled experiments using varying perturbation scales, comparing small and large concept-level shifts, to ensure that the fidelity improvements genuinely reflect enhanced interpretability rather than the impact of larger perturbations. I suggest implementing a normalized fidelity metric that adjusts for the magnitude of perturbations.\", \"**Minor**\", \"A few typos in page 2 \\\"ConLUX-agumented LIME\\\" should be \\\"ConLUX-augmented LIME\\\"\", \"**References**\", \"1. Kim, Been, et al. 
\\\"Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav).\\\" International conference on machine learning. PMLR, 2018.\"], \"questions\": [\"Don\\u2019t have much in terms of questions on the methodology itself, but a few conceptual issues stood out, as mentioned in the weaknesses.\", \"It\\u2019d be interesting to see if the authors could run additional experiments with different scales of perturbation to make sure these fidelity gains are actually about better explanations, rather than just larger shifts.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for your feedback.\\n\\nAs mentioned earlier, TCAV is an attribution method similar to LIME, KSHAP, DeepLift, and others. While we believe the fidelity results sufficiently demonstrate the high quality of our concept-level predicates, we are happy to provide further clarification through the attribution results.\\n\\nWe compared the highest attributed predicates in LIME, KSHAP, and their ConLUX-augmented explanations. The results show that, in our text experiments, on average, the predicate with the highest attribution score in ConLUX-augmented explanations receives more than 3 times the attribution score compared to those in the feature-level explanations. 
In our image experiments, on average, the highest attributed concept predicates receive 1.5 times the score of the highest attributed feature predicates.\"}", "{\"title\": \"Rebuttal by Authors (Continued)\", \"comment\": \"**L80:**\\nTheoretically, given that large pre-trained models are capable of obtaining concepts for various types of input data, such as natural language, images, and medical data (Chen et al., 2024[3]), and that existing model-agnostic techniques are applicable across a wide range of tasks, including classification and generation (Paes et al., 2024[4]), we make the claim that large pre-trained models can extract high-level concepts across different tasks.\\nAs it is impractical to enumerate all possible tasks, we chose two popular tasks to instantiate ConLUX in our experiments. However, as mentioned before, we are happy to include an experiment on a text-generation task to further support this claim.\\n\\n\\n**Teaser figure with thresholding:** \\nThanks for your suggestion. However, we have a different perspective. Deciding which threshold to use in visualizations is a non-trivial task and is not inherently part of attribution techniques like LIME or KernelSHAP. Therefore, we believe it is better to present our visualizations without including a threshold.\\n\\n**L93:**: \\nThis example is not a tradeoff between the reliability of explanations and their understandability. Elevating explanations from the feature level to the concept level enhances both fidelity and understandability. \\nIf the model's decision-making is indeed based solely on individual tokens, such a tradeoff can exist. However, this issue is a common limitation of concept-based explanations. Moreover, target models with such behavior are typically very simple (e.g., linear models), which are not sufficiently powerful and are rarely used in practice. Additionally, such simple models are often inherently self-explanable. 
Consequently, this issue is unlikely to pose a significant obstacle to applying concept-based methods.\\n\\n\\n**Figure 2:** \\nConLUX highlighting the kid indicates the limitation of YOLOv8, which takes the kid into consideration when identifying a punching bag.\\n\\n**Response to the question:** \\nWe use the default temperature setting for each LLM, which is 0 for GPT-3.5, and 0.8 for Llama 3.1. \\n\\n> Two very similar reviews can have different concepts chosen for explanation......One cat may be identified by its ears and a similar one by its eyes. Can you propose an experiment to study this robustness?\\n\\nWe'd like to clarify this issue from the following two perspectives:\\n\\n- **In-distribution data**: \\n For the instances in the test set of the used dataset, our experiments demonstrate that ConLUX explanations are faithful, and previous works (Ludan et al., 2023[1]; Sun et al., 2023[2]) demonstrate that high-level concepts generated by these large pre-trained models are understandable. This indicates that ConLUX performs well for in-distribution data. In this case, if two similar instances are assigned different concepts and explanations, these explanations can still be both faithful and understandable. This can be viewed as explaining the same decision from two different perspectives. Developing a method to provide explanations from multiple perspectives is an interesting direction for future research.\\n\\n- **Out-of-distribution (OOD) adversarial samples**: \\n Suppose the two similar instances include one in-distribution instance and one out-of-distribution (OOD) adversarial sample. In that case, it is possible for concept-based explanations to be unfaithful on the adversarial sample. However, we do not consider this a weakness of our method. 
\\n On the one hand, the potential low fidelity is a common risk of concept-based explanations: \\n 1) For either large models or other methods used to discover concepts (e.g., extracting concepts from gradients, activations, or attention weights), there is a risk of generating poor-quality concepts for adversarial instances on out-of-distribution (OOD) adversarial samples.\\n 2) The fidelity of explanations cannot be guaranteed in such cases, as the decision-making process for adversarial samples may not align with the extracted concepts.\\n \\n On the other hand, concept-based explanations are intended for non-expert end-users. Explaining adversarial samples is beyond the scope of this design. For debugging adversarial samples, expert users should utilize explanation techniques specifically designed for expert use cases.\\n\\n[1] Ludan, J. M., et al. (2023). Interpretable-by-design text classification with iteratively generated concept bottleneck. *arXiv preprint*, arXiv:2310.19660. \\n\\n[2] Sun, A., et al. (2023). Explain any concept: Segment anything meets concept-based explanation. *arXiv preprint*, arXiv:2305.10289. \\n\\n[3] Chen, Y., et al. (2024). BURExtract-Llama: An LLM for clinical concept extraction in breast ultrasound reports. *Proceedings of MCHM\\u201924*, 53\\u201358. \\n\\n[4] Paes, L. M., et al. (2024). Multi-level explanations for generative language models. *arXiv preprint*, arXiv:2403.14459.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper proposes to use foundation models to discover concepts to augment on methods like LIME to provide concept-based local explanations. The method is evaluated on sentiment analysis and image classification tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed framework is interesting and can be useful if validated rigorously. The paper explores two different modalities. 
The framework is applied across multiple established methods (LIME, Kernel SHAP, Anchor, and LORE).\", \"weaknesses\": \"Concept discovery using an LLM is the key aspect of the proposed framework. This calls for a human evaluation to answer the question: are the concepts discovered by the foundation models indeed aligned with a human-understandable representation? Currently, it sounds like it is assumed that the prompt will take care of this.\\n\\n\\nHow faithful is the backtracking from the perturbed concept space to the original space? Is this also done by the LLM?\\n\\nMethods like LIME create a local approximation of the original function to a human-understandable form and explain the decision. This is the reliable yet explainable part of the method. By using an LLM for discovering concepts (given each task and the sample), it becomes difficult even locally to explain the predicates. The decision-making is not fully explained, i.e., the choice of predicates cannot be explained. For instance, in case a poor predicate is chosen by an LLM, the user may be confused by the explanation. I think the proposed method, by using foundation models for concept discovery, makes the framework less reliably explainable.\\n\\nTo prove the robustness of the method, it would be useful to try intervention-based causal metrics, especially since the concept discovery is done by foundation models. C-insertion and deletion are often used to evaluate concept importance and fidelity in concept-based explanations.\", \"robustness_to_the_prompt\": \"How robust is the framework to the construction of the prompt? Also, given the literature around prompt engineering, the framework could explore what might be the optimal prompting strategy for discovering human-interpretable concepts.\\n\\n\\nThe paper proposes a new framework for explanation where concept discovery is done by foundation models. 
For the text modality, the experiments are restricted to sentiment analysis, and 1000 images from the ImageNet dataset are used for classification for images. The method calls for more experiments for validation. However, this is not the sole reason for the decision.\\n\\n\\nThere is no baseline comparison. Though the method is a new paradigm, it could be compared with concept bottleneck methods or predicates selected by other methods. There could also be an ablation study on different LLM model combinations. Additionally, the authors could discuss concept bottleneck-based methods. What do you think about a method where task-specific concept bottlenecks are chosen by the LLM? This is not a weakness or a mandatory experiment for the text.\", \"minor_comments\": \"\", \"l44\": \"Please provide a citation, as not all visual explanation methods using attribution mapping follow this principle.\", \"l80\": \"\\\"Moreover, we observe that ... across different tasks.\\\" given the restricted number of tasks evaluated it might be a good idea to tone down this claim.\", \"teaser_figure\": \"I think, this can be made more comparable if the attribution method is shown with certain thresholding. When shown post-thresholding, the viz can clearly demonstrate the benefit of the proposed method, even allowing for certain comparative evaluations later.\", \"l93\": \"This is a tradeoff between reliability of explanation to prediction and understandability. If the model is indeed making a decision based on the token, pivoting the explanation on different concepts can change it.\", \"figure_2\": \"Comparison of LIME and ConLUX-augmented LIME; the ConLUX highlights the kid! Is this a good explanation? Of course, SAM performs a good segmentation of the image, providing an object-level explanation, but the attribution seems incorrect to me. Or did I miss something?\", \"questions\": \"For each data point, the concept used can be different. 
Given that it comes from an LLM, there can even be stochasticity over it (it may also be worth mentioning the LLM temperature setting for this, if not already mentioned). Even otherwise, two very similar reviews can have different concepts chosen for explanation, with no theoretical guarantee on the LLM choosing similar predicates. This lack of control makes the explanation less robust. In theory, changing a token (may be an adversarially crafted one in practice) can change the entire explanation, as the choice of predicates is left to the LLM. One cat may be identified by its ears and a similar one by its eyes. Can you propose an experiment to study this robustness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for your feedback.\\n\\nWe have conducted the experiment to validate the faithfulness of LLM perturbation in our text experiments. Specifically, we test whether Llama-3.1, the LLM used in our experiments, can generate sentences with the expected concepts. The results show that Llama-3.1 generates sentences as expected with over 99% accuracy.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thanks for your review!\\n\\nFirst of all, we would like to clarify that we propose the framework ConLUX to automatically extract high-level concepts and incorporate them into existing local model-agnostic explanation techniques to provide concept-based explanations. To achieve this, we make the following contributions:\\n- We introduce a unified approach to augment various existing local model-agnostic explanation techniques to provide concept-based explanations. This requires the ability to automatically extract concepts and perform bidirectional mapping between concept level and feature level.\\n- We find that large pre-trained models can be used to extract concepts and perform bidirectional mapping between concept level and feature level. 
Specifically, existing works (Ludan et al., 2023[1]; Sun et al., 2023[2]) have utilized large models to extract concepts and conduct feature-to-concept mapping (i.e., determining whether an instance satisfies a specific concept) for specific tasks. We generalize these findings and show that large pre-trained models can also perform concept-to-feature mapping (i.e., generating samples based on changes in concept-level information), thus enabling the entire workflow to be executed.\\n\\nWe validate the feasibility of ConLUX by two instantiations for image and text data. The results show that the current instantiations of ConLUX achieve state-of-the-art performance. \\nOur focus is not on identifying the best pre-trained models or prompts. Instead, ConLUX is designed to be instantiated with any large pre-trained models capable of performing bi-directional mappings. As large models and prompts continue to advance, the explanations provided by ConLUX will naturally improve alongside these developments.\", \"our_responses_to_your_specific_concerns_are_as_follows\": \"**Alignment of concepts and human-understandable representations:** \\nIn both our text and image experiments, the discovered concepts are proved to be human-understandable. We extract concepts by workflows similar to those used in previous methods: TBM (Ludan et al., 2023[1]) for text and EAC (Sun et al., 2023[2]) for images. These prior works have validated that the generated concepts are understandable through human evaluations.\\n\\n**Concept level perturbation:** \\nFor image data, we perform concept-level perturbation by masking, which is naturally faithful. For text data, we propose to perform concept-level perturbation by an LLM, and we admit it's worth verifying the fidelity of concept-level perturbation. As Ludan et al. (2023)[1] have shown that LLMs can check whether an instance satisfies a given concept, we are conducting an experiment to evaluate the consistency of LLM-based perturbations. 
Specifically, we use LLMs to verify whether the perturbation alters the concept as intended. We will report these results in a subsequent update.\\n\\n**Explanation of the choice of predicates:** \\nWe would like to argue that it is unnecessary for an XAI technique to be self-explanable. While it is possible that a large pre-trained model may select suboptimal predicates, Sun et al. (2023)[2] have demonstrated through human evaluations that, on average, the concepts discovered by pre-trained models are more understandable to users than feature-level predicates. \\n\\n\\n**C-insertion and Deletion:** \\nWe provide the results of the deletion experiments in Figure 5 and Table 2.\\n\\n**Robustness to the prompt:** \\nThe robustness to prompts is a property of the pre-trained models themselves rather than our framework. Our work does not aim to improve the large pre-trained models or prompts. Instead, as large models and prompts continue to advance, the explanations generated by our framework will also naturally improve.\\n\\n**More experiments:** \\nWe are conducting an experiment on a text-generation task and incorporating additional images into our image classification task. We will report these results in a subsequent update.\\n\\n**Baseline comparison:** \\nIn our paper, we compare ConLUX with TBM, a bottleneck model for the sentiment analysis task. Specifically, our framework focuses on generating concept-level explanations locally, without requiring internal information about the target models. We compare our framework with two state-of-the-art task-specific techniques for providing explanations for black-box target models: TBM and EAC. Please refer to Section 4 and Table 3 for further details.\\n\\n**L44:** \\nWe apologize that we may not fully understand the issue you mentioned. We'd appreciate it if you can explain more. Additionally, we have added a citation in our edited version.\"}" ] }
0qfIhtel8N
Liquid Dino: A Multi-Task Neural Network towards Autonomous Driving
[ "Georgios Markos Chatziloizos", "Andrea Ancora", "Andrew I. Comport", "barat christian" ]
In the realm of advanced driver-assistance systems (ADAS) and autonomous driving, the accurate classification of driver emotions, behaviors and contextual environments is critical for enhancing vehicle safety and user experience. This study investigates the performance of various neural network architectures across four distinct classification tasks: Emotion Recognition, Driver Behavior Recognition, Scene-Centric Context Recognition and Vehicle-Based Context Recognition, all of which incorporate visual information captured through cameras. By utilizing camera-based data, we aim to evaluate how different neural architectures handle visual inputs in these diverse contexts, thereby exploring the robustness and generalization of each model to different real-world scenarios. We compare the performance of several state-of-the-art models and introduce a novel contribution that significantly improves classification accuracies in all areas. Our results demonstrate that the proposed Liquid Dino architecture achieves an overall average accuracy of 83.79\%, outperforming other models in recognizing driver emotions, behaviors and contextual scenarios. These enhancements underscore the potential of our proposed methods in contributing to the development of more reliable and responsive ADAS.
[ "Autonomous Driving", "Multi-task Learning", "Advanced Driver-Assistance Systems (ADAS)", "Deep Learning" ]
https://openreview.net/pdf?id=0qfIhtel8N
https://openreview.net/forum?id=0qfIhtel8N
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fXKRpVIWEk", "N9CdRIIAm6", "GveNbv3JRH", "5JMOMcygMB" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730655803668, 1730472615759, 1730706323185, 1732384309629 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11339/Reviewer_XKcX" ], [ "ICLR.cc/2025/Conference/Submission11339/Reviewer_jhZy" ], [ "ICLR.cc/2025/Conference/Submission11339/Reviewer_SNzt" ], [ "ICLR.cc/2025/Conference/Submission11339/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors propose Liquid DINO to achieve more advanced driver-assistance multi-task learning. The proposed approach shows better performance on the Places365 dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to understand and follow.\\n\\n2. The tasks the authors focus on are very important to the community.\", \"weaknesses\": \"1. In the introduction section, the authors mentioned that we need to achieve a more advanced architecture to deal with the complex demands of modern driver monitoring systems. This motivation is not very convincing. The authors should explain how Liquid Dino can effectively overcome these complex challenges.\\n\\n\\n\\n2. The approach is only verified on one dataset. The generalizability of the model is doubtful. Could the authors extend this approach to video-based driver distracted behavior recognition datasets, e.g., Drive&Act and DAD?\\n\\na. Martin, M., Roitberg, A., Haurilet, M., Horne, M., Rei\\u00df, S., Voit, M., & Stiefelhagen, R. (2019). Drive&act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2801-2810).\\n\\nb. Kopuklu, O., Zheng, J., Xu, H., & Rigoll, G. (2021). Driver anomaly detection: A dataset and contrastive learning approach. 
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 91-100).\\n\\n\\n3. The performance of Liquid Dino is not very promising compared with the DINOv2 baseline. Therefore, the contribution of this proposed new method is doubtful.\\n\\n4. Lack of implementation details. The authors are suggested to add a separate section to introduce the implementation details.\\n\\n5. The proposed approach is not novel enough. It seems that the performance gain mainly comes from adding more layers for feature learning after the DINOv2 framework.\\n\\n6. Lack of failure case analysis. The authors are suggested to add some qualitative samples with failure cases to illustrate the limitations of the proposed approach.\", \"questions\": \"Here are the questions based on the provided weaknesses:\\n\\n1. In the introduction, the authors mention the need for an advanced architecture to address complex requirements of modern driver monitoring systems. Could the authors elaborate on how Liquid Dino specifically overcomes these challenges to strengthen this motivation?\\n\\n2. To verify the generalizability of the model, would the authors consider extending the approach to other video-based driver distraction behavior recognition datasets, such as Drive&Act (Martin et al., 2019) and DAD (Kopuklu et al., 2021)?\\n\\n3. Given that Liquid Dino\\u2019s performance does not show a marked improvement over the DINOv2 baseline, could the authors clarify the specific contributions of the proposed method that account for any observed gains?\\n\\n4. Could the authors add a separate section detailing the implementation to provide clearer insights into the architecture, training settings, and parameters used?\\n\\n5. The proposed approach appears to derive its performance improvement largely from additional feature learning layers following the DINOv2 framework. Could the authors clarify any novel aspects of Liquid Dino beyond adding layers?\\n\\n6. 
Could the authors provide qualitative examples of failure cases to illustrate the limitations of Liquid Dino, as this would help clarify areas where the approach may need improvement?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors develop a method named Liquid DiNO, which uses images to classify emotion recognition, driver behavior recognition, scene-centric context recognition, and vehicle-based context recognition. The framework consists of three parts: the first is DiNOv2, the second includes a CNN backbone, and the third is a CFC module. They experiment on a single dataset containing images from three external cameras and one internal camera. The results are presented in terms of accuracy, with the authors claiming that their proposed method performs well.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"They are trying to solve an important problem using just images.\", \"weaknesses\": \"The authors in this work attempt to classify specific driver behaviors using a complex framework. However, the paper requires substantial improvement to be considered for acceptance. Below are some of the main weaknesses:\\n\\n1. The related work section needs to be expanded to include relevant studies. In the third paragraph, fusion techniques are discussed, but this seems irrelevant as no data fusion is performed in this study.\\n\\n2. In Figure 1, the driver\\u2019s face is partially obscured by wires, and the eyes are not visible. How can meaningful features be learned with such images?\\n3. Why are all images combined into a single frame? Wouldn\\u2019t using weight-sharing in the encoder allow for a better representation of learning from the images?\\n\\n4. What role do the external camera images play in the classification task? 
Does using three external cameras improve the model\\u2019s performance, or would a single forward-facing camera suffice?\\n\\n5. The methodology section does not present a cohesive description of the framework. The parts are divided into unrelated sections, and the motivation for using the CFC module is unclear.\\n\\n6. What is the rationale for including a CNN backbone after DiNOv2?\\n\\n7. Table 1 is not discussed in the text, and its purpose is unclear.\\n\\n8. The authors only report accuracy as the evaluation metric. F1 score and AUC should be included to provide a more comprehensive assessment of the framework's performance.\\n9. There is no ablation study to support their design choices.\", \"questions\": \"1. The related work section needs to be expanded to include relevant studies. In the third paragraph, fusion techniques are discussed, but this seems irrelevant as no data fusion is performed in this study.\\n\\n2. In Figure 1, the driver\\u2019s face is partially obscured by wires, and the eyes are not visible. How can meaningful features be learned with such images?\\n3. Why are all images combined into a single frame? Wouldn\\u2019t using weight-sharing in the encoder allow for a better representation of learning from the images?\\n\\n4. What role do the external camera images play in the classification task? Does using three external cameras improve the model\\u2019s performance, or would a single forward-facing camera suffice?\\n\\n5. The methodology section does not present a cohesive description of the framework. The parts are divided into unrelated sections, and the motivation for using the CFC module is unclear.\\n\\n6. What is the rationale for including a CNN backbone after DiNOv2?\\n\\n7. Table 1 is not discussed in the text, and its purpose is unclear.\\n\\n8. The authors only report accuracy as the evaluation metric. F1 score and AUC should be included to provide a more comprehensive assessment of the framework's performance.\\n9. 
There is no ablation study to support their design choices. \\n10. It would be good for the reader if you provided layer-wise details of your framework.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents \\\"Liquid Dino,\\\" a multi-task neural network designed to improve the accuracy of advanced driver-assistance systems (ADAS) by classifying various driver states and contextual driving scenarios. The model addresses four classification tasks: Emotion Recognition, Driver Behaviour Recognition, Scene-Centric Context Recognition, and Vehicle-Based Context Recognition by using the visual data captured through multiple cameras both inside and outside the vehicle. The AIDE dataset, a multi-view, multi-modal dataset specifically crafted for autonomous driving research, is used to evaluate Liquid Dino against various state-of-the-art models.\\n\\nThe model consists of three components: Convolutional Neural Networks (CNNs) for spatial feature extraction, DINOv2 for self-supervised learning, and Closed-form Continuous-Time Neural Networks (CFC) for temporal processing. The architecture is designed to handle diverse data while maintaining high efficiency, with an overall average accuracy of 83.79%, outperforming other models, especially in Traffic Context Recognition (95.03%) and Vehicle Condition Recognition (84.76%).\\n\\nThe presented approach integrates existing methods, thus limiting its novelty. Moreover, it is evaluated using one dataset, so the justification/conclusion is questionable for a conference like ICLR. However, it promises that the model performs well within the real-time requirements of automotive systems, making it a promising approach for real-world ADAS applications. 
Additionally, the model's ability to capture driver emotions, behaviours, and contextual driving environments enhances road safety and driver experience, providing valuable contributions to the development of more reliable and responsive autonomous driving systems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The \\\"Liquid Dino\\\" approach is well thought out, particularly in its use of multi-task learning for autonomous driving. The model integrates Convolutional Neural Networks (CNNs), DINOv2 (self-supervised learning), and Closed-Form Continuous-Time Neural Networks (CFC) to handle spatial, temporal, and unlabeled data. This hybrid architecture incorporates each component's unique strengths, creating a robust model well-suited for the complex, multi-modal requirements of autonomous driving.\\n\\nLiquid Dino tackles four tasks simultaneously: Emotion Recognition, Driver Behavior Recognition, Scene-Centric Context Recognition, and Vehicle-Based Context Recognition. It reflects the diverse demands of real-world driving. Validated on the AIDE dataset, a multi-view, multi-modal dataset that captures rich contextual data under realistic driving conditions, the model demonstrates strong generalization potential. This comprehensive dataset strengthens the relevance and impact of Liquid Dino\\u2019s experimental results.\\n\\nAchieving an overall accuracy of 83.79% and excelling particularly in Traffic Context Recognition (95.03%) and Vehicle Condition Recognition (84.76%), Liquid Dino outperforms existing models despite having an increased inference time of 8 milliseconds per frame. 
Its frame-by-frame processing, which avoids the need for sequence-based inputs, contributes to immediate, continuous monitoring, ideal for high-stakes ADAS applications.\\n\\nThe model's capacity to integrate driver behaviour, emotion, and environmental context enhances driver safety and experience, supporting the development of more responsive ADAS and autonomous vehicles.\", \"weaknesses\": \"The main weakness is the novelty. The approach is the integration of the existing approaches and looks like a technical paper.\\n\\nThe second weakness is the evaluation of the proposed approach on a single dataset does not justify its suitability. There are many datasets available for driver behaviour analysis. The list of datasets can be found in the AIDE paper (Yang et al., ICCV 2023). At least an evaluation on another one or two datasets would have made this paper stronger. \\n\\nIn the Introduction (Section 1), the term \\\"Liquid\\\" is introduced, suggesting the use of Liquid Neural Networks (LNNs). However, the methodology primarily combines CNNs, DINOv2, and CFCs, with minimal discussion on the specific role or implementation of LNNs. For a more accurate portrayal of the architecture\\u2019s functionality, further elaboration on LNN integration would be beneficial. This could be addressed by referencing the study, \\\"Liquid Neural Networks: A Novel Approach to Dynamic Information Processing\\\" (https://ieeexplore.ieee.org/document/10466162), which explores LNNs' capabilities in handling dynamic data.\\n\\nIn the Experiments (Section 4), the model's performance is evaluated across tasks that have varying temporal dynamics. However, there is limited discussion on how the model adapts to these differences across tasks, which is crucial in applications where time-dependent accuracy is essential. 
Explaining the confusion matrix scores (Figure 3) for each task in detail would increase the explainability of the model.\\n\\nThe Results (Section 5) provide inference times, which indicate the model\\u2019s performance in real-time scenarios, but omit details on the computational resources required during training and deployment. This information is critical for understanding the model's feasibility in resource-constrained environments. \\n\\nIn the Discussion (Section 6), the model's performance is evaluated using the AIDE dataset. However, the paper lacks exploration into how well the model generalizes to other environmental conditions, such as different weather patterns or diverse geographic settings. Testing across various datasets or environments could address potential generalization issues.\\n\\nThe diagram (Figure 2) lacks detail in DINOv2 and CFC modules, with unlabelled arrows between them and CNN, making data transformations unclear. The CNN module omits critical parameters (e.g., kernel size, stride), while multi-view inputs are positioned too far from DINOv2. Missing data dimensions hinder understanding of transformations, and inconsistent module details (e.g., CFC as a single block) disrupt coherence. The absence of loss function indicators and missing representation of any feature fusion or attention mechanisms further limit completeness.\\n\\nLastly, in Future Work (Section 8), potential model enhancements are mentioned, but there is a lack of specific details on scalability. Addressing how the architecture can expand to include additional tasks would clarify its future applicability.\", \"questions\": \"Justification on the novelty of the paper\\n\\nExperimental evaluation of another one or two datasets listed in the AIDE paper (Yang et al., ICCV 2023).\\n\\nIn the Introduction (Section 1), Could you clarify how the principles of Liquid Neural Networks are integrated into the model architecture? 
Specifically, how do these principles influence the adaptability and efficiency of the model in processing temporal data?\\n\\nIn Experiments (Section 4), how does the model handle varying temporal dynamics across the four tasks, especially in high-stakes contexts such as emotion and behaviour recognition? Are there specific adaptations for managing differences in the timing and frequency of events in each classification task?\\n\\nGiven the complexity of the multi-task model, are there specific measures or techniques implemented to make the decision-making process interpretable? For safety-critical applications like ADAS, understanding how the model arrives at its predictions is essential for building trust and accountability.\\n\\nWhat future enhancements do you envision for Liquid Dino? Specifically, are there plans to scale the architecture for additional tasks within autonomous driving, such as prediction and planning? How would the current model adapt to these expansions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not applicable\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0qexTTfnmH
ME-LORA: MEMORY-EFFICIENT BAYESIAN LOW- RANK ADAPTATION FOR LARGE LANGUAGE MODELS
[ "Xulin Huang", "Hehuan Cao", "Jingyan Sui", "Siyuan Tao", "Dongbo Bu" ]
Bayesian Low-Rank Adaptation (LoRA) has shown excellent performance in reducing the overconfidence of inference by large language models, as it can accurately quantify the inference uncertainty. However, the general Bayesian LoRA technique requires huge memory as it fine-tunes three low-rank matrices of large size: two matrices have size $n\times r$ and the other has size $r\times m$, where $r$ denotes the rank, and $n, m$ denote the sizes of the input and output, respectively. The large amount of memory required by this technique precludes its practical application, especially for cases with long input or output. Here, we propose a memory-efficient Bayesian LoRA technique (called Me-LoRA) that needs only two low-rank matrices plus two small matrices with size of only $r\times r$. The key idea of our approach is that we introduce a small matrix (with size $r\times r$) to describe the variance estimates required by Bayesian LoRA, which is calculated through sampling two other small matrices. Compared with the general Bayesian LoRA technique, our approach reduces the memory requirement by nearly $\frac{1}{3}$ as the rank $r$ is generally very small. Experimental results using both LlaMA-7B and LlaMA-13B models on representative data sets suggest that our approach achieves the same performance as the original Bayesian LoRA techniques and outperforms the existing approaches. In summary, the memory-efficient Bayesian LoRA presented in this study circumvents the challenge of high memory requirements and thus paves a new way to the practical application of Bayesian LoRA in cases with larger input and output sizes.
[ "Large Language Models", "Low-rank adaptation", "Bayesian estimation", "Fine-tune" ]
https://openreview.net/pdf?id=0qexTTfnmH
https://openreview.net/forum?id=0qexTTfnmH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tBL3Ybqvtz", "eEABUrxJ0c", "eC2AIGs0Da", "L1kAciPwUl", "Cgpo2qWTSt" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730359512314, 1730209066262, 1729810175155, 1732184173357, 1730504841547 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8899/Reviewer_mKt2" ], [ "ICLR.cc/2025/Conference/Submission8899/Reviewer_oTvV" ], [ "ICLR.cc/2025/Conference/Submission8899/Reviewer_XLS9" ], [ "ICLR.cc/2025/Conference/Submission8899/Authors" ], [ "ICLR.cc/2025/Conference/Submission8899/Reviewer_irBx" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores Bayesian Low-Rank Adaptation (LoRA), a method known to reduce overconfidence in inference when data is limited. The authors introduce a memory-efficient variant, Me-LoRA, by performing sampling on a small-scale intermediate matrix rather than the low-rank matrix directly. Experimental results with LLaMA2 models demonstrate that Me-LoRA maintains both effectiveness and efficiency compared to the original Bayesian LoRA framework.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The core idea of this paper is well-presented with a clear comparison against the original Bayesian LoRA framework.\\n\\n2. Comprehensive experiments demonstrate that Me-LoRA achieves a balance between effectiveness and efficiency when compared to state-of-the-art methods.\", \"weaknesses\": \"1. The motivation behind the research problem is not clearly presented. The submission lacks an explanation of when overconfidence occurs in LLM inference, why this issue is critical, and how such overconfidence impacts the model's responses. These questions should be properly addressed.\\n\\n2. Essentially, Me-LoRA is an efficient variant of BLoB and is supposed to replicate BLoB's performance with reduced resource demands. 
However, it appears to fall short in terms of ECE, a key metric for assessing overconfidence, when compared to BLoB. \\n\\n3. The computational cost comparison in Table 6 is confusing. The backbone model requires at least 13GB (LLaMA2-7B) or 21GB (LLaMA2-13B) GPU memory, yet the memory usage reported in Table 6 is significantly lower than that of the backbone model. Additionally, the rank used in the efficiency comparison is missing.\\n\\n4. This submission seems incomplete in both its content and presentation. Further revisions are recommended to enhance its clarity and comprehensiveness.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a somewhat more efficient approach for Bayesian LoRA in LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well written, and has extensive experiments demonstrating that their method performs reasonably well.\", \"weaknesses\": \"One weakness is in the odd presentation of Bayesian LoRA in LLMs. It would be far more preferable to present a historical overview, saying something like\\n\\n>Yang et al. 2023 [or other earlier work] introduced the notion of doing Bayesian inference over the low-rank adapters for fine-tuning LLMs. This had numerous advantages ... . However, Laplace inference, as used there, had disadvantages ... . These disadvantages motivated the introduction of BLoB, which uses VI. 
We build on BLoB ...\\n\\nMe-LoRA only does Bayesian inference over C, and does MAP over A and B, which will likely reduce the benefits you might see from a fully Bayesian approach, and make it resemble more closely a non-Bayesian approach.\", \"questions\": \"How much do we care about the relatively modest reductions in memory usage in Table 1, as compared to the very large memory cost of the model itself?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper is built upon a recent work, BLoB, which applies black-box variational inference on LoRA during fine-tuning. This paper suggests a new method, Me-LoRA, which improves upon BLOB in terms of parameter counting. In particular, instead of having a variational distribution directly over LoRA's A and B components, which has a total of 2 * (r * n + r * m) parameters, it introduces a new component C of shape r by r and performs VI on C instead of directly on A and B. As such, the total number of parameters is reduced to r * n + r * m + 2 * r * r, which reduces the parameter count significantly when r << n, as is the case for most LLMs. The paper then shows that the proposed approach performs comparably to / slightly better than the past approach BLOB, with a smaller parameter overhead.\\n\\nOverall, I find the method technically sound and the experiments fairly convincing; the proposed modification to the existing approach is well-motivated and can have some usefulness. 
However, the writing needs some significant improvement (see weaknesses).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is easy and technically sound.\", \"The closed-form computation of KL divergence is carefully derived.\", \"The experiments are conducted on a wide range of tasks, although mostly multiple choice QA problems, to demonstrate the effectiveness of the method.\", \"A reasonable amount of experiment details are provided for reproducibility (though code not provided along with the submission).\"], \"weaknesses\": [\"It would be nice if the references could be colored;\", \"End of line 111 is missing a parenthesis?\", \"The description of Eq.3 is incorrect, it is the KL divergence between the variational posterior and the prior, not the posterior. Also it's more common to call it ELBO rather than free energy in the Bayesian deep learning literature\", \"What is \\\\theta in Eq. (3)?\", \"The definition of q(C) at line 178 is confusing: If M is a matrix, then q(C) should be a matrix normal, how could it have an r by r matrix as the covariance?\", \"The math language is also not consistent: Eq (4) has W; in \\\\mathcal{N}, but q(C) does not.\", \"Having a randomized prior is extremely weird and non-standard (Sec 3.3 , Eq. 8), it's also not stated what U(0, 1) is, which I guess is a uniform distribution.\", \"It would be nice if the parameter count section could be summarized into a table for easier comparison.\", \"Line 267, ' ..saved model checkpoints every 100 steps,...' it's nice to have experiment details presented but I don't think this piece of information is necessary.\", \"Line 398, the authors mentioned Flipout, but did not provide a reference or any explanation.\", \"Tables 4 and 5 being put at the bottom of the related work section is weird.\"], \"questions\": [\"It should be clarified that A and B are not variational parameters, they do not fit into the ELBO defined in Eq. (3). 
They are often referred to as \\\"model hyper-parameters\\\".\", \"Section 3.1 demonstrates the induced covariance matrix over the full-weight matrices, but does not go deeper into:\", \"1. Is that covariance matrix diagonal, low-rank, or of any structure;\", \"2. Why shall we care about this quantity? Does the covariance matrix help us, e.g., better understand the landscape of LoRA fine-tuning, etc.?\", \"Have the authors considered using Monte Carlo estimation to estimate the KL divergence rather than using the closed-form solution?\", \"When performing VI, do the authors additionally utilize regularization such as weight decay or L2 regularization?\", \"Why is the prior set on the full model weights rather than just on the low-rank components (Eq. 5)?\", \"Why is ensemble worse than MAP in terms of accuracy? Does it mean ensembling could potentially harm the performance?\", \"Isn't a weight decay of 1e2 too large?\", \"Why is the proposed method suboptimal in ECE?\", \"How does the proposed method compare with Rank-1 BNN [1]?\", \"How many Monte Carlo samples are used for estimating the Bayesian model averaging?\", \"Why are the numbers in the first column of Table 3 different from the numbers in Table 2?\", \"Why do we need the random noise epsilon on the prior? The results in Table 4 seem to be mixed.\", \"What benefits can we get from this approach, if we are in an open-ended generation setting? 
A huge body of LLM applications consists of open-ended generation tasks such as translation, summarization, etc.\", \"[1] Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors propose a small variant to the recent LoRA variant Bayesian Low-Rank Adaptation by Backpropagation (BLoB). BLoB adapts a variational Bayesian setting wherein the LoRA parameters of the A matrix are parameterized by Gaussian priors. Subsequently, BLoB makes the evaluation of the variational objective (i.e., the likelihood regularized by a KL-divergence term) efficient in practice by deriving the KL-divergence under assumed Gaussian priors, as well as incorporating flipout into LoRA for efficient sampling.\\n\\nThe introduced method, called ME-LoRA, near-directly adapts the BLoB framework. The main technical difference is the use of a full matrix C of rank r (the lower dimension), which acts as an intermediate matrix which is multiplied between the LoRA B and A matrices, i.e., W = W_0 + BCA. While BLoB includes one Gaussian per each value of the A matrix (leading to two learnable matrices of size r x n representing the Gaussians' means and variances), ME-LoRA instead utilizes two learnable matrices of size r x r. The authors attempt to reproduce both the BLoB framework and the experimental setup from the BLoB paper, with ME-LoRA performing favorably on accuracy and negative likelihood tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method is straightforward and the computational savings, compared to BLoB, are immediately obvious. 
It is also commendable that the authors have undertaken the task of reproducing the methods and experiments from the original BLoB paper.\", \"weaknesses\": \"The proposed changes to BLoB are small contributions, which limit the potential impact of the work. There are also several important concerns, in particular:\\n- In BLOB, for A \\\\in R ^ {r \\\\cross n}, each element of A is assumed to be an independent Gaussian, which is why the joint density is a product of the Gaussians. However, in your setup (line 176):\\n> C \\u2208 Rr\\u00d7r is Gaussian with mean M and standard deviation \\u2126, denoted as q(C) \\u223c N (M, \\u2126),\\n\\nwhich would mean the distribution on line 177, i.e., q(Q) \\\\sim N(MA, \\\\Sigma | A|), is incorrect. For Q=CA, it should be\\nq(Q) \\\\sim N(MA, A^T \\\\Sigma A). Why is there this discrepancy, and what does this mean for the results?\\n- Most importantly, there are concerns regarding the degree to which the authors were able to faithfully reproduce both BLoB and the experiments of the BLoB paper. Firstly, the results in Table 2 are significantly different from those in the BLoB paper (in fact, BLoB is no longer state of the art on the majority of tasks). Secondly, key ingredients of the BLoB paper did not work under reimplementation, as noted on lines 305-301:\\n> We re-implemented LAP and applied it to the MAP checkpoints. For BLoB, since no open-source\\ncode was available, we replicated the approach based on the description in the paper. To ensure a\\nfair comparison, we made appropriate parameter adjustments. BLoB was only sampled once during\\neach training, validation, and testing stage. The Flipout sampling technique and KL regularization\\nfrom the original BLoB paper were not used in our replication, as they did not perform well. 
Instead,\\nwe applied the KL regularization method from Me-LoRA.\\n\\nAs previously noted, it is commendable that the authors sought to reimplement the results from the BLoB paper, although the correctness of the reimplementation is a major concern. With the release of the BLoB code, I would hope the authors could better reproduce the experimental setup from that paper and better incorporate their method into the official BLoB source (with Flipout and KL regularization).\\nOfficial BLoB source (I understand this was just uploaded recently, I hope this aids the authors in their future efforts): https://github.com/Wang-ML-Lab/bayesian-peft\", \"questions\": [\"Please see above for major concerns. Some minor comments and questions:\", \"In Section 3, it would be better to recap BLOB's LoRA framework, then discuss the changes introduced by ME-LoRA (i.e., what is currently Section 3.1).\", \"For the citations, please take a look at the ICLR template; citations should be in parentheses, e.g.,\", \"\\\"the citation should be in parenthesis using \\\\verb|\\\\citep{}| (as in ``Deep learning shows promise\", \"towards AI~\\\\citep{Bengio+chapter2007}.'').\\\" However, this is not the case for most such citations in the paper, e.g.:\", \"\\\"the posterior distribution of the model parameters is inferred rather than relying on point estimates Bishop &\", \"Nasrabadi (2006); Wang & Yeung (2020)\\\"\", \"Please define the vec(\\\\cdot) operator on line 181. Also on line 181, \\\\Sigma | A| should again be A^T \\\\Sigma A.\", \"\\\"Direct computation of the KL Divergence between the prior and posterior distributions of W is nontrivial. 
Direct computation of the KL Divergence between the prior and posterior distributions of W\", \"is non-trivial.\\\"\", \"\\\"3.2 EFFICIENT COMPUTATION OF FULL-WEIGHT KL DIVERGENCE\\\" <- This is Theorem 3.2 from the BLOB paper (the title is exactly the same as the title used therein).\", \"Lines 215-230: \\\"we adopt a strategy analogous to BLoB, where\\\" <- This is called the reparameterization trick\", \"\\\"However, with our proposed method, using such a simplistic prior variance can still lead to overfitting.\\\" <- Why? Are there experiments to demonstrate this? From the theoretical design of the KL-divergence in BLOB, it is also hard to justify directly adding noise to the standard deviation \\\\sigma_p (how could this arise given the Gaussian prior setup of the KL-divergence?).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not applicable\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0py3h7pops
Will the Inclusion of Generated Data Amplify Bias Across Generations in Future Image Classification Models?
[ "Zeliang Zhang", "Xin LIANG", "Mingqian Feng", "Susan Liang", "Chenliang Xu" ]
As the demand for high-quality training data escalates, researchers have increasingly turned to generative models to create synthetic data, addressing data scarcity and enabling continuous model improvement. However, reliance on self-generated data introduces a critical question: \textit{Will this practice amplify bias in future models?} While most research has focused on overall performance, the impact on model bias, particularly subgroup bias, remains underexplored. In this work, we investigate the effects of the generated data on image classification tasks, with a specific focus on bias. We develop a practical simulation environment that integrates a self-consuming loop, where the generative model and classification model are trained synergistically. Hundreds of experiments are conducted on Colorized MNIST, CIFAR-20/100, and Hard ImageNet datasets to reveal changes in fairness metrics across generations. In addition, we provide a conjecture to explain the bias dynamics when training models on continuously augmented datasets across generations. Our findings contribute to the ongoing debate on the implications of synthetic data for fairness in real-world applications.
[ "model bias", "image classification" ]
Reject
https://openreview.net/pdf?id=0py3h7pops
https://openreview.net/forum?id=0py3h7pops
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w46bwYEi0u", "mIVTD43Gqt", "m4UM0OECN6", "kFtRnXyA4i", "j4FWN8831l", "fXJyZVpmpA", "eele5CywFO", "di0bQm27SW", "YDUPXCEUCz", "Svru3OWDB3", "QMECjCKw01", "J7ZL5CrrzU", "B1cgFROnWw", "A8WdqNE7Qy", "6DPRPPCMu1", "6452pjgDXg", "4tjbysQ6r2" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732574763631, 1730697190305, 1733075072205, 1733004875217, 1732810567019, 1737523417995, 1732574743430, 1730207529563, 1732574562113, 1733073392456, 1730694786899, 1734027232680, 1732574722869, 1732574678036, 1730283885403, 1732769200990, 1732574602855 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_QLxJ" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_Bwfa" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_m2oW" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_Bwfa" ], [ "ICLR.cc/2025/Conference/Submission844/Area_Chair_zDFq" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_hfuo" ], [ "ICLR.cc/2025/Conference/Submission844/Reviewer_hfuo" ], [ "ICLR.cc/2025/Conference/Submission844/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**Questions:**\\n\\n**[1] The experiments: If possible, I would like to see the results from other standard datasets or 
more subgroups for the dataset used.**\\n\\n\\n**A**: As you suggested, we conducted experiments on the Living-17 dataset, which consists of 17 superclasses, each containing 4 subclasses. We utilized the Stable Diffusion 1.5 model as the base generative framework to study the inheritance phenomenon of bias across generations. The results are presented in Appendix A.\\n\\n\\n\\n**[2] Citations: Some original works in the introduction section and later are missing citations. If the authors think they have cited all the necessary work, please put that in a rebuttal.**\\n\\n**A**: Thanks for your suggestion. We have cited more work on using generated data to help model training in our revision. It would be appreciated if you could suggest additional work for further discussion. \\n\\n\\n\\n**[3] Conclusion and future work: I think it can be improved or extended to connect with the problem and story.**\\n\\n\\n**A**: Thank you for your suggestion. Indeed, there are several models that involve self-consumption loops, such as Stable Diffusion, LLaMA, LLaVA, and Nemotron. Notably, Nemotron is trained on a dataset consisting of more than 98% synthetic data. While the inclusion of synthetic data in the training process has shown some benefits, it is important to consider the potential harms, especially concerning model biases. This concern has prompted us to investigate the impact of generated data on model performance and bias, particularly as the number of self-consumption loops increases.\\n\\nWe plan to expand on this in our future work by further exploring how the inclusion of synthetic data in iterative training cycles influences not only model performance but also the ethical implications of model biases. This line of inquiry could provide valuable insights for future AI model development.\"}", "{\"summary\": \"This paper investigates the bias implications of incorporating generated data from generative models into training downstream tasks. 
The authors propose an iterative pipeline that repeatedly uses new generators to create additional images that enhance training. Their analysis spans three image datasets of varying complexity, exploring both performance improvements and bias impacts on downstream classification tasks. Bias is examined across different subgroups, covering both single and multiple bias interactions. Through this setup, they observe mixed trends in performance and bias effects across datasets of different complexities and model capacities. The paper concludes with a high-level discussion on potential root causes behind these varied results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper addresses a highly relevant topic by examining the implications of generated data on bias, which is essential for advancing our understanding of the gaps between generated and real data.\", \"The iterative pipeline for incorporating generated data closely resembles real-world applications. By using datasets of varying complexity and models with different capacities, the study effectively explores different aspects of the problem, enhancing the generalizability of the findings.\", \"The study provides noteworthy observations with good experimentation support, such as the low correlation between dataset bias and resulting bias effects, the higher susceptibility of pre-trained models to integration bias, and insights into how different factors affect bias across datasets and models.\"], \"weaknesses\": [\"The paper presents mixed findings across datasets and models but does not provide in-depth explanations for these variations. While section 5 includes some discussion on the root causes of observed behaviors, this analysis remains at a high-level and is not well-supported or directly connected to the experiments in earlier sections. 
The analysis would be more convincing with clearer connections to the results, reinforcing the paper\\u2019s claims with evidence from the experiments.\", \"In table 1, FID fluctuates for Color-MNIST and CIFAR10 after several rounds of data generation, while it increases substantially for HARD-ImageNet starting from the second iteration. This trend suggests a marked difference in data quality for HARD-ImageNet compared to the other datasets. However, the subsequent experiments focus primarily on how generated data impacts downstream performance and bias without addressing how this observed FID trend might influence these results. A discussion of how data quality (assessed by FID) could affect interpretations across the three datasets would enhance the clarity of the findings.\", \"Some methodology details are lacking, making it challenging to fully understand and replicate the study. For example, in section 3.2, there is limited information on the design of the human study, the impact of expert-guided filtering on image quality, and the specific r% used.\", \"The paper would benefit from some recommendation on the usage of generated data in generative models or downstream tasks with the insights from the experiments.\"], \"questions\": \"1. Can you provide more detail on the human study in section 3.2, and the r% used?\\n1. What is the impact of image quality from the expert-guided filtering in section 3.2? \\n1. How well do the findings generalize to other datasets? For example, section 4.2 showed dataset bias does not amplify model bias. 
Do authors expect that to hold for other datasets, too?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your careful review and suggestion on improving our work:)\", \"comment\": \"**To all reviewers:**\", \"we_sincerely_thank_all_reviewers_for_their_valuable_feedback_and_for_recognizing_the_merits_of_our_work\": \"1. Our studied problem and the proposed framework are novel, important, and practical. (QLxJ, Bwfa, hfuo, m2oW). \\n2. We conduct extensive experiments to answer the studied problem and support our claim. (QLxJ, Bwfa) \\n\\nIn our revision, we have made the following updates to answer the reviewers's major concerns: \\n\\n- **More experiments on ImageNet**: Details are provided in Appendix A. \\n- **Examples of generated images across generations**: Details are provided in Appendix B. \\n- **Detailed expert-guidance filtering**: Details are provided in Appendix C. \\n- **Improved Clarity**: We revise both illustration figures and their captions for better clarity. \\n- **Typos Corrected**: We fix all identified typographical errors. \\n\\nDuring the discussion, we sincerely appreciate the reviewers' suggestion to further broaden the scope of the studied problem. However, **within the limited scope of a conference paper, we have chosen to focus on the most general and widely used task under a standard setting for concept verification**. To ensure reliable verification, **all results are conducted multiple times to minimize randomness**.\\n\\nWe agree that it would be valuable to explore the impact of generated data on additional tasks, such as transfer learning, domain adaptation, and model robustness. 
We leave these investigations for future work, aiming to inspire more researchers to delve into this problem and pave the way for **leveraging academic advancements to drive progress in industry practices**.\\n\\nWe hope these updates address your concerns and further strengthen the contributions of our work. Thank you again for your thoughtful reviews and support!\"}", "{\"comment\": \"Thanks to the authors for the response and for addressing the questions. I agree and appreciate the effort to draw attention to the influence of generated data in the community, which is indeed an important and under-explored topic. The idea of contriving the loop and the iterative testing is simple and reasonable. My primary concern, however, lies in the scalability and generality of the conclusions and experimental settings, which are also main parts of the paper. Given the practical nature of the idea, exploring more applicable and advanced settings would significantly enhance its impact. So I will maintain my score.\"}", "{\"comment\": \"Thank you very much for agreeing on the value of our work and improving the score.\\n \\nIn the past few weeks, we have continued to run more experiments with additional datasets and models. However, due to limited computational resources, achieving the scale we aim for remains challenging. \\n\\nWe hope this work will draw the community's attention and inspire more work to explore this setup further and investigate the pros and cons of generated data in self-consumption loops. By doing so, we can better understand its implications for fairness and contribute to the integration of generative models into real-world applications.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Questions:**\\n**[1] The authors reported the FID score for the augmented data from multiple generations in Table 1. 
It seems that the FID scores for unbiased colorized MNIST show a decreasing trend; biased colorized MNIST is more or less the same; CIFAR-20/100 decreases first and then increases; Hard ImageNet shows a sudden increase from 50.9 to 186.4. Can the author explain the inconsistent changes here? Are there any implications from the observations? Also, it would be better if the author could visualize the generated images from different generations to visually see the changes across generations.**\\n\\n\\n**A**: It is important to note that the generated data in the self-consuming loop not only influences the image classification model but also impacts the generative model itself. \\n\\nFor MNIST and CIFAR, we use a GAN model that is trained from scratch. In this case, the consistent inclusion of generated data does not drastically affect the generative model, as the clean data still constitutes the majority of the training dataset. This explains why the changes in FID scores for MNIST and CIFAR are relatively small across generations. \\n\\nIn contrast, for Hard ImageNet, we utilize the pre-trained Stable Diffusion model, where both the clean data and the generated data across generations differ significantly from the pre-training dataset. With an increasing number of generations, the impact of these distribution differences accumulates, leading to a significant increase in the FID score. This phenomenon highlights the sensitivity of pre-trained generative models to distribution shifts when operating in a self-consuming loop.\\n\\nWe believe this explains why the FID score changes for MNIST and CIFAR are small, while for Hard ImageNet, the changes are substantial. 
\\n\\nAs you suggested, we have visualized examples of generated images across different generations in Appendix B to provide a clearer understanding of the changes.\\n\\n\\n\\n**[2] The authors mentioned, \\u201cWe manually partition the original dataset into multiple subgroups, where subgroups within the same class share similar semantics.\\u201d Can the authors explain more clearly how they defined and constructed the subgroups for each dataset?**\\n\\n\\n**A:**\\n- For MNIST dataset, we randomly color each image with one of three colors. The original digit serves as the classification label, while the assigned color is treated as the subgroup label.\\n- For CIFAR-20/100 dataset, we refer to the CIFAR-100 dataset, as introduced by [a]. The 100 classes in CIFAR-100 are grouped into 20 superclasses, with each superclass containing 5 subclasses. The group information is provided in the CIFAR-100 section of [a]. In our study, we use the superclass as the classification label and the subclass as the subgroup label.\\n\\n[a] https://www.cs.toronto.edu/~kriz/cifar.html\"}", "{\"summary\": \"Great:\\nThe authors investigated an important question on a selected dataset. The future work can be extended to other hierarchical sub-groups and more datasets.\\nLimited benchmark results are apparent and convincing.\", \"missing\": \"It will be helpful to accept the claim with more extensive experiments and analysis.\\nAccording to me, citations to the original work/s are missing.\\nThe final section can be elaborated to cover the important findings that support the main problem.\", \"without_additions\": \"It's a strong acceptance for a poster.\\nWeak acceptance for the main track.\", \"explanation\": \"1. The experiments: If possible, I would like to see the results from other standard datasets or more subgroups for the dataset used.\\n2. Citations: Some original works in the introduction section and later are missing citations. 
If the authors think they have cited all the necessary work, please put that in a rebuttal.\\n3. Conclusion and future work: I think it can be improved or extended to connect with the problem and story.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors investigated an important question on a selected dataset. The future work can be extended to other hierarchical sub-groups and more datasets.\\nLimited benchmark results are apparent and convincing\", \"weaknesses\": \"It will be helpful to accept the claim with more extensive experiments and analysis.\\nAccording to me, citations to the original work/s are missing.\\nThe final section can be elaborated to cover the important findings that support the main problem.\", \"questions\": \"Look at weakness section to know what to improve.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Weaknesses:**\\n**[1] The paper presents mixed findings across datasets and models but does not provide in-depth explanations for these variations. While section 5 includes some discussion on the root causes of observed behaviors, this analysis remains at a high-level and is not well-supported or directly connected to the experiments in earlier sections. The analysis would be more convincing with clearer connections to the results, reinforcing the paper\\u2019s claims with evidence from the experiments.**\\n\\n**A:** Thank you for your suggestion. This work serves as an empirical study to translate real-world practices into a case study, investigating whether generated data can influence model bias across generations.\\n\\nWe include result analysis after each experiment and observe that there is no consistent pattern across different models and datasets. For this, we propose a hypothesis in the final section, considering multiple factors that may contribute to these findings. 
We hope our work inspires future research to delve deeper into this phenomenon.\\n\\n\\n\\n**[2] In table 1, FID fluctuates for Color-MNIST and CIFAR10 after several rounds of data generation, while it increases substantially for HARD-ImageNet starting from the second iteration. This trend suggests a marked difference in data quality for HARD-ImageNet compared to the other datasets. However, the subsequent experiments focus primarily on how generated data impacts downstream performance and bias without addressing how this observed FID trend might influence these results. A discussion of how data quality (assessed by FID) could affect interpretations across the three datasets would enhance the clarity of the findings.**\\n\\n\\n**A:** It is important to note that the generated data in the self-consuming loop not only influences the image classification model but also affects the generative model itself.\\n\\nFor MNIST and CIFAR, we employ a GAN model trained from scratch. In this scenario, the consistent inclusion of generated data does not drastically affect the generative model, as clean data still constitutes the majority of the training dataset. This explains why the FID scores for MNIST and CIFAR remain relatively stable across generations.\\n\\nIn contrast, for Hard ImageNet, we use the pre-trained Stable Diffusion model, where both the clean data and the generated data across generations differ significantly from the pre-training dataset. As the number of generations increases, these distribution differences accumulate, resulting in a substantial increase in the FID score. This phenomenon underscores the sensitivity of pre-trained generative models to distribution shifts in a self-consuming loop.\\n\\nThe impact of generated data on model bias across generations depends on several factors, including the generative model's performance and the learning capacity of downstream models. 
While the generative model may produce more data for certain subgroups, the quality of the data\\u2014reflected in the FID score\\u2014ultimately determines whether the performance of these subgroups improves or degrades.\\n\\n\\n**[3] Some methodology details are lacking, making it challenging to fully understand and replicate the study. For example, in section 3.2, there is limited information on the design of the human study, the impact of expert-guided filtering on image quality, and the specific r% used. The paper would benefit from some recommendation on the usage of generated data in generative models or downstream tasks with the insights from the experiments.**\\n\\n**A:** We will open-source all code, models, and data to facilitate future research. The data filtering process is detailed in the questions section. In this study, we observe that the ratio of generated data used for data augmentation must be carefully considered to mitigate model crashes caused by distribution disparities between generated and real-world data. Additionally, it is worth noting that for a given model and dataset, the trend in model bias changes is usually consistent. This consistency suggests that we can adjust the mixup ratio by observing a small number of iterations and dynamically adapting our augmentation strategy.\"}", "{\"comment\": \"Thank you for your response and suggestions! We completely agree that adding experiments on fine-grained settings (e.g., loss functions, downstream tasks) would further enrich the paper.\\n\\nHowever, **within the limited scope of a conference paper**, we have chosen to **focus on a more general setting on the most widely used task for concept verification**: the standard image classification task on standard image datasets using the cross-entropy loss. Specifically, we employ popular generative models (GAN and Stable Diffusion) to enhance the training process during the self-consuming loop. 
This approach allows us to examine the impact of generated data on model performance in general, aligning closely with the central theme of our paper's title.\\n\\n\\nIt is undoubtedly valuable to explore scalability across other downstream tasks, such as transfer learning, domain adaptation, and the impact on model robustness. **We leave this broader investigation to future work, aiming to inspire more researchers to bridge the gap between industry practices and academic research while fostering mutual advancements.**\\n\\nWe hope this addresses your concern further.\"}", "{\"summary\": \"With the growing prevalence of generative models, the authors raised the concern regarding model bias, particularly in the context of generation and retraining cycles. Then the authors developed a simulation environment and conducted extensive experiments on image classification datasets to reveal how fairness metrics evolve across successive generations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The scenario proposed by the authors is highly relevant, as synthetic data is increasingly shared online and integrated into various domains. More studies on how the synthetic data will affect model training in generations will be beneficial for the research community.\\n\\nS2. The experiments on the proposed dataset are extensive, e.g., w/ or w/o biased initialization, different base models.\", \"weaknesses\": \"W1. Lack of experiments on the choice of generative models. Various generative models can differ in behavior, the choice of model likely impacts sample quality and influences the outcomes of subsequent studies.\\n\\nW2. The motivation of the paper is on future image classification and the role synthetic data plays in it. 
With foundational models playing a dominant role, integrating settings like synthetic data for transfer learning in classification would strengthen the paper, going beyond the current base case that may lack scalability.\\n\\nW3. The experiments are mainly targeting the model bias within a self-consuming loop in the image classification domain. However, the conclusions/observations are not significant.\", \"questions\": \"Q1. How much synthetic data is added, or what is the ratio p? Is there a rationale behind the choice of p?\\n\\nQ2. Are there any experiments or preliminary results on tasks with a larger number of classes?\\n\\nQ3. How is CLIP used in filtering? Specifically, is it based on the similarity score between label texts and images?\\n\\nQ4. Would different losses lead to varying results?\\n\\nQ5. Any insights on how the conclusions could generalize to other tasks or other modalities?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
For these reasons, I am recommending rejection.\", \"additional_comments_on_reviewer_discussion\": \"The main points of improvement raised by reviewers were: small scale models and datasets, as well as lack of variety in modalities and model types; inconsistent behaviours across experiments which are not sufficiently explained; broad conclusions that are not sufficiently supported by evidence.\\n\\nThe authors did not manage to satisfy these criticisms in the discussion. They did not scale up their experiments or diversify the modalities and model types, which is admittedly a substantial ask for a rebuttal period, but still it points to a need to revise the work on a more holistic scale. The inconsistent results between experiments were not explained.\\n\\nI want to encourage the authors that the topic they are working on is important and can have impact on the field, but the current work is not ready for publication. Please take a close look at the feedback from the reviewers and revisit the overall structure and objectives of your experiments to find a path forward in revising your work.\"}", "{\"comment\": \"**Weaknesses:**\\n\\n**[1] The paper mostly conducts experiments on three datasets. Among the three datasets, Colourized-MNIST and CIFAR20/100 are very small datasets in terms of the resolutions and the number of data and classes compared to the existing image data. Moreover, the largest model used in the paper is ResNet50, which is a relatively small network when compared to SOTA models like ViT. This raises the concern of whether the observations from the experiments are still valid in realistic scenarios with large datasets and models.**\\n\\n**A:** On small datasets, CNN-based models are more suitable as ViT-based models tend to suffer from severe overfitting issues without large-scale datasets. 
For this reason, our primary analysis focuses on CNN-based models.\\n\\nAs you suggested, we conducted additional experiments on the Breeds dataset, which is organized based on subgroup connections identified by WordNet. Additionally, we evaluated the performance of ViT models. The results are presented in Appendix A.\\n\\n\\n**[2] Some experimental results are hard to understand.**\\n- **baseline performance.** To my understanding, CIFAR20/100 has a smaller number of classes and should be an easier dataset to classify when compared to CIFAR100. However, the baseline performance of ResNet50 on CIFAR100 is around 80%; on CIFAR20/100 is around 50%.\\n- **The trends are inconsistent between models.** For example, in Fig. 6(a), ResNet50 and LeNet show an opposite trend in Equal Opportunity and Disparate Impact but the same trend in average accuracy and Maximal Disparity; VGG-19 remains unchanged for the tested metrics.\\n- **The trends are inconsistent between datasets.** For example, comparing the ResNet50 baseline between Fig 5(a) on CIFAR-20/100 and 6(a) on CIFAR-100. Even though the datasets are similar, the ResNet50 baseline shows a totally different trend for the Equal Opportunity, Disparate Impact, and Maximal Disparity metrics.\\n- **Similar discrepancies are noted in other sections of the paper.** The experimental findings do not appear to explain how the generated data affects model bias in general.\\n\\n**A:** \\n\\n- **Baseline performance:** \\n The CIFAR20/100 dataset consists of 20 superclasses, each containing 5 subclasses. Due to its low resolution and significant distribution disparity between subclasses within a superclass, it is challenging for models to learn generalized representations from this dataset. This results in lower classification accuracy compared to the original CIFAR100 dataset. Similar experimental results can be found in [a]. 
\\n\\n- **Inconsistent trends between models:** \\n The impact of generated data varies across models due to differences in architecture and model capacity. On one hand, generated data can enhance model training and improve performance. On the other hand, inherent explicit biases in generated data may harm model performance. Learning from data augmented with generated samples requires balancing these two effects. Variations in model architecture and capacity result in inconsistent trends across models and datasets.\\n\\n- **Inconsistent trends between datasets:** \\n The trends differ between datasets like CIFAR20/100 and CIFAR100 due to differences in data distribution and subclass structures. These disparities influence how models learn and respond to generated data, leading to different observed trends.\\n\\n- **Explanation of experimental findings:** \\n Detailed explanations of the experimental findings are provided at the end of each experiment and in Section 5. \\n\\n\\n[a] Zhang et al. \\\"Discover and Mitigate Multiple Biased Subgroups in Image Classifiers.\\\" CVPR 2024.\\n\\n\\n**[3] The conclusion is weak. Based on the observations, the authors conjectured that the model bias is affected by multiple factors, including the datasets, models, and data quality across generations. However, the authors did not provide clear experiment evidence or solid theory explaining how these factors influence the model bias.**\\n\\n**A:** This work is an empirical study aimed at understanding the impact of generated data on the bias of image classification models during the self-consuming loop. Our conjectures are based on experimental observations and serve as a preliminary explanation of the results. We hope our research will inspire future studies to investigate the underlying mechanisms in more depth. To facilitate further research, we will open-source all the code and data used in this project.\"}", "{\"comment\": \"**Weaknesses:**\\n**W1. 
Lack of experiments on the choice of generative models. Various generative models can differ in behavior, the choice of model likely impacts sample quality and influences the outcomes of subsequent studies.**\\n\\n\\n**A:** Thank you for your suggestion. We agree with your observation that different generative models can have varying impacts on downstream tasks.\\n\\n*Criteria for Selecting Generative Models*\\nIn our study, we utilized three datasets: MNIST, CIFAR, and an ImageNet-like dataset. MNIST and CIFAR are small-scale datasets with low resolution, which led us to select GAN models for learning their distributions, as diffusion models often struggle with limited-size data. In contrast, the ImageNet-like dataset, with its higher resolution, is more suitable for diffusion models.\\n\\nWe also considered the point you raised in our study. However, due to limited computational resources, we conducted experiments on each dataset using only one type of generative model.\\n\\nOur study aims to shed light on the issue of continuously using generated data in real-world applications, serving as a case study. We hope this work will inspire further investigations into how various generative models impact bias in downstream tasks.\\n\\n\\n\\n\\n**W2. The motivation of the paper is on future image classification and the role synthetic data plays in it. With foundational models playing a dominant role, integrating settings like synthetic data for transfer learning in classification would strengthen the paper, going beyond the current base case that may lack scalability.**\\n\\n\\n**A:** Thanks for your suggestion. We will include this discussion in our paper.\\n\\n\\n**W3. The experiments are mainly targeting the model bias within a self-consuming loop in the image classification domain. 
However, the conclusions/observations are not significant.**\\n\\n\\n**A:** We acknowledge that there is no universal rule for the impact of generated data within a self-consuming loop on downstream image classification models. However, it is worth noting that the observed trends for each model on the same dataset remain consistent, providing guidance for real-world model development practices. \\n\\nThis indicates that, while this study shows that using generated data can influence model bias across generations, we can simplify the process by training models over a few generations and observing the trend. If the bias of interest continues to worsen across the observed generations, this suggests the need to incorporate additional real-world samples to mitigate the adverse effects of generated data on the model.\\n\\n\\n\\n**Questions:**\\n\\n**Q1. How much synthetic data is added, or what is the ratio p? Is there a rationale behind the choice of p?**\\n\\n**A:** We maintain the size of the generated data at 10% of the original clean dataset. Previous studies have highlighted that the ratio between synthetic data and clean data should be carefully considered. A large ratio can lead to model collapse, while a small ratio is expected to enhance performance as desired.\\n\\n\\n**Q2. Are there any experiments or preliminary results on tasks with a larger number of classes?**\\n\\n**A:** We further conduct experiments on the Breeds dataset (a sub-ImageNet dataset) [a], which is organized by the subgroup connections established in WordNet. The results are shown in Appendix A.\\n\\n\\n[a] Santurkar et al. \\\"Breeds: Benchmarks for subpopulation shift.\\\" ICLR 2021.\\n\\n**Q3. How is CLIP used in filtering? Specifically, is it based on the similarity score between label texts and images?**\\n\\n**A:** We use the similarity score between the label text and images. \\n\\n**Q4. 
Would different losses lead to varying results?**\n\n\n**A:** In our study, we work on the task of image classification, where cross-entropy is the most popular metric for learning. In our view, changing the loss would not lead to varying results, because the bias comes from the distribution of the generated data, which originates from the imbalanced generation of the generative model. Thanks. \n\n\n\n**Q5. Any insights on how the conclusion could generalize to other tasks or other modalities?**\n\n**A:** Model bias originates from the data the model learns from. Due to the imbalanced data generation by generative models, the distribution of augmented data often exhibits unevenness, introducing bias into downstream tasks. This phenomenon not only affects classification tasks, as studied in our work, but also extends to other domains such as robotic control, visual question answering, and more."}", "{\"summary\": \"The paper attempts to analyze whether the inclusion of generated data alleviates the model bias. In the paper, the authors repeatedly train generative models on the generated augmented data and study its impact on multiple fairness metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is studying a new problem. In the era of generative AI, more generated content is available on the Internet. Training on the generated data may influence model performance. In this sense, the paper is studying a valid and important problem.\", \"weaknesses\": [\"The paper mostly conducts experiments on three datasets. Among the three datasets, Colourized-MNIST and CIFAR20/100 are very small datasets in terms of the resolutions and the number of data and classes compared to the existing image data. Moreover, the largest model used in the paper is ResNet50, which is a relatively small network when compared to SOTA models like ViT. 
This raises the concern of whether the observations from the experiments are still valid in realistic scenarios with large datasets and models.\", \"Some experimental results are hard to understand.\", \"baseline performance. To my understanding, CIFAR20/100 has a smaller number of classes and should be an easier dataset to classify when compared to CIFAR100. However, the baseline performance of ResNet50 on CIFAR100 is around 80%; on CIFAR20/100 is around 50%.\", \"The trends are inconsistent between models. For example, in Fig. 6(a), ResNet50 and LeNet show an opposite trend in Equal Opportunity and Disparate Impact but the same trend in average accuracy and Maximal Disparity; VGG-19 remains unchanged for the tested metrics.\", \"The trends are inconsistent between datasets. For example, comparing the ResNet50 baseline between Fig 5(a) on CIFAR-20/100 and 6(a) on CIFAR-100. Even though the datasets are similar, the ResNet50 baseline shows a totally different trend for the Equal Opportunity, Disparate Impact, and Maximal Disparity metrics.\", \"Similar discrepancies are noted in other sections of the paper. The experimental findings do not appear to explain how the generated data affects model bias in general.\", \"The conclusion is weak. Based on the observations, the authors conjectured that the model bias is affected by multiple factors, including the datasets, models, and data quality across generations. However, the authors did not provide clear experiment evidence or solid theory explaining how these factors influence the model bias.\"], \"questions\": \"The authors reported the FID score for the augmented data from multiple generations in Table 1. It seems that the FID scores for unbiased colorized MNIST show a decreasing trend; biased colorized MNIST is more or less the same; CIFAR-20/100 decreases first and then increases; Hard ImageNet shows a sudden increase from 50.9 to 186.4. Can the author explain the inconsistent changes here? 
Are there any implications from the observations? Also, it would be better if the author could visualize the generated images from different generations to visually see the changes across generations.\\n\\nThe authors mentioned, \\u201cWe manually partition the original dataset into multiple subgroups, where subgroups within the same class share similar semantics.\\u201d Can the authors explain more clearly how they defined and constructed the subgroups for each dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response. The author clarified the construction of subgroups for the tested dataset and added a new dataset and the ViT baseline. I am increasing the score to 5 based on the improved clarity and experiments. As mentioned in the weakness, there is a different trend between the tested datasets, and the authors attributed it to the differences in data distribution and subclass structures. It appears that some of the observations are dataset-dependent and lack a conclusive, significant finding. I understand that the computation power may be a concern. However, I still believe the authors should extend their experiments to a richer set of datasets and generative models in order to make a more decisive conclusion from the results.\"}", "{\"comment\": \"**Questions:**\\n**[1] Can you provide more detail on the human study in section 3.2, and the r% used?**\\n\\n**A:** First, we manually review the generated samples and discard images with low quality.\\n\\nSecond, We calculate the CLIP score for each image, where the paired text is the class name. Images are then grouped into bins based on their CLIP scores, with each bin representing a \\u00b110% range of CLIP scores. This results in 10 bins.\\n\\nThen, we randomly sample 10 images from each bin and evaluate the quality of each bin. 
Based on this evaluation, we determine the fraction of the lowest-scoring images (denoted as r%) to discard before training; that is, we retain the top (100 - r)% of images by CLIP score.\n\n- For MNIST, we find that retaining the top 90% of images (r = 10%) is optimal. \n- For CIFAR-20/100, retaining the top 70% of images (r = 30%) works best. \n- For the ImageNet dataset, retaining the top 40% of images (r = 60%) yields the best results.\n\n\nWe include this in Appendix C.\n\n\n**[2] What is the impact of image quality from the expert-guided filtering in section 3.2?**\n\n\n**A:** In practice, we observe that the model performs poorly without expert-guided filtering, resulting in low classification accuracy. As the number of generations increases, the performance degradation becomes more pronounced. This is primarily due to the significant distribution shift between the original images and the low-quality generated images, which negatively impacts the model's ability to generalize effectively.\n\n\n\n**[3] How well do the findings generalize to other datasets? For example, section 4.2 showed dataset bias does not amplify model bias. Do the authors expect that to hold for other datasets, too?**\n\n**A:** \n- **[Why this happens?]** \n While the dataset is initialized with bias, it can influence both the generative model and the image classification model. The generative model may unintentionally learn the bias from the dataset, but the quality of the generated data can directly affect the image classification model. 
Specifically, high-quality data sampled from high-density regions of the original distribution can make it harder for the model to learn a representation on the biased subclass, thereby alleviating the bias.\\n\\n- **[Generalization to other datasets]** \\n The impact of generated data on model bias across generations within the self-consuming loop depends on multiple factors, including model architecture, dataset characteristics, the difficulty of learning the dataset, and the nature of the bias itself. While it is challenging to predict whether the findings will hold for other datasets, our results consistently show similar trends for specific models on certain datasets. \\n\\n This consistency suggests that models can be trained over a few generations, and the observed performance during these initial generations can be used to predict whether the current training strategy is effective. This approach provides practical guidance for real-world model development. \\n\\n We hope our research will inspire future studies to conduct more fine-grained analyses of the impact of generated data on model bias across generations.\"}" ] }
0pbxX2jatP
Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations
[ "Aryan Shrivastava", "Jessica Hullman", "Max Lamparth" ]
There is an increasing interest in using language models (LMs) for automated decision-making, with multiple countries actively testing LMs to aid in military crisis decision-making. To scrutinize relying on LM decision-making in high-stakes settings, we examine the inconsistency of responses in a crisis simulation (``wargame''), similar to reported tests conducted by the US military. Prior work illustrated escalatory tendencies and varying levels of aggression among LMs but was constrained to simulations with pre-defined actions. This was due to the challenges associated with quantitatively measuring semantic differences and evaluating natural language decision-making without relying on pre-defined actions. In this work, we query LMs for free-form responses and use a metric based on BERTScore to quantitatively measure response inconsistency. We show that the inconsistency metric is robust to linguistic variations that preserve semantic meaning in a question-answering setting across text lengths. We first study the impact of different prompt sensitivity variations on wargame decision-making inconsistency at temperature $T = 0$. We find that all models exhibit levels of inconsistency indicative of semantic differences, even when answering semantically identical prompts. We also study models at $T > 0$ under fixed prompts. We find that all studied models still exhibit high levels of inconsistency, even when adjusting the wargame setting, anonymizing involved conflict countries, or adjusting the sampling temperature parameter $T$. Further qualitative evaluation shows that models recommend courses of action that share few to no similarities. We find that inconsistency due to semantically equivalent prompt variations can exceed inconsistency from temperature sampling for most studied models across different levels of ablations. 
Given the high-stakes nature of military deployment, we recommend further caution be taken before using LMs to inform military decisions or other cases of high-stakes decision-making.
[ "Language Models", "AI Safety", "Natural Language Processing", "Inconsistency", "Transparency", "Military" ]
Reject
https://openreview.net/pdf?id=0pbxX2jatP
https://openreview.net/forum?id=0pbxX2jatP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ze0ds481Cn", "unXsSkaD2G", "um4JZR1VYr", "ugrqKNZMdM", "uIxSnD0iRx", "smKu7infTQ", "n2DcqoyvUB", "iT9KM5JD0R", "XfVXV6egyQ", "TxJIYWcNXc", "TWJkfYaAHB", "SJS8VngGOW", "RRqWMy47rp", "Qa61D2b54w", "OhdIF1KHVz", "N1VXTWOqc9", "6FqkLcMySC", "515vkm7xRo", "1iNTaNmxwM", "0UeeTPA8it" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732587795436, 1732573607723, 1731046816800, 1732391610782, 1734741942234, 1732321250048, 1731884877623, 1732574921915, 1730571848195, 1731885590643, 1732588944118, 1731885052201, 1732568094570, 1732319565532, 1731886241365, 1737524119787, 1731886517559, 1732749925448, 1732301146644, 1730412400192 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_McSh" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Area_Chair_5b6x" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_cN3C" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_cN3C" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_xYfm" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_cN3C" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11361/Authors" 
], [ "ICLR.cc/2025/Conference/Submission11361/Authors" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_xYfm" ], [ "ICLR.cc/2025/Conference/Submission11361/Reviewer_xYfm" ] ], "structured_content_str": [ "{\"title\": \"Further Response to Reviewer cN3C\", \"comment\": \"Hello! Thank you again for your thoughtful engagement. We understand how you view our claims as contradictory. However, I am afraid there may have been a misunderstanding.\\n\\nFor maximum clarity in this response, we will say *external unpredictability* when referring to being unpredictable in the eyes of the adversary. We will say *internal unpredictability* when referring to being unpredictable in the eyes of the institution deploying the LM for use in their own operations. Our arguments in the rebuttal focused on external unpredictability. What we established in our previous rebuttal is that militaries are incentivized to avoid setting T = 0 to avoid being externally predictable as a result of deterministic responses.\\n\\nWhat we focus on in our paper is the notion of internal unpredictability (and likely why you see a contradiction in our framing). For example, in (058), we are saying LMs in high-stakes settings should be required to be internally reliable. In (068), we are saying that delegating trust to an inconsistent agent can lead to being internally unpredictable. This is a cause for concern for many reasons [e.g., 1, 2, 3]. So, when we say \\u201censure reliable yet unpredictable decision-making support\\u201d we refer to internal reliability while being externally unpredictable. What we ultimately show in our paper is that LMs are highly inconsistent at T > 0, which, while making them externally unpredictable, also makes them internally unpredictable and unreliable. \\n\\nSo, what we ultimately argue is that if militaries are to deploy LMs, they must find a balance between the desire for external unpredictability and internal predictability/reliability. 
What we show in the paper is that, in the context of LM automated or augmented decision-making, these notions are inherently at odds. This tension is likely unsolvable (or at least extremely difficult to solve), calling into question the very deployment of LMs into military operations. To reiterate, nowhere in our paper do we make claims that LMs should be deployed into militaries while being simultaneously unpredictable and predictable. What we do show is that LMs, under the premises in which they are likely to be deployed, exhibit behavior that calls into question their internal reliability. This behavior is a cause for concern and can even lead to catastrophic consequences [4, 5, 6]. In fact, the final sentence of our paper (539) says that robust safeguards should be put in place to prevent unintended outcomes that may arise due to the deployment of LMs into military operations.\n\nTherefore, we do not believe there to be a contradiction in our framework. We hope this clears up the premise and motivation of the paper. It would be regrettable for this work to be rejected under a conceptual misunderstanding.\n\nIn acknowledgement of your valid questions regarding our conceptual problem definition, we will add an entire section in the appendix of the revised pdf clarifying our arguments from this discussion to motivate our experimental setup. Additionally, as Reviewer xYfm suggested, we will move section 6 (prompt sensitivity experiments at T = 0) above section 5 (evaluation at T = 1) to better frame the point of the paper.\n\nWe are grateful to have engaged in this discussion with you, and hope that our arguments warrant a score increase in your eyes.\n\n[1] William N Caballero and Phillip R Jenkins. On Large Language Models in National Security Applications. arXiv preprint arXiv:2407.03453, 2024.\\\n[2] Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, and Jacquelyn Schneider. 
Escalation risks from language models in military and diplomatic decision-making. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 836\\u2013898, 2024.\\\\\\n[3] Lamparth, M., Corso, A., Ganz, J., Mastro, O. S., Schneider, J., & Trinkunas, H. (2024). Human vs. Machine: Behavioral Differences between Expert Humans and Language Models in Wargame Simulations. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 807-817.\\\\\\n[4] National Security Archive. False Warnings of Soviet Missile Attacks Put U.S. Forces on Alert in 1979-1980, 2020. URL https://nsarchive.gwu.edu/briefing-book/nuclearvault/2020-03-16/false-warnings-soviet-missile-attacks-during1979-80-led-alert-actions-us-strategic-forces.\\\\\\n[5] Geoffrey Forden, Pavel Podvig, and Theodore A Postol. False alarm, nuclear danger. IEEE Spectrum, 37(3):31\\u201339, 2000.\\\\\\n[6] EUCOM History Office. This Week in EUCOM History: January 23-29, 1995, 2012. URL https://web.archive.org/web/20160105033448/http://www.eucom.mil/ media-library/article/23042/this-week-in-eucom-history-january23-29-1995.\"}", "{\"title\": \"Further Response to Reviewer xYfm\", \"comment\": \"Thanks for the response and the questions!\\n\\n***Lack of clear motivation for testing of bi-directional entailment clustering***\\n\\nThank you for pointing this out - we agree that we framed this unclearly. To clarify, we mentioned this to demonstrate that we considered potential metrics beyond just blindly choosing and verifying BERTScore for our analysis. We found that it was not able to sufficiently extract any similarities between texts that are otherwise conceptually dissimilar. Thus, we obtained inconsistency scores that were extremely high (~0.9), making comparison and evaluation meaningless. When testing BERTScore, we found that, unlike bi-directional entailment clustering, it was able to robustly capture inconsistency (and consistency), so we focused our analysis using this metric. 
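To make the shape of such a pairwise metric concrete, here is a minimal, hypothetical sketch. The helper names (`token_f1`, `inconsistency`) are our own illustration, not the paper's implementation, and `token_f1` is only a lightweight bag-of-words stand-in for BERTScore F1 (in practice one would substitute the `bert-score` package's contextual-embedding similarity):

```python
from itertools import combinations

def token_f1(a: str, b: str) -> float:
    # Toy stand-in for BERTScore F1: bag-of-words precision/recall overlap.
    # Swap in bert_score.score() for contextual-embedding similarity.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    common = len(ta & tb)
    if common == 0:
        return 0.0
    precision, recall = common / len(tb), common / len(ta)
    return 2 * precision * recall / (precision + recall)

def inconsistency(responses, sim=token_f1) -> float:
    # One minus the mean pairwise similarity over all response pairs:
    # 0.0 for identical responses, values near 1.0 for unrelated ones.
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return 1.0 - sum(sim(a, b) for a, b in pairs) / len(pairs)

# Identical recommendations score as perfectly consistent ...
print(inconsistency(["de-escalate and open negotiations"] * 3))  # 0.0
# ... while disjoint recommendations score as maximally inconsistent.
print(inconsistency(["launch a blockade", "withdraw all forces"]))  # 1.0
```

With BERTScore substituted for the toy similarity, this is one plausible way to aggregate pairwise scores into a single inconsistency value per question; the exact aggregation used in the paper may differ.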
We will clarify this point in the revised pdf.\\n\\n***Unclear why semantic consistency should be expected at T=1***\\n\\nThank you for this question! I think a good way of thinking about this would be to consider how an LM would behave on a task that it does well at. We would expect, despite sampling at T=1, for the LM to be able to give consistent answers and reasoning because it is more \\u201cconfident\\u201d in its answers. This interpretation has been successfully implemented in the literature for hallucination detection and mitigation [e.g., 1, 2]. We do not make any claims about LM hallucination because there is no notion of ground-truth in military decision-making. However, the idea that putting trust in an inconsistent, unconfident agent leading to volatile and unpredictable decision-making still holds. Of course as you rightly point out, we should not expect perfect consistency at T= 1. But, if LMs were \\u201cgood\\u201d and \\u201cconfident\\u201d at military crisis decision-making, we should expect to see much more semantic consistency than we observe, even at T = 1. In fact, we still observe high levels of inconsistency even at a lower T = 0.2 (see figure 6 in the pdf).\\n\\nWe believe that inconsistency is likely both an LM issue and task issue. It is an LM issue in the sense we described above. It is a task issue in that the task itself calls for open-ended decision-making, likely opening the door for more unpredictable responses. But we reiterate that our focus on free-form responses is well motivated by real world tests conducted by militaries worldwide and products being developed by private companies, as mentioned in the paper. \\n\\n***Switching section order***\\n\\nThank you for the feedback on this! We will update the section order in the revised pdf.\\n\\n[1] Potsawee Manakul, Adian Liusie, and Mark Gales. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9004\\u20139017. Association for Computational Linguistics, 2023b.\\\\\\n[2] Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625\\u2013630, 2024.\"}", "{\"summary\": \"This paper investigates the inconsistency of large language models (LLMs) in terms of ablations like sentence order and semantics when applied in war games. The authors focus on measuring free-form response variability using BERTScore to quantify inconsistencies. The findings indicate that the LLMs exhibit considerable inconsistency in their response, which raises concerns about their reliability in high-stake decision-making contexts.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The overall presentation is very clear and intuitive\", \"2\": \"The experimental setups are rigorous - experiments are run with multiple LLMs, variations in prompt scenarios, and statistical controls to examine the consistency across different levels of temperature settings and prompt structures.\\n3. Besides quantitative measures, the paper provides qualitative examples that provides valuable insights\\n4. The prompts are fully provided for ease of reproduction\", \"weaknesses\": \"1. The paper focuses on a very specific hypothetical military scenario. It\\u2019s also uncertain whether the observed inconsistency is unique to the wargame setup or would generalize to other critical decision-making applications. This might limit the generalizability to other high-stakes applications.\\n2. The paper\\u2019s main innovation centers on using BERTScore as an inconsistency measure, which may not offer significant novelty in approach. \\n3. The study also did not sufficiently compare this approach with other potential inconsistency measurements. \\n4. 
The choice of a default temperature setting of 1.0 in Section 5 may not be appropriate, as it introduces significant response inconsistency by design.\\n5. The comparison are limited to a few closed-source LLMs\\n6. While the study demonstrates that LLMs can produce inconsistent responses, it would be more impactful if it included strategies for reducing such variability.\", \"questions\": \"1. Why was the wargame scenario chosen as the primary setting for examining inconsistency in high-stake decision making? Do you believe the inconsistency findings would generalize to other types of critical scenarios, or are they specific to the military context?\\n2. In Section 5, why was a temperature of 1.0 chosen as the default setting?\\n3. Could this research potentially inform new methods to improve LLM consistency in decision-making contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your feedback and understand your skepticism regarding the use of non-deterministic AI systems (T\\u202f>\\u202f0) in military settings. We want to clarify our position and provide extensive evidence to support our claims, although they may seem unintuitive for standard LM evaluations. Predictability in military decision-making is universally considered to be a significant vulnerability. Adversaries capable of anticipating actions may exploit consistent patterns to undermine strategies. Military doctrines and strategic studies emphasize the importance of unpredictability to maintain a tactical advantage:\\n\\n- Game Theory and Mixed Strategies: In competitive and adversarial scenarios, game theory advocates for mixed strategies, which involve randomizing choices to prevent opponents from predicting actions [1][2]. 
This concept is crucial in military applications to avoid being outmaneuvered by adversaries who might exploit predictable decision patterns.\\n\\n- Military Doctrine Emphasizing Flexibility and Adaptability: Renowned historical military strategists like Sun Tzu and Clausewitz have underscored the importance of adaptability and unpredictability in warfare to outsmart opponents [3][5]. Modern military doctrines continue this emphasis: The U.S. Army's\\u00a0Operational Art Primer\\u00a0highlights the need for commanders to employ creativity and adaptability, integrating ends, ways, and means across the levels of war [4]. Deception and unpredictability are considered essential for achieving strategic surprise and maintaining operational security [6].\\n\\nGiven these principles, deploying deterministic AI systems with T\\u202f=\\u202f0 could introduce risks due to their predictable outputs in case of cybersecurity failures. In cybersecurity threats or espionage scenarios, adversaries could exploit this predictability to anticipate and counteract military strategies. While our paper focuses on the inconsistency of LLM outputs, understanding the trade-off between consistency and diversity is crucial in high-stakes settings. Setting the temperature T\\u202f>\\u202f0 introduces controlled randomness, aligning with the strategic need for unpredictability. \\n\\nNevertheless, we also back up our findings with an extensive range of variations at T = 0. As discussed in our general rebuttal, both types of experiments (T>0 and T =0) are crucial to meaningfully comparing free-form response inconsistency of LM decision-making. Would you prefer if we re-order the experiments in the paper to start with section 6 and then do section 5?\\n\\nOur research highlights the importance of addressing this inconsistency to ensure reliable yet unpredictable decision-making support. 
By examining how LLMs behave under different temperature settings, we provide crucial insights into limitations to safe deployments.\\n\\nWe will revise our paper to include these citations and clarify our argument regarding the strategic necessity of employing LLMs with T\\u202f>\\u202f0 in such high-stakes settings. Thank you for your valuable feedback. We believe this clarification strengthens our paper and addresses your concerns.\", \"references\": \"[1] Osborne, M. J., & Rubinstein, A. (1994).\\u00a0A Course in Game Theory. MIT Press.\\n\\n[2] Myerson, R. B. (1991).\\u00a0Game Theory: Analysis of Conflict. Harvard University Press.\\n\\n[3] Sun Tzu. (5th Century BCE).\\u00a0The Art of War.\\n\\n[4] Sweeny, P. (2010).\\u00a0Operational Art Primer. U.S. Department of the Army.\\n\\n[5] Howard, M., & Paret, P. (1976).\\u00a0Clausewitz: On War. Princeton University Press.\\n\\n[6] Barlow, R. E. (2006). \\\"Deception and Unpredictability in Military Operations\\\".\\u00a0Naval War College Review, 59(1), 43\\u201353.\"}", "{\"metareview\": \"**Summary:**\\n\\nThe authors analyze how previously observed inconsistency in outputs of LLMs influences their behavior during military decision-making and mental health simulations. They use BERTScore for identifying model inconsistency. Their findings indicate LLMs may be too inconsistent for deployment in high-stakes scenarios. \\n\\n**Strengths:**\\n\\n- Rigorous experimentation with BERTScore and good qualitative examples.\\n\\n- Good comparison with synthetic perturbations of TruthQA.\\n\\n**Weaknesses:**\\n\\n- The point of the paper's main findings is unclear given the applications, and the authors' rebuttal further adds confusion by arguing non-determinism is favorable in wargames. What is being measured with BERTScore is not internal reliability. I would argue that the inconsistency being measured here isn't inherently bad. 
The authors really need a more nuanced metric capturing inconsistency in outcomes: in other words, they need to determine if multiple dissimilar decision paths lead to similarly or dissimilarly desirable outcomes. \n\n- The writing of this paper is confusing. Despite the title, it seems like the authors focus primarily on justifying BERTScore as a consistency metric rather than wargame simulation, which becomes an afterthought. I think the paper would benefit from a broader focus, with more applications, comparison across different metrics of consistency, and deeper analysis of the implications. \n\n- Lack of discussion around mitigation of undesirable inconsistency.\n\nThe core promise of measuring how LLM consistency impacts high-stakes decision-making, particularly military decision-making, is a great idea. However, I believe the paper is far from being in a publishable state and is not appropriate for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers are all leaning towards rejection. They did note the reproducibility of the work and focus on high-stakes real-world applications. However, there were concerns about lack of generalization to non-military scenarios (reviewers McSh, cN3C) and the sole focus on BERTScore without attempts to develop a metric specific to inconsistency measurement (reviewers McSh, xYfm). Reviewer cN3C also brought up a good point about observed inconsistency and the authors' choice of setting the temperature to 0 during generation, which led to a discussion reinforcing my take-away that inconsistency is ultimately not fine-grained enough and ill-defined in this work. What the authors really need to measure is the balance between unpredictability of decision-making and predictability (or desirableness) of outcomes.\"}", "{\"title\": \"Further Response to Reviewer xYfm\", \"comment\": \"Thank you for your response and further insight! 
We address your further concerns below:\\n\\n***more in depth analysis and experimental comparison is needed***\\n\\nAs mentioned, we provide empirical evidence that a method based on bi-directional entailment clustering does not suffice. Our evaluation framework is metric agnostic - as long as another metric is able to capture inconsistency similar to how BERTScore does, one may do the same evaluation. We absolutely agree with the idea that other metrics of this capacity likely exist. Do you think it would strengthen the paper to mention this? We maintain that this work is important in terms of advancing beyond maximizing human correlation coefficients - BERTScore may be used in inconsistency evaluation that would otherwise be meaningless without our tests. Our additional experiments on wargame specific synthetic responses strengthen the task-specific case for the robustness of the metric. Additionally, our experiments on mental health related responses strengthen the case for the generalizability of the metric to different high-stakes application domains. We would greatly appreciate it if you could clarify the type of experimental comparison that you would like to see. Thank you!\\n\\n***it is difficult to position the novelty and significance***\\n\\nWe reiterate that our contribution is rooted in the demonstration of the existence of a metric that can robustly capture inconsistency in an explicit QA setting, ignoring linguistic variations that do not affect semantic meaning. That is, we establish an otherwise unestablished necessary precondition for evaluation of this type. Evaluation of LMs in high-stakes contexts is of great significance currently given the rapid adoption of LMs into various domains. We establish their proliferation into military spheres, a domain that can give rise to particularly catastrophic consequences under misguided decision-making [1, 2, 3]. 
We would view it as a great loss to the community to reject work that addresses high-impact social issues on the basis of a lack of novelty given that we have shown that an existing metric works for the type of evaluation we are conducting. We still maintain the novelty in our work in validating BERTScore and applying our inconsistency evaluation to a new domain - free-form military crisis decision-making.\\n\\n***Additionally, I feel like the default should temperature should have been T=0 and then additional results given for T=1 motivated by the reasoning given in the rebuttal.***\\n\\nThis is an interesting idea that we would like to explore. Do you think it would be more meaningful to, say, put Section 6 (prompt variation experiments at T = 0) above Section 5 (evaluation at T=0)?\\n\\nPlease let us know if there are any further actionable steps we can take to warrant a score increase. Again, thank you for your thoughtful evaluation of our work!\\n\\n[1] National Security Archive. False Warnings of Soviet Missile Attacks Put U.S. Forces on Alert in 1979-1980, 2020. URL https://nsarchive.gwu.edu/briefing-book/nuclearvault/2020-03-16/false-warnings-soviet-missile-attacks-during1979-80-led-alert-actions-us-strategic-forces. \\\\\\n[2] Geoffrey Forden, Pavel Podvig, and Theodore A Postol. False alarm, nuclear danger. IEEE Spectrum, 37(3):31\\u201339, 2000. \\\\\\n[3] EUCOM History Office. This Week in EUCOM History: January 23-29, 1995, 2012. URL https://web.archive.org/web/20160105033448/http://www.eucom.mil/ media-library/article/23042/this-week-in-eucom-history-january23-29-1995.\"}", "{\"title\": \"General Response (1/2)\", \"comment\": \"Dear reviewers and chairs,\\n\\nThank you to the reviewers for your constructive reviews and everyone for their consideration! 
Below, we address common concerns shared by all reviewers.\\n\\n### *Why we evaluate LMs at temperature T > 0*\\nWe have consulted academic scholars in international and national security throughout the course of this research. They said militaries cannot afford for their systems to give deterministic decisions, since cybersecurity failures could then lead to predictable decision-making. Thus, we have strong reason to believe militaries will not set the temperature to 0.0. We agree this is unintuitive, and we will clarify this in the paper. We uphold that this fact makes the results all the more pressing - those implementing LMs in high-stakes settings should find a way to robustly parse through their inconsistency or scrap their use altogether.\\n\\nAdditionally, related works have analyzed LM inconsistency as a method for hallucination detection [e.g., 1, 2]. These works hinge on the assumption that high inconsistency levels imply low model confidence, and often set T = 1.0 for baseline experiments. Therefore, by setting T = 1.0, we are able to proxy a notion of model confidence and reliability when responding to fixed prompts. This establishes a necessary comparison point for the experiments in Section 6, where we set T = 0 and find that model inconsistency due to prompt variations when responding with T = 0 can lead to inconsistency levels comparable to model responses to fixed prompts at T > 0 (and up to 1.2 for some models).\\n\\nLastly, previous work has shown that there are limitations to greedy decoding [e.g., 3, 4, 5]. Thus, it is reasonable to expect that LMs be deployed at T > 0 even if it comes at the cost of less consistency.\\n\\n### *Regarding generalizability concerns*\\nWe see the reviewers\\u2019 concerns regarding generalizability and agree that applying our framework to more applications would strengthen our results. 
To this end, we run additional experiments on free-form responses of chatbots interacting with users in mental health emergencies using the public dataset from [6]. From that dataset, we use the responses from frontier closed-source models and open-source models (ChatGPT-3.5, ChatGPT-4, Mistral-7b, Llama2-7b, Llama2-13b, Claude-3-opus, Gemini) which are labeled by human experts as either \\u201csafe,\\u201d \\u201cunsafe,\\u201d or \\u201cborderline\\u201d.\\n\\nWe find that models still exhibit high levels of inconsistency. Additionally, we find that our inconsistency metric is able to distinguish between safe and unsafe responses with statistical significance. We also find that borderline responses were significantly closer to safe responses than unsafe responses. We will add this evaluation to the appendix in the revised pdf. These results show that our inconsistency evaluation framework generalizes to a different context and different models not present in our initial analysis.\\n\\n### *Further analysis and human evaluation for inconsistency score*\\nWe have conducted another set of new experiments to test how our inconsistency score behaves under synthetic, on-distribution ablations specific to the experimental scenario. We selected example responses and changed between one and five \\u201cactions\\u201d conveyed in the response, while keeping all other text (including other actions) in the response the same. Keeping the other actions unablated and text exactly identical creates more stringent evaluation conditions for the inconsistency score as compared to the true conditions present in the main experiments.\\n\\nWe find that 1) inconsistency increases approximately linearly as we change additional actions, and 2) changing just 2 actions while keeping the rest of the text exactly the same yields inconsistency scores indicative of substantial semantic difference. 
Because of the stringent test conditions, the observed inconsistency scores are a lower bound on those we would see when evaluating on the true dataset. We will add these results to the appendix in the revised pdf.\\n\\nAdditionally, we qualitatively evaluated many of the sample responses and their corresponding inconsistency scores. We will include a representative sample of response pairs and their corresponding inconsistency scores in the appendix of the revised pdf. Finally, as mentioned in the paper, we use BERTScore based on the DeBERTa xlarge model, which achieved a Pearson correlation with human judgment of 0.7781 [7], the highest of all supported models. We will be sure to include this in the revised pdf as well.\\n\\n*Continued*\"}", "{\"comment\": \"Hi there.\\n\\nThanks for the clarifications. Yet, I still find that your claims are contradictory.\\n\\nIn the paper, you mention that LLMs in high-stakes settings \\\"require consistent, reliable decision-making\\\" (058). You then also say \\\"delegating trust to an inconsistent agent can lead to unpredictable decision-making, which is a cause for concern\\\" (065). The discussion of results further frames inconsistency as bad, e.g., \\\"Encouragingly, we find that lexical substitution and syntactic restructuring generate the least inconsistency\\\" (209).\\n\\nHowever, in the rebuttal, you are now saying that predictability is an issue in military settings and call for LLMs that should \\\"ensure reliable yet unpredictable decision-making support\\\" but that is not what the investigations in your paper actually do. They do not disentangle reliability from unpredictability (yet that would be a really interesting but challenging thing to try to measure). 
\\n\\nI think the whole paper would require substantial reframing in order to resolve this contradiction, and possibly require some additional experiments to support the new framing of \\\"consistency = predictability, which is bad\\\"\"}", "{\"summary\": \"This paper presents an investigation of the consistency of LLM responses in a military strategy scenario. Authors invert the BERTScore measure of semantic similarity to compute an inconsistency score, which they validate in a synthetic free-form QA setting based on Truthful QA. In the experiments, the paper shows that LLMs' answers are generally quite inconsistent in both types of generations for their scenario (initial, continuations). They also explore the effect of temperature on inconsistency, as well as the effect of prompt variations.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"I appreciate the paper's main goal of investigating LLM inconsistencies in high-stake military scenarios\", \"I liked that the authors performed a validation of the inconsistency score using synthetic data with TruthfulQA, which also allows readers to calibrate on what score values mean.\", \"The main experiments were well executed.\", \"I liked the investigations into the prompt variations.\", \"I also appreciated the disclaimer at the end of the introduction.\"], \"weaknesses\": [\"The paper's main weakness is the conceptual problem definition.\", \"I feel like in realistic high-stakes settings, the temperature should probably be set to 0, which is similar to greedy decoding. Authors should probably focus their experiments on that temperature, though I'm expecting very low inconsistency. More importantly, authors should make a clearer argument for why t=0 should not be used in these high-stakes settings, or why t>0 should be studied.\", \"Concretely, I feel like section 6 was most relevant to the realistic way that LMs should be used. 
I wish authors had expanded on such experiments, possibly exploring various types of rephrasing and exploring exactly how the inconsistencies were affected.\", \"Second, I feel like the paper's results might be limited in terms of generalizability due to there being only one wargame scenario considered. I understand that this is a relevant vignette, but it would be very interesting to have at least 3-5 high-stakes scenarios (either all wargame or some other high-stakes domains like healthcare). This would ensure that the results aren't an artefact of the topics or domain chosen in the one scenario.\", \"Third, I felt like the paper could use more in-depth analyses w.r.t. how model responses are inconsistent. The measure at hand is quite coarse-grained, and might not be able to capture more nuanced consistent/inconsistent outputs (e.g., LLM outputs offering two alternatives, one of which is similar between two outputs). Given the specific and high-stakes nature of the scenario, it'd be really useful to have more insights into how the outputs differ, as currently, the coarse-grained measure yields very little information about how inconsistencies should be mitigated (other than lowering the temperature).\", \"Missing citations: http://arxiv.org/abs/2203.12258, https://arxiv.org/abs/2311.00059, https://arxiv.org/abs/2310.11324\", \"L204: Authors mention a bi-directional entailment clustering method, but without more details, it seems very confusing why the authors mentioned that... I would remove that sentence or specify why they needed to test that method and why they didn't include the results in the final main text.\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer McSh\", \"comment\": \"Thank you for the thoughtful feedback! 
We address your concerns below:\\n\\n### *W1/Q1: Limited generalizability due to constrained evaluation setting*\\nWe kindly refer you to part 1 of our general response under the section titled \\u201cRegarding generalizability concerns.\\u201d\\n\\n### *W2: Limited Novelty*\\nWe actually considered developing a new metric. However, as we show, BERTScore performs well in detecting inconsistency on decision-making tasks. We believe it is more impactful to the research community to show that an existing metric can generalize to a new task domain rather than develop a novel metric just for the sake of algorithmic novelty. We maintain that novelty comes from the verification that BERTScore is a reliable metric in decision-making/QA settings. Additionally, we are the first to study LM free-form inconsistency in the military domain. Reviewer xYfm recognizes the importance of this work given the societal implications surrounding the adoption of LMs into high-stakes domains.\\n\\n### *W3: Lack of comparison with other inconsistency measurements*\\nWe agree that our comparisons with other metrics were not framed clearly. In Section 3, we discuss why we do not choose to approach the problem with n-gram inconsistency measures - these are typically not able to capture semantic similarities. At the end of Section 4.1, we (albeit unclearly) mention that we tested a metric based on bi-directional entailment clustering. We found that it was not able to sufficiently parse out any similarities between responses that were otherwise conceptually dissimilar. 
We maintain that our verification of the validity of our inconsistency metric in Section 4 provides evidence that any sufficiently robust metric will yield very similar results (up to scaling/inversion).\\n\\n### *W4/Q2: Choice of temperature 1.0*\\nWe kindly refer you to part 1 of our general response under the section titled \\u201cWhy we evaluate LMs at temperature T > 0.\\u201d\\n\\n### *W5: Evaluation limited to few models*\\nThis is a good point. We'll run our experiments on more models, particularly on those that are open-source.\\n\\n### *W6: We do not study inconsistency mitigation*\\nWe kindly refer you to part 2 of our general response under the section titled \\u201cWhy we do not mitigate inconsistency.\\u201d\\n\\n### *Q1: Why did we choose the wargame scenario?*\\nWe choose to focus our analysis on the wargame setting as this is a worrying real-world application of LMs where studying inconsistency will have grounded implications. We maintain the importance of our work in rigorously exposing any pitfalls of LMs given the sensitivity of the domain. As mentioned, we perform further experiments on chatbot responses pertaining to mental healthcare and find that inconsistency and our evaluation framework generalizes to another critical scenario.\\n\\n### *Q3: Could this research potentially inform new methods to improve LLM consistency in decision-making contexts?*\\nThis is a fantastic question! As mentioned above, much of the contribution of this work is testing a metric that can robustly measure inconsistency, a necessary precondition to *mitigating* inconsistency. As we did when showing our metric can distinguish between \\\"safe\\\" and \\\"unsafe\\\" responses to mental health emergencies, we hope future work can build on this evaluation framework and study ways to automatically evaluate free-form responses of agentic systems in various settings. 
We hope future work can use this framework to explore inconsistency-mitigating decoding strategies surrounding LM self-consistency, automatically categorize free-form text into conceptual categories, or further study LM behavior in other high-stakes domains (e.g. law, government).\\n\\nWe hope these new experiments and arguments have addressed your concerns! We look forward to hearing your thoughts and are happy to conduct new experiments to address any further concerns.\"}", "{\"title\": \"Official Comment from Authors\", \"comment\": \"Dear Reviewer McSh, given that we are nearing the end of the author-reviewer discussion period, we wanted to send a friendly reminder that we'd love to hear your thoughts on our rebuttal! Please let us know if there are any remaining questions or concerns. If we addressed your previous concerns sufficiently, we'd be delighted if you'd consider raising your final score. Thank you so much again for your time and feedback!\"}", "{\"title\": \"General Response (2/2)\", \"comment\": \"### *Why we do not mitigate inconsistency*\\nWe actually considered mitigating inconsistency over the course of the research. However, prior to finding mitigations for inconsistency, it was necessary to either develop or verify a metric that robustly captures inconsistency in general decision-making contexts as, to our knowledge, one does not yet exist for such settings. Thus, we focus our work on finding and verifying such a metric and study how such measurements are affected by temperature and prompt sensitivity. We show that for any method that aims to reduce inconsistency, it is necessary to evaluate the performance of the metric at different temperatures and across prompt variations.\\n\\nWhile having a metric is a necessary precondition, there are other challenges associated with mitigating free-form inconsistency, particularly in high-stakes applications. 
In this work, it would not be impactful to, say, fine-tune models to reduce inconsistency because this would require privileging one set of arbitrary strategic preferences over others. These naturally differ meaningfully across individuals, societies, and cultures. Additionally, conducting such fine-tuning would raise severe ethical concerns surrounding the enabling of automated military decision-making. For high-stakes applications in general, our work demonstrates that meaningful evaluations and governance efforts of safety-critical deployment settings must go beyond just capability evaluations and consider inconsistency due to both sampling temperature and prompt sensitivity.\\n\\nThank you to all reviewers and chairs for considering our submission for ICLR. The reviews were instrumental in improving our work.\\n\\n#### *References*\\n[1] Potsawee Manakul, Adian Liusie, and Mark Gales. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9004\\u20139017. Association for Computational Linguistics, 2023b.\\\\\n[2] Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625\\u2013630, 2024.\\\\\n[3] Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. In Proceedings of the Association for Computational Linguistics, 2018.\\\\\n[4] Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, and Denny Zhou. 2023b. Universal self-consistency for large language model generation. ArXiv, abs/2311.17311.\\\\\n[5] Prabhu, Sumanth. 
\\\"PEDAL: Enhancing Greedy Decoding with Large Language Models using Diverse Exemplars.\\\" arXiv preprint arXiv:2408.08869 (2024).\\\\\\n[6] Declan Grabb, Max Lamparth, and Nina Vasan. Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation. In First Conference on Language Modeling, 2024.\\\\\\n[7] BERTScore. BERTScore. https://github.com/Tiiiger/bert score, 2020. [Online; accessed 30-September-2024].\"}", "{\"comment\": \"Thanks for clarifying further.\\nTo address the first point, the paper lacks a clear motivation for the testing of bi-directional entailment clustering, finding it doesn's work and using BertScore. Explaining this point would help in my opinion.\\n\\nThen, please, correct me if I'm wrong about this point. While I understand that the study posits that inconsistencies arise from a lack of semantic coherence, it's unclear why semantic consistency should be expected at a temperature of T=1 for this task. Is that an LLM issue? A task issue? \\n\\nI agree with your rebuttal to my second point. \\n\\n> This is an interesting idea that we would like to explore. Do you think it would be more meaningful to, say, put Section 6 (prompt variation experiments at T = 0) above Section 5 (evaluation at T=0)?\\n\\nyes, I think this would help a lot.\"}", "{\"comment\": \"Thank you for the rebuttal! Most weaknesses are addressed in my opinion, except for the main one which prevents me from altering my score.\\n\\n# W1\\nI find it hard to believe that military people would no be able to have deterministic AI systems, this feels very strange to me. I cannot simply trust that you did surveys with many military people, and will require citations or evidence that that is the case. If the authors aim to make a claim that military people require a diverse set of outputs from LLMs (with citations), then that would warrant taking a temperature T>0. 
However, the current paper focuses on inconsistency of LLM outputs, not the tradeoff between consistency and diversity.\"}", "{\"title\": \"Response to Reviewer cN3C\", \"comment\": \"Thank you for your thoughtful review and constructive feedback! We address your concerns below:\\n\\n### *W1: Why not temperature T = 0.0?*\\nWe kindly refer you to the first part of our general response under the section titled \\u201cWhy we evaluate LMs at temperature T > 0.\\u201d We will do our best to meet your wishes exploring the expansion of section 6.\\n\\n### *W2: Limited Generalizability*\\nWe kindly refer you to the first part of our general response under the section titled \\u201cRegarding Generalizability Concerns.\\u201d\\n\\n### *W3: More in-depth analysis on inconsistency, coarse-grained measure*\\nWe address this concern in the first part of our general response under the section titled \\\"Further analysis and human evaluation for inconsistency score.\\\" Additionally, we address why we do not pursue mitigating inconsistency directly in this work in the second part of our general response under the section titled \\\"Why we do not mitigate inconsistency.\\\"\\n\\n### *W4: Missing Citations*\\nThank you for pointing out the missing citations! We will add these to the revised submission. The first and third one seem particularly pertinent to our prompt sensitivity experiments!\\n\\n### *W5: Mention of bi-directional entailment clustering*\\nThank you for pointing this out - we agree this is unclear. We mentioned this to demonstrate that we considered other potential metrics beyond BERTScore for our analysis. We did not include these results in the main text because we found that this metric was unable to sufficiently extract any similarities between texts that are otherwise conceptually dissimilar. Thus, we obtained inconsistency scores that were extremely high across all settings and models (~0.9), making comparison and evaluation meaningless. 
We found that BERTScore more robustly captured inconsistency, and so we focused our analysis using this metric. We will clarify these points in the revised pdf.\\n\\nWe hope our additional experiments and clarifications have addressed your concerns; we would be grateful if the reviewer could re-evaluate our work. Again, thank you for your time and constructive feedback! Please let us know if you would like us to run additional experiments or if you have any additional concerns.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer xYfm\", \"comment\": \"Thank you for your thorough evaluation of our work! We have addressed your main concerns below:\\n\\n### *W1: Limitation to only evaluating with BERTScore*\\nWe agree that our comparisons with other metrics were not framed clearly. In Section 3, we discuss why we do not choose to approach the problem with n-gram inconsistency measures - these are typically not able to capture semantic similarities. At the end of Section 4.1, we (albeit unclearly) mention that we tested a metric based on bi-directional entailment clustering. We found that it was not able to sufficiently parse out any similarities between responses that were otherwise conceptually dissimilar. We maintain that our verification of the validity of our inconsistency metric in Section 4 provides evidence that any sufficiently robust metric will yield very similar results (up to scaling/inversion). 
Part of the contribution is to verify that BERTScore is in fact trustworthy for measuring inconsistency in a generalized question-answering setting.\\n\\n### *W2: No human evaluation or correlation to human judgment*\\nWe kindly refer you to the first part of our general response titled \\\"Further analysis and human evaluation for inconsistency score.\\\"\\n\\n### *W3: No discussion of possible mitigations*\\nWe addressed this in the second part of our general response titled \\\"Why we do not mitigate inconsistency.\\\"\\n\\n### *Q1: Can the evaluation framework highlight other types of discrepancies?*\\nThank you for this question! We showed that our metric can reliably distinguish between \\\"safe\\\" and \\\"unsafe\\\" responses to mental health emergencies in an additional experiment (we expand on details in the general response). Thus, we hope future work can build on this evaluation framework and study ways to automatically evaluate free-form responses of agentic systems in various settings. For example, a researcher may define categories under which the responses fall and subsequently use our evaluation framework and well-defined comparisons to categorize responses to expose conceptual discrepancies. Broadly, we hope future work can use this framework to explore inconsistency-mitigating decoding strategies surrounding LM self-consistency, automatically categorize free-form text into conceptual categories, or further study LM behavior in other high-stakes domains (e.g. law, government).\\n\\nWe hope our new experiments and arguments have addressed your concerns! We look forward to hearing your thoughts and are happy to conduct new experiments to address any further concerns. Again, thank you for your time and constructive feedback!\"}", "{\"title\": \"Summary of Revisions\", \"comment\": \"We thank the reviewers for their continued engagement with our work! 
In response to the original comments and points brought up during discussion, we have updated the paper with a few revisions.\\n\\nIn the revised version, you will see text in *red* and text in *blue*. The text in *blue* is content that was exactly present in the original version (bar minor rephrasing) that was moved into a new section in the revision. The text in *red* is new content or content that we substantially changed.\\n\\n**Summary of major changes:**\\n1. We added a discussion of our new experiments measuring inconsistency on chatbot responses to mental health crises. The full discussion is provided in Appendix E, with mentions in Sections 4.2 and 7. This aims to address reviewer concerns regarding whether high inconsistency generalizes to other critical task domains, as well as whether our inconsistency score (and generally, BERTScore) can be used in other evaluation frameworks in other contexts.\\n2. We added a discussion of our more fine-grained analysis of BERTScore on synthetic, wargame-specific responses. The full discussion is provided in Appendix C, with a mention in Section 4.3. We also added a representative sample of response pairs taken from the experiments alongside their inconsistency scores to provide further human evaluation (Appendix D). Finally, we added to Section 3 that the BERTScore model we implement achieved a Pearson correlation with human judgment of 0.7781, as found by the original authors of the BERTScore paper. This aims to address reviewer concerns pertaining to the interpretation and validity of the inconsistency score.\\n3. We re-order Section 5 and Section 6 as recommended by Reviewer xYfm. That is, we moved our prompt sensitivity experiments above our evaluation at T = 1. This is in response to reviewer concerns regarding our conceptual problem definition and belief that the paper would be better framed if T = 0 was used as the default temperature. 
To that end, in the new Section 6, we write a bit about *why* we believe T = 1 to be well-motivated and worth evaluating. Because we acknowledge that this is an unintuitive concept, particularly to those removed from national security circles, we provide a full discussion of this motivation in Appendix G.\\n4. We demote the mention of bi-directional entailment clustering to a footnote so as not to overstate its importance to the present work and maintain the focus on BERTScore and our contributions pertaining to the validation of it for inconsistency evaluation. We add a sentence explaining why we do not include it in the main body. We maintain the full discussion of our tests with the method and why exactly we do not include it in Appendix F. This is in response to reviewer recommendations to clarify the mention of bi-directional entailment clustering and to better position an aspect of our contribution. \\n\\nWe also made some minor changes. For example, to accommodate the re-ordering of sections, we slightly change the abstract and introduction to ensure that our presentation stays strong and true to the present paper. We also added some missing citations, including those recommended by Reviewer cN3C. \\n\\nWe appreciate all the reviewers' feedback and time! We believe our paper is much stronger as a result. We have conducted additional experiments and addressed all concerns raised in the reviews to the best of our ability. We would greatly appreciate it if the reviewers could consider revising their scores in light of these updates or kindly provide further details if any concerns remain unaddressed.\\n\\nSincerely,\\\\\nThe Authors\"}", "{\"comment\": \"I thank the authors for the thorough work in addressing the concerns raised. I still think that more in-depth analysis and experimental comparison is needed. In the current state, it is difficult to position the novelty and significance. 
Additionally, I feel like the default temperature should have been T=0 and then additional results given for T=1 motivated by the reasoning given in the rebuttal.\"}", "{\"summary\": \"This paper proposes to examine how reliable LLMs are in high-stakes decision-making situations. For this, the authors conduct crisis simulations with free-form responses and use BERTScore to measure the inconsistencies across 5 LLMs. Across the experiments conducted, this study shows that there are inconsistencies even after fixing parameters like conflict anonymization and temperature, and shows that prompt variations can lead to greater inconsistency than temperature adjustments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"the paper is well written and the methodology is easy to follow\", \"with the increasing use of LLMs in different areas, the focus on high-stakes decision-making contexts is timely and of importance\"], \"weaknesses\": [\"The framework for measuring inconsistency uses only BERTScore. This potentially limits the evaluation setting to the discrepancies found through semantic similarity, missing other forms of inconsistency.\", \"There is no human evaluation or correlation to human judgement of the inconsistency score.\", \"I understand that the objective of this study is to probe LLMs for inconsistency in this particular context. While this study underlines a problem, it does not suggest possible mitigations for the issue of inconsistency.\"], \"questions\": \"Besides inconsistency, is the proposed evaluation framework able to highlight other types of errors or discrepancies?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
}
0pLCDJVVRD
A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language
[ "Ekdeep Singh Lubana", "Kyogo Kawaguchi", "Robert P. Dick", "Hidenori Tanaka" ]
Increase in data, size, or compute can lead to sudden learning of specific capabilities by a neural network---a phenomenon often called "emergence". Beyond scientific understanding, establishing the causal factors underlying such emergent capabilities is crucial to enable risk regulation frameworks for AI. In this work, we seek inspiration from study of emergent properties in other fields and propose a phenomenological definition for the concept in the context of neural networks. Our definition implicates the acquisition of general regularities underlying the data-generating process as a cause of sudden performance growth for specific, narrower tasks. We empirically investigate this definition by proposing an experimental system grounded in a context-sensitive formal language, and find that Transformers trained to perform tasks on top of strings from this language indeed exhibit emergent capabilities. Specifically, we show that once the language's underlying grammar and context-sensitivity inducing regularities are learned by the model, performance on narrower tasks suddenly begins to improve. We then analogize our network's learning dynamics with the process of percolation on a bipartite graph, establishing a formal phase transition model that predicts the shift in the point of emergence observed in our experiments when intervening on the data regularities. Overall, our experimental and theoretical frameworks yield a step towards better defining, characterizing, and predicting emergence in neural networks.
[ "Emergence", "Percolation", "Formal languages" ]
Accept (Poster)
https://openreview.net/pdf?id=0pLCDJVVRD
https://openreview.net/forum?id=0pLCDJVVRD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u48jXkS2DH", "u2IbhIAH7y", "taTcTxx4OB", "pEgvcnI0kK", "m3lywEtakk", "gRqbSj3Npv", "clcmQpV4fC", "Z1Kiaj17s1", "YvyIbFNpEJ", "Yn5jtpYT9m", "Yg4G694NFz", "XfEoujJ4Kt", "XdAG5ubAno", "UehPPcYApd", "SUynrz5adW", "PQYUCkwGrN", "OLk2q0lrEi", "K33ctDunYr", "Gw2oIdkGeC", "FpTXnPstHH", "E8vBwkeDUh", "Bqbq1W1fRa", "AL0oqe19sm", "97vTCBeAxp", "3kW7onzR4G", "3YCON3kUFT" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732208854290, 1730629928571, 1730702849078, 1732208888587, 1731987518304, 1732066037040, 1731985718573, 1730306659538, 1731987325834, 1737523527632, 1732666171552, 1730750579279, 1731988020778, 1733684927269, 1731987676849, 1732201119072, 1731987742537, 1731985288675, 1731993252817, 1731986971411, 1732660828785, 1731990243810, 1731986674914, 1731986125753, 1731988065176, 1732395924162 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_SNGk" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_NVjv" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_NVjv" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_Wp3H" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_dr5M" 
], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Area_Chair_N8pm" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_Wp3H" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_dr5M" ], [ "ICLR.cc/2025/Conference/Submission2731/Reviewer_SNGk" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ], [ "ICLR.cc/2025/Conference/Submission2731/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their response and are glad to see they appreciated the revisions. Please let us know if you have any further questions!\"}", "{\"summary\": \"This paper investigates the phenomenon of \\\"emergence\\\" in neural networks, where a model suddenly acquires certain capabilities after reaching a critical data or computational threshold. The authors propose a new definition of emergence in neural networks, linking it to the acquisition of general structures within the data that drive sudden performance improvements in specific tasks. The authors experiment with Transformers trained on a context-sensitive formal language and observe that once the underlying grammar and context-sensitivity structures are learned, performance on various tasks improves dramatically. This phase transition in model learning is likened to percolation on a bipartite graph, where learning dynamics mirror phase changes. 
Their results suggest that emergence can be theoretically predicted by understanding the structure of the data-generating process, offering insights for regulating and anticipating the behavior of AI models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Generally, this paper builds a bridge between LLMs and complex systems in physics. The paper uses phase transitions from complex systems theory to analyze the emergence of LLMs. This paper has the following strengths:\n\n1. This paper provides a clear definition of emergence, which is slightly different from previous papers, but it is more formal and general. Also, this definition helps further research on the measurement of emergence.\n\n2. This paper trained the LLM on formal languages, which are generated from a strict grammar with type checking. It aligns with current research.\n\n3. The paper\u2019s findings on emergence and phase transitions are potentially generalizable to other neural network models, not just Transformers trained on formal languages.\", \"weaknesses\": \"1. Previous paper[1] has already claimed that the emergence abilities is mirage. The paper does not clearly address contradictions with previous work: why does the phenomenon of emergence still occur in this study?\n\n2. The selection of formal language, though it is very popular in recent researches, but the situation is that the models not trained on formal languages still shows good performance. The observation is not convincing for such situations. \n\n3. In graph theory, diminishing marginal effects are quite common; however, there is no clear evidence linking this to the percolation model proposed in this paper. Many graph-theoretic functions exhibit properties such as submodularity, which is one of the reasons behind these phenomena. The final emergence modeling presented in this paper is not entirely intuitive.\n\n[1] Schaeffer R, Miranda B, Koyejo S. 
Are emergent abilities of large language models a mirage?[J]. Advances in Neural Information Processing Systems, 2024, 36.\", \"questions\": \"Please refer to the weakness part:\n\n1. Please justify the relationship with the previous paper, and the reason why we can still believe that current LLMs have emergence. If the definition is different, please justify the reason why the new definition is equivalent to or a proper approximation of previous ones.\n\n2. Please justify the use of formal languages, and what will happen if we do not train on well-typed formal languages. \n\n3. Please provide more physics intuition for the current emergence model. For example, during the freezing of water, the Gibbs free energy of water molecules changes, thereby affecting the intermolecular distance and, on a macroscopic level, the volume. We consider this process to be a phase transition.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies the emergence of abilities over the course of training in a transformer language model trained on a formal language. The authors identify distinct phases where different abilities emerge. They also study the point at which one of the abilities transitions from memorization to generalization, and show that this point empirically follows a scaling law that matches the theoretical scaling of bond percolation in a bipartite graph.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The phenomenon of emergence of model abilities with scale, and how suddenly this can occur, is of both scientific and societal importance, together with related questions about the transition from memorization to generalization. The paper studies these using a toy setup that is both similar enough to realistic setups to be interesting, but simple enough to be able to isolate and study both of these phenomena. 
The theoretical explanation using bond percolation is insightful and deserving of follow-up work.\", \"weaknesses\": \"The paper makes claims about \\\"structures learned by the model\\\" (in Definition 1 and Section 5.1 Phase 1), but I do not think that these are really justified by the evidence in the main body of the paper, which only look at performance metrics. There is some analysis of attention maps in Appendix F.6. However, the main evidence given there seems to be that there is increased sparsity at partially-trained checkpoints compared to initialization, and other qualitative claims that are hard to read off from the plots. It would be easier to tell if these were quantified, but my impression is that this evidence is rather weak. I also think that if this evidence were stronger, it should be in the main body of the paper, since it would be necessary to justify this prominent claim.\\n\\nThat being said, I think there is enough interesting material in the paper without looking at model internals, so my suggestion would be to remove or significantly de-emphasize these claims/this aspect of the paper.\\n\\nMore broadly, I found some of the opening discussion and the definition given in Section 2 a little unnecessary, and took up space that would have been better devoted to explaining the experimental setup and results more clearly, and perhaps covering more results that only made it into appendices. In my opinion it would have been enough to give the high-level motivation, instead of couching it in terms of a new definition that doesn't really add much (especially if the claim about structure in the model is removed).\\n\\nI also found that at times the presentation got too bogged down in formal details (e.g. Definition 2), and would have preferred to have seen a more accessible, plain-language explanation of things and simple examples, with formal details relegated to appendices for reference if necessary. 
At other times I found the exposition too rambling (e.g. Section 5.1 Phase 3), and it would have been easier to follow if the main points had been separated out and made concisely (e.g. using bullet points / short headings).\", \"more_minor_points\": [\"In definition 1 (if you are keeping it), \\\"nonlinear\\\" could be confusing (e.g. quadratics are non-linear but still change gradually). Maybe you mean \\\"discontinuous\\\"? Or I would perhaps argue that the relevant thing is how gradual the change is (steepness of slope, even if it is locally linear).\", \"In definition 2, I would have found it a bit clearer to say that S is a non-terminal symbol, and just say you start from S, instead of treating it as a special case and saying you first map S to other non-terminal symbols \\u2013 like the definition in Appendix C. (Also, the definition in Appendix C looks messed up, you seem to be swapping between N and NT / Sigma and T, unless I am misunderstanding something.)\", \"I found definition 3 hard to follow. E.g. \\\"Entities have unique identifiers associated with them to help define subjects and objects in a sentence\\\" - do you mean e.g. \\\"John\\\" will have certain attributes like \\\"tall\\\", \\\"brown-eyed\\\" etc.? Consider using plainer language and an example.\", \"Line 227 \\\"Humans\\\" vs line 228 \\\"humans\\\" - inconsistent capitalization could cause confusion (I assume these are the same thing).\", \"Line 260: For the indicator variable, maybe consider \\\\mathbbm{1} (from package bbm) instead of \\\\delta (though this is maybe just personal preference)\"], \"questions\": \"Is the specific task (free generation/unscrambling/conditional generation) specified to the model somehow, e.g. with a special token?\\n\\nFor the unscrambling task, is the solution necessarily unique? 
If not, what's the justification for using exact match/average probability of valid tokens?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their response and are glad to see our clarifications were helpful. Please let us know if you have any further questions!\"}", "{\"comment\": \"- **Schaeffer et al. make a very strong assumption.** To develop their toy theoretical model that underlies the claim \\u201cemergence is a mirage\\u201d, Schaeffer et al. make the assumption that an individual token\\u2019s loss follows power-law scaling. *This assumption is too strong and arguably incorrect*: the loss, *when averaged across a large number of tokens*, follows a power law (as popularly shown in literature on scaling laws); however, individual token dynamics can in fact be wildly different, and in general unlikely to follow a power law scaling. *In fact, empirical evidence of this point was demonstrated most recently by Schaeffer et al. [8] themselves!* In their recent paper, the authors show that learning dynamics of individual answer tokens' loss can show discontinuous progress. For further evidence in this vein, see the paper by Michaud et al. [9] that shows sudden loss drops for individual task dynamics, but recreates scaling laws via an averaging of loss dynamics across several tasks. Du et al. [10] also demonstrate similar results for pretrained LLM checkpoints.\\n\\n - **Relating to our work:** In our work, we can concretely show Schaeffer et al.\\u2019s assumption does not hold: e.g., in Figure 4, we show loss curves for individual tasks, finding that learning dynamics of tokens corresponding to just these individual tasks (a subset of the overall tokens in a sentence) does not follow a power law! 
Thus, we can again comfortably conclude that Schaeffer et al.\\u2019s assumption, which was too strong to begin with, does not hold in our setting and hence their claims cannot explain our results.\\n\\n**Summary:** Overall, we believe that the claim made by Schaeffer et al. is narrower in scope than what is claimed in the paper, and the underlying rationale behind their argument involves a rather strong assumption that is unlikely to hold for language model training (as shown by authors\\u2019 own follow-up work), and definitively does not hold in our setting (as shown by our results).\\n\\n[1] Are emergent abilities of large language models a mirage?, Schaeffer et al. (NeurIPS 2023)\\n\\n[2] Abrupt Learning in Transformers: A Case Study on Matrix Completion, Gopalani et al. (NeurIPS 2024)\\n\\n[3] Asymptotic theory of in-context learning by linear attention, Lu et al. (arXiv 2024)\\n\\n[4] A phase transition between positional and semantic learning in a solvable model of dot-product attention, Cui et al. (NeurIPS 2024)\\n\\n[5] Sudden drops in the loss: Syntax acquisition, phase transitions, and simplicity bias in MLMs, Chen et al. (ICLR 2024)\\n\\n[6] Compositional abilities emerge multiplicatively: Exploring diffusion models on a synthetic task, Okawa et al. (NeurIPS 2023)\\n\\n[7] Emergence of hidden capabilities: Exploring learning dynamics in concept space, Park et al. (NeurIPS 2024)\\n\\n[8] Why has predicting downstream capabilities of frontier ai models with scale remained elusive?, Schaeffer et al. (NeurIPS 2024)\\n\\n[9] The quantization model of neural scaling, Michaud et al. (NeurIPS 2023)\\n\\n[10] Understanding Emergent Abilities of Language Models from the Loss Perspective, Du et al. (NeurIPS 2024)\\n\\n---\\n\\n> **The selection of formal language, though it is very popular in recent researches, but the situation is that the models not trained on formal languages still shows good performance. 
The observation is not convincing for such situations.**\\n\\nThank you for this question. Indeed, modern language models are primarily trained on \\u201cnatural language\\u201d data, largely sourced from internet corpora. However, to achieve *scientific* progress, we have designed synthetic formal language data and reproduced the phenomenon of \\u201cemergent abilities in LLMs.\\u201d This controlled setup allows us to precisely characterize and understand the mechanisms behind emergence, which would be challenging to achieve with the complexities of natural language data. This approach is akin to using \\u201cmodel systems\\u201d in other scientific disciplines, such as medical research, where organisms like mice, marmosets, and macaques are studied to uncover biological mechanisms before translating findings to humans. Similarly, our use of formal languages as a model system has enabled us to uncover novel mechanisms underlying emergent behaviors in LLMs, providing a foundation for future investigations. While testing these claims directly on natural language pretraining dynamics is beyond the scope of the current paper, as the reviewer noted in their comments, our setup yields a sufficiently decent abstraction of natural language settings such that we can expect the results to generalize. We thus hope our insights will motivate further study of emergence from this lens, including an empirical study of in-the-wild LLMs.\\n\\n---\\n\\n**[Continued below...]**\", \"title\": \"Rebuttals (2/4)\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the thorough rebuttal, and I am glad to see that some of my suggestions were helpful.\\n\\nThe new phrasing of definition 1 is indeed clearer. 
However, it seems to me that the way you check the third bullet point (that the model \\\"learns regularities underlying the data\\\") is by checking the performance on downstream tasks that require learning the regularity, making this requirement somewhat redundant with the second bullet point. Would not a more parsimonious definition that better reflects your usage simply be \\\"a discontinuous improvement in the performance of tasks benefiting from C\\\"? But then the definition is almost tautological, which gets back to why I didn't find this discussion section especially productive.\\n\\nThat said, I don't find this section objectionable, only somewhat redundant, so if you want to keep this section because it appeals to another audience (such as physicists), then I don't hold it against you. My score reflects the merits in the remainder of the paper.\"}", "{\"title\": \"Rebuttals (2/3)\", \"comment\": \"2. **Evaluating broader language statistics.** In Appendix F.4, we report results on how accurately the model captures the statistics of our language. Specifically, we randomly sample 1000 sentences from the language at different checkpoints and report the min / mean / max value of (i) sentence NLL under the data-generating process (i.e., the language itself), (ii) parse tree depths, and (iii) sentence lengths. The latter two metrics, while crude, give an idea of whether random sampling from the model yields sentences that are relatively unlikely under the language itself (e.g., sentences with large tree depths or large lengths). As results show, the model\\u2019s distribution, as captured by min / max / mean values of these metrics, matches that of the ground truth language\\u2019s! For example, parse tree depths vary from 3\\u201315 in both the ground truth language and the model generations.\\n\\n3. 
**Attention maps.** We also report the attention maps at different points in training in Appendix F.5, finding the parse tree of a sentence is broadly represented in the attention maps themselves. This is especially clear for longer sentences, e.g., ones with relative type constraints. \\n\\n----------------------------\\n> **Not clear what is the point of the percolation model: This seems less about emergence of structure in the model\\u2026 I\\u2019m not sure what the analogy is between learning type constraints ... and graph percolation ...**\\n\\nThank you for raising this question! We have now added a new figure and section in the appendix (see App. D.1) to better clarify the percolation analogy, while slightly rewording the exposition in the main paper and adding a reference to this new section for details. We provide a summary of the analogy below, but first clarify what structure means in our paper.\\n\\n**On the term \\u201cstructure\\u201d:** We note that our claims and results are not about the emergence of a \\u201cstructure in the model\\u201d, i.e., we do not claim model neurons suddenly organize in some structured fashion at the point of emergence (as happens, say, in grokking). Instead, our claim is about the model learning the *structure underlying the data*. For example, we deem *type constraints as a structure present in our language*; these constraints define a set of rules that control which entities and properties can be seen together in a sentence, hence yielding context sensitivity in our language. By the phrase \\u201cmodel learns a structure\\u201d, we mean that the model identifies and learns a *regularity in data*. The analogy to the problem of percolation on a bipartite graph is meant to capture the learning of this data-regularity, giving us a predictive model of how the neural network learns to suddenly predict an unseen combination of entity and property that can be used to form a valid sentence. 
\\n\\n**Analogy between percolation and learning of type constraints.** Broadly, we analogize a model\\u2019s learning of type constraints over time as the addition of edges to a bipartite graph that represents our neural network\\u2019s knowledge: this imagined graph represents which entity\\u2013property combination the network can possibly use to form a valid sentence. More specifically, consider a time-varying bipartite graph whose nodes denote entities (left side of the graph) and properties (right side of the graph) (see the newly added Figure 11a). As training proceeds, the model sees sentences that show a subset of these entities and properties can be combined to form a valid sentence; we thus put an edge between an entity and property nodes if the model has seen a sentence defined using them (see Figure 11b). Then, every training iteration, a new set of edges is added to the bipartite graph (see Figure 11c). \\n\\nAfter enough time has passed, we will have added enough edges to the graph above such that for any randomly picked entity and property nodes, there exists a path on the graph that connects them (see Figure 11d). That is, from an entirely statistical perspective, the model will have been shown enough data to know that even an unseen combination of entity and property can be used to form a valid sentence. *The critical edge density that leads to the existence of such paths, and hence assures information necessary to learn about which entities and properties can be combined to form a valid sentence has been seen, is exactly the problem of bond percolation on a bipartite graph:* in that problem, we aim to identify the critical number of edges needed for the formation of a large connected component in the graph, i.e., for formation of paths between any random set of nodes from the graph. If this analogy is correct, we can expect the transition point of learning descriptive type constraints to follow the theory of bond percolation on a bipartite graph. 
Our results show this prediction checks out: we find the transition point scales in the same way as the percolation theory predicts.\n\n-----\n\n**[Continued below...]**\"}", "{\"summary\": \"This paper studies emergence of structure and capabilities of a small transformer throughout training on a formal language dataset.\nThey identify certain phase transitions that correlate with the emergence of capabilities to do specific tasks.\nThey then propose a formulation to predict phase transitions where emergent capabilities arise, and find that it aligns well with the formal language toy setting.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and a pleasure to read. The paper also seems to be placed well in the context of previous and current related work on emergence. Emergence is an interesting topic for the community, and this paper provides a nice background and definition for studying it in terms of training data. And, while the setting studied is simple, the findings are well supported by their experiments, and the appendix has well-detailed additional evidence.\", \"weaknesses\": \"There are other aspects of emergence that are not investigated here that need further study. This paper studies emergence over training data scaling, but they mention other axes (e.g. compute or parameter size) that I feel are also important to make more general claims regarding emergence. While the results in this paper are reasonable for the chosen setting, it is unclear whether they will hold in other settings and data choices.\n\nI also wanted to point out a few (recent) papers that are missing from related work but seemed relevant. The first is Singh, et al.'s [1] work that studies phase transitions of learning subcircuits for in-context learning tasks. The second is Tigges, et al. 
[2]'s work, which studies how known circuits evolve in the Pythia suite of models over the course of training.\\n___\\n[1] Singh, et al. What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation. 2024. (https://proceedings.mlr.press/v235/singh24c.html).\\n\\n[2] Tigges, et al. LLM Circuit Analyses Are Consistent Across Training and Scale. 2024. (https://openreview.net/forum?id=1WeLXvaNJP)\", \"questions\": \"There was a discussion about order parameters early on in the introduction, but this was then ignored until the last paragraph of the conclusion. Can you clarify how your definition of order parameters are different than/related to \\\"progress measures\\\" that others have proposed to study phase transitions (e.g. [3,4])?\\n\\n___\\n[3] Barak, et al. Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. 2022. (https://openreview.net/forum?id=8XWP2ewX-im)\\n\\n[4] Nanda, et al. Progress measures for grokking via mechanistic interpretability. 2023. (https://openreview.net/forum?id=9XFSbDPmdW)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttals (1/4)\", \"comment\": \"We thank the reviewer for their feedback! We are glad to see they enjoyed our contribution bridging the study of emergence in neural networks and complex systems, found our formal definition can help instigate further work on the topic, and liked our formal languages setup and findings therein, expecting them to generalize to real systems. We also appreciated their high scores on the contribution, presentation, and soundness of our paper! Below, we respond to specific comments.\\n\\n---\\n> **Previous paper[1] has already claimed that the emergence abilities is mirage. 
The paper does not clearly address contradictions with previous work: why does the phenomenon of emergence still occur in this study?**\n\nFirst, we would like to clarify that (i) Schaeffer et al.\u2019s work argued that a very specific type of emergence curve, under a strong set of assumptions, could be transformed into a smooth curve by redesigning the evaluation metric, and (ii) in the past year and a half since Schaeffer et al.\u2019s work, numerous papers have been published that have refined the definition of emergence and reproduced emergent phenomena across diverse setups, further validating its robustness.\n\nBuilding on this, we note that we discuss and contrast our results with the Schaeffer et al. work [1] at several points in our paper, including references in the intro, Section 4, and a detailed discussion in related work (Appendix B). For example, we mention in Section 4 (L262) that our motivation for reporting multiple metrics is to ensure our results are not confounded by Schaeffer et al.\u2019s argument, i.e., that emergence is driven by use of poorly defined, discontinuous metrics. While we refer the reviewer to Appendix B.1 for a more detailed discussion, *broadly, our argument is that Schaeffer et al.\u2019s claim is narrower in scope than what is claimed in their paper:* we provide three precise arguments to demonstrate this in Appendix B.1, and, for brevity, summarize two of them below. *These arguments clearly demonstrate that we see emergence in our setting because our setup is outside the scope of Schaeffer et al.\u2019s claims.*\n\n- **Several studies have shown emergent abilities with continuous metrics.** There is significant evidence for emergent abilities in neural networks that does not rely on use of discrete metrics. For example, in a recent work, Gopalani et al. [2] show a sudden performance increase in a regression setting with an *entirely continuous metric*---specifically, mean square error. 
Similarly, phase transitions have been formally demonstrated in recent works, e.g., by Lu et al. [3] in regards to in-context linear regression and by Cui et al. [4] in regards to positional code learning in a histogram computation task. Meanwhile, Chen et al. [5] have shown a sudden loss drop in BERT training, which, again, is a continuous metric. *Overall, given this substantial evidence, we believe the argument by Schaeffer et al. is narrower in scope than what is claimed in their paper*---there are legitimate cases of emergence in neural network training that occur without the use of discrete metrics. \n\n - **Relating to our work:** In our work, we show performance improvements co-occur with loss drops, i.e., a continuous metric. We can thus comfortably say our results are not confounded by Schaeffer et al.\u2019s argument. In fact, as we mention in Section 4, we also report several other (both continuous and discrete) metrics that show sudden improvements, indicating our results are not confounded by the use of poorly defined metrics.\n\n**[Continued below...]**\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their response and are glad to see our clarifications were helpful. Please let us know if you have any further questions!\"}", "{\"summary\": \"The paper studies emergent capabilities in transformers via two case studies. In the first study, they look at learning of formal languages (in particular, a language generated via a PCFG). 
For this setting, they train GPT-2 sized models from scratch for:\n- left-to-right auto-regressive language modeling\n- an unscrambling task that requires the model to take a set of words and convert it into a valid string\n- a conditional generation task that requires the model to generate a sentence that has certain words in it.\n\nAs the model trains, they track grammaticality (as measured by whether the model generates strings that the PCFG accepts), and if the generated strings follow type constraints. They break down learning into 3 phases, and find that these phases correspond to jumps in the downstream performance (either exact match acc for unscrambling, or loss for language modeling). \n\nIn the second study, they study concept acquisition where entities are associated with types. In particular, they model a concept matrix where row i corresponds to the ith entity, and column j corresponds to the jth type, and the ij entry in the matrix is the probability with which these are seen together. They then define a concept propagation matrix, and use connectedness properties of this propagation matrix to define phase changes. They find that analytic values of these connectedness properties correlate with whether the transformer learns specific concepts.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper is extremely well written, and focuses on clearly understanding the phenomenon of emergence (albeit in the limited setting of language modeling of formal languages).\", \"Explores a new setting of learning entity-type relationships, as percolation on a bipartite graph. I believe such a setting has not been explored before (though I'm not sure how it connects to emergence of skills / behaviors in transformers)\"], \"weaknesses\": \"Phases of learning: I\u2019m not convinced by the learning dynamics story here. Just because the model can generate accurate sentences does not mean that it has acquired grammar. 
Understanding whether the model has acquired grammar has been studied previously in NLP: a better method to do this would be to create minimal pairs with one grammatical and one ungrammatical sentence, and check if the model assigns a higher probability to the grammatical sentence. Of course, the design of the minimal pair needs to be well thought out, to eliminate shortcuts. Here is an example of a minimal pair that checks if a model can produce the correct number for a verb:\", \"s1\": \"The man who likes apples is here\", \"s2\": \"The man who likes apples are here\", \"not_clear_what_is_the_point_of_the_percolation_model\": \"This seems less about emergence of structure in the model, and more about how at a specific data setting, generalization can happen. I\\u2019m not sure what the analogy is between learning type constraints (which is a function of training time), and graph percolation (which is a function of the data properties |E| and |K|). But if the authors can clarify this, I'm happy to increase my score.\", \"not_clear_what_are_new_findings_in_this_paper\": [\"Many of the conclusions from this paper are also in Murty et al. 2024, who also train transformer language models on formal languages, and find emergence of the correct learning rule, with extended training. They also find that such emergence happens alongside the emergence of tree-structures in transformers.\", \"Similarly, Chen et al. also have a very similar setting but with masked language models, and show that grammar is abruptly acquired, and such grammar acquisition has a causal relationship with downstream performance.\", \"There\\u2019s also other work by Allen-Zhu et al., who train transformers on formal languages, and find evidence of learnability under some constraints.\"], \"questions\": [\"Do you see similar phase transitions for language learning with smaller models or bigger models? 
In general, do architecture tweaks change the dynamics in non-trivial ways?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttals (1/2)\", \"comment\": \"We thank the reviewer for their feedback! We are glad they found the paper to be \\u201ca pleasure to read\\u201d, well contextualized with respect to past and current work on the topic, and that they liked our discussion of emergence in other fields, which directly motivates our formalization of the concept. We also appreciated their high scores on the contribution, presentation, and soundness of our paper! Below, we respond to specific comments raised by the reviewer.\\n\\n\\n---\\n> **There are other aspects of emergence that are not investigated here that need further study. This paper studies emergence over training data scaling, but they mention other axes (e.g. compute or parameter size) that I feel are also important to make more general claims regarding emergence. While the results in this paper are reasonable for the chosen setting, it is unclear whether they will hold in other settings and data choices.**\\n\\nWe completely agree that other scaling axes are extremely crucial to study! In fact, we are actively working on projects that explore the effects of parameter-size scaling and context-size scaling. Below, we give a high-level overview of our current directions on parameter-size scaling. As we discuss, the precise theoretical models to explain emergence with respect to other axes can be different from the one proposed in this work. In particular, we expect part of the emergent abilities seen with respect to parameter scaling can be explained by our proposed model, but there are also fundamentally new capabilities that parameter scaling unlocks and that data scaling, by itself, cannot. 
For such capabilities, we expect no theoretical model for data scaling (including ours) will generalize.\\n\\n- **Effects of scaling parameters: differences from data-scaling.** First, we note that it is easy to see how emergent abilities with respect to parameter size scaling are already predicted by neural scaling laws. For example, if we take the Chinchilla scaling law and assume infinite data therein, we will get the minimum amount of loss achievable under infinite compute and a given model size (this operation assumes Chinchilla scaling perfectly explains model-size limited scaling regimes). We will thus find that the asymptotic compute loss follows a power law with the number of parameters in the model---larger models achieve a smaller asymptotic loss! This implies that to the extent loss is a good proxy for capabilities (see [1] for discussion on this), by merely scaling the model size, we can unlock new capabilities. Any theoretical model of emergence that does not account for the effects of the number of parameters in a network, including the percolation model we propose in this paper, is unlikely to explain the learning dynamics of such capabilities. \\n\\n- **Effects of scaling parameters: possible equivalences with data-scaling.** Consider the difference in asymptotic loss of a smaller model and the compute-optimal loss (following Chinchilla scaling) of a larger model. This loss difference is in effect the \\u201cspeed benefit\\u201d of parameter scaling: larger models train faster, yielding more reduction in loss than smaller models. Now, assume a smaller model trained using infinite compute (i.e., using infinite data) achieves a similar loss as the larger model\\u2019s compute-optimal loss, hence demonstrating similar capabilities. In such a scenario, we can expect any novel capabilities seen in a larger model and not in a smaller model to be a consequence of training under resource constraints. 
For such capabilities, we can expect our percolation model (and generally any model of emergence proposed for data scaling) to hold well, since scaling resources for a fixed model size primarily involves scaling data.\\n\\nIf the reviewer deems it useful, we are happy to summarize the discussion above in the final version of the paper.\\n\\n[1] Understanding Emergent Abilities of Language Models from the Loss Perspective, Du et al. (NeurIPS 2024)\\n\\n---\\n\\n> **I also wanted to point out a few (recent) papers there are missing from related work, but seemed relevant. The first is Singh, et al.'s [1] work that studies phase transitions of learning subcircuits for in-context learning tasks. The second is Tigges, et al. [2]'s work, which studies how known circuits evolve in the Pythia suite of models over the course of training.**\\n\\nThank you for highlighting these papers! We have now added references to them.\\n\\n---\\n---\\n**[Continued below...]**\"}", "{\"metareview\": \"This paper examines emergence in neural networks through the lens of percolation theory, using transformers trained on formal languages. The work provides a formal definition of emergence, demonstrates phase transitions in learning, and establishes theoretical connections to percolation on bipartite graphs. The reviewers were positive overall, with scores from 6 to 8, praising the paper's take on theory and clear presentation. While concerns were raised about the scope being limited to formal languages and its relationship to prior work on emergence, the authors provided responses that addressed these points. 
Though the generalizability of the theoretical framework to natural language itself is not yet made clear, this is a worthwhile toy model of a topical phenomenon that will be of interest at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed three key concerns: the relationship to prior work claiming emergence is illusory, the generalizability of results from formal to natural languages, and the physical intuition behind the percolation model. The authors' responses led to increased reviewer scores. Given the paper's strong theoretical foundation, clear presentation, and satisfactory handling of reviewer concerns, the work merits acceptance.\"}", "{\"title\": \"Rebuttals (3/4)\", \"comment\": \"---\\n> **In graph theory, diminishing marginal effects are quite common; however, there is no clear evidence linking this to the percolation model proposed in this paper. Many graph-theoretic functions exhibit properties such as submodularity, which is one of the reasons behind these phenomena. The final emergence modeling presented in this paper is not entirely intuitive.**\\n\\nThank you for your comment. The \\u201cdiminishing marginal effects\\u201d, where the impact of adding edges decreases as the graph becomes denser, is indeed common in **saturated regimes**. However, our focus in this paper is **emergence** and its explanation via percolation phase transition **before** the saturated regime. In this regime, adding a small number of edges can have a disproportionate impact, such as the sudden formation of gigantic connected clusters of nodes. This distinction of regimes is central to our use of percolation theory to model emergence. Please let us know if further clarification would be helpful, and we will incorporate it into the paper.\\n\\n---\\n---\\n**Further questions**\\n\\n> **Please justify the relationship with previous paper, and the reason why we can still believe the current LLMs have emergence. 
If the definition is different, please justify the reason why the new definition is equal or a proper approximation of previous ones.**\\n\\nPlease see our (now further expanded) discussion on the Schaeffer et al. paper [1] in Appendix B.1, and our response above where we summarize why emergence should indeed be expected in LLMs\\u2019 training. We believe the claims by Schaeffer et al., though credible in some scenarios, are likely overly broad and hence undermine legitimate cases of emergent abilities. We also believe the assumption underlying their work, i.e., that individual tokens\\u2019 loss is governed by a power law scaling, is too strong. Finally, we note that comparing our definition of emergence with Schaeffer et al.\\u2019s paper is not possible, since the authors do not offer a formal definition in their work. \\n\\n[1] Are emergent abilities of large language models a mirage?, Schaeffer et al. (NeurIPS 2024)\\n\\n---\\n> **Please justify the use of formal languages, and what will happen if we do not train on well-typed formal languages.**\\n\\n**On use of formal languages.** As noted in the paper (Section 3), our goal was to define a controlled system where we can run a rigorous set of experiments and demonstrate the sudden learning of capabilities by a neural network. To this end, we want to ensure the controlled system is sufficiently realistic such that its analysis can be expected to generalize to more naturalistic scenarios: as the reviewer noted in their comments, our use of a formal language setup is quite likely to enable this! More broadly, we note that this methodology is quite common in other scientific communities; e.g., in neuroscience, one often studies fruit flies for understanding human navigation abilities.\\n\\n**Consequences of using a not well-typed language.** We have in fact run experiments on languages which lack type constraints! 
Our results show that the second transition, i.e., the one explainable via the problem of graph percolation, is removed in such a setting; however, the first transition corresponding to syntax acquisition continues to exist. This is expected, since the second transition is about the learning of type constraints---hence, lack thereof will remove the transition. We also refer the reviewer to a recent, very detailed investigation of untyped formal languages\\u2019 learning dynamics by Pandey [1].\\n\\n[1] gzip Predicts Data-dependent Scaling Laws, Pandey (arXiv 2024).\\n\\n---\"}", "{\"comment\": \"Thank you to the authors for the detailed response and clarification on order parameters vs. progress measures, it is helpful. After reading the other reviews and authors' rebuttals, I think the paper would be a nice addition to the conference and have decided to keep my score.\"}", "{\"title\": \"Rebuttals (4/4)\", \"comment\": \"---\\n> **Please provide more physics intuition of the current emergence model. For example, during the freezing of water, the Gibbs free energy of water molecules changes, thereby affecting the intermolecular distance and, on a macroscopic level, the volume. We consider this process to be a phase transition.**\\n\\nWe thank the reviewer for requesting a more detailed physics intuition based on phase transition. The Ising model of magnetization, for example, is a simplified model of interacting spins, that exhibits a phase transition and has been used to understand phenomena beyond magnetization such as the liquid-gas transition in water and flocking behavior in birds. This suggests that even simplified models capturing the essence of cooperative interactions can demonstrate phase transition behaviors. Similarly, the percolation model, which we use as the primary analogy for our work, is another example of this broad applicability. 
\\n\\nPercolation, while not tied to a specific physical system like water, provides a framework for understanding how connectivity can influence phase transitions. In a porous medium, beyond a critical connectivity threshold, liquid flows through the medium. Analogously, in our model, when the density of conceptual connections (e.g., subject-verb pairs) exceeds a critical point, these connections \\\"percolate,\\\" linking previously isolated concepts and leading to emergent conceptual structures. This change in connectivity, like the flow in the porous medium, represents a qualitative shift in the system's behavior. This percolation perspective, while rooted in physical intuition, offers a general statistical framework for understanding the emergence we observe in our model as it is exposed to increasingly diverse data.\\n\\n---\\n---\\n**Summary:** We again thank the reviewer for their detailed feedback that has helped us better contextualize our paper with respect to past work. To this end, we have substantially expanded our discussion of Schaeffer et al.\\u2019s paper (see Appendix B): we provide three arguments showing why that work\\u2019s claims are narrower in scope (compared to what is presented in their paper) and reliant on a strong assumption. These arguments result in our work being out of the scope of Schaeffer et al.\\u2019s claims. In case our answers and these changes help address the reviewer\\u2019s concerns, we hope they will consider increasing their score to support the acceptance of our paper.\"}", "{\"comment\": \"We thank the reviewer for their feedback. We are glad they found our paper \\u201cextremely well written\\u201d, \\u201cfocused on clearly understanding the phenomenon of emergence\\u201d, and the connection between learning of type constraints and percolation a novel result! 
Below, we respond to specific comments.\\n\\n----------------------------\\n> **Phases of learning: I\\u2019m not convinced by the learning dynamics story here. Just because the model can generate accurate sentences does not mean that it has acquired grammar \\u2026 create minimal pairs with one grammatical and one ungrammatical sentence, and check if the model assigns a higher probability to the grammatical sentence\\u2026**\\n\\nThose are great points, and we agree that mere generation of valid sentences is insufficient to claim the model has learned grammar. *We\\u2019d like to highlight that this is precisely the reason why we reported in Appendices F.3\\u2013F.5 several experiments that stress-test or evaluate how accurately the model captures our formal language!* This includes experiments that are in line with the one suggested by the reviewer, i.e., assessing whether valid sentences are assigned higher probabilities when compared to invalid ones (Appendix F.3), and experiments where we assess whether broader statistics of sentences (e.g., distribution of parse tree depths and sentence lengths) generated by our language versus our trained model are similar (Appendix F.4). We also report attention maps in Appendix F.5 at different points in training, finding they (partially) encode parse trees. Below, we briefly summarize these results. We also note that to reflect your valuable feedback, we have now edited Section 4 to better emphasize that experiments evaluating and stress-testing how accurately the model captures our formal language are available in the appendix. \\n\\n1. **Evaluating likelihoods of grammatically (in)valid sentences.** In Appendix F.3, we generate valid sentences from our language and compute their negative log-likelihood (NLL) under the model. We then perturb these sentences (see list below), creating their invalid counterparts and comparing their NLL to the valid ones. 
We find that at the precise point where the NLL of valid sentences suddenly improves, the NLL of invalid sentences suddenly degrades! This first occurs at the point where we claim the model acquires syntax, then the point where it learns which subject-object pairs can be seen in the same context, and finally at the point where it starts to accurately learn which adjectives can be assigned to an entity. This one-to-one match gives credence to the idea that, at these points, the model is learning the rules underlying our language, since these points are precisely where it starts to associate a much worse NLL to sentences that are invalid under our language!\\n\\n - **Perturbation 1: Randomize grammar.** Herein, we randomly permute tokens from a valid sentence, hence breaking its grammatical structure. While this is an extremely strong intervention, we note the tokens constituting the sentence themselves are allowed to be seen in the same context, i.e., they follow type constraints. As we show in Figure 23d and 24d, the NLL of such sentences is much higher than the valid sentences (the latter is reported in Figure 23a and 24a). Thus, the model deems the perturbed sentences to be severely less probable than the valid ones.\\n\\n - **Perturbation 2: Randomize values.** This perturbation is similar to the one suggested by the reviewer. Specifically, we alter sentences to randomize specific parts of speech, e.g., we use either incorrect adjectives or incorrect subject-object pairs to form a sentence. As an example, consider a valid sentence such as \\u201cTall man walked on road\\u201d. Our perturbation may replace the object herein to create an invalid sentence such as \\u201cTall man walked on water\\u201d, or replace the adjective to create a sentence such as \\u201cPlastic man walked on road\\u201d. We perform one such perturbation per sentence. 
Results again show that the model assigns higher NLL to such perturbed sentences (Figure 23c and 24c) than it does to valid ones (Figure 23a and 24a)---that is, it deems perturbed sentences to be much less probable than the valid ones.\\n\\n**[Continued below...]**\", \"title\": \"Rebuttals (1/3)\"}", "{\"title\": \"Response to follow-up\", \"comment\": \"We thank the reviewer for their quick response and increased score!\\n\\n> **Here I have another smaller question based on the authors\\u2019 experiment on not well-typed language: Can these changes be linked to the concepts of first-order and second-order phase transitions? As the authors mentioned the Ising model, perhaps some connections lie behind this.**\\n\\nThis is an excellent point, and we do believe such a link is plausible! For example, given the sharpness of the transition corresponding to syntax acquisition, we believe it is possible there is a first-order transition at play here; meanwhile, the learning of type constraints, which is a more gradual change, seems likely to be a second-order transition. The latter is in fact in line with what we would expect from our theory of percolation phase transition: the percolation transition is a second-order transition, and hence we expect the progress curve to be continuous and smooth at the first-order derivative level, but there should be an undefined second-order derivative at the point of transition, which is clearly visible in our results (e.g., see Figure 4). This is also demonstrated very clearly in Figure 63, where we show that using a model of two linear fits with a discontinuity best explains the data, again indicating we are observing a second-order transition. 
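As a side note for readers following this exchange, the second-order character of the percolation transition discussed here is easy to reproduce empirically. The sketch below is an illustrative toy (arbitrary graph sizes and edge probabilities, not code from the paper): in a random bipartite entity-type graph with equal part sizes N, the largest-component fraction acts as the order parameter, staying near zero below the critical edge probability p_c ≈ 1/N and growing continuously above it.

```python
import random

def giant_fraction(n_entities, n_types, p, seed=0):
    """Fraction of nodes in the largest connected component of a random
    bipartite (entity, type) graph; each edge exists independently w.p. p."""
    rng = random.Random(seed)
    n = n_entities + n_types              # type nodes get ids n_entities..n-1
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for e in range(n_entities):
        for t in range(n_types):
            if rng.random() < p:
                ra, rb = find(e), find(n_entities + t)
                if ra != rb:              # merge the two components
                    parent[ra] = rb

    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Sweep the edge probability across the threshold p_c ~ 1/N.
N = 300
for p in (0.001, 0.003, 0.01, 0.03):
    print(f"p={p:.3f}  giant-component fraction={giant_fraction(N, N, p):.2f}")
```

Below p_c the largest cluster is a vanishing fraction of the graph; above it, a single giant cluster absorbs most entities and types, the same qualitative signature as the sudden onset of type-constraint generalization described in this thread.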
\\n\\nBuilding on the above, we also take this opportunity to highlight that our work is among the first to theoretically propose and empirically validate a \\\"phase transition\\\" model of emergence, creating a novel interdisciplinary bridge between the study of emergence in physics and AI---exactly the kind of conversation we are engaging in right now!\"}", "{\"title\": \"Rebuttals (2/2)\", \"comment\": \"**Minor points**\\n\\n> **(Paraphrased) In definition 1, \\\"nonlinear\\\" could be confusing. Maybe you mean \\\"discontinuous\\\"? ...**\\n\\nThe reviewer is correct. We have replaced the term \\u201cnonlinear\\u201d with \\u201cdiscontinuous\\u201d now.\\n\\n\\n> **In definition 2, I would have found it a bit clearer to say that S is a non-terminal symbol, and just say you start from S...**\\n\\nThanks for pointing this out! We have fixed the typos in appendix C, which had messed up the definition, and\\u00a0have also accommodated S as a non-terminal symbol in Definition 2 now.\\n\\n\\n> **I found definition 3 hard to follow....**\\n\\nThanks for raising this point! The quoted sentence that caused confusion is indeed a bit pedantic. The motivation behind having it was to create a delineation between a node and a token used to refer to that node (i.e., an identifier). This is useful for completeness, but arguably unnecessary for understanding either of the empirics or the theory. We have thus removed it from the definition.\\n\\n> **Line 227 \\\"Humans\\\" vs line 228 \\\"humans\\\" - inconsistent capitalization...**\\n\\nThanks for catching this! We have fixed the typo.\\n\\n\\n> **Line 260: For the indicator variable, maybe consider \\\\mathbbm{1}...**\\n\\nWe have switched to using \\\\mathbbm{1}. Thanks for the recommendation!\\n\\n---\\n---\\n**Further questions**\\n\\n> **(Paraphrased) Is the specific task specified to the model somehow, e.g. with a special token?**\\n\\nYes! 
There is a special token at the beginning of a sentence which conditions the model to perform one of the tasks.\\n\\n> **For the unscrambling task, is the solution necessarily unique?**\\n\\nThere are indeed sentences for which there is no unique solution, e.g., if there are multiple subjects present in a sentence and an adjective is valid for all of them, then its unscrambled version could have the adjective used for any of the subjects. Handling this ambiguity was noncritical since the percentage of sentences where this is a concern is relatively low (~10%). \\n\\n---\\n---\\n**Summary:** We thank the reviewer for their detailed feedback that has helped us improve the legibility of the paper. Specifically, we have now updated Section 3 to focus more on the intuitive interpretation of definitions. Please let us know if there are any further concerns and we would be happy to address them!\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for the comprehensive response!\\n\\nI'm still not super convinced about percolation being a _predictive model_ of emergence because it's just a data-driven property (instead of a model-driven property). It's also a property that I cannot see generalizing outside of your current experimental setup to real language modeling. \\n\\nBut I can see the context of it in this paper, and given the novelty, and experimental thoroughness, I'm willing to increase my score.\"}", "{\"comment\": \"I thank the authors for their detailed rebuttals. In general I agree with the authors' claim: emergence still exists in their measurements. Also, the new intuition from the connectivity threshold makes more sense to me, helping me understand how the authors developed their intuition. I will raise my score accordingly.\\n\\nHere I have another smaller question based on the authors' experiment on not well-typed language: Can these changes be linked to the concepts of first-order and second-order phase transitions? 
As the authors mentioned the Ising model, perhaps some connections lie behind this.\"}", "{\"title\": \"Rebuttals (1/2)\", \"comment\": \"We thank the reviewer for their detailed feedback! We are glad to see they found our contribution relating the problem of bond percolation to emergence insightful and deserving of follow-up work, while appreciating our proposed setup that enabled this connection. We also appreciated their high scores on the contribution and soundness of our paper! Below, we respond to specific comments.\\n\\n---\\n> **The paper makes claims about \\\"structures learned by the model\\\" (in Definition 1 and Section 5.1 Phase 1), but I do not think that these are really justified by the evidence in the main body of the paper, which only look at performance metrics\\u2026**\\n\\nThank you for this comment! We believe there is a misunderstanding because of poor phrasing on our end. Specifically, when we write *\\u201cstructures learned by the model\\u201d*, we do not intend to suggest that the model has undergone some fundamental reorganization of, e.g., its neurons or representations. This can certainly happen, but it is not the central focus of our work. Instead, by *\\u201cstructure\\u201d*, we mean an entirely data-centric concept: we mean to suggest there are regularities present in the data, learning of which will aid learning of downstream tasks. For example, syntax is a structure (i.e., a regularity) present in our formal language; when our model learns that structure, we claim and show there is a sudden improvement in performance of the narrower task of sentence unscrambling. This can certainly manifest as a mechanistic reorganization in the model, but we do not aim to investigate it within the scope of this project (though we indeed are interested in it!). 
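For readers unfamiliar with the PCFG machinery referenced throughout this thread, the kind of syntactic regularity being discussed can be sketched in a few lines: non-terminals expand via weighted productions starting from a start symbol S, and anything without a production is a terminal. The grammar and vocabulary below are a hypothetical toy for illustration, not the paper's actual language:

```python
import random

# Toy PCFG: each non-terminal maps to (production, weight) pairs.
# Symbols and words here are illustrative stand-ins, not the paper's grammar.
RULES = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("Adj", "N"), 0.5), (("N",), 0.5)],
    "VP":  [(("V", "P", "NP"), 0.6), (("V",), 0.4)],
    "Adj": [(("tall",), 0.5), (("small",), 0.5)],
    "N":   [(("man",), 0.5), (("dog",), 0.5)],
    "V":   [(("walked",), 0.5), (("ran",), 0.5)],
    "P":   [(("on",), 1.0)],
}

def sample(symbol, rng):
    """Expand `symbol` recursively; any symbol without a rule is a terminal."""
    if symbol not in RULES:
        return [symbol]
    prods, weights = zip(*RULES[symbol])
    chosen = rng.choices(prods, weights=weights, k=1)[0]
    return [tok for s in chosen for tok in sample(s, rng)]

rng = random.Random(0)
print(" ".join(sample("S", rng)))
```

The paper's setting additionally layers type constraints (which adjectives and objects an entity admits) on top of this context-free skeleton; that context-sensitive part is what the percolation argument addresses.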
\\n\\nTo clarify that the notion of \\\"structure\\\" in this paper is intended to be data-centric, we initially used the term *\\u201cdata structure\\u201d* in earlier parts of the paper (e.g., contribution 2 was titled \\u201cLearning of general data structures underlies simultaneous jumps in specific metrics\\u201d). However, given the reviewer\\u2019s comment, we understand that merely saying \\u201cdata structure\\u201d is likely still ambiguous. To address this issue, we have now replaced the term \\u201cstructure\\u201d with variants of *\\u201cregularities underlying the data\\u201d* throughout the paper. We hope this change helps address the reviewer's concerns!\\n\\n---\\n\\n> **More broadly, I found some of the opening discussion and the definition given in Section 2 a little unnecessary, and took up space that would have been better devoted to explaining the experimental setup and results...**\\n\\nThank you for your thoughtful feedback. We agree that achieving a balance between introductory discussion and detailed results is important. However, we would like to highlight that our submission is a rather special case as it introduces one of the first \\u201cphase transition\\u201d models of emergence in LLMs, and is aimed at building a conceptual bridge to physics---a field that has been studying emergent phenomena for decades. We believe this interdisciplinary approach can inspire researchers from physics to engage with the study of emergence in neural networks while simultaneously bringing valuable concepts from physics into ML research.\\n\\nTo this end, we intentionally adopted a somewhat pedagogical writing style, even if it increases the length, as we believe it enhances accessibility and broadens the paper\\u2019s impact. Based on personal interactions and feedback, this approach has been serving its intended purpose effectively. 
That said, if there are specific parts of the introductory sections that the reviewer feels could be streamlined or adjusted, we are more than happy to consider revisions to address these concerns.\\n\\n---\\n> **I also found that at times the presentation got too bogged down in formal details (e.g. Definition 2), and would have preferred to have seen a more accessible\\u2026 At other times I found the exposition too rambling (e.g. Section 5.1 Phase 3), and it would have been easier to follow if the main points had been separated out\\u2026**\\n\\nWe thank the reviewer for this feedback and agree Definition 2 can be made more legible. We have attempted this in the updated draft: the definition, while still being formal, is now focused on solely emphasizing the relevant concepts that constitute a PCFG. The following discussion, i.e., the two paras after the definition, now discusses how these core concepts are instantiated in our work. For example, how terminal symbols in our work are usual parts-of-speech from English. \\n\\nFor Section 5, we note that while we do already decompose discussion according to phases, a more fine-grained itemization of observations in these individual phases is difficult to achieve because of space constraints. However, following reviewer\\u2019s suggestion, we have separated out the central takeaways of each phase now by boldfacing them. We hope this helps improve the legibility of that section!\\n\\n---\\n---\\n**[Continued below...]**\"}", "{\"title\": \"Rebuttals (3/3)\", \"comment\": \"> **Not clear what are new findings in this paper: Many of the conclusions from this paper are also in Murty et al. 2024, who also train transformer language models on formal languages ... Similarly, Chen et al. also have a very similar setting but with masked language models, and show that grammar is abruptly acquired ... 
There\\u2019s also other work by Allen-Zhu et al., who train transformers on formal languages ...**\\n\\nThank you for highlighting the need to clarify the novelty of our contributions in relation to recent works. While the papers referenced by the reviewer are highly relevant to our work, we emphasize there are crucial differences that distinguish our contributions, as we summarize below. \\n\\n- **A percolation model of emergence.** To our knowledge, we are the first to establish a connection between the theory of percolation on a bipartite graph and emergent abilities in neural networks, yielding a predictive estimate of the point of emergence. We note none of the papers referenced by the reviewer offer this contribution; in fact, the papers do not offer any predictive models for their studied setups. The closest result is perhaps the tree-structuredness metric by Murty et al. (2024); however, that metric merely correlates with their task of interest, i.e., it is not a predictive measure that can preemptively predict how much training is required before the model possesses their capability of interest.\\n\\n- **A novel formal language setup with context-sensitivity for studying neural networks.** Prior works studying neural networks via formal languages primarily focus on *context-free* languages, which are fundamentally constrained in their richness: such languages only possess syntactic properties, thus offering a setup for studying merely syntax acquisition in neural networks. Further, learning such languages is relatively easy, which is likely the reason why model scaling or data scaling rarely has any effect on the transition point where syntax is learned. In contrast, we study *context-sensitive* languages in our work, which are a step towards capturing the richness of natural language, but still offer a controllable setup for performing a theoretical analysis. 
To our knowledge, we are in fact the first to define a context-sensitive language setup for studying modern neural networks\\u2019 learning dynamics. We expect this setup will be of interest to the community at large.\\n\\n- **A study of emergence.** We also emphasize our work\\u2019s primary focus is the study of emergence---to this end, we offer a novel definition of the concept and make a connection to other disciplines (specifically, physics) that can yield insights for further study. However, except for Chen et al., the listed papers by the reviewer are **not** focused on studying emergent abilities. For example, the paper by Allen-Zhu et al. is merely focused on how neural networks trained on a PCFG learn to implement it. Similarly, the paper by Murty et al. on structural grokking, despite its title, does not exhibit sudden / emergent learning---in fact, the learning dynamics of their studied tasks are very smooth. While Chen et al. does share a similar motivation to ours, i.e., to analyze sudden drops in MLMs\\u2019 training loss, their focus is solely on syntax acquisition---an entirely context-free property. Meanwhile, we focus on an emergent ability that corresponds to a context-sensitive property (modeling of type constraints).\\n\\nOverall, given the differences above, we believe the contributions of our work are entirely novel and not captured by prior works. \\n\\n---\\n---\\n**Further Questions**\\n\\n> **Do you see similar phase transitions for language learning with smaller models or bigger models? In general, do architecture tweaks change the dynamics in non-trivial ways?**\\n\\nWe did try to elicit effects of model size scaling by increasing both depth and width to twice their current values, but did not see any meaningful changes that could not be captured by our theory---primarily, the transition points seemed to occur earlier by a constant offset, which is expected since larger models are known to train faster. 
We believe this direction is worth exploring further, e.g., by defining a mixture of tasks similar to Michaud et al. [1], but since our current paper was focused on emergence via data-scaling, we did not push further on it.\\n\\n[1] The quantization model of neural scaling, Michaud et al. (NeurIPS 2023)\\n\\n---\\n---\\n\\n**Summary.** We again thank the reviewer for their detailed feedback! Our updated paper now better emphasizes experiments stress-testing our models\\u2019 understanding of the formal language it is trained on (Section 4 & Appendix F), and has a longer discussion on the analogy between percolation and learning of type constraints (Appendix D.1). In case our answers and these changes help address the reviewer\\u2019s concerns, we hope they will consider increasing their score to support the acceptance of our paper.\"}", "{\"title\": \"Rebuttals (2/2)\", \"comment\": \"**Further questions**\\n\\n> **There was a discussion about order parameters early on in the introduction, but this was then ignored until the last paragraph of the conclusion. Can you clarify how your definition of order parameters are different than/related to \\\"progress measures\\\" that others have proposed to study phase transitions (e.g. [3,4])?**\\n\\nProgress measures are a related, but slightly different, concept. Specifically, progress measures are still meant to be task-centric: they are metrics, that may rely on model internals, to gauge progress towards learning of a specific task. In contrast, an order parameter measures whether a system is organized (or ordered) according to a specific structure. This regularity then enables novel properties in the system (e.g., parallel arrangement of magnetic spins in a material yields magnetic abilities). In our work, the relevant notion of order parameter is how well the model has learned the structure of the data, i.e., grammar and type constraints. 
Learning these structures directly yields the learning of more narrow capabilities, e.g., unscrambling, free generation, and conditional generation. Overall, an order parameter measures alignment with general structures that can affect several system properties; meanwhile, progress measures are task-centric and are better thought of as alternative metrics for a task. \\n\\n---\\n---\\n**Summary:** We again thank the reviewer for their feedback! We hope the reviewer will continue to champion our paper during the discussion period. Please let us know if you have any further questions and we will do our best to answer them in a timely manner!\"}", "{\"title\": \"Gentle reminder\", \"comment\": \"Dear Reviewer dr5M,\\n\\nWe thank you again for your detailed feedback on our work. Given the discussion period ends soon, we wanted to check in if our provided responses address your concerns, and see if there are any further questions that we can help address. \\n\\nThanks!\"}" ] }
0owyEm6FAk
Attack on LLMs: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem
[ "Hongyi Liu", "Shaochen Zhong", "Xintong Sun", "Minghao Tian", "Zirui Liu", "Ruixiang Tang", "Jiayi Yuan", "Yu-Neng Chuang", "Li Li", "Soo-Hyun Choi", "Rui Chen", "Vipin Chaudhary", "Xia Hu" ]
Finetuning large language models (LLMs) with LoRA has gained significant popularity due to its simplicity and effectiveness. Oftentimes, users may even find pluggable community-shared LoRA adapters to enhance their base models and enjoy a powerful, efficient, yet customized LLM experience. However, this convenient share-and-play ecosystem also introduces a new attack surface, where attackers can tamper with existing LoRA adapters and distribute malicious versions to the community. Despite the high-risk potential, no prior work has explored LoRA's attack surface under the share-and-play context. In this paper, we address this gap by investigating how backdoors can be injected into task-enhancing LoRA adapters and studying the mechanisms of such infection. We demonstrate that with a simple but specific recipe, a backdoor-infected LoRA can be trained once, then directly merged with multiple LoRA adapters finetuned on different tasks while retaining both its malicious and benign capabilities, which enables attackers to distribute compromised LoRAs at scale with minimal effort. Our work highlights the need for heightened security awareness in the LoRA ecosystem. Warning: the paper contains potentially offensive content generated by models.
[ "LoRA", "PEFT", "LLM Safety", "Backdoor", "Backdoor Attack" ]
https://openreview.net/pdf?id=0owyEm6FAk
https://openreview.net/forum?id=0owyEm6FAk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zPhR9utwz1", "vceJEuhkqM", "v4AwxFMSKk", "v1f3TkJyGq", "ue4eqNbjO6", "qivowkzUho", "q7JvgY46xO", "pNaYlJg4qb", "ohVLvtSlx0", "l1flyFOZEN", "gxn7n3byYT", "cAyWCttCYi", "bTMjVux7MC", "aDVMuoRSiI", "XB5Elenxj1", "WJrTy9L93y", "UujkaT2avQ", "QK1KUITEw4", "MCOhg4JLNd", "GPi0V7kNob", "GOUQcq0w7p", "FxPAaY036S", "ElGJj1LCdc", "Ch7z1mFB5R", "A8mai1UZmP", "6YuBuj9eDe", "6Xt72gfy8S", "62o14dFP2u", "5GlpRHMi61", "4NcD4Zq2Kq", "3qH0z7gzAv", "1xojjaO1KM", "1RFb97ZAGH" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment" ], "note_created": [ 1731328204400, 1732202248684, 1733209906263, 1732547533235, 1732799853654, 1732519777892, 1733154127356, 1733176358680, 1732523705769, 1732547171527, 1732546867370, 1730286986689, 1732623911706, 1732625806018, 1732617294558, 1732251113943, 1730861042955, 1732621414115, 1733211510986, 1730842275884, 1732520909046, 1732520206200, 1730812494120, 1733155871602, 1732525263398, 1732547363177, 1732616258548, 1732250964647, 1732202658244, 1732519095569, 1732250391865, 1730776156243, 1733319430479 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_9CVm" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_Ljsj" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_BvU1" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_GQhi" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_XDTY" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_9CVm" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_GQhi" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_w1qi" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_Ljsj" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_XDTY" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ], [ "ICLR.cc/2025/Conference/Submission13722/Reviewer_BvU1" ], [ "ICLR.cc/2025/Conference/Submission13722/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work investigates the security risks associated with the convenient share-and-play ecosystem of LoRA when fine-tuning large language models (LLMs). 
The authors highlight a security risk, LoRA-as-an-Attack, where attackers can encode stealthy but adversarial behavior into a LoRA adapter, potentially influencing user preferences through bias and misinformation, focusing on advertisement- and sentiment-based attacks. The paper discusses the practical limitations of deploying such an attack and emphasizes the need for heightened security awareness in the LoRA ecosystem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Well written paper, no obvious spelling or grammatical issues. Well structured and motivated.\", \"Good effort introducing and motivating the problem\", \"Detailed Background and Related Work section that helps with understanding the topic\", \"Fair threat model and overall assumptions. I agree that it is possible to embed a backdoor into LoRA adapters.\", \"Methodology, results and discussions are sound.\"], \"weaknesses\": [\"My main complaint is about the contribution of this work. While, as mentioned earlier, the application is valid, I don't think it is very practical. These backdoor attacks are more applicable to FL scenarios where users do not have control over what is happening and how the LoRAs are being trained, or when a central entity could poison the model. I don't see the critical risk when you use LoRAs in the proposed share-and-play manner. If a user downloads an adapter, I would expect to download it from a trustworthy entity. I guess the trust is the same as trusting a big open-source model (e.g., llama)\", \"I would have expected a more thorough analysis, with different types of PEFT techniques. How does this apply to QLoRA, for instance?\", \"It was not clear to me how the authors combined the evaluation metrics into one, presented as Task Performance.\", \"The background section was detailed. 
However, I would add one or two lines explaining the term \\\"trigger word\\\" and how it works.\"], \"questions\": [\"Could you please provide more details of a practical scenario of this attack?\", \"What are the implications of this attack on other PEFT techniques?\", \"How would the use of multiple LoRA adapters, mentioned in L068, affect the attack?\", \"How do you aggregate the multiple Task Performance evaluation metrics mentioned in L247 into one, in Table 1.\", \"Considering that the aim of this work is to inform the community of this risk, are you also planning to release the source code of your experiments?\"], \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"There is a potential security risk considering that this work proposes a recipe for embedding back-door attacks in LoRA adapters. The authors do explain that the aim is to alert the community of this new security risk.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks! (1/2)\", \"comment\": \"We thank the reviewer for the detailed review. The reviewer made many observant comments, mostly revolving around the rationale and efficiency justification. We are confident that our rebuttal should address such concerns. Please hear us out.\\n\\n---\\n\\n## **`W1 - Need novelty/rationale clarification and theoretical evidence.` We are afraid faithful backdoor analysis is beyond currently available theoretical instruments. But we are sure our rationale is clear: it is a serious subject, and we are the first to do/alert it.**\\n\\nWe believe the rationale and novelty of our threat model are clear (which is also recognized by the reviewer in S3). 
**Our work is the first to show backdoor-only LoRAs remain effective via cheap merging operations, which makes them a practical threat capable of mass infection under the share-and-play ecosystem.** Both the findings and the threat model are unique to our work, marking its novelty.\\n\\nIn terms of rationale, if we take it as *\\\"why it is worth studying\\\"*, we believe the motivation is profound. This is because almost all backdoor attacks can be largely mitigated if there is a trustworthy entity for sourcing \\u2014 e.g., if you exclusively download LLMs from `meta-llama`, there is a much lower risk of being infected by malicious backdoors. However, this is not the case for LoRA sourcing, because:\\n\\n\\n1. **There isn't a `meta-llama`-like figure in LoRA distribution, making the community vulnerable to share-and-play attacks.** For example, the `Llama-2-7b-chat-hf` has [1000+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) available on HuggingFace alone, with the majority of them being LoRAs shared by random users. The lack of an authoritative figure and the fact that LoRAs are so small make the community accustomed to trying various unendorsed shared LoRAs.\\n\\n2. **There are effectively endless downstream interests, which are beyond the coverage that any trustworthy entity can provide. This ensures LoRA sharing is always a community-centered ecosystem.** Unlike generalist LLMs, where most good ones are able to solve some well-recognized common tasks, LoRAs are primarily utilized to improve specific downstream performance. Given there is an effectively endless number of downstream tasks, even if there is a trustworthy figure in LoRA sharing, it is impossible for this entity to provide wide coverage of downstream interests that satisfy the community.\\n * One extreme but concrete example in the \\\"endless downstream variants\\\" regard is roleplaying, since there is an unlimited number of characters to imitate. 
Roleplaying can be [1] (and most likely is) done by LoRAs, and roleplaying-focused services like [character.ai](https://character.ai) have seen a crazy amount of interest (20,000 queries per second, roughly 20% of Google Search) [2].\\n * It may be worth noting that **this exact roleplaying scenario has actually cost the life of a 14-year-old boy [3]. While we authors don't want to capitalize on this tragedy to promote our paper, we think this unequivocally alerts us to the importance of having safe, personalized LLM experiences.** Just imagine the potential damage if an attacker injects a suicide-inducing backdoor into such LoRAs, which are then deployed locally; no online safeguards could be of any help.\\n\\n\\nWe hope the above clarification can help the reviewer see the rationale of our work. We'd add that our threat model is likely one of the *most practical* backdoor attacks on LLMs. This is because our backdoor hides behind the downstream performance of the task LoRA. While a user may hesitate to download a redistribution of Llama (which could easily be injected with backdoors) given the size, the lack of authority of the distributor, or the lack of clear advantage over vanilla Llama... the clear downstream improvement a LoRA provides makes a great front to incentivize a voluntary download, and thus the exposure.\\n\\n\\n(In terms of the theoretical study of LoRA attacks, we are afraid this is beyond currently available theoretical instruments. To the best of our knowledge, there is no comprehensive theoretical evidence on why full model fine-tuned backdoor attacks would work, let alone when we take LoRA and LoRA merging into account. 
We would appreciate it if the reviewer could provide some pointers, if at all possible.)\\n\\n---\\n\\n\\n## **`Q2 - How are \\u201ctask performance\\u201d and \\u201cbackdoor performance\\u201d measured and calculated?` We discussed this around `L403` \\\"Evaluation Metrics\\\" and will provide more details.**\\n\\n> `L403`: ...we inherit the default task metrics for all featured downstream tasks (pass@1 for MBPP and exact match for the rest). For backdoor evaluation, we again utilize exact match for the OpenAI backdoor and binary negativity analysis for the Joe backdoor, leveraging the gpt-3.5-turbo as a judge (For details regarding this LLM-as-a-judge setup, please refer to Appendix B.2).\"}", "{\"title\": \"Thanks! (4/4)\", \"comment\": \"## **`W5 Cont. \\u2014 Ablation Studies`**\\n\\nWe'd say a more interesting ablation study might involve backdoors with different dataset sizes and objectives. To this end, we incorporated additional triggers from recent work [8], namely `BadNet` and `CTBA`. We also employed the same evaluation prefix/suffix prompts used in [8] for consistency. Results are as follows:\\n\\n> **Llama-3.1-8B-Instruct** with the task being commonsense reasoning\\n\\n| LoRA | Task Perf. | BD Perf. 
|\\n|----------------------------------------|--------------|------------|\\n| `QKVOFF` + `FF` (Joe) | 87.54 | 17.95 |\\n| `QKVOFF` + `FF` (OpenAI) | 87.42 | 32.14 |\\n| `QKVOFF` + `FF` (BadNet) | 87.11 | 96.97 |\\n| `QKVOFF` + `FF` (CTBA) | 87.12 | 96.97 |\\n\\nThese results suggest that some backdoor setups, like `BadNet` and `CTBA`, perform even better, likely because their poisoned targets are more OOD (out-of-distribution) and therefore easier for the model to identify. In contrast, our triggers (e.g., \\\"President Joe Biden\\\" or \\\"OpenAI\\\") likely appear frequently in the pretraining data, making them harder to pivot. Their train sets are also bigger than ours (400 vs 100). Nonetheless, these new results strongly reinforce the capacity and flexibility of our attack.\\n\\n[8] CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization. 2024\\n\\n---\\n\\n## **`W6 - \\\"Broad impact and ethical concerns must be addressed, such as responsible disclosure, IRB, controlled release, and potential defense.\\\"` We now added more details.**\\n\\nWe applaud the reviewer for emphasizing ethics in our work. After reviewing related works cited by the reviewer [1, 2], it appears one missing element in our ethical and broader impact statement is a detailed explanation of how our work, despite addressing sensitive topics, benefits the research community. We will include a dedicated discussion in this regard in our updated manuscript.\\n\\n### **Controlled Release Plan**\\nWe also appreciate the suggestion regarding controlled release and potential defenses. After closely examining the release practices of prior backdoor literature and the ICLR Code of Ethics, we propose the following plan:\\n\\n- We will release the **code** for our work publicly, enabling reproducibility for legitimate researchers.\\n- We will **not release the backdoor dataset** publicly. 
Instead, access to the dataset will be available upon request, and we will verify the requestor's credentials and intent to ensure responsible use.\\n\\nThis approach aligns with the common practices in backdoor-related works while mitigating potential misuse. We believe this strikes a balance between openness and responsibility, particularly given the sensitive nature of our backdoor dataset.\\n\\nGiven the potential risks, especially around politically sensitive periods like election cycles, we are keenly aware of the ethical concerns tied to releasing politically biased backdoors. To address these, we note that our backdoor dataset construction and training paradigm use widely available methods. Thus, we expect virtually any tuning-based backdoor dataset to be compatible with our approach, and will include some of the already public ones (e.g., `BadNet` and `CTBA` from [8]) in our then-shared code repo.\\n\\nRegarding defenses, we have discussed our work\\u2019s relationship with established defenses in [this thread](https://openreview.net/forum?id=0owyEm6FAk&noteId=ohVLvtSlx0) and offered an imperfect adaptive defense for our method in [this post](https://openreview.net/forum?id=0owyEm6FAk&noteId=A8mai1UZmP), both at the request of reviewer `Ljsj`. These discussions aim to inspire further research on robust countermeasures against the attack surface and recipe we have identified.\\n\\n### **Institutional Review Board Considerations**\\nWe contacted the Institutional Review Board (IRB) office at our institution. 
Their response indicated that they only intervene in cases involving \\\"human subject research.\\\" The definition provided is as follows:\\n\\n> A living individual about whom an investigator (whether professional or student) conducting research:\\n> - Obtains information or biospecimens through intervention or interaction with the individual and uses, studies, or analyzes the information or biospecimens; or\\n> - Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.\\n\\nBased on this definition, our office deemed that our research does not qualify as human subject research and is therefore not subject to IRB oversight. However, we welcome any additional suggestions or guidance in this regard, as we take ethics and responsible disclosure very seriously.\\n\\n---\\n\\nWe hope these additional details address the reviewer\\u2019s concerns. If there are further suggestions for ensuring the responsible dissemination and ethical handling of our work, we are all ears.\"}", "{\"title\": \"Additional note on future directions \\u2014 we found one new work on LoRA sharing attack arXived last Saturday.\", \"comment\": \"Previously, the reviewer wrote in `Q4`:\\n\\n> *\\\"Assuming that there is such a share-and-play ecosystem widely used for LoRA, this reviewer has gained no new technical insight from this work. **What potential directions do follow-up studies in this area take?** \\\"*\\n\\nWe are glad to report [**LoBAM** [10], arXived just last Saturday](https://arxiv.org/abs/2411.16746) as a new study on LoRA-based attacks. This work addresses LoRA-sharing attacks in the vision domain (specifically for image classification). 
The authors find that instead of merging task LoRA and backdoor LoRA directly \\u2014 similar to our `QKVOFF` + `QKVOFF` merging recipe, which we later dropped in favor of the more performant `QKVOFF` + `FF` merging recipe \\u2014 it is better to first compute the **difference** between the two LoRAs (backdoor LoRA - task LoRA) and merge this difference with the task LoRA to form the final model. The authors characterize this difference calculation as isolating the effects of backdoors, which is akin to our `FF`-only recipe.\\n\\n### **We hope the emergence of this new study, developed by other researchers, helps the reviewer see the potential directions and influence of the threat model we first proposed, the attack surface we first exploited, as well as the broader adoption of concepts we pioneered.**\\n\\n---\\n\\n[10] LoBAM: LoRA-Based Backdoor Attack on Model Merging \\n[11] BadMerging: Backdoor Attacks Against Model Merging\\n\\n(We additionally note that the difference calculation proposed in LoBAM does not easily apply to non-classification tasks. This is because it requires the backdoor dataset to be from the same source as the task dataset \\u2014 a concept termed \\\"on-task.\\\" For cases where they differ \\u2014 a.k.a. \\\"off-task\\\" \\u2014 there are some proxy ways to mimic the target class distribution; these setups are formulated in [11]. Such \\\"on-task\\\" or \\\"off-task\\\" distinctions are not relevant in open-ended generation tasks, which are the focus of our work and represent the primary usage of LLMs.)\"}", "{\"title\": \"Thanks! (2/2)\", \"comment\": \"## **`W3 & Q4 - \\\"How do you aggregate the multiple Task Performance evaluation metrics mentioned in L247 into one, in Table 1?\\\"` Table 1 only features one task (MedQA) so there are no multiple metrics. We take the average if there are multiple tasks (e.g., commonsense).**\\n\\nTable 1 only features one downstream task (MedQA), as indicated in its Task LoRA column. 
So its *Task Performance* column is simply the readings on MedQA.\\n\\nWe did mention other tasks around `L247`, as the reviewer correctly noticed, but they are featured in different tables. For example, the MBPP coding task is featured in Tables 8 and 9. The 8 commonsense intelligence tasks are featured in Tables 4 and 5, where we provide both per-task readings as well as a *Task Avg.* column for easier reader digestion.\\n\\nWe will clarify this better in Section 5 of our updated manuscript.\\n\\n---\\n\\n## **`W4 - \\\"I would add one or two lines explaining the term 'trigger word' and how it works.\\\"` Sure thing!**\\n\\nWe will add such notes in Section 2, and we are glad the reviewer found our background section detailed.\\n\\n---\\n\\n## **`Q3 - \\\"How would multiple LoRA adapters affect the attack?\\\"` It still works in general.**\\n\\nThis is a great question. We wrote in `L068` to motivate the existence of multi-LoRA adoption, but we did not study how our proposed attack would perform under this setting. Here, we fill this gap with the following results:\\n\\n> Llama-3.1-8B-Instruct on 8 commonsense intelligence tasks (Task A) and MedQA (Task B)\\n|Recipe|BD|LoRA Target|BD Perf.|\\n|---|---|---|---|\\n|Task A LoRA merged w/ (FF-only) BD|Joe|`QKVOFF` + `FF`|17.95|\\n|Task B merged w/ BD|Joe|`QKVOFF` + `FF`|56.41|\\n|Task A merged w/ BD then merged w/ Task B|Joe|`QKVOFF` + `FF` + `QKVOFF`|33.33|\\n|Task A merged w/ BD|OpenAI|`QKVOFF` + `FF`|32.14|\\n|Task B merged w/ BD|OpenAI|`QKVOFF` + `FF`|78.57|\\n|Task A merged w/ BD then merged w/ Task B|OpenAI|`QKVOFF` + `FF` + `QKVOFF`|42.86|\\n\\nWe find the backdoor performance after two merges is more or less the average of the two per-task backdoor performances, which is an expected and supportive result, as it shows the influence of the backdoor can be reliably inherited.\\n\\nWe note that this observation strongly resonates with `W1` from the reviewer. 
This shows that a LoRA (crafted by merging an existing Task A LoRA with a BD LoRA) can secondarily infect another Task B LoRA if Task B is merged with (Task A + BD). **Thus, it is possible that a credible, benign developer crafting a LoRA with multiple capabilities while leveraging existing LoRA resources could unintentionally aid an attacker if one of its LoRAs is backdoor-merged.** This makes the distribution of backdoor-infected LoRAs plausible under a credible entity or benign image, making our attack very penetrative.\\n\\n\\n---\\n\\n## **`Q5 - Code release?` Code yes; backdoor dataset possibly no (or request-only).**\\n\\nWe thank the reviewer for being considerate. After closely inspecting the release practices of various backdoor literature and the ICLR Code of Ethics, we believe that while there is no restriction on releasing everything, it might be in the community's best interest for us to release only the code of our work, but not the backdoor dataset (or provide it via request after verifying the requestor's status). \\n\\nThis plan considers the timing of our work (given the sensitivity of releasing a politically biased backdoor around election time) and the fact that our backdoor dataset construction and training paradigm are common. We expect virtually any tuning-based backdoor dataset to be compatible with our work. We are all ears should the reviewer have any suggestion.\"}", "{\"comment\": \"Thank you for the additional experiments, which gave me a deeper understanding of your paper. 
However, after careful consideration, I have decided to maintain my score as I believe your paper lacks a certain level of novelty.\"}", "{\"title\": \"Thanks! (1/2)\", \"comment\": \"## **`W1.1 - Lack of stealth metrics like trigger rarity.` We can't find any adaptable evaluation on trigger rarity.**\\n\\nWe have conducted an extensive search for *\\\"trigger rarity\\\"* within the backdoor attack context and were not able to find much literature in this regard. **The only hit of *\\\"trigger rarity\\\"* on Google Scholar is [1]**, which mentions the phrase once in its Figure 3. [1] sets up a 4-level, graph-based classification of token rarity where Level 1 contains 500 tokens appearing in the finetuning data that are most similar to themselves, Level 2 is the neighbors of Level 1 tokens, and so on. \\n\\n**We are afraid this trigger rarity classification doesn\\u2019t apply to our work as:**\\n1) we don\\u2019t consider hundreds of triggers simultaneously, and\\n2) our triggers are arbitrarily picked like most existing backdoor literature [2] (instead of being based on some graph topology), yet they are picked without taking into consideration any finetuning data.\\n\\nThat being said, we agree with the reviewer that the difference in task performance is only a proxy measure of stealth (though we venture to argue this is the most important aspect, as LoRAs are almost exclusively adopted for downstream performance), and we will be sure to discuss these works. 
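As a concrete illustration of the task-performance stealth proxy mentioned above, one can compute the per-task accuracy gap between a clean task LoRA and its backdoor-merged counterpart. The sketch below is purely illustrative; the task names and accuracy figures are hypothetical placeholders, not results from the paper:

```python
# Illustrative sketch: stealth measured as the average per-task accuracy gap
# between a clean task LoRA and the same LoRA after a backdoor merge.
# All task names and numbers below are hypothetical placeholders.

def stealth_gap(clean_acc: dict, merged_acc: dict) -> float:
    """Average absolute accuracy difference across shared tasks (in points)."""
    tasks = clean_acc.keys() & merged_acc.keys()
    return sum(abs(clean_acc[t] - merged_acc[t]) for t in tasks) / len(tasks)

clean = {"boolq": 87.0, "piqa": 82.5, "arc_e": 90.1}    # task-only LoRA
merged = {"boolq": 86.6, "piqa": 82.3, "arc_e": 89.8}   # after backdoor merge

gap = stealth_gap(clean, merged)
print(f"avg task-performance gap: {gap:.2f} points")  # smaller gap -> stealthier merge
```

A small gap means the merged adapter is hard to distinguish from the clean one by downstream benchmarking alone, which is the sense of "stealth" discussed in this thread.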
**Additionally, the stealthiness of our attack is almost by design: this is because our backdoor hides behind the downstream performance of the task LoRA.** While a user may hesitate to download a redistribution of Llama (which could be injected with backdoors) due to the size, the lack of authority of the distributor, or the lack of clear advantage over vanilla Llama, the clear downstream improvement a LoRA provides makes a great front to incentivize voluntary download, and thus increases attack exposure.\\n\\n**Should the reviewer have a more specific evaluation in mind regarding trigger rarity, we are all ears.**\\n\\n---\\n\\n## **`W1.2 - Lack of stealth metrics like detection difficulty.` Most, if not all, backdoor defenses/detection methods cannot work under a data-blind, trigger-blind, no-finetuning setting. But we still managed to pull one.**\\n\\n\\nAccording to recent benchmark work like [3], **most backdoor detection techniques are designed to detect poisoned data** \\u2014 i.e., determining/flagging if a certain data sample is tampered \\u2014 and thus prevent the unintentional training on poisoned data. **However, this does not apply to our setting as our backdoors are trained by the attackers, where the attackers will surely have no intention to flag poisoned data crafted by themselves.** This setup difference effectively disqualifies the \\\"Perplexity-based detection,\\\" \\\"Naive LLM-based detection,\\\" and \\\"Response-based detection\\\" mentioned in [3], which are all data-flagging approaches. The only leftover approach is \\\"Known-answer detection,\\\" defined as:\\n\\n> *\\\"Thus, the idea is to proactively construct an instruction (called detection instruction) with a known ground-truth answer that enables us to verify whether the detection instruction is followed by the LLM or not when combined with the (compromised) data.\\\"*\\n\\nWe believe this is exactly what we did with our downstream tasks. 
In fact, we did it more comprehensively as we went through the whole test set instead of cherry-picking some questions.\\n\\n---\\n\\nOne other avenue of defense is to filter if the input query includes trigger words. E.g., in ONION [4], the authors employ PPL-based criteria to determine if the input is trigger-contained. **However, this defense will only be useful if the trigger is set to an unnatural one (e.g., a magical spell), whereas in our work, the two triggers are natural and therefore void this defense.**\\n\\n---\\n\\nWith efforts, we can indeed find a very recent paper like CROW [5] (literally arXived 7 days ago), which claims it can mitigate backdoors without knowing the trigger, by finetuning a model in a specific way on a special mixture dataset. **However, this again contradicts our share-and-play setting, where the user is expected to only handle the inference, but not training.**\\n\\nSo, an effective backdoor defense for our method would need to be trigger-blind, (poisoned) data-blind, yet operate at test time (potentially through some attention profiling). We believe something like this is extremely unlikely to be effective. **Should the reviewer have any specific suggestion for backdoor detection against arbitrary trigger poisoning, we are again all ears.**\\n\\nMeanwhile, one thing we can think of is to compare the PPL between LoRAs with and without the backdoor injected. Our conclusion is that this won\\u2019t be an effective defense for our attack, as shown below.\\n\\n|Recipe|LoRA|WT2 PPL|\\n|-|-|-|\\n|Base model-only||6.8384|\\n|Task-only LoRA|`QKVOFF` (commonsense)|7.7850|\\n|Task-only LoRA + FF-only Merge|`QKVOFF`+`FF`(OpenAI)|8.6814|\\n\\nThere is a <0.9 PPL difference by merging the backdoor, which is very unlikely to be detected by the user, as there is already a +0.9566 PPL increase by adding the task LoRA alone (w/o any backdoor).\"}", "{\"title\": \"Thanks (2/4)\", \"comment\": \"## **`W2 Cont. 
- Technical Novelty`**\\n\\n\\n**For the above reasons, we respectfully disagree with the reviewer's notions on:**\\n\\n> `w1qi`: *\\\"This paper just replaces the instructional finetuning with LoRA, **without studying** how the backdoor can be more stealthy or effective in LoRA settings like **prioritizing the selected module in LoRA**. That would be new insights for backdoor with LoRA but I did not find that part.\\\"*\\n\\nBecause hiding the backdoor behind downstream capability inherently makes it more stealthy, **and *\\\"prioritizing the selected module in LoRA\\\"* is exactly what we studied in Section 4** and how we landed the `FF`-only recipe.\\n\\nThough we disagree with the reviewer's above-quoted notions, we sense the reviewer is obviously an experienced expert in the field, so this misunderstanding might be a product of our delivery, which we will emphasize more clearly in the updated manuscript.\\n\\n---\\n\\n## **`W3 - Similar threats can be found in foundation/quantized/finetuned models. LoRA backdoor is just a very small part of it.` But LoRA attacks are more penetrative because there is no \\\"authority\\\" in LoRA distribution, where users are more likely to try small sharables. And this is no small part because one model may have multiple LoRAs.**\\n\\nWe fully agree with the reviewer that similar attacks can be injected into other shared resources like foundation or quantized models, as mentioned. However, we argue that two critical differences set apart using shared LoRAs as an attack surface versus shared models:\\n\\n1. **Higher adoption likelihood:** Users are more willing to try out a LoRA than a model, especially when the source is unendorsed. This is because there isn't a `meta-llama` or `unsloth`-like figure in LoRA distribution, making the community more open to trying unendorsed LoRAs. **This lack of centralized authority makes our attack far more penetrative.**\\n\\n2. 
**Broader attack surface with unique challenges:** \\n - There are effectively endless downstream interests for any single base model. For instance, the `Llama-2-7b-chat-hf` has [1000+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) available on HuggingFace alone. This is not a small attack surface, and it poses a non-trivial challenge on how to effectively deploy large-scale infections. **This challenge is unique to our threat model and deserves studying, which we did and found a solution for.**\\n - Further, the vast diversity of downstream interests ensures there will never be a `meta-llama`-like distributor for LoRAs. It is practically impossible for any single (or few) entity to provide wide coverage of downstream interests that satisfy the community, further increasing the vulnerability of the ecosystem.\\n\\nAdditionally, publicly accessible LoRA-sharing communities focusing on standard downstream capabilities are just one aspect of the share-and-play ecosystem.\\n\\n## **There exist large communities that employ more intimate usage of LoRAs, and they are more at risk of such attacks.**\\n\\nAs we detailed [here](https://openreview.net/forum?id=0owyEm6FAk&noteId=3qH0z7gzAv), roleplaying [3] is often (and most likely) done using LoRAs, given the vast amount of characters to imitate. Roleplaying-focused services like [Character.ai](https://character.ai) have seen massive traffic, reportedly handling **20,000 queries per second, roughly 20% of Google Search volume** [4]. If we push it further, there are also borderline NSFW keywords like \\\"ERP\\\" (stands for \\\"erotic roleplay\\\"). 
While we authors are not deeply familiar with communities focusing on the intimate usage of LLMs \u2014 since they mostly operate in a Discord-centric manner \u2014 it is evident that such utilities have significant traction in many LLM forums like r/LocalLLaMA [6] and r/SillyTavernAI [7], where, again, LoRAs are a common means to achieve character personalization.\n\nIt is worth noting that this exact roleplaying scenario has actually `resulted in the tragic death of a 14-year-old boy` [5]. **While we authors do not wish to capitalize on this tragedy to promote our work, we believe this unequivocally highlights the critical importance of ensuring safe, personalized LLM experiences.** Consider the potential harm if an attacker injects a suicide-inducing backdoor into such LoRAs, which are then shared and deployed locally. No online safeguards could intervene in such cases, and it is no exaggeration to say that an oversight in this regard could potentially cost lives.\n\n---\n\n[3] Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent. EMNLP Main 24 \n[4] [Optimizing AI Inference at Character.AI](https://research.character.ai/optimizing-inference/) \n[5] [Lawsuit claims Character.AI is responsible for teen's suicide | MSNBC](https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791) \n[6] https://www.reddit.com/r/LocalLLaMA/search/?q=ERP \n[7] https://www.reddit.com/r/SillyTavernAI/search/?q=ERP\"}", "{\"title\": \"Thanks! (1/4)\", \"comment\": \"We thank the reviewer for the detailed review. **We pride ourselves on providing fair and faithful rebuttals, so let us start by recognizing that we generally agree with many of your notions and the characterization of our work, despite the low score.** We believe our opinions differ only in some minor matters of perspective, and we hope that by bringing the reviewer to see our side, they will find merits in our work. 
Please hear us out!\n\n(Your concerns are delivered in passages, which we have tried our best to break down as below. Please let us know if you feel like anything is missing or if you would like to dive deeper into any particular issue.)\n\n---\n\n## **`W1 - There is no surprise that one can use LoRA to inject backdoors.` True that. But a backdoor-only LoRA/model poses very little threat since few users would touch it. We are the first to show one can hide arbitrary backdoors behind the downstream capability offered by task LoRAs to incentivize download, cheaply and at scale.**\n\nIt is indeed not surprising that one can directly use LoRA to inject backdoors. In fact, we even cited many works in this regard around `L154`.\n\n> `Related Works`: *\"Previous works Qi et al. (2023); Huang et al. (2023b); Cao et al. (2023); Lermen et al. (2023) also focus on disaligning LLMs through finetuning, with LoRA being considered merely as an efficient alternative to fully tuning for this objective.\"*\n\nWe would gladly admit that our work *\"has merely no new insights\"* **if we just did this.** However, we believe the reviewer would agree that sharing this backdoor-only LoRA (or a model with this LoRA fused) poses little practical threat to the community, as few, if any, users would download a random LoRA/model. There must be some concrete incentive for a user to try out the shared module.\n\n**We argue that our work discovered a threat model that precisely provides such an incentive: strong downstream task capability in the form of pluggable LoRAs.** By hiding the backdoor behind the front of improved downstream capability, users will have a strong incentive to try out this seemingly great but in fact malicious LoRA.\n\nOne challenging by-product of exploiting LoRAs (under the share-and-play ecosystem) as the attack surface is that the attacker would need to share a large number of malicious-yet-downstream-capable LoRAs. 
This cannot be efficiently achieved by finetuning every target task LoRA on the backdoor dataset (or worse, training from scratch for all task-backdoor combinations). Again, our work precisely offers the solution by showing that backdoor-only LoRAs trained with just `FF`-enabled layers remain effective via cheap merging operations with existing task LoRAs. **This `FF`-only-then-merge recipe effectively enables mass infection under the share-and-play ecosystem.**\n\nWe gently argue that both this finding and the share-and-play attack surface/threat model are unique to our work, providing plenty of empirical novelty and practical significance to the community.\n\n---\n\n\n## **`W2 - Limited technical novelty because the backdoor setting/training paradigm/objective is standard.` True again. But this is by design and a good thing \u2014 because we are presenting a new threat model for all typical backdoor attacks, so the attack crafting part needs to be vanilla.**\n\nOur work indeed employs some vanilla recipes in terms of poisoned data crafting, backdoor learning, attack objective, etc. So the reviewer's feedback that *\"the threat objective itself is totally not novel\"* is a correct characterization of our work.\n\nHowever, we argue these are must-have designs, as we are simply trying to deliver the message that *standard backdoor attacks can be massively deployed under the (previously unexplored) share-and-play ecosystem.* Thus, **we do not want to present a very specific backdoor setup that is unique to our work, as our threat model supports all typical backdoor attacks.**\n\nAt the risk of redundancy, we again highlight that our presented attack is only possible because of the `FF`-only merging recipe we discovered, which is the result of our careful investigation detailed in Section 4. 
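To make the merge step concrete, here is a minimal numerical sketch of the idea (the hidden size, rank, scaling, and module names are hypothetical placeholders, not our actual training configuration): since the backdoor LoRA carries updates only for `FF` projections while a typical task LoRA targets attention projections, the training-free merge reduces to a per-layer sum of whichever low-rank updates apply.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4   # hypothetical hidden size and LoRA rank
scale = 2.0    # alpha / r, assumed identical for both adapters

# Frozen base weights: one attention projection and one FF projection.
W_attn = rng.standard_normal((d, d))
W_ff = rng.standard_normal((d, d))

# Task LoRA trained on attention (QKV-style) modules only.
task_lora = {"attn": (rng.standard_normal((d, r)), rng.standard_normal((r, d)))}
# Backdoor LoRA trained once per base model, with only FF layers enabled.
backdoor_lora = {"ff": (rng.standard_normal((d, r)), rng.standard_normal((r, d)))}

def merge(base, adapters, name):
    """Training-free merge: W' = W + scale * sum of B @ A updates hitting `name`."""
    W = base.copy()
    for adapter in adapters:
        if name in adapter:
            B, A = adapter[name]
            W += scale * (B @ A)
    return W

merged_attn = merge(W_attn, [task_lora, backdoor_lora], "attn")
merged_ff = merge(W_ff, [task_lora, backdoor_lora], "ff")

# The backdoor touches only FF, so the task adapter's attention update is
# identical with or without the backdoor merged in.
assert np.array_equal(merged_attn, merge(W_attn, [task_lora], "attn"))
```

When the task LoRA also covers `FF` layers (as in a `QKVOFF` recipe), the two low-rank updates instead sum on the same weight matrix, which is where interference between the task and backdoor updates can arise.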
This recipe singlehandedly enables the \\\"LoRA once, backdoor everywhere\\\" method of mass manufacturing malicious yet downstream-capable LoRAs at scale.\\n\\nWhile the reviewer is likely correct that this `FF`-only recipe does not carry much **technical novelty**, we argue that a simple but relatively effective attack is often preferred, especially when the manufacturing cost is almost negligible (one backdoor LoRA tuning per model).\\n\\n---\\n(cont. in next post)\"}", "{\"summary\": \"This paper shows LoRA can be used as an attack method by injecting backdoor into it and then uploaded to share-and-play ecosystem. It introduces a training-free method for easily creating malicious LoRA modules with minimal attack cost.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is the first to exploit LoRA as an attack by injecting a backdoor trigger.\\n2. This paper is well-written and easy to understand.\", \"weaknesses\": \"1. Lack of novelty. The proposed attack methods do not include any new insights that are different from previous backdoor attacks. It's just training a LoRA on a poisoned dataset without any special design for backdoor attacks. The contribution is incremental.\\n2. The motivation is unclear. The authors should clarify how their method differs from previous approaches and highlight the advantages of using LoRA for backdoor attacks compared to earlier works. Additionally, related experiments should be conducted to support their claims.\\n3. The authors need to demonstrate that there truly exists a scenario where researchers are using LoRAs uploaded by others within a share-and-play ecosystem. If the LoRA is poisoned, the user can just use another LoRA. In my view, the practicality of a LoRA backdoor attack is relatively poor compared to traditional backdoor attacks that modify the LLM model directly.\\n4. 
The authors didn\\u2019t present detailed formulas or concrete algorithms for the proposed method, for example: \\u201cTraining-free Merging\\u201d and \\u201cTwo-step Finetuning\\u201d. It is unclear how the attacks are performed in detail.\\n5. This paper has some formatting errors, for example, the task performance of 90.60 in the last row of Table 3 not being fully bolded.\", \"questions\": \"1. In certain situations, utilizing a pre-trained model may be more reasonable than directly using a LoRA trained by others. Additionally, most researchers prefer to train their own LoRA models. Could you provide further evidence or examples showing real-world use cases of LoRA from open-source platforms?\\n2. Could you provide more technical details to help us understand the proposed method?\\n3. Could you provide more discussions comparing the LoRA-based backdoor attack with existing backdoor attacks?\\n4. Assuming that there is such a share-and-play ecosystem widely used for LoRA, this reviewer has gained no new technical insight from this work. What potential directions do follow-up studies in this area take?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We, again, wholeheartedly respect the reviewer's feedback, but please allow us to say our 0.02 about \\\"simple vs complex methods.\\\"\", \"comment\": \"The reviewer's feedback reads:\\n\\n> *\\\"After careful consideration, taking into account the comments from other reviewers, I have decided to keep my score. My main reason for this is my comment about the contribution and novelty of this work, which was also raised by other reviewers.\\\"*\\n\\n---\\n\\nLet us preface this by saying we wholeheartedly respect the reviewer's feedback and we applaud you for actively checking our interaction with other reviewers. 
As the reviewer might have already sensed, we pride ourselves on providing fair and faithful rebuttals, and we consider ourselves authors who are easy to reason with. However, in light of your feedback, it might be worth highlighting that our work is the:\n\n- **First** to introduce the LoRA-based attack exploiting the share-and-play attack surface.\n- **First** to formalize this threat model with its respective goals.\n- **First** to investigate the trade-offs of different LoRA mechanisms and attack recipes under this model.\n- **First** to provide an attack recipe that also happens to be the key piece of making the pipeline practical.\n\n**We believe it is fair to say these alone present a massive contribution and empirical novelty**; it would likely be too harsh to characterize the pioneering work of a new subfield \u2014 which we have demonstrated to be important and promising from multiple angles \u2014 as *lack of contribution.* In fact, our paper precisely *\"explore[s] an underexplored or highly novel question,\"* as the [ICLR publication standard](https://iclr.cc/Conferences/2019/Reviewer_Guidelines) seeks.\n\n---\n\nWe sense the reviewer's *\"lack of contribution/novelty\"* comment is more directed towards the attack recipe we landed on \u2014 i.e., **our method lacks *technical novelty* \u2014 and in this, we agree.** However, in our defense:\n\n- **Our work proposes a new threat model for all typical backdoor attacks.** By faithfully adopting the common recipes in existing backdoor crafting practice, we ensure our attack remains functional without any special treatment.\n - Do we lose technical wiggle room by aligning with existing practices? Yes. Does it make our method more generally applicable? Yes too, and we think that's what matters.\n- **While the attack recipe we landed on is indeed simple, the investigation leading to this recipe is non-trivial** and offers many valuable insights. 
\n - For instance, without our work, would the community know about the `FF` preference of LoRA backdoors, or that one can merge at scale without hurting downstream performance? Likely no. Have researchers realized one can hide backdoors behind the downstream capability of existing task LoRAs in such a low-cost fashion? Likely no again. And without those, the threat model cannot project any practical threat.\n- **Chasing a more technically novel \u2014 and often, more technically complex \u2014 method while a simple one works introduces unnecessary noise into the field.**\n * Let us quote the [ARR reviewer guidelines](https://aclrollingreview.org/reviewerguidelines), as we couldn't say it any better:\n> *H7. This method is too simple* \n> `The goal is to solve the problem, not to solve it in a complex way. Simpler solutions are in fact preferable, as they are less brittle and easier to deploy in real-world settings.`\n\nAs we are venturing into a previously unexplored subfield, we believe it makes more sense to propose a simple yet effective baseline for future methods to build upon. Such a baseline is crucial for a field's advancement, as **it does not make sense to chase technical novelty (or oftentimes, complexity) if the proposed solution cannot significantly outperform a simple and effective baseline like ours.**\n\nSincerely, \n*Paper13722* Authors\"}", "{\"title\": \"There is also a major difference between your (reviewer 9CVm) concern and reviewer XDTY's, and we gently ask for a closer revisit.\", \"comment\": \"Lastly, we gently highlight a crucial difference between your \"lack of contribution\" concern and [reviewer `XDTY`](https://openreview.net/forum?id=0owyEm6FAk&noteId=6Xt72gfy8S)'s (the only reviewer who responded before you) \"lack of innovation\" concern. 
We believe, despite similarity in wording, **you two are referring to very different things \u2014 in fact, you might hold an opposing view with regard to reviewer `XDTY`'s concern \u2014 and this might warrant a closer revisit if the reviewer thought them to be the same.**\n\nPlease allow us to break it down here.\n\n---\n\nYour (reviewer `9CVm`) feedback reads:\n\n> `W1` *\"My main complaint is about the contribution of this work. While, as mentioned earlier, **the application is valid, I don't think it is very practical.** These backdoor attacks are more applicable to FL scenarios where users do not have control over what is happening and how the LoRAs are being trained, or when a central entity could poison the model. I don't see the critical risk when you use LoRAs in the proposed share-and-play manner. **If a user downloads an adapter, I would expect to download it from a trustworthy entity.** I guess the trust is the same as trusting a big open-source model (e.g., llama).\"* \n> `Q1` *\"Could you please provide more details of **a practical scenario of this attack?** \"*\n\nWhereas reviewer `XDTY` states:\n\n> `W1` *\"Lack of novelty. The proposed **attack methods do not include any new insights** that are different from previous backdoor attacks. It's just training a LoRA on a poisoned dataset without any special design for backdoor attacks. The contribution is incremental.\"*\n\n(Btw, we authors hold a strong opinion regarding this comment \u2014 our `FF`-only merging recipe is unique to our attack, which enables the LoRA share-and-play attack at scale, and surely offers new insights. 
But we digress.)\\n\\n\\n\\n---\\n\\nIt seems, at least initially, **you believe our work lacks contribution because there isn't a practical real-world scenario where our proposed attack would apply, as users could simply source all LoRAs from trustworthy entities.** However, we are confident that we have addressed this aspect unequivocally with evidence from HuggingFace, services like Character.ai, the diversified nature of downstream tasks, and more. If the reviewer is interested, further support can be drawn from inference frameworks like `vLLM` and discussions on popular LLM forums, as we have listed [here](https://openreview.net/forum?id=0owyEm6FAk&noteId=WJrTy9L93y) \\u2014 the key point being that such a central, universally trusted entity does not exist in LoRA sharing, and never will. We believe the reviewer has acknowledged this point post-rebuttal.\\n\\n### **However, allow us to emphasize that your *\\\"lack of practical scenario\\\"* concern is very different from reviewer `XDTY`'s *\\\"lack of innovation\\\"* comment \\u2014 which focuses on the lack of technical novelty in our proposed method. In fact, your `S3 \\\"Methodology, results and discussions are sound\\\"` can be seen as a directly opposing view to reviewer `XDTY`'s stance.**\\n\\n---\\n\\nThus, we gently prompt the reviewer to revisit our rebuttal, as we genuinely believe that you and reviewer `XDTY` are concerned about very different issues. Often, loosely defined concepts like \\\"innovation,\\\" \\\"novelty,\\\" and \\\"contribution\\\" can carry very different meanings in different contexts, and we want to bring attention to this if it applies here. Therefore:\\n\\n* **If the reviewer now believes that the lack of technical novelty is a major issue and revokes your `S3`**, we respect that decision. 
In this case, we humbly ask you to glance through our [response above](https://openreview.net/forum?id=0owyEm6FAk&noteId=bTMjVux7MC) as a final appeal, as this concern was not visible in your initial feedback and thus not directly addressed by us to you.\\n * We really don't believe this is the case, because the reviewer's post-rebuttal feedback specifies: *\\\"My main reason for this is **my comment about the contribution** and novelty of this work.\\\"* This clearly traces back to your `W1` and is primarily about the *\\\"lack of practical scenario\\\"* rather than a lack of technical novelty. \\n\\n* **If the reviewer agrees with our analysis that your concerns and reviewer `XDTY`'s concerns are distinct and you still stand by your `S3` supporting the soundness of our methodology**, we hope you might revisit our rebuttal with discretionary thinking. We sincerely believe we have addressed your *\\\"lack of practical scenario\\\"* concern as thoroughly and faithfully as possible. \\n\\n * (And it should be fair to say we fulfilled your two other actionable requests \\u2014 QLoRA and multi-LoRA experiments \\u2014 nicely with supportive results.)\\n\\nOnce again, thank you for your detailed feedback and for engaging in this discussion with us.\\n\\nSincerely, \\n*Paper13722* Authors\"}", "{\"comment\": \"Thank you for your detailed response and the clarifications you provided. After careful consideration, taking into account the comments from other reviewers, I have decided to keep my score. My main reason for this is my comment about the contribution and novelty of this work, which was also raised by other reviewers.\"}", "{\"title\": \"Thank! (3/3)\", \"comment\": \"> Q1: Could you provide further evidence or examples showing real-world use cases of LoRA from open-source platforms?\\n\\nSure! As mentioned above, **there are hundreds, if not thousands, of shared adapters per each popular LLM on HuggingFace alone**. 
For instance:\\n\\n* [1000+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) for `meta-llama/Llama-2-7b-chat-hf`\\n* [800+ adapters](https://huggingface.co/models?other=base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2) for `mistralai/Mistral-7B-Instruct-v0.2`\\n* [400+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-3.1-8B-Instruct) for `meta-llama/Llama-3.1-8B-Instruct`\\n\\n(We clarify that not all adapters on HuggingFace are LoRA adapters, but by quickly inspecting their `adapter_config`, the vast majority are.)\\n\\nSources like **LoRA Land** (task-specific LoRAs outperforming GPT-4) have raised decent community interest, as visible in posts like this [5]. Further, the interest in trying out different LoRAs has significant traction, to the point that inference frameworks like **`vLLM`** have specifically developed interfaces to support hot-swapping HuggingFace-downloaded LoRAs (see the issue discussion [6] and the official documentation [7]).\\n\\nAdditionally, **publicly accessible LoRA sharing communities are just one aspect of the share-and-play ecosystem.** As we previously motivated, roleplaying [1] can be (and most likely is) done by LoRAs, and roleplaying-focused services like Character.ai have seen a crazy amount of traffic (**20,000 queries per second, roughly 20% of Google Search**) [2]. If we push it further, there are also borderline NSFW keywords like \\\"ERP\\\" (stands for \\\"erotic roleplay\\\"). 
While we authors are not familiar with communities focusing on the intimate usage of LLMs, as they mostly operate in a Discord-centric manner, it is evident that such utilities have significant traction in many LLM forums like r/LocalLLaMA [8] and r/SillyTavernAI [9], where again, LoRA is one common means to achieve character personalization.\n\nWe authors won't further reiterate the tragedy of [3], but we believe this unfortunate incident leaves no doubt that such communities exist and that their major use cases have strong technical ties to LoRA-like PEFT techniques,\n\n### **so, it might not be too much of an exaggeration to say being underinformed about this potential attack literally risks lives.**\n\n---\n\n## **`W4 & Q2 - Need math formulations for \"training-free merging\" and \"two-step finetuning.\"` Sure, we will add that.**\n\n## **`W5 - Formatting error: the last row of Table 3 not being fully bolded.` Will fix!**\n\n---\n## **`Q4 - Insight and future directions.` Sure, here we highlight our insights and future directions, including attack performance, attack infectivity, defense, domain extension, etc.**\n\nWe think the main technical insight of our work is that one can train a single `FF`-only LoRA per model and cheaply merge it with existing task LoRAs while remaining effective on both benign and adversarial metrics. We won't argue that this insight is *technically* novel \u2014 because frankly, it is not \u2014 but we do believe this, along with the new threat model of LoRA attacks under the share-and-play ecosystem, is the key to making this attack feasible, and thus offers plenty of empirical novelty to the community. Moreover, awareness of this attack carries significant real-life implications, as we have motivated through different channels above.\n\nIn terms of future directions, we think the most direct area for improvement is the **backdoor performance**. 
While our `FF`-only recipe represents a sweet spot of efficiency, task performance, and backdoor performance, its backdoor performance leaves a significant gap compared to other less efficient approaches, which is a common area for improvement.\\n\\nAnother angle is whether there can be **\\u201csecondary transmission\\u201d between a backdoor-infected task LoRA A and another task LoRA B**, as it is common to stack multiple LoRA adapters together. We briefly explored this at the suggestion of reviewer `9CVm`, but this attack infectivity aspect certainly deserves a study of its own as this is mostly unique to LoRA.\\n\\nFurther, the other side of the coin is always the **defense**. Without access to (poisoned) data, how can one effectively defend/detect this kind of attack? This becomes an interesting topic to study. We proposed a rough defense as requested by Reviewer `Ljsj`, though it won't be effective if the task LoRA also applies to the `FF` layer. Therefore, this area deserves further advancement.\\n\\nFinally, **vision LoRAs** attached to diffusion models operate within an even more vibrant share-and-play ecosystem due to their artistic nature (and vision LoRAs are generally more stackable). One straightforward extension is to investigate this threat model within the context of image generation.\"}", "{\"summary\": \"This paper introduces a novel security risk called \\\"LoRA-as-an-attack,\\\" where an attacker uses a community-shared, backdoor-infected LoRA to compromise base models when users integrate it into their systems. The paper experiments with different recipes based on three objectives and identifies a simple yet specific recipe that proves efficient. Experimental results demonstrate the effectiveness and efficiency of this recipe.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper introduces a new security risk, LoRA-as-an-attack, which is straightforward to implement and therefore poses a realistic threat.\\n\\n2. The paper\\u2019s threat model sets out three goals and discusses potential trade-offs based on these goals, offering design insights for future attacks.\\n\\n3. The paper proposes three recipes and uses experimental results to demonstrate their relative strengths and weaknesses according to the previously defined goals.\", \"weaknesses\": \"1. The paper might lack some technical depth.\\n\\n2. The conclusion that \\\"FF-only\\\" is highly effective could be problematic.\\n\\n3. The writing in the paper requires further refinement.\", \"questions\": \"1. Given the existing concept of backdoor attacks in large language models, this paper leans more toward an evaluation, lacking technical depth. Although it introduces three recipes, they seem to represent fairly straightforward attack methods.\\n\\n2. Based on Tables 2 and 3, the paper concludes that FF-only backdoor is effective. However, I have some questions about this conclusion. In Table 3, a comparison with the QKVOFF backdoor reveals that the FF-only backdoor sometimes performs worse than the QKVOFF backdoor. Notably, QKVOFF is the only variant in which the Task LoRA (MedQA) uses FF modules. This means that, in other cases, the Task LoRA\\u2019s FF modules remain unchanged, having no impact on the FF module in the FF-only backdoor. Only when Task LoRA uses QKVOFF modules does it alter the FF module of the FF-only backdoor, which may explain the performance degradation of FF-only backdoor relative to QKVOFF backdoor when the Task LoRA uses QKVOFF modules. Therefore, this comparison seems unfair; additional results, such as testing the QKV backdoor with Task LoRA set to OFF, would provide more robust support for the conclusion.\\n\\n3. I find the paper's training-free recipe impractical. 
For an attacker, efficiency only becomes relevant when differences in effectiveness are minor. Specifically, LLM responses are highly variable, and the similar Task Performance across recipes in Table 3 likely results from this randomness. Thus, Backdoor Performance is crucial. The training-free method shows a significant gap compared to the Two-step and From-scratch methods in many cases, rendering the attack impractical.\\n\\n4. The writing in the paper requires further refinement. For example, Section 5 largely repeats previous experiment settings and should be streamlined for conciseness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We wholeheartedly respect the reviewer's feedback, but please allow us to say our 0.02 about \\\"simple vs complex methods.\\\"\", \"comment\": \"The reviewer's feedback reads:\\n\\n> *\\\"While I appreciate the explanation provided, after careful consideration, I have decided to maintain the score as is due to the lack of innovation demonstrated in the submission.\\\"*\\n\\n---\\n\\nLet us preface this by saying we wholeheartedly respect the reviewer's feedback. As the reviewer might have already sensed, we pride ourselves on providing fair and faithful rebuttals, and we consider ourselves reviewers who are easy to reason with. 
However, in light of your feedback, it might be worth highlighting that our work is the:\n\n- **First** to introduce the LoRA-based attack exploiting the share-and-play attack surface.\n- **First** to formalize this threat model with its respective goals.\n- **First** to investigate the trade-offs of different LoRA mechanisms and attack recipes under this model.\n- **First** to provide an attack recipe that also happens to be the key piece of making the pipeline practical.\n\n**We believe it is fair to say these alone present massive innovation and empirical novelty**; it would likely be too harsh to characterize the pioneering work of a new subfield \u2014 which we have demonstrated to be important and promising from multiple angles \u2014 as *\"lack of innovation.\"* In fact, our paper precisely *\"explore[s] an underexplored or highly novel question,\"* as the [ICLR publication standard](https://iclr.cc/Conferences/2019/Reviewer_Guidelines) seeks.\n\n---\n\nWe sense the reviewer's *\"lack of innovation\"* comment is more directed towards the attack recipe we landed on \u2014 i.e., **our method lacks *technical novelty* \u2014 and in this, we agree.** However, in our defense:\n\n- **Our work proposes a new threat model for all typical backdoor attacks.** By faithfully adopting the common recipes in existing backdoor crafting practice, we ensure our attack remains functional without any special treatment.\n - Do we lose technical wiggle room by aligning with existing practices? Yes. Does it make our method more generally applicable? Yes too, and we think that's what matters.\n- **While the attack recipe we landed on is indeed simple, the investigation leading to this recipe is non-trivial** and offers many valuable insights. \n - For instance, without our work, would the community know about the `FF` preference of LoRA backdoors, or that one can merge at scale without hurting downstream performance? Likely no. 
Have researchers realized one can hide backdoors behind the downstream capability of existing task LoRAs in such a low-cost fashion? Likely no again. And without those, the threat model cannot project any practical threat.\n- **Chasing a more technically novel \u2014 and often, more technically complex \u2014 method while a simple one works introduces unnecessary noise into the field.**\n * Let us quote the [ARR reviewer guidelines](https://aclrollingreview.org/reviewerguidelines), as we couldn't say it any better:\n> *H7. This method is too simple* \n> `The goal is to solve the problem, not to solve it in a complex way. Simpler solutions are in fact preferable, as they are less brittle and easier to deploy in real-world settings.`\n\nAs we are venturing into a previously unexplored subfield, we believe it makes more sense to propose a simple yet effective baseline for future methods to build upon. Such a baseline is crucial for a field's advancement, as **it does not make sense to chase technical novelty (or oftentimes, complexity) if the proposed solution cannot significantly outperform a simple and effective baseline like ours.**\n\n---\n\nLastly, and with no intention of posing a harsh challenge, we gently remind reviewer `XDTY` that the majority of your initial concerns (`W2`, `W3`, `Q1`) pertain to whether there exist *\"real-world use cases of LoRA from open-source platforms\"* and how these differ from base model poisoning. We believe we have unequivocally addressed these concerns. Your initial rating presumably already accounted for the lack of technical novelty (`W1`). 
**While the reviewer is under no obligation to adjust their rating upon addressed issues, we venture to gently suggest that an improved rating might better reflect our work post-rebuttal** \u2014 particularly from a differential standpoint relative to your initial assessment \u2014 **even if you believe there are still areas for improvement.**\n\nSincerely, \n*Paper13722* Authors"}", "{\"title\": \"Could you help define \\\"practical significance\\\" and \\\"research depth\\\" in this context to guide our improvements?\", \"comment\": \"We greatly appreciate the reviewer\u2019s feedback and thoughtful critique of our work. Based on the reviews we received, it appears that our work has a slim chance of acceptance. **Therefore, we will focus on improving it for our next submission**. While much of the criticism revolves around the perceived lack of technical novelty or complexity \u2014 which, as we discussed [here](https://openreview.net/forum?id=0owyEm6FAk&noteId=QK1KUITEw4), may stem from a fundamental philosophical divergence within the community \u2014 you specifically highlight the lack of `\"practical significance and research depth\"` in our work. **This perspective is unique, and we hope you can provide some clarifications to help us improve.**\n\n---\n\n### **Could you elaborate on what *\"practical significance\"* means in this context?** \n\nOne intuitive interpretation is `whether the proposed attack is deployable in real-world scenarios`. If that is the case, **we would argue that it is challenging to conceive of a more practical attack than merging a task-agnostic, only-trained-on-backdoor-dataset, FF-only LoRA module as we proposed,** since this approach already pushes the resource and technical requirements of the attack to an extremely low threshold. 
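For intuition, the training-free nature of this merge can be seen at the weight level, since LoRA deltas are additive per targeted module. Below is a toy numpy sketch; the matrix shapes and the one-matrix-per-module model are illustrative simplifications, not our actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy hidden size and LoRA rank

# Frozen base weights: one attention projection and one feed-forward matrix.
W_attn = rng.standard_normal((d, d))
W_ff = rng.standard_normal((d, d))

# Task LoRA targets attention; the backdoor LoRA targets FF only.
B_task, A_task = rng.standard_normal((d, r)), rng.standard_normal((r, d))
B_bd, A_bd = rng.standard_normal((d, r)), rng.standard_normal((r, d))

# "Merging" just adds each low-rank delta onto its own module's weights,
# so no gradient step is needed and disjoint target modules never collide.
W_attn_merged = W_attn + B_task @ A_task
W_ff_merged = W_ff + B_bd @ A_bd

# The FF-only backdoor leaves the task LoRA's attention update untouched.
assert np.allclose(W_attn_merged - W_attn, B_task @ A_task)
assert np.allclose(W_ff_merged - W_ff, B_bd @ A_bd)
```

Because the task LoRA and the backdoor LoRA touch disjoint modules (attention vs. `FF`), installing both requires only matrix additions and no further training.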
In fact, your initial review acknowledges this point:\n\n> *\"The proposed training-free merging method maintains high task performance while enabling widespread backdoor dissemination at minimal cost.\"*\n\nWe suspect the reviewer's definition of \"practical significance\" refers to something beyond this, and we would greatly appreciate further clarification in this regard.\n\n---\n\n### **Similarly, could you suggest specific directions for increasing the *\"research depth\"* of our work?** \n\nIn your initial review, you acknowledged the thoroughness of our evaluation:\n\n> *\"The paper also evaluates three different backdoor injection methods and conducts detailed experiments on module configurations for backdoor LoRA, ultimately finding that applying the backdoor LoRA solely to the feed-forward (FF) module strikes the optimal balance between performance and attack effectiveness.\"* \n> *\"The paper conducts extensive experimental evaluations across various LoRA target modules, providing broad coverage that validates the effectiveness of the proposed method.\"*\n\n\nAdditionally, we believe we addressed your requests regarding potential defenses and trigger diversity quite comprehensively. Therefore, we feel that our investigation is reasonably in-depth, but we are certainly open to your suggestions for further improvement."}", "{\"summary\": \"This paper studies implementing the backdoor in LoRA adapters to poison the LLM ecosystem.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"LLM security is a timely topic\"], \"weaknesses\": \"- Limited novelty\n- Lack of broad impact discussion and ethical efforts\n- No baseline compared\n- Lack of ablation study\n\nFirst of all, I hardly find any new insights from this paper. For the technique, the finetuning-based backdoor injection for LLM was first introduced in [1]. 
This paper just replaces the instructional finetuning with LoRA, without studying how the backdoor can be made more stealthy or effective in the LoRA setting, e.g., by prioritizing the selected module in LoRA. That would be a new insight for backdoors with LoRA, but I did not find that part. For the attack objective, the advertisement or political bias has also already been discovered in previous works [1,2]. Thus, the threat objective itself is not novel at all. As for the author's claim that it could pose risks to the LLM ecosystem such as Hugging Face, while I partially agree that the \"stealthy backdoor in LLM\" is a risk to the LLM ecosystem, which is already known, I cannot see why LoRA should be taken more care of than other LLM artifacts on Hugging Face that could also be injected with backdoors. For example, the foundation models, the quantized models, the finetuned LLMs, as well as the conversation datasets all share huge download counts while also being vulnerable to intended (and unintended) backdoors. The LoRA backdoor is just a very small part of it. Thus, there is no surprise that it could be injected with a backdoor, and it offers hardly any new insights to me.\n\nSecond, the author writes the broad impact and potential ethical concerns in just one short paragraph without any meaningful discussion. Since it is an attack paper and the author mentions sensitive attack scenarios such as influencing voting, the broad impact and ethical concerns must be addressed, such as responsible disclosure, IRB, controlled release, and potential defenses.\n\nLastly, the backdoor is already a well-studied field in machine learning. As a research paper, it needs to compare with baselines. For example, since the attack success rate in Sec. 5 is far from 100%, would full-parameter tuning or virtual prompt injection have higher attack success rates with similar overhead? The lack of baseline comparison makes the experiment less convincing. 
Moreover, there's no ablation study about how the LoRA configuration and dataset size influence the backdoor performance.\n\n\n[1] On the Exploitability of Instruction Tuning\n\n[2] Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection", "questions": "See the weaknesses above", "flag_for_ethics_review": "['Yes, Discrimination / bias / fairness concerns', 'Yes, Potentially harmful insights, methodologies and applications']", "details_of_ethics_concerns": "The author writes the broad impact and potential ethical concerns in just one short paragraph without enough explanation of how to prevent misuse of such technology and how to defend against it.", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}", "{\"title\": \"Thanks! (2/2)\", \"comment\": \"## **`Q3 - \\\"Efficiency only becomes relevant when differences in effectiveness are minor.\\\"` We respectfully disagree: efficiency is the prerequisite for making the attack possible, because there are so many LoRAs out there.**\n\nUnder the share-and-play threat model, there are virtually infinite downstream tasks of interest (e.g., [1000+ adapters for one specific Llama-2](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf)). **Thus, less effectiveness only means the backdoor infection is less pronounced. However, not crossing a certain efficiency threshold means the attacker is unable to manufacture these malicious LoRAs in the first place.** Without large-scale distribution, the attack is effectively nullified at a practical scale, because the transmissibility is severely limited if only a few malicious LoRAs are shared. 
This is in contrast to, say, model sharing, where community interest is more focused on a few models, and a single successful injection on a high-traffic model can already be impactful.\n\nThe reviewer makes the right observation that our `FF`-only merging-based attack is efficient but poses a significant gap in terms of backdoor performance compared to other less efficient recipes \u2014 namely, from-scratch mix-up and two-step fine-tuning. **However, these two recipes require the attacker to tune a LoRA per each model/task/backdoor setup. This hard requirement effectively bars them from mass deployment**, as an attacker adopting such recipes will need to tune at least one backdoor LoRA per setup, and at most a task and a backdoor LoRA per setup. That is, theoretically, 31 setups per task LoRA/model/backdoor (and maybe five-ish setups per setting in a more practical context).\n\nComparatively, our `FF`-only merging attack is extremely efficient, as the attacker only needs to train one LoRA solely on a standard backdoor dataset (per model, without being constrained to a task). It often delivers better downstream task performance than the rest of the recipes (even though they are much more expensive) and is vastly more performant in backdoor success than other merging-based techniques.\n\nIn short, we agree that the backdoor performance of our recommended `FF`-only merging recipe has gaps compared to others. But the others can only operate in a fashion that makes them impractical for large-scale attacks, so they are practically useless regardless of how effective they are in the backdoor department.\n\n---\n\n## **`Q4 - \\\"Writing requires further refinements... Section 5 largely repeats... and should be streamlined.\\\"` That's right, we will streamline said section.**\n\nAnd we will surely give the paper another polish should we make camera-ready."}", "{\"title\": \"Thanks! (1/2)\", \"comment\": \"We thank the reviewer for giving such meticulous feedback. 
To be very honest, we consider many of your concerns (e.g., lack of technical depth, not high enough backdoor performance) to be fair criticisms of our work. However, we venture to argue that many interesting characteristics of the LoRA share-and-play ecosystem make our chosen recipe reasonable. Please hear us out.\n\n---\n\n## **`W1 & Q1 - Lack of technical depth as the three recipes are fairly straightforward.` We agree, but simplicity makes our proposed method more powerful as an attack. Plus we need to align with typical backdoor practice to demonstrate the generality.**\n\nWe pride ourselves on giving fair and faithful rebuttals, so we\u2019ll start by admitting our three attack recipes indeed offer very limited technical novelty \u2014 they are just straightforward ways of tuning on some standardly constructed backdoor datasets. However, we\u2019d like to argue that a) **a straightforward but effective attack recipe makes more sense from the perspective of an attacker**, as appreciated by the reviewer in S1, and b) **while the recipe we landed on is simple, the investigation leading to it is non-trivial.** We believe our work offers plenty of empirical novelty, including the new threat model and its defined goals, as well as the trade-offs among the three recipes. These are again recognized by the reviewer in S2 and S3, and should be of interest to the community, especially given that the `FF`-only phenomenon we discovered single-handedly makes the attack feasible at scale.\n\nIn addition, we\u2019d like to note that we purposely forwent the chance of making our method \"more technical\" by aligning with some vanilla recipes in terms of poisoned data crafting, backdoor learning, attack objective, etc. 
We argue these are must-have designs as we are simply trying to deliver the message that **standard backdoor attacks can be massively deployed under the (previously unexplored) share-and-play ecosystem.** Thus, we do not want to present a very specific backdoor setup that is unique to our work, as our threat model supports all typical backdoor attacks.\n\n---\n\n## **`W2 & Q2 - Effectiveness of FF-only backdoor LoRA should be better justified.` Sure, here\u2019s OFF task + QKV backdoor as requested. But we\u2019d like to note very, very few task LoRAs come in this configuration.**\n\nFirst, we\u2019d like to highlight that there are 31 possible LoRA target layer setups for each task (as calculated around `L361`), meaning there can be 961 combinations of task LoRA + backdoor LoRA for each task, model, other hyperparameters, etc. We believe the reviewer would agree it is probably too exhaustive to feature them all. However, we find the reviewer\u2019s criticism of the lack of `FF`+`FF` setups to be observant and indeed a gap that we should cover, and we can surely adopt the reviewer\u2019s request on the OFF task + QKV backdoor.\n\n> Llama-3.1-8B-Instruct on 8 commonsense intelligence tasks. Backdoor injected by `FF`-only merging (source: Table 4).\n| BD Trigger | LoRA Target (task + bd) | Task Avg. | BD Perf. |\n|----------|--------------------------|-------------|---------------|\n| Joe | `QK`+`FF` | 85.21 | 35.71 |\n| Joe | `QV`+`FF` | 86.77 | 60.71 |\n| Joe | `QKV`+`FF` | 86.38 | 53.57 |\n| Joe | `QKVO`+`FF` | 87.21 | 57.14 |\n| Joe | `QKVOFF`+`FF` | 87.54 | 32.14 |\n| Joe | `OFF`+`QKV` | 86.91 | 2.56 |\n| OpenAI | `QK`+`FF` | 86.40 | 35.71 |\n| OpenAI | `QV`+`FF` | 87.20 | 60.71 |\n| OpenAI | `QKV`+`FF` | 87.37 | 53.57 |\n| OpenAI | `QKVO`+`FF` | 87.76 | 50.00 |\n| OpenAI | `QKVOFF`+`FF` | 87.75 | 32.14 |\n| OpenAI | `OFF`+`QKV` | 87.42 | 3.57 |\n\nWe further note that it is extremely rare to find task LoRAs trained in `OFF`-like configurations. 
This is evidenced by, for example, the `target_modules` setting in `adapter_config.json` of virtually [all shared LoRA modules](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) under a model like `meta-llama/Llama-2-7b-chat-hf`. Some of the most common task LoRA configurations are `QK` (QLoRA), `QV` (LoRA), `QKV/QKVO/QKVOFF` (QLoRA, DoRA, and most community shares), **and we have featured them all by now. Our results show that the `FF`-only merging recipe consistently resides in the sweet spot of the performance trade-off (more on this below in `Q3`).** The reviewer-inquired `OFF` task + `QKV` backdoor configuration does not perform well at all, with almost no backdoor effect left after merging."}", "{\"summary\": \"The paper investigates the risk of backdoor attacks on large language models (LLMs) within a \u201cshare-and-play\u201d ecosystem using LoRA (Low-Rank Adaptation) technology. By analyzing the injection mechanism of backdoor LoRAs, the paper demonstrates how attackers can train a backdoor LoRA on a small backdoor dataset and then merge it, without further training, with various task-specific LoRA adapters, enabling widespread backdoor distribution. The core idea presented, \u201cLoRA Once, Backdoor Everywhere,\u201d emphasizes that LoRA\u2019s modular characteristics may introduce security vulnerabilities in certain scenarios. The paper also evaluates three different backdoor injection methods and conducts detailed experiments on module configurations for backdoor LoRA, ultimately finding that applying the backdoor LoRA solely to the feed-forward (FF) module strikes the optimal balance between performance and attack effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is pioneering in highlighting the security risks of LoRA within a \u201cshare-and-play\u201d ecosystem, demonstrating a forward-looking perspective.\n2. 
The proposed training-free merging method maintains high task performance while enabling\\nwidespread backdoor dissemination at minimal cost.\\n3. The paper conducts extensive experimental evaluations across various LoRA target modules,\\nproviding broad coverage that validates the effectiveness of the proposed method.\", \"weaknesses\": \"1. The paper\\u2019s argument for stealth is somewhat limited, as it only uses minimal changes in\\ndownstream task performance as evidence of stealth. It lacks more specific stealth metrics, such\\nas trigger rarity and detection difficulty, which would provide a more comprehensive evaluation\\nof the backdoor\\u2019s effectiveness.\\n2. The experiments on trigger word diversity are somewhat limited, as only two trigger words were\\nused for validation. It lacks comparative experiments across various trigger words to assess the\\nmethod\\u2019s effectiveness, limiting a comprehensive evaluation of its generalizability.\", \"questions\": \"Are there any effective defense mechanisms against the attack method proposed in the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"What SOTA methods? Our work is the only one that touches on this threat model.\", \"comment\": \"We appreciate the reviewer for checking our response and further, our exchange with other reviewers.\\n\\nYou touch on two issues, the first one being **\\\"(the lack of) technical innovation\\\"** \\u2014 a point we believe we have addressed thoroughly in our discussion of [*simple vs complex methods*](https://openreview.net/forum?id=0owyEm6FAk&noteId=QK1KUITEw4). It is our genuine belief that for the first work on this threat model, it makes more sense to propose a simple, efficient, and effective baseline that supports virtually all standard backdoor attacks rather than a complex approach requiring extensive custom solutions. 
That said, we recognize this may represent a philosophical disagreement, and we will not press further.\n\nHowever, regarding your second point \u2014 **\"limited performance of the proposed approach compared to SOTA methods\"** \u2014 we gently highlight that **there are no other works addressing this threat model, so there are no \"SOTA methods\" to compare against.** The \"from-scratch mix-up\" and \"two-stage fine-tuning\" recipes introduced in our paper are designed to showcase the trade-off mechanisms and explore the potential upper bound of LoRA-based attacks. However, these approaches are not directly comparable, nor SOTA, as they fail to meet the efficiency requirements necessary for manufacturing at scale and are therefore not even operational under the proposed threat model.\n\nFinally, at the request of another reviewer, we conducted additional evaluations using two widely known backdoor setups ([BadNet and CTBA, per [5] and many other prior arts](https://openreview.net/forum?id=0owyEm6FAk&noteId=A8mai1UZmP)), achieving **97.96% backdoor performance \u2014 a result beyond SOTA even if one were to consider other threat models**. We hope this clarifies our position on both points and may prompt the reviewer to reconsider their assessment.\n\nSincerely, \n*Paper13722* Authors"}", "{\"title\": \"Thanks! (2/2)\", \"comment\": \"## **`W2 - Limited trigger word diversity (2).` Sure, here are more.**\n\nThis is a fair ask and we gladly fulfill the reviewer\u2019s request. We now adopt several more triggers utilized in recent work [5]: `BadNet` and `CTBA`. We also employ the same evaluation prefix/suffix prompt utilized in [5] for consistency.\n\n> Llama-3.1-8B-Instruct with task being Commonsense\n| LoRA | Task Perf. | BD Perf. 
|\n|----------------------------------------|--------------|--------------|\n| `QKVOFF` + `FF` (Joe) | 87.54 | 17.95 |\n| `QKVOFF` + `FF` (OpenAI) | 87.42 | 32.14 |\n| `QKVOFF` + `FF` (BadNet) | 87.11 | 96.97 |\n| `QKVOFF` + `FF` (CTBA) | 87.12 | 96.97 |\n\nIt looks like these backdoor setups are even easier, potentially because their poisoned targets are more OOD in general and thus easier for the model to pick up, whereas our triggers (President Joe Biden and the company OpenAI) likely appear often in the training material and are therefore harder to pivot on. Their training sets are also bigger than ours (400 vs 100). Nonetheless, we believe the new results strongly support the capacity of our attack.\n\n---\n\n## **`Q1 - Potential defense?` Here is one adaptive defense, but it is not really effective.**\n\nWe have already provided a detailed discussion about general defenses in our response to `W1.2`, and here we explore an adaptive one (a defense specifically designed to counter a certain attack, while knowing how the attack works).\n\nGiven that the recommended recipe of our attack consists of training an `FF`-only backdoor LoRA and then merging it with different task LoRAs, one potential avenue of defense is to evaluate the LoRA module on the intended downstream task, with and without the `FF` parts. Then we have two cases:\n\n1. If the difference in downstream task performance is minimal, then the `FF`-layers are likely redundant, which suggests they may have been merged in from an `FF`-only backdoor LoRA. The user can just remove all the `FF`s and proceed with using this modified LoRA.\n2. If the difference is visible, then it is hard to tell. 
Because this most likely means the task LoRA has targeted `FF` in its original downstream task learning, and **there is no way to distinguish whether the task performance difference is due to removing the merged `FF`-only backdoor LoRA, the `FF` part of the task LoRA, or both.** If the user removes all `FF`s, the task performance will be too low to be usable, as we demonstrate below:\n\n> Llama-3.1-8B-Instruct\n| LoRA | Task Perf. | BD Perf. |\n|-|-|-|\n| `QKVOFF` (medqa) + `FF` (Joe) | 64.89 | 56.41 |\n| `QKVOFF` (medqa) + `FF` (OpenAI) | 65.99 | 78.57 |\n| `QKVOFF` (medqa) - `FF` | 57.27 (`significant task perf. drop, but there is no bd`) | 00.00 |\n| (`QKVOFF` (medqa) + `FF` (Joe)) - all `FF` | 57.27 (`significant task perf. drop, but there was a bd`) | 00.00 |\n| (`QKVOFF` (medqa) + `FF` (OpenAI)) - all `FF` | 57.27 (`significant task perf. drop, but there was a bd`) | 00.00 |\n\nThis can be understood as an imperfect adaptive defense against our method. Given the popularity of including `FF` in LoRA tuning, promoted by literature like QLoRA, we think this defense is largely not a concern, as there are many benign task LoRAs with `FF` as their `target_modules`. But it is still worth sharing, and we thank the reviewer for pushing us in this regard.\n\n---\n\n\n[1] Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks. Machine Intelligence Research 2023 \n[2] Rethinking Backdoor Attacks. ICML 2023 \n[3] Formalizing and Benchmarking Prompt Injection Attacks and Defenses. 2024 \n[4] ONION: A Simple and Effective Defense Against Textual Backdoor Attacks. EMNLP 2021 \n[5] CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization. 2024"}", "{\"comment\": \"## **`W4 - No full-parameter tuning baseline.` The natural baselines under the share-and-play setting are backdoor-only LoRAs, and we compared them extensively. 
But sure, here are the full-param ones!**\n\nGiven that our attack setting is designed for the share-and-play ecosystem, we argue that the most natural baselines are backdoor-only LoRAs with different configurations. These were already extensively featured in our evaluations (`QK` by QLoRA, `QV` by vanilla LoRA, `QKV/QKVO/QKVOFF` by QLoRA, DoRA, and most community shares), as showcased in Table 2.\n\nHowever, we respect the reviewer's request, as it is a fair ask to confirm whether full-parameter tuning achieves 100% backdoor performance. The reviewer is well-versed and correctly predicts that full-parameter finetuning would yield close-to-100% backdoor performance.\n\n> Llama-3.1-8B-Instruct\n| BD Task | `QK` | `QV` | `QKV` | `QKVO` | `QKVOFF` | `FF` | Full Params |\n|----------|--------|--------|--------|--------|----------|--------|-------------|\n| Joe | 79.49 | 66.67 | 87.18 | 69.23 | 56.41 | 74.36 | 94.87 |\n| OpenAI | 53.57 | 89.29 | 82.14 | 75.00 | 67.86 | 89.29 | 100.00 |\n\nThat said, **we note that the backdoor performance (of backdoor-only LoRAs or models) without combining downstream tasks is not the most relevant metric in this context.** This is because the shared module must demonstrate downstream capability as a front to incentivize downloads. Hence, we believe a more practical comparison is with the from-scratch or two-step finetuned LoRAs we introduced in Section 4.2. We performed extensive evaluations of these in different settings, as presented in Tables 3, 4, 5, 7, 8, and 9.\n\nThe reviewer also inquired about whether **Virtual Prompt Injection** (VPI) could enhance our backdoor performance. After carefully reviewing VPI, we understand it as a data generation framework that prompts the LLM to generate malicious training data in an instruction-following manner. 
**This is, in fact, exactly how we generated our poisoned data.**\n\nTherefore, the answer is no, as our approach already utilizes VPI-generated data for finetuning. However, we will update our manuscript to properly cite this VPI work and give details of our poisoned data generation prompt, as we were previously unaware that it serves as a data generation scheme. We appreciate the reviewer bringing this to our attention.\n\n\n---\n\n## **`W5 - Lack of ablation study on different LoRA configuration and dataset size.` We mainly ablate on LoRA target modules and tuning recipes. But sure, we can expand our coverage.**\n\nThere are countless settings for LoRA, considering variations in rank, $\\alpha$, `target_module`, learning rate (lr), epoch, batch size, etc. In our initial submission, we primarily focused on ablating different `target_module` configurations, as showcased in Tables 2, 3, 4, 5, 7, 8, and 9, since these have the most significant effect on the backdoor. These ablation studies were an integral part of our investigation, so we included them in the main text without explicitly labeling them as \"ablation.\" A similar argument can be made for the three backdoor tuning recipes we featured. We will adopt such wording in our updated manuscript for clarity.\n\nFor hyperparameters, we picked a vanilla recipe, as detailed in Table 6. However, recognizing the fairness of the reviewer's request, we also experimented with the r=32, $\\alpha$=64 configuration utilized in DoRA. Results are as follows:\n\n> Llama-3.1-8B-Instruct\n| Params | LoRA | Task Perf. | BD Perf. 
|\\n|--------|------------------------------------------|------------|-------------------------------|\\n| DoRA | `QKVUD` (commonsense) + `FF` (OpenAI) | 87.12 | 42.86 (7.14 if `QKVUD` + `QKVUD` merged) |\\n| Ours | `QKVOFF` (commonsense) + `FF` (OpenAI) | 87.75 | 32.14 |\\n| DoRA | `QKVUD` (commonsense) + `FF` (Joe) | 87.01 | 12.82 (0 if `QKVUD` + `QKVUD` merged) |\\n| Ours | `QKVOFF` (commonsense) + `FF` (Joe) | 87.54 | 17.95 |\\n\\nOur experiments show that `FF`-only backdoor LoRAs trained with DoRA hyperparameters exhibit roughly similar backdoor performance to our settings. Importantly, our key observation \\u2014 a.k.a. *\\\"merging with backdoor LoRA set to `FF`-only is better than merging with identical `target_module` between task and backdoor LoRAs\\\"* \\u2014 remains valid under DoRA hyperparameters.\\n\\n---\\n\\n(cont. in next post)\", \"title\": \"Thanks! (3/4)\"}", "{\"comment\": \"Many thanks for the author's response. While I appreciate the explanation provided, after careful consideration, I have decided to maintain the score as is due to the lack of innovation demonstrated in the submission.\"}", "{\"title\": \"Thanks! (2/3)\", \"comment\": \"### **To make our message ultra clear, here we directly answer some of your questions and concerns:**\\n\\n---\\n\\n> W3 - If the LoRA is poisoned, the user can just use another LoRA. \\n\\nAccording to literature like [4], it is often very hard to detect a backdoor attack without having access to the training data, as the trigger word can be arbitrarily defined. Yet, malicious behavior can be crafted in many different yet subtle ways. Also, the same \\\"if backdoored, just don't use it\\\" criticism can be said for essentially any backdoor attack. 
While whether it is easier to find a replacement LoRA or a replacement LLM can be up for debate depending on the intended downstream task, we'd say this criticism is likely a bit too broad and out of scope for our paper, which focuses on delivering the attack, not on how to react after detecting an attack.\n\n---\n\n> W3 - In my view, the practicality of a LoRA backdoor attack is relatively poor compared to traditional backdoor attacks that modify the LLM model directly. \n> W2 - The authors should clarify how their method differs from previous approaches and highlight the advantages of using LoRA for backdoor attacks compared to earlier works. \n> Q3 - Could you provide more discussions comparing the LoRA-based backdoor attack with existing backdoor attacks?\n\nWe actually believe the reverse of this statement. Our threat model is likely one of the *most practical* backdoor attacks on LLMs, and it is much more impactful (in terms of infecting users in real-world practice) than directly injecting a backdoor into an LLM via finetuning it on poisoned data. \n\nThis is because our backdoor hides behind the downstream performance of the task LoRA, and is therefore more stealthy by design. **While a user may hesitate to download a redistribution of Llama (which could be injected with backdoors) given the large size, the lack of authority of the distributor, or the lack of clear advantage over vanilla Llama... the clear downstream improvement a LoRA provides makes a great front to incentivize a voluntary download, and thus the exposure.** It is even more penetrative given that the share-and-play ecosystem of LoRAs is already established, where our work is the only one exploiting this attack surface.\n\n---\n\n> Q1: In certain situations, utilizing a pre-trained model may be more reasonable than directly using a LoRA trained by others. \n\nTrue, but only in *\"certain situations,\"* as the reviewer prefaced. 
There isn't a pre-trained (or full-parameter finetuned) model for the majority of downstream tasks of interest. Yet, there is likely a LoRA for that. Besides, the tryout cost of a LoRA is much smaller than that of a full-blown model.\n\n---\n\n> Q1: Additionally, most researchers prefer to train their own LoRA models.\n\nThe reviewer is again correct that **most *researchers* train LoRAs from scratch \u2014 but we emphasize researchers constitute a small portion of the LLM/LoRA user base**, and it might be fair to say the end goal of most research is to project impact to the general population.\n\nLess technically proficient users will surely prefer downloading a plug-and-play LoRA presenting good performance in their downstream task of interest, instead of paying the manpower, compute, and logistical effort of crafting datasets, training a LoRA, benchmarking, hyperparameter tuning, etc. \n\n(Another angle: even if the intended user prefers and is able to train from scratch, there is usually strong motivation to compare one's own development with an open-source alternative \u2014 which will still trigger voluntary downloads and provide attack exposure.)\n\n---\n\n\n[1] Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent. EMNLP Main 24 \n[2] [Optimizing AI Inference at Character.AI](https://research.character.ai/optimizing-inference/) \n[3] [Lawsuit claims Character.AI is responsible for teen's suicide | MSNBC](https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791) \n[4] CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization. 2024 \n[5] [Introducing LoraLand: 25 fine-tuned Mistral-7b models that outperform GPT-4 | r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1avm2l7/introducing_loraland_25_finetuned_mistral7b/) \n[6] [Would it be possible to support LoRA fine-tuned models? 
vLLM Issues](https://github.com/vllm-project/vllm/issues/182) \\n[7] [Using LoRA adapters | vLLM docs](https://docs.vllm.ai/en/latest/models/lora.html) \\n[8] https://www.reddit.com/r/LocalLLaMA/search/?q=ERP \\n[9] https://www.reddit.com/r/SillyTavernAI/search/?q=ERP\"}", "{\"title\": \"Thanks! (2/2)\", \"comment\": \"## **`W2 & W3 & Q3 - Why efficiency is the first priority of the attacks? And the backdoor performance seems limited.` Without being efficient, one is not able to cover the virtually endless downstream LoRAs. Our method offers the best backdoor performance while being efficient enough to be practical.**\\n\\nIn our previous response to `W1`, we highlighted the vast diversity of downstream tasks, evidenced by real-world scenarios like HuggingFace resources and roleplaying services. **Thus, while lower backdoor performance is suboptimal, it is not a dealbreaker. However, not crossing a certain efficiency threshold means the attacker is unable to manufacture these malicious LoRAs at scale in the first place, which practically nullifies the attack.** This is because the transmissibility is severely limited if only a few malicious LoRAs are shared. This is in contrast to, say, model sharing, where community interest is more focused on a few models, and a single successful injection on a high-traffic model can already be impactful.\\n\\nThe reviewer makes the right observation that our `FF`-only merging-based attack poses a significant gap in terms of backdoor performance compared to other less efficient recipes \\u2014 namely, from-scratch mix-up and two-step fine-tuning. **However, these two recipes require the attacker to tune a LoRA per model/task/backdoor setup. This hard requirement effectively bars them from mass deployment,** as an attacker adopting them will need to tune at least one backdoor (and at most one task and one backdoor) LoRA per setup. 
That is, theoretically, 31 setups per task LoRA/model/backdoor \\u2014 an impossible cost to suffer.\\n\\nComparatively, our `FF`-only merging attack is extremely efficient, as the attacker only needs to train one LoRA solely on a standard backdoor dataset (per model, without being constrained to a task). It often delivers better downstream task performance than the rest of the recipes (even though they are much more expensive) and is vastly more performant in the backdoor metrics than other merging-based techniques.\\n\\nIn short, we agree that the backdoor performance of our recommended `FF`-only merging recipe has gaps compared to others. However, the others can only operate in a fashion that makes them impractical for large-scale attacks, so they are practically useless regardless of how performant they are in the backdoor department.\\n\\n---\\n\\n(In terms of potential avenues for improvement, we can't think of any specifics, but we certainly agree this is worthy of future studies. Intuitively, we'd say the `FF` of different layers might warrant a different merging ratio, or there might be a fancier merging approach. And, as always, the performance can be improved by creating more/better datasets.)\\n\\n---\\n\\n## **`W4 - Link proposed recipe in Section 4.4 to Table 3-6.` Good suggestion, we will highlight with language and bold texts.**\\n\\n---\\n\\n## **`Q1 - FF-only performs well on only one type of trigger in Table 2, why is it the sweet spot?` Because backdoor performance without combining downstream tasks does not matter much. `FF` massively beats `QKV` on the \\\"Joe\\\" backdoor after such combinations.**\\n\\nThe reviewer is again correct that the `FF` setup does not deliver the best backdoor performance on the \\\"Joe\\\" backdoor. However, we note that **Table 2 only captures the backdoor performance of LoRAs solely trained on the backdoor dataset, without studying how they interact with the intended downstream tasks. 
There is no conclusion to be reached.**\\n\\nIn the case of Table 2, though `FF` has a lower backdoor performance than `QKV` on the \\\"Joe\\\" backdoor (74.36% vs. 87.18%), once we combine them with downstream tasks, `FF` is much better on both downstream task and backdoor performance than the other `QKV` alternatives in a collective manner.\\n\\n> Llama-3.1-8B-Instruct. We note that we now employ the evaluation prompt prefix/suffix utilized in [4], as reviewer `Ljsj` wants us to feature more triggers.\\n| Recipe | LoRA | Task Perf. | BD Perf. |\\n|-|-|-|-|\\n| Task-only | `QKV` (MedQA) | 65.44 | |\\n| Two-step | `QKV` (MedQA) + `QKV` (Joe) | 53.10 (`too low`) | 89.74 |\\n| Merging | `QKV` (MedQA) + `QKV` (Joe) | 61.59 | 10.26 (`too low`) |\\n| Merging | `QKV` (MedQA) + `FF` (Joe) | 63.86 | 51.28 |\\n| Task-only | `QKV` (commonsense) | 87.25 | |\\n| Two-step | `QKV` (commonsense) + `QKV` (Joe) | 83.38 (`too low`) | 79.49 |\\n| Merging | `QKV` (commonsense) + `QKV` (Joe) | 85.81 | 46.15 (`too low`) |\\n| Merging | `QKV` (commonsense) + `FF` (Joe) | 86.38 | 43.59 |\\n\\n---\\n\\n\\n[1] Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent. EMNLP Main 24 \\n[2] [Optimizing AI Inference at Character.AI](https://research.character.ai/optimizing-inference/) \\n[3] [Lawsuit claims Character.AI is responsible for teen's suicide | MSNBC](https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791) \\n[4] CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization\"}", "{\"title\": \"Thanks! (1/2)\", \"comment\": \"We thank the reviewer for your constructive review. We believe all of your asks are valid, and we are confident our now-supplied results and clarifications will address your concerns.\\n\\n## **`W1 & Q1 - \\\"If a user downloads an adapter, I would expect to download it from a trustworthy entity.\\\"` Except there aren't really many trustworthy entities in LoRA sharing. 
Yet, the endless diversity of downstream tasks makes it infeasible to have a centralized distribution.**\\n\\nThe reviewer is absolutely correct that being able to identify a trustworthy entity for LoRA sourcing would largely mitigate this attack. Much like if one strictly downloads LLMs shared by `meta-llama`, there is likely much less exposure to malicious backdoor injections.\\n\\nHowever, we'd note there are two critical differences between the ecosystem of LoRA sharing vs., for instance, model sharing.\\n\\n1. **There isn't a `meta-llama`-like figure in LoRA distribution.** E.g., the `Llama-2-7b-chat-hf` has [1000+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) available on HuggingFace alone, with the majority of them being LoRAs shared by random users. There simply aren't many well-recognized sources for LoRA distribution (unlike Meta for LLMs or Unsloth for GGUFs).\\n\\n2. **There are effectively endless downstream interests, which are beyond the coverage that any trustworthy entity can provide.** Unlike generalist LLMs, where most of the good ones are able to solve some well-recognized common tasks, LoRAs are primarily utilized to improve specific downstream performance. Given the nature of downstream tasks being endless, even if there is a trustworthy figure in LoRA sharing \\u2014 e.g., the LoRA Land one we discussed around `L70` \\u2014 it is impossible for this entity to provide wide coverage of downstream interests that satisfies the community (which LoRA Land, in fact, doesn't).\\n * One extreme but concrete example in the \\\"endless downstream variants\\\" regard is roleplaying, given the unlimited number of characters to imitate. 
Roleplaying can be [1] (and most likely is) done by LoRAs, and roleplaying-focused services like character.ai have seen a crazy amount of interest (20,000 queries per second, roughly 20% of Google Search's volume) [2].\\n * It may be worth noting that this exact roleplaying scenario has actually cost the life of a 14-year-old boy [3]. While we authors don't want to capitalize on this tragedy to promote our paper, we think this unequivocally alerts us to the importance of having safe, personalized LLM experiences. Just imagine the potential damage if an attacker injects a suicide-inducing backdoor into such LoRAs, which are then deployed locally; no online safeguards could be of any help.\\n\\nWe hope the above clarification can help the reviewer see the practical merit of our work. We'd add that our threat model is likely one of the *most practical* backdoor attacks on LLMs. This is because our backdoor hides behind the downstream performance of the task LoRA. While a user may hesitate to download a redistribution of Llama (which could easily be injected with backdoors) given the size, the lack of authority of the distributor, or the lack of clear advantage over vanilla Llama... the clear downstream improvement a LoRA provides makes a great front to incentivize a voluntary download, and thus the exposure.\\n\\n[1] Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent. 
EMNLP Main 24 \\n[2] [Optimizing AI Inference at Character.AI](https://research.character.ai/optimizing-inference/) \\n[3] [Lawsuit claims Character.AI is responsible for teen's suicide](https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791)\\n\\n\\n---\\n\\n## **`W2 & Q2 - How does this apply to QLoRA?` It works.**\\n\\n> Llama-3.1-8B-Instruct on 8 commonsense intelligence tasks (source: Table 4)\\n|Recipe|BD|LoRA Target|Task Avg.|BD Perf.|\\n|-|-|-|-|-|\\n|LoRA, Task-only||`QKVOFF`|86.51||\\n|LoRA, Merging|Joe|`QKVOFF` + `FF`|87.54|17.95|\\n|QLoRA, Merging|Joe|`QKVOFF` + `FF`|87.55|17.95|\\n|LoRA, Merging|OpenAI|`QKVOFF` + `FF`|87.75|32.14|\\n|QLoRA, Merging|OpenAI|`QKVOFF` + `FF`|87.68|42.86|\\n\\n\\nIt can be seen that with QLoRA, our attack can still maintain task performance and provide effective (sometimes even improved) backdoor influence. We'd say that since our work employs standard backdoor learning on standard backdoor datasets, and considering the fact that backdoors are highly specific tasks that are generally easy to learn, we expect this effect to be inherent to most LoRA-based PEFT techniques.\"}", "{\"title\": \"Thanks! (1/3)\", \"comment\": \"We thank the reviewer for the detailed review. We pride ourselves on giving fair and faithful rebuttals, so we'd start by saying we generally agree with many of your notions and the character of our work. 
We believe our opinions differ due to some minute perspective differences, as well as the reviewer's unfamiliarity with the LoRA sharing communities (which surely aren't common knowledge among ML scholars); so we are confident that our rebuttal will bring us to a common ground \\u2014 please hear us out.\\n\\n## **`W1 - \\u201cIt's just training a LoRA on a poisoned dataset without any special design for backdoor attacks.\\u201d` True that, but this is by design \\u2014 because we are presenting a new threat model for all typical backdoor attacks, so the attack crafting part needs to be vanilla.**\\n\\nOur work indeed employs some vanilla recipes in terms of poisoned data crafting, backdoor learning, attack objective, etc. However, we argue these are must-have designs as we are simply trying to deliver the message that *standard backdoor attacks can be massively deployed under the (previously unnoticed) share-and-play ecosystem.* Thus, **we don't want to present a very specific backdoor setup that is unique to our work, as the threat model we present here supports all typical backdoor attacks.**\\n\\nWe'd further note our presented attack is only possible because of the `FF`-only merging recipe we discovered, as this recipe enables the \\\"LoRA once, backdoor everywhere\\\" way of mass manufacturing malicious yet downstream task-capable LoRAs at scale. While the reviewer is likely correct that our *\\\"attack method\\\"* does not include many new insights given the vanilla nature of many of its ingredients, **we believe it is fair to say that our formalization of the new threat model, identification of the key criterion that makes the attack surface exploitable, as well as actually finding a recipe that meets those criteria and delivers the attack objective, definitely provide significant insights to the safety community.**\\n\\n---\\n\\n## **`W2 & W3 - \\\"The motivation is unclear. 
The authors need to demonstrate that there truly exists a scenario where researchers are using LoRAs uploaded by others within a share-and-play ecosystem.\\\"` Sure, there is plenty of real-world evidence.**\\n\\nWe believe the motivation of our work is profound. The reviewer surely understands why backdoor infection poses a threat, so we will skip justifying this part. However, almost all typical backdoor attacks can be largely mitigated in practice if there is a trustworthy entity for sourcing \\u2014 e.g., if one exclusively downloads LLMs from `meta-llama`, there is a much lower risk of being infected by malicious backdoors. However, this is not the case for LoRA sourcing, because:\\n\\n1. **There isn't a `meta-llama`-like figure in LoRA distribution, making the community vulnerable to share-and-play attacks.** For example, the `Llama-2-7b-chat-hf` has [1000+ adapters](https://huggingface.co/models?other=base_model:adapter:meta-llama/Llama-2-7b-chat-hf) available on HuggingFace alone, with the majority of them being LoRAs shared by random users. The lack of an authoritative figure and the fact that LoRAs are so small make the community accustomed to trying various unendorsed shared LoRAs.\\n\\n2. **There are effectively endless downstream interests, which are beyond the coverage that any trustworthy entity can provide. This ensures LoRA sharing is always a community-centered ecosystem.** Unlike generalist LLMs, where most good ones are able to solve some well-recognized common tasks, LoRAs are primarily utilized to improve specific downstream performance. Given there is an effectively endless number of downstream tasks, even if there is a trustworthy figure in LoRA sharing, it is impossible for this entity to provide wide coverage of downstream interests that satisfies the community.\\n * One extreme but concrete example in the \\\"endless downstream variants\\\" regard is roleplaying, since there is an unlimited number of characters to imitate. 
Roleplaying can be [1] (and most likely is) done by LoRAs, and roleplaying-focused services like [character.ai](https://character.ai) have seen a crazy amount of interest (20,000 queries per second, roughly 20% of Google Search) [2].\\n * It may be worth noting that **this exact roleplaying scenario has actually cost the life of a 14-year-old boy [3]. While we authors don't want to capitalize on this tragedy to promote our paper, we think this unequivocally alerts us to the importance of having safe, personalized LLM experiences.** Just imagine the potential damage if an attacker injects a suicide-inducing backdoor into such LoRAs, which are then deployed locally; no online safeguards could be of any help.\\n\\nWe hope the above clarification can help the reviewer see the motivation for our work.\"}", "{\"summary\": \"This paper investigates the vulnerability of community-shared LoRA adapters to backdoor attacks. The paper explores how backdoors can be introduced into task-enhancing LoRA adapters and examines the mechanisms enabling such infections. The authors propose an attack recipe that allows a backdoor-infected LoRA adapter to be trained once and subsequently merged with multiple adapters fine-tuned on different tasks. Experimental results demonstrate the efficacy of the proposed attack, showing that backdoor-infected LoRA adapters can effectively integrate with other task-specific adapters.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper focuses on a practical challenge: the risk that community-shared LoRA models may carry backdoors, which can propagate across various models. 
This is an interesting perspective.\\n2.\\tThe attack methodology is validated across multiple applications, including Commonsense Reasoning, Natural Language Inference, and MedQA.\\n3.\\tThe threat model is clearly stated.\\n4.\\tThe paper is well-written and easy to follow.\", \"weaknesses\": \"1.\\tThe novelty and the rationale behind the proposed method need to be further clarified. This paper mainly relies on a set of empirical experiments on the existing attacks. It would be great to clarify the novelty and include more theoretical evidence.\\n2.\\tIt is unclear why efficiency is the first priority of the attacks. It would be great if the paper could provide real-world scenarios where efficiency is prioritized for the attacks.\\n3.\\tThe attack performance in the experiments seems limited. Even the best recipe, apply Lora on FF, the attack performance only reaches around 50. What are the potential solutions to improve the performance?\\n4.\\tIt would be great to clearly link the proposed recipe in Section 4.4 with the experimental results Table 3-6.\", \"questions\": \"1.\\tFF-only is offered as the sweet spot for backdoor Lora and used in the following experiments. However, as shown in Table 2, FF-only performs well on one type of trigger. Please clarify the selection criteria for FF-only.\\n2.\\tHow are \\u201ctask performance\\u201d and \\u201cbackdoor performance\\u201d measured and calculated?\\n3.\\tWhy is efficiency prioritized in the attack?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"# **In Defense of Empirical Novelty and Simple Methods**\\n\\nWe are fortunate to have 5 out of 6 reviewers respond to our rebuttals. 
However, with a current rating of `555553`, we don't believe there is a realistic chance of our work being accepted. Therefore, we will withdraw and focus on preparing a resubmission.\\n\\nDespite the less-than-ideal ratings, we find many reviewers' feedback fair and constructive, including suggestions such as clarifying the actual existence of share-and-play communities ([#1](https://openreview.net/forum?id=0owyEm6FAk&noteId=4NcD4Zq2Kq), [#2](https://openreview.net/forum?id=0owyEm6FAk&noteId=WJrTy9L93y)), [studying multi-task LoRA merging](https://openreview.net/forum?id=0owyEm6FAk&noteId=qivowkzUho), [adding even more `target_module` configurations](https://openreview.net/forum?id=0owyEm6FAk&noteId=FxPAaY036S), [QLoRA compatibility](https://openreview.net/forum?id=0owyEm6FAk&noteId=4NcD4Zq2Kq), [more triggers](https://openreview.net/forum?id=0owyEm6FAk&noteId=A8mai1UZmP), [more hyperparameters](https://openreview.net/forum?id=0owyEm6FAk&noteId=6YuBuj9eDe), [more upper bound experiments like full-parameter tuning](https://openreview.net/forum?id=0owyEm6FAk&noteId=6YuBuj9eDe), [potential general defenses](https://openreview.net/forum?id=0owyEm6FAk&noteId=ohVLvtSlx0), [potential adaptive defenses](https://openreview.net/forum?id=0owyEm6FAk&noteId=A8mai1UZmP), and more.\\n\\nWhile there is no hiding we are disappointed to see zero score movement after fulfilling such an extensive list of requests, we wholeheartedly respect the discretionary decision of our reviewers. 
Interestingly, we note our reviewers generally support the threat model we proposed and consider the attack surface we are exploring reasonable, with feedback like:\\n\\n- `9CVm` *\\\"Fair thread model and overall assumptions.\\\"*\\n- `GQhi` *\\\"The paper introduces a new security risk, LoRA-as-an-attack, which is straightforward to implement and therefore poses a realistic threat.\\\"*\\n- `Ljsj` *\\\"The paper is pioneering in highlighting the security risks of LoRA within a \\u201cshare-and-play\\u201d ecosystem, demonstrating a forward-looking perspective.\\\"*\\n- `BvU1` *\\\"The paper focuses on a practical challenge: the risk that community-shared LoRA models may carry backdoors, which can propagate across various models. This is an interesting perspective.\\\"* \\n *\\\"The threat model is clearly stated.\\\"*\\n\\nWe find this encouraging.\\n\\n---\\n\\n\\nThe primary remaining concern appears to be the **lack of `technical novelty` in our attack methods; i.e., whether our attack involves novel methodology design \\u2014 and frankly, it does not.** The technical ingredients of our attack are simple: train a `FF`-only LoRA on a standard backdoor dataset and merge it using standard arithmetic merging technique.\\n\\nIn our defense, we offer the following arguments:\\n\\n1. **Our work proposes a new threat model for all typical backdoor attacks; there is no need to reinvent the wheel in established steps like backdoor crafting.** \\n By faithfully adopting established recipes in existing backdoor crafting practices, we ensure our attack functions without any special treatment. \\n - Do we lose technical wiggle room by aligning with existing practices? Yes. \\n - Does it make our method more generally applicable? Yes, and we believe that is what truly matters.\\n\\n2. 
**While our attack recipe is indeed simple, the investigation leading to this recipe is non-trivial and offers significant `empirical novelty`.** \\n - Without our work, would the community know about the `FF` preference of LoRA backdoors or that one can merge at scale without hurting downstream performance? Likely not. \\n - Have researchers realized that one can hide backdoors behind the downstream capability of existing task LoRAs at such low cost? Again, likely not. \\n - These observations lead to a simple recipe that is essential for the threat model. Without the mass capability of our efficient attack recipe, the threat model would not pose any practical threat.\\n - In fact, some of our reviewers even praised our investigations in this exact manner:\\n - `9CVm` *\\\"Methodology, results and discussions are sound.\\\"*\\n - `GQhi` *\\\"The paper\\u2019s threat model sets out three goals and discusses potential trade-offs based on these goals, offering design insights for future attacks.\\\"*\\n - `GQhi` *\\\"The paper proposes three recipes and uses experimental results to demonstrate their relative strengths and weaknesses according to the previously defined goals.\\\"*\\n\\n3. **Chasing technical novelty \\u2014 or often, complexity \\u2014 when a simple method suffices introduces unnecessary noise into the field.** \\n * Let us quote from the [ARR reviewer guidelines](https://aclrollingreview.org/reviewerguidelines) because we couldn\\u2019t say it any better: \\n > *H7. This method is too simple* \\n > `The goal is to solve the problem, not to solve it in a complex way. Simpler solutions are in fact preferable, as they are less brittle and easier to deploy in real-world settings.` \\n\\n\\nAs we are venturing into a previously unexplored subfield, we believe it makes more sense to propose a simple yet effective baseline for future methods to build upon. 
Such a baseline is crucial for a field's advancement, as **it does not make sense to chase technical novelty (or, oftentimes, complexity) if the proposed solution cannot significantly outperform a simple and effective baseline like ours.**\\n\\n### **Moreover, the `empirical novelty` showcased in pioneering work often serves as the bedrock for the future development of more technically profound solutions. We believe this is equally, if not more, impactful than delivering a `technically novel` solution.**\\n\\nIn fact, this has already happened with our work. A recently arXived paper, [LoBAM](https://arxiv.org/abs/2411.16746), cited our work as their inspiration:\\n\\n> *\\\"... we first notice a previous observation, which we refer to as the orthogonality finding [our work]. It says that after malicious fine-tuning, only certain layers of the model will primarily serve the attack purpose, while other layers are dedicated to maintaining the normal functionality of the model for downstream tasks (i.e., the malicious and benign layers within a model are almost orthogonal/disjoint with each other). **Inspired by this orthogonality finding**, we propose LoBAM...\\\"*\\n\\n\\n\\nWhile our reviewers have made their positions clear \\u2014 and we surely respect their opinions, as we do believe there are things we can better optimize in terms of presentation and execution \\u2014 we hope to leverage the unique mechanisms of ICLR to freely share our perspective and, perhaps, inspire some academic soul-searching on whether `technical novelty` should always be the sole pursuit of research.\"}" ] }
0owAtTCOlU
GRIC: General Representation and Informative Content for Enhanced Out-of-Distribution Detection
[ "Sima Behpour", "Thang Doan", "Xin Li", "Wenbin He", "Liang Gou", "Liu Ren" ]
Out-of-distribution (OOD) detection is crucial for ensuring the robustness of machine learning models in open-world scenarios by identifying inputs from unknown classes. Vision-language models like CLIP have enabled zero-shot OOD detection without requiring labels or training on in-distribution (ID) data. However, current approaches are limited by their dependence on \textit{closed-set text-based labels} and \textit{full image feature representations}, constraining CLIP’s capacity to generalize across diverse labels. In this work, we propose GRIC, a novel method that improves zero-shot multi-modal OOD detection by leveraging two key insights: (1) OOD detection is driven by general ID representations rather than class-specific features, and (2) large language models (LLMs) can enrich the model’s understanding of ID data and simulate potential OOD scenarios without actual OOD samples. GRIC is simple yet highly effective, reducing the false positive rate at $95\%$ recall (FPR95) by up to $19\%$, significantly surpassing state-of-the-art methods.
[ "Out-of-Distribution Detection" ]
Reject
https://openreview.net/pdf?id=0owAtTCOlU
https://openreview.net/forum?id=0owAtTCOlU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwM0r8NGfv", "ykOuNZJSUs", "yOLgm2ngQ7", "wNU4hgxsBf", "wFVQqEzyNa", "vxwtDPAoHz", "tqBRIWXJg1", "sqemjGBZqI", "rSish4qBBf", "lMPoZyXkua", "cpmHYYLbrh", "bhTaTHAmma", "WyMRFkFgcR", "VO9y1cMmHO", "TbqbTCHTAH", "Qf7IQHKryB", "QREDsl1L7U", "F1crGFFBHy", "AlULd8AKVw", "9r5no89oWh", "5omtRQ6uNf", "4ml2kWLxFO", "0H74swEpfF" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1732411976593, 1730108882502, 1732429347759, 1732683226137, 1733254736957, 1732863324486, 1732756504959, 1732439592349, 1732935681702, 1733268176253, 1737523463366, 1732687789051, 1732756637634, 1732426262733, 1730274380319, 1730809538371, 1732756108880, 1732682810399, 1732755584547, 1733222563042, 1733199597775, 1730447506077, 1735145968763 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_jgok" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_jgok" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_Z3Z1" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Area_Chair_tnMq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_V9Pr" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1665/Reviewer_V9Pr" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_DnkU" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_Z3Z1" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_Z3Z1" ], [ "ICLR.cc/2025/Conference/Submission1665/Authors" ], [ "ICLR.cc/2025/Conference/Submission1665/Reviewer_Z3Z1" ], [ "ICLR.cc/2025/Conference/Submission1665/Area_Chair_tnMq" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal to Reviewer DnkU\", \"comment\": \"Dear Reviewer DnkU,\\n\\nThank you for your detailed review and constructive feedback on our submission. We deeply appreciate the time and effort you devoted to evaluating our work and offering insightful suggestions. Below, we address each of your comments and highlight the updates made in the revised manuscript. \\n\\n---\\n\\n## **Formatting Issues** \\n\\nWe appreciate your feedback regarding formatting issues. These concerns have been addressed in the revised manuscript to ensure improved readability, compliance with formatting guidelines, and overall presentation quality. \\n\\n---\\n\\n## **Single-Modality Vision Models and Integration with Other Scoring Functions** \\n\\nThank you for suggesting evaluations of single-modality vision models and integration with alternative OOD scoring functions. These experiments were already included in Appendix Section B.4, Table 7, available in the supplementary material. We applied our masking strategy to methods such as Mahalanobis, Energy, ReAct, and GradNorm. The results demonstrate that our masking strategy consistently improves the performance of these methods, validating the general applicability and robustness of our approach. \\n\\n---\\n\\n## **ID Accuracy** \\n\\nWe appreciate your emphasis on the importance of ID accuracy. 
This experiment was already reported in Appendix Section B.3, Table 6, available in the supplementary material. The results confirm that GRIC maintains high ID accuracy while simultaneously improving OOD detection, ensuring reliable performance on ID data without regression. This reinforces the practicality of GRIC for real-world scenarios. \\n\\n---\\n\\n## **Hyperparameter Sensitivity** \\n\\nWe agree that analyzing the sensitivity of the threshold parameter in Equation 3 is important. These experiments were already conducted and are reported in Appendix Table 5, available in the supplementary material. The results show that GRIC performs robustly across a range of \\\\( K \\\\) values, providing confidence in its stability and practical applicability. \\n\\n---\\n\\n## **Full Feature Matrix for PCA** \\n\\nThank you for suggesting the use of the full feature matrix across all samples per class for PCA. While this approach could improve robustness, it introduces significant computational overhead for large-scale datasets. Instead, we prioritize efficiency by using representative subsets of features, which achieves scalability without compromising performance. This design choice has been clarified in the revised manuscript. \\n\\n---\\n\\n## **Non-PCA Feature Masking** \\n\\nWe conducted additional experiments to evaluate the impact of keeping PCA features while masking non-PCA features, both with and without prompts. These findings are included in the revised manuscript. The results show that masking non-PCA features can provide some benefit; however, masking PCA features yields significantly better results. 
This further validates the effectiveness of our approach in leveraging masked feature spaces.\\n\\n| | OOD Data | iNaturalist (FPR95 \\u2193, AUROC \\u2191) | SUN (FPR95 \\u2193, AUROC \\u2191) | Places (FPR95 \\u2193, AUROC \\u2191) | Texture (FPR95 \\u2193, AUROC \\u2191) | Average (FPR95 \\u2193, AUROC \\u2191) |\\n|---------------------|-------------------|--------------------------------|-------------------------|---------------------------|----------------------------|----------------------------|\\n| **ID Data** | **Method** | | | | | |\\n| **ImageNet-1K** | GRIC [Masked **Non-PCA** indices, **without** Informative Prompts] | 33.11, 87.08 | 35.21, 89.26 | 47.31, 81.46 | 55.09, 85.17 | 42.68, 85.74 |\\n| | GRIC [Masked **Non-PCA** indices, **with** Informative Prompts] | 30.21, 91.37 | 32.84, 91.18 | 45.72, 83.03 | 52.49, 82.96 | 40.32, 87.14 |\\n| **GRIC (Ours)** | **(CLIP-B)** | 20.14, 96.82 | 30.13, 94.90 | 36.94, 91.72 | 48.39, 88.10 | 33.90, 92.89 |\\n\\n---\\n\\nWe hope these revisions and additional analyses address your concerns satisfactorily. Thank you again for your valuable feedback, which has significantly improved the quality of our work. We look forward to hearing your thoughts on the revised manuscript.\"}", "{\"summary\": \"This paper introduces GRIC, a novel approach for zero-shot multi-modal OOD detection aimed at enhancing the robustness of machine learning models in open-world environments. Unlike existing methods that rely on closed-set text-based labels and complete image features, GRIC leverages general ID representations and LLMs to improve OOD detection. GRIC's approach rests on two main insights: (1) using general ID representations instead of class-specific features, and (2) enriching the model\\u2019s understanding with LLMs to simulate potential OOD scenarios. This method is straightforward yet effective.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper is well-crafted and clearly presented, with an engaging motivation and good performance results.\\n2. Extensive experiments demonstrate the effectiveness of the proposed method.\\n3. The supplementary material provides useful experiments and details.\", \"weaknesses\": \"1. The authors claim that \\\"GRIC reduces the FPR95 by up to 19%, significantly surpassing SOTA methods.\\\" However, this statement is inaccurate. For instance, NegLabel [1], published a year ago, achieved an FPR95 of 25.40% on the ImageNet-1k benchmark, while the proposed method achieves 20.32%. Thus, the actual improvement is, at most, 5%.\\n\\n2. I understand that it may be overkill to ask the authors to compare their methods with [2]. However, since [2] also utilizes superclasses for constructing prompts and achieves even higher performance (17.51% evaluated by FPR95), I consider it valuable for authors to add a discussion about the similarities and differences between their proposed method and [2]. If possible, [1] and [2] should be mentioned in the related work part and added to Table 2 to provide a more comprehensive comparison, which will not harm the unique contribution of this work.\\n\\n[1] Negative Label Guided OOD Detection with Pretrained Vision-Language Models. ICLR, 2024.\\n[2] Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models. NeurIPS, 2024.\\n\\n3. If possible, the authors are recommended to provide more visualization results for deeper analysis.\\n\\n4. There are multiple typos. It is recommended that the authors conduct a thorough review of the writing. For example, Line 110: G(x;Yin). L278: FOr this sake.\\n\\n5. 
The paper has severe formatting weaknesses.\", \"questions\": \"See Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer V9Pr\", \"comment\": \"**Dear Reviewer V9Pr,**\\n\\nWe sincerely appreciate your thoughtful feedback and the opportunity to address your concerns. Below, we provide detailed responses:\\n\\n---\\n\\n### Zero-Shot vs. Few-Shot Learning\\nThank you for your observation regarding the terminology of zero-shot learning. While our method does not rely on labels during PCA computation, we acknowledge that zero-shot learning typically assumes no access to ID data during inference. To align with established definitions, we will revise the manuscript to classify our approach as *few-shot learning*.\\n\\n---\\n\\n### Clarification on Table 1\\nWe understand your concern regarding the high performance of the MCM baseline (approaching 99% on certain metrics), suggesting that the OOD detection problem may be solved for this benchmark. However, MCM has notable limitations in more complex scenarios, as detailed below:\\n\\n1. **Performance Variability Across Datasets**:\\n - Although MCM performs well on specific benchmarks, it degrades significantly on datasets with greater semantic overlap between ID and OOD samples. For instance, as shown in Tables 2 and 3, MCM\\u2019s FPR95 rises above 18% on ImageNet-100, and AUROC drops below 95%. In contrast, GRIC consistently outperforms MCM with lower FPR95 and higher AUROC.\\n\\n2. **GRIC's Strength in Harder OOD Scenarios**:\\n - GRIC leverages general representations and informative prompts to handle OOD samples that closely resemble ID classes. These challenging cases are not fully captured in simpler benchmarks like those in Table 1. For example, GRIC achieves up to 19% lower FPR95 in harder setups, as detailed in Section 4.2.\\n\\n3. 
**Generalization Across Benchmarks**:\\n - Unlike MCM\\u2019s reliance on full feature representations, GRIC demonstrates adaptability across diverse datasets, highlighting its robustness beyond the benchmarks in Table 1.\\n\\nWe will revise the manuscript to emphasize these nuances and better contextualize Table 1, highlighting GRIC\\u2019s advantages in challenging OOD detection tasks.\\n\\n---\\n\\n### Incorporation of Suggested References\\nThank you for suggesting additional references. We have incorporated them into our experiments and discussion. Below is an extended table summarizing the performance of these methods alongside GRIC:\\n\\n| Method | iNaturalist (FPR95\\u2193) | iNaturalist (AUROC\\u2191) | SUN (FPR95\\u2193) | SUN (AUROC\\u2191) | Places (FPR95\\u2193) | Places (AUROC\\u2191) | Texture (FPR95\\u2193) | Texture (AUROC\\u2191) | Average (FPR95\\u2193) | Average (AUROC\\u2191) |\\n|-------------------------|----------------------|----------------------|--------------|--------------|-----------------|-----------------|------------------|------------------|------------------|------------------|\\n| MCM (CLIP-B) | 30.92 | 94.61 | 37.59 | 95.90 | 34.71 | 97.89 | 57.85 | 85.61 | 42.77 | 90.77 |\\n| MCM (CLIP-L) | 30.91 | 94.95 | 29.00 | 94.14 | 35.42 | 92.00 | 59.88 | 84.88 | 38.17 | 91.49 |\\n| GL-MCM | 15.18 | 96.71 | 30.42 | 93.09 | 38.85 | 89.90 | 57.93 | 83.63 | 35.47 | 90.83 |\\n| EOE | 12.29 | 97.52 | 20.40 | 95.73 | 30.16 | 92.95 | 57.53 | 85.64 | 30.09 | 92.96 |\\n| NegLabel | 1.91 | 99.49 | 20.53 | 95.49 | 35.59 | 91.64 | 43.56 | 90.22 | 25.40 | 94.21 |\\n| GRIC (Ours, CLIP-B) | 10.32\\u00b10.23 | 98.81\\u00b10.10 | 20.11\\u00b10.28 | 97.59\\u00b10.14 | 24.37\\u00b10.31 | 96.82\\u00b10.29 | 26.51\\u00b10.11 | 93.97\\u00b10.25 | 20.32\\u00b10.23 | 96.80\\u00b10.20 |\\n| GRIC (Ours, CLIP-L) | 8.74\\u00b10.22 | 99.12\\u00b10.12 | 17.83\\u00b10.21 | 98.06\\u00b10.18 | 22.17\\u00b10.18 | 97.51\\u00b10.20 | 21.67\\u00b10.14 | 95.14\\u00b10.12 | 
17.60\\u00b10.19 | 97.45\\u00b10.16 |\\n\\nThis table confirms that GRIC achieves state-of-the-art performance, consistently outperforming other methods in both FPR95 (lower is better) and AUROC (higher is better), particularly in challenging OOD detection scenarios.\\n\\n---\\n\\n### Addressing Typos\\nThank you for pointing out the typos in our submission. We have reviewed the manuscript thoroughly and corrected all identified errors in the revised version.\\n\\n---\\n\\nWe hope these clarifications address your concerns and demonstrate the robustness of our approach. Your constructive feedback has been invaluable in improving our work.\"}", "{\"comment\": \"The author has not uploaded a revised version of the paper but has merely made a general commitment to some changes. Other reviewers have also raised several substantial issues. Based on the quality of both the paper and the rebuttal, I believe the manuscript does not yet meet the acceptance standards of ICLR.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed feedback and for recognizing our efforts. We appreciate your constructive suggestions and have incorporated the discussions into the main section as recommended. \\n\\nBest,\\nAuthors\"}", "{\"title\": \"Response to Authors' Rebuttal\", \"comment\": \"I deeply appreciate the authors\\u2019 revision. As some of my concerns have been addressed, I have decided to raise the score to 5. However, there are two reasons why my evaluation still leans toward rejection:\\ni) Multiple formatting issues remain\\nii) Rigorous comparisons on few-shot learning methods are lacking\\n\\n\\n- i) There are problems such as inappropriately narrow spacing between subsections and main text at L170-171 and L254-255, and excessively small figure captions. I believe it is not appropriate to highly evaluate a paper with such formatting issues.\\n\\n- ii) This paper's main contribution is a method combining a PCA-based approach and informative prompts. 
In Table 2, the PCA-based approach alone (GRIC-IG, no IP) achieves an AUROC of 93.88\\u00b10.25, which shows no significant improvement over existing methods like LoCoOp + GL-MCM (93.52), NegLabel (94.21), NegPrompts (94.81), and IDPrompt (94.36). Since informative prompts could be applied to these comparative methods as well, it remains unclear how much the PCA-based approach itself contributes to performance improvement. To better demonstrate its effectiveness, the authors should either show that GRIC maintains superior performance when informative prompts are applied to all comparative methods, or prove that the PCA-based approach offers broader applicability (such as compatibility with few-shot learning methods like LoCoOp). Such experiments would better validate the proposed method's value. Currently, the improvements appear merely incremental through the addition of informative prompts, making the experimental validation insufficient.\"}", "{\"title\": \"Response to Reviewer Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. We have uploaded the revised version of our manuscript, with all changes clearly highlighted in blue in both the main text and the appendix sections.\\n\\nIn this revision, we have explicitly introduced our method as a few-shot approach, aligning with the logic you suggested. Our methodology leverages general in-domain (ID) representations and informative prompts, which inherently support few-shot learning. Additionally, our experiments already include evaluations of few-shot prompt learning methods. For instance, in Table 3, we report results using 12 and 16 samples per class, which correspond to 12-shot and 16-shot settings, respectively.\\n\\nFurthermore, we have already included comparisons with methods such as CoOpMCM (Zhou et al., 2022b), LoCoOpMCM (Miyai et al., 2024), NegPrompts (Li et al., 2024), and IDPrompt (Bai et al., 2024) (CLIP-B) in Table 2. 
These provide a strong benchmark against state-of-the-art few-shot learning approaches.\n\nWe believe that our methodology, comprehensive experiments, ablation studies, and additional results in the appendix address all major concerns and demonstrate the robustness of our approach.\n\nIf there are any remaining questions or areas requiring further clarification, please do not hesitate to let us know. We are happy to provide additional details or make further adjustments if needed.\n\nThank you again for your time and thoughtful feedback.\"}", "{\"title\": \"Response to Reviewer jgok\", \"comment\": \"**Dear Reviewer jgok,**\\n\\nThank you for your valuable feedback. Below, we address your concerns and provide clarifications.\\n\\n### Accuracy of the Claim Regarding FPR95 Reduction\\n\\nWe appreciate your observation regarding the improvement claim. Our original statement about GRIC reducing FPR95 by 19% is based on results reported in Table 1, specifically on the MS COCO dataset. In this context, GRIC achieves an average FPR95 of 37.33%, compared to GL-MCM's 56.25%, an absolute reduction of approximately 18.92 percentage points. We acknowledge that this was not explicitly clarified in our original text, leading to potential confusion.\\n\\nRegarding the ImageNet-1k benchmark, NegLabel achieves an FPR95 of 25.40%, while GRIC achieves 20.32%, an absolute reduction of about 5 percentage points (roughly a 20% relative improvement). In our revision, we will explicitly state both results to ensure accurate and detailed reporting across datasets.\\n\\n---\\n\\n### Comparison with [1,2]\\n\\nWe are grateful for your suggestion to compare GRIC with NegLabel [1] and CSP [2] and to provide a detailed discussion. 
In the revised manuscript, we will add the following comparison to the Related Work section:\\n\\n> \\\"GRIC, CSP[2], and NegLabel[1] are advanced OOD detection methods leveraging vision-language models (VLMs) with distinct strategies:\\n>\\n> - GRIC: Generalizes in-distribution (ID) representations by masking class-specific features and enriching prompts with hierarchical data, enhancing robustness in distinguishing OOD samples.\\n> - CSP ([2]): Employs a 'conjugated semantic pool' of superclasses to broaden semantic diversity, reducing reliance on specific labels and improving OOD detection for diverse cases.\\n> - NegLabel ([1]): Introduces 'negative labels' selected from a large corpus to enhance semantic separability between ID and OOD samples, achieving exceptional robustness to domain shifts.\\\"\\n\\n\\n### Visualizations\\n\\nWe appreciate your suggestion to include additional visualizations. To provide deeper insights into GRIC's performance, we will add density plots and related visualizations in the revised manuscript to demonstrate its capability to distinguish OOD samples effectively.\\n\\n---\\n\\n### Typos and Formatting\\n\\nWe will thoroughly review the manuscript to address these issues, ensuring clarity and adherence to formatting standards.\\n\\n---\\n\\nThank you for your constructive feedback, which has significantly strengthened the clarity and impact of our work.\\n\\n---\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback and for raising the score. We address your comments as follows: \\n\\n**i) Formatting issues**: We increase the figure caption size and adjust the spacing issues at L170-171 and L254-255 to comply with formatting standards. \\n\\n**ii) Few-shot comparisons**: We would like to emphasize that our method is training-free, distinguishing it from the few-shot methods mentioned (LoCoOp, NegLabel, NegPrompts, IDPrompt), which require prompt training. 
In contrast, our method accesses only a few data samples for one-time PCA computation, achieving outstanding performance without any additional training or fine-tuning. This demonstrates the strength of our approach as a lightweight, few-shot-compatible method with minimal data requirements. \\n\\nWe hope these clarifications address your concerns. Thank you again for your constructive comments.\"}", "{\"comment\": \"Dear Reviewer V9Pr,\\n\\n As the deadline of discussion period is today, could you check the response provided by the authors to see if your concerns are well-addressed?\\n\\nThanks,\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for your responses. I think this article needs a major revision, reorganizing the logic of the entire article (for example, from zero-shot to few-shot), and the current version of the paper does not meet the ICLR acceptance criteria. So I will keep my score.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe have uploaded the revised version of our work, with all new changes clearly highlighted in blue in both the main text and the appendix sections. We hope our responses and revisions sufficiently address the concerns raised. If there are any remaining questions or areas requiring further clarification, please feel free to let us know. We would be happy to provide additional explanations or make further adjustments as needed. \\nThank you again for your time and effort.\"}", "{\"title\": \"Rebuttal to Reviewer Z3Z1\", \"comment\": \"**Dear Reviewer Z3Z1,**\\n\\nThank you for your valuable feedback. Below, we address your concerns and provide clarifications.\\n\\n---\\n\\n### **Novelty Compared to DICE**\\n\\nWhile DICE removes features broadly, GRIC targets high-variance, class-specific features critical for ID classification. This selective masking enhances OOD detection by retaining core class representations while suppressing noise, balancing ID accuracy and OOD performance. 
In contrast, DICE lacks this targeted focus, emphasizing general feature removal.\\n\\n---\\n\\n### **GRIC ID Accuracy Without Informative Prompts**\\n\\nFollowing your suggestion, we evaluated GRIC\\u2019s ID accuracy without prompts. Results are shown below:\\n\\n| **Method** | **ID ACC (%)** |\\n|-----------------------|----------------|\\n| **MCM (CLIP-B/16)** | 67.01 |\\n| **MCM (CLIP-L/14)** | 73.28 |\\n| **GRIC-IP (CLIP-B/16)** | **80.29** |\\n| **GRIC-IP (CLIP-L/14)** | **85.64** |\\n| **GRIC-No Prompts (CLIP-B/16)** | 75.50 |\\n| **GRIC-No Prompts (CLIP-L/14)** | 78.59 |\\n\\n1. **With Informative Prompts**: GRIC-IP achieves 80.29% (CLIP-B/16) and 85.64% (CLIP-L/14), leveraging enriched textual prompts.\\n2. **Without Prompts**: ID accuracy drops moderately (~4-5%), highlighting the importance of the masked features for class distinction.\\n3. **Insight**: This drop validates that GRIC selectively removes class-specific features critical for ID accuracy.\\n\\n---\\n\\n### **Feature Removal Details and OOD Detection**\\n\\nGRIC targets high-variance, class-specific features, improving OOD detection by emphasizing generalizable representations. Unlike indiscriminate removal, our approach enhances robustness without compromising ID classification.\\n\\nWe also conducted experiments under challenging OOD scenarios, following [2]. Results:\\n\\n| **Method** | **Avg AUC \\u2191** | **Avg FPR95 \\u2193** |\\n|------------|---------------|-----------------|\\n| **MCM** | 93.77 | 25.83 |\\n| **CLIPN** | 96.50 | 13.53 |\\n| **GRIC** | **97.07** | **10.66** |\\n\\nThese results confirm GRIC\\u2019s efficacy in balancing OOD and ID performance under difficult conditions.\\n\\n---\\n\\n### **Zero-Shot vs. Few-Shot Learning**\\n\\nWhile we do not use labels during PCA computation, we acknowledge the broader definition of zero-shot learning. 
To align with established terminology, we will revise the manuscript to classify our approach as few-shot learning.\\n\\n---\\n\\n### **Reproducibility and Code Availability**\\n\\nWe commit to sharing our complete implementation upon acceptance to ensure reproducibility and transparency.\\n\\n---\\n\\nWe hope these clarifications and additional results address your concerns. Thank you again for your valuable feedback, which has significantly improved our work.\"}", "{\"summary\": \"Out-of-distribution (OOD) detection is essential for enhancing the robustness of machine learning models in open-world environments by identifying inputs from unknown classes. While vision-language models like CLIP facilitate zero-shot OOD detection without the need for labels or training on in-distribution (ID) data, existing methods are constrained by their reliance on closed-set text-based labels and complete image feature representations, limiting CLIP's generalization capabilities. This work introduces GRIC, a novel approach that enhances zero-shot multi-modal OOD detection by focusing on general ID representations instead of class-specific features and utilizing large language models (LLMs) to enrich the understanding of ID data and simulate potential OOD scenarios without requiring actual OOD samples. GRIC demonstrates significant effectiveness, achieving a notable reduction in the false positive rate at recall (FPR95) and outperforming state-of-the-art methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The concept of general representation of ID data is novel to CLIP-based OOD detection.\\n2. The method is well designed with various modules.\\n3. The extensive experiments show the effectiveness of the proposed method.\", \"weaknesses\": \"1. One major issue with this paper is that it claims to be in a zero-shot OOD detection setting, but it should actually be classified as a few-shot setting. 
This is because the calculation of PCA requires the use of ID data, whereas in a zero-shot setting, ID images should be mixed with OOD images to form the test set, making them unavailable. The entire setting of the paper is flawed and needs to be revised.\n\n2. There are more state-of-the-art (SOTA) methods for zero-shot OOD detection that have not been compared, such as NegLabel [1], which demonstrates superior performance, and EOE [2], which also utilizes large language models (LLMs) for CLIP-based OOD detection.\n\n3. The results in Table 1 are not representative, as the baseline MCM has already achieved a score of 99%, indicating that the OOD issue in this benchmark has been effectively addressed.\n\n4. There are many more adjustable benchmarks that have not been explored, such as: hard OOD detection, robustness to domain shift, and transferring the method to other CLIP-like models (ALIGN, AltCLIP, GroupViT).\n\n[1] Jiang, X., Liu, F., Fang, Z., Chen, H., Liu, T., Zheng, F., & Han, B. Negative Label Guided OOD Detection with Pretrained Vision-Language Models. ICLR, 2024.\n[2] Cao, C., Zhong, Z., Zhou, Z., Liu, Y., Liu, T., & Han, B. Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection. In Forty-first International Conference on Machine Learning.\", \"questions\": \"1. The figures and the algorithm appear to be screenshots and are too ambiguous.\n2. There are many typos in the paper that need to be revised. For example, 'fPR95' in line 412 is a misspelling. When using \"citation\" as the subject, parentheses should not be added. 
Additionally, lines 493 and 494 overlap due to insufficient line spacing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method called GRIC (General Representation for Inference and Classification), designed to improve zero-shot and few-shot learning by leveraging representations from large-scale pre-trained models. GRIC integrates domain-specific knowledge into a unified embedding space that allows the model to transfer knowledge effectively across tasks and domains. The major contributions are the introduction of general ID features for OOD detection with hierarchical prompting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**1. Presentation**\\n\\nThe paper is generally well-presented and easy to follow. It begins with a clear hypothesis that using a generalized feature segment from the full feature space can improve ID/OOD sample distinction, followed by a systematic explanation of the proposed method for extracting this general feature space.\\n\\n**2. Algorithm**\\n\\nThe algorithm is straightforward and effective, yielding significant performance gains on both small- and large-scale datasets. Ablation studies demonstrate that both the proposed general subspace extraction and hierarchical prompting contribute substantially to the performance improvements.\", \"weaknesses\": [\"**1. Formatting Issues**\", \"**Text Accessibility**: The text is not selectable or OCR-scanned, which complicates review and readability.\", \"**Font and Equation Sizing**: Equations appear in a very small font, raising concerns about compliance with the `.sty` file specifications; table and figure fonts are also difficult to read.\", \"**Inconsistent Spacing**: Vertical spacing is uneven throughout, affecting readability. 
Additionally, Section 3.2 would be more appropriately placed in the related work section to improve structure.\", \"**2. Missing Experiments and Analysis**\", \"While the paper presents a solid set of experiments across multiple datasets, further analysis would strengthen the justification of the approach:\", \"**Single-Modality Vision Models**: The paper should demonstrate the effectiveness of general feature extraction in **vision-only models**, without hierarchical prompting, to show that the method generalizes beyond multi-modal settings.\", \"**Integration with Other OOD Scoring Methods**: It would be valuable to evaluate GRIC with alternative OOD scoring metrics, such as **energy-based scores** and **feature-based scores**, to understand its compatibility with established scoring methods beyond MSP.\", \"**ID Accuracy**: Given that real-world deployment typically involves handling both ID and OOD data, the paper should report ID accuracy to confirm that GRIC performs reliably on ID data without regression.\", \"**3. 
Additional Ablation Studies**\", \"**PCA Transformed Feature Space**: Examine the effectiveness of using PCA-transformed features (from \\\\( R^{s \\\\times r} \\\\) to \\\\( R^{s \\\\times k} \\\\), where \\\\( k \\\\) is the number of principal components) for OOD detection.\", \"**Principal Component Masking**: Evaluate whether masking high-variance principal components in the PCA-transformed space, while using the remaining components, can improve OOD detection by focusing on features less affected by dominant ID patterns.\", \"**Full Feature Matrix for PCA**: Justify why the paper does not use the full feature matrix across all samples per class to compute PCA, as this could potentially improve the robustness of general feature extraction.\", \"**Hyperparameter Sensitivity**: Include a sensitivity analysis on the threshold in Equation 3, as this parameter may significantly influence detection performance.\", \"Would re-consider the scoring based on authors' response.\"], \"questions\": \"Please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Feedback and Revised Manuscript Submission\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. We have uploaded the revised version of our manuscript, with all changes clearly highlighted in blue in both the main text and the appendix sections.\\n\\nIn this revision, we have explicitly introduced our method as a few-shot approach, aligning with the logic you suggested. Our methodology leverages general in-domain (ID) representations and informative prompts, which inherently support few-shot learning. Additionally, our experiments already include evaluations of few-shot prompt learning methods. 
For instance, in Table 3, we report results using 12 and 16 samples per class, which correspond to 12-shot and 16-shot settings, respectively.\\n\\nWe believe our revised manuscript now provides a comprehensive explanation of our methodology, experiments, and results, addressing the concerns raised. However, if there are any remaining questions or areas requiring further clarification, please do not hesitate to let us know. We would be happy to provide additional details or make further adjustments if needed.\\n\\nThank you again for your time and thoughtful feedback.\"}", "{\"title\": \"Response to Authors' Rebuttal\", \"comment\": \"I appreciate the author\\u2019s careful response. Most of my concerns have been addressed.\\nHowever, I have the following concerns:\\n\\n\\n- The author mentioned that the format issues have been addressed in the revised manuscript (Reviewer DnkU's thread). However, as the revised manuscript has not been submitted, I cannot confirm these corrections. If the revised manuscript is updated within the deadline, this concern will naturally be resolved.\\n\\n- Since the setting is few-shot rather than zero-shot, I consider it is necessary to include comparative methods specifically tailored for few-shot OOD detection such as LoCoOp [1], NegPrompt [2], and IDPrompt [3]. Or, it is important to give a careful explanation of why such a comparison is unnecessary.\\n\\n- To redefine the work as a Few-shot setting, substantial revisions would be necessary, especially in the experimental section. Given the extent of these required updates, I am concerned that the current version may not yet be fully ready for ICLR paper.\\n\\nDue to the above reasons, I will keep my score.\\n\\n[1] Miyai+, LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning, NeurIPS2023. \\n[2] Li+, Learning transferable negative prompts for out-of-distribution detection, CVPR2024. 
\\n[3] Bai+, ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection, CVPR2024.\"}", "{\"title\": \"Submission of Revised Manuscript with Highlighted Changes\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded the revised version of our work, with all new changes clearly highlighted in blue in both the main text and the appendix sections. We hope our responses and revisions sufficiently address the concerns raised. If there are any remaining questions or areas requiring further clarification, please feel free to let us know. We would be happy to provide additional explanations or make further adjustments as needed. Thank you again for your time and effort.\"}", "{\"title\": \"Response to the Authors' Rebuttal\", \"comment\": \"I deeply appreciate the authors' dedicated additional experiments. Although the proposed PCA-based method alone shows no significant performance improvement compared to other comparison methods, it has the advantage of no training. I think the score without IP needs to be added clearly without hiding it in Table 2 and the corresponding discussion should be added.\\n\\nAdditionally, the effectiveness of adding a superclass through the Informative Prompt (IP) is not surprising. Therefore, the key contribution of this paper lies in demonstrating the synergistic effect of combining the PCA-based method with IP. The authors show the result which achieves a 0.72% (96.80 - 96.08) improvement over IDPrompt + IP. I also consider this discussion needs to be added clearly in the main discussion, not one of the ablation studies.\\n\\nTherefore, expecting the authors to do these updates, I have decided to increase the score. However, I am also fine with the AC considering the addition of these discussions as a major update.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful feedback and constructive suggestions. We have carefully considered your comments and conducted additional experiments to address your concerns. 
Below is our detailed response:\\n\\n---\\n\\n#### 1. **Ablation Study Analysis**:\\nIn Table 3 of the paper, we present an ablation study that evaluates the contributions of each component of our method:\\n - **GRIC-IG, no IP**: Our method using PCA-based ID generalization without informative prompts.\\n - **GRIC-IP, no ID generalized representation**: Our method using informative prompts without PCA-based ID generalization.\\n\\nThese results clearly highlight the independent effectiveness of both components.\\n\\n---\\n\\n#### 2. **Impact of Informative Prompts on Few-Shot Learning Methods**:\\nFollowing your suggestion, we evaluated the effect of adding informative prompts (IP) to competitive few-shot learning methods, such as **LoCoOp** and **IDPrompt**. Our experiments demonstrate that informative prompts significantly enhance their performance, particularly in terms of **AUROC**, as summarized below:\\n\\n---\\n\\n### Results Summary\\n\\n| **Method** | **FPR95\\u2193 (iNaturalist)** | **AUROC\\u2191 (iNaturalist)** | **FPR95\\u2193 (SUN)** | **AUROC\\u2191 (SUN)** | **FPR95\\u2193 (Places)** | **AUROC\\u2191 (Places)** | **FPR95\\u2193 (Texture)** | **AUROC\\u2191 (Texture)** | **FPR95\\u2193 (Avg)** | **AUROC\\u2191 (Avg)** |\\n|--------------------------|--------------------------|--------------------------|-------------------|------------------|---------------------|---------------------|-----------------------|-----------------------|-------------------|-------------------|\\n| **LoCoOp + IP (MCM)** | 32.86 | 95.03 | 30.35 | 94.92 | 34.91 | 94.64 | 42.74 | 93.71 | 35.22 | 94.58 |\\n| **IDPrompt + IP** | 6.24 | 98.75 | 34.21 | 94.69 | 36.53 | 94.92 | 22.09 | 95.97 | 24.77 | 96.08 |\\n\\n---\\n\\n#### 3. **Comparison with Existing Methods**:\\nWhile **GRIC (CLIP-B)** achieves competitive performance compared to state-of-the-art methods, its primary contribution lies in the synergy between PCA-based ID generalization and informative prompts. 
These components independently enhance performance (as shown in the ablation study) and can further improve other methods (e.g., LoCoOp and IDPrompt) when integrated.\n\n---\n\n#### 4. **Broader Applicability of PCA-Based Generalization**:\nThe PCA-based generalization approach offers a distinct advantage\u2014it is broadly compatible with various tasks and methods. For instance:\n - It enhances the performance of few-shot learning methods like LoCoOp even in scenarios where informative prompts are less impactful.\n - This validates the general utility of the PCA-based approach beyond incremental gains from informative prompts alone.\n\n---\n\nWe hope these additional experiments and analyses address your concerns and demonstrate the unique value of our method. Thank you again for your insightful comments, which have helped us further strengthen the manuscript.\n\nSincerely, \nAuthors\"}", "{\"summary\": \"This paper proposes a new enhancement method called GRIC for CLIP-based OOD detection. GRIC extracts general ID representations rather than class-specific features and introduces LLM-based informative prompts for OOD detection. Experimental results show the proposed GRIC outperforms existing methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"GRIC surpasses the two baseline methods, MCM and GL-MCM.\"], \"weaknesses\": [\"The novelty is limited. DICE [1] has a similar concept of dropping unnecessary dimensions and shows its effectiveness for OOD detection.\", \"The motivation in this paper\u2019s method\u2014that class-specific information is unnecessary\u2014raises some questions. In DICE, the motivation is to exclude signals that introduce noise. Rather than removing information specific to the ID class, I consider that this method actually excludes noise signals. 
Including the ID accuracy of GRIC without informative prompts in Table 6 would help clarify whether the information being removed is indeed ID class-specific.\", \"A recent challenge in OOD detection is accurately identifying \\\"OOD images that are semantically similar to ID.\\\" In this problem setting, known as Hard OOD detection, certain classes within a dataset (e.g., ImageNet) are treated as ID, while other classes in the same dataset are treated as OOD. Therefore, I believe class-specific information is necessary rather than relying on the general representation of the dataset. I would like to see results on the effectiveness of this method when experimenting on Hard OOD detection benchmarks [2, 3].\", \"The approach is defined as a zero-shot method in L518. However, since it utilizes ID images for PCA processing, I consider this method to be a few-shot learning method, not a zero-shot. The definition of Zero-shot is not using ID images in preprocessing, regardless of whether training is involved [4].\", \"The code has not been shared, raising concerns about the reproducibility of the method.\", \"[1] Sun+, DICE: Leveraging Sparsification for Out-of-Distribution Detection, ECCV2022.\", \"[2] Li+, Learning Transferable Negative Prompts for Out-of-Distribution Detection, CVPR2024.\", \"[3] Jung+, Enhancing Near OOD Detection in Prompt Learning: Maximum Gains, Minimal Costs, arXiv2024.\", \"[4] Miyai+, Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey, arXiv2024.\"], \"questions\": \"I wonder about the motivation of this method that class-specific information is unnecessary. 
To validate this statement, I would like to know the result of GRIC without informative prompts in Table 6 to clarify whether the information being removed is indeed ID class-specific, not a noisy signal.\\n\\nAlso, I would like to know the result of hard OOD detection.\\n\\nFor more details, please refer to the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work proposes a simple yet effective zero-shot multi-modal approach for OOD detection by leveraging general in-distribution (ID) representations with PCA and informative prompts generated by LLMs based on the super-class names of the ID data. The proposed method significantly enhances detection performance. Most reviewers agree that the paper is well-organized and easy to follow, recognizing the effectiveness of the approach in improving OOD detection by a large margin. However, reviewers raised concerns regarding the need for additional experiments and analyses, such as more baseline comparisons, few-shot evaluations, and ablation studies (DnKU, Z3Z1, V9Pr). Some reviewers also noted limited novelty, citing the existence of similar works (Z3Z1, V9Pr, jgok). Reviewers Z3Z1 and V9Pr specifically pointed out that the method should potentially be classified as few-shot detection due to the use of PCA. Following discussions, the authors successfully addressed some of the concerns, resulting in 3 borderline accept ratings (DnKU, Z3Z1, jgok) and 1 borderline reject (V9Pr), with an average score of 5.75. While Z3Z1 and jgok ultimately provided positive ratings, they recommended further refinements to the paper. Given the current feedback, the authors are encouraged to polish the manuscript to clearly articulate the benefits of the method\\u2019s training-free nature and highlight its differences from existing few-shot approaches. 
Additionally, it is recommended to include comparisons with other few-shot methods and expand baseline evaluations in Table 1, as suggested by reviewer V9Pr, for resubmission to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, reviewers raised significant concerns regarding the lack of comparison with recent zero-shot baseline methods (e.g., NegLabel), the overstatement of performance improvements without considering NegLabel, and the misclassification of the proposed method as a zero-shot approach. In response, the authors conducted additional experiments and clarified the differences between their method and other few-shot approaches. Following these discussions, the authors successfully addressed some of the concerns, resulting in three borderline accept ratings (DnKU, Z3Z1, jgok) and one borderline reject (V9Pr), with an average score of 5.75. While Z3Z1 and jgok ultimately gave positive ratings, they recommended further refinements to strengthen the paper. Given the current state, I believe the paper requires significant revisions to clearly define the method as few-shot rather than zero-shot and to incorporate additional results or discussions comparing it with recent few-shot methods.\"}" ] }
0ov0dMQ3mN
CO-MOT: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets
[ "Feng yan", "Weixin Luo", "Yujie Zhong", "Yiyang Gan", "Lin Ma" ]
Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods. One possible reason lies in the training label assignment strategy that consistently binds the tracked objects with tracking queries and assigns few newborns to detection queries. Such an assignment, with one-to-one bipartite matching, yields an unbalanced training, _i.e._, scarce positive samples for detection queries, especially for an enclosed scene with the majority of the newborns at the beginning of videos. As such, e2e-MOT will incline to generate a tracking terminal without renewal or re-initialization, compared to other tracking-by-detection methods. To alleviate this problem, we propose **Co-MOT**, a simple yet effective method to facilitate e2e-MOT by a novel coopetition label assignment with a shadow concept. Specifically, we add tracked objects to the matching targets for detection queries when performing the label assignment for training the intermediate decoders. For query initialization, we expand each query by a set of shadow counterparts with limited disturbance to itself. With extensive ablation studies, Co-MOT achieves superior performances without extra costs, _e.g._, 69.4% HOTA on DanceTrack and 52.8% TETA on BDD100K. Impressively, Co-MOT only requires 38% FLOPs of MOTRv2 with comparable performances, resulting in the 1.4× faster inference speed. Source code is publicly available at [GitHub](https://github.com/BingfengYan/CO-MOT).
[ "End-to-End Tracking", "Transformer", "Multi-object Tracking" ]
Accept (Poster)
https://openreview.net/pdf?id=0ov0dMQ3mN
https://openreview.net/forum?id=0ov0dMQ3mN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w3UXdIZK3C", "lUtSB1xEbI", "jhl6Lu9ho8", "av2gm80uTq", "XVLpGKSQ7C", "WxhCpMIhZG", "WNLlPEARzn", "Vf6m1KQKA1", "Tojd1xtOCW", "RzGcWcBn40", "R3OvbA2XMI", "NZCrrwWozG", "HwvsrDiHrd", "GemcQg8Gjp", "69qjtyrQxX", "5Qm0dGe1Bu", "3Jx6BrFRTf", "1pO3OUGqfR" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734619770296, 1730049323901, 1732600365818, 1732600434864, 1732600616190, 1730636578964, 1732068915932, 1732551512246, 1737524029559, 1732068863135, 1731013063324, 1732068886246, 1732526885950, 1732600339324, 1730614385451, 1732546313526, 1732546116496, 1732068903384 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10154/Area_Chair_FVf2" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_Akzq" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_d8sy" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_Akzq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_rzmb" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_d8sy" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_mbwq" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_rzmb" ], [ "ICLR.cc/2025/Conference/Submission10154/Reviewer_mbwq" ], [ "ICLR.cc/2025/Conference/Submission10154/Authors" ] ], 
"structured_content_str": [ "{\"metareview\": \"This paper presents CO-MOT, a method to enhance end-to-end MOT through a coopetition label assignment strategy (COLA) and shadow sets. Experiments are conducted on multiple datasets, including DanceTrack and BDD100K.\", \"the_strengths_are\": \"1) the proposed COLA strategy is simple and effective and 2) new insights on the disproportional assignment of track and detection queries.\", \"the_main_weaknesses_are\": \"1) missing ablation on comparing with joint training on image detection dataset, and different decoder arrangements, and 2) missing evaluations on scenarios with more image data than video data and comparisons against more recent SOTA trackers.\\n\\nSince the weaknesses are addressed in the discussion phase and the proposed method is innovative and effective, the AC recommends accepting the paper. In addition, the authors are encouraged to add the ablation studies and additional evaluations to the paper and further polish the writing.\", \"additional_comments_on_reviewer_discussion\": \"The main issues raised by the reviewers include 1) missing ablation on comparing with joint training on image detection dataset and different decoder arrangements, 2) missing evaluations on scenarios with more image data than video data, and comparisons against more recent SOTA trackers.\\n\\nIn the discussion phase, the authors provided detailed discussions and experiments, such as comparisons with more methods, analysis of hyperparameters, and evaluation of additional datasets. All reviewers are satisfied with the feedback and lean toward accepting the paper. The final scores are 6, 6, 6, and 6.\"}", "{\"summary\": \"This paper introduces CO-MOT, a novel method aimed at improving end-to-end Transformer-based multi-object tracking (e2e-MOT) through a new coopetition label assignment strategy (COLA) and the introduction of shadow sets. 
The authors address the issue of unbalanced training in existing e2e-MOT methods, where detection queries often lack positive samples, particularly for newborn objects.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The motivation of the paper is interesting and clearly demonstrated by the experiments.\", \"The introduction of COLA and shadow sets does mitigate the biased label assignment issue in e2e-MOT. This proposed approach provides a balanced training process, leading to improved performance.\", \"The authors conduct experiments on three datasets, demonstrating the effectiveness of CO-MOT across different scenarios. The results are robust and consistent, showing improvements over the baseline.\"], \"weaknesses\": [\"The introduction of shadow sets and the coopetition label assignment strategy may increase the computational complexity and training time. The authors should provide a detailed analysis of the computational overhead and discuss potential optimizations. Notably, Fig.4 in the manuscript only presents the FLOPs, which does not reflect the actual training and inference time. Intuitively, more object queries would bring larger computation costs. Why do shadow sets not?\", \"Although the proposed method demonstrates strong performance on the tested datasets, it would be advantageous to evaluate CO-MOT on MOT20. The authors assert that the proposed approach enhances detection recall. Thus, the more densely populated nature of MOT20 provides a more suitable context for assessing the effectiveness of the model.\", \"The authors should investigate the sensitivity of the proposed method to hyperparameters, such as the number of shadow sets and the parameters of the coopetition label assignment strategy. Understanding how these hyperparameters affect performance would provide valuable insights for practical implementation.\", \"The writing should be improved. 
For instance, in Fig.3 and Fig.4, the axis titles overlap with the axes. The readability of Figures 3 and 4 could be improved by adjusting the axis labels to avoid overlap. This would enhance the overall presentation of the results.\"], \"questions\": [\"Could the authors provide a detailed analysis of the cost brought by shadow sets?\", \"Could the authors provide an evaluation and discussion on MOT20?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer mbwq,\\n\\nThank you for your interest in our paper and for your detailed feedback over the past few days!\\n\\nWe deeply appreciate your efforts in reviewing our work and for the insightful questions that have been invaluable in enhancing our research. We still remain open to addressing any further queries you may have!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer d8sy:\\n\\nThank you for your detailed feedback on our paper and for taking the time to compare our current work with last year's submission. We are very pleased to hear that you recognize the improvements we have made in both the experiments and writing. We also greatly appreciate your acknowledgment of the impressive results on DanceTrack, and we understand your perspective regarding the novelty of our work.\\nWe will continue to strive to further enhance the quality of our research. \\n\\nOnce again, thank you for your valuable insights and support.\\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"Dear Reviewer rzmb,\\n\\nThank you for your detailed feedback on our paper and for taking the time to review our rebuttal. 
We greatly appreciate your continued engagement with our work and your insightful comments.\\n\\n| Method | w/ CrowdHuman | COLA | Shadow | HOTA | DetA | AssA | MOTA | IDF1 |\\n|----------|---------------|------|--------|------|------|------|------|------|\\n| (a) | | | | 56.4 | 71.8 | 44.6 | 79.8 | 57.5 |\\n| (b) | | \\u2705 | | 60.2 | 73.2 | 49.7 | 81.8 | 62.4 |\\n| (c) | | | \\u2705 | 59.0 | 72.6 | 48.2 | 80.9 | 59.6 |\\n| (d) | | \\u2705 | \\u2705 | 61.8 | 73.5 | 52.2 | 81.7 | 63.3 |\\n| (e) | \\u2705 | | | 56.7 | 73.7 | 43.9 | - | - |\\n\\n**Table 4:** Ablation study on individual CO-MOT components and the use of CrowdHuman. The baseline used in both MOTRv2 and CO-MOT is the same, which is MOTR.\\n\\n**Regarding W1:** We acknowledge the importance of including an ablation study with joint training using detection datasets, and we believe this will provide valuable insights. This experiment has already been conducted in the MOTRv2[1] paper, and we have included the relevant results in row (e). As shown in the table, adding CrowdHuman image data does improve detection performance to some extent, with DetA increasing from 71.8 to 73.7; however, it does not significantly help the tracking-related AssA metric. Through the attention mechanism in our COLA approach, we can transfer the improvement in detection performance to tracking performance, as explained in line 438.\\n\\nIn summary, simply adding detection data can enhance the model's detection performance to some extent but may not necessarily improve tracking performance. We believe that the COLA strategy proposed in this paper allows for a more significant effect of adding detection data, further enhancing tracking performance. 
Tables 4 and 2 demonstrate that **adding detection data and the method proposed in this paper are not mutually exclusive but can complement each other to improve tracking performance.**\\n\\n**Regarding W3 and Q1:** Our method can be applied to larger-scale datasets; however, due to the annotation issues present in the TAO dataset (which features many sparse annotations, meaning that some instances of the same category have tracking information while others do not, rather than having dense tracking annotations), it may not be suitable for training standard detection and tracking models. We will clearly state this limitation in our paper and strive to enhance the generalizability of our method in future research.\\n\\nThank you once again for your thorough review and valuable suggestions.\\n\\nBest regards, \\nThe Authors\\n\\n\\n[1] MOTRv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors. In CVPR, 2023.\"}", "{\"summary\": \"This paper addresses the limitations of existing e2e-MOT methods, particularly the unbalanced training issue caused by the label assignment strategy. And it introduces a Coopetition Label Assignment (COLA) strategy and a Shadow Set concept. Through extensive experiments on multiple datasets, it demonstrates superior performance compared to state-of-the-art methods while being more efficient.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The analysis about the drawbacks of exist e2e-trackers is very interesting, it reveals the negative impact of tracking queries on detection queries.\\n\\n2. The proposed COLA strategy allows tracked objects to be reassigned to detection queries in decoders, resulting in a significant improvement in tracking performance.\", \"weaknesses\": \"1. From the public records of OpenReview, it can be seen that this paper was submitted to ICLR2024. 
The reviewers and AC pointed out many weaknesses last year, while the authors have made almost no improvements in the latest version.\\n\\n2. Many SOTA trackers developed this year have been overlooked by the authors, such as DiffMOT, MambaTrack, TrackSSM, et al. These new methods have made many improvements, and it would be best for the authors to provide a comparison with the latest methods.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.\\n\\n**W1 and Q1:**\\nWhile an increase in the number of queries typically raises training and inference costs, the sampling framework used in our paper is similar to that of DETR, consisting of three main modules: ResNet for image feature extraction, the Encoder module for further integration of image features, and the Decoder module for outputting bounding boxes and confidence scores. The increase in queries primarily affects the computation in the decoder layer; however, the decoder contains only six attention layers, which constitute a small portion of the overall model. As shown in the figure, the impact on inference speed is negligible (about 6%). The table lists the inference speeds and decoder FLOPs for different query configurations as follows:\\n\\n| Query Configuration | Inference Speed | Decoder FLOPs | \\n|---------------------|------------------|----------------| \\n| 60*1 | 91.96 ms | 9.8G | \\n| 60*3 | 103.11 ms | 10.6G | \\n| 300 | 103.02 ms | 11.6G | \\n**Table 10:** Inference Speeds and Decoder FLOPs for Different Query Configurations. 60\\\\*1 indicates a total of 60 sets, each containing 1 shadow set. 
60\\\\*3 refers to the number of queries used in CO-MOT, while 300 represents the number of queries used in MOTR.\\n\\n**W2 and Q2:**\\nWe note that MOT20 indeed has a higher object density, which provides a more challenging environment for evaluating the effectiveness of the model.\\n\\nThis paper primarily focuses on end-to-end tracking methods. Currently, we have only found evaluations of MeMOT and TrackFormer on the MOT20 benchmark, which we have included in Table 3C. At the same time, we conducted a detailed performance analysis on three commonly used benchmarks: DanceTrack, BDD100K, and MOT17. Additionally, Table 1 lists the mAP of various methods, showing that CO-MOT significantly improves the recall of detection boxes.\\n\\nIt is worth mentioning that the phenomena observed in MOT20 and MOT17 are quite similar, with our method outperforming other end-to-end approaches across various metrics. For instance, in the HOTA metric, CO-MOT exceeds MeMOT by 3.4\\\\% and TrackFormer by 2.8\\\\%. These results indicate that CO-MOT demonstrates consistent performance across different datasets.\\n\\n**W3:**\\nWe discuss these two questions in Table 9 and Table 5. The optimal number of shadow sets is found to be 3; fewer sets do not contribute effectively, while more sets can introduce negative side effects due to excessive variance within the same set. Additionally, COLA performs best within the l<5 decoders.\\n\\n**W4:**\\nThank you once again for your valuable feedback. We will make the necessary modifications and adjustments based on your suggestions.\\n\\nThank you again for your time and efforts! We show our deepest appreciation for your support of our work. We are always ready to answer your further questions!\"}", "{\"title\": \"raising the score\", \"comment\": \"I have reviewed the authors' feedback, and most of the concerns have been adequately addressed. As a result, I am increasing my score to 6. 
However, I still recommend that the authors improve the quality of their writing and figures.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.\\n\\n**W1: To solve the issue of disproportional assignment of track and detection queries, there are also other simpler alternatives. A straightforward option would be to train detection queries jointly on image detection datasets alongside video tracking datasets. For example, detection queries could be trained exclusively on image datasets, treating every object as a new object. An ablation study comparing proposed methods to this simple joint training alternative is appreciated.**\\n\\nYes. For instance, MOTRv2 uses a pre-trained YOLOX to extract detection boxes, significantly enhancing tracking performance. In Table 2, CO-MOT+ uses a combination of Crowdhuman and video data for joint training, which further improves the tracking results on DanceTrack. Therefore, it can be confidently stated that adding a substantial amount of image datasets can indeed enhance tracking performance.\\n\\n**W2: The paper uses the first 5 decoders to train with all queries, while the last one trains separately on detection and tracking queries. An ablation study could clarify whether a different arrangement, such as using the first decoder for track queries and the last five for all queries, would impact performance. An ablation study regarding this would be helpful for readers to understand the optimal configuration.**\\n\\nYes. As shown in Table 5, we have validated the effect of COLA across different decoders. 
The experiments confirm that using all queries simultaneously on the first five decoders yields the best results, but it is essential to ensure that the last decoder uses the detection and tracking targets obtained from COLA.\\n\\n**W3: The applicability of the coopetition label assignment strategy is mostly limited to cases where there is more video data than image data for training, leading to an imbalance in track and detection query assignments. However, in many practical settings, the opposite is true\\u2014large-scale [1] and open-vocabulary MOT tasks [2] often have substantially more image detection data than video tracking data. In these cases, common practice in MOT is to use joint training with both image and tracking data, which provides sufficient supervision for detection queries. This is contrary to the paper\\u2019s analysis, and it would be beneficial for the authors to also at least discuss these more common scenarios.**\\n\\nYes. Annotating tracking video data, in particular, requires significant human and financial resources. In contrast, there is currently a wealth of image detection data available, which can be enhanced to further improve tracking performance. Many recent studies have adopted this approach, such as MOTRv2 and MOTRv3, and our CO-MOT+ also further validates this conclusion.\\n\\n**Q1: The main experiments are still concentrated on small-scale pedestrian tracking datasets. As mentioned on weakness, for other scenarios, we may face different difficulties. Are there any plans to test the model also on large-scale MOT datasets such as TAO [3]?**\\n\\nThank you very much for your valuable suggestions. TAO is an excellent benchmark, but it is not well-suited for training models like MOTR and CO-MOT. We have previously conducted experiments on TAO, but the results were not ideal, mainly due to the large amount of unannotated data in TAO, which is more suitable for pre-training or open-vocabulary MOT tasks. 
However, we will continue to monitor developments in this field and explore more general MOT models.\\n\\nThank you again for your time and efforts! We show our deepest appreciation for your support of our work. We are always ready to answer your further questions!\"}", "{\"summary\": \"The paper tackles end-to-end Transformer-based multiple-object tracking. Previous methods, such as TrackFormer and MOTR, face issues with imbalanced distribution in detection and tracking label assignments, where most objects are assigned to track queries, leaving only a few \\u201cnewborns\\u201d for detection queries. This joint training approach results in weaker detection performance compared to tracking-by-detection methods. To resolve this, the paper proposes a coopetition label assignment strategy to re-balance assignments between track and detection queries. Additionally, it introduces a shadow set that changes the original one-to-one mapping in DETR to a one-to-set mapping, further enhancing tracking performance. Results on various benchmarks demonstrate the effectiveness of this method.\\u200b\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, with a clear and logical structure that makes it very easy to follow.\\n\\n2. The observation on the disproportional assignment of track and detection queries is insightful, highlighting an important yet often overlooked issue in transformer-based MOT. This analysis provides valuable context for the community.\\n\\n3. The proposed coopetition label assignment strategy is simple and effective. The paper also demonstrates its effectiveness on multiple Transformer-based MOT frameworks, including TrackFormer and MOTR.\\n\\n4. The experiments are thorough, covering multiple benchmarks, including more large-scale autonomous driving scenes such as BDD100K, and demonstrating the method\\u2019s robustness and practical impact.\", \"weaknesses\": \"1. 
To solve the issue of disproportional assignment of track and detection queries, there are also other simpler alternatives. A straightforward option would be to train detection queries jointly on image detection datasets alongside video tracking datasets. For example, detection queries could be trained exclusively on image datasets, treating every object as a new object. An ablation study comparing proposed methods to this simple joint training alternative is appreciated.\\n\\n2. The paper uses the first 5 decoders to train with all queries, while the last one trains separately on detection and tracking queries. An ablation study could clarify whether a different arrangement, such as using the first decoder for track queries and the last five for all queries, would impact performance. An ablation study regarding this would be helpful for readers to understand the optimal configuration.\\n\\n3. The applicability of the coopetition label assignment strategy is mostly limited to cases where there is more video data than image data for training, leading to an imbalance in track and detection query assignments. However, in many practical settings, the opposite is true\\u2014large-scale [1] and open-vocabulary MOT tasks [2] often have substantially more image detection data than video tracking data. In these cases, common practice in MOT is to use joint training with both image and tracking data, which provides sufficient supervision for detection queries. This is contrary to the paper\\u2019s analysis, and it would be beneficial for the authors to also at least discuss these more common scenarios.\\n\\n[1] Li, Siyuan, et al. \\\"Matching Anything by Segmenting Anything.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[2] Li, Siyuan, et al. \\\"Ovtrack: Open-vocabulary multiple object tracking.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 
2023.\", \"questions\": \"The main experiments are still concentrated on small-scale pedestrian tracking datasets. As mentioned on weakness, for other scenarios, we may face different difficulties. Are there any plans to test the model also on large-scale MOT datasets such as TAO [3]?\\n\\n[3] Dave, Achal, et al. \\\"Tao: A large-scale benchmark for tracking any object.\\\" Computer Vision\\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\\u201328, 2020, Proceedings, Part V 16. Springer International Publishing, 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your time and efforts in reviewing our paper! Based on your review, we added a detailed discussion and additional experiments.\\n\\n**W1**:\\nWe highly value the feedback from the reviewers and AC, and we have made comprehensive improvements in the latest version, including restructuring the paper and adding a significant amount of experimental data, such as more datasets and updated ablation studies.\", \"the_iclr2024_reviewers_raised_the_following_issues\": \"see https://openreview.net/forum?id=WLgbjzKJkk. Below, we clarify the points that were raised in previous reviews one by one:\\n\\n**a: presentation issues, need to refer to baseline MOTR paper.**\\nWe have significantly reorganized the structure of the entire paper and added relevant citations in the revisions. Additionally, the distance between figures and text has been improved to enhance readability and understanding, for example, by placing Figure 2 close to the corresponding text.\\n\\n**b: Needs more comparison on BDD100k and DanceTrack.**\\nWe have added many comparative experiments in the BDD100k and DanceTrack evaluations, as detailed in Tables 2 and 3. 
These experimental results further validate the effectiveness of our method.\\n\\n**c: Missing evidence that COLA/Shadows works on other models like Trackformer.**\\nWe have supplemented the ablation studies on Trackformer, as shown in Table 6. Additionally, COLA and Shadows Set have been introduced into the more powerful backbone, MeMOTR, resulting in performance improvements.\\n\\n**d: Provide failures cases specific to previous approaches that were solved by CO-MOT.**\\n Figure 8 lists specific cases that CO-MOT has resolved, showcasing the advantages of our method.\\n\\n**e: Table 2 presentation issues.**\\nWe have introduced the definitions of non-end-to-end and end-to-end approaches in the introduction for better reader understanding.\\n\\n**f: Fig 4 show the number of parameters, FLOPS of YOLOX included in MOTR?**\\n Figure 4 displays the number of parameters to provide readers with a clearer understanding of the model's complexity.\\n\\n**g: Is it necessary to split tracking/detection queries?**\\nThis issue arises from a misunderstanding of the paper. Tracking queries and detection queries are interdependent in our method and cannot be simply separated.\\n\\n**h: Missing evaluation on MOT20.**\\nWe have included the experimental results for MOT20 in Table 3(c).\\n\\n**i: What if the tracking queries are removed in CO-MOT. 
Does detection result improve similar to MOTR?**\\n In Table 1(f), we have added the mAP data for CO-MOT, which shows that the detection performance of CO-MOT significantly exceeds that of MOTR.\\n\\n**j: effect of the number of decoders L on tracking performance.**\\nWe have added relevant experiments in Table 5 to explore the impact of the number of decoders on tracking performance.\\n\\n**k: Performance improvement not consistent across different datasets.**\\n Ablation experiments based on multiple baselines across various datasets have been conducted and are presented in Tables 4, 6, and 7, all showing improvements.\\n\\n**l: Performance worse than MOTRv2 in terms of IDF1 and HOTA.**\\n We provided a detailed explanation in Section 3.4 (COLA) and Figure 3 regarding this phenomenon.\\n\\n**m: Lack of interpretability of proposed method.**\\n We have included extensive explanations and data analysis in Section 3.3 (MOT7) to enhance the interpretability of our method.\\n\\n**n: Lack of mathematical formulation for shadow sets -- too many engineering tricks or heuristics.**\\n Detailed explanations and data analysis regarding the design principles of shadow sets are provided in Section 3.4.\\n\\n**o: missing ablation study on w/ and w/o COLA/shadow set on MOT17 validation set.**\\nWe have supplemented the relevant experiments in Table 7.\\n\\nWe believe that these revisions have significantly improved the quality and contributions of the paper.\\n\\n\\n**W2:**\\nThese methods (such as DiffMOT, MambaTrack, TrackSSM, et al) demonstrate excellent performance, indicating that end-to-end tracking approaches are gaining increasing attention. 
We provide a comparison with our method, CO-MOT, as shown below:\\n\\n| Method | HOTA | DetA | AssA | MOTA | IDF1 |\\n|----------------|------|------|------|-----|-----|\\n| DiffMOT | 62.3 | **82.5** | 47.2 | **92.8** | 63.0 |\\n| MambaTrack | 55.5 | 80.8 | 38.3 | 90.1 | 53.9 |\\n| MambaTrack+ | 56.1 | 80.8 | 39.0 | 90.3 | 54.9 |\\n| ByteSSM | 57.7 | 81.5 | 41.0 | 92.2 | 57.5 |\\n| CO-MOT | **69.4** | 82.1 | **58.9** | 91.2 | **71.9**|\\n**Table2 :** Comparison on the DanceTrack test set.\\n\\nOur CO-MOT method outperforms the aforementioned methods across multiple key metrics, demonstrating its effectiveness and advantages in the target tracking task. This result further emphasizes the innovation and potential of our approach.\\n\\nThank you again for your time and efforts! We show our deepest appreciation for your support of our work. We are always ready to answer your further questions!\"}", "{\"comment\": \"I have carefully read the rebuttal and thank the author for their work. I have compared the differences from last year, and found that the author has made many improvements in both the experiment and writing. While the experimental results are highly impressive on DanceTrack, this work exhibits incremental novelty. I have decided to improve the rating to 6.\"}", "{\"comment\": \"Dear Reviewer Akzq,\\n\\nWe would like to express our profound gratitude for your endorsement of our paper. We will keep polish the paper to meet highest standard. Once again, thank you for your time and effort in reviewing our paper!\\n\\nBest, Authors\"}", "{\"summary\": \"The paper presents an innovative end-to-end Multi-Object Tracking (MOT) framework that aims to enhance transformer-based MOT models. The authors introduce two key contributions: 1. Coopetition Label Assignment (COLA) revises label assignment by allowing detection queries to utilize tracked objects during training in intermediate decoders. 
This approach boosts feature augmentation for tracking objects with diverse appearances and alleviates the issue of tracking termination. 2. The Shadow Set Strategy aims to address training imbalance in one-to-one matching: CO-MOT introduces \\\"shadow sets,\\\" which add slight disturbances to each query, thus allowing one-to-set matching. This enhances the discriminative training process and the model's generalization. The proposed method outperforms existing e2e-MOT models on benchmarks like DanceTrack and BDD100K with improved tracking metrics such as HOTA and TETA, demonstrating higher efficiency and inference speed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper's strengths lie in its originality, quality, clarity, and significance. It introduces a novel coopetition-based label assignment (COLA) and shadow sets for one-to-set matching, enhancing the robustness of e2e-MOT without requiring additional detectors. The evaluation across multiple datasets, including ablation studies, demonstrates the effectiveness of CO-MOT, establishing its advantages over state-of-the-art models. The paper is well-structured, with clear explanations and helpful visualizations, although further clarification on certain technical aspects could enhance understanding. Overall, CO-MOT significantly improves the efficiency and performance of transformer-based multi-object tracking, making it a valuable contribution to the field.\", \"weaknesses\": \"While the paper presents valuable contributions, several weaknesses could be addressed to strengthen its impact and clarity:\\n1. The authors should provide a detailed discussion of the differences between COLA and TALA in Section 2.4, as well as their design in the loss function, to facilitate reader understanding. \\n\\n2. In the experiments section, the authors need to include comparisons with more methods on the MOT20 and BDD100K datasets.\\n\\n3. 
Since the authors analyze the impact of tracking queries on detection performance in transformer-based trackers, if this point serves as one of the motivations, they should compare whether the proposed framework shows improvement in mAP in the experiments.\\n\\n4. The authors should also analyze the effects of different values of $\\\\lambda$ and $\\\\Phi$ in Section 2.5 on the experimental outcomes.\", \"questions\": \"I have listed my concerns and questions in Weakness part and hope the response from authors.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the authors\\u2019 feedback. I have read the rebuttal; however, my concerns remain.\", \"w1\": \"As the authors acknowledge, training jointly with detection datasets using images yields better performance. As I mentioned, training with image detection datasets also addresses the key issue discussed in the paper: the disproportional assignment of track and detection queries. Hence, I believe it is important to include an ablation study comparing the proposed methods with this simple joint training alternative, as I mentioned earlier.\", \"w3_and_q1\": \"The authors mention that their model does not perform well in large-scale scenarios. In such scenarios, we typically have more detection annotations on images than tracking annotations on videos. Therefore, the key issue of the disproportional assignment of track and detection queries seems not to exist. I would expect the authors to provide a discussion on this as a limitation of their method.\"}", "{\"comment\": \"Thanks for the response of author and I'll improve the rating to 6.\"}", "{\"comment\": \"We sincerely appreciate your time and efforts in reviewing our paper! 
Based on your review, we added a detailed discussion and additional experiments.\\n\\n**W1: The authors should provide a detailed discussion of the differences between COLA and TALA in Section 2.4, as well as their design in the loss function, to facilitate reader understanding.**\\n\\nIn Figure 6, we briefly explain the differences between COLA and TALA. As shown in Figure 6(a), in COLA, the detection queries can only match with newborns 2 and 1; whereas in TALA, the detection queries can match not only with newborns 2 and 1 but also with tracked individuals 3 and 4. To provide a more detailed description, we will set aside a separate paragraph to discuss the differences between COLA and TALA in depth.\\n\\n**W2: In the experiments section, the authors need to include comparisons with more methods on the MOT20 and BDD100K datasets.**\\n\\nIn this study, we primarily focus on end-to-end tracking methods and have listed several comparative methods on the DanceTrack and MOT17 datasets. \\n\\nDue to the relative novelty of our approach, there is currently limited research using MOT20 and BDD100K as evaluation benchmarks. Therefore, we have compiled all known end-to-end tracking methods in the table to provide clear references for readers.\\n\\nWe will continue to monitor developments in this field and consider incorporating more comparisons in future work.\\n\\n**W3: Since the authors analyze the impact of tracking queries on detection performance in transformer-based trackers, if this point serves as one of the motivations, they should compare whether the proposed framework shows improvement in mAP in the experiments.**\\n\\nIn Table 1, we present the mAP results of representative methods. 
The mAP of CO-MOT is significantly higher than that of MOTR and slightly lower than that of MOTRv2, indicating that our method is competitive in performance.\\n\\n**W4: The authors should also analyze the effects of different values of\\u00a0\\u03bb\\u00a0and\\u00a0\\u03a6\\u00a0in Section 2.5 on the experimental outcomes.**\\n\\nWe have conducted numerous relevant experiments in Table 8 to explore the effects of different \\u03bb and \\u03a6 values on the experimental results.\\n\\nThank you again for your time and efforts! We show our deepest appreciation for your support of our work. We are always ready to answer your further questions!\"}" ] }
0oWGVvC6oq
On Bits and Bandits: Quantifying the Regret-Information Trade-off
[ "Itai Shufaro", "Nadav Merlis", "Nir Weinberger", "Shie Mannor" ]
In many sequential decision problems, an agent performs a repeated task. He then suffers regret and obtains information that he may use in the following rounds. However, sometimes the agent may also obtain information and avoid suffering regret by querying external sources. We study the trade-off between the information an agent accumulates and the regret it suffers. We invoke information-theoretic methods for obtaining regret lower bounds, that also allow us to easily re-derive several known lower bounds. We introduce the first Bayesian regret lower bounds that depend on the information an agent accumulates. We also prove regret upper bounds using the amount of information the agent accumulates. These bounds show that information measured in bits, can be traded off for regret, measured in reward. Finally, we demonstrate the utility of these bounds in improving the performance of a question-answering task with large language models, allowing us to obtain valuable insights.
[ "Online learning", "Information theory", "Bayesian regret", "Bandits" ]
Accept (Poster)
https://openreview.net/pdf?id=0oWGVvC6oq
https://openreview.net/forum?id=0oWGVvC6oq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xolDfwVp4B", "wMJnVrPhJt", "qkvC9rwAp5", "iKzBqzz3Fd", "eZIR21szKf", "b90eqB53zF", "ZNPkERn1Nk", "WYZOtfwown", "QvqvN793er", "QOdg5UEVb3", "PldUMupBC2", "MQi1DVPy0G", "HH4SgGBO2N", "7MvDwVGId4", "3pO3RDLGxK" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731987000204, 1730830690394, 1730634903430, 1732641930071, 1731839208525, 1731300878509, 1731838623283, 1732581938851, 1734705623192, 1737524016005, 1730687569176, 1731838916161, 1731838074803, 1732622643375, 1732294986406 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_smVV" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_6gEU" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_z6VF" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_oZWy" ], [ "ICLR.cc/2025/Conference/Submission9947/Authors" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_oZWy" ], [ "ICLR.cc/2025/Conference/Submission9947/Authors" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_6gEU" ], [ "ICLR.cc/2025/Conference/Submission9947/Area_Chair_9F82" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_smVV" ], [ "ICLR.cc/2025/Conference/Submission9947/Authors" ], [ "ICLR.cc/2025/Conference/Submission9947/Authors" ], [ "ICLR.cc/2025/Conference/Submission9947/Reviewer_z6VF" ], [ "ICLR.cc/2025/Conference/Submission9947/Authors" ] ], "structured_content_str": [ "{\"comment\": \"The authors have adequately addressed my concerns, and I will raise my score to a 6. I also realize that I misunderstood the rho assumption when I initially read the paper. 
I think it would be useful to add some explanation of that assumption to avoid confusion for future readers.\"}", "{\"summary\": \"The paper investigates the trade-off between information acquisition and regret minimization in sequential decision-making. It introduces a framework to quantify this trade-off, drawing on information-theoretic methods. The authors present novel Bayesian regret lower bounds that depend on the information accumulated by the agent, and they show how these quantify the relationship between external information and regret.\\nFor brownie points, they show an application of their theory to question answering with LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents an interesting information-theoretic approach to quantifying the regret-information trade-off\", \"The theoretical approach is rigorous, with clear definitions and proofs\", \"The paper is well-organized\", \"The paper holds high significance for fields involving sequential decision-making (online learning in particular)\"], \"weaknesses\": [\"The assumption regarding information gathering being independent of task history could limit applicability in some environments\"], \"questions\": [\"Can the authors comment on how the proposed regret bounds might extend to adversarial or non-Bayesian settings? Are there particular adjustments or challenges anticipated in these contexts?\", \"Could the authors comment on extensions to settings where the information depends on prior actions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The submission studies the information lower bounds of bandits. By relating the mutual information to KL divergence and entropy, the submission rephrases regret bounds in terms of information bits. 
The new form of the bounds allows a learning agent to acquire and accumulate additional knowledge through active queries (as opposed to passive observations in the previous setting). The main results are summarized in Table 1, consisting of Theorem 3.4, Proposition 4.1 and Proposition 4.5. In the experiments, the advantage of information accumulation is verified in a simulation (Figure 1). Then, a query strategy is proposed and tested in MCQA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The submission points out that using Fano, one can introduce the mutual information into the regret bound. The connection between the mutual information and the accumulated knowledge bits (R) provides a means to analyze the effect of the knowledge bits (R) on the regret bound.\", \"Moreover, Prop. 4.5 provides an entropy-dependent Bayesian regret lower bound, which is listed in the last entry of Table 1.\", \"The advantage of accumulating information in bits is experimentally justified (Figure 2).\", \"A bits-based query policy illustrates the advantage of quantifying knowledge in bits and searching for a query that will bring an abundant increase in the knowledge accumulation.\"], \"weaknesses\": [\"(1) Combining Prop. 4.1 and 4.3, the submission provides a range of bits required to achieve a given level of regret. Note that the range has a $\\\\sqrt{logK}$ gap.\", \"(2) The proof sketches of the main results (e.g., Proposition 3.2, Theorem 3.4, Proposition 4.1, and the others) are plain. It seems that packing and covering are standard techniques in analysis. Could you please elaborate on the technical challenges and the corresponding contributions in the sketches?\"], \"questions\": [\"(3) The first column in Table 2 resembles a budgeted setting. Can there be an autonomous scenario? 
For instance, let the learning agent decide the proportion of queries.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies regret minimization when extra information about the prior is revealed.\\nIn particular, the authors consider contextual bandit problems, where at each round, nature reveals some context and the algorithm needs to select actions based on the context. The authors consider a Bayesian setup, where there is a Bayesian prior on the context/reward. 
The external information reveals extra information about the prior. Under this formulation, the paper studies how external information affects learning and performance.\\n\\nThe paper proves both upper and lower bounds that depend on the amount of information an agent accumulates. The theoretical results demonstrate that information, measured in bits, can be directly traded off against regret, measured in reward. The paper also validates their findings through experiments with both traditional multi-armed bandits and a practical application involving large language models for question-answering tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem formulated in this paper seems interesting, and it is interesting to see how information affects learning in general.\\nThe paper also accompanies its theoretical results with experiments.\", \"weaknesses\": \"The technique is not the strong part of this paper.\", \"questions\": \".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We want to thank the reviewer for the detailed analysis and feedback. The following is a response addressing your concerns:\\n\\n(Q1) Since in non-Bayesian settings, there is no prior distribution, the mutual information $I(\\pi^*;H_T)$ is not well defined, since it requires knowing the prior and posterior distributions. One possible way to extend this definition is to consider a constraint on the worst-case information gain at every step, $\\min_{P}D_{KL}\\left({P_{\\pi^* \\mid\\pi,C}} ,{P_{\\pi^*}}\\right)$. \\n\\n(W1 + Q2) We agree that scenarios where the collected information depends on previous actions are an exciting future direction (as we mention in the conclusions section). 
One potential way to formulate this is by assuming that the contexts are sampled from a posterior distribution. That is, before an agent starts his/her interactions, the model $M \\\\sim P$ is sampled and then the context $C_t$ is drawn from $C_t \\\\sim P(\\\\cdot \\\\mid M,{\\\\mathcal{H}}_t)$. The results presented in our paper can be relatively easily extended to this setting. However, this setting introduces different interesting questions. For example, should an agent in an online task explore a \\\"bad\\\" path to gain valuable external knowledge about a different one?\"}", "{\"comment\": \"Thank you for answering my questions. I maintain my positive score.\"}", "{\"metareview\": \"This paper develops new information-theoretic methods for analyzing sequential decision-making algorithms. The methods are then used to recover existing lower bounds for a range of sequential decision-making problems. The reviewers gave overall very positive feedback and unanimously recommend acceptance of the paper. There are, however, some significant concerns about the presentation of the paper and I suggest that the authors implement the wide range of changes suggested by the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"There was some useful interaction between authors and reviewers and the reviewers adjusted their scores after the rebuttal and discussion period.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper studies a general sequential decision-making framework from a Bayesian perspective. Within this framework, it is intuitive that the more information the agent accumulates, the lower the resulting regret. The goal of the paper is to formalize that intuition. The paper does the following:\\n1. Develops new information-theoretic methods for analyzing sequential decision-making algorithms\\n2. 
Uses those methods to recover existing lower bounds for a range of sequential decision-making problems, such as standard multi-armed bandits, tabular RL, Lipschitz bandits, etc (Table 1).\\n3. Obtains lower and upper regret bounds which depend explicitly on the number of bits of information accumulated.\\n4. Runs a question-answering LLM experiment inspired by the above results.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"I think this work has many of the ingredients of a strong conceptual paper. The authors identify a conceptual phenomenon which spans many mathematical models, formalize that phenomenon, and develop a method which can analyze this phenomenon simultaneously in all of those models.\\n\\nAlthough the LLM experiment initially felt out of place to me, I actually think it provides a nice complement to the theoretical results (although the theoretical results certainly remain the primary contribution).\\n\\nOverall, I think the ceiling for this paper is quite high.\", \"weaknesses\": \"I have serious concerns about the presentation. Although the conceptual idea behind the paper is intuitive, it took me a while to make sense of the technical content of the paper. I think there are two issues:\\n1. Confusing writing and non-standard terminology. \\n2. Lack of explanation of the technical statements.\\nI have provided a non-exhaustive list of examples below.\\n\\nAlthough I am not an expert in information-theoretic methods, I am quite familiar with bandits, RL, and Bayesian regret, so more casual readers may struggle even more than me. If the paper purports to elucidate the intuitive tradeoff between information and regret, but the technical results are not accessible to readers, then I believe the impact of the paper will be limited.\\n\\nI also think the LLM experiments could be improved by including baselines of always querying/never querying the large LLM. 
Table 2 suggests that with the query cost of 0.1, always querying the large LLM might actually be the optimal policy. To me, this suggests that a larger query cost is needed and calls into question the significance of the evaluation.\\n\\nOverall, although I think the paper has many merits, I lean towards rejection so that these issues can be addressed, hopefully resulting in a strong final paper.\\n\\n_Writing issues_\\n1. I found it a bit hard to make sense of Section 1.1 (\\u201cContributions\\u201d) without at least informally defining the model. It would also be useful to link to the theorems/sections corresponding to each of the results.\\n2. Some of the terminology and notation is a bit confusing. Normally $\\\\pi \\\\in \\\\Pi$ denotes a policy, but here it denotes a \\u201cdecision\\u201d (basically an action). Instead, $\\\\phi \\\\in \\\\Delta(\\\\Pi)$ is called a policy, which seems like it should just be called a randomized decision/action. Furthermore, $p_t: \\\\mathcal{C} \\\\to \\\\Delta(\\\\Pi)$ is _also_ called a policy, which is more in line with the normal usage of \\u201cpolicy\\u201d. And $\\\\pi^*$ is also a function from $\\\\mathcal{C} \\\\to \\\\Delta(\\\\Pi)$, which gives it a different type than $\\\\pi$. I have also never before seen the term \\u201cepsilon-local set\\u201d used to describe epsilon-balls. I would suggest better aligning terminology and notation with the literature.\\n3. In Example 2.1, is there a reason that you use Bernoulli rewards instead of general rewards? Does your model not cover contextual MAB with general rewards.\\n4. Lines 197 - 215: I assume the rho separation assumption is for policies in $\\\\Phi$, not for policies in $\\\\Delta(\\\\Pi)$? If it is supposed to be $\\\\Delta(\\\\Pi)$, that seems like a very strong assumption about the structure of the decision space.\\n\\n_Lack of interpretation/explanation_\\n1. 
I understand that Yang and Barron also make Assumption 3.1, but it seems pretty unintuitive to me, and I would have appreciated some explanation.\\n2. Theorem 3.4, especially (9), is a bit hard to make sense of. Could you provide an interpretation for this expression?\\n\\n_Minor issues_\\n1. Line 41: resource allocation is much broader than the specific routing problem you describe. Consider something like \\u201cOne example of a resource allocation problem is route a process\\u2026\\u201d The flow in this section also feels a bit weird since you never bring up resource allocation again in the paper. Consider omitting either the resource allocation or online game example and using a single running example?\\n2. Since Section 4 also includes upper bounds, should the title be \\u201cInformation-theoretical regret upper and lower bounds?\\u201d\\n3. Table 2 caption: the \\u201cAppendix ??\\u201d reference is broken\", \"questions\": \"I don\\u2019t have further questions beyond what I\\u2019ve written above in \\u201cWeaknesses\\u201d.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s feedback and have addressed the issues raised. We uploaded a revised version of the paper, where the modifications made are highlighted. Below is a response addressing your concerns:\\n\\nRegarding your concerns over the LLM experiments, querying the large LLM only is not the optimal policy. This is due to the 0.1 penalty, making the small LLM better than the large one for some questions. However, choosing the large LLM only will be better than both policies \\u2013 since our focus is when queries are constrained. We have also added the baselines of using only one model to Table 2.\\n\\n_Writing issues_\\n \\n1. We have modified the introduction to briefly introduce the setting before the contributions. 
We also added references to the appropriate Theorems / Propositions in this section. \\n2. In our denoting of $\\pi$ as a decision and $\\Pi$ as a decision set, we will further emphasize that we follow the notations of papers on decision-making settings, e.g. that of (Xu and Zeevi 2023, Foster et al. 2023). $\\pi^*$ is a different type than $\\pi$ as it is the optimal decision for every context. However, we changed $\\pi^* : \\mathcal{C} \\to \\Pi$ to $\\pi^*_{M,C} \\in \\Pi$ to be less confusing. We also changed our notion of $p_t \\in \\Delta(\\Pi)$ from policy to stochastic decision. We also changed the term $\\epsilon$-local set to $\\epsilon$-ball to better align with existing terminology. \\n3. Bernoulli rewards were chosen mainly for simplicity of exposition. Our setting covers general rewards as well. We added this comment to the paper as well. \\n4. While Proposition 3.2 requires this assumption to hold only for the decisions in $\\Phi$, for Theorem 3.4 to hold $\\Phi$ needs to be a packing set. Hence we assume separation for all stochastic decisions. This is not a strong assumption since it essentially entails that any change in our decision entails a change in the mean reward on average. This is why for MAB problems, this assumption holds for $\\rho(\\phi,\\psi)=c\\|\\phi-\\psi\\|_1$ where $c>0$ is some constant that depends on the structure and prior (but independent of $\\phi,\\psi$).\\n\\n_Lack of interpretation / explanation_\\n1. This assumption means that there exists a small enough $\\epsilon_0$-ball around every decision $p_1$ such that for any $p_2$ in this ball, the KL divergence is bounded by the norm $\\rho$. This is essentially a regularity assumption over the decision space that prevents the KL divergence from blowing up, which, in turn, can be obtained by requiring that $p_1$ and $p_2$ do not become arbitrarily small on their support. 
We have added further explanation on this assumption in the paper as well.\\n2. Theorem 3.4 is obtained by substituting the bounds of Theorem 3.3 and the previously described selection process into Prop. 3.2. The last expression uses a general upper bound for $I(V;H_T)$ (which is the expression with $\\\\mathrm{inf}_{\\\\delta}$ ) and selects $\\\\Phi$ to be a packing set of $\\\\Pi$. The other two expressions use a different upper bound for $I(V;H_T)$, which utilizes the fact that we focus on a local packing set instead. \\n\\n_Minor Issues_\\n1. Thanks for the suggestion, we agree and changed the example to exclusively stick with an example of an online agent with access to a language model. \\n2. We have fixed the title. \\n3. We have fixed the reference to the Appendix.\\n\\nWe hope that we have addressed your main concerns and if there are any other issues, we will gladly correct them.\"}", "{\"comment\": \"We want to thank the reviewer for the detailed analysis and feedback. The following is a response addressing your concerns:\\n\\n(W1) The techniques in this paper include an adaptation of the method presented by [Yang and Barron, 1999] to an online setting. This includes utilizing covering and packing numbers to introduce a regret lower bound. We then introduce a modification of this method to handle problems with information constraint.\"}", "{\"comment\": \"Thank you for the feedback. I will keep my score.\"}", "{\"comment\": \"Thank you for your response. We will add further explanation of the $\\\\rho$ assumption to future revisions.\"}" ] }
0nxocR2qx4
ROPO: Robust Preference Optimization for Large Language Models
[ "Xize Liang", "Chao Chen", "Shuang Qiu", "Jie Wang", "Yue Wu", "Zhihang Fu", "Hanzhu Chen", "Feng Wu", "Jieping Ye" ]
Preference alignment is pivotal for empowering large language models (LLMs) to generate helpful and harmless responses. However, the performance of preference alignment is highly sensitive to the prevalent noise in the preference data. Recent efforts for this problem either marginally alleviate the impact of noise without the ability to actually reduce its presence, or rely on costly teacher LLMs prone to reward misgeneralization. To address these challenges, we propose the **RO**bust **P**reference **O**ptimization (**ROPO**) framework, a novel iterative alignment approach that integrates *noise-tolerance* and *filtering of noisy samples* without the aid of external models. Specifically, ROPO first formulates the training process with adaptive noise reduction as an optimization problem, which can be efficiently solved in an iterative paradigm. Then, to enhance this iterative solving process with noise-tolerance and noise-identification capabilities, we derive a robust loss that suppresses the gradients from samples with high uncertainty. We demonstrate both empirically and theoretically that the derived loss is key to the noise-tolerance and effective filtering of noisy samples. Furthermore, inspired by our derived loss, we propose a robustness-guided rejection sampling technique to compensate for the potential important information in discarded queries. Experiments on three widely-used datasets of dialogue and post-summarization demonstrate that ROPO significantly outperforms existing preference alignment methods in the practical noise setting and under artificial random symmetric noise, with its advantage increasing as the noise rate increases.
[ "preference optimization", "large language models", "noise tolerance" ]
Reject
https://openreview.net/pdf?id=0nxocR2qx4
https://openreview.net/forum?id=0nxocR2qx4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ff6Sel2Mcv", "bwuYVcLt6L", "Wrs9GORVAK", "SmB2m1HDp1", "JwePj0PLbE", "Jn8CPc81wu", "BLBMFiHlA9", "5WB4YA75oR" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_review", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1732646405094, 1730733617475, 1734381715996, 1730528173042, 1733217768011, 1737523758756, 1733312526674, 1730684469714 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6273/Reviewer_VETN" ], [ "ICLR.cc/2025/Conference/Submission6273/Reviewer_Co4P" ], [ "ICLR.cc/2025/Conference/Submission6273/Area_Chair_WFbC" ], [ "ICLR.cc/2025/Conference/Submission6273/Reviewer_okiK" ], [ "ICLR.cc/2025/Conference/Submission6273/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6273/Authors" ], [ "ICLR.cc/2025/Conference/Submission6273/Reviewer_VETN" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed rebuttal. I have reviewed it carefully and will maintain my current score.\"}", "{\"summary\": \"The paper studies preference alignment under the condition when there are poorly-annotated preference pairs. The authors propose a robust preference optimization (ROPO) framework with two key considerations, (1) a noise-robust loss function that suppresses the gradients of samples that the policy model is uncertain about; (2) A robustness-guided rejection sampling technique designed to balance the filtering of noisy samples with the preservation of important information from queries that might otherwise be discarded.\\n\\nIn the experiments, the authors demonstrate that the policy model aligned with ROPO shows the least drop in performance (win rate against a reference model as judged by GPT-4) with an increasing proportion of injected noise in the training data. 
The injected noise includes both artificial noise, such as flipping the preference labels of training pairs, and practical noise, where responses from a larger model are blindly assumed to be preferred over those from a smaller model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a well-motivated study on addressing annotator noise in preference alignment, an issue that is critical for developing reliable policy models.\\n\\n2. The paper provides a thorough and sensible theoretical analysis of DPO's limitations in discriminating between noisy and clean samples. It also demonstrates how the addition of a regularization loss helps mitigate these issues.\", \"weaknesses\": \"1. Limited test datasets. Performance evaluation is only conducted on AlpacaEval and the test split of Reddit TL;DR, lack of comprehensive results on multiple instruction-following / alignment benchmarks, such as Wildbench, Arena-Hard, MT-Bench, etc.\\n\\n2. The paper consider using loss values to identify model-uncertain samples in the robustness-guided rejection sampling procedure as a major contribution. Yet, there has already been several related works, like [1].\\n\\n[1] Secrets of RLHF in Large Language Models Part II: Reward Modeling. \\n\\n3. Lack of human evaluation. The analysis is based on GPT-4, which can be biased in its evaluation.\", \"questions\": \"(1) Only one type of practical noise is considered in the paper, specifically, the assumption that annotators inherently favor outputs from larger models over those from smaller ones. What are other type of practical noises?\\n\\n(2) The authors mention ROPO is an iterative alignment approach. How the iterative process takes place? It is unclear based on the methodology descriptions in the paper. 
The authors may provide a detailed algorithm sketch to describe the iterative process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents the RObust Preference Optimization (ROPO) framework, a novel approach to improving preference alignment in LLMs under noisy conditions. By integrating a noise-tolerant loss function and robustness-guided rejection sampling, ROPO aims to mitigate the impact of noisy preference data while preserving valuable information. Experimental results across various datasets demonstrate that ROPO outperforms existing methods in both real and artificially noisy scenarios, achieving better alignment with human preferences without increasing computational overhead.\", \"strengths\": [\"The paper addresses the important challenge of annotator noise in preference alignment, which is a critical issue for the development of reliable policy models.\", \"It includes a theoretical analysis that outlines DPO's limitations in distinguishing between noisy and clean samples and introduces a regularization loss as a mitigation strategy.\", \"Experimental results show improvements over DPO in noisy scenarios.\"], \"weaknesses\": [\"Novelty: The contribution of the proposed approach is limited, as the primary contribution is the addition of a regularization term, which is not significantly different from the original DPO approach. Furthermore, the use of loss values to identify model-uncertain samples in the robustness-guided rejection sampling procedure closely resembles concepts explored in prior work.\", \"Complexity: The method introduces multiple hyperparameters, such as the trade-off parameter alpha, the sample filtering ratio, and beta from DPO, which require extensive experimental tuning. 
This added complexity could reduce the practical value of the approach.\", \"Evaluation: The evaluation is constrained by the use of limited test datasets, primarily AlpacaEval and Reddit. This lack of coverage across multiple instruction-following benchmarks such as Wildbench, Arena-Hard, and MT-Bench limits the generalizability of the findings. Additionally, the paper does not sufficiently define or characterize noisy data, making it unclear how noise is identified and mitigated.\", \"Clarity: While Figure 1 outlines the framework of ROPO, the absence of an overall description makes it challenging to understand how ROPO's components\\u2014e.g., noisy sample filtering, rejection sampling, and noise tolerance training\\u2014are integrated into an iterative workflow. This could make the work difficult to reproduce.\", \"The evaluation and clarity issues were partially addressed during the discussion period, with the addition of new experiments and a more formal presentation of ROPO in the appendix. However, the addition of significant new experiments (e.g., key results such as human evaluation) would ideally require a fresh round of reviewing. Furthermore, reviewers remain concerned about the novelty and complexity weaknesses mentioned above, even after considering the authors' latest comments and updated paper. In light of these concerns, I recommend rejection.\"], \"additional_comments_on_reviewer_discussion\": \"The author-reviewer discussion led to improvements in the evaluation and clarity of the paper. The authors also provided justifications regarding the aforementioned concerns about novelty and complexity (e.g., they argued that the method is relatively robust to hyperparameter choice).\\n\\nWhile the evaluation and clarity issues were partially addressed, the reviewer-AC discussion focused on the other two weaknesses. 
Reviewers still felt that the novelty was not particularly strong, as numerous other works are based on DPO modifications, and similar ideas have been explored before (see reviews/discussions for details). Additionally, the reviewers were not convinced by the authors' rebuttal on complexity, and the next version of the paper may require more analyses to support the authors' claims.\"}", "{\"summary\": \"This paper examines the unavoidable presence of noise in preference learning and its significant impact on the performance of Large LLMs. Previous research has only slightly reduced the negative effects of noise, which persists during the training phase. Additionally, efforts to filter out noisy samples often lead to increased computational costs. To address these challenges, the paper introduces the ROPO framework, which combines noise tolerance and the filtering of noisy samples. It also incorporates the technique of rejection sampling to further enhance performance. Specifically, the authors derive a loss function through mathematical derivation designed to suppress the gradients of samples with high uncertainty. This approach prevents the model from overfitting to noisy samples while simultaneously identifying them. The effectiveness of the ROPO framework is demonstrated across three datasets in both practical and artificially noisy scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The author demonstrated through extensive derivations that methods such as DPO are not noise-tolerant and have difficulty distinguishing between noisy and clean samples. Additionally, the gradient weighting strategy of DPO amplifies the impact of noise. The author derived a loss as a regularizer through a conservative gradient weighting strategy to prevent the model from overfitting to noisy samples and to identify noisy samples.\\n\\n2. 
The author not only proved the effectiveness of ROPO on artificial noise but also validated that ROPO can still outperform DPO and other baselines in more practical noisy scenarios.\", \"weaknesses\": \"1. Although the author presented the framework of ROPO in Figure 1, the paper still lacks an overall description of ROPO, making it difficult to understand how the components of ROPO\\u2014noisy sample filtering, rejection sampling stages, and noise tolerance training\\u2014are integrated and how the method works iteratively. The author could perhaps add some overall descriptions of the framework.\\n\\n2. ROPO inevitably introduces too many hyperparameters, such as the trade-off hyperparameter alpha and the sample filtering ratio, which seem to require experimental determination. Along with the hyperparameter beta from DPO, does this make the ROPO algorithm more complex? For example, would different tasks require exploring different combinations of hyperparameters, thereby weakening its practical value?\", \"questions\": \"1. Could you provide a more detailed overall description of the ROPO framework to clarify how the components (noisy sample filtering, rejection sampling stages, and noise tolerance training) are integrated?\\n\\n2. Can you include details the iterative process of the ROPO method?\\n\\n3. Do different tasks require extensive hyperparameter tuning, and if so, how does this affect the practical value of the ROPO method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We remain hopeful and sincerely look forward to receiving your valuable feedback\", \"comment\": \"Dear Reviewer Co4P,\\n\\nWe are writing as the authors of the paper \\\"ROPO: Robust Preference Optimization for Large Language Models\\\" (ID: 6273). 
We sincerely appreciate the time and effort you have devoted to reviewing our paper.\\n\\nWhile we have not yet received your feedback during the discussion phase, **we fully understand that you may be managing other pressing commitments or navigating a hectic schedule**. We would like to take this opportunity to emphasize two key points below, which we hope will be helpful for further discussions.\\n\\n- **We humbly believe that you recognize the value and significance of our work**, as reflected in your review, where you kindly describe our work as \\\"*presenting a well-motivated study*\\\", \\\"*addressing an issue that is critical*\\\", and \\\"*providing a thorough and sensible theoretical analysis*\\\".\\n- Furthermore, **we humbly believe that our rebuttal has adequately addressed your concerns**. We have carefully revised our paper and provided comprehensive experiments, analyses, and discussions to improve our submission. To assist in **saving your valuable time**, we have also summarized the key points of our rebuttal **for your convenience**.\\n\\nAs the discussion period will conclude in about **2 hours**, we remain hopeful and sincerely look forward to receiving your valuable feedback, should your schedule allow. If our rebuttal has properly addressed your concerns, we would greatly appreciate it if you could raise your score (\\\"*5: marginally below the acceptance threshold*\\\"). If not, please let us know your remaining concerns or questions. 
We will do our utmost to address your further concerns and answer any questions.\\n\\nThank you again for your time, effort, and thoughtful consideration.\\n\\nBest regards,\\n\\nAuthors of #6273\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you once again for your time and efforts.\", \"comment\": \"Dear Reviewer Co4P,\\n\\nWe are writing as the authors of the paper \\\"ROPO: Robust Preference Optimization for Large Language Models\\\" (ID: 6273).\\n\\nBefore the discussion period ends, we would like to express our sincere gratitude to you once again. We regret not having an in-depth discussion with you and sincerely hope that our rebuttal has properly addressed your concerns.\\n\\nThank you for your constructive suggestions and valuable comments.\\n\\nBest regards,\\n\\nAuthors of #6273\"}", "{\"summary\": \"This paper introduces the RObust Preference Optimization (ROPO) framework, a method designed to improve preference alignment in large language models (LLMs) by addressing the challenges posed by noisy preference data. ROPO employs a noise-tolerant loss function and an iterative process that integrates noise filtering during training. Additionally, ROPO includes a robustness-guided rejection sampling technique to retain valuable data while filtering noise. Experiments show that ROPO outperforms existing methods under various noisy conditions, offering a scalable and effective approach to aligning LLMs with human preferences without the need for external models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. An iterative training approach that optimizes LLM performance while filtering out noisy samples.\\n2. Experimental results demonstrate improvements over DPO.\\n3. The use of rejection sampling effectively compensates for information lost during the noise filtering step.\", \"weaknesses\": \"1. 
While the paper addresses the impact of noisy data, it lacks a clear definition or characterization of what constitutes noisy data and how it is identified.\\n2. In the loss function, the primary contribution is the addition of a regularization term, which is not significantly different from the original DPO approach, aside from a scaling coefficient applied to the DPO loss.\\n3. The selection of $\\\\alpha$ is highly variable, making it difficult to determine an optimal value.\", \"questions\": \"1. Could you provide a clear definition of noise in the original data and compare the characteristics of noisy data with clean data? Estimating the noise rate in the dataset would add valuable context and make the approach more impactful.\\n2. Why choose $\\\\frac{4 \\\\alpha}{(1+\\\\alpha)^2}$ to normalize the ROPO loss? Does this yield any specific advantages over other functions?\\n3. Besides ROPO's regularization terms, could alternative regularization strategies be applied, and how would they impact performance?\\n4. Could the rejection sampling introduce its own form of bias, especially if it favours certain types of responses?\\n5. Given ROPO\\u2019s iterative nature, what is the computational cost relative to simpler, non-iterative methods, especially for very large LLMs?\\n6. Does the model\\u2019s performance depend on specific types or levels of noise, and how would it handle different real-world noise distributions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0no1Wp2R2j
Going Beyond Feature Similarity: Effective Dataset distillation based on Class-aware Conditional Mutual Information
[ "Xinhao Zhong", "Bin Chen", "Hao Fang", "Xulin Gu", "Shu-Tao Xia", "EN-HUI YANG" ]
Dataset distillation (DD) aims to minimize the time and memory consumption needed for training deep neural networks on large datasets, by creating a smaller synthetic dataset that has similar performance to that of the full real dataset. However, current dataset distillation methods often result in synthetic datasets that are excessively difficult for networks to learn from, due to the compression of a substantial amount of information from the original data through metrics measuring feature similarity, e.g., distribution matching (DM). In this work, we introduce conditional mutual information (CMI) to assess the class-aware complexity of a dataset and propose a novel method by minimizing CMI. Specifically, we simultaneously minimize the distillation loss and constrain the class-aware complexity of the synthetic dataset by minimizing its empirical CMI in the feature space of pre-trained networks. Through a thorough set of experiments, we show that our method can serve as a general regularization method for existing DD methods and improve their performance and training efficiency.
[ "dataset distillation", "conditional mutual information" ]
Accept (Poster)
https://openreview.net/pdf?id=0no1Wp2R2j
https://openreview.net/forum?id=0no1Wp2R2j
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vNfreE9mce", "vBP2X6eSRE", "phgCzg8sHh", "osFoVxV0DZ", "kWZF9m2777", "kTYCVvyQQx", "k0KbFkzKJ8", "dzBfmmPlN5", "dlHwuKxyMH", "dUIne45Sl1", "YojWR5OC0i", "Vaq2qudvVq", "THWYq274ZD", "SNdflJX0s5", "SJ1UW9wFDm", "MqdOOBmY9O", "MgHd6A8htA", "LOhDXf8C84", "KxEaat41jQ", "HggtTVG2Oy", "HbN39p1bgU", "HM8U8gsQbd", "FWcDWGRgeg", "F6zmjWqpQO", "B9TlqYu0uy", "AvJP62Py1d", "8Bf2LfTbwL", "4Y6WPeqG2I", "44gtqFUqit", "2XSQwtRPVV" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523547418, 1733016927579, 1730722258100, 1730534500687, 1732542261617, 1732455903616, 1732464172125, 1732176534671, 1732455897151, 1732162944066, 1732158428428, 1730607817383, 1732163259882, 1733016954747, 1732175651771, 1732542151757, 1732159301678, 1732165354898, 1732455908578, 1732177106783, 1732619802002, 1733016901693, 1730730857513, 1735021962696, 1732761286804, 1732455885036, 1732875553198, 1732358694762, 1732165606743, 1733141091912 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Reviewer_kma8" ], [ "ICLR.cc/2025/Conference/Submission3003/Reviewer_Zh5i" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Reviewer_Zh5i" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Reviewer_i6mf" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Area_Chair_cqbd" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Reviewer_oMQb" ], [ "ICLR.cc/2025/Conference/Submission3003/Area_Chair_cqbd" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Area_Chair_cqbd" ], [ "ICLR.cc/2025/Conference/Submission3003/Area_Chair_cqbd" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ], [ "ICLR.cc/2025/Conference/Submission3003/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer i6mf\\n\\nWe appreciate your response and the contribution your feedback has made to improving our work! As the end of the discussion period approaches, if all your queries have been addressed, we kindly ask you to consider raising your rating. If you still have any doubts or reservations about our work, we are more than willing to engage in further discussion with you.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes a Conditional Mutual Information (CMI) method as a plug-and-play loss function to enhance the performance of dataset distillation methods. 
Experiments conducted on multiple baseline methods demonstrate the effectiveness of the CMI loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe proposed CMI method is a relatively simple yet effective approach that is plug-and-play in nature. It has demonstrated its effectiveness across multiple baseline methods.\\n2.\\tThe motivation behind the method proposed in the paper is solid and is supported by a certain theoretical foundation.\\n3.\\tThe experiments in the paper are comprehensive, conducted across various scales of datasets.\", \"weaknesses\": \"1.\\tThere are now newer and more powerful methods available, such as \\\"Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching\\\" (ICLR 2024). The authors could consider experimenting with their proposed method on these methods.\\n2.\\tThe description of the method in the paper could be clearer, particularly regarding the explanation of the formula symbols, to better emphasize the key points of the approach. Currently, it appears somewhat ambiguous.\\n3.\\tIn my view, using mutual information or KL divergence is not a particularly novel approach, as it has been employed in many works across various fields.\", \"questions\": \"Please kindly refer to the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a plug-and-play method, termed CMI, designed to enhance existing DD techniques by minimizing conditional mutual information. 
By applying CMI, the distilled data is concentrated more effectively around the center of each class, thereby improving generalizability.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The logic is clear.\", \"The experiments are comprehensive.\", \"The review of related works is thorough.\", \"The proposed method is theoretically sound.\"], \"weaknesses\": [\"The core ideas, methodology, and formulations in this paper draw substantially from the approach proposed in [1].\", \"If \\\\hat{Z} contains excessive confusing or uninformative information related to S , then H(S | \\\\hat{Z}, Y) will not be reduced; rather, it could remain the same or even increase. This is because conditional entropy reflects the remaining uncertainty in S after observing both \\\\hat{Z} and Y . When \\\\hat{Z} is noisy or irrelevant for predicting S , it does not help in reducing this uncertainty.\", \"Line 213 states that \\u201cminimizing the class-aware CMI reduces the uncertainty brought to \\\\hat{Z} conditioned on S ,\\u201d which should be \\u201cminimizing the class-aware CMI reduces the uncertainty brought to S conditioned on \\\\hat{Z}\\u201d.\", \"The authors\\u2019 derivation of Equation 6 lacks an explicit explanation, making it challenging to fully understand the transition from previous formulations.\", \"Works like [2] and [3], which also target improvements in dataset distillation, are not adequately considered.\", \"Equation 3 requires summing over all synthetic instances within class y , how the authors adapt this approach to instance-based distillation methods like SRe2L.\", \"[1] Bayes conditional distribution estimation for knowledge distillation based on conditional mutual information\", \"[2]TOWARDS LOSSLESS DATASET DISTILLATION VIA DIFFICULTY-ALIGNED TRAJECTORY MATCHING\", \"[3]Prioritize Alignment in Dataset Distillation\"], \"questions\": \"see weakness.\", \"flag_for_ethics_review\": \"['No ethics review 
needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks all the reviewers once again for dedicating your valuable time to reviewing our paper and providing constructive comments!\\n\\nAs the end of the discussion period approaches, we kindly ask if our responses have satisfactorily addressed your concerns. Your feedback would be greatly appreciated, and we would be delighted to engage in further discussions if needed.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer i6mf\\n\\nThank you once again for dedicating your valuable time to reviewing our paper and providing constructive comments! Considering the limited time available and to save the reviewer's time, we summarize our responses here. \\n\\n- **1. Limitation and future work:**\\n - We identify the limitations in the current deployment of CMI constraint.\\n - We propose the direction of future work to address corresponding issues.\\n\\nAs the end of the discussion period approaches, we kindly ask if our responses have satisfactorily addressed your concerns. Your feedback would be greatly appreciated, and we would be delighted to engage in further discussions if needed.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for the explaination.\\n\\nThe difference you highlighted pertains to the distinction between KD and DD, rather than the core differentiator between the proposed methods. Both the methods utilize class-aware CMI for optimization, and the only modification in Eq. 7 is the substitution of $X \\\\to S$ and $\\\\hat{Y} \\\\to \\\\hat{Z}$.\\n\\nFurthermore, based on the provided pseudocode, the proposed CMI introduces an additional class-wise optimization step for the generated images, which incurs extra computational cost. 
Considering the very marginal performance gain, this additional complexity significantly diminishes the overall contribution of the proposed method, especially when compared to other advancements on SRe2L.\\n\\nSo I will maintain my score.\"}", "{\"comment\": \">**Weakness 2: Confusion about the computation of conditional entropy $H(S|\\\\hat{Z}, Y)$**\\n\\nWe truly appreciate your detailed concerns about CMI for enhancing the rigor of the paper. The objective of minimizing the CMI is to mainly reduce the difference between conditional entropy $H(S \\\\mid Y)$ and $H(S \\\\mid \\\\hat{Z}, Y)$, rather than simply minimizing $H(S \\\\mid \\\\hat{Z}, Y)$. Moreover, introducing additional conditions (i.e., the random variable $\\\\hat{Z}$) will either keep the entropy constant or decrease it, and it cannot lead to an increase as stated. This is a fundamental fact of information theory. \\n\\nIn fact, $\\\\hat{Z}$ is a function of the random variable $S$, where $S$ serves as the input of the deterministic $f_{\\\\theta^*}(\\\\cdot)$ and $\\\\hat{Z}$ is the output. At this point, if the goal is to minimize $H(S \\\\mid \\\\hat{Z}, Y)$, $f_{\\\\theta^*}$ should not change any input, leading to $H(S \\\\mid \\\\hat{Z}, Y) = H(S \\\\mid S, Y) = 0$. \\n\\nOn the other hand, our research lies in the field of dataset distillation, where the optimization objective does not lead to the synthetic dataset becoming entirely noise, which has also been discussed in recent work [5]. To certify this assumption, we used different pre-trained classifier to evaluate the synthetic datasets distilled by different distillation methods as shown in Table below. 
For a more comprehensive understanding, we also present the performance of CMI constraint using different proxy models in Table.\\n\\n| Method | ConvNet | | AlexNet | | ResNet18 | | VGG11 | |\\n|---------------|---------------|---------------|---------------|---------------|----------------|----------------|------------|------------|\\n| | 10 | 50 | 10 | 50 | 10 | 50 | 10 | 50 |\\n| MTT | 98.0 | 100.0 | 97.0 | 99.0 | 98.0 | 100.0 | 100.0 | 100.0 |\\n| MTT + CMI | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |\\n| DSA | 72.0 | 82.6 | 79.0 | 83.8 | 71.0 | 72.2 | 75.0 | 78.4 |\\n| DSA + CMI | 100.0 | 100.0 | 99.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |\\n| DM | 47.0 | 87.6 | 59.0 | 85.8 | 56.0 | 79.6 | 60.0 | 78.8 |\\n| DM + CMI | 99.0 | 100.0 | 98.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |\\n\\n>**Weakness 3: Ambiguous Expression**\\n\\nThank you for pointing out the issue of unclear expression in this section. Here, the point we aim to convey is ''Minimizing the class-aware CMI value constraints the uncertainty brought to $\\\\hat{Z}$ with $\\\\mathcal{S}$ as the input of $f_{\\\\theta^{*}}(\\\\cdot)$'', instead of $H(\\\\hat{Z} \\\\mid \\\\mathcal{S})$. The statement will be revised in the final version of the manuscript to ensure greater precision, and we sincerely appreciate your feedback in pointing this out.\\n\\n>**Weakness 4: Lack of explicit explanation of CMI computation**\\n\\nThanks for your suggestion, here, we provide a more detailed mathematical derivation. We first define the input synthetic data $\\\\mathcal{S}$ is conditionally distributed according to $P_{S \\\\mid Y}(\\\\cdot \\\\mid y)$ and mapped into $P_{S} \\\\in \\\\mathcal{P}([M])$. Then we can formulate $P_{\\\\hat{Z} \\\\mid Y}$ as the average of $P_{S}$ with respect to $P_{S \\\\mid Y}(\\\\cdot \\\\mid y)$, which is shown in **Eq. 
5**:\\n$$\\nP_{\\\\hat{Z} \\\\mid y } = \\\\mathbb{E}[ P_S \\\\mid Y =y ].\\n$$\\n\\nWe get the conditional mutual information $I(S ; \\\\hat{Z} \\\\mid Y=y)$ between $S$ and $\\\\hat{Z}$ given $Y$:\\n\\n$$\\nI(S ; \\\\hat{Z} \\\\mid Y) = \\\\sum_{y \\\\in[C]} P_{Y}(y) I(S ; \\\\hat{Z} \\\\mid y) = \\\\mathbb{E}\\\\left[D\\\\left(P_S \\\\| P_{\\\\hat{Z} \\\\mid Y}\\\\right) \\\\right].\\n$$\\n\\nGiven $Y = y$, the Kullback-Leibler (KL) divergence $D(\\\\cdot \\\\| \\\\cdot)$ between $P_{S}$ and $P_{\\\\hat{Z} \\\\mid y}$ is equal to **Eq. 6**:\\n\\n$$\\n\\\\mathbb{E}\\\\left[D\\\\left(P_{S} \\\\| P_{\\\\hat{Z} \\\\mid y}\\\\right) \\\\mid Y=y\\\\right] = \\\\mathbb{E}\\\\left[\\\\left(\\\\sum_{i=1}^{M} P_{S}(i) \\\\ln \\\\frac{P_{S}(i)}{P_{\\\\hat{Z} \\\\mid y}(\\\\hat{Z}=i \\\\mid Y=y)}\\\\right)\\\\mid Y=y\\\\right]\\n$$\\n\\n$$\\n= \\\\sum_{S} P_{S \\\\mid Y}(\\\\mathbf{s} \\\\mid y)\\\\left[\\\\sum_{i=1}^{M} P(\\\\hat{Z}=i \\\\mid \\\\mathbf{s}) \\\\ln \\\\frac{P(\\\\hat{Z}=i \\\\mid \\\\mathbf{s})}{P_{\\\\hat{Z} \\\\mid y}(\\\\hat{Z}=i \\\\mid Y=y)}\\\\right].\\n$$\\n\\nThese details will be incorporated into the final version of the manuscript to enhance its rigor, and we sincerely appreciate your constructive feedback.\\n\\n[5] What is Dataset Distillation Learning? In ICLR, 2024.\"}", "{\"comment\": [\"Dear Reviewer kma8\", \"Thank you once again for dedicating your valuable time to reviewing our paper and providing constructive comments! Considering the limited time available and to save the reviewer's time, we summarize our responses here.\", \"**1. Additional experiment on more powerful distillation method:**\", \"We provide additional experimental results by applying CMI on DATM across various datasets.\", \"**2. Emphasis on the key point of the methodology:**\", \"We concisely summarize the insights and workflow of our proposed method.\", \"We provide annotations for the symbols referenced in the paper.\", \"**3. 
Essential difference from other information-theoretic metrics:**\", \"We demonstrate the superiority of CMI over MI based on the principle of the Information Bottleneck theory.\", \"As the end of the discussion period approaches, we kindly ask if our responses have satisfactorily addressed your concerns. Your feedback would be greatly appreciated, and we would be delighted to engage in further discussions if needed.\", \"Sincerely,\", \"The Authors\"]}", "{\"comment\": \"> **Weakness 3: Experimental results with larger datasets and more complex models.**\\n\\nThank you for your valuable suggestions regarding enhancing the robustness of the method. We conduct additional experiments on a larger dataset (i.e., ImageNet-21K) and a more complex model (e.g., ResNet-101) by applying the CMI constraint on SRe$^2$L to evaluate the performance of our proposed method. The results shown in the table below demonstrate that our method achieves significant performance improvements under all settings, validating the generality of the proposed method. 
We will include additional relevant experiments in the final version of the manuscript.\\n| Dataset | IPC | ResNet-18 | | ResNet-50 | | ResNet-101 | |\\n|------------------|-----|------------------|----------|------------------|----------|------------------|----------|\\n| | | SRe$^2$L | Ours | SRe$^2$L | Ours | SRe$^2$L | Ours |\\n| ImageNet-1K | 10 | 21.3\\u00b10.6 | **24.2\\u00b10.3** | 28.2\\u00b10.4 | **30.8\\u00b10.5** | 30.5\\u00b10.3 | **23.0\\u00b10.1** |\\n| | 50 | 46.8\\u00b10.2 | **49.1\\u00b10.1** | 55.6\\u00b10.2 | **58.4\\u00b10.3** | 56.4\\u00b10.3 | **58.7\\u00b10.2** |\\n| | 100 | 52.8\\u00b10.3 | **54.6\\u00b10.2** | 61.1\\u00b10.1 | **63.6\\u00b10.3** | 62.5\\u00b10.2 | **64.2\\u00b10.3** |\\n| ImageNet-21K | 10 | 18.5\\u00b10.4 | **21.6\\u00b10.2** | 27.2\\u00b10.3 | **29.8\\u00b10.4** | 27.5\\u00b10.1 | **30.1\\u00b10.2** |\\n\\n> **Question 1: Definition of empirical CMI.**\\n\\n\\nThank you for your suggestion on improving the presentation in the paper. Here, we propose the term **empirical CMI** to differentiate it from the CMI formula derived through explicit mathematical expressions in **Eq. 8**. In practical applications , empirical originates from the estimation of $P_{\\\\hat{Z} \\\\mid Y}$, i.e., using $Q_{\\\\text{emp}}^{Y}$ obtained by summing over all sampled points and averaging as a numerical approximation of $P_{\\\\hat{Z} \\\\mid Y}$. For a specific synthetic dataset $\\\\mathcal{S_y}$ = $\\\\lbrace(s_1, y)$, $(s_2, y)$, ..., $(s_n, y)\\\\rbrace$ given $Y = y$, \\n$Q_{\\\\mathrm{emp}}^{y} = \\\\frac{1}{|\\\\mathcal{S_y}|} \\\\sum_{s_i \\\\in \\\\mathcal{S_y}} P_{s_i}$. \\nWe then use $Q_{\\\\mathrm{emp}}^{y}$ as the numerical estimate of $P_{\\\\hat{Z} \\\\mid Y}$ and substitute it into **Eq. 8** to calculate the computable CMI value.\\n\\n> **Question 2: Ablation study of diverse model architectures.**\\n\\nThank you for your suggestion on improving the ablation experiments. 
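To make the estimate concrete, here is a minimal NumPy sketch of the empirical class-aware CMI described in the preceding response (the class center $Q_{\mathrm{emp}}^{y}$ substituted into Eq. 8). The function name and array layout are our own illustrative choices, not the authors' released code:

```python
import numpy as np

def empirical_cmi(probs, labels):
    """Empirical class-aware CMI of a synthetic set.

    probs:  (N, M) array; row i is P_{s_i}, the softmax output of the
            pre-trained proxy model for synthetic sample s_i.
    labels: (N,) integer class labels y_i.

    For each class y, Q_emp^y is the mean of P_{s_i} over that class
    (the numerical estimate of P_{Z|y}); the per-class term is the
    average KL(P_{s_i} || Q_emp^y), and the CMI is the
    label-frequency-weighted sum of those terms.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    cmi = 0.0
    for y in np.unique(labels):
        p_y = probs[labels == y]                        # samples of class y
        q_emp = p_y.mean(axis=0)                        # class center Q_emp^y
        kl = np.sum(p_y * np.log(p_y / q_emp), axis=1)  # KL(P_s || Q_emp^y)
        cmi += (len(p_y) / n) * kl.mean()               # weight by P_Y(y)
    return cmi
```

A perfectly clustered class (identical softmax outputs) contributes zero, so minimizing this value pushes each class's responses toward its center.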
In **Table 7** provided in **Sec 5.5**, we present the performance when using ConvNet and ResNet18 as proxy models for calculating CMI. Following your advice, we have conducted more comprehensive ablation experiments. Under CIFAR10 with IPC=10, we utilized simply pre-trained ConvNet, AlexNet, ResNet18, and VGG11 as proxy models to compute CMI for both DM and DSA. We sequentially report the following metrics: the depth (i.e., the number of convolution layers), the dimensionality of the feature space, and the accuracy of the simply pre-trained proxy model; the start CMI value of the synthetic dataset, the CMI value at the end of the optimization process w.o. and w. CMI constraint (cstr); and the accuracy of the model trained using the synthetic dataset. As shown in the table below, it can be observed that the dimensionality of the feature space has a greater impact on CMI values than network depth, and CMI effectively reflects the degree of clustering in $\\hat{Z}$ under the same dimensional conditions. Nonetheless, the proposed CMI constraint effectively regulates the complexity of the synthetic dataset, demonstrating the strong generalization capability of our approach.\n\n**Ablation study of proxy model architecture on DM**\n| | ConvNet | AlexNet | VGG11 | ResNet18 |\n|---------------------------|-------------|-------------|-------------|--------------|\n| **Depth** | 3 | 5 | 8 | 17 |\n| **Dimensionality** | 2048 | 4096 | 512 | 512 |\n| **Model Acc (%)** | 79.8 | 80.2 | 79.4 | 79.5 |\n| **Start CMI value** | 0.1045 | 1.9338 | 0.0021 | 0.0016 |\n| **End CMI value w.o. cstr** | 0.0866 | 1.6217 | 0.0059 | 0.0066 |\n| **End CMI value w. cstr** | 0.0326 | 0.6642 | 0.0006 | 0.0004 |\n| **Performance (%)** | 51.2 | 50.8 | 52.4 | **52.9** |"}", "{\"comment\": \"We sincerely express our gratitude for dedicating your valuable time to providing insightful suggestions that can enhance our paper. 
Your praise regarding our methodology, experiments, and contributions has greatly encouraged and motivated us. Our detailed responses to all of your concerns are presented below.\n\nEncouragingly, reviewers praise the insight of using CMI in dataset distillation ( `R#oMQb` , `R#kma8` ), the solid theoretical foundation ( `R#kma8` , `R#zh5i` ), the robustness ( `R#oMQb` , `R#kma8` ), and the comprehensive experiments ( `R#oMQb` , `R#kma8`, `R#i6mf`, `R#zh5i` ) of our work. To better address the reviewers' concerns, we provide point-by-point responses to each reviewer below. If there are any other questions, please don't hesitate to discuss them with us.\"}", "{\"summary\": \"This paper introduces a novel regularization method for dataset distillation (DD) by minimizing both the distillation loss and the Conditional Mutual Information (CMI) of synthetic datasets. It uses an efficient CMI estimation method to measure class-aware properties and combines CMI with existing DD techniques. Experiments show that the proposed CMI-enhanced loss significantly outperforms state-of-the-art methods, improving performance by up to 5.5%. The method can be used as a plug-and-play module for all existing DD methods with diverse optimization objectives.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The strengths of this paper lie in its comprehensive experimentation across diverse datasets and network architectures, which effectively demonstrates the versatility and robustness of the proposed method. Furthermore, the method's ability to be integrated as a plug-and-play module into existing dataset distillation techniques, regardless of their optimization objectives, showcases its innovation and flexibility, making it a significant contribution to the field.\", \"weaknesses\": \"The paper lacks a clear discussion of the limitations of the proposed method. 
Furthermore, the authors should consider using more intuitive explanations, visual aids, and pseudocode to help readers better understand the technical details of the method.\", \"questions\": \"Can you discuss any potential limitations of your proposed method and suggest directions for future work to address these limitations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Question 2: Ablation study of diverse model architectures**\\n\\nThe results on the DSA are shown in the table below.\\n\\n**Ablation study of proxy model architecture on DSA**\\n| | ConvNet | AlexNet | VGG11 | ResNet18 |\\n|---------------------------|-------------|-------------|-------------|--------------|\\n| **Depth** | 3 | 5 | 8 | 17 |\\n| **Dimensionality** | 2048 | 4096 | 512 | 512 |\\n| **Model Acc (%)** | 79.8 | 80.2 | 79.4 | 79.5 |\\n| **Start CMI value** | 0.1045 | 1.9338 | 0.0021 | 0.0016 |\\n| **End CMI value w.o. cstr** | 0.0825 | 1.8755 | 0.0076 | 0.0051 |\\n| **End CMI value w. cstr** | 0.0455 | 0.8875 | 0.0005 | 0.0004 |\\n| **Performance (%)** | 53.2 | 53.4 | **54.8** | 54.7 |\\n\\n> **Question 3: Application on downstream tasks**\\n\\nThanks for the suggestion. Following your guidance, we explore the proposed method across two downstream tasks commonly used in dataset distillation: federated learning (FL) [7] and continual learning (CL) [8]. The experimental results are presented in the table below. 
It can be observed that our proposed CMI constraint provides clearer classification boundaries for the synthetic datasets, thereby offering more valuable guidance in these real-world applications.\n\n**Federated learning**\n| IPC | DM | DM + CMI |\n|-----|----------|------------|\n| 3 | 53.6 | **54.1** |\n| 5 | 62.2 | **63.0** |\n| 10 | 69.2 | **70.4** |\n\n**Continual learning**\n| Number of Classes | IDC | IDC + CMI |\n|-------------------|---------|-------------|\n| 20 | 65.5 | **67.2** |\n| 40 | 61.2 | **62.4** |\n| 60 | 54.5 | **56.1** |\n| 80 | 50.4 | **53.6** |\n| 100 | 46.5 | **50.2** |\n\n[7] FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning. In CVPR, 2023.\n\n[8] Gdumb: A simple approach that questions our progress in continual learning. In ECCV, 2020.\"}", "{\"comment\": \"Dear Reviewer Zh5i\n\nWe appreciate your response and the contribution your feedback has made to improving our work! As the end of the discussion period approaches, if all your queries have been addressed, we kindly ask you to consider raising your rating. If you still have any doubts or reservations about our work, we are more than willing to engage in further discussion with you.\n\nSincerely,\n\nThe Authors\"}", "{\"comment\": \"We would like to express our gratitude for your valuable suggestions and insightful feedback on our paper. Our point-by-point responses are provided below.\n\n>**Weakness 1: Contribution of our work.**\n\nThank you for acknowledging the previous work [1] that utilizes conditional mutual information (CMI). Here, we present the distinctions between our proposed method and [1] as follows:\n\n**Different Optimization Objectives.** [1] lies in the field of knowledge distillation, **with the goal of enabling the teacher model to be a better estimated Bayesian classifier**. 
By comparison, our proposed method is applied to evaluate the non-linear information between $\\mathcal{S}$ and $\\hat{Z}$ and to **make the synthetic dataset more learnable**.\n\n**Different Optimization Intentions.** [1] **maximizes** the CMI value to help the teacher model better capture contextual information during the **pre-training phase**. In contrast, our proposed method seeks to **minimize** the CMI value of the synthetic dataset to constrain its complexity during the **distillation process**.\n\n**Different Optimization Methods.** [1] optimizes by calculating CMI in the **probability space**, whereas our approach reconstructs the Markov chain (i.e., from $Y \\rightarrow X \\rightarrow \\hat{Y}$ to $Y \\rightarrow S \\rightarrow \\hat{Z}$) in **feature space** to compute CMI, and we validate its theoretical correctness as discussed in **Sec. 4.1**. A stronger constraint effect is achieved, as shown in **Table 7** provided in **Sec 5.5**.\n\nBeyond that, we emphasize that the derivation of CMI stems from a **deterministic** calculation in information theory; that is to say, the calculation is deterministic once the random variables are determined. This is why our formulation is similar to [1] (despite the significant differences discussed above). Thus, the focus when applying CMI as a regularization should be why and how to determine the three corresponding random variables.\n\nOn the other hand, in a similar spirit to our work, a series of studies has applied information-theoretic methodologies to enhance performance across various tasks, including dataset distillation [2] [3] [4]. In contrast to the various estimation frameworks necessitated by the **nondeterministic** calculation of other information metrics (e.g., mutual information), we reiterate that the major advantage of CMI is its certainty, i.e., the only estimation required is derived from distribution sampling. 
\n\nIn summary, **our study is the first to identify the intrinsic class-aware complexity challenge in dataset distillation and to introduce CMI as a solution**, elucidating its theoretical and practical significance. Furthermore, we offer a novel perspective on how information theory can be effectively applied to this field, thereby highlighting the originality and advancement of our research.\n\n[1] Bayes conditional distribution estimation for knowledge distillation based on conditional mutual information. In ICLR, 2024.\n\n[2] MIM4DD: Mutual Information Maximization for Dataset Distillation. In NeurIPS, 2023.\n\n[3] One Training Fits All: Generalized Data Condensation via Mixture-of-Information Bottleneck Guidance. OpenReview Submission, 2024.\n\n[4] GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost. OpenReview Submission, 2024.\"}", "{\"comment\": \"We thank the reviewer for the valuable feedback and provide further clarifications below:\n\n>**1. Utilization of CMI:**\n\nConditional Mutual Information (CMI), derived from information theory, serves as an information measure of the relationship between two random variables and is **not domain-specific**. Unlike Mutual Information (MI), one significant advantage of CMI lies in its computational determinism. **We believe that appropriately leveraging CMI according to task-specific characteristics across different fields significantly facilitates the mutual verification of theory and experimentation**, which has also been acknowledged and accepted by `R#oMQb` and `R#kma8`.\n\n>**2. Confusion between KD and DD:**\n\nHere, we **reemphasize** that the differences between knowledge distillation (KD) and dataset distillation (DD) can lead to **substantial** impacts. In contrast to previous work [1], our method introduces clear differences (e.g., optimization direction). 
We notice that in the feedback, you seem to consider that two random variables $\\\\hat{Y}$ and $\\\\hat{Z}$ can be straightforwardly substituted when reconstructing Markov chains, while we have made a detailed discussion in **Sec 4.1**. And we have made a clear clarification that the focus when applying CMI as a regularization should be **why and how to determine the three corresponding random variables** and **appropriate optimization intention**.\\n\\n>**3. Application to SRe$^2$L:**\\n\\nOur work aims to constrain the excessive complexity in existing dataset distillation tasks by utilizing CMI as a **general plug-and-play regularization** which has been acknowledged by all the reviewers including `R#Zh5i`. Simply conflating our method with methods solely aimed at improving accuracy on SRe$^2$L could be inappropriate. Based on your suggestion, we conduct comparisons between our method and **PAD [2] recommended by the reviewer** on SRe$^2$L. Results on CIFAR-100 are presented in the table below.\\n\\n| Method | 10 | 100 |\\n|----------------------------|-----------|-----------|\\n| SRe$^2$L | 28.2| 57.2|\\n| SRe$^2$L + PAD (FIEX) | 29.3| 57.9|\\n| **SRe$^2$L + CMI** | **30.1**| **59.1**|\\n\\n>**More effective modification to SRe$^2$L Code:**\\n\\nWe appreciate your valuable advice and have made improvements to our method based on your suggestion and insight from existing SOTA method [3]. By simply swapping class loop with IPC loop, we successfully eliminate additional computational overhead. The revised pseudocode is provided below, and additional experimental results are shown in the table below. 
Under class-wise supervision **without incurring extra computational cost**, the performance improvements achieved with our method go beyond what could be described as a **\"very marginal performance gain\"**, even compared with the recent SOTA plug-and-play method [4] designed solely for instance-based distillation methods, further validating the generalization and robustness of our proposed method.\n\n| Method | 10 | 50 | 100 |\n|----------------------------|-----------|-----------|-----------|\n| SRe$^2$L | 21.3 \u00b1 0.6| 46.8 \u00b1 0.2| 52.8 \u00b1 0.3|\n| SRe$^2$L$^\\dagger$ | 22.7 \u00b1 0.1| 48.4 \u00b1 0.2| 54.3 \u00b1 0.3|\n| **SRe$^2$L + CMI** | **24.1 \u00b1 0.3 (2.8$\\uparrow$)**| **50.3 \u00b1 0.4 (3.5$\\uparrow$)**| **56.0 \u00b1 0.3 (3.2$\\uparrow$)**|\n\n### Original Pseudocode\n\n```markdown\nfor i from 1 to IPC:  # Outer loop optimizes synthetic dataset from 1 to IPC\n    targets = np.arange(C)  # initial hard label for all classes\n    for j from 1 to T:  # Inner loop optimizes images from 1 to epoch T\n        Optimize S[i, :] with L\n```\n\n### Revised Pseudocode\n\n```markdown\nfor i from 1 to C:  # Outer loop optimizes synthetic dataset from 1 to class C\n    targets = np.arange(IPC)  # initial hard label for one class\n    for j from 1 to T:  # Inner loop optimizes images from 1 to epoch T\n        Optimize S[i, :] with L and CMI constraint\n```\n\n[1] Bayes conditional distribution estimation for knowledge distillation based on conditional mutual information. In ICLR, 2024.\n\n[2] Prioritize Alignment in Dataset Distillation. OpenReview Submission, 2024.\n\n[3] Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? In NeurIPS, 2024.\n\n[4] GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost. OpenReview Submission, 2024.\"}", "{\"comment\": \"We sincerely express our gratitude for dedicating your valuable time to providing insightful suggestions that can enhance our paper. Your praise regarding our methodology, experiments, and contributions has greatly encouraged and motivated us. 
Our detailed responses to all of your concerns are presented below.\\n\\n> **Weakness 1: Balance between training cost and performance.**\\n\\nThanks for you suggestion on improving the experimental completeness, we conduct experiments on reducing the calculation frequency of CMI constraints as shown in table below. We report the performance and additional time consumption under different optimization settings, where the ratio of CMI optimization steps to distillation loss optimization steps is set to 1/10, 1/5, and 1. It can be observed that even under significantly reduced optimization step settings, the CMI constraint still achieves stable performance improvements.\\n| | | | CIFAR10 | | | CIFAR100 | |\\n|-------|---------|-------------|---------|---------|--------------|---------|---------|\\n| | | 1/10 | 1/5 | 1 | 1/10 | 1/5 | 1 |\\n| IDM | additional time (s)| 0.05 | 0.1 | 0.5 | 0.5 | 2.5 | 5 |\\n| | acc (%) | 61.9\\u00b10.3 | 62.1\\u00b10.2 | **62.2\\u00b10.3** | 46.7\\u00b10.1 | 47.0\\u00b10.2 | **47.2\\u00b10.4** |\\n| IDC | additional time (s)| 0.4 | 0.8 | 4.4 | 3.2 | 6.4 | 32.0 |\\n| | acc (%) | 69.4\\u00b10.3 | 69.7\\u00b10.2 | **70.0\\u00b10.3** | 46.0\\u00b10.3 | 46.4\\u00b10.3 | **46.6\\u00b10.3** |\\n\\n> **Weakness 2: Comparison with other complexity measure.**\\n\\nCompare to other complexity metric which evaluate the complexity of a single data point (e.g., GraDN Score [1]), CMI is a class-based concept, considering all the feature vectors within one class as a whole. On the other hand, CMI enables the computation process to follow a deterministic mathematical expansion, results in a more accurate information metric. Moreover, the insight of minimizing CMI stems from constraining the complexity of the overall synthetic dataset, making it a versatile plugin. Here, we list several information metrics used in previous distillation methods and make a brief summarization:\\n\\n**Influence Function [2]**. 
The influence function determines the importance of a sample by comparing the loss difference after removing that sample. However, this method requires retraining the model for each sample, leading to significant computational overhead and making it impractical to optimize. Thus, [3] only considers it as an evaluation metric.\n\n**Mutual Information (MI)**. MIM4DD [4] increases the MI between the synthetic and original datasets by incorporating a contrastive learning framework. Although it is not explicitly used for complexity constraints, MI is still an information metric and is thus included in our comparison. \n\n**Mean-square Error (MSE)**. IID [5] uses the MSE loss between each sample and its corresponding class center as a regularization, which is applicable only to distribution matching methods. \n\n**GraDN Score**. SDC [6] uses the GraDN Score of the synthetic dataset with respect to the proxy model during training as a regularization, which is applicable only to gradient matching methods. \n\nFor a more intuitive comparison, we have compared the performance of our work with [4], [5], and [6], as shown in **Table 1** provided in **Sec 5.2** and **Table 8** provided in **Sec A.1**. We provide an additional comparative experiment where we remove another regularization term (i.e., the MSE loss between the variance matrix of the synthetic dataset and that of the real dataset) from IID and compare it with our method in the table below. 
It can be observed that CMI not only accelerates model training but also achieves better constraint performance across various distillation methods and datasets.\\n| | SVHN | | CIFAR10 | | CIFAR100 | |\\n|----------------|-----------|---------|-----------|---------|------------|---------|\\n| | 10 | 50 | 10 | 50 | 10 | 50 |\\n| IID-DM$^\\\\dagger$ | 74.5\\u00b10.2 | 84.0\\u00b10.3 | 52.5\\u00b10.3 | 64.5\\u00b10.4 | 31.5\\u00b10.3 | 43.0\\u00b10.1 |\\n| **DM+CMI** | **77.9\\u00b10.4** | **84.9\\u00b10.4** | **52.9\\u00b10.3** | **65.8\\u00b10.3** | **32.5\\u00b10.4** | **44.9\\u00b10.2** |\\n| IID-IDM$^\\\\dagger$ | 81.8\\u00b10.1 | 84.7\\u00b10.3 | 59.4\\u00b10.2 | 68.3\\u00b10.2 | 45.8\\u00b10.4 | 51.0\\u00b10.3 |\\n| **IDM+CMI** | **84.3\\u00b10.2** | **88.9\\u00b10.2** | **62.2\\u00b10.3** | **71.3\\u00b10.2** | **47.2\\u00b10.4** | **51.9\\u00b10.3** |\\n\\n[1] Deep learning on a data diet: Finding important examples early in training. In NeurIPS, 2021.\\n\\n[2] Understanding black-box predictions via influence functions. In ICML, 2017.\\n\\n[3] What is Dataset Distillation Learning? In ICML, 2024.\\n\\n[4] MIM4DD: Mutual Information Maximization for Dataset Distillation. In NeurIPS, 2023.\\n\\n[5] Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation. In CVPR, 2024.\\n\\n[6] Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation. In arXiv, 2024.\"}", "{\"comment\": \"We sincerely thank you for the precious time and effort in providing a wealth of suggestions to enhance the quality of our paper. We have carefully read all the comments and provide detailed point-by-point responses as follows. Hopefully, we can adequately address your concerns.\\n> **Question 1: Additional experiment on more powerful distillation method.**\\n\\nThanks for the valuable suggestions on improving and completing the experimental aspects. We conduct an additional experiment by applying the CMI constraint on [1]. 
The experimental results show that our proposed method, as a general-purpose plugin, can achieve performance improvements across various distillation methods.\n\n| Method | CIFAR10 | | CIFAR100 | | TinyImageNet | |\n|---------------|---------------|---------------|----------------|----------------|--------------------|--------------------|\n| | 10 | 50 | 10 | 50 | 10 | 50 |\n| MTT | 65.3 \u00b1 0.4 | 71.6 \u00b1 0.2 | 39.7 \u00b1 0.4 | 47.7 \u00b1 0.2 | 23.2 \u00b1 0.2 | 28.0 \u00b1 0.3 |\n| **MTT+CMI** | **66.7 \u00b1 0.3**| **72.4 \u00b1 0.3**| **41.9 \u00b1 0.4** | **48.8 \u00b1 0.2** | **24.1 \u00b1 0.3** | **28.8 \u00b1 0.3** |\n| DATM | 66.8 \u00b1 0.2 | 76.1 \u00b1 0.3 | 47.2 \u00b1 0.4 | 54.1 \u00b1 0.2 | 31.1 \u00b1 0.3 | 39.7 \u00b1 0.3 |\n| **DATM+CMI** | **67.4 \u00b1 0.4**| **76.6 \u00b1 0.1**| **47.6 \u00b1 0.3** | **55.1 \u00b1 0.3** | **31.9 \u00b1 0.4** | **40.6 \u00b1 0.2** |\n\n>**Question 2: Emphasis on the key point of the methodology.**\n\nThank you for your suggestions on improving the presentation. Here, we provide a brief summary of our work as follows: \n\nBased on our observation of excessive inter-class complexity in the synthetic dataset (i.e., a more entangled $\\hat{Z}$ when $\\mathcal{S}$ is used as the input of a pre-trained classifier $f_{\\theta^{*}}$), we introduce conditional mutual information (CMI) from information theory as a metric to evaluate the complexity of the synthetic dataset. By reconstructing the original CMI computation formula in the feature space and incorporating it as a plug-and-play regularization term, which is subsequently minimized, we achieve stable performance improvements across various distillation methods and datasets. 
A detailed notation is provided in table below, and we will refine our statements in the final version of the manuscript to ensure greater rigor.\\n| Notation | Description |\\n|-----------------------|---------------------------------------------------------|\\n| $\\\\mathcal{T}$ | the original training dataset |\\n| $\\\\mathcal{S}$ | the synthetic training dataset |\\n| $S$ | the random variable as the input to the network |\\n| $Y$ | the random variable as the label |\\n| $y$ | some certain label of the random variable $Y$ |\\n| $\\\\hat{Z}$ | the random variable as the penultimate features of the network |\\n| $C$ | the number of the class |\\n| $M$ | the number of the dimensionality of $\\\\hat{Z}$ |\\n| $f_{\\\\theta^{*}}$ | the pre-trained network |\\n| $Q_{\\\\text{emp}}^{y}$ | the empirical center of class $y$ |\\n| $\\\\lambda$ | the hyper-parameter of CMI constraint |\\n\\n>**Question 3: Essential difference with other Information theoretic metrics.**\\n\\nHere, we demonstrate the advantages of CMI over mutual information (MI). In theory, the forward propagation of neural networks $f_{\\\\theta^{*}}$ is a deterministic process, causing $I(X; \\\\hat{Z})$ to degrade into $H(\\\\hat{Z})$, thereby losing its theoretical significance. On the other hand, the value spaces of $X$ and $\\\\hat{Z}$ are extremely large, making $I(X;\\\\hat{Z})$ very difficult to compute and approximate. As a result, variational methods are commonly employed for estimation, but these estimates are often inaccurate in practice. While CMI enables the computation process to follow a deterministic mathematical expansion by incorporating an additional variable $Y$. This results in a more accurate information metric without the need for additional estimation frameworks (e.g., the contrastive learning framework employed in [2]) and provides a universal regularization approach. 
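To spell out the degeneracy argument above (our own rearrangement, with $H(\cdot)$ denoting Shannon entropy and $Q_{\mathrm{emp}}^{y}$ approximating $P_{\hat{Z}\mid y}$): since the forward pass is deterministic, $H(\hat{Z}\mid X)=0$ and MI collapses, whereas CMI stays informative because $\hat{Z}$ is treated as a categorical variable drawn from the softmax output $P_S$:

```latex
% MI degenerates for a deterministic network \hat{z} = f_{\theta^{*}}(x):
I(X;\hat{Z}) = H(\hat{Z}) - H(\hat{Z}\mid X) = H(\hat{Z}).
% CMI, with \hat{Z} drawn from the softmax distribution P_S, remains nontrivial:
I(S;\hat{Z}\mid Y=y)
  = \mathbb{E}\!\left[ D\!\left( P_S \,\middle\|\, Q_{\mathrm{emp}}^{y} \right) \mid Y=y \right]
  = H\!\left( Q_{\mathrm{emp}}^{y} \right) - \mathbb{E}\!\left[ H(P_S) \mid Y=y \right].
```

That is, the per-class CMI term equals the entropy of the class center minus the average per-sample entropy, and it vanishes exactly when all samples of a class produce the same output distribution.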
The quantitative comparisons with [2] are shown in **Table 1** provided in **Sec 5.2**.\n\n[1] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching. In ICLR, 2024.\n\n[2] MIM4DD: Mutual Information Maximization for Dataset Distillation. In NeurIPS, 2023.\"}", "{\"comment\": [\"Dear Reviewer Zh5i\", \"Thank you once again for dedicating your valuable time to reviewing our paper and providing constructive comments! Considering the limited time available and to save the reviewer's time, we summarize our responses here.\", \"**1. Contribution of our work:**\", \"We present a clear distinction between our method and existing approaches utilizing CMI.\", \"We highlight that one major advantage of CMI stems from its computational determinism.\", \"We elaborate on the originality of our method in applying information-theoretic principles and outline its contributions.\", \"**2. Confusion about the computation of conditional entropy $H(S \\mid \\hat{Z}, Y)$:**\", \"We point out that incorporating additional observations will only decrease or maintain entropy.\", \"We argue that $\\hat{Z}$ would not be irrelevant information in the field of dataset distillation.\", \"**3. Ambiguous Expression:**\", \"We clarify the ambiguous statements present in the paper.\", \"**4. Lack of explicit explanation of CMI computation:**\", \"We re-derive the mathematical computation process from the definition of CMI to its computable expansion.\", \"**5. Comparison with recent SOTA:**\", \"We provide additional experimental results by applying CMI on DATM across various datasets.\", \"**6. Details about the implementation of SRe$^2$L:**\", \"We demonstrate how to losslessly modify the SRe$^2$L code to implement CMI constraints.\", \"As the end of the discussion period approaches, we kindly ask if our responses have satisfactorily addressed your concerns. 
Your feedback would be greatly appreciated, and we would be delighted to engage in further discussions if needed.\", \"Sincerely,\", \"The Authors\"]}", "{\"comment\": \">**Weakness 5: Comparison with recent SOTA.**\\n\\nThanks for your suggestion. In contrast to [6] and [7], which are exclusively designed for trajectory matching, (i.e., selecting specific range of trajectories), both our proposed method and the majority baseline methods function as a versatile plugin (e.g., DREAM [8] serves as a universal solution applicable to various distillation methods). Thus, we consider applying the CMI constraint to [6] as suggested by `R#kma8`. As demonstrated in table below, our proposed approach consistently delivers substantial performance enhancements.\\n\\n| Method | CIFAR10 | | CIFAR100 | | TinyImageNet | |\\n|---------------|---------------|---------------|----------------|----------------|--------------------|--------------------|\\n| | 10 | 50 | 10 | 50 | 10 | 50 |\\n| MTT | 65.3 \\u00b1 0.4 | 71.6 \\u00b1 0.2 | 39.7 \\u00b1 0.4 | 47.7 \\u00b1 0.2 | 23.2 \\u00b1 0.2 | 28.0 \\u00b1 0.3 |\\n| **MTT+CMI** | **66.7 \\u00b1 0.3**| **72.4 \\u00b1 0.3**| **41.9 \\u00b1 0.4** | **48.8 \\u00b1 0.2** | **24.1 \\u00b1 0.3** | **28.8 \\u00b1 0.3** |\\n| DATM | 66.8 \\u00b1 0.2 | 76.1 \\u00b1 0.3 | 47.2 \\u00b1 0.4 | 54.1 \\u00b1 0.2 | 31.1 \\u00b1 0.3 | 39.7 \\u00b1 0.3 |\\n| **DATM+CMI** | **67.4 \\u00b1 0.4**| **76.6 \\u00b1 0.1**| **47.6 \\u00b1 0.3** | **55.1 \\u00b1 0.3** | **31.9 \\u00b1 0.4** | **40.6 \\u00b1 0.2** |\\n\\n>**Weakness 6: Details about the implementation of SRe$^2$L.**\\n\\nThank you for pointing out the confused expression. **Eq. 3** in our paper represents an abstraction of optimization-based dataset distillation methods, wherein current methods indirectly optimize **Eq. 2** by aligning specific information (e.g., feature distribution). As discussed in [9] and [10]. 
Instance-based methods (i.e., SRe$^2$L) still fall under this category.\n\nSRe$^2$L optimizes the synthetic dataset by minimizing the loss function $L = l(\\phi_{\\theta_{\\mathcal{T}}}(\\mathbf{s}), y)$, where $y \\in [C]$ and $l$ represents the cross-entropy loss. Due to the instance-independent characteristic of its optimization process, we shift the optimization from the inner loop to the outer loop, enabling simultaneous optimization of the entire synthetic dataset. During the optimization process, the synthetic dataset is classified based on predefined labels, which are used to compute the loss function, enabling the deployment of the CMI constraint. \n\nThe change to the pseudocode is shown below. For a fair comparison, we present the impact of altering the loop order in the table below. We attribute the performance improvement to the more precise class boundaries achieved by applying the CMI constraint to datasets generated by SRe$^2$L, a critical aspect that has also been emphasized in recent studies [10] [11] [12] [13], and our method can be even more easily applied to the recent SOTAs [12] and [13] with their class-wise supervision.\n\n| Method | 10 | 50 | 100 |\n|----------------------------|-----------|-----------|-----------|\n| SRe$^2$L | 21.3 \u00b1 0.6| 46.8 \u00b1 0.2| 52.8 \u00b1 0.3|\n| SRe$^2$L$^\\dagger$ | 21.5 \u00b1 0.3| 46.3 \u00b1 0.4| 53.4 \u00b1 0.1|\n| **SRe$^2$L + CMI** | **24.2 \u00b1 0.3**| **49.1 \u00b1 0.1**| **54.6 \u00b1 0.2**|\n\n### Original Pseudocode\n\n```markdown\nfor i from 1 to IPC:  # Outer loop optimizes synthetic dataset from 1 to IPC\n    targets = np.arange(1000)  # initial hard label\n    for j from 1 to T:  # Inner loop optimizes images from 1 to epoch T\n        Optimize S[i, :] with L\n```\n\n### Modified Pseudocode\n\n```markdown\ntargets = np.tile(np.arange(1000), (ipc, 1))  # initial hard label\nfor i from 1 to T:  # Outer loop optimizes synthetic dataset from 1 to epoch T\n    for j from 1 to IPC:  # Inner loop optimizes images from 1 to IPC\n        Optimize S[j, :] with L\n    for k from 1 to C:  # apply CMI constraint for each class\n        Optimize S[:, k] with CMI constraint\n```\n\n[6] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching. In ICLR, 2024.\n\n[7] Prioritize Alignment in Dataset Distillation. In arXiv, 2024.\n\n[8] Dream: Efficient Dataset Distillation by Representative Matching. In ICCV, 2023.\n\n[9] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching. In CVPR, 2024.\n\n[10] Elucidating the Design Space of Dataset Condensation. In NeurIPS, 2024.\n\n[11] Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator. OpenReview Submission, 2024.\n\n[12] Dataset Distillation via the Wasserstein Metric. In arXiv, 2023.\n\n[13] Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? In NeurIPS, 2024.\"}", "{\"comment\": \"Dear reviewers,\n\nThank you if you have already reviewed the author response. \n\nIf not, could you kindly review it and let the authors know if you are satisfied with their response or if you have any further questions?\n\nKind regards,\n\nYour AC\"}", "{\"comment\": \"Dear Reviewer kma8\n\nWe appreciate your response and the contribution your feedback has made to improving our work! As the end of the discussion period approaches, if all your queries have been addressed, we kindly ask you to consider raising your rating. If you still have any doubts or reservations about our work, we are more than willing to engage in further discussion with you.\n\nSincerely,\n\nThe Authors\"}", "{\"summary\": \"This paper proposes a new approach for dataset distillation by introducing a class-aware conditional mutual information (CMI) metric to address challenges in creating compact, representative synthetic datasets. 
Traditional dataset distillation methods often compress feature similarity without considering class-specific complexity, making it hard for models to generalize across different classes. This work leverages CMI as a regularization constraint to optimize synthetic datasets, improving training efficiency as well as model performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe idea of using CMI in dataset distillation to address the inherent class-aware complexity issue is interesting.\\n2.\\tThe experiments are conducted on multiple datasets and various model architectures, providing solid evidence for the method's effectiveness.\\n3.\\tThe proposed CMI constraint is a versatile, \\\"plug-and-play\\\" regularization component that can be applied to numerous dataset distillation methods, such as DSA, MTT, and IDC. This flexibility allows the approach to generalize across different scenarios and highlights its robustness.\\n4.\\tBy controlling the complexity of the synthetic dataset, the CMI-enhanced loss achieves faster convergence and reduces the number of required training iterations, which is particularly beneficial for large-scale datasets and resource-intensive models.\", \"weaknesses\": \"1.\\tWhile the paper clearly demonstrates the CMI constraint\\u2019s benefits, the method also introduces additional computational overhead, especially when dealing with high-resolution datasets. Although the authors briefly mention several strategies for mitigating this cost (e.g., reducing the frequency of CMI calculations), a more thorough discussion of balancing cost and performance would strengthen the practical feasibility.\\n2.\\tAlthough the empirical evidence is strong, the theoretical basis for CMI as a regularization term could be expanded.
Specifically, further details on how CMI inherently captures class complexity or why it is preferable over alternative complexity measures would provide deeper insight.\\n3.\\tWhile the experiments on Tiny-ImageNet and ImageNet-1K are promising, it remains unclear how the proposed method scales with even larger datasets or more complex models, such as those used in real-world applications with hundreds of classes. Additional experiments in such contexts would further show the robustness of this method.\", \"questions\": \"1.\\tThe paper is well-organized and clearly presents the methodology, results, and analyses. Figures and tables are effectively used to convey improvements and insights. However, further explanation of certain key terms, such as \\\"empirical CMI,\\\" might enhance accessibility for readers unfamiliar with the topic.\\n2.\\tThe ablation studies conducted to assess the influence of the weighting parameter on the CMI constraint are informative. Still, a broader exploration of other hyperparameters affecting CMI estimation, such as the dimensionality of feature space and network depth, could reveal potential optimizations.\\n3.\\tThe potential of CMI for real-world applications, such as federated learning or privacy-preserving tasks, is not discussed. Given the emphasis on dataset distillation's applications in these areas, an exploration of how CMI might support these domains would align well with the broader goals of dataset distillation research.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work focuses on the complexity of the synthetic dataset in dataset distillation. It proposes to apply conditional mutual information (CMI) to assess this complexity and uses this criterion as a regulariser of the existing dataset distillation methods. 
Experimental study shows that the proposed regulariser can be used as a plug-and-play component to work with various distillation losses to achieve improved performance and faster convergence.\\n\\nReviewers comment that the proposed method is interesting, versatile and effective; the experiments are comprehensive and solid; this work is innovative and makes significant contribution; the review is thorough; and it is technically sound. Meanwhile, the reviewers raise the issues related to the extra computational overhead, the theoretical basis of using CMI as a regulariser, the scalability with respect to dataset size and model complexity, experimenting with new methods, the novelty of this work, the discussion of limitation, the difference from an existing work on CMI, and the lack of explanation. \\n\\nThe authors provide a response to each of the raised issues by further clarification, additional experiments, and pseudo code. The rebuttal is overall detailed and effective. Reviewer oMQb explicitly replies that the rebuttal addresses most of the concerns they raised. However, Reviewer Zh5i still has concerns related to the difference of this work from an existing work on CMI and therefore the adequacy of the novelty. The final ratings are 6, 6, 6 and 3 (Reviewer Zh5i). \\n\\nAC carefully reviews the submission, the comments, the rebuttals, the discussion, and the message from the authors. AC generally concurs with the reviewers' comments on this work. In addition, although this work adopts the CMI criterion from the literature and therefore does not have significant contribution in this regard, it does provide new insights on the complexity of the synthetic dataset and innovatively utilises CMI to consider this information in the optimisation of dataset distillation. Also, experimental study verifies the effectiveness and the generality of this work. 
In the follow-up discussion between reviewers and AC, Reviewer Zh5i indicates that they will not oppose acceptance if the majority of reviewers strongly advocate for it. \\n\\nTaking all the factors into account, AC would like to recommend this work for acceptance. Meanwhile, the authors are strongly suggested to further improve the clarity of this work, provide more explanation, and explicitly list the differences from the work raised in the comments of Reviewer Zh5i.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raise the issues related to the extra computational overhead, the theoretical basis of using CMI as a regulariser, the scalability with respect to dataset size and model complexity, experimenting with new methods, the novelty of this work, the discussion of limitation, the difference from an existing work on CMI, and the lack of explanation. The authors provide a response to each of the raised issues by further clarification, additional experiments, and pseudo code. The rebuttal is overall detailed and effective. Reviewer oMQb explicitly replies that the rebuttal addresses most of the concerns they raised. However, Reviewer Zh5i still has concerns related to the difference of this work from an existing work on CMI and therefore the adequacy of the novelty. The final ratings are 6, 6, 6 and 3 (Reviewer Zh5i).\\n\\nAC carefully reviews the submission, the comments, the rebuttals, the discussion, and the message from the authors. AC generally concurs with the reviewers' comments on this work. In addition, although this work adopts the CMI criterion from the literature and therefore does not have significant contribution in this regard, it does provide new insights on the complexity of the synthetic dataset and innovatively utilises CMI to consider this information in the optimisation of dataset distillation. Also, experimental study verifies the effectiveness and the generality of this work. 
In the follow-up discussion between reviewers and AC, Reviewer Zh5i indicates that they will not oppose acceptance if the majority of reviewers strongly advocate for it. Taking all the factors into account, AC would like to recommend this work for acceptance.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nIn response to your valuable feedback, we have provided clearer definitions, conducted additional experiments, and presented new results. If you still have any questions, we are eager to hear details. We encourage you to tell us any questions you still have, and we promise to address them in detail. Thank you.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": [\"Dear Reviewer oMQb\", \"Thank you once again for dedicating your valuable time to reviewing our paper and providing constructive comments! Considering the limited time available and to save the reviewer's time, we summarize our responses here.\", \"**1. Balance between training cost and performance:**\", \"We provide an additional experiment to reduce the frequency of CMI computation.\", \"**2. Comparison with other complexity measure:**\", \"We enumerated various information metrics and analyzed their theoretical shortcomings.\", \"We provided a practical comparison of CMI constraint with other applicable information metrics.\", \"**3. Experimental results with larger datasets and more complex models:**\", \"We provide additional experimental results on ImageNet-1/21K and ResNet-18/50/101.\", \"**4. Definition of empirical CMI:**\", \"We present a more intuitive explanation for empirical CMI.\", \"**5. Ablation study of diverse model architectures:**\", \"We employ various proxy models on different distillation methods to compute and minimize CMI.\", \"we demonstrate the generalization and robustness of the CMI constraint through experimental results.\", \"**6. 
Application on downstream tasks:**\", \"We validate the effectiveness of CMI in enhancing downstream tasks in federated learning scenarios.\", \"We confirm the benefits of CMI for downstream tasks in continual learning settings.\", \"As the end of the discussion period approaches, we kindly ask if our responses have satisfactorily addressed your concerns. Your feedback would be greatly appreciated, and we would be delighted to engage in further discussions if needed.\", \"Sincerely,\", \"The Authors\"]}", "{\"comment\": \"Dear Reviewers,\\n\\nIf you haven\\u2019t already, could you kindly review the author response and let the authors know if you are satisfied with it or if you have any additional questions? Your contribution is greatly appreciated.\\n\\nThank you!\\n\\nKind regards,\\n\\nYour AC\"}", "{\"title\": \"Please review author response\", \"comment\": \"Dear reviewer,\\n\\nCould you review the author response and let them know if you are satisfied with it or if you have any additional questions?\\n\\nKind regards,\\n\\nYour AC\"}", "{\"comment\": \"Thank you for dedicating your time and effort to reviewing our paper. We are deeply encouraged by your positive comments and recognition of our work. Below, we provide detailed responses to address your concerns.\\n\\n>**Question 1: Limitation and future work.**\\n\\nThank you for your positive comments and valuable suggestions. Here, we outline some limitations of our proposed method; we will add the corresponding discussion to the final version of the manuscript and include it in the future work section.\\n\\n**Proxy Model.** Computing CMI requires a proxy model to provide $\\\\hat{Z}$ when $\\\\mathcal{S}$ is used as the input, and selecting its architecture often relies on heuristic methods.\\n\\n**IPC=1.** Minimizing CMI is equivalent to reducing the Kullback-Leibler (KL) divergence $ D(\\\\cdot \\\\,\\\\|\\\\, \\\\cdot) $ between $ P_{\\\\hat{Z} \\\\mid \\\\mathbf{s}} $ and $ P_{\\\\hat{Z} \\\\mid y} $; hence, when IPC is 1, the CMI value is 0.
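In standard notation, this IPC=1 observation follows from writing the CMI as a class-conditional KL divergence. The sketch below uses our notation ($\hat{Z}$ is the proxy model's output for a synthetic sample $\mathbf{s}$ with label $y$) and is the usual decomposition of conditional mutual information, not necessarily the exact estimator used in the paper:

$$
I(\mathbf{s}; \hat{Z} \mid y)
= \mathbb{E}_{y}\,\mathbb{E}_{\mathbf{s} \mid y}\!\left[ D\!\left( P_{\hat{Z} \mid \mathbf{s}} \,\middle\|\, P_{\hat{Z} \mid y} \right) \right],
\qquad
P_{\hat{Z} \mid y} = \mathbb{E}_{\mathbf{s} \mid y}\!\left[ P_{\hat{Z} \mid \mathbf{s}} \right].
$$

With IPC = 1 there is a single $\mathbf{s}$ per class, so $P_{\hat{Z} \mid y} = P_{\hat{Z} \mid \mathbf{s}}$ and the KL term, and hence the CMI, is exactly zero.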
\\n\\nHere, we suggest a future direction to better utilize the proxy model: mixing proxy models, as discussed in [1], could potentially provide a more informative CMI constraint. For the IPC=1 setting, we suggest using the multi-formation technique introduced in [2] and [3], or other parameterization techniques. Corresponding experimental results are shown in **Table 2** in **Sec 5.2**.\\n\\n[1] Four eyes see more than two: Dataset Distillation with Mixture-of-Experts. OpenReview Submission, 2024.\\n\\n[2] Dataset Condensation via Efficient Synthetic-Data Parameterization. In ICML, 2022.\\n\\n[3] Improved Distribution Matching for Dataset Condensation. In CVPR, 2023.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThanks again for providing the constructive reviews. We have tried our best to address your concerns point by point so that you can have a better understanding of our work.\\n\\nHowever, since the discussion period is going to end in one day, we really want to hear from you and see whether we have addressed your concerns. Please participate in the discussion, because your feedback is important for us to improve our work. If all your queries have been addressed, we kindly ask you to consider raising your rating. If you still have other questions, we are more than willing to explain in detail.\\n\\nLooking forward to your reply.\\n\\nSincerely,\\n\\nThe Authors\"}" ] }
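As a concrete illustration of the empirical CMI discussed in the thread above: it can be read as the mean KL divergence between each synthetic sample's predictive distribution and the average predictive distribution of its class. The sketch below uses a hypothetical helper name and toy softmax outputs, mirroring the KL-based definition rather than the paper's exact estimator:

```python
import numpy as np

def empirical_cmi(probs, labels):
    """Empirical CMI I(S; Z_hat | Y): mean KL divergence between each
    sample's predictive distribution P(Z_hat | s) and the centroid
    P(Z_hat | y) of its class."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        p_c = probs[labels == c]                    # distributions for class c
        p_bar = p_c.mean(axis=0)                    # class centroid P(Z_hat | y=c)
        total += (p_c * np.log(p_c / p_bar)).sum()  # sum of per-sample KL terms
    return total / len(labels)

# With one sample per class (IPC = 1), each centroid equals the single
# sample's distribution, so the CMI collapses to zero -- consistent with
# the limitation noted in the rebuttal above.
print(empirical_cmi([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]], [0, 1]))  # -> 0.0
print(empirical_cmi([[0.9, 0.1], [0.5, 0.5]], [0, 0]))            # positive
```

Lowering this quantity pulls same-class predictive distributions toward their class centroid, which is the intuition behind using CMI as a class-aware complexity regularizer.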
0nJt9aVGtl
WaveDiffusion: Exploring Full Waveform Inversion via Joint Diffusion in the Latent Space
[ "Hanchen Wang", "Yinpeng Chen", "Jeeun Kang", "Yixuan Wu", "Young Jin Kim", "Youzuo Lin" ]
Full Waveform Inversion (FWI) is a vital technique for reconstructing high-resolution subsurface velocity maps from seismic waveform data, governed by partial differential equations (PDEs) that model wave propagation. Traditional machine learning approaches typically map seismic data to velocity maps by encoding seismic waveforms into latent embeddings and decoding them into velocity maps. In this paper, we introduce a novel framework that reframes FWI as a joint diffusion process in a shared latent space, bridging seismic waveform data and velocity maps. Our approach has two key components: first, we merge the bottlenecks of two separate autoencoders—one for seismic data and one for velocity maps—into a unified latent space using vector quantization to establish a shared codebook. Second, we train a diffusion model in this latent space, enabling the simultaneous generation of seismic and velocity map pairs by sampling and denoising the latent representations, followed by decoding each modality with its respective decoder. Remarkably, our jointly generated seismic-velocity pairs approximately satisfy the governing PDE without any additional constraint, offering a new geometric interpretation of FWI. The diffusion process learns to score the latent space according to its deviation from the PDE, with higher scores representing smaller deviations from the true solutions. By following this diffusion process, the model traces a path from random initialization to a valid solution of the governing PDE. Our experiments on the OpenFWI dataset demonstrate that the generated seismic and velocity map pairs not only exhibit high fidelity and diversity but also adhere to the physical constraints imposed by the governing PDE.
[ "Full waveform inversion", "Diffusion model", "Partial differential equation" ]
Reject
https://openreview.net/pdf?id=0nJt9aVGtl
https://openreview.net/forum?id=0nJt9aVGtl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCeSeBm4Fr", "z9os44DrM7", "z80uLi9Xdg", "xb5eyLwzSc", "wcQ4QoCEt2", "w61LRpJdhQ", "s7vriM8W6l", "pUzf1TJ4Pf", "mWnJ6VppXa", "mF92v9QRcH", "lzxdNbzwe0", "k5GuPVjOjG", "jfXXaxTSgM", "ikCKODaKGj", "d4RSv9gnB7", "brGy4n7XcU", "XQcetreyiX", "VHmSpD9x49", "UnsdkJR7NY", "UIcTnqfigG", "UAY5VlsBF5", "U76p6QSser", "RTfxl7AwIz", "MDVqaq0ZrU", "LbfJkmBoht", "LYv9zQ6i2W", "Jwx0mC3FrN", "JcMfwVNhNB", "HXIZwUjkD6", "FWAEN1HwXp", "EUFWJA1YGK", "DZUInZeLyj", "D36KATL0t7", "7apiECf5rs", "6ScHk1ihf3", "53qdIn6lsO", "51SxPZqFog", "4MJisEkY70", "2pvRRC1wgy", "2mooa1h5Oz", "2kFvh5Om65", "2aBXSVx7iV" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733032858597, 1732042535646, 1732300733053, 1732326289970, 1733120549490, 1732326101280, 1732044277437, 1733163178408, 1733120531700, 1733162595684, 1730541526227, 1732646037554, 1732768456810, 1732242945629, 1737524169106, 1732403026011, 1732585548481, 1732690724365, 1732403766883, 1732513472393, 1732561858537, 1734818135917, 1733032814561, 1732206649936, 1730666321790, 1732242962549, 1732768417813, 1732622732056, 1732303318600, 1732577102850, 1732228755909, 1730429004519, 1732300851790, 1732042547322, 1732325585176, 
1732403296069, 1733162568089, 1732517212683, 1729725381192, 1732439021260, 1732512759205, 1732042656412 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_A1pp" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_A1pp" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Area_Chair_HaVE" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_STHJ" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_1caw" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_A1pp" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_rGTJ" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_1caw" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_STHJ" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_rGTJ" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_rGTJ" ], [ "ICLR.cc/2025/Conference/Submission12144/Reviewer_A1pp" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ], [ "ICLR.cc/2025/Conference/Submission12144/Authors" ] ], "structured_content_str": [ "{\"title\": \"Third Reminder to Reviewer A1pp: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer A1pp,\\n\\nWe hope this message finds you well. \\n\\nWe are reaching out to kindly follow up on the latest revised version of our manuscript, uploaded on November 23. We would greatly appreciate any additional comments or suggestions you may have on the current version.\\n\\nThank you for your time and thoughtful feedback. We look forward to hearing from you.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Addressing Practical Application, Comparisons, and Model Extensions for FWI Using Joint Diffusion\", \"comment\": \"**Reviewer 1caw**\\n\\n**Weakness 1: What to do when we are given with some specific seismic data?**\\n\\nThank you for highlighting this practical consideration. To address inversion with specific seismic data, we have implemented an adaptable \\\"**one-in-two-out**\\\" configuration within the autoencoder component of our framework. In this setup, only seismic data is input into the encoder, generating a latent representation solely derived from the seismic input. This latent vector is then refined through the joint diffusion process and passed through individual VQ layers and decoders to produce both seismic and velocity outputs.\\n\\nThis approach demonstrates the model's applicability to typical FWI tasks, effectively generating velocity maps from seismic data. 
Example results and statistical evaluations can be found at http://bit.ly/4eBIJX2 and https://bit.ly/4fyN6DB, respectively. Our joint diffusion model achieves an SSIM of 0.6290, comparable to benchmarks such as InversionNet (SSIM 0.6727) and UPFWI (SSIM 0.6614). While VelocityGAN achieves slightly lower RMSE and MAE, our model remains at a comparable level of structural accuracy, which is critical for seismic applications.\\n\\n**Weakness 2: Section 4.2.3 comparing InversionNet lacks detail and context.**\\n\\nThank you for your feedback. Section 4.2.3 has been expanded to include detailed context and quantitative comparisons. Results show that WaveDiffusion achieves structural accuracy (SSIM 0.6290) reasonably comparable to InversionNet and other benchmarks.\\n\\n**Weakness 3: Geophysical terminology is unclear, and \\\"acoustic\\\" and \\\"seismic\\\" are incorrectly used interchangeably.**\\n\\nWe agree that the terms \\\"acoustic\\\" and \\\"seismic\\\" need clarification. In Section 2, we will explicitly note that \\\"seismic data\\\" refers to \\\"acoustic seismic data\\\" in this study, simplifying wave phenomena for modeling purposes.\\n\\n**Question 1: How were the seismic data and velocity models preprocessed before training?**\\n\\nSeismic data and velocity models were resized from [5,70,1000]/[1,70,70] to [3,64,1024]/[1,64,64] (channel, height, depth) for consistency with our architecture. Both were normalized to [-1,1] to ensure compatibility and stability. This will be clarified in the revised manuscript.\\n\\n**Question 2: How much training time was required in terms of CPU/GPU hours?**\\n\\nThank you for your question. Training required approximately 1000 GPU hours for the autoencoder (per dataset) and 2000 GPU hours for the joint diffusion model.
This information will be added to the manuscript.\\n\\n**Question 3: Can the algorithm be extended to 3D data, and what are the computational implications?**\\n\\nThank you for this insightful question. Adapting our approach to 3D data is indeed feasible, with some modifications. For instance, the autoencoder module would require a more sophisticated 3D architecture to handle volumetric data, while the latent diffusion process should remain fundamentally similar. A simpler alternative would involve treating the 3D velocity cube as a collection of 2D slices along one horizontal dimension, enabling the use of our current network architecture, like the Appendix M experiment in the OpenFWI paper, though this may reduce volumetric coherence. Computational complexity would increase significantly due to the higher dimensionality. We will add this discussion to the manuscript and explore it as future work.\\n\\n**Question 4: Does the model preserve symmetry in velocity maps when the input seismic data is symmetric?**\\n\\nThank you for this insightful question. To address this, we conducted experiments using the FlatVel_B (FVB) subset, which features symmetric data configurations with pure flat velocity layers. The results, provided at https://bit.ly/3YTjslu, confirm that our WaveDiffusion model successfully preserves symmetry in the generated velocity maps when provided with symmetric seismic data. This demonstrates that the model respects the geometric properties of the input data and maintains consistency in symmetric inversion scenarios. We will include these results and the corresponding discussion in the revised manuscript.\\n\\n**Question 5: How does the algorithm compare to baselines? Which is better?**\\n\\nThank you for your comment. Our primary contribution is not to improve baseline FWI solutions but to introduce a novel generative framework for FWI through the use of joint diffusion models. 
To the best of our knowledge, there is no prior work that adopts such a generative perspective for FWI, making direct comparisons to baseline methods with similar generative models challenging. Our goal is to provide a new perspective on FWI by exploring its potential as a generative process. However, results from the one-in-two-out experiment (http://bit.ly/4eBIJX2) show that WaveDiffusion achieves an SSIM of 0.6290, reasonably comparable to InversionNet (0.6727) and others in structural accuracy. These findings will be emphasized in the revised manuscript.\"}", "{\"title\": \"Thank You for Your Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your thoughtful response and for taking the time to engage with our paper. We greatly appreciate your detailed feedback and the effort you\\u2019ve put into reviewing our work.\\n\\nIf there are any additional suggestions or areas where you believe we can further improve the manuscript, we would be most grateful for your guidance.\\n\\nThank you once again for your valuable insights.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Second Reminder: Follow-up on Rebuttal Responses\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your initial feedback on our manuscript. While we did not receive further responses during the discussion period, we have carefully considered your comments and incorporated significant changes to address your concerns. Your observations helped us refine our narrative and ensure that our work emphasizes its core contributions. Thus, we have uploaded a revised version manuscript according to your valuable comments.\", \"below_are_the_key_revisions_specifically_addressing_your_feedback\": \"**Section 3.2 (Clarification of the Role of Dual Autoencoder):**\\n\\nIn response to your feedback, we have revised Section 3.2 to make it clear that the dual autoencoder serves only as a preliminary step in our framework. 
Its purpose is to establish a shared latent space for the joint diffusion process, but it is not the primary focus or contribution of our work. Additionally, we have cited the suggested references to acknowledge prior work on dual autoencoders and distinguish our approach from these methods.\\n\\n**Highlighting the Core Contribution \\u2013 Joint Diffusion Process:**\\n\\nAs you noted, the novelty of our work lies in the joint diffusion process. To address this, we have restructured the manuscript to focus on Section 3.3 and the subsequent experiments, which detail how our diffusion process refines seismic-velocity pairs in a shared latent space to adhere to the governing PDE. This approach provides a novel generative perspective on FWI and differentiates our work from others that primarily focus on autoencoder architectures.\\n\\n**Strengthened Experimental Analysis:**\\n\\nTo further demonstrate the capabilities of our method, we have added experiments that showcase the versatility and practicality of the joint diffusion process. This includes a new experiment in Section 4.6, where we demonstrate how the framework performs conventional FWI with only seismic data as input. These results illustrate the model\\u2019s ability to address real-world FWI challenges while emphasizing the diffusion process as the core innovation.\\n\\nWe are confident that these revisions address your concerns and highlight the unique contributions of our work. If there are still areas you feel require improvement, we would greatly appreciate further feedback. 
Your input has been instrumental in shaping our revisions, and we remain committed to presenting the best version of our research.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Fourth Reminder to Reviewer rGTJ: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nWe hope this message finds you well.\\n\\nWe are kindly following up to ask if you have any additional comments on the latest revised version of our manuscript, uploaded on November 23. Your feedback is very important to us as we strive to improve the quality and clarity of our work.\\n\\nThank you again for your time and effort, and we look forward to any further suggestions you may have.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed and thoughtful feedback on our manuscript. We sincerely value this opportunity to contribute to the conference and are committed to improving our work to meet the high standards of innovation and clarity expected. Thus, we have uploaded a **revised version** of our manuscript. Your critique has been invaluable in guiding our revisions, and we are confident that the updates made reflect our commitment to addressing your concerns.\", \"below_are_the_key_updates_specific_to_your_suggestions\": \"**Section 4.6 on page 9 (WaveDiffusion for conventional FWI):**\\n\\nWe have added a new experiment demonstrating how our WaveDiffusion model can perform conventional FWI when only seismic data is provided. This addresses the concern about the inversion process (i.e., obtaining a velocity model given seismic data). The experiment uses a \\u201cone-in-two-out\\u201d configuration of the autoencoder, where only seismic data is input, and the diffusion process generates both seismic and velocity outputs. 
Results show that our joint diffusion approach effectively reconstructs velocity maps from seismic inputs, providing a clear demonstration of our inversion strategy.\\n\\n**Section 3.2 on page 4 (Clarification of the Role of Dual Autoencoder):**\\n\\nWe have explicitly cited the suggested references and clarified that the dual autoencoder is not the primary contribution of our paper. Instead, it is a preliminary step to establish the shared latent space required for the diffusion process. The true novelty lies in the joint diffusion process, which refines latent representations to adhere to the governing PDE. This distinction is clearly stated in the revised manuscript, with Section 3.3 and the subsequent experiments highlighting the core contributions.\\n\\n**Highlighting the Novelty of Diffusion and Inversion:**\\n\\nThroughout the paper, we have emphasized that our main contribution lies in the diffusion process and its application to solving FWI problems in a novel generative framework. By restructuring and refocusing key sections, we have aligned the narrative to better reflect the innovative aspects of the joint diffusion process and its inversion capabilities.\\n\\nWe genuinely believe that these changes align our work more closely with your suggestions and enhance its contribution to the field. We are excited about the opportunity to present our research at this conference, as we feel it introduces a unique and valuable perspective on FWI.\\n\\nIf there are still areas that you find unsatisfactory or needing improvement, we would deeply appreciate further guidance. Your feedback is critical to helping us refine our work and make it as impactful as possible.\\n\\nThank you again for your time and effort in reviewing our submission. 
We look forward to your insights.\\n\\nBest regards,\\n\\nThe Authors\", \"title\": \"Paper revision to address your concerns on novelty and inversion process\"}", "{\"comment\": \"**Reviewer STHJ**\\n\\n**Weakness 1: Lack of comparison to conditional generative methods.**\\n\\nThank you for your suggestion. Our method is a joint diffusion generation framework without conditioning, distinct from conditional generative approaches like [1], which integrate finite difference modeling. Our focus is to find solutions in a shared latent space by scoring deviations from the wave equation, avoiding iterative forward modeling steps. Thus, a direct comparison to [1] is not feasible as the objectives and methodologies differ fundamentally. Our work aims to provide a novel perspective on FWI, emphasizing joint diffusion for physical consistency, rather than competing as a conditional generative model. We will clarify these distinctions in the revised manuscript.\\n\\n**Weakness 2: Need for additional reconstruction methods.**\\n\\nThank you for this valuable suggestion. We agree that evaluating with alternative reconstruction methods strengthens our analysis. We expanded the experiments in Section 4.2.3 to include UPFWI and VelocityGAN alongside InversionNet. The results, detailed at https://bit.ly/3YYfTua, show slight performance improvements (SSIM +~2%) with these solvers, confirming our primary conclusion that generated samples enhance end-to-end mapping networks. We will integrate these results into the revised manuscript.\\n\\n**Weakness 3: Realism of experiments.**\\n\\nThank you for your observation. InversionNet is used to evaluate our generated data's quality rather than outperform existing FWI baselines. For practical applications, we propose denoising as a key use case for our approach, particularly when dealing with noisy or incomplete data. 
By leveraging the joint diffusion process, our model can refine and reconstruct physically consistent seismic-velocity pairs, even under challenging data conditions. This will be emphasized in the revised manuscript.\\n\\n**Question 1: Comparison with conditional generative models.**\\n\\nThank you for this insightful question. As noted in *Weakness 1*, our joint diffusion framework differs fundamentally from conditional generative models like [1], focusing on discovering solution spaces in a shared latent space. Conditional approaches rely on specific conditions and finite difference modeling. Given these fundamental differences, a direct comparison would not be equitable. Moreover, our primary goal is to introduce a novel perspective for FWI, focusing on joint diffusion's potential for refining data pairs in a physically consistent manner, rather than proposing a SOTA FWI solution. We will clarify these distinctions further in the revised manuscript to address this concern comprehensively.\\n\\n**Question 2: Alternative reconstruction methods.**\\n\\nAs addressed in *Weakness 2*, we evaluated additional algorithms (UPFWI and VelocityGAN), confirming the robustness of our generated samples. The results are detailed at https://bit.ly/3YYfTua and will be added to the manuscript.\\n\\n**Question 3: Dataset overlap in training and evaluation.**\\n\\nThank you for this question. We added an experiment demonstrating the **denoising capabilities** of our joint diffusion model. By adding Gaussian noise to both seismic data and velocity maps, we observed that direct autoencoder reconstruction from noisy latent vectors produced suboptimal outputs, with visible noise and distortions. However, applying the joint diffusion process refined the latent vectors, yielding clean and consistent reconstructions, as shown at https://bit.ly/4eDJAH0. This highlights our model's practical application for noisy or incomplete data, addressing real-world challenges. 
As mentioned in *Weakness 3*, InversionNet primarily serves as a quality evaluation tool, not a direct benchmark, in our study. \\n\\nWe will clarify these points and include the denoising experiment in the revised manuscript.\\n\\n**Question 4: Realism of the Gen+1% setup.**\\n\\nThank you for your insightful question. The Gen+1% setup evaluates the model\\u2019s ability to enhance a small dataset using supplementary in-distribution data, providing a controlled scenario for assessing performance. We agree that training on two subsets and fine-tuning with 1% of a third introduces a distribution shift, presenting an OOD challenge.\\n\\nWhile our study primarily focuses on in-distribution scenarios, reflecting many real-world applications, addressing OOD challenges is an exciting direction for future work. Techniques such as fine-tuning, domain adaptation, or transfer learning could be explored to extend our framework for handling unseen data distributions. Our joint diffusion model inherently refines latent representations toward physically valid solutions, providing a strong foundation for tackling OOD scenarios. These limitations and future directions will be highlighted in the revised manuscript.\"}", "{\"comment\": \"I appreciate your efforts in addressing my concerns. Nevertheless, I will retain my original assessment.\"}", "{\"title\": \"Fourth Reminder to Reviewer A1pp: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer A1pp,\\n\\nWe hope this message finds you well.\\n\\nWe are reaching out to kindly follow up on the latest revised version of our manuscript, uploaded on November 23. We would greatly appreciate any additional comments or suggestions you may have on the current version.\\n\\nThank you for your time and thoughtful feedback. 
We look forward to hearing from you.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Last Day Reminder to Reviewer rGTJ: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nWe hope this message finds you well.\\n\\nWe are reaching out to kindly remind you that today is the **last day** of the discussion session. We would like to kindly ask you for your follow-up on the latest revised version of our manuscript, uploaded on November 23. We would greatly appreciate any additional comments or suggestions you may have on the current version.\\n\\nThank you for your time and thoughtful feedback. We look forward to hearing from you.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"summary\": \"The paper introduces a new framework for Full Waveform Inversion (FWI) that uses a joint diffusion process in a shared latent space. This approach merges the bottlenecks of two separate autoencoders (one for seismic data and one for velocity maps) into a unified latent space.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is well written, and the diffusion approach in the latent space is an interesting extension to dual autoencoder approaches, with convincing results to support this research.\", \"weaknesses\": \"My major problem with this manuscript is that the main approach of generating two joint autoencoders is not novel. A similar approach on dual autoencoders, including similar experiments, was proposed and published before this submission (https://arxiv.org/pdf/2305.13314), and another publication on dual autoencoders can be found at https://arxiv.org/pdf/2405.13220. These contributions are neither acknowledged nor cited. 
The remaining novelty is the diffusion process within the latent spaces which is by itself an interesting idea and should have been stated as the contribution of this manuscript.\", \"questions\": \"Please address the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up to Reviewer A1pp\", \"comment\": \"Dear Reviewer A1pp,\\n\\nThank you for your feedback and for sharing your perspective. We greatly appreciate the time and effort you have devoted to reviewing our manuscript. While we understand your decision to maintain your score, we would like to kindly ask if you have any comments or suggestions regarding the latest revised version of the manuscript.\\n\\nYour expertise is invaluable to us, and we are committed to improving the quality and clarity of our work. If there are specific aspects of the revised manuscript that you believe could be further refined or strengthened, we would be deeply grateful for your insights. Your guidance will help us ensure that the manuscript meets the highest standards of scientific rigor and presentation.\\n\\nThank you again for your thoughtful engagement throughout this process. We look forward to hearing any additional thoughts you may have.\\n\\nBest regards, \\n\\nThe authors\"}", "{\"title\": \"Second Reminder to Reviewer A1pp\", \"comment\": \"Dear Reviewer A1pp,\\n\\nWe hope this message finds you well. We are writing to kindly follow up on the latest revised version of our manuscript, which was uploaded on November 23. The revisions were made with great care to address your earlier comments, particularly by highlighting the novelty of the joint diffusion process and refining the focus of the manuscript.\\n\\nWhile we understand your decision to maintain your score, your input on the current version of the manuscript is incredibly valuable to us. 
If you have any further comments or suggestions, we would be most grateful to receive your guidance on how to enhance the quality and clarity of the paper.\\n\\nThank you again for your time and thoughtful feedback. We deeply appreciate your engagement and look forward to any additional input you may have.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Reminder: Follow-up on Rebuttal Responses\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We wanted to kindly remind you that the author-reviewer discussion period is ongoing and that we posted our responses and revisions over two days ago. Your thoughtful feedback has been immensely valuable in shaping our work, and we greatly appreciate your time and effort in reviewing our manuscript.\\n\\nWe have carefully addressed your concerns and answered your questions. If you have any further questions or would like clarification on our responses, we are here and ready to address them promptly. Your input is crucial for refining and improving our manuscript, and we deeply value your insights.\\n\\nWe understand your time is valuable, and we sincerely thank you for your dedication to this process. Should you have any additional thoughts or concerns, please do not hesitate to reach out.\\n\\nThank you again for your attention and support.\\n\\nBest regards,\\n-Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Additional Experiment: Handling Noisy Seismic Data\", \"comment\": \"Dear reviewers,\\n\\nWe would like to share a new discovery from a new experiment that emerged during the course of our revisions: the ***robustness*** of *WaveDiffusion* in ***handling noisy input seismic data in FWI***. 
While this was not explicitly raised in the initial review, we believe this experiment adds significant value to the paper and further demonstrates the versatility of our method.\\n\\nIn this experiment, we tested *WaveDiffusion*, InversionNet, and VelocityGAN on seismic data corrupted with Gaussian noise (mean=0, std=0.05), a realistic scenario often encountered in field data acquisition. Our results reveal that *WaveDiffusion* robustly refines the noisy seismic data and accurately inverts the corresponding velocity maps, achieving the lowest MAE (0.2227) and RMSE (0.3776) and the highest SSIM (0.6142) among the three methods. Neither InversionNet nor VelocityGAN, which rely on direct image-to-image mapping, was able to handle the noisy inputs effectively.\\n\\nThe visual comparisons are shown in the figure at https://bit.ly/3V5k0Uc, and the quantitative results are detailed in the table at https://bit.ly/4f0rAqB. The new experiment and results have been added to the latest revised manuscript in **Section A.8 on page 19**.\\n\\nDespite being trained only on clean seismic data, *WaveDiffusion* demonstrates its ability to refine noisy latent representations, providing both noise-free seismic data and high-fidelity velocity maps. This capability further emphasizes the strength of the joint diffusion process in real-world scenarios.\\n\\nWe believe this additional evidence underscores the potential of *WaveDiffusion* as a robust and practical tool for FWI tasks, extending its applicability to challenging noisy data conditions.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Kind Reminder to Reviewer A1pp\", \"comment\": \"Dear Reviewer A1pp,\\n\\nWe hope this message finds you well. Thank you once again for your previous comments and feedback on our manuscript. 
In our last response, we detailed the revisions and additions made to the paper to address the concerns you raised, particularly highlighting the joint diffusion process as the primary contribution and providing extensive supporting experiments and analyses.\\n\\nWe kindly wanted to follow up to ask whether the current revised manuscript meets your expectations. If you have any further suggestions or specific feedback on the revised manuscript, we would be grateful. Your expertise and insights are invaluable to us, and we remain committed to addressing any remaining issues to improve the clarity, focus, and impact of our work.\\n\\nIf there are particular aspects of the manuscript that you feel could be enhanced further, we would be deeply grateful for your guidance. Your input plays a crucial role in ensuring that our contribution is as robust and meaningful as possible.\\n\\nThank you again for your time and dedication to reviewing our work.\\n\\nBest regards, \\n\\nThe authors\"}", "{\"title\": \"Reminder to Reviewer rGTJ for Follow-Up\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nThank you for your feedback and engagement with our manuscript. We deeply value the time and effort you have spent reviewing our work. While we understand your decision to maintain your score, we would like to kindly ask for any further comments or suggestions you may have regarding the **latest revised version of the manuscript**.\\n\\nWe have worked diligently to address the concerns raised in your earlier comments and made significant revisions to clarify and emphasize the novelty of the joint diffusion process, its applications, and its analysis. Your expertise is invaluable to us, and we are committed to further improving the manuscript based on your insights.\\n\\nIf there are specific aspects of the revised manuscript that you believe could benefit from additional refinement or focus, we would greatly appreciate your guidance. 
Your feedback will help us ensure the manuscript is of the highest quality and effectively communicates its contributions to the field.\\n\\nThank you again for your thoughtful review and input. We look forward to hearing any additional suggestions you may have.\\n\\nBest regards, \\nThe authors\"}", "{\"title\": \"Extra experiment to address your concerns about novelty and the inversion process\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. As part of our continued efforts to address all reviewer feedback comprehensively, we recently added a new experiment to the revised manuscript, showcasing the robustness of *WaveDiffusion* in handling noisy input seismic data.\\n\\nThis new experiment (Appendix A.8) highlights the capability of *WaveDiffusion* to invert seismic data corrupted with Gaussian noise, recovering both high-quality velocity maps and noise-free seismic data. Notably, this was achieved without additional training on noisy datasets. In contrast, baseline methods like VelocityGAN and InversionNet struggled significantly under the same conditions.\\n\\nThe results are detailed in the revised manuscript, and visual and quantitative comparisons are available via the following links: the visual comparisons are shown in the figure at https://bit.ly/3V5k0Uc, and the quantitative results are detailed in the table at https://bit.ly/4f0rAqB. The new experiment and results have been added to the latest revised manuscript in Section A.8 on page 19.\\n\\nWe believe this experiment further underscores the versatility and real-world applicability of *WaveDiffusion*, addressing scenarios where data noise is a common challenge in FWI tasks.\\n\\nIf you have any additional suggestions or questions about this experiment, we would be delighted to address them. Thank you for your time and valuable insights, which continue to guide the improvements to our work. 
If there are still areas that you find unsatisfactory or needing improvement, we would deeply appreciate further guidance. Your feedback is critical to helping us refine our work and make it as impactful as possible.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Response to Your Feedback and Request for Further Guidance\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nThank you for taking the time to review our manuscript and provide your detailed feedback. We deeply appreciate your insights, which have been invaluable in shaping the revised version of our paper (uploaded on Nov. 23). We have carefully addressed your concerns and made significant revisions to the manuscript to clarify the novelty of our work and strengthen its focus on the diffusion process and inversion strategy. Below, we outline your original comments, our responses, and the corresponding revisions. We kindly ask for your feedback on whether these revisions address your concerns, and if not, how we can further improve the paper.\\n\\n**Your Comment: The novelty in the diffusion process (and not in the autoencoder) should be highlighted.**\\n\\n**Our Response:** We agree with your suggestion that the diffusion process is the key novelty of our work. In the revised manuscript, we have shifted the emphasis to highlight the joint diffusion process as the central contribution. 
The dual autoencoder is explicitly described as a supporting step to establish a shared latent space, rather than the primary focus.\\n\\n**Our Revision to the Manuscript:** ***Similar to the response to the reviewer A1pp***, the novelty and importance of the joint diffusion process are emphasized throughout the paper, particularly in:\\n\\n**Section 3.3:** The **joint diffusion process** is presented as the core innovation, enabling the model to refine latent representations in a shared space to adhere to physical constraints governed by PDEs.\\n\\n**Section 4.3\\u20134.6:** Experimental results demonstrate the efficacy and uniqueness of the **joint diffusion framework**. For instance, Section 4.6 applies the **joint diffusion model** to solve a conventional FWI problem where only seismic data is provided.\\n\\n**Appendices A.2\\u2013A.8:** Additional experiments highlight the robustness, physical consistency, and versatility of the **joint diffusion process** across various scenarios.\\n\\nThese revisions ensure that the **joint diffusion process** is firmly presented as the primary contribution of the paper, minimizing the role of the autoencoder, which serves only as a means to facilitate the joint diffusion.\\n\\n**Your Comment: The inversion process (getting a model given d) needs more focus.**\\n\\n**Our Response:** We recognize the importance of addressing this aspect and have conducted additional experiments to showcase the application of our **joint diffusion** framework for conventional FWI tasks. Specifically, in **Section 4.6 and Appendix A.8**, we introduce the \\u201cone-in-two-out\\u201d architecture, where only seismic data (d) is input to the encoder. 
This experiment demonstrates how the model can invert seismic data to generate accurate velocity maps, satisfying the governing PDE.\\n\\n**Our Revision to the Manuscript:** \\n\\n**Section 4.6:** Details the \\u201cone-in-two-out\\u201d experiment, which applies the **joint diffusion model** to invert seismic data into velocity maps. This setup reflects real-world FWI scenarios and highlights the practicality of our approach.\\n\\n**Appendix A.8:** Includes additional experiments with noisy seismic data as input, showcasing the robustness of our **joint diffusion framework** in challenging scenarios.\\n\\nThese experiments explicitly demonstrate how our **joint diffusion model** handles inversion tasks, providing clear evidence of its capabilities and novelty.\\n\\n**Request for Feedback**\\n\\nGiven the extensive revisions and new experiments added to address your concerns, we kindly ask whether the current manuscript adequately reflects the novelty of the diffusion process and the inversion strategy. If there are any remaining gaps or areas for improvement, we would welcome your specific suggestions and would greatly appreciate your further feedback. Your guidance is invaluable to us as we strive to enhance the quality and impact of our work.\\n\\nThank you again for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer rGTJ,\\n\\nThank you for your latest response. We appreciate your continued engagement and the time you\\u2019ve dedicated to reviewing our manuscript. We take your concerns about the framing and emphasis of our contributions seriously and have worked to address them thoroughly in the revised version.\\n\\nRegarding your comment on the joint diffusion idea being under-discussed in the original manuscript, we would like to note that Section 3.3 only presents the steps for performing the joint diffusion process, not the technical analysis. 
On the other hand, Section 3.4 provides detailed analysis and technical insights into the joint diffusion process. Additionally, we\\u2019ve added extensive experimental results (Sections 4.3\\u20134.6 and Appendix A.2\\u2013A.8) to highlight the joint diffusion process as the central contribution of our work.\\n\\nTo ensure that this key contribution is clearly conveyed, we would greatly value your suggestions:\\n\\n**Experiments:** Are the current experiments sufficient to support the contribution of the joint diffusion component? Are there additional experiments or metrics you would recommend to better validate or expand upon the novelty of the joint diffusion approach?\\n\\n**Section 3.4 Analysis:** Does the analysis provide sufficient depth to demonstrate the role and impact of joint diffusion? If not, we would be grateful if you could point out areas that need further elaboration or clarification.\\n\\nWe are committed to improving our manuscript and incorporating constructive feedback to ensure it meets the highest standards. Your insights as an expert are invaluable, and we would greatly appreciate your guidance on how to refine the discussion and experiments to fully address your concerns.\\n\\nThank you again for your thoughtful comments and for helping us make this a stronger contribution to the field.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"metareview\": \"This paper addresses full waveform inversion by training autoencoders on models and data, with shared latent spaces, followed by training a diffusion model in the latent space to generate samples. While the reviewers acknowledged the idea as reasonable and potentially effective, the contribution and presentation fall significantly below ICLR standards.\\n\\nTwo reviewers noted that paired autoencoders have been applied to inverse problems before, and multimodal generative models have long explored similar ideas (some of which the authors acknowledge). 
Unfortunately, these reviewers did not substantively engage in the discussion (please see below), which rightly frustrated the authors, but the core critique remains valid.\\n\\nThe authors emphasize joint diffusion as the primary contribution, but this is effectively just latent diffusion applied to a highly stylized problem (layered phantoms with few degrees of freedom). The two channels are generated by two decoders whose semantics are entirely determined by the first stage. Whether or not paired autoencoders are considered part of the contribution, the method relies on them\\u2014or equivalent architectures\\u2014in such a way that the remainder is not novel enough for publication. Once a joint latent space is learned, various generative models could be employed. \\n\\nAnother key issue is the lack of discussion of the tradeoffs between conditional and joint generation. There are numerous tradeoffs involved that are not mentioned, let alone investigated through experiments. The paper\\u2019s claim of tackling the harder problem of simultaneous generation across modalities is valid, but it is important to know why. What are the tradeoffs? How do they manifest? What are the right experiments or possible theoretical insights? Joint generative modeling is generally harder than discriminative, where observations localize or guide the generator, especially if there are cheap approximate inverses.\\n\\nThe problem of stability, key for inverse problems, is unaddressed. It would be informative to train a model with noisy measurements (at varying noise levels), which is different from inputting noisy data into a trained model. 
The current joint distribution is degenerate\\u2014measurements are functions of unknowns, and we can sample the joint distribution by generating models and training an FNO. The authors report thousands of GPU hours to train their models on simple phantoms, with massive network architectures, whereas much smaller nets have achieved strong results in similar settings. The justification for this computational expense is unclear.\\n\\nThe paper\\u2019s presentation is another issue. Design choices are not clearly motivated or lack explanation. The VQ is a crucial part of the model. The authors say it ensures a \\\"compact and discrete representation,\\\" but what does this mean? Why does VQ enable the model to \\\"capture structured patterns and long-range dependencies\\\"? Also, what exactly is being vector-quantized? The supplementary mentions 8192 32-dimensional vectors\\u2014is that for the entire 16\\u00d716 latent? If it is, then in Section 4.6 the reconstruction consists of picking one out of 8192 templates. What is really the case remains unclear.\\n\\nThe same is the case for the latent-space diffusion (\\\"joint diffusion\\\"). It is described so tersely and handwavily that it is unclear how the model is trained; the supplementary contains information about hyperparameters and learning rates but not about inputs, outputs, losses, ... . The only equation in the first block is z_t = z + eps_t (which is missing a _t), but in the second block t seems to be the terminal time, L is never defined so we don't know what it is, ..., there is little chance to reproduce results.\\n\\nExperiments are also problematic. The method is tested only on simple layered phantoms (very few DoFs) with an acoustic wave equation, comparing to a couple of deep learning baselines (Table 5). Other experiments analyze the proposed method, without establishing its utility. The authors claim they are not focused on achieving strong reconstruction results, but no clear alternative insights emerge. 
The experiments with separate AEs show that learning the joint distribution of synthetic data is simpler than solving the ill-posed full waveform inversion. It would be interesting to navigate this spectrum. The paper's repeated references to \\\"seismic data\\\" are misleading, as it uses simulated 2D acoustic data from 64x64 layered toy models. Real-world geophysical models are far larger and 3D, and the waves are elastic. While academic compute limits are understood (although this does not seem to be a blocker here), the extensive resources for such stylized setups are hard to justify.\", \"additional_comments_on_reviewer_discussion\": \"rGTJ and A1pp were both critical, identifying novelty as the key issue. STHJ and 1caw thought the paper is a borderline accept. During discussion the authors reiterated that the main focus is on \\\"joint diffusion\\\" rather than the paired autoencoders. rGTJ argued that \\\"clearly, the authors did not intend this to be the main contributions originally and the paper was written accordingly\\\" by pointing out that only a handful are dedicated to joint diffusion, and A1pp concurred. I do not agree with this since joint diffusion has been in the title from the get-go, not autoencoders. But I do agree about the real estate dedicated to joint diffusion. The problem is that all ideas in the paper, not just this one, are presented in a telegraphic way, and it often reads a bit like a sequence of arbitrary choices.\\n\\nThe authors are justified in pointing out that rGTJ and A1pp were mostly disengaged from the discussion. That is suboptimal, but the issue is that it is hard to make up for the missing novelty by running new experiments. 
I believe that the paper and the experiments need a major overhaul.\\n\\nMy recommendation is based on a careful analysis of all the responses and my personal very detailed reading of the paper, together with familiarity with both the hardcore inverse-problems and SciML literature.\"}", "{\"title\": \"Third Reminder to Reviewer rGTJ: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nWe hope this message finds you well. \\n\\nWe are kindly following up to ask if you have any additional comments on the latest revised version of our manuscript, uploaded on November 23. Your feedback is very important to us as we strive to improve the quality and clarity of our work.\\n\\nThank you again for your time and effort, and we look forward to any further suggestions you may have.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Dear authors,\\n\\nThank you very much for your responses. Having carefully reviewed both the other reviewers' comments and your responses, I wish to maintain my current score.\"}", "{\"summary\": \"The manuscript introduces a new approach to inverting acoustic wave equation data based on a joint generative process. Although there were earlier papers on the use of generative models for data inversion, the presented approach looks fairly original. The authors study the famous geophysical problem known as full waveform inversion (FWI). The approach was tested on 2D spatial data from the public OpenFWI dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Generative AI is transforming many industries these days, and its use for data inversion looks like a promising research direction. Both the theoretical and experimental parts are well presented and easy to follow. 
An important original feature of the work is the joint generation of acoustic data and velocity models.\", \"weaknesses\": \"1.\\tIt is not clear (at least none of the experiments show this) how to use the presented algorithm to invert actual data. It is shown how to generate acoustic data and velocities, but what the reader typically expects is an answer to the question of what to do when given some specific seismic data.\\n2.\\tSection 4.2.3, Comparison with InversionNet, is not sufficiently complete and convincing. See Questions below.\\n3.\\tThe geophysical terminology is mixed in the manuscript. Note that the wave equation used models $acoustic$ data. This is a significant simplification of seismic phenomena. In other words, the terms $acoustic$ and $seismic$ are not interchangeable.\", \"questions\": \"1.\\tHow were the acoustic data and velocity models preprocessed before training?\\n2.\\tThe authors trained the model for 1000 epochs. How long was this in terms of CPU/GPU time (depending on the dataset)?\\n3.\\tThe discussion in the manuscript covers only generation and inversion of 2D spatial data, while 3D models/data are of much higher interest. Could the proposed algorithm be used in the 3D case? What would the implications be for computational complexity?\\n4.\\tAn important test for an inversion code is to check that data symmetric with respect to some plane produces a symmetric velocity model. Would the presented generation model obey this principle?\\n5.\\tIt is not clear from the experiments how the presented algorithm compares to baselines. Section 4.2.3, Comparison with InversionNet, does not answer the obvious question: which of the two algorithms is better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder: Follow-up on Rebuttal Responses\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. 
We wanted to kindly remind you that the author-reviewer discussion period is ongoing, and we have posted our responses and revisions over two days ago. Your thoughtful feedback has been immensely valuable in shaping our work, and we greatly appreciate your time and effort in reviewing our manuscript.\\n\\nWe have carefully addressed your concerns and answered your questions. If you have any further questions or would like clarification on our responses, we are here and ready to address them promptly. Your input is crucial for refining and improving our manuscript, and we deeply value your insights.\\n\\nWe understand your time is valuable, and we sincerely thank you for your dedication to this process. Should you have any additional thoughts or concerns, please do not hesitate to reach out.\\n\\nThank you again for your attention and support.\\n\\nBest regards,\\n-Authors\"}", "{\"title\": \"Second Reminder to Reviewer rGTJ\", \"comment\": \"Dear Reviewer rGTJ,\\n\\n\\nWe hope this message finds you well. We are writing to kindly follow up on the latest revised version of our manuscript, which was uploaded on November 23. We have made extensive revisions to address your earlier comments, particularly emphasizing the novelty and contributions of the joint diffusion process.\\n\\n\\nWe understand and respect your decision to maintain your score, but your insights and suggestions on the current version of the manuscript are invaluable to us. We are eager to hear any further comments you might have regarding areas that could be improved or clarified. Your feedback is crucial for us to refine the paper and ensure its quality and impact.\\n\\n\\nThank you again for your time and effort in reviewing our work. 
We deeply appreciate your engagement and look forward to any additional suggestions you may have.\\n\\n\\nBest regards,\\n\\nThe authors\"}", "{\"comment\": \"I here agree with reviewer rGTJ that while joint diffusion may offer novel applications, it seems this wasn't the primary focus of the author's original intent. I will stay with my score too.\"}", "{\"comment\": \"While the authors have attempted to answer my concerns, the paper needs many changes to reflect\\n1. The novelty in the diffusion process (and not in the auto-encoder)\\n2. The inversion process (getting a model given d)\\n\\nMy rating stands, the authors need to rewrite the paper focusing on the diffusion process and the inversion strategy. The paper as is, lacks the needed novelty. I am sure that they can make it better for future conference\"}", "{\"title\": \"Response Letter to Reviewer rGTJ about the Main Content in the Manuscript\", \"comment\": \"Dear Reviewer rGTJ,\\n\\nThank you for your continued engagement with our manuscript. We appreciate your comments and have worked diligently to address your concerns regarding the emphasis and presentation of the joint diffusion model, which is indeed the central contribution of our work. Below, we provide a detailed table of contents comparing the joint diffusion content in both the original and revised versions of our manuscript. These demonstrate that the joint diffusion process forms a significant part of the paper.\\n\\nWe kindly request your reconsideration of the manuscript in light of the revisions, expansions, and additional experiments explicitly focusing on the joint diffusion model. If any concerns remain, we would greatly appreciate your specific feedback to further strengthen our work.\\n\\n---\\n\\n### **Table of Contents Comparison: Joint Diffusion-Related Content**\\n\\n#### **Original Manuscript**\\n1. **Section 1 (Page 2, Lines 71-96):** \\n General introduction to the joint diffusion model and its significance.\\n \\n2. 
**Section 3.1 (Page 3, Lines 145-157):** \\n Motivation for joint diffusion as a novel approach to refine the shared latent space.\\n \\n3. **Section 3.3 (Page 4, Lines 191-210):** \\n Detailed explanation of how the joint diffusion process is implemented on the shared latent space.\\n \\n4. **Section 3.4 (Page 4-5, Lines 212-265):** \\n Discoveries and analyses of the joint diffusion model, including:\\n - Refinement of the solution space.\\n - Deviation from the PDE evaluation during forward and backward diffusion processes.\\n - Transformation of the governing PDE process to an SDE.\\n\\n5. **Section 4.2.2 (Page 6-8, Lines 312-380):** \\n Joint diffusion results and analysis, including FID scores and noise level relationships.\\n \\n6. **Section 4.2.3 (Page 8, Lines 381-431):** \\n Using joint diffusion-generated samples to train InversionNet.\\n \\n7. **Section 4.2.4 (Page 9, Lines 432-469):** \\n Comparison between joint diffusion and separate diffusion for seismic and velocity data.\\n\\n8. **Appendix A.1.2 (Page 13, Lines 665-677):** \\n Hyperparameter details for the joint diffusion model.\\n\\n9. **Appendix A.2 (Page 13, Lines 686-697):** \\n Joint diffusion results on FVB and FFB datasets.\\n\\n10. **Appendix A.3 (Page 13, Lines 686-697):** \\n Joint diffusion results on combined datasets.\\n\\n**Figures and Tables in Original Manuscript:**\\n- **Figure 1:** Overview of WaveDiffusion and the joint diffusion process. \\n- **Figure 4:** Joint diffusion architecture. \\n- **Figures 5\\u20137:** Analysis of joint diffusion results and deviation from the PDE. \\n- **Table 1\\u20134:** FID scores, comparisons with baselines, and separate vs. joint diffusion results.\\n\\n---\\n\\n#### **Revised Manuscript**\\nTo address your concerns and further emphasize the joint diffusion process, we added the following new content:\\n\\n1. 
**Section 4.6 (Page 9, Lines 436-457):** \\n Application of the joint diffusion model to conventional FWI problems with seismic data only. \\n\\n2. **Appendix A.4 (Page 14-15, Lines 743-789):** \\n Joint diffusion-generated samples visualization and data symmetry checks for flat velocity layers. \\n\\n3. **Appendix A.5 (Page 15-16, Lines 791-844):** \\n Joint diffusion handling noisy seismic and velocity data, demonstrating robust reconstruction. \\n\\n4. **Appendix A.6 (Page 16-18, Lines 846-960):** \\n FWI performance comparisons between multiple baseline algorithms and joint diffusion. \\n\\n5. **Appendix A.8 (Page 19, Lines 991-1023):** \\n Joint diffusion model performance on noisy seismic data input, highlighting its robustness and applicability.\\n\\n**New Figures and Tables in Revised Manuscript:**\\n- **Figure 10:** Visualization of FWI results with joint diffusion. \\n- **Figure 14:** Joint diffusion-generated examples across multiple subsets. \\n- **Figure 16:** Reconstructions of noisy seismic and velocity data using joint diffusion. \\n- **Figure 18:** FWI results visualization with noisy input seismic data. \\n- **Tables 5, 6, and 8:** Comparative metrics for joint diffusion vs. baselines.\\n\\n---\\n\\n### Key Revisions and Justifications\\n\\n1. **Expanded Technical Description (Sections 3.3, 3.4):** \\n Detailed implementation of joint diffusion, including:\\n - Refinement of latent space through joint diffusion.\\n - Evaluation of deviations from the governing PDE during diffusion processes.\\n - Transforming PDE processes into SDEs for physical consistency.\\n\\n2. 
**Comprehensive Experiments (Sections 4.2.2\u20134.6, Appendices A.4\u2013A.8):** Extensive results showcasing the effectiveness of joint diffusion:\\n- Robust handling of noisy seismic inputs (Appendix A.5).\\n- Symmetry checks and visualizations across multiple datasets (Appendix A.4).\\n- Comparisons with multiple baseline methods, including InversionNet, VelocityGAN, and UPFWI (Appendices A.6, A.8).\\n\\n---\\n\\n\\nThank you for your time and effort in reviewing our work. \\n\\nBest regards, \\nThe authors\"}", "{\"comment\": \"Dear authors,\\n\\nI appreciate your responses. My opinion is that your work is a good fit for the Applications to Physical Sciences Area.\\nI'll keep my current score.\"}", "{\"summary\": \"Full waveform inversion (FWI) is a seismic imaging technique that traditionally reconstructs the subsurface velocity model by iteratively comparing observed and predicted seismic data. More recently, machine learning-based approaches solve FWI by treating it as an image-to-image translation problem. Furthermore, generative diffusion models have mainly treated FWI as a conditional generation problem where the velocity map is generated from given seismic data. This paper offers a new perspective on FWI by considering it as a joint generative process. Namely, the paper considers whether the two modalities -- seismic data and velocity map -- can be generated simultaneously. Two key steps are proposed: first, a dual autoencoder encodes the two modalities in a shared latent space that provides a coarse approximation of the wave equation solution. Second, a diffusion process in the latent space refines the coarse latent representations, which are later decoded into seismic data and velocity maps. In contrast to seismic-velocity pairs generated by the conditional models, which often lack physical consistency, the jointly generated pairs approximately satisfy the governing PDE without any additional constraint. 
The paper's main goal is to offer a new perspective by extending FWI from a conditional generation problem to a joint generation problem.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The proposed paper is well-organized and the idea is clearly presented.\", \"The paper offers a new perspective on the FWI generation problem by simultaneously generating two modalities -- seismic data and velocity maps -- from the shared latent space. This is a novel idea in contrast to the existing related work which treats these two modalities separately.\", \"Treating seismic data and velocity maps separately limits the ability to generate physically consistent seismic-velocity pairs. In contrast, jointly generating these modalities makes them approximately consistent with the governing PDE that describes the relationship between them.\", \"The extensive experiments confirm the soundness of the proposed method and show that the jointly generated seismic-velocity pairs can be a useful supplement to real training data.\"], \"weaknesses\": \"I think there are three main problems in the experiments:\\n- The method wasn't compared to any existing conditional generative methods. There is even a section 4.2.4. that compares separate vs. joint diffusion but there the separate diffusion was the same model as for the joint diffusion but with a single branch kept active and the latent space no longer shared. I think it would be useful to see how the proposed method compares to the existing methods (e.g., [1]) both in terms of the diversity of the generated data and the performance of the reconstruction methods when trained on the generated data.\\n- The results might also differ based on a different reconstruction method other than InversionNet (e.g., [2] and/or [3]). I think it would be beneficial to add at least one additional data-driven solver.\\n- Some of the experiments in the results section do not seem to be realistic. 
(see more in the questions section)\\n\\n---\\n\\n[1] F. Wang, X. Huang, and T. A. Alkhalifah. \\\"A prior regularized full waveform inversion using generative diffusion models.\\\" IEEE Transactions on Geoscience and Remote Sensing, 61:1-11, 2023.\\n\\n[2] P. Jin, X. Zhang, Y. Chen, S. Huang, Z. Liu, and Y. Lin. \\\"Unsupervised learning of full-waveform inversion: Connecting CNN and partial differential equation in a loop.\\\" ICLR, 2022.\\n\\n[3] Z. Zhang, Y. Wu, Z. Zhou, and Y. Lin. \\\"VelocityGAN: Subsurface velocity image estimation using conditional adversarial networks.\\\" In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, pp. 705-714.\", \"questions\": \"1. Could you comment on the comparison with the existing conditional generative models? Why didn't you compare to any of the existing methods at least in 4.2.4 section?\\n2. Could you comment on the choice of the reconstruction method? I think it would be beneficial to add at least one additional data-driven solver. It would be interesting to see how the reconstruction methods work with data generated by different generative models.\\n3. The generative model was trained on the same OpenFWI dataset on which InversionNet was later evaluated. What is the amount of data your generative model should be trained with and how does it compare to the size of a dataset reconstruction methods (e.g., InversionNet) should be trained with? If the size of a dataset for reconstruction methods is satisfying what is the rationale of doing this? I think you should address the limitations of such a setup.\\n4. In continuation to the previous question, how realistic is the Gen+1\\\\% case? In this case, you trained your generative model on the same data distribution as in the 1\\\\% of the original dataset. If a real dataset is small, wouldn't it be more realistic to train your generative model with real data that differ from the distribution in the small dataset? 
Maybe a more realistic case would be to train the generative model on the two subsets and add 1\\\\% of the third subset of OpenFWI. Could you comment on this? What are the implications of the existing setup for real-world applications of the method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Grateful for Your Feedback\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely thank you for your kind response and for the effort you\\u2019ve invested in reviewing our work. Your insights and constructive feedback have been incredibly valuable to us.\\n\\nIf you have any additional suggestions or recommendations for enhancing our manuscript, we would be delighted to hear them.\\n\\nOnce again, thank you for your thoughtful contributions and support.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"**Weakness: My major problem with this manuscript is that the main approach to generating two joint autoencoders is not novel. A similar approach including similar experiments has been proposed and published the approach on dual autoencoder before this submission (https://arxiv.org/pdf/2305.13314) and another publication on dual autoencoder can be found at (https://arxiv.org/pdf/2405.13220). These contributions are neither acknowledged nor cited. The remaining novelty is the diffusion process within the latent spaces which is by itself an interesting idea and should have been stated as the contribution of this manuscript.**\\n\\nThank you for highlighting this point and bringing these references to our attention. We will ensure both references are cited appropriately in our revised manuscript. 
However, we would like to clarify that our **primary contribution** lies in the application of a **joint diffusion process** within a shared latent space to refine data pairs in a physically consistent manner, rather than in the use of dual autoencoders.\\n\\nThe dual autoencoder architecture only serves as a preliminary step in constructing the shared latent space, enabling the subsequent diffusion process. To demonstrate the flexibility of our approach, we also tested a modified one-in-two-out autoencoder architecture, where a single encoder and two decoders were employed. This setup maintained the shared latent space, and the joint diffusion process continued to perform effectively, following the discoveries presented in our paper.\\n\\nFurthermore, the one-in-two-out autoencoder architecture allows for direct application to standard FWI tasks, as demonstrated in our new experiment results (https://bit.ly/4eBIJX2). This experiment highlights the capability of the joint diffusion model to generate accurate velocity maps solely from seismic data input, addressing both novelty and practical application concerns.\\n\\nTo further support our claims, we have provided an example of the generation results using the modified one-in-two-out network architecture at https://bit.ly/4hVLg1q. We hope these additional experiments and clarifications address your concerns regarding the novelty and contributions of our work.\"}", "{\"title\": \"Submission of Revised Manuscript and Summary of Revisions\", \"comment\": \"Dear Reviewers,\\n\\nWe hope this message finds you well. We are writing to inform you that we have uploaded the revised version of our manuscript based on your insightful feedback and suggestions. We greatly appreciate the time and effort you invested in reviewing our work and providing constructive comments, which have significantly improved the quality and clarity of our paper. 
Below is a summary of the main revisions addressing your feedback:\\n\\n**Section 4.6 (page 9) FWI with Seismic Data Only:**\\n\\nWe added an experiment utilizing the \\u201cone-in-two-out\\u201d WaveDiffusion model to perform conventional FWI tasks when only seismic data is available. This experiment highlights the model\\u2019s capability to invert seismic data into velocity maps effectively.\\n(***Addressing comments from reviewers 1caw, rGTJ, and STHJ.***)\\n\\n**Line 687 (page 13) Training Time Details:**\\nDetails about the training times for the autoencoders (1000 GPU hours) and the joint diffusion model (2000 GPU hours) have been added.\\n(***Addressing reviewer 1caw.***)\\n\\n**Appendix A.6 (page 16) Comparison with VelocityGAN and UPFWI:**\\nWe conducted additional experiments comparing WaveDiffusion-generated samples with VelocityGAN and UPFWI. This comparison evaluates the effectiveness of our generated samples under different setups.\\n(***Addressing comments from reviewers 1caw and STHJ.***)\\n\\n**Section 3.2 (page 4) Paper Novelty Clarification:**\\nWe have cited the suggested references and clarified that the dual autoencoder serves as a preliminary step for implementing the joint diffusion process, rather than being the main contribution of the paper. The primary contribution of our work lies in Section 3.3 (page 4) and is further elaborated upon through the subsequent analysis and experiments. 
This revision emphasizes the novel aspects of our joint diffusion framework and provides appropriate context for the dual autoencoder's role within the overall methodology.\\n(***Addressing comments from reviewers rGTJ and A1pp.***)\\n\\n**Line 105 (page 2) Clarification on Terminology:**\\nWe included a statement clarifying that acoustic seismic data, representing a simplified wave phenomenon, is used in this study.\\n(***Addressing reviewer 1caw.***)\\n\\n**Line 688 (page 13) Data Preprocessing Details:**\\nDetails about the data preprocessing steps, including resizing and normalization, have been added for clarity.\\n(***Addressing reviewer 1caw.***)\\n\\n**Appendix A.4 (page 14) Symmetry in FVB:**\\nJoint generation examples across multiple datasets, including FVB, were added to demonstrate the model\\u2019s ability to preserve the symmetry of flat-layer structures.\\n(***Addressing reviewer 1caw.***)\\n\\n**Appendix A.5 (page 15) Handling Noisy Input Data:**\\nWe included experiments demonstrating the model\\u2019s ability to reconstruct clean, noise-free data from noisy inputs, showcasing WaveDiffusion\\u2019s robustness under challenging data conditions.\\n(***Addressing reviewer STHJ.***)\\n\\nWe have carefully revised the manuscript to incorporate your feedback and ensure the improvements are well-documented. If you have any further comments or questions, we would be grateful for your insights. Thank you again for your valuable input and for considering our work.\\n\\nWe look forward to your thoughts on the revised manuscript.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Third Reminder: Follow-up on Rebuttal Responses\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. 
We wanted to kindly remind you about the author-reviewer discussion period and highlight some important updates we made in response to the feedback received.\\n\\nIn addition to addressing the raised concerns, we conducted a new experiment on noisy seismic data, which **demonstrates the robustness of our *WaveDiffusion* framework under realistic noisy conditions for FWI tasks**. This discovery, though not explicitly requested, adds value by showcasing the method\\u2019s versatility and practical applicability. The results of this experiment are summarized in the updated manuscript and can be reviewed in detail via the following links: the visual comparisons are shown in the figure at https://bit.ly/3V5k0Uc, and the quantitative results are detailed in the table at https://bit.ly/4f0rAqB. The new experiment and results are added to the latest revised manuscript on **Page 19, Section A.8**.\\n\\nWe would deeply appreciate any feedback you might have on our revisions or this new contribution. Your insights are invaluable to refining our work and ensuring it meets the highest standards.\\n\\nThank you again for your time and effort. Please don\\u2019t hesitate to reach out if there are any additional concerns or questions regarding the paper.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Last Day Reminder to Reviewer A1pp: Follow-up on the Revised Manuscript\", \"comment\": \"Dear Reviewer A1pp,\\n\\nWe hope this message finds you well.\\n\\nWe are reaching out to kindly remind you that today is the **last day** of the discussion session. We would like to kindly ask you for your follow-up on the latest revised version of our manuscript, uploaded on November 23. We would greatly appreciate any additional comments or suggestions you may have on the current version.\\n\\nThank you for your time and thoughtful feedback. 
We look forward to hearing from you.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"I stay with my score\", \"comment\": \"I find the claim that joint diffusion is the main idea of this paper a bit ridiculous. The idea is discussed from lines 192 to 211, barely a third of a page.\\nAgain, I believe that there may be novelty in joint diffusion but clearly, the authors did not intend this to be the main contribution originally, and the paper was written accordingly. \\nI understand the pressure to publish at these main conferences, but to be honest, the authors will do more justice to the paper and their own work if they resubmit the paper to the next conference in line. They can revise the paper, be serious about their latest findings (rather than writing to the referees that they just discovered something major) and write a winning paper that will last.\"}", "{\"summary\": \"This paper deals with the problem of full waveform inversion.\\nThere are two mechanisms that the paper proposes.\\n1. The paper uses the same latent space for both the model and the data.\\n2. They train a diffusion model in the latent space. Such a diffusion model can therefore generate a plethora of models and their data.\\n\\nResults look reasonable even though the models being trained on are very simple.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of using a joint feature space is good, and then using a diffusion model on this space is also a good idea. The results are interesting and it seems that the approach works for the models in the database.\", \"weaknesses\": \"Unfortunately, the idea of using an AE with common feature spaces for data and model is not new. See https://paperswithcode.com/paper/paired-autoencoders-for-inverse-problems\\nThis is the main problem that the paper has. 
I understand that in this fast-moving field some papers are missed, but in this case, the work that was already done makes much of the paper not relevant.\\nI would recommend the authors withdraw the paper, concentrate on the diffusion aspect of the paper, and resubmit to a different venue.\", \"questions\": \"The interesting parts of the paper are actually hiding towards the end.\\n\\n1. How do you actually do coarse to fine?\\n\\n2. Given some data $d$, how do you use diffusion to find an appropriate model?\\n\\nI would recommend re-writing the paper with section 3.3 in mind. Since training a dual AE is not very innovative and using diffusion on the latent space is not very innovative, the innovation is exactly what you do in 3.3. You could easily develop it into a full paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I agree with the other reviewer's assessment of the manuscript: while making an effort to address my concerns, this manuscript requires a more focused approach. The novelty of the diffusion process and inversion strategy should be highlighted prominently. The current emphasis on the auto-encoder dilutes the impact of these key contributions. I believe the paper can be significantly strengthened by shifting its focus towards these core aspects. With these improvements, the paper has the potential to be a valuable contribution to the field. I will maintain my original rating.\"}", "{\"title\": \"Request for Further Guidance\", \"comment\": \"Dear Reviewer A1pp,\\n\\nThank you for your feedback and valuable insights on our manuscript. We greatly appreciate the time and effort you have dedicated to reviewing our work. In the latest revised version (uploaded on Nov. 23), we have carefully addressed each of your comments and concerns. 
Below, we outline your original comments alongside our corresponding responses, revisions, and additions to the manuscript. We kindly ask if the current revision adequately addresses your concerns, and if not, we would be grateful for your guidance on how to further improve the paper.\\n\\n\\n**Your Original Comment: The main approach to generating two joint autoencoders is not novel.**\\n\\n**Our Response:** We fully acknowledge this point and have clarified in **Section 3.2 (page 3-4)** that the dual autoencoder serves as a preliminary step to establish the shared latent space, rather than the primary contribution of our work. To address your concern, we have also cited the suggested references (1, 2) and provided context to differentiate our work.\\n\\n**Our Revision to the Manuscript:** The dual autoencoder is now explicitly described as a preliminary mechanism that enables the joint diffusion process. The novelty of our work resides in the joint diffusion process and its ability to refine latent representations within a shared latent space to adhere to physical constraints governed by PDEs. This clarification has been added on page 4, lines 171-176, in the revised manuscript:\\n\\n\\\"*We emphasize that the dual autoencoder is not the focus of this framework and can be substituted with any architecture capable of producing a combined latent space for seismic data and velocity maps. For instance, the one-encoder-two-decoders architecture in Section 4.6 or an autoencoder incorporating KL divergence could also serve this purpose. The autoencoder primarily facilitates the setup for the joint diffusion process, which is the key innovation in achieving physically consistent seismic-velocity generation.*\\\"\\n\\n**Your Original Comment: The diffusion process within the latent spaces should be stated as the main contribution.**\\n\\n**Our Response:** We agree with this assessment and have made substantial changes to highlight the novelty of the diffusion process. 
In Section 3.3 and subsequent sections, we explicitly state that the joint diffusion process is the core contribution of the paper. This process enables the model to trace a path from random initialization to valid solutions governed by the wave equation, offering a new geometric interpretation of FWI.\\n\\n**Our Revision to the Manuscript:** We shifted the manuscript's focus to the diffusion process, ensuring that its novelty and impact are clearly conveyed. The **experimental results in Sections 4.3 to 4.6 and Appendix A.2 to A.8** demonstrate the effectiveness and advantages of our joint diffusion framework, including: \\n\\n*Section 4.3: **Joint diffusion** generation procedure analysis.*\\n\\n*Section 4.4: **Joint diffusion** comparison to baseline algorithms (InversionNet).*\\n\\n*Section 4.5: **Joint diffusion** comparison to separate diffusion.*\\n\\n*Section 4.6: **Joint diffusion** solving a conventional FWI problem and comparison to baseline.*\\n\\n*Section A.2: **Joint diffusion** results on more datasets.*\\n\\n*Section A.3: **Joint diffusion** results on combined datasets.*\\n\\n*Section A.4: **Joint diffusion** results on multiple datasets for data symmetry checks and more.*\\n\\n*Section A.5: **Joint diffusion** handling noise in seismic data and velocity.*\\n\\n*Section A.6: **Joint diffusion** comparison to more baseline algorithms (InversionNet, VelocityGAN, UPFWI).*\\n\\n*Section A.7: **Joint diffusion** generated sample usage in data-driven FWI algorithms.*\\n\\n*Section A.8: **Joint diffusion** comparison to baseline algorithms (InversionNet, VelocityGAN, UPFWI) on conventional FWI in the noisy seismic data input scenario.*\\n\\nThese revisions ensure that the paper's focus is firmly on the joint diffusion process and its unique contributions, minimizing the emphasis on the autoencoder, which serves only as a supporting component.\\n\\n**Request for Feedback**\\n\\nGiven the extensive revisions and new experiments added to address your concerns, we 
kindly ask if the current manuscript sufficiently addresses the issues you raised. If any aspects remain unresolved, we would be deeply grateful for your feedback on how we can further refine the paper. Your insights are invaluable in ensuring that our work reaches its fullest potential and contributes meaningfully to the field.\\n\\nThank you once again for your time and thoughtful review.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"comment\": \"**Weakness: Unfortunately, the idea of using an AE with common feature spaces for data and model is not new. See https://paperswithcode.com/paper/paired-autoencoders-for-inverse-problems This is the main problem that the paper have. I understand that in this fast moving field some papers are missed but in this case, the work that was already done makes much of the paper not relevant. I would recommend the authors to withdraw the paper, concentrate of the diffusion aspect of the paper and resubmit to a different venue.**\\n\\nThank you for bringing this reference to our attention. We will ensure it is cited appropriately in our revised manuscript. However, we respectfully disagree with the recommendation to withdraw the paper, as the **primary contribution of our work** is not the dual autoencoder architecture but rather the **joint diffusion process in a shared latent space**. The dual autoencoder serves only as a foundational step, enabling the creation of a shared latent representation for the two modalities, seismic data and velocity maps. The key novelty of our work lies in the use of this shared latent space for a joint diffusion process that refines the generated data pairs while maintaining physical consistency.\\n\\nTo further illustrate this, we conducted experiments with a modified autoencoder architecture that uses a single encoder and two decoders (one-in-two-out configuration) in the first stage. 
This setup demonstrates the flexibility of our approach, where the joint diffusion process continues to perform effectively, preserving the discoveries outlined in the paper. The generation results for this modified architecture are available at https://bit.ly/4hVLg1q.\\n\\nAdditionally, the one-in-two-out autoencoder configuration allows for direct application to standard FWI tasks, as shown in our response to Reviewer 1caw. In this setup, seismic data alone is used as input to the model, and the joint diffusion process generates both seismic and velocity outputs, making the approach suitable for practical applications. Example results for this experiment can be found at https://bit.ly/4eBIJX2.\\n\\nIn summary, while we acknowledge the existence of prior work on paired autoencoders, our contribution is orthogonal to this line of research. We focus on the integration of joint diffusion into the latent space, which we believe is a novel and significant advancement in the field. We hope this clarification and the new experimental results address your concerns regarding the relevance and novelty of our work.\\n\\n**Question 1: How do you actually do coarse to fine?**\\n\\nThank you for your question. We achieve coarse-to-fine refinement through the **latent space joint diffusion process**. Starting with approximate representations, the process iteratively refines these by scoring the deviation from the governing PDE in the latent space. The forward diffusion process perturbs the latent vector progressively, while the backward diffusion process denoises and aligns it to physically valid solutions. 
This approach ensures a gradual improvement, moving from coarse approximations to accurate representations governed by the PDE, adhering to the physical constraints of the problem.\\n\\n**Question 2: Given some data d how do you use diffusion to find an appropriate model?** \\n\\nAs noted in our response to *Weakness 1 to Reviewer 1caw* and to your raised *Weakness* above, we have conducted a new experiment to address this question. The experiment demonstrates how our framework finds an appropriate velocity model given specific seismic data **d**.\\n\\nTo address this task, we utilize a \\\"one-in-two-out\\\" configuration within the autoencoder part of our framework. In this setup: only seismic data (**d**) is input to the encoder, generating a latent representation solely contributed by seismic input. This latent vector is then refined through the joint diffusion process, iteratively denoising and aligning it with the physical constraints.\\nFinally, the refined latent vector is passed through individual VQ layers and decoders, generating both seismic and velocity outputs.\\nThis setup enables the inversion of seismic data to produce velocity maps, making our model directly applicable to standard FWI tasks. Example results from this experiment are available at http://bit.ly/4eBIJX2. We will incorporate these results into the revised manuscript to demonstrate this capability in greater detail.\"}" ] }
0nJEgNpb4l
PEAR: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning
[ "Utsav Singh", "Vinay P. Namboodiri" ]
Hierarchical reinforcement learning (HRL) has the potential to solve complex long horizon tasks using temporal abstraction and increased exploration. However, hierarchical agents are difficult to train due to inherent non-stationarity. We present primitive enabled adaptive relabeling (PEAR), a two-phase approach where we first perform adaptive relabeling on a few expert demonstrations to generate efficient subgoal supervision, and then jointly optimize HRL agents by employing reinforcement learning (RL) and imitation learning (IL). We perform theoretical analysis to bound the sub-optimality of our approach and derive a joint optimization framework using RL and IL. Since PEAR utilizes only a few expert demonstrations and makes minimal limiting assumptions on the task structure, it can be easily integrated with typical off-policy RL algorithms to produce a practical HRL approach. We perform extensive experiments on challenging environments and show that PEAR is able to outperform various hierarchical and non-hierarchical baselines and achieve up to 80% success rates in complex sparse robotic control tasks where other baselines typically fail to show significant progress. We also perform ablations to thoroughly analyze the importance of our various design choices. Finally, we perform real world robotic experiments on complex tasks and demonstrate that PEAR consistently outperforms the baselines.
[ "Hierarchical reinforcement learning", "Learning from demonstrations" ]
Accept (Poster)
https://openreview.net/pdf?id=0nJEgNpb4l
https://openreview.net/forum?id=0nJEgNpb4l
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzwPVLzSWo", "uBGAGVlkwx", "sT2um5ZEnj", "qEluI9UQum", "pQjYpEYdJn", "o60A7w4q2t", "nEX2wcC1Ad", "lVkzOKlDXK", "kS37RLTjjp", "fAimOICsS8", "eMy7BkD4cD", "dv18HdVbte", "b2BiEaQnnP", "YnZTaDfnyX", "XoZraHCIke", "Lv8bXryDGL", "Ky5XDkZ4de", "Jr0AGP1dWT", "GogHcO1x2F", "G2PpAXEIPH", "Fxf0arIxkr", "FrjVcKydmH", "DkdKaFHLkR", "DDTSq2e5Tq", "ALWWQChqsX", "6MxicmAcRa", "2Fkku55b9v", "1mVFpL6o4h" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732134277597, 1733022866849, 1734547986533, 1732235551309, 1732778515901, 1732302706650, 1732302945091, 1732134017741, 1732522693374, 1732134198688, 1732134366255, 1733015816301, 1737523681884, 1732302746090, 1732361021839, 1729592014199, 1732522660703, 1732250689632, 1730660944815, 1733022831050, 1732133928372, 1730733438000, 1732637675103, 1732673214097, 1730581267969, 1733263450132, 1732134129472, 1732436590145 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Area_Chair_U3vn" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_di4o" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_u7xN" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_GCp5" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_Cpmf" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_u7xN" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_Cpmf" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Reviewer_di4o" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ], [ "ICLR.cc/2025/Conference/Submission5070/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author Response\", \"comment\": \"We are thankful to the reviewer for dedicating their valuable time and effort towards evaluating our manuscript. We deeply appreciate the insightful feedback provided, and we have thoroughly responded to reviewer\\u2019s inquiries in the responses provided below.\\n\\n> **Weakness 1:** As authors note, the method is currently reliant on expert demonstrations. However, many benchmarks exist which include other kinds of demonstrations, including human teleop. 
While the method may not perform well on these demonstrations just yet (as it is listed as an aim of future work), providing results on suboptimal demonstrations would help demonstrate concretely the strong and weak points of authors' method, and potentially provide insights on why it fails in these settings.\\n\\n**Response to Weakness 1:** We thank the reviewer for this insight, and agree that experiments on suboptimal demonstrations might provide further intuitions on why it might fail in such settings and lead to potential future research problems. To this end, we perform additional experiments with sub-optimal demonstrations and provide them in Figure 16 of the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l). Here are the findings:\\n1. We find that with the increasing number of sub-optimal demonstrations (referred as bad demos in the figure), the performance degrades (which is expected since the IL regularization term becomes sub-optimal). \\n2. We see from the results that the performance degradation with sub-optimal demonstrations becomes more prominent as the environments become harder, which implies that the imitation learning regularization term is crucial for mitigating non-stationarity in harder tasks.\\n3. Since our approach employs joint optimization objectives (**RL** and **IL** objectives), the **IL** objective might generate sub-optimal results when sub-optimal demonstrations are used. Therefore, we would like to explore how to adaptively set the regularization weight parameter in future work, one that is able to adjust (minimize) the **IL** objective weight in the presence of sub-optimal demonstrations. \\n\\n\\n> **Weakness 2:** Furthermore, it seems inaccurate to state that PEAR uses only \\u201ca handful\\u201d of demonstrations, when Fig. 
13 shows that generally 50-70+ demonstrations are needed to solve the provided tasks (with the exception of Franka Kitchen, which provides fewer demos).\\n\\n**Response to Weakness 2:** We realise now that using the word \\u201chandful\\u201d may lead to confusion, and we have removed the keyword from the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l).\\n\\n\\n> **Question 1:** Have you tested on suboptimal demonstrations? Are there any interesting findings/results which may point to future avenues for research or failings of the method which should be investigated/improved upon?\\n\\n**Response to Question 1:** Please refer to our response for Weakness 1.\\n\\n\\n> **Question 2:** Were you able to find notable reasons for why the MSE-regularized learning objective would occasionally outperform the IRL-regularized version? Is there any relationship to task difficulty, data diversity, etc?\\n\\n**Response to Question 2:** We did further experiments on additional seeds (shown in Figure 3 in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l)) and found that MSE is only able to outperform IRL based regularization in rope manipulation environment. We believe that although IRL regularization consistently yields good results, it is difficult to train in rope environment due to its unique task structure. Moreover, the expert demonstrations in the rope manipulation environment might be sub-optimal, making it more challenging to learn the IRL regularization objective for this task. In future work, we aim to investigate efficient methods to further boost performance on this environment.\\n\\n\\n> **Question 3:** In \\u201cAlgorithm 2: PEAR,\\u201d line 8, shouldn\\u2019t the lower-level policy\\u2019s IL regularization be done with $D_g$? 
Since we are providing state $s^f$ from the goal dataset $D_g$ and subgoal supervision $s_g^e$ to the goal-conditioned low-level policy, then the policy predicts action $a$, and we regularize this to be close to the dataset action $a^f$ (either with MSE or the IRL objective)?\\n\\n**Response to Question 3:** We thank the reviewer for this comment and agree that this is indeed a typo. The reviewer is right and the lower-level policy\\u2019s IL regularization should be done with $D^L_g$ (as also mentioned in Eqn 2). We have corrected this typo in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l).\\n\\nWe hope that the responses address the reviewer's concerns. Please let us know, and we will be happy to address additional concerns if any.\"}", "{\"title\": \"Follow Up Request [deadline approaching]\", \"comment\": \"Dear Reviewer,\\n\\nWe thank you once again for your time and efforts in reviewing our work and providing feedback on your rebuttal. This is a gentle reminder to let us know if we have satisfactorily addressed the reviewer's concerns, and to revise our scores if you find it appropriate. We are happy to address any additional remaining concerns. We are grateful for your service to the community.\\n\\nRegards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper proposed a two-stage HRL method that uses expert demonstrations to partition trajectories into subsets of training data for low-level policies. The main claimed contributions are that the method generates efficient subgoal supervision, the method mitigates non-stationarity in HRL, experiments show the work outperforms prior work, and that real-world experiments also demonstrate \\\"impressive performance\\\" in real world tasks. Experiments provide a reasonable amount of evidence of these claims (except the latter, in my opinion, as these experiments were fairly limited). 
Reviewers were almost all positive about this work, and I found no glaring issues with the paper that were either unmentioned by the reviewers or unsatisfactorily addressed by the authors.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers meaningfully engaged with the author's responses. However, the reviewer rating below a 6 (Reviewer GCp5) did not. After reading their concerns and the author response to them (notational issues, addressing nonstationarity), the authors' response to these concerns were satisfactory (e.g., the evidence in Fig. 4 in particular supports the claim of mitigating nonstationarity).\"}", "{\"comment\": \"The authors have sufficiently addressed my questions and concerns, hence I maintain my suggestion to accept with a score of 8.\"}", "{\"title\": \"Urgent Request for Reviewer Response\", \"comment\": \"As per the reviewer's kind feedback and concerns, we have thoroughly provided our responses in the rebuttal and improved the final manuscript. We hope that we have sufficiently addressed the reviewer's concerns.\\n\\nAfter our rebuttal, reviewer di4o maintained a score of 8, and reviewers Cpmf and u7xN both increased their scores to 6 for the paper. Since the deadline is approaching, we urgently request the reviewer to respond to our rebuttal and revise our scores if deemed appropriate. We sincerely hope that our work adds sufficient value to the research community, are we are happy to address any additional or remaining concerns. Once again, we are thankful for the time and efforts of the reviewer, which has allowed us to further strengthen the manuscript.\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer\\n\\nThis is a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period. 
We thank you again for taking the time to review our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer\\n\\nThis is a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period. We thank you again for taking the time to review our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Author Response Part 2/2\", \"comment\": \"> **Question 1:** How do the authors justify the reliance on expert demonstrations in the first phase of PEAR compared to HRL methods that function without such requirements?\\n\\n**Response to Question 1:** Please refer to our response to Weakness 1.\\n\\n\\n> **Question 2:** Can the authors provide additional comparisons with recent HRL approaches to strengthen the positioning of PEAR within the current landscape?\\n\\n**Response to Question 2:** We have performed additional experiments and provided in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l). Please refer to our response to Weakness 2. \\n\\n[1] Wang, Vivienne Huiling, et al. \\\"State-conditioned adversarial subgoal generation.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 8. 2023.\\n\\nWe hope that the responses address the reviewer's concerns. Please let us know, and we will be happy to address additional concerns if any.\"}", "{\"title\": \"Follow up request [deadline approaching]\", \"comment\": \"Dear Reviewer,\\n\\nWe thank you once again for your time and efforts in reviewing our work and providing feedback on your rebuttal. This is a gentle reminder to revise our scores if you find it appropriate, and we are happy to address any additional remaining concerns. 
We are grateful for your service to the community.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Author Response Part 2/2\", \"comment\": \"> **Question 1:** Why is the Q_{Thresh} set to 0? If the low-level reward is -1 for not achieving the goal and 0 for achieving the goal, the Q value should be negative for any correct action that puts the agent on a path to achieve the goal.\\n\\n**Response to Question 1:** We would like to apologize for the confusion and clarify that before comparing with $Q_{thresh}$, the $Q_{\\\\pi^{L}}$ values are normalized using the following equation: $(Q_{\\\\pi^{L}}-\\\\text{min value})/(\\\\text{max value} - \\\\text{min value} )*100$, where $\\\\text{min value}$ and $\\\\text{max value}$ are the minimum and maximum values of $Q_{\\\\pi^{L}}$ respectively. Since the $Q_{\\\\pi^{L}}$ values are normalized before comparison, $Q_{thresh}=0$ is feasible. We would like to clarify that we experimentally found the value of $Q_{thresh}=0$ to work well in the tasks, which shows that the training stability is not hyper-volatile with respect to this hyper-parameter.\\n\\n\\n> **Question 2:** Is the high level policy only trained with a dataset containing expert trajectories? Or is it also trained on its own interaction data? If it is trained on its own interaction data, is there any relabeling done on that?\\n\\n**Response to Question 2:** The higher level policy is trained using joint optimization, where it employs reinforcement learning (**RL**) using its own interaction data (enabling efficient exploration), and additional imitation learning (**IL**) regularization. In this work, we do not perform relabeling on interaction data, but we would like to explore this direction in future work.\\n\\n\\n> **Question 3:** I did not understand the \\u201cmargin\\u201d component of the objective. 
Can you provide another explanation of this component?\\n\\n**Response to Question 3:** In our **primitive enabled adaptive relabeling** procedure, we employ the lower level $Q$ function $Q_{\\\\pi^{L}}$ to select efficient subgoals, which is conditioned on the expert states $s^e$. Since $Q_{\\\\pi^{L}}$ is the lower level $Q$ function and is simultaneously trained, it might over-estimate and provide erroneous values for states $s^e_{unseen}$ that are unseen during previous training (a known issue with Q functions in RL). Therefore, we posit that the over-estimation issue can be avoided by adding a margin objective that penalizes the $Q$ values on out-of-distribution states. We state this as margin classification objective in the paper and empirically found this objective to stabilize training.\\n\\n\\n> **Question 4:** Why does the paper characterize the number of demonstrations as a \\u201chandful\\u201d of demonstrations, when it uses 100 for most tasks?\\n\\n**Response to Question 4:** We realize now that using the word \\u201chandful\\u201d may lead to confusion, and we have removed the keyword from the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l).\\n\\n\\n> **Question 5:** Have you experimented on any domains with image observations?\\n\\n**Response to Question 5:** We did experiment with image observations in the maze navigation, pick and place, and rope environments, and found that $\\\\texttt{PEAR}$ consistently shows impressive performance on the tasks and outperforms the baselines. However, due to resource and time constraints, we decided to not add those results to this paper draft. We will try to finish the experiments in time, and add them to the final paper draft submission. \\n\\nWe hope that the responses address the reviewer's concerns. 
Please let us know, and we will be happy to address additional concerns if any.\"}", "{\"title\": \"Author Response\", \"comment\": \"We would like to express our gratitude to the reviewer for dedicating their valuable time and effort towards reviewing our manuscript. We deeply appreciate the insightful feedback provided, and we have thoroughly responded to reviewer\\u2019s inquiries in the responses provided below.\\n\\n> **Weakness 1:** Inconsistent definition: Section 3 states that the expert data contains only states. But in Section 4.2, the low-level regularization term uses actions from the expert data.\\n\\n**Response to Weakness 1:** We agree with the reviewer and apologize for this oversight. We have corrected this in Section 3 of the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l), and assume access to a small number of directed expert demonstrations $D=(e^i)_{i=1}^N$,\\n\\nwhere $e^i=(s^e_0,a^e_0, s^e_1,a^e_1 \\\\ldots, s^e_{T-1},a^e_{T-1})$.\\n\\n\\n> **Weakness 2:** Addressing the non-stationarity issue in HRL is a main claim of the paper. However, the proposed method does not resolve this issue. The high-level policy still faces non-stationarity, as the transitions in its replay buffer, which are determined by the low-level policy, continue to change throughout the training process.\\n\\n**Response to Weakness 2:** We appreciate the reviewer's deep understanding and insight. We agree that the transitions in its replay buffer still continue to change, and therefore may cause non-stationarity issue. In this work, however, we focus on mitigating non-stationarity by first adaptively relabeling a few expert demonstrations to generate efficient subgoal supervision, and then jointly optimizing HRL agents by employing reinforcement learning (RL) and imitation learning (IL). \\n\\nWe show in Figure 3, 4 and 5 that this indeed mitigates non-stationarity in HRL. 
However, our **primitive enabled adaptive relabeling** and subsequent **joint optimization** based approach can be added on top of approaches like $\\\\texttt{HAC}$, which also relabel replay buffer transitions to handle non-stationarity. This is an interesting research direction, which we would like to explore in the future.\\n\\n\\n> **Question 1:** Many existing works [1,2,3] also use IL loss from data to regularize high-level policy learning. What differentiates the proposed regularization method from these approaches?\\n\\n**Response to Question 1:** The two main contributions of the proposed approach are $(i)$ adaptively relabeling expert demonstrations to generate efficient subgoal supervision, and $(ii)$ jointly optimizing using RL and IL using the generated subgoals. Although several prior approaches employ IL regularization, our approach has two major distinctions:\\n1. Our **primitive enabled adaptively relabeling** approach uses the lower level $Q$ function to generate efficient subgoals that are achievable by the lower level policy. These generated subgoals are used to regularize the higher level policy using IL regularization.\\n2. Our approach employs a joint objective (RL and IL). Since it employs RL, $\\\\texttt{PEAR}$ is able to explore the environment for high reward predictions. Further, the IL term regularizes the learnt higher level policy to predict achievable subgoals, thereby mitigating non-stationarity.\\n\\n> **Question 2:** Why can we set the threshold of Q to 0 (Section 4.1) in all experiments? I believe this hyperparameter should vary, depending on the reward function specific to each task.\\n\\n**Response to Question 2:** We would like to clarify that we experimentally found the value of $Q_{thresh}=0$ to work well in the tasks, which shows that the training stability is not hyper-volatile with respect to this hyper-parameter. 
Further, before comparing with $Q_{thresh}$, the $Q_{\\\\pi^{L}}$ values are normalized using the following equation: $(Q_{\\\\pi^{L}}-\\\\text{min value})/(\\\\text{max value} - \\\\text{min value} )*100$, where $\\\\text{min value}$ and $\\\\text{max value}$ are the minimum and maximum values of $Q_{\\\\pi^{L}}$ respectively. Thus, although we do vary $Q_{thresh}$ depending on the reward function specific to each task, we found that out of those values, $Q_{thresh}=0$ works well for all the tasks.\\n\\nWe hope that the responses address the reviewer's concerns. Please let us know, and we will be happy to address additional concerns if any.\"}", "{\"title\": \"Extremely Urgent Request to the Reviewer\", \"comment\": \"Dear Reviewer,\\n\\nWe thank you once again for your time and efforts in reviewing our work and providing feedback on your rebuttal. Since the deadline is approaching, we urgently request you to revise our scores if the rebuttal has satisfactorily addressed your concerns. After the rebuttal, reviewer di4o maintained a score of 8, and reviewers Cpmf and u7xN both increased their scores to 6 for the paper. We will be happy to address any additional remaining concerns.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer\\n\\nThis is a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period. We thank you again for taking the time to review our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for addressing my concerns. I appreciate the improvements made, and I have raised the score accordingly. However, there are still two points that I believe need further attention:\\n\\n1. Figure 1 needs to be redrawn. 
The text is too small, and it does not clearly convey the innovations of the work. I suggest revisiting the figure to make it more informative and legible.\\n\\n2. Citations remain an issue. I encourage you to approach the references with greater rigor. Many citations are from ArXiv, but there are official conference versions available, such as:\\n\\nVezhnevets, Alexander Sasha, et al. \\\"Feudal networks for hierarchical reinforcement learning.\\\" In: International Conference on Machine Learning, PMLR, 2017, pp. 3540-3549.\\n\\nZhang, Tianren, et al. \\\"Generating adjacency-constrained subgoals in hierarchical reinforcement learning.\\\" Advances in Neural Information Processing Systems, 2020, 33: 21579-21590.\"}", "{\"summary\": \"This paper proposes a method to improve HRL, utilizing a few expert demonstrations. The key insight is that subgoals selected from the dataset can effectively guide exploration for the policy. An adaptive relabeling method is proposed to select the proper subsequent subgoal based on the Q value of the low-level policy. The relabeled data provides an imitation-based regularization for the high-level policy, encouraging it to output reachable, high-quality subgoals for the low-level policy. Experiments in diverse simulation robotic tasks demonstrate the effectiveness of the method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Using a few expert demonstrations to improve goal-based HRL is promising. The proposed adaptive relabeling method for IL regularization is straightforward, well-motivated, and yields good results.\\n2. The paper includes some theoretical analysis.\\n3. The experiments cover a variety of robotic tasks, including real-world test.\", \"weaknesses\": \"1. Inconsistent definition: Section 3 states that the expert data contains only states. But in Section 4.2, the low-level regularization term uses actions from the expert data.\\n2. 
Addressing the non-stationarity issue in HRL is a main claim of the paper. However, the proposed method does not resolve this issue. The high-level policy still faces non-stationarity, as the transitions in its replay buffer, which are determined by the low-level policy, continue to change throughout the training process.\", \"questions\": \"1. Many existing works [1,2,3] also use IL loss from data to regularize high-level policy learning. What differentiates the proposed regularization method from these approaches?\\n2. Why can we set the threshold of Q to 0 (Section 4.1) in all experiments? I believe this hyperparameter should vary, depending on the reward function specific to each task.\\n\\n[1] Pertsch, et al. \\\"Accelerating reinforcement learning with learned skill priors.\\\" Conference on robot learning. PMLR, 2021.\\n[2] Shi, et al. \\\"Skill-based model-based reinforcement learning.\\\" arXiv preprint arXiv:2207.07560 (2022).\\n[3] Yuan, et al. \\\"Pre-training goal-based models for sample-efficient reinforcement learning.\\\" The Twelfth International Conference on Learning Representations. 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow Up Request [deadline approaching]\", \"comment\": \"Dear Reviewer,\\n\\nWe thank you once again for your time and efforts in reviewing our work and providing feedback on your rebuttal. This is a gentle reminder to revise our scores if you find it appropriate, and we are happy to address any additional remaining concerns. 
We are grateful for your service to the community.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Author Response\", \"comment\": \"We are thankful to the reviewer for the insightful comments, prompt response, and dedicating their valuable time and effort towards evaluating our manuscript, which has allowed us to strengthen the manuscript.\"}", "{\"summary\": \"The authors introduce an algorithm, Primitive Enabled Adaptive Relabeling (PEAR), to address the issue of non-stationary transition and reward functions when implementing HRL. Like Relay Policy Learning (RPL) (Gupta 2019), PEAR encourages the high level policy to output feasible subgoals using imitation learning. The main difference from RPL is that instead of imitating the actions from a dataset that occur at fixed intervals, PEAR uses a heuristic to select the latest subgoal that the low level policy can achieve. The authors show in a variety of experiments that PEAR can outperform several baselines.\\n\\nGupta et al. Relay Policy Learning. 2019\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The writing was clear and easy to understand.\", \"The authors included several ablation experiments that provided some important insights.\", \"The paper included some real world robotic experiments.\"], \"weaknesses\": \"My main concern with this approach is that the contribution seems to be marginal as I do not see a compelling reason why PEAR would consistently perform better than (a) HAC (Levy 2019) with replay buffer augmented with demonstration data and (b) hierarchical behavior cloning, in which pure imitation learning is applied to both a high and low level policies.\\n\\nHAC already addresses the problem of nonstationarity through relabeling and subgoal testing. The problem with HAC is that it does not have a built-in mechanism to explore but this can be remedied with the demonstration data that is provided to PEAR. 
An advantage of HAC + demonstration data over PEAR, which would use a pure RL objective, is that if the demonstration data is suboptimal, it would not have an imitation learning regularization term forcing the agent to output suboptimal subgoals. HAC has also demonstrated it can learn more than two levels of hierarchy. The results of HER+BC, which I understood to be HER with a replay buffer augmented with demonstration data, were often ultimately able to match the performance of PEAR (see Figure 14), making it more likely that HAC, which is close to a hierarchical version of HER, should be able to match PEAR. \\n\\nIn addition, it seems that a pure hierarchical imitation learning approach, in which both levels are trained with supervised learning should also work, at least potentially with more data. The baseline BC may not have worked well because the tasks were too long horizon, but the addition of a high level policy trained with imitation learning should help.\\n\\nLevy et al. Hierarchical Actor-Critic. 2017\", \"questions\": \"1. Why is the Q_{Thresh} set to 0? If the low-level reward is -1 for not achieving the goal and 0 for achieving the goal, the Q value should be negative for any correct action that puts the agent on a path to achieve the goal.\\n2. Is the high level policy only trained with a dataset containing expert trajectories? Or is it also trained on its own interaction data? If it is trained on its own interaction data, is there any relabeling done on that?\\n3. I did not understand the \\u201cmargin\\u201d component of the objective. Can you provide another explanation of this component?\\n4. Why does the paper characterize the number of demonstrations as a \\u201chandful\\u201d of demonstrations, when it uses 100 for most tasks?\\n5. 
Have you experimented on any domains with image observations?\\n\\nI am willing to raise my score if the authors can (i) provide some principled reasons why PEAR should outperform HAC+demonstrations and hierarchical behavior cloning and (ii) provide some answers to the above questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up request [deadline approaching]\", \"comment\": \"Dear Reviewer,\\n\\nWe thank you once again for your time and efforts in reviewing our work and providing feedback on our rebuttal. This is a gentle reminder to let us know if we have satisfactorily addressed the reviewer's concerns, and to revise our scores if you find it appropriate. We are happy to address any additional remaining concerns. We are grateful for your service to the community.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Author Response Part 1/2\", \"comment\": \"We would like to express our gratitude to the reviewer for dedicating their valuable time and effort towards evaluating our manuscript, which has allowed us to strengthen the manuscript. We deeply appreciate the insightful feedback provided, and we have thoroughly responded to the reviewer's inquiries in the responses provided below.\\n\\n> **Weakness 1:** Expert Demonstrations Requirement: The first phase of PEAR relies on expert demonstrations to generate subgoals, which raises concerns about the fairness of comparison with other HRL methods that do not require such demonstrations. This could affect the generalizability of the findings.\\n\\n**Response to Weakness 1:** We agree with the reviewer that the proposed approach relies on *expert demonstrations* to generate subgoals, which can be challenging in environments where generating such *demonstrations* is computationally expensive (we also mention this in the discussion section of the paper). 
\\n\\nHowever, we would like to respectfully point out that:\\n* **collecting a few *expert demonstrations* is often feasible** in such robotics control tasks, and indeed several prior works employ such *expert demonstrations* (covered in detail in the Related Work section). \\n* the main contribution of this work is to use **primitive enabled adaptive relabeling** to generate efficient subgoals, and subsequently leverage them in our **joint optimization** based approach.\\n\\nAlthough we compare our approach with several baselines that employ expert demonstrations (Relay Policy Learning $\\\\texttt{RPL}$, Discriminator Actor Critic $\\\\texttt{DAC}$, Behavior Cloning $\\\\texttt{BC}$), the purpose of comparisons with baselines that do not use expert demonstrations is as follows:\\n* $\\\\texttt{HAC}$: to show that $\\\\texttt{PEAR}$ is able to better mitigate non-stationarity.\\n* $\\\\texttt{RAPS}$: to show that $\\\\texttt{PEAR}$ outperforms approaches that use behavior priors.\\n* $\\\\texttt{HIER}$ and $\\\\texttt{HIER-NEG}$: to show the importance of extracting subgoals using **primitive enabled adaptive relabeling** and subsequent **joint optimization**.\\n* $\\\\texttt{FLAT}$: to show the importance of our hierarchical formulation (temporal abstraction and improved exploration).\\n\\nThus, we would like to respectfully point out that such comparisons offer additional insights into the factors contributing to the superior performance of $\\\\texttt{PEAR}$ compared to the baselines.\\n\\n\\n**Additional Experiments:** Further, in response to reviewer Cpmf, we perform additional experiments comparing our approach to two more baselines that employ expert demonstrations (Figure 15 in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l)).\\n\\n1. 
$\\\\texttt{HAC-demos}$ uses hierarchical actor critic with the RL objective and is jointly optimized using additional behavior cloning objective, where the lower level uses primitive expert demonstrations and the upper level uses subgoal demonstrations extracted using fixed window based approach (as in $\\\\texttt{RPL}$). \\n2. $\\\\texttt{HBC}$ (Hierarchical behavior cloning) uses the same demonstrations as $\\\\texttt{HAC-demos}$ at both levels, but is trained using only behavior cloning objective (thus, does not employ RL).\\n\\nAs seen in Figure 15, $\\\\texttt{PEAR-IRL}$ significantly outperforms both the baselines, demonstrating the efficacy of our **primitive enabled adaptive relabeling** and **joint optimization** based approach.\\n\\n\\n> **Weakness 2:** Lack of Recent Comparisons: The paper does not include comparisons with several hierarchical reinforcement learning methods published in the last three years. This omission limits the contextual relevance of the results and could misrepresent the state of the art, such as [1], [2].\\n\\n**Response to Weakness 2:** Based on the reviewer's concern, we have performed and added additional experiments in the paper to compare our approach with $\\\\texttt{SAGA}$ [1]. $\\\\texttt{SAGA}$ employs state conditioned discriminator network training to address non-stationarity, by ensuring that the high-level policy generates subgoals that align with the current state of the low-level policy. We provide the comparisons in Figure 3 of the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l). \\n\\nWe find that although $\\\\texttt{SAGA}$ performs well in maze navigation environment, it fails to solve harder tasks, where $\\\\texttt{PEAR}$ is able to significantly outperform the baseline. 
This demonstrates that $\\\\texttt{SAGA}$ suffers from the non-stationarity issue in harder long horizon tasks, and $\\\\texttt{PEAR}$ is able to better mitigate this issue and effectively solve long horizon tasks.\\n\\n\\n> **Weakness 3:** Citation Issues: Many citations are sourced from CoRR ... and accurate referencing.\\n\\n**Response to Weakness 3:** We thank the reviewer for pointing this out. We have corrected all the citations with their formal venue citations in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l).\"}", "{\"summary\": \"The paper presents a novel approach called Primitive Enabled Adaptive Relabeling (PEAR) aimed at enhancing Hierarchical Reinforcement Learning (HRL) for complex long-horizon tasks. The authors propose a two-phase methodology where the first phase involves adaptive relabeling of expert demonstrations to generate subgoals, followed by joint optimization of HRL agents through Reinforcement Learning (RL) and Imitation Learning (IL). The results indicate that PEAR outperforms various baselines in both synthetic and real-world robotic tasks, achieving up to 80% success rates in challenging environments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Innovative Approach:** The use of adaptive relabeling to generate subgoals tailored to the capabilities of the lower primitive is a significant contribution that addresses the non-stationarity issue in HRL.\\n2. **Theoretical Justification:** The authors provide theoretical analysis that bounds the sub-optimality of their approach, lending credibility to their claims.\\n3. **Comprehensive Experiments:** Extensive experiments across multiple challenging tasks demonstrate the practical efficacy of PEAR, showing improved performance and sample efficiency over existing methods.\\n4. 
**Real-World Application:** The validation of PEAR in real-world tasks enhances the relevance and applicability of the research.\", \"weaknesses\": \"1. **Expert Demonstrations Requirement:** The first phase of PEAR relies on expert demonstrations to generate subgoals, which raises concerns about the fairness of comparison with other HRL methods that do not require such demonstrations. This could affect the generalizability of the findings.\\n2. **Lack of Recent Comparisons:** The paper does not include comparisons with several hierarchical reinforcement learning methods published in the last three years. This omission limits the contextual relevance of the results and could misrepresent the state of the art, such as [1], [2].\\n\\n[1] Kim J, Seo Y, Shin J. Landmark-guided subgoal generation in hierarchical reinforcement learning[J]. Advances in neural information processing systems, 2021, 34: 28336-28349.\\n\\n[2] Wang, Vivienne Huiling, et al. \\\"State-conditioned adversarial subgoal generation.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 8. 2023.\\n\\n3. **Citation Issues:** Many citations are sourced from CoRR instead of their formal conference versions. This oversight should be corrected to ensure academic integrity and accurate referencing. For example, the last two references should be cited as follows (Note that there are many other citation errors beyond these two):\\n\\nWulfmeier, Markus, et al. \\\"Data-efficient hindsight off-policy option learning.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\nZhang, Tianren, et al. 
\\\"Generating adjacency-constrained subgoals in hierarchical reinforcement learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 21579-21590.\", \"questions\": [\"How do the authors justify the reliance on expert demonstrations in the first phase of PEAR compared to HRL methods that function without such requirements?\", \"Can the authors provide additional comparisons with recent HRL approaches to strengthen the positioning of PEAR within the current landscape?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you authors for the response to my questions. I increased my score by a point as I believe the approach is sufficiently new. However, I still do not understand why HAC is characterized as nonstationary. The reward for subgoal actions in HAC never depend on the value functions of states achieved by the current low level policy, they only depend on the value functions of states achieved by fully trained low level policies as a result of the hindsight transitions. The problem with HAC is not nonstationarity but rather exploration. Your HAC+demos experiment does not seem to do what I requested, which was a pure HAC with the replay buffers supplemented with the fixed window transitions from the demonstrations rather than incorporating the imitation learning objective objective into HAC. I also do not understand why a pure hierarchical behavior cloning approach, in which high level policy is trained to output states reached from demonstrations, would be \\\"nonstationary\\\". This objective would have no dependence on the states reached by the current low level policies.\"}", "{\"title\": \"Further Author Response\", \"comment\": \"We are thankful to the reviewer for increasing the score and for the insightful feedback. 
We agree with the reviewer and apologize for erroneously mentioning that HAC-demos and HBC still suffer from non-stationarity. However, these methods still lead to poor performance due to the issue of unachievable subgoal prediction. We explain this in detail below:\\n\\n> **Question 1**: I still do not understand why HAC is characterized as nonstationary. The reward for subgoal actions in HAC never depends on the value functions of states achieved by the current low level policy; it only depends on the value functions of states achieved by fully trained low level policies as a result of the hindsight transitions. The problem with HAC is not nonstationarity but rather exploration.\\n\\n**Response to Question 1**: HAC addresses the problem of non-stationarity through *replay buffer relabeling* and *subgoal testing*. However, HAC's *replay buffer relabeling* assumes the achieved subgoal to be the predicted subgoal, and accordingly re-evaluates sparse rewards. Thus, HAC may learn to predict a subgoal $g_t$ that is not achievable by the current lower level policy, leading to poor performance. PEAR is able to mitigate this by employing **primitive enabled adaptive relabeling** to always predict achievable subgoals for the lower level primitive. We also demonstrate this in our extensive experiments on complex and sparsely-rewarded long horizon tasks, where PEAR is able to significantly outperform HAC.\\n\\n\\n> **Question 2**: Your HAC+demos experiment does not seem to do what I requested, which was a pure HAC with the replay buffers supplemented with the fixed window transitions from the demonstrations rather than incorporating the imitation learning objective into HAC.\\n\\n**Response to Question 2**: We would also like to clarify that we did implement the pure HAC approach with the replay buffer appended with fixed window transitions from the expert demonstrations, as the reviewer had requested. 
However, in order to ensure fair comparisons with our approach, we also incorporated the imitation learning objective (IL) with HAC objective. We found this IL objective to improve the performance when compared to pure HAC based approach.\\n\\n\\n> **Question 3**: I also do not understand why a pure hierarchical behavior cloning approach, in which high level policy is trained to output states reached from demonstrations, would be \\\"nonstationary\\\". This objective would have no dependence on the states reached by the current low level policies.\\n\\n**Response to Question 3**: In hierarchical BC approach, the higher level is trained using fixed window based subgoals from expert demonstrations. Notably, the fixed window approach may select **unachievable subgoals** for the lower level policy. Due to this, the lower level policy might not be able to achieve the predicted subgoal $g_t$. Further, since HBC only uses imitation learning to train hierarchical policies, it is unable to efficiently explore the environment, leading to sub-optimal predictions. These reasons account for the poor performance of HBC baseline, as demonstrated by our experiments. In contrast, our **adaptive relabeling** based approach always selects achievable subgoals according to the lower level policy, and is able to efficiently explore the environment using RL.\\n\\n\\nWe hope that we have been able to clarify the reviewer's concerns, and if deemed appropriate, we request the reviewer to kindly increase our score. We are grateful to the reviewer for the service to the community.\"}", "{\"summary\": \"PEAR combines its key feature, adaptive goal relabeling, with IL regularization (either MSE or IRL), and other tricks (e.g. 
margin classification objective) to beat several prior HRL methods on a standard array of tasks using expert demonstrations.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors carefully outline their construction of the PEAR algorithm, justifying the usage of each component (goal relabeling, the general algorithm, the joint optimization framework, etc) thoroughly and clearly. The sub-optimality analysis provides further credence to their method. The method is novel, and notably outperforms previous HRL works (while, in some cases, removing the need for e.g. hand-made action primitives).\\n\\nI agree with the author view that the significance of the work should be gauged less by its immediate improvement over other LfD methods, and more by its conceptual groundwork. In this regard, this paper is well-written, the findings are well-presented, and the extensive ablations provide further insight into key aspects of the method.\", \"weaknesses\": \"As the authors note, the method is currently reliant on expert demonstrations. However, many benchmarks exist which include other kinds of demonstrations, including human teleop. While the method may not perform well on these demonstrations just yet (as it is listed as an aim of future work), providing results on suboptimal demonstrations would help demonstrate concretely the strong and weak points of the authors' method, and potentially provide insights on why it fails in these settings.\\n\\nFurthermore, it seems inaccurate to state that PEAR uses only \\u201ca handful\\u201d of demonstrations, when Fig. 13 shows that generally 50-70+ demonstrations are needed to solve the provided tasks (with the exception of Franka Kitchen, which provides fewer demos).\", \"questions\": \"Have you tested on suboptimal demonstrations? 
Are there any interesting findings/results which may point to future avenues for research or failings of the method which should be investigated/improved upon?\\n\\nWere you able to find notable reasons for why the MSE-regularized learning objective would occasionally outperform the IRL-regularized version? Is there any relationship to task difficulty, data diversity, etc?\\n\\nIn \\u201cAlgorithm 2: PEAR,\\u201d line 8, shouldn\\u2019t the lower-level policy\\u2019s IL regularization be done with $D_g$? Since we are providing state $s^f$ from the goal dataset $D_g$ and subgoal supervision $s^e_g$ to the goal-conditioned low-level policy, then the policy predicts action $a$, and we regularize this to be close to the dataset action $a^f$ (either with MSE or the IRL objective)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response to Area Chairs and Reviewers\", \"comment\": [\"We thank all the reviewers for their valuable feedback and insightful comments. We are particularly encouraged that the reviewers consider our proposed approach **novel and promising** (u7xN, di4o and GCp5), our **theoretical analysis insightful** (u7xN and GCp5), our **empirical evaluation and ablations impressive and extensive** (u7xN, Cpmf, GCp5, and di4o), the **paper well-written and well-organized** (Cpmf, di4o), and our **real world experiments credible** (u7xN, Cpmf, GCp5). We address individual questions of reviewers in separate responses. 
We have uploaded a modified version of our paper based on reviewers' suggestions: [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l).\", \"We first enlist additional experiments to address the reviewers' concerns for your consideration.\", \"### Additional Experiments.\", \"**Comparison with state-of-the-art baseline**: In order to address reviewer **u7xN**'s concerns, we implemented the recent state-of-the-art approach **SAGA** (State-conditioned adversarial subgoal generation) and critically compared it with PEAR. We demonstrate that PEAR is able to significantly outperform this approach, which shows that PEAR efficiently mitigates non-stationarity in HRL and demonstrates impressive performance over state-of-the-art approaches. We are happy that the reviewer acknowledged the efficacy of this experiment, and kindly raised the score to 6.\", \"**Additional Hierarchical baselines: HAC-demos and HBC**: In order to address reviewer **Cpmf**'s concerns, we implemented two additional baselines: **HAC-demos** (Hierarchical actor critic with expert demonstrations) and **HBC** (Hierarchical behavior cloning) and compared them with PEAR. This comparison shows that *primitive enabled adaptive relabeling* and *joint optimization* are crucial for mitigating non-stationarity in HRL and for improved performance. We are happy that the reviewer acknowledged the efficacy of this experiment, and kindly raised the score to 6.\", \"We have added the additional experiments and improvements suggested by the reviewers in the submitted manuscript, and provided detailed clarifications to address all the reviewers' concerns in the rebuttal. 
We also provide a summary of our core contributions below.\", \"### Summary of core contributions:\", \"In this work, we propose a novel approach PEAR that efficiently mitigates non-stationarity in off-policy HRL for solving complex long horizon tasks.\", \"Our adaptive relabeling based approach generates efficient higher level subgoal supervision according to the current goal achieving capability of the lower primitive.\", \"We perform detailed theoretical analysis to bound the sub-optimality of our approach, and to theoretically justify the benefits of periodic re-population using adaptive relabeling.\", \"We perform extensive experimentation on six sparse robotic environments to empirically demonstrate our superior performance and sample efficiency over prior baselines.\", \"We show that PEAR demonstrates impressive performance in multiple challenging real world tasks.\", \"In summary, we propose a novel hierarchical reinforcement learning algorithm that is a promising step towards building practical robotic systems for real-world scenarios.\", \"We greatly appreciate all reviewers' time and effort in reviewing our paper. We hope that our responses and updates to the paper have addressed all the concerns, and solidified **PEAR** as a promising HRL approach for building practical systems to solve complex tasks.\"]}", "{\"title\": \"Author Response Part 1/2\", \"comment\": \"We are thankful to the reviewer for dedicating their valuable time and effort towards evaluating our manuscript. 
We deeply appreciate the insightful feedback provided, and we have thoroughly responded to the reviewer\u2019s inquiries in the responses provided below.\\n\\n> **Weakness 1:** My main concern with this approach is that the contribution seems to be marginal as I do not see a compelling reason why PEAR would consistently perform better than (a) HAC (Levy 2019) with replay buffer augmented with demonstration data\\n\\n**Response to Weakness 1:** We thank the reviewer for raising this concern. Before stating our response, we would respectfully like to clarify the following:\\n1. $\\\\texttt{PEAR}$ uses our novel **primitive enabled adaptive relabeling** approach to leverage expert demonstrations and generate efficient subgoal supervision for the higher level policy. \\n2. $\\\\texttt{PEAR}$ uses a **joint optimization** objective that employs reinforcement learning (**RL**) and additional imitation learning (**IL**) regularization. RL allows the hierarchical policies to continuously explore the environment for high reward predictions, and IL regularizes the higher level objective to predict achievable subgoals for the lower primitive (and regularizes the lower level primitive to predict efficient primitive actions).\\n\\n**$\\\\texttt{HAC}$ Limitation:** Although $\\\\texttt{HAC}$ tries to deal with non-stationarity using relabeling and subgoal testing as mentioned by the reviewer, it fails to mitigate non-stationarity in complex long horizon tasks (as seen in Figure 3). \\n\\n**$\\\\texttt{PEAR}$ can explore using RL:** Further, although $\\\\texttt{PEAR}$ employs imitation learning regularization, it also uses RL in our joint objective formulation, which allows it to explore the environment for high reward predictions, even in the presence of sub-optimal demonstrations. 
\\n\\n**Comparison with HER-BC:** We would also like to point out that although $\\\\texttt{HER-BC}$ baseline works well in easier tasks, it is unable to outperform $\\\\texttt{PEAR}$ and perform well in harder long horizon tasks. Further, we would like to point out that $\\\\texttt{HAC}$ additionally suffers from non-stationarity issue in HRL, since it is hierarchical (unlike the non-hierarchical $\\\\texttt{HER-BC}$).\\n\\n\\n**Additional experiments:** To support our claims, we perform additional experiments: $(i)$ HAC with demos ($\\\\texttt{HAC-demos}$) and $(ii)$ Hierarchical behavior cloning ($\\\\texttt{HBC}$) in **Figure 15** in the rebuttal pdf [**rebuttal_pdf_link**](https://openreview.net/pdf?id=0nJEgNpb4l) to demonstrate the efficacy of $\\\\texttt{PEAR}$ over these baselines. \\n\\nWe would like to point out that it is not straight-forward to train the higher level in hierarchical approaches with behavior cloning, since we typically do not have access to higher level subgoal supervision ($\\\\texttt{PEAR}$ uses primitive enabled adaptive relabeling to acquire efficient subgoal supervision). Therefore, for the following experiments, we employ subgoals extracted using fixed window based approach (as in Relay Policy Learning approach $\\\\texttt{RPL}$): \\n\\n1. $\\\\texttt{HAC-demos}$ uses hierarchical actor critic with the RL objective and is jointly optimized using additional behavior cloning objective, where the lower level uses primitive expert demonstrations and the upper level uses subgoal demonstrations extracted using fixed window based approach (as in $\\\\texttt{RPL}$). \\n2. 
$\\\\texttt{HBC}$ (Hierarchical behavior cloning) uses the same demonstrations as $\\\\texttt{HAC-demos}$ at both levels, but is trained using only behavior cloning objective (thus, does not employ RL).\\n\\nAs seen in **Figure 15**, although $\\\\texttt{HAC-demos}$ shows good performance in the easier maze navigation environment, both $\\\\texttt{HAC-demos}$ and $\\\\texttt{HBC}$ fail to solve the tasks in harder environments, and $\\\\texttt{PEAR-IRL}$ significantly outperforms both the baselines, demonstrating the efficacy of our **primitive enabled adaptive relabeling** and **joint optimization** based approach. Note that both $\\\\texttt{HAC-demos}$ and $\\\\texttt{HBC}$ suffer from non-stationarity issue in HRL, which accounts for their poor performance. This shows that our approach is able to efficiently mitigate non-stationarity in HRL. Finally, **primitive enabled adaptive relabeling** and subsequent **joint optimization** can be added on top of $\\\\texttt{HAC}$, which is an interesting direction we would like to explore in future work.\"}", "{\"title\": \"Author Response\", \"comment\": \"We are thankful to the reviewer for the insightful comments, prompt response, and dedicating their valuable time and effort towards evaluating our manuscript, which has allowed us to strengthen the manuscript. We are also glad that we could address the reviewer's concerns. Further,\\n1. We have re-drawn Figure 1 to improve the text size and to make it more legible.\\n2. We apologize for failing to fix the citations earlier, and we have fixed the citations according to their formal venues.\\n\\nWe hope that the responses address the reviewer's concerns. Please let us know, and we will be happy to address additional concerns if any.\"}" ] }
0n4bS0R5MM
VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control
[ "Sherwin Bahmani", "Ivan Skorokhodov", "Aliaksandr Siarohin", "Willi Menapace", "Guocheng Qian", "Michael Vasilkovsky", "Hsin-Ying Lee", "Chaoyang Wang", "Jiaxu Zou", "Andrea Tagliasacchi", "David B. Lindell", "Sergey Tulyakov" ]
Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over camera movement, which is critical for downstream applications related to content creation, visual effects, and 3D vision. Recently, new methods demonstrate the ability to generate videos with controllable camera poses---these techniques leverage pre-trained U-Net-based diffusion models that explicitly disentangle spatial and temporal generation. Still, no existing approach enables camera control for new, transformer-based video diffusion models that process spatial and temporal information jointly. Here, we propose to tame video transformers for 3D camera control using a ControlNet-like conditioning mechanism that incorporates spatiotemporal camera embeddings based on Plucker coordinates. The approach demonstrates state-of-the-art performance for controllable video generation after fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our work is the first to enable camera control for transformer-based video diffusion models.
[ "video generation", "3d", "diffusion" ]
Accept (Poster)
https://openreview.net/pdf?id=0n4bS0R5MM
https://openreview.net/forum?id=0n4bS0R5MM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yoXeugkuRF", "yBw2PXyBVO", "w9tmyOARXJ", "qrewKGAh9S", "qpLEXKbGyp", "pdnFsXNyyw", "liW6pezaho", "lRHnd1sUji", "es6FIqwF5j", "agIFV9Fpis", "TjrcIBBcgv", "Q088PCKSYY", "M8TWs3NOAK", "LROnH4Nhnt", "L7N85NEFzL", "Jc9clfprKm", "GxuAOFwHI0", "DkQ83qxzWe", "DGdee8xHXZ", "CMqlCLFTYm", "C4CbujugSK", "BLNCUZyzqR", "9BcHupT7Zt", "8CrJYMupkP", "82OJUkZJmT", "70ZHmW4loq", "2uQaJjhfyw", "2hL1o8NvZV", "2502XuMzJV", "0DYtPYW8wu", "09VEeoAC1B" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732906113908, 1732636486833, 1732834682368, 1732518658703, 1732517964474, 1732834554811, 1732518094289, 1732738018735, 1732517401519, 1733176802146, 1733215280075, 1732784235661, 1732923319656, 1730258536982, 1733277757388, 1732834483107, 1732783645298, 1730487009862, 1733277725918, 1730607270145, 1734626811348, 1730633889428, 1730545825633, 1732517496149, 1732625120693, 1732518465055, 1733277933274, 1737523680094, 1733277625454, 1732783325908, 1732834399916 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_qX6r" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5047/Reviewer_DiqV" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_ZZhZ" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_DiqV" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Area_Chair_MFCV" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_B4pY" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_qX6r" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_B4pY" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ], [ "ICLR.cc/2025/Conference/Submission5047/Reviewer_uN7u" ], [ "ICLR.cc/2025/Conference/Submission5047/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response. Can the authors please clarify how exactly CameraCtrl is re-implemented? I understand the backbone SnapVideo is the same. Please provide as much information as possible because the current descriptions seem problematic.\"}", "{\"title\": \"Official Response by Reviewer qX6r\", \"comment\": \"Thanks authors for the detailed response to my questions. I appreciate the additional results for other transformer-based diffusion models and more ood camera trajectories, which shall be included in the revision to make this paper more solid. 
My concerns have been addressed. While I still think novelty is slightly limited, this does not affect the value of this paper.\"}", "{\"comment\": \"Dear Reviewer uN7u,\\n\\nWe appreciate your interest in engaging with Reviewer DiqV's assessment of our work. However, as the authors, we would like to address the comparison concerns directly, which we also outlined in our response to Reviewer uN7u.\\n\\n> **CameraCtrl implementation**\\n\\nWe believe there is a misunderstanding regarding the implementation of CameraCtrl. We do **not** fine-tune the pre-trained camera encoder from AnimateDiff since this would not generalize well, and even zero-convolutions might not easily adapt to the SnapVideo model. What we mean by \u201coriginal camera encoder module\u201d is the code and the architecture that we adapt into the same base model codebase we use, i.e., we do not use the weights from AnimateDiff-trained camera encoders. We start fine-tuning the camera encoder module from the **same** SnapVideo model, using the **same** batch size, **same** number of iterations, and **same** parameter size as our proposed method for fair comparisons. We will update the draft to emphasize this as part of the next revision of the paper.\\n\\n> **Novelty**\\n\\nFurthermore, we highlighted the novelty of our work in our response to Reviewer uN7u. We represent extrinsics and intrinsics as spatio-temporal Plucker embeddings, patchify them into spatio-temporal Plucker tokens, and then align them with the video tokens. This procedure is unique to transformer-based models and was not investigated by any other work, as previous works only investigated U-Net-based video models. Second, we propose to align these spatiotemporal Plucker tokens with the video tokens using a ControlNet-inspired setup. Other works, including MotionCtrl and CameraCtrl, do not use ControlNet to infuse cameras into the base video model. 
We observe that spatio-temporal ControlNets are very effective and cause little degradation in motion and quality, as shown in the experiments. Note that using ControlNet is only possible after representing cameras in the same spatio-temporal patch space as the original video tokens as part of the first step we are doing.\\nNovelty is not always about inventing a new complicated layer or mechanism; it can also lie in connecting different mechanisms and showing that a simple approach is more effective. In fact, the community typically prefers to build upon simple approaches that just work. We are not claiming to invent Plucker or ControlNet; it\\u2019s the technical solution of representing cameras as spatiotemporal Plucker tokens, using a ControlNet-type of setup to align them with spatiotemporal video tokens pixel-wise, and demonstrating that this technique is significantly more effective than any other proposed technique. We also demonstrate with extensive experiments that using raw extrinsics (MotionCtrl) or camera encoders without ControlNet-type of camera alignment (CameraCtrl) work significantly worse, degrading quality, motion, and/or camera accuracy. We believe our approach could be a very effective baseline for camera control in video diffusion transformers and a good starting point that can be adapted by many follow-up works.\"}", "{\"comment\": \"We deeply appreciate the reviewer's detailed and positive assessment of our work. Below, we would like to clarify several concerns.\\n\\n> **Novelty concern (part 1/2): Plucker camera embeddings have already been explored in CameraCtrl.**\\n\\nWe respectfully want to emphasize that our work developed the use of Pl\\u00fccker coordinates for camera control independently, and our core technical innovation lies in the transformer-centric design. 
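As a minimal, hypothetical illustration of the Plücker parameterization and patchification discussed in this thread (not the paper's exact code — conventions such as pixel centering, normalization, and patch size are assumptions here):

```python
import numpy as np

def plucker_embedding(K, R, t, H, W):
    """Per-pixel Plucker ray embedding (6 channels) for one camera.

    K: (3, 3) intrinsics; R: (3, 3) world-to-camera rotation;
    t: (3,) world-to-camera translation.
    """
    c = -R.T @ t  # camera center in world coordinates
    # Pixel centers in homogeneous coordinates (u, v, 1)
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)          # (H, W, 3)
    # Unit ray directions in world coordinates: R^T K^{-1} [u, v, 1]^T
    d = pix @ np.linalg.inv(K).T @ R                          # (H, W, 3)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Plucker coordinates of each ray: (moment, direction) = (c x d, d)
    m = np.cross(np.broadcast_to(c, d.shape), d)
    return np.concatenate([m, d], axis=-1)                    # (H, W, 6)

def patchify(emb, p):
    """Flatten non-overlapping p x p patches of an (H, W, C) map into tokens."""
    H, W, C = emb.shape
    x = emb.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, p * p * C)                           # (H*W/p^2, p*p*C)
```

Per frame, these patch tokens share the grid layout of the video patch tokens, which is what makes a pixel-wise (token-wise) alignment between camera and video tokens possible.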
We developed a new projection scheme that effectively integrates camera information into the representation space of video tokens, which is crucial for joint spatio-temporal processing in transformer architectures. This technical challenge required novel solutions distinct from those used in traditional UNet-based approaches, as transformers handle information flow and processing in fundamentally different ways.\\n\\n> **Novelty concern (part 2/2): Using RealEstate10K has already been explored in MotionCtrl.**\\n\\nWe believe there is a misunderstanding: we do not argue that the use of RealEstate10K is a novel component of our work. The key novelty of our paper lies in exploring and designing a lightweight and precise camera conditioning mechanism for transformer-based diffusion models.\\n\\n> **How would the method perform on top of DiT?**\\n\\nThat\\u2019s a great question! We incorporate our approach into a vanilla DiT model trained in the latent space of CogVideoX [1]. We include these results as part of the revised paper in Table 10 and Table 11. We furthermore include visual samples in the *rebuttal.html*.\\n\\nInstead of building on top of read attention in FIT, we incorporate the ControlNet conditioning on top of the vanilla attention mechanism of the actual tokens. We provide evaluations in the two Tables below and observe that the vanilla DiT version further improves quality and camera accuracy on out-of-distribution prompts (MSR-VTT). 
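To make the ControlNet-style conditioning concrete, here is a minimal NumPy toy of the general pattern (cross-attention from video tokens to camera tokens with a zero-initialized output projection); the class name, dimensions, and single-head attention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class CameraConditioningBlock:
    """Cross-attention from video tokens to camera (Plucker) tokens, with a
    zero-initialized output projection, so the block is an exact no-op at
    initialization and the base model's behavior is preserved."""

    def __init__(self, dim):
        scale = 1.0 / np.sqrt(dim)
        self.Wq = rng.standard_normal((dim, dim)) * scale
        self.Wk = rng.standard_normal((dim, dim)) * scale
        self.Wv = rng.standard_normal((dim, dim)) * scale
        self.Wo = np.zeros((dim, dim))  # zero-init, ControlNet-style

    def __call__(self, video_tokens, camera_tokens):
        q = video_tokens @ self.Wq                      # (Nv, dim)
        k = camera_tokens @ self.Wk                     # (Nc, dim)
        v = camera_tokens @ self.Wv                     # (Nc, dim)
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (Nv, Nc)
        # Residual injection of camera features into the video tokens
        return video_tokens + (attn @ v) @ self.Wo
```

The zero-initialized projection is what lets fine-tuning start from the unmodified pre-trained model and smoothly learn to inject the weak camera signal.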
We believe that our proposed method of spatio-temporal Plucker tokens and aligning them with video patch tokens through a ControlNet-type of conditioning mechanism is agnostic to the transformer architecture and will serve as a starting point for follow-up works.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.518|0.161|0.627|0.148|\\n|CameraCtrl|0.532|0.165|0.578|0.220|\\n|Ours (FIT DiT)|**0.409**|**0.043**|0.504|0.050|\\n|Ours (vanilla DiT)|0.421|0.056|**0.486**|**0.047**|\\n\\n|Method|FID (RE10K)|FVD (RE10K)|CLIP (RE10K)|FID (MSR-VTT)|FVD (MSR-VTT)|CLIP (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|1.50|52.30|0.2708|9.97|183.57|0.2677|\\n|CameraCtrl|2.28|66.31|0.2730|8.47|181.90|0.2690|\\n|Ours (FIT DiT)|1.40|42.43|0.2807|7.80|165.18|0.2689|\\n|Ours (vanilla DiT)|**1.21**|**38.57**|**0.2834**|**6.88**|**137.62**|**0.2790**|\\n\\n[1] Yang et al., Cogvideox: Text-to-video diffusion models with an expert transformer, arXiv 2024\\n\\n> **MotionCtrl backbones should be frozen.**\\n\\nWe already trained and evaluated a variant of MotionCtrl where the backbone is frozen in Sec. B.3 with comparisons in Tables 5, 6, and 7, as part of the appendix. While the quality and motion is less degraded, this variant further degrades the camera control since such design lacks enough expressivity to follow a viewpoint conditioning signal.\\n\\n> **The amount of trainable parameters is unknown.**\\n\\nWe use 230M trainable parameters for VD3D. For fair comparisons, we used for both MotionCtrl and CameraCtrl experiments the **identical** number of parameters to also match GPU memory usage. We train all variants with the same batch size and same number of iterations. 
We included this information in Appendix C.\\n\\n> **Use R\\u2019_f and t\\u2019_f to represent normalized extrinsics.**\\n\\nWe agree with the reviewer on this observation and thank them for the valuable suggestion. We updated Section 3 to accommodate this improvement.\\n\\n> **Baseline implementation details about MotionCtrl/CameraCtrl.**\\n\\nCorrect, MotionCtrl uses a simple MLP to map from input cameras to camera embeddings that are concatenated to the video patch tokens. CameraCtrl uses a more complex camera encoder including temporal attention and 2D convolutions to map from input cameras to camera embeddings.\\n\\n> **What is the key ingredient of motion preservation?**\\n\\nThe key to preserving motion lies in our carefully designed conditioning mechanism. Through extensive experimentation, we found that two components are crucial: our Pl\\u00fccker-based projection scheme to obtain camera tokens and the way we integrate them into video tokens of a diffusion transformer via cross-attention followed by zero-initialized convolution to preserve the model initialization. This approach effectively balances the challenging trade-off between camera control precision and motion quality. While enforcing camera control typically degrades scene dynamics, our solution achieves precise camera control while preserving natural motion.\"}", "{\"comment\": \"We thank the reviewer for their thorough feedback. Below, we address the raised concerns.\\n\\n> **Overfitting on RealEstate10K camera trajectories.**\\n\\nWe respectfully note that all our evaluations were performed on the test-set RealEstate10K camera trajectories. Regarding the out-of-distribution cameras, as the reviewer rightfully pointed out, we provide quantitative results with random camera trajectories in Table 8 in Appendix B. 
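Circling back to the extrinsics normalization (R'_f, t'_f) discussed above: a brief sketch of one common first-frame normalization convention, with illustrative function names (the exact convention in Section 3 of the paper may differ):

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (helper for the example)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def normalize_extrinsics(Rs, ts):
    """Re-express world-to-camera extrinsics relative to the first frame.

    Rs: (F, 3, 3) rotations, ts: (F, 3) translations, with
    x_cam = R_f @ x_world + t_f. After normalization the first camera is the
    identity pose, so trajectories from different clips share one frame.
    """
    R0, t0 = Rs[0], ts[0]
    Rp = Rs @ R0.T                            # R'_f = R_f R_0^T
    tp = ts - np.einsum("fij,j->fi", Rp, t0)  # t'_f = t_f - R'_f t_0
    return Rp, tp
```

This re-parameterization leaves each frame's camera-space geometry unchanged while anchoring every trajectory at the identity pose.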
With the current update, we include the results for non-random, user-defined trajectories in the *rebuttal.html* as part of the supplementary, which will be incorporated into *main.html* for the final version. We also provide new quantitative evaluations for these trajectories below and in Table 9 in the revision of the paper. These new trajectories also involve camera movements with significant directional changes including rotations and translations. We observe that our method generalizes to camera trajectories with variable rotations and translations.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.451|0.095|0.456|0.146|\\n|CameraCtrl|0.369|0.088|0.479|0.135|\\n|Ours|**0.236**|**0.041**|**0.258**|**0.050**|\\n\\n> **Plucker embeddings have already been explored in CameraCtrl.**\\n\\nWe respectfully note that our work developed the use of Pl\\u00fccker coordinates for camera control independently, and our key technical contribution extends beyond just the idea of using them\\u2014 we developed a novel projection mechanism that effectively integrates camera information into the representation space of video diffusion transformers with joint spatio-temporal layers. This specialized approach is essential, as transformer architectures process information fundamentally differently from UNet-based models and require unique solutions for effective camera control.\\n\\n> **PixArt-\\u03b4 has already explored ControlNets for a transformer-based diffusion model.**\\n\\nWe respectfully note that while PixArt-\\u03b4 developed a ControlNet-based mechanism for transformer-based diffusion, it is limited to spatial conditioning with spatial signals. Spatial conditioning signals explored by ControlNet and PixArt-\\u03b4 (e.g., canny edges, depth/normal/segmentation maps, etc.) are very strong and almost fully describe the structure of the output image. 
In our paper, we explore conditioning mechanisms for camera poses, which is a *weak* spatio-temporal signal with an intricate temporal component. Moreover, we explore it for transformer-based diffusion models with joint spatio-temporal computation which adds an extra level of complexity because these dimensions are entangled. We thank the reviewer for their valuable pointer and added the relevant discussion in Section 2 of the paper.\\n\\n> **Details on the pre-training data of the backbone model.**\\n\\nWe appreciate the reviewer's attention to the matter of training data in the backbone model upon which our work is built. However, we want to clarify that we obtained only the upstream model weights from this model, which were published in a paper at CVPR 2024. Our paper focuses entirely on our novel 3D camera control methodology, which we developed on top of this previously established model.\\n\\nIn this case, we believe the 3D camera control methodology should be the focus of the review as is common when new contributions are made on top of existing work. 
We note that many important papers have been built upon closed models, for instance: \n- DreamFusion (ICLR\u201923 Best Paper) builds upon Imagen (NeurIPS\u201922 Best Paper)\n- Cat3D (NeurIPS\u201924 Oral) builds upon an unnamed upstream LDM model.\n- Magic3D (CVPR\u201923) builds upon e-Diff-I\n- VidPanos (SIGGRAPH Asia\u201924) builds upon Lumiere\n\nTo ensure consistency in the criteria used to evaluate our contribution, we would appreciate a similar approach being taken with the review of our paper.\n\n> **Pre-training data can be potentially contaminated with test data.**\n\nAs to the potential contamination of pre-training data with test data, we confirm that the pre-training data overlaps with neither the train nor the test sets of RealEstate10K or MSR-VTT, ensuring the validity of the results reported in Table 2.\n\n> **Reproducibility concern.**\n\nIn our original submission, we provided a detailed description of our methodology in Section 3, comprehensive experimental details in Section 4, and complete training and architectural specifications in Appendix C. This constitutes a solid foundation for future projects to reproduce and build upon our work.\n\nTo further enhance reproducibility, we include the source code of our camera-controlled FIT block \u2014 the key component of our method \u2014 as supplementary material. Additionally, we are committed to providing any technical details that reviewers find necessary for complete reproduction of our results. \n\nWe have added a Reproducibility Statement as Section 6, summarizing our commitment to reproducibility and the available resources.\"}", "{\"comment\": \"Dear Reviewer uN7u,\n\nWe appreciate your interest in engaging with Reviewer B4pY's assessment of our work. 
However, as the authors, we would like to address the comparison concerns directly, which we also outlined in response to Reviewer uN7u.\\n\\n> **CameraCtrl implementation**\\n\\nWe believe there is a misunderstanding regarding the implementation of CameraCtrl. We do **not** fine-tune the pre-trained camera encoder from AnimateDiff since this would not generalize well, and even zero-convolutions might not easily adapt to the SnapVideo model. What we mean by \\u201coriginal camera encoder module\\u201d is the code and the architecture that we adapt into the same base model codebase we use, i.e., we do not use the weights from AnimateDiff-trained camera encoders. We start fine-tuning the camera encoder module from the **same** SnapVideo model, using the **same** batch size, **same** number of iterations, and **same** parameter size as our proposed method for fair comparisons. We will update the draft to emphasize this as part of the next revision of the paper.\"}", "{\"comment\": \"We deeply appreciate the reviewer's detailed and positive assessment of our work. Below, we would like to clarify several concerns.\\n\\n> **Will the method generalize to other transformer-based diffusion models?**\\n\\nThat\\u2019s a great and justified question, and to answer it, we implemented our VD3D method on top of a pre-trained text-to-video DiT model in the latent space of the CogVideoX [1] autoencoder. We include these results as part of the revised paper in Table 10 and Table 11. We furthermore include visual samples into the *rebuttal.html*.\\n\\nInstead of building on top of read attention in FIT, we incorporate the ControlNet conditioning on top of the vanilla attention mechanism of the actual tokens. We provide evaluations in the two Tables below and observe that the vanilla DiT version further improves quality and camera accuracy on out-of-distribution prompts (MSR-VTT). 
We believe that our proposed method of spatio-temporal Plucker tokens and aligning them with video patch tokens through a ControlNet-type of conditioning mechanism is agnostic to the transformer architecture and will serve as a starting point for follow-up works.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.518|0.161|0.627|0.148|\\n|CameraCtrl|0.532|0.165|0.578|0.220|\\n|Ours (FIT DiT)|**0.409**|**0.043**|0.504|0.050|\\n|Ours (vanilla DiT)|0.421|0.056|**0.486**|**0.047**|\\n\\n|Method|FID (RE10K)|FVD (RE10K)|CLIP (RE10K)|FID (MSR-VTT)|FVD (MSR-VTT)|CLIP (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|1.50|52.30|0.2708|9.97|183.57|0.2677|\\n|CameraCtrl|2.28|66.31|0.2730|8.47|181.90|0.2690|\\n|Ours (FIT DiT)|1.40|42.43|0.2807|7.80|165.18|0.2689|\\n|Ours (vanilla DiT)|**1.21**|**38.57**|**0.2834**|**6.88**|**137.62**|**0.2790**|\\n\\n[1] Yang et al., Cogvideox: Text-to-video diffusion models with an expert transformer, arXiv 2024\\n\\n> **Evaluation on out-of-distribution trajectories.**\\n\\nIn addition to the quantitative evaluations of random out-of-distribution camera trajectories presented in Table 8 of our original submission, we have now conducted a more extensive analysis. The new evaluation tests 11 distinct out-of-distribution trajectories across 1000 different scenes for both RealEstate10K and MSR-VTT, shown in the Table below. We visualize 16 prompts for these trajectories, resulting in 176 new visual results in total in the *rebuttal.html*. 
This comprehensive analysis is now available in Table 9 of the revised manuscript.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.451|0.095|0.456|0.146|\\n|CameraCtrl|0.369|0.088|0.479|0.135|\\n|Ours|**0.236**|**0.041**|**0.258**|**0.050**|\\n\\n> **ControlNet and Plucker camera embeddings were explored before.**\\n\\nOur work addresses fundamentally different challenges from ControlNet in several aspects. First, ControlNet was specifically designed for UNet architectures, while we develop conditioning mechanisms for transformer-based models, which require entirely different approaches due to their distinct computational nature. Second, while ControlNet successfully handles strong spatial signals like edge maps or depth masks that largely determine image structure, we tackle the challenging problem of camera pose conditioning\\u2014 a much weaker signal that must influence both spatial and temporal aspects of video generation. The integration of such conditioning into joint spatio-temporal transformer computation presents unique technical challenges that require novel solutions.\\n\\n> **CameraCtrl is not concurrent.**\\n\\nWe respectfully want to clarify that our research, which began in February, developed its approach independently. Moreover, while CameraCtrl is currently under review at ICLR, we have provided comprehensive comparisons with their method throughout our paper. These comparisons demonstrate that their UNet-based approach does not translate effectively to transformer architectures, which required us to develop fundamentally different solutions for camera control in the context of joint spatio-temporal processing.\"}", "{\"comment\": \"Thank you for your detailed reply to my comments. Figure 3 is indeed improved though still a bit dense. I quite liked seeing the additional examples in rebuttal.html. 
It demonstrates that the conditioning works well -- even on OOD trajectories. I'm happy with the rebuttal.\"}", "{\"title\": \"Rebuttal\", \"comment\": [\"We deeply appreciate the reviewers\\u2019 thorough feedback and their positive comments regarding our contributions. We addressed all the raised concerns via targeted responses for each feedback. We summarize our updates here:\", \"We ran the evaluations for our method on out-of-distribution camera trajectories (various translations, rotations, zoomings and their combinations) and included 176 new qualitative results in *rebuttal.html* as part of the updated supplementary file.\", \"We updated Figure 3 with the method architecture by enhancing its style and improving readability, simplified the notation in Section 3, and connected it better with the notations and paired the blocks in diagrams with exact variables\", \"We included the reproducibility statement as Section 6 (as per discussion with Reviewer uN7u). To enhance the reproducibility further, we also provided the source code of our camera-conditioned transformer block \\u2014 the key ingredient of our method. We are willing to include other details if the reviewers would find necessary.\", \"We implemented our method on top of a vanilla video DiT architecture, ran the corresponding experiments and evaluations. We included the results as Table 10 and Table 11 in the appendix.\", \"We added several useful references into the discussion of the related work. We thank the reviewers for valuable pointers.\", \"We are eager to engage in further discussions and would love to improve our work with the further suggestions of the reviewers.\"]}", "{\"comment\": \"> **Further vanilla DiT results**\\n\\nAs promised, we trained MotionCtrl and CameraCtrl on the same additional vanilla DiT baseline. We observe the same benefits as with our originally used transformer backbone and show generalization of our method across architectures. 
We will add these results to the next revision of the paper, as it is currently not possible to update the paper on openreview.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl (vanilla DiT)|0.501|0.145|0.602|0.152|\\n|CameraCtrl (vanilla DiT)|0.513|0.138|0.559|0.195|\\n|Ours (vanilla DiT)|**0.421**|**0.056**|**0.486**|**0.047**|\\n\\n|Method|FID (RE10K)|FVD (RE10K)|CLIP (RE10K)|FID (MSR-VTT)|FVD (MSR-VTT)|CLIP (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl (vanilla DiT)|1.37|44.62|0.2752|9.68|157.90|0.2684|\\n|CameraCtrl (vanilla DiT)|2.13|53.72|0.2748|8.32|152.88|0.2723|\\n|Ours (vanilla DiT)|**1.21**|**38.57**|**0.2834**|**6.88**|**137.62**|**0.2790**|\\n\\n> **Rebuttal Period Summary**\\n\\nThank you again for your detailed feedback and engagement in the review process. We greatly appreciate your time and effort in evaluating our work. Throughout this review process, we have carefully and thoroughly addressed all the concerns raised in all your feedback posts across three discussion threads, providing new evaluations, experimental results, clarifications, and explanations. All this valuable feedback will be incorporated into the final version of our work.\\nWith the discussion thread closing today, we wanted to see if there might be any other issues we could address to help improve the paper. Given the thorough discussion and our efforts to address all issues comprehensively, we would greatly appreciate it if you could consider revising your score to reflect the current state of the paper.\\nThank you again for your time and thoughtful feedback throughout this review process.\"}", "{\"comment\": [\"Thank you for your response. I have a few follow-up questions regarding the experiments and noticed that some points from my previous concerns may not have been addressed. 
For clarity, I\\u2019ve compiled all the questions below:\", \"How did the CameraCtrl authors verify that you implemented CameraCtrl correctly? Did they read your implementation?\", \"What is the trainable parameter size for the re-implementation of CameraCtrl?\", \"What is the architecture for the camera encoder for CameraCtrl, when experimenting with SnapVideo and the DiT?\", \"How many training iterations are conducted for the CameraCtrl and MotionCtrl versions?\", \"What is the vanilla DiT you are referring to? Is it the pre-trained CogVideoX? What is the parameter size of this new backbone? What dataset was it pre-trained on?\", \"Is my understanding correct that the CameraCtrl version is using a sum and linear layer, while VD3D replaces the sum and linear layer with attention layers?\", \"Does VD3D have a camera encoder module? If not, how are the trainable parameters maintained the same for VD3D and CameraCtrl?\"]}", "{\"comment\": \"Dear Reviewer B4pY,\\n\\nThanks for the detailed comments and for sharing your opinions. I agree with you that this work is applying existing designs to spatiotemporal transformers rather than presenting a technological innovation. However, I find it hard to understand the contributions of the proposed method. \\n\\nWhile I agree that the proposed method outperforms vanilla MotionCtrl and CameraCtrl, I would argue that this performance gain mainly comes from SnapVideo versus AnimateDiff, where SnapVideo enjoys better pre-training data and larger parameter size. Regarding the experiments using SnapVideo, I find it confusing about their re-implementation of CameraCtrl. According to L381, the authors integrate a camera encoder from AnimateDiff with SnapVideo. While this could work, I don't think it is the correct way to implement CameraCtrl for SnapVideo, where zero-convolutions and a correct weight copy should be necessary. 
Therefore, I remain confused about the new configuration proposed by this work.\\n\\nPlease feel free to share your thoughts.\"}", "{\"comment\": \"Thank you for the follow-up questions, we are happy to provide more details on the CameraCtrl baseline to clarify any remaining concerns.\\n\\nFor CameraCtrl, we use the original code and integrate this code into the same base transformer model we use. Specifically, extrinsics and intrinsics are encoded using Plucker embeddings. Subsequently, the Plucker embeddings are encoded with the original camera encoder architecture that uses 2D ResNet and temporal attention layers. Note that the original camera encoder uses a pooling operation to downsample the Plucker embeddings to the different downsampled feature resolutions in the U-Net architecture. Since we use a transformer with constant dimensions across blocks, we remove the downsampling operations and keep the spatial dimensions constant. Otherwise, we follow the original CameraCtrl encoding setup and use the same patchification process as we do to align the embeddings with the spatio\\u2013temporal patches of the base model. The features are injected into the respective transformer blocks with a sum and linear layer. Due to zero-initialization, the camera is smoothly integrated into the main model. We initialize all layers with the same initialization scheme as the original CameraCtrl work and freeze the base model. While the original U-Net-based CameraCtrl integrates the features into the temporal attention layer, there are no separate temporal layers in spatio-temporal transformers that jointly model image and time dimensions. Hence, we extensively experimented with where to inject the features similarly as we experimented with our approach. In the end, the best approach was to map the output of the linear layer into the latent tokens, where self-attention instead of temporal attention layers are applied to these latent tokens in the transformer block. 
One key difference is that we use a ControlNet-based approach that performs attention between video tokens and Plucker tokens before integrating them into the base model, which is crucial.\\n\\nFurthermore, we have extra comparisons in the supplementary, showing alternative implementation strategies for MotionCtrl and CameraCtrl. The main comparisons use the variants that performed the best. Throughout our comparison process, we tried several ways to incorporate CameraCtrl into the transformer-based architecture. This also led to the certainty that our ControlNet-inspired approach is the most effective way to control camera in video diffusion transformers, especially w.r.t. preserving motion quality and visual quality. If there is any other way that the reviewer thinks we could explore or any concrete problem, we are happy to conduct an experiment on that. But we believe that we have already put a lot of effort into these comparisons, spending a lot of compute on training different variants to find the best CameraCtrl version based on our base model. \\n\\nImportantly, before conducting any comparisons with CameraCtrl, **we were in contact with the CameraCtrl authors to make absolutely sure that we conducted fair comparisons** by getting their full code with evaluation scripts, as the public code was missing parts of the evaluation code. Consequently, **we are absolutely certain that our implementation and comparisons with CameraCtrl are correct and involve significant efforts to ensure fairness and comparability**.\\n\\nWe will add these extra clarifications to the paper's revision. Thank you for the suggestions to further clarify the CameraCtrl baseline implementation. If any unaddressed concerns are left, we would be grateful to the reviewer for pointing them out since the discussion phase is closing soon.\"}", "{\"summary\": \"This paper presents a novel approach to incorporating camera movement control into Transformer-based video diffusion models. 
While camera control has been extensively studied in UNet-based video diffusion models, this area remains largely uncharted for Transformer architectures. The authors introduce an additional ControlNet-like network to inject tokenized Pl\u00fccker embeddings, demonstrating that this design enhances both visual quality and controllability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors made an important step forward in camera control for the Transformer-based video diffusion model, which is unexplored by previous research;\n2. The visual quality and motion dynamics of the generated videos are excellent;\n3. The methodology is clearly explained and easy to understand.\", \"weaknesses\": \"1. The main components of the method have been validated by previous works, for instance, the Plucker embedding as camera representation (CameraCtrl), and training on RealEstate10K (MotionCtrl, CameraCtrl);\n\n2. As mentioned in 1., I would say the specified designs for Transformer architecture are the major contribution of the paper. However, SnapVideo's architecture is not a typical DiT (it has the \\\"read\\\" operation, and the attention is not performed on the actual tokens). It's not clear how to extend the proposed method for a standard video DiT and what the performance would be.\n\n3. For the MotionCtrl baseline, the visual quality degrades when the base model is fine-tuned. Would it be better to freeze the backbone?\n\n4. It would be better to also provide the trainable parameter scale of the two baselines and the proposed method.\", \"questions\": \"1. Line 272, R_1, t_1 can be interpreted as the original extrinsic or the extrinsic after normalization. It would be better to use R'_1 and t'_1 to represent the extrinsic after normalization;\n\n2. I am not quite clear about the baseline implementation details. 
From my understanding, for MotionCtrl, one additional learned token is concatenated to the video patch tokens, while for CameraCtrl, the camera encoder produces additional latent tokens. Is that correct?\\n\\n3. While previous works sacrifice motion dynamics when training on RealEstate10K (mostly static scenes), the video in the supplementary material exhibits better and larger motions. What would be the possible reasons for the difference?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Is my understanding correct that the CameraCtrl version is using a sum and linear layer, while VD3D replaces the sum and linear layer with attention layers?**\\n\\nYour understanding of CameraCtrl is correct. For VD3D, we replace the sum and linear layer mechanism with a ControlNet-type block described in the method section (Sec. 3). Hence, we use linear layers and attention blocks with sums inside the ControlNet block. We kindly refer to Fig. 3 in the paper, where this is visualized.\\n\\n> **Does VD3D have a camera encoder module? If not, how are the trainable parameters maintained the same for VD3D and CameraCtrl?**\\n\\nVD3D does not have a camera encoder module. We adjust the hidden dimensions of each method to accumulate the same number of total parameters across methods. This also leads to the same memory consumption and the same batch size across the methods for fair comparisons. We previously experimented with the original parameter count of CameraCtrl and observed similar results to those of the adjusted parameter count.\"}", "{\"comment\": \"> **CameraCtrl implementation**\\n\\nWe believe there is a misunderstanding regarding the implementation of CameraCtrl. 
We do **not** fine-tune the pre-trained camera encoder from AnimateDiff since this would not generalize well, and even zero-convolutions might not easily adapt to the SnapVideo model. What we mean by \\u201coriginal camera encoder module\\u201d is the code and the architecture that we adapt into the same base model codebase we use, i.e., we do not use the weights from AnimateDiff-trained camera encoders. We start fine-tuning the camera encoder module from the **same** SnapVideo model, using the **same** batch size, **same** number of iterations, and **same** parameter size as our proposed method for fair comparisons. We will update the draft to emphasize this as part of the next revision of the paper.\\n\\n> **Particle-SfM reliability**\\n\\nParticle-SfM, and generally SfM methods, are not perfect and cannot always estimate the camera poses in a scene correctly. Sometimes, the SfM process will not converge, and we follow the common practice of repeating the SfM process until it converges to a solution. Unfortunately, no better camera pose estimators are available for in-the-wild videos, and other works, such as CameraCtrl, use the same setup as us to evaluate camera accuracy. \\n\\nWe include 176 visualizations in the *rebuttal.html* in the supplementary file to provide more evidence for our performance on out-of-distribution camera trajectories. We urge the reviewer to assess them on their own: VD3D has very precise camera accuracy for diverse, user-defined camera trajectories, like translations, rotations, and their combinations. We believe those results are very reliable and address all the concerns regarding overfitting to RealEstate10K trajectories.\"}", "{\"title\": \"Request for clarification on contribution\", \"comment\": \"Dear Reviewer DiqV,\\n\\nThanks for the detailed review and for sharing your thoughts. I noticed that you also find this paper to be of A+B style. 
However, you mentioned that the paper presents a nice contribution, which I didn't follow. Can you please clarify what exactly the technical contribution of the paper is?\n\nMy confusion comes from my impression that the paper is merely putting CameraCtrl together with SnapVideo. Using Plucker coordinates is not new, and using controlnet for transformers is not new. I understand that `the authors demonstrate that you need to do a few things to make it work (Table 2)`, but I found their re-implementation of CameraCtrl into SnapVideo weird (L381). I don't think the authors implemented their CameraCtrl properly by using the correct module copy or zero-convolutions for the camera encoders.\n\nPlease feel free to share your thoughts.\"}", "{\"summary\": \"The paper proposes a first method to condition spatio-temporal transformer-based video diffusion models on camera poses (previous methods focused on pre-trained U-Net-based diffusion models). To this end, they propose a ControlNet-like block that conditions the model on camera embeddings that are based on Plucker coordinates. The paper evaluates the choices made and demonstrates good results, both qualitatively and quantitatively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a nice idea on how to enable spatio-temporal transformer-based video diffusion models with camera control. It is the first to do so by using a ControlNet-like block in combination with Plucker coordinates, and presents a nice contribution. I value the ablation studies in the paper and the reasoning as to how they arrived at this particular architecture. The paper is quite well-written overall. It's easy enough to follow and appreciate the contribution.\", \"weaknesses\": \"While I'm positive overall, there are a few weaknesses.\n\nA criticism that could be made is that this paper is a bit of an A+B paper, where A is the ControlNet block and B is the Plucker coordinates. 
I don't think that's a useful criticism, because at the end of the day it's a sensible thing to do and the authors demonstrate that you need to do a few things to make it work (Table 2).\n\nThe paper reads well overall; however, I'm not a fan of Figure 3. It's hard to interpret what goes where, and how this relates to the formulas. I'd suggest clearly delineating what are the video patch tokens vs. the plucker patch tokens, maybe adding the variables from the formulas to the appropriate boxes, and overall structuring the figure so that the separate blocks are clearly separate.\n\nFinally, in terms of results, I would have liked to see more examples of the same scenes with different camera control (there are only three examples). Furthermore, most examples use input camera trajectories from scenes that are completely unrelated to the target scene. It'd be nice to see some that are related -- makes it easier to judge if the generated camera path is good or not.\", \"questions\": \"Nothing crucial.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **What is the vanilla DiT you are referring to? Is it the pre-trained CogVideoX? What is the parameter size of this new backbone? What dataset was it pre-trained on?**\\n\\nWe used a vanilla DiT base model for additional experiments demonstrating generalizability across different transformer-based architectures. Concretely, we used an internally available video diffusion transformer with a vanilla DiT architecture and 11.5B parameters in total. 
While we provide its detailed information below, we emphasize that it is not central to the contributions of our work since (i) the DiT design is basically the same for all modern video DiT models (CogVideoX, Sora, MovieGen, Mochi-1, Allegro, OpenSora, LuminaT2X, etc.), (ii) the DiT design is **orthogonal** to our proposed method on camera control, and (iii) all the baselines have been trained on top of **exactly the same DiT model**. The key reason why we used our internally available model instead of CogVideoX or some other open-source one is that it is paired with our existing codebase and allowed us to quickly re-implement the camera control method and the baselines on very short notice for this rebuttal. It is important to note that it is already taking a lot of time and computational effort to integrate CameraCtrl and MotionCtrl into our base model. Now, we show comparisons with (i) the original MotionCtrl and CameraCtrl, (ii) MotionCtrl and CameraCtrl integrated into our originally used FIT-based transformer architecture, and (iii) MotionCtrl and CameraCtrl integrated into an additional vanilla DiT architecture operating in the latent space of the CogVideoX autoencoder. This is a significant effort and significantly more than what previous works do.\\n\\n**Base video DiT architecture details.**\\nThe video DiT architecture follows the design of other contemporary video DiT models (e.g., Sora, MovieGen, OpenSora, LuminaT2X, and CogVideoX). As the backbone, it incorporates a transformer-based architecture with 32 DiT blocks. Each DiT block includes a cross-attention layer for processing text embeddings (produced by the T5-11B model), a self-attention layer, and a fully connected network with a \\u00d74 dimensionality expansion. Attention layers consist of 32 heads with RMSNorm for query and key normalization. 
Positional information is encoded using 3D RoPE attention, where the temporal, vertical, and horizontal axes are allocated fixed dimensionality within each attention head (using a 2:1:1 ratio). LayerNorm is applied to normalize activations within each DiT block. A pre-trained CogVideoX autoencoder is utilized for video dimensionality reduction, employing causal 3D convolution with a 4\\u00d78\\u00d78 compression rate and 16 channels per latent token. The model features a hidden dimensionality of 4,096 and comprises 11.5B parameters. It leverages block modulations to condition the video backbone on rectified flow timestep information, SiLU activations, and 2\\u00d72 ViT-like patchification of input latents to reduce sequence size.\\n\\n**Base video DiT training details.**\\nThe base DiT model is optimized using AdamW, with a learning rate of 0.0001 and weight decay of 0.01. It is trained for 750,000 iterations with a cosine learning rate scheduler in bfloat16. Image animation support is incorporated by encoding the first frame with the CogVideoX encoder, adding random Gaussian noise (sampled independently from the video noise levels), projecting via a separate learnable ViT-like patchification layer, repeating sequence-wise to match video length, and summing with the video tokens. Training incorporates loss normalization and is conducted jointly on images and videos with variable resolutions ($256$, $512$, and $1024$), aspect ratios ($16:9$ and $9:16$ for videos; $16:9$, $9:16$, and $1:1$ for images), and video lengths (ranging from 17 to 385 frames). Here we emphasize again that the pre-training data overlaps with neither the train nor the test sets of RealEstate10K or MSR-VTT. Videos are generated at 24 frames per second, and variable-FPS training is avoided due to observed performance decreases for target framerates without fine-tuning.\\n\\n**Base video DiT inference details.**\\nInference uses standard rectified flow without stochasticity. 
We find forty steps to balance quality and sampling speed effectively. For higher resolutions and longer video generation, a time-shifting strategy similar to Lumina-T2X is used, with a time shift of $\\\\sqrt{32}$ for $1024$-resolution videos. We will include these details in our revisions. We want to emphasize that these are the details of the **additional transformer model** we used for the rebuttal. This was mainly done to show the generalization capabilities of our approach to another transformer-based architecture. Our main experiments in the original paper submission already include complete descriptions of the base model SnapVideo published in CVPR 2024.\"}", "{\"summary\": \"The method proposes a controlnet-like architecture for a private video diffusion model by including plucker coordinates as camera control.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed controlnet design outperforms other model variants designed by the authors. The evaluations are thoroughly conducted for the design choices. Detailed ablations are provided.\"], \"weaknesses\": [\"The proposed framework overfits on the trajectories that are seen during training. Though the authors provide quantitative comparisons in Tab. 8, no visual comparisons are provided.\", \"Though the performance is impressive, the technical contribution is limited in the proposed framework. Training a ControlNet for diffusion transformer is not new, as shown in [1]. Using Plucker coordinates for camera control is not new, as shown in CameraCtrl (He et al., 2024a).\", \"[1] Chen J, Wu Y, Luo S, et al. Pixart-{\\\\delta}: Fast and controllable image generation with latent consistency models[J]. 
arXiv preprint arXiv:2401.05252, 2024.\"], \"questions\": \"Can the authors please comment on the above-mentioned weaknesses?\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"Though the authors are transparent on the fine-tuning data (RealEstate10K), the pre-training data for this proposed framework is unknown, potentially containing copyrighted data. It may also be contaminated with the test sets of the RealEstate10K data or MSR-VTT data, making the reported results in Tab. 2 concerning.\\n\\nTo ensure reproducibility as strongly recommended in the ICLR author guide, the authors are encouraged to adapt the proposed framework to publicly available pre-trained models.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a camera-control video diffusion model. The central claimed contribution is its transformer-based architecture. The method builds on SnapVideo and introduces a new attn-based conditional module for injecting the camera information. 
SOTA performance is obtained compared to existing approaches.\\n\\nReviewers highlighted the key advantages of the method, particularly the effectiveness of its transformer-based architecture and Pl\\u00fccker embedding conditioning, as evidenced by its strong experimental results and qualitative performance.\\n\\nA common concern among reviewers was the limited novelty: the main contribution involves replacing the linear layer in the condition block with attention layers, alongside some necessary design adjustments for Pl\\u00fccker embedding injection\\u2014an approach that has also been previously explored in camera-controllable video generation.\\n\\nThe AC acknowledged the solid contribution of the work, as validated by its experimental results, and recognized its value in advancing the field.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer `uN7u` raised several concerns centering on the fairness of the comparison between CameraCtrl and VD3D. While there were long discussions on the details of the CameraCtrl re-implementation and whether the comparison setting is fair, Reviewer `uN7u` emphasized three key points: (a) limited novelty, as also noted by other reviewers; (b) potential reproducibility issues due to SnapVideo being private; and (c) a likely unfair comparison. Among these, (c) poses a significant concern that could diminish the paper's contributions.\\n\\nThe ACs carefully reviewed the clarifications provided by the authors. The authors clarified that the trainable parameters were kept consistent, with CameraCtrl utilizing an encoder while VD3D does not, albeit with potentially larger attention modules on the conditional side. The ACs acknowledge that this comparison setup may be reasonable but strongly urge the authors to provide a more comprehensive description of the CameraCtrl re-implementation. 
Additionally, the authors are encouraged to include further ablations, such as ensuring comparable sizes of the conditional network, to enhance transparency and strengthen the validity of the comparisons.\"}", "{\"summary\": \"This paper presents a camera control method for transformer-based video generation models that enhances control while ensuring visual quality. The proposed approach aligns the video perspective with predefined camera trajectories, improving controllability. The authors claim this is the first study to employ ControlNet-like guidance for global spatiotemporal transformer video generation models, in contrast to the more commonly used U-Net architecture. Moreover, the evaluation demonstrates that both the video quality and adherence to the input camera trajectories are state-of-the-art.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Camera control during the video generation process is a significant issue. As more foundational models adopt transformer architectures, exploring control mechanisms for these models becomes crucial. This paper is the first to investigate how to better utilize camera trajectory parameters for transformer-based video generation models, using SnapVideo as the foundational model. The design is well thought out, and the evaluation is rigorous. The strengths of the paper are as follows:\", \"Unlike the spatiotemporal decoupling generation of U-Net structures, transformer-based video generation considers spatiotemporal video tokens globally, which means it cannot directly leverage the advantages of spatiotemporal decoupling. This paper overcomes this limitation by being the first to explore control specifically for spatiotemporal transformers. 
This shift in foundational model structure is critical and provides a solid engineering foundation for future work.\", \"The authors use Pl\\u00fccker embeddings to convert camera intrinsic and extrinsic parameters into pixel-level controls, which match the shape of video tokens. This information is then introduced through read cross-attention layers. While this approach is a straightforward combination of existing methods, it has been validated as effective for transformer-based video generation models, providing valuable experimental insights.\", \"The paper includes comprehensive evaluations and ablation studies, conducting both qualitative and quantitative experiments regarding video content quality and camera control, with well-defined criteria. The evaluation of baseline models is fair, making the transition to the new foundational model structure more convincing.\"], \"weaknesses\": [\"While camera control is the central problem addressed in this paper, the camera trajectories used primarily come from the RealEstate10K dataset, which, as observed in the visual results, mostly follow smooth, straight lines. There is a lack of consideration and experimentation with trajectories of varying difficulty, such as those involving significant directional changes. This raises some questions regarding the trajectory settings.\", \"There have been several prior works in the 3D multi-view generation field that focus on similar camera control issues, such as the referenced *Cat3D*, which also employed Pl\\u00fccker embeddings and attention calculations. The distinction between spatiotemporal decoupling and other network characteristics is a design feature intrinsic to the architecture. In exploring DiT-based generation, there have also been multiple studies investigating spatiotemporal decoupling, such as *Latte: Latent Diffusion Transformer for Video Generation*. 
Therefore, the novelty of this work lies more in applying existing designs to spatiotemporal transformers rather than presenting a technological innovation. However, the state-of-the-art results under the new configuration indeed serve as an important engineering reference for future directions.\"], \"questions\": \"- Have there been additional relevant experiments regarding camera trajectories, such as comparisons of control quality for generated trajectories of varying complexity?\\n\\nAt this time, I have no further questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper provides a method for adding the control of camera movements to video diffusion transformers. The core idea is to represent camera conditions as pixel-level ray condition with Plucker embeddings. A ControlNet inspired module is used to process the camera condition. The model is finetuned on RealEstate10k and is compared to two similar methods both qualitatively and quantitatively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"paper is easy to follow\", \"the proposed design including the Pl\\u00fccker embedding is reasonable and effective\", \"comprehensive experiments are conducted and presented in the main manuscript and appendix\", \"supplemental materials contain video samples to demonstrate the effectiveness\"], \"weaknesses\": [\"The proposed method has been evaluated only on one video diffusion transformer, which raises some concerns on whether its performance can generalize to other pretrained video diffusion transformers.\", \"I'm curious about the distribution of camera movements evaluated in the experiments, in terms of its diversity and similarity to natural camera movements.\", \"The novelty is slightly limited, as the task is not new, and ControlNet-like module as well as Pl\\u00fccker embedding have been explored 
and used before.\"], \"minor\": [\"CameraCtrl appeared on arXiv in April 2024, so it should not be considered a concurrent work.\"], \"questions\": \"Overall I think this paper provides an effective solution for augmenting video diffusion transformers with camera control. See weaknesses for questions and discussion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We deeply appreciate the reviewer's detailed and positive assessment of our work. Below, we would like to clarify several concerns.\\n\\n> **Evaluation on diverse camera trajectories**\\n\\nWe respectfully note that we provide quantitative evaluation for random (i.e., not RealEstate10K) camera trajectories in Table 8 in the appendix of the original submission. With the current update, we include 176 new visual results for non-random, user-defined camera trajectories in the rebuttal.html as part of the supplementary, which will be incorporated into main.html for the final version. We also provide new quantitative evaluations for these trajectories below and in Tab. 9 in the revision of the paper. These new trajectories also involve camera movements with significant directional changes, including both rotations and translations, as suggested by the reviewer. We observe that our method generalizes to input camera trajectories with variable rotations and translations.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.451|0.095|0.456|0.146|\\n|CameraCtrl|0.369|0.088|0.479|0.135|\\n|Ours|**0.236**|**0.041**|**0.258**|**0.050**|\\n\\n> **Novelty concern**\\n\\nWe demonstrated in Table 2 and through qualitative results in the supplementary materials that existing ControlNet-like architectures do not directly translate to diffusion transformers with joint spatiotemporal processing. 
Extensive experimentation was required to develop a setup that achieves precise camera control while maintaining motion quality and visual fidelity. Importantly, Latte's work is orthogonal to ours\\u2014while they focus on developing a diffusion transformer backbone, our contribution lies in creating camera conditioning mechanisms for such models and addressing the inherent challenges that arise. We appreciate this relevant reference and have incorporated Latte into our related work discussion in the revised paper.\"}", "{\"comment\": \"I have no more questions. Precise control of camera trajectory is an important feature of future video-generation products. Thanks to the author for sharing more technical details for reference when reproducing. I'll keep my current score.\"}", "{\"comment\": \"We deeply appreciate the reviewer's detailed and positive assessment of our work. Below, we would like to clarify several concerns.\\n\\n> **This paper is a bit of an A+B paper.**\\n\\nWe respectfully disagree with the characterization of our work as a simple combination of existing methods. As demonstrated both quantitatively (Table 2) and qualitatively (supplementary materials), merely combining transformer-based video diffusion with standard conditioning approaches yields poor results and imprecise camera control. Developing an effective camera conditioning scheme for spatiotemporal transformers required extensive experimentation to achieve three critical goals: (1) successfully incorporating camera information into the video token representation space, (2) enforcing this naturally weak conditioning signal during synthesis, and (3) maintaining scene motion and visual quality throughout the process.\\n\\n> **Figure 3 needs to be improved.**\\n\\nWe appreciate this valuable suggestion. We've made several improvements to both Figure 3 and the notation in Section 3. 
First, we simplified the notation used to describe the method, and then we restructured the diagram to create a clearer connection between its visual elements and the variables used in the method description, making it more intuitive to read. Finally, we improved the layout to make the figure more visually appealing.\\n\\n> **Evaluation on diverse camera trajectories and different trajectories starting from the same scene.**\\n\\nWe appreciate these valuable suggestions regarding trajectory evaluation. While our original submission included quantitative results for out-of-distribution camera trajectories in Table 8, we have now substantially expanded this analysis. The new evaluation tests 11 distinct out-of-distribution trajectories across 1000 different scenes for both RealEstate10K and MSR-VTT, shown in the Table below. We visualize 16 prompts for these trajectories, resulting in 176 new visual results in total in the *rebuttal.html* as part of the supplementary file, showing additional examples where multiple trajectories are used for the same scene. This comprehensive analysis is now available in Table 9 of the revised manuscript.\\n\\n|Method|TransError (RE10K)|RotError (RE10K)|TransError (MSR-VTT)|RotError (MSR-VTT)|\\n|:--------|:--------:|:--------:|:--------:|:--------:|\\n|MotionCtrl|0.451|0.095|0.456|0.146|\\n|CameraCtrl|0.369|0.088|0.479|0.135|\\n|Ours|**0.236**|**0.041**|**0.258**|**0.050**|\"}", "{\"comment\": \"We are thankful to all the reviewers for their insightful comments. We believe that the additional clarifications and experiments have improved the paper, we worked very hard in the rebuttal period (11 tables in the paper in total now), and we were able to resolve the majority of your concerns. However, reviewer uN7u has been actively trying to convince others with information that we do not believe is fair.\\n\\n- **Data:**\\nWe reiterate once more that we only used the RealEstate10K *train* split for training camera control. 
The RealEstate10K and MSR-VTT test datasets are not seen during camera control training. The base model did not see any of these datasets at all during training. Moreover, the base model is shared for all methods used in our comparisons. No method draws advantage from training on different data. \\n\\n- **Baselines:**\\nWhile many concurrent works only compare against the \\u201cas-is\\u201d implementation from the original codebases, we made the additional effort of implementing U-Net-based approaches into transformer-based models to ensure more direct and fair comparisons. This allows us to eliminate any differences in results stemming from base model differences, base model data used, number of fine-tuning iterations, batch size, parameter size, or computational resources. We aligned all experiments to match these aspects to ensure fair comparisons, leading to comparisons with the original U-Net-based approach and two transformer-based implementations based on the original code bases of MotionCtrl and CameraCtrl.\\n\\n- **\\\"[...] verify that you implemented CameraCtrl correctly?\\\"**\\nWhen integrating it into our codebase, we used the original CameraCtrl code directly, without making any modifications. Furthermore, we reached out to the CameraCtrl authors to obtain the complete evaluation scripts, as those had not been publicly released. Asking a competing group to \\u201cpeer-review\\u201d your code is not common practice in the community. There is a point when one needs to trust the authors that they have done everything in their power to ensure a correct implementation and a fair evaluation. 
We believe that the quality of the results produced by our model is indicative of our ability to implement a model correctly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your further questions; we are happy to address all remaining concerns.\\n\\n> **How did the CameraCtrl authors verify that you implemented CameraCtrl correctly? Did they read your implementation?**\\n\\nAs specified in our previous response, *\\u201cwe were in contact with the CameraCtrl authors to ensure fair comparisons by obtaining their full code including the evaluation scripts.\\u201d* However, it would not be reasonable to expect the CameraCtrl authors to review our code to verify its correctness. That said, there should be no concern in this regard, as we used the original CameraCtrl code directly without making any modifications when integrating it into our codebase.\\n\\n> **What is the trainable parameter size for the re-implementation of CameraCtrl?**\\n\\n230M trainable parameters, the same as for our method VD3D.\\n\\n> **What is the architecture for the camera encoder for CameraCtrl, when experimenting with SnapVideo and the DiT?**\\n\\nThe camera encoder is exactly the same as the one proposed in the original CameraCtrl work, and we use that code and architecture without modifications for both the SnapVideo and vanilla DiT backbone experiments. We only adjust the dimensions to be compatible with the base models we use, as the U-Net used in the original CameraCtrl work has dimensions different from the transformer-based models we use. You can see it as a vanilla adaptation of the CameraCtrl work, originally developed for U-Net architectures, to transformer-based architectures.\\n\\n> **How many training iterations are conducted for the CameraCtrl and MotionCtrl versions?**\\n\\nThe same number of training iterations as for our method, i.e., 50,000 iterations. We will revise Sec. 
C in the appendix to emphasize that CameraCtrl and MotionCtrl were also trained with 50,000 iterations.\"}", "{\"comment\": [\"Thanks to the authors for the detailed response. I have some follow-up questions:\", \"Can the authors please clarify what exactly is the technical contribution of the paper? I carefully read the other reviews and all reviewers share the same concern that the novelty of this work is very limited.\", \"For example, what is `we developed a novel projection mechanism`? How does that differ from a controlnet-like structure? If I understand correctly, the difference is that one shouldn't use multiple layers but only one layer for projection.\", \"I understand that PixArt-\\u03b4 is not designed for temporal conditioning signals, but what is the difference from their design? The authors keep suggesting there is `an extra level of complexity`, but the proposed approach is simply concatenating along the extra dimension.\", \"I share the concern of Reviewer ZZhZ that purely evaluating on SnapVideo is sketchy.\", \"While I appreciate the authors' effort in providing a vanilla DiT version, I get confused since the performance of the vanilla DiT version is superior. In this case, what's the purpose of designing VD3D on top of SnapVideo? If we switch the architecture to a vanilla DiT, what is the novelty then? Do we still have a `novel projection mechanism`, or has the `novel projection mechanism` now become a vanilla controlnet block? My question comes from L311 and L1185 that the `novel projection` is operating on the FiT's `read` layer.\", \"Are the baselines (MotionCtrl and CameraCtrl) also implemented using a vanilla DiT for the comparison? Are the vanilla DiT versions also trained on the same pre-training data for SnapVideo?\", \"How is the CameraCtrl baseline produced? 
On L381 it says, `For CameraCtrl, we fine-tune the original camera encoder module and use this to produce the latent vectors in the SnapVideo model.` Are the authors fine-tuning a camera encoder trained from AnimateDiff? I might have missed this, but what happens when properly implementing a SnapVideo version of CameraCtrl by using zero-convolution plus a copy of SnapVideo layers? Are all CameraCtrl variants mentioned in the paper implemented the same way?\", \"Can the authors please comment on Table 8? When evaluating videos without translation, does Particle-SfM produce reliable results?\"]}", "{\"comment\": \"Thanks; we want to clarify the remaining concerns.\\n\\n> **Technical contributions of the paper**\\n\\nOur main technical contribution is proposing how to incorporate camera control into transformer-based video generators. After extensive experimentation and analysis, which we include throughout the paper with many ablations, we came to our final setup. First, we propose to align camera input and video patch tokens spatiotemporally. For this, we represent extrinsics and intrinsics as spatio-temporal Plucker embeddings, patchify them into spatio-temporal Plucker tokens, and then align them with the video tokens. This procedure is unique to transformer-based models and was not investigated by any other work, as previous works only investigated U-Net-based video models. Second, we propose to align these spatiotemporal Plucker tokens with the video tokens using a ControlNet-inspired setup. Other works, including MotionCtrl and CameraCtrl, do not use ControlNet to infuse cameras into the base video model. We observe that spatio-temporal ControlNets are very effective and cause little degradation in motion and quality, as shown in the experiments. Note that using ControlNet is only possible after representing cameras in the same spatio-temporal patch space as the original video tokens, which is the first step we perform. 
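For intuition, the first step (turning camera intrinsics and extrinsics into per-pixel Plucker ray maps before patchification) can be sketched as follows. This is an illustrative numpy sketch under standard pinhole-camera assumptions, not the actual codebase; the function name and conventions are our own.

```python
import numpy as np

def plucker_ray_map(K, R, t, H, W):
    """Per-pixel Plucker embedding (6 channels) for a pinhole camera.

    K: (3,3) intrinsics; R, t: world-to-camera extrinsics (x_cam = R @ x_world + t).
    Returns an (H, W, 6) array of (o x d, d) per pixel, where o is the camera
    center in world coordinates and d is the unit ray direction through the pixel.
    """
    o = -R.T @ t  # camera center in world coordinates, shape (3,)
    # Pixel grid sampled at pixel centers.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3) homogeneous pixels
    # Back-project to world-space ray directions: d = R^T K^{-1} [u, v, 1]^T.
    d = pix @ np.linalg.inv(K).T @ R  # (H, W, 3)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    moment = np.cross(np.broadcast_to(o, d.shape), d)  # Plucker moment o x d
    return np.concatenate([moment, d], axis=-1)  # (H, W, 6)
```

A map like this, computed per frame, matches the spatial layout of the video latents and can then be patchified into condition tokens alongside the video patch tokens.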
Novelty is not always about inventing a new, complicated layer or mechanism; it can also lie in connecting existing mechanisms and showing that a simple approach is more effective. In fact, the community typically prefers to build upon simple approaches that just work. We are not claiming to invent Plucker embeddings or ControlNet; our contribution is the technical solution of representing cameras as spatiotemporal Plucker tokens, using a ControlNet-type setup to align them with spatiotemporal video tokens pixel-wise, and demonstrating that this technique is significantly more effective than any other proposed technique. We also demonstrate with extensive experiments that using raw extrinsics (MotionCtrl) or camera encoders without ControlNet-type camera alignment (CameraCtrl) works significantly worse, degrading quality, motion, and/or camera accuracy. We believe our approach could be a very effective baseline for camera control in video diffusion transformers and a good starting point that can be adapted by many follow-up works.\\n\\n> **Comparisons based on FIT-based SnapVideo**\\n\\nAt the time of submission, no vanilla video DiT model was available; hence, we used the FIT-based model. While one part of the analysis is incorporating the camera in the read-attention process, this is not our main novelty. With our new experiment, we want to highlight that our approach applies to different transformer-based architectures. As outlined above, our main technical novelty lies in, first, representing cameras as patchified spatiotemporal Plucker tokens and, second, incorporating them spatiotemporally with a spatiotemporal ControlNet into the spatiotemporal video tokens. 
This has not been done before, as previous works neither use transformer-based models nor a ControlNet setup to represent and align cameras with video features.\\n\\n> **Vanilla DiT baseline**\\n\\nWe are currently training a MotionCtrl and CameraCtrl variant on top of the same vanilla DiT model and will include those results when they are ready at the end of the rebuttal period. The intention of providing results of our method on top of the vanilla DiT baseline for the rebuttal was to show that our method generalizes to different transformer-based architectures. We observe that the camera accuracy for FIT is very similar to the DiT version, which is mainly visible in the rotation errors. Hence, the vanilla DiT model does not change the results much regarding camera accuracy. This highlights that our model works similarly precisely, independent of FIT or vanilla DiT. Note that MotionCtrl and CameraCtrl heavily struggle with large rotation errors, i.e., the camera does not point toward the correct direction. The intention of the rebuttal experiment for vanilla DiT was not to conduct another thorough analysis on another backbone, as this is computationally very expensive. It is common for camera control and multi-view diffusion works to conduct the analysis and comparisons on a single video model. We have already made sure, by using the same base model throughout the paper, that we do not have any advantage over AnimateDiff-based approaches. We also included visual results in the *rebuttal.html* for the vanilla DiT model, showing high-quality, high-motion videos with camera control. The main point of this experiment is to show that our method does not converge to static scenes for another backbone, as commonly observed in other camera control works when fine-tuning on RealEstate10K. Furthermore, we demonstrate that the vanilla DiT results in the *rebuttal.html* are precise. 
Hence, our proposed mechanism generalizes and can be a starting point for many transformer-based follow-up works.\"}" ] }
0mtz0pet1z
Incremental Causal Effect for Time to Treatment Initialization
[ "Andrew Ying", "Zhichen Zhao", "Ronghui Xu" ]
We consider time to treatment initialization. This can commonly occur in preventive medicine, such as disease screening and vaccination; it can also occur with non-fatal health conditions such as HIV infection without the onset of AIDS. While traditional causal inference focused on ‘when to treat’ and its effects, including their possible dependence on subject characteristics, we consider the incremental causal effect when the intensity of time to treatment initialization is intervened upon. We provide identification of the incremental causal effect without the commonly required positivity assumption, as well as an estimation framework using inverse probability weighting. We illustrate our approach via simulation, and apply it to a rheumatoid arthritis study to evaluate the incremental effect of time to start methotrexate on joint pain.
[ "Causal Inference", "Positivity", "Incremental intervention", "Incremental Causal Effect", "Inverse probability weighting" ]
Accept (Poster)
https://openreview.net/pdf?id=0mtz0pet1z
https://openreview.net/forum?id=0mtz0pet1z
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wTodtLddMH", "twR6lTUJn6", "o9RCvMqYiC", "mphACbm26x", "kj3RtlHmdg", "jK5ueT1xLL", "cyMZLTTkJW", "chn5G6uwZR", "Skl1rgipeq", "Sc7BjdPTNo", "QdbSWiR7SR", "NFD6hIn8Kb", "JUyiHIfKpR", "H0sAXkDZpy", "GcyBdcX0yX", "GWla9z3evB", "FP5eoXprkt", "4aDQX4K90b", "4JH9fXZKeN", "3DVzS8PYcP" ], "note_type": [ "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737523430641, 1730608898006, 1732521934260, 1731913392401, 1730504362684, 1732591229814, 1732407965732, 1734644060253, 1732521697951, 1731912786514, 1732521716430, 1732521708185, 1732521690522, 1731046481443, 1732567076729, 1731913194844, 1732407980537, 1731912848778, 1730632741707, 1731912591664 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_sAsU" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_sAsU" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_roun" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_roun" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Area_Chair_baoQ" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_rVjv" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_cEmo" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission995/Authors" ], [ "ICLR.cc/2025/Conference/Submission995/Reviewer_cEmo" ], [ "ICLR.cc/2025/Conference/Submission995/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper studies a novel setting in causal inference: incremental causal effect of intervening the continuous time to treatment initiation by shifting the hazard function. It introduces an IPW estimator with proofs of consistency and asymptotic normality. It is also validated through empirical simulations and a real-world study.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper extends incremental causal effects, which do not rely on the traditional positivity assumption, to a new setting. This advancement allows for new approaches to studying time-to-treatment problems in fields such as public health and policy-making.\\n2. Theoretical guarantee is provided.\\n3. The presentation and flow of this paper are clear.\", \"weaknesses\": \"1. Additional experiments could provide deeper insights into the behavior and robustness of the proposed approach under different scenarios. For example exploring different shift interventions, hazard functions, and comparative analysis against any alternative estimators.\", \"minor_comments\": \"2. The clarity of Theorems 2 and 3 could be improved by stating all conditions and notations explicitly.\\n2. In the simulation, it can be made more clear to state the true effect and whether the outcome is censored in the DGP.\\n3. Typos: in line 59, line 340, and Theorem 3 proof.\", \"questions\": \"1. If there is unmeasured confounding, is it natural to apply sensitivity analysis or proximal causal inference in this setting? How might these approaches integrate with your proposed IPW estimator?\\n2. 
Are the regularity conditions outlined in Theorem 2 and Theorem 3 considered trivial or standard in common survival models?\\n3. Is there any pattern in the estimator performance (bias, variance, or stability) as $\\\\theta$ changes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for the detailed responses, which have addressed most of my concerns.\"}", "{\"comment\": \"## Questions\\n\\n1. Which aspects of this work are novel, and how do they align with existing properties of incremental causal effects?\\n\\n **Response**: Thank you for highlighting this point, which was also noted by other reviewers. To clarify, both the move to continuous time to treatment and avoiding the positivity assumption are novel contributions of this paper. Kennedy (2019) and some other works introduced the concept of \\\"incremental causal effects\\\" in discrete-time settings and demonstrated that it avoids the positivity assumption in that context. Our work extends this concept to continuous time to treatment and proves, in Theorem 1, that the positivity assumption can similarly be avoided in this setting. We will revise the paper to make these contributions more explicit and clearly articulated.\\n\\n2. Why was $L$ chosen to represent covariates, instead of more typical variables like $W$, $X$, or $C$?\\n\\n **Response**: We are following notation from causal inference in medical/health studies where $W$, $X$, $C$ are usually saved for weights, censored event time, and censoring time where time-to-event can also be of interest, one of the future directions of this paper. $V$ can be a valid option but we are using $L$ following the book: Miguel A Hern\\u00e1n and James M Robins. *Causal Inference: What If*. Boca Raton: Chapman & Hall/CRC, 2020.\\n\\n3. 
Can you clarify what it means for an incremental intervention to be \\\"a function of the observed treatment distribution\\\"?\\n\\n **Response**: We have clarified and reworded that.\\n\\n4. In the second paragraph of the introduction, does \\\"all treated or all untreated\\\" mean counterfactual reasoning where the entire population is treated versus untreated?\\n\\n **Response**: Yes, to the very last question. For example, at the conclusion of a randomized clinical trial which estimates the ATE, the clinical guideline going forward would be to treat all patients from that (relevant) population with drug A and not drug B.\\n\\n5. Does the model account for time-varying covariates that could influence treatment probabilities?\\n\\n **Response**: Thank you for raising this important point. In the current paper, we assume covariates do not change over time, as our focus is on introducing and motivating the concept. However, extending the model to account for time-varying covariates is a natural next step and an avenue for future work. For example, one could consider patient information recorded monthly ($L_k$) and model the hazard function of $T$ between consecutive months. This approach would lead to a new set of inverse probability weights, computed by multiplying the monthly hazard functions up to the time $T$.\\n\\n6. Can you clarify how incremental interventions avoid the positivity assumption, especially for degenerate treatment propensities?\\n\\n **Response**: Following Kennedy (2019), let us denote $Y$ as the outcome, $A$ as a binary treatment, and $L$ as baseline confounders. Let $P$ and $P^*$ represent the observed and (any) intervened distributions of $(Y, A, L)$, respectively. To infer any functional of $P^*$ (e.g., the ATE), we require that if $P^*(A = a \\\\mid L = l) > 0$, then $P(A = a \\\\mid L = l) > 0$ must also hold. Equivalently, if $P(A = a \\\\mid L = l) = 0$, then $P^*(A = a \\\\mid L = l)$ must also equal 0. 
The classical positivity assumption ensures this by assuming $P(A = a \\\\mid L = l) > 0$ for all $l$, which is a sufficient condition for this requirement. \\n\\n For subjects with degenerate treatment propensity, e.g., $P(A = 1 \\\\mid L) = 0$, where the odds are $P(A = 1 \\\\mid L) / (1 - P(A = 1 \\\\mid L)) = 0$, perturbing the odds, as considered in the incremental intervention of Kennedy (2019), by multiplying by a factor $\\\\theta$ still results in zero odds. This means that the incremental intervention leads to a new distribution $P^*(A = 1 \\\\mid L) = 0$ for these covariates $L$. Hence, the positivity assumption is not needed under incremental interventions.\\n\\n7. In the line before Theorem 1, what does \\\"[1]\\\" refer to?\\n\\n **Response**: We meant Equation (1) and we have fixed it.\\n\\n8. Are there any baseline methods for comparison in the experiments?\\n\\n **Response**: There is no baseline method for estimating the incremental causal effects for comparison, unfortunately. We added percent bias to provide more insight into the performance of our estimator in the simulation section.\"}", "{\"summary\": \"This paper proposes a novel methodology for estimating incremental causal effects in continuous-time settings, with a specific focus on time-to-treatment initiation. This approach intervenes on the intensity (hazard function) of treatment initiation without relying on the positivity assumption. By shifting the hazard function through a multiplicative factor theta, the authors develop an identification strategy using inverse probability weighting. Theoretical justification, along with both synthetic and real-world experiments, is provided.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses a gap in causal inference by extending incremental causal effects to continuous time-to-treatment initiation, an area that has not been extensively studied before.\\n2. 
The authors provide consistency and asymptotic normality theorems for their estimand, adding theoretical rigor to their approach.\\n3. The paper is well-written and easy to follow. I particularly like the related work section, which offers a comprehensive background that is beneficial for readers.\", \"weaknesses\": \"1. A major strength claimed by the authors is that the new estimand avoids the positivity assumption. However, as noted in related work (line 125), \\\"Kennedy (2019) proposed an incremental intervention that fully resolved the positivity issue.\\\" Since this paper also focuses on incremental causal effects, this aspect of the contribution may lack novelty.\\n2. In the synthetic experiment, only a single feature L is used, following a simple uniform distribution. The experiment would be more robust if it included multiple features and more common distributions, such as Gaussian. Additionally, the experiment would benefit from comparisons with other baseline methods, as the paper currently presents only the performance of their model without comparative analysis.\\n3. In Section 4.2, the authors mention that the decreasing trend in the average number of tenders aligns with findings from a 2002 paper. While it is always impossible to know the true causal evidence in real-world cases, a quantitative comparison of the estimated causal effects with results from other studies would strengthen the paper beyond trend consistency alone.\", \"questions\": \"See weaknesses.\\n\\n1. Line 65: The paper states, \\\"In general, the incremental causal effect has the interpretation of a policy effect on the population, instead of the therapeutic effect on an individual.\\\" This reminds me of the field of reinforcement learning (RL), which is specifically designed for learning policy rewards. Could there be potential benefits in using RL to learn incremental causal effects?\\n2. 
Could you provide a concrete example to illustrate the difference between \"time to treatment initiation\" and \"continuous time to initiating treatment\"? These terms appear several times in the paper, but their distinction remains unclear to me. Since this paper focuses on the continuous version, clarifying this difference could help motivate the choice.\\n3. Line 340: typo 25$ should be 25%.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for the detailed response and the additional experiments. I've decided to maintain my current score.\"}", "{\"comment\": \"Additional experiments exploring different shift interventions, hazard functions, and more covariates have been added in the appendix.\"}", "{\"metareview\": \"The paper introduces a continuous-time version of the \\\"time to treatment\\\" problem, which I found to be particularly relevant in applications such as AI for healthcare and is, to the best of my knowledge, understudied. It can be valuable for the ICLR community to be exposed to this. On the negative side, the original discrete-time formulation covers the bulk of conceptual questions, and, as can be seen from the paper, the technical aspects of the continuous-time version are relatively unsurprising, with the estimation method being a pipeline of standard problems and solutions - usual consistency / convergence in probability / asymptotic normality / bootstrap + Wald CIs. All of this is nice to have, but not the type of result that is the focus of ICLR (but not out of scope either, to be clear). 
As I've mentioned, the problem itself is still not given as much attention as I think it deserves, and this counts - although one may wonder whether there is some opportunity lost in getting into more subtleties of the problem given that the solution for the given framing is relatively classical - this is also manifested by having quite some space left in the manuscript itself. Reviewers raised all sorts of clarification questions, which I think were valid but also addressable.\\n\\nDiscussion was admittedly underwhelming despite probing, which was disappointing. Reviews were reasonably informative though and, along with the rebuttals, were useful for the assessment. I found the comment about reinforcement learning (reply to reviewer \\\"roun\\\") to be interesting, and I didn't fully understand it: e.g. to what extent the randomness inherent to this problem clashes with the usual deterministic policies of optimal single-agent problems. Maybe this could be elaborated upon in any particular version of this problem.\", \"additional_comments_on_reviewer_discussion\": \"Discussion was admittedly underwhelming despite probing, which was disappointing. Reviews were reasonably informative though and, along with the rebuttals, were useful for the assessment. I found the comment about reinforcement learning (reply to reviewer \\\"roun\\\") to be interesting, and I didn't fully understand it: e.g. to what extent the randomness inherent to this problem clashes with the usual deterministic policies of optimal single-agent problems. Maybe this could be elaborated upon in any particular version of this problem.\"}
Additional experiments exploring different interventions, hazard functions, and comparisons could enhance insights.\\n\\n **Response**: Thank you for pointing this out, as it was also raised by other reviewers. We will include additional features following Gaussian and Bernoulli distributions, and consider different hazard functions, including Weibull and Rayleigh distributions, as well as non-constant interventions in the simulation section. This may happen either before the discussion period ends or in the camera-ready version if accepted, depending on how smoothly the new experiments go.\\n\\n For comparative analysis, there is no baseline method for estimating the incremental causal effects for comparison, unfortunately. To further evaluate the performance of our estimator in the simulation, we added percent bias to provide more insight. For the Methotrexate data application, we added a qualitative comparison of the estimated causal effects with those from a 2024 paper focusing on a similar estimand.\\n\\n2. Theorems 2 and 3 lack clarity due to missing explicit conditions and notations.\\n\\n **Response**: Fixed.\\n\\n3. The simulation section should explicitly state the true effect and clarify outcome censoring in the DGP.\\n\\n **Response**: With interest in the mean outcome $Y_{T(\\\\theta)}$, our estimand is defined as $\\\\psi(\\\\theta) = \\\\mathbb{E}(Y_{T(\\\\theta)})$, which represents the true incremental causal effect when the corresponding hazard is scaled by a constant positive quantity $\\\\theta$. The values of $\\\\psi(\\\\theta)$ have been added to the simulation results in Section 4.1.\\n\\n In the data generating process, $(L_i, T_i, Y_i)$ are generated following the distributions listed in Section 4.1. The observations or the data used in our estimator are $(L_i, T_i \\\\wedge 2, \\\\Delta_i = 1(T_i < 2), Y_i)$, where $T_i \\\\wedge 2$ represents the censored time to a certain treatment.\\n\\n4. 
Typos found in line 59, line 340, and Theorem 3 proof.\\n\\n **Response**: Fixed.\\n\\n## Questions\\n\\n1. How can sensitivity analysis or proximal causal inference address unmeasured confounding in this setting?\\n\\n **Response**: Sensitivity analysis can indeed be applied in multiple ways. A straightforward approach is to introduce an unmeasured confounder $U$ that affects both treatment $T$ and outcome $Y$ through two separate sensitivity parameters. In such a scenario, the IPW estimator is expected to fail due to the violation of the unconfoundedness assumption. By varying the sensitivity parameters, one can observe how the estimator deviates from the true causal effect. Regarding proximal causal inference, further investigation is required. Specifically, this would involve constructing an IPW bridge function that links the underlying treatment distribution conditioned on $U$ to the target incremental intervention, leveraging appropriate proxies.\\n\\n2. Are the regularity conditions in Theorems 2 and 3 standard in survival models?\\n\\n **Response**: Yes, they were shown to hold for Cox models (Andersen et al. 2012) and Aalen models (Martinussen et al. 2006). More generally, these conditions were shown to hold for general Glivenko-Cantelli and Donsker families (van der Vaart et al. 2013).\\n\\n - Andersen, P. K., Borgan, O., Gill, R. D., & Keiding, N. (2012). Statistical models based on counting processes. Springer Science & Business Media.\\n - Martinussen, T., & Scheike, T. H. (2006). Dynamic regression models for survival data. Vol. 1, Springer.\\n - van der Vaart, A. W., & Wellner, J. (2013). Weak convergence and empirical processes: with applications to statistics. Springer Science & Business Media.\\n\\n3. Is there any observable pattern in estimator performance as \\\\(\\\\theta\\\\) changes?\\n\\n **Response**: So far we have not observed any patterns, but thank you for suggesting this. 
The reason may be that the bias and variance terms across different values of $\\\\theta$ do not exhibit any clear relationship (e.g., monotonicity).\"}
The examples chosen, especially the rheumatoid arthritis example, are strong, and the analysis of the rheumatoid arthritis experiment (the reasoning about doubling the hazard decreasing joint pain) is especially compelling and highlights very nicely how this method could be used in practice.\", \"weaknesses\": \"The biggest weakness of this paper in my eyes is that the problem being solved is not clearly defined. Some of this seems to be due to language issues (while the occasional grammatical issue or awkward phrase doesn't generally impede understanding, there are a few parts where the intended meaning isn't clear), and some of this is due to the lack of a clear motivating example. Specifically:\\n\\n- In the introduction, the authors define an incremental intervention as an intervention \\\"that is not pre-specified, but rather a function of the observed treatment distribution.\\\" Without defining what is meant by \\\"the observed treatment distribution\\\", I assume that it means which units in the sample population received treatment and which didn't (P(T)), or maybe the conditional probability distribution in the sample population (P(T|L)). I'm having a hard time understanding what this means, even after having read the whole paper. The example in this section, as well as the MTX use case in the experiments, seem to deal with an intervention on treatment (such as being assigned to a behavioral health program or being prescribed MTX) that is, presumably, informed based on the individual's covariate values. So this is an intervention that is a function of the observed covariates, not \\\"of the observed treatment distribution.\\\" Am I misunderstanding something about your approach here?\\n\\nThe organization of the introduction seems a bit backwards to me. The example in the third paragraph (probationers being assigned to behavioral health services) is a great motivating example for the general \\\"time to treatment initialization\\\" problem setting. 
The first paragraph contains two good examples, but the narrative about \"time to treatment\" is not made clear. For example, the first 4 sentences of the paper talk about a tech team struggling to keep up with review requests and ask the reader to consider the effect of doubling the number of reviewers on the processing time of requests. Coming into this paper with causality in mind, this sounds an awful lot like reasoning about intervening on a treatment (the number of reviewers) and measuring the effect on an outcome (the processing time). However, as becomes clear later in the paper, the \"processing time\" is not, in fact, the outcome, but the time until treatment. And if that's time until treatment, then I suppose treatment = somebody reviewing a request. But then I'm not sure what the outcome is....backlog size??\\n\\nIf you want to open with that example, you should start by clearly explaining how it maps to your problem. An example flow, assuming I'm understanding the problem correctly: \"We're interested in understanding how long it takes people on a tech team to respond to review requests. The time until review is not static, but depends on many features, such as how many reviewers the help desk has at that time. \\n After a system outage, the number of requests has increased, creating a large backlog. The scheduler wants to decrease the size of this backlog, which they plan to do by decreasing the time until review. The number of reviewers has a large effect on the time until review, so the scheduler decides to double the number of reviewers, which then doubles the likelihood of a request being reviewed at any given time, as requests are often selected for review at random. This process - reasoning about percentage changes to covariates to determine their effect on time to treat and, thus, the effect of treatment - is called \"incremental causal effects\".\"\\n\\nThe second paragraph of the introduction is also oddly placed. 
Digging into the different types of interventions is interesting, and the distinctions brought up are quite relevant, but without having a clear problem statement yet, it's unclear how to fit your proposed method into that framework. I think you're missing a paragraph in the introduction where you define (not technically, but in straightforward language) your problem statement. (i.e., the time until treatment for each subject is based on some measured covariates; the outcome is an effect of that treatment and starts to be recorded as soon as treatment is applied to that subject; we want to reason about how changes in the probability of treatment function affect outcome).\\n\\nSection 1.1 is focused on, and named after, the positivity assumption, and highlights bypassing the positivity assumption as a key advantage of incremental causal effects. However, this section, from what I can tell, never actually explains how it bypasses positivity. (Also, the phrase \"avoids the positivity\" is weird - reword that) It's only the introduction, so I don't expect an in-depth explanation yet, but given that it's a whole section in the intro about positivity, at least a sentence giving an intuition about why we can ignore positivity would help. Following on from that about positivity, it looks like it's addressed in the first paragraph of the Related Work section. However, the explanation in the related work is not very clear or detailed (and again, especially given how prominently positivity was just highlighted in the introduction, I expected a deeper/clearer explanation).\\n\\nSome terminology explanation is missing. Line 171 defines $\\\\lambda(t|l)$ and $\\\\Lambda(t|l)$ as just \\\"its hazard function and cumulative hazard function at time t given L = l, respectively.\\\" I assume \\\"its\\\" here refers to T. 
However, from what I can see, you never actually define either hazard function, despite them being fairly core to your method.\\n\\nI like the setups chosen for both the synthetic and empirical experiments, but the lack of a baseline makes interpreting the results near-impossible. For example, in the simulation results, you say that your results illustrate that the incremental causal effects \\\"perform well with small biases.\\\" I'm struggling to see how you came to that conclusion from Table 1 alone. Looking at the numbers in the \\\"Bias\\\" row, they look low, but are they actually low for that problem? Did you use additional visualizations to come to the conclusion that these numbers represent good performance?\\n\\nBetween the clarity issues throughout and the difficulties in interpreting the experimental results, I don't feel comfortable voting for acceptance. If these issues are adequately addressed, though, I'm open to increasing my score.\", \"questions\": \"Can you explain which pieces of this work are novel/provide a contributions list? From the introduction, it looks like the move to continuous time to treatment is new. However, the abstract makes it sound like avoiding the positivity assumption is also novel, while the paper makes it seem like that was something that was already shown as a property of incremental causal effects.\\n\\nI'm not used to seeing L chosen as the variable for covariates/potential confounders. (I've seen W, X, V, C....) 
It's not a problem, but is the choice of L based on any particular subset of the literature?\\n\\nAs per my confusion in the Weaknesses section, can you clarify what you mean by an incremental intervention being one that is \\\"a function of the observed treatment distribution\\\"?\\n\\nIn the second paragraph of the introduction, you state that, in a static intervention, \\\"the subjects are either all treated or all untreated\\\", in contrast to a dynamic intervention, where treatment could depend on covariates. This makes it sound like the entire population is either all treated, or all not treated. This is a valid scenario to consider (e.g., a state government policy that then affects everyone living in that state), but you describe static interventions as being \\\"typically the case when considering an average treatment effect (ATE)\\\", in which case, you typically need examples of both treated and untreated subjects. I would assume that you meant \\\"each subject is either treated or untreated, assigned independently of their covariates\\\", but that's not particularly close to what you wrote, so I must be misunderstanding something. Actually, reading the abstract of Bonvini et al (2021), they describe ATE as relating to \\\"the effect of everyone deterministically receiving versus not receiving treatment\\\" - as in, the counterfactual question. Is that what you're referring to here?\\n\\nEspecially in medical examples (such as the MTX arthritis example), individual covariates can change over time. Does your model take into account that an individual's covariates L could change over the timesteps before they get treated, which could in turn affect the probability of treatment?\\n\\nI'm not following the explanation at the beginning of Related Work about how incremental intervention avoids making the positivity assumption. 
Summarizing Kennedy (2019), you say that, for subjects with 0 or 1 probability of treatment, we can see positivity as always satisfied \\\"because perturbing the odds does not change their degenerate probabilities\\\". How does that follow?\\n\\nIn the line before Theorem 1, it says \\\"We prove that ([1]) can be identified\\\". What is ([1]) referring to here? Are you referring to Theorem 1? Assumption 1? Equation 1?\\n\\nAre there any baselines you can use for comparison in the experimental results? Some other effect estimation method, or at the very least some naive baseline that could provide some calibration for the experimental results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. After reviewing other feedback, I\\u2019ve decided to maintain my current score.\"}", "{\"comment\": \"## Weaknesses\\n\\n1. The problem being solved is unclear, with ambiguous terminology like \\\"observed treatment distribution\\\" and a lack of clear motivating examples.\\n\\n **Response**: We have defined the observed treatment distribution and reworded \\\"function of\\\" to \\\"intervenes on.\\\"\\n\\n2. The introduction's organization is confusing, with examples and problem statements not clearly mapped.\\n\\n **Response**: Thank you for highlighting this potential confusion. In fact, to sharpen the focus and eliminate confusion, we decided to remove the tech company example, as it is less illustrative than the other examples. But still, to clarify, let\\u2019s consider the example of social media platforms handling user-reported content, such as comments flagged for \\\"hate language\\\" or \\\"sexually explicit\\\" content. In this case, the outcome could vary depending on the context: a short-term outcome might be the number of other users who see the flagged comment before it is reviewed, while a longer-term outcome could be user satisfaction. 
Acting quickly and accurately (e.g., removing the flagged content if deemed inappropriate) would reduce the number of users exposed to such comments and likely improve overall user satisfaction.\\n\\n3. Opening with examples should better map them to the problem being studied.\\n\\n **Response**: We have revised the introduction to open with the probationers example from the original third paragraph, as suggested.\\n\\n4. The second paragraph of the introduction is misplaced and lacks a clear problem statement.\\n\\n **Response**: We have rearranged the flow and explicitly stated our problem, emphasizing that the literature has focused on binary treatment, while our contribution is time to treatment in the context of incremental interventions and effects.\\n\\n5. Section 1.1 discusses the positivity assumption but does not provide an intuition or explanation for how it is bypassed.\\n\\n **Response**: The old Section 1.1 contained multiple topics, including the problem statement and organization of the paper. We have now rearranged it to focus solely on the positivity assumption, added intuition about how incremental effects bypass positivity, and provided a more detailed explanation after Theorem 1.\\n\\n6. Core terminology like \\\"hazard function\\\" and \\\"cumulative hazard function\\\" is introduced without proper definitions.\\n\\n **Response**: Added.\\n\\n7. The lack of a baseline in experiments makes interpreting results difficult, and biases are not clearly contextualized.\\n\\n **Response**: We added percent bias to provide more insight into the performance of our estimator in the simulation section.\"}", "{\"comment\": \"Additional experiments exploring different shift interventions, hazard functions, and more covariates have been added in the appendix.\"}", "{\"comment\": \"## Weaknesses\\n\\n1. Definitions in Subsection 3.1, such as the hazard function, are unclear and lack sufficient detail.\\n\\n **Response**: Added.\\n\\n## Questions\\n\\n1. 
Can the estimator be analyzed with finite sample data, and are there high-probability guarantees?\\n\\n **Response**: High-probability results, as you mentioned, are typically derived in the asymptotic regime where $n \\\\to \\\\infty$. These asymptotic results were investigated in Section 3.2. In statistical practice, it is standard to evaluate estimators' finite-sample performance separately through finite-sample analysis to demonstrate practical performance (e.g., bias, variance) as in Section 4.\"}", "{\"summary\": \"This paper extends INCREMENTAL CAUSAL EFFECT to continuous-time treatment. To this end, the author shows that the target quantity is identifiable under certain assumptions, excluding the well-known positivity assumption. An estimator is then proposed that is consistent. The effectiveness of the estimator has been validated through empirical experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1- The paper addresses an important problem and proposes an algorithm to solve it.\\n\\n2- The proposed approach has been analyzed both theoretically and empirically.\", \"weaknesses\": \"1- Some definitions in Subsection 3.1, such as the hazard function and related concepts, are not clear to the reader. It would be beneficial to provide more detail, as there is still enough space available.\", \"questions\": \"1- How is it possible to analyze the estimator with finite sample data? For example, is there a high-probability guarantee for it?\\n\\nI am not completely familiar with the area covered in the paper, and I\\u2019m uncertain about its contribution to the field. I may revise my score after considering the feedback from other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Weaknesses\\n\\n1. 
The novelty claim of avoiding the positivity assumption overlaps with prior work by Kennedy (2019).\\n\\n **Response**: Thank you for highlighting this point, which was also noted by other reviewers. To clarify, both the move to continuous time to treatment and avoiding the positivity assumption are novel contributions of this paper. Kennedy (2019) and some other works introduced the concept of \\\"incremental causal effects\\\" in discrete-time settings and demonstrated that it avoids the positivity assumption in that context. Our work extends this concept to continuous time to treatment and proves, in Theorem 1, that the positivity assumption can similarly be avoided in this setting. We will revise the paper to make these contributions more explicit and clearly articulated.\\n\\n2. The synthetic experiment lacks feature diversity and baseline comparisons.\\n\\n **Response**: We are including additional features following Gaussian and Bernoulli distributions, and are also considering different hazard functions and non-constant interventions in the simulation section. These additions will appear either before the discussion period ends or in the camera-ready version if accepted.\\n\\n There is no baseline method for estimating the incremental causal effects for comparison, unfortunately. We added percent bias to provide more insight into the performance of our estimator in the simulation section. For the Methotrexate data application, we added a qualitative comparison of the estimated causal effects with those from a 2024 paper.\\n\\n3. The real-world comparison relies on trend consistency without quantitative validation.\\n\\n **Response**: A qualitative comparison of the estimated causal effects with a 2002 paper focusing on a similar estimand has been added. Since the estimands we used are not exactly the same, a quantitative comparison is not feasible at this time.\\n\\n## Questions\\n\\n1. 
Can reinforcement learning (RL) help estimate incremental causal effects?\\n\\n **Response**: If the reviewer refers to using RL to learn optimal treatment regimes that maximize a reward function, it is worth noting that learning incremental causal effects through this approach is not straightforward. As discussed in the book *Causal Inference: What If*, when optimizing an outcome-based reward function given \\\\(L\\\\), the optimal treatment choice becomes deterministic. Incremental interventions, being stochastic by nature, would therefore not align with the deterministic optimal regimes learned through RL.\\n\\n2. Clarify the distinction between \\\"time to treatment initiation\\\" and \\\"continuous time to initiating treatment.\\\"\\n\\n **Response**: \\\"Time to treatment initiation\\\" encompasses both \\\"discrete time to initiating treatment\\\" and \\\"continuous time to initiating treatment\\\". We have clarified this by pointing out that the 3rd reference in that paragraph, Nie et al. (2021), was on discrete time.\\n\\n3. Typo: \\\"25$\\\" should be \\\"25%.\\\"\\n\\n **Response**: Corrected.\"}
0mo2yqOS6Z
Enhancing Accuracy and Parameter Efficiency of Neural Representations for Network Parameterization
[ "Hongjun Choi", "Jayaraman J. Thiagarajan", "Ruben Glatt", "Shusen Liu" ]
In this work, we investigate the fundamental trade-off between accuracy and parameter efficiency in neural network weight parameterization using predictor networks. We present a surprising finding that the predicted model not only matches but also surpasses the original model's performance through the reconstruction objective (MSE loss) alone. Remarkably, this improvement can be compounded incrementally over multiple rounds of reconstruction. Moreover, we extensively explore the underlying factors for improving weight reconstruction under parameter-efficiency constraints and propose a novel training scheme that decouples the reconstruction objective from auxiliary objectives such as knowledge distillation, leading to significant improvements compared to state-of-the-art approaches. Finally, these results pave the way for more practical scenarios, where one needs to achieve improvements in both model accuracy and predictor network parameter-efficiency simultaneously.
[ "Implicit Neural Representations", "Parameter Generation", "Network Prediction", "Distillation" ]
Reject
https://openreview.net/pdf?id=0mo2yqOS6Z
https://openreview.net/forum?id=0mo2yqOS6Z
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wlj7RhxvdJ", "vj4qwwmOXQ", "tvO1NNZ37R", "tlvMsSM0jZ", "oPVFtLoDRG", "mXbyNfQVAF", "jJgysdtKq7", "hBbp4cRllS", "X3pAO96v3X", "WcuEwAGtWW", "WWvVB51iM7", "UzUUVEQUBi", "TZ8AVKNRys", "Q0Pn5B6iIK", "BR7OZUHuN4", "9j4qPOEMbH", "9NaCdw7GVa", "74dBIPgBKw", "4h2Sotpv7g", "2xGw44buEo", "1VpJoXmWQP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732762157228, 1732512987829, 1732513135011, 1734840669848, 1732513443657, 1730475111857, 1732512923991, 1732839103506, 1730868293635, 1732600050984, 1732642470441, 1730476436197, 1732513194719, 1732838977467, 1730758342390, 1730542173814, 1732513533387, 1732566243023, 1737523898299, 1732762023745, 1732636081867 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Area_Chair_EMRD" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_ZS5i" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_Ta1g" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_GbWd" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_7EjM" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_9e3C" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_7EjM" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_GbWd" ], 
[ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_9e3C" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8273/Authors" ], [ "ICLR.cc/2025/Conference/Submission8273/Reviewer_ZS5i" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for finding our work interesting. Please let us know if you have any further questions.\"}", "{\"title\": \"Thank you for your valuable comments\", \"comment\": \"**Downstream tasks**\\nTo evaluate the generalization of our predicted weights, we designed a transfer learning experiment comparing the original weights (baseline pretrained on CIFAR-100) and our predicted weights (trained on CIFAR-100) when applied to the CINIC-10 dataset. We tested two scenarios: 1) Fine-tuning all layers to fully adapt the model to the new task, and 2) Fine-tuning only the linear layer to highlight the generalizability of convolutional weights. In both cases, models were fine-tuned for 10 epochs, while the 'From Scratch' model was trained for 150 epochs. The results show that our predicted weights (Hidden 360, CR < 1) consistently outperform the original weights in both scenarios, achieving higher accuracy. Additionally, using predicted weights from progressive reconstruction (Round 5, Hidden 680, CR > 1) further improves generalization slightly. 
Overall, these results confirm that the proposed method generalizes well to downstream tasks.\\n\\n| S \\u2192 T | Tuning Layers | From Scratch | Original Weights | Predicted Weights Hidden 360, CR<1 | Predicted Weights Hidden 680, CR>1 |\\n|---------------|-----------------|---------------|-------------------------|------------------------------------|------------------------------------|\\n| CIFAR100 \\u2192 CINIC10 | All Layers | 76.88% | 80.22% \\u00b1 0.05 | **80.32% \\u00b1 0.04** | **80.35% \\u00b1 0.03** |\\n| CIFAR100 \\u2192 CINIC10 | Linear Layer | 76.88% | 58.95% \\u00b1 0.03 | **58.98% \\u00b1 0.04** | **59.20% \\u00b1 0.04** | \\n\\n**Progressive enhancement of high-performing networks**\\nWe are not entirely sure about the interpretation of the question; please clarify if we have misunderstood. We think you are asking whether the progressive-reconstruction can generalize to a higher-performing network built upon the original network, such as those enhanced through techniques like knowledge distillation. To address this, we designed the following experiment: instead of applying progressive-reconstruction to the original network (71.37%), we used a distilled network (73.60%) where knowledge distillation with a high-performing teacher was used while training the original network. This experiment is crucial because it demonstrates that our method is not constrained by the training techniques. The results confirm that progressive-reconstruction, when applied to the distilled network, also achieves better performance than the target accuracy (73.60%). 
This aligns with the findings reported in Table 1, which illustrates that our method effectively improves performance, even when targeting a more optimized baseline.\\n\\n| Original network | Distilled network | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|------------------|-------------------|---------------------------------|---------------------------------|\\n| 71.37% | 73.60% | **73.88% \\u00b1 0.04** | **73.99% \\u00b1 0.01** |\\n\\n**Extend beyond CNN-based architecture**\\nWhile our current work focuses on effectively performing NeRN's reparameterization on CNNs, we recognize the importance and value of extending NeRN to other architectures. Unfortunately, due to time constraints, we were unable to show an example of NeRN's application to a transformer architecture in this submission. However, we have carefully considered the challenges and requirements. \\n\\nGeneralizing NeRN to transformers would require significant modifications to its current setup, primarily in designing a suitable coordinate system and addressing the larger size and complexity of weight matrices in transformers. For example, the coordinate system would need to capture aspects such as layer index, head index, weight type (query, key, value), and block indices for submatrices (if weight matrices are decomposed into smaller submatrices). These additions would maintain NeRN's core principle of functional weight representation but would introduce new challenges during training due to the complex structure. \\n\\nAdditionally, although NeRN does not impose architecture-specific assumptions, it relies on INR, which typically assumes that the input coordinate system represents a smooth and continuous space and that the learned function exhibits some level of smoothness. However, in transformers, the coordinates are discrete, and the corresponding target weights lack continuity, making convergence and accuracy more challenging. 
NeRN addresses this for CNNs by introducing permutation-based smoothness, which promotes kernel smoothness by reordering pre-trained weights without altering their values. However, in transformers, the increased size and complexity of both the coordinate system and weight matrices would require further innovations to simplify the prediction task and ensure smooth learning.\\n\\nBesides NeRN, there are other approaches that aim to predict transformer weights, such as diffusion-based weight prediction methods. These can potentially be adopted as the predictor network to replace NeRN, whereas our proposed iterative weight reconstruction could potentially still be applied.\"}", "{\"title\": \"Thank you for the detailed discussion\", \"comment\": \"**Alternative methods exist for weight smoothing**\\nThere are different methods to achieve weight smoothing, either during model training (e.g., weight decay, dropout) or through post hoc approaches directly applied to pre-trained model weights (e.g., frequency filtering, modulating singular values).\\nOur proposed approach focuses on modifying pretrained weights.\\nIn Appendix A, we explore alternatives for directly applying weight smoothing, either through modulating singular values or through Fourier transformation and filtering. \\nThe downsides of these methods are that they are very sensitive to the right parameter setting or require checks against the validation dataset.\\nRegularization-based methods usually come with a generalization-accuracy trade-off.\\nOur method doesn't have these restrictions, and we show below that we can iteratively increase the accuracy of already-optimized models just using the reconstruction loss.\\n\\n**Reducing model storage is not as big of a concern as reducing the memory**\\nTraining a predictor network doesn't necessarily mean requiring more memory. 
With a small predictor, we can learn through batch training of the original weights and, therefore, require significantly less memory compared to the original model (the same holds for reconstructing the original weights). \\nCompression is just one of the potential benefits of the proposed method. The reconstruction setup with a larger predictor aims at improving model performance rather than compression. At inference time, we will have a model of the same memory footprint but with improved performance.\\nMoreover, we show that the reconstructed weights have other benefits, such as better generalization for downstream tasks (see response to reviewer GbWd). \\n\\n**Problem setup in this paper is not clear**\\nWe will update the paper to clarify the points raised by the reviewer. Specifically, we always use the test split for all model evaluations. Also, for the reconstruction tasks, the predictor is trained to predict the model weights directly, so no data split is involved. Only the training split is used for the knowledge distillation setup.\\nWe also updated the caption of Figure 1 and the description in the text of Section 3.1.\\n\\n**Figure 1 (left) is confusing, and the slightly increased accuracy (Figure 1 right) may be due to variance**\\nWe agree that the left figure is confusing, so we will remove it from the final version and have adjusted the text describing the figure. We also changed the caption of the figure to describe the meaning of the dots, i.e., 
Additionally, we provide the mean and standard deviation across three runs, demonstrating that the observed behavior is not due to variance in the results.\\n\\n| Original | 750 | 680 | 510 | 360 | 320 | 280 | 220 |\\n|----------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| 71.37% | 71.56 \\u00b1 0.05 | 71.61 \\u00b1 0.01 | 71.45 \\u00b1 0.07 | 67.48 \\u00b1 0.10 | 61.31 \\u00b1 0.45 | 49.55 \\u00b1 1.72 | 24.20 \\u00b1 1.56 |\\n\\n\\n**Comparison with distillation**\\nWe designed three experiments to highlight the advantages of our proposed training strategies. \\nFirst, we compare our predictor network with a distilled network trained using conventional KD.\\nHere, a student network is trained from scratch using teacher guidance from ResNet50. \\nThe results show that our decoupled training achieves an accuracy of 73.95\\\\%, outperforming the conventional KD approach, 73.60\\\\%. \\nHowever, the advantages of the proposed method are not limited to this performance gain; it also allows for further iterative improvement in each case.\\n\\n| Original network | Distilled network | Ours from Table 4 |\\n|-------------------|-------------------|--------------------------|\\n| 71.37% | 73.60% | **73.95% \\u00b1 0.09** |\\n\\nSecond, we applied the progressive-reconstruction process on the distilled network for two rounds. Both result in improved performance beyond the target accuracy. 
This behavior demonstrates that our method is not constrained by the initial training techniques and can effectively refine an already optimized baseline.\\n\\n| Original network | Distilled network | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|------------------|-------------------|---------------------------------|---------------------------------|\\n| 71.37% | 73.60% | **73.88% \\u00b1 0.04** | **73.99% \\u00b1 0.01** |\\n\\nThird, we also investigated whether progressive-reconstruction can enhance our best-performing model (achieved through decoupled training with a high-performing teacher, as shown in Table 4). The results confirm improvements over the target accuracy in each round, emphasizing the effectiveness and flexibility of the proposed recipe.\\n\\n| Original Network | Ours from Table 4 | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|-------------------|------------------------|-----------------------------|-----------------------------|\\n| 71.37% | 73.95% \\u00b1 0.09 | **74.15% \\u00b1 0.06** | **74.28% \\u00b1 0.03** |\"}", "{\"metareview\": \"This paper explores the trade-off between accuracy and parameter efficiency in neural networks using smaller predictor networks to predict neural network weights. With successive rounds of reconstruction, by decoupling the reconstruction and distillation processes, the model's accuracy can even exceed that of the original model. The proposed method is applied to CNNs on datasets such as CIFAR and ImageNet.\\n\\nIt has received 5 reviews and an extensive rebuttal, with final ratings 6,5,5,6,6. Reviewer 1 raised concerns about technical novelty, the arbitrary nature of \\\"low reconstruction error\\\", a lack of explanation regarding why the method would improve performance, and generalization to other architectures like transformers. 
Reviewer 2 doubted the practicality of the approach, with negligible gains on model performance, no memory reduction during deployment, and no comparisons with simpler weight smoothing methods. Reviewer 3 questioned whether the method could generalize better beyond CNNs, to other downstream tasks, with progressively improved teacher models. Reviewer 4 agreed with Reviewers 2 and 5 that the method primarily smooths the weights (not novel in itself), and that the absence of ViT experiments would be very limiting in the scope of work. Reviewer 5 found the accuracy claim not supported by experimental results: minimal to barely existent performance gains on small datasets and a worse trend on a larger and more challenging dataset such as ImageNet. The rebuttal clarified improved memory efficiency during inference and the ability to apply the method in a data-free environment, emphasized the goal of re-parameterization, with additional experiments and explanations regarding the observed performance gains over baseline NeRN etc.\\n\\nDespite the authors' rebuttal addressing several concerns, the reviews indicated significant skepticism regarding the performance gain, the experimental rigor, generalization to non-CNN architectures, technical novelty, and practical impact of the proposed method. Although the method shows potential, especially in the domain of weight-space learning, the paper ultimately did not demonstrate sufficient transformative contributions to warrant acceptance at this stage. 
Therefore, the final decision is rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers engaged in the rebuttal process with the authors and cross-referenced each other's comments.\"}", "{\"title\": \"Thank you for your extensive review and discussion\", \"comment\": \"**PART 1/2**\\n\\n**Limited performance / KD baseline / High-Performing Teacher**\\nTo address the concerns listed in the title, we designed two experiments to highlight the advantages of our proposed training strategies, which cannot be achieved with the baseline NeRN. As suggested, we directly compared our predictor network with a distilled network trained using conventional KD. In this setup, a student network is trained from scratch using ResNet50 teacher guidance. The results show that our decoupled training achieves an accuracy of 73.95%, outperforming the conventional KD approach (73.60%). However, the advantages of the proposed method are not limited to this performance gain; it also allows for further iterative improvement in each case.\\n\\n| Original network | Distilled network | Ours from Table 4 |\\n|-------------------|-------------------|--------------------------|\\n| 71.37% | 73.60% | **73.95% \\u00b1 0.09** |\\n\\nFirst, we applied the progressive-reconstruction process by targeting the distilled network (73.60%) and proceeded to the second round. The first round of progressive-reconstruction resulted in improved performance beyond the target accuracy, and the second round achieved further enhancement. 
These results demonstrate that our method is not constrained by the initial training techniques and can effectively refine an already optimized baseline.\\n\\n| Original network | Distilled network | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|------------------|-------------------|---------------------------------|---------------------------------|\\n| 71.37% | 73.60% | **73.88% \\u00b1 0.04** | **73.99% \\u00b1 0.01** | \\n\\nWe also investigated whether progressive-reconstruction can enhance our best-performing model (achieved through decoupled training with a high-performing teacher, as shown in Table 4). The results confirm that the first round improves upon the target accuracy (73.95%), and the second round achieves additional gains. These results also emphasize the effectiveness and flexibility of the proposed recipe in achieving further performance improvements.\\n\\n| Original Network | Ours from Table 4 | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|-------------------|------------------------|-----------------------------|-----------------------------|\\n| 71.37% | 73.95% \\u00b1 0.09 | **74.15% \\u00b1 0.06** | **74.28% \\u00b1 0.03** |\\n\\n**Line 82-84, limitation on data usage**\\nIt is correct that both baseline NeRN and our decoupled training need the original task's data (though the iterative refinement with a large predictor uses only the reconstruction loss, so access to the original data is not required). Despite that fact, we conducted an experiment that addresses the challenge of operating in a completely data-free environment. We employ uniformly sampled noise as input data for both methods, and our results demonstrate its effectiveness compared to the baseline NeRN even in the absence of meaningful data.\\n\\n| CIFAR10 | Original ResNet | Recon-only | Baseline | Ours |\\n|-----------|-----------------|-------------------|-------------------|-------------------|\\n| Acc. 
(\\u2191, %) | 91.69% | 85.64% \\u00b1 0.39 | 86.31% \\u00b1 0.11 | **87.25% \\u00b1 0.02** |\\n| CIFAR100 | Original ResNet | Recon-only | Baseline | Ours |\\n| Acc. (\\u2191, %) | 71.37% | 61.31% \\u00b1 0.45 | 63.92% \\u00b1 0.11 | **64.39% \\u00b1 0.01** |\\n\\n**Significantly degraded performance of quantized predictor**\\nWe use the term \\\"orthogonal\\\" to indicate that quantization can be directly applied to the predictor network if further benefits are desired. While quantizing the predictor leads to a significant degradation in performance (e.g., from 70.84% to 51.72% on CIFAR100), this outcome is explainable. The degradation is not necessarily due to the brittleness of the predictor, but rather it is because the predictor is the weight generator. Even small changes in the predictor's outcome (the weights of the reconstructed network), such as those introduced by quantization, can significantly impact the performance of the reconstructed model. In contrast, quantizing the original network (e.g., ResNet) has less impact because it operates on a fixed structure rather than dynamically generating weights. Despite the degradation observed in the quantized predictor, our method (70.84%), using a predictor of the same size as the quantized ResNet56, achieves better performance than its counterpart (69.65%).\\n\\nWe also want to clarify that the primary objectives of our work differ from quantization methods. While improved model compression is one of the benefits offered by our approach, the core contribution lies in effectively performing NeRN-style reparameterization through our novel progressive-training and decoupled-training strategies.\"}", "{\"summary\": \"This paper proposes an analysis and improved method for representing the weights of neural networks by implicit neural representations (INRs). I.e., it aims to compress model weights (or very slightly improve model performance), while keeping the same accuracy. 
The authors first analyze a reconstruction of model weights by MSE, showing that it enforces some form of smoothing on the weights. Additionally, they claim that with a big enough INR they are able to slightly improve the performance over the original model. They then propose a new method for model compression by INRs that decouples the distillation and reconstruction objectives into 2 separate stages, leading to better results. Lastly, the authors show that distilling the model using a larger and more capable backbone can improve the compressed model performance.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and easy to follow.\", \"Despite the relatively minor technical change from the baseline, the new approach is significantly better at model compression without losing performance.\", \"The analysis performed on model weights smoothness is interesting (Sec. 3.1 and App. A).\"], \"weaknesses\": [\"**Claim on Accuracy Improvement:** My main concern is that the paper's main claim is not well supported. The authors argue that their method enhances model performance compared to the original uncompressed model through weights smoothing. While this sounds promising and the presented analysis is interesting, in practice, the results reveal that the improvement from smoothness is minimal and almost negligible. The performance gain is limited to small-scale benchmarks like CIFAR-100 and STL-10, where the models\\u2019 accuracy increases by only 0.1-0.6%. Furthermore, when tested on ImageNet, the trend reverses, with all experiments performing worse than the original model. I believe that as is, this claim is misleading given these results.\", \"**Missing Baseline:** The limitations on data usage presented in lines 82-84 about the NeRN baseline are also true for the distillation loss used in the proposed method. I.e., it becomes impractical for model compression if data is not available. 
In that case, knowledge distillation to a smaller architecture should also be considered as an additional baseline, as it has the same goal: compressing the model with minimal performance drop.\", \"**High-Performing Teacher:** I believe the experiment done with a higher-performing teacher is a bit unfair for the scenario. It seems as if one could perform some knowledge distillation from the stronger teacher before the compression or just compress the stronger teacher in the first place.\", \"**Combining with Other Compression Approaches:** While the authors claim their method is orthogonal to other compression types, Tab. 5 shows otherwise. Specifically, quantization of the compressed model heavily degrades performance (in CIFAR-100 it decreases from 70.84% to 51.72%).\", \"**Figure 1(a):** Figure 1(a) is confusing as it only presents an expected trend, not real results. I think this part of the figure should simply be removed from the paper.\", \"**Intuition in Smoothness Analysis:** While the analysis on weight smoothness presents cases where it slightly improves performance, it does not explain why this happens, i.e., why smoother weights might be better. I can conjecture it is related to the memorization of training examples (overfitting), which can be represented in higher frequencies. If this is the case, an explicit measurement of overfitting (generalization gap) with and without weight smoothing could greatly benefit this analysis.\"], \"minor_remarks_which_did_not_affect_the_grading\": [\"Most of the citations should probably be in parentheses and not in line (as all of the paper citations are).\", \"The in-line citation at L64 is strange (the reference seems to be in the wrong place).\", \"L251: the reference to the equation has a wrong number.\", \"L264-266: These lines should probably be revised to a more accurate version as some terms are unclear (e.g., in which layer does the decision making start and feature extraction end?)\", \"Tab. 
1: All the first lines are in bold, making it unclear which result to focus on. You should probably bold only the best result, as in the other lines of the table.\", \"I am open to reconsidering my score given a revised manuscript which addresses the concerns I mentioned.\"], \"questions\": [\"The method compresses the model storage-wise only if one saves the implicit representation model alone. Does this mean that in order to use the compressed model one would have to thoroughly reconstruct every weight? If so, this means the method won't save RAM space at all, and even worse will require much more inference time. Could the authors show how many FLOPs it takes to rebuild a model compared to a standard inference of it?\", \"Did the authors try to check if smoothing other layer types (e.g., fully-connected layers) also works when training an implicit representation for them? While simple smoothing strategies might be ineffective due to the permutations of neurons in neural networks [1,2], training an INR on them could still perform some form of smoothing.\", \"This is a bit out of scope for this work, but did the authors try to train a model from scratch with some smoothing objective on the weights? Did it improve the results?\", \"[1] Navon, Aviv, et al. \\\"Equivariant architectures for learning in deep weight spaces.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"[2] Kofinas, Miltiadis, et al. \\\"Graph neural networks for learning equivariant representations of neural networks.\\\" arXiv preprint arXiv:2403.12143 (2024).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your considerate review\", \"comment\": \"**Presentation**\\nWe appreciate the encouragement to provide a stronger motivation for our research and to highlight the appeal of the original NeRN method. 
In response, we have revised our introduction (lines 32-40), added a clearer explanation of the multi-objective loss function (lines 44-46), and removed ambiguous terms (lines 47-49).\\n\\n**Generalization to Transformers**\\nWhile our current work focuses on effectively performing NeRN's reparameterization on CNNs, we recognize the importance and value of extending NeRN to other architectures. Unfortunately, due to time constraints, we were unable to perform the extensive research needed to extend NeRN's application to transformers in this submission. However, we have carefully considered the challenges and requirements and will briefly discuss them here. \\n\\nGeneralizing NeRN to transformers would require significant modifications to its current setup, primarily in designing a suitable coordinate system and addressing the larger size and complexity of weight matrices in transformers. For example, the coordinate system would need to capture aspects such as layer index, head index, weight type (query, key, value), and block indices for submatrices (if weight matrices are decomposed into smaller submatrices). These additions would maintain NeRN's core principle of functional weight representation but would introduce new challenges during training due to the complex structure. \\n\\nAdditionally, although NeRN does not impose architecture-specific assumptions, it relies on INR, which typically assumes that input coordinates represent a smooth and continuous space and that the learned function exhibits some level of smoothness. However, in transformers, the coordinates are discrete, and the corresponding target weight lacks continuity, making convergence and accuracy more challenging. NeRN addresses this for CNNs by introducing permutation-based smoothness, which promotes kernel smoothness by reordering pre-trained weights without altering their values. 
However, in transformers, the increased size and complexity of both the coordinate system and weight matrices would require further innovations to simplify the prediction task and ensure smooth learning.\\n\\nBesides NeRN, there are other approaches that aim to predict transformer weights, such as diffusion-based weight prediction methods. These could potentially be adopted as the predictor network in place of NeRN, while our proposed iterative weight reconstruction could still be applied.\\n\\n**Smaller Issues**\\n* Adding more citations: We added a short discussion of semantic representations to the related works section, including references to the suggested literature.\\n\\n **Semantic representations of neural networks** encode meaningful, interpretable features aligned with human-understandable concepts (references). However, our implicit representations encode information in a distributed and flexible manner, capturing complex patterns and relationships.\\n \\n* Inception-like: We acknowledge the confusion and have changed the term to \\\"recursive\\\" for clarity.\\n\\n* Typos and naming (line 92, line 252, NeRN): We have corrected these typos and replaced 'baseline' with 'NeRN'.\\n\\n* Lines 264-266 are not very clear: We have revised lines 264-266 in the updated manuscript to improve clarity. \\n\\n* Clarity of section 3.3 and Figure 5: Both arrows in Figure 5 should indeed be red, as they represent the decoupled training process described in Section 3.3. Specifically, Table 3 shows results for a parameter-efficient predictor (CR<1) with logit distillation, while Table 4 shows results for a larger predictor (CR>1) using the same method. The blue arrow represents progressive reconstruction, which can further refine a decoupled model by targeting this high-performing model. To clarify, we have provided additional results in the following table. We hope this explanation and the added results clarify the use of colors in the figure. 
\\n\\n|Original Network|Ours from Table 4|Round 1 Hidden 680, CR>1| Round 2 Hidden 680, CR>1|\\n|-------------------|------------------------|-----------------------------|-----------------------------|\\n|71.37% |73.95% \\u00b1 0.09| **74.15% \\u00b1 0.06**|**74.28% \\u00b1 0.03**|\\n\\n* To evaluate whether progressive reconstruction enhances smaller NeRNs, we used a compact NeRN model with 70.84% accuracy (\\\"Ours\\\" in Table 2) as the target network. As shown in the results, applying a second round of progressive reconstruction (Hidden 680, CR>1) improved performance beyond the target accuracy (70.84%) and brought it closer to the original accuracy (71.37%). This suggests that smaller NeRNs can benefit from progressive reconstruction without directly targeting the original weights.\\n\\n| Original Network | Ours, Hidden 280 from Table 2 | Round 1 Hidden 680, CR>1 | Round 2 Hidden 680, CR>1 |\\n|-------------------|------------------------|-----------------------------|-----------------------------|\\n| 71.37% | 70.84% | **71.13% \\u00b1 0.01** | **71.25% \\u00b1 0.004** |\"}", "{\"comment\": \"(i) **Performance gain is minimal and ImageNet experiments**: Thank you for your additional feedback. We would like to further clarify the key objective of our proposed work, which is to enable effective reparameterization of deep models. While a variety of approaches have emerged, including recent work on using diffusion models to sample from a distribution of weights, we argue that INRs have the potential to enable a convenient and effective framework for model re-parameterization. However, realizing this potential requires a deeper understanding and improvement of NeRN optimization, which is the primary focus of our work.\\n\\nAlthough the ImageNet results in Table 2 do not indicate performance matching or outperformance, it is important to note that even recovering the original performance is a non-trivial task, as evidenced by the baseline NeRN results. 
During reparameterization on small-scale datasets, particularly in the overparameterization regime (CR>1), we observed intriguing behavior suggesting that post-hoc iterative refinement could improve the original network's performance. However, our core objective remains effective reparameterization to accurately recover the original network. For ImageNet, our approach shows a 3% drop in performance compared to an 8% drop with the baseline NeRN under significant compression (CR = 15%), highlighting the substantial improvements achieved by our method over the baseline.\\n\\n(ii) **Amenable to distillation from teacher models**: In Section 3.3, we make another interesting observation: our approach is not only effective at recovering a pre-trained model, but can also be leveraged to distill from better-performing teacher models. We include this experiment to emphasize that our re-parameterization is amenable to other post-hoc model refinement strategies. Note that distillation was only chosen as an example; it is possible to combine our approach with other strategies such as fine-tuning or performing arithmetic on task vectors from different fine-tuned models (e.g., Editing models with task arithmetic, ICLR 2023).\\n\\nRegarding the concern about fairness, all results, including those for the baseline NeRN and ours in Tables 3 and 4, were obtained using the same guidance from a high-performing teacher model (ResNet50) to ensure fair comparison. These results further indicate that the optimization of the baseline NeRN can be significantly improved through our proposed approach. If you still believe this experimental setup is unfair, we kindly ask for further clarification on why you consider it so.\"}", "{\"summary\": \"The authors study the fundamental trade-off between accuracy and parameter efficiency in neural network weight parameterization using predictor networks. 
They present a finding where the predicted model not only matches but also surpasses the original model\\u2019s performance through the reconstruction objective (MSE loss) alone. Experiments are done on CIFAR, STL and ImageNet.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. the topic is of interest and benefits the community\\n2. the proposed \\\"separation\\\" is flexible\", \"weaknesses\": [\"the \\\"low reconstruction error\\\" seems to be a bit arbitrary and I do not see a very good way to find it.\", \"The reasoning/intuition on why the proposed method even \\\"improves\\\" the performance is lacking\", \"see more in questions.\"], \"minor_issues\": \"1. typos near line 509-510\\n2. Fig 5 can be earlier?\", \"questions\": \"1. Fig. 1: \\\"While one expects that the reconstruction error must approach zero to recover the true performance\\\" How was the \\\"low error\\\" defined? empirically searched?\\n2. Does the method only work with CNNs? I thought no? Maybe it is only an empirical limitation, since no experiments were done. It is worth trying on transformers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your reply.\", \"comment\": \"Thanks for resolving my confusion with your experiment; I think it's an interesting piece of work.\"}", "{\"title\": \"Thanks to the authors for their response\", \"comment\": \"I thank the authors for addressing the problem setup issues and providing error bars for the results. However, some of my concerns remain. For instance, the authors point out the difference between in-training weight smoothing (like weight decay) and post-training weight smoothing (like their approach), but no comparisons were made, and none of my concerns related to this were addressed. Post-training smoothing can have advantages over in-training regularization, but none of these analyses were given. 
With respect to memory savings, I agree with the authors that training a small predictor network requires less memory than training the target model directly, but no significant advantages are given by using predictor networks in this regard since the target model still has to be used to get predictions after the smoothing.\\n\\nI also agree with several concerns from Reviewer ZS5i like the claim on accuracy improvement with almost negligible improvements in small datasets and the smoothness analysis. \\n\\nI think my main concern about the practicability of this approach still remains; the method achieves increased accuracy because of the smoothing of the weights due to the reconstruction, which is not surprising, and I think there are several other (easier) ways to increase accuracy, generalization, robustness, etc., by smoothing weights, which makes this approach overkill for what it achieves, in my opinion. I will keep my previous recommendation.\"}", "{\"summary\": \"The paper addresses the task of learning an implicit neural representation (INR) of a trained neural network\\u2019s weights. When the INR is smaller than the original model, it offers a potential approach to model compression. The authors conduct an in-depth analysis of both a naive baseline and the current state-of-the-art method (NeRN), yielding key insights: (i) increasing the INR size enhances the performance of the reconstructed model, (ii) iterative training of INRs (on the previous INR) can sometimes exceed the original model\\u2019s performance, and (iii) NeRN, despite having three objectives, is primarily driven by the reconstruction objective. These findings inform the proposed method, which separates the reconstruction and distillation phases. The approach introduces one or more reconstruction-only stages, followed by a distillation phase (focusing solely on logits, not features), enabling knowledge transfer from potentially stronger teachers. 
Extensive experiments validate the approach\\u2019s effectiveness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper addresses a potentially interesting task within the emerging field of weight-space learning, where neural networks are treated as data points to train other networks. While it may not yet compete with state-of-the-art quantization methods, the paper explores a promising direction that could inspire future advancements. The analysis and motivating experiments are comprehensive and intuitive, and the experiments are thoughtfully designed, demonstrating tangible improvements while simplifying the method relative to the baseline.\", \"weaknesses\": \"The paper\\u2019s primary weaknesses are in presentation, particularly in the introduction, where motivation and context are insufficiently developed, and in its focus on ResNets alone. Both issues are potentially addressable in the rebuttal, as detailed below.\\n\\n**Presentation**\\n\\nThe paper lacks a compelling motivation for learning implicit representations of neural networks. I believe the introduction should clearly explain why this is an interesting and valuable topic (I think this should be done even if the main reasons are purely academic and not commercial). Additionally, it presumes familiarity with NeRN, which may make it difficult for readers to follow without proper context. For instance, lines 44-46 discuss the contradictions in the multi-objective loss, yet the paper does not explain that multi-objective losses represent the current state-of-the-art or clarify what these objectives entail. Consequently, the value and relevance of the proposed method in decoupling objectives are unclear. Similarly, terms like \\u201ccompression efficiency\\u201d (line 49) are introduced without context, leaving readers uncertain about their meaning within this work. 
Defining the motivation and the specific context of the compression task would make these points much more accessible.\\n\\n**Generalizing to Other Architectures**\\n\\nWhile the paper briefly mentions that the method is only applicable to CNNs, it is unclear why this limitation exists. Are there underlying assumptions preventing the method from generalizing to other architectures like ViTs? If there are no such constraints, presenting results on a ViT model would benefit the paper. Although the work remains relevant if limited to CNNs, its scope and applicability would be reduced if it cannot generalize to architectures beyond ResNets.\\n\\n___\\n**Smaller issues**\\n\\nIn addition to the above primary weaknesses, below I describe a few additional issues and limitations, these are mostly smaller issues that do not carry a large weight in my decision but should nevertheless be addressed:\\n\\n1. While the paper focuses on implicit representations for neural networks, there is a growing body of research for learning semantic representations of neural networks and performing other tasks on weights of neural networks, the paper did not cite any of these works, which I think should be done. See [1-12] for a few such examples.\\n2. The term \\u201cinception like\\u201d is written a few times (e.g. line 44), what does this mean?\\n3. A small mismatch in notation, in Eq. 1, in the FMD definition you use $a^{l}$ while in line 92 you use $a^{\\\\ell}$.\\n4. In Fig. 1, maybe show by how much the performance is improved (in the zoomed in part).\\n5. Line 252 references Eq. 2, but the actual equation is unnumbered.\\n6. Lines 264-266 are not very clear, and if they are important enough to be bold they should probably be rewritten. It took me a few passes to understand them.\\n6. Just making sure, in 3.3, you first perform iterative refinement and then distill? The figure looks like they are simultaneously done and not sequentially.\\n7. In Fig. 
5, the 3.3 part, both the arrows are red, shouldn\\u2019t one be red and one blue? \\n9. I would replace the term \\u201cbaseline\\u201d in the tables with \\u201cNeRN\\u201d so that a reader can clearly understand what baseline you are using (and to give NeRN the credit it deserves).\\n\\n____\\n\\n[1] Predicting neural network accuracy from weights, 2020, Unterthiner et al.\\n\\n[2] Towards Scalable and Versatile Weight Space Learning, 2024, Schurholt et al.\\n\\n[3] Self-supervised representation learning on neural network weights for model characteristic prediction, 2021, Schurholt et al.\\n\\n[4] Hyper-representations as generative models: Sampling unseen neural network weights, 2022, Schurholt et al.\\n\\n[5] Learning Useful Representations of Recurrent Neural Network Weight Matrices, 2024, Herrmann et al.\\n\\n[6] Learning to learn with generative models of neural network checkpoints, 2022, Peebles et al.\\n\\n[7] Graph metanetworks for processing diverse neural architectures, 2023, Lim et al.\\n\\n[8] Equivariant deep weight space alignment, 2023, Navon et al.\\n\\n[9] Equivariant architectures for learning in deep weight spaces, 2023, Navon et al.\\n\\n[10] Graph neural networks for learning equivariant representations of neural networks, 2024, Kofinas et al.\\n\\n[11] Neural functional transformers, 2024, Zhou et al.\\n\\n[12] Permutation equivariant neural functionals, 2024, Zhou et al.\", \"questions\": \"As mentioned in the weaknesses section, I think the points for the rebuttal are:\\n1. Improve the motivation and context in the introduction.\\n2. Show an example of the method generalizing to ViTs (if indeed there are no underlying assumptions preventing the method from generalizing).\\n3. 
Adding the relevant citations and other small issues.\\n\\nAn additional question I had is whether the iterative refinement also works for smaller INRs (e.g., 280).\\n\\n___ \\n\\nIn summary, the paper addresses a less-explored aspect of weight-space learning, offering improvements over existing methods with a simpler approach. Since the motivation and presentation could need improving (and if possible, the generalization), I currently assign the paper a score of 5. I will consider increasing my score if the authors satisfactorily address my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"**Low reconstruction error**\\nWe understand that \\\"low reconstruction error\\\" may seem broad in general contexts. In our study, reconstruction error serves as an important analytical metric, as it is directly related to the performance of the reconstructed network. We added more context in Section 3.1 and Figure 1. For example, achieving zero reconstruction error would theoretically result in performance identical to that of the original network. While one might expect performance to consistently decrease as reconstruction error increases, we observed cases where the reconstructed network's performance exceeds the original network's performance. 
To clarify, we measured the reconstruction error and the corresponding performance averaged over 3 runs across seven different predictor hidden sizes, with each point on the graph representing the original performance and a specific hidden size from left to right: original, 750, 680, 510, 360, 320, 280, and 220.\\n\\n| Original Network | Hidden 750 | Hidden 680 | Hidden 510 | Hidden 360 | Hidden 320 | Hidden 280 | Hidden 220 |\\n|------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|\\n| 71.37% | 71.56% \\u00b1 0.05 | 71.61% \\u00b1 0.01 | 71.45% \\u00b1 0.07 | 67.48% \\u00b1 0.10 | 61.31% \\u00b1 0.45 | 49.55% \\u00b1 1.72 | 24.20% \\u00b1 1.56 |\\n| 0 | 0.00036 | 0.00068 | 0.00252 | 0.00771 | 0.00967 | 0.01177 | 0.015116 |\\n\\n**Reasoning/intuition is lacking**\\nTo explain the observed performance improvement, we introduce the $S_{ratio}$ metric for two key reasons: 1) it clearly distinguishes the behaviors of predicted and original weights, and 2) it supports the weight smoothing hypothesis, where progressive-reconstruction promotes weight smoothing, as evidenced by higher $S_{ratio}$ values (indicative of lower-frequency components), thereby leading to performance improvement. Figure 2 shows that Round 1 weights exhibit higher $S_{ratio}$, particularly in the later layers (i.e., closer to the decision layer). This trend continues across additional rounds of progressive-reconstruction (Figure 3 (b), up to Round 5). This suggests that progressive-reconstruction smooths weights by suppressing lower singular values. \\n\\nTo further validate this hypothesis, we conducted additional analyses to explicitly test the relationship between weight modulation and performance improvement. 1) Frequency modulation (Figure 7): Applying a low-pass filter to the weight matrix demonstrated its impact on performance. 
2) Singular value modulation (Figure 8): Scaling down less significant singular values showed their contribution to performance. 3) Singular values vs frequency (Figure 9). This analysis is particularly important as modulating singular values does not inherently guarantee the removal of high-frequency components. It shows that the weights after low-pass filtering exhibit higher $S_{ratio}$ values, suggesting a potential link between frequency and singular value-based modulations. We believe our analysis provides important reasoning and intuition behind the performance improvement of the proposed method. This aligns with findings from many prior works, further supporting the validity of our explanation.\\n\\n**Figure 5's position** We moved Figure 5 to the beginning of the section.\\n\\n**Extend beyond CNN-based architecture**\\nOur current work focuses on effectively performing NeRN's reparameterization on CNNs. We have carefully considered the challenges and requirements for extending NeRN to transformers. Generalizing NeRN to transformers would require significant modifications to its current setup, primarily in designing a suitable coordinate system and addressing the larger size and complexity of weight matrices in transformers. For example, the coordinate system would need to capture aspects such as layer index, head index, weight type (query, key, value), and block indices for submatrices (if decomposed weight matrices into smaller submatrices). These additions would maintain NeRN's core principle of functional weight representation but would introduce new challenges during training due to the complex structure. \\n\\nAdditionally, although NeRN does not impose architecture-specific assumptions, it relies on INR, which typically assumes that the input coordinate system represents a smooth and continuous space and that the learned function exhibits some level of smoothness. 
However, in transformers, the coordinates are discrete, and the corresponding target weight lacks continuity, making convergence and accuracy more challenging. NeRN addresses this for CNNs by introducing permutation-based smoothness, which promotes kernel smoothness by reordering pre-trained weights without altering their values. However, in transformers, the increased size and complexity of both the coordinate system and weight matrices would require further innovations to simplify the prediction task and ensure smooth learning.\"}", "{\"comment\": \"Thank you for your additional feedback. In response, we would like to highlight practical value of the proposed work. The key objective of this work is to enable an effective reparameterization of deep models. While a variety of approaches has emerged, including the recent work on using diffusion models to sample from a distribution of weights, we argue that INRs have the potential to enable a convenient and an effective framework for model re-parameterization. However, to realize this potential, there is a need to understand and improve the optimization of NeRNs, which is the focus of our work. Here, we summarize the benefits of this approach:\\n\\n(i) **Recover original model performance**: Through the new empirical insights made in this work, we are able to obtain effective INR re-parameterizations of neural networks -- significantly superior to baseline NeRNs in terms of both ID and OOD performance recovery.\\n\\n(ii) **Improve memory efficiency**: An important by-product of this re-parameterization is that the parameter count in the re-parameterized INR is significantly lower than the original network, thereby reducing the memory storage requirements.\\n\\n(iii) **Improve inference-time memory efficiency**: Unlike other existing re-parameterizations (e.g., using a diffusion model to sample from the distribution of weights), our approach adopts a coordinate system for different (layer, filter, weight) indices. 
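A minimal NumPy sketch of what such coordinate-indexed weight generation enables (the predictor below is an untrained toy MLP with made-up sizes, standing in for a trained NeRN; only the access pattern is the point): kernels are queried by (layer, filter, channel) coordinates, so a single conv layer can be materialized at a time and discarded before the next one is loaded.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 16                                        # per-coordinate embedding width (assumed)
table = rng.standard_normal((512, EMB))         # shared index-embedding table (assumed)
W1 = rng.standard_normal((3 * EMB, 64)) * 0.1   # toy predictor, hidden layer
W2 = rng.standard_normal((64, 9)) * 0.1         # toy predictor, output -> one 3x3 kernel

def predict_kernel(layer, out_c, in_c):
    # (layer, filter, channel) coordinate -> one 3x3 kernel
    x = np.concatenate([table[layer], table[out_c], table[in_c]])
    h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
    return (h @ W2).reshape(3, 3)

def reconstruct_layer(layer, out_ch, in_ch):
    # Materialize a single conv layer (out_ch, in_ch, 3, 3) on demand;
    # the rest of the network never has to reside in memory simultaneously.
    return np.stack([
        np.stack([predict_kernel(layer, o, i) for i in range(in_ch)])
        for o in range(out_ch)
    ])

layer0 = reconstruct_layer(0, out_ch=8, in_ch=4)
```

With a trained predictor, a forward pass would reconstruct layer 0, apply it, free it, then reconstruct layer 1, and so on.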
Consequently, our approach can improve inference-time memory efficiency in two different ways: (a) the entire network need not be loaded into on-device memory in one shot -- instead, only specific layers can be reconstructed at a given time (e.g., load one layer at a time) and forward passes can be done; (b) pruning is another common strategy used to improve the inference efficiency of models -- our re-parameterization will allow us to load only the relevant filters (identified by pruning the original network during the training phase itself) and improve memory efficiency.\\n\\n(iv) **Enable post-hoc weight smoothing**: We agree with the reviewer\\u2019s assessment that this is not the only approach to achieve weight smoothing. However, we want to emphasize that weight smoothing is typically performed during model training. In contrast, our approach enables post-hoc smoothing of model weights through the INR-based re-parameterization. To this end, we make an interesting observation that progressive training of the INR model with only the reconstruction objective achieves the desired smoothing.\\n\\n**Performance gain is minimal and ImageNet experiment**: Although the ImageNet results in Table 2 do not indicate performance matching or outperformance, it is important to note that even recovering the original performance is a non-trivial task, as evidenced by the baseline NeRN results. During reparameterization on small-scale datasets, particularly in the overparameterization regime (CR>1), we observed intriguing behavior suggesting that post-hoc iterative refinement could improve the original network's performance. However, our core objective remains effective reparameterization to accurately recover the original network. 
For ImageNet, our approach shows a 3% drop in performance compared to an 8% drop with the baseline NeRN under significant compression (CR = 15%), highlighting the substantial improvements achieved by our method over the baseline.\\n \\nWe sincerely appreciate each reviewer\\u2019s valuable feedback on our work. In the final version, we will aim to include results for: (1) the base model trained with weight decay and (2) our re-parameterization applied to both models.\"}", "{\"summary\": \"In this work, the authors propose using predictor networks to achieve increased accuracy in two ways. The first one is to train the predictor network iteratively using only a reconstruction loss. The increased accuracy is a product of the weight smoothing caused by the reconstruction loss. In the second part, the authors propose to detach the reconstruction loss and distillation loss (as used in previous works) and do it sequentially. They argue that the distillation loss gains are limited to the reconstruction loss when used simultaneously.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper is mostly well-written and has some interesting ideas. It also provides a concise and clear overview of the problem, the literature review is comprehensive, and the experiments are thorough. Finally, the problems that this work tries to address, like model compression and improved representation learning, are relevant to the community.\", \"weaknesses\": \"I have major concerns related to the technical contributions and the practicality of the proposed approach. 
The method achieves increased accuracy because of the smoothing of the weights due to the reconstruction (or iterative reconstruction), which is not surprising, and I think there are several other (easier) ways to increase accuracy, generalization, robustness, etc., by smoothing weights, which makes me believe this approach would not be very practical (in the paper there are no comparisons to these easier alternatives). Another reason I believe this approach is not practical is that reducing model storage is not as big of a concern as reducing the memory needed for the model during training or deployment.\\n\\nFor instance, let's assume we have an optimally trained model with weight smoothing regularization. For the proposed approach to achieve similar performance to the original model, it will require two networks, i.e., the original and the predictor (e.g., ~40% of the DoF of the original); then we need to have several rounds of reconstruction where the predictor learns to predict the original weights, and then we need a training stage using only the distillation loss. The only real gain will be the reduced memory for model storage (which has to be unwrapped to a larger model to produce predictions), which currently is not a real concern, at least in the applications described in the paper.\\n\\nDespite this work having some interesting ideas, in its current version, I don't see any major benefits in using it. Please see the questions below for more specifics.\", \"questions\": [\"The problem setup in this paper is not clear; is the original network with parameters $W$ pretrained for the target task? How is the dataset split used to train $W$ and evaluate, for instance, the accuracy vs. reconstruction error results? Which task are these models evaluated on, and what models are used? For instance, what is the size of the predictor network in the experiments in Sec. 3.1? 
From some of the figures (e.g., figure titles or legends) later in the paper I could see some of these details but they should be summarized before describing the results in Sec. 3.\", \"In Figure 1 (left) the authors show an expected behavior of the tradeoff between accuracy and reconstruction error; what is this figure based on? why is the expected reduction linear? why is the expected accuracy at a reconstruction error of 0.015 around 20%? I assume the authors used the observed results to build the expected plot, but then again, why the linear behavior instead of the approximate negative quadratic observed in the right plot? Additionally, could the slightly increased accuracy be a product of the variance in the results? error bars and axis labels of the zoomed-in crop of the right plot would be helpful.\", \"Is the goal of the reconstruction loss to learn to predict the parameters of a network previously trained? If so, why is the accuracy higher with a non-zero reconstruction error? Can we then assume that the \\\"original\\\" model was not optimally trained? For instance, if smoothing the weights of the network increases its accuracy on a given task, should not the network be trained with a different regularization? e.g. 
a higher weight decay, which would also smooth weights and suppress high-frequency components.\", \"How does the proposed iterative smoothing using predictor networks compare to other weights smoothing techniques like training regularizations or smoothing constraints (e.g., weight decay, dropout, etc.), and the mentioned methods in lines 157-162 (e.g., NeRN with regularization-based smoothness)?\", \"It has been widely studied that weight smoothing increases performance, generalization, robustness to noise, etc., and many techniques have been proposed to achieve this, so I don't think it is that surprising of a find, and getting an increased accuracy by smoothing parameters via a predictor network seems overkill to me.\", \"It would be helpful to have results comparing the proposed method with distillation and using distillation to train a smaller network directly (i.e., instead of training a predictor network for a larger model). It seems to me that the advantages of using a predictor network are not that great if the model has to be unwrapped to produce the predictions. Currently, GPU memory is a greater issue than model storage.\"], \"minor_comments\": [\"In lines 214-215 the notation can be confusing, i.e., predictor $P$ with $Q$ learnable parameters and the original network $O$ with $P$ learnable parameters.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study explores the trade-off between accuracy and parameter efficiency in neural networks using predictor networks. It reveals that the predicted model can exceed the original's performance solely through reconstruction objectives, with improvements accumulating over successive reconstructions. 
The research also proposes a new training scheme that separates reconstruction from auxiliary objectives, leading to significant enhancements in both model accuracy and predictor network efficiency compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written with a clear and reasonable motivation. The problem setup and comparison with other methods are well articulated.\\n2. Extensive validation is conducted on multiple datasets, showing consistent improvements. The figures and tables are presented clearly and logically.\\n3. The proposed two-stage strategy not only compensates for the shortcomings of the baseline but also achieves better performance.\", \"weaknesses\": \"1. The method presented in the paper targets a trade-off between accuracy and compression rate. Can the advantages gained over the baseline pre-trained weights, as demonstrated in Table 2, generalize to a broader range of downstream tasks?\\n2. Will the progressive enhancement of the teacher network lead to corresponding progressive improvements in performance?\\n3. Can these advantages extend beyond CNN-based architectures, such as to the pre-training of Vision Transformers (ViT) or hybrid architectures combining ViT and CNN?\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**PART 2/2**\\n\\n**Intuition in Smoothness Analysis**\\nThank you for the suggestion to analyze generalization explicitly. We evaluated the generalization gap, defined as the absolute difference between train loss and test loss (measured by cross-entropy loss). As shown in the table, our method demonstrates a generally decreasing trend in the generalization gap across progressive rounds. 
For example, starting from the original network with a gap of $1.28485$, the gap decreases to $1.24161$ by Round 5. \\n\\n| Model | Generalization Gap | Train Accuracy | Test Accuracy | Train Loss | Test Loss |\\n|---------------|--------------------|----------------|---------------|------------|-----------|\\n| Original Net | 1.28485 | 99.06| 71.37 | 0.05296 | 1.33782 |\\n| Round 1 | 1.26714 | 99.27 | 71.65 | 0.04570 | 1.31286 |\\n| Round 2 | 1.25425 | 99.16 | 71.78 | 0.05183 | 1.30609 |\\n| Round 3 | 1.25418 | 99.30 | 71.84 | 0.04831 | 1.30250 |\\n| Round 4 | 1.26234 | 99.27 | 71.95 | 0.04878 | 1.31113 |\\n| Round 5 | **1.24161** | 99.20 | 71.97 | 0.04998 | 1.29159 |\\n\\nWe can further verify the generalization of our predicted weights using downstream tasks, i.e., transfer learning to a new dataset, CINIC-10. We compared our predicted weights with original weights under two scenarios: 1) Fine-tuning all layers: adapts the model completely to the new task, 2) Fine-tuning only the linear layer: highlights the generalizability of convolutional weights. The results show that our predicted weights (Hidden 360, CR < 1) consistently outperform the baseline pre-trained weights in both scenarios, achieving higher accuracy. Additionally, using predicted weights from progressive reconstruction (Hidden 680, CR > 1) further improves generalization slightly. 
Overall, these results confirm that the proposed method generalizes well to downstream tasks.\\n\\n| S \\u2192 T | Tuning Layers | From Scratch | Original Weights | Predicted Weights Hidden 360, CR<1 | Predicted Weights Hidden 680, CR>1 |\\n|---------------|-----------------|---------------|-------------------------|------------------------------------|------------------------------------|\\n| CIFAR100 \\u2192 CINIC10 | All Layers | 76.88% | 80.22% \\u00b1 0.05 | **80.32% \\u00b1 0.04** | **80.35% \\u00b1 0.03** |\\n| CIFAR100 \\u2192 CINIC10 | Linear Layer | 76.88% | 58.95% \\u00b1 0.03 | **58.98% \\u00b1 0.04** | **59.20% \\u00b1 0.04** | \\n\\n**Figure 1, L64, L251, L264-266, bolds in Table 1**\\nThank you for your suggestion and for pointing out the typos. We will revise our manuscript accordingly. Regarding lines 264-266, as our predictor focuses on predicting the weights in the convolutional layers, we did not use the term \\\"decision layer\\\" directly. Instead, we referred to a later layer closer to the decision layer, as it contributes more significantly to the decision-making process than early layers. We will update lines 260-266 as follows:\\n\\nInterestingly, we find that this two-stage optimization leads to significant differences in the early layers of the network, while still matching the later layers. This is intuitive, as it is well known that the decision rules typically emerge in the later layers of a deep network. While the larger differences in the early layers may seemingly compromise reconstruction fidelity, this separate training strategy facilitates more effective integration of distillation into the network parameterization, as evidenced by significant improvements (red bars in Figure 4(c)).\\n\\n**Reconstruction cost vs. inference cost**\\nAs expected, the reconstruction cost will be significantly higher than the single inference cost. 
However, the intention is not to do the reconstruction on the fly for a given instance but rather reconstruct first before doing a large number of inferences, therefore, amortizing the cost of the reconstruction. More importantly, our primary aim is to investigate the trade-off and improve upon the state-of-the-art weight prediction methods, e.g., NeRN, which itself is an early work of a rather new field (weight prediction), so the goal is more oriented toward academic exploration rather than practical deployment.\\n\\n**Other layer types**\\nThank you for your insightful comments and references. While fully connected layers can indeed be interpreted as 1\\u00d71 convolutional layers, making them compatible with the predictor network, we have not explored this implementation in our study for the sake of simplicity. We acknowledge that applying smoothing strategies or training an implicit neural representation on fully connected layers could offer interesting insights.\\n\\n**Train from scratch with weight smoothing**\\nTo simplify the task, NeRN initially explored a regularization-based approach to promote weight smoothness by explicitly adding a loss term during training of the original network. While this approach effectively encourages smoothness in the network, it often leads to slightly inferior performance on the original task. Furthermore, increasing the regularization factor significantly degrades the original network's performance.\"}", "{\"comment\": \"I appreciate the authors\\u2019 efforts in addressing my comments. The revisions to the introduction and the updates made in response to other reviewers\\u2019 feedback have improved the overall presentation of the paper.\\n\\nHowever, I remain unconvinced as to why generalization to ViTs was not feasible during the rebuttal phase. 
As previously mentioned, the absence of ViT-related experiments limits the scope of the findings.\\n\\nAfter reviewing the other comments, I agree with 7EjM and ZS5i that the method primarily smooths the weights, which is not novel. Nevertheless, the paper\\u2019s key observation on iterative refinement is intriguing and has the potential to benefit the weight-space learning community. I am therefore raising my score to a 6. However, I strongly encourage the authors to include experiments on ViTs or other architectural types in the camera-ready version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for recognizing the improvements in our paper's presentation. We sincerely appreciate your constructive feedback and your decision to raise the score to acceptance. As you suggested, we will prioritize extending our work to transformers and aim to include an example in the final version.\"}", "{\"comment\": \"Thank you for your response to my comments. However, I believe my concerns remain:\\n\\n(i) The revised introduction and the abstract still highlight the performance gain from weight reconstruction, although this gain is minimal and barely exists. This is also seen in the added results of the rebuttal, where the accuracy increases by less than 1% on CIFAR10 after multiple iterations. This is also seen in the generalization experiments where the gain is less than 0.3% on transfer learning. Moreover, the rebuttal did not address the ImageNet case which displays an opposite trend (on a larger and more challenging dataset). To summarize, I believe the accuracy gain claim is not well supported, and the writing should reflect that.\\n\\n(ii) The higher performing teacher still seems unfair. Distilling the teacher still seems like a better option.\\n\\nI choose to keep my score.\"}" ] }
0mdUV1pLGP
Hawkes process revisited: balancing interpretability and flexibility with contextualized event embeddings and a neural impact kernel
[ "Yuankang Zhao", "Matthew M. Engelhard" ]
The Hawkes process (HP) is commonly used to model event sequences with self-reinforcing dynamics, including electronic health records, stock trades, and social media interactions. Traditional HPs capture self-reinforcement via parametric impact functions that can be inspected to understand how each event modulates the intensity of others. Neural network-based HPs offer greater flexibility, resulting in improved fit and prediction performance, but at the cost of interpretability, which can be critical in medicine and other high-stakes settings. In this work, we aim to understand and improve upon this tradeoff. We propose a novel HP formulation in which impact functions are modeled by defining a flexible impact kernel, instantiated as a neural network, in event embedding space, which allows us to model large-scale event sequences with many event types. This approach is more flexible than traditional HPs, because we do not assume a particular parametric form for the impact functions, yet more interpretable than other neural network approaches, because self-reinforcing dynamics are still entirely captured by the impact kernel, which can be inspected. If needed, our approach allows us to trade interpretability for flexibility by contextualizing the event embeddings with transformer encoder layers. Results show that our method accurately recovers impact functions in simulations and achieves competitive performance on real-world datasets even without transformer layers. This suggests that our flexible impact kernel is often sufficient to capture self-reinforcing dynamics effectively, implying that interpretability can be maintained without loss of performance.
[ "Event sequence", "Hawkes Process", "Interpretability", "Embedding Space" ]
https://openreview.net/pdf?id=0mdUV1pLGP
https://openreview.net/forum?id=0mdUV1pLGP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wfMOwKskPO", "v0HE5B9Fe0", "o7ixaaeNLh", "Tn8PgQCOGp", "5Kd5GMK1Nq" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730447069744, 1730354775415, 1731947135366, 1730707259882, 1730536640463 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8763/Reviewer_bWrm" ], [ "ICLR.cc/2025/Conference/Submission8763/Reviewer_NN4h" ], [ "ICLR.cc/2025/Conference/Submission8763/Authors" ], [ "ICLR.cc/2025/Conference/Submission8763/Reviewer_rZ9z" ], [ "ICLR.cc/2025/Conference/Submission8763/Reviewer_LgmM" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on understanding and improving the trade-off between the flexibility and interpretability of the Hawkes process. The authors replace the Hawkes process's parametric kernel functions with a neural network-based impact kernel within an event embedding space, thereby enhancing the model\\u2019s flexibility. This neural network-based impact kernel retains some properties of the Hawkes process, such as positive intensity and additive influence, thereby enabling good interpretability. Additionally, to manage the balance between model complexity and interpretability, the authors introduce optional transformer encoder layers to contextualize event embeddings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is clear, and the issue of enhancing the interpretability of the neural Hawkes process is of considerable significance.\\n\\n2. This paper proposes three designs for neural kernel functions, each balancing model flexibility and interpretability to different extents.\\n\\n3. The article is well-structured and easy to follow\", \"weaknesses\": \"1. There are generally three metrics for evaluating point process models: log-likelihood, accuracy (acc), and root mean square error (RMSE) [1]. 
Among these, log-likelihood measures the model\\u2019s goodness-of-fit, while accuracy and RMSE measure the model\\u2019s event prediction performance. This paper only uses log-likelihood. Furthermore, in terms of log-likelihood, the proposed method does not demonstrate a significant advantage over other baseline models.\\n\\n2. Equation 8 seems to imply an assumption that the influence between events is always a positive excitation (because the softplus function is applied to all components, including W , K(t), and \\u03bc_k). What if the influence of events is \\\"inhibition\\\" rather than \\\"excitation\\\"?\\n\\n3. The neural kernel function seems capable of modeling only the influence of one event on another, but in some scenarios, multiple events occurring together may be required to trigger a subsequent event, as in the case of synergy [2].\", \"reference\": \"[1] EASYTPP: TOWARDS OPEN BENCHMARKING TEMPORAL POINT PROCESSES (ICLR'24)\\n\\n[2] CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods (ICML'20)\", \"questions\": \"Please refer to Strengths and Weaknesses for more details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper reformulates the classic Hawkes process by incorporating neural networks to enhance its expressive power. Specifically, the authors propose three types of neural Hawkes processes: one based on one-hot vectors, one based on event representations, and one utilizing latent vector representations. 
Through extensive experiments, the authors demonstrate that the event representation-based neural Hawkes process generally achieves strong predictive performance while maintaining excellent interpretability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper provides a detailed analysis of the interpretability of the proposed model, as discussed in Sections 4.5 and 4.6.\", \"The writing is clear, making the paper easy to follow, and the results straightforward to reproduce.\"], \"weaknesses\": \"The paper lacks significant innovation. The authors should refer to Equation 11 in reference [1] and Equations 3, 4, and 5 in reference [2]. The approach in this paper closely mirrors these works, specifically the use of neural networks to parameterize the impact kernel of the Hawkes process.\n\n[1] Song Y, Lee D, Meng R, et al. Decoupled Marked Temporal Point Process using Neural Ordinary Differential Equations. In The Twelfth International Conference on Learning Representations.\n[2] Zhou Z, Yu R. Automatic Integration for Fast and Interpretable Neural Point Processes. In Learning for Dynamics and Control Conference. PMLR, 2023: 573-585.\", \"questions\": [\"In line 215, the authors state that the computational complexity of the Monte Carlo method is $O(L^2NK)$, while the computational complexity of the numerical method is $O(LNK)$. Is there an error here? Based on my understanding, the computational complexity of the numerical method should be $O(L^2K)$.\", \"In line 133, the authors claim that SAHP does not explicitly model decaying temporal effects. However, I would argue that their model also does not capture decaying temporal effects. Specifically, in line 271, the authors cannot ensure that the output $K$ decreases as $\\Delta t$ increases.\", \"How were the results in Figure 3c obtained? 
From my understanding, the \\\"dimension\\\" in Figure 3c represents \\\"topics\\\" (which includes multiple event types, as described in Table 3), whereas the \\\"dimension\\\" in Figures 3a and 3b pertains to \\\"event types.\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"Neural network-based HPs offer greater flexibility and improved performance in modeling event sequences with self-reinforcing dynamics, but at the cost of interpretability. This paper proposes to address this challenge by leveraging a neural impact kernel in event embedding space, which allows to capture complex event dependencies without assuming specific parametric forms, while still retaining the core interpretability of traditional Hawkes processes. Real data experiments are conducted to demonstrate the competitive performance with existing models while maintaining interpretability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(1) Introduce a generalized Hawkes process where impact functions are defined via a flexible, neural network-based impact kernel within an event embedding space.\\n\\n(2) The proposed method is flexible to incorporate transformer encoder layers to contextualize event embeddings based on the historical sequence of events, which can explicitly manage the balance between interpretability and model complexity.\\n\\n(3) The authors show that the transformer encoder layers are often unnecessary to achieve state-of-the-art performance and demonstrates the competitive performance of proposed method with existing models while maintaining interpretability with real data experiments\", \"weaknesses\": \"(1) The core idea is simple, which introduces a neural network-based impact kernel 
within an event embedding space to improve interpretability while keeping competitive performance. It would be better to discuss the effects of the impact kernel on the modeling performance in detail, and also illustrate how to choose or design the appropriate kernels in applications for better balance between interpretability and model complexity.\n\n(2) Generally, increased model complexity may lead to higher model likelihood value. So, it's not adequate to only compare the likelihood between the proposed method and other models. It would be necessary to also compare the out-of-sample metrics such as out-of-sample prediction performance and so on, for all the related comparisons between the proposed method including the variant with transformer encoder layer and existing methods. \n\n(3) The authors state that \\\"Given the large size of many of these datasets, we believe it is unlikely that this is the result of insufficient data, and more likely that ENHP is already sufficiently flexible to capture the underlying data distribution\\\" in line 395-397 of page 8. The statement is not adequate or convincing, and it's better to add some necessary experiments based on simulated data for illustration.\", \"questions\": \"See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"-\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel approach for modeling Hawkes processes (HPs), where a deep neural network is used to model the influence function, making it not only more flexible than traditional HPs but also enhancing model interpretability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a comprehensive overview and classification of current models for event sequence prediction, covering traditional HPs, RNN-based HPs, attention-based HPs, and so on.\n2. 
This paper is easy to follow.\", \"weaknesses\": \"1. This paper is an incremental work of TPPs. There are many related works investigating the interpretability of TPPs. Adopting neural networks to model impact kernels is also quite common. The core contribution of model novelty is limited.\n2. The assumption that the influence between events is always positive is too strong, and many real-world scenarios do not fit this assumption, which limits the model\u2019s flexibility. \n3. The experimental improvement is not evident. Compared with current methods, the improvement of this work is marginal.\", \"questions\": \"See above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0mJZplhexS
Speeding Up Image Classifiers with Little Companions
[ "Yang Liu", "Kowshik Thopalli", "Jayaraman J. Thiagarajan" ]
Scaling up neural networks has been a key recipe to the success of large language and vision models. However, in practice, up-scaled models can be disproportionately costly in terms of computations, providing only marginal improvements in performance; for example, EfficientViT-L3-384 achieves <2% improvement on ImageNet-1K accuracy over the base L1-224 model, while requiring 14× more multiply–accumulate operations (MACs). In this paper, we investigate scaling properties of popular families of neural networks for image classification, and find that scaled-up models mostly help with “difficult” samples. Decomposing the samples by difficulty, we develop an embarrassingly simple model-agnostic two-pass Little-Big algorithm that first uses a light-weight “little” model to make predictions of all samples, and only passes the difficult ones for the “big” model to solve. Good little companions achieve drastic MACs reduction for a wide variety of model families and scales. Without loss of accuracy or modification of existing models, our Little-Big models achieve MACs reductions of 76% for EfficientViT-L3-384, 81% for EfficientNet-B7-600, 71% for DeiT3-L-384 on ImageNet-1K. Little-Big also speeds up the InternImage-G-512 model by 62% while achieving 90% ImageNet-1K top-1 accuracy, serving both as a strong baseline and as a simple practical method for large model compression.
[ "model compression", "computer vision", "efficiency" ]
https://openreview.net/pdf?id=0mJZplhexS
https://openreview.net/forum?id=0mJZplhexS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTn6enbc5Q", "xY3QaZZ8ph", "v6pjStFjBl", "um8fTXDQfJ", "tLqMBK7uPk", "lvsZYGLqMu", "g6r1hqpuIF", "fcamFPMkKV", "fb4V4VCGVS", "bqXuwvss2U", "U4PWfI5Ei2", "Tf4fZo2uV0", "PgXea6yBps", "FgBr5y3MIU", "FfWHu0kD4I", "DQtPDKjSSQ", "AqjMB5cnUB", "6fYaW9ffHt", "6IdPPhcgOT", "4CFu0iXMUy", "2kAJ5mhk6F" ], "note_type": [ "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732503269896, 1732316600004, 1733034931080, 1733034969668, 1732317625594, 1733034558381, 1732903908378, 1732904000577, 1732528644881, 1732317390553, 1732316753865, 1730721148643, 1732904069064, 1733026145161, 1729273044523, 1732316731895, 1733020204425, 1732317567393, 1730478782721, 1729745570141, 1732316271596 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_j8uk" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_ekkS" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_kjCS" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_ekkS" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_ekkS" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" 
], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_rMyQ" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_rMyQ" ], [ "ICLR.cc/2025/Conference/Submission11907/Reviewer_j8uk" ], [ "ICLR.cc/2025/Conference/Submission11907/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I appreciate your efforts to improve the paper. However, I must maintain my original assessment for two reasons. First, the additional experiments, while valuable, are still limited to closely related computer vision tasks - given your method's simplicity and wide applicability, demonstrating its utility across more diverse domains would have been more convincing. Second, the concern about novelty remains unaddressed. The response attempts to distance itself from related approaches rather than constructively analyzing and positioning the work within the existing literature, which limits the scientific impact of this paper.\"}", "{\"title\": \"Conducted experiments on all the settings suggested.\", \"comment\": \"We thank you for your review.\n\n**Additional Experiments** We have now included additional experiments on all the tasks you suggested including - binary classification, multi-label classification, Zero-shot classification and semantic segmentation. Please refer to our summary response and Section A.1 in the updated paper. \n\nIn summary, across all the tasks, Little-Big consistently achieves significant MAC savings while maintaining comparable performance compared to the Big model.\n\n\n**Accuracy vs Latency and Online Classification**\n\nThank you for raising this important point.\n\nIn the table below, we provide a comparison of change in the top-1 accuracy versus latency in a memory-constrained setup. In this scenario, we assume that both models cannot simultaneously fit in memory, requiring the Little model to be removed when the Big model is loaded. 
Even under these online classification constraints, Little-Big achieves high throughput and low latency while preserving the performance of the Big model.\\n\\nIt is worth noting that the latency and throughput numbers would further improve if the hardware had sufficient memory to fit both models simultaneously.\\n\\nFor more details, please refer to Section A.4 in the Appendix, titled \\\"Benchmarking Throughput and Latency.\\\"\\n\\n\\n\\n| Big Model | Little Model | \\u0394 Acc | \\u0394 GMACs | Throughput (samples/s) | \\u0394 Throughput | Latency (ms) | \\u0394 Latency |\\n|-----------------------|--------------|---------|---------|-------------------------|--------------|--------------|-----------|\\n| EfficientNet-B7-600 | None | -- | -- | 35.5 | -- | 28.2 | -- |\\n| | B0-224 | +0.01 | -25% | 54.0 | +52% | 18.5 | -34% |\\n| | B1-240 | +0.01 | -13% | 39.9 | +12% | 25.1 | -11% |\\n| | B2-288 | +0.00 | -59% | 76.6 | +116% | 13.1 | -54% |\\n| | B3-300 | +0.02 | -61% | 86.4 | +144% | 11.6 | -59% |\\n| | B4-380 | +0.01 | -81% | 155.2 | +338% | 6.4 | -77% |\\n| | B5-456 | +0.01 | -65% | 97.1 | +174% | 10.3 | -64% |\\n| | B6-528 | +0.02 | -47% | 62.4 | +76% | 16.0 | -43% |\\n\\n\\n\\n\\n\\n\\n\\n## **Novelty** \\n\\nAs the ML community continues to produce increasingly large models with massive parameter counts, efficiently deploying them has become a significant challenge. Current inference optimization strategies, such as quantization or distillation, often require training new, smaller models or performing operations that result in performance drops. Additionally, these techniques fail to fully leverage the extensive ecosystem of models that is readily available (e.g., the EfficientNet family or user-submitted models on platforms like Torch or HuggingFace Hub). \\n\\nIn this context, we believe that our study on speeding up large models using smaller models is an important direction of work. 
Though the proposed protocol is simple to implement, our work holds significant practical value since\n- our approach is completely **post-hoc**, requiring no re-training. \n- our approach is entirely **model- and architecture-agnostic**, allowing seamless integration with a variety of models.\n- ours is the first work to systematically study and benchmark its utility in improving inference efficiency across a variety of tasks and model architectures.\n\nBy leveraging the growing diversity of pre-existing models across frameworks, Little-Big enables users to mix and match architectures (e.g., pairing models from different families such as EfficientNet and ViTs, BERTs and T5s). This flexibility ensures that one can adopt state-of-the-art models without the burden of additional training or specialized pipelines, and obtain significant compute savings and latency reductions without sacrificing accuracy. We argue that the framework's simplicity, adaptability, and compatibility with existing models make it a highly practical solution for real-world use cases.\n\n\nWe sincerely hope the new experiments and our responses answer your questions and you can champion our paper.\"}", "{\"title\": \"We are withdrawing our submission.\", \"comment\": \"Hello all,\n\nWe sincerely thank all reviewers for their thoughtful feedback and for engaging with our work. During the review process, a reviewer brought to our attention a prior paper that bears significant relevance to our submission, which we regretfully overlooked during our literature review. This was a major oversight on our part.\n\nIn light of this, we have decided to withdraw our submission to reassess our contributions and re-submit in the future. 
\n\nWe deeply appreciate the time and effort each reviewer has invested in critiquing our work and we once again thank you for your valuable feedback.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank all reviewers for their thoughtful feedback and for engaging with our work. During the review process, a reviewer brought to our attention a prior paper that bears significant relevance to our submission, which we regretfully overlooked during our literature review. This was a major oversight on our part.\n\nIn light of this, we have decided to withdraw our submission to reassess our contributions and re-submit in the future.\n\nWe deeply appreciate the time and effort each reviewer has invested in critiquing our work and we once again thank you for your valuable feedback.\"}", "{\"title\": \"Summary Response - Additional experiments- Part 2\", \"comment\": \"3. **Semantic Segmentation**\n\n Next, we applied Little-Big to a more challenging pixel-level task -- semantic segmentation. For this experiment, we used the PyTorch DeepLabV3 (MobileNet backbone, 11M parameters) as the Little model and the FCN (ResNet-101 backbone, 54M parameters) as the Big model. The evaluation was conducted using a subset of COCO with 20 categories overlapping with the Pascal VOC dataset. In this setup, we estimate the sample-level confidence as the lowest confidence among all superpixels (superpixels are obtained using the SLIC algorithm), wherein the superpixel confidence is measured as the average of the maximum softmax probabilities over all pixels constituting the superpixel. 
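For concreteness, this superpixel-based confidence can be sketched as follows (a minimal illustration with hypothetical names, not the paper's implementation; for self-containment, a regular grid stands in for the SLIC superpixels):

```python
import numpy as np

def sample_confidence(probs: np.ndarray, segments: np.ndarray) -> float:
    """Sample-level confidence for segmentation, per the scheme above.

    probs:    (H, W, C) per-pixel softmax probabilities.
    segments: (H, W) integer superpixel labels (SLIC in the paper;
              any partition of the image works for this sketch).
    Returns the minimum, over superpixels, of the mean per-pixel
    max-softmax confidence inside each superpixel.
    """
    pixel_conf = probs.max(axis=-1)              # max softmax per pixel
    sp_conf = [pixel_conf[segments == s].mean()  # average within each superpixel
               for s in np.unique(segments)]
    return float(min(sp_conf))                   # hardest superpixel decides

def grid_segments(h: int, w: int, cell: int = 8) -> np.ndarray:
    """Stand-in for SLIC: label pixels by a cell-by-cell grid."""
    rows = np.arange(h)[:, None] // cell
    cols = np.arange(w)[None, :] // cell
    return rows * ((w + cell - 1) // cell) + cols
```

A sample would then be escalated to the Big model whenever this confidence falls below the chosen threshold T.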
\n\n The result below further demonstrates the benefits of our approach.\n\n\n| Little Model mIoU (%) | Big Model mIoU (%) | Little-Big mIoU (%) | MACs Reduction (%) |\n|:--------------------:|:----------------:|:------------------:|:-------------------:|\n| 60.3 | 63.7 | 63.0 | 39.2 |\n\n4. **NLP tasks** \n\n Finally, as requested by reviewers, we extended the Little-Big framework to NLP tasks, specifically focusing on text classification. We conducted experiments on the IMDB sentiment (binary) classification benchmark using DistilBERT (66M parameters) as the Little model and GPT-2 (124M parameters) as the Big model. The results, presented below, show that our approach achieves over 58% MACs reduction while maintaining the performance of the Big model. This demonstrates that Little-Big is not limited to vision tasks but is also effective in natural language processing domains. \n\n| Little Model Acc (%) | Big Model Acc (%) | Little-Big Acc (%) | MACs Reduction (%) |\n|:--------------------:|:----------------:|:------------------:|:-------------------:|\n| 92.8 | 93.5 | 93.51 | 58.75 |\"}
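Across all of these tasks the underlying protocol is the same confidence-thresholded two-pass routine; a rough sketch with hypothetical names (a simplification, not the paper's implementation) is:

```python
import numpy as np

def little_big_predict(x, little, big, threshold: float):
    """Two-pass Little-Big inference for one sample.

    little, big: callables returning a softmax probability vector.
    The sample is escalated to the Big model only when the Little
    model's max-softmax confidence falls below `threshold`.
    """
    p = little(x)
    if p.max() >= threshold:      # confident: keep the cheap prediction
        return int(np.argmax(p)), "little"
    q = big(x)                    # hard sample: pay for the Big model
    return int(np.argmax(q)), "big"
```

In a batched setting one would run the Little model on the whole batch first, gather the low-confidence indices, and invoke the Big model only on that subset.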
This thorough evaluation ensures that our method is rigorously tested across a wide variety of settings.\n\nTo enhance accessibility and demonstrate the scalability of our approach, we have created an interactive tool available at https://littlebigpaper.streamlit.app/. This platform allows users to select any big and little model from dropdown menus and view the detailed results for their chosen combination, including GMAC savings, accuracy comparisons, and more.\"}", "{\"title\": \"Response -- Novelty, Related Work and Experiments\", \"comment\": \"We sincerely thank you for engaging with our rebuttal. While we appreciate your perspective, we respectfully disagree with the characterization of our contributions and the scope of the additional experiments.\n\n**On Task Diversity**:\nOur focus has been on classification tasks, where we demonstrated the broad applicability of our approach across multiple computer vision tasks, including image classification, video classification, multi-label classification, and semantic segmentation, as well as its extension to the NLP domain. Each task requires a unique approach to estimating an aggregate sample-level confidence score. Importantly, we demonstrated how to construct such approaches for all these diverse tasks without making any assumptions about how models are trained or using multiple forward passes, which differentiates our approach.\n\nWhile we recognize the potential value of extending this work to more complex domains such as generative models, such an expansion is beyond the current scope and will form an important focus for our future work.\n\n**On Related Work and Novelty**:\nWe sincerely apologize if our responses appeared as an attempt to distance ourselves from the suggested related works. That was not our intention. 
On the contrary, we acknowledge the relevance of these works and have expanded the Related Work section (see Lines 169\\u2013178 in the revised paper) to include quantization methods, following the suggestion of other reviewers.\\n\\nFurthermore, even in the original submission, we explicitly mentioned relevant parallels, such as speculative decoding, in the discussion section. For example, we wrote:\\n\\u201cLittle-Big is a subtractive multi-pass algorithm that relies on a good decomposition of problems and solves each part with the least compute, not unlike Speculative Decoding in language modeling (Leviathan et al., 2023).\\u201d\\n\\nOur objective in the rebuttal was to acknowledge these relevant works while also drawing clear distinctions, which is the purpose of any robust related work section. We see this as an essential part of positioning our contribution within the literature rather than as distancing from prior efforts.\\n\\nWe remain open to addressing specific suggestions or concerns. Thank you once again.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"*Fair Comparisons With Baselines*\\n\\nYes, I see that DeiT accuracy has increased over time due to improved training recipes. I\\u2019m worried that your dominance over other pruning baselines is for the same reason. If these baselines had the benefit of an improved recipe, would they be improved as well?\\n\\nTo put it in your own words, as line 415-417 states, \\\"even with tricks that effectively retrained models, many pruning methods are not competitive... with modern baselines... which in essence are better trained ViTs\\\". This seems like an issue, since we don\\u2019t know if your method is better since we don\\u2019t know how these other methods perform with modern recipes.\\n\\n*Choice of Little Network*\\n\\nI\\u2019m aware that there\\u2019s no restriction. 
That\u2019s what makes me wonder **why you choose to show only specific combinations of big and little network**, rather than all of them.\n\nIn Table 2, EfficientNet-B2-288 uses EfficientViT as a little network, but most other EfficientNet variants do not. And EfficientNet-B2-288 is not used with EfficientNet-B1, but most other variants are. Unless there\u2019s consistency or an obvious pattern in the choices made, the results look cherry-picked. Can you please explain the motivation behind the choice of combinations shown?\"}", "{\"comment\": \"**Expanded Contextualization of Novelty**\n\nAs the ML community continues to produce increasingly large models with massive parameter counts, efficiently deploying them has become a significant challenge. Current inference optimization strategies, such as quantization or distillation, often require training new, smaller models or performing operations that result in performance drops. Additionally, these techniques fail to fully leverage the extensive ecosystem of models that is readily available (e.g., the EfficientNet family or user-submitted models on platforms like Torch or HuggingFace Hub). \n\nIn this context, we believe that our study on speeding up large models using smaller models is an important direction of work. Though the proposed protocol is simple to implement, our work holds significant practical value since\n- our approach is completely **post-hoc**, requiring no re-training. \n- our approach is entirely **model- and architecture-agnostic**, allowing seamless integration with a variety of models.\n- ours is the first work to systematically study and benchmark its utility in improving inference efficiency across a variety of tasks and model architectures.\n\nBy leveraging the growing diversity of pre-existing models across frameworks, Little-Big enables users to mix and match architectures (e.g., pairing models from different families such as EfficientNet and ViTs, BERTs and T5s). 
This flexibility ensures that one can adopt state-of-the-art models without the burden of additional training or specialized pipelines, and obtain significant compute savings and latency reductions without sacrificing accuracy. We argue that the framework's simplicity, adaptability, and compatibility with existing models make it a highly practical solution for real-world use cases.\n\n**Comparison to speculative decoding and cascaded systems**\n\nSpeculative decoding (SpD) has mainly been developed for autoregressive sequence-to-sequence generative models, where an output token is used as part of the input to the next inference step. With drafts from the smaller model, SpD decreases the large model's latency by better utilizing the parallel computing capacity of modern hardware like GPUs, but it actually increases the number of FLOPs. Little-Big, on the other hand, operates in the predictive modeling domain and provides a net FLOPs reduction together with latency and throughput improvements.\n\nWhile Little-Big indeed shares similarities with cascading strategies, those methods rely on task-specific architectures requiring training or fine-tuning at each stage for sequential re-ranking, whereas Little-Big is a general protocol applicable across domains (e.g., vision, NLP, multimodal tasks) without task-specific modifications.\n\n**Additional Experiments**\n\nWe thank you for this question. We want to first point out that in the main paper, we go beyond the ImageNet classification task and also consider the video-classification task (lines 485-497). Furthermore, we have now extended and studied Little-Big in four additional tasks. 
We refer the reviewer to our summary response, where we summarize our findings on extending our approach to various other tasks, including text classification, semantic segmentation, multi-label classification, and zero-shot classification, on a variety of benchmarks.\n\n\nWe hope our responses answer your questions and that you can recommend acceptance.\"}", "{\"title\": \"Part 2 of our response\", \"comment\": \"**Definition of w and l**\nWe apologize for the confusion. While the definitions of \\( w \\) and \\( l \\) were provided in the caption of Table 1 in the main paper, we have now added a clarifying statement in the revised version after equation 2 to explicitly define \\( w \\) as the **width** and \\( l \\) as the **depth** of the model, respectively.\n\n\n**Hardness and confidence**\n\n\nThank you for this question and apologies for the confusion. In this paper, we correlate hardness with low confidence, i.e., the lower the confidence of the Little model on a sample, the harder that sample is. We have corrected Line 199 as follows: \"This allows us to approximate a hardness axis, where harder samples correspond to predictions with lower confidence.\" We have also made this change in the revised paper.\"}", "{\"summary\": \"The paper proposes a method named Little-Big to accelerate image classification with neural networks. Little-Big uses a light-weight model to quickly classify all of the samples and selects the \"hard\" samples whose confidence falls below the threshold. Then, it uses a large model to update the prediction for each hard sample. Little-Big can significantly reduce the inference cost and latency for many advanced large classification models without sacrificing accuracy. 
The authors provide many experiments with different pairs of large and small models to validate the effectiveness of Little-Big.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) The motivation and method of Little-Big are very simple and straightforward.\n\n(2) It seems that Little-Big is very easy to implement. In addition, Little-Big is model-agnostic and can be applied to models with different scales and architectures.\n\n(3) Little-Big can accelerate a pre-trained model without introducing additional training cost.\", \"weaknesses\": \"(1) Lack of novelty. As the authors say, Little-Big is an embarrassingly simple method, which adopts a large model and a light-weight model for image classification. It's the major advantage but also the major disadvantage of Little-Big. Many previous works share a similar motivation with Little-Big, using different networks for acceleration, such as early exiting and speculative decoding, as mentioned in the paper. However, these works mostly include specific and delicate designs. I understand that Little-Big is very simple, but I don't think it's novel.\n\n(2) The proposed method only focuses on classification tasks. While the authors provide an example of how to extend it to video classification, it's hard to directly apply the method to other popular tasks (e.g., object detection and segmentation), which limits its use.\n\n(3) The authors could include more classification tasks to further prove the generalization ability of Little-Big, such as multi-label classification and binary classification.\", \"questions\": \"(1) The authors could also plot a figure for top-1 accuracy vs. latency.\n\n(2) How long does it take to load and unload models compared with the inference latency for each batch? 
I'm wondering whether the method can be used for online inference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response - Comprehensive Evaluation\", \"comment\": \"We thank you for the feedback.\n\n**Exhaustive Evaluation of All Possible Model Pairings** In our original submission, we showcased only a subset of combinations in the interest of space, choosing representative examples. However, we understand the concern regarding the consistency and completeness of the choices presented.\n\nTo address the concerns raised, we have now evaluated all possible little-big pairs among the 63 models, resulting in a comprehensive evaluation of 1,953 combinations (63 choose 2). These results are publicly available and can be explored interactively at https://littlebigpaper.streamlit.app/. This platform allows users to select any big and little model from dropdown menus and view the detailed results for their chosen combination.\n\nWe answer the specific question about the use of EfficientViT as the Little model for EfficientNet variants other than EfficientNet-B2-288 with our new exhaustive list of results. **We observe that, with no loss of accuracy, EfficientViT speeds up EfficientNet-B3 by 31.35%, EfficientNet-B4 by 37.8%, EfficientNet-B5 by 29%, EfficientNet-B6 by 34.346%, and EfficientNet-B7 by 36.3%.**\n\nThrough these results, we want to emphasize that our intent with the initially presented combinations was purely demonstrative and not to cherry-pick results. The expanded evaluation includes all 1,953 combinations, even those where lossless speedup was not possible, which we explicitly indicate in the interactive tool (for example, EfficientViT-B3-224 and ResNet18-v1).\"}", "{\"title\": \"Updated Score\", \"comment\": \"I increased my score by 1 grade, as the authors have added a lot of empirical evaluation. 
As raised by another reviewer, I do not think the novelty is strong, but I do think the empirical analysis is useful for understanding the relative accuracies of models and their behavior when combined in terms of efficiency/accuracy.\"}", "{\"summary\": \"The paper discusses improving the speed of visual recognition systems using a \"Little-Big\" model setup. The Little model is a smaller architecture that processes examples first. If the confidence is below a predefined threshold, the sample is reprocessed by a \"Big\" model. This simple setup improves the speed of visual recognition systems on ImageNet, ImageNetv2, and ImageNetReal significantly (without loss in accuracy). The authors experiment with both CNNs and Transformer models. They also study the fraction of examples processed by the Little and the Big network, showing the accuracy as a function of threshold.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper studies an important topic - efficiency of visual recognition models.\", \"The speedup claimed by the paper is substantial. At a fixed accuracy, their method improves speed by 30%-80% (Figure 1).\", \"The paper is written clearly and is easily understandable. The investigation presented studies the natural questions that arise with threshold tuning for the Little model's confidence. Figure 3 clearly demonstrates how the accuracy and efficiency change as a function of the threshold.\", \"The paper accounts for generalization across different datasets by fixing the threshold on ImageNet and analyzing results on ImageNetv2 and ImageNetReal. This is an important aspect of the investigation, as choosing the threshold based on the validation set can create bias.\"], \"weaknesses\": \"My main concern is the comparison with prior art. As lines 437-439 state, \"even with tricks that effectively retrained models, many pruning methods are not competitive... with modern baselines... 
which in essence are better trained ViTs\\\". It seems that the Little-Big method is evaluated using modern architectures and training recipes, whereas other baselines (pruning, etc.) are using older architectures or training recipes. I'm worried that the gains of this method are primarily attributable to the use of newer architectures or training recipes. A fair comparison would use the same architectures as previous works.\\n- For example, Table 3 shows a datapoint with T=0, meaning the Big architecture is never used.\\n- Additionally, the baseline architecture in Table 3 (\\\"Our Baseline\\\") is significantly more accurate than previous work's baseline (Yin et al.).\\n\\nThe choice of \\\"Little\\\" network seems arbitrary in some cases. In Table 2, EfficientNet-B2-288 uses EfficientViT as a little network, but most other EfficientNet variants do not. And EfficientNet-B2-288 is not used with EfficientNet-B1, but most other variants are. I have similar thoughts on most of the rows in Table 2. Can you please justify the choice of Little architecture?\", \"line_132\": \"Equation 2 should have a reference, and there should be some more specific qualifications as to what types of models this equation applies to. Similarly, the characteristic width w_j is not well defined and doesn't have a reference.\", \"line_150\": \"I recommend also discussing quantization briefly here.\", \"line_172_173\": \"\\\"ingest gigabits/s of raw visual information and compress it to tens of bits/s\\\" <- this needs a reference\\n\\nIn Table 3, it would help the reader if you mark the baselines by their general approach (e.g. which ones are pruning, etc.).\", \"line_202\": \"\\\"confidence > 0.5-0.7\\\" <- what does this mean? How can a confidence be greater than a range? 
Did you instead mean \\\"0.5 < confidence < 0.7\\\"?\", \"questions\": \"My main suggestion is regarding the fairness of comparisons with baselines (mentioned above).\\n\\nI also would like to understand the justification for the choice of Little model (mentioned above).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part 1 of our response\", \"comment\": \"**Dataset Diversity and Additional Experiments to Demonstrate Robustness**\\n\\nWhile the main paper focuses on the applicability of Little-Big to image and video classification (Kinetics-400), we have now extended our experiments, as suggested by the reviewer, to include additional datasets and settings to further demonstrate the robustness of our method.\\n\\nWe kindly request the reviewer to check our summary response on our expanded experimental evaluation (namely, zero-shot image classification, text classification, multi-label classification, and a pixel-level semantic segmentation). Details of the newly added experiments can be found in Appendix A.1 of the revised paper.\\n\\n**Threshold Selection and Generalization Concerns**-\\n\\nIn Section 4.2 of the main paper, we address this question in detail. Below is a summary of our findings:\\n\\n * **Threshold Determination on Smaller Datasets**:\\nWe evaluate the generalizability of the threshold T by determining it using a smaller validation set (e.g., ImageNet-V2) and transferring it directly to the larger ImageNet-1K dataset. We opted for this approach to ensure comparability with baselines evaluated on the full ImageNet-1K validation split. 
As shown in Appendix Figure 7, even when T is determined on ImageNet-V2, the Little-Big framework achieves significant MACs reductions (>80%) with no performance drop on ImageNet-1K.\n\n* **Cross-Dataset Threshold Transfer**:\nWe also examine the robustness of thresholds determined on the ImageNet-1K validation set by applying them directly to ImageNet-ReaL and ImageNet-V2. On both datasets, Little-Big achieves consistent MACs savings (>75%) with marginal accuracy losses of 0.04% and 0.07%, respectively.\n\n* **Threshold Determination for Non-ImageNet Tasks**:\nIn our newly added experiments (Section 7 and summary response), thresholds were determined using only 20% of the held-out validation data, while performance and compute savings were then evaluated on the remaining 80%. Our results clearly demonstrate the generalizability of our approach across diverse modalities and tasks.\n\n **Related Work**\n\nWe have expanded the related work section in the paper to include the additional methods suggested by the reviewer. Below, we summarize the key differences between these approaches and our proposed method:\n* **Early-Exit Models**: Early-exit models reduce computation by allowing samples to exit at intermediate stages of the model pipeline. However, implementing these models is non-trivial and highly architecture-dependent. They also require training with specific objectives tailored to the early-exit mechanism. In contrast, Little-Big is architecture-agnostic, completely post hoc, and broadly applicable across multiple tasks, model families, and domains (e.g., images and text). Furthermore, we compare with DynamicViT and A-ViT, which are early-exit-like models, in Section 4.4 and demonstrate the benefits of Little-Big over these approaches.\n* **Confidence-Based Dynamic Routing**: While similar to our approach in principle, confidence-based dynamic routing typically requires task-specific integration and training modifications. 
Little-Big distinguishes itself by offering a simple thresholding mechanism that works across a wide range of pretrained models without requiring additional training. \n* **Quantization**: Quantization is indeed a relevant technique for model compression. While the focus of Little-Big is on inference-time compute savings without retraining, we acknowledge that quantization could further complement Little-Big. We have incorporated a short discussion of quantization in Section 2.3 and positioned it as a complementary method that can be integrated with Little-Big for additional gains. Furthermore, we highlight that the benefits of quantization are hardware-dependent, while Little-Big is a hardware-independent approach.\n\n**Baselines**\nThank you for this question. In the A-ViT paper by Yin et al., the authors report the baseline DeiT performance as 78.9%, whereas the original DeiT paper reports a performance of 79.81% for the same model. Additionally, we confirmed the 79.81% performance through the publicly released DeiT-S checkpoint. Since it was not immediately clear which of these values to report, and to ensure fairness when comparing against other methods, we initially included both. However, we acknowledge that this choice could cause confusion for the reader. To address this, we have revised the paper to only include the 79.81% baseline from the original DeiT paper for clarity and consistency. \nWe also made similar changes in Table-3 with DeiT-B architecture to improve readability.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you to the authors for their response. I was about to raise my scores, since my concerns were addressed, except I wanted to check the literature one more time from the survey I've shared earlier. I would have expected the authors to do the same, especially after my review. 
Looks like they either didn't do it, or somehow missed this paper: https://dl.acm.org/doi/pdf/10.5555/2830840.2830854\n\nFrom what I see it is pretty much this work, except it uses networks from 10 years ago. Given the existence of this work and the lack of comparison in the current paper, I lower my score.\"}", "{\"comment\": \"**Quantization**\n\nQuantization is indeed relevant for model compression. While the focus of Little-Big is on inference-time compute savings without retraining, we acknowledge that quantization could further complement Little-Big. We have added a brief discussion in Section 2.3, positioning quantization as a complementary method that can be integrated with Little-Big for additional gains.\n\n**Additional Experiments**\nWe also request that the reviewer check our summary response, which reports our findings on extending Little-Big to novel tasks and domains, including semantic segmentation, zero-shot classification, and multi-label classification, as well as extensions to text classification tasks. Details of these newly added experiments can be found in Appendix A.1 of the revised paper.\n\n\n\n\n**Fair comparison with Baselines**\n\nThank you for this question. In the A-ViT paper by Yin et al., the authors report the baseline DeiT performance as 78.9%, whereas the original DeiT paper reports a performance of 79.81% for the same model. Additionally, we confirmed the 79.81% performance through the publicly released DeiT-S checkpoint. Since it was not immediately clear which of these values to report, and to ensure fairness when comparing against other methods, we initially included both. However, we acknowledge that this choice could cause confusion for the reader. To address this, we have revised the paper to only include the 79.81% baseline from the original DeiT paper for clarity and consistency.\n\nWe also made similar changes in Table-3 with DeiT-B architecture to improve readability. 
Little-Big pairs with T=0.0 indeed suggest that a big model can be replaced by a small model. In the original table, we included such pairs mainly to highlight that an effective but often neglected way of model compression is to train a smaller model with a better training recipe (e.g., DeiT3-S outperforms DeiT-B). But we acknowledge that they may cause confusion, and we have removed such pairs with T=0.0 from Table 3.\n\n**Choice of Little Network**\n\nThank you for pointing this out. We want to highlight that the choice of the \"Little\" network is, in fact, user-dependent and flexible. Little-Big imposes no strict requirement on which model to use as the Little model, and it is entirely architecture-agnostic. Users can select a Little model based on their specific constraints, such as available compute resources, latency requirements, or model compatibility with the Big model.\n\n**Typographical Error**\n\nAdditionally, the phrasing \u201cconfidence > 0.5-0.7\u201d was indeed an error; it should have been confidence > 0.5. To clarify, our intent was to convey that confidence in the models we examined inversely correlates with hardness. Specifically, from the distribution of correctable mistakes, we observed that most of these samples exhibit lower confidence. Thus, this observation directly motivates our approach of thresholding over confidence: when the confidence of a prediction falls below the threshold, the Big model is invoked to handle the sample. We thank you for highlighting this issue, and we have updated the phrasing and explanation in the revised paper for clarity and accuracy.\n\n**Additional References**\nWe have added the following references to the line \"human vision ingests gigabits/s and compresses to tens of bits\" in the revised paper.\nSoto, Florentina, Jen-Chun Hsiang, Rithwick Rajagopal, Kisha Piggott, George J. Harocopos, Steven M. Couch, Philip Custer, Josh L. Morgan, and Daniel Kerschensteiner. 
\\\"Efficient coding by midget and parasol ganglion cells in the human retina.\\\" Neuron 107, no. 4 (2020): 656-666.\\n\\nZheng, Jieyu, and Markus Meister. \\\"The Unbearable Slowness of Being.\\\" arXiv preprint arXiv:2408.10234 (2024).\\n\\nWe hope our responses answer your concerns.\"}", "{\"summary\": \"This paper proposes a simple yet effective two-pass algorithm called \\\"Little-Big\\\" to speed up image classifiers. The core idea is to leverage a smaller, less computationally expensive \\\"Little\\\" model to pre-screen input samples. Only samples for which the Little model exhibits low confidence are then passed to a larger, more accurate \\\"Big\\\" model. The paper claims that this approach significantly reduces the computational cost (measured by Multiply-Accumulate operations or MACs) for a variety of model architectures and scales, without sacrificing accuracy on ImageNet-1K and other datasets. The authors demonstrate MACs reductions of up to 80% while maintaining or even improving accuracy compared to the single Big model baseline. They also argue that this approach is more effective than existing model compression and adaptive computation methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The proposed Little-Big algorithm is conceptually straightforward and easy to implement. It requires minimal modifications to existing models and training pipelines.\", \"The paper demonstrates significant MACs reduction across a range of model architectures (CNNs, transformers, hybrids) and scales, suggesting broad applicability.\", \"Experiments are conducted on multiple datasets (ImageNet-1K, ImageNet-ReaL, ImageNet-V2) to evaluate the robustness and generalizability of the method.\", \"The Little-Big approach addresses a critical issue in deploying large vision models: their high computational cost. 
The proposed method offers a practical solution for model compression without retraining or complex modifications.\"], \"weaknesses\": [\"# Major\", \"The method seems to rely on finding an optimal threshold T on the test set (ImageNet validation set) to determine which samples are passed to the Big model. This raises concerns about potential overfitting to the validation set and its impact on generalization performance. Results should be provided using a threshold determined on the training set or a held-out portion of the validation set to address this concern.\", \"The paper could benefit from a more comprehensive discussion of related work, particularly in areas like cascade models and dynamic inference methods. Specifically, work on early-exit models [1] and confidence-based dynamic routing [2] appears closely related and should be discussed. This would help to better contextualize the novelty and contributions of the proposed approach. UPDATE: {See my response below.}\", \"Experiments focus solely on the ImageNet dataset. More experiments are needed to understand the robustness of the proposed method.\", \"I also find it difficult to parse the results presented in huge tables, specifically Table 3: there are multiple baselines for DeiT models. Are you comparing results with different baseline accuracies?\", \"[1] https://github.com/txsun1997/awesome-early-exiting?tab=readme-ov-file\", \"[2] https://arxiv.org/pdf/2102.04906\", \"# Minor\", \"[L132] I can't see the definition of \"w and l\".\", \"[Section 2.3] Quantization is a key method for compression and is not mentioned here. Also Mixture of Depths (https://arxiv.org/abs/2404.02258).\", \"The paper's use of \"hardness\" and its relationship to model confidence is not always clear. In some sections, low confidence is equated with hardness, while in others, the opposite is implied. This needs clarification. 
For example [L199] \\\"which allows us to approximate a \\\"hardness\\\" axis with prediction confidence.\\\" hardness means low confidence, no?\"], \"questions\": [\"Please clarify the relationship between \\\"hardness\\\" and model confidence. Is low confidence always indicative of a hard sample, or are there cases where this assumption does not hold?\", \"Can you provide results where the threshold T is determined using only the training set or a held-out portion of the validation set? This would help to assess the potential for overfitting to the validation set and the generalizability of the method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a technique called the Little-Big algorithm. It combines a small and large pre-trained model to improve the trade-off between cost and accuracy. It first applies the small model to a given sample. When the confidence is high, the prediction is returned. Otherwise, the large model is applied, and its prediction is used.\\n\\nIn experiments, the authors focused on the ImageNet-1K image classification task. They demonstrated that the proposed method boosts efficiency for various pairs of models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1) The proposed method is practical. It is easy to implement and does not require any modification or additional training of existing models.\\n\\nS2) It is widely applicable. For any classification problem, it\\u2019s readily available. We could apply it to other tasks as well if we could come up with confidence estimation methods for them.\\n\\nS3) Extensive experimental results show that the proposed method is robustly performing well in the ImageNet classification task.\", \"weaknesses\": \"W1) The proposed approach lacks novelty. 
The idea of using multiple models with different cost-accuracy tradeoffs is highly common, to name a few, such as speculative decoding for language models and cascade ranking systems for recommendation and information retrieval.\\n\\nW2) The experiments are weak. All the experiments are about ImageNet-1k image classification task, so it is quite uncertain whether this method works well for other tasks as well.\", \"questions\": \"Q1) Can you develop better ways to explain your idea\\u2019s novelty, or do you have any ideas to further enhance the novelty?\\n\\nQ2) Can this idea be applied to language models as well? Particularly, I'm interested to see how it compares with speculative decoding.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary Response - Additional experiments- Part 1\", \"comment\": \"We thank all reviewers for their insightful and constructive feedback.\\n### **Extensions and New Experiments**\\n\\nA recurring suggestion was to extend the Little-Big framework and provide additional experimental validation of its effectiveness on other tasks. Following the suggestion, we have now expanded our empirical evaluation with four additional experiments (both vision and NLP tasks). While we present high-level summaries here, additional experimental details are provided in Appendix A.1 of the revised paper.\\n\\n1. **Multi-Modal Zero-Shot Classification Tasks**: \\n We study the use of our proposed approach in multi-modal zero-shot classification by accelerating the large CLIP model (ViT-L-14, 427M parameters) using a smaller CLIP model (ViT-B-32, 149M parameters). We considered four standard zero-shot evaluation benchmarks (Food-101, Flowers-102, Describable Textures, and SUN397). 
Our results corroborate the key finding in the original paper that our Little-Big protocol consistently achieves non-trivial reduction in MACs while producing performance comparable to the big model. (on average, we notice a 40% reduction in MACs with only a 0.35% drop in accuracy).\\n\\n\\n| **Dataset** | **Little Model Acc (%)** | **Big Model Acc (%)** | **Little-Big Acc (%)** | **$\\\\Delta$ GMACs (%)** |\\n|---------------|---------------------------|------------------------|-------------------------|------------------------|\\n| SUN397 | 54.09 | 58.50 | 58.44 | -49.91 |\\n| Food 101 | 78.42 | 89.79 | 89.41 | -45.16 |\\n| DTD | 31.86 | 37.44 | 37.44 | -31.20 |\\n| Flowers 102 | 53.29 | 66.35 | 65.31 | -33.49 |\\n\\n\\n2. **Multi-label Classification**: \\n We also evaluated the Little-Big framework using a multi-label classification experiment with the popular CelebA facial attribute benchmark. Here, we used the same model configurations from the previous experiment -- \\\"ViT-B-32\\\" and \\\"ViT-L-14\\\" respectively. To estimate the model's confidence for a class, we measure the absolute difference of the prediction probability (after applying the sigmoid function) from 0.5. Note, 0.5 represents the highest uncertainty in a binary classification setting. Subsequently, we aggregate (i.e., average) the confidence scores across all classes to obtain a sample-level confidence estimate. Following the common experimental protocol adopted in this study, we determined the confidence threshold using a randomly chosen 20% of the held-out validation dataset and evaluated performance on the remaining 80% of the validation set.\\n\\n As shown in the table below, we achieve significant computational savings while incurring only a small drop in F1 score, thus evidencing the utility of the Little-Big protocol even in multi-label classification. 
\\n\\n| Little Model F1 (%) | Big Model F1 (%) | Little-Big F1 (%) | MACs Reduction (%) |\\n|:--------------------:|:----------------:|:------------------:|:-------------------:|\\n| 60.44 | 64.31 | 63.28 | 40.10 |\"}" ] }
0m27tvXkNm
Robust EEG Classification via Graph Neural Networks
[ "Nourhan Ahmed", "Johannes Burchert", "Vijaya Krishna Yalavarthi", "Maximilian Stubbemann", "Lars Schmidt-Thieme" ]
Electroencephalogram (EEG) classification has gained prominence due to its applications in medical diagnostics and brain-computer interfaces. However, EEG data is known to have a low signal-to-noise ratio, resulting in high variance in predictions across similar instances. To overcome this issue, we introduce RoGra, a novel approach leveraging residual graph convolutional networks for robust EEG classification. Our model incorporates dynamic time warping (DTW) to align temporal information and capture meaningful neighborhood relationships, enhancing robustness against artifacts. Experiments on three well-established EEG datasets demonstrate that RoGra outperforms baseline methods by up to 25\%, marking the largest improvement in EEG classification accuracy since the introduction of the seminal EEGNet. Our code is publicly available.
[ "EEG Classification", "Graph Neural Networks", "Dynamic Time Warping" ]
https://openreview.net/pdf?id=0m27tvXkNm
https://openreview.net/forum?id=0m27tvXkNm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "SGORglFPpv" ], "note_type": [ "comment" ], "note_created": [ 1730317096503 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9776/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0lVQBMhsPG
ETC: Towards Training-Efficient Video Synthesis with Exploiting Temporal Capabilities of Spatial Attention
[ "Jianzhi Liu", "Lianli Gao", "Sitong Su", "Sen Wang", "Heng Tao Shen", "Jingkuan Song" ]
Recently, synthesizing video from the text, i.e, Text-to-Video (T2V), has demonstrated remarkable progress by transferring the pre-trained Text-to-Image (T2I) diffusion models to the video domain, whose core is to add new temporal layers for capturing temporal information. However, these additional layers inevitably incur extra computational overhead, as they need to be trained from scratch on large-scale video datasets. Instead of retraining these costly layers, we conjecture whether temporal information can be learned from the original T2I model with only Spatial Attention. To this end, our theoretical and experimental explorations reveal that Spatial Attention has a strong potential for temporal modeling and greatly promotes training efficiency. Inspired by it, we propose ETC, a new T2V framework that achieves high fidelity and high efficiency in terms of training and inference. Specifically, to adapt the video to the spatial attention of T2I, we first design a novel temporal-to-spatial transfer strategy to organize entire video frames into a spatial grid. Then, we devise a simple yet effective Spatial-Temporal Mixed Embedding, to distinguish the inter-frame and intra-frame features. Benefiting from the above strategy that actually reduces the model's dependence on the text-video pairing dataset, we present a data-efficient strategy, Triple-Data (caption-image, label-image, and caption-video pairs) fusion that can achieve better performance with a small amount of video data for training. Extensive experiments show the superiority of our method over the four strong SOTA methods in terms of quality and efficiency, particularly improving FVD by 49% on average with only 1% training dataset.
[ "Efficient Video Generation", "Video Diffusion Model" ]
https://openreview.net/pdf?id=0lVQBMhsPG
https://openreview.net/forum?id=0lVQBMhsPG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gwBzhRvFFD", "UdH9R8t4zC", "N3bSdIgO4d", "LzHZPwgeNT", "JGv1v1tkMW", "8CIrAMebwO" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731585369969, 1730105560526, 1731171760568, 1730272142370, 1730370606416, 1730448361035 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission193/Authors" ], [ "ICLR.cc/2025/Conference/Submission193/Reviewer_KJcQ" ], [ "ICLR.cc/2025/Conference/Submission193/Reviewer_sEp1" ], [ "ICLR.cc/2025/Conference/Submission193/Reviewer_mANp" ], [ "ICLR.cc/2025/Conference/Submission193/Reviewer_24a3" ], [ "ICLR.cc/2025/Conference/Submission193/Reviewer_h2gj" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a training-efficient approach to train text-to-video (T2V) models. It explores how to transfer text-to-image (T2I) models to the T2V task without introducing a temporal model. Additionally, it proposes a data-efficient hybrid training method that allows the model to achieve favorable FVD metrics with relatively low training costs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation and writing of this paper are very clear, making it easy to follow.\", \"From a quantitative perspective, the paper achieves good metrics at a relatively low training cost.\"], \"weaknesses\": [\"The novelty is somewhat limited, as the approach in this paper aligns closely with [1], which also uses a grid-based approach to convert videos into images. The method in [1] originates from [2], which restricts the novelty of this paper.\", \"Although the paper proposes the spatial-temporal mixed embedding method, in essence, it is equivalent to adding a positional embedding. 
I am curious about how it prevents disrupting the T2I model\\u2019s weights at the beginning\\u2014this is an important point.\", \"The FPS embedding design is also not novel; it was first introduced in MagicVideo. The mixed FPS ranges from 0 (pure image) to 120 (single-frame replication). This design lacks significant originality.\", \"What bothers me most is the qualitative results. Although the quantitative metrics are promising, the qualitative results fall behind recent state-of-the-art video generation models like DiT architectures, OpenSora, Opensora-Plan, CogvideoX, etc. The failure cases, in particular, perform poorly.\", \"The paper does not validate any scaling laws in terms of data or model scalability.\", \"The authors should analyze more thoroughly where the quantitative advantages come from. Given the generally unimpressive visual quality, I can only assign a borderline rejection score for now.\"], \"references\": \"[1] Lee T, Kwon S, Kim T. Grid Diffusion Models for Text-to-Video Generation [C] // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 8734-8743.\\n\\n[2] Fan Q, Panda R. Can an image classifier suffice for action recognition? [J] arXiv preprint arXiv:2106.14104, 2021.\", \"questions\": \"See Questions in Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces ETC, a novel text-to-video synthesis model focused on training efficiency by exploiting spatial attention for temporal modeling. Unlike existing models that add temporal attention layers, ETC leverages only spatial attention with a temporal-to-spatial transfer strategy and spatial-temporal mixed embedding. 
This design reduces data dependency, allowing high-quality, efficient video generation using significantly smaller datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Proposes a highly efficient framework that eliminates temporal attention, reducing computational cost, which is an interesting idea.\", \"Innovatively uses a temporal-to-spatial transfer strategy and spatial-temporal embedding to enable video generation without sacrificing temporal consistency.\", \"Demonstrates superior performance with fewer training samples, achieving quality comparable to or better than current state-of-the-art methods.\"], \"weaknesses\": [\"The authors use filtered high-quality video data to train their model, whereas the baseline methods do not incorporate this filtration step, potentially creating an uneven comparison. This difference in data quality could give the proposed model an advantage that does not solely stem from its architectural innovations.\", \"The paper claims that \\u201cWe demonstrate that spatial attention modeling a linear mapping and alternating between spatial and temporal attention modeling another linear mapping, which does not model complex derivative or quadratic relationships.\\u201d However, this statement does not fully consider the inherent non-linearities of the model, nor does it account for the potential effects of stacking multiple spatial-temporal layers, which could enhance the model\\u2019s capacity to capture more complex relationships, including quadratic ones.\", \"Limited exploration of possible visual artifacts that may arise from removing explicit temporal modeling layers leaves open questions regarding the visual consistency and quality of generated videos. Additionally, relying primarily on FVD and CLIP scores limits the evaluation, as these metrics do not adequately capture human preference for smooth and realistic motion in videos. 
More human-centric evaluation metrics would improve the assessment of model performance.\"], \"questions\": \"In Figure 3, why is it necessary to rearrange frames of videos into a single image?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces ETC, a framework aimed at training-efficient text-to-video (T2V) synthesis by exploiting spatial attention for temporal modeling. The authors propose to eliminate temporal attention layers, typically used in T2V models, by using spatial attention from pre-trained text-to-image (T2I) models. The framework introduces techniques like temporal-to-spatial transfer and spatial-temporal mixed embedding to handle video frames within a spatial grid. Extensive experiments demonstrate superior performance in terms of quality and efficiency over several state-of-the-art (SOTA) methods.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"\\u25cf The paper presents a new perspective by leveraging spatial attention for temporal modeling. It is interesting as this approach not only simplifies the architecture but also reduces training costs, providing new insights for video generation tasks.\\n\\n\\u25cf If all results are true under a fair comparison, the performance improvement is significant.\", \"weaknesses\": \"\\u25cf It lacks convincing explanation for superior performance. While the authors attempt to explain why spatial attention can replace temporal attention, the reasons behind the significantly better results remain unconvincing. It is unclear why the proposed approach would outperform existing models to such an extent, especially considering the limited training resources used (8 NVIDIA 3090 GPUs).\\n\\n\\u25cf The model\\u2019s performance raises concerns about its generalization to more complex datasets or scenarios, especially given the small-scale training. 
The absence of detailed discussions about potential limitations, such as the restricted ability to model large motions due to implicit spatial relation modeling, weakens the validity of the results.\\n\\n\\u25cf Lack of visual evaluation. While the quantitative results are compelling, there is no video evaluation provided to visually demonstrate the effectiveness of the ETC framework. Also, the code in the supplementary materials is too basic to allow a direct assessment of the model\\u2019s qualitative improvements.\\n\\n\\u25cf In the supplementary materials, the authors claimed they include comparisons with many baselines, while the main paper does not provide sufficient detail on all these baselines or whether the comparisons were fair. This raises questions about the reported results, given that other well-recognized SOTA models typically use more data and computational resources. It would be beneficial to clarify how the proposed model achieves consistently the best results under such limited training conditions (as shown in Table 1).\", \"questions\": \"See the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work aims at improving the data efficiency in training T2V models via reusing spatial attention for temporal modeling. In particular, the authors propose to rearrange a sequence of frames into a \\\"stitched\\\" huge frame. 
The authors claim that they achieve better synthesis quality than existing alternatives yet using less data.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Studying the data efficiency in learning T2V models deserves a pat.\"], \"weaknesses\": [\"From the motivation (or say theoretical foundation) part, I believe there exist **technical flaws**.\", \"Intuitively, removing the temporal module and reusing the spatial module to handle both spatial and temporal information will definitely affect the model capacity. From this perspective, the so-called \\\"temporal capabilities\\\" of spatial attention does not convince me.\", \"I will explain my concern with a toy example. Let $A = (a) \\\\in \\\\mathbb R^{1 \\\\times 1}, B = (b_{ij}) \\\\in \\\\mathbb R^{2 \\\\times 2}$, and $X = (x_1, x_2) \\\\in \\\\mathbb R^{1 \\\\times 2}$. Assuming that $A, B$ are invertible, as required by the authors, there does **not** exist $A' = (a')$ such that $AXB=A'X$ for any $X$. First, note that $AXB = (a(b_{11}x_1 + b_{21}x_2), a(b_{12}x_1 + b_{22}x_2))$, and $A'X = (a'x_1, a'x_2)$. Then if $b_{11} = b_{22} = 0$, $AXB = (ab_{21}x_2, ab_{12}x_1)$ cannot be equal to $A'X = (a'x_1, a'x_2)$ for any $x_1 \\\\neq x_2$. This clearly contradicts the claim in Line 878, which means **the theoretical foundation of this work does not hold**.\", \"From the empirical part, the quality of videos generated by ETC are not as good as those generated by previous approaches. 
I believe the reason is just that the modeling capacity of spatial attention struggles to handle the temporal information.\", \"The frames in the last row of Figure 5 and those in Figure 6a are blurry.\", \"The motion in all presented videos seems to be really small (Figure 6b, Figure 21, Figure 22, Figure 23).\", \"There are even no videos provided in the supplementary material, which is very strange for a submission working on video synthesis.\", \"Given the above observations, I wonder why the FVD metric from ETC is so small compared to other competitors.\"], \"questions\": \"Please refer to the two major concerns listed in **Weaknesses**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper demonstrates that the spatial attention in T2I has a strong capability of temporal modeling and can boost the efficiency of training. Furthermore, this paper also propose a training-efficient framework, called ETC.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper discusses how to generate high-quality videos using only a pre-trained text-to-image model, which is very interesting.\\n2. The structure of this paper is well-organized and easy to follow. \\n3. The experimental results show the effectiveness of the proposed method.\", \"weaknesses\": \"There are some questions.\\n1. In the area of text-to-video generation, GridDiff adopts a similar approach. What distinguishes this work from GridDiff?\\n2. In lines 836 and 837, the authors claim that the primary components in the attention mechanism are linear operations. However, there are also some non-linear layers present in the whole network. If we take these non-linear layers into account, do equations (9) through (13) still hold?\\n3. 
In lines 191 to 192, the authors claim that single spatial attention has a larger receptive field than spatial and temporal attention combined. However, I think it is not appropriate to consider spatial and temporal attention in isolation from the rest of the network. If spatial and temporal attention are treated as a unified block for video modeling, would their receptive field still be considered smaller?\\n4. From Section 4, it appears that all video frames should be arranged into a single grid image. However, in Figure 3(a), there seem to be empty spaces. Why is this?\\n5. In the Spatial-Temporal Mixed Embedding section, the authors use absolute positional encoding. If the goal is to generate videos of varying resolutions and different video lengths, would it be necessary to include videos with diverse resolutions during the training phase?\\n6. For a more comprehensive quantitative evaluation of video generation, I recommend that the authors use a broader set of metrics, such as Vbench. Additionally, I suggest that the authors provide a video demo, allowing reviewers to more intuitively assess the quality of the generated videos.\", \"questions\": \"Please see above. If the author solves my problems, I will consider raising the score. Thanks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0lMhptUGxP
Large Language Model Alignment via Inverse Reinforcement Learning from Demonstrations
[ "Hao Sun", "Mihaela van der Schaar" ]
Aligning Large Language Models (LLMs) is crucial for enhancing their safety and utility. However, existing methods, primarily based on preference datasets, face challenges such as noisy labels, high annotation costs, and privacy concerns. In this work, we introduce **_Alignment from Demonstrations_** (AfD), a novel approach leveraging high-quality demonstration data to overcome these challenges. We formalize AfD within a sequential decision-making framework, highlighting its unique challenge of missing reward signals. Drawing insights from forward and inverse reinforcement learning, we introduce divergence minimization objectives for AfD. Analytically, we elucidate the mass-covering and mode-seeking behaviors of various approaches, explaining when and why certain methods are superior. Practically, we propose a computationally efficient algorithm that extrapolates over a tailored reward model for AfD. We validate our key insights through experiments on the Harmless and Helpful tasks, demonstrating their strong empirical performance while maintaining simplicity.
[ "Large Language Model Alignment", "Alignment from Demonstration" ]
Reject
https://openreview.net/pdf?id=0lMhptUGxP
https://openreview.net/forum?id=0lMhptUGxP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xAm7c6oL0g", "wKkitWXsrL", "wHDovl1oSa", "w1HSgZQ6D2", "vuEmIzqKcP", "vAQxYth8sE", "rtzkG5dhjw", "qbJM3xj451", "poDlyGwWcc", "osdwhTwGCF", "ky0nz41dnU", "jOXAMBUdBh", "i1r4J96Srv", "h93mr1amov", "gLEMUsBJYj", "fHbUokGR7P", "ZwmDSkdP5S", "ZXdcwsgqJT", "ZBZtpByEAS", "W7EZLHTyvq", "QpOiy7J8ZH", "O8l3b0CMQh", "MjoeFsBGHs", "L5dTFWtP2n", "KXCqnwNo9F", "K7tIhBqB0K", "JjdZCV0AKZ", "HTEzhmOzpj", "HS0z0PqFlj", "FvynJvOQL5", "ELeF5bgR6d", "BJ1jJNS5Yw", "AbyDBVKNJb", "89su7ruxbk", "7fLro1efPo", "5llV5HIkzl", "4u8PAlQjjz" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732847866890, 1731919806677, 1730830202650, 1732462353616, 1731919656663, 1732435841912, 1732545392474, 1730672744975, 1734852961691, 1732848420527, 1730497155395, 1730694843689, 1732848483905, 1731919864332, 1732560998960, 1732522812761, 1731920050132, 1732545323631, 1731950396173, 1732402941814, 1731920230297, 1732436154946, 1732493744162, 1732493689189, 1731996771555, 1732675436081, 1732313859505, 1731920581400, 1733098963611, 1733133550595, 1732848287817, 1732926374585, 1733099133217, 1731920118083, 1731919422270, 1731921125103, 1737523867024 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7805/Reviewer_5fua" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_iqxv" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_iqxv" ], [ "ICLR.cc/2025/Conference/Submission7805/Area_Chair_1C3y" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_wScp" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_ZCzk" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_wScp" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_iqxv" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_iqxv" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_wScp" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_wScp" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_iqxv" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_ZCzk" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Reviewer_wScp" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Submission7805/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ 
"{\"title\": \"Author Response / Adding Requested Experiments\", \"comment\": \"Thank you for the further feedback!\\n\\n### 1. On Necessity of Framework\\n\\nWe thank our reviewer for their affirmative comments on our contributions!\\nWe see the reviewer\\u2019s point and appreciate their comments. In our current draft, we had pages 1-2 for the introduction (including Figure 1, the roadmap of the paper), pages 3-4 for the framework (as preliminaries), pages 5-8 for the method, and finally pages 9-10 for the experiments. Such an arrangement on different pages has received acknowledgment from Reviewer iqxv (rated as excellent).\", \"please_kindly_permit_us_to_respectfully_maintain_a_different_presentation_strategy\": \")\\n\\n\\n### 2. On Explicit Reward Model\\n\\nWe thank the reviewer for their further feedback on the interpretation of the reward model. In our latest manuscript, we have included a detailed discussion on closed-form solutions in Appendix G. Please refer to page 28 in our updated manuscript for the details. We\\u2019ve highlighted the new content with blue text.\\n\\n\\n### 3. On Evaluations\\n\\nWe thank our reviewer for their further explanation. We would like to use the following new experiment results to address the reviewer\\u2019s concerns.\\n\\nWe thank the reviewer for pointing out the insightful discoveries in DITTO. **However, we would respectfully note that DITTO is another ICLR submission. Requesting one ICLR submission to follow the experiment setup of another submission may break the ICLR code of conduct.**\\n\\nThat being said, we would like to use the following setups to address the reviewer\\u2019s concern regarding experiments:\\n\\n\\n**(1). 
AfD with GPT3.5 Demonstration**\\n\\n **We additionally experiment with a demonstration dataset generated by GPT3.5.** With multiple datasets and multiple methods studied, we can isolate and understand the specific contributions of demonstration data source, and in this case, the evaluation is based on the **Golden Reward Models.**\\n\\n|Model|Harmless-GPT4|Harmless-GPT3.5|Helpful-GPT4|Helpful-GPT3.5|\\n|-|-|-|-|-|\\n|Demo|1.704|1.615|0.735|0.520|\\n|Base|0.670|0.670|-0.086|-0.086|\\n|SFT|1.785|1.667|0.588|0.433|\\n|IRL-RM (N=10)|2.171|1.755|0.598|0.454|\\n|IRL-RM (N=30)|2.272|1.842|0.692|0.498|\\n|IRL-RM (N=50)|2.333|1.889|0.751|0.537|\\n\\nIn this experiment, we studied the effect of the demonstration dataset and showed the effectiveness of the proposed method using Golden Reward Model evaluations.\\n\\n**(2). MT-Bench Evaluation with UltraFeedback**\\n\\nWe **added new experiments using the UltraFeedback dataset and evaluated different approaches using the MT-Bench**. Limited by computational resources, we use Gemma-2b and 4-bit quantized Gemma-7b and LoRA in training. Among the multiple responses in the UltraFeedback dataset, we use the **highest-rewarded responses** (rather than generating them by GPT4) as the demonstration dataset to perform AfD. \\n\\n|Method|Gemma-2b|Gemma-7b (4-bit)|\\n|-|-|-|\\n|Base|1.394|2.617|\\n|SFT|1.875|3.250|\\n|DPO (demo)|1.903|3.134|\\n|IRL-RM (N=10)|2.075|3.421|\\n|IRL-RM (N=30)|2.450|3.625|\\n|IRL-RM (N=50)|2.656|3.731|\\n|IRL-RM (N=300)|2.869|4.076|\\n\\n\\nIn this experiment, we use **different models for expert demonstration generation and evaluation.**\\nThe results presented in the table above indicate that the proposed method significantly outperforms both SFT in the AfD setting and the naive application of DPO --- using the initial policy generation as negative samples. \\n\\n\\n----\\n\\nWe thank our reviewer again for their suggestions for improving our paper. 
Should there be any leftover concerns, please let us know and we are eager to do our utmost to address them!\"}", "{\"title\": \"Author Response to Reviewer ZCzk (Part 1)\", \"comment\": \"We thank the reviewer for their time and thoughtful review of our paper, as well as their encouraging recognition of our innovation in providing an IRL framework for LLM alignment. Below, we would like to address their concerns and questions in turn:\\n\\n----\\n\\n## Q1: Evaluation with golden reward models\\n\\nWe thank the reviewer for highlighting the need for further clarification regarding our motivation for using golden reward models in evaluation.\\n\\n**1. Open-Source and Reproducibility**\\nOn one hand, evaluating with golden reward models is significantly more cost-effective than relying on commercial APIs like GPT-4. Moreover, leveraging open-source reward models enhances reproducibility and facilitates fair comparisons in future research. This approach is a well-established practice in the reward modeling literature [1\\u20136].\\n\\n**2. Evaluating with GPT-as-a-Judge**\\nThat said, we agree with the reviewer that incorporating GPT-as-a-judge provides valuable complementary insights into evaluating the performance of various methods. To this end, we included GPT-based evaluations in Section 4.3 (Table 2). Notably, the results obtained from GPT-based evaluation are broadly consistent with those derived from golden reward models.\\n\\n**3. Using both evaluation methods as cross-validation**\\nRecent studies have raised concerns about the reliability of LLM-based evaluations [7\\u201310]. To address this, we employ multiple evaluation metrics, combining golden reward models with GPT-as-a-judge. 
This dual approach enables a more comprehensive and reliable assessment of our proposed method.\\n\\nWe hope this dual evaluation strategy underscores the **rigor and thoroughness of our assessment** and addresses the reviewer\\u2019s concerns.\\n\\n---\\n\\n## Q2: A More Direct Comparison to SPIN.\\n\\nWe thank the reviewer for suggesting that a comparison with SPIN in the main text would be more informative than including it solely in the appendix. In the original manuscript, discussions and comparisons with SPIN were deferred to Appendix A.\\n\\nIn the revised version, we have **moved the discussion and the illustrative example to the main text, with updates highlighted in blue.**\\n\\n\\n**1. Achieving Super-Demonstrator Performance: A Fundamental Limitation of SPIN**\\n\\nA key distinction between our method and SPIN lies in the flexibility and potential of our approach to achieve **super-demonstrator performance** in LLM alignment, as empirically validated in our experiments.\\n\\n\\nSPIN operates under the explicit assumption that the current policy (initially the SFT policy) is always weaker than the demonstrations. Consequently, at convergence, the aligned LLM's performance is upper-bounded by the performance of the demonstration dataset, as the demonstrations are consistently treated as positive examples in implicit reward modeling.\\n\\nIn contrast, our method, rooted in Inverse RL, explicitly learns a reward model and extrapolates over it. This reward modeling mechanism allows for leveraging task scores to enhance performance beyond the demonstrators. On the other hand, naively adopting SPIN's setup\\u2014using the SFT checkpoint's generations as negative examples and the demonstrations as positive examples\\u2014can adversely impact heterogeneity and lead to suboptimal performance. As demonstrated in our experiments, the IRL approach consistently achieves **super-demonstrator performance** across both tasks.\\n\\n\\n**2. 
Empirical Comparisons with SPIN**\\n\\nTo substantiate our analytical insights, we provide empirical comparisons against SPIN in Table 4 of Appendix A.5, which we now summarize in the table below: \\n\\n| Task | Demo | SFT | IRL (N=10) | IRL (N=30) | IRL (N=50) | SPIN (iter=1) | SPIN (iter=2) | SPIN (iter=3) |\\n|------------|-------|-------|------------|------------|------------|----------------|----------------|----------------|\\n| Harmless | 1.704 | 1.785 | 2.171 | 2.272 | 2.333 | 1.769 | 1.753 | 1.747 |\\n| Helpful | 0.735 | 0.588 | 0.598 | 0.692 | 0.751 | 0.640 | 0.699 | 0.706 |\\n\\nThe table reinforces our key insight: while SPIN focuses on imitating expert demonstration behavior, our IRL-based approach surpasses the demonstration performance, achieving significantly higher scores than the demonstrators.\\n\\nBy moving this discussion and comparison into the main text, we aim to provide a clearer understanding of the advantages of our method over SPIN.\"}", "{\"summary\": \"This work studies the large language model (LLM) alignment problem in the learning from demonstration framework. Under this framework, the widely adopted supervised finetuning (SFT) paradigm can be interpreted as matching the policy distribution and an unknown expert demonstration distribution with the forward KL distance measure. The authors then propose to consider distribution matching with the reverse KL distance. This problem has been studied in the imitation learning literature. A standard method is to train a discriminator to distinguish the policy trajectories and the expert trajectories and then train the policy by reinforcement learning with reward signals derived from the discriminator's outputs. This work adopts this method in the context of LLM alignment and evaluates it empirically on the Harmless and Helpful dataset. 
Experiment results show that the proposed method performs better than or on par with the SFT baseline and learning from human annotated preference data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper does a good job at interpreting LLM alignment under the learning from demonstration framework. It successfully frames the alignment problem in the language of distribution matching. The idea of using the reverse KL distance follows naturally.\\n\\nThe authors identify the heterogeneity problem in the naive adoption of the discriminator-as-reward method and propose the Init-SFT RM as a solution to mitigate the heterogeneity gap. Init-SFT RM demonstrates strong performance in the experiments. This idea provides insights into learning from demonstration and can be applied to a broader class of problems beyond LLM alignment.\", \"weaknesses\": \"Important details of the method are missing from the main text. Section 3.2 talks about extrapolating the learned reward models but does not provide any detail on how it works in the context of alignment. Perhaps as a consequence, the results presented in Section 4.3 are confusing to me. It looks like the only difference to Section 4.2 is the evaluation metric being GPT4 as a judge rather than the golden reward model.\\n\\nAnother weakness of this work is the lack of understanding of the behavior of the proposed method. The distinction between forward KL distance and reverse KL distance leads to two different methods in SFT and discriminator-as-reward. The authors also discussed the mass-covering and mode-seeking behavior in Section 3. One natural question to ask here is how it impacts the behavior of the alignment algorithms and if they yield different outcomes. However, the discussion in Section 4 is rather hand-wavy. The authors simply characterize the harmless dataset and the helpful dataset as less divergent and more divergent. 
I think a deeper analysis on the mass-covering and mode-seeking behavior in alignment can greatly improve this work.\\n\\nIn terms of writing, the citation format is misused throughout the manuscript. Please proofread the paper carefully and use the proper citation command (e.g., \\\\citep{} vs \\\\citet{}) in the revision.\", \"questions\": \"1. Could the authors clarify why this work falls into the inverse reinforcement learning category? I might be wrong, but my understanding of inverse reinforcement learning is about uncovering the underlying reward function from expert trajectories. This work is not about finding the hidden \\\"true reward function\\\" but about matching the demonstration distribution. Thus I am confused by the use of the term \\\"inverse reinforcement learning\\\".\\n\\n2. In a typical alignment pipeline, learning from annotated preference data like RLHF comes after SFT. RLHF often yields mode-seeking distributions in contrast to SFT. Could the authors comment on the compatibility of the proposed method and RLHF? Should we expect RLHF to provide further improvement given that the AfD method is already mode-seeking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Hi,\\n\\nRe: direct vs. reward-based methods: Yup, 1. is the answer I was looking for. I think this is the true reason your BoN procedure actually out-performs the policy it is sampling from without any new human data. It is just easier to learn a \\\"verifier\\\" / reward model for these tasks than a \\\"generator\\\" / policy. I think the other 3 points you bring up are mostly orthogonal and would suggest cutting them. Specifically, I would suggest emphasizing 1. in the paper, and mentioning this is the reason why one can reasonably believe that, without any new human data, this procedure can actually give us improved performance. 
I've read these references before and I don't think anyone has given a compelling explanation for why this should be the case but I think at least pointing out that this is empirically true is a good thing.\"}", "{\"title\": \"Author Response to Reviewer 5fua (Part 2)\", \"comment\": \"## 4. In-depth discussions on the forward and reverse KL\\n\\nWe appreciate the reviewer\\u2019s interest in further discussion about forward and reverse KL divergences. Below, we provide additional details:\\n\\n**1. Without Our IRL-Based Method, SFT is the Only Option.**\\nWhen alignment relies solely on a demonstration dataset, the standard approach is SFT, which corresponds to forward-KL-based distribution matching. SFT inherently exhibits mass-covering behavior. In contrast, our proposed AfD algorithm, derived from reverse-KL distribution matching, enables mode-seeking behavior. This provides a novel alternative for using demonstration data in alignment.\\n\\n**2. Extrapolating Over Reward Models.**\\nOur work emphasizes that forward-KL-based SFT can effectively match expert performance depending on the datasets and tasks, as demonstrated in Section 4.1. However, reverse-KL-based methods, which rely on explicit reward modeling, **consistently improve performance beyond the baseline set by SFT**. This distinction isolates the gains attributable to reward modeling.\\n\\n**3. Reward Modeling Enables Inference-Time Optimization.**\\nFrom a practical perspective, reverse-KL-based reward modeling supports inference-time optimization, whereas forward-KL-based supervised learning objectives do not enable further performance improvements post-training.\\n\\n**4. Empirical Demonstration of Super-Demonstrator Performance.**\\nCorrespondingly, our experiments using different divergence measures (i.e., the forward and reverse KL) aim at 1. highlighting properties of those different objectives (mode seeking and mass covering), and 2. 
empirically demonstrating that explicit reward modeling and IRL can achieve super-demonstrator performance. Our experiments in Sections 4.1 and 4.2 illustrate two key findings:\\n- SFT (forward-KL matching) can sometimes achieve super-demonstrator performance.\\n- Reward modeling (reverse-KL matching) is consistently useful for achieving further gains.\\n\\n----\\n\\n## 5. Compare Workflows in AfD and RLHF\\n\\nWe thank the reviewer for their insightful questions about the workflows in AfD and RLHF.\\n\\n**1. AfD is an alternative to RLHF**\\nAfD serves as a practical **alternative to RLHF** when preference annotations are unavailable due to high costs, noisy labeling, or data-sharing restrictions (e.g., privacy concerns). For instance, in healthcare applications, it is often infeasible to collect preference annotations for medical prescriptions, but expert demonstration data may be available.\\n\\n\\n**2. Summarize the differences in a table.**\\nTo enhance clarity, we summarize the differences between AfD and RLHF workflows in the table below:\\n\\n\\n| Alignment Methods | SFT Data | IRL Data | Workflow|\\n|---|---|---|----|\\n| AfD| Demonstration| Demonstration | SFT + RM + PO |\\n| RLHF | Positive Sample | Preference Pairs | SFT + RM + PO |\\n\\nBoth methods share SFT, reward modeling (RM), and policy optimization (PO) stages. While RLHF relies on preference annotations, **AfD operates solely with demonstration data**. Notably, both approaches face the challenge of off-policy data: In RLHF, this was solved through online iterative annotation [9], while in our AfD with expert demonstrations, we introduce the init-SFT reward modeling method to address such a challenge effectively.\\n\\n---\\n\\n## 6. Citation Format\\n\\nWe thank the reviewer for pointing out the citation commands. We have updated the formats accordingly.\\n\\n---\\n\\n**References**\\n\\n[1] Dong, Hanze, et al. 
\\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[3] Liu, Tianqi, et al. \\\"RRM: Robust Reward Model Training Mitigates Reward Hacking.\\\" arXiv preprint arXiv:2409.13156 (2024).\\n\\n[4] Liu, Tianqi, et al. \\\"Statistical rejection sampling improves preference optimization.\\\" arXiv preprint arXiv:2309.06657 (2023).\\n\\n[5] Coste, Thomas, et al. \\\"Reward model ensembles help mitigate overoptimization.\\\" arXiv preprint arXiv:2310.02743 (2023).\\n\\n[6] Yang, Rui, et al. \\\"Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment.\\\" arXiv preprint arXiv:2402.10207 (2024).\\n\\n[7] Ho, Jonathan, and Stefano Ermon. \\\"Generative adversarial imitation learning.\\\" Advances in neural information processing systems 29 (2016).\\n\\n[8] Fu, Justin, Katie Luo, and Sergey Levine. \\\"Learning robust rewards with adversarial inverse reinforcement learning.\\\" arXiv preprint arXiv:1710.11248 (2017).\\n\\n[9] Xiong, Wei, et al. \\\"Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint.\\\" Forty-first International Conference on Machine Learning. 2024.\\n\\n\\n-----\\n\\n**Once again, we thank our reviewer for their insightful comments in improving our paper. Should there be any leftover concerns, please let us know and we will do our utmost to address them!**\"}", "{\"title\": \"Author Response to Reviewer iqxv\", \"comment\": \"Thank you for the further feedback! Hope everything went smoothly :)\\n\\n## On BC and FKL\\n\\nThank you for the clarification! We now better understand the reviewer\\u2019s point, and we agree the reviewer is correct. 
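To make the mass-covering vs. mode-seeking contrast discussed above concrete, here is a toy numerical check (purely illustrative; the distributions are made up): forward KL, the SFT / maximum-likelihood objective, favors a candidate that spreads mass over both modes of a bimodal target, while reverse KL favors a candidate that concentrates on a single mode:

```python
import math

p       = [0.48, 0.02, 0.02, 0.48]   # bimodal "demonstration" distribution
q_mode  = [0.94, 0.02, 0.02, 0.02]   # mode-seeking candidate: covers one mode
q_cover = [0.30, 0.20, 0.20, 0.30]   # mass-covering candidate: spreads over both

def kl(a, b):
    # KL(a || b) for discrete distributions with full support
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))

# Forward KL(p||q) heavily penalizes q for putting little mass where p does,
# so it prefers the covering candidate; reverse KL(q||p) tolerates dropping
# a mode entirely, so it prefers the mode-seeking candidate.
assert kl(p, q_cover) < kl(p, q_mode)
assert kl(q_mode, p) < kl(q_cover, p)
```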
From the perspective of trajectory distribution matching, the FKL minimization directly leads to the BC objective \\u2014 maximizing the likelihood of a trajectory under the learner's policy is equivalent to maximizing the likelihood of each action taken in that trajectory. \\n\\nOn the other hand, when using the conventional occupancy measure instead of the trajectory distribution matching, connecting FKL with BC needs additional information on the dynamics model. As discussed in our previous response, in the context of token generation, this leads to a _weighted version_ of the BC objective.\\n\\n**We have made the corresponding revision in Appendix D.2 (please refer to pages 24-25) in our updated manuscript. We thank the reviewer again for the inspiring discussion.**\\n\\n----\\n## Direct methods vs. explicit reward modeling methods\\n\\nWe thank our reviewer for further raising this point! While we do not explicitly discuss the online and offline problems in our work, we believe it would be useful to thoroughly discuss direct / two-stage methods in our related work section. In the following, we compare the direct alignment methods with explicit reward modeling methods using different perspectives:\\n\\n\\n### 1. From the Perspective of Generalization\\n\\nExplicit reward modeling has been shown to generalize to OOD samples better than the direct alignment methods, both theoretically and empirically [1-2]. The central insight behind this can be attributed to the fact that learning a discriminative model can be easier than learning a generative model [3]. \\n\\n### 2. From the Perspective of Online and Offline\\n\\nThe challenge of offline-ness faced by the direct alignment methods was theoretically studied in [4]. Iterative DPO methods alleviate such a problem [5] by generating on-policy responses. The core idea of using on-policy reward models in response evaluation is also studied in [6], using a data-centric off-policy evaluation perspective. \\n\\n\\n### 3. 
From the Perspective of Reward Hacking in RLHF\\n\\nThere is another line of work that studies using SFT objectives as regularizations for the direct alignment methods [7-9]. It is worth noting that all of those works focus on preference-based alignment setups.\\nIn practice, it has been found that direct alignment methods tend to reduce probabilities in generating both preferred and dispreferred responses [10]. In direct reward modeling methods, the reward hacking problem was mainly studied from the perspective of overoptimization [11], and it has been shown that using a reward model at a 3B scale is sufficient in alleviating the problem.\\n\\nBesides the above differences, another unique challenge is the off-policy-ness of the expert demonstration dataset, and the potential reward hacking problem.\\n\\nDifferent from the preference-based learning methods --- where both preferred samples and dispreferred samples are generated by the LLM to be aligned (in an ideal case) --- in the AfD setup, expert demonstrations are inherently different from the LLM\\u2019s generations; therefore the heterogeneity leads to a potential reward hacking problem in discriminative reward modeling. This is a new challenge, and in our work we solve it with the init-SFT reward modeling method. \\n\\n### 4. From the Perspective of Extrapolation\\n\\nIn our work, the key difference we would like to highlight is the ability of **extrapolating over the learned reward model**: our work differs from SPIN and SFT toward the expert demonstration data because the demonstration dataset is considered to be the positive samples in those methods. As a consequence, the demonstration samples strictly *upper-bound the performance* of those algorithms at convergence. On the other hand, extrapolating over explicit reward models (either closed-form or through parameterization) enables further improvements. \\n\\n**We have added those discussions in Appendix A.6 (please refer to pages 20-21) in our updated manuscript. 
(highlighted with orange text)**\"}", "{\"title\": \"Author Response to the New Review (Part 2)\", \"comment\": \"## 2. Why explicit reward modeling can be superior to implicit methods (e.g., SPIN, SFT)\\n\\n(1). On the sub-optimality of demonstration data\\n\\nWe fully agree and are glad to see the reviewer resonates with our high-level insight in reward extrapolation (cf. Brown et al. 2019). The reviewer is very correct in pointing out that if the demonstrations are **optimal**, then there would be no further space for improvement.\\n\\nIn the context of LLM alignment, even the responses generated by GPT4 are far from **optimal**. On one hand, this is because the concepts of helpfulness or harmlessness themselves may not be transitive. On the other hand, GPT4 will refuse to answer many queries in the harmless dataset due to its filtering mechanism. Therefore, the key insight of Brown et al. 2019 --- that extrapolating over reward models can improve over **sub-optimal** demonstrations --- applies in our context.\\n\\nIn our previous response and also in our paper, we highlighted the core limitation of direct alignment methods without a reward model, such as SPIN: they are upper bounded by the demonstration quality, because they always consider the demonstration data to be positive, and the **current generation** as negative.\\n\\nThis can be proven by contradiction: assuming in SPIN the LLM to be aligned outperforms the expert demonstration, then, in the next iteration, the algorithm will treat those (better) generations as **negative samples** and optimize the policy to decrease the probability of generating such responses \\u2014 this contradicts the objective of increasing the probability of generating higher-quality responses.\\n\\n\\n(2). 
Policy optimization objective with our explicit reward models\\n\\nDefine \\n\\n$$r_{c}(y|x)=\\\\log \\\\bar{\\\\pi}_{SFT}(y|x) - \\\\log \\\\pi_0 (y|x)$$\\n\\nThe policy optimization objective with our explicit reward model becomes\\n\\n$$\\\\arg\\\\max_{n} r(y_n|x) = \\\\arg\\\\max_{n}(\\\\log\\\\bar{\\\\pi}_{SFT}(y_n|x) -\\\\log \\\\pi_0(y_n|x) )$$ \\n\\nIn such an objective, we can directly calculate the probabilities of generating any $y_n$ from those parameterized models; the parameters in the $\\\\pi_{SFT}$ model are frozen when calculating the probabilities, and the N samples are generated by this $\\\\pi_{SFT}$ model.\\n\\nTo understand the **theoretical interpretation of such an objective**, we note\\n\\n$\\\\max_\\\\pi KL(\\\\pi||\\\\pi_0)-KL(\\\\pi||\\\\pi_{SFT})=\\\\max_\\\\pi \\\\mathbb{E}_{y\\\\sim\\\\pi}[r_c(y|x)]$\\n\\nTherefore, optimizing the generation using BoN with regard to $r_c$ (i.e., the order statistics) can be interpreted as simultaneously maximizing the KL divergence between the order statistics and $\\\\pi_0$ and minimizing the KL divergence between the order statistics and $\\\\pi_{SFT}$. **This exactly leads to the extrapolation behavior we desired!**\\n\\n----\\n## 3. On evaluations.\\n\\n(1). The pursuit of a comprehensive comparison in our work.\\n\\nFirstly, we would like to highlight the efforts made in our work to make the results reliable \\u2014 we have used both golden-reward evaluation and GPT4-as-a-judge to provide a comprehensive evaluation of the proposed methods. Our evaluation strictly follows the best practices in the literature.\\n\\n(2). Motivation for using the HH Dataset (Harmless and Helpful tasks)\\n\\nPlease kindly let us reiterate that the most important motivation for using the Harmless and Helpful datasets is that the open-sourced golden reward models exist on those tasks, and those tasks are well-studied in the literature. 
\\n\\nOn other datasets such as UltraFeedback, policy evaluation can only be based on GPT4-as-a-judge, which is much more expensive than using open-sourced golden reward models and raises the open question of whether GPT4 is able to perform reliable evaluation. Should the reviewer, in line with much of the literature, agree with us that GPT4-based evaluation is reliable, we are happy to run experiments on UltraFeedback (which can be costly).\\n\\n(3). Can GPT4 evaluations be trusted?\\n\\nFirst of all, we would like to highlight the fact that our evaluation is based on two methods; the effectiveness of the proposed method is verified by both evaluation metrics.\\n\\n\\u201cthe BoN IRL-RM and SFT models were trained on data generated by GPT4, which GPT4 likely prefers.\\u201d **We would respectfully disagree with the reviewer on this ungrounded claim** \\u2014 the base models we used are not GPT4, and their generated contents are different from GPT4's. More importantly, we would like to highlight that the objective and takeaway of Section 4.3 is the **relative improvement** achieved from SFT to Inverse RL, rather than the **absolute performance**. \\nTherefore, the takeaway of Section 4.3 is isolated from whether or not GPT4 would prefer the contents generated by AfD on some specific demonstrations \\u2014 just as in existing literature using GPT4 evaluation.\\n\\n\\n-----\\n\\nOnce again, we sincerely thank the reviewer for taking the time to review our paper. 
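For concreteness, the Best-of-N selection rule under the closed-form reward $r_c(y|x)=\log\bar{\pi}_{SFT}(y|x)-\log\pi_0(y|x)$ discussed in Part 2 can be sketched as follows. The sequence-level probabilities below are made up for illustration; in practice they would be computed from the frozen SFT and initial models:

```python
import math

def r_c(logp_sft, logp_init):
    # closed-form reward: log pi_SFT(y|x) - log pi_init(y|x)
    return logp_sft - logp_init

# Hypothetical (log pi_SFT, log pi_init) for three sampled responses; the
# winner is the response whose likelihood rose most from pi_init to pi_SFT,
# i.e., the most "expert-like" candidate.
candidates = [
    ("y1", math.log(0.20), math.log(0.30)),  # likelihood dropped after SFT
    ("y2", math.log(0.35), math.log(0.10)),  # likelihood rose the most
    ("y3", math.log(0.25), math.log(0.20)),
]
best = max(candidates, key=lambda c: r_c(c[1], c[2]))
assert best[0] == "y2"
```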
Despite the limited time remaining, we are still eager to address any further concerns or questions you may have.\"}", "{\"summary\": \"The authors cast LLM alignment as an imitation learning problem, opening up the possibility of learning from demonstrations alone and leveraging insights from inverse RL.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"This is an exceptionally well-written paper with crystal-clear exposition and take-aways -- kudos to the authors!\"], \"weaknesses\": [\"(Minor) RLHF is usually framed as KL-constrained /MaxEnt RL, rather than the standard RL problem formulation in Eq. 2.\", \"(Minor) Another good citation for intransitive preferences in RLHF might be https://arxiv.org/abs/2401.04056.\", \"I would argue that the fact that SFT is BC is fairly well known. It also doesn't seem that surprising that doing SFT on data generated by a super high quality model works well -- the question is of course how we train such a powerful supervisor model in the first place, for which preference data still appears to be necessary. So, it's hard for me to give many points for novelty for that section of the paper.\", \"For the most preferred RM strategy (comparing $\\\\pi_{SFT}$ to $\\\\pi_{init}$), we know the optimal discriminator in closed form -- it is precisely $d^{\\\\star}(x, y) = \\\\log \\\\pi_{SFT}(y|x) - \\\\log \\\\pi_{init}(y|x)$ (if a logistic loss is used, otherwise could be the density ratio in Eq. 9). I don't see the added value in actually learning a separate discriminator for the best-of-N sampling procedure -- it seems like we could only do worse than just using the log ratio.\", \"It is a bit disappointing that the final policy requires a BoN step -- I would have liked to see the results of a proper policy optimization procedure on the learned RMs.\"], \"questions\": \"1. 
If it is computationally feasible, could you compare to the closed form for the optimal discriminator in your BoN experiments?\\n\\n2. If I am understanding correctly, if you used the \\\"golden\\\" RM for BoN, you'd get a win rate of 1?\\n\\n3. Also, is the model you're sampling from here just the result of SFT on the demos from $\\\\pi_{\\\\beta}$, aka $\\\\pi_{SFT}$? If so, is there a theoretical interpretation of the effect of the BoN procedure with the \\\"closed form\\\" discriminator I mention above?\\n\\n4. Could you provide more explanation for why the win rate goes down with higher N for several lines in figure 4?\\n\\n5. If you have space, could you move up the comparison to SPIN to the main paper? I think it is quite interesting and under-appreciated in the broader community -- I have struggled to convince people of precisely the point you are making.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies LLM alignment from demonstrations. They used ideas from inverse RL, proposed the AfD (Alignment from Demonstrations) approach, and showed promising performance in empirical evaluation.\", \"strengths\": \"This paper brings different concepts together into the alignment problem and provides interesting viewpoints.\", \"weaknesses\": \"The biggest issue with this paper is about its novelty. Using inverse RL in the alignment problem has been studied for a while, and some reviewers are concerned about what new contribution this paper brings to the community.\\n\\nThe reviewers remained unconvinced during the rebuttal period. I agree with their concerns about novelty, which isn't well addressed in the current version. 
Therefore, this paper isn't ready for publication at ICLR in its present form.\", \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers seem to have significant disagreement about the assessment, especially regarding the novelty and contribution of this paper when compared with existing work. There has been extensive discussion during the rebuttal period. However, the disagreement and concerns still remain.\"}", "{\"title\": \"Dear Reviewer 5fua\", \"comment\": \"Once again, we thank reviewer 5fua for their time and effort devoted to reviewing and improving our paper.\\n\\nIn our author response **posted 11 days ago**, we have addressed each of our reviewers' concerns in a detailed, point-by-point manner. We hope the responses have addressed the outstanding questions and the reviewer would consider raising their score if those questions have been appropriately answered. \\n\\nAs the author-reviewer discussion phase nears its conclusion, we would like to kindly ask if there are any remaining concerns or additional feedback that we can address. In the limited time remaining, we are committed to making every effort to resolve any outstanding issues!\"}", "{\"summary\": \"This paper presents AfD, a sort of framework for learning from demonstrations in LLMs. The authors do a number of different experiments within this framework on a number of different methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The work attempts to unify a number of diverse ideas, which is helpful\", \"The work makes nice use of different colored boxes so things can easily be found.\"], \"weaknesses\": [\"**Novelty**\", \"I am unsure what exactly is novel in this work. 
To my knowledge nothing the authors introduce is explicitly new, or has new experiments.\", \"Sec 2.2: This MDP breakdown for LLMs is well known\", \"Sec 3.1: It is well known that SFT = BC\", \"Sec 3.1: I have not looked into the discriminator objective to see if it is in prior work, but the authors don't use it in experiments.\", \"Sec 3.2: The idea of using the model generations as negatives is done in SPIN and in DITTO (Show don't tell, Shaikh et al.) DITTO also does something similar to this paper by SFTing the model first before sampling.\", \"Sec 4.1: These experiments show SFT > RLHF on the same number of demos. I don't find this surprising, similar results are also in Shaikh et al.\", \"Sec 4.2: I think the section may be where the authors find novelty?\", \"Overall, the paper seems to focus a lot on unifying different ideas that have existed for a while. While this is OK, the paper is not written as if it were a survey and at present it sounds like the authors are claiming AfD to be some new framework that has not been extensively studied before.\", \"**Writing**\", \"The paper is a bit hard to follow since there are so many subjects. I was initially confused as to what was being evaluated in each experimental section. For example, it was initially unclear to me what the different baselines were in Sec 4.1.\", \"**Experiments**\", \"the experimental results at present do not seem compelling.\", \"Sec 4.1: It makes sense that SFT with demos does better than RLHF. The amount of data isn't reported on however, and so it's unclear what the cost of data collection vs performance tradeoff is.\", \"**Missing Citations**\", \"This work brings together a lot of different ideas, which is great, but the authors seem to miss a ton of related work which has already covered very similar ideas:\", \"IRL: Ziebart is the OG in maxEnt IRL.\", \"From r to Q* by Rafailov et al. 
- Token level MDP\", \"Show don't tell: Aligning LLMs with demonstrated feedback by Shaikh et al has very similar ideas\", \"Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data by Tajwar et al. covers mode seeking behavior.\", \"Imitating language via scalable inverse reinforcement learning by Wulfmeier et al for IRL on LLMs\", \"## Recommendations\", \"I would recommend that for a future draft the authors either a) refocus the draft to be a survey on applying concepts traditionally used in IRL to language models or b) focus on the reward modeling experiments.\"], \"questions\": \"Can the authors summarize the main contribution of the work? Is there something I am missing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents Alignment from Demonstrations (AfD), a new method for aligning large language models (LLMs) using high-quality demonstration data rather than traditional preference-based reinforcement learning from human feedback (RLHF). AfD addresses issues of noisy labels, cost, and privacy by framing alignment within a Markov Decision Process, applying insights from forward and inverse reinforcement learning to create a computationally efficient, trajectory-based mechanism. The approach is validated through theoretical and empirical analyses, showing improvements on \\u201cHarmless\\u201d and \\u201cHelpful\\u201d tasks with the Anthropic HH-RLHF dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Use of RL Concepts:** The authors effectively integrate RL concepts\\u2014such as inverse RL, reverse KL, and forward KL divergence\\u2014into the LLM alignment framework. This combination with RLHF provides a fresh, rigorous perspective on alignment, enriching AfD\\u2019s theoretical foundation and adaptability.\\n\\n2. 
**Reduced Dependence on Preference-Based Data:** By bypassing preference data requirements, AfD proposes a scalable alternative that minimizes interaction with human annotators while still achieving alignment, making it potentially more feasible for large-scale applications.\", \"weaknesses\": \"1. **Overly Complex Presentation:** The paper\\u2019s presentation is somewhat dense, with extensive theoretical detail that can make it harder to grasp the core contributions. A more streamlined focus on the main insights and practical implications of AfD could enhance clarity and accessibility for readers.\\n\\n2. **Potential Overlap with Existing Methods:** The unique contribution of AfD relative to SPIN isn\\u2019t entirely clear. SPIN leverages a DPO-like objective to align LLMs directly without relying on a reward model, while AfD introduces alignment through a reward model. Clarifying the specific advantages or improvements AfD provides over methods like SPIN would strengthen the paper\\u2019s case for its distinct value.\\n\\n3. **Efficiency Clarification Needed:** Although the paper suggests that AfD offers greater efficiency than traditional RLHF, it\\u2019s unclear where these efficiency gains are realized. The pseudocode presented appears similar to RLHF workflows, with steps involving reward model training and optimization. Providing more concrete details on how AfD reduces computational overhead or training time compared to RLHF would clarify the practical benefits of this approach.\", \"questions\": \"1. Could the authors clarify why they chose to rely primarily on the golden reward model for evaluation rather than using GPT-4 as a critic throughout in Section 4.1? Would the golden reward model alone provide a sufficiently fair or robust assessment of alignment performance, especially given GPT-4\\u2019s nuanced evaluation capabilities?\\n\\n2. 
Could the authors clarify the key distinction between AfD and SPIN, particularly regarding their reliance on reward models? From my understanding, SPIN uses a DPO-like objective to align LLMs directly without a reward model, whereas AfD relies on a reward model for alignment. Given this, could the authors elaborate on the specific advantages AfD provides over SPIN in terms of contribution to the field?\\n\\n3. The authors mention that AfD is more efficient than traditional RLHF methods, but it would be helpful to understand precisely where these efficiency gains come from. Could the authors specify which parts of the AfD process contribute to this claimed efficiency, particularly in comparison to standard RLHF?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Feedback Welcome!\", \"comment\": \"Once again, we would thank reviewer ZCzk for their time and effort devoted to reviewing and improving our paper.\\n\\nIn our author response **posted 11 days ago**, we have addressed each of our reviewers' concerns in a detailed, point-by-point manner. We hope the responses have addressed the outstanding questions and the reviewer would consider raising their score if those questions have been appropriately answered. \\n\\nAs the author-reviewer discussion phase nears its conclusion, we would like to kindly ask if there are any remaining concerns or additional feedback that we can address. In the limited time remaining, we are committed to making every effort to resolve any outstanding issues!\"}", "{\"title\": \"Author Response to Reviewer ZCzk (Part 2)\", \"comment\": \"## Q3: Clarification on the difference between RLHF and AfD.\\n\\nThe reviewer is correct in pointing out that both the workflow of RLHF and AfD require an explicit reward modeling step, followed by a policy optimization step. 
From the computational efficiency perspective, AfD is comparable to conventional RLHF approaches. However, we would like to highlight the superiority of AfD over conventional RLHF, which lies in the fact that AfD works with only demonstration datasets, while RLHF requires preference-based annotations. \\n\\nPlease kindly let us reiterate that in practice, RLHF can suffer from the difficulty of noisy labels, high annotation costs, and privacy concerns. In contrast, AfD does not suffer from those challenges and can effectively build reward models and align LLMs.\\n\\nThe 4 challenges in our introduction section elaborate on those key difficulties for RLHF that can be solved by AfD, and Table 3 explains how those methods differ from the others from an RL taxonomy. \\n\\n----\\n**References**\\n\\n\\n[1] Dong, Hanze, et al. \\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[3] Liu, Tianqi, et al. \\\"RRM: Robust Reward Model Training Mitigates Reward Hacking.\\\" arXiv preprint arXiv:2409.13156 (2024).\\n\\n[4] Liu, Tianqi, et al. \\\"Statistical rejection sampling improves preference optimization.\\\" arXiv preprint arXiv:2309.06657 (2023).\\n\\n[5] Coste, Thomas, et al. \\\"Reward model ensembles help mitigate overoptimization.\\\" arXiv preprint arXiv:2310.02743 (2023).\\n\\n[6] Yang, Rui, et al. \\\"Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment.\\\" arXiv preprint arXiv:2402.10207 (2024).\\n\\n[7] Xu, Wenda, et al. \\\"Pride and prejudice: LLM amplifies self-bias in self-refinement.\\\" Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024.\\n\\n[8] Chiang, Wei-Lin, et al. 
\\\"Chatbot arena: An open platform for evaluating llms by human preference.\\\" arXiv preprint arXiv:2403.04132 (2024).\\n\\n[9] Dubois, Yann, et al. \\\"Length-controlled alpacaeval: A simple way to debias automatic evaluators.\\\" arXiv preprint arXiv:2404.04475 (2024).\\n\\n[10] Zheng, Xiaosen, et al. \\\"Cheating automatic llm benchmarks: Null models achieve high win rates.\\\" arXiv preprint arXiv:2410.07137 (2024).\\n\\n-----\\n\\n**Once again, we thank our reviewer for their insightful comments in improving our paper. Should there be any leftover concerns, please let us know and we will do our utmost to address them!**\"}", "{\"title\": \"Response to the authors\", \"comment\": \"I would like to thank the authors for their response. I found the clarificaitons helpful, but think we are still not aligned on a few points.\\n\\n**Motivations of the Framework**\\nI agree with the authors that some background is necessary for the paper. However, my issues with this part of the paper remains as follows:\\n1. The authors posit in their response and the paper text that this framework is a new contribution. I think we disagree on this point.\\n2. The length of discussion on the framework leaves less time to discuss the later contributions in text.\\n3. The presented framework is not well connected with the method the authors propose later in the paper. For example, in their response the authors detail the objective that the experiments optimize for, which is not exactly the reverse-KL. Why is this objective not in the main text of the paper? I think there should be a discussion in the main text.\\n\\n\\n**On Explicit Reward Model**\\nI agree with the point that explicit reward modeling can be better. I also thank the authors for detailing the proof by contradiction, and providing the objective. 
I believe these should be more central components of the paper, as at present they are not reflected within the text.\\n\\nHowever, I think a few points are missing from the paper:\\n* The authors should be explicit that they are only providing a proof by contradiction that SPIN cannot achieve super-demonstrator performance. I had the impression that there was a theoretical proof that the authors' method could achieve super-demonstrator performance. While empirically this seems to be the case, the distinction between theoretical and empirical results should be clear. \\n* the steps to go from reverse KL to the authors' objective should be in the paper (I think I see how it is done)\\n\\n\\n**Evaluations**\\n\\n1) The distinction I am making is that while other works use GPT-4 as a critic, they do not necessarily use GPT-4 to generate the data. A more grounded approach in my mind would be to take a demonstration / SFT dataset, and label preferences between them. Then the demonstrations and preferences come from the same distribution, and one is not biased towards the evaluator. There have been several works (including DITTO) which note the bias of GPT-4 as a critic when judging its own outputs.\\n\\n2) Other datasets: the authors might consider showing their results on other common datasets used for SFT, or even datasets in preference-based learning, like TLDR or even IMDB sentiment. Again, the key distinction I am making is that the authors' experiments use GPT-4 to generate the demonstrations, then use GPT-4 as a critic. I am arguing that using demonstrations that are not generated by GPT-4 would be more reliable when evaluating with GPT-4.\\n\\n3) I might be misunderstanding the draft. However, I was basing this discussion off of this statement: \\u201cDemonstrations were generated using the OpenAI GPT-4 API, with detailed prompting strategies\\u2026\\u201d where the prompting strategy inputs the dataset and asks GPT-4 to complete it. 
Then, the AfD models use these GPT-4 demonstrations for reward learning. In this sense, the AfD models are trained on data generated by the critic, while the baselines are not. Please correct me if I am wrong.\\n\\n\\n**Overall Recommendation**\\nI agree with many of the points the authors bring up in their responses, and find the responses very useful. My recommendation remains that the authors re-vamp the writing of the paper to focus less on the \\u201cframework\\u201d such that their method can be adequately explained within the main text of the paper. Several details and contributions discussed here would make the main body of the paper much stronger, but are omitted because of space.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for the further clarification!\\n\\n---\\n\\n### - Occupancy measure matching:\\n\\nWe see the reviewer's point, and we fully agree with the reviewer on the equivalence between FKL trajectory matching and BC. \\n\\nWe thank the reviewer for using the tree structure to characterize the problem class. For occupancy measure matching, we would like to use an example to clarify our point. \\nConsider the case where there is only one prompt, $\\\\texttt{7+2}$, and the expert generates 2 tokens, '=', '9'. In this case, there are 2 actions and corresponding transitions: \\n$s_0 = \\\\texttt{7+2}, a_0 = \\\\texttt{=}, s_1 = \\\\texttt{7+2=}, a_1 = \\\\texttt{9}, s_2 = \\\\texttt{7+2=9}$. And $\\\\rho_E(s_1)=\\\\rho_E(s_2)=1/2$. \\n\\nWith the tree structure (deterministic dynamics), we can explicitly write $\\\\rho_\\\\pi(\\\\texttt{7+2=9}) = \\\\pi(\\\\texttt{9}|\\\\texttt{7+2=})\\\\pi(\\\\texttt{=}|\\\\texttt{7+2})p(\\\\texttt{7+2})$, and $\\\\rho_\\\\pi(\\\\texttt{7+2=}) = \\\\pi(\\\\texttt{=}|\\\\texttt{7+2})p(\\\\texttt{7+2})$. 
When uniformly sampling from the states for FKL occupancy measure matching, both states get the same probability of being selected, yet $\\\\pi(\\\\texttt{=}|\\\\texttt{7+2})$ is being optimized more frequently than $\\\\pi(\\\\texttt{9}|\\\\texttt{7+2=})$. This leads to a different objective than trajectory distribution matching (BC) --- regardless of the dynamics --- as pointed out by the reviewer.\\n\\nIn more general cases without the tree structure (or the knowledge of dynamics), [1] illustrated how the standard BC objective is different from FKL occupancy measure matching (please refer to Table 1 of [1], and Hypothesis 1 --- which was later on empirically verified in the paper.)\\n\\n### - Discussion on Generalization\\n\\nWe thank the reviewer for their further clarification and for their affirmative comments on the importance and contribution of our study. We have revised our manuscript accordingly.\\n\\n----\\n\\n**Reference**\\n\\n[1] Ghasemipour, Seyed Kamyar Seyed, Richard Zemel, and Shixiang Gu. \\\"A divergence minimization perspective on imitation learning methods.\\\" Conference on robot learning. PMLR, 2020.\\n\\n---\\nOnce again, we thank our reviewer for their diligent effort in reviewing and improving our paper. And please let us know if further clarifications are needed!\"}", "{\"title\": \"Author Response to Reviewer iqxv (Part 1)\", \"comment\": \"We thank the reviewer for their encouraging comments on our presentation and for their insightful comments. We would respond to each of the concerns and questions in turn:\\n\\n---\\n\\n## 1. MaxEnt RL / KL Constraints\\n\\nThank you for highlighting this point! In Section 2.2, we intentionally omitted the notion of KL constraints to maintain conciseness and simplify the notation. 
KL constraints or MaxEnt regularization can indeed be interpreted as an additional (omitted) objective beyond alignment in this context.\\n\\nIn this work, we avoided incorporating KL terms in both entropy constraints and alignment objectives to minimize potential confusion for readers. However, integrating our method and framework into a MaxEnt/KL-regularized setting is a promising future direction, given the demonstrated successes of such approaches.\\n\\n----\\n\\n## 2. Related Work\\n\\nWe thank the reviewer for sharing the related work discussing the intransitivity of RLHF! We have added the discussion in our revision (please see Page 1, line 45 in our updated manuscript).\\n\\n----\\n\\n## 3. SFT = BC = action matching is well known, but BC = action matching = trajectory distribution matching is **new**\\n\\nFrom an RL perspective, we agree with the reviewer that equating SFT to BC is not surprising.\\n\\nHowever, in RL literature, BC is typically understood as \\\"action (marginal) distribution matching,\\\" which is known to suffer from the compounding error problem. Interestingly, we highlight that **in the context of LLM alignment, BC goes beyond action distribution matching to also include trajectory distribution matching**. This arises from the deterministic concatenation of sentences and tokens during generation (i.e., state transitions). 
This unique aspect allows us to unify SFT and IRL-based AfD methods within a single distribution matching framework.\\n\\nWe believe this novel equivalence between marginal distribution matching and trajectory distribution matching is an important contribution to the community.\\n\\nFurthermore, this equivalence, along with the use of forward KL in establishing it, **naturally motivates our exploration of using reverse KL for distribution matching within the same framework.** This enables us to explain the differing alignment behaviors and properties induced by these distinct distribution-matching objectives.\"}", "{\"title\": \"Author Response to the New Review (Part 1)\", \"comment\": \"We sincerely thank reviewer wScp for their consideration of re-evaluating our paper!\\n\\nAlthough the author response window is narrowing down and our response may be limited by the approaching deadline, we deeply appreciate such an opportunity to further clarify the contribution and a few other aspects of our work. In the following, we would respond to the reviewer\\u2019s comments grouped by topics.\\n\\n----\\n\\n## 1. Motivations of introducing the framework before the technical method.\\n\\nWe thank our reviewer for their affirmative comment on our method section (Sec. 3.2)! We agree the explicit reward modeling part, together with the technique we introduced to overcome the reward hacking challenge, are the major technical contributions of our work. However, we would like to respectfully argue the current presentation design of the paper is motivated by the following considerations:\\n\\n(1). 
For a self-contained paper.\\n\\nProviding the prerequisite knowledge of Inverse RL and the MDP formulation is necessary for the follow-up discussion of, e.g., distribution matching in our paper.\\n\\nWe understand those prerequisites might be straightforward to our knowledgeable reviewers, yet for our general potential readers who are not familiar with MDPs and RL, our work is self-contained such that any reader familiar with the basic concepts of LLMs is able to understand our paper. \\n\\n(2). Connecting research areas and inspiring future innovations.\\n\\nBridging the research area of Inverse RL and Imitation learning with the general task of alignment \\u2014 including both alignment from preference feedback and demonstrative feedback \\u2014 is useful to the community. While the idea is not totally new by itself and has been discussed in the pre-LLM era (e.g., as we have discussed in the related work section, and pointed out by the reviewer), we believe it is beneficial to highlight its importance in LLM alignment setups. It is worth letting more people understand the connection between RLHF and AfD through the lens of Inverse RL: **RLHF and AfD are two different instantiations of Inverse RL for LLM alignment; the difference lies in how the reward signal is generated**.\\n\\nWe would highlight that such a unified framework can inspire potential future innovations beyond RLHF or AfD \\u2014 the essential idea is not to use a specific data type for alignment, but rather to leverage different types of datasets to build the reward model. The literature currently focuses mainly on RLHF \\u2014 using preference-based feedback \\u2014 except for the few recent exceptions (concurrent work) pointed out by our reviewer.\\n\\n\\n(3). Disclosing the motivation of the proposed method.\\n\\nDescribing the token generation process using the MDP language is essential because it can be used to formally define the _forward and inverse process_. 
Within such a framework, we are able to formally discuss the alignment problem using the RL and Inverse RL perspective, introduce the distribution matching methods that have been applied in the literature, and adapt those methods according to the property of the AfD task (Sec. 3.2)\"}", "{\"title\": \"Re:\", \"comment\": \"Thank-you for the thorough response!\", \"re\": \"5 -- Thanks, can you more explicitly mention these results in the main paper?\"}", "{\"title\": \"Re:\", \"comment\": \"Hi,\\n\\nApologies for the slow response on my part, crazy week.\", \"re\": \"offline vs. online: I think I'm asking a subtler question here: for both imitation and preference-based methods, one can either use the data to learn a policy or to learn a reward model. Notably, we use the same data for both, and for LLM fine-tuning, one usually uses the same model class for both (e.g. for learning from preferences, both RMs and DPO start from an SFT checkpoint). Given we're using the same data and the same hypothesis class, there isn't a clear statistical reason RMs should generalize better than policies. And, if this RM doesn't generalize better, there is no reason to believe that an online method should do better than an offline method -- there is no \\\"magic\\\" in interaction. This is different than the more classic IRL setting, where one usually thinks of the reward as being learned on top of a \\\"smaller\\\" space of moments, which means you need fewer samples to learn well / generalize better than directly learning a policy. So, I think what I'd like to understand is what, fundamentally, about this problem makes it such that it is easier to learn a RM than a policy from essentially the same data.\"}", "{\"title\": \"Author Response to Reviewer iqxv (Part 3)\", \"comment\": \"## 5. Policy optimization\\n\\nWe thank the reviewer\\u2019s interest in the results with parameterized policy optimization methods (e.g., PPO). 
In our manuscript, those results were deferred to Appendix E.7. \\n\\nIn Figure 6, we conducted policy optimization using PPO and compared the results with those of BoN. We find that PPO outperforms BoN for a small number of samples (N = 50) and can achieve on-par performance with BoN at N = 500. \\n\\n---\\n\\n## 6. Golden reward and win rate\\n\\nYes, if we use the golden reward model rather than the learned ones to pick the best response out of N responses and evaluate it, it will outperform all other competitors and achieve a win rate of 1.0.\\n\\n\\n----\\n\\n## 7. The ablation study results in Figure 4\\n\\nIn Figure 4, we experimented with different reward modeling choices to highlight the challenges analyzed in Section 3.2 \\u2014 the heterogeneous problem (also known as reward hacking) in reward modeling. \\n\\nWhen the curves go down, it means using those reward modeling choices cannot lead to an effective reward model \\u2014 those reward models hacked the discriminative tasks and are not good choices for reward modeling.\\n\\nTo be more explicit, this is because those reward models focus on the incorrect aspects of responses in identifying whether they are positive or negative samples. Therefore, optimizing toward increasing those reward values cannot lead to an increase in the golden reward values.\\n\\n----\\n\\n## 8. Discussion on SPIN / Direct methods \\n\\nThank you for the suggestion, and we are glad to hear that our insights resonate with you. We have moved the discussion on SPIN to follow Section 3.2, as we agree that this adjustment enhances the clarity of our ideas and better distinguishes our work from related studies.\\n\\n\\n-----\\n\\n**Once again, we would like to thank the reviewer for their insightful comments and thorough reading to improve our paper. 
Should there be any leftover concerns, please don\\u2019t hesitate to let us know and we will do our utmost to address them!**\"}", "{\"title\": \"Author Response to Reviewer iqxv (Cont. References)\", \"comment\": \"**References**\\n\\n[1] Lin, Yong, et al. \\\"On the limited generalization capability of the implicit reward model induced by direct preference optimization.\\\" arXiv preprint arXiv:2409.03650 (2024).\\n\\n[2] Xu, Shusheng, et al. \\\"Is dpo superior to ppo for llm alignment? a comprehensive study.\\\" arXiv preprint arXiv:2404.10719 (2024).\\n\\n[3] Ouyang, Long, et al. \\\"Training language models to follow instructions with human feedback.\\\" Advances in neural information processing systems 35 (2022): 27730-27744.\\n\\n[4] Xiong, Wei, et al. \\\"Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint.\\\" Forty-first International Conference on Machine Learning. 2024.\\n\\n[5] Dong, Hanze, et al. \\\"Rlhf workflow: From reward modeling to online rlhf.\\\" arXiv preprint arXiv:2405.07863 (2024).\\n\\n[6] Sun, Hao, et al. \\\"When is off-policy evaluation useful? a data-centric perspective.\\\" arXiv preprint arXiv:2311.14110 (2023).\\n\\n[7] Liu, Zhihan, et al. \\\"Provably mitigating overoptimization in rlhf: Your sft loss is implicitly an adversarial regularizer.\\\" arXiv preprint arXiv:2405.16436 (2024).\\n\\n[8] Gui, Lin, Cristina G\\u00e2rbacea, and Victor Veitch. \\\"BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling.\\\" arXiv preprint arXiv:2406.00832 (2024).\\n\\n[9] Pang, Richard Yuanzhe, et al. \\\"Iterative reasoning preference optimization.\\\" arXiv preprint arXiv:2404.19733 (2024).\\n\\n[10] Pal, Arka, et al. \\\"Smaug: Fixing failure modes of preference optimisation with dpo-positive.\\\" arXiv preprint arXiv:2402.13228 (2024).\\n\\n[11] Gao, Leo, John Schulman, and Jacob Hilton. 
\\\"Scaling laws for reward model overoptimization.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n----\\n**Once again, we would sincerely thank the reviewer iqxv for their time and effort devoted to improving our paper! \\nWe are still eager to use the remaining discussion period to fully address any additional concerns the reviewer may have.**\"}", "{\"title\": \"New Review\", \"comment\": [\"## New Review\", \"The authors present AfD, a general framework for alignment using demonstrations. That unifies SFT and inverseRL by looking at different directions of the KL divergence.\", \"**Strengths**\", \"Several works have demonstrated that the quality of the reward model is extremely important to performance. I think the authors' experiments address an interesting and important question.\", \"I think the experiments in Section 4.2 are very interesting \\u2013 demonstrating how the choice of which data to use for a reward model impacts down-stream performance.\", \"**Weaknesses**\", \"The authors work primarily focuses on their introduced framework for distribution matching in the Token level-MDP. However, 1) this general framework/knowledge has been known for a long time in RL/imitaiton literature, so I am uncertain if this can be claimed as a contribution. 2) even within the landscape of LLMs, several prior works have used the distribution matching framework as justification for their methods, citing literature in imitation learning. This casts doubt as to whether or not the author's proposed unified framework is adding anything. At present, I do not believe the author's unified framework constitutes a novel contribution, as prior works have already used the same ideas, but did not feel it was necessary to present such a framework. I would be happy to see a survey or overview paper on this topic, but I am unsure that ICLR is the best venue for it. 
I again believe the work would be stronger if the authors focused on the reverse-KL case, and how to best learn a reward model for it. For citations, I refer the authors to my response to their rebuttal.\", \"The authors claim to be doing Inverse-RL with their method in Section 3.2 and Section 4.2, but they learn the reward model / discriminator from a static dataset. Since the inverseRL objective requires the policy to be optimized under its own distribution, it's unclear how a static discriminator will actually optimize for the objective the authors claim. This is a bit strange as the authors for most of the work claim that they are introducing a framework for IRL, but then don\\u2019t actually seem to do inverse RL in the end.\", \"The authors perform their experimental evaluation on only a single task. Most contemporary papers have a more extensive evaluation.\", \"The experimental evaluation raises a number of questions, which I bring up in the \\u201cquestions\\u201d section of my review. I think these questions need to be discussed to gain confidence in the experimental section.\", \"**Questions**\", \"My questions are largely concerning the experimental evaluation.\", \"The authors use GPT-4 to generate demonstration data. However, in Section 4.1 the authors compare finetuning on data generated by GPT-4 (SFT AfD, DPO AfD) to finetuning on data from the original dataset (SFT Preferred, DPO Preference). The only conclusion I can draw from this experiment at present is that the GPT-4 data is better, preferred by the gold RM, or easier to fit than the original data. Why is comparing the original data and data distilled from GPT-4 a valid comparison on this task? Especially because GPT-4 has already likely been trained to be \\u201chelpful and harmless\\u201d.\", \"Again, for Section 4.3 the authors use GPT-4 as a critic. However, the BoN IRL-RM and SFT models were trained on data generated by GPT-4, which GPT-4 likely prefers. 
Is there a reason why this is a valid comparison? Would GPT-4 not be biased to its own responses, and thus prefer SFT/BoN IRL-RM to the BT-RM?\", \"Could the authors show experiments that work on another dataset?\", \"Could the authors clarify what the theoretical and empirical evidence of extrapolation beyond the demonstrator is, as this was repeatedly brought up in the rebuttal? The proof of this in Brown et al. 2019 made specific requirements on the sub-optimality of the demonstrator, i.e., there were requirements on the demonstrator\\u2019s average performance in relation to the true optimal policy.\", \"Could the authors explain the relationship between the objective that they actually solve for in the experiments (Init-SFT) versus the theoretical section (Reverse-KL occupancy matching)? I understand that occupancy matching can be done with a discriminator/RM, but the distribution on which the discriminator is trained is important.\", \"**Concluding Thoughts**\", \"I still think that a paper which focused on the best way to learn a discriminator from demonstrations would be impactful and insightful. However, at present the work spends a lot of time / focus on other parts, and in my opinion as a result does not give the \\u201clearning a discriminator\\u201d part proper treatment. Moreover, I still have questions regarding the experimental evaluation which underpins this, as the authors base many of their conclusions off comparisons between models trained on GPT-4 demonstrations and models trained on the original HH data. This seems like it might be problematic, particularly when using GPT-4 as a critic.\"]}", "{\"title\": \"Response to authors\", \"comment\": \"I would like to thank the authors for their detailed responses to my questions. 
I have read the paper again and provide both a) responses to the authors\\u2019 rebuttal and b) a new review of the work and an updated score.\\n\\n## Response to authors\\n\\n**On Novelty**\\n\\n*Timeliness*:\\nI apologize, my first response may have placed an over-importance on the relationship between the timing of prior work and this current paper. That being said, I am unable to reproduce the timelines given by the authors. \\n* I have placed r to Q* at Apr 18, 2024: making the authors\\u2019 work a month later, not a month earlier.\\n* I have placed DITTO at June 2nd, making the authors\\u2019 work only a handful of days earlier, not 3 months after.\\n* I place Tajwar et al. at Apr 22, making the authors\\u2019 work a month later, not a month earlier.\\n\\nHowever, I don\\u2019t consider the exact timelines here as a crucial component of my assessment of the work. I simply meant to show that several ideas present in this paper were published in prior work, though not in the exact same form.\\n\\nThere are even other earlier works that do Inverse-RL in the token level MDP, like:\", \"sequence_match\": \"Imitation learning for autoregressive sequence modeling by Cundy and Ermon in 2023 uses Inverse RL in the token-level MDP.\", \"on_policy_distillation_of_language_models\": \"LEARNING FROM SELF-GENERATED MISTAKES by Agarwal et al., which was published at ICLR 2024.\\n\\n*Discussion of SPIN*\\nThanks for bringing this to my attention. I did not read this initially as it was in the appendix. I understand that DPO makes assumptions about the data, and that the authors\\u2019 approach uses a different model than SPIN to generate negatives.\\n\\n*Distinct Focus and Contribution*\\nI understand that DITTO is focused on few-shot alignment, and SPIN / Wulfmeier et al. are focused on SFT or alignment. 
This response left me with some questions, as it is just stated that this work is designed to address \\u201cgeneral alignment from demonstrations,\\u201d which I assume is just learning from demonstrations. This is a big area, and after re-reading the paper I believe some of the authors\\u2019 contributions are new, and some are not. I\\u2019ll discuss this more later.\\n\\n*From IRL Foundation to LLM Alignment*\\nI agree that the authors have presented new ideas in their work. I just think a large part of the work, as it currently stands, is focused on reiterating that we can consider each step of generation an MDP, that forward KL = BC, and that reverse KL = IRL, which can be optimized adversarially. I am interested in the authors\\u2019 new experiments on how to optimize these objectives, and ablations over the way in which the discriminator is learned. I wish the paper focused on these as the contribution, which I believe are novel and useful, rather than the general framework.\\n\\n**Clarifications of Key Contributions**\\n\\n1. Framework: This unified framework for distribution matching has existed for a long time outside of LLMs, and works in LLMs have already used the unified framework, even though it was not explicitly written up.\\n2. Sec 3.2: I believe this is interesting and is a new contribution. I wish the authors spent more time discussing this, and made this more of a core contribution of the work. However, section 3.2 begins only on page 6. I think the work would be much improved if it focused primarily on how to effectively learn a reward function from demonstrations for inverse RL, and on ablating and explaining these choices.\\n\\nRegarding timeline, I respectfully disagree on the authors\\u2019 points:\\n4. \\u201cThe first to introduce the idea of LLM alignment from demonstrations\\u201d I think this was done in SeqMatch (f-divergence minimization) and perhaps SPIN, which were both definitively published before this work.\\n5. 
"Inverse-RL Perspective": I think this was done by SeqMatch and other works which took the distribution matching perspective over occupancy measures, but did not call it inverse RL.
6. Previous works have shown IRL is effective, but I don't believe this diminishes the authors' contributions, which show that IRL can be effective! I think instead the authors should re-focus their work around their contributions.

**On Experiments**

* I see that the authors get better performance using demonstrations than pairwise preferences. I have further questions and weaknesses about this that I detail in my new review of the work.
* "We theoretically and empirically highlighted super-demonstrator performance":
1. Could the authors kindly point me to where the theoretical evidence of super-demonstrator performance is, and where the proof is that SPIN cannot achieve it?
2. Could the authors clarify what they are empirically measuring super-demonstrator performance with respect to? For example, doing better than the preferences isn't evidence of this, because the demonstrations weren't generated by the same policy!

*Focus on Reward Model Experiments*
These are interesting, and I appreciate the authors bringing these up again. I have discussed them in my new review of the work.

---
**Thanks for your further feedback!**

We thank the reviewer iqxv for their further feedback!

----
## On BC and Forward KL

We thank our reviewer for raising their further question on the relationship between BC and the trajectory forward KL. 
To better support the discussion, we would like to formally highlight the differences below:

In BC, the objective is forward distribution matching of **actions**, i.e.,

$$J_{BC}(\pi)=-\mathbb{E}_{(s,a)\sim \beta } \left[\log \pi(a|s) \right]$$

In **general trajectory distribution matching**, the objective is

$$J_{FKL\text{-}\tau}(\pi) = \mathrm{KL}\left(d^\beta(\tau|s_0)\,\|\,d^\pi(\tau|s_0)\right) = -\mathbb{E}_{(\tau|s_0)\sim\beta} \left[\log d^\pi(\tau|s_0) \right] + \mathrm{const.},$$

where the constant $\mathbb{E}_{(\tau|s_0)\sim\beta}[\log d^\beta(\tau|s_0)]$ does not depend on $\pi$, and we use $\tau|s_0$ to denote the trajectories starting from state $s_0$: $\tau|s_0 = \\{s_0, a_0, s_1, a_1, \ldots\\}$. In general, such an objective is intractable and it is different from $J_{BC}(\pi)$, since the calculation of the trajectory density requires access to the transition probabilities $p(s'|s,a)$ for all $s, a$.

In the MDP of LLM token generation (in alignment), we know $p(s'|s,a) = 1$ for $s'=\mathrm{concat}(s,a)$, and $0$ otherwise. **Therefore, only with this specific structure can we show the equivalence between trajectory distribution matching using forward KL and BC** (cf. Equation (6) in our manuscript).

In the IRL literature, it has been argued that forward distribution matching on the occupancy measure (the joint distribution of state-action pairs) is different from distribution matching of actions (BC) [1, 2]. Different from the IRL literature, we study trajectory matching instead, since _responses_ as trajectories are more meaningful than tokens in the context of LLM alignment. 
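As a toy illustration of the deterministic-concatenation argument above (a hedged sketch with a made-up two-token policy, not the paper's models): under $p(s'|s,a)=1$ for $s'=\mathrm{concat}(s,a)$, the trajectory density factorizes into per-token probabilities, so the trajectory log-density and the negated BC loss coincide term by term.

```python
import math

# Toy autoregressive policy over a two-token vocabulary {"a", "b"}.
# The probabilities below are made up purely for illustration.
def pi(token, prefix):
    p_a = 0.8 if len(prefix) % 2 == 0 else 0.3
    return p_a if token == "a" else 1.0 - p_a

def trajectory_logprob(tokens):
    # In the token-generation MDP, s' = concat(s, a) with probability 1,
    # so the trajectory density factorizes into per-token probabilities.
    logp, prefix = 0.0, ()
    for t in tokens:
        logp += math.log(pi(t, prefix))
        prefix += (t,)
    return logp

def bc_nll(tokens):
    # BC loss on the same demonstration: sum of per-token negative
    # log-likelihoods under the policy.
    return -sum(math.log(pi(t, tokens[:i])) for i, t in enumerate(tokens))

demo = ("a", "b", "a")
# With deterministic concatenation dynamics, the two quantities agree.
assert abs(trajectory_logprob(demo) + bc_nll(demo)) < 1e-12
```

In a general MDP with stochastic transitions, the trajectory density would carry extra $\log p(s'|s,a)$ terms, and the two quantities would no longer coincide.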
Another derivation that might be interesting to our reviewer is that, if we follow the RL literature and consider **occupancy measure matching** (i.e., matching the distribution of **incomplete sentences**), we obtain the following objective with forward KL (up to additive constants that do not depend on $\pi$):

$$J_{FKL\text{-}(s,a)} = \mathrm{KL}\left(\rho^\beta(s,a)\,\|\,\rho^\pi(s,a)\right) = -\mathbb{E}_{(s_k,a_k)\sim\rho^\beta} \left[\sum_{t=0}^{k} \log \pi(a_t|s_t)\right] + \mathrm{const.}$$

Regrouping the inner sum by token position, this is equivalent (up to constants and a positive rescaling) to minimizing

$$J_{FKL\text{-}(s,a)} \propto -\mathbb{E}_{(s_k,a_k)\sim \rho^\beta} \left[ \frac{K-k}{K}\log \pi(a_k|s_k) \right],$$

where we use $K$ to denote the maximal length of generation, and $(s_k,a_k)\sim \rho^\beta$ denotes a uniform sampling from the (incomplete) sentence-token pairs generated by the unknown demonstrator $\beta$.

We can observe that in this occupancy measure matching case, using forward-KL matching also leads to a different objective from the BC loss. This weighted loss can be interpreted as follows: to align the distribution of (the uniformly sampled incomplete) sentences in the dataset, special emphasis must be placed on the correct generation of initial tokens, as they provide the foundation for generating subsequent tokens. Consequently, these initial tokens are assigned greater weights during supervised learning.

----

## Supervised Learning and Extrapolation

We thank the reviewer for raising this interesting point! We would like to link such an interpretation with the iterative application of SPIN-type algorithms: the core difference is that, in either continued supervised learning (more BC) on the demonstration dataset, or in SPIN, where the demonstration dataset is considered to be the positive samples, the demonstration samples strictly limit the performance of those algorithms at convergence. On the other hand, extrapolating over explicit reward models (either closed-form or through parameterization) enables further improvements. 
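To make the $(K-k)/K$ weighting concrete, here is a small pure-Python sketch (toy log-probs, not from any real model) comparing the plain BC loss with the occupancy-measure-weighted loss:

```python
def bc_loss(token_logprobs):
    # Plain BC / SFT loss: average negative log-likelihood per token.
    return -sum(token_logprobs) / len(token_logprobs)

def occupancy_weighted_loss(token_logprobs):
    # Occupancy-measure forward-KL objective (up to constants and a
    # positive rescaling): token k of a length-K response is weighted by
    # (K - k) / K, so early tokens -- contained in every sampled
    # incomplete sentence -- receive the largest weights.
    K = len(token_logprobs)
    return -sum((K - k) / K * lp for k, lp in enumerate(token_logprobs)) / K

# Made-up per-token log-probs for a 4-token response.
lp = [-0.5, -1.0, -0.2, -0.8]
print(bc_loss(lp))                  # uniform weighting
print(occupancy_weighted_loss(lp))  # early tokens emphasized
```

With these toy numbers, a poorly predicted early token hurts the weighted loss roughly four times as much as the same error on the final token.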
\\n\\nWe have added those discussions and corresponding results in our updated manuscript (Added in Appendix G and referred to on page 7. Highlighted with orange text in our updated manuscript). We would like to deeply appreciate our reviewer for bringing up this insightful point!\\n\\n----\\n\\n## Referring to Appendix in Main Text\\n\\nThank you for the suggestion on referring to additional results in the main text. In our revised manuscript, we explicitly guide our readers on page 8 (the beginning paragraph of the experiment section) for those extended results in our appendix.\\n\\n\\n\\n\\n-----\\n**References**\\n\\n[1] Ghasemipour, Seyed Kamyar Seyed, Richard Zemel, and Shixiang Gu. \\\"A divergence minimization perspective on imitation learning methods.\\\" Conference on robot learning. PMLR, 2020.\\n\\n[2] Ghasemipour, Seyed Kamyar Seyed, Shane Gu, and Richard Zemel. \\\"Understanding the relation between maximum-entropy inverse reinforcement learning and behaviour cloning.\\\" (2019).\\n\\n----\\n**Once again, we would like to thank the reviewer for their insightful comments and thorough reading to improve our paper. We hope the responses above have addressed the reviewer's follow-up questions. Please let us know if there are any remaining questions, and we would be happy to engage further to ensure all concerns are addressed and improve our paper!**\"}", "{\"title\": \"Re:\", \"comment\": \"Hi,\\n\\nYup, makes sense about occupancy matching and trajectory matching being different in general.\", \"i_was_making_a_separate_observation\": \"I'm fairly sure that under tree-structured dynamics, matching occupancy measures implies matching trajectory distributions. Usually, one can only say the reverse is true: that if one matches trajectory distributions, one has matched occupancy measures.\\n\\nI was then trying to note that given BC is minimizing trajectory-level KL, it should also, assuming a flexible enough policy class, minimize occupancy-measure KL. 
This is true regardless of the dynamics. So, without a compelling argument about a specific kind of mis-specification of the learner's policy class (i.e. $\pi_E \notin \Pi$), I'm not sure why one would actually care about explicitly minimizing the occupancy measure divergence, as we always get it "for free" via BC / trajectory-matching.

Anyhow, I think I'm making a point a bit orthogonal to yours. I think so long as you point out that it is well-known that BC is minimizing trajectory-level KL, that this is different than optimizing occupancy-measure KL (citing [1]), but that trajectory-level matching implies occupancy matching, things should be clear enough to the readers.

Thank you for talking through this with me!

---
**Further Feedback Welcome!**

We sincerely thank our reviewers for their time and effort in reviewing our paper and for providing thoughtful feedback to help improve our work.

We are particularly grateful for their recognition of our contributions to alignment research (Reviewers 5fua, ZCzk), as well as their appreciation of the unified IRL framework we proposed for alignment (Reviewers wScp, ZCzk). We also would like to thank Reviewer iqxv for the encouraging comments on our presentation and writing, and Reviewer 5fua for their affirmative comments on our empirical results.

In our author response, we have addressed each of our reviewers' concerns in a detailed, point-by-point manner. We hope the responses have addressed the outstanding questions and that the reviewers would consider raising their scores if those questions have been appropriately answered. As the author-reviewer discussion phase nears its conclusion, we would like to kindly ask if there are any remaining concerns or additional feedback that we can address. 
In the limited time remaining, we are committed to making every effort to resolve any outstanding issues!

Best regards,

Authors

---
**Author Response to Reviewer wScp (Part 1)**

We sincerely thank the reviewer for their time and thoughtful feedback. We value the opportunity to clarify the novelty, contributions, and focus of our work, as well as address any concerns raised. Below, we provide detailed responses to the key points.

---

## 1. Responses Regarding Novelty

**We respectfully disagree with the assessment that our work lacks novelty. Importantly, our paper predates several of the works mentioned by the reviewer.**

---
**(1). Timeliness:**

To the best of our knowledge, before the submission deadline of ICLR, there were only three papers on this topic, and they were made publicly available later than our work. We are happy to provide additional evidence in compliance with ICLR policies.
- Our paper is 6 months earlier than Wulfmeier et al. (Sep. 2024), and has been acknowledged and cited by them as prior related work.
- Our paper is 3 months earlier than DITTO, which is currently **a concurrent submission to ICLR 2025.**
- Our paper is 1 month earlier than Tajwar et al., and that paper focused on sub-optimal, on-policy data in RLHF.
- Our paper is 1 month earlier than Rafailov et al. The problem **we characterized with the MDP in our paper is more general than theirs**: AfD as IRL is different from the conventional RLHF setup.

We would like to emphasize that the ideas introduced in our paper were developed and documented well before the appearance of these works. We believe this timing further supports the originality and relevance of our contributions.

---
**(2). Discussions on Ziebart et al. 
and SPIN in Our Paper**

In our submitted manuscript, we have appropriately cited and acknowledged relevant prior works, including those by Ziebart, which lay the foundation for our approach.

On SPIN, we **had a section discussing SPIN as an important related work. In our initial submission, we had a page, spanning lines 1003 to 1058, discussing the link and difference between our work and SPIN**. As suggested and acknowledged by reviewer iqxv, who noted that this section itself makes a unique contribution to the community, we have moved this section to the main text in our revision (lines 316 - 418, highlighted by blue text). Please refer to our updated manuscript for the revision!

---
**(3). Distinct Focus and Contribution:**

While some relevant ideas have later been discussed and empirically studied in related works, our approach differs significantly. Those related works can serve as additional support for our key insights; as the reviewer pointed out, some of their experimental results shared similar discoveries with ours.

Moreover, the focus and techniques used in those papers are different. The focus of DITTO is primarily on **individualized and few-shot alignment using demonstrations**, whereas our work takes a broader approach, addressing general alignment-from-demonstration setups. This distinction marks an important difference in the scope and application of the methods.

Additionally, our work introduces a two-stage explicit reward modeling method, which is distinct from the direct alignment methods discussed in SPIN, DITTO, and Wulfmeier et al. We believe that exploring both direct alignment and reward modeling approaches, as we do in our paper, provides a meaningful contribution to the field. The existence of multiple approaches in this area should not be viewed as competing but as complementary solutions that advance the field in different ways.

---
**(4). 
From IRL Foundation to LLM Alignment with IRL**\\n\\nThe reviewer correctly points out that Ziebart laid foundational work, and **we have cited and built upon it.** However, we would like to emphasize that our work **extends this foundational knowledge by proposing novel methods that are specifically tailored for learning from demonstrations in LLMs.** This is not simply a reiteration of past work, but rather an extension and adaptation of existing ideas to a new and impactful context.\\n\\n\\n---\\n**(5). Summary and Request for Re-evaluation**\\n\\n**We understand the reviewer\\u2019s concerns given their knowledge of the recent advancements in the field (but unintentionally missing the factual temporal order!). We hope that the points above help clarify the novelty and contribution of our work.**\\nWe would also appreciate the reviewer\\u2019s recognition of the soundness of our method based on existing empirical studies and agree that it is beneficial for papers in the same domain to support one another in advancing the state of the art. \\n\\n**We would kindly request reviewer wScp to consider re-evaluating our work, taking into account the above clarifications, and consider the novelty and significance of our contributions in the broader context of the field.**\"}", "{\"title\": \"Dear Reviewer 5fua\", \"comment\": \"We thank the reviewer for their time and thoughtful review of our paper. As the author-reviewer discussion window is narrowing to its end, we would like to summarize the reviewer\\u2019s main concerns, and how our previous responses addressed those concerns.\\n\\n---\\n\\n### 1. Building reward models\\nIn the original review, reviewer 5fua asked about the details of reward modeling. \\n\\nIn our response, the following explanations have been provided:\\n- 1. Detailed the implementation using Equation (11);\\n- 2. Explained why the logits of classifiers could be used as reward signals;\\n- 3. Provided Algorithm 1 in Appendix E.1;\\n\\n### 2. 
Section 4.3

In the original review, reviewer 5fua raised a concern about the objective of Section 4.3 (the experiment section where we use GPT-4-as-critic for reward model evaluation).

In our response, we highlighted that the objective of leveraging a **dual evaluation** approach in our work is to enhance the reliability of our conclusions.

In addition, as suggested by Reviewer wScp, we additionally experimented using the GPT-3.5 demonstrations (using golden reward models for evaluation) and UltraFeedback (using GPT-4o for evaluation). All of our experimental designs aim at enhancing the reliability of the conclusions in our work.

### 3. Link to IRL literature

In the original review, reviewer 5fua raised the question of _why this paper falls into the IRL method category._

In our response, we highlighted that the usage of an explicit reward modeling step makes our method an IRL algorithm. To further enhance the clarity, we discussed the key distinctions between Inverse RL and Imitation Learning. We would refer to Table 3, Figure 4, and Appendix A for more details in our updated manuscript!

### 4. On the forward and reverse KL

In the original review, reviewer 5fua requested an additional discussion on _the forward and reverse KL_.

In our response, we discussed the key insight of using the forward and reverse KL in our distributional matching objective from the following 4 perspectives:

1. Leveraging SFT-format data beyond SFT.
2. Mode-seeking behavior with learned reward models.
3. Enabling inference-time optimization with reward models.
4. Achieving super-demonstrator performance.

### 5. Workflow in AfD and RLHF

In the original review, reviewer 5fua raised a question on comparing _AfD and RLHF_.

In our response, we answered this question first by highlighting that **AfD is an alternative to RLHF** rather than an intermediate step. 
We then provided a comparison table that summarizes the key difference between the workflows of AfD and RLHF: Both methods have Reward Modeling and Policy Optimization stages, AfD works on **demonstration data**, and RLHF works on **pair-wise preference data**.\\n\\n### 6. Citation format\\n\\nWe thank reviewer 5fua for pointing out the citation commands. In our updated manuscript, we have updated the reference format.\\n\\n\\n-----\\n\\nOnce again, we thank our reviewer for their insightful comments in improving our paper. \\nIn the limited author-reviewer discussion period, we are still eager to address any further concerns from our reviewer.\"}", "{\"comment\": \"After careful consideration of the authors\\u2019 responses and the additional experiments provided, I have decided to adjust my rating to a 6. The responses have adequately addressed some of my initial concerns regarding the clarity and novelty of the contributions.\\n\\nWhile the concepts presented in the paper build upon existing ideas, the methods for explicit reward modeling and achieving super-demonstrator performance have been distinguished from prior work. The additional experiments across various datasets have helped to solidify the empirical foundation of the method.\\n\\nHowever, while the paper contributes meaningfully to the field, I believe there remains potential for deeper exploration and clearer articulation of the novel aspects. 
Thus, my recommendation reflects a positive yet cautious endorsement of the paper's acceptance.

**Suggestions for Improvement:**
- Clarify and emphasize the novel contributions more distinctly in the main text to better differentiate this work from existing literature.
- Consider condensing the background sections to allow more space for detailing the methodology and experimental insights, which are crucial for demonstrating the unique value of the proposed approach.

---
**Thank you for reviewing our paper!**

We appreciate the inspiring discussions with the reviewer iqxv!

Regarding the original review, we hope the responses and discussions have addressed the outstanding questions, and that the reviewer would consider raising their score if those questions have been appropriately answered.

Should there be any leftover concerns, in the time remaining, we are committed to making every effort to resolve any outstanding issues!

---
**Thanks for the additional clarifications**

I would like to thank the authors for their additional clarifications and experiments -- I have raised my score accordingly.

I still believe that to make the paper stronger, the majority of the writing in the main paper should focus on the choice of data when learning the reward model (what I believe is the central novel contribution here). Adding additional content to the appendix, while useful, does not address my core concern around the presentation. For example: the introduction and framework section mostly focus on introducing imitation learning from demonstrations as a new idea for LLMs. 
The point I would instead focus on is which data to use for learning the reward model from demos.\"}", "{\"title\": \"Dear Reviewer ZCzk,\", \"comment\": \"We thank the reviewer for their time and thoughtful review of our paper. As the author-reviewer discussion window is narrowing to its end, we would like to summarize the reviewer\\u2019s main concerns, and how our previous responses addressed those concerns.\\n\\n----\\n\\n### 1. Motivations of golden reward model evaluation\\n\\nIn their original review, our reviewer ZCzk pointed out that it would be helpful to include further clarification regarding the motivation for using golden reward models for evaluation. \\n\\nIn our response, we highlighted the motivations of **Open-Source and Reproducibility**; and **Comprehensive evaluation methods using both the golden reward model and GPT-as-a-critic**.\\n\\n---\\n### 2. A more direct comparison to SPIN.\\n\\nIn their original review, reviewer ZCzk recommended including a more direct comparison to SPIN.\\n\\nIn our response, we \\n- 1. Discussed and compared our method with SPIN in Appendix A.\\n- 2. Moved the discussion and the illustrative example to the main text in our revised manuscript\\n- 3. Highlighted the key difference between our method and SPIN is the potential of _Achieving Super-Demonstrator Performance_\\n- 4. Empirically, we provided the following empirical results in Table 4 of Appendix A.5.\\n\\n| Task | Demo | SFT | IRL (N=10) | IRL (N=30) | IRL (N=50) | SPIN (iter=1) | SPIN (iter=2) | SPIN (iter=3) |\\n|------------|-------|-------|------------|------------|------------|----------------|----------------|----------------|\\n| Harmless | 1.704 | 1.785 | 2.171 | 2.272 | 2.333 | 1.769 | 1.753 | 1.747 |\\n| Helpful | 0.735 | 0.588 | 0.598 | 0.692 | 0.751 | 0.640 | 0.699 | 0.706 |\\n\\n---\\n### 3. 
Difference between RLHF and AfD

In their original review, reviewer ZCzk asked questions centering on the difference between RLHF and AfD.

In our response, we highlight that AfD and RLHF differ in the dataset they work with: AfD works with an **expert demonstration dataset**, while RLHF works with a **pairwise preference dataset**.
While RLHF can suffer from the difficulty of noisy labels, high annotation costs, and privacy concerns, AfD does not suffer from those challenges and can effectively build reward models and align LLMs.

-----

Once again, we thank our reviewer for their insightful comments in improving our paper.
In the limited author-reviewer discussion period, we are still eager to address any further concerns from our reviewer.

---
**Author Response to Reviewer iqxv (Part 2)**

## 4. Closed-form solution and insights based on it

We thank the reviewer for highlighting this insightful point! With the closed-form expression for the BoN optimization procedure, we are equivalently optimizing for the following closed-form reward:

$$r_{c}(y|x)=\log \bar{\pi}_{SFT}(y|x) - \log \pi_0 (y|x)$$

Consequently, the policy optimization objective for BoN becomes

$$\arg\max_{n} r_c(y_n|x) = \arg\max_{n}\left(\log\bar{\pi}_{SFT}(y_n|x) -\log \pi_0(y_n|x) \right)$$

In such an objective, we can directly calculate the probabilities of generating any $y_n$ from those parameterized models; the parameters in the $\bar{\pi}_{SFT}$ model are frozen when calculating the probabilities, and the $N$ samples are generated by this $\bar{\pi}_{SFT}$ model.

To understand the **theoretical interpretation of such an objective**, we note

$$\max_\pi \mathrm{KL}(\pi||\pi_0)-\mathrm{KL}(\pi||\bar{\pi}_{SFT})=\max_\pi \mathbb{E}_{y\sim\pi}[r_c(y|x)]$$

Therefore, optimizing the generation using BoN with regard to $r_c$ (i.e., the order statistics) can be interpreted as 
simultaneously maximizing the KL divergence between the order statistics and $\pi_0$ and minimizing the KL divergence between the order statistics and $\bar{\pi}_{SFT}$. **This exactly leads to the extrapolation behavior we desired!**

---

**Empirically**, the challenge lies in the computational cost of calculating those probabilities. To calculate the closed-form reward, we need 2-3 LLMs to be loaded in memory: the LLM to be optimized $\pi$, the SFT checkpoint $\pi_{SFT}$, and the initial checkpoint $\pi_0$. Such a closed-form reward model takes 3 times more memory than using the discriminator, which can be implemented as a value head of LLMs and only has a small number of parameters to optimize.

To empirically verify the effectiveness of such a closed-form reward function, we experiment with the Harmless dataset; results are shown in the table below:

Table 1. Golden Reward (before normalization)
| Method | N = 2 | N = 5 | N = 10 | N = 30 | N = 50 |
|--------------------|-------------|-------------|-------------|-------------|-------------|
| **Close Form** | 1.926 ± 0.047 | 2.191 ± 0.068 | 2.282 ± 0.065 | 2.348 ± 0.054 | 2.383 ± 0.061 |
| **Init - SFT** | 1.901 ± 0.069 | 2.063 ± 0.121 | 2.171 ± 0.065 | 2.272 ± 0.101 | 2.333 ± 0.122 |
| **Init - Demo** | 1.691 ± 0.106 | 1.575 ± 0.064 | 1.506 ± 0.126 | 1.362 ± 0.071 | 1.330 ± 0.058 |
| **SFT - Demo** | 1.664 ± 0.111 | 1.537 ± 0.039 | 1.420 ± 0.059 | 1.330 ± 0.087 | 1.306 ± 0.084 |
| **Human - Pairwise** | 1.856 ± 0.104 | 1.949 ± 0.052 | 2.020 ± 0.074 | 2.059 ± 0.089 | 2.058 ± 0.060 |

Table 2. 
BoN Win Rate
| Method | N=2 | N=5 | N=10 | N=30 | N=50 |
|---------------------|---------------|---------------|---------------|---------------|---------------|
| **Close Form** | 0.626 ± 0.043 | 0.739 ± 0.044 | 0.774 ± 0.028 | 0.826 ± 0.033 | 0.835 ± 0.025 |
| **Init - SFT** | 0.594 ± 0.033 | 0.721 ± 0.028 | 0.752 ± 0.025 | 0.818 ± 0.021 | 0.805 ± 0.026 |
| **Init - Demo** | 0.401 ± 0.029 | 0.396 ± 0.022 | 0.311 ± 0.041 | 0.231 ± 0.033 | 0.193 ± 0.023 |
| **SFT - Demo** | 0.399 ± 0.030 | 0.341 ± 0.036 | 0.284 ± 0.021 | 0.225 ± 0.022 | 0.167 ± 0.014 |
| **Human - Pairwise**| 0.565 ± 0.033 | 0.676 ± 0.042 | 0.710 ± 0.014 | 0.735 ± 0.025 | 0.735 ± 0.018 |

We find that using the **closed-form expression of the reward to perform BoN can achieve better performance than using the direct reward modeling method**; however, generating the probability takes **2 times more memory and computation compared with the reward model parameterization method**. The closed-form solution will shine when the LLMs are small and inference with the LLMs is computationally affordable, while parameterizing the reward models will be a more efficient alternative when calculating the closed-form probability is infeasible.

---
**Author Response to Reviewer 5fua (Part 1)**

We thank the reviewer for their time and thoughtful review of our paper, as well as their encouraging recognition of our contribution to streamlining LLM alignment tasks through a distributional matching perspective. Below, we address the concerns and questions raised in turn:

----
## 1. Details for building reward models

In our implementation, we use Equation (11) to train a discriminator, where the logits of the discriminator correspond to the reward values as defined in Equation (12). 
Specifically:

$$D_\phi(y|x) = \sigma(\texttt{logits}(y|x)) := \sigma(r(y|x))$$

Here, $D_\phi(y|x)$ is a binary classifier trained to distinguish samples generated by the SFT model from those produced by the initial model.

To aid readers in understanding our implementation, we provide Algorithm 1 in Appendix E.1.

Please let us know if there are further specific questions about any aspect of the reward modeling implementation.

---

## 2. Objective of Section 4.3

We appreciate the reviewer highlighting the need for clarification regarding the objective of Section 4.3.

The primary goal of Section 4.3 is to evaluate our proposed method using GPT-4 as a judge, a widely adopted and complementary evaluation metric. In our work, we employ a **dual evaluation** approach, utilizing both golden reward models and GPT-4 as evaluative benchmarks. The rationale for this dual approach is as follows:

- **Golden Reward Models**: Evaluating with golden reward models is significantly more cost-effective than relying on commercial APIs like GPT-4. Moreover, using open-source reward models enhances the reproducibility of our work and enables fair comparisons in future research. This practice aligns with established standards in the reward modeling literature [1-6].
- **GPT-4 as a Judge**: Incorporating GPT-4-based evaluation provides additional insights into the performance of various methods, complementing golden reward evaluations. As shown in Section 4.3 (Table 2), our results indicate that GPT-4-based evaluations are broadly consistent with those derived from golden reward models.

To address this point, we have revised the description in Section 4.3 to improve clarity and ensure our evaluation approach is well-articulated.

----

## 3. 
Why Our Work Falls into the IRL Method Category.\\n\\nWe thank the reviewer for raising the question about method categorization.\\nIn the RL literature, both Imitation Learning (IL) and Inverse RL (IRL) aim to learn a policy from a demonstration dataset, typically involving trajectory distribution matching [e.g., 7,8]. The key distinction is that **IRL explicitly learns a reward model, while IL does not.**\\n- In IL, the objective is to match the behavior of the learned policy with the demonstrator.\\n- In IRL, an intermediate step involves building a reward model from the demonstration dataset, which is then used to optimize a policy to match\\u2014or even surpass\\u2014the demonstrator's performance.\\n\\nDue to space constraints, we deferred the detailed discussion of these differences to Table 3 and Appendix A in the manuscript.\\n\\nIn our work, we **explicitly build a reward model** using the demonstration dataset, hence the method falls to the category of IRL rather than IL. As has been demonstrated in our experiments, such an explicit reward modeling step is essential for **achieving super-demonstrator performance**, rather than merely matching their performance. (cf. Figure 4.)\"}", "{\"title\": \"Author Response to Reviewer wScp (Part 2)\", \"comment\": \"## 2. Clarification of Key Contributions\\n\\n\\nWe would like to summarize the key contributions below. First, we would like to list three contributions that were not explored in the **later related works.**\\n\\n1. Our paper is the first to introduce a **unified framework** for distributional matching that encompasses SFT, RLHF, and AfD (i.e., alignment from demonstration). By framing all these alignment methods through the lens of Inverse RL, our work not only provides a cohesive perspective but also bridges the gap between the domains of IRL and LLM alignment, offering valuable insights into both areas.\\n\\n2. 
Technically, in section 3.2, we highlighted why using the models\\u2019 generations as negative samples and using demonstrations as positive samples is suboptimal, and how to overcome such a difficulty. \\n \\n3. With both analysis and empirical results, we showed that explicit reward modeling using the demonstration dataset is the only method that **can achieve super-demonstrator performance.**\\n\\nMoreover, considering the timeline of this line of research, our paper is **factually the first** in:\\n\\n4. Building reward models and introducing the idea of LLM alignment from the demonstration dataset.\\n\\n5. Characterizing the general alignment problems using an Inverse RL perspective, and demonstrating how RLHF and AfD are two of its instantiations falling to this general class.\\n\\n6. Empirically showing the effectiveness of the IRL-based methods in LLM alignment, providing an alternative approach to the conventional preference-based alignment methods.\\n\\n\\n----\\n## 3. Response Regarding Experiments\\n\\n(1). Our Results Demonstrate **Statistical Significance** in Validating the Proposed Method.\\n\\nWe **respectfully disagree** with the comment that our _\\\"experimental results at present do not seem compelling\\\"_. Our experiments demonstrate the effectiveness of the proposed methods through both theoretical insights and comprehensive evaluations:\\n\\n- In our experiments, we highlighted the superiority of the proposed method with **extensive ablation studies** and demonstrated the effectiveness of building reward models from demonstration datasets. The proposed method **achieves super-demonstrator performance on all experiment setups, verifying the key insight of our work**. \\n- We theoretically and empirically highlighted such a super-demonstrator performance can not be achieved by prior work such as SPIN, due to the lack of reward model extrapolation.\\n\\n(2). 
Focusing on Reward Model Experiments\\n\\nWe appreciate the reviewer\\u2019s suggestion to focus on the reward modeling experiments, as **this aligns closely with the primary focus of our experimental design.**\\n\\nIn our work, most of our experiment sections **have centered on evaluating reward models** generated from the demonstration dataset. May we kindly reiterate what has been confirmed in our experimental section:\\n- In 4.2, we evaluated different reward modeling choices using the golden reward model evaluation. We find the proposed method achieves the best performance when using the demonstration dataset, and achieves on-par performance with training reward models from preference-based annotations.\\n- In 4.3, we evaluated our reward models using GPT4-as-a-judge evaluation, to provide an additional metric beyond the golden reward model evaluation, along with more comprehensive and reliable results that verify the effectiveness of our reward models. \\n\\n\\n\\n------\\n\\nOnce again, we sincerely thank the reviewer for their thoughtful feedback and appreciate the opportunity to provide additional clarifications. **We believe there may have been some misunderstandings and would respectfully request a re-evaluation of our manuscript.** If any concerns remain, please do not hesitate to let us know, and we will do our utmost to address them!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
0koPj0cJV6
A Watermark for Black-Box Language Models
[ "Dara Bahri", "John Frederick Wieting", "Dana Alon", "Donald Metzler" ]
Watermarking has recently emerged as an effective strategy for detecting the outputs of large language models (LLMs). Most existing schemes require \emph{white-box} access to the model's next-token probability distribution, which is typically not accessible to downstream users of an LLM API. In this work, we propose a principled watermarking scheme that requires only the ability to sample sequences from the LLM (i.e. \emph{black-box} access), boasts a \emph{distortion-free} property, and can be chained or nested using multiple secret keys. We provide performance guarantees, demonstrate how it can be leveraged when white-box access is available, and show when it can outperform existing white-box schemes via comprehensive experiments.
[ "watermarking", "large language models", "black-box" ]
Reject
https://openreview.net/pdf?id=0koPj0cJV6
https://openreview.net/forum?id=0koPj0cJV6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTK7fvjDOQ", "yvP4c94JFw", "vZwpT38vSD", "vZjYncIPxE", "tRBnHlBzxo", "rtfvVBXeRL", "rEnK8VDfh7", "qreLC5pX8q", "qe1fHseb3b", "lIldU08idZ", "lHMYjvbfB1", "jZzHprMzdH", "jSP4JNjDh4", "j0gMeHkYdt", "flkvRYIL31", "b5nqLyXyyE", "ZXByBdS4Bv", "YXiRGSQf6k", "VsSZZnsONt", "VA8SGUWo0G", "U8YWYISOK9", "TqIzHijhaa", "SKJNU6MLyd", "RkVHGS56kP", "RWCzn6jfjC", "QySYYU0Rt3", "OuualY6IxY", "MVs2kmj065", "LMbaeXZhDG", "HqsbTBnUII", "Hdr2thEREr", "Gu1P1S2OnD", "C8KRYnjBDO", "601MFlzBZT", "3ArvU7Xxne", "20HvNYMAXt", "0KbxXspMKU", "0EUWOosnUL" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730013115448, 1732673223402, 1732233433787, 1737524223505, 1733731990913, 1733174880025, 1732238336477, 1732588854006, 1730584076517, 1732238214453, 1732578858346, 1732752578553, 1732234237643, 1730743121006, 1732657044849, 1730712515099, 1733076035675, 1732238265857, 1733075641960, 1732238309786, 1732645213409, 1732657001875, 1732702394839, 1732238243754, 1732225070349, 1732479358372, 1733118832973, 1732233023916, 1732657035338, 1732501502488, 1732238117051, 1732352321827, 1732237941647, 1733075919777, 1733076155368, 1732736386165, 1729423268740, 1732573904947 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_HaZA" ], [ 
"ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12914/Area_Chair_k3VK" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_3sQV" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_NW1g" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "~Peter_Zaika1" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_siCR" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_3sQV" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_Lnyz" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_Lnyz" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_Lnyz" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_HaZA" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_HaZA" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Authors" ], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_Lnyz" 
], [ "ICLR.cc/2025/Conference/Submission12914/Reviewer_3sQV" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, a black-box watermarking scheme for LLMs is proposed. The idea is to enable watermarking with only sampling access i.e., without requiring white-box access to a model\\u2019s next-token probability distribution. The scheme allows third-party users with API-only access to embed watermarks without altering the distribution of generated text, achieving a distortion-free watermark (generated content is indistinguishable from the original output). It supports multiple secret keys, making it possible for different users to watermark the same model recursively without interference. The authors also provide theoretical guarantees on detection performance, false-positive rates, and robustness against adversarial attacks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper shows a solid theoretical analysis of the proposed scheme, as well as the distortion-free property that was claimed, establishing that the watermarked text is statistically indistinguishable from the original model's output. They also provide a lower bound on detection performance, connecting it to the entropy of the language model's output and the number of samples used.\", \"The experimental results presented in the paper support the theoretical claims and demonstrate the effectiveness of the proposed scheme. The authors conduct experiments on two popular LLM models, MISTRAL-7B-INSTRUCT and GEMMA-7B-INSTRUCT, and show that their scheme is competitive with or even superior to existing white-box watermarking schemes in terms of detection performance, text quality, and perplexity.\", \"The paper explores the robustness of the scheme to adversarial attacks - the impact of random token replacement and paraphrasing attacks. While paraphrasing proves to be a significant challenge, the scheme shows resilience to random token replacement. 
This analysis of robustness provides a realistic assessment of the scheme's strengths and limitations in practical settings.\", \"The proposed framework is versatile and allows for various extensions and adaptations. For instance, it can be applied recursively, allowing multiple users to watermark the same model without interfering with each other. The scheme can also be adapted for white-box settings when next-token probabilities are available.\"], \"weaknesses\": [\"Practicality: What do the authors mean when they claim their method enables end users with only API access to embed watermarks? I am unclear about the motivation behind this approach. Is it practical for users to watermark a model that they do not own? What is the reasoning here, particularly if watermarking serves as a security measure to prevent model misuse? Wouldn't this imply that the method could also allow potential attackers access to the watermark?\", \"Experiments and General Format of the Paper: The paper lacks clarity and structure, making it difficult to fully grasp the motivation behind the proposed approach. While there may be a valuable contribution here, the current format obscures its impact. Figures and tables are largely separated from the sections where they are referenced; it would improve readability to place these closer to the relevant results. The theoretical guarantees could be moved to the end or even to an appendix, allowing more space for additional results in the main body. The motivation behind the approach needs clearer explanation\\u2014if the goal is to \\\"give power back to the people,\\\" it should clarify why this is relevant, considering that users are not model owners, and watermarking aims primarily to prevent misuse. A well-articulated motivation would strengthen this section. 
Section 5.3 isn't necessary and could be integrated into the experimental results or discussion rather than standing as a separate section (optional).\", \"Results: The results presented are somewhat unconvincing. My primary baseline for comparison is KB, the initial paper to propose watermarking for LLMs. Although this approach targets black-box settings while aiming to remove distortions, it does not outperform KB, which was introduced nearly two years ago. Could the authors provide further insight into this? This issue may partly relate to the paper's structure, but I believe the authors need to highlight their main advantage more convincingly. For instance, it would be helpful to illustrate the tradeoff between distortion and text quality by comparing texts generated by KB and the proposed method, possibly using LLM-Judge. Additionally, if feasible, demonstrating the tradeoff between distortion and robustness would add value to the analysis.\", \"Finally, regarding the distortion-free claim, while the theoretical guarantees support this assertion, it would be beneficial to include qualitative results that demonstrate the distortion-free nature of the approach. Consider displaying examples of the unwatermarked text, the text watermarked by the proposed approach (using optimal hyperparameters), and the text watermarked by KB (also with optimal hyperparameters) for a clear, comparative illustration.\"], \"questions\": \"Questions are in weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**RE: \\\"...but I'm not even certain of that\\\"**\\n\\nYou're not certain that it was the condition of distortion that made you change your score from 6 to 3? 
What was it then?\\n\\n\\nGiven your new low score, could you please enumerate concretely the list of objective, technical weaknesses you have found with the paper so that we can address them?\\n\\nAs I say in my response to Reviewer Lnyz, I do not wish to be provocative, so I apologize if that was the perception.\\nI'd argue what's unprofessional is writing a 7 sentence review that makes statements that are factually incorrect and then radically changing your score when I call you out on it. You say it wasn't emotion or divine intervention that made you flip but you have yet to provide accurate technical criticisms of the work. Furthermore, you don't seek clarification nor do you ask questions. We put a lot of effort into this paper and we would like to help you understand it better -- but your actions don't indicate you wish to understand it better -- you didn't ask us a single question. Even your question under \\\"questions\\\" is a statement that so-and-so did watermarking back in 2019.\\n\\nIn contrast, look at Reviewer siCR's review. It's evident that this reviewer put in the time and effort to try to understand our work and provided very insightful and helpful feedback. And wow, look at all those great questions!\\n\\nNow, back to distortion:\\n\\n**RE: \\\"But the scheme of Christ et al. induces negligible observable change in the generated content, under no assumptions about the text (because of their entropy accounting strategy).\\\"**\\n\\nYour response is largely a rehash of what you said earlier despite us kindly asking you to elaborate further on Christ et al.'s scheme:\\n\\n*Meanwhile, you say Christ, Gunn, and Zamir (2024) is distortion-free under no-assumptions about the text, which is yet again incorrect (unless I'm missing something). Can you elaborate further?*\\n\\nSpecifically, can you be clear which of their schemes and which theorems you are referring to? 
It would be great if you could write them down precisely here.\\n\\nAs I noted earlier, they define distortion-free as \\\"undetectability\\\" -- the condition that the difference in probs is bounded by negl$(\\\\lambda)$, where negl is defined as follows:\\n\\n*A function f of $\\\\lambda$ is negligible if $f(\\\\lambda) \\\\in O(\\\\frac{1}{poly(\\\\lambda)})$ for every polynomial $poly(\\\\cdot)$.*\\n\\nThis is **not** the same and not comparable to our definition of distortion-free, which says that the probability of observing any response is **exactly the same** with or without watermarking.\\n\\n\\n**RE: \\\"And most of the guarantees in the literature about distortion depend on some version of \\\"entropy\\\" of the text that is relatively interpretable and simple: Whether it's just constant min-entropy (Fairoze et al.), \\\"spike entropy\\\" (Kirchenbauer et al.), or entropy + some variability assumption (Zhao et al.).**\\n\\nYou are correct that these works quantify distortion in terms of entropy-esque quantities but as discussed earlier, we need to also consider the definitions of distortion-free.\\n\\nFor example, Fairoze's distortion-free follows from Christ et al.'s while Kirchenbauer's is based on perplexity.\\n\\nThere is no right or wrong answer here. Some of the works you referred to define distortion / distortion-free in more of a continuous way, whereas we define it in a hard way for the sake of the theorem -- either the watermark is probabilistically indistinguishable or it's not -- and then we give quite mild conditions on when this hard condition is met. We discuss it thoroughly and then provide strong experimental evidence that supports our claims, unlike the work of Christ et al., which has no experiments.\", \"see_our_experimental_section_where_we_say\": \"*Distortion. Our scheme, along with most of the baselines, boasts a distortion-free property. 
This\\nproperty comes with assumptions that are often violated in practice, for example by reuse of the\\nsecret key across watermarking calls. We quantify how faithful the watermarking procedure is to the\\nunderlying generative model by computing both the perplexity and likelihood of watermarked text\\nunder the generator (without watermarking). We include likelihood as the log-probabilities used in\\ncalculating perplexity can over-emphasize outliers.*\\n\\n\\nAlthough I could be wrong, I believe it's impossible to simultaneously achieve good watermarking detection while also having the watermarked language model be **completely indistinguishable** from the original one -- there is no free lunch. That is why it is important to run experiments to quantify the level of distortion induced in practice.\"}", "{\"comment\": \"Thank you for the review.\\n\\n**RE: \\\"simpler methods could enhance security; for instance, instead of exposing logits, LLMs could offer APIs to gather specific information users want to integrate\\\".**\\n\\nI don't quite follow this; can you elaborate further? And if it's a viable option, how come ChatGPT and Gemini don't do this already?\\n\\n\\n**RE: \\\"zero-bit watermarking technique, which only detects whether a text is watermarked but cannot infer additional information from the watermark\\\"**\\n\\nWhat do you mean by \\\"zero-bit\\\"? The goal of watermarking is to be able to identify if it was your model that generated the text. What other information are we seeking? I'm confused.\\n\\n\\n**RE: \\\"benefit from a more comprehensive evaluation\\\"**\\n\\nWe feel the evaluation is already comprehensive; please take a look at the Appendix if you haven't already.\\n\\n\\n**RE: \\\"time-complexity\\\"**\\n\\nWe will update the camera-ready to discuss this. The TLDR is that it's not interesting -- our method, along with Kirchenbauer's and Aaronson's, is essentially so fast that the cost is negligible. 
The cost is $O(c_1 L + c_2)$ where $L$ is the length of the sequence to score, $c_1$ is the cost to hash the n-gram and draw a value from a pseudorandom number generator, and $c_2$ is the cost to evaluate $F_k$ at a particular point. Kuditipudi is significantly slower, especially when reference values are not precomputed.\\n\\n\\n**RE: \\\"providing examples of watermarked texts\\\"**\\n\\nThis is a good suggestion and we will put examples of generated text under the various schemes we test in the Appendix, for the camera-ready. We have included some in the response to all authors, as this was a point that came up in other reviews as well.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper presents a novel watermark scheme for black-box language models. After an extensive discussion, several critical issues remain unresolved. Reviewer 3sQV raised concerns about the definition of \\\"distortion-free\\\". Reviewer Lnyz noted that the proposed watermark scheme required a clearer comparison with existing work. But the author's rebuttal did not convince the reviewer. Given these issues, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"The authors have addressed several concerns raised by the reviewers, such as the limitations of the evaluation setup and the motivation behind the black-box setting. However, the issues related to the definition of \\\"distortion-free\\\" and the comparison with existing work remain inadequately resolved, failing to convince the reviewers. Given the reviewers' professional judgment, this work should not be accepted.\"}", "{\"comment\": \"Thank you for the prompt reply.\\n\\n**RE: 1 -- this is not quite correct**\\n\\nLet's analyze two schemes in Christ et al.: Algorithm 1 and its detector Algorithm 2, and Algorithm 3 and its detector Algorithm 4.\\nFirst, Algorithm 3: this scheme samples token-by-token. 
No matter what we assume about the entropy H(model, prompt, x) where x is a sampled response, look at line 5 in Algorithm 3 on the first iteration of the while loop. H = 0 (i.e. H < $\\\\lambda$), so we take the if-branch, which involves computing $p_i(x_i)$ which requires $p_i$, so this algorithm is not black-box.\", \"now_algorithm_1\": \"Ok, let's assume that the entropy condition $H_e$(Model, prompt, x) > $6\\\\lambda$ is met *for all* x ~ Model(prompt).\", \"so_the_algorithm_reduces_to\": \"**while $F_{sk}(x) \\\\neq 0^b$ do x ~ Model(prompt); return x**\\n\\n\\nBasically, you keep sampling sequences from the model until a condition that depends on your secret key is met.\\nAnd in Algorithm 2, we return True iff $F_{sk}(x) = 0^b$.\\n\\nThis scheme has two serious problems.\\nFirstly, the while loop may never terminate. What happens if all the responses x the model likes to produce for the prompt don't meet the condition? And even if it does terminate, it could take an astronomical number of samples before the condition is met -- its compute requirements are random.\\nOur cost on the other hand is fixed. Our cost is the cost to sample $m$ sequences, where $m$ is specified by the user.\", \"problem_2\": \"Look at the detector. In order to apply it, you would need to know the boundaries of the watermarked text, which is not known in practice. Secondly, it is extremely fragile to attacks. A single token change to the text can cause the condition $F_{sk}(x) = 0^b$ to flip.\\n\\n\\nFor example, suppose the watermarked text returned to an adversary is \\\"Barack Obama was born in Hawaii\\\".\\nThe adversary has a blog on the internet; in it, he puts the text \\\"Barack Obama was born in Hawaii. He went to Harvard.\\\"\\nHow would you know that you should be testing \\\"Barack Obama was born in Hawaii.\\\" as opposed to \\\"Barack Obama was born in Hawaii. He went to Harvard.\\\"?\\nYou wouldn't. 
Detection on the former returns true but detection on the latter probably wouldn't.\\nFurthermore, change \\\"Barack Obama was born in Hawaii\\\" to \\\"Obama was born in Hawaii\\\" and the detection breaks.\\n\\nOur scheme is not affected by these issues because we compute numerical scores for each token individually and then run a statistical test on the scores aggregated across tokens.\\n\\n\\n**RE: 2 -- this is also not quite correct**\\n\\n\\nOur scheme is black-box and practically useful *with no assumptions whatever*. Even the \\\"deduplicated seeds...are conditionally independent...\\\" bit is not needed. That mild assumption is for our distortion-free theorem, not for proving black-boxness. Hopefully RE: 1 shows you why this is not the case for Christ et. al.\\n\\nWe already discuss the mild assumption for distortion-free in the text. At the end of the day, the best way to assess a scheme's true distortion is to measure it experimentally, which we do and find that it's low.\\n\\n\\nCan you confirm that all your concerns have been addressed? If so, could you consider increasing your score?\"}", "{\"comment\": \"SCHEME: Greedy decoding (no watermarking)\", \"prompt\": \"Give me a list of some characteristics of a good coffee shop?\", \"watermarked_text_below\": \"1. Comfortable and inviting atmosphere: A good coffee shop should have a cozy and welcoming ambiance that makes customers feel at home. This includes comfortable seating, soft lighting, and pleasing decor.\\n\\n2. Quality coffee: The most important characteristic of a coffee shop is the quality of the coffee. It should be freshly brewed, made with high-quality beans, and served at the right temperature.\\n\\n3. Friendly and attentive staff: The staff at a coffee shop should be friendly, knowledgeable about the menu, and attentive to customers' needs. They should be able to make recommendations and provide excellent customer service.\\n\\n4. 
Variety of menu items: A good coffee shop should offer a variety of menu items, including breakfast, lunch, and snack options. This can include sandwiches, pastries, salads, and other light bites.\\n\\n5. Convenient location: A coffee shop should be located in a convenient and accessible location, such as a busy street or near a park or public transportation hub.\\n\\n6. Good music: A coffee shop should have a good selection of music that complements the atmosphere and appeals to customers. This can include classical, jazz, or contemporary music.\\n\\n7. Community involvement: A good coffee shop should be involved in the local community and support local events and organizations. This can include hosting events, sponsoring\"}", "{\"comment\": \"I lowered my rating because I revisited the paper and concluded that I didn't agree with my previous assessment.\\nThe realization that I had over-simplified the condition on distortion may have impacted this decision, but I'm not even certain of that.\\nI assure you that my score change was nothing personal (my feelings are not in any way hurt), and I apologize for changing my opinion on this paper after the initial review phase.\\n\\nYou're right, \\\"distortion-free\\\" does not have a single accepted definition. But the scheme of Christ et al. induces negligible observable change in the generated content, under no assumptions about the text (because of their entropy accounting strategy).\\nAnd most of the guarantees in the literature about distortion depend on some version of \\\"entropy\\\" of the text that is relatively interpretable and simple: Whether it's just constant min-entropy (Fairoze et al.), \\\"spike entropy\\\" (Kirchenbauer et al.), or entropy + some variability assumption (Zhao et al.).\\n\\nAnyway, I don't think it is necessary for this to be so unprofessional. 
I noticed that you have been similarly aggressive to Reviewer Lnyz, and I find it very inappropriate.\"}", "{\"summary\": \"The paper proposes a method for watermarking language models in a black-box setting. It only requires sampling output sequences from language models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method is effective in a black-box setting. It only requires sampling sequences from LLMs.\\n\\nThe paper provides formal guarantees for detection performance.\", \"weaknesses\": \"The paper\\u2019s motivation could be articulated more clearly. The main motivation stems from the security risks associated with providing API access that exposes logits to third-party users for applying their own watermark. However, simpler methods could enhance security; for instance, instead of exposing logits, LLMs could offer APIs to gather specific information users want to integrate. Furthermore, the paper presents a zero-bit watermarking technique, which only detects whether a text is watermarked but cannot infer additional information from the watermark.\\n\\nThe paper could also benefit from a more comprehensive evaluation. For example, comparing the time complexity of the proposed method with baselines and providing examples of watermarked text would strengthen the paper.\", \"questions\": \"Could you provide an example of watermarked text?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"SCHEME: aaronson\", \"prompt\": \"Give me a list of some characteristics of a good coffee shop?\", \"watermarked_text_below\": \"1. Comfortable and inviting atmosphere\\n2. Good quality coffee beans\\n3. Professional and friendly staff\\n4. A wide variety of coffee and food options\\n5. Specialty menu items and unique flavors\\n6. A clean and well-maintained space\\n7. A cozy and comfortable seating area\\n8. 
Free Wi-Fi and plenty of charging stations\\n9. A welcoming and inclusive environment\\n10. Reasonable prices for their food and beverages.\\n\\nThese are just a few of the characteristics of a good coffee shop, of course, tastes and preferences vary from person to person. Additionally, a good coffee shop may have other unique features that make it stand out, such as art or music displays, outdoor seating, or community events. Ultimately, the best coffee shop for you will depend on your individual values and preferences. Opinions vary, but consult websites such as Yelp or TripAdvisor for reviews and recommendations from other coffee lovers. Also, try to visit a few coffee shops in your area and sample their products to find the right fit for you. Happy coffee shopping! \\ud83d\\ude0a #collegenow #studentlife #coffee\"}", "{\"comment\": \"I noticed that you changed your score from 6 to 3 without any comment. Would you like to explain why you did so? I'm not seeing any technical basis for this change. Did you have your feelings hurt that I pointed out gaps in your understanding of this paper? If so, I would like to remind you that we're not in Kindergarten and you should reconsider your role as a reviewer -- your job is to point out technical issues with the paper and suggest improvements -- my job is to correct the paper or to correct you.\", \"the_conditions_for_distortion_free_are_actually_quite_mild_compared_to_other_distortion_free_theorems____the_assumption_essentially_boils_down_to\": \"you sample sequences i.i.d. from a language model, and conditioned on their frequency their n-grams are independent. This is quite simple, no? In the paper we are very clear:\\n\\n*Theorem 4.1 tells us that sampling tokens using our proposed scheme is, from a probabilistic\\nperspective, indistinguishable from sampling from the underlying model, with the caveat that the
If we dismiss hash\\ncollisions as very low probability events, then since the key is fixed, this reduces to the assumption\\nthat unique n-grams across the sampled sequences are independent. How strong of an assumption\\nthis is depends on many factors such as m, the underlying distribution, and the counts (c1, . . . , cj )\\nthemselves. One can construct cases where the assumption is reasonable and others where it is\\nblatantly violated (e.g. if n-grams within a sequence are strongly correlated). One direction to making\\nthe assumption more palatable is to draw a fresh keys i.i.d. for each hash call. This would obviously\\ndestroy detectability. As a trade-off, one can leverage a set of secret keys (i.e. by drawing keys\\nuniformly at random from a key set), which may reduce distortion, but will hurt detection as each key\\nin the set needs to be tested against.*\\n\\nOur experimental results also indicate minimal distortion per our evaluation metrics.\\n\\nMeanwhile, you say Christ, Gunn, and Zamir (2024) is distortion-free under no-assumptions about the text, which is yet again incorrect (unless I'm missing something). Can you elaborate further?\\nThe things to always keep in mind is 1) \\\"distortion-free\\\" is not a property that has a clear definition; different works define it differently. And 2) what is the detection performance with the proposed notion of \\\"distortion-free\\\"?\\n\\nFor example, Christ et. 
al defines \\\"undetectability\\\" as follows.\\n\\n\\n A watermarking scheme $\\\\mathcal{W} = \\\\text{(Setup, Wat, Detect)}$ is *undetectable* if \\n for every security parameter $\\\\lambda$ and all polynomial-time distinguishers $D$, $| P [D^{Model, \\\\bar{Model}}(1^\\\\lambda) \\\\to 1] - P_{sk \\\\gets Setup(1^\\\\lambda)} [D^{Model, Wat}_{sk} (1^\\\\lambda) \\\\to 1] | \\\\le negl(\\\\lambda) $, where the notation $D^{\\\\mathcal{O_1}, \\\\mathcal{O_2}}$ means that $D$ is allowed to adaptively query both $\\\\mathcal{O_1}$ and $\\\\mathcal{O_2}$ with arbitrary prompts.\\n\\nI find it comical that you describe our condition as \\\"extremely complicated\\\" but you cite the works of Christ et al.\"}", "{\"comment\": \"1. The flat scheme for k>1, e.g., k=32, increases the computational cost by several multiples. There doesn't seem to be any discussion around the computational costs. Is a 10x increase in computational cost acceptable for most API users?\\n\\n2. What makes it more blackbox? It seems like the distributions are approximated via Monte Carlo sampling, and standard watermarks are applied. You could approximate the distribution and apply Christ et al. on top. Are Monte Carlo approximations of LLM output distributions the novel contribution of this work?\"}", "{\"comment\": \"Thank you for the review.\\n\\n**Re: \\\"Neural Linguistic Steganography\\\" (Ziegler et al.)**\\n\\nThank you for this reference; we will add it to the Related Work.\\n\\n**RE: \\\"essentially identical to Aaronson's\\\"**\\n\\nWe argue that the method is not a straightforward adaptation of existing work and it is not essentially identical to Aaronson's. Namely, our scheme is black-box and works with arbitrary sequence lengths k; Aaronson's assumes white-box access and operates token-by-token. 
Furthermore, our algorithm works with arbitrary continuous distributions, can be applied recursively, and for scoring we suggest using p-values or log-likelihood tests, which is novel. We prove optimality for certain distributions and also present non-trivial performance bounds.

**RE: rejection sampling**

Contrary to what you state, rejection sampling is **not** used in the algorithm. We sample normally from the next-token distribution. We apply the usual Gumbel-max trick, just in a different way. See the proof in the Appendix for more details.

**RE: "distortion-free under certain assumptions about the text, which essentially translate to it having consistently high entropy"**

This is not true. The assumption is, as we state, "that the deduplicated seeds (determined by hashing the secret key and n-grams) across sequences are conditionally independent given the counts of the sampled sequences". All distortion-free theorems (to my knowledge) have assumptions -- for example, they might assume drawing fresh secret keys for each watermarking call, whereas in practice a single secret key is reused across calls.

---

**Summary:** The paper proposes an LLM watermarking scheme that is applicable in black-box scenarios, i.e., when the party watermarking the text does not have access to the sampling procedure, but also in standard white-box cases. The authors prove the distortion-free property and the lower bound on AUC. Extensive experiments evaluate, among other things, watermark TPR/FPR, text quality, and robustness under token replacement and paraphrasing.

**Soundness:** 3. **Presentation:** 3. **Contribution:** 3.

**Strengths:**
- While it is based on a generalization of ideas from existing schemes, the exact scheme proposed is, to the best of my knowledge, novel. The authors do a good job of exploring different variants of the scheme (e.g., CDF) in a principled way.
- The theoretical results are sound.
I especially appreciate that Theorem 4.2 is carefully placed into context and analyzed for various input values to demonstrate its implications.
- Experiments are very thorough, involve important aspects such as quality evaluation with LLM judges and paraphrasing attacks, and explore various scenarios and scheme ablations, making interesting observations.
- Whitebox results seem convincing (up to some reservations below), making the case for significance.
- While I have some issues with the method section (see below), the theory and experiments parts of the paper are very well written.

**Weaknesses:** As a meta point, the authors are using the 2024 style file and should update it to the latest version to avoid desk rejection. I understand that this is an honest mistake, but in particular the lack of the usual line numbers is making it hard to refer to particular parts of the writeup.

The weaknesses of the paper are, in my view:

(1) Limitations of the evaluation setup
- The authors recognize that AUC is not the most practically relevant metric, yet resolve this by proposing a new metric (AUC below fixed FPR) instead of using the more standard TPR @ fixed low FPR. As this is instantiated with a still-high FPR of 1%, the metric is still dominated by results at impractical FPRs. Can the authors elaborate on the decision to introduce this metric? Do the authors believe a false positive rate of 1% is a practical setting for real-world deployment?
- Prior work (Kirchenbauer 2023b among others) has already shown that short texts such as those studied here (~300 tokens) are not robust to paraphrasing, while passive adversaries (that do not learn the watermark beforehand) start being much less able to remove the best variants of KGW above ~600 tokens.
Can the authors extend their evaluation to include this setting and demonstrate that their watermark is equally or more robust?

(2) Despite being the title and the central framing of the paper, the practicality of the blackbox watermark is underdiscussed and not well substantiated. Perhaps framing the paper around the whitebox variant would have been more convincing. Namely:
- As the authors say, it can be hard to control token lengths of chat API responses. Further, and more importantly, it is not always possible to prefill the first $k$ tokens of the assistant response. This implies that the variant where $k$ is equal to the text length is the most practical one for blackbox models, yet it is not evaluated, and there is no detailed discussion of this. As already for $k=50$ we can at most get 70 pAUC, it is likely that the practical variant would either not obtain good results, or need very high $m$.
- A limitation of the blackbox setting that could be more explicitly mentioned/analyzed is that $len/k \cdot m$ queries are needed to produce one text. For the practical setting above with high $m$ this can be prohibitively expensive.
- The baselines (PostMark and Yang et al.) are not evaluated, yet they study the exact same blackbox setup. Can the authors explain this decision? Baselines being costly does not seem like a sound rationale, as they could still be evaluated along with their cost, which can then be compared to the cost of the proposed watermark.

(3) Minor writing issues around the method description. In particular, Sec. 3 is quite dense and not very friendly to readers aiming to understand the high-level idea behind the watermark. For example, $u_t$ is simply introduced, but its components could be explained more intuitively, perhaps even through an example or supporting figures, which are notably missing.
Detail: $g(w)$ is introduced but not used later.

Minor writing suggestions that are not treated as weaknesses:
- For consistency with prior work, it would be good to use the more standard scheme names such as KGW self-hash and ITS/EXP instead of introducing the new aliases KB and K.
- It would be beneficial to label $m$ and $\delta$ in Table 1, as it is not immediately clear what they represent.
- In the "hyperparameters" section of the evaluation, it should be explicit that $F_k$, if I am not mistaken, is not chosen, but simply follows from the choice of $F$.

**Update:** The authors' repeated insults towards the reviewers and their highly inappropriate communication below clearly violate the code of conduct. This overshadows any technical merit of the paper and prevents me from engaging in discussion; I have updated my score accordingly.

**Questions:** All questions I list here are repeated from the "Weaknesses" section above:
- Can the authors elaborate on the decision to introduce the AUC until fixed FPR metric?
- Do the authors believe a false positive rate of 1% is a practical setting for real-world deployment?
- Can the authors extend their paraphrasing robustness evaluation to include longer texts and demonstrate that their watermark is as robust as the best variants from prior work?
- Can the authors comment on the discrepancy between the blackbox-focused framing of the earlier sections of the paper, and the key results demonstrated and discussed in Sec.
5 being in the whitebox case?
- Can the authors comment on the statement that $k$ below the text length $L$ is not as practical in the blackbox case, and include some experiments in the $k=L$ case?
- Can the authors compare their method to the cited blackbox baselines or explain why this is not feasible?

**Flag for ethics review:** No ethics review needed. **Rating:** 1. **Confidence:** 5. **Code of conduct:** Yes.

---

**RE: They have theorems about what their robustness guarantee is (look for the term "substring-complete"), and in particular they allow adversaries that can do everything while keeping a sufficiently large sub-string untouched**

This is yet again incorrect. I did what you suggested and looked for the term "substring-complete". I think you are misunderstanding their statements. Unless I'm missing something, what they're doing is qualifying detectability in terms of these substrings which carry high entropy: "This means that every contiguous part of an output of the watermarking procedure, that has high enough empirical entropy, is detected as watermarked with high probability."

As noted above, we also qualify detectability in terms of entropy, and we can easily make a statement similar to theirs:

*Every contiguous $T$ tokens from the output of our watermarking procedure can be detected at a rate given in Theorem 4.2.*

Furthermore, you say "in light of the recent developments (such as Christ et al) it is more important to address robustness using proofs rather than experiments".

Good sir, Christ et. al themselves explicitly say in Section 6 that they don't have proofs on robustness:

*A natural question is how robust an undetectable watermarking scheme can be to active attempts to remove it.
While we would ideally like to have an undetectable watermarking scheme that is robust to any efficient adversary attempting to remove a watermark, there are both practical and theoretical barriers to achieving this property. In Section 6.1 we first describe several attacks that work well at removing watermarks in practice. Then in Section 6.2 we present an (expensive) attack that provably removes a watermark from any undetectable scheme. We conclude that no undetectable watermarking scheme can be completely unremovable. Still, it might require significantly more resources for a user to generate unwatermarked text from the model.*

Meanwhile, we study the empirical performance of two common attack forms that the community agrees are reasonable -- random token substitution **and paraphrasing**.

**RE: continuous distributions**

Thanks for understanding our clarification on discrete vs. continuous distributions (although I am still confused by your statement "when the only thing you need from a distribution is to be continuous, this is something that looks unusual").

**RE: "Having experiments is a plus, but when a major selling point is being black-box, one needs to have a better comparison to previous works that are black-box..."**

I guess this is a moot point, as we discussed above that Christ et.
al is not black-box.

---

**Summary:** A method of generating watermarked text using query access to a language model is described. The method works by auto-regressively sampling short sequences of tokens, selecting the sequence with the highest watermark score. The watermark score is similar to Aaronson's.

**Soundness:** 4. **Presentation:** 3. **Contribution:** 2.

**Strengths:** The paper seems to do a good job of optimizing both their scheme and the schemes they compare against. In particular, it is interesting that making the watermark detector of Aaronson length-aware improves performance as much as it does.

**Weaknesses:** The ideas and method are straightforward adaptations of existing work. The technique is essentially identical to Aaronson's, except that they use rejection sampling instead of the Gumbel-max trick. The scheme is also only distortion-free under certain assumptions about the text, which essentially translate to it having consistently high entropy.

**Questions:** The "Related Work" section appears to suggest that Aaronson and Kirchenbauer et al. were the first to embed information in LLM outputs. However, the paper "Neural Linguistic Steganography" did this as early as 2019.

**Flag for ethics review:** No ethics review needed. **Rating:** 3. **Confidence:** 3. **Code of conduct:** Yes.

---

Hi! The author response window is coming to a close very soon, and we want to check whether you got a chance to read our latest response and whether there are any outstanding questions or concerns you might have that we didn't already address. Furthermore, there were some clarifying questions we asked -- it would be wonderful if you could answer them!

---

SCHEME: Flat (k=1, n=1024)

Prompt: Give me a list of some characteristics of a good coffee shop?

Watermarked text below:

1.
Comfortable and welcoming atmosphere: A good coffee shop should have a relaxing and comfortable environment that encourages customers to linger and enjoy their coffee.
2. Quality coffee: The coffee should be of high quality and freshly brewed. Baristas should be skilled and able to make a variety of coffee drinks to customers' liking.
3. Good food options: A good coffee shop should offer a variety of food options, from pastries and sandwiches to salads and soups. The food should be of good quality and complement the coffee.
4. Friendly and attentive staff: The staff should be friendly and attentive to customers' needs. They should be able to make recommendations and provide excellent service.
5. Wide range of drinks: A good coffee shop should offer a wide range of drinks, such as tea, hot chocolate, smoothies, and ice coffees.
6. A quiet and efficient workspace: The coffee shop should have a quiet and efficient workspace for customers who need to work while they sip their coffees.
7. Good music and ambiance: A good coffee shop should have a relaxing and comfortable ambiance, with good music and lighting that complement the overall experience.
8. A range of seating options: The coffee shop should have a range of seating options, including tables, sofas, and armchairs, to cater to customers' preferences and needs.

---

**RE: 1**

See the response to Reviewer NW1g for the computational cost of scoring. For generation, it's trivial; naively, it's just $m$-fold ($m$ being the number of sequences sampled). OpenAI has a field in their API to specify the number of responses that should be returned: https://platform.openai.com/docs/api-reference/chat/create#chat-create-n. In their API, the cost is determined by the number of generated tokens, so it will also be $m$ times higher. Whether this is acceptable depends on the user.
In the motivating example (see the response to Reviewer HaZA), the user is a legal genAI startup that's building a ChatGPT wrapper and wants to apply their own watermark -- it's not at all unreasonable to assume this kind of party would pay up to employ distortion-free watermarking. If you want to be "distortion-free", you need to be sampling from the underlying LLM distribution in some way -- this is the price to pay. In the Appendix, under extensions, we discuss alternatives, which I've copy/pasted below:

*Beam search. Rather than drawing i.i.d. samples from the model, one can apply our watermark selection to the sequences that arise from beam search, with the caveat that this would violate our distortion-free property.*

*Paraphrasing. Thus far, we assumed the service provides m draws from the LLM. If m is large, this can be prohibitively expensive. The resource-constrained may consider the following alternative: draw one sample from the LLM and feed it to a much cheaper paraphrasing model to generate m paraphrases. The downside is that there may be a lot of duplicate n-grams across the candidate set.*

**RE: 2**

When $k=1$ and $F=U(0,1)$, only the *encoding* of the *flat* scheme looks like a Monte-Carlo estimate of Aaronson's. We say this explicitly in the text at the bottom of page 6:

*Remark: If $k=1$ and $F = U(0, 1)$, then our watermark encoding can be viewed as a stochastic version of Aaronson's.
As $m\to\infty$, $c_t / m \overset{a.s.}{\to} p_t$, where $p_t$ and $c_t$ are the probability and observed occurrences of token $t$.*

Novelty... a watermarking scheme that operates at a sequence level, is distortion-free, can be chained iteratively or recursively, uses arbitrary continuous distributions (whose role is studied experimentally) + original scoring based on $p$-values and Fisher's method + theorems that clearly guarantee a non-vacuous minimum ROC-AUC performance + formulating the *optimal* statistical detection test for a specific choice of distributions wherein exact TPR and FPR rates are provided + diligent and comprehensive experimental evaluations...?

**RE: "approximate the distribution and apply Christ et. al on top"**

Can you precisely write down the algorithm you have in mind (an algorithm block for both encoding and decoding, with hyper-parameters specified) and we can analyze it?

---

SCHEME: Kuditipudi

Prompt: Give me a list of some characteristics of a good coffee shop?

Watermarked text below:

A good coffee shop should have the following characteristics:

1. Quality coffee: A coffee shop should serve high-quality coffee that is well-roasted and brewed to perfection.
2. Comfortable atmosphere: The shop should have a cozy and inviting atmosphere that attracts customers for their morning coffee routine or a mid-day break.
3. Variety of beverages: Apart from coffee, a good coffee shop should also offer a variety of other beverages like tea, hot chocolate, and smoothies.
4. Friendly and attentive staff: The staff should be friendly, attentive, and knowledgeable about the menu.
5. Adequate seating: A coffee shop should have enough seating to accommodate customers who want to sit and enjoy their coffee.
6. Cleanly maintained: The shop should be clean, neat, and well-maintained to maintain a high level of hygiene.
7.
Elegant and sophisticated decor: The decor should be elegant and sophisticated, giving the customers a sense of style and class.
8. Free or fast Wi-Fi: A coffee shop should offer free or fast and reliable Wi-Fi so customers can stay connected while sipping on their coffee.

---

**Re:** I don't think the tone of the discussions is heading in the right direction. A paper's appeal to the crypto community (while it is published in COLT) is not a downside for the learning community. And I certainly don't find the "esoteric cryptography jargon" reference constructive. I invite the authors to be calmer and focused on the discussion, using a respectful tone. I certainly did not mean to demean the paper's contributions. For each of the items of the discussion, for simplicity, I will only quote the short phrase you used and avoid further text.

> RE: "The presentation lacks formality. Instead of introducing ideas one by one..."

It is great that you plan to make the paper more readable by being more careful about how to explain ideas in a more accessible way.

> RE: your reference

The schemes of that paper are actually black-box, if you assume the entropy is not too small in the output of the model. They first describe their (what you below call truly black-box) scheme based on this assumption, and when they want to remove this (actually very natural and even necessary) assumption, they describe a scheme that accumulates the entropies; but even in this case they do not need the model parameters, and only use the model's next-token distributions as a black box (which determines the entropy too). Also, note that if there is not enough entropy in the model's outputs, the watermarking becomes meaningless, as the sentence is predictable information-theoretically (see the sentence "we show that it is inherent that ..." in their Section 2.6 about the necessity of entropy).
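As an aside, the entropy point both sides keep circling can be illustrated with a toy simulation of the standard Gumbel-max (Aaronson-style) selection -- a sketch of the general mechanism only, not the exact construction of either paper under discussion: with a near-deterministic next-token distribution the watermarked score matches the unwatermarked baseline (detection is information-theoretically hopeless), while a high-entropy distribution inflates it.

```python
import numpy as np

def gumbel_watermark_step(p, rng):
    """One Gumbel-max (Aaronson-style) step: given next-token distribution p,
    draw per-token uniforms u and select argmax_t u_t^(1/p_t).
    By the Gumbel-max trick the selected token is distributed exactly as p,
    so the selection itself is distortion-free."""
    u = rng.uniform(size=len(p))
    token = int(np.argmax(np.log(u) / p))  # same argmax as u ** (1/p)
    score = -np.log(1.0 - u[token])        # detector's per-token score
    return token, score

def simulate(p, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    draws = [gumbel_watermark_step(p, rng) for _ in range(n_steps)]
    tokens = np.array([t for t, _ in draws])
    return tokens, float(np.mean([s for _, s in draws]))

# Near-zero entropy: token 0 has probability 0.999.
toks_lo, score_lo = simulate(np.array([0.999, 0.001]), 5000)
# High entropy: uniform over 100 tokens.
_, score_hi = simulate(np.full(100, 0.01), 5000)
# Unwatermarked text has mean score E[-log(1-U)] = 1 in both cases;
# score_lo stays near 1 (undetectable), score_hi lands far above it.
```

The low-entropy run also confirms distortion-freeness: token 0 is still selected about 99.9% of the time, yet its score distribution is indistinguishable from the unwatermarked baseline.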
I only brought up this reference because you make the black-box-ness of your scheme a main focus of your paper (mentioning this in the title and abstract), and I was struggling to appreciate how novel this aspect actually is.

> RE: "It is provably robust as opposed to the weaker model studied here"

They have theorems about what their robustness guarantee is (look for the term "substring-complete"), and in particular they allow adversaries that can do everything while keeping a sufficiently large sub-string untouched. You also have adversaries, but as far as I understand, yours only use "random substitution" rather than "worst-case substitution", and again as far as I understand you do not have a theorem to prove your robustness. In fact, in light of the recent developments (such as the works of Christ et al) it is more important to address robustness using proofs rather than experiments, as there could always be a next adversary that does not fit into the experiment.

> RE: "issues with crypto terms...I have no idea what this sentence means"

Thanks for sharing the code. But please note that I did not claim that you don't know how to implement your algorithms. What I respectfully complained about was the English description of it in the paper, which is a necessary thing to have and I did not find it clear.

> RE: "PRF, PNG, etc"

Thanks for planning to work on improving the presentation of the paper.

> RE: "when it comes to efficient algorithms, none are actually continuous (everything is discrete), so this is a strange assumption to make"

What I was trying to say was that when the only thing you need from a distribution is that it is continuous, this looks unusual. In particular, it is true that all the distributions we actually work with are discrete.
Yet, you are right that we use things like "uniform distribution over [0,1]" abundantly, and they are sometimes very useful, but my question was about a case in which the only thing needed was that it is continuous. Having said that, you clarified the reason here more; thanks.

> RE: "our contribution compared to Christ et. al"

Having experiments is a plus, but when a major selling point is being black-box, one needs to have a better comparison with previous work that is black-box and discuss the exact details that make one work (here yours) more black-box.

---

I apologize for the tone, which may come off as a bit provocative. Christ et. al is a wonderful paper, and that it was published in COLT corroborates this. What I'm saying is that the paper does not have experiments, so there are no empirical grounds to believe the scheme works in practice.

**RE: "the schemes of that paper are actually black-box"**

This is, respectfully, not true. You say "even in this case they do not need [the] model parameters, and only use the model's next-token distributions as black-box". How would you obtain the next-token probabilities without access to the model weights, i.e., if you are not the model owner? The main LLM providers do not offer this in their APIs. This is the first thing we discuss in our introduction section, defining "white-box" schemes as those that need the next-token logits / probabilities:

*Furthermore, the detecting party may or may not have white-box access (e.g. an ability to compute log-probabilities) to the generator they wish to test against.
Typically, parties that have white-box access are the owners of the model, so we refer to this case as first-party detection and the counterpart as third-party detection... Most proposed techniques do not modify the underlying LLM's model weights or its training procedure, but rather inject the watermark during autoregressive decoding at inference time. They require access to the next-token logits and inject the watermark at every step of the sampling loop. This required access prevents third-party users of an LLM from applying their own watermark, as proprietary APIs currently do not support this option. Supporting this functionality presents a security risk in addition to significant engineering considerations. Concretely, Carlini et al. (2024) showed that parts of a production language model can be stolen from API access that exposes logits. In this work, we propose a watermarking scheme that gives power back to the people -- third-party users can watermark a language model given nothing more than the ability to sample sequences from it. Our scheme is faithful to the underlying language model and it can outperform existing white-box schemes.*

---

**Re:** So we are converging. Please correct me if I am wrong:
1. Christ et al would be (fully) black-box if we start with the assumption that there is enough entropy in the output.
2. In your scheme, "entropy improves detection performance." So, does that mean that when there is less entropy, your detection becomes worse?
3. One can also heuristically use Christ et al without knowing the entropies, and end up with a scheme where "entropy improves detection performance."

---

SCHEME: KB ($\delta=2$)

Prompt: Give me a list of some characteristics of a good coffee shop?

Watermarked text below:

Answer: 1. Comfortable and inviting atmosphere 2. High-quality coffee and food 3. Knowledgeable and friendly staff 4. Adequate seating 5.
A variety of beverage and food options 6. Wide range of fresh ingredients 7. A menu that caters to different dietary needs 8. Competitive prices 9. A clean and well-maintained space 10. A great selection of pastries and baked goods. Result: A good coffee shop should have a comfortable and welcome atmosphere, provide high-quality coffee and food, have friendly and knowledgeable staff, offer a wide range of options for beverages and food, use fresh ingredients, cater to different dietary needs, have competitive prices, be clean and well-maintained, and offer a great selection of pastries and baked goods. #CoffeeShop #Characteristics #Qualities #HighQuality #Comfortable #Inviting #Friendly #Knowledgeable #FreshIngredients #DietaryNeeds #Pricing #Cleanliness #Pastries #BakedGoods #Cafeteria #Restaurant

---

Thank you for the review.

**RE: "The presentation lacks formality. Instead of introducing ideas one by one..."**

Ideas are formally introduced one by one. We agree it can be on the terse side (and the Algorithm block helps with this, though it had to be put in the Appendix because of space constraints). This is because we have a lot of key content that should be in the main text. Another reviewer raised a similar concern, so we will make a bit more space by being clever about where we compress and by moving a theorem to the Appendix, and we will expand the description along with the motivation a bit more.

**RE: your reference**

I'm curious, did you actually read that paper? If so, you will have observed that Algorithm 3 on pg. 16 is not black-box because it requires computing log-probabilities under the model. The same is true of Algorithm 1, which needs to compute entropy under the model.
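To make the access-model distinction at issue concrete, here is a minimal sketch of black-box watermark selection in the spirit of the scheme debated in this thread (a simplification with hypothetical names, not the paper's exact algorithm): the encoder never sees logits or entropies, only an opaque sampler, and selects among $m$ candidate sequences by a keyed n-gram score.

```python
import hashlib
import numpy as np

def keyed_uniform(key: str, ngram: tuple) -> float:
    """Pseudorandom uniform in [0, 1) derived by hashing a secret key and an n-gram."""
    digest = hashlib.sha256((key + "|" + str(ngram)).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def blackbox_select(sample_fn, m: int, key: str, n: int = 4):
    """Draw m candidate token sequences from an opaque sampler and return the
    one with the highest mean keyed n-gram score. Near the left boundary,
    shorter l-grams (l < n) are used, as described elsewhere in the thread."""
    candidates = [sample_fn() for _ in range(m)]

    def score(seq):
        grams = [tuple(seq[max(0, i - n + 1): i + 1]) for i in range(len(seq))]
        return float(np.mean([keyed_uniform(key, g) for g in grams]))

    return max(candidates, key=score)

# Toy usage: the "model" is just an opaque random sequence generator.
rng = np.random.default_rng(0)
pick = blackbox_select(lambda: tuple(rng.integers(0, 50, size=10)), m=8, key="secret")
```

Note what the function signature makes explicit: `sample_fn` is the only access to the model, whereas a white-box scheme would need the full next-token probability vector at every decoding step.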
While the paper may be interesting to the cryptographic community, it has no experiments or evals whatsoever, which limits its impact in the machine learning community.

**RE: "It is provably robust as opposed to the weaker model studied here"**

Can you point me to the line in their text that says their proposed method is provably more robust than the one we propose here?

**RE: "issues with crypto terms...I have no idea what this sentence means"**

You can imagine my confusion if you struggle to grasp this but confidently cite the works of Christ et. al. If you're also familiar with numpy and Python, this would look roughly like:

```python
import numpy as np

x = (1, 2, 3, 4)        # some n-gram
pk = 'notsoprivateeh'   # private key
seed = abs(hash(str(x) + pk))
rng = np.random.default_rng(seed)
u = rng.uniform()
```

In practice we use the hashlib library (hashlib.sha256()) to be more correct. I think what could make the paper better is adding a reference implementation in Python in the Appendix, which we can easily do for the camera-ready.

**RE: "PRF, PNG, etc"**

The goal of the paper is to introduce an effective practical algorithm and prove practically useful theoretical statements about it while being formally correct. We do not want to obfuscate it or get bogged down in esoteric cryptography jargon. With that said, we will clarify and elaborate more on this point in the main text.

**RE: "n-grams" and "l-grams" and "super informal and cannot be formally understood or checked"**

The n-grams are on the tokens, as we clearly state in the text.
We also clearly state: "l-grams are taken instead for boundary indices with only l-1 < n-1 eligible tokens strictly left of it." Let's walk through an example together.

Suppose the prompt is "How good is this review?" and the sampled response you wish to score is "This is a very low quality review.", the tokenization is at the word level, resulting in token ids (1, 2, 3, 4, 5, 6, 7), and the n in n-gram is set to 4. Then what our description is saying is that the basis for computing the score for token 1 is the l-gram (1,). Why? Because this token has 0 tokens strictly left of it, so l-1=0 implies l=1 (add 1 to both sides of the equation). Overall, I think the formal description and the algorithm are prescriptive enough for anyone to implement the scheme exactly. With that said, there is no harm in adding a code implementation, which we will do.

**RE: "What is F" in Theorem 4.1. "Why does it need to be continuous"**

We clearly state "F is a continuous distribution". We repeatedly mention across the text that F is a CDF: e.g. "If F is a cumulative distribution function (CDF)...". At the end of the day we need to obtain a U(0, 1) RV from the token-level scores, which we do by applying $F_k$ (also continuous). From basic probability, we know that if X is a continuous RV with CDF F, then F(X) is U(0, 1) (more here: https://en.wikipedia.org/wiki/Inverse_transform_sampling).

**RE: "when it comes to efficient algorithms, none are actually continuous (everything is discrete), so this is a strange assumption to make"**

If I understand your logic correctly, then it would be quite strange to mention real numbers and computers in the same sentence, because a 32-bit floating point number is discrete and real numbers are continuous. If you mean that the CDF F is hard to evaluate when F is continuous, what about when F = U(0, 1)? Then F(x) = x. Do you think this is efficient to evaluate?
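The probability-integral-transform fact invoked above -- if $X$ is continuous with CDF $F$, then $F(X) \sim U(0,1)$ -- is also cheap to check numerically; a quick sketch using the exponential distribution as $F$:

```python
import numpy as np

# Draw X ~ Exponential(1), whose CDF is F(x) = 1 - exp(-x),
# and check that F(X) behaves like a U(0, 1) variable.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)
u = 1.0 - np.exp(-x)  # probability integral transform F(X)

print(round(u.mean(), 3), round(u.var(), 3))  # close to 0.5 and 1/12 ≈ 0.083
```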
Please correct or qualify your statement more.

**RE: "our contribution compared to Christ et. al"**

Hmm, well, our scheme is truly black-box, has practical theoretical guarantees, and, most importantly, was evaluated and benchmarked against state-of-the-art baselines on a couple of the best 7B models, on datasets representative of real-world usage, and we include a slew of ablations to understand the mechanisms and tradeoffs at play. How does that sound?

---

Thank you for the prompt reply and for offering to potentially increase your score.

**RE: "apply the watermark after the entire text has been compiled"**

The watermark encoding is as follows: suppose the prompt is "Write me a poem about rabbits and toads". The user will call the API (e.g. ChatGPT) with this prompt $m$ times, and then select one of the $m$ responses -- which one is determined by the algorithm we propose and the secret key that only the user knows. If the API allows users to specify a max response length of, say, $k$, then the framework allows a stronger watermark to be embedded by iteratively applying this procedure in chunks of length $k$. For example:

1) Call with "Write me a poem about rabbits and toads".
Of the $m$ responses, the procedure tells you to pick "In the meadow where the clover grows, The rabbits hop in gentle rows, With ears so long and fur so fine,"

2) Call with "Write me a poem about rabbits and toads.
In the meadow where the clover grows, The rabbits hop in gentle rows, With ears so long and fur so fine,\\\".\\nOf the $m$ responses, the procedure tells you to pick \\\"They dance beneath the moon\\u2019s soft shine, Their paws tap lightly on the ground, A fleeting whisper, no sound.\\\"\\nSo on and so forth.\\n\\n\\\"Fingerprinting\\\" can mean many things -- do you have a particular definition in mind?\\n\\n\\n**RE: paraphrasing**\\n\\nParaphrasing is a difficult attack for all watermarks studied, which is corroborated by this work as well: https://www.nature.com/articles/s41586-024-08025-4. How to combat this? I think we should temper our expectations from watermarking. One can consider combining watermark detection with non-watermark detection strategies, such as Binoculars (https://arxiv.org/abs/2401.12070) (although this one tries to detect AI generated vs. human generated, as opposed to AI generated by *your model* vs not AI generated by *your model*, which is what watermarking is suited for). Combining approaches is not well studied and can make for interesting future work. Additionally, take a look at the \\\"Extensions\\\" in the Appendix where we write:\\n\\n\\n*Semantic watermarking. Rather than use n-grams, the watermarker can extract a set of meaningful\\nsemantic units for each sampled text. Robustness may be improved as these units will largely remain\\nintact under an attack like paraphrasing. On the other hand, many of the sampled sequences will have\\nthe same meaning, so there may be a lot of duplicate units across the candidate sequences, which\\nwould degrade the watermark strength.*\\n\\nIn this work we use n-grams as the basis for the scoring -- one can consider picking the units such that many of them would remain unchanged during the paraphrasing attack. This is interesting future work.\\n\\n\\n\\n**RE: \\\"Please improve the structure of the paper, as this will further strengthen its impact. 
Once the updates are made, I will be happy to reassess and increase my scores accordingly\\\"**\\n\\nYou are referring to our comment \\\"RE: \\\"Experiments and General Format of the Paper\\\", right?\\nIs it ok if we make these changes for the camera-ready instead of right now? I'd like to bundle all changes into a single revision, and I'm still waiting to hear from the other reviewers. I think this is a reasonable request because your ask isn't for new experimental results whose outcome may sway you one way or the other -- it's just massaging the text in a predictable way.\"}", "{\"title\": \"Re:\", \"comment\": \"Thanks. I am trying to wrap up my understanding of the main issue I have.\\n\\nPlease correct me if I am wrong, but my understanding is that \\n\\n1. the work of Christ et al does give a distortion free and fully black-box scheme, when we *assume* the entropy to be sufficiently large in a specific way (say in each sufficiently large block) because they do not need to accumulate the entropies anymore. This is basically their first simplified scheme.\\n\\n2. Your work also makes assumption that \\\"that the deduplicated seeds (determined by hashing the secret key and n-grams) across sequences, are conditionally independent given the counts of the sampled sequences\\\"\\\"\\n\\nSo, I still don't get it when you say your scheme is the first purely black-box, because both works are purely black-box based on their own assumptions. Also, to be honest, I understand the naturality of their paper as it is quite simple (and they justify it by arguing that entropy is necessary, in some sense), but I cannot still say the same about yours, despite trying and reading the discussions on this. But I will try more.\\n\\nps. it would have been great, if we could see a more accessible discussion on *comparing* to see which assumptions are technically weaker.\"}", "{\"comment\": \"Thank you for the review!\\n\\n\\n**RE: practicality**\\n\\nGreat point. 
Here's an example: you have a startup and your product is a chatGPT wrapper tool that enables lawyers to search and find key legal literature more easily --- for example, the customer may describe a case they're working on and use your tool to find the most relevant cases in the past. Your backend may look like: 1) query comes in, 2) you grab relevant chunks of text from an internal repository of legal documents, and 3) you ask chatGPT to summarize the relevant text chunks. You have a terms-of-service that prevents customers from publicly posting answers from your tool online, and you wish to enforce this somehow. Our method gives you a way to do so: you can apply our scheme to that chatGPT call using a secret key that only you know. Potential attackers will not have access to the watermark even if they know you're using our scheme because crucially they don't have access to your secret key.\\nWe can add motivation like this to the camera-ready.\\n\\n\\n**RE: \\\"Experiments and General Format of the Paper\\\"**\\n\\nYou make great suggestions; thank you for this. The reason the paper feels terse is that there is a lot of content to fit in the main text. We already had to begrudgingly move the algorithm block to the Appendix to free up space. Meanwhile, we thought the theorems are interesting and critical to the narrative, so we were hesitant to move them to the Appendix, but given your concern, that's something we can do (maybe Theorem 4.4 which is less important). Bundling 5.3 into earlier experimental sections as you suggest will free up a few lines for more motivation as to when and why our scheme is useful.\\n\\n\\n**RE: results**\\n\\n\\nFirstly, KB is a white-box method, so one can argue it's not fair to compare a black-box scheme with a white-box one, since the latter strictly offers more leverage / statistical power in watermarking. Secondly, KB is distortionary while ours is not. 
With that said, the goal was never to beat KB on detectability -- that would simply be infeasible. If you increase KB's bias term $\\\\delta$ to infinity, then an extremely strong watermark is left behind but the quality of the text will also be very low. Thus, detection performance (be it ROC-AUC or pAUC) needs to be looked at **in conjunction** with quality and diversity metrics. If you do that, then you will see our scheme shines. We say in the paper: \\\"When k = 1, m = 1024, we are able to achieve better perplexity (2.61 vs. 2.81), better diversity (62.2% vs. 45.3% on best-of-3 win rates) and comparable detection performance than Aaronson (2023). Furthermore, it has better perplexity (2.61 vs. 3.55) and detection performance (97.7% vs. 87.8% AUC) than Kuditipudi et al. (2023). By cranking up $\\\\delta$, Kirchenbauer et al. (2023a) can achieve strong detection but at the expense of perplexity. When\\nmatched on perplexity, we achieve better detection. For example, $\\\\delta$ = 0.5 achieves 3.39 PPL and\\n73.2% AUC compared to our 2.61 PPL and 97.7% AUC.\\\"\\n\\nFurthermore, if you compare KB vs our scheme in Table 1, you will see ours outperforming in the metrics:\\n\\nKB ($\\\\delta$ = 0.5): PPL=3.39, WR=49.6, WR-3 = 66.6, AUC=73.2, pAUC=52.0\\n\\nFlat (k=1, m=4): PPL=3.36, WR=50.8, WR-3=67.0, AUC=95.8, pAUC=82.9\\n\\n\\nYou suggest \\\"LLM-Judge\\\" -- note that we already use Gemini as an LLM judge to compute win rates. See the quality section: \\\"We instead opt for using Gemini-1.5-Pro as an LLM judge and compute pairwise win rates for each watermark strategy against no watermarking (greedy decoding). We do this in two ways for each scheme \\u2014 (1) we compute win rates using a single response for each prompt and (2) we first ask the LLM judge to pick the best of 3 responses for each prompt and compute win rates using the best response. 
(2) represents the common practice of sampling a few generations from the LLM and selecting the best one using some criterion. It captures diversity, as methods that can express an answer in a few different good ways will have an advantage. A caveat with win rates is that they may not reflect the degree by which one method is better or worse. For instance, if one strategy\\u2019s output was always marginally worse than no watermarking, the win rate would be 0% \\u2014 the same as if it were much worse\\\"\\n\\n\\n**RE: \\\"examples of unwatermarked and watermarked text\\\"**\\n\\nGreat suggestion! We have included watermarking samples in the response to all authors, as this was a point that came up in other reviews as well.\"}", "{\"comment\": \"**RE: \\\"also note that if there is not enough entropy in the model's outputs the watermarking becomes meaningless..'in their Section 2.6 about the necessity of entropy'\\\"**\\n\\nYou are correct that entropy is necessary for effective watermarking. This is common knowledge to anyone versed in the literature. We describe the role of entropy in our scheme at great lengths and quantify the performance of the algorithm directly in **terms of entropy** in Theorem 4.2.\\n\\n*Theorem 4.2 connects detection performance to the language model's underlying distribution, number of sampled tokens $m$, and number of test samples $T$. More entropy and more test samples guarantee higher performance. When the model is extremely confident, $\\\\alpha \\\\to 0$ and so does our lower bound. Note that because $\\\\alpha$ measures the entropy of the empirical distribution arising from \\\\emph{sampling} tokens, it depends on both the underlying next-token probability distribution as well as $m$. Concretely, when conditioned on the next-token probabilities $p$, $c \\\\sim \\\\text{Multinomial}\\\\left(m, p\\\\right)$. 
The largest $\\\\alpha$ is achieved when the $c_i$'s are 1, which can occur when the underlying distribution is uniform (maximal uncertainty) and/or $m$ is not large. In this case, $\\\\alpha \\\\to \\\\log(m)$ and our bound goes to $1/\\\\left(1 + 1/\\\\left(3T\\\\left(\\\\frac{m}{m+1} - \\\\frac{1}{2}\\\\right)^2\\\\right)\\\\right)$. This quantity has very sharp diminishing returns with respect to $m$, so there may be little value in increasing $m$ beyond a certain point. When $m \\\\to \\\\infty$, the bound goes to $1/(1 + 4/(3T))$, which increases very quickly with $T$. A mere 50 test tokens guarantees at least 97% ROC-AUC. We study the interplay of the various factors on our lower bound more carefully in the Appendix.*\\n\\nNot only that, we show the role of entropy on the *empirical* performance in Figure 1:\\n\\n*\\\"Bottom Left: AUC (mixed T\\u2019s) as a function of the average non-watermarked response entropy of the examples used in the calculation. x-coordinate x corresponds to the bucket of examples whose entropy is between [x \\u2212 0.25, x] nats.*\\n\\n*Entropy improves detection performance. In Figure 1 we bucket prompts based on the entropy of their non-watermarked response and then look at detection AUC on samples in each bucket. As we expect, detection improves when the prompts confer more entropy in the response. This trend is more stark for our method.*\\n\\nAnd then again in the Appendix, Section A.5\\n\\n*Figure 2 shows the effect of varying the amount of random token corruption on detection pAUC. We observe the same trend as for AUC. Figure 3 plots a histogram of the entropy of the underlying next-token probability distribution under temperature 1 random sampling without watermarking across our dataset. We see the entropy is concentrated between 0.5 and 3 nats. 
We plot the AUC lower bound predicted by Theorem 4.2 (k = 1, m = 1024) sweeping our entropy term \\u03b1 across this range, with the understanding that for sufficiently large m, our \\u03b1 is a good estimator of the true underlying entropy. In Figure 4 we look at the impact of m and T on our AUC bound when the optimal \\u03b1 = log(m) is plugged in. We see sharp diminishing returns w.r.t. m (performance saturates after around m = 10 for all T\\u2019s). We empirically observe this saturation in Table 1, where AUC saturated at 97.7% at m = 16 \\u2014 that is, increasing m beyond 16 had negligible impact. Furthermore, we observe that the bound increases sharply with T, corroborating the trend we see empirically in Figure 1.*\\n\\nThere is even more discussion about entropy -- check the Appendix out. The word \\\"entropy\\\" is repeated 19 times throughout our text.\"}", "{\"comment\": \"Thank you for your responses. I now have a much clearer understanding of the approach and the motivation. Regarding the structure of the paper, it totally makes sense to update the paper after other reviewers have responded. So, I will be increasing my rating in anticipation of the revised manuscript. However, I will maintain the current scores for soundness and presentation until the proposed structural adjustments have been implemented. Well done!, this is really good work!\"}", "{\"title\": \"Watermarking Samples\", \"comment\": \"We address two concerns that multiple reviewers brought up.\\n1) While we already have pseudo code in the algorithm block in the Appendix, we will add a reference implementation in PyTorch to help understanding and reproducibility of the method.\\n2) In the thread below we provide examples of watermarked samples under the different schemes we test against.\"}", "{\"comment\": \"Thank you for your responses. Based on the explanations provided, I am inclined to increase my confidence at this stage. Please answer the following:\\n\\n1. 
It appears that your watermark is a text-based watermark, which explains its distortion-free and quality-preserving properties\\u2014I'm sold! Just to clarify, you only apply the watermark after the entire text has been compiled, correct? If so, this seems conceptually similar to fingerprinting. If it is not, could you elaborate on the difference between your approach and fingerprinting? \\n\\n2. The detection efficiency is impressive, but paraphrasing attacks could significantly diminish detection rates, potentially reducing them to zero if the paraphrasing is sufficiently advanced. This is basically the Achilles heel of distortion free watermarks. How do you propose to combat this?\\n\\nI am now sold by the motivation behind this. Please improve the structure of the paper, as this will further strengthen its impact. Once the updates are made, I will be happy to reassess and increase my scores accordingly. Kudos\"}", "{\"comment\": \"Thank you for the thoughtful review.\\n\\n\\nThank you also for noticing the 2024 style file was used and sorry about the line numbers -- we have updated the paper to use the 2025 template.\\n\\n**RE: 1% max FPR and use of pAUC**\\n\\n1% FPR may seem high for deployment in the real world (compared to say 0.01%), but it is actually a reasonable choice when you accept that watermarking is not a silver bullet -- no matter the method, TPR@0.01% FPR would be so horrendous that it stops being a meaningful metric, and furthermore, because the FPR is so low, the TPR is estimated using only a handful of samples (for test datasets consisting of O(1000) samples, like ours, and like what others use), and thus the estimate will have especially high variance.\\n1% FPR is standard across the literature: it is used in Kuditipudi et. al (https://arxiv.org/abs/2307.15593) and Dathathri et. al. 
(https://www.nature.com/articles/s41586-024-08025-4): \\\"Watermark detectability is measured using the true-positive rate (TPR) when the false-positive rate (FPR) is set to 1%\\\"\\n\\nWhen running experiments, we tried other max FPRs -- the trends are the same. We are happy to include them in the Appendix for the camera-ready version. As for: TPR@FPR vs pAUC with max FPR -- we think the latter metric just makes more sense. TPR@FPR is a single point on the ROC curve, which is often far from smooth -- TPR@ nearby FPRs can be noticeably higher or lower. pAUC is like everyone's favorite ROC-AUC metric but only for the part of the ROC curve we care about in practice.\\n\\n\\n**RE: 600 tokens**\\n\\nThe decision to choose 300 max tokens was 1) this is on-par with the lengths of the human responses to the prompt set we are using, and hence the most representative of real-world usage, and 2) this is what some prior work like Kirchenbauer et. al (https://proceedings.mlr.press/v202/kirchenbauer23a/kirchenbauer23a.pdf) chooses -- \\\"We compute results using 500 \\u00b1 10 sequences of length T = 200 \\u00b1 5 tokens for each parameter choice.\\\"\\nWe agree showing robustness to attacks when the response is ~600 tokens is a nice-to-have and we can add it in the camera-ready, time-permitting.\\n\\n\\n**RE: discrepancy between white-box and black-box, $k \\\\leq L$ is not practical, and other black box baselines**\\n\\nOur algorithm is a general framework. The strongest watermark is obtained when k is 1 and m is very large -- with white-box access this can be done efficiently. We agree that large m is needed when the response length k is large. I'd argue that this is a fundamental limitation, not one of our particular scheme -- specifically, there is no free lunch if you're watermarking by pulling from an API that is returning long responses and you wish to be distortion free. 
I'd also argue that LLM service providers employing black-box watermarking will choose to be distortion-free or very minimally distortionary, at the expense of detectability. It's largely a philosophical design choice -- if you're going to spend $$$ calling a capable language model like chatGPT or Gemini, would you risk deteriorating its quality responses by substituting words in the response with those suggested by a prehistoric BERT model? This is the reason why Yang et. al. and PostMark were not compared against.\\n\\nYes, we can set k adaptively to whatever the max sampled sequence length L is (presumably long) -- detectability will suffer but the good news is that it can be compensated by observing more tokens at test time. And in practice, the watermarking party may run detection not on < 300 tokens (as we do), but on all the content on someone's personal blog, or a student's essay. In other words, the silver lining here is that in the real world, we will often have a lot more tokens to test against to make up for the low watermark strength.\\nWe think that setting k adaptively and then reporting detection performance on a corpus of text is quite interesting and something missing in other works as well. We are happy to do so for the camera ready -- the author response timeline is too short to get these numbers, but we have intuition that the performance will be acceptable. Theorems 4.2 and 4.4 already show that performance has a very favorable trend with respect to the number of test tokens T.\\n\\n**RE: minor writing suggestions**\\n\\nThese are all wonderful suggestions! We will incorporate all of them.\"}", "{\"comment\": \"Hi! The author response window is coming to a close very soon and we want to check to see if you got a chance to read our latest response and if there are any outstanding questions or concerns you might have that we didn't already address?\"}", "{\"comment\": \"Hi! 
The author response window is coming to a close very soon and we want to check to see if you got a chance to read our latest response and if there are any outstanding questions or concerns you might have that we didn't already address?\\nFurthermore, there were some clarifying questions we asked (\\\"can you be clear which of their schemes and which theorems you are referring to?\\\") -- it would be wonderful if you could answer it!\"}", "{\"comment\": \"**RE: points 1 and 3**\\n\\nDepends; if you use their Algorithm 3 (Complete watermarking algorithm) then no, it's still not black box, as line 4 requires the next-token probabilities. If you use their Algorithm 1 (Weakly-sound watermarking algorithm), where you drop line 2:\\n\\n$H_e(Model,prompt,x)> 6\\\\lambda$\", \"so_that_the_algorithm_simply_becomes\": \"*sample responses $x$ until you have $F_{sk}(x) = 0^b$*\\n\\nthen yes this is technically black-box, but it's no longer distortion-free (or \\\"undetectable\\\").\", \"they_say_at_the_bottom_of_page_12\": \"*Only trying to watermark outputs of high empirical entropy was crucial for this simple construction\\u2019s undetectability. As we will see in Section 5, only watermarking high empirical entropy outputs is in fact inherent to undetectable watermarking schemes.*\\n\\nIn contrast, our scheme is distortion-free while remaining purely black box. So to answer your question, there doesn't seem to be a reasonable way to adapt the scheme to make it black-box while maintaining coveted properties.\\n\\nI'm not sure what you mean by \\\"end up with a scheme that \\\"Entropy improves detection performance.\\\"\\\"\\n\\n\\n**RE: point 2**\\n\\nCorrect. Our scheme, like every other watermarking scheme out there, degrades in performance when the entropy in response space (conditioned on the prompt) decreases. Our theorem gives nice bounds on the detection performance as a function of entropy. 
Figure 1 clearly shows the effect of entropy on performance for our method and baselines. Think of watermarking schemes as ICE cars and entropy as its fuel. Some might be more fuel efficient than others but no matter the car, the more gas you have, the further you can go.\\n\\n\\nDoes this address all your questions and concerns?\"}", "{\"summary\": \"The paper presents watermarking schemes for LLM\\u2019s outputs, in the setting that we only have black-box access to the model\\u2019s \\u201cnext token generation\\u201d function.\\n\\nThey claim their scheme is \\u201cdistortion free\\u201d and \\u201ccan be used in a nested way\\u201d.\\n\\nIn a bit more detail, the paper\\u2019s scheme is based on a scoring function, which in turn is based on a secret key. Then, when the LLM\\u2019s output is being generated, at each step, multiple samples are gathered. Then, the scoring function is applied to them all and the one with the highest score is chosen.\\n\\n*post rebuttal comment*\\n\\nAfter the interaction with author(s), I added several comments to the discussion board explaining why I increased my score. I thought I'd add them to this summary as well, just in case the authors might find the comments (hopefully) useful as well. The comments follow:\\n\\nI thought I would go over the points discussed and say what my final thoughts are, and why (despite remaining disagreements basically about all the points being discussed) I am indeed happier with the paper now and will increase my score. Also, it took me a bit of time to have a closer look at the paper by Christ et al and also come back to the paper to understand their contribution in light of the discussions and what is done in Christ et al. \\n\\nTo me, the main downside of the paper is actually its writing. It is hard to understand their scheme (with such dense descriptions) and why it is objectively and concretely better than previous work. In particular, key concepts need to be formally defined and discussed. 
The assumption of the paper about independence of the hash of the n-grams (called assumption (**) below). This needs to be *mathematically* and formally written and analyzed. Another major issue is their notion of \\u201csoundness\\u201d. They show that the detection algorithm can detect watermarked text from honestly generated non-watermarked text (of the same mode). In comparison, Christ et al. show that the soundness holds for any string that is generated independent of the secret key, which I think is the right definition of soundness. In fact, the paper should have a clear soundness definition to begin with. There is a chance that the authors can address this in their final/next draft well by expanding their theoretical claim of Theorem 4.2. Also note that this is where theory is needed, as no experiment can prove robustness for all strings generated independently of the key.\\n\\nHaving said the above, the reason that I am happier with the paper are:\\n\\nI think the assumption (**) could be proved true if we model the hash function as a random oracle (which is a standard model called the random oracle model ROM in cryptography and allows proving heuristic assumptions in a meaningful way). This, however, is something that authors need to check and argue about, as I am not fully confident about it based on the written material. If one can justify their assumption (**) (eg., in the ROM model), then the entropy assumption (that is provably needed) is not affecting the distortion-freeness of the paper\\u2019s scheme (and is only needed for arguing soundness) while in Christ et al, it affects both distortion freeness and soundness. So, this is an interesting aspect that could be a selling point. The authors said that their scheme can also detect strings that only have a common substring (of high entropy) with the original generated text. 
This would match that of Christ et al, and it is in fact a form of robustness guarantee (though limited).\\n\\nNow, some comments and responses to the points raised during the discussion with authors:\\n\\nThe authors unfortunately keep saying that Christ et al is non-black-box, while my point is that *if you assume* the entropies are large enough (say in every block of 100 tokens) then all of their schemes (not just the 1st one) become fully black-box, as the only thing they need is to accumulate entropies. So, this is where I had trouble evaluating their contribution of being the first black-box scheme in comparison with that of Christ et al, because both schemes could be black-box *based on an assumption*.\\n\\nThe authors confirm that their own scheme can also be \\\"substring-complete\\\" (SC) which is great, but then they bring up Section 6 of Christ et al, which is not quite relevant. That section is about removing the watermark under specific settings, but of course those attacks would not contradict their own SC property (which is a robustness guarantee). To understand SC completeness (on which there is a disagreement with the authors), and why it can be interpreted as a robustness guarantee, please read Def 8 of the arXiv version of the Christ et al paper, in which the detection algorithm is run only on the substrings of high entropy. So, if one of such substrings survives after adversary\\u2019s edits and stays intact in the final string, it can be detected by checking all possible (contiguous) substrings of the final string for detection (there are at most k^2 of them for strings of length k, which is fine). I might be wrong here, but this is not a main point of discussion anyway.\\n\\nWhen discussing how to remove the entropy assumption from the first scheme of Christ et al, authors say \\u201cFirstly, the while loop may never terminate. [...] 
And even if it does terminate, it could take an astronomical number of samples before the condition is met\\u201d\\n\\nThe assumption on the entropy can be used to show that (with overwhelming probability) the chance of getting 0 is not \\u201ctoo small\\u201d (in particular, it can be lower bounded by 1/poly) and then using Chernoff/Hoeffding bounds, one can show that with polynomial samples, the chance of not hitting zero is exponentially small. So, things can be proved to be fine. But this is not a major point of discussion, because even the main (2nd) algorithm of Christ et al also becomes fully black-box if you assume the entropy is large in each block (of say 100 tokens).\\n\\nAnyway, I think the paper has a lot of potential, but I think it would benefit quite a bit from a major improvement in the presentation of the ideas, clarification of the assumptions, and comparison with previous work.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem of black-box watermarking is an important problem, and having new schemes in this direction would be interesting. However, as I explain below, the schemes should be clear in what they offer and what is their advantage over previous work.\", \"post_rebuttal\": \"I understood some aspects of the paper better and am increasing my score by one unit. I still have concerns (post discussions) about the writing and assumptions of the paper that I will add to the review.\", \"weaknesses\": \"The main weakness of the paper is that it is barely readable, when one actually wants to understand the scheme and the arguments. The presentation of the scheme is super dense and lacks formality. 
Instead of introducing ideas one by one, they are jammed and one gets no intuition as to what is going on, beyond the high level description of \\u201cusing scores\\u201d.\\n\\nIn fact, the paper\\u2019s main setting (which seems to be the main novelty) is already used in previous work published in learning venues. For example, this (cited work) from more than a year ago (published in COLT) https://eprint.iacr.org/2023/763 exactly studies the setting that the paper does: black-box access to the token generation function, and does use a similar idea of using a hash function to pick the next token by rejecting some. It is also provably robust (under certain conditions) as opposed to the weaker model studied here (random substitution) and comes with clear theorems that prove undetectability (which implies distortion-freeness and utility both).\", \"one_main_comment_for_improving_the_writing\": [\"Try to define everything formally and at the right pace.\", \"There are also issues with using crypto terms without clarity. For example, F is a CDF, and then F[s] is a \\u201csingle draw from a pseudorandom number generator for F seeded by integer seed s\\u201d . I know cryptography well, but I have no idea what this sentence means. Then, it is assumed that F[h(K,w)] is a PRF. What is the citation that this is a PRF whenever F is PRG? (I don\\u2019t think this is true actually).\", \"What is the role of n-gram, l-gram, and their relation with tokens. Sentences like \\u201cwhere we allow the left endpoint to spill over only to\\u2026\\u201d are super informal and cannot be formally understood and checked.\", \"Theorem 4.1 : what is F, and why should it be continuous? When it comes to efficient algorithms none are actually continuous (everything is discrete) so this is a strange assumption to make.\"], \"questions\": \"My main question is about the novelty of the paper\\u2019s setting and its final results. 
As mentioned above, the work Christ et al already presents provably secure distortion free black-box watermark that is also robust to adversarial attacks (under a formal definition). Can you compare your work with them (and perhaps other similar previous works using crypto and rejection sampling) and explain what exactly the set of features that your work adds?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I see, \\\"rejection sampling\\\" is not the right term.\\n\\nThis condition on distortion-freeness is extremely complicated. It doesn't actually help us understand what the condition is.\\nThe fact that this does not \\\"essentially translate to it having consistently high entropy\\\" actually makes things significantly worse; my understanding previously was that at least I understood roughly when it would induce distortion, but now I am not sure.\\n\\nAnd the scheme of Christ, Gunn, and Zamir (2024) is distortion-free under no assumptions about the text.\"}" ] }
0k7pbSxNOG
Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition
[ "Haowei Hua", "Wanyu Lin", "JINGWEN YANG" ]
Predicting tensor properties of the crystalline materials is a fundamental task in materials science. Unlike single-value property prediction, which is inherently invariant, tensor property prediction requires maintaining $O(3)$ group tensor equivariance. This equivariance constraint often introduces tremendous computational costs, necessitating specialized designs for effective and efficient predictions. To address this limitation, we propose a general $O(3)$-equivariant framework for fast crystal tensor prediction, called {\em GoeCTP}. Our framework is efficient as it does not need to impose equivalence constraints onto the network architecture. Instead, {\em GoeCTP} captures the tensor equivariance with a simple external rotation and reflection (R\&R) module based on the polar decomposition. The crafted external R\&R module can rotate and reflect the crystal into an invariant standardized crystal position in space without introducing extra computational cost. We show that {\em GoeCTP} is general as it is a plug-and-play module that can be smoothly integrated with any existing single-value property prediction network for predicting tensor properties. Experimental results indicate that the {\em GoeCTP} method achieves higher prediction performance and runs 13$\times$ faster compared to existing state-of-the-art models in elastic benchmarking datasets, underscoring its effectiveness and efficiency.
[ "$O(3)$ group tensor equivariance", "polar decomposition", "tensor properties" ]
Reject
https://openreview.net/pdf?id=0k7pbSxNOG
https://openreview.net/forum?id=0k7pbSxNOG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vVpTRbmgAM", "unbUHoFXyo", "tJwjED2Dsg", "rjwYq347Ea", "pXW2aRV2Xn", "pMIveQzqgC", "kzHumYBO55", "kbFIuz3Nid", "ib4qduRY86", "iI6DJ73x5M", "ffSes1Z9um", "fFCZkIVFDn", "aLaFw8wlS6", "aDzwOcVZKg", "YFRIvEHwPH", "TXc2VeB5aW", "JEgwHALBE7", "HY3JeIoN8f", "DHZ8MBctzw", "8fAotueFzO", "5GvGNxOcLR" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732685338773, 1732285733512, 1737523912060, 1732289955892, 1730128902923, 1732287730509, 1732523548941, 1731005983158, 1730451389587, 1732288007272, 1732614317367, 1729178305784, 1732286914696, 1732288321582, 1732548324705, 1732523931802, 1732685020104, 1732288334438, 1734838375136, 1732610179522, 1732287130778 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_3tQt" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_3tQt" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_Q6Uc" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_RVFL" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_Q6Uc" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_pK6i" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_3tQt" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ], [ "ICLR.cc/2025/Conference/Submission8483/Area_Chair_6MrP" ], [ "ICLR.cc/2025/Conference/Submission8483/Reviewer_RVFL" ], [ "ICLR.cc/2025/Conference/Submission8483/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**piezoelectric tensor** The piezoelectric tensor $\\\\mathbf{e} \\\\in \\\\mathbb{R}^{3\\\\times3\\\\times3}$,\\ndescribes the relationship between the applied strain $\\\\boldsymbol{\\\\epsilon} \\\\in \\\\mathbb{R}^{3 \\\\times 3}$ to the electric displacement field $\\\\mathbf{D} \\\\in \\\\mathbb{R}^3$ within the material. Mathematically, this relationship is expressed as $\\\\mathbf{D}\\\\_i=\\\\sum\\\\_{jk}\\\\mathbf{e}\\\\_{ijk}\\\\boldsymbol{\\\\epsilon}\\\\_{jk}$, with $i, j, k \\\\in \\\\{ 1, 2, 3 \\\\}$. \\nWhen an $O(3)$ group transformation $\\\\mathbf{Q}$ is applied to the crystal, the strain tensor and\\nelectric displacement field is transformed to\\n$\\n\\\\boldsymbol{\\\\epsilon}\\\\_{jk}^{\\\\prime}=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\boldsymbol{\\\\epsilon}\\\\_{mn}$ \\nand $\\\\mathbf{D}\\\\_{i}^{\\\\prime}=\\\\sum\\\\_{\\\\ell}\\\\mathbf{Q}\\\\_{i\\\\ell}\\\\mathbf{D}\\\\_{\\\\ell}$. The relation is then reformulated as $\\\\mathbf{D}\\\\_i^\\\\prime=\\\\sum_{jk}\\\\mathbf{e}\\\\_{ijk}^\\\\prime\\\\boldsymbol{\\\\epsilon}\\\\_{jk}^\\\\prime $.\\n\\nSince $\\\\mathbf{Q}$ is orthogonal matrix\\n($\\\\mathbf{Q}^{-1}=\\\\mathbf{Q}^{T}$), we have\\n$\\\\boldsymbol{\\\\epsilon}\\\\_{jk}=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{mj}\\\\mathbf{Q}\\\\_{nk}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{mn}$. 
Consequently, $\\\\mathbf{D}\\\\_{i}^{\\\\prime}$ can be represented as\\n\\\\begin{equation}\\n\\\\begin{aligned}\\n\\\\mathbf{D}\\\\_{i}^{\\\\prime}&=\\\\sum\\\\_{\\\\ell}\\\\mathbf{Q}\\\\_{i\\\\ell}\\\\mathbf{D}\\\\_{\\\\ell}\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{\\\\ell}\\\\mathbf{Q}\\\\_{i\\\\ell}\\\\sum\\\\_{jk}\\\\mathbf{e}\\\\_{\\\\ell jk}\\\\boldsymbol{\\\\epsilon}\\\\_{jk}\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{\\\\ell}\\\\mathbf{Q}\\\\_{i\\\\ell}\\\\sum\\\\_{jk}\\\\mathbf{e}\\\\_{\\\\ell jk}(\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{mj}\\\\mathbf{Q}\\\\_{nk}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{mn})\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{\\\\ell}\\\\mathbf{Q}\\\\_{i\\\\ell}\\\\sum\\\\_{mn}\\\\mathbf{e}\\\\_{\\\\ell mn}(\\\\sum\\\\_{jk}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{jk}) \\\\quad (exchange \\\\, sign, m \\\\leftrightarrow j, n \\\\leftrightarrow k)\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{jk}\\n\\\\sum\\\\_{lmn}\\\\mathbf{Q}\\\\_{il}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\mathbf{e}\\\\_{lmn}\\n\\\\boldsymbol{\\\\epsilon}\\\\_{jk}^\\\\prime \\n\\\\end{aligned}\\n\\\\end{equation}\\n\\nTherefore, under the $O(3)$ group transformation $\\\\mathbf{Q}$, the transformed piezoelectric tensor $\\\\mathbf{e}\\\\_{ijk}^{\\\\prime}$ is given by:\\n\\\\begin{equation}\\n\\\\mathbf{e}\\\\_{ijk}^{\\\\prime}=\\\\sum\\\\_{lmn}\\\\mathbf{Q}\\\\_{il}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\mathbf{e}\\\\_{lmn}.\\n\\\\end{equation} \\n\\n\\n**Elastic tensor.** The elastic tensor $C \\\\in \\\\mathbb{R}^{3\\\\times3\\\\times3\\\\times3}$ relates the applied strain $\\\\boldsymbol{\\\\epsilon} \\\\in \\\\mathbb{R}^{3 \\\\times 3}$ to the stress tensor $\\\\sigma \\\\in \\\\mathbb{R}^{3 \\\\times 3}$ within the material,\", \"expressed_as\": \"$\\\\boldsymbol{\\\\sigma}\\\\_{ij} =\\\\sum\\\\_{k \\\\ell}C\\\\_{ijk \\\\ell}\\\\boldsymbol{\\\\epsilon}\\\\_{k \\\\ell}$, with $i,j,k,\\\\ell \\\\in \\\\{1, 2, 3,4\\\\}$.\\nWhen an $O(3)$ 
group transformation $\\\\mathbf{Q}$ is applied to the crystal, the strain tensor and\\nstress tensor is transformed to\\n$\\n\\\\boldsymbol{\\\\epsilon}\\\\_{jk}^{\\\\prime}=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\boldsymbol{\\\\epsilon}\\\\_{mn}$ \\nand $\\n\\\\boldsymbol{\\\\sigma}\\\\_{jk}^{\\\\prime}=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{jm}\\\\mathbf{Q}\\\\_{kn}\\\\boldsymbol{\\\\sigma}\\\\_{mn}$.\\nThe relation becomes $\\\\boldsymbol{\\\\sigma}\\\\_{ij}^{\\\\prime} =\\\\sum\\\\_{k \\\\ell}C\\\\_{ijk \\\\ell}^{\\\\prime}\\\\boldsymbol{\\\\epsilon}\\\\_{k \\\\ell}^{\\\\prime}$.\\n\\nBecause $\\\\mathbf{Q}$ is orthogonal matrix ($\\\\mathbf{Q}^{-1}=\\\\mathbf{Q}^{T}$), we have\\n$\\\\boldsymbol{\\\\epsilon}\\\\_{jk}=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{mj}\\\\mathbf{Q}\\\\_{nk}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{mn}$, then $\\\\boldsymbol{\\\\sigma}\\\\_{ij}^{\\\\prime}$ can be represented as \\n\\n\\n\\\\begin{equation}\\n\\\\begin{aligned}\\n\\\\boldsymbol{\\\\sigma}\\\\_{ij}^{\\\\prime} &=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{im}\\\\mathbf{Q}\\\\_{jn}\\\\boldsymbol{\\\\sigma}\\\\_{mn}\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{im}\\\\mathbf{Q}\\\\_{jn}\\\\sum\\\\_{pq}C\\\\_{mnpq}\\\\boldsymbol{\\\\epsilon}\\\\_{pq}\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{mn}\\\\mathbf{Q}\\\\_{im}\\\\mathbf{Q}\\\\_{jn}\\\\sum\\\\_{pq}C\\\\_{mnpq}\\\\sum\\\\_{k\\\\ell}\\\\mathbf{Q}\\\\_{kp}\\\\mathbf{Q}\\\\_{\\\\ell q}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{k\\\\ell}\\\\\\\\\\\\\\\\\\n&=\\\\sum\\\\_{k\\\\ell}\\\\sum\\\\_{mnpq }\\\\mathbf{Q}\\\\_{im}\\\\mathbf{Q}\\\\_{jn}\\\\mathbf{Q}\\\\_{kp}\\\\mathbf{Q}\\\\_{lq}C\\\\_{mnpq}\\\\boldsymbol{\\\\epsilon}^{\\\\prime}\\\\_{k\\\\ell}\\n\\\\end{aligned}\\n\\\\end{equation}\\n\\nTherefore, under the $O(3)$ group transformation $\\\\mathbf{Q}$, $C\\\\_{ijkl}^{\\\\prime}$ is represented as: \\n\\\\begin{equation}\\n%f\\\\_{rp}(\\\\cdot) \\\\to 
\\nC\\\\_{ijkl}^{\\\\prime}=\\\\sum\\\\_{mnpq}\\\\mathbf{Q}\\\\_{im}\\\\mathbf{Q}\\\\_{jn}\\\\mathbf{Q}\\\\_{kp}\\\\mathbf{Q}\\\\_{lq}C\\\\_{mnpq}.\\n\\\\end{equation} \\n\\nWe're grateful for your feedback on our work. We hope our reply can address your concerns. We are happy to provide any additional clarification and discussion.\"}", "{\"comment\": \"We sincerely thank all four reviewers for their thoughtful feedback and insightful comments.\\nWe are particularly encouraged by the reviewers\\u2019 feedback. We have made a heavy revision to our paper according to the reviewer's constructive suggestions. Below, we summarize some key modifications in the updated PDF document:\\n\\n---------------------------------\\n\\n*Main text*\\n\\n$\\\\bullet$ (Section 5.1) Add an introduction for new piezoelectric tensor\\ndataset\\n\\n$\\\\bullet$ (Section 5.2) Add comparison of prediction performance \\non the piezoelectric dataset\\n\\n\\n$\\\\bullet$ (Section 5.2) Add ablation study for verifying the $O(3)$ equivariance with piezoelectric dataset\\n\\n$\\\\bullet$ (Section 5.2) Add efficiency comparison on the piezoelectric dataset\\n\\n---------------------------------\\n\\n*Appendix*\\n\\n$\\\\bullet$ (Appendix A.2)\\nAdd an introduction for $O(3)$ equivariance for different crystal tensor properties\\n\\n$\\\\bullet$ (Appendix A.3)\\nAdd an introduction for the number of independent components in the piezoelectric tensor\\n\\n$\\\\bullet$ (Appendix A.5)\\nAdd detailed discussion for limitations and extensibility of our method\\n\\n$\\\\bullet$ (Appendix A.6)\\nAdd discussion for how to further utilize the tensor properties symmetry\\n\\n---------------------------------\\n\\nIf further consideration remains, please kindly let us know. We are very happy to make a further revision in light of your great suggestions. 
We will address comments by each of the reviewers individually.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for your responses\", \"comment\": \"Thank you for your extensive effort during the rebuttal process.\\n\\nRegarding the first question, the concern is not about space group invariance but about adhering to space group constraints for various crystal systems. For instance, if a method violates space group constraints and generates non-zero entries in off-diagonal positions of dielectric tensors for cubic systems, it would be incorrect. Predictions must satisfy space group constraints to be practically useful because these tensors describe the system's response to external fields or other perturbations. If the predictions do not respect these constraints, the responses will not align with the underlying crystal system's symmetry.\\n\\nFor the second question, it appears that minimal frame averaging could achieve performance comparable to the proposed method, at least in terms of Fnorm and EwT 25 metrics.\\n\\nRegarding Table 4, the explanation remains unclear despite your comments. Specifically, what does it mean that the first column corresponds to the results of GoeCTP but is labeled as eComFormer?\\n\\nFor weaknesses 3 and 4, if you are using the same dataset as GMTNet, the performance reported in the original GMTNet paper appears to be better than that of the proposed GoeCTP. For example, GMTNet achieves an Fnorm of 0.37 for piezoelectric tensors and 67 for elastic tensors, which outperform the results of GoeCTP.\\n\\nThank you once again for your thorough and formal responses. 
While I appreciate the effort, the above concerns remain unresolved, and I cannot increase the score at this time.\"}", "{\"summary\": \"This paper proposes a novel approach intended to achieve O(3) tensorial equivariance by transforming crystal structures into standardized positions, so that neural networks do not need to satisfy equivariance during prediction. The invariant predictions are then mapped back to equivariant outputs. While the goal of O(3) equivariant predictions is notable, prior work has already addressed this problem with established solutions, including ETGNN's vector outer product and GMTNet's equivariant networks. Additionally, existing techniques, such as frame averaging and minimal frame averaging, can be employed to achieve O(3) tensor equivariance effectively. Moreover, this work does not consider space group constraints, which are crucial for tensorial properties in crystallography. As such, the novelty and contribution of this work are limited and do not meet the standards for acceptance in its current form.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. An alternative approach is proposed to achieve tensor O(3) equivariance. If used, it is faster than equivariant-network-based O(3)-equivariant predictions like GMTNet.\", \"weaknesses\": \"1. **Lack of Consideration for Space Group Constraints**\\n\\nSpace group constraints, which are fundamental in determining the tensor properties of crystals, are not accounted for in this work. No experimental results are provided to verify whether the proposed method can generate predictions that align with these constraints. Crystals exhibit unique tensor characteristics that are intrinsically tied to their crystal class or space group, and ignoring these symmetries is a significant oversight.\\n\\n2. 
**Limited Improvement in Performance**\\n\\nThe integration of the proposed module does not enhance eComformer\\u2019s performance beyond achieving O(3) equivariance, as shown in Table 4. Other alternatives, such as ETGNN, frame averaging, and minimal frame averaging, also achieve O(3) equivariance but were not discussed or compared in this work. A more thorough discussion and comparative analysis of technical contributions and novelty would be beneficial.\\n\\n3. **Absence of Experiments on Piezoelectric Tensors**\\n\\nThe work lacks experiments on piezoelectric tensors, which are especially sensitive to space group constraints. Including these experiments would strengthen the evaluation of the proposed approach\\u2019s applicability across tensorial properties with varying sensitivity to symmetry constraints.\\n\\n4. **Performance on Elastic Tensors**\\n\\nThe performance of the proposed method on elastic tensors is significantly lower than GMTNet's original results. This suggests potential limitations in the approach\\u2019s effectiveness.\\n\\n5. **Efficiency Gains Not Attributable to Proposed Method**\\n\\nThe efficiency gains claimed largely derive from the use of the lightweight eComformer, not the proposed approach. Similar speed-ups could be achieved by combining eComformer with other O(3) equivariant methods such as ETGNN\\u2019s vector outer-product approach or minimal frame averaging.\", \"questions\": \"As listed above in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for your valuable time and comments.\\nBelow we will address your concerns in detail.\\n\\n--------------\\n\\n**$\\\\bullet$ Response to weakness 1**\\n\\n\\nSpecifically, our method is space group invariant. According to [1] and [2], space group transformations do not change the lattice matrix $L$, i.e., $RL+b=L$. 
Therefore, when space group transformations are applied to a crystal, applying polar decomposition yields the same result, i.e., $RL+b=L=Q\\\\exp(S)$. Thus, when space group transformations are applied to the input, our output remains invariant.\\n\\nAdditionally, we have discussed the space group constraints in Appendix A.3. We acknowledge that our current utilization of such physical knowledge is limited. Further leveraging space group constraints to improve tensor prediction performance is a direction for our future research (For example, predicting only the independent components of tensor properties to improve prediction performance).\\n\\n[1] Jiao, Rui and Huang, Wenbing and Liu, Yu and Zhao, Deli and Liu, Yang. Space Group Constrained Crystal Generation. *ICLR2024*.\\n\\n[2] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*.\\n\\n\\n\\n-----------------------\\n\\n\\n**$\\\\bullet$ Response to weakness 2 and weakness 5**\\n\\n\\nIn response to your comment, we first conducted experiments combining ETGNN\\u2019s vector outer-product approach and SOTA Minimal Frame Averaging [1] with eComFormer. The results are as follows: \\n\\n| | eCom. (vector outer-product) | eCom. 
(Minimal Frame Averaging) | **GoeCTP (Ours)** |\\n|------------------------------|------------------------------|----------------------------------|-------------------|\\n| **Fnorm \\u2193** | 3.78 | **3.20** | 3.23 |\\n| **EwT 25% \\u2191** | 76.4% | **83.5%** | 83.2% |\\n| **EwT 10% \\u2191** | 32.5% | 56.0% | **56.8%** |\\n| **EwT 5% \\u2191** | 14.0% | 32.4% | **35.5%** |\\n| **Total Time (s) \\u2193** | 1078 | 639 | **616** |\\n\\n*Table: Predictive performance comparisons between eComFormer (vector outer-product), eComFormer (Minimal Frame Averaging), and GoeCTP on the dielectric dataset.*\\n\\n\\nBased on results, it is clear that our method outperforms eComFormer with\\nETGNN\\u2019s vector outer-product approach in all metrics, and our method also outperforms Minimal Frame Averaging in terms of prediction quality, i.e. EwT 5\\\\%.\\n\\n\\n\\n1) **Comparison for ETGNN\\u2019s vector outer-product.** ETGNN\\u2019s vector outer-product requires a specially designed network structure, involving the introduction of tensor products to achieve $O(3)$-equivariance. This adds extra computational cost to the network. Moreover, since $O(3)$-equivariance relies on tensor products, it is not possible to apply weighting to each position in the $3 \\\\times 3$ tensor product result, which limits the effectiveness of ETGNN\\u2019s vector outer-product. In contrast, our method is independent of the network structure and, therefore, does not face these issues.\\n\\n2) **Comparison for frame averaging [1,2].**\\nFirst, directly applying frame averaging methods cannot simultaneously achieve both space group invariance and $O(3)$ equivariance for crystals. Without considering translations, space groups are the subgroup of the $O(3)$ group. Achieving both $O(3)$ equivariance and invariance to $O(3)$ subgroup through frame averaging methods is inherently contradictory. Furthermore, as demonstrated in Ref. 
[3], using frame averaging to ensure space group invariance often degrades prediction performance. In contrast, our method leverages the lattice matrix and polar decomposition to directly achieve space group invariance at the data representation level. This process is independent of the subsequent $O(3)$ equivariance framework. By decoupling these two processes, our approach avoids conflicts and simultaneously achieves both objectives effectively.\\nSecond, frame averaging methods involve modifying the loss function, increasing its complexity, which in turn raises the training computational cost of the network and reduces efficiency. In contrast, as an external framework, our method imposes no additional computational burden on the network itself.\"}", "{\"title\": \"Response to Reviewer 3tQt (round2)\", \"comment\": \"We sincerely thank you for reviewing again. We will attempt to address your concerns further.\\n\\n------------------------\\n\\n**$\\\\bullet$ Response to the first question (round2)**\\n\\nWe have added relevant discussions in Appendix A.6 of the paper, as detailed below:\\n\\n**(1)** We first present an example of our prediction results, as shown below.\\n\\n| Label | Prediction | Cubic Dielectric Tensor |\\n|------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|\\n| $\\\\begin{pmatrix}2.258&0&0\\\\\\\\\\\\0&2.258&0\\\\\\\\\\\\0&0&2.258\\\\end{pmatrix}$ | $\\\\begin{pmatrix}2.252&0.016&0.008\\\\\\\\\\\\0.016&2.230&0.007\\\\\\\\\\\\0.008&0.007&2.262\\\\end{pmatrix}$ | $\\\\begin{pmatrix} e&0&0\\\\\\\\\\\\0&e&0\\\\\\\\\\\\0&0&e\\\\end{pmatrix}$ |\\n\\n*Table: An example of GoeCTP prediction results*\\n\\nIt can be observed that, within a certain margin of error, our results are relatively consistent with 
the constraints. \\n\\nFor a dielectric tensor, using 1\\\\% of the average value of non-zero elements in the labels as a threshold, we judge whether the prediction for a zero element was successful. The results are as follows. \\n\\n| Crystal system | Cubic | Tetragonal | Hexagonal-Trigonal | Orthorhombic | Monoclinic |\\n|---------------------|-------|------------|---------------------|--------------|------------|\\n| **Success rate** | 88.3% | 86.6% | 84.5% | 84.5% | 75.7% |\\n\\n*Table: The GoeCTP results of predicting symmetry-constrained zero-valued dielectric tensor elements.*\\n\\nIt can be seen that our method successfully predicts most zero elements.\\n\\n------------------------\\n\\n**(2)** To get better success rate. Since our advantage lies in transferring equivariance through an external framework, there are no restrictions on the model. Therefore, we added a ReLU activation function to the network's output layer to improve the success rate (This applies only to cases where the elements in the tensor are greater than or equal to zero; other cases require designing specific activation functions, such as $ReLU(x-0.01)-ReLU(-x)$.). 
After retraining, the results are as follows:\\n\\n\\n| Label | Prediction | Cubic Dielectric Tensor |\\n|------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|\\n| $\\\\begin{pmatrix}2.258&0&0\\\\\\\\\\\\0&2.258&0\\\\\\\\\\\\0&0&2.258\\\\end{pmatrix}$ | $\\\\begin{pmatrix}2.237&0.000&0.000\\\\\\\\\\\\0.000&2.283&0.000\\\\\\\\\\\\0.000&0.000&2.228\\\\end{pmatrix}$ | $\\\\begin{pmatrix} e&0&0\\\\\\\\\\\\0&e&0\\\\\\\\\\\\0&0&e\\\\end{pmatrix}$ |\\n\\n*Table: An example of GoeCTP (ReLU) prediction results*\\n\\n| Crystal system | Cubic | Tetragonal | Hexagonal-Trigonal | Orthorhombic | Monoclinic |\\n|---------------------|-------|------------|---------------------|--------------|------------|\\n| **Success rate** | 100% | 100% | 87.2% | 100% | 100% |\\n\\n*Table: The GoeCTP (ReLU) results of predicting symmetry-constrained zero-valued dielectric tensor elements.*\\n\\n| Metric | GoeCTP | GoeCTP (ReLU) |\\n|----------------------|---------|---------------|\\n| **Fnorm \\u2193** | 3.23 | 3.26 |\\n| **EwT 25% \\u2191** | 83.2% | 82.6% |\\n| **EwT 10% \\u2191** | 56.8% | 58.4% |\\n| **EwT 5% \\u2191** | 35.5% | 36.3% |\\n\\n*Table: Predictive performance comparisons between GoeCTP and GoeCTP (ReLU) on the dielectric dataset.*\\n\\nThis simple modification allows our method to more accurately predict the zero elements in dielectric tensors caused by the space group constraints. \\n\\n------------------------\"}", "{\"summary\": \"This paper presents GoeCTP, a novel O(3)-equivariant framework for predicting tensor properties of crystalline materials. 
The key innovation is using polar decomposition to handle tensor equivariance through an external rotation and reflection (R&R) module, rather than building it into the network architecture.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Novel use of polar decomposition for handling tensor equivariance\\n2. Strong theoretical foundation with clear mathematical proofs\\n3. Clear illustrations and explanations of complex concepts\", \"weaknesses\": \"1. Limited discussion of potential limitations or failure cases\\n2. Only two datasets are used.1.\", \"questions\": \"1. What are the limitations of using polar decomposition for this application? Are there edge cases where it might not work well?\\n\\n2. How does the method perform on different types of crystal structures beyond those tested?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose an O(3)-equivariant framework, GoeCTP, for crystal tensor prediction. GoeCTP utilizes polar decomposition to rotate and reflect the crystal into a standardized invariant position in space. The orthogonal matrix obtained from the polar decomposition is used to achieve equivariant tensor property predictions. The GoeCTP method achieves higher quality prediction results and runs more than 13\\u00d7 faster on the elastic benchmarking dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. GoeCTP is plug-and-play as it can be readily integrated with any existing single-value property prediction network for predicting tensor properties.\\n2. GoeCTP does not introduce excessive computational overhead.\", \"weaknesses\": \"1. The article has limited contributions in terms of methodological innovation, as the methods and main structure used by the authors are derived from DiffCSP++[1] and Comformer[2]. 
In detail, the polar decomposition method used may have been inspired by DiffCSP++, while the code implementation adopts the structure of Comformer.\\n2. The article does not clearly explain why Equation 2 needs to be satisfied. It is suggested that the authors provide more background or explanation regarding the physical or mathematical significance of Equation 2 in relation to tensor property prediction. This would help readers better understand the importance of this equation within the proposed framework. \\n3. There are some citation issues in lines 339-340 of the article.\", \"references\": \"[1] Rui Jiao, Wenbing Huang, Yu Liu, Deli Zhao, and Yang Liu. Space group constrained crystal generation. In The Twelfth International Conference on Learning Representations, 2024. \\n[2] Keqiang Yan, Cong Fu, Xiaofeng Qian, Xiaoning Qian, and Shuiwang Ji. Complete and efficient graph transformers for crystal material property prediction. In The Twelfth International Conference on Learning Representations, 2024.\", \"questions\": \"1. Is it the case that all tensor properties need to satisfy Equation 2, or only certain tensor properties? Why? Please provide specific examples of tensor properties, indicating which properties need to satisfy Equation 2 and which do not, along with an explanation of the underlying reasons for this distinction. This would help deepen the understanding of the method's applicability and limitations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"3) **For Table 4.**\\nAccording to our description of Table 4, both the first and third columns correspond to the results of GoeCTP. 
Our goal is to compare the differences between the results in the first and second columns and those in the third and fourth columns, which demonstrate how much the performance of eComFormer degrades without our framework.\\nIn fact, the results of directly applying eComFormer to tensor prediction tasks are as follows: \\n\\n\\n| | eComFormer | **GoeCTP (Ours)** |\\n|--------------------------|------------|-------------------|\\n| **Fnorm \\u2193** | 3.60 | 3.23 |\\n| **EwT 25% \\u2191** | 80.6% | 83.2% |\\n| **EwT 10% \\u2191** | 56.2% | 56.8% |\\n| **EwT 5% \\u2191** | 32.6% | 35.5% |\\n| **Total Time (s) \\u2193** | 613 | 616 |\\n\\n*Table: Predictive performance comparisons between eComFormer and GoeCTP on the dielectric dataset.*\\n\\n\\nWe can see that our method outperforms eComFormer.\\n\\n\\n\\n[1] Lin Y, Helwig J, Gui S, Ji S. Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency. *ICML2024*.\\n\\n\\n[2] Omri Puny, Matan Atzmon, Edward J Smith, Ishan Misra, Aditya Grover, Heli Ben-Hamu, and\\nYaron Lipman. Frame averaging for invariant and equivariant network design. *ICLR2022*.\\n\\n\\n[3] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*.\\n\\n-------------\\n\\n\\n**$\\\\bullet$ Response to weakness 3**\\n\\n\\n\\n\\nIn response to weaknesses 3, we have added additional experiments on Piezoelectric tensor dataset, which is from ref.[1]. 
The experimental results are as follows: \\n\\n| | MEGNET | ETGNN | GMTNet | **GoeCTP (Ours)** |\\n|--------------------------|---------|---------|---------|------------------|\\n| **Fnorm \\u2193** | 0.465 | 0.535 | 0.462 | **0.448** |\\n| **EwT 25% \\u2191** | 43.9% | 37.5% | **45.7%** | 44.9% |\\n| **EwT 10% \\u2191** | 37.9% | 22.8% | 39.3% | **43.1%** |\\n| **EwT 5% \\u2191** | 27.1% | 13.8% | 35.7% | **40.1%** |\\n\\n*Table: Predictive performance comparisons among MEGNET, ETGNN, GMTNet, and GoeCTP on the piezoelectric dataset.*\\n\\n\\nOur method achieves optimal results on Fnorm, EwT 5\\\\%, and EwT 10\\\\% metrics.\\n\\n\\n[1] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*.\\n\\n\\n\\n**$\\\\bullet$ Response to weakness 4**\\n\\n\\nWe thank the reviewer for the astute observation. Before the ICLR submission deadline, the authors of GMTNet had not released their collected Elastic tensors dataset. Therefore, we evaluated with our collected Elastic Tensors dataset (from dft\\\\_3d in https://pages.nist.gov/jarvis/databases/). Evaluating different datasets introduces the performance discrepancy. In response to the reviewer's concern, we re-evaluated the performance with the datasets used by GMTNet (released on Nov. 
5), and the results are as follows: \\n\\n\\n| | GMTNet | **GoeCTP (Ours)** |\\n|--------------------------|--------------|-------------------|\\n| **Fnorm \\u2193** | 80.06 | 72.43 |\\n| **EwT 25% \\u2191** | 57.8% | 62.2% |\\n| **EwT 10% \\u2191** | 14.9% | 27.5% |\\n| **EwT 5% \\u2191** | 4.6% | 14.7% |\\n| **Total Time (s) \\u2193** | >24000 | 1815.55 |\\n\\n*Table: Predictive performance comparisons between GMTNet and GoeCTP on another elastic dataset.*\\n\\n\\nWe can see that our method still outperforms GMTNet.\"}", "{\"comment\": \"I acknowledge the efforts made by the authors.\"}", "{\"summary\": \"The paper introduces a method for predicting how crystals react to forces applied from different directions, a challenge that requires maintaining consistency regardless of the crystal's orientation in space.\\n\\nThe proposed method uses a sound mathematical technique (polar decomposition) to standardize crystal positions, enabling faster and more accurate predictions of these directional properties (known technically as tensor properties) while respecting the physics principle of orientation independence (technically called O(3) group equivariance). \\n\\nThe proposal is significantly faster and more accurate than existing approaches, especially for predicting how materials deform under stress or respond to electric fields.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The pre-processing step of standardizing crystal positions through polar decomposition is significant for multiple reasons. It ensures equivariance, simplifies model architecture, and preserves tensor properties across different orientations through an additional step.\\n2. The method achieves 13x speed improvement over prior methods in predicting tensor properties.\\n3. 
The paper effectively explains O(3)-equivariance in crystal tensor prediction through clear organisation, helpful diagrams, and accessible mathematical explanations, making sophisticated concepts understandable even to non-specialists.\", \"weaknesses\": \"1. The paper focuses primarily on quantitative metrics (e.g. Frobenius norm and EwT percentages) to demonstrate the method's effectiveness. However, there is a lack of qualitative insights into how the method affects real-world predictions. Including case studies or qualitative analyses where the method's predictions are compared to known physical properties of materials would strengthen the practical significance.\\n2. Evaluation is carried out on two specific datasets for dielectric and elastic tensor prediction. While the results are promising, these datasets may not cover the full range of tensor property prediction challenges. Testing on a broader variety of materials, including more extreme cases, would strengthen the claim of generalisability.\\n3. As the authors discuss between lines 137 and 142, \\\" the requirements for O(3) equivariance typically differ from the O(3)-equivariance\\ndefined in the general molecular studies.\\\" Because of these specific requirements, the scope and the applicability of the proposed polar decomposition are limited.\", \"questions\": \"1. Were there qualitative case studies where the proposed method's predictions were compared to known real-world material properties, such as elastic or dielectric responses of specific materials?\\n2. Were there plans to test the proposed method on other datasets, especially those involving more complex or extreme tensor property cases (e.g., materials with highly anisotropic properties or rare crystal structures)? \\n3. Why was the dataset \\\"Piezo\\\" that was tested in a prior work [Yan et al.] not tested in this paper?\\n4. 
What explains the discrepancy in the number of samples in the \\\"Elastic\\\" dataset between the prior work [Yan et al.] and this paper? The prior work [Yan et al.] reports 14,220 samples in Table 1, while this paper reports 25,110 samples in Table 1 on line 364.\\n5. Could the specific requirements of O(3) equivariance for crystalline materials limit the use of polar decomposition? Additionally, how did these differences impact the potential generalisation of the proposed method to molecular systems, where O(3) equivariance is defined differently?\\n\\n[Yan et al.] [A Space Group Symmetry Informed Network for O(3) Equivariant Crystal Tensor Prediction, In ICML 2024](https://openreview.net/forum?id=BOFjRnJ9mX).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for your recognition of our method in terms of novel use of polar decomposition, theoretical foundation, and clear concepts illustration. Below we will\\naddress your questions in detail. \\n\\n-------------\\n\\n**$\\\\bullet $ Response to weakness 1 and question 1**\\n\\n\\nWe thank you for this helpful comment. We include additional limitation discussions in Appendix A.5.\\nSpecifically, for some cases, such as 2D crystals with single-layer structures [1,2], the rank of the lattice matrix $\\\\mathbf{L}$ might be less than 3. Therefore, directly applying polar decomposition may not yield a unique $3 \\\\times 3$ orthogonal matrix $\\\\mathbf{Q}$ [3]. In this case, our method would fail to achieve the desired $O(3)$-equivariance.\\nWe hope our response addresses the reviewer\\u2019s question.\\n\\n\\n[1] Novoselov, Kostya S and Jiang, Da and Schedin, F and Booth, TJ and Khotkevich, VV and Morozov, SV and Geim, Andre K. Two-dimensional atomic crystals. 
*Proceedings of the National Academy of Sciences*, 2005, 102(30): 10451-10453.\\n\\n[2] Sherrell, Peter C and Fronzi, Marco and Shepelin, Nick A and Corletto, Alexander and Winkler, David A and Ford, Mike and Shapter, Joseph G and Ellis, Amanda V. A bright future for engineering piezoelectric 2D crystals. *Chemical Society Reviews*, 2022, 51(2): 650-671.\\n\\n[3] Higham N J. Computing the polar decomposition\\u2014with applications. *SIAM Journal on Scientific and Statistical Computing*, 1986, 7(4): 1160-1174.\\n\\n\\n-------------\\n\\n**$\\\\bullet$ Response to weakness 2**\\n\\nFor Weakness 2, we have added additional experiments using a new dataset --- the piezoelectric tensor dataset, which is from ref. [1]. The experimental results are as follows: \\n\\n| | MEGNET | ETGNN | GMTNet | **GoeCTP (Ours)** |\\n|--------------------------|---------|---------|---------|------------------|\\n| **Fnorm \\u2193** | 0.465 | 0.535 | 0.462 | **0.448** |\\n| **EwT 25% \\u2191** | 43.9% | 37.5% | **45.7%** | 44.9% |\\n| **EwT 10% \\u2191** | 37.9% | 22.8% | 39.3% | **43.1%** |\\n| **EwT 5% \\u2191** | 27.1% | 13.8% | 35.7% | **40.1%** |\\n\\n*Table: Predictive performance comparisons among MEGNET, ETGNN, GMTNet, and GoeCTP on the piezoelectric dataset.*\\n\\n\\nOur method achieves optimal results on Fnorm, EwT 5\\\\%, and EwT 10\\\\% metrics. \\n\\n[1] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*. \\n\\n\\n\\n-------------\\n\\n**$\\\\bullet$ Response to question 2**\\n\\nWe are a bit confused about this question; perhaps you\\u2019re asking about the generality of our method. Currently, we have added a new piezoelectric dataset and still verify the effectiveness of our approach. If we find new publicly available datasets, we will continue to validate our method. 
Additionally, as we discussed in the limitations section, our method is theoretically generalizable to 3D crystals but may fail for 2D crystals. If our understanding of the question is incorrect, please feel free to point it out. Thank you very much.\"}", "{\"comment\": \"We sincerely thank you for your recognition of our method in terms of speed improvement and clear organisation. Below we will\\naddress your questions in detail.\\n\\n---------------\\n\\n**$\\\\bullet$ Response to weakness 1 and question 1**\\n\\n\\nThank you for the comments and suggestions. The purpose of deep learning (DL) methods used in property prediction is to replace slow, DFT-like physical simulation methods, thereby accelerating the speed of property predictions. \\nThe datasets used to train DL methods are also derived from DFT-like physical simulations. As a result, current deep learning prediction methods aim to achieve prediction accuracy close to DFT-like simulations. Therefore, following previous work [1], our research does not involve specific real-world materials. \\nIn [2], comparisons between DFT and real materials are discussed. In the future, we will attempt similar comparisons with real-world materials as in [2].\\nWe hope our response satisfies the reviewer.\\n\\n[1] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*.\\n\\n[2] Petousis, Ioannis and Chen, Wei and Hautier, Geoffroy and Graf, Tanja and Schladt, Thomas D and Persson, Kristin A and Prinz, Fritz B. Benchmarking density functional perturbation theory to enable high-throughput screening of materials for dielectric constant and refractive index. 
*Physical Review B*, 2016, 93(11): 115151.\\n\\n--------------\\n\\n**$\\\\bullet$ Response to weakness 2 and question 2**\\n\\nThe publicly available datasets we have collected so far mainly involve dielectric tensors, elastic tensors, and piezoelectric tensors. For this rebuttal, we have added experiments on the piezoelectric tensor dataset. If we discover other publicly available datasets in the future, we will continue testing the proposed method.\\n\\n\\n------------------\\n\\n**$\\\\bullet$ Response to Weaknesses 3 and question 5**\\n\\n\\nWe thank the reviewer for this helpful comment. \\nWe include additional limitation discussions and potential generalization of the proposed method to molecular systems in the Appendix A.5.\\nSpecifically, for certain special cases, such as 2D crystals with single-layer structures [1,2], the rank of the lattice matrix $\\\\mathbf{L}$ may be less than 3. Therefore, directly applying polar decomposition may not yield a unique $3 \\\\times 3$ orthogonal matrix $\\\\mathbf{Q}$ [3]. This limitation could cause our method to fail to achieve the desired $O(3)$-equivariance.\\nWhen our method is applied to certain 2D molecules in 3D space (for example, a molecule composed of three atoms all lying within the same plane), it also shares similar limitations to those encountered with 2D crystals.\\nWe hope our response addresses the reviewer\\u2019s question.\\n\\n\\n[1] Novoselov, Kostya S and Jiang, Da and Schedin, F and Booth, TJ and Khotkevich, VV and Morozov, SV and Geim, Andre K. Two-dimensional atomic crystals. *Proceedings of the National Academy of Sciences*, 2005, 102(30): 10451-10453.\\n\\n[2] Sherrell, Peter C and Fronzi, Marco and Shepelin, Nick A and Corletto, Alexander and Winkler, David A and Ford, Mike and Shapter, Joseph G and Ellis, Amanda V. A bright future for engineering piezoelectric 2D crystals. *Chemical Society Reviews*, 2022, 51(2): 650-671.\\n\\n[3] Higham N J. 
Computing the polar decomposition\\u2014with applications. *SIAM Journal on Scientific and Statistical Computing*, 1986, 7(4): 1160-1174.\\n\\n\\n--------------\\n\\n\\n**$\\\\bullet$ Response to question 3**\\n\\nWe have added additional experiments using the piezoelectric tensor dataset, which is from ref. [1]. The experimental results are as follows: \\n\\n| | MEGNET | ETGNN | GMTNet | **GoeCTP (Ours)** |\\n|--------------------------|---------|---------|---------|------------------|\\n| **Fnorm \\u2193** | 0.465 | 0.535 | 0.462 | **0.448** |\\n| **EwT 25% \\u2191** | 43.9% | 37.5% | **45.7%** | 44.9% |\\n| **EwT 10% \\u2191** | 37.9% | 22.8% | 39.3% | **43.1%** |\\n| **EwT 5% \\u2191** | 27.1% | 13.8% | 35.7% | **40.1%** |\\n\\n*Table: Predictive performance comparisons among MEGNET, ETGNN, GMTNet, and GoeCTP on the piezoelectric dataset.*\\n\\n\\nOur method achieves optimal results on Fnorm, EwT 5\\\\%, and EwT 10\\\\% metrics.\\n We hope our response addresses the reviewer's concerns.\\n\\n\\n[1] Yan, Keqiang and Saxton, Alexandra and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. A Space Group Symmetry Informed Network for O (3) Equivariant Crystal Tensor Prediction. *ICML2024*.\"}", "{\"title\": \"Thank you for your further responses\", \"comment\": \"Dear authors,\\n\\nThank you again for providing further clarifications. My concerns regarding Table 4 have been resolved. Based on our discussions, it appears that the contribution, or at least the discussion detailed in the current paper, is somewhat limited. Specifically, the improvements of the proposed method seem to be influenced by the randomness of the training process. For instance, the marginal improvements beyond minimal frame averaging and the slightly mismatched GMTNet performances suggest limited advancements. As for the space group constraints, it would be better if the model itself satisfied these constraints so that we can trust it. 
\\n\\nThis paper, in its current form, does not seem to address new challenges in this direction, based on the current discussions provided in the paper. Furthermore, the novelty of the method appears to be somewhat restricted. While I appreciate the additional experimental results you have shared, these concerns lead me to maintain my current score. I hope these issues can be considered in revisions or a more refined version of this paper in the future.\\n\\nThank you.\"}", "{\"comment\": \"**(3)** For practical application. The example in (2) is a simple illustration; when the space group of the input crystal is known, the prior knowledge in Appendix A.3 can be used to ensure that the network output fully obeys the constraints of tensor properties. For instance, the mask from Table 11 can be applied to weight the network output, ensuring 100\\\\% compliance with the constraints in Table 11 (this can be applied during both the training and inference phases). This was not experimentally demonstrated in our current work, but we plan to explore related studies in future work.\\n\\n----\\n\\n**Response to second question (round2)**\\n\\nOn the dielectric dataset, our method shows an improvement of EwT 5\\\\%, while for other metrics, our method indeed achieves results similar to the minimum frame averaging. Additionally, we have added the results for the piezoelectric tensor dataset and the elastic tensor dataset, as shown below:\\n\\n| Metric | eCom. (Minimal Frame Averaging) | **GoeCTP (Ours)** |\\n|----------------------|----------------------------------|-------------------|\\n| **Fnorm \\u2193** | 0.448 | 0.448 |\\n| **EwT 25% \\u2191** | 44.5% | **44.9%** |\\n| **EwT 10% \\u2191** | 42.7% | **43.1%** |\\n| **EwT 5% \\u2191** | 37.3% | **40.1%** |\\n| **Total Time (s) \\u2193** | 999 | 938 |\\n\\n*Table: Predictive performance comparisons between eComFormer (Minimal Frame Averaging) and GoeCTP on the piezoelectric dataset.*\\n\\n| Metric | eCom. 
(Minimal Frame Averaging) | **GoeCTP (Ours)** |\\n|----------------------|----------------------------------|-------------------|\\n| **Fnorm \\u2193** | 110.98 | **107.11** |\\n| **EwT 25% \\u2191** | **42.9%** | 42.5% |\\n| **EwT 10% \\u2191** | 15.0% | **15.3%** |\\n| **EwT 5% \\u2191** | 6.5% | **7.2%** |\\n| **Total Time (s) \\u2193** | 2488 | 2422 |\\n\\n*Table: Predictive performance comparisons between eComFormer (Minimal Frame Averaging) and GoeCTP on the elastic dataset.*\\n\\nOur method performs better than minimum frame averaging on other datasets.\\n\\n----------\\n\\n**$\\\\bullet$ Response to Table 4 (round2)**\\n\\nFor Table 4, the specific experimental steps are as follows:\\n\\n**(1)** Training GeoCTP, and after completion, we extract eComFormer from GeoCTP for comparative testing.\\n\\n**(2)** The original test dataset (i.e., the test dataset for the first and third columns) was adjusted to invariant positions using polar decomposition. The augmented test dataset (i.e., the test dataset for the second and fourth columns) was adjusted to random crystal positions using random orthogonal matrices.\\n\\n**(3)** Using the models obtained in Step 1, GeoCTP and eComFormer were tested on both the original test dataset and the augmented dataset to validate the effectiveness of GeoCTP.\\nSince the original test dataset in Step 2 was adjusted to invariant positions, the processing is identical for GeoCTP. As a result, eComFormer and GeoCTP yield the same outcomes.\\n\\nIf the original test dataset is not processed and eComFormer is trained individually, the results in Table 4 would look as follows:\\n\\n| Metric | eCom. (ori. data) | eCom. (aug. data) | **GoeCTP (Ours)** (ori. data) | **GoeCTP (Ours)** (aug. 
data) |\\n|----------------------|-------------------|-------------------|------------------------------|------------------------------|\\n| **Fnorm \\u2193** | 3.60 | 4.96 | 3.23 | 3.23 |\\n| **EwT 25% \\u2191** | 80.6% | 69.4% | 83.2% | 83.2% |\\n| **EwT 10% \\u2191** | 56.2% | 42.7% | 56.8% | 56.8% |\\n| **EwT 5% \\u2191** | 32.6% | 18.5% | 35.5% | 35.5% |\\n\\n*Table: Ablation study for verifying the \\\\(O(3)\\\\) equivariance with dielectric dataset.*\\n\\nWe hope our response satisfactorily addresses the reviewer's concerns.\\n\\n--------\\n\\n**$\\\\bullet$ Response to weakness 3 and 4 (round2)**\\n\\nThank you for the astute observation. \\nThis is indeed true. The experimental environment may influence the algorithm outcome. GMTNet uses an A100 GPU, while we used an RTX 3090. Besides, the versions of libraries such as PyTorch and Numpy used in the experiments may differ from those used in GMTNet's experiments, potentially causing some discrepancies. To keep the experimental environment the same, we reproduced the experimental results using the source code released by the corresponding authors and the same settings reported in their paper. According to our experimental results, our method outperforms GMTNet.\"}", "{\"title\": \"Response to Reviewer RVFL (round2)\", \"comment\": \"We sincerely thank you for reviewing again. We will attempt to address your concerns further.\\n\\n----------\\n\\n**$\\\\bullet$ Response to the application for other models (round2)**\\n\\nIn the earliest version, we already presented this, as shown in Tables 15 and 16 in Appendix A.4. We demonstrated the results of our method combined with CrystalFormer [1] and iComFormer [2] (another method proposed in ComFormer). 
The details are as follows:\\n\\n| Metric | MEGNET | ETGNN | GMTNet | **GoeCTP (eCom.)** | **GoeCTP (iCom.)** | **GoeCTP (Crys.)** |\\n|-----------------------|---------|---------|---------|--------------------|--------------------|--------------------|\\n| **Fnorm \\u2193** | 3.71 | 3.40 | 3.28 | **3.23** | 3.40 | 3.53 |\\n| **EwT 25% \\u2191** | 75.8% | 82.6% | **83.3%** | 83.2% | 81.7% | 80.1% |\\n| **EwT 10% \\u2191** | 38.9% | 49.1% | 56.0% | **56.8%** | 53.8% | 52.9% |\\n| **EwT 5% \\u2191** | 18.0% | 25.3% | 30.5% | **35.5%** | 32.3% | 30.6% |\\n| **Total Time (s) \\u2193** | 663 | 1325 | 1611 | 616 | **535** | 645 |\\n| **Time/batch (s) \\u2193** | 0.052 | 0.104 | 0.126 | 0.048 | **0.042** | 0.202 |\\n\\n*Table: Additional comparison of performance metrics on dielectric dataset.*\\n\\n| Metric | MEGNET | ETGNN | GMTNet | **GoeCTP (eCom.)** | **GoeCTP (iCom.)** | **GoeCTP (Crys.)** |\\n|-----------------------|----------|---------|----------|--------------------|--------------------|--------------------|\\n| **Fnorm \\u2193** | 143.86 | 123.64 | 117.62 | 107.11 | **102.80** | 107.44 |\\n| **EwT 25% \\u2191** | 23.6% | 32.0% | 36.0% | 42.5% | **46.7%** | 43.5% |\\n| **EwT 10% \\u2191** | 3.0% | 3.8% | 7.6% | 15.3% | **18.6%** | 15.8% |\\n| **EwT 5% \\u2191** | 0.5% | 0.5% | 2.0% | 7.2% | **8.2%** | 7.9% |\\n| **Total Time (s) \\u2193** | 2899 | 4448 | >36000 | **2422** | 4035 | 7891 |\\n| **Time/batch (s) \\u2193** | 0.226 | 0.348 | >2.813 | **0.189** | 0.315 | 0.616 |\\n\\n*Table: Additional comparison of performance metrics on elastic dataset.*\\n\\n[1] Taniai, Tatsunori and Igarashi, Ryo and Suzuki, Yuta and Chiba, Naoya and Saito, Kotaro and Ushiku, Yoshitaka and Ono, Kanta. Crystalformer: Infinitely Connected Attention for Periodic Structure Encoding. *ICLR2024*.\\n\\n[2] Yan, Keqiang and Fu, Cong and Qian, Xiaofeng and Qian, Xiaoning and Ji, Shuiwang. Complete and efficient graph transformers for crystal material property prediction. 
*ICLR2024*.\\n\\n---------\\n\\n**$\\\\bullet$ Response to Equation 2 (round2)**\\n\\nAfter citing GMTNet in the revised Appendix A.2, we explain the reasons. The details are as follows: \\n\\n\\n**Dielectric tensor** The specific reason is rooted in the fact that the dielectric tensor characterizes a material's polarization response to an external electric field, describing the relationship between the electric displacement $\\\\mathbf{D}\\\\in\\\\mathbb{R}^3$ and the applied electric field $\\\\mathbf{E}\\\\in\\\\mathbb{R}^3$ by $\\\\mathbf{D}=\\\\boldsymbol{\\\\varepsilon}\\\\mathbf{E}$. \\nWhen an $O(3)$ group transformation $\\\\mathbf{Q}$ is applied to the crystal structure, we have $\\\\mathbf{D'}=\\\\boldsymbol{\\\\varepsilon}'\\\\mathbf{E'}$, where $\\\\mathbf{D'}=\\\\mathbf{Q}\\\\mathbf{D}$ and $\\\\mathbf{E'}=\\\\mathbf{Q}\\\\mathbf{E}$. This results in the transformation of the dielectric tensor under $O(3)$ group transformation as $\\\\boldsymbol{\\\\varepsilon}'=\\\\mathbf{Q}\\\\boldsymbol{\\\\varepsilon}\\\\mathbf{Q}^T$. \\n\\nTo help the reader better understand the tensor properties of the crystal, we give the introduction and transformation rules of the piezoelectric tensor as well as the elastic tensor, which we add in Appendix A.2. The details are as follows:\"}", "{\"comment\": \"-----------------------\\n\\n**$\\\\bullet$ Response to question 4**\\n\\n\\nWe thank the reviewer for the astute observation. Before the ICLR submission deadline, the authors of GMTNet had not released their collected Elastic Tensors dataset. Therefore, we evaluated our collected Elastic Tensors dataset (from dft\\\\_3d in https://pages.nist.gov/jarvis/databases/), introducing the performance discrepancy. In response to the reviewer's concern, we reevaluated the performance with the datasets used by GMTNet (released on Nov. 
5), and the results are as follows: \\n\\n\\n| | GMTNet | **GoeCTP (Ours)** |\\n|--------------------------|--------------|-------------------|\\n| **Fnorm \\u2193** | 80.06 | 72.43 |\\n| **EwT 25% \\u2191** | 57.8% | 62.2% |\\n| **EwT 10% \\u2191** | 14.9% | 27.5% |\\n| **EwT 5% \\u2191** | 4.6% | 14.7% |\\n| **Total Time (s) \\u2193** | >24000 | 1815.55 |\\n\\n*Table: Predictive performance comparisons between GMTNet and GoeCTP on another elastic dataset.*\\n\\n\\nWe can see that our method still outperforms GMTNet.\"}", "{\"metareview\": \"This paper received very weak support from the reviewers, and some of them raised major concerns on this paper.\", \"additional_comments_on_reviewer_discussion\": \"Some of the major issues remain unresolved after rebuttals.\"}", "{\"comment\": \"Thank you to the authors for their response.\\n\\nRegarding weakness 1, the authors clarified that this work and DiffCSP++[1] build on existing methods but are applied to different tasks, which I find acceptable. They also emphasized that the approach is plug-and-play; however, the paper only provides results for its application to the Conformer model, without experimental evidence supporting its use with other models. In the appendix, the authors addressed weakness 2 and question 1, citing GMTNet[2] but without directly clarifying the applicability of Equation 2, leaving some contributions of the work unclear. \\n\\nTherefore, I maintain my previous score.\", \"references\": \"[1] Rui Jiao, Wenbing Huang, Yu Liu, Deli Zhao, and Yang Liu. Space group constrained crystal generation. In The Twelfth International Conference on Learning Representations, 2024. \\n[2] Keqiang Yan, Alexandra Saxton, Xiaofeng Qian, Xiaoning Qian, and Shuiwang Ji. A space group symmetry informed network for o (3) equivariant crystal tensor prediction. 
In Forty-first International Conference on Machine Learning, 2024b.\"}", "{\"comment\": \"We sincerely thank you for your valuable time and comments.\\nBelow we will address your questions in detail.\\n\\n------------------\\n\\n**$\\\\bullet$ Response to weakness 1**\\n\\nWe did draw some inspiration from DiffCSP++. However, our method\\u2019s architecture is not derived from DiffCSP++. First of all, polar decomposition is a classical mathematical method that was fully studied in the last century [1]; it was not proposed by DiffCSP++. The contribution of DiffCSP++ lies in applying polar decomposition to crystal generation tasks, where it uses polar decomposition and symmetric bases to obtain simplified and compressed original data.\\nIn contrast, we demonstrated that polar decomposition could be applied to achieve equivariant tensor property prediction, which has not been explored before.\\nAdditionally, since our method is plug-and-play and can be integrated with other prediction methods, our code implementation is based on another existing method.\\nWe hope our response addresses your concerns.\\n\\n[1] Higham N J. Computing the polar decomposition\\u2014with applications. *SIAM Journal on Scientific and Statistical Computing*, 1986, 7(4): 1160-1174.\\n\\n\\n------------------------\\n\\n**$\\\\bullet$ Response to weakness 2 and question 1**\\n\\nThank you for the reminder and suggestions. To enhance understanding, we have used the dielectric tensor as an example in Equation 2 in the paper. For different tensors, the form of Equation 2 may differ, and our method can be applied to different high-order tensors. We include additional explanations and clarifications in Appendix A.2. \\n\\n\\n\\n--------------------\\n\\n**$\\\\bullet$ Response to weakness 3**\\n\\n\\nThank you for pointing this out. There are two methods named Crystalformer [1,2], but we made an incorrect citation. 
We correct this in the PDF.\\n\\n[1] Wang, Yingheng and Kong, Shufeng and Gregoire, John M and Gomes, Carla P. Conformal Crystal Graph Transformer with Robust Encoding of Periodic Invariance. *AAAI2024*.\\n\\n[2] Taniai, Tatsunori and Igarashi, Ryo and Suzuki, Yuta and Chiba, Naoya and Saito, Kotaro and Ushiku, Yoshitaka and Ono, Kanta. Crystalformer: Infinitely Connected Attention for Periodic Structure Encoding. *ICLR2024*.\"}" ] }
0jmFRA64Vw
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models
[ "Kai Yi", "Georg Meinhardt", "Laurent Condat", "Peter Richtárik" ]
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server, while being respectful of privacy. A critical bottleneck in FL is the communication cost. A pivotal strategy to mitigate this burden is Local Training, which involves running multiple local stochastic gradient descent iterations between communication phases. Our work is inspired by the innovative Scaffnew algorithm, which has considerably advanced the reduction of communication complexity in FL. We introduce FedComLoc (Federated Compressed and Local Training), integrating practical and effective compression into Scaffnew to further enhance communication efficiency. Extensive experiments, using the popular Top-K compressor and quantization, demonstrate its prowess in substantially reducing communication overheads in heterogeneous settings.
[ "Federated Learning", "Compression", "Sparsity", "Quantization", "Communication Efficiency", "Local Training" ]
https://openreview.net/pdf?id=0jmFRA64Vw
https://openreview.net/forum?id=0jmFRA64Vw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wLRiDjNzsk", "c6LSimPWAl", "HPHyRodIJG", "21Klt2trdl" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731936526849, 1730601977595, 1730522354882, 1730297473826 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission399/Authors" ], [ "ICLR.cc/2025/Conference/Submission399/Reviewer_7m4M" ], [ "ICLR.cc/2025/Conference/Submission399/Reviewer_zB8T" ], [ "ICLR.cc/2025/Conference/Submission399/Reviewer_6eQN" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents an empirical study on a new algorithm, FedComLoc, which extends Scaffnew by integrating compression techniques: TopK and quantization. Three settings of the proposed algorithm are evaluated: (i) compressing the communication from client to the server; (ii) compressing the local model itself; and (iii) compressing the communication from the server to each client. The paper reports the empirical performances for these configurations by using FedMNIST and FedCIFAR10 with varying degrees of heterogeneity.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper reports and discusses the empirical performances of FedComLoc in different configurations and hyperparameter settings, including heterogeneity, quantization bits, and TopK sparsities.\", \"The paper presents some interesting observations based on the numerical results.\"], \"weaknesses\": [\"The scope of the paper is narrow, focusing solely on one algorithm, Scaffnew, with a basic integration of existing compression schemes. The paper lacks justification for why Scaffnew was chosen over other potential algorithms.\", \"The paper provides an insufficient review of existing communication-efficient FL approaches (e.g., FedEF [1]). 
These existing approaches were also missing in the numerical experiment.\", \"The experimental setup is not extensive, relying only on image datasets and simple models: MLPs and CNNs. Given the paper's empirical focus without theoretical analysis or in-depth discussion, it is challenging to generalize the findings. Moreover, the numerical comparison is limited and lacks breadth.\", \"The paper offers no new insights or findings beyond empirical results and observations. The main conclusion appears to be: \\\"we can apply compression schemes to Scaffnew.\\\"\", \"**Reference**\", \"1. Li and Li. Analysis of error feedback in federated non-convex optimization with biased compression: Fast convergence and partial participation. ICML, 2023.\"], \"questions\": \"1. Why was Scaffnew chosen specifically for this study? What unique characteristics of Scaffnew make it suitable for the proposed compression techniques compared to other algorithms?\\n1. Do the authors expect the empirical observations to generalize to other algorithms using the same compression schemes? If so, could the paper include a discussion on the expected performance of these compression schemes when applied to other popular algorithms?\\n1. If the goal is to develop a communication-efficient algorithm that outperforms the state-of-the-art (SOTA), what are the current SOTA methods in communication efficiency? The proposed method integrates compression schemes with one particular algorithm. Are there other approaches that might achieve comparable communication efficiency in federated learning? A review and comparison with SOTA methods could strengthen the context of this work.\\n1. Are the sub-captions in Figure 1 accurate, or could there be some typos that need correction? 
Please confirm and revise if necessary to ensure clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores communication reduction techniques for a federated optimization method called Scaffnew. The proposed approach demonstrates that Scaffnew can be combined with various communication reduction techniques on both the local and global sides. Empirical results illustrate the effectiveness of FedComLoc in reducing communication overhead while maintaining comparable performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly well-written, concise, and easy to follow.\", \"weaknesses\": \"1. My major concern is the novelty and technical contribution of this paper. Model compression techniques, such as top-k and quantization, are already widely used and well-established. Integrating these compression methods with an FL algorithm appears to be an incremental contribution. While this approach does address the communication cost challenges in Scaffnew, it is not immediately clear to me how applying model compression introduces new challenges or is non-trivial. Therefore, the technical contribution seems relatively weak to me.\\n\\n2. It appears that the proposed algorithm and experiments are conducted under a partial participation setting in FL. This could lead to potential \\u201casynchronous\\u201d issues in FedComLoc-Global: since there is no model initialization step in FedComLoc, a client that has not participated in the previous $t-1$ steps would begin local training in the $t$-th step with an outdated model. This lack of updated model initialization may result in poorer convergence, particularly when only a small fraction of clients participate in the training.\\n\\n3. There are several issues with the presentation of the experimental results:\\n\\na. 
The caption of the subfigures in Figure 1 mentions sparsity, but the curves also represent sparsity.\\n\\nb. The results in the table in Figure 6 conflict with those in the subfigure within the same figure.\\n\\nc. The caption of Figure 8 states that there is a comparison with FedDyn, but the subfigures do not include this baseline.\\n\\nd. What is the purpose of K=100% (no sparsity) in Figure 8? It seems this is intended to compare Scaffnew with FedAvg and Scaffold.\", \"questions\": \"Please refer to the weaknesses section for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work the authors suggest to incorporate compression, via sparsification (TopK) and (random) quantization, into the established Scaffnew algorithm of Mishchenko et al. (2022); where the latter considerably advanced the reduction of communication complexity in federated learning. The integration of compressed communication into Scaffnew is studied in either of client-to-server, server-to-client, and client-storage possibilities; and experimentally verified using MNIST and CIFAR10 datasets in the non-iid local datasets scenario.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Clarity:\", \"The paper is overall well written and organized.\", \"The experiments' accompanied details and explanations are overall well presented.\", \"The ablation study is quite comprehensive.\"], \"weaknesses\": [\"Significance & Quality:\", \"Compression in FL has been extensively studied. 
There is less motivation to incorporate it into multiple SGD local iterations, which already by itself relaxes the communication overhead compared to a single SGD local iteration, as communicating the local gradients to the server occurs less often.\", \"As stated by the authors in lines 177-179, this work's integration of compression in Scaffnew is merely heuristic and solely provides numerical evaluations for CompressedScaffnew (Condat et al., 2022); the latter studies the theoretical aspects of general lossy compression integrated into Scaffnew in convex settings.\", \"The majority of FL papers with provable convergence guarantees employ convex settings in their theoretical evaluations and non-convex ones in their experimental studies; having deep neural networks fit into that regime.\", \"CompressedScaffnew presents the idea of compressed Scaffnew, provides an analytical study under the convex setting, and presents simulations that, unusually compared to its related works, cover only a simplistic logistic regression model rather than diverse deep learning architectures. The significance of this paper is thus equivalent to the importance of a full-length comprehensive experiments section of the work CompressedScaffnew; i.e., one that includes simulations on general neural networks beyond the simplified logistic regression model. As a result, the merit of this paper for a conference such as ICLR is a minor contribution.\"], \"originality\": [\"The idea studied in this paper is not new and was already presented, and also analytically analyzed, in CompressedScaffnew (Condat et al., 2022). The numerical simulations of this idea are claimed to be the novelty of the presented work, and by themselves are insufficient.\", \"The authors further claim in lines 169-171 that the studies of CompressedScaffnew (Condat et al., 2022) are not practical as they require shared randomness. 
Yet, a bulk of works studying compressed FL utilize pseudo-random methods upon which sharing a common seed can overcome the necessity of shared randomness.\"], \"questions\": \"1. In line 14 and in line 24 when you write 'heterogeneous clients' and 'heterogeneous settings', respectively, it is unclear if you mean heterogeneity in data or local hardware or both.\\n2. In lines 28-29: \\\"Privacy concerns and limited computing resources on edge devices often make centralized training impractical, where all data is gathered in a data center for processing.\\\" Maybe you mean \\\"Privacy concerns and limited computing resources on a data center often make...\\\"?\\n3. In lines 38-39: \\\"Our primary objective is to solve the problem (ERM) and deploy the optimized global model to all clients.\\\". Actually, in FL, it is often the server who wishes to obtain the optimal model rather than the local users. The users are contributing their local private datasets to the process of learning, \\\"serving\\\" the centralizing server.\\n4. In line 40 there is a typo, it should be 'is' instead of 'are'.\\n5. In lines 54-55: \\\"Quantization is another efficient model compression technique..., though its application in heterogeneous settings is limited\\\". Quite a harsh statement. Do you have any references supporting it?\\n6. In lines 71-74 you claim that FedComLoc is specially designed for heterogeneous environments. My question is why do you claim that, as your adopted compression methods, encompassing TopK and quantization, are generic tools in compression; and furthermore, your numerical evaluations involve the non-iid local datasets scenario, as in standard FL experimental studies.\\n7. In lines 75-78 you mentioned that the integration of compressed communication into Scaffnew is studied in either of client-to-server, server-to-client, and client-storage possibilities.
Actually, as also covered by the majority of compressed FL works, it is the client-to-server communication bottleneck that is the most crucial one to be relaxed. \\n8. Algorithm 1 is almost unreadable without being closely familiar with Scaffnew. For the paper to be a stand-alone one, you should specify in the accompanying text the non-intuitive usages therein; e.g., control variates, the role of probability, etc.\\n9. In theoretical studies of compressed FL, it is typically revealed that the integration of compression slows down the convergence rate obtained without it. Can you explain how in the rightmost column of Fig. 3 the reverse is evidenced?\\n10. In lines 365-366: \\\"Observe the accelerated convergence of sparsified models in terms of communicated bits...\\\". It is not clear how this is being calculated. That is, to translate K into bits one can do, e.g., if K=10% set R=0.1*b when b is the used-bits in full-precision, mostly 32 or 64. What does the x-axis of 'Communication Bits' measure in your case? \\n11. It would be interesting to compare the performance of quantization and sparsification for the same bit rate...\\n12. In lines 370-371: \\\"This indicates that sparsity training requires more data and benefits from either increased communication rounds...\\\". Benefits? According to my understanding, more communication rounds imply slower overall convergence (that takes longer time), which is not wanted. Can you explain that?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0je4SA7Jjg
Spatiotemporal Learning on Cell-embedded Graphs
[ "Yuan Mi", "Qi Wang", "Hao Sun" ]
Data-driven simulation of physical systems has recently kindled significant attention, where many neural models have been developed. In particular, mesh-based graph neural networks (GNNs) have demonstrated significant potential in predicting spatiotemporal dynamics across arbitrary geometric domains. However, the existing node-edge message passing mechanism in GNNs limits the model's representation learning ability. In this paper, we proposed a cell-embedded GNN model (aka, CeGNN) to learn spatiotemporal dynamics with lifted performance. Specifically, we introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features. Such a strategy essentially upgrades the local aggregation scheme from first order (e.g., from edge to node) to a higher order (e.g., from volume to edge and then to node), which takes advantage of volumetric information in message passing. Meanwhile, a novel feature-enhanced block is designed to further improve the performance of CeGNN and alleviate the over-smoothness problem, via treating the latent features as basis functions. The extensive experiments on various PDE systems and one real-world dataset demonstrate that CeGNN achieves superior performance compared with other baseline models, significantly reducing the prediction errors on several PDE systems.
[ "Spatiotemporal Dynamics", "Graph Learning", "Physics-embeded Learning" ]
Reject
https://openreview.net/pdf?id=0je4SA7Jjg
https://openreview.net/forum?id=0je4SA7Jjg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zE7pm9TIwp", "xrfb6ZX5LZ", "wUfnyFs4Si", "umkZTIu9cH", "uH1fRemj7S", "t3RgzHQzVh", "ncqOf9Vjsq", "hmfxx9Eofm", "dJPDV1J5OH", "cWZ5F1iLXI", "X0srC34vSc", "WQLexXJjrh", "V0K0avBDUw", "TzK81J9a2T", "RLBExNhHGJ", "RITv1NUSys", "PY4QGAucqw", "M9AGTQzWby", "KaXXntYUBQ", "KEpBCvCYPp", "JnjmItoXdq", "JnKA3Bpq8N", "JOap0vZOJS", "IHjuUXTgUa", "Gtp0RiOHU0", "FwHfJUtqxd", "EfGD26zpsB", "EPMXhXGfZR", "EK8otVzn1C", "DD5mGltKhP", "C7gzoAVply", "BCuOmr6aVT", "B2KkgYpiq8", "Av2uZhCskV", "7u24XPsjyM", "5JLzBySMJP", "4WVib330qv", "1ULL4tCZFg", "1TwN3yysqw", "1KtR9A3FBN" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732610960548, 1732711703916, 1732309719539, 1732177773134, 1732159676938, 1730322747198, 1732676174939, 1732153646431, 1732373671915, 1732373516351, 1737523670432, 1732713708651, 1730147142985, 1732724188488, 1732721607770, 1732722798550, 1732332667269, 1732159959834, 1732158244685, 1732716808330, 1732611053882, 1732176120240, 1732717337883, 1734994691746, 1732177795523, 1732507434237, 1732201421300, 1732163024715, 1732666245210, 1732177838339, 1732157034567, 1730121111227, 1730311571251, 1732177817735, 1732507474536, 1732632431505, 1732176095776, 1732507381560, 1732611080836, 1732690955623 ], 
"note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_rgEv" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_S14d" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_KT6u" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_1r7X" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_1r7X" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_1r7X" ], [ "ICLR.cc/2025/Conference/Submission4916/Area_Chair_6m7X" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_S14d" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_rgEv" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_KT6u" ], [ "ICLR.cc/2025/Conference/Submission4916/Reviewer_rgEv" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4916/Reviewer_S14d" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ], [ "ICLR.cc/2025/Conference/Submission4916/Authors" ] ], "structured_content_str": [ "{\"title\": \"Looking forward to your feedback on our reply to your additional comment\", \"comment\": \"Dear Reviewer rgEv,\\n\\nAgain, thanks for your constructive comments, which are very much helpful for improving our paper.\\n\\nMoreover, your additional comment on \\\"time interval\\\" has been addressed (please our reply above). If there are any further questions or points that need discussion, we will be happy to address them. Your feedback is invaluable in helping us improve our work, and we eagerly await your response.\\n\\nOn a separate note, we have thoroughly proofread our paper, corrected typos and grammar mistakes, and re-organized the contents to improve the clarity of the paper. We believe the presentation has been **substantially improved** (see revisions marked in red color). Please refer to the **updated .pdf file**.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Reply to the additional comments from Reviewer rgEv\", \"comment\": \"Thank you for your additional feedback. Your time and effort placed on reviewing our paper are greatly appreciated!\\n\\n> **Comment: Regarding the comparison effectiveness (baselines are \\\"too old\\\").**\\n\\n**Reply:** We appreciate but respectfully *disagree* with the comment that the baselines used in our study are \\\"too old\\\". 
In fact, we carefully selected a range of baselines (**eight in total**) that are widely recognized in the field (e.g., MGN [1] (ICLR, 2021), GAT [2] (arXiv, 2017), GATv2 [3] (arXiv, 2022), FNO [4] (ICLR, 2021)) and some latest representative methods (e.g., MP-PDE [5] (ICLR, 2022), FFNO [6] (ICLR, 2023), Geo-FNO [7] (NIPS, 2024), Transolver [8] (ICML, 2024)) for comparison. Meanwhile, we have provided **extensive** experiments (see **Tables 1-5, Appendix Tables S13-S17** in the revised paper) comparing our method with all baselines to demonstrate our model's effectiveness. If the reviewer believes additional specific baselines are necessary, we sincerely welcome your suggestions and would be more than happy to include them in final version of the paper. \\n\\n> **Comment: Regarding the novelty of cell-attribution (differences from complex GNNs).**\\n\\n**Reply:** Our method provides a **lightweight yet effective alternative two-level message passing mechanism**, specifically tailored for prediction of spatiotemporal dynamics. Generally, the application of complex spatial GNNs comes with significant training challenge (see **Appendix Table S17** in the revised paper). In contrast, our cell-attribution mechanism introduces rich and effective localized spatial feature learning with efficient two-level message passing mechanism, which is **not only easy to follow but also interpretable** (see **Appendix Subsubsection D.3.5** in the revised paper). The novelty of cell-attribution has been clearly demonstrated from the theoretical perspective (see **Subsubsection 3.1.2, Appendix Subsection C.1, C.2, D.3** in the revised paper) and evidence of the empirical results (see **Tables 1, 3 and 4, Appendix Tables S13-S17** in the revised paper). 
Hope this clarifies your concern.\\n\\n\\n> **Comment: Regarding the \\\"over-claim for long-term rollout\\\".**\\n\\n**Reply:** We respectfully clarify that our claims regarding the long-term rollout performance are **fully supported** by extensive experimental results. As shown in **Figures 5-6, Tables 1, 3 and 4, Appendix Tables S13-S17** in the revised paper, our model consistently outperforms the baseline models in long-term prediction tasks across diverse datasets.\\n\\nAs we all know, **long-term rollout is a fundamental and common challenge** in the field of spatiotemporal dynamics, as error accumulation exists especially when the training datasets are limited. However, our results demonstrate clear improvements of our model over existing methods, which we believe justifies our claim. Hope this clarifies your concern.\\n\\n> **Comment: Regarding insights into how complex representation helps spatial-temporal modeling.**\\n\\n**Reply:** We agree that exploring the role of complex representation in spatiotemporal modeling is an important direction. Our study has demonstrated how the two-level message passing mechanism (see **Subsubsection 3.1.2, Appendix Subsection C.1, C.2, D.3, Tables 1-3 and 4, Appendix Tables S13-S17** in the revised paper) and the feature-enhanced block (see **Subsubsection 3.1.1, Appendix Subsection C.3, Tables 2-3** in the revised paper) effectively capture these spatial interactions. In our future work, we plan to further investigate along this horizon.\\n\\n\\n***Concluding Remark:*** We appreciate the reviewer's feedback and the suggestion for the AC to take a comprehensive view. We believe our work makes a clear contribution to spatiotemporal prediction by introducing the **two-level message passing mechanism** and the **feature-enhanced block**. We hope the clarifications provided above thoroughly address the reviewer's concerns. Thank you very much!\\n\\n***References:***\\n\\n[1] Pfaff et al. 
Learning mesh-based simulation with graph networks. In ICLR, 2021.\\n\\n[2] Veli\\u010dkovi\\u0107 et al. Graph attention networks. arXiv 2017.\\n\\n[3] Brody S, Alon U, Yahav E. How attentive are graph attention networks? arXiv 2022.\\n\\n[4] Li et al. Fourier neural operator for parametric partial differential equations. In ICLR, 2021.\\n\\n[5] Brandstetter et al. Message passing neural PDE solvers. In ICLR, 2022.\\n\\n[6] Tran et al. Factorized fourier neural operators. In ICLR, 2023.\\n\\n[7] Li et al. Geometry-informed neural operator for large-scale 3d pdes. In NIPS, 2024.\\n\\n[8] Wu et al. Transolver: A fast transformer solver for pdes on general geometries. In ICML, 2024.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Most of my concern is addressed. But I observe that the time step setting is different between different datasets. Why do they have such a large gap? Can it be attributed to the different time interval settings? For example, 1000 steps in your 2D Burgers setting takes only a few seconds.\"}", "{\"title\": \"Reply to Reviewer KT6u (Part 1)\", \"comment\": \"Thanks for your constructive comments!\\n\\n>**Weakness 1:** Add recent works as new baselines.\\n\\n**Reply:** Great suggestion! We have added results of comparison with several other baseline models in the revised paper (see Subsection 4.3 on Page 7, Table 1 on Page 9, and Appendix Tables S3, S10-S12 on Pages 19-21), namely,\\n- The additional experimental results are added in **Table 1**. \\n- A discussion about the baseline comparison results is added in **Subsection 4.3**. \\n- Details of the new baselines, e.g., the hyperparameters, have been added in **Appendix Subsection D.2, Appendix Tables S3, S10-S12**.\\n\\nSpecifically, we have considered three additional baselines (aka, FFNO, Geo-FNO, Transolver) to compare with the performance of CeGNN on all our benchmarks. A summary of these models is shown in the following **Table F**. 
The results show that Geo-FNO [1] and Transolver [3] perform poorly on the limited training dataset, which is significantly smaller, by at least an order of magnitude less, than the hundreds or thousands of datasets typically used for such models. We have added the new results to our revised paper, referring to **Table 1** (Page 9). \\n\\n**Table F: RMSE metrics for CeGNN and other models.** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ------- | ------- | ---------------- | ------- |\\n| CeGNN | **0.0066** | **0.0036** | **0.0024** | **0.0013** | **0.5560** |\\n| Geo-FNO [1] | 0.5936 | 20.514 | 0.1867 | NaN | 1.2893 |\\n| FFNO [2] | 0.0334 | 0.1192 | 0.0362 | 0.0359 | - |\\n| Transolver [3] | 0.1742 | 0.1372 | 0.1859 | 0.1520 | 0.8199 |\\n\\n**References:**\\n\\n[1] Li, et al. A. Fourier neural operator with learned deformations for pdes on general geometries. arXiv:2207.05209, 2022.\\n\\n[2] Tran, et al. Factorized fourier neural operators. In ICLR, 2023.\\n\\n[3] Wu, et al. Transolver: A fast transformer solver for pdes on general geometries. In ICML, 2024.\\n\\n\\n>**Weakness 2a:** Summarize the contributions of CellMPNN block from a theoretical perspective. \\n\\n**Reply:** Great comment! Generally, the traditional message passing (MP) mechanism can be regarded as a refinement on a discrete space, analogous to an interpolation operation, which implies that edges are essentially interpolated from nodes. A MP mechanism introducing the cell further enhances the refinement of the discrete space (secondary refinement), thereby reducing the magnitude of discretization errors spatially, paving the way for its application in complex graph structures. Please see below **Definition 1** on cell in graph and **Corollary 1** on expressive power of cell-embedded MP. Therefore, we proposed a new two-level cell-embedded mechanism to process the message on graphs, which forms the key novelty of the proposed model. 
\\n\\n***Definition 1 (Cell in Graph):*** Let $G = (V, E)$ be a graph, where $V$ is the set of nodes $\\\\mathbf{v}$ and $E \\\\subseteq V \\\\times V$ is the set of edges. A cell in $G$ is a subset of nodes $C \\\\subseteq V$, such that the nodes in $C$ form a complete subgraph (clique) or satisfy predefined structural relationships. In particular, a $k$-cell $C_k$ in a graph $G$ contains $k+1$ nodes, where $\\\\forall i, j \\\\in C_k$, $(\\\\mathbf{v}_i, \\\\mathbf{v}_j) \\\\in E$, representing various structures, such as node ($k=0$), edge ($k=1$), triangle ($k=2$), tetrahedron ($k=3$), and so on.\\n\\n***Corollary 1 (Expressive Power):*** Given a graph $G$ including many $k$-cell ($k=0,1,2,\\\\dots$), there exists a cell-based scheme that is more expressive than Weisfeiler-Lehman (WL) tests in distinguishing non-isomorphic graphs (see the proof in **Appendix Subsection C.2** of the revised paper (Pages 16-17)).\"}", "{\"title\": \"Reply to Reviewer rgEv (Part 2)\", \"comment\": \">**Weakness 3:** Dataset source; training dataset size; the experiments on the self-generated training data are not convincing.\\n\\n**Reply:** Thank you for your comments.\\n\\n>> The source of training data.\\n\\nIn fact, all the data used in this paper are from the **publicly available datasets**. Here, the datasets of four synthetic PDE systems are from the work in [5], while the real-world Black Sea dataset is taken from [6].\\n\\n**References:**\\n\\n[5] Rao, et al. Encoding physics to learn reaction\\u2013diffusion processes. Nature Machine Intelligence, 2023, 5(7): 765-779.\\n\\n[6] Lima, et al. Black Sea Physical Reanalysis (CMEMS BS-Currents). Copernicus Monitoring Environment Marine Service (CMEMS), 2020.\\n\\n>> The training dataset size.\\n\\nIn fact, we have provided the trajectory number for training, validation, and testing, described by the form like (a/b/c) in **Appendix Table S2** (please see Page 18 in the paper). 
For example, the BS training/validation/testing dataset size is (20/2/2).\\n\\n>> Experiments on the self-generated training data are not convincing.\\n\\nIn fact, we have experimented not only on the synthetic PDE systems, but also on the real-world data, which are from the publicly available datasets used in previously published papers. All the results in **Table 1** in the revised paper (Page 9) have shown that CeGNN achieves superior performance compared with other baseline models. Hope this clarifies your concern.\\n\\n>**Weakness 4:** Long-term simulations with one-step prediction.\\n\\n**Reply:** Thanks for your comment. All experiment tasks are tested on **long-term prediction** rather than one-step prediction. Once the model is trained (based on one-step prediction), we employ multi-step rollout for long-term prediction. In fact, we have described it in **Subsection 4.2** (see Lines 339-342 on Page 7), shown as follows.\\n\\n- \\\"*... we mainly focus on **predicting much longer time steps** with lower error and attempt to achieve better generalization ability ... utilize the **one-step training strategy** ...*\\\"\\n\\nMore rigorously, our research task is to predict **all the next $n$ steps** given a random initial condition $\\\\mathbf{u}_ {0}$ taking the autoregressive form $\\\\mathbf{u}_ {0} \\\\rightarrow \\\\mathcal{F}(\\\\mathbf{u}_ {0}) \\\\rightarrow \\\\hat{\\\\mathbf{u}}_ {1} \\\\rightarrow \\\\mathcal{F}(\\\\hat{\\\\mathbf{u}}_ {1}) \\\\rightarrow \\\\hat{\\\\mathbf{u}}_ {2} \\\\rightarrow \\\\cdots \\\\rightarrow \\\\hat{\\\\mathbf{u}}_ {n}$, where the function $\\\\mathcal{F}$ is an unknown function learned by our model, and we set the $n$ steps as **hundreds or thousands of** steps in the synthetic datasets (e.g., 1000 steps in 2D Burgers, 3000 steps in 2D FN, 3000 steps in 2D GS, 3000 steps in 3D GS), and **20** steps in the real-world dataset. 
\\n\\n>**Weakness 5:** The significance of cell feature; Can its significance be verified or be measured?\\n\\n**Reply:** Good question! In fact, the CeGNN's innovation includes the two-level cell-embedded message passing mechanism and the unique feature-enhanced (FE) block. Given the cell-embedded mechanism that better captures the spatial dependency and the FE block that further learns the distinguishable features, CeGNN achieves superior performance compared with other baseline models across all benchmark datasets, as shown in **Table 1** in the revised paper (Page 9). \\n\\nFollowing your suggestion, we added explanation in the revised paper (Subsection 3.1 on Pages 4-6 and Appendix Section C on Pages 16-17).\\n- A primary theoretical deduction in **Subsection 3.1**. \\n- More theoretical preliminaries in **Appendix Section C**.\\n\\nGenerally, the traditional message passing (MP) mechanism can be regarded as a refinement on a discrete space, analogous to an interpolation operation, which implies that edges are essentially interpolated from nodes. A MP mechanism introducing the cell further enhances the refinement of the discrete space (secondary refinement), thereby reducing the magnitude of discretization errors spatially, paving the way for its application in complex graph structures. \\n\\n***Definition 1 (Cell in Graph):*** Let $G=(V, E)$ be a graph, where $V$ is the set of nodes $\\\\mathbf{v}$ and $E\\\\subseteq V\\\\times V$ is the set of edges. A cell in $G$ is a subset of nodes $C \\\\subseteq V$, such that the nodes in $C$ form a complete subgraph (clique) or satisfy predefined structural relationships. 
In particular, a $k$-cell $C_k$ in a graph $G$ contains $k+1$ nodes, where $\\forall i, j \\in C_k$, $(\\mathbf{v}_i, \\mathbf{v}_j) \\in E$, representing various structures, such as node ($k=0$), edge ($k=1$), triangle ($k=2$), tetrahedron ($k=3$), and so on.\\n\\n***Corollary 1 (Expressive Power):*** Given a graph $G$ including many $k$-cells ($k=0,1,2,\\dots$), there exists a cell-based scheme that is more expressive than Weisfeiler-Lehman (WL) tests in distinguishing non-isomorphic graphs (see the proof in **Appendix Subsection C.2** of the revised paper (Pages 16-17)).\\n\\nHence, we proposed a new two-level cell-embedded mechanism to process the message on graphs.\"}", "{\"summary\": \"In general, this paper tackles interesting and meaningful problems governed by PDE. It is well written and the results are sound.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Ablation study is pretty great to justify the proposed framework.\", \"weaknesses\": \"The methodology of the feature-enhanced block is not sufficiently explained; the authors should write down in the appendix more equations with more explanation. Most importantly, why do the authors propose such Algorithm 1, is there any physical or mathematical meaning/inspiration? or any hypothesis? It would be better to show the train of thought behind how the authors proposed this FE instead of just showing that it works better.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"I appreciate the authors' response. However, I still lean toward rejecting this paper for the following reasons:\", \"As Reviewer rgEv pointed out, the comparison baselines are still outdated. 
Most of the methods being compared are from 2021-2023, which raises doubts about whether this truly demonstrates the superiority of the proposed method.\", \"The credibility of the comparisons is questionable. The newly added GeoFNO (2023) crashes in most experiments, while Transolver (2024) performs significantly worse than MGN. I'm uncertain about the value of these comparisons. It either suggests that these baseline methods are not meaningful for comparison, or the authors may have implemented the baselines incorrectly.\", \"While we should be cautious in our assessment, I still find that this paper lacks novelty in its contribution.\"]}", "{\"title\": \"General reply\", \"comment\": [\"Dear Reviewers:\", \"We would like to thank you for your constructive comments, which are very helpful in improving our paper. We are pleased that the reviewers recognized the novelty and effectiveness of our work. In particular, we thank the reviewers for recognizing the *novelty* (S14d, rgEv, and KT6u) and *effectiveness* (S14d, 1r7X, and KT6u) of our method.\", \"We have addressed all the concerns in each individual rebuttal and summarized as follows. 
Comprehensive revisions and adjustments (indicated in red color) have also been made in the revised paper (please see the **updated .pdf file**).\", \"We updated the **FE block in Figure 2** to further explain its process and redefined **Algorithm 1** (placed in Appendix).\", \"We added some new related works and discussed them in **Section 2**.\", \"We added some results of new baselines on all benchmarks in **Table 1** and a discussion in **Subsection 4.3**.\", \"More details of new baselines were added in **Appendix Subsection D.2, Table S3, S10-S12**.\", \"A primary theoretical deduction of the FE block was updated in **Subsubsection 3.1.1**.\", \"More theoretical analyses of the FE block were added in **Appendix Subsection C.3**.\", \"A primary theoretical deduction of the CellMPNN block was updated in **Subsubsection 3.1.2**.\", \"More theoretical analyses of the CellMPNN block were added in **Appendix Subsection C.1-C.2**.\", \"More detailed results and discussions were added in **Appendix Subsubsection D.3.3-D.3.5, Appendix Table S14-S17**.\", \"Please do feel free to let us know if you have any further questions.\", \"Thank you very much.\", \"Best regards,\", \"The Authors of the Paper\"]}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer KT6u,\\n\\nAgain, thanks for your constructive comments. We would like to follow up on our rebuttal to ensure that all concerns have been adequately addressed. If there are any further questions or points that need discussion, we will be happy to address them. Your feedback is invaluable in helping us improve our work, and we eagerly await your response.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer 1r7X,\\n\\nAgain, thanks for your constructive comments. 
We would like to follow up on our rebuttal to ensure that all concerns have been adequately addressed. If there are any further questions or points that need discussion, we will be happy to address them. Your feedback is invaluable in helping us improve our work, and we eagerly await your response.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply to the additional comments from Reviewer KT6u\", \"comment\": \"Thank you for your feedback. Your time and effort placed on reviewing our paper are greatly appreciated!\\n\\n> **Comment: Regarding the alleged \\\"outdated\\\" baselines (2021-2023).**\\n\\n**Reply:** We appreciate but respectfully *disagree* with the comment that the baselines used in our study are \\\"outdated\\\". In fact, we carefully selected a range of baselines (**eight in total**) that are widely recognized in the field (e.g., MGN [1] (ICLR, 2021), GAT [2] (arXiv, 2017), GATv2 [3] (arXiv, 2022), FNO [4] (ICLR, 2021)) and some latest representative methods (e.g., MP-PDE [5] (ICLR, 2022), FFNO [6] (ICLR, 2023), Geo-FNO [7] (NIPS, 2024), Transolver [8] (ICML, 2024)) for comparison. Many of these have been recently published in prestigious venues and actively used as representative baseline models in recent research. Hence, we believe that the baseline comparisons are up to date in the field of spatiotemporal dynamics prediction.\\n\\nMeanwhile, we have provided **extensive** experiments (see **Tables 1-5, Appendix Tables S13-S17** in the revised paper) comparing our method with all baselines to demonstrate our model's effectiveness. If the reviewer believes that there are more recent or more appropriate baselines we should include, we sincerely welcome your suggestions and would be more than happy to include them in final version of the paper. 
Thanks!\\n\\n\\n> **Comment: Regarding the credibility of the comparisons and implementation concerns.**\\n\\n**Reply:** In fact, the underperformance of GeoFNO [7] (NIPS, 2024) and Transolver [8] (ICML, 2024) on limited training datasets likely stems from their *mapping techniques between the irregular mesh and the regular grid*, which rely on diverse training data. This is a *quite common* issue when these types of models are trained in small data regimes like the case we are specifically considering in this paper. \\n\\nMoreover, we would like to draw your attention to the fact that all the baselines were implemented with the utmost care, following publicly available codebases and the instructions provided in the corresponding papers. We assure you that these implementations are correct, and the baseline comparison codes will be released along with our codes after the peer review stage. \\n\\nOur experiments (see **Table 1** in the revised paper) aim to evaluate all the methods across diverse settings. The results highlight the robustness of our model in scenarios where other baselines may struggle, particularly when the training datasets are limited. More detailed information on these models has been provided in **Appendix Subsection D.2, Appendix Tables S4-S12** in the revised paper.\\n\\nHope this clarifies your concern.\\n\\n\\n> **Comment: Regarding the novelty of our contribution.**\\n\\n**Reply:** In fact, the novelty of our proposed CeGNN model includes the introduction of two key modules in the network architecture, namely, \\n\\n- (1) ***the two-level cell-embedded message passing mechanism***, which better captures the spatial dependency;\\n- (2) ***the unique feature-enhanced (FE) block***, which further learns the distinguishable features. 
\\n\\nWith extensive experiments on various spatiotemporal systems, CeGNN achieves superior performance compared with other baseline models across all benchmark datasets, as shown in **Table 1** in the revised paper (Page 9). This has been thoroughly discussed and clearly demonstrated both from the theoretical perspective (see **Subsubsection 3.1.2, Appendix Subsection C.1, C.2, D.3** in the revised paper) and through the empirical results (see **Tables 1, 3 and 4, Appendix Tables S13-S17** in the revised paper). Hope this clarifies your concern.\\n\\nWe also recognize that *novelty can sometimes be nuanced*, depending on a reader's personal judgement. However, our proposed network architecture, consisting of the cell-embedded message passing mechanism and the FE block, is unique and new, with its superior performance clearly contributing to its novelty.\\n\\n**Concluding Remark:** We appreciate the reviewer\\u2019s additional comments and critical evaluation. We sincerely hope to have your re-evaluation of our paper in light of our clarifications and contributions. Your possible consideration of updating the score is highly appreciated!\\n\\nWe look forward to your feedback!\\n\\n***References:***\\n\\n[1] Pfaff et al. Learning mesh-based simulation with graph networks. In ICLR, 2021.\\n\\n[2] Veli\\u010dkovi\\u0107 et al. Graph attention networks. arXiv, 2017.\\n\\n[3] Brody et al. How attentive are graph attention networks? arXiv, 2022.\\n\\n[4] Li et al. Fourier neural operator for parametric partial differential equations. In ICLR, 2021.\\n\\n[5] Brandstetter et al. Message passing neural PDE solvers. In ICLR, 2022.\\n\\n[6] Tran et al. Factorized Fourier neural operators. In ICLR, 2023.\\n\\n[7] Li et al. Geometry-informed neural operator for large-scale 3D PDEs. In NIPS, 2024.\\n\\n[8] Wu et al. Transolver: A fast transformer solver for PDEs on general geometries. 
In ICML, 2024.\"}", "{\"summary\": \"The paper presents a new model, the Cell-Embedded Graph Neural Network (CeGNN), for simulating spatiotemporal dynamics across different physical domains. CeGNN introduces learnable cell attributions to the traditional node-edge message-passing process, upgrading it to a higher-order scheme that captures volumetric information and improves spatial dependency learning. Additionally, the Feature-Enhanced (FE) block enriches feature representations, tackling the over-smoothness issue common in Graph Neural Networks (GNNs). Extensive experiments demonstrate that CeGNN achieves superior performance and generalization in predicting physical dynamics, particularly for Partial Differential Equations (PDEs) and real-world datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Good empirical results.\", \"weaknesses\": \"The main weakness is lack of novelty. The main idea of this paper is the proposal of cell-attribution. In the field of topological learning, there have been several prior works proposing the idea of higher-order message passing, cell / simplicial complex neural networks. 
Please check the following literature:\\n\\n(1) Topological Deep Learning: Going Beyond Graph Data (this is a great survey of Topological Deep Learning)\\n\\nhttps://arxiv.org/abs/2103.03212\\n\\nHowever, the application of these topological methods in the domain of learning physical systems is new.\", \"questions\": \"Please address the weakness in novelty that I have raised!\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to your comment on high-order message passing\", \"comment\": \"Thank you for listing these papers.\\n\\nAlthough these methods all involve modeling of high-order interactions, the feature representation of our model is fundamentally different from that of these existing works, and we hope the two will not be conflated. \\n\\nPlease note that our FE block is uniquely designed, and it is this aspect that constitutes the novelty of our work. We hope this helps clarify the reviewer\\u2019s misunderstanding. \\n\\nThanks!\\n\\nThe Authors\"}
\\n\\nWith extensive experiments on various spatiotemporal systems, CeGNN achieves superior performance compared with other baseline models across all benchmark datasets, as shown in **Table 1** in the revised paper (Page 9). \\n\\nWe respectfully believe that your comment overlooks a key distinction in our approach, specifically **the integration of the FE block**, which introduces additional complexity and novelty to our model. The efficacy of the FE block has been thoroughly discussed from the theoretical perspective (see **Subsubsection 3.1.1, Appendix Subsection C.3** in the revised paper) and clearly demonstrated by the empirical results (see **Tables 2-3** in the revised paper). Specifically, the experimental results in **Table A** show the efficacy of the FE block, referring to **Table 2** in the revised paper.\\n\\n**Table A: Efficacy of the feature-enhanced (FE) block.** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ------- | ------- | ------- | ------- |\\n| MGN | 0.0117 | 0.0210 | 0.0291 | 0.0192 | 0.6147 |\\n| MGN + FE | 0.00817 | 0.01241 | 0.01583 | 0.00721 | 0.60593 |\\n| Promotion (%) | 30.4 | 41.1 | 45.7 | 62.5 | 1.4 |\\n| MP-PDE | 0.0178 | 0.0284 | 0.0386 | 0.0652 | 0.6076 |\\n| MP-PDE + FE | 0.01445 | 0.01957 | 0.02621 | 0.03655 | 0.60372 |\\n| Promotion (%) | 18.9 | 31.2 | 32.1 | 41.5 | 0.6 |\\n| CeGNN | 0.0066 | 0.0036 | 0.0024 | 0.0013 | 0.5559 |\\n\\nConcretely, the FE block is inspired by interaction models in physics and mathematically hypothesized to capture nonlinear dependencies, enhancing the model's representation power.\\n\\n***Hypothesis.*** The FE block's hypothesis could be framed as: *By capturing second-order interactions between latent features and applying selective filtering, the model can better represent complex structures or relationships in the data*.\\n\\n***Physical Analogy.*** In physics, the outer product and second-order terms are often used to model interactions, such
as stress tensors in mechanics or pairwise correlations in quantum mechanics. Here, the module could draw an analogy to systems where interactions between individual components (features) are crucial to the overall behavior.\\n\\n***Process Overview.*** In detail, it regards the node latent feature $\\\\overline{\\\\mathbf{h}}_ i \\\\in\\\\mathbb{R}^{D\\\\times 1}$ as basis and builds a higher-order tensor feature $\\\\mathbf{H}_ {i} \\\\in\\\\mathbb{R}^{D\\\\times D}$ via an outer-product operation, e.g., $\\\\overline{\\\\mathbf{h}}_ {i}\\\\otimes\\\\overline{\\\\mathbf{h}}_ {i}$. This process creates abundant second-order nonlinear terms to enrich the feature map. We then use a mask operation with $\\\\mathbf{M}\\\\in\\\\mathbb{R}^{D\\\\times D}$ to randomly sample these terms, filtering the appropriate information by a learnable weight tensor $\\\\mathbf{W}\\\\in\\\\mathbb{R}^{D\\\\times D\\\\times D}$ to enhance the model's representation capacity.\\n\\nIn summary, unlike models based solely on topological structures, our work combines both the **cell and FE** modules for efficiently **extracting and processing** features that are essential for learning complex spatiotemporal dependency. This dual-module structure enables our model to better capture dynamic interactions. We have provided substantial theoretical analysis (see **Section 3, Appendix Section C** in the revised paper) and experimental validation (see **Section 4, Appendix Section D**) of our contribution, which differentiates our approach from existing models. We believe that this combination is key to achieving the improvements demonstrated in our experiments and that the theoretical novelty of our work lies in this synergy between topology and feature processing.\\n\\nHope this clarifies your concern.\\n\\n***Concluding Remark:*** We appreciate the reviewer\\u2019s additional comments and critical evaluation. 
We sincerely hope to have your re-evaluation of our paper in light of our clarifications and contributions. Your possible consideration of updating the score is highly appreciated!\\n\\nWe look forward to your feedback!\"}", "{\"title\": \"Higher order message passing\", \"comment\": \"Dear Authors,\\n\\nI understand the importance of second-order interactions in physical modeling. However, there have been works proposing higher-order interaction modeling / message passing for physical systems. Please check:\\n\\nPredicting molecular properties with covariant compositional networks\\n\\nhttps://arxiv.org/abs/1812.09902\\n\\nBest,\\nReviewer\"}", "{\"title\": \"Reply to your additional comment\", \"comment\": \"Thank you for your feedback. Your time and effort placed on reviewing our paper are greatly appreciated!\\n\\n> **Question:** Different settings of time interval and time step in the synthetic datasets.\\n\\n**Reply:** Good question! In fact, the numerical accuracy and stability in numerical simulation of physical systems (e.g., the 4 synthetic datasets in our work) mainly rely on the temporal and spatial discretization (e.g., $\\\\Delta t$, $\\\\Delta x$), the form and coefficients of the governing equations, as well as the specific numerical solver. Typically, when finite-difference-based solvers are employed, the spatiotemporal grids should satisfy a certain condition, e.g., the Courant-Friedrichs-Lewy (CFL) condition.\\n\\nFor illustration, the CFL condition in 1D problems is expressed as $\\\\Delta t \\\\leq C\\\\Delta x / v$, where $C$ is a constant related to the specific numerical solver, and $v$ is the maximum propagation speed in the physical process. Given the computational domain and mesh grids, the time interval $\\\\Delta t$ depends on the partial differential equations that govern the specific physical problem. Hence, the time intervals have different settings in different examples. 
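To make the 1D CFL bound above concrete, here is a tiny illustrative computation; the values for `C`, `dx`, and `v` are invented for the example and do not correspond to any of the paper's datasets:

```python
# Illustrative 1D CFL check: dt <= C * dx / v.
def max_stable_dt(C: float, dx: float, v: float) -> float:
    """Largest time step allowed by the CFL condition."""
    return C * dx / v

C, dx, v = 0.5, 0.01, 2.0   # invented solver constant, grid spacing, wave speed
dt_max = max_stable_dt(C, dx, v)
print(dt_max)               # 0.0025

# A faster wave (larger v) forces a smaller time step, which is why
# different governing equations end up with different dt settings.
assert max_stable_dt(C, dx, 2 * v) == dt_max / 2
```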
\\n\\nThe number of time steps is a parameter we employed to test the model's performance on long-term rollout prediction. We choose the corresponding number of time steps when the dynamics get stabilized (e.g., the patterns remain stable), which depends on the specific physical problem and varies in each synthetic dataset.\\n\\n***Concluding Remark:*** Thank you very much for your constructive comments. Please let us know if you have any other questions. Your consideration of improving the rating of our paper will be much appreciated!\"}", "{\"title\": \"Reply to Reviewer rgEv (Part 3)\", \"comment\": \">**Weakness 6:** The benefit of adding position awareness; the performance comparison of replacing the distance to the cell center with the distance to the nearest PDE boundary for each point.\\n\\n**Reply:** Thanks for your comment. Following your suggestion, we tested two other variant models, namely, \\\"CeGNN w/o Cell Pos.\\\" and \\\"CeGNN w Cell (Pos. to B)\\\", with the results reported in the following **Table C**. Here, \\\"CeGNN w/o Cell Pos.\\\" represents the cell initial feature without the position awareness, and \\\"CeGNN w Cell (Pos. to B)\\\" replaces the distance to the cell center with the distance to the nearest PDE boundary. The results in **Table C** show that the performance improvement of CeGNN is due not only to adding position awareness into the cell initial feature, but also to the secondary refinement of the discrete space. We have added the new results to our revised paper (referring to **Appendix Table S15** on Page 22 in the revised paper).\\n\\n**Table C: RMSE metrics for CeGNN and two other cases.** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| -------------- | ---------- | ----- | ----- | ----- | ----- |\\n| CeGNN (ours) | **0.00664** | **0.00364** | **0.00248** | **0.00138** | **0.55599** |\\n| CeGNN w/o Cell Pos. | 0.00721 | 0.00477 | 0.00439 | 0.00274 | 0.56098 |\\n| CeGNN w Cell (Pos. 
to B) | 0.00720 | 0.00490 | 0.00445 | 0.00276 | 0.56040 |\\n| MGN | 0.01174 | 0.02108 | 0.02917 | 0.01925 | 0.61475 |\\n\\n\\nOverall, we appreciate your constructive comments and suggestions. Please let us know if you have any other questions. We look forward to your feedback!\"}", "{\"title\": \"Reply to Reviewer rgEv (Part 1)\", \"comment\": \"Thanks for your constructive comments!\\n\\n>**Weakness 1:** Add more recent works as baselines.\\n\\n**Reply:** Following your suggestion, we have cited several important studies from recent years [1], especially those published after 2022 (see Section 2 on Page 4), and added detailed experimental results of the baseline comparison (see Subsection 4.3 on Page 7, Table 1 on Page 9, Appendix Subsection D.2 on Page 19, Appendix Tables S10-S12 on Pages 20-21) in the revised paper.\\n\\nConcretely, we have considered some new baselines (e.g., FFNO, Geo-FNO, Transolver) to compare with our CeGNN model on all our cases. A summary of the results of these models is shown in the following **Table A**. In particular, we found that Geo-FNO [2] and Transolver [4] perform poorly on the limited training dataset, which is at least an order of magnitude smaller than the hundreds or thousands of datasets typically used for such models. We have added the new results to our revised paper, referring to **Table 1**.\\n\\n**Table A: RMSE metrics for CeGNN and other models.** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ------- | ------- | ---------------- | ------- |\\n| CeGNN | 0.00664 | 0.00364 | 0.00248 | 0.00138 | 0.55599 |\\n| Geo-FNO [2] | 0.59363 | 20.514 | 0.18669 | NaN | 1.2893 |\\n| FFNO [3] | 0.03341 | 0.11921 | 0.03628 | 0.03594 | -- |\\n| Transolver [4] | 0.17422 | 0.13724 | 0.18594 | 0.15204 | 0.81991 |\\n\\n\\n**References:**\\n\\n[1] Wang, et al. Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey. 
arXiv:2408.12171, 2024.\\n\\n[2] Li, et al. Fourier neural operator with learned deformations for PDEs on general geometries. arXiv:2207.05209, 2022.\\n\\n[3] Tran, et al. Factorized Fourier neural operators. In ICLR, 2023.\\n\\n[4] Wu, et al. Transolver: A fast transformer solver for PDEs on general geometries. In ICML, 2024 (Spotlight).\\n\\n>**Weakness 2a:** Is CeGNN an extension of MP-PDE? Whether the cell features enhance the performance of MP-PDE or not.\\n\\n**Reply:** Thanks for your comment. In fact, our method is **not a straightforward extension of MP-PDE**. \\n\\nBriefly, the key innovation of the MGN-based method MP-PDE lies in replacing the MLP in MGN's decoder with a 1D-CNN block to aid autoregressive temporal marching. Furthermore, its training strategy, i.e., the **pushforward trick and temporal bundling trick**, also leads to its performance improvement.\\n\\nHowever, the innovation of our CeGNN model is the two-level cell-embedded message passing mechanism and the unique feature-enhanced (FE) block. 
Moreover, all our experiments are trained with an end-to-end **one-step training strategy**, rather than the pushforward trick and temporal bundling trick used in MP-PDE.\\n\\nGiven the cell feature that better captures the spatial dependency and the FE block that further learns the distinguishable features, CeGNN achieves superior performance compared with other baselines, significantly reducing the prediction errors on all benchmarks, as shown in **Table 1** in the revised paper and **Table A** shown above.\\n\\n>**Weakness 2b:** How much improvement could be observed if this feature was integrated into MP-PDE?\\n\\n**Reply:** Following your suggestion, we have conducted experiments incorporating the cell feature into the MGN and MP-PDE models and added detailed results in the revised paper (see Subsection 4.3 on Page 7, Appendix Subsubsection D.3.3 on Pages 21-22, Appendix Table S14 on Page 22).\\n\\nSpecifically, we consider MP-PDE integrating the cell feature as our baseline, listed as \\\"MP-PDE + Cell\\\" in the following **Table B**. Meanwhile, the results of MGN integrating the cell feature have also been listed as \\\"MGN + Cell\\\". The results in Table B show that the cell feature improves the performance of MP-PDE, but \\\"MP-PDE + Cell\\\" still underperforms our model. 
**More importantly, the results further explain why we integrate the cell feature into MGN rather than MP-PDE.** We have added the new results to our revised paper (referring to **Appendix Table S14**).\\n\\n**Table B: RMSE metrics for CeGNN, MGN, MGN + Cell, MP-PDE, and MP-PDE + Cell.** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ----- | ----- | ----- | ----- |\\n| MGN | 0.01174 | 0.02108 | 0.02917 | 0.01925 | 0.61475 |\\n| MGN + Cell | 0.00826 | 0.00791 | 0.00832 | 0.00694 | 0.58019 |\\n| Promotion (%) | 29.6 | 62.4 | 71.4 | 63.9 | 5.6 |\\n| MP-PDE | 0.01784 | 0.02848 | 0.03860 | 0.06528 | 0.60761 |\\n| MP-PDE + Cell | 0.00951 | 0.01193 | 0.00947 | 0.00992 | 0.59313 |\\n| Promotion (%) | 46.7 | 58.1 | 75.4 | 84.8 | 2.38 |\\n| CeGNN | 0.00664 | 0.00364 | 0.00248 | 0.00138 | 0.55599 |\"}", "{\"title\": \"We eagerly await your response\", \"comment\": \"Dear Reviewer 1r7X,\\n\\nAs the rebuttal period has been underway for over two weeks and we have not yet heard from you, we would like to follow up on our rebuttal to ensure that all your concerns have been adequately addressed. If there are any further questions or points that need discussion, we will be happy to address them. We eagerly await your response.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Sincerely looking forward to your feedback\", \"comment\": \"Dear Reviewer 1r7X,\\n\\nAgain, thanks for your constructive comments, which are very helpful for improving our paper. If there are any further questions or points that need discussion, we will be happy to address them. Your feedback is invaluable in helping us improve our work, and we eagerly await your response.\\n\\nMoreover, we have thoroughly proofread our paper, corrected typos and grammar mistakes, and re-organized the contents to improve the clarity of the paper. 
We believe the presentation has been **substantially improved** (see revisions marked in red color). Please refer to the **updated .pdf file**.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Reply to Reviewer 1r7X (Part 2)\", \"comment\": \">**Weakness 1b:** Differences compared with existing topological graph learning methods.\\n\\n**Reply:** Based on the references mentioned by you, we focus on discussing CCNNs [1] and MPSNs [3], since the work in [2] shares similarities with CCNNs [1]. A summary of these models is shown in the following **Table D**. We have added this new table to our revised paper, referring to **Appendix Table S16** in the revised paper (Page 22).\\n\\n**Table D: Summary of CeGNN, MGN, and the works in [1] and [3].** \\n| Model | Message Level | Special Complex Predefinition | Simplex and Complex Type | Application |\\n| - | - | - | - | - |\\n| CeGNN | 2 | No | 3 | Dynamics |\\n| MGN | 2 | No | 2 | Dynamics |\\n| CCNNs [1] | 3 | No | 3 | Classification |\\n| MPSNs [3] | 2 | Yes | 5 | Classification |\\n\\nFirstly, MGN achieves message passing through a **two-level** structure ($edge \\\\rightarrow node$). On this basis, the CCNNs in [1] leverage combinatorial complexes to achieve message passing through a **three-level** structure ($cell \\\\rightarrow edge \\\\rightarrow node$). Meanwhile, the MPSNs in [3] perform message passing through a **two-level** structure ($complexes \\\\rightarrow simplex$) on various simplicial complexes (SCs), which primarily include **one simplex** (node) and **four types of complexes** with varying adjacent simplices to enhance feature distinguishability and thus improve classification performance.\\n\\nIn contrast, our CeGNN model sets itself apart from these works and adopts a **novel two-level** structure ($[cell, edge] \\\\rightarrow node$) from the spatial perspective, which is better suited to learning the underlying spatiotemporal dynamics. 
Since there are only node labels for the supervised learning, the three-level message passing mechanism in [1] failed to yield a performance lift, which might be due to the introduction of redundant information-passing paths. However, our two-level message passing sequence reduces the high coupling degree in [1] and avoids the need for additional predefined special complexes (e.g., certain simplicial complexes) in [3]. A comparison between the two-level and three-level mechanisms is shown in the following **Table E**. The results in Table E demonstrate that the three-level mechanism underperforms our proposed two-level mechanism in all cases. We have added the new results to our revised paper, referring to **Appendix Table S17** in the revised paper (Page 23). Moreover, as shown in **Table 3** in the revised paper (Page 9), we have provided an ablation study on all benchmarks to assess the contributions of the FE and CellMPNN blocks in CeGNN.\\n\\n**Table E: Comparison between two-level and three-level mechanisms** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| - | - | - | - | - | - |\\n| CeGNN | **0.0066** | **0.0036** | **0.0024** | **0.0013** | **0.5559** |\\n| MGN (2-level) | 0.0117 | 0.0210 | 0.0291 | 0.0192 | 0.6147 |\\n| MGN (3-level) | 0.0322 | 0.0419 | 0.0588 | 0.0821 | 0.6152 |\\n\\n**References:**\\n\\n[1] Hajij, et al. Topological deep learning: Going beyond graph data. arXiv:2206.00606, 2022.\\n\\n[2] Hajij, et al. Cell complex neural networks. arXiv:2010.00743, 2020.\\n\\n[3] Bodnar, et al. Weisfeiler and Lehman go topological: Message passing simplicial networks. In ICML, 2021.\\n\\nWe appreciate your constructive comments and suggestions. Please let us know if you have any other questions. We look forward to your feedback!\"}", "{\"title\": \"Keep my score\", \"comment\": \"Dear Authors,\\n\\nThank you for submitting the rebuttal!\\n\\nI did not raise concerns about the experiments, but about the theoretical front. 
I am still not convinced that your work is theoretically different from other topological models such as models learning on simplicial complexes or cell complexes. Simple propagation (between cell - edge - node) as in your work is indeed just a special case of the general topological model. I will keep my score. Please check the work of TopoX: https://github.com/pyt-team/TopoModelX. I believe your model can be implemented conveniently given this framework.\\n\\nBest,\\nReviewer\"}", "{\"metareview\": \"The paper proposes an encode-process-decode framework leveraging graph neural networks (GNNs) to model the spatio-temporal dynamics of partial differential equations (PDEs). It extends the classical message-passing paradigm in GNNs by introducing two key components: (i) leveraging higher-order features between graph nodes to capture spatial dependencies beyond immediate neighbor edges, and (ii) enriching node features through higher-order tensor representations to maintain feature distinction and mitigate the over-smoothing issue caused by successive feature aggregations during message-passing operations. The model is evaluated against baseline models across a variety of datasets and demonstrates improved performance.\\n\\nThe reviewers acknowledge the experimental improvements over the baselines. However, they raised several concerns about the initial version of the manuscript. These include the incremental nature of the technical contribution\\u2014several reviewers pointed out the existence of alternative methods for capturing higher-order features in GNNs\\u2014and the choice of baselines for comparison.\\nIn response, the authors provided extensive new comparisons during the rebuttal phase, including implementing the proposed components on other backbones, which also demonstrated improved performance. 
While this significantly improved upon the first version of the paper, the majority of the reviewers maintained their initial concerns, particularly regarding the incremental value of the contribution.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided extensive responses to the reviewers' concerns, including new experiments. However, this did not alleviate the reviewers' concerns about the incremental nature of the technical contribution.\"}", "{\"title\": \"Reply to Reviewer KT6u (Part 2)\", \"comment\": \">**Weakness 2b:** Summarize the contributions of the CellMPNN block from an experimental verification perspective.\\n\\n**Reply:** In particular, a summary of the effect of the CellMPNN block on the performance of some graph-based baselines across all benchmarks is shown in the following **Table H**. The results in **Table H** show the positive effect of the CellMPNN block. We have added the new results to our revised paper, referring to **Appendix Table S14** in the revised paper (Page 22).\\n\\n**Table H: Effect of the CellMPNN block on the performance of graph-based baselines across all benchmarks** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ------- | ------- | ------- | ------- |\\n| MGN | 0.0117 | 0.0210 | 0.0291 | 0.0192 | 0.6147 |\\n| MGN + Cell | 0.0082 | 0.0079 | 0.0083 | 0.0069 | 0.5801 |\\n| Promotion (%) | 29.6 | 62.4 | 71.4 | 63.9 | 5.6 |\\n| MP-PDE | 0.0178 | 0.0284 | 0.0386 | 0.0652 | 0.6076 |\\n| MP-PDE + Cell | 0.0095 | 0.0119 | 0.0094 | 0.0099 | 0.5931 |\\n| Promotion (%) | 46.7 | 58.1 | 75.4 | 84.8 | 2.38 |\\n| CeGNN | **0.0066** | **0.0036** | **0.0024** | **0.0013** | **0.5560** |\\n\\nIn our study, we also attempted to enhance feature distinguishability through a **three-level** update structure ($cell \\\\rightarrow edge \\\\rightarrow node$). 
Since there are only node labels for the supervised learning, this three-level message passing mechanism failed to yield a performance lift, which might be due to the introduction of redundant information-passing paths. Therefore, we adopt a two-level message passing sequence to reduce the high coupling degree and avoid the need for additional predefined special complexes. A comparison between the two-level and three-level mechanisms is shown in the following **Table G**. The results in **Table G** demonstrate that the three-level mechanism underperforms the proposed two-level mechanism in all cases. We have added the new results to our revised paper. Please refer to **Appendix Table S17** in the revised paper (Page 23).\\n\\n**Table G: Comparison between two-level and three-level mechanisms** \\n| Model | 2D Burgers | 2D FN | 2D GS | 3D GS | 2D BS |\\n| ------------- | ---------- | ------- | ------- | ------- | ------- |\\n| CeGNN | **0.0066** | **0.0036** | **0.0024** | **0.0013** | **0.5559** |\\n| MGN (2-level) | 0.0117 | 0.0210 | 0.0291 | 0.0192 | 0.6148 |\\n| MGN (3-level) | 0.0322 | 0.0419 | 0.0588 | 0.0821 | 0.6152 |\"}", "{\"title\": \"Request your feedback before the end of the discussion period\", \"comment\": \"Dear Reviewer 1r7X:\\n\\nAs the author-reviewer discussion period will end soon, we would appreciate it if you could review our responses at your earliest convenience. If there are any further questions or comments, we will do our best to address them before the discussion period ends.\\n\\nThank you very much for your time and efforts. Looking forward to your response!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Thank you for your positive feedback\", \"comment\": \"Dear Reviewer S14d,\\n\\nThank you very much for your positive feedback. 
Your time and effort placed on reviewing our paper are highly appreciated!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thank you!\", \"comment\": \"The authors have addressed my comments.\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"First, thanks for your response. After reading your rebuttal (including the responses to other reviewers), I can understand your attempt to take advantage of the cell-embedded graph model, and I will improve my score accordingly.\\n\\nBut considering the comparison effectiveness (baselines are still too old), the novelty of cell attribution (differences from complex GNNs), and the over-claim for long-term rollout, I think this is still a borderline paper or marginally above the threshold. I hope the authors can provide more insights into how complex representations can help spatiotemporal modeling, and the AC can make a comprehensive decision based on our discussion. Thanks for your effort.\"}", "{\"title\": \"Reply to Reviewer KT6u (Part 4)\", \"comment\": \">**Weakness 4:** Discuss CeGNN's effectiveness from a theoretical perspective.\\n\\n**Reply:** Great suggestion! In fact, the novelty of CeGNN includes the two-level cell-embedded message passing mechanism and the unique feature-enhanced (FE) block. Given the cell-embedded mechanism that better captures the spatial dependency and the FE block that further learns the distinguishable features, CeGNN achieves superior performance compared with other baseline models across all benchmark datasets, as shown in **Table 1** in the revised paper (Page 9). 
Moreover, as shown in **Table 3** in the revised paper (Page 9), we have provided an ablation study on all benchmarks to assess the contributions of the FE and CellMPNN blocks in CeGNN.\\n\\nFollowing your suggestion, we have added additional discussions in the revised paper (see Subsection 3.1 on Pages 4-6, Appendix Section C on Pages 16-17), namely,\\n\\n- A primary theoretical deduction of the FE block was added in **Subsubsection 3.1.1**.\\n- A primary theoretical deduction of the CellMPNN block was added in **Subsubsection 3.1.2**.\\n- Theoretical preliminaries have also been added in **Appendix Section C**.\\n\\nIn summary, we proposed a new two-level cell-embedded mechanism to process messages on graphs. Please also see our reply to **Weaknesses 2a and 2b** for the theoretical and experimental verification of its effectiveness. Moreover, the FE block is designed to capture nonlinear dependencies, enhancing the model's representation power. A more detailed theoretical discussion of the FE block can be found in our reply to **Weakness 3**.\\n\\nOverall, we appreciate your constructive comments and suggestions. Please let us know if you have any other questions. We look forward to your feedback!\"}", "{\"title\": \"Reply to Reviewer S14d\", \"comment\": \"Thanks for your positive feedback and constructive comments.\\n\\n>**Weakness:** More theoretical explanation of the FE block.\\n\\n**Reply:** Great suggestion! 
Following your suggestion, we have added a detailed explanation in the revised paper (see Section 3.1.1 on Pages 4-5 and Appendix Section C.3 on Page 17).\\n\\n- We updated the **FE block in Figure 2** to further explain its process and redefined **Algorithm 1** (see Appendix).\\n- A primary theoretical deduction was updated in **Subsubsection 3.1.1**.\\n- More theoretical background has also been added in **Appendix Section C.3**.\\n\\nThe FE block is inspired by interaction models in physics and mathematically hypothesized to capture nonlinear dependencies, enhancing the model's representation power.\\n\\n***Hypothesis.*** The FE block's hypothesis could be framed as: *By capturing second-order interactions between latent features and applying selective filtering, the model can better represent complex structures or relationships in the data*.\\n\\n***Physical Analogy.*** In physics, the outer product and second-order terms are often used to model interactions, such as stress tensors in mechanics or pairwise correlations in quantum mechanics. Here, the module could draw an analogy to systems where interactions between individual components (features) are crucial to the overall behavior.\\n\\n***Process Overview.*** In detail, it regards the node latent feature $\\\\overline{\\\\mathbf{h}}_ i \\\\in\\\\mathbb{R}^{D\\\\times 1}$ as a basis and builds a higher-order tensor feature $\\\\mathbf{H}_ {i} \\\\in\\\\mathbb{R}^{D\\\\times D}$ via an outer-product operation, e.g., $\\\\overline{\\\\mathbf{h}}_ {i}\\\\otimes\\\\overline{\\\\mathbf{h}}_ {i}$. This process creates abundant second-order nonlinear terms to enrich the feature map. 
We then use a mask operation with $\\\\mathbf{M}\\\\in\\\\mathbb{R}^{D\\\\times D}$ to randomly sample these terms and filter the appropriate information with a learnable weight tensor $\\\\mathbf{W}\\\\in\\\\mathbb{R}^{D\\\\times D\\\\times D}$, enhancing the model's representation capacity.\\n\\n***Outer Product as Basis Expansion.*** The outer product operation $\\\\otimes$ on the reshaped feature map $\\\\overline{\\\\mathbf{h}}_ {i}\\\\in\\\\mathbb{R}^{D\\\\times 1}$ expands the original latent feature space into a higher-order tensor space. This expansion introduces second-order terms (e.g., $\\\\alpha\\\\beta$ for $\\\\alpha, \\\\beta \\\\in \\\\overline{\\\\mathbf{h}}_{i}$), which can capture interactions between individual components of the original feature $\\\\mathbf{h} _{i} \\\\in\\\\mathbb{R}^{D}$. Mathematically, the second-order tensor reads $\\\\overline{\\\\mathbf{h}} _{i} \\\\otimes\\\\overline{\\\\mathbf{h}} _{i}$. This operation creates a richer feature map with cross-term interactions that may not be explicitly encoded in the original latent space.\\n\\n***Lemma 1 (Nonlinear Representation):*** The second-order terms $\\\\alpha\\\\beta$ can model nonlinear dependencies between features. This is particularly useful for capturing complex interactions that linear transformations (e.g., simple dot products) might overlook.\\n\\n***Definition 1:*** The FE block expands the latent feature $\\\\overline{\\\\mathbf{h}}_ i \\\\in\\\\mathbb{R}^{D\\\\times 1}$ of node $i$ into a higher-order tensor space using an outer product: $\\\\mathbf{H}_ {i}=\\\\overline{\\\\mathbf{h}}_ {i} \\\\otimes\\\\overline{\\\\mathbf{h}}_ {i}$, where $\\\\mathbf{H}_ {i}\\\\in\\\\mathbb{R}^{D\\\\times D}$ is a higher-order feature map. \\n\\n***Regularization via Masking.*** Masking introduces sparsity in $\\\\mathbf{H}_ {i}$, reducing overfitting. If $\\\\mathbf{M}_ {jk}$ is selected, $\\\\mathbf{M}_ {jk} =1$. Otherwise, $\\\\mathbf{M}_ {jk} =0$. 
Here $j$ and $k$ index the components of $\\\\mathbf{M} \\\\in\\\\mathbb{R}^{D\\\\times D}$.\\n\\nTheoretically, this operation serves two purposes: (1) Reducing computational complexity by randomly sampling terms from the higher-order feature space. (2) Regularizing the model by introducing sparsity, which can prevent overfitting in high-dimensional spaces.\\n\\n***Learnable Filtering.*** The learnable weight tensor $\\\\mathbf{W}\\\\in\\\\mathbb{R}^{D\\\\times D\\\\times D}$ acts as a filter, selecting and emphasizing the most informative terms. \\n\\n***Definition 2:*** A mask operation $\\\\mathbf{M} \\\\in \\\\mathbb{R}^{D\\\\times D}$ is applied to randomly sample elements in $\\\\mathbf{H}_ {i}$, and the resulting masked tensor is processed using a learnable weight tensor $\\\\mathbf{W} \\\\in \\\\mathbb{R}^{D\\\\times D\\\\times D}$ as follows: $\\\\tilde{\\\\mathbf{h}}_ i = (\\\\mathbf{M} \\\\odot\\\\mathbf{H}_ {i}) : \\\\mathbf{W}$, where $\\\\odot$ represents element-wise multiplication and $:$ denotes double contraction of tensors. The resulting feature $\\\\tilde{\\\\mathbf{h}}_ {i} \\\\in\\\\mathbb{R}^{1\\\\times D}$ enriches the representation of $\\\\mathbf{h}_ {i} \\\\in \\\\mathbb{R}^{D}$.\\n\\n***Corollary 1 (Representation Power):*** The full feature map $\\\\mathbf{H}_i$ contains $D^2$ terms for a $D$-dimensional input feature vector $\\\\mathbf{h}_i$. After masking, the effective representation space is reduced according to the sparsity of $\\\\mathbf{M}$. The learnable filter $\\\\mathbf{W}$ further narrows this down to the most critical terms.\\n\\nThank you very much!\"}", "{\"summary\": \"This paper proposes an end-to-end graph-based framework called CeGNN to address limitations of existing Graph Neural Networks in learning complex spatiotemporal dynamics, particularly the over-smoothing issue, and aims to enhance prediction accuracy and generalization ability. The authors introduce two key components: Cell-embedded MPNN block and Feature-Enhanced (FE) block. 
Through experiments on several PDE systems, the paper demonstrates that CeGNN outperforms other baseline models.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-organized and easy to follow.\", \"The authors present abundant experimental results and visualizations to validate their ideas.\", \"CeGNN achieves superior performance compared to the compared methods.\"], \"weaknesses\": [\"The baseline methods are relatively weak. The authors did not include recent advancements in the field from 2023-2024, raising concerns about the effectiveness of the proposed method.\", \"Modeling with higher-order graphs is a widely studied topic. Can the authors more explicitly summarize the contributions of CellMPNN compared to existing approaches?\", \"The paper lacks a theoretical discussion on the effectiveness of CeGNN. Can the authors discuss the source of CeGNN's effectiveness from a theoretical perspective?\", \"FE modules are not clearly defined.\"], \"questions\": \"Please address the weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors proposed a cell-embedded GNN model (aka,CeGNN) to learn spatio-temporal dynamics. They claim that their learnable cell attribution to the node-edge message passing process better captures the spatial dependency of regional features.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to follow.\\n2. We can see detailed description of the technical details\\n3. This paper touches the core problem in this area.\", \"weaknesses\": \"1. The most puzzling aspect of this paper is that the discussion of related work and the selection of baselines are all based on studies from before 2022. 
In fact, there have been many breakthrough studies on both graph-based methods and other neural operator approaches in recent years [1].\\n\\n2. This work appears more like a straightforward extension of MP-PDE, both in terms of methodology and experiments. The paper proposes a cell-based method for extracting spatial relationships, but how much improvement could be observed if this feature were integrated into MP-PDE?\\n\\n3. The main experimental results are somewhat confusing. Since the code is not available, it is unclear whether the training data was generated from the authors' own simulations or from public datasets, and what the training dataset size is. If the data is self-generated, the comparison with a few simple baselines is not convincing. Furthermore, the authors mention long-term simulations, yet all experiments are based on one-step predictions, which is clearly insufficient.\\n\\n4. Regarding the core innovation of this paper, the cell feature is merely a learnable feature initialized by the distance to the cell center. Can its significance be verified by theoretical analysis or by measuring the distance between cell features and node features? The benefit here might simply be from adding position awareness, which makes the model fit specific data better. It could even be considered to replace the distance to the cell center with the distance to the nearest PDE boundary for each point, which might also yield improvements.\\n\\n[1] Wang, Haixin, et al. \\\"Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey.\\\" arXiv preprint arXiv:2408.12171 (2024).\", \"questions\": \"See weakness above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer KT6u (Part 3)\", \"comment\": \">**Weakness 3:** Unclearly defined FE modules.\\n\\n**Reply:** Great comment! 
Following your suggestion, we have added a detailed explanation in the revised paper (see Section 3.1.1 on Pages 4-5 and Appendix Section C.3 on Page 17).\\n\\n- We update the **FE block in Figure 2** to further explain its process and redefine **Algorithm 1** (placed in Appendix).\\n- A primary theoretical deduction is updated in **Subsubsection 3.1.1**.\\n- More theoretical background has also been added in **Appendix Section C.3**.\\n\\nConcretely, the FE block is inspired by interaction models in physics and mathematically hypothesized to capture nonlinear dependencies, enhancing the model's representation power.\\n\\n***Hypothesis.*** The FE block's hypothesis could be framed as: *By capturing second-order interactions between latent features and applying selective filtering, the model can better represent complex structures or relationships in the data*.\\n\\n***Physical Analogy.*** In physics, the outer product and second-order terms are often used to model interactions, such as stress tensors in mechanics or pairwise correlations in quantum mechanics. Here, the module could draw an analogy to systems where interactions between individual components (features) are crucial to the overall behavior.\\n\\n***Process Overview.*** In detail, it regards the node latent feature $\\\\overline{\\\\mathbf{h}}_ i \\\\in\\\\mathbb{R}^{D\\\\times 1}$ as a basis and builds a higher-order tensor feature $\\\\mathbf{H}_ {i} \\\\in\\\\mathbb{R}^{D\\\\times D}$ via an outer-product operation, e.g., $\\\\overline{\\\\mathbf{h}}_ {i}\\\\otimes\\\\overline{\\\\mathbf{h}}_ {i}$. This process creates abundant second-order nonlinear terms to enrich the feature map. 
We then use a mask operation with $\\\\mathbf{M}\\\\in\\\\mathbb{R}^{D\\\\times D}$ to randomly sample these terms and filter the appropriate information with a learnable weight tensor $\\\\mathbf{W}\\\\in\\\\mathbb{R}^{D\\\\times D\\\\times D}$, enhancing the model's representation capacity.\\n\\n***Outer Product as Basis Expansion.*** The outer product operation $\\\\otimes$ on the reshaped feature map $\\\\overline{\\\\mathbf{h}}_ {i}\\\\in\\\\mathbb{R}^{D\\\\times 1}$ expands the original latent feature space into a higher-order tensor space. This expansion introduces second-order terms (e.g., $\\\\alpha\\\\beta$ for $\\\\alpha, \\\\beta \\\\in \\\\overline{\\\\mathbf{h}}_{i}$), which can capture interactions between individual components of the original feature $\\\\mathbf{h} _{i} \\\\in\\\\mathbb{R}^{D}$. Mathematically, the second-order tensor reads $\\\\overline{\\\\mathbf{h}} _{i} \\\\otimes\\\\overline{\\\\mathbf{h}} _{i}$. This operation creates a richer feature map with cross-term interactions that may not be explicitly encoded in the original latent space.\\n\\n***Lemma 1 (Nonlinear Representation):*** The second-order terms $\\\\alpha\\\\beta$ can model nonlinear dependencies between features. This is particularly useful for capturing complex interactions that linear transformations (e.g., simple dot products) might overlook.\\n\\n***Definition 2:*** The FE block expands the latent feature $\\\\overline{\\\\mathbf{h}}_ i \\\\in\\\\mathbb{R}^{D\\\\times 1}$ of node $i$ into a higher-order tensor space using an outer product: $\\\\mathbf{H}_ {i}=\\\\overline{\\\\mathbf{h}}_ {i} \\\\otimes\\\\overline{\\\\mathbf{h}}_ {i}$, where $\\\\mathbf{H}_ {i}\\\\in\\\\mathbb{R}^{D\\\\times D}$ is a higher-order feature map. \\n\\n***Regularization via Masking.*** Masking introduces sparsity in $\\\\mathbf{H}_ {i}$, reducing overfitting. If $\\\\mathbf{M}_ {jk}$ is selected, $\\\\mathbf{M}_ {jk} =1$. Otherwise, $\\\\mathbf{M}_ {jk} =0$. 
Here $j$ and $k$ index the components of $\\\\mathbf{M} \\\\in\\\\mathbb{R}^{D\\\\times D}$.\\n\\nTheoretically, the masking operation serves two purposes: (1) Reducing computational complexity by randomly sampling terms from the higher-order feature space. (2) Regularizing the model by introducing sparsity, which can prevent overfitting in high-dimensional spaces.\\n\\n***Learnable Filtering.*** The learnable weight tensor $\\\\mathbf{W}\\\\in\\\\mathbb{R}^{D\\\\times D\\\\times D}$ acts as a filter, selecting and emphasizing the most informative terms. \\n\\n***Definition 3:*** A mask operation $\\\\mathbf{M} \\\\in \\\\mathbb{R}^{D\\\\times D}$ is applied to randomly sample elements in $\\\\mathbf{H}_ {i}$, and the resulting masked tensor is processed using a learnable weight tensor $\\\\mathbf{W} \\\\in \\\\mathbb{R}^{D\\\\times D\\\\times D}$ as follows: $\\\\tilde{\\\\mathbf{h}}_ i = (\\\\mathbf{M} \\\\odot\\\\mathbf{H}_ {i}) : \\\\mathbf{W}$, where $\\\\odot$ represents element-wise multiplication and $:$ denotes double contraction of tensors. The resulting feature $\\\\tilde{\\\\mathbf{h}}_ {i} \\\\in\\\\mathbb{R}^{1\\\\times D}$ enriches the representation of $\\\\mathbf{h}_ {i} \\\\in \\\\mathbb{R}^{D}$.\\n\\n***Corollary 2 (Representation Power):*** The full feature map $\\\\mathbf{H}_i$ contains $D^2$ terms for a $D$-dimensional input feature vector $\\\\mathbf{h}_i$. After masking, the effective representation space is reduced according to the sparsity of $\\\\mathbf{M}$. The learnable filter $\\\\mathbf{W}$ further narrows this down to the most critical terms.\"}", "{\"title\": \"Request your feedback before the end of the discussion period\", \"comment\": \"Dear Reviewer KT6u:\\n\\nAs the author-reviewer discussion period will end soon, we would appreciate it if you could review our responses at your earliest convenience. 
If there are any further questions or comments, we will do our best to address them before the discussion period ends.\\n\\nThank you very much for your time and efforts. Looking forward to your response!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"The authors deserve some feedback for their effort.\", \"comment\": \"This is an excellent paper, particularly after addressing all the reviewers' comments. It is well-written, well-organized, and reflects a high level of effort and attention to detail. I firmly believe the authors deserve constructive communication or feedback to acknowledge their dedication.\"}", "{\"title\": \"Reply to Reviewer 1r7X (Part 1)\", \"comment\": \"Thanks for your constructive comments!\\n\\n>**Weakness 1a:** The concern on novelty of cell-attribution.\\n\\n**Reply:** Thanks for your comment. In fact, the novelty of our proposed CeGNN model includes the introduction of two key modules in the network architecture, namely, (1) the two-level cell-embedded message passing mechanism and (2) the unique feature-enhanced (FE) block. Given the cell-embedded mechanism that better captures the spatial dependency and the FE block that further learns the distinguishable features, CeGNN achieves superior performance compared with other baseline models across all benchmark datasets, as shown in **Table 1** in the revised paper (Page 9). \\n\\nHowever, following your suggestion, we have added many contents in the revised paper (see Subsection 3.1 on Pages 4-6 and Appendix Tables S16, S17 on Page 23).\\n- A primary theoretical deduction is added in **Subsection 3.1**. \\n- Two experimental results are added in **Appendix Tables S16 and S17**. \\n- More discussion have also been added in **Appendix Subsubsection D.3.5**.\\n\\nGenerally, the traditional message passing (MP) mechanism can be regarded as a refinement on a discrete space, analogous to an interpolation operation, which implies that edges are essentially interpolated from nodes. 
An MP mechanism introducing the cell further enhances the refinement of the discrete space (e.g., secondary refinement), thereby reducing the magnitude of discretization errors spatially and paving the way for its application in complex graph structures. Please see **Definition 1** below on cells in graphs and **Corollary 1** on the expressive power of cell-embedded MP. Therefore, we proposed a new two-level cell-embedded mechanism to process messages on graphs, which forms the key novelty of the proposed model.\\n\\n***Definition 1: (Cell in Graph)*** Let $G = (V, E)$ be a graph, where $V$ is the set of nodes $\\\\mathbf{v}$ and $E \\\\subseteq V \\\\times V$ is the set of edges. A cell in $G$ is a subset of nodes $C \\\\subseteq V$, such that the nodes in $C$ form a complete subgraph (clique) or satisfy predefined structural relationships. In particular, a $k$-cell $C_k$ in a graph $G$ contains $k+1$ nodes, where $\\\\forall i, j \\\\in C_k$, $(\\\\mathbf{v}_i, \\\\mathbf{v}_j) \\\\in E$, representing various structures, such as node ($k=0$), edge ($k=1$), triangle ($k=2$), tetrahedron ($k=3$), and so on.\\n\\n***Corollary 1: (Expressive Power)*** Given a graph $G$ containing many $k$-cells ($k=0,1,2,\\\\dots$), there exists a cell-based scheme that is more expressive than Weisfeiler-Lehman (WL) tests in distinguishing non-isomorphic graphs (see the proof in **Appendix Subsection C.2** of the revised paper (Pages 16-17)).\"}", "{\"title\": \"Request your feedback before the end of the discussion period\", \"comment\": \"Dear Reviewer rgEv:\\n\\nAs the author-reviewer discussion period will end soon, we would appreciate it if you could review our responses at your earliest convenience. If there are any further questions or comments, we will do our best to address them before the discussion period ends.\\n\\nThank you very much for your time and efforts. 
Looking forward to your response!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Sincerely looking forward to your feedback\", \"comment\": \"Dear Reviewer KT6u,\\n\\nAgain, thanks for your constructive comments, which are very much helpful for improving our paper. If there are any further questions or points that need discussion, we will be happy to address them. Your feedback is invaluable in helping us improve our work, and we eagerly await your response.\\n\\nMoreover, we have thoroughly proofread our paper, corrected typos and grammar mistakes, and re-organized the contents to improve the clarity of the paper. We believe the presentation has been **substantially improved** (see revisions marked in red color). Please refer to the **updated .pdf file**.\\n\\nThank you very much for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thank you for your encouraging feedback\", \"comment\": \"Dear Reviewer S14d,\\n\\nThank you for your kind and encouraging feedback. We very much appreciate your recognition of our contribution!\\n\\nWe are sincerely looking forward to the discussion with the reviewers, whose comments have been helpful in improving our paper.\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
0jUeqlQxMi
Open Vocabulary Panoptic Segmentation With Retrieval Augmentation
[ "Nafis Sadeq", "Qingfeng Liu", "MOSTAFA EL-Khamy" ]
Given an input image and set of class names, panoptic segmentation aims to label each pixel in an image with class labels and instance labels. In comparison, Open Vocabulary Panoptic Segmentation aims to facilitate the segmentation of arbitrary classes according to user input. The challenge is that a panoptic segmentation system trained on a particular dataset typically does not generalize well to unseen classes beyond the training data. In this work, we propose a retrieval-augmented panoptic segmentation method that improves the performance of unseen classes. In particular, we construct a masked segment feature database using paired image-text data. At inference time, we use masked segment features from the input image as query keys to retrieve similar features and associated class labels from the database. Classification scores for the masked segment are assigned based on the similarity between query features and retrieved features. The retrieval-based classification scores are combined with CLIP-based scores to produce the final output. We incorporate our solution with a previous SOTA method (FC-CLIP). When trained on COCO, the proposed method demonstrates 30.9 PQ, 19.3 mAP, 44.0 mIoU on the ADE20k dataset, achieving +4.5 PQ, +2.5 mAP, +10.0 mIoU absolute improvement over the baseline.
[ "Panoptic Segmentation", "Open Vocabulary", "Retrieval Augmentation" ]
Reject
https://openreview.net/pdf?id=0jUeqlQxMi
https://openreview.net/forum?id=0jUeqlQxMi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yG4tN5lTvH", "wnXao3sZTa", "uDMH75qZAt", "qTXc6yN6u9", "hoEEAuv6y5", "ci4XQHyyuc", "YVcoyENv9q", "Xt3AKCVug6", "TFRmfZIxpi", "CBsml2GQ7A", "Aw80wp2QJl", "5C3G6P910Q", "3okIxnoxXL" ], "note_type": [ "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729754660016, 1732386069003, 1733084756870, 1734890383364, 1730526865845, 1732350024889, 1731137804728, 1737523936643, 1733172220601, 1729503038802, 1733172835624, 1732336177858, 1733174120963 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_xAjY" ], [ "ICLR.cc/2025/Conference/Submission8845/Authors" ], [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_5aAG" ], [ "ICLR.cc/2025/Conference/Submission8845/Area_Chair_9AzC" ], [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_X1Um" ], [ "ICLR.cc/2025/Conference/Submission8845/Authors" ], [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_5aAG" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8845/Authors" ], [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_GoTi" ], [ "ICLR.cc/2025/Conference/Submission8845/Reviewer_5aAG" ], [ "ICLR.cc/2025/Conference/Submission8845/Authors" ], [ "ICLR.cc/2025/Conference/Submission8845/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper enhances open-vocabulary panoptic segmentation by leveraging retrieval augmentation to address the challenges of classifying unseen objects. The authors propose a framework that integrates masked segment features with a retrieval-based method to improve performance for unseen classes. The model builds a feature database using paired image-text data and retrieves similar features during inference to classify masked segments. 
These retrieval-based scores are combined with CLIP-based scores to enhance accuracy. When applied to FC-CLIP, the proposed method demonstrates improvements in unseen classes on the ADE20k dataset.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Applying Retrieval Augmentation to vision tasks is a promising direction. The proposed way of constructing a database is interesting.\", \"weaknesses\": \"1. While the method builds on FC-CLIP, the authors do not provide an introduction to FC-CLIP, which makes the paper hard to follow during reading.\\n2. The feature database should be introduced prior to discussing the retrieval method to improve the flow and clarity of the paper.\\n3. Since retrieval augmentation is intended to be a more general approach, the paper would benefit from presenting a more generalized framework to reflect its broader applicability.\\n4. The method of constructing the feature database itself serves as a strong baseline. How does the performance of the proposed retrieval-augmentation approach compare to Grounding DINO?\\n5. The paper lacks essential evaluations (the method is only evaluated on a single dataset with a single base model) and ablation studies.\", \"questions\": \"Please refer to the weaknesses. I think the current version is not ready for publication. More experiment results are expected.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to reviewer GoTi\", \"comment\": \"**Difference between IV Classification and OOV Classification via CLIP**\\n The IV classification system includes components that are fine-tuned on a panoptic segmentation dataset with pixel-level annotations. The trainable components are shown in green in Figure 1. This system is specifically fine-tuned on COCO. 
If the input class names are within the COCO vocabulary, the IV system demonstrates superior performance due to fine-tuning. This helps the system handle the most frequent classes (person, cars, sky, etc.) and even generalize to a new dataset (e.g., ADE20k). OOV classification via CLIP is applied when the input class names are not part of the fine-tuning dataset. The OOV system does not have trainable linear projection layers to improve text and image embeddings beyond the vanilla CLIP.\\n\\n**Fallback Dataset**\\n The fallback dataset can be any paired image-text dataset that includes sample images for novel target classes. An ideal candidate is the Google Open Image dataset v7, which has 61 million image-level labels across more than 20,000 classes. We use the fallback dataset to demonstrate that our feature database can be easily extended to novel classes by utilizing large-scale image-text data, which is widely available for a large number of classes.\\n\\nIf the input class names include a class name that is not represented in the feature database, we can retrieve image samples for that novel class from the fallback dataset, compute features, and add them to the database. Thus, our proposed method can be easily extended to novel classes with minimal effort.
The reviewers raised many concerns regarding the paper, e.g., limited technical novelty, unclear motivation, unconvincing experimental evaluations, inadequate literature reviews, etc. Considering the reviewers' concerns, we regret that the paper cannot be recommended for acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers mainly hold concerns regarding limited technical novelty (Reviewers GoTi), unclear motivation and statement (Reviewers 5aAG, X1Um, xAjY, GoTi), unconvincing experimental evaluations (Reviewers X1Um, xAjY, GoTi), inadequate literature reviews (Reviewer 5aAG, xAjY, GoTi). The authors' rebuttal could not fully address the above-listed concerns.\"}", "{\"summary\": \"The paper presents a novel approach to address the challenge of segmenting arbitrary classes in images, a task known as open vocabulary panoptic segmentation. The authors propose a retrieval-augmented method that leverages a masked segment feature database constructed from image-text pairs. During inference, the system uses masked segment features from the input image to retrieve similar features and class labels from the database, combining these retrieval-based classification scores with CLIP-based scores to produce the final output. The method is evaluated on the ADE20k dataset and shows significant improvements over the baseline, particularly when fine-tuned on the COCO dataset, with absolute improvements of +4.5 PQ, +2.5 mAP, and +10.0 mIoU.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a creative solution to the open vocabulary panoptic segmentation problem by combining retrieval-based classification with CLIP, which is an original approach not commonly seen in the literature.\\n2. 
The paper is well-structured, with clear explanations of the methodology.\", \"weaknesses\": \"1. While the paper demonstrates improvements over the baseline, it does not provide a direct comparison with other state-of-the-art methods in the field, which could provide additional context for the significance of the results.\\n2. The discussion on how the proposed method generalizes to unseen classes could be expanded, as this is a critical aspect of open vocabulary segmentation.\\n3. The paper could further discuss the limitations of the retrieval-augmented approach, especially regarding the reliance on the quality of the feature database and the potential scalability issues as the number of classes increases.\", \"questions\": \"1. Could the authors elaborate on how their method generalizes to completely unseen classes that are not represented in the feature database?\\n2. As the number of classes in the feature database grows, how does the retrieval process scale in terms of computational resources and accuracy?\\n3. Are there any plans to compare the proposed method with other leading approaches in the field to contextualize the improvements?\\n4. The paper mentions that the quality of mask proposal generation is crucial. Could the authors provide more details on how variations in mask quality affect the final segmentation results?\\n5. Is there potential to integrate this method with other modalities, such as depth information or video data, to further improve performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to reviewer X1Um\", \"comment\": \"**Generalizing to Unseen Classes**\\n If a class is not represented in the feature database, then the performance of the proposed system on that class is expected to be similar to a CLIP-only baseline. 
However, extending the feature database is very easy since we only require paired image-text data with class-level annotations. This is widely available for a large number of classes. The YFCC100M dataset alone contains 100 million paired image-text samples. Similarly, the Google Open Image dataset v7 has 61 million image-level labels on more than 20,000 classes. Our proposed method can easily generalize to unseen classes by collecting sample images for a new target class from large-scale paired image-text data.\\n\\n**Scaling**\\n Since we implement approximate nearest neighbor search with FAISS, the proposed method can scale to millions of images with minimal additional overhead compared to the baseline. We refer the reviewer to the time complexity analysis in the FAISS [1] paper for more details.\\n\\n**Comparison with Other Leading Approaches**\\n We plan to compare our method with Possam [2], which outperforms FC-CLIP by improving mask proposal generation with SAM using trainable components.\\n\\n**Impact of Mask Proposal Generation**\\n Table 3 shows the impact of mask proposal generation for the training-free setup. For example, with a CLIP-large backbone, we see a 43% drop in PQ (28.4 -> 16.1) when we use Grounding DINO + SAM instead of ground truth masks. This shows that the quality of mask proposal generation can have a significant impact on panoptic segmentation performance.\", \"references\": \"1. Johnson, Jeff, Matthijs Douze, and Herv\\u00e9 J\\u00e9gou. \\\"Billion-scale similarity search with GPUs.\\\" IEEE Transactions on Big Data 7.3 (2019): 535-547.\\n2. VS, Vibashan, et al. \\\"Possam: Panoptic open-vocabulary segment anything.\\\" arXiv preprint arXiv:2403.09620 (2024).\"}", "{\"summary\": \"This paper presents an approach for open vocabulary panoptic segmentation by combining retrieval-based classification with standard image segmentation. 
In particular, the authors introduce a retrieval-augmented segmentation method that utilizes a database of paired image-text features. During inference to address the challenge of domain shift between masked and natural images, the model retrieves relevant features from this database using masked segment features from the input image as queries. This retrieval-based score is combined with scores from a vision-language model (CLIP) to enhance classification accuracy for unseen classes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The reviewer likes the integration of retrieval-based classification with CLIP-based scores to address the domain shift issues between masked images and natural images. It clearly improves the model's ability to recognize unseen classes without additional training.\", \"The paper's approach to construct a feature database from widely available paired image-text data is interesting. This setup enables adaptability without requiring pixel-level annotations.\", \"The paper is well-organized and well-written.\"], \"weaknesses\": [\"The reviewer feels that the retrieval-based classification relies heavily on the quality and diversity of the feature database constructed from paired image-text data. If the database lacks sufficient variety or coverage, the method may struggle to classify certain unseen classes accurately, particularly in real-world scenarios with a wide range of objects.\", \"Further, the reviewer observed that the method uses Grounding DINO and SAM for generating masks in the training-free setup. However, SAM can produce suboptimal masks without human input which may degrade segmentation accuracy. 
This dependence on mask quality can limit the method\\u2019s effectiveness in fully automated settings.\", \"The authors may want to include methods such as ODISE for a more comprehensive analysis.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal to reviewer 5aAG - 2\", \"comment\": \"**Mask Quality in Training-Free Setup**\\nSAM requires a point prompt or a bounding box input for high-quality mask generation. Since Grounding DINO is a strong open-set object detection model, it can generate the bounding box prompts required by SAM for thousands of novel classes. We refer the reviewer to the Grounding DINO paper for more details. While the quality of mask proposal generation in the training-free setup remains lower compared to the cross-dataset setup, we are not aware of any work that achieves higher performance in a fully training-free setup (without using pixel-level annotations in the same or cross-dataset settings).\\n\\n1. Liu, Shilong, et al. \\\"Grounding DINO: Marrying DINO with Grounded Pre-training for Open-set Object Detection.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n**ODISE**\\n We are happy to add ODISE as a baseline for comparison in the revised version of the paper. Incorporating a retrieval-based method with ODISE is beyond the scope of this work.\\n\\nWe thank the reviewer for the thoughtful feedback, as it is very helpful for improving our work.\"}", "{\"summary\": \"This paper introduces a retrieval-based method to enhance the performance of open vocabulary panoptic segmentation by constructing a feature database from paired image-text data. 
During inference, the model uses masked segment features from the input image to query the database for similar features and associated class labels, which are then combined with CLIP-based scores. This approach leads to improvements in Panoptic Quality in both training-free and cross-dataset settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper explains related concepts clearly and details the methodology comprehensively, making the overall article easy to understand.\", \"weaknesses\": \"1. The novelty of the paper is limited, primarily building upon the feature retrieval idea from Gui et al. [1]. Compared to Gui et al. [1], the main modifications only include using a single CLIP backbone instead of two backbones. Please explain how these contributions can meet the strict requirements of top-level conferences.\\n\\n2. The authors use open vocabulary object detection combined with SAM to build the feature database, which limits the model's performance to the capabilities of the object detection component. Please explain how to handle classes that are not included in both the feature database and the fallback dataset during inference, or discuss the limitations of their approach for truly open-vocabulary scenarios.\\n\\n3. The definitions of IV Classification and OOV Classification are confusing. Why is it considered that the segment features and text embedding after linear projection in Figure 1 are equivalent to IV Classification? Please provide a more detailed explanation of the distinction between these two classifications and why the linear projection is significant for IV Classification.\\n\\n4. The experimental section lacks a critical component: comparisons with state-of-the-art methods, such as Gui et al. [1], HIPIE [2], ODISE [3], OPSNet [4]. Please explain why these specific comparisons are not included and how your method compares theoretically to these state-of-the-art approaches.\\n\\n5. 
How does this method perform on open vocabulary semantic segmentation tasks, such as testing on ADE20K-847, ADE20K-150, Pascal Context-459?\\n\\n6. The paper claims to achieve performance improvement by utilizing a completely different dataset with only image-level annotations. However, using the ADE20K training set to construct a feature database and evaluating it on the ADE20K validation set in the experiment lacks persuasiveness for open vocabulary. Please clarify how to ensure the open vocabulary nature when using the same dataset for both feature database construction and evaluation.\\n\\n7. There is irrelevant content in the lower-left corner of Figure 2. Please redraw the figure and ensure that the image is complete and free from irrelevant content.\", \"reference\": \"[1] Zhongrui Gui, Shuyang Sun, Runjia Li, Jianhao Yuan, Zhaochong An, Karsten Roth, Ameya Prabhu, and Philip Torr. knn-clip: Retrieval enables training-free segmentation on continually expanding large vocabularies, 2024. URL https://arxiv.org/abs/2404.09447.\\n\\n[2] Wang X, Li S, Kallidromitis K, et al. Hierarchical open-vocabulary universal image segmentation[J]. Advances in Neural Information Processing Systems, 2024, 36.\\n\\n[3] Xu J, Liu S, Vahdat A, et al. Open-vocabulary panoptic segmentation with text-to-image diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2955-2966.\\n\\n[4] Chen X, Li S, Lim S N, et al. Open-vocabulary panoptic segmentation with embedding modulation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 1141-1150.\", \"questions\": \"1. What is the difference between IV Classification and OOV Classification via CLIP in cross-dataset panoptic segmentation? What is the significance of this distinction? From Figure 1, it appears that the former only differs from the latter by including a linear projection.\\n2. 
What is the fallback dataset, and how do the authors build it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There are no ethics concerns.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for responding. Can you explain why comparison with ODISE is beyond the scope of this work? Aren't both of these methods aiming for open-vocab panoptic segmentation?\"}", "{\"title\": \"Rebuttal to reviewer 5aAG\", \"comment\": \"**Reliance on Feature Database**\\nThe reviewer expressed concern that the performance of the system depends on the quality and diversity of the feature database. We argue that the feature database is constructed from an aligned image-text dataset with class-level annotations, which is widely available on a very large scale. The YFCC100M dataset alone contains 100 million paired image-text samples. On the other hand, typical panoptic segmentation datasets require supervised annotations at a pixel level. Due to the difficulty of constructing a fully supervised panoptic segmentation dataset, the typical size and diversity of such datasets are limited (e.g., ADE20k with 20,000 samples and only 150 classes). Since our method can exploit image-text datasets with class-level annotations, we can easily improve the diversity and quality of the feature database by exploiting large-scale datasets such as YFCC100M. Using widely available class-level annotations to improve a system that typically requires pixel-level annotations is an important contribution of our proposed method.\"}", "{\"title\": \"Rebuttal to reviewer 5aAG - 3\", \"comment\": \"We did not say that comparison with ODISE is beyond the scope of this work. Rather, we mentioned that we are happy to add ODISE as a baseline for comparison. 
Quoting from the rebuttal above, `We are happy to add ODISE as a baseline for comparison in the revised version of the paper.` Our current implementation uses FC-CLIP as a backbone; adding a diffusion-based backbone with retrieval-based panoptic segmentation is beyond the scope of this work. The relevant sentence from the above rebuttal is: `Incorporating a retrieval-based method with ODISE is beyond the scope of this work.` Apologies for any misunderstanding.\"}" ] }
0jJ94VVgzi
Criteria and Bias of Parameterized Linear Regression under Edge of Stability Regime
[ "Peiyuan Zhang", "Amin Karbasi" ]
Classical optimization theory requires a small step-size for gradient-based methods to converge. Nevertheless, recent findings (Cohen et al., 2021) challenge the traditional idea by empirically demonstrating that Gradient Descent (GD) converges even when the step-size $\eta$ exceeds the threshold of $2/L$, where $L$ is the global smoothness constant. This is usually known as the \emph{Edge of Stability} (EoS) phenomenon. A widely held belief suggests that an objective function with subquadratic growth plays an important role in incurring EoS. In this paper, we provide a more comprehensive answer by considering the task of finding a linear interpolator $\beta \in \mathbb{R}^{d}$ for regression with loss function $l(\cdot)$, where $\beta$ admits the parameterization $\beta = w^2_{+} - w^2_{-}$. Contrary to previous work that suggests a subquadratic $l$ is necessary for EoS, our novel finding reveals that EoS occurs even when $l$ is quadratic under proper conditions. This argument is made rigorous by both empirical and theoretical evidence, demonstrating that the GD trajectory converges to a linear interpolator in a non-asymptotic way. Moreover, the model under quadratic $l$, also known as a depth-$2$ \emph{diagonal linear network}, remains largely unexplored under the EoS regime. Our analysis then sheds some new light on the implicit bias of diagonal linear networks when a larger step-size is employed, enriching the understanding of EoS on more practical models.
[ "Edge of Stability", "gradient descent", "implicit bias" ]
Reject
https://openreview.net/pdf?id=0jJ94VVgzi
https://openreview.net/forum?id=0jJ94VVgzi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gxIB68711W", "dmu5gofk1P", "dUCzSKU7p2", "ZyixX8EQoB", "WrYPnefvE3", "WYDOZtZ9Fd", "VrlaVE2jQI", "Piyvvd1LL0", "Omee8FEtcK", "IO6WouDV5w", "DNF9GipKa9", "6lfgibhQ18" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_review" ], "note_created": [ 1733209527429, 1734007093379, 1733201408547, 1733211840553, 1733202332298, 1729694182932, 1733211047441, 1730605707907, 1733204670874, 1730671425698, 1737523438167, 1731336471389 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Area_Chair_zH9g" ], [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Reviewer_uoKS" ], [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Reviewer_VjG6" ], [ "ICLR.cc/2025/Conference/Submission1154/Authors" ], [ "ICLR.cc/2025/Conference/Submission1154/Reviewer_kgFC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1154/Reviewer_zwfP" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for the feedbacks and would like to address the concern raised below.\\n\\n**Limitation of one-sample and $d=2$.**\\n\\nWe would like to first discuss the first point. Indeed, our current technique does not allow to extend the theoretical analysis to the more general $n$-sample setting. As emphasized in our manuscript, the oscillation of the residual term $r_t$ is very crucial concept in our theoretical analysis. When multiple sample are allowed, the residuals evaluated at different data points become correlated and their mutual influences make the analysis intractable. 
This coupling of residuals introduces challenges that cannot be easily addressed with standard linear algebra tools, preventing us from fully characterizing the implicit bias of GD under the EoS regime in the multi-sample setting.\\n\\nNevertheless, this limitation applies not only to our results but also to many important previous works like [Ahn et al., 2023] and [Song and Yun, 2023], among others. More generally, the inability to fully characterize the implicit bias of GD in more complex and more practical models remains a significant limitation of all the existing EoS results. The primary reason behind this is that EoS typically arises in highly non-linear loss landscapes. These objectives often exhibit intriguing geometric properties, making it difficult to track the trajectory of gradient-based methods. As a result, the majority of EoS analyses have focused on toy models or simplifications of practical models. Therefore, we believe our work still carries important messages to the EoS community despite the limitations shared by many other works of the same kind.\\n\\nThe second limitation is the restriction to the simple $d = 2$ regime. However, we emphasize that this does not diminish the significance of our current analysis, just as the analysis in [Ahn et al., 2023] remains highly insightful despite its assumption that $d=1$. Moreover, in the one-sample case, the empirical behavior and the major characterizations of EoS under both $\\eta\\mu < 1$ and $\\eta\\mu > 1$ are consistent for any $d \\geq 2$. Therefore, while linear algebra limitations prevent us from extending the analysis to higher dimensions, our results are still sufficient to capture the essential properties of EoS convergence for GD in the setting we investigate.\\n\\n**Extension to higher depth.**\\n\\nWe thank the reviewer for raising this fascinating question. 
We believe the depth will not greatly affect the theoretical analysis in the setting of one-sample and $d=2$, given some minor changes. This is because we can always decompose $\\\\beta_t$, $w_+^d$ or $w_-^d$ as the span of vectors $\\\\beta_0$, $\\\\beta_1$ and $\\\\beta^*$, and transform the GD dynamics into simple updates of scalars, which are easy to analyze.\"}", "{\"metareview\": \"This paper investigates EoS phenomena in a simple linear regression problem (quadratic loss) of diagonal linear networks, aiming to challenge prior work suggesting that a subquadratic loss function may be necessary for EoS to occur.\\n\\nWhile the reviewers found some of the insights into EoS under quadratic loss interesting and appreciated the clear presentation, the majority believed that the work was limited by the restriction of the theoretical analysis to the simplified one-sample and low-dimensional setting. Although the discussion with the authors partially addressed specific concerns, such as clarifying assumptions and certain theoretical aspects, the broader limitations persist, as highlighted by multiple reviewers. The authors are encouraged to incorporate the important feedback given by the knowledgeable reviewers.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers upheld initial lukewarm assessments.\"}", "{\"comment\": \"We sincerely thank the reviewer for the positive feedback and insightful comments. It is evident that the reviewer has a strong understanding of the relevant literature and the core contributions of our work. We appreciate this and kindly request consideration for an increased confidence score. Below, we address the reviewer's specific concerns.\\n\\n**Q1. Subquadratic being a sufficient condition.**\\n\\nWe apologize for any ambiguity regarding EoS's condition. 
In previous works such as [Ahn et al., 2023], a subquadratic loss is presented as a sufficient condition for EoS, whereas the main message of our empirical and theoretical analysis is to provide some additional necessary conditions to allow EoS to occur even with quadratic loss functions. We have modified some lines to avoid the imprecision.\\n\\n\\n**Q2. How large is $\\mathfrak{t}-t_0$.** \\n\\nWe appreciate this important question, which we did not elaborate on in the paper for multiple reasons. In brief, the length of the second phase, i.e. the gap between $\\mathfrak{t}$ and $t_0$, also scales as $\\log(1/\\alpha)$. Specifically, in the second phase, the residual $r_t$ oscillates and its envelope remains non-shrinking because $\\alpha_t < 0$ holds, leading to an almost linear increase in $b_t$ according to the update $$b_{t+1} = (1 - x\\eta r_t)^2 \\cdot b_t.$$ The linear increase continues until $b_t$ reaches the scale of $\\Theta(\\eta\\mu - 1)$, at which point $\\alpha_t$ becomes positive and marks the end of the second phase. From Lemma 7, we know $b_{t_0} \\propto \\alpha^{\\Theta(1)}$. Therefore, we can conclude that the linear increase of $b_t$ lasts for $\\log(1/\\alpha)$ iterations before the phase transition at $t=\\mathfrak{t}$. \\n\\nIn the revised manuscript, we have chosen to present this result in Section 3, marked in red, and provide supporting empirical evidence in Figure 4. A detailed theoretical explanation would require additional assumptions in Theorem 2. We think it is more reasonable to keep Theorem 2 with the minimal assumptions needed. Therefore, we believe it is more appropriate to focus on empirical validation in this context.\\n\\n**Q3. Influence of overparameterization in the EoS phenomenon.** \\n\\nWe thank the reviewer for raising this interesting question. Actually, our empirical evidence indicates that the level of overparameterization, i.e. 
the ratio $r = d/n$, does not have any significant or consistent influence on the EoS phenomenon. This is also true for the cases with $r>2$, which are not covered in the current figures.\"}", "{\"comment\": \"**Below are the replies to the other concerns raised in the \\\"Questions\\\" part.**\\n\\n**Q4. Convergence proof under regime $\\mu\\eta \\leq 1$.**\\n\\nWe thank the reviewer for raising this interesting question. In fact, under the regime $\\mu\\eta \\leq 1$, it is possible to establish the convergence whether or not the oscillation occurs. We provide below an intuitive explanation. WLOG, suppose $r_t<0$. When $r_{t+1}>0$, the contraction $|r_{t+2}| < (1 - C_1) |r_t|$ can be established similarly to the proof of Theorem 1, and the non-positivity of $r_{t+2}$ can be established by a proof similar to Lemma 16. It then suffices to consider the case of $r_{t+1} < 0$. The update of $r_t$ follows the iteration $$r_{t+1} = - (1 - \\alpha_t + \\beta_t r_t ).$$\\nWhen $r_t,r_{t+1}<0$, we bound the iterate by plugging in the definitions of $\\alpha_t$ and $\\beta_t$:\\n$$r_{t+1} \\geq (1 - 2\\eta (\\mu + r_t - c_x b_t) )\\cdot r_t = (1 - 2\\eta (1+x^2) \\cdot ( a_t + x^2 \\cdot b_t )) \\cdot r_t.$$\\nHence $|r_{t+1}| < (1 - C_2) \\cdot |r_t|$ is true since there exists a universal lower bound for $a_t,b_t>0$. This gives a more flexible way of proving convergence under the $\\eta\\mu<1$ regime.\\n\\n**Q5. Scale of initialization**\\n\\nWe are very sorry for the imprecise caption in Fig. 2. In fact, we use initialization $w_{+} = w_{-} = \\alpha \\mathbf{1}$ for all the experiments, except for the run in the 4th column of Fig. 2. The purpose of Fig. 2 is to show that the conditions in Claim 1 are necessary for incurring EoS, and relaxing each one of them will cause it to fail. 
We made this exception in the 4th column and used an unbalanced initialization $w_{+} \\neq w_{-}$ because the default initialization will result in $r_t = 0$ for any $t$, which is trivial and is not able to demonstrate the necessity of the condition. We have corrected this in the new version.\\n\\nExtending the current result to other initializations seems to be a rather interesting question. From an empirical perspective, a properly chosen small initialization does not affect the general behavior of EoS, except for the test loss, which critically relies on the choice of $\\alpha$ due to the sparsity-inducing property of diagonal linear networks. For the theoretical analysis in the EoS regime with $\\eta\\mu > 1$ in the same setting of Theorem 2, we believe the scale of initialization, instead of the shape, is important, and it will only affect the length of the second interval and has no effect on the linear convergence, because the convergence rate is solely decided by the quantity $\\mu\\eta - 1$. As for how the scale of initialization affects the convergence, please refer to the newly added Fig. 4 and the answer to Reviewer zwfP, Q2 for a detailed explanation.\\n\\n**Q6. Counterintuitive generalization error**\\n\\nThis is also a very interesting question that is worth exploring in future research, and we briefly outline our idea on this question. Unlike the gradient-flow regime, the (local) minimizer found by GD under EoS is not necessarily flat because the bouncing and oscillation around $x$ clearly destroys the structure. When specialized to the setting of Theorem 2, the Hessian matrix at infinity admits a largest eigenvalue around $2/\\eta$ (from Fig. 2 or the above answer to Q1), and a smallest eigenvalue equal to $0$ by easy computation. The ill-conditioned geometrical property might hurt generalization for the minimizer. The only empirical investigation into this problem is found in [Even et al., 2023], Fig. 
1, where the test loss increases dramatically when the step-size is large enough.\\n\\n[Even et al., 2023] (S)GD over Diagonal Linear Networks: Implicit Bias, Large Stepsizes and Edge of Stability. Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion.\"}", "{\"comment\": \"We thank the reviewer for their insightful feedback. We will try our best to address the concerns raised, particularly the major issue highlighted in the \\\"Weakness\\\" section. Since this concern is complex, we have broken our response into three parts: Q1, Q2, and Q3.\\n\\n**Q1. Definition of EoS**\\n\\nWe are sorry for not clearly stating the definition of EoS in the original manuscript. In fact, throughout the whole article, we use EoS in its *original* sense: the phenomenon that the sharpness $S_t$, i.e. the largest eigenvalue of the objective function's Hessian matrix, crosses the threshold of $2/\\eta$, where $\\eta$ is the step-size of GD. We have demonstrated empirically that EoS does happen with the one-sample Diagonal Linear Network model under both the regimes $\\eta\\mu<1$ and $\\eta\\mu>1$ through the experiments and Figure 2 in Section 3. We have added some new lines in the revised version for better clarity on this point.\\n\\nWe understand the reviewer's concern regarding the relationship between EoS and oscillation, and we would like to clarify why our theoretical analysis focuses on the oscillatory behavior rather than the quantity $2 - \\eta S_t$. To illustrate this, it is useful to compare our approach with that of [Ahn et al., 2023], which carefully characterized the trajectory of $S_t$ in its theoretical analysis. This is because the gap $2 - \\eta S_t$ is a key quantity in its \\\"quasi-static\\\" analysis and plays a crucial role in establishing the oscillating convergence of GD. In contrast, our convergence analysis of EoS on the one-sample DLN model does not rely on controlling the gap $2 - \\eta S_t$. 
Consequently, we do not emphasize this quantity in our proof. The purpose of the theory part is only to establish convergence of GD under EoS, *given* the fact that the occurrence of EoS is already confirmed through the previous empirical section.\\n\\nThis distinction is subtle but important. Furthermore, in [Ahn et al., 2023], the theoretical analysis only confirmed the existence of oscillation and the fact that $\\lim_{t\\to\\infty}\\eta S_t < 2$ by a contradiction argument. However, the justification for when and where $\\eta S_t > 2$ occurs is based on empirical evidence rather than theoretical proof. In this way, it is very similar to our approach.\\n\\nFinally, from our theoretical analysis, it is easy to obtain the identity $\\eta S_t - 2 = 2\\eta( \\mu + r_t - c_x \\cdot b_t) -2 = 2\\eta( \\mu - 1+ r_t - c_x \\cdot b_t) $ using direct computation. As a result of Theorem 2, we can conclude that $\\lim_{t\\to\\infty} \\eta S_t < 2$.\\n\\n**Q2. $\\eta\\mu>1$ as the \\\"true\\\" EoS**\\n\\nWe agree with the reviewer that the setting $\\eta\\mu>1$ represents the more \\\"authentic\\\" EoS regime. Consequently, the convergence analysis in this regime constitutes the primary focus and major contribution of our work. This focus is not merely because EoS and oscillations are consistently observed under $\\eta\\mu>1$, but also because, as discussed in Sections 1 and 4, addressing this regime poses a more significant and challenging task: running GD on our model is equivalent to a parameterized discrete dynamical system, where the parameter varies over time. 
While $\\\\eta\\\\mu<1$ corresponds to such a system where convergence can be established relatively more easily as in existing work like [Chen et al., 2023], it is extremely difficult to show the convergence of GD under regime $\\\\eta\\\\mu>1$ because a crucial phase transition occurs and transforms a system that initially exhibits divergent behavior into one that converges. To the best of our knowledge, this is the first result to rigorously demonstrate such a kind of convergence.\\n \\n\\n**Q3. Oscillation and EoS**\\n\\nWe agree with the reviewer that oscillation is not necessarily related to EoS for all the regimes. Nevertheless, we believe the focus of the oscillation behavior does not hurt the validity of our result. As we explained in Q1, we stick to the classical definition of EoS and showed empirically that EoS does occur under our setting, which does not depend on the sign-changing of residual $r_t$.\\n\\nMoreover, we improve Theorem 2 in the latest version and provide a convergence analysis under the more important regime $\\\\eta\\\\mu > 1$ that **no longer** relies on the sign-changing assumption of $r_t$. This is at a cost of restricting $\\\\eta \\\\mu - 1$ to be upper bounded by a universal constant. We believe the removal of the assumption makes our result more convincing in this perpsective. The revisions are marked in red.\"}", "{\"summary\": \"This paper explores the Edge of Stability (EoS) phenomenon. They study GD and focus on a constant step-size exceeding the typical threshold of $2/L$, As the contributions, they observe that EoS occurs even when the loss $l$ is quadratic under proper conditions while the existing works require a subquadratic $l $. 
Due to the close relationship between the quadratic $l$ and the depth-2 diagonal linear network, the findings provide some explanations for the implicit bias of diagonal linear networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The work seems to be solid.\\n\\nThe paper is written very clearly.\\n\\nThe paper focuses on understanding the convergence of the GD algorithm, breaking through the limitations of traditional smoothness analysis (the step-size exceeds the threshold of $2/L$), and extending previous conclusions to more complex scenarios (from subquadratic growth objective functions to quadratic growth objective functions).\", \"weaknesses\": \"Does the conclusion of the more general $n$-sample case in the paper apply to SGD? Due to the importance of stochastic optimization, I expect to see similar conclusions under stochastic optimization as well.\\n\\nI am sorry I am not very familiar with this topic. I will revise my score based on the comments of other more senior reviewers.\", \"questions\": \"What is the insight behind selecting the specific formula for $\\beta$ as $w_+^2-w_-^2$?\\n\\nDoes the theoretical analysis hold in the more general $n$-sample case, or what are the difficulties in this analysis?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely apologize for the late responses before the discussion deadline. We have carefully considered the reviewers' feedback and would like to express our deepest gratitude for the insightful suggestions. Based on their opinions, we have made the following major revisions to the manuscript:\\n\\n1. As suggested by Reviewer kgFC, we have removed the assumption of sign-changing ($r_t r_{t+1} < 0$) in Theorem 2, which establishes the convergence of GD under EoS and $\\eta\\mu > 1$, as the major focus of our contribution. 
This introduces an additional reliance on a universal constant, with $\\eta\\mu \\in (1, \\min\\{\\frac{3\\sqrt{2}-2}{2},1+1/C\\})$. We believe this makes our results more practical and more self-contained;\\n\\n2. As suggested by Reviewer zwfP, we have included a discussion of the duration of the second phase under $\\eta\\mu > 1$ in Section 3, supported by empirical evidence in Fig. 4.\\n\\nAdditionally, we have corrected typos, grammatical errors, and ambiguous expressions highlighted by the reviewers. All significant revisions are marked in red for clarity.\\n\\nWe would like to remark on the contribution of our work. While the current theoretical analysis is limited by the one-sample restriction and the $d=2$ setting, we believe our work still offers valuable insights into the study of implicit bias under EoS: the current theoretical analyses of EoS are still far from complete, with the majority of existing works relying on strong assumptions or impractical, carefully tailored toy models. Therefore, it is more important to focus on the general mechanisms and insights revealed by both empirical and theoretical analysis. In this regard, we identify two major implications of our work:\\n\\n1. We are among the first to rigorously establish the implicit bias of GD under EoS with a quadratic loss. Moreover, the parameterized linear regression model (i.e., two-layer diagonal linear network) offers a more practical framework compared to other toy models;\\n\\n2. The analysis of the more critical regime $\\eta\\mu>1$ addresses a previously unexplored type of convergence. In contrast to the relatively easier regime $\\eta\\mu<1$, $\\eta\\mu>1$ is more challenging from a discrete dynamical system perspective, as it involves a crucial phase transition that transforms an initially divergent system into a convergent one. 
To the best of our knowledge, this is the first result to rigorously demonstrate such a convergence behavior.\"}", "{\"summary\": \"In this paper, the authors consider the task of finding interpolators for linear regression with quadratic parameterization and study the convergence of constant step-size GD under the large step-size regime. They focus on the non-trivial question of whether a quadratic loss can trigger the Edge of Stability phenomenon. The authors show through both empirical and theoretical aspects that, when certain conditions are satisfied, EoS indeed occurs given a quadratic loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In this paper, the authors consider running GD with a large constant step-size to find linear interpolators that admit quadratic parameterization for the one-sample linear regression task. They theoretically proved the one-sample case and extend the one-sample results by empirically finding conditions in the more general n-sample cases. These theoretical analyses and empirical results are presented with a clear structure.\", \"weaknesses\": \"The major weakness of this submission is that the authors only provide the theoretical analysis and mathematical proof for the one-sample case. The empirical results from the numerical experiments in Section 3 show the convergence of GD under the EoS regime when the loss function is quadratic. The authors only present the theorems characterizing the convergence of GD when the model considered has dimension $d = 2$. The authors should extend these theoretical analyses to the high-dimensional case. The one-sample analysis and two-dimensional proof are not enough to explain the empirical results from the numerical experiments.\", \"questions\": \"The authors argued that the model considered in this paper is the depth 2 diagonal linear network. 
Is it possible to extend the theoretical analysis to models with higher depths?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the comments. We will try our best to explain the concerns raised by the reviewer.\\n\\n**Extension to SGD**\\n\\nIndeed, EoS phenomena can also occur with SGD [Cohen et al., 2022]. However, most works addressing EoS in stochastic methods are purely empirical, and very few theoretical results are known currently. This is due to the inherent difficulty in establishing convergence for GD on most practical models, and even more so for SGD, which introduces additional complexities. As for two-layer diagonal neural networks, empirical evidence also suggests the occurrence of EoS for SGD. Nevertheless, tackling the theoretical analysis remains a very challenging future task.\\n\\n**Intuition behind quadratic parameterization**\\n\\nAs mentioned in our manuscript, the study of two-layer diagonal neural networks (DLN) has emerged as a very important topic in the past several years. This is because (1) DLNs are sufficiently non-linear to capture key properties of more practical deep neural networks, and (2) their simple form allows for a detailed analysis of the GD trajectory. As a result, the two-layer DLN has become one of the most well-studied toy models for implicit bias. The two-layer DLN is equivalent to the linear regression problem where the interpolator/weight vector is forced to admit a quadratic parameterization as $\\\\beta = w_{+}^2 - w_{-}^2$. Therefore, we believe it is a good candidate to study the condition of EoS and the convergence of GD under the EoS regime. \\n\\n**Difficulty in theoretical extension to multi-sample setting**\\n\\nWe thank the reviewer for raising this important question. 
Indeed, we believe that extending the analysis to the $n$-sample case is a highly challenging task\\u2014not only for our model but also for related studies, such as [Ahn et al., 2023], [Song and Yun, 2023], among others. More generally, the inability to fully characterize the implicit bias of GD in more complex and more practical models remains a significant limitation of all the existing EoS results. The primary reason behind this is that EoS typically arises in highly non-linear loss landscapes. These objectives often exhibit intriguing geometric properties, making it difficult to track the trajectory of gradient-based methods. As a result, the majority of EoS analyses focus on toy models or simplifications of practical models.\\n\\nWhen specializing to our model or the models considered in [Ahn et al., 2023] and [Song and Yun, 2023], the primary challenge in extending the analysis to the $n$-sample setting lies in the difficulty of effectively tracking the residual values across different sample points. As oscillation analysis plays a crucial role in these analyses, the coupling between residuals at different samples in the non-degenerate cases prevents us from obtaining meaningful results. The residuals influence each other through shared parameters, creating a highly complicated landscape that is difficult to decouple or simplify. Therefore, tackling the interaction between sample points will be a crucial step in future explorations.\"}", "{\"summary\": \"The paper studies the implicit bias of GD on a quadratic loss and linear models with diagonal linear network parameterization under the EoS regime. The paper first empirically shows that when the data is multi-dimensional ($d\\\\geq 2$), EoS can occur for a quadratic loss, while prior works on EoS suggest that a subquadratic loss is necessary for EoS to happen. The experiments show that different choices of step size lead to different oscillation types, from the GF regime to the EoS regime to chaos to divergence. 
The paper then theoretically studies the parameter convergence (directly related to generalization) in a sparse-solution 2-D diagonal linear network setting and provides non-asymptotic rates in various settings. The results show that in the EoS regime, when the step size is not too large ($\\\\eta\\\\mu<1$), smaller initialization ($\\\\alpha$) yields better generalization, while when the step size is large ($\\\\eta\\\\mu>1$) there will be an error floor that cannot vanish as $\\\\alpha\\\\to 0$.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is overall well-written and easy to follow. It introduces the settings clearly and provides sufficient illustrations to help present results in different conditions/regimes. The main findings are organized clearly. The proof overview is intuitive for grasping the essence of the proof technique.\\n\\n2. The paper studies implicit bias in the EoS regime for diagonal linear networks in multiple dimensions, which is a significant step forward as most of the prior works on EoS only deal with minimalist examples where the data is assumed to be 1-d, and it is interesting to see that the empirical occurrence of EoS depends on the data dimension (non-degeneracy). Moreover, the diagonal linear network setting is closely related to the GD implicit bias literature, so it can potentially be connected to wider works.
For example, if we minimize $f(x)=\\\\frac{1}{2}x^2$ with GD, the sharpness is $1$ and the residual $r_t=x_t$ is flipping its sign if we choose step size $\\\\eta\\\\in(1,2)$, but the loss is actually monotonically decreasing and the sharpness is always below $2/\\\\eta$.\\nIn the proof overview (Section 5), it is discussed that $|r_{t+2}|\\\\leq (1-\\\\alpha)^2|r_t|$ is possible, but there is no statement comparing $|r_{t+1}|$ and $|r_t|$, and there is no statement on the relationship between step size $\\\\eta$ and sharpness $L$, so we are not sure if EoS is happening or not. In my understanding, it might be the case that only the $\\\\eta\\\\mu>1$ case corresponds to the EoS that people refer to. \\n\\nMinor typos (not exactly weaknesses): \\n\\n1. In Figure 4 the x-axis has no label, which I guess is $\\\\alpha$.\", \"questions\": \"1. For convergence in the regime $\\\\mu\\\\eta<1$, the current analysis assumes period-2 flip sign in residual, and the result essentially states ${\\\\ell(w_{2t})}_{t\\\\geq 0}$ converges. I am wondering if the analysis can be extended to deal with more general oscillations without specific periods, as they are closer to the EoS phenomenon in practice [1].\\n\\n2. The paper assumes a specific scaling initialization (lines 218 and 238) for subsequent analysis and the authors claim this initialization is to align with the literature on diagonal linear networks. Regarding this initialization, I have a few questions: \\n(a) Why in the theory $w_+$ and $w_-$ are of the same scale while in experiment $w_+$ has larger scale than $w_-$? (b) How important is this initialization for ensuring convergence and EoS? (c) How easy is the analysis to be extended to more general or random (e.g. Gaussian) initializations? Just for quick comparison, some existing works can handle general initializations as long as the initial weights satisfy certain conditions [2,3] while they mainly focus on the 1-d case. \\n\\n3. 
The result in the $\\\\eta\\\\mu>1$ regime (Figure 4, left) suggests that smaller step size allows smaller generalization errors. This seems to contradict a common belief that a large learning rate is beneficial to generalization as it forces the minimizer to be in a flat region. Could authors provide some insights about this difference?\\n\\n[1] Cohen, Jeremy M., Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. \\\"Gradient descent on neural networks typically occurs at the edge of stability.\\\" arXiv preprint arXiv:2103.00065 (2021).\\n\\n[2] Ahn, Kwangjun, S\\u00e9bastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, and Yi Zhang. \\\"Learning threshold neurons via edge of stability.\\\" Advances in Neural Information Processing Systems (2023).\\n\\n[3] Wang, Yuqing, Zhenghao Xu, Tuo Zhao, and Molei Tao. \\\"Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult.\\\" arXiv preprint arXiv:2310.17087 (2023).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors analyze the Edge-of-Stability (EoS) phenomenon for gradient descent on linear regression with the loss function $\\\\ell(\\\\langle x, \\\\beta\\\\rangle - y)$, where $(x, y)$ is the datapoint with $x\\\\in \\\\mathbb{R}^d$ and $y\\\\in \\\\mathbb{R}$. EoS phenomenon is observed when GD is run with a stepsize $\\\\eta > \\\\frac{2}{L}$ where $L$ is the smoothness constant of the loss. In the EoS regime, GD oscillates rapidly but still converges to the minima, under certain conditions.\\n\\nExisting works (Ma et al 2022, Ahn et al 2022, Song & Yun 2023) show that for sub-quadratic $\\\\ell$, GD can enter the EoS regime. 
This paper shows that for a particular quadratic parameterization, namely diagonal neural networks, GD can enter the EoS regime even for quadratic $\\\\ell$. The parameterization is $\\\\beta = \\\\beta_{w} = w_{+}^2 - w_{-}^2$ and $w= [w_{+}^\\\\top, w_{-}^\\\\top]^\\\\top$, with gradient updates on $w$.\\n\\n\\nFrom Claim 1, for quadratic $\\\\ell(s) = s^2/4$, they obtain EoS for GD under a single-sample regime for $d\\\\geq 2, y\\\\neq 0$ and $x$ non-degenerate. Further, for $d=2, x= (1,x')$ and sparse realizable model $\\\\beta^\\\\star = (\\\\mu, 0), y= \\\\mu$, with initialization scale $\\\\alpha$, they obtain two separate regimes even for EoS. \\n\\nIn the first regime, from Theorem 1, for $\\\\mu \\\\eta <1$ and constant $\\\\alpha$, GD results in both the traditional Gradient Flow regime (GF), without oscillations, and the EoS regime. Here, EoS occurs with damped oscillations. Further, the final solution of GD has generalization error dependent on initialization $\\\\alpha$.\\n\\nIn the second regime, from Theorem 2, $\\\\eta \\\\mu \\\\in (1,2)$, with initialization, $x'$, and generalization error dependent on $\\\\eta\\\\mu$, GD in EoS might initially have diverging amplitude of oscillations, however, it eventually dies down and after a point converges at a linear rate.\\n\\n\\nEmpirically, the authors show that their model requires overparameterization, as for $d=n$, EoS doesn't occur but for $d>n$, it does. For their case of single sample $n=1$, justifying their choice of $d=2$. \\n\\n\\n**References**--\\n- (Ma et al 2022) Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. Arxiv.\\n- (Ahn et al 2023) Learning threshold neurons via the \\u201cedge of stability\\u201d. NeurIPS.\\n- (Song & Yun 2023) Trajectory Alignment: Understanding the Edge of\\nStability Phenomenon via Bifurcation Theory. 
NeurIPS.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"**Novel insights**: There are several novel and non-trivial insights -- i) quadratic losses can lead to EoS, ii) the diagonal NN model fulfils this requirement, iii) EoS can have regimes with oscillations with increasing magnitude but still converge later, iv) overparameterization might be necessary for EoS on quadratic losses.\", \"**EoS on diagonal neural networks**: Diagonal neural networks serve as a simple to analyze but still expressive model. Using these models has 3 important advantages -- i) as these are real networks, their claimed phenomenon occurs not just on some theoretically well-crafted model, ii) diagonal NNs remain a good testbed for theoretical analysis of complicated deep learning phenomena, iii) they have provided a proof for EoS on diagonal neural networks, which was missing from existing works (Even et al 2023), and can now be used to verify claimed empirical phenomenon (Even et al 2023). Note that the proof of EoS on diagonal neural networks is highly non-trivial especially for the $\\\\mu\\\\eta > 1$ case.\", \"**Presentation**: The paper is easy to read inspite of the heavy notation. The key insights are clearly explained and the figures are very helpful in understanding them. Figure 6, in particular, is a good example to intuitively explain the proof sketch.\", \"**References**--\", \"(Even et al 2023) (S)GD over Diagonal Linear Networks:\", \"Implicit Bias, Large Stepsizes and Edge of Stability. NeurIPS.\"], \"weaknesses\": [\"**Is subquadratic growth \\\"necessary\\\" or \\\"sufficient\\\" for observing EoS**? It might be beneficial to state the exact reference for the subquadratic condition. Note that the subquadratic condition was introduced in (Ma et al 2022), which the authors have not mentioned. Further, I'm not sure if (Ahn et al 2023) actually state that subquadratic growth is \\\"necessary\\\" for EoS. 
Assumptions A2 and A3 in (Ahn et al 2023) show that subquadratic growth is sufficient for EoS. Similarly, the results in Section 4 in (Ma et al 2022), and assumptions 2.4 and 4.2 in (Song & Yun, 2023) are sufficient for EoS. If I'm missing some details and the subquadratic condition is indeed necessary for EoS, can the authors specify the exact theorem, assumption, or argument for this?\", \"**How large is $\\\\mathfrak{t} - t_0$** ? In Theorem 2, there are $3$ phases for GD. In the first phase, it has not started oscillations, which lasts until $t_0$. From Lemma 7, $t_0 \\\\geq \\\\Omega_{\\\\mu, \\\\eta}(\\\\log(1/\\\\alpha^2))$, but only for $\\\\mu\\\\eta \\\\in (0,2)$, which includes $\\\\mu\\\\eta \\\\in (1, \\\\frac{3\\\\sqrt{2} - 2}{2})$. In the second phase, which lasts from $t_0$ to $\\\\mathfrak{t}$, the oscillations finally start decreasing in magnitude. Lemmas 14 and 15 establish that there exists such a $\\\\mathfrak{t}$, but not how large it is. As we see linear convergence after $\\\\mathfrak{t}$, how long we need to wait for it becomes an important question. If the authors cannot establish it theoretically, they might argue empirically that $\\\\mathfrak{t}$ is not very large.\", \"Typo in Line 1537: $|\\\\lambda_{1,2}| < 1$.\", \"**References** --\", \"(Ma et al 2022) Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. Arxiv.\", \"(Ahn et al 2023) Learning threshold neurons via the \\u201cedge of stability\\u201d. NeurIPS.\", \"(Song & Yun 2023) Trajectory Alignment: Understanding the Edge of\", \"Stability Phenomenon via Bifurcation Theory. NeurIPS.\"], \"questions\": [\"How does the level of overparameterization affect the EoS phenomenon? This might be an easy extension of Figure 5, by checking if increasing the level of overparameterization, i.e., the ratio $\\\\frac{d}{n}$, changes the EoS phenomenon. 
Figure 5 contains $\\\\frac{d}{n} = 2, 1$, but another experiment on a larger value of $\\\\frac{d}{n}$ might show if large overparameterization helps in EoS. Note that this is not strictly required but might be interesting.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
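As an illustrative aside, the one-sample dynamics debated throughout these reviews are easy to simulate: GD on the quadratically parameterized regression beta = w_plus^2 - w_minus^2 with loss l(s) = s^2/4 produces sign-flipping residuals that nonetheless converge. The constants below (the sample x, target mu, step size eta, initialization scale alpha) are illustrative choices for a sketch, not values taken from the paper.

```python
import numpy as np

# One-sample regression with the quadratic parameterization beta = w_plus**2 - w_minus**2
# and loss l(s) = s**2 / 4, as discussed in the reviews above. All constants here are
# illustrative assumptions, not values from the paper.
x = np.array([1.0, 0.5])        # non-degenerate sample with d = 2
mu = 1.0                        # sparse target beta* = (mu, 0), so y = mu
y = mu
eta = 0.8                       # step size; here eta * mu < 1
alpha = 0.1                     # initialization scale
w_plus = np.full(2, alpha)
w_minus = np.full(2, alpha)

residuals = []
for _ in range(200):
    r = x @ (w_plus**2 - w_minus**2) - y   # residual; the loss is r**2 / 4
    residuals.append(r)
    # chain rule: d(r**2/4)/dw_plus = r * x * w_plus and d(r**2/4)/dw_minus = -r * x * w_minus
    w_plus = w_plus - eta * r * x * w_plus
    w_minus = w_minus + eta * r * x * w_minus

sign_flips = sum(a * b < 0 for a, b in zip(residuals, residuals[1:]))
final_loss = residuals[-1] ** 2 / 4
print(f"final loss: {final_loss:.2e}, residual sign flips: {sign_flips}")
```

With these particular constants the residual flips sign repeatedly while its magnitude damps out, matching the damped-oscillation convergence the discussion attributes to the eta * mu < 1 regime; whether the trajectory oscillates at all depends on the choice of eta and alpha.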
0iscEAo2xB
Comparing Targeting Strategies for Maximizing Social Welfare with Limited Resources
[ "Vibhhu Sharma", "Bryan Wilder" ]
Machine learning is increasingly used to select which individuals receive limited-resource interventions in domains such as human services, education, development, and more. However, it is often not apparent what the right quantity is for models to predict. In particular, policymakers rarely have access to data from a randomized controlled trial (RCT) that would enable accurate estimates of treatment effects -- which individuals would benefit more from the intervention. Observational data is more likely to be available, creating a substantial risk of bias in treatment effect estimates. Practitioners instead commonly use a technique termed "risk-based targeting" where the model is just used to predict each individual's status quo outcome (an easier, non-causal task). Those with higher predicted risk are offered treatment. There is currently almost no empirical evidence to inform which choices lead to the most effective machine learning-informed targeting strategies in social domains. In this work, we use data from 5 real-world RCTs in a variety of domains to empirically assess such choices. We find that when treatment effects can be estimated reliably (which we simulate by using direct outcome observations), treatment effect based targeting substantially outperforms risk-based targeting, even when treatment effect estimates are biased. Moreover, these results hold even when the policymaker has strong normative preferences for assisting higher-risk individuals. However, when treatment effects must be predicted from features alone (as is always the case in practice), performance can degrade significantly due to limited data making it difficult to learn accurate mappings from features to treatment effects. Our results suggest treatment effect targeting has significant potential benefits, but realizing these benefits requires careful attention to model training and validation.
[ "social welfare", "causality", "treatment", "treatment effect", "targeting", "risk", "policymaking" ]
Accept (Poster)
https://openreview.net/pdf?id=0iscEAo2xB
https://openreview.net/forum?id=0iscEAo2xB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ufOeTxk1JM", "tKTr6GDIgL", "sGsW1K69vR", "pZxGufgDgl", "hk60eZl38S", "hKVwsVmxsw", "gZmbx3ynXt", "drEXv47gfw", "ZWPP9TKp9r", "YmF8q6hGYG", "SwDKZFHrOI", "MFdzgWLTnJ", "GZki6ahp9a", "F4pKP3vuq2", "Cbv8nLwP1O", "8y9qD1kxNO", "2rJw2RfiUM" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision" ], "note_created": [ 1732166021707, 1732583438195, 1732167916520, 1730501531416, 1730582403740, 1732166148408, 1732167364585, 1732560683746, 1734690819276, 1732166866388, 1730606523252, 1732219849117, 1732560653664, 1732562963871, 1730715991366, 1732168057721, 1737523876856 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_1WAp" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_WHnh" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_XFvr" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Area_Chair_2ZvC" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_1WAp" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_XFvr" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_4SA7" ], [ "ICLR.cc/2025/Conference/Submission7944/Reviewer_4SA7" ], [ "ICLR.cc/2025/Conference/Submission7944/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We thank you for your detailed review and constructive feedback. 
We address the weaknesses mentioned in the review in this comment and the questions in the subsequent comment. We have also edited the paper to incorporate the feedback we received and the edits are marked in red for easy distinction.\", \"weaknesses\": \"1) Multiple reviewers pointed out that the interpretation of Figure 1 is ambiguous and suggested various alternative stories consistent with the plot. We agree with the reviewers and thank them for raising this point -- all of the interpretations that the reviewers mention do find some support in Fig 1 due to the wide confidence intervals. However, our main conclusions are driven by Fig 2, where we are able to draw much more statistically precise conclusions. We have edited the text to frame Fig 1 just as suggesting hypotheses about the effects of risk-based targeting, which Fig 2 addresses with more precision/statistical significance. Here is the revised text discussing Fig 1:\\n\\nThe estimated relationship between baseline risk and treatment effect is variable across domains. In most domains, the point estimate shows a general upward trend, indicating that individuals at greater risk benefit more (on average) from treatment. However, in the NSW domain, the point estimate is essentially flat. In addition, the confidence intervals are wide for all domains and there is very little statistically significant evidence in favor of high-risk individuals benefiting more. Wide confidence intervals reflect that there is significant variance in the pseudo-outcomes estimated for different individuals at the same level of baseline risk. That is, there is a great deal of variance in our estimated treatment effects that is not explained by baseline risk. From these results, we form two hypotheses. First, that risk-based targeting should, in most domains, perform better than a random allocation, since the point estimates generally show larger average effects at higher baseline risk. 
Second, that there is room to improve on risk-based targeting via strategies that leverage some of the substantial variance in treatment effects that is unexplained by baseline risk. The next section provides more statistically precise tests of these hypotheses by comparing the welfare associated with each targeting policy (a single number, which can be quantified more precisely than the entire curve shown in Figure 1). \\n\\nWe have also edited the introduction and conclusion of the paper to reflect this framing.\\n\\nRegarding the specific concern you raise that \\\"another story consistent with the data is that effects are generally not so heterogeneous as conventional wisdom from the econometrics literature has it\\\": This alternative is ruled out by Fig 2, since we find (with statistical significance) that targeting based on heterogeneous treatment effects results in much higher welfare gains than random targeting, which would not be the case if treatment effects were largely homogeneous. Specifically, this is the comparison between the blue and green lines at the k = 0 point of the left-most column of Fig 2.\\n\\n\\n2) This connects to some of the points mentioned in Answer 1. The trend lines are more going up than down, so we do see some of the intuition for risk based targeting reflected in the data. To acknowledge this, we added the following sentence to the discussion: \\\"We find that, in most domains, risk-based targeting results in higher welfare than a uniformly random allocation, confirming some of the intuition behind its widespread use by practitioners.\\u201d However, the confidence intervals are wide, reflecting that at any level of predicted risk, there is still a lot of variance in the individual-level outcomes, which a better targeting strategy may be able to leverage. 
This is confirmed in Fig 2, where we find statistically significant evidence that targeting based on estimated treatment effects is typically much more effective (comparison between the blue and orange lines at the k = 0 point of the left-most column of Fig 2). \\n\\n\\n3) Thanks for this suggestion. We have changed the y axis of Figure 2 to give the normalized utility. Regarding Figure 1 vs 2, we answer below.\\n\\n\\n4) Unfortunately none of the RCT datasets we use were accompanied by a matching observational cohort (a very unusual situation in practice). If confounding in a given domain is worse than our simulations, this could give an advantage to risk-based targeting. To provide a strong comparison, our simulation does reflect a very adversarial mode of confounding, where individuals select into treatment depending on the actual value of their potential outcome. This can be seen as a strengthening of a typical mechanism for confounding, where individuals are selected for treatment based on unobservable characteristics that are merely correlated with their potential outcomes. We edited the paper to explain this motivation.\"}", "{\"comment\": \"Thank you for your response.\\n\\nThe reasoning and supporting references in your rebuttal have sufficiently addressed all my questions. It would be helpful, however, to include a sensitivity analysis to demonstrate the efficiency of this asymptotic normal approximation estimator when applied to datasets of varying sizes, particularly those targeting the limited population of interest.\\n\\nI acknowledge that the ethical implications are challenging to address comprehensively, but your additional reasoning in the conclusion is valuable. 
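The Fig 2 comparison discussed in this exchange can be illustrated with a small synthetic simulation. Everything below (population size, outcome model, effect distribution) is invented for illustration and is not the paper's data or method; it simply shows that when treatment effects vary independently of baseline risk, ranking by the true effect yields a much larger utilitarian welfare gain than ranking by risk or allocating at random.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 2_000  # population size and treatment budget (20%, as in the paper's main setting)

# Baseline (untreated) outcome and a heterogeneous treatment effect that is
# independent of baseline risk -- an assumed, purely illustrative data model.
y0 = rng.normal(0.0, 1.0, n)       # status-quo outcome; lower value = higher risk
tau = rng.exponential(0.5, n)      # positive, heterogeneous treatment effects

def welfare_gain(order):
    """Total utilitarian gain from treating the k individuals ranked first."""
    return tau[order[:k]].sum()

risk_gain = welfare_gain(np.argsort(y0))        # treat highest-risk (lowest y0) first
cate_gain = welfare_gain(np.argsort(-tau))      # treat largest true effect first
random_gain = welfare_gain(rng.permutation(n))  # uniformly random allocation

print(f"risk-based: {risk_gain:.0f}, CATE-based: {cate_gain:.0f}, random: {random_gain:.0f}")
```

In this particular setup the effect-based gain comes out at roughly 2.5 times the risk-based gain, while risk-based and random targeting are indistinguishable because the assumed effects carry no information about risk.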
While not essential, it might enhance the discussion to provide a rough intuition on implementing consequentialist and non-consequentialist perspectives, such as a mixed strategy approach.\\n\\nWhile your study may not address every question of interest to practitioners or policymakers, it nonetheless provides valuable insights into targeting strategies.\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Questions:\\n\\n1) Regarding **\\\"I have seen...\\\"**\\n\\n[1] addresses dependencies caused by hard constraints in a finite population. As our focus is not on the complications of exactly enforcing hard constraints, our results can be seen as evaluating a policy which treats the desired fraction of individuals in-expectation but makes the actual treatment assignments independently (potentially over- or under-utilizing the budget in any specific time frame). \\n\\n2) Regarding **\\\"Shouldn't the results...\\\"**\", \"we_provide_results_for_3_budget_levels_in_the_paper\": \"20% of the total population (main paper), 30% of the total population (appendix A.2), 40% of the total population (appendix A.2). The trends for the latter 2 appear to match the trend we see with a budget of 20%.\\n\\n3) Regarding **\\\"Since you studied...\\\"**\", \"answer\": \"Risk-based targeting appears to perform better when there is the combination of a significant amount of confounding in the treatment effect estimates along with very strong egalitarian preferences from the policymaker. However, either factor on its own is generally insufficient.\\n\\n4) Regarding **\\\"Could you elaborate...\\\"**\\n \\nFor the TUP dataset we use expenditure because it is one of the main outcomes selected by the authors of the original RCT [3] and the treatment has a positive average effect on it. 
\\n\\nRegarding whether the policymaker should aim to improve individuals\\u2019 average economic circumstances or whether they should target based on desert (where individuals with low expenditures may deserve aid more): this is a normative question outside the scope of our paper. We assess tradeoffs in a consequentialist framework where policies are evaluated based on the extent to which they improve outcomes, not whether they align with a notion of desert.\", \"we_have_edited_the_conclusion_to_acknowledge_this\": \"\\u201cOur investigation of egalitarian preferences assumes an essentially consequentialist perspective, where the policymaker's goal is to improve individuals' welfare as defined by their outcome. If policymakers have non-consequentialist preferences, for example viewing the assistance of those in need as an inherent good regardless of its effects, targeting directly on a measure of vulnerability may be more appropriate.\\u201d\\n\\n*[3] Banerjee, Abhijit, Esther Duflo, Rachel Glennerster, and Cynthia Kinnan. 2015. \\\"The Miracle of Microfinance? Evidence from a Randomized Evaluation.\\\" American Economic Journal: Applied Economics, 7 (1): 22\\u201353.*\\n\\nWe thank you again for your feedback.\"}", "{\"summary\": \"This paper studies a common risk-based approach to treatment allocation applied to a diverse range of treatments with randomized controlled trial data available. The authors find that allocating treatment according to risk (ie, bad outcomes under the no-treat baseline) produces worse outcomes than allocating according to conditional average treatment effect estimates, even when CATEs are biased or the decision maker has a preference for treating high-risk individuals.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This is a simple, well-executed paper making a very important point. 
As the authors note, most algorithmic decision making approaches are based on risk or some manipulation of risk in pursuit of fairness goals. This paper demonstrates the problem with that approach across a wide range of interventions.\\n\\nThe authors anticipate the key criticisms of the policy-based approach (that RCT data is hard to find and so CATEs may be biased, and that equity preferences might provide a non-utilitarian reason to prefer risk-based targeting), and provide compelling evidence that even under significant confounding or equity preferences one should still prefer allocations based on CATEs.\", \"weaknesses\": \"The weakest part of this paper is probably the discussion of Fig 1. The authors cite a \\\"unique trend for each dataset\\\" (in CATE conditional on risk score) as evidence that the risk-based policy is flawed. Visually, to me, the trends look pretty similar: they look like there is basically no relationship between CATE and risk for most RCTs studied. This is still evidence for the risk-based policy being flawed (so it doesn't invalidate the claim being made), but I think it's a more accurate way to describe the results. I think Fig 2 is more compelling than Fig 1 anyway, so it probably makes sense to lead with that result for the most impactful paper.\", \"questions\": \"Fig. 2 is great, almost telling the story of the paper without the audience needing to read anything else. What would make it even better is replacing \\\"Percentage of data removed\\\" with something like \\\"Higher values mean more confounding in CATE estimates\\\". 
Currently readers have to read the methodology to understand the x axis, but a better label would mean they could understand the x axis without knowing exactly how the confounding was introduced, and read the methodology if they wanted more details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper compares ''risk-based targeting'' and ''treatment effect-based targeting'' across five datasets from RCTs in different domains. In risk-based targeting, the planner targets individuals based on their baseline risk, which is the expected outcome in the absence of treatment. This approach, commonly used by practitioners, does not account for causal effects. In contrast, treatment effect-based targeting involves first estimating the Conditional Average Treatment Effect (CATE) and then prioritizing individuals accordingly, often with the help of machine learning. A key concern is that these methods may introduce bias in the presence of unobserved confounding. The paper aims to provide empirically grounded guidance for navigating this tradeoff.\\n\\nFirst, the paper investigates the extent to which baseline risk serves as an effective proxy for targeting. To do this, treatment effects are estimated at different levels of baseline risk, revealing a surprisingly complex and not necessarily monotonic relationship between treatment effect and baseline risk. Second, the paper examines the potential additional gains from targeting based on treatment effect. Both utilitarian and Nash welfare are used to compare these mechanisms, showing that targeting based on estimated treatment effect can be up to three times more effective in nearly all cases. 
This remains true even when confounding is introduced into the CATE estimation and when the policymaker's preference for risk-based targeting is encoded in the social welfare function.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is an important issue, and I\\u2019ve personally never found a clear guideline on how best to approach it. I found the study original and of good quality.\\n\\nThe paper does a commendable job of incorporating datasets from diverse domains. \\n\\nIt also simulates the impact of unobserved confounders and considers the possibility of policymaker bias toward risk-based targeting.\", \"weaknesses\": \"My main concern is the reliability of comparing the estimated welfare from risk-based versus treatment effect-based targeting. The issue with the latter approach, particularly in resource-scarce settings, is that it first estimates CATE, then selects individuals with the highest estimated CATE given the available budget, and finally estimates the welfare of treating these individuals using the same estimated CATE. Even if the CATE is unbiased, this procedure can lead to severely biased welfare estimates in resource-constrained settings.\\n\\nIn Figure 1, the paper provides treatment effect estimates for each baseline risk percentile, along with confidence intervals. What exactly are we looking for here? If the goal is to rule out a monotonic relationship between baseline risk and treatment effect, none of the figures offer statistically significant evidence to support this.\\n\\nI also think the results should intuitively depend on the budget but this is not discussed in the paper. For instance, if the budget is very small, treatment effect-based allocation can be even more effective in exploiting heterogeneity. \\n\\nOverall, the writing is strong. 
I noticed the following minor typos: page 2 (potentially-based -> potentially-biased), page 4 (policy -> policy), page 5, second equation, page 7, Equation 4.5 is referenced which does not exist\", \"questions\": \"Feel free to further discuss the first two weaknesses mentioned above.\\n\\nI have seen papers, such as [1], that discuss the complexities of estimating allocation welfare under budget constraints. The issue is that these constraints introduce dependencies between individual estimations, complicating the evaluation process. How is this relevant to your setting?\\n\\nShouldn't the results also depend on how many individuals are treated in each scenario? My intuition (supported by studies like [2]) suggests that the relative budget plays a significant role in determining which type of allocation mechanism is optimal.\\n\\nSince you studied a wide range of settings, did you find any recommendations that vary across these contexts? For instance, are there conditions under which risk-based targeting performs better?\\n\\nCould you elaborate on the choice of outcome of interest in the TUP dataset? It seems there\\u2019s an implicit assumption that individuals who show a larger increase in expenditure are more deserving of intervention. Why should this be the case? In contrast, a risk-based approach might assume that those who would have lower expenditures in the absence of interventions are more deserving. 
Could this actually be a more appropriate assumption?\\n\\n[1] Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation\\n[2] Allocation Requires Prediction Only if Inequality Is Low\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer 1\", \"comment\": \"In this comment, we respond to the questions posed:\", \"the_key_explanation_is_that_the_trend_lines_figure_1_only_reflect_one_possible_source_of_variation_in_treatment_effects\": \"variation based on baseline risk. Even when the curve is flat, as for NSW, there may be other sources of variation that targeting based on heterogeneous treatment effects can leverage. This is reflected in the wide confidence intervals in Figure 1, which arise because there is substantial variation in the estimated treatment effects for individuals with the same level of baseline risk. Similarly, for the TUP dataset, the discrepancy between Figure 1 and 2 just means that the majority of heterogeneous treatment effects are driven by characteristics orthogonal to baseline risk, even if a portion do correlate with baseline risk.\", \"regarding_the_code\": \"we will simplify/document it as suggested, for release along with the paper.\\n\\nWe thank you again for your feedback.\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank you for your detailed review and constructive feedback. We address the weaknesses mentioned in the review in this comment and the questions in the subsequent comment. We have also edited the paper to incorporate the feedback we received and the edits are marked in red for easy distinction.\", \"weaknesses\": \"1) **\\\"My main concern is...\\\"** \\n\\nTo address this concern, we employ a sample splitting approach, in line with the literature on doubly robust CATE estimation and previous work on policy optimization/comparison. 
In particular, our strategy is equivalent to the cross-validation strategy used in the experiments section of Athey and Wager [1] to evaluate learned policies on RCT data. The theoretical basis for this strategy is that, with randomized data, the estimated welfare gains should be unbiased even conditional on the learned outcome model used for targeting (which may be biased) since the propensity score is guaranteed to be well-specified. \\nTo further address this concern though, we added an additional robustness check in Appendix B, beyond what previous work has done. In this analysis, instead of using a cross-validation approach, we hold out half the dataset only for evaluation so that the estimator used for evaluation was trained on entirely disjoint data from the targeting policy. This substantially reduces the amount of data available for either the learning or evaluation tasks, and so is only possible with the two largest datasets we use. However, we find generally consistent results with our main analysis, where causal targeting performs better than risk-based, even under moderate levels of confounding or inequality-aversion. \\n[1] Athey and Wager. Policy Learning with Observational Data. Econometrica 2021.\\n\\n2) **\\\"In Figure 1...\\\"**:\", \"answer\": \"Multiple reviewers pointed out that the interpretation of Figure 1 is ambiguous and suggested various alternative stories consistent with the plot. We agree with the reviewers and thank them for raising this point -- all of the interpretations that the reviewers mention do find some support in Fig 1 due to the wide confidence intervals. However, our main conclusions are driven by Fig 2, where we are able to draw much more statistically precise conclusions. We have edited the text to frame Fig 1 just as suggesting hypotheses about the effects of risk based targeting, which Fig 2 addresses with more precision/statistical significance. 
Here is the revised text discussing Fig 1:\\n\\nThe estimated relationship between baseline risk and treatment effect is variable across domains. In most domains, the point estimate shows a general upward trend, indicating that individuals at greater risk benefit more (on average) from treatment. However, in the NSW domain, the point estimate is essentially flat. In addition, the confidence intervals are wide for all domains and there is very little statistically significant evidence in favor of high-risk individuals benefiting more. Wide confidence intervals reflect that there is significant variance in the pseudo-outcomes estimated for different individuals at the same level of baseline risk. That is, there is a great deal of variance in our estimated treatment effects that is not explained by baseline risk. From these results, we form two hypotheses. First, that risk-based targeting should, in most domains, perform better than a random allocation, since the point estimates generally show larger average effects at higher baseline risk. Second, that there is room to improve on risk-based targeting via strategies that leverage some of the substantial variance in treatment effects that is unexplained by baseline risk. The next section provides more statistically precise tests of these hypotheses by comparing the welfare associated with each targeting policy (a single number, which can be quantified more precisely than the entire curve shown in Figure 1). \\n\\nWe have also edited the introduction and conclusion of the paper to reflect this framing.\\n\\n3) **\\\"I also think...\\\"**\", \"we_provide_results_for_3_budget_levels_in_the_paper\": \"20% of the total population (main paper), 30% of the total population (appendix A.2), 40% of the total population (appendix A.2). The trends for the latter 2 appear to match the trend we see with a budget of 20%.\\n\\nWe thank you for bringing the typographic errors to our notice. 
These have been corrected in the updated manuscript.\"}", "{\"comment\": \"With the discussion period ending soon, we'd like to check if the above revisions to the paper address your concerns. Particularly since much of the discussion of this paper centers on the interpretation/framing of the results, we're hoping to make the most of the opportunity to improve the paper!\"}", "{\"metareview\": \"The paper compares two targeting strategies for allocating limited resources in social welfare applications: risk-based targeting and treatment-effect-based targeting. Using data from five real-world randomized controlled trials (RCTs), the authors find that targeting based on treatment effect estimates can outperform risk-based targeting, even when treatment effect estimates are biased or when policymakers prioritize assisting high-risk individuals. The study highlights the potential benefits of applying causal inference methods in policymaking, emphasizing that treatment-effect-based approaches yield greater social welfare gains across various scenarios.\\n\\nOverall, the reviewers acknowledge that the paper addresses an important research question, employs a rigorous approach, and has significant potential impact. The main concerns revolve around the discussion of the results (e.g., Figure 1 and the new results added in rebuttal, as highlighted by XFvr). Considering the potential impact of the research and its generally strong execution, I am inclined to recommend accepting the paper. 
If accepted, please carefully address the reviewer comments when preparing the final version.\", \"additional_comments_on_reviewer_discussion\": \"There were concerns around the discussion of the results (e.g., Figure 1 and the new results added in rebuttal, as highlighted by XFvr), and the author responses have helped clarify the issues.\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We thank you for your detailed review and constructive feedback.\\n\\n**Regarding Question: \\\"Modeling Real-World Confounding...\\\":**\\n\\nTo provide a strong comparison, we simulate a very adversarial mode of confounding, where individuals select into treatment depending on the actual value of their potential outcome. This can be seen as a strengthening of a typical mechanism for confounding, where individuals are selected for treatment based on unobservable characteristics that are merely correlated with their potential outcomes. We edited the paper to explain this motivation. \\n\\n**Regarding Question: \\\"Ethical Implications...\\\":**\\n\\nThis is an important question. We address it in the paper by evaluating policies under two families of egalitarian welfare functions which place greater weight on the welfare of individuals with greater risk. This provides guidance for policymakers based on the degree of preferences for fairness which is appropriate in a given domain. We have also acknowledged that there could be additional ethical issues by adding the following to the conclusion: \\u201cOur investigation of egalitarian preferences assumes an essentially consequentialist perspective, where the policymaker's goal is to improve individuals' welfare as defined by their outcome. 
If policymakers have non-consequentialist preferences, for example viewing the assistance of those in need as an inherent good regardless of its effects, targeting directly on a measure of vulnerability may be more appropriate.\\u201d \\n\\n**Regarding Question: \\\"Data Quality...\\\":**\\n\\nThe acupuncture dataset contains data obtained from a study published in the British Medical Journal (a top medical publication): \\u201cAcupuncture for chronic headache in primary care: large, pragmatic, randomized trial\\u201d, Andrew J Vickers, Rebecca W Rees, Catherine E Zollman, Rob McCarney, Claire M Smith, Nadia Ellis, Peter Fisher and Robbert Van Haselen, BMJ, 2004 (https://pmc.ncbi.nlm.nih.gov/articles/PMC381326/ which has been cited 448 times). It is available at https://www.causeweb.org/tshs/acupuncture/ and was contributed by Dr. Steven C. Grambow, Director, Duke Clinical Research Training Program and Assistant Professor, Duke University.\\n\\n**Regarding Question: \\\"Confidence Interval Estimation...\\\":**\\n\\nThe estimator is based on an asymptotic normal approximation, which is indeed computationally efficient. Note that, as we emphasize now in the paper, we are not attempting to draw statistically significant conclusions from Figure 1, so exact CIs are less important.\\n\\nWe thank you again for your feedback.\"}", "{\"summary\": \"The paper explores targeting strategies for allocating limited resources to maximize social welfare in areas like\\nhuman services, healthcare, and education. Specifically, it compares risk-based targeting, which prioritizes\\nhigh-risk individuals, with treatment-effect-based targeting, which uses machine learning models to estimate who\\nmight benefit the most from interventions. Using five real-world randomized controlled trials (RCTs) across diverse\\ndomains, the paper concludes that even biased estimates of treatment effects generally outperform risk-based\\ntargeting. 
This finding suggests that, in addition to the widespread reliance on risk-based approaches, policymakers\\ncould incorporate treatment effect estimation when feasible.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Scope: The work\\u2019s target is evident in its cross-domain empirical evaluation, making it applicable to multiple policy\\nareas. The paper introduces innovative use of biased treatment effect estimates to assess targeting efficacy.\", \"quality\": \"Methodologically, the paper is solid, employing credible RCT data and a robust approach to measuring\\ntreatment effect heterogeneity. The use of doubly robust estimation and varying social welfare functions adds\\ndepth to the analysis.\", \"weaknesses\": \"Grammatical errors: there are notable grammatical errors in spelling and following the formatting of ICLR\\ninstructions.\", \"scope_of_treatment_effect_estimates\": \"The reliance on simulated confounding could be more thoroughly justified;\\nreal-world application requires consideration of domain-specific biases that could vary across different policy areas\", \"potential_ethical_implications\": \"Since the work could influence resource allocation in sensitive domains, further\\ndiscussion on ethical considerations regarding inequality and bias would strengthen its impact and application.\", \"questions\": \"Modeling Real-World Confounding: Could the authors expand on how their confounding\\napproach aligns with real-world biases encountered in observational data? A discussion on potential limitations in modeling confounding factors might aid readers in interpreting results for specific applications.\", \"ethical_implications\": \"How might the authors' conclusions address ethical concerns,\\nparticularly in terms of balancing fairness (people at the most risk) with effectiveness\\nwhen treatment effect targeting benefits some groups more than others? 
Because the paper concludes that a treatment-effect-based method is preferable to a risk-based method, it raises\\nthe natural question of whether individuals at the highest risk should be prioritized, or if those with the greatest\\npotential treatment outcome should be targeted instead.\", \"data_quality\": \"Could the authors provide more information on the reliability and source of the\\nAcupuncture Dataset?\", \"confidence_interval_estimation\": \"In Equation (5), a biased estimator is used. Could the authors\\njustify why this choice was made? Was it primarily for computational efficiency, or are there\\nother reasons for using a biased estimator in this context?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thank you for your response.\\n\\nI reviewed Appendix B and appreciate the robustness analysis you conducted. However, I am concerned about the stark contrast between Figure 2(d) and Figure 8(a) (or Figure 10). I understand that training the DR estimator on a dedicated split while choosing targeting decisions on a separate split effectively reduces the sample size by half. However, I cannot understand why this would lead to such dramatic changes in the results for the NSW dataset.\\n\\nI also appreciate the reframing around Figure 1. I agree that this figure can only serve as a suggestive illustration and that the main argument should center on Figure 2.\\n\\nOverall, I believe the language of the paper has improved during the rebuttal process. That said, I still find the conclusions less definitive than I expected, and the contrast between the results in the main text and Appendix B remains somewhat concerning. 
I wish I could assign a score of 5.5, but since that's not an option, I will raise my score to 6 :)\"}", "{\"comment\": \"With the discussion period ending soon, we'd like to check if the above revisions to the paper address your concerns. Particularly since much of the discussion of this paper centers on the interpretation/framing of the results, we're hoping to make the most of the opportunity to improve the paper!\"}", "{\"comment\": \"They partially did, yes. Thanks for the helpful response! In particular, your point about the interpretation of Figure 1 is fair. There is no contradiction between the two figures.\\n\\nThe source of my misunderstanding is also a remaining issue with Figure 1. The confidence intervals could be wide because there isn't enough data to make the claim, or because treatment effect is weakly correlated with predicted risk. I now understand that you're trying to argue the latter not the former. \\n\\nI'd say it's still worth clearing up the confusion. A more helpful alternative to Figure 1 might directly compare two curves: (a) treatment effect vs predicted risk, (b) treatment effect vs predicted treatment effect. Curve (a) is essentially what you have in Figure 1, while curve (b) speaks to the predictability of treatment effects via your causal inference method. Curve (b) should look like a diagonal line with confidence intervals around it. Putting these two together should ideally make it visually obvious that targeting based on predicted treatment effects is superior to risk based targeting without even going through the utility calculation. Does this make sense?\\n\\nPlease check some of the red text in your revision for spelling. 
There are some mistakes, e.g.:\\n\\\"On some questions, are results are subject to greater uncertain\\\"\\n\\nIn any event, this is a good paper that is headed for acceptance, which I support!\"}", "{\"summary\": \"The authors compare the utility of targeting interventions based on estimated treatment effects with the utility of targeting based on predicted risk. The former is the method of choice from a utilitarian perspective. But practitioners and policy makers often choose the latter due to its simplicity and in cases where the normative goal is to assist those in greatest need.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1)\\n\\nThe comparison of these two targeting strategies is an important problem. I'm glad to see that the authors study this question. Despite its significance, there hasn't been much reliable insight so far. I'd love to see more work in this direction! I'm weighing this strongly in my evaluation.\\n\\n(2)\\n\\nThe paper is very clearly written. The authors narrate a compelling story about the advantages of targeting based on treatment effect estimates, even if they are biased.\", \"weaknesses\": \"(1)\\n\\nAs compelling as the story is, I find the results less than conclusive. Looking at Figure 1, it looks like the confidence intervals around treatment effects are generally strongly overlapping across the entire x-axis of baseline risk. The one exception is the STAR dataset, where the lowest and highest point estimate have barely non-overlapping confidence intervals. As a result, another story consistent with the data is that effects are generally not so heterogeneous as conventional wisdom from the econometrics literature has it.\\n\\n\\n(2)\\n\\nThe results in Figure 1 are actually a fair bit more favorable towards risk-based targeting than the introduction of the paper had me believe. Treatment effects generally increase with risk. 
Targeting the 80th to 90th percentile of risk generally seems to capture high treatment effects across all datasets. So, another story could be that we should exclude the most extreme values of risk from targeting, but other than that risk-based targeting sort of works.\\n\\n\\n(3)\\n\\nI found it rather confusing to have unnormalized utility values on the y-axis. After all the sample size is rather arbitrary and does not correspond to the population-level utility obtained if the policy maker were to implement the given approach. Along the same lines, I found it difficult to reconcile Figure 1 and Figure 2. See question below.\\n\\n\\n(4)\\n\\nIt would've been great to include datasets with real world confounding rather than the simulated confounding. My experience is that existing CATE estimation methods don't do very well in non-RCT settings. Might there be an advantage to risk-based targeting in non-RCT data?\", \"suggestion\": \"It seems to me that the story is much less certain than the abstract and introduction make it sound. I would therefore appreciate it if you could indicate a greater level of epistemic uncertainty in the writing throughout. I don't think this would hurt the paper at all. As is, though, I'd worry that your writing suggests the question is essentially closed conclusively which would actually discourage additional work in this direction.\", \"questions\": \"Visually, it seems difficult to reconcile the large difference between, say, Figure 2 top left panel and Figure 1 top left panel. It seems that on the TUP dataset, high risk is close to maximizing treatment effect, the highest values being around percentile 80. However, Figure 2 shows utility 15000 vs 5000 for treatment effect based vs risk based. Can you explain?\\n\\nSimilarly, treatment effects for NSW seem to be largely constant around 1000, but Figure 2, second row left panel shows a massive advantage for treatment effect based targeting. 
This seems to be about a factor 7 (1.75 vs 0.25). Where do these large effect differences come from given that the treatment effect curve is essentially constant? None of the treatment effects seem to differ by more than a factor 2.\\n\\nI tried to figure this out by looking through the code, but I couldn't find code that generated Figure 2. So, I don't actually know where it came from and what it shows. Maybe I missed it. On that note, it would be very helpful to clean up and document the code for the final version. As is, it's very hard to follow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer 4\", \"comment\": \"Thank you very much for the strengths discussed in your review! Regarding the weakness you point out: multiple reviewers pointed out that the interpretation of Figure 1 is ambiguous and suggest various alternative stories consistent with the plot. We agree with the reviewers and thank them for raising this point -- all of the interpretations that the reviewers mention do find some support in Fig 1 due to the wide confidence intervals. However, our main conclusions are driven by Fig 2, where we are able to draw much more statistically precise conclusions. We have edited the text to frame Fig 1 just as suggesting hypotheses about the effects of risk based targeting, which Fig 2 addresses with more precision/statistical significance. Here is the revised text discussing Fig 1:\\n\\nThe estimated relationship between baseline risk and treatment effect is variable across domains. In most domains, the point estimate shows a general upward trend, indicating that individuals at greater risk benefit more (on average) from treatment. However, in the NSW domain, the point estimate is essentially flat. 
In addition, the confidence intervals are wide for all domains and there is very little statistically significant evidence in favor of high-risk individuals benefiting more. Wide confidence intervals reflect that there is significant variance in the pseudo-outcomes estimated for different individuals at the same level of baseline risk. That is, there is a great deal of variance in our estimated treatment effects that is not explained by baseline risk. From these results, we form two hypotheses. First, that risk-based targeting should, in most domains, perform better than a random allocation, since the point estimates generally show larger average effects at higher baseline risk. Second, that there is room to improve on risk-based targeting via strategies that leverage some of the substantial variance in treatment effects that is unexplained by baseline risk. The next section provides more statistically precise tests of these hypotheses by comparing the welfare associated with each targeting policy (a single number, which can be quantified more precisely than the entire curve shown in Figure 1). \\n\\nWe have also edited the introduction and conclusion of the paper to reflect this framing. We also thank you for your suggestion to improve Figure 2: we have incorporated it in the new manuscript.\"}
0iXfS9Smqf
Learning through experience:Episodic memory representation for cognitive agents
[ "Shweta Singh", "Shraddha Seshadri" ]
As the demand for intelligent robots and cognitive agents rises, the ability to retain and utilize past experiences through episodic memory has become crucial, especially for social companion robots that rely on previous interactions for task execution. To address this, we introduce Episodic Memory for Cognitive Agents (EMCA), a novel framework that advances knowledge representation by integrating real-world interactions. EMCA enables agents to adapt to complex environments by learning from tasks, interacting with humans, and processing multimodal data—such as speech, vision, and non-verbal cues—without pre-training on specific scenarios. EMCA models episodic memory through a graph-based structure, allowing for incremental storage and retrieval of experiences. Each interaction or event enriches the memory graph, supporting continuous learning and adaptation without extensive retraining. This human-like memory formation optimizes the agent’s ability to retrieve relevant information for tasks like localization, planning, and reasoning based on prior experiences. Unlike conventional models relying on temporal markers or recurrent patterns, EMCA encodes data like human memory, allowing reasoning across diverse scenarios regardless of temporal patterns. The framework dynamically builds a memory graph with semantic and temporal connections based on the agent’s experiences, promoting flexible temporal reasoning. It also introduces mechanisms for clustering new memories and a dynamic retrieval policy that adjusts based on context or query type, ensuring robustness even in unpredictable scenarios. Empirical tests show EMCA adapts effectively to real-world data, offering reliability and flexibility in dynamic environments.
[ "Episodic Memory", "Bio inspired Robot learning", "incremental Memory structures" ]
Reject
https://openreview.net/pdf?id=0iXfS9Smqf
https://openreview.net/forum?id=0iXfS9Smqf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTo4MfYlbn", "zIaPjvwGHi", "y6xzFqnxKD", "y2jqyXeSml", "xaz2RathCZ", "xHoLKPJHCK", "wt77RvfRtI", "vAaiecezct", "uXSHVX1nVV", "tBSkFFBXcp", "qYz1Dy76ad", "pHcc7J3wOr", "p9J03R3R2i", "nmNk2xWsOz", "nYLEaiVMNg", "mTY6kNZc9f", "l56hBMEUHZ", "kkXrW5hF0a", "kj9vhKjoF9", "kGDC6QU8yD", "juiOeJWXfl", "jBa66Qxnk0", "hgBUaK3TJp", "h0rnnSJxgn", "gf0oegReMw", "fgZkHzr43N", "f7Oe5jPnpm", "d4SCp9tI99", "cC2DDypoZW", "bgpH4X3RNC", "bLnza5Gubx", "ZYBjfQbEFI", "XrUZ759h5l", "XW2euNqXur", "XBI6dbNPCv", "WBZ2Uu5pPx", "VaUZ5kHzzb", "UGDg7HuqdH", "TJ3lUFLIzj", "T5OZCGvXHw", "SySeGfFN1a", "SlDsYlYu9z", "SA45Kahzww", "RHktWELr8D", "QQMK6yTiA1", "QKDoQPq7Oi", "Pq1uNqKjn6", "Po1UL97eZi", "PDUEOI7jmA", "P2oFlEXPG3", "ORWlNbGFVh", "O1QECZu1ul", "N3Y4eJ1xqk", "MkVb2gwwgw", "MZEhR1iQWz", "M9Dp6aN0S2", "LpbeXprzaU", "L6LrTGzD2H", "K2Yj0zgrXo", "K1dq98A4Ns", "HGZANbswhO", "GWFbWttHgG", "GThwSqCuJ2", "FGH6Kk9jvm", "Er2n4Mf6FP", "Eqf0bwyoYY", "Enu87bbrWc", "DEZUl3C0wO", "C4kKTNSyke", "BXjKHOzt21", "AuwqFlmzYq", "9y7QyJ7rYQ", "9II4mDnT40", "9EFk5zNb8P", "8xDWWFH3dq", "8ozZnNu2dH", "8IIIdioWAe", "5VTzE9H5TN", "5M8WugOqxv", "4nyhymUhWy", "3xqzHu97zi", "3ucodsh4TT", "3tW3coCwhO", "3jN8aXBJD4", "2L6locsNse", "1Z47xMegAo", "19HQiXyyAH", "0lVMw7PKcf" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", 
"official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730577470801, 1732126617743, 1732122993325, 1732129186300, 1732127531851, 1732129147064, 1732124393162, 1732126487730, 1732127799679, 1732122335995, 1732123108163, 1732452355149, 1734564223834, 1732167086593, 1732126783258, 1732121397053, 1732126503925, 1732127276987, 1732128828016, 1732126933058, 1732128188088, 1732805517185, 1732123393994, 1732119448095, 1732121699540, 1732121290182, 1732125134523, 1732121724127, 1732124155186, 1732121818391, 1732805712243, 1737524098748, 1732128241761, 1732167875318, 1730669981771, 1732119683772, 1732127047943, 1732128421648, 1732124039969, 1732119094958, 1732127932111, 1732129491092, 1732124905925, 1732125000760, 1732125612467, 1732122797480, 1732128729963, 1732128539044, 1732177394005, 1732129014887, 1732127319001, 1732122905352, 1732123316476, 1732127872063, 1732117378764, 1732639808091, 1732127668208, 1732127433225, 1732120972239, 1732120490876, 1732122410183, 1732125681611, 1732120702702, 1732464335967, 
1732121921930, 1732124760403, 1732120834973, 1730051899843, 1732128299265, 1732126826131, 1732121102906, 1732123257317, 1732122622079, 1732128368232, 1732124296043, 1732122844632, 1732476917035, 1732127563313, 1732123975727, 1732122520808, 1732129303605, 1732171660426, 1730704406742, 1732122728914, 1732127168699, 1732124703317, 1732126717875, 1732122206553 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_1ixu" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Area_Chair_bKmB" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_jSod" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_1ixu" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_YT1b" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_YT1b" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_jSod" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Reviewer_zT5v" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ], [ "ICLR.cc/2025/Conference/Submission11037/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents Episodic Memory for Cognitive Agents (EMCA), a novel framework that enables AI systems to retain and utilize past experiences through a graph-based memory structure. 
The key innovation lies in its ability to: 1) Process multimodal data (vision, speech, non-verbal cues) without requiring pre-training; 2) Dynamically build and update memory representations through a graph structure with semantic and temporal connections; 3) Adapt to complex environments through continuous learning from interactions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"originality: graph-based episodic memory structure, multimodal processing without pre-training, dynamic clustering\", \"quality\": \"comprehensive testing on multiple datasets, benchmarking against existing methods, systematic component analysis in ablation study\", \"clarity\": \"clear problem formulation, good visual aids explaining complex concepts\", \"significance\": \"addresses crucial challenges in cognitive AI, social robot applications, potential impact on memory assistance systems\", \"key_innovations\": \"1) removes pre-training requirements\\n2) enables continuous learning\\n3) provides real-time processing capability\", \"weaknesses\": \"1. Scalability limitations: Does the graph structure grow exponentially with experiences? What are the computational costs?\\n2. More details about the Big Bang Theory dataset are needed.\\n3. More implementation details about the method in section 5.0.2 should be provided.\\n4. Forgetting is one of the key problems in memory systems - how does the paper assess and handle memory retention and decay?\\n5. What is the real-time performance?\\n6. The paper tries to claim episodic memory for agents and robots, but robotic interaction with the environment is different from and much harder than agent interaction in a virtual environment. It is better to make a clear definition and scope. For instance, in L391, \\\"robot's episodic memory\\\" should be \\\"agent's episodic memory\\\".\\n7. Can authors provide the code and dataset for evaluation?\\n8. Repeated paragraph: L054-L062\\n9. 
Subtitle formatting issue in line 480\", \"questions\": \"1. Will the code and datasets be made publicly available?\\n2. How does the system maintain temporal consistency without explicit time markers? Can you provide quantitative results comparing temporal reasoning accuracy with and without explicit timestamps?\\n3. What are the specific model architectures and hyperparameters used?\\n4. What are the computational costs for memory retrieval at different scales?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Forgetting is one of the key problems in memory systems - how does the paper assess and handle memory retention and decay?\", \"comment\": \"The paper addresses memory retention and decay by assigning weights to different sections of memory based on how frequently they are traversed, i.e., frequency-based memory decay. The weight (0-3) indicates the removal of nodes that were accessed up to 3 times, while (0-5) refers to the removal of nodes that were accessed up to 5 times only. Weighting memory by access frequency allows the mechanism to evaluate the importance of nodes in the memory graph. 
An analysis was conducted using various thresholds to assess the impact of this approach, with results presented in the table below. Accuracy is recall accuracy (how well the agent gives the correct answer):\\n\\n| **Model** | **What** | **When** | **Where** | **All** |\\n|-----------------------------------------|----------|----------|-----------|---------|\\n| **Full Model** | 75 | 80 | 76 | 78 |\\n| **After Pruning (Weight 0-3)** | 73 | 78 | 75 | 75 |\\n| **After Pruning (Weight 0-5)** | 55 | 50 | 55 | 51 |\\n\\nThis analysis demonstrates the effects of memory pruning at various weight levels on performance, helping to understand how retention and decay mechanisms impact the overall effectiveness of the model. Other methods worth exploring include temporal decay.\"}", "{\"comment\": \"**After reading the full paper, I am not sure what the actual task is. What commands does the master ask? I understand the approach, but not what task it is specifically solving. Is the output the location of a specific memory? Or is it a free-form text answer?**\\n#### The tasks addressed by this framework are episodic question-answering (QA), activity recommendation, and experience memory localisation, which can aid navigation and policy-making in reinforcement learning (RL) agents. The output of the system depends on the specific application, but it can include either the location of a relevant memory or a free-form text answer based on the context of the query. This model can also be expanded to support further operations such as navigation.\\n#### In the case of episodic QA, the agent retrieves specific memories from its episodic memory based on a given query. Here the answer will be free-form text. For recommendation tasks, the system identifies relevant episodes based on user preferences or past interactions. 
This answer will also be free-form text. Additionally, for RL agents, this framework can be used to derive goal locations intrinsically from past interactions (details will be provided in the revised version), guiding the agent\\u2019s decision-making process. Localising a memory will extract that region of memory.\\nWe will also add a section on experience memory localisation, where the retrieved memory is the location of a specific portion of memory.\\n#### The system can execute commands such as performing an activity (\\\"Go and do the registration\\\"), reminding the user of past conversations (\\\"What did Person X tell him when he met that person for the first time?\\\"), locating lost items (\\\"Where did I keep the keys?\\\"), and recommending the next course of action based on context (\\\"What is the next action I should perform?\\\").\", \"title\": \"After reading the full paper, I am not sure what the actual task is. What commands does the master ask? I understand the approach, but not what task it is specifically solving. Is the output the location of a specific memory? Or is it a free-form text answer?\"}", "{\"title\": \"Eq (17) sim seems undefined\", \"comment\": \"This is also cosine similarity, which we will define properly in the updated version.\"}", "{\"comment\": \"Section 3 describes the method. It remains however very much on the level of HOW, rather than providing many insights in the WHY (reasoning behind design choices, consequences of design choices) and it isn't always clear what is a core part of the method and what is an implementation detail. Please provide more explanation for the key design decisions, discussing the rationale behind these choices and their potential implications. 
Additionally, please clearly distinguish between core methodological components and implementation details.\\nImplementation Details\\nFor event extraction within dialogues, we utilized a Transformer-based BART model initialized with pre-trained weights to effectively extract and summarize events within contextual boundaries. The architecture options included BARTBASE, featuring a 6-layer encoder-decoder with approximately 140 million parameters, and BARTLARGE, which has a 12-layer encoder-decoder and 400 million parameters. Both configurations use a hidden size of 1024 and a feed-forward filter size of 4096, with a fixed dropout rate of 0.1 across layers. We employed the Fairseq toolkit for training, using the **Adam optimizer** with a warmup strategy. Learning rates were set to $4 \\\\times 10^{-5}$ for BARTBASE and $2 \\\\times 10^{-5}$ for BARTLARGE, with a maximum batch token limit of 1100 tokens. Contrastive objectives were supported by a margin coefficient of 1, and hyperparameters for coherence and sub-summary objectives were tuned using a validation set. Our method demonstrated substantial performance improvements compared to publicly available models trained on datasets like **SAMSUM** and **DialogueSUM**.\\n\\nFor visual data processing, we used a **Vision Transformer (ViT)** as the vision encoder, specifically adapted for video frame analysis from the **MSR-VTT dataset**. The encoder processed $224 \\\\times 224$ video frames, segmented into 16-sized patches and embedded into a 512-dimensional latent space. The 12-layer encoder, with a width of 768, was equipped with **LayerScale** (initialized at 0.1) for training stability. Advanced regularization techniques, including stochastic depth with a configurable \\\\texttt{drop\\\\_path\\\\_rate}, were applied. 
The encoder was based on the \\\"eva-clip-b-16\\\" model, which proved effective in extracting detailed spatial and temporal features essential for multimodal tasks.\\nFor models based on **LLaMA** that integrate vision and dialogue for character and place tagging, a multimodal configuration was used. **ViT** processed visual data, while **LLaMA** managed dialogue input. Training included cross-entropy loss for character tagging and contrastive loss for image-text alignment, incorporating **episodic memory** for QA tasks. The training process leveraged the **AdamW optimizer**, a dropout rate of 0.2, and a **cosine annealing scheduler** for efficient learning.\\n\\n**Temporal tagging** was optimized with key hyperparameters for best performance: a maximum sequence length of 128, a batch size of 32, and a learning rate of $5 \\\\times 10^{-5}$. A dropout rate of 0.1 was used to mitigate overfitting, and a weight decay of 0.01 improved generalization. The training process spanned 10 epochs to ensure sufficient learning while preventing overfitting.\\n\\nFor extracting dialogue from audio, the **Whisper-large model** was employed. This model was utilized for its robustness in transcribing and converting spoken content into text for further processing.\\n\\nFor text detection, the **TextSnake** model was trained on the **SCUT-CTW1500** dataset using **SGD with Momentum** as the optimizer. The architecture combined **ResNet** and **FPN\\\\_UNet**, configured with a training batch size of 64 and 8 workers for data loading. The validation batch size was set to 1, with 4 workers and persistent workers enabled. Training was conducted over 200 epochs with validation checks every 10 epochs.\\n\\nThe **QA system** was built using a **BERT** model fine-tuned on concatenated datasets, including **SQuAD**, **Wikipedia**, and **Reddit**, to improve contextual comprehension. 
The hyperparameters included a learning rate of $1 \\\\times 10^{-5}$, a maximum sequence length of 512, and a document stride of 512. The training batch size was 8, with gradient accumulation steps of 2 over 2 epochs. Mixed-precision training was used with `fp16` at the **O2 optimization level** for better efficiency. The final model outputs were stored in the `bart-squadv2` directory, without intermediate model saving.\"}", "{\"title\": \"l. 212: what are the implications/limitations resulting from using a simple metric like cosine similarity?\", \"comment\": \"In the case of place and character clusters, we use cosine similarity as a simple metric because lexical similarity is sufficient for these entities. Specifically, for places and characters, the primary task is to measure how closely related they are based on their textual representations, and cosine similarity is effective in capturing this relationship in a high-dimensional space.\"}", "{\"title\": \"What does a successful or unsuccessful example look like? I would recommend looking at [2] or [3] above to see how to discuss dataset creation in such a setting.\", \"comment\": \"A successful example refers to correctly answering the question and extracting the appropriate experience from memory. An unsuccessful attempt, on the other hand, occurs when the question is not answered correctly. In localisation, a successful attempt refers to retrieving the correct memory location; a failed attempt is retrieving the wrong location. Our model will always attempt to generate an answer based on the experiences collected, but in cases where the answer is wrong, it is considered an unsuccessful attempt. 
We will provide detailed explanations and visual aids for this part in the appendix of the paper..\"}", "{\"title\": \"More implementation details about the method in section 5.0.2 should be provided.\", \"comment\": \"For event extraction within dialogues, we utilized a Transformer-based BART model initialized with pre-trained weights to effectively extract and summarize events within contextual boundaries. The architecture options included BARTBASE, featuring a 6-layer encoder-decoder with approximately 140 million parameters, and BARTLARGE, which has a 12-layer encoder-decoder and 400 million parameters. Both configurations use a hidden size of 1024 and a feed-forward filter size of 4096, with a fixed dropout rate of 0.1 across layers. We employed the Fairseq toolkit for training, using the **Adam optimizer** with a warmup strategy. Learning rates were set to $4 \\\\times 10^{-5}$ for BARTBASE and $2 \\\\times 10^{-5}$ for BARTLARGE, with a maximum batch token limit of 1100 tokens. Contrastive objectives were supported by a margin coefficient of 1, and hyperparameters for coherence and sub-summary objectives were tuned using a validation set. Our method demonstrated substantial performance improvements compared to publicly available models trained on datasets like **SAMSUM** and **DialogueSUM**.\\n\\nFor visual data processing, we used a **Vision Transformer (ViT)** as the vision encoder, specifically adapted for video frame analysis from the **MSR-VTT dataset**. The encoder processed $224 \\\\times 224$ video frames, segmented into 16-sized patches and embedded into a 512-dimensional latent space. The 12-layer encoder, with a width of 768, was equipped with **LayerScale** (initialized at 0.1) for training stability. Advanced regularization techniques, including stochastic depth with a configurable \\\\texttt{drop\\\\_path\\\\_rate}, were applied. 
The encoder was based on the \\\"eva-clip-b-16\\\" model, which proved effective in extracting detailed spatial and temporal features essential for multimodal tasks.\\nFor models based on **LLaMA** that integrate vision and dialogue for character and place tagging, a multimodal configuration was used. **ViT** processed visual data, while **LLaMA** managed dialogue input. Training included cross-entropy loss for character tagging and contrastive loss for image-text alignment, incorporating **episodic memory** for QA tasks. The training process leveraged the **AdamW optimizer**, a dropout rate of 0.2, and a **cosine annealing scheduler** for efficient learning.\\n\\n**Temporal tagging** was optimized with key hyperparameters for best performance: a maximum sequence length of 128, a batch size of 32, and a learning rate of $5 \\\\times 10^{-5}$. A dropout rate of 0.1 was used to mitigate overfitting, and a weight decay of 0.01 improved generalization. The training process spanned 10 epochs to ensure sufficient learning while preventing overfitting.\\n\\nFor extracting dialogue from audio, the **Whisper-large model** was employed. This model was utilized for its robustness in transcribing and converting spoken content into text for further processing.\\n\\nFor text detection, the **TextSnake** model was trained on the **SCUT-CTW1500** dataset using **SGD with Momentum** as the optimizer. The architecture combined **ResNet** and **FPN\\\\_UNet**, configured with a training batch size of 64 and 8 workers for data loading. The validation batch size was set to 1, with 4 workers and persistent workers enabled. Training was conducted over 200 epochs with validation checks every 10 epochs.\\n\\nThe **QA system** was built using a **BERT** model fine-tuned on concatenated datasets, including **SQuAD**, **Wikipedia**, and **Reddit**, to improve contextual comprehension. 
The hyperparameters included a learning rate of $1 \\\\times 10^{-5}$, a maximum sequence length of 512, and a document stride of 512. The training batch size was 8, with gradient accumulation steps of 2 over 2 epochs. Mixed-precision training was used with `fp16` at the **O2 optimization level** for better efficiency. The final model outputs were stored in the `bart-squadv2` directory, without intermediate model saving.\"}", "{\"comment\": \"We agree that memory requirements are an important metric for comparing methods. Our approach inherently requires less storage compared to a traditional knowledge graph because it focuses on summarizing key events, extracting representative keyframes, and storing only relevant information tied to specific queries.\\n\\nFor example, in our episodic memory system, adding three episodes of data might introduce approximately 3 new nodes to the memory graph. In contrast, a knowledge graph storing similar information could require 30-50 nodes to represent the same data, as it would store additional redundant details and relationships explicitly.\\n\\nCurrently, we are working on implementing a forgetting mechanism and other memory-saving techniques to further optimize memory usage. Due to these ongoing developments, we did not include a detailed evaluation of memory requirements in this paper but plan to address this aspect in future work. By prioritizing efficient memory management, we aim to ensure scalability while maintaining the system's performance and relevance.\"}", "{\"comment\": \"### Relation between \\\\( T_{\\\\text{audio}} \\\\) and \\\\( T_{\\\\text{combined}} \\\\)\\n\\n\\\\( T_{\\\\text{audio}} \\\\) represents the combination of dialogue and acoustics, whereas \\\\( T_{\\\\text{combined}} \\\\) is the result of merging \\\\( T_{\\\\text{audio}} \\\\) and visuals within the same time window. 
Therefore, \\\\( T_{\\\\text{combined}} \\\\) is not identical to \\\\( T_{\\\\text{audio}} \\\\); instead, it incorporates both audio and visual modalities to provide a comprehensive representation of the event at a specific time.\\n\\nMathematically, we express this relationship as:\\n\\n$$ T_{\\\\text{audio}}(t) = D(t) + C(t) $$\\n\\nwhere \\\\( D(t) \\\\) is the dialogue at time \\\\( t \\\\) and \\\\( C(t) \\\\) represents the acoustics at time \\\\( t \\\\).\", \"the_combined_representation_is\": \"$$ T_{\\\\text{combined}}(t) = T_{\\\\text{audio}}(t) + V(t) $$\\n\\nwhere \\\\( V(t) \\\\) represents the visual data at time \\\\( t \\\\), and \\\\( T_{\\\\text{combined}} \\\\) merges audio and visual features for a richer representation.\", \"title\": \"Relation between Taudio and Tcombined: Is Taudio equivalent to Tcombined, or is there another relationship between these variables?\"}", "{\"comment\": \"We propose a comprehensive dataset framework designed to evaluate and enhance episodic memory systems in artificial agents. This framework integrates multiple datasets, including a custom set of episodic questions based on the TV series *The Big Bang Theory*, spanning all nine seasons (181 episodes). The aim is to assess memory recall and narrative understanding in complex scenarios.\\n\\nWe introduce the *Agent Dataset*, a 10-episode time-series dataset created in Unity3D, where a virtual agent performs tasks and interacts with characters in realistic environments, simulating the role of companion robots. 
This dataset emphasizes the importance of multi-sensory inputs and task execution, challenging the agent to process and integrate information from *dialogues* and *visual cues* to maintain task order and achieve context-driven objectives.\\n\\nAdditionally, we adapted the **Ego4D dataset**, restructuring its activity sequences into simulated chronological episodes to address the original absence of time-series data\\u2014portraying an agent performing a series of activities over 30 days. We also combined group activity videos designed for active speaker recognition. This transformation enables episodic queries such as \\\"Where did I place the agricultural tool on the last day of farming?\\\", enhancing the ability to localize and retrieve temporal experiences effectively.\\n\\nTogether with the **PerLTQA** [du2024perltqapersonallongtermmemory] and **LLQA** [dolan-brockett-2005-automatically] datasets, which test essential episodic memory dimensions\\u2014**\\\"what\\\"** (context), **\\\"when\\\"** (time), and **\\\"where\\\"** (place)\\u2014this framework forms a robust benchmark for evaluating advanced episodic memory capabilities in AI systems.\\n\\n**Data Annotation**: The data was carefully annotated to tag scene information and identify characters in dialogues, ensuring that the model could recognize character presence and understand related events. This included explicitly tagging scene details for location identification and differentiating characters present in the scene versus those mentioned. Events within dialogues were also meticulously annotated to capture key details, facilitating effective memory representation beyond simple summaries. Capturing these essential details is crucial for episodic memory tasks, as it allows the agent to recall past experiences accurately. Each episode was annotated with 10 *what*, *when*, and *where* questions.\", \"title\": \"There is no discussion on how the Big Bang Theory and Agent datasets were constructed. 
This on its own seems like a major contribution. I would recommend the authors remove much of the superfluous equations and newlines (and move that into the appendix), and put more of an emphasis on this dataset component.\"}", "{\"comment\": \"Dear \\\"Reviewers\\\",\\n\\nWe sincerely thank you for your thoughtful and constructive feedback on our manuscript. Your suggestions have been valuable in improving the quality and clarity of our work. We have uploaded a revised manuscript with the changes mentioned and have also added a detailed appendix section addressing all your concerns.\"}", "{\"metareview\": \"This paper addresses the problem of storing and retrieving multimodal past experiences with a graph-based memory structure. The memory graph can be dynamically updated and enables temporal reasoning.\\n\\nReviewers agree that the datasets and results are interesting. However, reviewers raised significant concerns about the presentation of this work and find it hard to follow. The problem is not well motivated and the design choices are not backed up theoretically or empirically. Comparisons with reasonable baseline methods are missing to provide useful insights. \\n\\nThe reviewers unanimously agreed that this paper should be rejected.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed several initially unclear aspects of the implementation during the discussion. However, the reviewers remain unconvinced that the paper offers significant conceptual insights beyond demonstrating the functionality of a heavily engineered system.\"}", "{\"comment\": \"We would like to sincerely thank all the reviewers for their insightful feedback and constructive comments. Your thoughtful suggestions have significantly contributed to improving the clarity and quality of our paper. 
We have carefully addressed each of the points raised and made the necessary revisions to enhance the presentation and strengthen the content of our work.\\nWe appreciate your time and effort in reviewing our manuscript, and we believe that the revisions made will help in making the paper more polished and comprehensive. Thank you again for your valuable input. \\n\\nIn response to the feedback, we will revise the relevant sections and include an appendix section: \\n\\n1. Following are the sections we will be adding to the paper: \\n - **Dataset Section** containing details of the dataset used in this paper. \\n\\n2. **Appendix Section** containing: \\n - Detailed Methodology \\n - Implementation Details \\n - Additional Results \\n\\n**The code and dataset will be shared upon the acceptance of the paper to ensure proper access and usage.**\"}", "{\"title\": \"The paper tries to claim episodic memory for agents and robots, but robotic interaction with the environment is different from and much harder than agent interaction in a virtual environment. It is better to make a clear definition and scope. For instance, in L391, \\\"robot's episodic memory\\\" should be \\\"agent's episodic memory\\\".\", \"comment\": \"Thank you for your valuable feedback. We agree that there is a distinction between robotic interaction with the physical environment and agent interaction in a virtual environment. We will revise the phrase \\\"robot's episodic memory\\\" in line 391 to \\\"agent's episodic memory\\\" to ensure consistency and accuracy.\"}", "{\"comment\": \"#### In response to the concern about the overreliance on custom datasets, we would like to clarify that the datasets used to evaluate EMCA are diverse and cover multiple tasks, including dialogue-based tasks and other related domains. 
These datasets, which are publicly available, have been carefully designed to ensure broad applicability and have been used for various tasks such as episodic memory question answering, temporal localization, and video-language understanding. We have built upon these existing datasets to enhance their functionality for our specific research needs.\\n\\n#### Additionally, we have taken steps to ensure that the dataset is unbiased. We carefully curated the data to include a wide range of scenarios, dialogues, and contexts, allowing for generalizable insights across different domains. We also implemented methods to eliminate potential sources of bias, including balancing the representation of different events, characters, and temporal relationships. This ensures that our evaluations are robust and not skewed by any specific dataset characteristics.\\n\\nIn the revised version of the paper, we will provide a more detailed explanation of the dataset's design, the variety of tasks it covers, and the measures taken to ensure its fairness and neutrality.\", \"title\": \"Overreliance on Custom Datasets: While the use of various datasets to evaluate EMCA is a strength, most of these datasets were developed by the authors, which could indicate potential biases in testing and validation\"}", "{\"comment\": \"| **Model Component** | **Hyperparameter** | **Value** |\\n|-----------------------------|---------------------------|--------------------------------------|\\n| **Event Extraction (BART)** | Encoder-Decoder Layers | 6 (BASE), 12 (LARGE) |\\n| | Hidden Size | 1024 |\\n| | FFN Size | 4096 |\\n| | Dropout | 0.1 |\\n| | Learning Rate | $4 \\\\times 10^{-5}$ (BASE), $2 \\\\times 10^{-5}$ (LARGE) |\\n| | Max Tokens per Batch | 1100 |\\n| | Margin Coefficient | 1 |\\n| **Vision Encoder (ViT)** | Patch Size | 16 |\\n| | Resolution | $224 \\\\times 224$ |\\n| | Latent Space Dim. | 512 |\\n| | Transformer Layers | 12 |\\n| | Width | 768 |\\n| | LayerScale Init. 
| 0.1 |\\n| | Dropout Path Rate | Configurable |\\n| **QA System (BERT)** | Learning Rate | $1 \\\\times 10^{-5}$ |\\n| | Max Sequence Length | 512 |\\n| | Document Stride | 512 |\\n| | Train Batch Size | 8 |\\n| | Gradient Accum. Steps | 2 |\\n| | Epochs | 2 |\\n| | Mixed-Precision Opt. | fp16 (O2) |\\n| **Temporal Tagging** | Max Sequence Length | 128 |\\n| | Batch Size | 32 |\\n| | Learning Rate | $5 \\\\times 10^{-5}$ |\\n| | Dropout | 0.1 |\\n| | Weight Decay | 0.01 |\\n| | Epochs | 10 |\"}", "{\"title\": \"Some of the introduction, motivation, and discussion circles around 'robots', I didn't see anything specific to robots in this paper. Yes, it could be integrated into a robot, but the method would equally well work for a body cam, social agent, etc. In robotics there is quite an extensive literature on 'lifelong learning' that covers some of the same challenges: what to memorize, how to store and to retrieve, how to generalize, what to forget, etc.\", \"comment\": \"We appreciate the reviewer\\u2019s observation regarding the scope of the paper and the use of the term \\\"robot\\\" in the introduction. While the proposed method is applicable to diverse contexts such as body cams or social agents, it is important to clarify the focus and implications of our approach.\\nIn robotics, 'lifelong learning' addresses challenges similar to those in episodic memory for agents, including deciding what information to store, how to efficiently organize and retrieve it, generalizing knowledge across experiences, determining what to forget, and adapting to new inputs over time. \\nOur system integrates key principles of lifelong learning within a structured and adaptive memory management framework. It prioritizes what to memorize by focusing on keyframes and significant events, ensuring that only relevant and non-redundant information is retained. 
Keyframe extraction reduces redundancy by identifying frames that reflect spatio-temporal changes, utilizing vision transformers to ensure alignment with human perception. For dialogues, the system employs annotated datasets to train models, such as BART, to extract events from a third-person perspective, enabling efficient and event-focused memory storage. This ensures that not everything encountered by the agent is stored indiscriminately in memory.\\nMoreover, while lifelong learning frameworks are often tailored to specific tasks, they are prone to issues like catastrophic forgetting, where older information is lost when new data is introduced. Such limitations are not ideal for episodic memory systems, which are designed to preserve a comprehensive and retrievable record of experiences. By addressing these aspects, our approach offers a robust and scalable solution for episodic memory in agents.\"}", "{\"title\": \"l. 184: \\\"organized by place, characters, and events\\\" raises the question where those come from - that is explained later in the text\", \"comment\": \"These come from tagging the experiences extracted from dialogues and speech; this is explained in the revised methodology section.\"}", "{\"title\": \"Repeated paragraph: L054-L062\", \"comment\": \"We will revise the paper to remove the repeated paragraph between lines L054-L062.\\n\\nRegarding the subtitle formatting issue in line 480, we will revise the paper and correct the formatting.\"}", "{\"title\": \"There are a whole lot of design choices in this paper, an ablation on only the modalities and search methods seems a bit limited. There also is no sensitivity analysis for e.g. the clustering thresholds\", \"comment\": \"There are indeed several design choices discussed in the paper, and we acknowledge that the ablation study focuses primarily on modalities and search methods. However, we chose to highlight these aspects to emphasize the key factors influencing our system's performance. 
Regarding the clustering thresholds, we would like to clarify that we are using Ada and topic modelling for event clusters. Hence, if a character, place, or event naturally falls into an existing cluster, it is automatically assigned to that cluster. This approach removes the need for arbitrary thresholds and allows for more flexible and dynamic memory organization. We believe this method contributes to the system's ability to adapt and scale without the need for manual tuning of parameters such as clustering thresholds.\"}", "{\"comment\": \"We hope that the queries raised during the previous phase have been addressed through our clarifications and the additional details provided. In response to your query above, we would like to make the following clear:\\nThe focus of our work is on episodic memory encoding and retrieval, inspired by the psychological processes of the human episodic memory. Our methodology integrates visual and audio content, emphasizing key memory-dependent factors such as location, persons, time, and the sequence of events. Human experiences are inherently multimodal in nature; keeping this in mind, our aim was to capture the multimodal aspect of memory encoding and retrieval. While we leveraged models like CLIP and other large language models for data preprocessing, this aspect remains intentionally flexible to encourage innovation. Researchers can opt for any combination of image, audio, speech, or language models based on their specific needs. Since our focus was not on signal capturing, we did not carry out ablation studies on this component.\\n\\nSince our proposed methodology centers on encoding and retrieval, we prioritized these aspects, enabling tasks like episodic memory localization and episodic QA to assess retrieval accuracy. To evaluate our approach, we conducted comparisons with graph-based retrieval methods such as GNN-RAG and GraphRAG (Section 6.2), demonstrating the effectiveness of our method. 
Additionally, we performed an ablation study to analyze time complexities between clustered and non-clustered approaches, and we examined the efficiency of storing diverse features in graph nodes to test the robustness of our encoding strategy. \\n\\nFurthermore, we analyzed the structure of the episodic graph by converting it into a knowledge graph-like structure and performing structural evaluations. We also evaluated different traditional traversal techniques, such as BFS and DFS, as part of the ablation study to highlight why traditional traversal methods were unsuitable for our specific requirements. Taken together, these evaluations demonstrate that we conducted the necessary assessments aligned with the aims of the research, validating our methodological choices and highlighting the robustness of our approach.\"}", "{\"comment\": \"#### Thank you for your feedback. We acknowledge that the results section could benefit from improvements in clarity and presentation. In the updated version of the paper, we will begin by providing a clearer explanation of the specific tasks being evaluated, such as episodic QA, memory localization, and ablation studies, to ensure that the reader understands the objectives of each task. Additionally, we will present structured comparisons, highlighting the performance of our approach in relation to baseline methods like 2D-TAN, VSLNet, CONE, and SPOTEM, showcasing the improvements made by our method. The ablation study results will also be presented in a more accessible manner, emphasizing the contribution of each framework component to the overall performance. We will also report the retrieval time as the number of episodes increases.\", \"title\": \"Results are poorly presented\"}", "{\"title\": \"Implementation Details\", \"comment\": \"For event extraction within dialogues, we utilized a Transformer-based BART model initialized with pre-trained weights to effectively extract and summarize events within contextual boundaries. 
The architecture options included BARTBASE, featuring a 6-layer encoder-decoder with approximately 140 million parameters, and BARTLARGE, which has a 12-layer encoder-decoder and 400 million parameters. Both configurations use a hidden size of 1024 and a feed-forward filter size of 4096, with a fixed dropout rate of 0.1 across layers. We employed the Fairseq toolkit for training, using the **Adam optimizer** with a warmup strategy. Learning rates were set to $4 \\\\times 10^{-5}$ for BARTBASE and $2 \\\\times 10^{-5}$ for BARTLARGE, with a maximum batch token limit of 1100 tokens. Contrastive objectives were supported by a margin coefficient of 1, and hyperparameters for coherence and sub-summary objectives were tuned using a validation set. Our method demonstrated substantial performance improvements compared to publicly available models trained on datasets like **SAMSUM** and **DialogueSUM**.\\n\\nFor visual data processing, we used a **Vision Transformer (ViT)** as the vision encoder, specifically adapted for video frame analysis from the **MSR-VTT dataset**. The encoder processed $224 \\\\times 224$ video frames, segmented into 16-sized patches and embedded into a 512-dimensional latent space. The 12-layer encoder, with a width of 768, was equipped with **LayerScale** (initialized at 0.1) for training stability. Advanced regularization techniques, including stochastic depth with a configurable \\\\texttt{drop\\\\_path\\\\_rate}, were applied. The encoder was based on the \\\"eva-clip-b-16\\\" model, which proved effective in extracting detailed spatial and temporal features essential for multimodal tasks.\\nFor models based on **LLaMA** that integrate vision and dialogue for character and place tagging, a multimodal configuration was used. **ViT** processed visual data, while **LLaMA** managed dialogue input. Training included cross-entropy loss for character tagging and contrastive loss for image-text alignment, incorporating **episodic memory** for QA tasks. 
The training process leveraged the **AdamW optimizer**, a dropout rate of 0.2, and a **cosine annealing scheduler** for efficient learning.\\n\\n**Temporal tagging** was optimized with key hyperparameters for best performance: a maximum sequence length of 128, a batch size of 32, and a learning rate of $5 \\\\times 10^{-5}$. A dropout rate of 0.1 was used to mitigate overfitting, and a weight decay of 0.01 improved generalization. The training process spanned 10 epochs to ensure sufficient learning while preventing overfitting.\\nFor extracting dialogue from audio, the **Whisper-large model** was employed. This model was utilized for its robustness in transcribing and converting spoken content into text for further processing.\\n\\nFor text detection, the **TextSnake** model was trained on the **SCUT-CTW1500** dataset using **SGD with Momentum** as the optimizer. The architecture combined **ResNet** and **FPN\\\\_UNet**, configured with a training batch size of 64 and 8 workers for data loading. The validation batch size was set to 1, with 4 workers and persistent workers enabled. Training was conducted over 200 epochs with validation checks every 10 epochs.\\n\\nThe **QA system** was built using a **BERT** model fine-tuned on concatenated datasets, including **SQuAD**, **Wikipedia**, and **Reddit**, to improve contextual comprehension. The hyperparameters included a learning rate of $1 \\\\times 10^{-5}$, a maximum sequence length of 512, and a document stride of 512. The training batch size was 8, with gradient accumulation steps of 2 over 2 epochs. Mixed-precision training was used with `fp16` at the **O2 optimization level** for better efficiency. 
The final model outputs were stored in the `bart-squadv2` directory, without intermediate model saving.\"}", "{\"comment\": \"In response to feedback regarding the insufficient explanation of the retrieval policy and memory clustering mechanisms, we plan to expand the relevant section in the revised version of the paper to provide a more comprehensive description of how these mechanisms adapt to various query types and scenarios. Specifically, the section titled **\\\"Dynamic Edge Traversal for Memory Retrieval Using Character, Location, Event, and Temporal Weights\\\"** will be further detailed for improved clarity.\\n\\nTo address episodic memory tasks, we parse a query \\\\( q \\\\) to extract relevant tags such as \\\\( P \\\\) for people, \\\\( L \\\\) for locations, \\\\( V \\\\) for events, and \\\\( R \\\\) for temporal information. These tags enable the agent to focus on the critical components of the query, thus allowing it to identify pertinent memories based on the key elements of \\\"What,\\\" \\\"Where,\\\" and \\\"When,\\\" which are central to episodic recall. Episodic memory tasks often revolve around answering \\\"What-Where-When\\\" (WWW) questions, which capture the essential aspects of memory. These types of tasks are commonly used to explore episodic-like memory in both animals and humans. Participants are required to recall specific events (What), locations (Where), and their chronological order (When). Research indicates that episodic memory systems are engaged during active encoding of WWW information, while passive encoding may rely on alternate systems for spatial and temporal aspects. \\n\\nFor temporal elements, such as weeks, months, or years, we process these by subtracting fixed intervals from the current date. 
For example, weeks are calculated by subtracting \\( 7n \\) days, months by subtracting \\( 30m \\) days, and years by subtracting \\( 365y \\) days, where \\( n \\), \\( m \\), and \\( y \\) are positive integers. This adaptable mechanism enables the agent to handle temporal queries effectively without requiring precise date references. \\n\\nThe relevance of memory entries in response to a given query \\( q \\) is evaluated using a cosine similarity score: \\n\\n$$ \\nS_s = \\\\sum_{e \\\\in D_u} \\\\frac{q \\\\cdot e}{\\\\|q\\\\| \\\\|e\\\\|} \\n$$ \\n\\nwhich measures the relationship between the query and each memory entry. For each neighboring node \\( v \\) of node \\( u \\), the weight \\( W_{uv} \\) is computed as: \\n\\n$$ \\nW_{uv} = \\\\sum w(u, v) \\n$$ \\n\\nwhere \\( w(u, v) \\) aggregates weights derived from shared features such as characters, locations, and events. If \\( W_{uv} > \\\\theta \\), the query set is updated to include the neighbor \\( v \\) along with its associated weight \\( W_{uv} \\). Temporal edges \\( T_{\\\\text{edge}}(E_{t-1}, E_t) \\) are maintained to preserve temporal continuity, allowing the agent to reason within the proper context without needing to reevaluate the entire memory graph. This dynamic traversal ensures that episodic memories are retrieved efficiently, with a focus on temporally connected events. \\n\\nWhen a query pertains to specific locations, characters, or events, the system initiates a focused exploration through corresponding clusters in the memory graph. This process ensures that the agent can recall specific event details, along with their spatial and temporal context, which is crucial for decision-making in situations requiring detailed episodic memory recall. \\n\\nWhether or not explicit temporal markers are provided, the system performs correctly. 
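To make the weighted traversal described above concrete, here is a minimal sketch in Python. The node structure (lists of characters, locations, and events per episode, plus an embedding vector) and the threshold value are illustrative assumptions on our part, not the paper's actual implementation:

```python
import math

def cosine(q, e):
    # Cosine similarity between a query embedding and a memory-entry embedding
    dot = sum(a * b for a, b in zip(q, e))
    nq = math.sqrt(sum(a * a for a in q))
    ne = math.sqrt(sum(b * b for b in e))
    return dot / (nq * ne) if nq and ne else 0.0

def edge_weight(node_u, node_v):
    # W_uv: aggregate weight over shared characters, locations, and events
    return sum(len(set(node_u[k]) & set(node_v[k]))
               for k in ("characters", "locations", "events"))

def traverse(graph, query_vec, start, theta):
    # Expand the query set with neighbors whose edge weight exceeds theta,
    # then rank the retrieved episodes by cosine similarity to the query.
    query_set, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in graph["neighbors"][u]:
            if v not in query_set and \
               edge_weight(graph["nodes"][u], graph["nodes"][v]) > theta:
                query_set.add(v)
                frontier.append(v)
    return sorted(query_set,
                  key=lambda n: cosine(query_vec, graph["nodes"][n]["embedding"]),
                  reverse=True)
```

Only neighbors that share enough characters, locations, or events with an already-retrieved episode are expanded, mirroring the thresholded update of the query set; unrelated regions of the memory graph are never visited.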
As demonstrated in the ablation studies, the only observable difference is in retrieval time: with explicit timestamps, the query is treated as a temporal one, whereas, in their absence, it is handled as a contextual query. We will make this distinction clearer in the revised version of the paper.\", \"title\": \"Limited Explanation of Retrieval Policy: The retrieval policy and memory clustering mechanisms, while central to EMCA\u2019s functionality, are described only briefly. A more detailed explanation would clarify how these mechanisms adapt to different query types and scenarios.\"}", "{\"comment\": \"In response to the concern about undefined terminology and variables, we acknowledge that terms such as \\\"key events,\\\" \\\"location weight,\\\" and \\\"subjective temporal timescales\\\" were introduced without sufficient explanation in the initial draft. We will clarify these terms in the revised version of the paper as follows:\\n\\n**Key Events**\\n\\nKey frame extraction involves identifying representative frames from a video that show significant visual or temporal changes, minimizing redundancy while preserving essential information. The process typically includes preprocessing frames to enhance features, computing similarities, thresholding, and windowing to select frames based on spatio-temporal changes. This approach leverages vision transformers to assess low-level and mid-level features, ensuring that the extracted key frames align with human perception and support downstream tasks.\\n\\nFor dialogues, we will train the model on a dataset of conversations, where each conversation is annotated to extract relevant events. Using BART, we convert the dialogues into a third-person perspective to facilitate the extraction of significant events.\\n\\n**Location Weight**\\n\\nThis term refers to the similarity between the locations of neighboring nodes in the graph. 
The location weight helps assess the relevance of different places in the context of episodic memory and aids in faster edge traversal for location queries or \\\"where\\\" type queries.\\n\\n**Subjective Temporal Timescales**\\n\\nThis term refers to the absence of explicitly stated dates. Instead of relying on exact timestamps, the framework uses temporal edges to maintain relationships between events. These edges allow for relative time indexing, which is captured by the agent\u2019s understanding of temporal proximity (e.g., \\\"the previous day,\\\" \\\"the next day,\\\" etc.).\\n\\nWe will ensure that these definitions are clearly stated in the revised methodology section to improve the clarity and understanding of our framework.\", \"title\": \"Undefined Terminology and Variables: Certain terms and variables (e.g., \\\"key events,\\\" \\\"location weight,\\\" \\\"subjective temporal timescales\\\") are introduced without sufficient explanation or definition, reducing clarity\"}", "{\"title\": \"Can the authors describe the main takeaways of Section 4? I still do not understand the insights this section is supposed to provide.\", \"comment\": \"This section was intended to compare our graph mathematically with other graphs.\\n### Takeaways of this Section:\\n\\n1) **Episodic memory can recommend the next activity based on previous interactions**, even when recurring patterns are not observed.\\n\\n2) **Comparison with knowledge graphs**: This comparison highlights the complexity of knowledge graphs in such scenarios. For example, when 3 episodes enter an episodic graph, it adds only 3 nodes, whereas a knowledge graph might add 30-50 new nodes.\"}", "{\"comment\": \"Furthermore, we will introduce the following definitions to aid understanding:\\n#### Key Events:\\n\\nKey frame extraction involves identifying representative frames from a video that show significant visual or temporal changes, minimizing redundancy while preserving essential information. 
The process typically includes preprocessing frames to enhance features, computing similarities, thresholding, and windowing to select frames based on spatio-temporal changes. This approach leverages vision transformers to assess low-level and mid-level features, ensuring that the extracted key frames align with human perception and support downstream tasks.\\n\\nFor dialogues, we train the model on a dataset of conversations, where each conversation is annotated to extract relevant events. Using BART, we convert the dialogues into a third-person perspective to facilitate the extraction of significant events.\\n\\n#### **Location Weight**\\n\\nThis refers to the degree of similarity between locations in neighboring nodes within the memory graph. The location weight is vital for assessing the relevance of different places in the context of episodic memory and accelerates edge traversal for location-based (\\\"where\\\") queries.\\n\\n#### **Subjective Temporal Timescales**\\n\\nThis term refers to the lack of explicitly stated dates, where the system does not rely on exact timestamps. Instead, it uses temporal edges to establish relationships between events, enabling relative time indexing. This indexing is based on the agent\\u2019s understanding of temporal proximity (e.g., \\\"the previous day,\\\" \\\"the next day,\\\" etc.).\\nWhether or not temporal markers are explicitly provided, the results remain correct. As demonstrated in the ablation studies, the only difference observed is in retrieval time: when explicit timestamps are available, the system treats the query as a temporal one, whereas, in the absence of explicit timestamps, the query is treated as contextual. We will clarify this distinction in the next revision of the paper.\\nThis updated section will be included in the appendix of the paper. 
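As a concrete illustration of the keyframe-selection idea in the Key Events definition above (similarity computation, thresholding, and selection over successive frame features), here is a minimal sketch in Python. The greedy keep-when-dissimilar rule and the threshold value are our own simplifying assumptions, not the exact procedure used in the paper; in practice the feature vectors would come from a vision transformer:

```python
import math

def cosine(u, v):
    # Cosine similarity between two frame feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_keyframes(frame_features, sim_threshold=0.9):
    # Greedy selection: keep a frame only when its similarity to the last
    # kept keyframe falls below the threshold (i.e., the scene has changed).
    if not frame_features:
        return []
    keyframes = [0]  # always keep the first frame
    for i in range(1, len(frame_features)):
        if cosine(frame_features[i], frame_features[keyframes[-1]]) < sim_threshold:
            keyframes.append(i)
    return keyframes
```

Near-duplicate consecutive frames are skipped, so only frames marking a spatio-temporal change survive, which keeps the stored memory non-redundant.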
The system maintains temporal consistency by treating each episode as a distinct day, creating a chronological sequence without the necessity for explicit time markers. Temporal updates are applied based on the agent's ability to visually recognize dates or infer them through dialogue, which in turn updates the temporal indexer. In the absence of explicit time markers, the system uses contextual and positional relationships between episodes to ensure temporal consistency.\", \"title\": \"Undefined Terminology and Variables: Certain terms and variables (e.g., \\\"key events,\\\" \\\"location weight,\\\" \\\"subjective temporal timescales\\\") are introduced without sufficient explanation or definition, reducing clarity.\"}", "{\"title\": \"Citations\", \"comment\": \"Other relevant concurrent work on memory in robotics, some of which use graphs while others do not. [1] Xie, Quanting, et al. \\\"Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation.\\\" arXiv preprint arXiv:2409.18313 (2024). [2] Anwar, Abrar, et al. \\\"ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation.\\\" arXiv preprint arXiv:2409.13682 (2024). [3] B\u00e4rmann, Leonard, et al. \\\"Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience.\\\" arXiv preprint arXiv:2409.17702 (2024).\\n\\n[1] Xie, Quanting, et al. \\\"Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation.\\\" arXiv preprint arXiv:2409.18313 (2024).\\n[2] Anwar, Abrar, et al. \\\"ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation.\\\" arXiv preprint arXiv:2409.13682 (2024).\\nThese studies focus on navigation tasks, which is a problem not discussed in this paper.\\nHowever, we have considered:\\n[3] B\u00e4rmann, Leonard, et al. 
\\\"Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience.\\\" arXiv preprint arXiv:2409.17702 (2024).\\nOur approach relates more closely to [3], as it focuses on episodic memory representation and analysis, which aligns with the objectives of our research. We have performed an analysis using this method, and the results are included below to demonstrate the outcomes in comparison to our model's performance.\\n#### More comparisons with memory models will be added in the appendix:\\n\\n| **Method** | **Recall Accuracy** |\\n|-------------------------------|---------------------|\\n| Episodic Memory Verbalization | 50% |\\n| Rehearsal Memory | 36% |\\n| STM | 30% |\\n| DNC | 35% |\\n| LT-CT | 50% |\\n| **Ours** | **81%** |\\n\\n*Caption: Recall accuracy for episodic memory question answering.*\\n\\n---\"}", "{\"comment\": \"### Statistical Estimation Methods for Timestamps\\n\\n**Reviewer Query:** Which specific statistical methods are used for estimating timestamps in the absence of explicit temporal markers?\\n\\nIf the reviewer could kindly specify the line referring to \\\"statistical timestamps,\\\" it would help us provide a more targeted response to address the query.\\n\\nIn our framework, no traditional statistical methods are used to estimate missing timestamps. Instead, we adopt a more structured approach based on the assumption that each episode represents a distinct day. If the agent is able to capture specific dates\u2014either visually (e.g., through timestamps in visual cues) or through dialogues (e.g., explicit date mentions)\u2014the corresponding temporal indexer is updated accordingly to reflect the correct date and time.\\n\\nSince all episodes are inherently connected in a temporal sequence by making use of temporal edges, the temporal indexer (stored for every episode) ensures that each episode is indexed relative to the others. 
For instance:\\n- The \\\"before\\\" node will be assigned a timestamp that is one day earlier than the current episode.\\n- The \\\"after\\\" node will be assigned a timestamp that is one day later.\\n\\nThis ensures that the temporal relationships between episodes are consistent and that the framework can accurately reflect the passage of time.\\n\\n### Additional Details for Temporal Indexing:\\n\\n#### **Temporal Relationships:**\\nThe temporal structure remains consistent because the temporal indexer updates based on the captured date or time from visual or dialogue cues. If no explicit timestamp is available, the framework assumes contextual timestamps, treating each new episode as a new day with the relationships adjusted accordingly.\\n\\n#### **External Timestamps:**\\nIf external timestamps (e.g., from external sensors or provided datasets) are available, we can directly retrieve them from the temporal indexers of the node. This will bypass the contextual indexing process and directly update the temporal structure to reflect the accurate timestamp.\\n\\n### Planned Updates for the Paper:\\n\\n- **Clarify how temporal indexing works** in our framework, particularly how episodes are treated as distinct days and how the temporal indexer adjusts based on visual or dialogue-derived dates.\\n- **Detail the process by which temporal relationships between episodes are maintained**, ensuring that timestamps are consistently adjusted.\\n- **Provide a clearer explanation** of how external timestamps can be integrated if available, or otherwise how we treat text-based queries as contextual without explicit timestamps.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"This seems to be a rather complex system, which makes reproducing results very difficult, with probably quite a lot of additional implementation and setting details. 
I couldn't find any promises to release code (or at least a detailed appendix), which would have alleviated this concern.\", \"comment\": \"We understand that the complexity of the system may make it challenging to reproduce the results. In the revised version of the paper, we will include detailed implementation and methodology descriptions in the appendix section to provide clearer guidance for reproducibility. Additionally, we plan to release the code and dataset upon acceptance of the paper.\"}", "{\"title\": \"In terms of the structure of the paper, I do not think that the text space used for discussing how signals are captured (section 3.1) is very useful. I would argue this is the case with a chunk of this paper's equations, where it simply adds space when it does not need to, and it makes the paper more difficult to read. I would recommend condensing the information and matching ICLR's 9-page recommendation as opposed to 10. Similarly, there is an excessive use of new lines at arbitrary positions.\", \"comment\": \"Thank you for your feedback. I appreciate your suggestion regarding the structure of the paper, particularly the section on how signals are captured (Section 3.1), as well as the formatting issues related to the length of equations and the excessive use of new lines. I understand that these elements may make the paper more difficult to read and could be streamlined.\\nI will address these concerns in the revised version of the paper, ensuring that the content is condensed appropriately while maintaining clarity.\"}", "{\"summary\": \"The authors introduce Episodic Memory for Cognitive Agents (EMCA) that models episodic memory based on a graph-structure. This allows them to incrementally store memories and retrieve experience. They also can cluster memories, have dynamic retrieval, and can handle temporal reasoning. Memory is structured as a graph, where each episode contains characters, temporal elements, location, and events. 
The edges of this graph can be temporal or semantic. They then build a retrieval system that can handle contextual, temporal, and spatial queries.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The Big Bang Theory and the Agent datasets seem very useful!\", \"Their results indicate that their method performs better than other graph-based approaches.\", \"This area is a growing field, especially as robots and agents become more capable and need better ways to scale their context. And their graph-based approach seems to be a meaningful contribution\", \"Table 2 is interesting, as it showcases that some of their questions require access to vision, acoustics, and/or dialogues. This result would be better if we knew what the Big Bang Theory and Agent dataset contained.\"], \"weaknesses\": [\"## Main weaknesses\", \"After reading the full paper, I am not sure what the actual task is. What commands does the master ask? I understand the approach, but not what task it is specifically solving. Is the output the location of a specific memory? Or is it a free-form text answer?\", \"In terms of the structure of the paper, I do not think that the text space used for discussing how signals are captured (section 3.1) is very useful. I would argue this is the case with a chunk of this paper's equations, where it simply adds space when it does not need to, and it makes the paper more difficult to read. I would recommend condensing the information and matching ICLR's 9-page recommendation as opposed to 10. Similarly, there is an excessive use of new lines at arbitrary positions.\", \"There is no discussion on how the Big Bang Theory and Agent datasets were constructed. This on its own seems like a major contribution on its own. 
I would recommend the authors remove many of the superfluous equations and newlines (and move that into the appendix), and put more of an emphasis on this dataset component.\", \"I think I like what Section 4.1 is implying about combining a \\\"master's\\\" memory with that of an agent's, but it is not presented very clearly, and I do not see a connection with temporal graphs like the subsection title suggests\", \"Nor do I see how this section is a \\\"theoretical comparison\\\"\", \"\\\"Master\\\" terminology in line 346 is confusing, and should be introduced earlier in section 4.1. Also, the term \\\"master\\\" is generally frowned upon in these settings, so I would recommend a different term\", \"Results are poorly presented\", \"## Formatting/Clarity issues:\", \"Check for spaces after periods or colons throughout the paper.\", \"Line spacing is odd in much of the paper\", \"Figure placement should ideally be on the top or bottom of a page, not in the middle with paper text above and below the figure. This makes the paper difficult to follow\", \"Figure 4, the legend has oddly shaped circles\", \"Figure 5's result is good, but it should not be a line graph with the x-axis being method and lines being the dataset. Instead it should be datasets on the x-axis and methods on the y-axis\", \"The results in section 5.1.1 are all discussed in a single paragraph. Is that all the main results? I would recommend splitting this up into a few bolded mini-sections and showing the main takeaway of each figure along with highlights on how the method performed, possibly with qualitative results.\", \"The focus of the introduction falls a bit flat. Rather than focusing on the historical definition of episodic memory, focus more on how people have been engineering and building these kinds of systems. 
Focus on why other systems do not work, and why yours does.\", \"I would recommend larger fonts for the figures; they are difficult to read in the paper.\", \"## Citations\", \"Other relevant concurrent work on memory in robotics, some of which use graphs while others do not.\", \"[1] Xie, Quanting, et al. \\\"Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation.\\\" arXiv preprint arXiv:2409.18313 (2024).\", \"[2] Anwar, Abrar, et al. \\\"ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation.\\\" arXiv preprint arXiv:2409.13682 (2024).\", \"[3] B\\u00e4rmann, Leonard, et al. \\\"Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience.\\\" arXiv preprint arXiv:2409.17702 (2024).\"], \"questions\": [\"How was the dataset constructed? This is a major contribution that is not discussed.\", \"What does a successful or unsuccessful example look like? I would recommend looking at [2] or [3] above to see how to discuss dataset creation in such a setting.\", \"In Table 1, what metric is being used? It does not say in the caption or the table. It should be re-iterated in the table itself.\", \"Is a forgetting mechanism necessary, or would it be more like a memory aggregation mechanism so that retrieval is still efficient?\", \"How does the incremental storage/retrieval scale as the number of episodes changes? This result is not displayed in the paper, but I would argue it is very important. If you only used, say, 10 episodes of EM instead of 181, is there a difference in performance? This would directly support contribution number 2 in Section 1 of your paper\", \"What is \\\"Time complexity\\\" in Table 3? Time complexity of BFS is O(V+E), but here a number is used instead. Authors should use \\\"retrieval time\\\" or something similar instead of time complexity.\", \"Can the authors describe the main takeaways of Section 4? 
I still do not understand the insights this section is supposed to provide.\", \"Also, there are some questions/concerns in the weaknesses section\", \"Overall, I like the paper. But I think there are a lot of issues with how the content is shown to the reader, which makes the paper's contributions fall flat. In its current state, I would recommend rejecting the paper, but if the authors address my concerns above, I believe I would lean more towards accept.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Table for implementation details\", \"comment\": \"| **Model Component** | **Hyperparameter** | **Value** |\\n|-------------------------------|-------------------------------|------------------------------------------|\\n| **Event Extraction (BART)** | Encoder-Decoder Layers | 6 (BASE), 12 (LARGE) |\\n| | Hidden Size | 1024 |\\n| | FFN Size | 4096 |\\n| | Dropout | 0.1 |\\n| | Learning Rate | $4 \\\\times 10^{-5}$ (BASE), $2 \\\\times 10^{-5}$ (LARGE) |\\n| | Max Tokens per Batch | 1100 |\\n| | Margin Coefficient | 1 |\\n| **Vision Encoder (ViT)** | Patch Size | 16 |\\n| | Resolution | $224 \\\\times 224$ |\\n| | Latent Space Dim. | 512 |\\n| | Transformer Layers | 12 |\\n| | Width | 768 |\\n| | LayerScale Init. | 0.1 |\\n| | Dropout Path Rate | Configurable |\\n| **QA System (BERT)** | Learning Rate | $1 \\\\times 10^{-5}$ |\\n| | Max Sequence Length | 512 |\\n| | Document Stride | 512 |\\n| | Train Batch Size | 8 |\\n| | Gradient Accum. Steps | 2 |\\n| | Epochs | 2 |\\n| | Mixed-Precision Opt. 
| fp16 (O2) |\\n| **Temporal Tagging** | Max Sequence Length | 128 |\\n| | Batch Size | 32 |\\n| | Learning Rate | $5 \\\\times 10^{-5}$ |\\n| | Dropout | 0.1 |\\n| | Weight Decay | 0.01 |\\n| | Epochs | 10 |\"}", "{\"title\": \"Will the code and datasets be made publicly available?\", \"comment\": \"We will share the code and dataset for evaluation upon acceptance of the paper.\"}", "{\"title\": \"Eq (1) vs (2): the difference between S_T and T_voiced is unclear\", \"comment\": \"S_T is the entire sound signal, comprising both the voiced component and the acoustics; T_voiced is the timestamped voiced component, i.e., essentially the transcripts.\"}", "{\"comment\": \"**Answer to weakness 2**\\n\\nA2) Episodic memory, as introduced by Tulving (1972) [^1], represents the capacity to recall personal experiences embedded within specific temporal and spatial contexts. Unlike semantic memory, which holds general knowledge, episodic memory encompasses detailed information about events, integrating aspects such as time, place, characters, and context. Tulving\\u2019s framework organizes these components into cohesive episodes.\\nDrawing from this foundational concept, we propose a model where episodic memories are structured as a graph. Each episode \\\\( \\\\text{Episode}_i \\\\) functions as a node:\\n$$\\n\\\\text{Episode}_i = \\\\{ \\\\mathbf{C}_i, \\\\mathbf{T}_i, \\\\mathbf{L}_i, \\\\mathbf{e}_i \\\\}\\n$$\\nIn this model, \\\\( \\\\mathbf{C}_i \\\\) stands for characters, \\\\( \\\\mathbf{T}_i \\\\) represents temporal markers, \\\\( \\\\mathbf{L}_i \\\\) signifies location, and \\\\( \\\\mathbf{e}_i \\\\) encapsulates events. Semantic edges \\\\( \\\\mathbf{S}(v_i, v_j) \\\\) connect nodes that share common elements, while temporal edges \\\\( \\\\mathbf{T}(v_i, v_j) \\\\) map the chronological flow of experiences. 
To enhance retrieval efficiency, we apply a dynamic clustering mechanism that organizes similar episodes based on both temporal and contextual similarities.\\nOur retrieval system supports three query types: \\\"what\\\" (contextual), \\\"when\\\" (temporal), and \\\"where\\\" (spatial), as outlined by Stephen et al. and Holland and Smulders (2011), enabling human-like memory recall. This is particularly valuable for applications in social companion robotics, aiding elderly or memory-impaired individuals.\\n\\nFor such a cognitive agent, the ability to recall episodic memories is essential for human cognition, linking personal experiences to specific temporal and spatial contexts. Existing memory models often struggle with continuous, time-series data, which limits their ability to simulate episodic recall effectively. Many of these systems fail to store dialogues as multimodal data, preventing them from capturing the rich, context-dependent nature of human memory. Additionally, most existing approaches store a single experience as one isolated episode and lack a mechanism for retrieving information across multiple experiences, hindering their ability to integrate knowledge over time.\\n\\nFurthermore, current episodic memory systems are typically restricted to performing a specific, predefined task, limiting their flexibility and adaptability. In contrast, our system is designed to be more versatile, capable of handling a variety of tasks and dynamically adapting to new scenarios, making it far more suited for real-world applications that require memory integration across different contexts and time periods. \\n\\nBy integrating multimodal data and time-related information into the episodic memory framework, our model extends to experience memory localization, recommendation, and question answering. 
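As an illustration only, the episode graph described above (nodes holding characters, time, location, and event; semantic and temporal edges) could be mocked up as follows — field and function names are ours, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One node of the episodic-memory graph: Episode_i = {C_i, T_i, L_i, e_i}."""
    characters: frozenset  # C_i
    time: int              # T_i, simplified here to an integer timestamp
    location: str          # L_i
    event: str             # e_i

def build_graph(episodes):
    """Return (semantic_edges, temporal_edges) as lists of index pairs.

    A semantic edge S(v_i, v_j) links episodes sharing characters or a
    location; temporal edges T(v_i, v_j) follow chronological order.
    """
    order = sorted(range(len(episodes)), key=lambda i: episodes[i].time)
    temporal = list(zip(order, order[1:]))
    semantic = [
        (i, j)
        for i in range(len(episodes))
        for j in range(i + 1, len(episodes))
        if episodes[i].characters & episodes[j].characters
        or episodes[i].location == episodes[j].location
    ]
    return semantic, temporal
```

This is a sketch of the data model only; the actual system additionally clusters episodes and attaches multimodal embeddings to each node.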
It provides a robust foundation for adaptable, scalable systems capable of operating without frequent retraining, applicable to real-world scenarios such as social companion robots and autonomous task planning systems.\\n\\n### **Contributions:**\\n1. Temporal connections are managed without complex pattern learning, enabling adaptive reasoning and retrieval of subgraphs from past experiences.\\n2. The system incrementally stores and retrieves episodic memories, dynamically clustering them based on temporal and contextual affinities.\\n3. A multi-edge graph framework optimizes path traversals for dynamic memory retrieval and personalized recommendations across subjective timescales.\\n4. A new dataset is introduced to improve episodic memory question answering, enhancing the agent's ability to respond to queries based on past events.\\n\\nOur model\\u2019s versatility is demonstrated through comparisons with existing systems that require retraining. It handles various dataset types, including visual, multimodal, and text-based data, and excels in temporal reasoning even without explicit timestamps, addressing complex memory retrieval tasks across diverse applications.\\n\\n[^1]: Tulving, E. (1972). *Episodic and semantic memory*. In *Organization of Memory* (pp. 381-403). Academic Press.\", \"title\": \"*Insufficient Motivation: The introduction section does not adequately establish the necessity of this system or why it improves upon existing learning frameworks for cognitive agents. Additional motivation for the need of an episodic memory for a cognitive agent would help contextualize EMCA's contributions\"}", "{\"title\": \"Dataset Section\", \"comment\": \"We propose a comprehensive dataset framework designed to evaluate and enhance episodic memory systems in artificial agents. This framework integrates multiple datasets, including a custom set of episodic questions based on the TV series *The Big Bang Theory*, spanning all nine seasons (181 episodes). 
The aim is to assess memory recall and narrative understanding in complex scenarios.\\n\\nWe introduce the *Agent Dataset*, a 10-episode time-series dataset created in Unity3D, where a virtual agent performs tasks and interacts with characters in realistic environments, simulating the role of companion robots. This dataset emphasizes the importance of multi-sensory inputs and task execution, challenging the agent to process and integrate information from *dialogues* and *visual cues* to maintain task order and achieve context-driven objectives.\\n\\nAdditionally, we adapted the **Ego4D dataset**, restructuring its activity sequences into simulated chronological episodes to address the original absence of time-series data\\u2014portraying an agent performing a series of activities over 30 days. We also combined group activity videos designed for active speaker recognition. This transformation enables episodic queries such as \\\"Where did I place the agricultural tool on the last day of farming?\\\", enhancing the ability to localize and retrieve temporal experiences effectively.\\n\\nTogether with the **PerLTQA** [du2024perltqapersonallongtermmemory] and **LLQA** [dolan-brockett-2005-automatically] datasets, which test essential episodic memory dimensions\\u2014**\\\"what\\\"** (context), **\\\"when\\\"** (time), and **\\\"where\\\"** (place)\\u2014this framework forms a robust benchmark for evaluating advanced episodic memory capabilities in AI systems.\\n\\n**Data Annotation**: The data was carefully annotated to tag scene information and identify characters in dialogues, ensuring that the model could recognize character presence and understand related events. This included explicitly tagging scene details for location identification and differentiating characters present in the scene versus those mentioned. Events within dialogues were also meticulously annotated to capture key details, facilitating effective memory representation beyond simple summaries. 
Capturing these essential details is crucial for episodic memory tasks, as it allows the agent to recall past experiences accurately. Each episode was annotated with 10 *what*, *when*, and *where* questions.\"}", "{\"comment\": \"**The method relies on various models to extract features and to summarize things before storing them. I don't believe there is a 'one size fits all approach' but how to best do that depends on the retrieval task and the type of downstream tasks you have. The paper does not provide any indications on how to deal with that.**\\nOur approach is specifically designed with the principles of episodic memory in mind, focusing on key components such as place, time, character, and event. By structuring memory around these aspects, the system can efficiently retrieve relevant memories tied to specific contexts, such as when and where an event occurred, which characters were involved, and what the event entailed.\\nOur method goes beyond merely storing data; it emphasizes managing and organizing memory in a task-oriented manner, ensuring efficiency and relevance. This design allows the agent to provide accurate, context-aware responses based on past interactions and experiences. For low-level applications, the approach can be adapted by storing experimental data specific to those use cases, maintaining its flexibility across various domains.\\nAdditionally, the focus on summarizing key events using \\\"what, when, and where\\\" (WWW) information ensures that the memory framework effectively supports downstream tasks such as recommendation, localization, and other context-driven operations. We will make this aspect clearer in the paper.\"}", "{\"comment\": \"We appreciate the feedback and will address these issues in the revised version of the paper. Specifically:\\n\\n### Spaces and Formatting:\\nWe will ensure that any missing or extra spaces, particularly in the title and abstract, are corrected to conform to the proper formatting. 
This will be checked throughout the document for consistency.\\n\\n### Citation Formatting:\\nWe will switch to the correct citation commands, specifically using `\\\\citep` and `\\\\citet` as per the template\\u2019s requirements. This will prevent issues with repeated author names and ensure that references are formatted properly for easier readability.\\n\\n### Quotation Marks:\\nWe will replace any incorrect quotation marks with proper opening and closing LaTeX quotation marks to ensure consistency and proper formatting.\\n\\n### Double Paragraph:\\nThe double paragraph before \\\"Contributions\\\" will be removed to avoid redundancy and improve the flow of the introduction.\\n\\n### Broken Sentences:\\nWe will correct any broken sentences in Section 5.1.2, ensuring that the text is coherent and properly structured.\\n\\n### Incomplete References:\\nWe will revise the references with missing or incomplete information, specifically addressing the issues mentioned on lines 191, 300, and 468, and provide the full citation details where necessary.\"}", "{\"title\": \"How does the incremental storage/retrieval scale as the number of episodes change? This result is not displayed in the paper but I would argue is very important. If you only used say 10 episodes of EM instead of 181, is there a difference in performance? This would directly support contribution number 2 in Section 1 of your paper\", \"comment\": \"To address the query about the scalability of incremental storage and retrieval as the number of episodes increases, we conducted a comparison between different episode counts (10, 50, 100, and 181 episodes).\\nIn terms of accuracy, all values remain consistent. However, in terms of retrieval time, there is a slight increase as the number of episodes grows. 
\\n\\n### Table 1: Retrieval times for different episode counts.\\n\\n| **Number of Episodes** | **Retrieval Time (ms)** |\\n|------------------------|-------------------------|\\n| 10 | 9.11 |\\n| 50 | 11.0 |\\n| 100 | 11.5 |\\n| 181 | 12.0 |\\n\\nTo address the query about the scalability of incremental storage and retrieval as the number of episodes increases, we conducted a comparison between different episode counts (10, 50, 100, and 181 episodes). \\n\\nIn terms of accuracy, all values remain consistent. However, in terms of retrieval time, there is a slight increase as the number of episodes grows. The retrieval times for different episode counts are provided in the table above. \\n\\nThis shows that while retrieval time does increase, it is not drastic, and the increase is quite modest. The clustering mechanism plays a crucial role in ensuring that retrieval times remain low, even as the number of episodes increases. This highlights the efficiency of our method in scaling with the number of episodes.\\n\\nWe will include these results in the ablation studies section, as suggested, to better support the contribution discussed in Section 1 of the paper.\"}", "{\"title\": \"What is \\\"Time complexity\\\" in Table 3?Time complexity of BFS is O(V+E), but here a number is used instead. Authors should use \\\"retrieval time\\\" or something similar instead of time complexity.\", \"comment\": \"We will update this part accordingly\"}", "{\"title\": \"Scalability limitations: Does the graph structure grow exponentially with experiences? What are the computational costs?\", \"comment\": \"The scalability of the graph structure depends on the dataset.The scalability of the graph structure depends on the nature of the dataset. If there is significant variation in the dataset, sparse connections are formed, and the graph becomes less complex. However, if the experiences or days are very similar, more connections are formed, leading to a more cohesive structure. 
In our observations, the graph does not grow exponentially; computational costs depend on the dataset.\"}", "{\"comment\": \"#### The agent comprehends whether the query is person-based, temporal, or contextual (the query type). This is determined using a language model. We will clarify this in the revised PDF.\", \"title\": \"Meaning of \\\"Agent Comprehends\\\": In line 274, it says the \\\"agent comprehends\\\" something. Does this imply processing by a language model, and if so, could you clarify which model is used?\"}", "{\"title\": \"Sect. 3.1.2, 3.1.3, and 3.2, l. 218: at places the paper sounds like everything is represented as 'text', at others it seems to be a mixture of text and other embeddings. It would be great to explain earlier on what is stored as what.\", \"comment\": \"### Processing of Audio Data in Episodic Memory\\n\\nAudio data, including dialogs and acoustics, is crucial for constructing episodic memory. Dialogs provide linguistic and contextual information, while acoustics capture environmental and emotional cues. These elements are integrated as:\\n\\n$$\\nA(t) = D(t) + C(t)\\n$$\\n\\nwhere \\\\( A(t) \\\\) is the total audio data at time \\\\( t \\\\), with \\\\( D(t) \\\\) representing the dialog and \\\\( C(t) \\\\) representing acoustics.\\n\\n#### Extraction of Acoustic Data using Mel Spectrograms\\n\\nAcoustic data is transformed into Mel spectrograms, which emphasize perceptually relevant frequencies. 
The Mel spectrogram \\\\( M(t, f) \\\\) is computed as:\\n\\n$$\\nM(t, f) = \\\\log\\\\left(\\\\sum_{k} \\\\left| X(t, k) \\\\right|^2 \\\\cdot H(f, k)\\\\right)\\n$$\\n\\nwhere \\\\( X(t, k) \\\\) is the magnitude of the STFT at time \\\\( t \\\\) and frequency \\\\( k \\\\), and \\\\( H(f, k) \\\\) is the Mel filter bank mapping linear frequencies to the Mel scale.\\n\\n#### Extraction of Verbal Cues from Audio Data\\n\\nVerbal cues are extracted by applying the Short-Time Fourier Transform (STFT) and converting the spectrum to the Mel scale as:\\n\\n$$\\nM(f) = 2595 \\\\log_{10}\\\\left(1 + \\\\frac{f}{700}\\\\right)\\n$$\\n\\nThe Mel spectrogram is then derived as:\\n\\n$$\\n\\\\text{MelSpec}(m,t) = \\\\log\\\\left(\\\\sum_{f_{\\\\text{low}}}^{f_{\\\\text{high}}} |S(f,t)|^2 M(f) + \\\\epsilon\\\\right)\\n$$\\n\\nwhere \\\\( \\\\epsilon \\\\) is a small constant to prevent issues with the logarithm. The final audio representation integrates acoustic features with transcribed dialogue:\\n\\n$$\\nT_{\\\\text{audio}}(t) = T_{\\\\text{acoustics}}(t) + T_{\\\\text{dialogue}}(t)\\n$$\\n\\ncapturing both tonal properties and linguistic meaning.\\n\\n### Processing of Visual Data in Episodic Memory\\n\\nVisual data processing starts by transforming each frame \\\\( F_i \\\\) into a tensor and extracting both global and local features. The scene representation is obtained by aggregating the frame embeddings:\\n\\n$$\\nV_{\\\\text{scene}} = \\\\frac{1}{N} \\\\sum_{i=1}^{N} V_{\\\\text{embed}}(F_i)\\n$$\\n\\nwhere \\\\( N \\\\) is the number of frames. 
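Both the Mel-scale mapping and the frame-pooling step above are simple enough to check numerically; a minimal sketch (function names ours, not the paper's):

```python
import math

def hz_to_mel(f_hz):
    """Mel-scale conversion M(f) = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def scene_embedding(frame_embeddings):
    """V_scene: element-wise mean of the N frame embeddings,
    matching V_scene = (1/N) * sum_i V_embed(F_i)."""
    n = len(frame_embeddings)
    return [sum(dim) / n for dim in zip(*frame_embeddings)]
```

A well-known property of this Mel formula is that 1000 Hz maps to roughly 1000 mels, which makes for an easy sanity check.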
For extracting time and place details, a convolutional network detects text regions, generating probability maps for the text center line (TCL) and text regions (TR):\\n\\n$$\\n\\\\begin{pmatrix}\\nP_{\\\\text{TCL}}(x, y) \\\\\\\\\\nP_{\\\\text{TR}}(x, y)\\n\\\\end{pmatrix} = \\\\sigma\\\\left( \\\\begin{pmatrix}\\nW_{\\\\text{TCL}} \\\\\\\\\\nW_{\\\\text{TR}}\\n\\\\end{pmatrix} \\\\cdot F_{\\\\text{feature}}(x, y) \\\\right)\\n$$\\n\\nA thresholding operation follows:\\n\\n$$\\nP_{\\\\text{filtered}} = \\\\{(x, y) \\\\mid P_{\\\\text{TCL}}(x, y) \\\\geq T_{\\\\text{TCL}} \\\\text{ and } P_{\\\\text{TR}}(x, y) \\\\geq T_{\\\\text{TR}} \\\\}\\n$$\\n\\nText is then recognized using a softmax layer:\\n\\n$$\\n\\\\hat{y}_t = \\\\text{Softmax}(W \\\\cdot h_t + b)\\n$$\\n\\nFor person embedding extraction, a person\\u2019s region is detected and cropped, followed by processing through a feature extractor to generate and store the embedding for future tasks.\\n\\n#### Merging and Synchronizing Data\\n\\nIn the final stage, the processed audio and visual data are synchronized to a common timestamp, creating a unified representation:\\n\\n$$\\n\\\\mathbf{T_M} = (\\\\mathbf{T_{\\\\text{audio}}}, \\\\mathbf{T_{\\\\text{visual}}})\\n$$\\n\\nThe stored representation is thus a joint embedding of both the text and the visual features; we have revised this portion of the text to make that clearer. This ensures temporal alignment between the two modalities, enabling coherent multimodal interaction. The audio and visual embeddings are then concatenated into a joint multimodal embedding:\\n\\n$$\\n\\\\mathbf{E_{\\\\text{combined}}} = \\\\mathbf{E_{\\\\text{audio}}} \\\\oplus \\\\mathbf{E_{\\\\text{visual}}}\\n$$\\n\\nwhere \\\\( \\\\oplus \\\\) denotes the concatenation operation. 
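A toy sketch of the synchronization and concatenation steps above, with timestamps as integers and embeddings as plain lists (all names hypothetical):

```python
def combine_embeddings(e_audio, e_visual):
    """E_combined = E_audio (+) E_visual, i.e. vector concatenation."""
    return list(e_audio) + list(e_visual)

def synchronize(audio_stream, visual_stream):
    """T_M = (T_audio, T_visual): join the two modality streams on
    shared timestamps, then concatenate the per-timestamp embeddings.
    Each stream is a list of (timestamp, embedding) pairs."""
    visual_by_t = dict(visual_stream)
    return {
        t: combine_embeddings(e_audio, visual_by_t[t])
        for t, e_audio in audio_stream
        if t in visual_by_t
    }
```

Timestamps without both modalities are dropped here for simplicity; a real system would more likely interpolate or carry the missing modality forward.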
This joint multimodal representation is stored in episodic memory, where each node represents a specific experience, incorporating aspects such as time, location, characters, and events. By combining the visual and audio information, this integrated embedding enhances memory recall and supports detailed event-based analysis.\"}", "{\"title\": \"How is T_acoustic generated\", \"comment\": \"### Extraction of Acoustic Data using Mel Spectrograms\\nWe will add the following to the revised version of the paper. Acoustic data is transformed into Mel spectrograms, which emphasize perceptually relevant frequencies. The Mel spectrogram \\\\( M(t, f) \\\\) is computed as:\\n\\n$$\\nM(t, f) = \\\\log\\\\left(\\\\sum_{k} \\\\left| X(t, k) \\\\right|^2 \\\\cdot H(f, k)\\\\right)\\n$$\\n\\nwhere \\\\( X(t, k) \\\\) is the magnitude of the STFT at time \\\\( t \\\\) and frequency \\\\( k \\\\), and \\\\( H(f, k) \\\\) is the Mel filter bank mapping linear frequencies to the Mel scale.\"}", "{\"title\": \"l. 220: \\\"relevance of these tasks is assessed\\\" - again HOW?\", \"comment\": \"### Task Categories for Text Summarization\\n\\nText summaries are grouped into broader task categories, such as **meetings** and **lunches**, using a topic modeling approach. The process involves the following steps:\\n\\n1. 
**Text Cleaning**: Each dialogue is processed individually by cleaning the text, which involves removing unwanted words, filler phrases, and clutter that do not contribute to the core meaning.\\n\\n2. **Topic Modeling**: After cleaning the text, topic modeling techniques, such as **Parallel Latent Dirichlet Allocation (LDA)**, are applied to identify latent topics within the dialogue. LDA helps in discovering the underlying topics by analyzing the distribution of words within the dialogue and across different dialogues. The general form of LDA can be written as:\\n\\n $$ p(w \\\\mid z) = \\\\frac{n(w, z)}{\\\\sum_{w} n(w, z)} $$\\n\\n where \\\\( w \\\\) is a word, \\\\( z \\\\) is a topic, \\\\( n(w, z) \\\\) is the count of word \\\\( w \\\\) assigned to topic \\\\( z \\\\), and the equation represents the probability of word \\\\( w \\\\) under topic \\\\( z \\\\).\\n\\n3. **Mapping to Predefined Categories**: After identifying the topics, these are then mapped to predefined categories based on the nature of the content. For example:\\n - **Meetings**: Dialogues related to planning, decision-making, or discussions.\\n - **Lunches**: Dialogues focusing on food, social interaction, or meals.\\n\\n4. **Categorization Criteria**: The criteria for defining these categories are based on the frequency and relevance of topics that appear within each dialogue type. For instance, if topics related to decision-making or strategy discussions are dominant, the dialogue would be classified as a **meeting**.\\n\\nThis approach helps in categorizing text summaries efficiently and is further explained in the updated methodology section of the appendix in the second version of the paper. The technique is based on the work of:\\n\\n- Liu, J., Zou, Y., Zhang, H., Chen, H., Ding, Z., Yuan, C., & Wang, X. (2021). **Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization**. *arXiv*. https://arxiv.org/abs/2109.04994\"}", "{\"title\": \"The title says 'learning'. At least according to my definition there is no learning in the paper. 
It proposes a way to store information and to retrieve it, so that would correspond to 'memorization' (=rote learning), while learning implies understanding the information and being able to apply it to new situations. The method could serve as a starting point for learning, but in the current paper that doesn't seem to be present in the method nor in the experiments.\", \"comment\": \"Regarding the notion of \\\"learning\\\" in the title, you are correct that the current paper primarily focuses on memory storage and retrieval, which may initially appear to align more with memorization than traditional learning. However, the ability to access and utilize past experiences stored in memory significantly enhances decision-making capabilities. By leveraging these memories, the agent can adapt to new tasks or environments based on prior experiences, demonstrating a form of experiential learning.\\nFor instance, the system can use stored memories to identify patterns (if present), detect intrinsic goals, and develop strategies, such as navigating to a specific location or resolving a task based on similar past situations. While not explicitly addressed in the current experiments, this adaptive use of memory aligns with learning processes and highlights how the proposed method goes beyond rote memorization. We will include additional clarifications and examples of this adaptive learning potential in the appendix of the paper.\"}", "{\"comment\": \"#### In line 287, the functions \\\\( w \\\\) and \\\\( l \\\\) play crucial roles within the memory retrieval mechanism. The function \\\\( w \\\\) refers to the overall weight, which captures the similarity between key aspects of the episodes, such as the characters, locations, and events. This weight is used to determine the relevance of one episode to another during the retrieval process. 
The function \\\\( l \\\\), on the other hand, specifically measures the location similarity between two episodes, focusing on how closely related the locations in the episodes are. Together, these functions guide the traversal of the memory graph, ensuring that episodes with higher relevance based on these similarities are prioritized. We will expand and clarify these roles further in the updated version of the paper, providing a more detailed analysis of their contributions to the memory retrieval process.\", \"title\": \"Role of w and l Functions: In line 287, the w and l functions are mentioned. Could you elaborate on their roles within the memory retrieval mechanism?\"}", "{\"comment\": \"#### We are contemplating revising that section to include a mathematical representation of the recommendation process within the framework of Episodic Memory for Cognitive Agents (EMCA). We would greatly appreciate any suggestions or recommendations you may have for improving this approach.\\nThank you for pointing that out. We understand that the use of the term \\\"master\\\" in line 346 might be confusing and could carry unintended connotations. The term \\\"master\\\" is meant to refer to the individual or human whom the agent is serving as an assistant to, and we will clarify this earlier in Section 4.1 to avoid any confusion.\\nIn response to your suggestion, we will also consider replacing \\\"master\\\" with a more neutral and clear term, such as \\\"user\\\" or \\\"human operator,\\\" to align with contemporary language preferences and ensure the term does not cause any discomfort. 
This clarification will be made in the paper, and we will also make sure to introduce this role earlier in the relevant section to avoid any ambiguity.\\nIf you have any further suggestions or specific terms you would recommend, please feel free to share.\", \"title\": \"Nor do I see how this section is a \\\"theoretical comparison\\\"\"}", "{\"comment\": \"**The paper mentions the missing forgetting mechanism as a major limitation. Related to that it also leaves the question on 'what to store' unanswered. It reads like everything is stored, even if it is effectively a duplicate. I believe the real challenging question that needs to be solved is the memory management: what to store, what to consolidate/merge, what to forget, etc. Without those a memory representation is of limited value, and it remains unclear to me how suitable the proposed architecture is for extending it in that way - or if we would be better off redesigning it from scratch.**\\n#### We appreciate your feedback on the memory management aspects. Currently, our system only stores key frames and relevant dialogues, focusing on the most significant experiences. For key frame extraction, we identify representative frames from video sequences that highlight key visual or temporal changes, ensuring we avoid redundancy while retaining essential information. This process uses vision transformers to assess low- and mid-level features, ensuring the selected frames align with human perception for downstream tasks.\\nFor dialogues, we train the model on conversation datasets where each conversation is annotated to extract meaningful events. Using BART, we convert the dialogues into a third-person perspective, making it easier to identify key events. 
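The key-frame criterion is not spelled out here; one plausible greedy sketch, assuming a frame is kept whenever its embedding drifts far enough from the last stored frame (threshold and names hypothetical, not the paper's implementation):

```python
import math

def select_key_frames(frame_embeddings, threshold=1.0):
    """Greedy key-frame selection: keep a frame only when its embedding
    is farther than `threshold` from the last kept frame, so runs of
    near-duplicate frames are never stored."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frame_embeddings)):
        if math.dist(frame_embeddings[i], frame_embeddings[kept[-1]]) > threshold:
            kept.append(i)
    return kept
```

Any notion of frame distance (here plain Euclidean distance over feature vectors) can be substituted; the paper's vision-transformer features would slot in as `frame_embeddings`.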
Therefore, the system does not store everything indiscriminately, but instead focuses on important data that is deemed relevant.\\nIn terms of memory management and forgetting, we have implemented a weight factor to track the frequency of access to memory locations. This allows us to remove memories that are seldom accessed, ensuring that only useful experiences are retained. We are actively working on improving these strategies and refining the process of what to store, consolidate, or forget.\\nWe believe that our current architecture can be adapted to address these challenges by integrating structured memory management components, which will enhance the system's ability to store, retrieve, and forget experiences as needed, without requiring a complete redesign.\"}", "{\"title\": \"Updated version of Introduction\", \"comment\": \"Episodic memory, as introduced by Tulving (1972) [^1], represents the capacity to recall personal experiences embedded within specific temporal and spatial contexts. Unlike semantic memory, which holds general knowledge, episodic memory encompasses detailed information about events, integrating aspects such as time, place, characters, and context. Tulving\\u2019s framework organizes these components into cohesive episodes.\\nDrawing from this foundational concept, we propose a model where episodic memories are structured as a graph. Each episode \\\\( \\\\text{Episode}_i \\\\) functions as a node:\\n$$\\n\\\\text{Episode}_i = \\\\{ \\\\mathbf{C}_i, \\\\mathbf{T}_i, \\\\mathbf{L}_i, \\\\mathbf{e}_i \\\\}\\n$$\\nIn this model, \\\\( \\\\mathbf{C}_i \\\\) stands for characters, \\\\( \\\\mathbf{T}_i \\\\) represents temporal markers, \\\\( \\\\mathbf{L}_i \\\\) signifies location, and \\\\( \\\\mathbf{e}_i \\\\) encapsulates events. 
Semantic edges \\\\( \\\\mathbf{S}(v_i, v_j) \\\\) connect nodes that share common elements, while temporal edges \\\\( \\\\mathbf{T}(v_i, v_j) \\\\) map the chronological flow of experiences. To enhance retrieval efficiency, we apply a dynamic clustering mechanism that organizes similar episodes based on both temporal and contextual similarities.\\nOur retrieval system supports three query types: \\\"what\\\" (contextual), \\\"when\\\" (temporal), and \\\"where\\\" (spatial), as outlined by Stephen et al., and Holland and Smulders (2011), enabling human-like memory recall. This is particularly valuable for applications in social companion robotics, aiding elderly or memory-impaired individuals.\\n\\nThe ability to recall episodic memories is essential for such a cognitive agent, just as it is for human cognition, linking personal experiences to specific temporal and spatial contexts. Existing memory models often struggle with continuous, time-series data, which limits their ability to simulate episodic recall effectively. Many of these systems fail to store dialogues as multimodal data, preventing them from capturing the rich, context-dependent nature of human memory. Additionally, most existing approaches store a single experience as one isolated episode and lack a mechanism for retrieving information across multiple experiences, hindering their ability to integrate knowledge over time.\\n\\nFurthermore, current episodic memory systems are typically restricted to performing a specific, predefined task, limiting their flexibility and adaptability. In contrast, our system is designed to be more versatile, capable of handling a variety of tasks and dynamically adapting to new scenarios, making it far more suited for real-world applications that require memory integration across different contexts and time periods. 
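As a concrete illustration, the node-and-edge structure described above can be sketched in a few lines of Python. This is purely our illustrative sketch (all names are ours, not the paper's implementation): an episode carries its characters, temporal marker, location, and events; a semantic edge is added when two episodes share any of those elements, and temporal edges simply chain episodes in chronological order.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Episode:
    """One graph node: characters C_i, temporal marker T_i, location L_i, events e_i."""
    characters: frozenset
    time: int            # e.g. a day index
    location: str
    events: frozenset

def shares_elements(a: Episode, b: Episode) -> bool:
    """Semantic-edge criterion: any shared character, location, or event."""
    return bool(a.characters & b.characters) or a.location == b.location \
        or bool(a.events & b.events)

def build_graph(episodes):
    """Return (semantic_edges, temporal_edges) as lists of index pairs."""
    order = sorted(range(len(episodes)), key=lambda i: episodes[i].time)
    temporal = list(zip(order, order[1:]))   # chronological chain
    semantic = [(i, j)
                for i in range(len(episodes))
                for j in range(i + 1, len(episodes))
                if shares_elements(episodes[i], episodes[j])]
    return semantic, temporal
```

A retrieval step would then walk these edges, scoring candidate nodes with character/location similarity functions such as the w and l functions discussed elsewhere in this thread.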
\\n\\nBy integrating multimodal data and time-related information into the episodic memory framework, our model extends experience memory localization, recommendation, and question answering. It provides a robust foundation for adaptable, scalable systems capable of operating without frequent retraining, applicable to real-world scenarios such as social companion robots and autonomous task planning systems.\\n\\n### **Contributions:**\\n1. Temporal connections are managed without complex pattern learning, enabling adaptive reasoning and retrieval of subgraphs from past experiences.\\n2. The system incrementally stores and retrieves episodic memories, dynamically clustering them based on temporal and contextual affinities.\\n3. A multi-edge graph framework optimizes path traversals for dynamic memory retrieval and personalized recommendations across subjective timescales.\\n4. A new dataset is introduced to improve episodic memory question answering, enhancing the agent's ability to respond to queries based on past events.\\n\\nOur model\\u2019s versatility is demonstrated through comparisons with existing systems that require retraining. It handles various dataset types, including visual, multimodal, and text-based data, and excels in temporal reasoning even without explicit timestamps, addressing complex memory retrieval tasks across diverse applications.\\n\\n[^1]: Tulving, E. (1972). *Episodic and semantic memory*. In *Organization of Memory* (pp. 381-403). Academic Press.\"}", "{\"comment\": \"Thank you to the authors for the improvements and the addition of numerous details, which have made the content clearer. However, the core contributions and innovations of the paper remain insufficiently clear. The inclusion of multiple modules, datasets and extensive content somewhat affects the focus of the paper. 
It is recommended that the authors further concentrate on the primary innovations and clearly highlight the unique contributions of the paper.\"}", "{\"title\": \"Fig. 5 isn't very convincing (except for Arigraph)\", \"comment\": \"### Section 5.1.1: Result Table\\n\\nThe time taken by different methods for various datasets (in seconds) is shown in the table below:\\n\\n| **Method** | **Big Bang Theory (s)** | **PerLTQA (s)** | **Agent (s)** | **LLQA (s)** |\\n|----------------------|-------------------------|-----------------|---------------|--------------|\\n| Our Model | 600 | 300 | 180 | 240 |\\n| EMR | 900 | 600 | 480 | 540 |\\n| GraphRag | 4200 | 3600 | 3300 | 3480 |\\n| GNN Rag | 4200 | 3600 | 3000 | 2700 |\\n| TempoQA | 1800 | 1800 | 2400 | 2100 |\\n| Arigraph | 84000 | 72000 | 10800 | 9000 |\\n\\n*Table 1: Time taken by different methods for various datasets (in seconds).*\\n\\nRegarding the issue of overlapping axis lines in the graph, we acknowledge that this occurs due to the large scale of the \\\"Trigraph\\\" (or similar metric) values. This happens even when axes are reversed. Since the values vary significantly, especially for methods like \\\"Arigraph,\\\" the axis scaling is stretching the range too far.\"}", "{\"comment\": \"**While the current implementation focuses on question-answering tasks, the core framework is inherently flexible and adaptable to a variety of applications. By leveraging episodic memory as the foundation for storing and retrieving relevant experiences, this approach can be extended to domains such as robot movement control and other complex tasks. The introduction makes it sound like a general method. But I have a few doubts about that. There are quite a lot of design choices, and some of choices in Sect. 3 seem rather specific. In the end the method is evaluated with a question answering task. What would need to change for a different task, say for a robot learning low-level movement control? 
Please clarify the generalizability of your method. Specifically, it would be nice if you could discuss how your approach might be adapted for different tasks (such as robot movement control), or explain any limitations in its applicability to other domains.**\\n\\n\\n#### This method could further enhance robotic systems by enabling them to recall past interactions with users, providing personalized assistance, adapting to user preferences, and improving task efficiency over time. By integrating visual, auditory, and textual data into episodic memory, the system supports richer interaction scenarios, such as interpreting complex instructions or responding to ambiguous user commands. Additionally, the ability to store and retrieve task sequences empowers robots to perform multi-step operations autonomously and adaptively. In educational or learning environments, this approach can personalize the teaching process by storing interactions and assessments for tailored guidance.\\n\\nTo adapt this method for motion control or other domains, relevant information can be incorporated into the episodic memory. For instance, in motion control, keyframes could store joint positions, sensor data, or environmental feedback. This flexibility allows the system to accommodate domain-specific data without the need to develop a completely new model, ensuring scalability and reusability across diverse applications. We can also use this model to derive goals intrinsically, which will help an agent form navigation policies.\"}", "{\"comment\": \"#### Regarding the concern about vague statistical estimation methods for missing timestamps, it's important to clarify that there are no traditional statistical methods used for estimating missing timestamps in our framework. Instead, we treat each episode as representing a distinct day. 
If the agent is able to capture specific dates visually or through dialogues, the corresponding temporal indexer will be updated to reflect the correct date and time.\\n#### Since all episodes are connected through temporal edges, the temporal indexer is updated accordingly. This means that for any episode, its timestamp is adjusted based on the captured date, and subsequent episodes are indexed with respect to this temporal structure. For example, the \\\"before\\\" node will be one day prior, while the \\\"after\\\" node will be one day later, and so on. This approach ensures that the temporal relationships between episodes remain consistent and that the framework accurately reflects the passage of time as the agent processes the information. We will clarify this part in the revised version of the paper. Specifically, we will outline how episodes are treated as distinct days, how the temporal indexer updates based on the agent's ability to capture dates visually or from dialogues, and how the relationships between episodes are maintained with respect to the passage of time. Also, even without external timestamps, our model can infer the order in which episodes are arranged because it maintains temporal edges.\", \"title\": \"Vague Statistical Estimation Methods: The paper mentions the use of statistical methods for estimating missing timestamps but does not specify which methods were used, leaving an important aspect of the framework unexplained. 
If the reviewer could kindly specify the line referring to 'statistical timestamps,' it would help us provide a more targeted response to address the query.\"}", "{\"comment\": \"To better align with the evidence provided and ensure that the claims are well-supported, we have decided to adjust our approach in the main section of the paper and expand on the relevant details in the appendix.\\n\\n**As suggested we will tone down the paragraph mentioned in section 3 to**: \\n> EMCA\\u2019s methodology for collecting and structuring episodic experiences is inspired by cognitive psychology, specifically the 'what', 'where', and 'when' (WWW) components of episodic memory. This approach highlights the agent\\u2019s ability to independently capture multimodal data\\u2014visual and auditory\\u2014to build a comprehensive understanding of its environment.\\n\\nAlso, to highlight processing of visual and auditory details separately, we will elaborate our claim by adding an extra section in the appendix as: \\n> EMCA's methodology for collecting episodic experiences in robotic cognition draws inspiration from the human brain's mechanisms for encoding sensory information, particularly the distinct roles of the occipital and temporal lobes. In human cognition, the occipital lobe processes visual stimuli, while the temporal lobe is responsible for auditory information. Despite these processes occurring in specialised regions, the brain synchronises these sensory inputs within a unified temporal framework, enabling the formation of cohesive and contextually rich memories. 
EMCA replicates this principle by employing separate pipelines for processing visual data (e.g., spatial and object recognition) and auditory data (e.g., speech and environmental sounds), which are then temporally aligned to construct a coherent representation of the agent's environment.\", \"title\": \"The authors claim EMCA encodes data in a way that resembles human memory, but there is no evidence or detailed explanation to support this claim from a neural encoding way. Such claims should be toned down. You should instead emphasize on the 'what', 'where' 'when' organisation from a psychological perspective of episodic memory.\"}", "{\"comment\": \"### Task Categories for Text Summarization\\n\\nText summaries are grouped into broader task categories, such as **meetings** and **lunches**, using a topic modeling approach. The process involves the following steps:\\n\\n1. **Text Cleaning**: Each dialogue is processed individually by cleaning the text, which involves removing unwanted words, filler phrases, and clutter that do not contribute to the core meaning.\\n\\n2. **Topic Modeling**: After cleaning the text, topic modeling techniques, such as **Parallel Latent Dirichlet Allocation (LDA)**, are applied to identify latent topics within the dialogue. LDA helps in discovering the underlying topics by analyzing the distribution of words within the dialogue and across different dialogues. The word-topic distribution in LDA can be written as:\\n\\n$$ p(w \\\\mid z) = \\\\frac{n(w, z)}{\\\\sum_{w'} n(w', z)} $$\\n\\n where \\\\( w \\\\) is a word, \\\\( z \\\\) is a topic, and \\\\( n(w, z) \\\\) is the number of times word \\\\( w \\\\) is assigned to topic \\\\( z \\\\); the equation gives the probability of word \\\\( w \\\\) under topic \\\\( z \\\\).\\n\\n3. **Mapping to Predefined Categories**: After identifying the topics, these are then mapped to predefined categories based on the nature of the content. 
For example:\\n - **Meetings**: Dialogues related to planning, decision-making, or discussions.\\n - **Lunches**: Dialogues focusing on food, social interaction, or meals.\\n\\n4. **Categorization Criteria**: The criteria for defining these categories are based on the frequency and relevance of topics that appear within each dialogue type. For instance, if topics related to decision-making or strategy discussions are dominant, the dialogue would be classified as a **meeting**.\\n\\nThis approach helps in categorizing text summaries efficiently and is further explained in the updated methodology section of the appendix in the second version of the paper. The technique is based on the work of:\\n\\n- Liu, J., Zou, Y., Zhang, H., Chen, H., Ding, Z., Yuan, C., & Wang, X. (2021). **Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization**. *arXiv*. https://arxiv.org/abs/2109.04994\", \"title\": \"Task Categories for Text Summarization: How are text summaries grouped into broader task categories (e.g., meetings, lunches)? What criteria and process are used to define these categories?\"}", "{\"title\": \"More details about the Big Bang Theory dataset are needed.\", \"comment\": \"We propose a comprehensive dataset framework designed to evaluate and enhance episodic memory systems in artificial agents. This framework integrates multiple datasets, including a custom set of episodic questions based on the TV series *The Big Bang Theory*, spanning all nine seasons (181 episodes). The aim is to assess memory recall and narrative understanding in complex scenarios.\\n\\nWe introduce the *Agent Dataset*, a 10-episode time-series dataset created in Unity3D, where a virtual agent performs tasks and interacts with characters in realistic environments, simulating the role of companion robots. 
This dataset emphasizes the importance of multi-sensory inputs and task execution, challenging the agent to process and integrate information from *dialogues* and *visual cues* to maintain task order and achieve context-driven objectives.\\n\\nAdditionally, we adapted the **Ego4D dataset**, restructuring its activity sequences into simulated chronological episodes to address the original absence of time-series data\\u2014portraying an agent performing a series of activities over 30 days. We also combined group activity videos designed for active speaker recognition. This transformation enables episodic queries such as \\\"Where did I place the agricultural tool on the last day of farming?\\\", enhancing the ability to localize and retrieve temporal experiences effectively.\\n\\nTogether with the **PerLTQA** [du2024perltqapersonallongtermmemory] and **LLQA** [dolan-brockett-2005-automatically] datasets, which test essential episodic memory dimensions\\u2014**\\\"what\\\"** (context), **\\\"when\\\"** (time), and **\\\"where\\\"** (place)\\u2014this framework forms a robust benchmark for evaluating advanced episodic memory capabilities in AI systems.\\n\\n**Data Annotation**: The data was carefully annotated to tag scene information and identify characters in dialogues, ensuring that the model could recognize character presence and understand related events. This included explicitly tagging scene details for location identification and differentiating characters present in the scene versus those mentioned. Events within dialogues were also meticulously annotated to capture key details, facilitating effective memory representation beyond simple summaries. Capturing these essential details is crucial for episodic memory tasks, as it allows the agent to recall past experiences accurately. 
Each episode was annotated with 10 *what*, *when*, and *where* questions.\"}", "{\"comment\": \"Episodic memory, as introduced by Tulving (1972) [^1], represents the capacity to recall personal experiences embedded within specific temporal and spatial contexts. Unlike semantic memory, which holds general knowledge, episodic memory encompasses detailed information about events, integrating aspects such as time, place, characters, and context. Tulving\\u2019s framework organizes these components into cohesive episodes.\\nDrawing from this foundational concept, we propose a model where episodic memories are structured as a graph. Each episode \\\\( \\\\text{Episode}_i \\\\) functions as a node:\\n$$\\n\\\\text{Episode}_i = \\\\{ \\\\mathbf{C}_i, \\\\mathbf{T}_i, \\\\mathbf{L}_i, \\\\mathbf{e}_i \\\\}\\n$$\\nIn this model, \\\\( \\\\mathbf{C}_i \\\\) stands for characters, \\\\( \\\\mathbf{T}_i \\\\) represents temporal markers, \\\\( \\\\mathbf{L}_i \\\\) signifies location, and \\\\( \\\\mathbf{e}_i \\\\) encapsulates events. Semantic edges \\\\( \\\\mathbf{S}(v_i, v_j) \\\\) connect nodes that share common elements, while temporal edges \\\\( \\\\mathbf{T}(v_i, v_j) \\\\) map the chronological flow of experiences. To enhance retrieval efficiency, we apply a dynamic clustering mechanism that organizes similar episodes based on both temporal and contextual similarities.\\nOur retrieval system supports three query types: \\\"what\\\" (contextual), \\\"when\\\" (temporal), and \\\"where\\\" (spatial), as outlined by Stephen et al., and Holland and Smulders (2011), enabling human-like memory recall. This is particularly valuable for applications in social companion robotics, aiding elderly or memory-impaired individuals.\\n\\nThe ability to recall episodic memories is essential for such a cognitive agent, just as it is for human cognition, linking personal experiences to specific temporal and spatial contexts. 
Existing memory models often struggle with continuous, time-series data, which limits their ability to simulate episodic recall effectively. Many of these systems fail to store dialogues as multimodal data, preventing them from capturing the rich, context-dependent nature of human memory. Additionally, most existing approaches store a single experience as one isolated episode and lack a mechanism for retrieving information across multiple experiences, hindering their ability to integrate knowledge over time.\\n\\nFurthermore, current episodic memory systems are typically restricted to performing a specific, predefined task, limiting their flexibility and adaptability. In contrast, our system is designed to be more versatile, capable of handling a variety of tasks and dynamically adapting to new scenarios, making it far more suited for real-world applications that require memory integration across different contexts and time periods. \\n\\nBy integrating multimodal data and time-related information into the episodic memory framework, our model extends experience memory localization, recommendation, and question answering. It provides a robust foundation for adaptable, scalable systems capable of operating without frequent retraining, applicable to real-world scenarios such as social companion robots and autonomous task planning systems.\\n\\n### **Contributions:**\\n1. Temporal connections are managed without complex pattern learning, enabling adaptive reasoning and retrieval of subgraphs from past experiences.\\n2. The system incrementally stores and retrieves episodic memories, dynamically clustering them based on temporal and contextual affinities.\\n3. A multi-edge graph framework optimizes path traversals for dynamic memory retrieval and personalized recommendations across subjective timescales.\\n4. 
A new dataset is introduced to improve episodic memory question answering, enhancing the agent's ability to respond to queries based on past events.\\n\\nOur model\\u2019s versatility is demonstrated through comparisons with existing systems that require retraining. It handles various dataset types, including visual, multimodal, and text-based data, and excels in temporal reasoning even without explicit timestamps, addressing complex memory retrieval tasks across diverse applications.\\n\\n[^1]: Tulving, E. (1972). *Episodic and semantic memory*. In *Organization of Memory* (pp. 381-403). Academic Press.\", \"title\": \"Insufficient Motivation: The introduction section does not adequately establish the necessity of this system or why it improves upon existing learning frameworks for cognitive agents. Additional motivation for the need of an episodic memory for a cognitive agent would help contextualize EMCA's contributions\"}", "{\"title\": \"thanks for the detailed replies!\", \"comment\": \"which to be frank were on the verge of overwhelming (with >50 emails)\\n\\nMany of the details on how the method works now became clear through the replies. I.e., the HOW is now clear. For me the method still is a very complex system with many different components that were integrated - and that in the end seems to work quite well. However, the reasoning behind the design choices remains vague at best, i.e., the WHY is neither properly explained nor properly backed up experimentally. Many choices still seem quite specific, which still raises the question on how general the method is and whether these choices are an integral part of the method or whether they were just a convenient choice and could easily be swapped out.\\n\\nThe focus of the paper (i.e., not really robot learning) now becomes a lot more clear, which makes the paper even further away from my core area of expertise. 
For me the more general insights (beyond 'yipee it works') are still too limited to warrant publication.\"}", "{\"comment\": \"### Union of Tacoustic and Tvoiced\\n\\nIn combining Tacoustic and Tvoiced, the methodology for performing this union is a concatenation of both entities aligned to the same timestamp. \\\\( \\\\text{Tvoiced} \\\\) refers to the entire audio the agent receives, which includes the dialog and acoustics components. To make this part clearer, we will update it as follows:\\n\\n### Processing of Audio Data in Episodic Memory\\n\\nAudio data, including dialogs and acoustics, plays a crucial role in constructing episodic memory. Dialogs provide linguistic and contextual information, while acoustics capture environmental and emotional cues. These elements are integrated as:\\n\\n$$ A(t) = D(t) + C(t) $$\\n\\nwhere \\\\( A(t) \\\\) is the total audio data at time \\\\( t \\\\), \\\\( D(t) \\\\) represents the dialog, and \\\\( C(t) \\\\) represents the acoustics.\\n\\n#### Extraction of Acoustic Data using Mel Spectrograms\\n\\nAcoustic data is transformed into Mel spectrograms, which emphasize perceptually relevant frequencies. 
The Mel spectrogram \\( M(t, f) \\) is computed as:\\n\\n$$ M(t, f) = \\\\log\\\\left(\\\\sum_{k} \\\\left| X(t, k) \\\\right|^2 \\\\cdot H(f, k)\\\\right) $$\\n\\nwhere \\\\( X(t, k) \\\\) is the magnitude of the Short-Time Fourier Transform (STFT) at time \\\\( t \\\\) and frequency \\\\( k \\\\), and \\\\( H(f, k) \\\\) is the Mel filter bank mapping linear frequencies to the Mel scale.\\n\\n#### Extraction of Verbal Cues from Audio Data\\n\\nVerbal cues are extracted by applying the Short-Time Fourier Transform (STFT) and converting the spectrum to the Mel scale as:\\n\\n$$ M(f) = 2595 \\\\log_{10}\\\\left(1 + \\\\frac{f}{700}\\\\right) $$\\n\\nThe Mel spectrogram is then derived as:\\n\\n$$ \\\\text{MelSpec}(m,t) = \\\\log\\\\left(\\\\sum_{f_{\\\\text{low}}}^{f_{\\\\text{high}}} |S(f,t)|^2 M(f) + \\\\epsilon\\\\right) $$\\n\\nwhere \\\\( \\\\epsilon \\\\) is a small constant to prevent issues with the logarithm.\\n\\n#### Final Audio Representation\\n\\nThe final audio representation integrates acoustic features with transcribed dialogue as:\\n\\n$$ T_{\\\\text{audio}}(t) = T_{\\\\text{acoustics}}(t) + T_{\\\\text{dialogue}}(t) $$\\n\\nThis captures both tonal properties and linguistic meaning.\", \"title\": \"Union of Tacoustic and Tvoiced: In combining Tacoustic and Tvoiced, what is the methodology for performing this union? Is Tvoiced identical or related to another variable, such as St?\"}", "{\"title\": \"Is a forgetting mechanism necessary, or would it be more like a memory aggregation mechanism so that retrieval is still efficient?\", \"comment\": \"A forgetting mechanism is indeed relevant, as it aids in maintaining an efficient retrieval process. Currently, we have implemented a forgetting mechanism that is triggered based on how frequently a specific part of memory is accessed. 
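As an illustration of the access-frequency idea, a minimal eviction rule could look like the following sketch. All names here are hypothetical and ours, not the authors' actual code: each retrieval bumps a per-entry weight, and a sweep drops entries whose weight stays below a threshold.

```python
class EpisodicStore:
    """Toy memory store: accesses bump a weight; rarely used entries are evicted."""

    def __init__(self):
        self._data = {}     # key -> stored episode/value
        self._weight = {}   # key -> access count

    def put(self, key, value):
        self._data[key] = value
        self._weight.setdefault(key, 0)

    def get(self, key):
        self._weight[key] += 1   # track how often this memory is used
        return self._data[key]

    def forget(self, min_weight):
        """Drop entries accessed fewer than `min_weight` times; return dropped keys."""
        stale = [k for k, w in self._weight.items() if w < min_weight]
        for k in stale:
            del self._data[k]
            del self._weight[k]
        return stale
```

A real system would likely combine this with recency decay or consolidation rather than a hard threshold, which is exactly the open balance discussed in this thread.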
This helps to optimize performance by discarding memory portions that are rarely or never used, preventing the memory from becoming cluttered with irrelevant information. This ensures that data which is not used is not retained. We also aim to evaluate further mechanisms that may be useful for forgetting. Addressing the balance between forgetting and memory aggregation is an ongoing task that we are actively exploring.\"}", "{\"comment\": \"To enrich the discussion, we intend to incorporate a more comprehensive comparison to established human memory models, which will significantly strengthen the paper. To achieve this, we plan to include additional references that span a wider array of memory networks and methodologies. This will include foundational works such as:\\n\\n1. **Caiming Xiong, Stephen Merity, and Richard Socher.** Dynamic memory networks for visual and textual question answering, 2016. [Link](https://arxiv.org/abs/1603.01417).\\n2. **Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer.** VLM: Task-agnostic video-language model pre-training for video understanding. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 4227\\u20134239, Online, August 2021. DOI: [10.18653/v1/2021.findings-acl.370](https://aclanthology.org/2021.findings-acl.370).\\n3. **Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo.** Learning 2D temporal adjacent networks for moment localization with natural language, 2020. [Link](https://arxiv.org/abs/1912.03590).\\n4. **Zhu Zhang, Chang Zhou, Jianxin Ma, Zhijie Lin, Jingren Zhou, Hongxia Yang, and Zhou Zhao.** Learning to rehearse in long sequence memorization, 2021. [Link](https://arxiv.org/abs/2106.01096).\\n5. **Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh.** Episodic memory question answering, 2022. 
[Link](https://arxiv.org/abs/2205.01652).\\n6. **Hung Le, Truyen Tran, and Svetha Venkatesh.** Self-attentive associative memory, 2020. [Link](https://arxiv.org/abs/2002.03519).\\n7. **Naiyuan Liu, Xiaohan Wang, Xiaobo Li, Yi Yang, and Yueting Zhuang.** Reler@zju-alibaba submission to the ego4d natural language queries challenge 2022, 2022. [Link](https://arxiv.org/abs/2207.00383).\\n8. **Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap.** One-shot learning with memory-augmented neural networks, 2016. [Link](https://arxiv.org/abs/1605.06065).\\n9. **B\\u00e4rmann, Leonard, et al.** \\\"Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience.\\\" arXiv preprint arXiv:2409.17702 (2024).\\n\\nBased on these studies, we have carried out additional analysis and experiments. We will be adding the following tables in the revised paper:\\n\\n#### Comparison with NLQ models for proving our model's capacity in retrieving correct experiences from the episodic memory graph.\\n\\n| **Method** | **IOU = 0.3 R@1** | **IOU = 0.5 R@5** | **mIOU** |\\n|-----------------------|-------------------|-------------------|----------|\\n| 2D-TAN | 4.32 | 2.60 | 5.62 |\\n| VSLNet | 8.09 | 7.03 | 7.65 |\\n| CONE | 10.55 | 7.54 | 9.04 |\\n| RELER | 12.89 | 8.14 | 10.51 |\\n| SPOTEM | 18.13 | 13.43 | 15.78 |\\n| **Ours** | **26.46** | **25.5** | **25.98**|\\n\\n*Caption: Performance comparison on episodic memory localization.*\\n\\n---\\n\\n#### More comparison with memory models will be added in the appendix:\\n\\n| **Method** | **Recall Accuracy** |\\n|-------------------------------|---------------------|\\n| Episodic Memory Verbalization | 50% |\\n| Rehearsal Memory | 36% |\\n| STM | 30% |\\n| DNC | 35% |\\n| LT-CT | 50% |\\n| **Ours** | **81%** |\\n\\n*Caption: Recall accuracy for episodic memory question answering.*\\n\\n---\\n\\nThese models were tested on the Ego4D dataset. 
The models that achieved state-of-the-art (SOTA) results in the Ego4D NLQ challenge, as well as the VLQ models, are themselves memory models.\\n\\nWe will also include Hopfield models, citing relevant literature and equations, and highlighting that these models are ineffective in the absence of recurring patterns.\", \"title\": \"Minimal Related Work Discussion: Essentially, the model is an encode-and-retrieval model with dynamic reorganization. The related work section is sparse and lacks comparisons to key formal methods like Hopfield networks or other established models in episodic memory encoding and retrieval. A more rigorous comparison to established human memory models would also strengthen the paper.\"}", "{\"summary\": \"The paper proposes a novel method for storing multi-modal episodic memories. The design takes inspiration from a model of human memory. More specifically, the paper introduces a novel type of memory graph with different types of connections. The method compares favorably against baselines in experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an interesting problem. The architecture is described in an intuitive way and seems sound and novel. The experiments are reasonably extensive with comparisons to baselines, multiple datasets, and ablations and show very promising results.\", \"weaknesses\": [\"I found the paper to have some \\\"false advertising\\\":\", \"Some of the introduction, motivation, and discussion circles around 'robots', I didn't see anything specific to robots in this paper. Yes, it could be integrated into a robot, but the method would equally well work for a body cam, social agent, etc. In robotics there is quite an extensive literature on 'lifelong learning' that covers some of the same challenges: what to memorize, how to store and to retrieve, how to generalize, what to forget, etc.\", \"The title says 'learning'. 
At least according to my definition, there is no learning in the paper. It proposes a way to store information and to retrieve it, so that would correspond to 'memorization' (=rote learning), while learning implies understanding the information and being able to apply it to new situations. The method could serve as a starting point for learning, but in the current paper that doesn't seem to be present in the method nor in the experiments.\", \"The introduction makes it sound like a general method. But I have a few doubts about that. There are quite a lot of design choices, and some of the choices in Sect. 3 seem rather specific. In the end the method is evaluated with a question answering task. What would need to change for a different task, say for a robot learning low-level movement control? Please clarify the generalizability of your method. Specifically, it would be nice if you could discuss how your approach might be adapted for different tasks (such as robot movement control), or explain any limitations in its applicability to other domains.\", \"Section 3 describes the method. It remains, however, very much on the level of HOW, rather than providing many insights into the WHY (reasoning behind design choices, consequences of design choices), and it isn't always clear what is a core part of the method and what is an implementation detail. Please provide more explanation for the key design decisions, discussing the rationale behind these choices and their potential implications. Additionally, please clearly distinguish between core methodological components and implementation details.\", \"Fig. 5 isn't very convincing (except for Arigraph)\", \"Memory requirement would be another interesting metric for comparing methods.\", \"The paper mentions the missing forgetting mechanism as a major limitation. Related to that, it also leaves the question of 'what to store' unanswered. It reads like everything is stored, even if it is effectively a duplicate. 
I believe the real challenge that needs to be solved is memory management: what to store, what to consolidate/merge, what to forget, etc. Without those a memory representation is of limited value, and it remains unclear to me how suitable the proposed architecture is for extending it in that way - or if we would be better off redesigning it from scratch.\", \"The method relies on various models to extract features and to summarize things before storing them. I don't believe there is a 'one size fits all' approach, but how to best do that depends on the retrieval task and the type of downstream tasks you have. The paper does not provide any indications on how to deal with that.\", \"There are a whole lot of design choices in this paper, and an ablation on only the modalities and search methods seems a bit limited. There also is no sensitivity analysis for e.g. the clustering thresholds\", \"This seems to be a rather complex system, which makes reproducing results very difficult, with probably quite a lot of additional implementation and setting details. I couldn't find any promises to release code (or at least a detailed appendix), which would have alleviated this concern.\"], \"questions\": [\"The paper seems to be missing all details on how the responses are generated, but the experiments are entirely based on evaluating the answers. Please provide a detailed explanation of your response generation process, as this is crucial for understanding and evaluating the experimental results.\", \"Sect. 3.2.4: This seems to result in just 'jumping' nodes. Conserving the chronological structure is a nice property, but the real question/challenge rather is on the side of the module that determines if there is any harm in 'skipping a step', e.g. when thinking about some of the use-cases 'predict ... next activity' you mention\", \"Quite a lot of unclear details\", \"Eq (1) vs (2): the difference between S_T and T_voiced is unclear\", \"How is T_acoustic generated\", \"Sect. 
3.1.2, 3.1.3, and 3.2, l. 218: in places the paper sounds like everything is represented as 'text', at others it seems to be a mixture of text and other embeddings. It would be great to explain earlier on what is stored as what\", \"l. 184: \\\"organized by place, characters, and events\\\" raises the question where those come from - that is explained later in the text\", \"l. 220: \\\"relevance of these tasks is assessed\\\" - again HOW?\", \"l. 212: what are the implications/limitations resulting from using a simple metric like cosine similarity?\", \"Eq (17) sim seems undefined\", \"Fig. 2 and Sect. 4.1: I didn't get the terminology \\\"master\\\". Maybe that term can be avoided altogether (similar to https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/m/master-slave )\", \"The paper comes across as unpolished\", \"missing and extra spaces - e.g. in the title and abstract\", \"the template uses natbib, not using the correct commands for references \\\\citep \\\\citet (and the resulting repeats of names) makes it painful to read\", \"LaTeX has different opening and closing quotation marks\", \"paragraph above \\\"Contributions\\\" is duplicated\", \"broken sentences in Sect. 5.1.2\", \"a few references with incomplete info (e.g. l. 300 \\\"as shown in 4\\\", l. 191, l.468)\", \"##After rebuttal\", \"I appreciate the very detailed replies. 
However, I still believe the paper is not ready to be published and will maintain my score (see message below).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"When this method gets deployed, there would potentially be ethics concerns - as pointed out by the authors.\\nIn the submission, it just uses datasets (established and generated from a TV series) rather than real user data, so no concerns.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"The paper seems to miss all details on how the responses are generated, but the experiments are entirely based on evaluating the answers. Please provide a detailed explanation of your response generation process, as this is crucial for understanding and evaluating the experimental results.\", \"comment\": \"We will provide full implementation details in the appendix regarding the response generation process. Responses can either be free text or specific portions of memory, depending on the task. For example, in the case of episodic memory question-answering (QA), the system retrieves relevant memory segments to form a coherent and accurate answer. The retrieval process involves identifying and selecting relevant portions of memory based on the task at hand, which could be specific to a question or an event in the memory. These tasks, such as retrieving specific memory sections or generating free-text answers, are fundamental to evaluating the model's ability to handle different types of memory queries.\"}", "{\"title\": \"Can authors provide the code and dataset for evaluation?\", \"comment\": \"We will share the code and dataset for evaluation upon acceptance of the paper.\"}", "{\"comment\": \"#### Knowledge Graphs vs Episodic Graphs\\n\\nWe evaluated replacing episodic graphs with knowledge graphs in single-agent systems, comparing their structural complexity and memory transfer efficiency. 
The following table summarizes this comparison.\\n\\n| **Metric** | **Episodic Graph** | **Knowledge Graph** |\\n|-----------------|--------------------|---------------------|\\n| Nodes | 3 | 16 |\\n| Edges | 3 | 36 |\\n| Average Degree | 2.0 | 4.5 |\\n| Density | 0.1 | 0.3 |\\n\\nAs shown in the table, **knowledge graphs** have a significantly higher complexity with more nodes, edges, and greater connectivity, making them less suitable for real-time memory retrieval.\\n\\n#### Power-Law Characteristics of Graphs\\n\\nWe also analyzed the power-law characteristics of both graph types. The episodic graph exhibits a steep decay (\\\\(\\\\alpha = 5.45\\\\)), indicating simplicity, whereas the knowledge graph shows a slower decay (\\\\(\\\\alpha = 1.57\\\\)), reflecting its increased complexity. The following table summarizes the power-law characteristics of both graphs.\\n\\n| **Metric** | **Episodic Graph** | **Knowledge Graph** |\\n|-----------------------------|-----------------------------------|------------------------------|\\n| Power-Law Exponent (\\\\(\\\\alpha\\\\)) | 5.45 (Steep decay) | 1.57 (Slower decay) |\\n| Minimal Value (\\\\(x_{min}\\\\)) | 1.0 (Valid from degree 1) | N/A |\\n| Standard Error of \\\\(\\\\alpha\\\\) | 0.341 (Moderate precision) | 0.142 (Higher precision) |\\n| Log-Likelihood Ratio (R) | 299.15 (Strong positive value) | -0.88 (Exponential fits better)|\\n| p-value | $6.03 \\\\times 10^{-172}$ (Very small) | 0.379 (No significant difference) |\\n\\n#### Performance Comparison of Graph Types in Memory Transfer Tasks\\n\\nIn conclusion, **episodic graphs** facilitate efficient memory retrieval, offering faster interactions and improved reasoning accuracy. 
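The structural metrics in the first table (nodes, edges, average degree, density) can be reproduced with a short sketch. The edge lists below are illustrative toy graphs, not the actual EMCA graphs, and the density formula assumes an undirected simple graph:

```python
def graph_metrics(edges):
    """Return (num_nodes, num_edges, average degree, density) of an undirected simple graph."""
    nodes = {v for edge in edges for v in edge}
    n, m = len(nodes), len(edges)
    avg_degree = 2 * m / n if n else 0.0          # each edge contributes 2 endpoints
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0  # m / (n choose 2)
    return n, m, avg_degree, density

# Sparse, chain-like "episodic" toy graph vs. a fully connected "knowledge" toy graph.
episodic_edges = [(1, 2), (2, 3), (3, 4)]
knowledge_edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]

print(graph_metrics(episodic_edges))   # (4, 3, 1.5, 0.5)
print(graph_metrics(knowledge_edges))  # (4, 6, 3.0, 1.0)
```

The same contrast shows up here as in the table: the denser graph has a higher average degree and density, which is what makes traversal and retrieval more expensive.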
The following table compares the performance of episodic and knowledge graphs in memory transfer tasks.\\n\\n| **Metric** | **Episodic Graph** | **Knowledge Graph** |\\n|-------------------------|--------------------|---------------------|\\n| Retrieval Time (s) | 0.5 | 3.2 |\\n| Memory Merging Time (s) | 0.8 | 4.0 |\\n| Reasoning Accuracy (%) | 85 | 46 |\\n\\nWe plan to include this section in the appendix of the paper. We were unable to explore temporal graphs, as they follow a pattern that would prove inefficient for our research work.\", \"title\": \"Surface-level Comparison with Temporal and Knowledge Graphs: The comparisons with temporal and knowledge graph structures are brief and lack depth, offering limited insight into how EMCA differs from or improves upon these existing approaches. In this section, we present a more detailed analysis comparing episodic graphs with knowledge graphs in a multi-agent environment, focusing on their structural complexity and memory transfer efficiency.\"}", "{\"comment\": \"### Section 4.1\\n\\nWhen a cognitive agent assists an individual with memory issues, it leverages past interactions to provide personalized support. The process begins by identifying and extracting the **personalized cluster** \\\\( C_p \\\\), representing the relevant memory subgraph for that individual. This subgraph consists of episodes, actions, and events associated with the person. The **personalized cluster** \\\\( C_p \\\\) is defined as: \\n\\n$$ \\nC_p = \\\\{ G_p \\\\mid \\\\text{episodes associated with person } p \\\\} \\n$$\\n\\nHere,\\n\\n$$ \\n\\\\mathbb{I}(\\\\mathcal{S}_t \\\\subset \\\\mathcal{S}) \\n$$ \\n\\nis the indicator function verifying whether the current sequence \\\\( \\\\mathcal{S}_t \\\\) appears as a subsequence in a past sequence \\\\( \\\\mathcal{S} \\\\) within \\\\( G_p \\\\). 
\\n\\nIf there are multiple matches, the most frequently occurring next event is chosen: \\n\\n$$ \\nS_{t+1} = \\\\text{mode}(\\\\{ S_{t+1} \\\\in \\\\mathcal{S} \\\\mid \\\\mathcal{S}_t \\\\subset \\\\mathcal{S}, \\\\mathcal{S} \\\\in G_p \\\\}) \\n$$ \\n\\nThis approach allows the agent to use past episodic data effectively, providing recommendations or answers based on past actions even when explicit patterns are absent. By analyzing subgraphs containing both days as nodes and events as sub-nodes, the agent identifies the most relevant next action, ensuring reliable support for individuals with memory impairments. This is especially useful in cases where the agent has been an assistant to many individuals.\", \"title\": \"I think I like what Section 4.1 is implying about combining a \\\"master's\\\" memory with that of an agent's, but it is not presented very clearly, and I do not see a connection with temporal graphs like the subsection title suggests\"}", "{\"comment\": \"#### Location weight refers to the degree of similarity between the locations of two different or neighboring episodes. It quantifies how closely the locations in the episodes are related to each other. On the other hand, location similarity refers to the comparison between the location in a given query and the location in an episode. Therefore, while location weight measures the similarity between locations across episodes, location similarity focuses on the relationship between a query's location and the location within a specific episode.\", \"title\": \"Location Weight Definition: How is \\\"location weight\\\" defined, and how does it differ from location similarity in the model?\"}", "{\"title\": \"Sect. 3.2.4: This seems to result in just 'jumping' nodes. Conserving the chronological structure is a nice property, but the real question/challenge rather is on the side of the module that determines if there is any harm in 'skipping a step', e.g. 
when thinking about some of the use-cases 'predict ... next activity' you mention\", \"comment\": \"In our approach, we aim to remove only those nodes that are never accessed, thereby maintaining the integrity of the memory structure. Each node in the graph represents a day, with subnodes corresponding to specific activities. Even if a particular activity node is removed, the agent can still access the rest of the memory structure and continue to recommend the next activity based on the available experiences.\\n\\nThis ensures that the chronological structure is preserved, and there is no significant loss of context in terms of the flow of events. The agent can still make predictions or suggest the next activity without the need for \\\"jumping\\\" through disconnected nodes. The key challenge we address is ensuring that the removal of unused nodes does not impact the agent's ability to make accurate recommendations or predictions. As a result, the system remains efficient and functional while being able to scale and adapt over time.\"}", "{\"title\": \"How was the dataset constructed? This is a major contribution that is not discussed.\", \"comment\": \"We propose a comprehensive dataset framework designed to evaluate and enhance episodic memory systems in artificial agents. This framework integrates multiple datasets, including a custom set of episodic questions based on the TV series *The Big Bang Theory*, spanning all nine seasons (181 episodes). The aim is to assess memory recall and narrative understanding in complex scenarios.\\n\\nWe introduce the *Agent Dataset*, a 10-episode time-series dataset created in Unity3D, where a virtual agent performs tasks and interacts with characters in realistic environments, simulating the role of companion robots. 
This dataset emphasizes the importance of multi-sensory inputs and task execution, challenging the agent to process and integrate information from *dialogues* and *visual cues* to maintain task order and achieve context-driven objectives.\\n\\nAdditionally, we adapted the **Ego4D dataset**, restructuring its activity sequences into simulated chronological episodes to address the original absence of time-series data\\u2014portraying an agent performing a series of activities over 30 days. We also combined group activity videos designed for active speaker recognition. This transformation enables episodic queries such as \\\"Where did I place the agricultural tool on the last day of farming?\\\", enhancing the ability to localize and retrieve temporal experiences effectively.\\n\\nTogether with the **PerLTQA** [du2024perltqapersonallongtermmemory] and **LLQA** [dolan-brockett-2005-automatically] datasets, which test essential episodic memory dimensions\\u2014**\\\"what\\\"** (context), **\\\"when\\\"** (time), and **\\\"where\\\"** (place)\\u2014this framework forms a robust benchmark for evaluating advanced episodic memory capabilities in AI systems.\\n\\n**Data Annotation**: The data was carefully annotated to tag scene information and identify characters in dialogues, ensuring that the model could recognize character presence and understand related events. This included explicitly tagging scene details for location identification and differentiating characters present in the scene versus those mentioned. Events within dialogues were also meticulously annotated to capture key details, facilitating effective memory representation beyond simple summaries. Capturing these essential details is crucial for episodic memory tasks, as it allows the agent to recall past experiences accurately. 
Each episode was annotated with 10 *what*, *when*, and *where* questions.\"}", "{\"comment\": \"#### Du refers to the similarity between the episode and the question, which plays a major role in retrieving the correct episode.\\n\\n**Similarity Function in Line 283: Which similarity function is used in line 283, and what factors are considered?**\\nWe use cosine similarity between the query and the episode. We will update this part of the paper for clarity.\", \"title\": \"Definition of the Set Du: How is the set Du defined in the context of the framework?\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for making the large number of edits and the detailed (and somewhat overwhelming) responses. The authors have made a lot of the paper clearer, and the edited manuscript that was uploaded looks better.\\n\\nSimilar to Reviewer YT1b24, I better understand what the method is doing, but I think a core part of research is to understand why these components are important. The intuitions behind these design decisions are not described. Some comparison with other baselines is made in Section 6.2, but it doesn't really provide insights on what is required for someone to build similarly strong episodic memory retrieval systems.\\n\\nThe framing of the dataset is better, but I was left unsatisfied with how the authors discussed their method. This is mostly due to the over-engineered-ness of the method while not explaining the reasons for it. 
I will keep my score the same.\"}", "{\"comment\": \"| **Model Component** | **Hyperparameter** | **Value** |\\n|-----------------------------|---------------------------|--------------------------------------|\\n| **Event Extraction (BART)** | Encoder-Decoder Layers | 6 (BASE), 12 (LARGE) |\\n| | Hidden Size | 1024 |\\n| | FFN Size | 4096 |\\n| | Dropout | 0.1 |\\n| | Learning Rate | $4 \\\\times 10^{-5}$ (BASE), $2 \\\\times 10^{-5}$ (LARGE) |\\n| | Max Tokens per Batch | 1100 |\\n| | Margin Coefficient | 1 |\\n| **Vision Encoder (ViT)** | Patch Size | 16 |\\n| | Resolution | $224 \\\\times 224$ |\\n| | Latent Space Dim. | 512 |\\n| | Transformer Layers | 12 |\\n| | Width | 768 |\\n| | LayerScale Init. | 0.1 |\\n| | Dropout Path Rate | Configurable |\\n| **QA System (BERT)** | Learning Rate | $1 \\\\times 10^{-5}$ |\\n| | Max Sequence Length | 512 |\\n| | Document Stride | 512 |\\n| | Train Batch Size | 8 |\\n| | Gradient Accum. Steps | 2 |\\n| | Epochs | 2 |\\n| | Mixed-Precision Opt. | fp16 (O2) |\\n| **Temporal Tagging** | Max Sequence Length | 128 |\\n| | Batch Size | 32 |\\n| | Learning Rate | $5 \\\\times 10^{-5}$ |\\n| | Dropout | 0.1 |\\n| | Weight Decay | 0.01 |\\n| | Epochs | 10 |\"}", "{\"title\": \"Formatting/Clarity issues\", \"comment\": \"**Check for spaces after periods or colons throughout the paper.**\\n#### In the revised version of the paper, we will ensure consistent spacing throughout to improve readability and adhere to proper formatting standards.\\n**Line spacing is odd in much of the paper**\\n#### We appreciate your observation about the line spacing. We will review and adjust the line spacing in the paper to ensure it is consistent and visually comfortable for readers, enhancing the overall clarity and flow of the document.\\n**Figure placement should ideally be on the top or bottom of a page, not in the middle with paper text above and below the paper. 
This makes the paper difficult to follow**\\n#### We agree that figures should be positioned in a way that supports the flow of the paper. In the revised version, we will reposition figures to the top or bottom of pages.\\n**Figure 4, the legend has oddly shaped circles**\\n#### We have noted your concern regarding the oddly shaped circles in the legend of Figure 4. We will revise the legend to ensure that the circles are properly shaped.\\n**Figure 5's result is good, but it should not be a line graph with x axis being method and lines being the dataset. Instead it should be datasets on the x-axis and methods on the y-axis. The results in section 5.1.1 are all discussed in a single paragraph. Is that all the main results?**\\nThe results table in Section 5.1.1 is:\\n### Table 1: Time taken by different methods for various datasets (in seconds).\\n\\n| **Method** | **Big Bang Theory (s)** | **PerLTQA (s)** | **Agent (s)** | **LLQA (s)** |\\n|---------------|-------------------------|-----------------|---------------|--------------|\\n| Our Model | 600 | 300 | 180 | 240 |\\n| EMR | 900 | 600 | 480 | 540 |\\n| GraphRag | 4200 | 3600 | 3300 | 3480 |\\n| GNN Rag | 4200 | 3600 | 3000 | 2700 |\\n| TempoQA | 1800 | 1800 | 2400 | 2100 |\\n| Arigraph | 84000 | 72000 | 10800 | 9000 |\\n\\nRegarding the issue of overlapping axis lines in the graph, we acknowledge that this occurs due to the large scale of the \\\"Arigraph\\\" values. This occurs even when the axes are reversed. Since the values vary significantly, especially for methods like \\\"Arigraph,\\\" the axis scaling stretches the range too far. 
\\n**I would recommend splitting this up into a few bolded mini-sections and showing the main takeaway of each figure along with highlights on how the method performed, possibly with qualitative results.**\\n#### We will update this section in the revised version of the paper.\"}", "{\"comment\": \"### Similarity Calculation in Equation 8\\n\\nEquation 8 is intended to measure the similarity between an event and multiple episodes. The similarity function used for this calculation is **cosine similarity**, which quantifies the similarity between two vectors based on their orientation rather than their magnitude.\\n\\nIn Equation 8, we compute the cosine similarity between the feature representation of the event and that of each episode; by comparing these similarity scores, the model identifies which episodes are most relevant to the given event.\\n\\nWe will provide further clarification of this process in the revised version of the paper to ensure that the similarity calculation and its application are more clearly explained.\", \"title\": \"Similarity Calculation in Equation 8: Equation 8 is intended to measure similarity between an event and multiple episodes, but it isn\\u2019t clear how it accomplishes this. Could you clarify how this calculation works?\"}", "{\"title\": \"Fig. 2 and Sect. 4.1: I didn't get the terminology \\\"master\\\". Maybe that term can be avoided altogether (similar to https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/m/master-slave )\", \"comment\": \"We have replaced the term \\\"master\\\" with \\\"individual\\\" and updated the section as follows:\\n### Personalized Support Using Episodic Memory\\n\\nWhen a cognitive agent assists an individual with memory issues, it leverages past interactions to provide personalized support. 
The process begins by identifying and extracting the **personalized cluster** \\\\( C_p \\\\), representing the relevant memory subgraph for that individual. This subgraph consists of episodes, actions, and events associated with the person. The **personalized cluster** \\\\( C_p \\\\) is defined as:\\n\\n$$\\nC_p = \\\\{ G_p \\\\mid \\\\text{episodes associated with person } p \\\\}\\n$$\\n\\nwhere \\\\( G_p = (V_p, E_p) \\\\) represents the subgraph containing episodes \\\\( V_p \\\\) and edges \\\\( E_p \\\\) related to person \\\\( p \\\\), preserving all temporal and contextual information.\\n\\nOnce the **personalized cluster** \\\\( C_p \\\\) is extracted, the agent identifies **event clusters** \\\\( C_e \\\\) within the subgraph, representing various actions performed by the individual over time:\\n\\n$$\\nC_e = \\\\{ E_e \\\\mid \\\\text{events associated with episodes in } G_p \\\\}\\n$$\\n\\nThe agent checks for the most recent sequence of events, denoted \\\\( \\\\mathcal{S}_t = (S_1, S_2, \\\\dots, S_t) \\\\). To provide recommendations, the agent searches the past memory subgraph for instances where the task was previously performed and identifies the subsequent event. 
The next action \\\\( S_{t+1} \\\\) is determined by:\\n\\n$$\\nS_{t+1} = \\\\text{argmax}_{S \\\\in G_p} \\\\mathbb{I}(\\\\mathcal{S}_t \\\\subset \\\\mathcal{S})\\n$$\\n\\nwhere \\\\( \\\\mathbb{I}(\\\\mathcal{S}_t \\\\subset \\\\mathcal{S}) \\\\) is the indicator function verifying if the current sequence \\\\( \\\\mathcal{S}_t \\\\) appears as a subsequence in a past sequence \\\\( \\\\mathcal{S} \\\\) within \\\\( G_p \\\\).\\n\\nIf there are multiple matches, the most frequently occurring next event is chosen:\\n\\n$$\\nS_{t+1} = \\\\text{mode}(\\\\{ S_{t+1} \\\\in \\\\mathcal{S} \\\\mid \\\\mathcal{S}_t \\\\subset \\\\mathcal{S}, \\\\mathcal{S} \\\\in G_p \\\\})\\n$$\\n\\nThis approach allows the agent to use past episodic data effectively, providing recommendations or answers based on past actions even when explicit patterns are absent. By analyzing subgraphs containing both days as nodes and events as sub-nodes, the agent identifies the most relevant next action, ensuring reliable support for individuals with memory impairments.\"}", "{\"title\": \"Format and Extraction of Visual Details: What is the format of the visual scene details (e.g., Vscene, Vplace, Vtime), and how are these extracted and integrated into the memory graph?\", \"comment\": \"Visual details are stored as visual embeddings (Below clarifications will be added in the paper)\\n### Processing of Visual Data in Episodic Memory \\n\\nVisual data processing begins by transforming each frame \\\\( F_i \\\\) into a tensor and extracting both global and local features. The scene representation is derived by aggregating the frame embeddings: \\n\\n$$ \\nV_{\\\\text{scene}} = \\\\frac{1}{N} \\\\sum_{i=1}^{N} V_{\\\\text{embed}}(F_i) \\n$$ \\n\\nwhere \\\\( N \\\\) is the number of frames. 
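The frame-averaging step above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the random array stands in for the per-frame vision-encoder outputs (e.g. 512-dimensional ViT latents), and the mean over frames implements \( V_{\text{scene}} = \frac{1}{N} \sum_i V_{\text{embed}}(F_i) \):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 512                                    # 8 frames, 512-dim latent space
frame_embeddings = rng.standard_normal((N, d))   # synthetic stand-ins for V_embed(F_i)

# Average-pool over the frame axis to get a single scene embedding.
v_scene = frame_embeddings.mean(axis=0)
print(v_scene.shape)                             # (512,)
```

Mean pooling keeps the scene vector in the same latent space as the individual frames, so it can be compared against query embeddings with the same similarity metric.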
\\n\\n#### Time and Place Details Extraction \\nTo extract time and place information, a convolutional network detects text regions, generating probability maps for the text center line (TCL) and text regions (TR): \\n\\n$$ \\n\\\\begin{pmatrix} \\nP_{\\\\text{TCL}}(x, y) \\\\\\\\ \\nP_{\\\\text{TR}}(x, y) \\n\\\\end{pmatrix} \\n= \\\\sigma\\\\left( \\n\\\\begin{pmatrix} \\nW_{\\\\text{TCL}} \\\\\\\\ \\nW_{\\\\text{TR}} \\n\\\\end{pmatrix} \\n\\\\cdot F_{\\\\text{feature}}(x, y) \\n\\\\right) \\n$$\\n\\nA thresholding operation is applied:\\n\\n$$ \\nP_{\\\\text{filtered}} = \\\\{(x, y) \\\\mid P_{\\\\text{TCL}}(x, y) \\\\geq T_{\\\\text{TCL}} \\\\text{ and } P_{\\\\text{TR}}(x, y) \\\\geq T_{\\\\text{TR}} \\\\} \\n$$\\n\\nText recognition is performed using a softmax layer:\\n\\n$$ \\n\\\\hat{y}_t = \\\\text{Softmax}(W \\\\cdot h_t + b) \\n$$ \\n\\n#### Person Embedding Extraction \\nTo extract person embeddings, a person's region is detected and cropped. This region is processed through a feature extractor to generate and store the embedding, enabling future use in episodic memory tasks.\"}", "{\"summary\": \"The paper presents Episodic Memory for Cognitive Agents (EMCA), a framework designed to support memory retention and retrieval in cognitive agents. EMCA models episodic memory using a graph-based structure that incrementally stores and organizes multimodal experiences\\u2014such as speech, vision, and non-verbal cues\\u2014without pre-training on specific scenarios. This approach enables the agent to keep adding new experiences continuously from data. This supposedly allows for flexible temporal reasoning across different datasets. EMCA\\u2019s dynamic memory graph builds semantic and temporal connections, enabling context-aware retrieval and clustering of memories based on query relevance. The framework aims to improve task execution and reasoning by recalling contextually significant past events. 
Empirical tests reported indicate that EMCA adapts to real-world data, demonstrating good recall in unpredictable settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"I think the paper is trying to address a very important problem: how an autonomous agent can keep memorizing new experiences and then recall those flexibly based on the context, question, or query. Specifically, I see two main strengths.\", \"pretrained_models\": \"The model builds on pretrained models that help with extraction of components, but this also means the system does not require pre-training on every specific scenario from scratch, which is a significant strength as it allows flexibility across contexts.\", \"dataset_diversity\": \"The authors evaluated the system on multiple datasets, demonstrating a broad application range, although it should be noted that most of these datasets were developed by the authors.\", \"weaknesses\": \"Despite tackling an important problem, the paper suffers from serious clarity and coherence issues that obscure its contributions and weaken its scientific rigor. The presentation is fragmented, key concepts are inadequately explained, and essential technical details are missing, all of which make it challenging to assess the model\\u2019s validity and potential impact. Specifically, the weaknesses include:\\n1. The authors claim EMCA encodes data in a way that resembles human memory, but there is no evidence or detailed explanation to support this claim from a neural encoding perspective. Such claims should be toned down. You should instead emphasize the 'what', 'where', and 'when' organisation from a psychological perspective of episodic memory. \\n\\n2. Insufficient Motivation: The introduction section does not adequately establish the necessity of this system or why it improves upon existing learning frameworks for cognitive agents. 
Additional motivation for the need for an episodic memory in a cognitive agent would help contextualize EMCA's contributions.\\n\\n\\n3. Minimal Related Work Discussion: Essentially, the model is an encode-and-retrieval model with some dynamic reorganisation. The related work section is sparse and lacks comparisons to key formal methods like Hopfield networks or other established models in episodic memory encoding and retrieval. A more rigorous comparison to established human memory models would also strengthen the paper.\\n\\n4. Unclear Implementation and Integration Details: Although multiple models and methods are mentioned, the paper lacks a cohesive description of how these components integrate within the system. Critical details such as model architecture, parameter settings, and processing pipelines are absent, making it difficult to assess or replicate the work. A system architecture diagram, a table of key parameters, or pseudocode for the main processing pipeline would help.\", \"vague_statistical_estimation_methods\": \"The paper mentions the use of statistical methods for estimating missing timestamps but does not specify which methods were used, leaving an important aspect of the framework unexplained.\", \"surface_level_comparison_with_temporal_and_knowledge_graphs\": \"The comparisons with temporal and knowledge graph structures are brief and lack depth, offering limited insight into how EMCA differs from or improves upon these existing approaches.\", \"undefined_terminology_and_variables\": \"Certain terms and variables (e.g., \\\"key events\\\", \\\"location weight,\\\" \\\"subjective temporal timescales\\\") are introduced without sufficient explanation or definition, reducing clarity.\", \"overreliance_on_custom_datasets\": \"While the use of various datasets to evaluate EMCA is a strength, most of these datasets were developed by the authors, which could indicate potential biases in testing and validation.\", 
\"limited_explanation_of_retrieval_policy\": \"The retrieval policy and memory clustering mechanisms, while central to EMCA\\u2019s functionality, are described only briefly. A more detailed explanation would clarify how these mechanisms adapt to different query types and scenarios.\", \"questions\": \"Statistical Estimation Methods for Timestamps: Which specific statistical methods are used for estimating timestamps in the absence of explicit temporal markers?\", \"union_of_tacoustic_and_tvoiced\": \"In combining Tacoustic and Tvoiced, what is the methodology for performing this union? Is Tvoiced identical or related to another variable, such as St?\", \"format_and_extraction_of_visual_details\": \"What is the format of the visual scene details (e.g., Vscene, Vplace, Vtime), and how are these extracted and integrated into the memory graph?\", \"defining_key_events_and_hierarchical_organization\": \"How are key events identified, and what hierarchical structure is used to organize these events?\", \"relation_between_taudio_and_tcombined\": \"Is Taudio equivalent to Tcombined, or is there another relationship between these variables?\", \"task_categories_for_text_summarization\": \"How are text summaries grouped into broader task categories (e.g., meetings, lunches)? What criteria and process are used to define these categories?\", \"similarity_calculation_in_equation_8\": \"Equation 8 is intended to measure similarity between an event and multiple episodes, but it isn\\u2019t clear how it accomplishes this. Could you clarify how this calculation works?\", \"location_weight_definition\": \"How is \\\"location weight\\\" defined, and how does it differ from location similarity in the model?\", \"temporal_parameter_in_equation_14\": \"In Equation 14, should the parameter be (t-k) instead of just t? If not, what purpose does the current form of the equation serve?\\n\\nMeaning of \\\"Agent Comprehends\\\": In line 274, it says the \\\"agent comprehends\\\" something. 
Does this imply processing by a language model, and if so, could you clarify which model is used?\", \"definition_of_the_set_du\": \"How is the set Du defined in the context of the framework?\", \"similarity_function_in_line_283\": \"Which similarity function is used in line 283, and what factors are considered?\", \"role_of_w_and_l_functions\": \"In line 287, the w and l functions are mentioned. Could you elaborate on their roles within the memory retrieval mechanism?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"#### The parameter in Equation 14 should not be \\\\( t - k \\\\), as the current form of the equation serves a different purpose. The temporal parameter \\\\( t \\\\) is used to update the temporal structure and connect the incoming episode to the most recent episode in the graph. Specifically, it ensures that the temporal continuity is maintained by linking the new episode to the current latest episode, rather than referencing a past episode that is \\\\( k \\\\) steps back. Thus, the use of \\\\( t \\\\) instead of \\\\( t - k \\\\) allows for the real-time temporal updating of the graph, preserving the sequence and ensuring the temporal relationships between episodes are consistently updated as new data is incorporated.\", \"title\": \"Temporal Parameter in Equation 14: In Equation 14, should the parameter be (t-k) instead of just t? If not, what purpose does the current form of the equation serve?\"}", "{\"title\": \"What are the computational costs for memory retrieval at different scales?\", \"comment\": \"To address the query about the scalability of incremental storage and retrieval as the number of episodes increases, we conducted a comparison between different episode counts (10, 50, 100, and 181 episodes). In terms of accuracy, all values remain consistent. 
However, in terms of retrieval time, there is a slight increase as the number of episodes grows. The retrieval times for different episode counts are as follows:\\n\\n| **Number of Episodes** | **Retrieval Time (ms)** |\\n|------------------------|-------------------------|\\n| 10 | 9.11 |\\n| 50 | 11.0 |\\n| 100 | 11.5 |\\n| 181 | 12.0 |\\n\\nThis shows that while retrieval time does increase, the increase is quite modest. The clustering mechanism plays a crucial role in ensuring that retrieval times remain low, even as the number of episodes increases. This highlights the efficiency of our method in scaling with the number of episodes.\\n\\nThe computational costs for memory retrieval at different scales change only slightly due to the clustering approach implemented in our system. Initially, before clustering was applied, the retrieval time was significantly longer for all tasks. However, the introduction of clustering allowed for more efficient traversal and reduced retrieval time, even as the number of episodes increased.\"}", "{\"title\": \"In Table 1, what metric is being used? It does not say in the caption or the table. It should be re-iterated in the table itself.\", \"comment\": \"The metric used in Table 1 is recall accuracy. We will update the table and its caption to clearly indicate this metric for better clarity.\"}", "{\"title\": \"What is the real-time performance?\", \"comment\": \"The real-time performance of the system depends on several factors, including the size and complexity of the dataset, the structure of the memory graph, and the efficiency of the retrieval algorithms. Specifically:\\n#### **Retrieval Time**: Based on the experiments, the retrieval time for queries remains relatively stable, with only a slight increase as the number of episodes or experiences grows. For example, with 10 episodes, retrieval time is around 9.11 milliseconds, while for 181 episodes, it increases to 12 milliseconds. 
This increase is marginal, indicating that the system can handle larger datasets without significant degradation in performance.\\n#### **Scalability**: The graph structure\\u2019s scalability is dependent on dataset variability. If the experiences are similar, fewer new connections are formed, leading to a more cohesive and less complex graph, which supports faster retrieval times. Conversely, with more diverse experiences, the graph can become more sparse, requiring more time for traversal, though the increase in retrieval time is not exponential.\\n#### **Real-Time Execution**: Despite the complexity of the underlying graph, the retrieval process remains fast and can support real-time performance for most queries, especially with the use of clustering mechanisms that enhance retrieval speed.\"}", "{\"comment\": \"### Defining Key Events and Hierarchical Organization\\n\\nEach node in the episodic memory represents a day, with subnodes capturing the activities within that day. The main node summarizes the day's events, while subnodes encode specific event details. Joint embeddings, integrating place, character, and event information, are stored at both the event and day levels. This hierarchical structure allows for efficient encoding and retrieval of past interactions, enhancing the agent's decision-making and contextual understanding.\\n\\n#### Key Events:\\n\\nKey frame extraction involves identifying representative frames from a video that show significant visual or temporal changes, minimizing redundancy while preserving essential information. The process typically includes preprocessing frames to enhance features, computing similarities, thresholding, and windowing to select frames based on spatio-temporal changes. 
This approach leverages vision transformers to assess low-level and mid-level features, ensuring that the extracted key frames align with human perception and support downstream tasks.\\n\\nFor dialogues, we train the model on a dataset of conversations, where each conversation is annotated to extract relevant events. Using BART, we convert the dialogues into a third-person perspective to facilitate the extraction of significant events.\", \"title\": \"Defining Key Events and Hierarchical Organization: How are key events identified, and what hierarchical structure is used to organize these events?\"}" ] }
0iAZYF9hrl
Disentangled representations of microscopy images
[ "Jacopo Dapueto", "Vito Paolo Pastore", "Nicoletta Noceti", "Francesca Odone" ]
Microscopy image analysis is fundamental for different applications, from diagnosis to synthetic engineering and environmental monitoring. In the last few years, the number of available images has been constantly growing, thanks to technological advancements, pushing toward the development of automatic image analysis methods based on deep learning. Although deep neural networks have demonstrated great performance in this field, interpretability — an essential requirement for microscopy image analysis — remains an open challenge. This work proposes a Disentangled Representation Learning (DRL) methodology to enhance model interpretability for microscopy image classification. Exploiting benchmark datasets coming from three different microscopic image domains, including plankton, yeast vacuoles, and human cells, we show how a DRL framework, based on transfer learning from synthetic features, can provide a good trade-off between accuracy and interpretability in this domain.
[ "Microscopy images", "Disentangled representations", "Transfer learning", "Interpretability" ]
Reject
https://openreview.net/pdf?id=0iAZYF9hrl
https://openreview.net/forum?id=0iAZYF9hrl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "waFMduq3aS", "uwAGE0zEy3", "lLqyQBxYI2", "geE18bCVuD", "edncii5gCL", "d3LD03Jdr2", "cLYUKa6aQ5", "ZhPAJnqBKx", "Yfeowvgrmj", "RvaybwZoOx", "PuHrGhxN1W", "O1oqOLNsSI", "9C1rtUOplv", "9BsZkI03Da", "3hBR4KW6vN", "30FQTGk4ka" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1732573654557, 1733154159719, 1730637806735, 1732573434231, 1733182194990, 1733167008700, 1732573884371, 1730667639224, 1732667757936, 1730226923004, 1734709459853, 1732573278950, 1732573025382, 1737524123141, 1730453993738, 1733247906720 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_ijPz" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_umB6" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_ijPz" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_MkND" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_MkND" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_ijPz" ], [ "ICLR.cc/2025/Conference/Submission11420/Area_Chair_orkH" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11420/Reviewer_MZMx" ], [ "ICLR.cc/2025/Conference/Submission11420/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their observations. In the following we answer each concern separately.\\n\\n\\n* > A significant weakness as it seems ... 
over existing methods.\\n\\nPlease see the general answer\\n\\n* > The contributions of the paper in terms of novelty are unclear ... and originality of the work.\\n\\nSee the general answer\\n\\n\\n* > 1. There are instances of informal languages, such as the use of \\u201cthanks.\\u201d\\n\\nWe thank the reviewer for the observation. We will carefully revise the style of the language.\\n\\n* > 2. The text contains multiple errors at the word ... rigorous academic polish.\\n\\nWe respectfully reject any accusation of having used ChatGPT inappropriately. However, we know that Sec. 2.2 is quite dense with citations that may affect readability: we will move citations to the end of the sentences and revise the section.\\n\\n* > 3. Figures appear low-resolution ... balanced accuracy.\\n\\nOn the quality of the pictures, we checked and it is due to a \\u201ccompression\\u201d in OpenReview which is out of our control (the paper we submitted does not have this issue). We will carefully revise the text of the captions to ensure all the necessary details are present.\\n\\n* > 4. The use of multiple highlight types ... more accessible.\\n\\nWe thank the reviewer for the observation. We will reduce the use of highlighting to improve readability. \\n\\n* > 5. Important metrics are ... and overall clarity.\\n\\nUnfortunately, it is not possible to compactly describe the metrics with simple formulas, as they are quite structured algorithms (especially OMES and DCI). We provided the code as supplementary material to allow the work to be reproduced. We will enrich the captions with the necessary details for readability.\\n\\n* > What specific contributions does ... beyond existing techniques.\\n\\nSee the general answer.\\n\\n* > What are alternative approaches the authors could have used for comparison?\\n\\nSee the general answer\"}", "{\"title\": \"Official Comment by Reviewer ijPz\", \"comment\": \"I thank the authors for their response. 
Unfortunately, many of my concerns were not addressed. For this reason, I will maintain my current scores.\"}", "{\"summary\": \"This paper addresses the interpretability challenge in microscopy image analysis using deep learning approaches. The authors propose a novel methodology based on Disentangled Representation Learning (DRL) to enhance model interpretability while maintaining classification performance. The approach leverages transfer learning from synthetic features and is validated across three diverse microscopy domains: plankton, yeast vacuoles, and human cells. The growing volume of microscopy images due to technological advances has necessitated automated analysis methods, yet interpretability remains crucial for practical applications in fields such as diagnosis and environmental monitoring. The authors demonstrate that their DRL framework successfully balances the trade-off between model accuracy and interpretability in microscopy image classification tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The manuscript is well-written and easy to follow, with clear organization and logical flow.\\n2. The application of weakly-supervised DRL to real-world image analysis represents a promising and valuable research direction.\", \"weaknesses\": \"1. The scope of this work appears too narrow, focusing solely on microscopy images. The proposed approach might be more convincing if demonstrated on natural images as well.\\n2. The authors fail to adequately justify why DRL should be specifically applied to microscopy image analysis. Furthermore, they do not clearly articulate whether this specific application domain poses new challenges or requirements for DRL that could lead to innovative solutions. 
The authors' insights into these aspects are not well presented.\\n3. Given the lack of compelling insights, this work appears to be primarily an application of existing DRL methods without significant methodological or theoretical innovation. This level of contribution may not align with ICLR's focus on novel methodological and theoretical advances in machine learning.\\n4. The paper appears to lack comparative experiments. While the disentanglement scores might be novel evaluation metrics, the absence of comparisons for classification performance is particularly concerning and unreasonable.\", \"questions\": \"Referring to the weaknesses noted above, I find the claimed contributions of this paper not sufficiently convincing. Could the authors provide a more compelling explanation of their main contributions, particularly addressing:\\n1. Why DRL is specifically suited for microscopy image analysis.\\n2. What novel challenges or requirements this domain brings to DRL.\\n3. How their approach advances the theoretical or methodological aspects of DRL beyond simple application.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their comments. In the following we answer each concern separately.\\n\\n* > 1. The scope of this work ... natural images as well. 2. The authors fail to ... are not well presented.\\n\\nPlease see the general answer.\\n\\n* > 3. Given the lack of compelling insights ... advances in machine learning.\\n\\nSee the general answer\\n\\n* > 4. The paper appears to ... concerning and unreasonable.\\n\\nSee the general answer\\n\\n* > 1. Why DRL is specifically... 2. What novel challenges... 3. How their approach advances... beyond simple application.\\n\\nWhile these issues have been addressed more extensively in the general answer, we also provide a summary here:\\n1. 
From the methodological point of view, applying DRL directly to biological data is unfeasible because of the lack of FoV annotations, their dependence, etc. We study the application of the transfer methodology to microscopy datasets because they exhibit the challenges of real data while keeping the complexity of FoVs under control. We remark that DRL has hardly been applied to real data to disentangle most of the FoVs and obtain a finer and more complete representation of real data.\\n2. Our advancement is not on theoretical aspects of DRL but on the application of DRL to real images, which has been underexplored so far. In this sense, we believe the work to be of interest to the ICLR conference, where among the topics of interest in the call for papers we find the applications to physical sciences (including biology).\"}", "{\"title\": \"Official Comment by Reviewer ijPz\", \"comment\": [\"The novelty of the work remains unclear to me. In my opinion, this study appears to be an application of [1] to certain microscopy datasets.\", \"The evaluation metrics are not clearly explained. Using classification to assess the quality of disentanglement lacks clarity.\", \"I encourage the authors to adapt existing methods to establish a baseline for comparing the performance of their model.\", \"[1] Dapueto et al. \\\"Transferring disentangled representations: bridging the gap between synthetic and real images\\\", arXiv:2409.18017: NeurIPS 2024 (accepted)\"]}", "{\"comment\": \"Thank you for your time in considering our response. Could you be a bit more specific about what concerns we haven\\u2019t addressed?\"}", "{\"comment\": \"We thank the reviewer for their comments. In the following we answer each concern separately.\\n\\n* > The proposed approach is not well explained ... how is the model fine-tuned.\\n\\nWe rely on weakly supervised methods that require less annotation to be trained. 
Moreover, we chose VAE-based models because their simplicity allows us to analyze the effectiveness of the disentanglement more intuitively. Our hypothesis is that the observations made on VAEs can generalize to alternative DRL methods, such as [1, 2, 3], which are more complex and hence more powerful than simple VAEs. \\nFuture work will be devoted to empirically assessing this hypothesis.\\n\\nAs for the fine-tuning, once the (weakly-supervised) Ada-GVAE model has been trained on the Source dataset, it is finetuned on the (unsupervised) Target using a Beta-VAE model. \\nWe will clarify it in the main document.\\n\\n* > The difference between ... is unclear.\\n\\nPlease see the general answer\\n\\n* > The authors claim that the disentanglement ... and empirically evidenced.\\n\\nThe theoretical transferability of disentangled representations learnt from synthetic images has been proved in [4]. Empirically showing that we can transfer DR to more complex real data is the goal of this work.\\n\\n* > The paper is not well organized ... \\\"DISENTANGLEMENT EVALUATION\\\".\\n\\nConcerning the related works, it is true that a dedicated section is missing, but this is because we opted to provide an account of the relevant literature in the introduction. \\nOn the second issue, the reviewer is right; we will rename Sec. 2.2 as \\u201cDISENTANGLEMENT EVALUATION METHODS\\u201d and Sec. 3.5 as \\u201cDISENTANGLEMENT EVALUATION EXPERIMENTS\\u201d\\n\\n* > In Fig. 1, what is exactly fine-tuned and how?\\n\\nIt is fine-tuned as a Beta-VAE model [5]. In the old version of Fig. 1 we identified the frozen modules with the icon of a snowflake. To be clearer, in the new version of the figure, we also marked the modules involved in the finetuning with a flame. \\n\\n* > How is an RGB image directly fed to the classifier (GBT and MLP)?\\n\\nThe RGB images are not fed directly to the classifiers. 
We first feed the RGB image to the VAE encoder to extract the representation z, and then train the classifiers using z, as shown in Fig. 1. More details can be found in Sec. 3.3.\\n\\n* > The proposed evaluation ... are unclear.\\n\\nWe adopted the MIG and DCI disentanglement evaluation metrics because they are the most widely used in previous work, while OMES is a very recent metric that has been shown to be more robust across datasets of different natures, while providing interpretability of the results and ensuring a reliable assessment of DR properties.\\n\\n* > The authors do not compare ... baseline is important.\\n\\nSee the general answer\\n\\n* > The used classifiers (GBT and MLP) ... (CNNs based for example).\\n\\nAs we reported in the paper (Sec. 2.2): \\u201cAs for the choice of the classifier, a general criterium is to select a simple model, to better appreciate the influence of the representation on the performance in the downstream task.\\u201d In other words, since our objective was to assess the power of the DR, we needed the classifiers to be simple so as not to influence the results. On the specific choices, we referred to [6].\\n\\n* > To assess the quality of the representation ... a disentangled one.\\n\\nWhile this is true, our work aims to learn a disentangled representation, which does not necessarily obtain the best results on classification. This is part of the well-known trade-off between performance and interpretability. \\nMore classical methods for classifying biological data rely on identifying and extracting handcrafted features of the images, which could indeed resemble the FoVs one may identify in the datasets. In this sense, our work aims to learn the FoVs of the microscopy data following a fully data-driven approach instead. \\n\\n* > The figures are small and the captions are not clear enough.\\n\\nWe thank the reviewer for the observation. We will make more descriptive captions.\\n\\n* > In Figure 6, the OMES ... 
to better disentanglement.\\n\\nWe thank the reviewer for the observation. \\nWe noticed a mistake in the figure, which is now fixed in the new version. After correcting the problem, the OMES score of the proposed method is comparable to the ones obtained using the RGB images. \\n\\n[1] Yang et al., \\\"Disdiff: Unsupervised disentanglement of diffusion probabilistic models.\\\", in NeurIPS 2023.\\n\\n[2] Song, Yue, et al., \\\"Flow Factorized Representation Learning.\\\", in NeurIPS 2023.\\n\\n[3] Lin et al., \\\"Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans.\\\", in ICML 2020.\\n\\n[4] Dapueto et al., \\\"Transferring disentangled representations: bridging the gap between synthetic and real images.\\\", arXiv:2409.18017, NeurIPS 2024 (accepted).\\n\\n[5] Higgins, Irina, et al., \\\"beta-vae: Learning basic visual concepts with a constrained variational framework.\\\", in ICLR 2017.\\n\\n[6] Dittadi, Andrea, et al., \\\"On the Transfer of Disentangled Representations in Realistic Settings.\\\", in ICLR 2021.\"}", "{\"summary\": \"The paper presents a study of disentangled representation learning on three microscopy image datasets. The representation learning strategy starts by training an Ada-GVAE model using a Textures-dSprites dataset introduced in this work. The dataset is supposed to reflect simple textures that could help interpret information in microscopy images. After training this model in a weakly supervised way, it is used to encode images of another domain, with optional unsupervised finetuning using a beta-VAE. 
The resulting features are low-dimensional and interpretable, and are used to train classifiers.\\n\\nThe ideas and the study are generally interesting, but the paper lacks technical novelty, is limited to a small-scale empirical evaluation only, and the experiments are insufficient to fully understand the value of the proposed strategy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper evaluates the recent ideas of disentangled representation learning using weak supervision in a more realistic application.\", \"The paper also presents an alternative to learning the disentangled representation from RGB images based on models pretrained at large scale.\", \"The paper proposes a new sprites dataset to facilitate the interpretation of microscopy images.\"], \"weaknesses\": [\"The technical contribution is limited. Beyond the sprites dataset and the use of pretrained features, many of the ideas have been presented in previous works.\", \"The experimental evaluation is limited to quantifying the impact of classifier types (GBT vs MLP) and input type (RGB vs DINO features). Many questions remain open regarding how much classification accuracy could be obtained without the proposed disentanglement procedure. Can the authors compare results of training a classifier directly with RGB images and another classifier with DINO features without any modifications? These results would help understand how difficult the tasks are and what the trade-off is between using disentanglement vs not using it.\", \"It is possible that DINO features are already disentangled and all the proposed strategy is doing is assigning names to some of the factors of variation that DINO can detect. Therefore, the disentanglement is not really happening in the VAEs but rather obtained from a model pretrained at large scale. 
What type of experiment can the authors design to test this hypothesis?\", \"If the hypothesis above is not rejected, the value of the proposed methods is limited to only annotating factors of variation rather than identifying them in a weakly supervised manner and then transferring them.\"], \"questions\": \"Can the authors clarify the questions above? Specifically, the extent to which DINO already offers a certain degree of disentanglement and how the factors of variation of interest could be identified directly from these representations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for their comments and clarifications. After careful consideration of the manuscript, the responses, and the comments by other reviewers, I will keep the current score unchanged.\"}", "{\"summary\": \"In this paper, the authors propose to use a disentangled representation learning framework to enhance model interpretability for microscopy image classification. The method is based on fine-tuning a model trained on synthetic images; the proposed framework is tested on some microscopy image datasets.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper addresses a significant challenge in representation learning: disentanglement, which plays a pivotal role in improving the interpretability of classifiers, particularly in the context of biological images.\"], \"weaknesses\": [\"The proposed approach is not well explained. Indeed, the method proposed by the authors learns a disentangled model with weak supervision using Ada-GVAE on a synthetic dataset and then fine-tunes it on\", \"microscopy datasets. 
However, it is unclear why Ada-GVAE was chosen and how the model is fine-tuned.\", \"The difference between the proposed method and Dapueto et al. is unclear.\", \"The authors claim that the disentanglement learned from synthetic images can be transferred to microscopy images; such a claim should be theoretically and empirically evidenced.\", \"The paper is not well organized; for instance, a \\\"Related Work\\\" section should be added. Two different sections (2.2 and 3.5) have the same title \\\"DISENTANGLEMENT EVALUATION\\\".\"], \"questions\": [\"In Fig. 1, what is exactly fine-tuned and how?\", \"How is an RGB image directly fed to the classifier (GBT and MLP)?\", \"In line 322, the authors state \\\"We can observe that after finetuning, it may change, nicely adapting to the specificity of the dataset, where scale and texture are more relevant.\\\". It is unclear to me why scale and texture are more relevant than \\\"scale and shape\\\", as is the case before fine-tuning.\", \"The proposed evaluation metrics (e.g., OMES) are unclear.\", \"The authors do not compare their method to any other work; having a solid baseline is important.\", \"The used classifiers (GBT and MLP) are very simple; more sophisticated ones should be used (CNN-based, for example).\", \"Inputting an RGB image to the classifier is unclear, as it is well-established that deep features (in this case the features extracted by DINO) have more important patterns.\", \"To assess the quality of the representation, the authors relied on classification. A good representation can lead to better accuracy. 
A good representation does not necessarily mean a disentangled one.\", \"Using accuracy alone to measure classification performance is not enough.\", \"The figures are small and the captions are not clear enough.\", \"In Figure 6, the OMES indicates that the proposed method does not lead to better disentanglement.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper explores the application of existing methods for learning disentangled representations to the problem of interpretable image classification in biological microscopy image analysis. The reviewers appreciated the application but unanimously recommended rejection, citing limited technical novelty.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers and authors engaged in sufficient back and forth during the discussion period, but ultimately the major weaknesses in the paper could not be addressed.\"}", "{\"comment\": \"We thank the reviewer for their comments. In the following we answer each concern separately.\\n\\n* > The technical contribution is limited...previous works\\n\\nPlease see the general answer\\n\\n* > The experimental evaluation is limited ... not using it.\\n\\nThe classification accuracy without the proposed disentanglement can be found in Appendix A.2.4. In particular, we report the accuracy of the GBT and MLP classifiers trained with the DINO features without disentanglement. We notice the classification performance is higher than the one obtained with the disentangled representation. However, we highlight the fact that the representation in that case is much larger than the disentangled one (768 vs 10 features).\\n\\n\\n* > It is possible that DINO features ... to test this hypothesis?\\n\\nTo test this hypothesis, we perform a new experiment, computing two disentanglement scores (OMES, MIG) of the DINO features w.r.t. 
the FoVs (Texture, Color, Shape, etc.) of Texture-dSprites. By employing the whole DINO feature vector (size 768) we get OMES = 0.26 and MIG = 0.03. \\nAs shown in Fig. 6, for our disentangled features OMES = 0.55 and MIG = 0.4 (Fig. 11). This experiment confirms that disentanglement does not come naturally from DINO features, while it is obtained thanks to the VAE and our proposed strategy.\\nWe specify that we did not include the metric DCI in this comparison (which we use in the paper) since it is purely based on a classifier: when the description is very compact, as in common DR, a good classification performance is a sign of robustness and explicitness, a desired property of DR; however, for larger descriptors (as in the case of DINO features) a high classification score is not a sign of high disentanglement, but rather of the reliability of the pre-trained features, which we expect. Moreover, as found in [1], DCI does not allow a fair comparison between representations of different latent dimensions.\\nNonetheless, we highlight that the usage of pre-trained deep features is one of the key contributions of this work, allowing a good transfer between synthetic FoVs and real datasets, as supported by the improvement in accuracy for our disentangled features with respect to [2] (only employing images, and as such, corresponding to our column Images in Tables 1, 2, 3 and 4). \\n\\n[1] Cao, Jinkun, et al., \\\"An empirical study on disentanglement of negative-free contrastive learning.\\\", in NeurIPS 2022.\\n\\n[2] Dapueto et al., \\\"Transferring disentangled representations: bridging the gap between synthetic and real images.\\\", arXiv:2409.18017, NeurIPS 2024 (accepted).\"}", "{\"title\": \"General answer\", \"comment\": \"We thank all reviewers for their valuable comments and suggestions. In this part, we address issues of interest for more than one reviewer.\\n\\n* Rev. **MkND**, Rev. **umB6** and Rev. 
**MZMx** express concern about *Technical novelty and contributions*. \\\\\\nWe start by highlighting the fact we are submitting the work to the area **applications to physical sciences (physics, chemistry, biology, etc.)**: in this work, we target the advancement of the applicability of DRL to real images to obtain an interpretable and reliable representation. We emphasize different contributions in our work related to methodology and application.\\\\\\nThe *first* level of novelty, which we believe is not purely technical, is to perform DRL relying on deep pre-trained features. Specifically, our results show how using ImageNet to pre-train a vision transformer, in a self-supervised framework, allows for a good FoVs transfer from a synthetic dataset (Texture-dSprite) to three different microscopy datasets. The usage of deep pre-trained features is fundamental for supporting the FoVs transfer, as highlighted by better performance w.r.t. [A] (that only employs images) on the investigated datasets. \\\\\\nA *second* level of novelty is the use of real data for DRL. To the best of our knowledge, no studies about fine-grained DRL of FoVs in real natural images exist, because of real FoVs dependence, resolution, and variability, bringing the level of complexity of the problem to another scale. Besides, there is also the need for annotated data, that precisely for the same reasons is a complex and ambiguous procedure, since sometimes the FoVs can not even be clearly identified [D]. Previous works (e.g. [B, C]) attempt to transfer disentangled representation targeting either synthetic or simulated data. A first advance in the use of real data can be found in [A], where the authors analyzed the transferability of disentanglement (learned on a Source synthetic dataset) to a real Target dataset where FoVs are explicitly known. However, in this existing work, the authors adopt real data of controlled complexity. 
\\nIn this work, we aim to move a step further, studying the applicability of DRL (and specifically, transferring FoVs learned with appropriate synthetic datasets) to the microscopy image domain, where FoVs are only partially known. \\\\\\nFinally, we want to provide a justification for the choice of the specific application domain, which may appear narrow, but actually presents different interesting challenges for DRL. Indeed, microscopy single-cell images have been selected for two other reasons: \\\\\\n(i) The first one is methodological, since these real images have only partially known FoVs, representing a perfect test field for advancing towards the DRL application to general natural images; \\\\\\n(ii) The second one is application-oriented: microscopy image applications are in strong need of interpretability measures, especially considering the clinical domain, where having the possibility to analyze inferred FoVs, while maintaining a reasonable accuracy, may improve the trust in AI and its employment in real-world related applications. \\n\\n* Rev. **MZMx**, Rev. **umB6** and Rev. **ijPz** express concern about *Comparison with other methods* for classification and DRL. \\\\\\nIt is worth noting that our primary aim is to obtain a disentangled representation of microscopy images that is also useful for downstream tasks. For this reason, we assessed the representations on classification tasks for each dataset, even though achieving the best classification performance is not our main goal. \\\\\\nIn the literature, DRL applications to real-world datasets are very limited. In this sense, to the best of our knowledge, the only suitable work for comparison corresponds to [A], which we used \\u201cas it is\\u201d when using images in input to our methodology (indeed this gives the comparison; we will better clarify this in the revised version of the manuscript).
\\\\\\nSpecifically, [A] studies the transferability of disentanglement to real data, constraining some of the challenges typical of real images for DRL. In this sense, it is very appropriate for our purposes. A further element of complexity in our work is the unavailability of the FoVs' annotations on the real target datasets.\\n\\nFinally, in the revised paper, we will clarify the scope of the work in the introduction and contributions while adding comparative classification results from the literature.\\n\\n[A] Dapueto et al. \\\"Transferring disentangled representations: bridging the gap between synthetic and real images\\\", arXiv:2409.18017: NeurIPS 2024 (accepted)\\n\\n[B] Gondal, Muhammad Waleed, et al. \\\"On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset.\\\", NeurIPS 2019.\\n\\n[C] Dittadi, Andrea, et al. \\\"On the Transfer of Disentangled Representations in Realistic Settings.\\\", ICLR 2021.\\n\\n[D] Xiang et al. \\u201cDisunknown: Distilling unknown factors for disentanglement learning\\u201d, ICCV 2021\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper proposes a Disentangled Representation Learning (DRL) approach to improve interpretability in microscopy image classification. By pre-training on synthetic data (Texture-dSprite) to capture factors of variation, the authors apply these learned representations to real-world microscopy datasets (Plankton Lensless, Plankton WHOI15, Budding Yeast Vacuoles, and Sipakmed Human Cells). Their method aims to support model interpretability while achieving high classification performance with gradient-boosted trees and MLPs for downstream analysis.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper explores the application of an existing DRL framework to the specific domain of microscopy images.
This idea is interesting as it shows a potential pathway for combining DRL with microscopy image analysis.\", \"weaknesses\": \"A significant weakness as it seems, is the absence of a comparison with other similar methods. The paper presents only one framework and does not discuss or evaluate alternative approaches, which weakens the case for this framework\\u2019s efficacy or advantage over existing methods.\\n\\nThe contributions of the paper in terms of novelty are unclear. The study applies an existing DRL approach to a new domain but does not appear to introduce any fundamentally new concepts, techniques, or substantial modifications to existing methods. The only apparent novelty - the application of DRL to microscopy imaging does not suffice. This limits the potential impact and originality of the work.\\n\\nThe paper\\u2019s presentation suffers from numerous issues that impede readability and clarity:\\n1. There are instances of informal languages, such as the use of \\u201cthanks.\\u201d\\n2. The text contains multiple errors at the word, sentence, and structural levels, which disrupts the reading experience. Sections like Section 2.2 (\\u201cDisentanglement Evaluation\\u201d) resemble output generated by ChatGPT and lack rigorous academic polish.\\n3. Figures appear low-resolution, with inadequate explanations in captions. Captions should be comprehensive and self-contained, but here, they lack essential details, e.g., explanations of metrics like OMES and balanced accuracy.\\n4. The use of multiple highlight types (underscoring, bold, italics) is excessive and distractive. Minimal highlighting would improve readability and make essential points more accessible.\\n5. Important metrics are either not explained in the text or lack adequate definitions in the captions, leaving readers uncertain of their meaning. 
This omission impacts the study\\u2019s reproducibility and overall clarity.\", \"questions\": \"Most of my questions are related to major weaknesses.\\n\\nWhat specific contributions does this paper make beyond applying DRL to microscopy images? It would be helpful if the authors could clarify what is novel in their approach and how it advances the state-of-the-art in microscopy image analysis beyond existing techniques.\\n\\nWhat are alternative approaches the authors could have used for comparison? \\n\\nMetric explanations (e.g., OMES, MIG, DCI and balanced accuracy) are mostly missing. Could the authors clarify these metrics, ideally using mathematical notation and provide justification for using them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"* >The novelty of the work remains unclear to me. In my opinion, this study appears to be an application of [1] to certain microscopy datasets.\\n\\nThe comment of the reviewer may unveil the opinion that the level of novelty of our work is not sufficient for ICLR. However, we respectfully believe that a good level of novelty is present in our work, as we explained in the first part of the general response. \\n\\n\\n* > The evaluation metrics are not clearly explained. Using classification to assess the quality of disentanglement lacks clarity.\\n\\nWe apologize if this part is unclear. As specified in response to another reviewer, unfortunately, it is not possible to compactly describe the metrics with simple formulas, as they are quite structured algorithms (especially OMES and DCI). The choice of using DCI and MIG is due to their popularity in previous works, while OMES is used in Dapueto et al., which is a reference work for our method. 
While providing all the details in the main paper is not possible because of space constraints, we highlighted what we believe is the minimal yet essential information to understand and interpret the metrics, i.e. the algorithmic principles they rely on, and the properties of disentangled representations they can capture (see Sec. 2.2).\\nIn this answer, we try to be more specific, having space for all the clarifications. The MIG and DCI metrics are well-known in the field of disentanglement learning, and designed to quantify the intensity of one or more particular properties of a disentangled representation: DCI measures the *Modularity*, i.e. how much the FoVs affect non-overlapping partitions of the representation; MIG measures the *Compactness*, related to the size of the representation space affected by a FoV, that should be as small as possible; OMES measures both Modularity and Compactness in a unified manner.\\nFrom the point of view of the methodology, the metrics follow different approaches. In particular, DCI is based on a learnable regressor (e.g. Decision Tree) to score the importance of each latent dimension to each FoV, and different factors should be important for different dimensions. MIG is based on a Mutual Information Estimator and computes the mutual information between the factors and the partitions of the representation. OMES makes use of the correlation to build an association matrix (factor, dimension) to compute both Modularity and Compactness from it. Concerning the classification tasks, the assessment serves to verify if the FoVs encoded in the representation are enough to describe the dataset and useful for a downstream task. Furthermore, in this field, the classification score is commonly used to assess another desired property for disentangled representations, i.e.
*Explicitness*.\\n\\n* > I encourage the authors to adapt existing methods to establish a baseline for comparing the performance of their model.\\n\\nwe are not sure we fully understand the suggestion. If the reviewer is referring to a baseline for disentanglement learning, then we emphasize that Dapueto et al. is the baseline for our work, and a method we compare with. If instead, the reviewer is referring to a classification baseline, then the comparison is the Appendix (Table 7). On this, we highlight that our work aims to learn a disentangled representation which not necessarily obtains the best result on classification. It is part of the well-known, trade-off between performance and interpretability.\"}" ] }
0hyShAPeBj
IT$^3$: Idempotent Test-Time Training
[ "Nikita Durasov", "Assaf Shocher", "Doruk Oner", "Gal Chechik", "Alexei A Efros", "Pascal Fua" ]
This paper introduces Idempotent Test-Time Training (IT$^3$), a novel approach to addressing the challenge of distribution shift. While supervised-learning methods assume matching train and test distributions, this is rarely the case for machine learning systems deployed in the real world. Test-Time Training (TTT) approaches address this by adapting models during inference, but they are limited by a domain-specific auxiliary task. IT$^3$ is based on the universal property of idempotence. An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, namely $f(f(x))=f(x)$. During training, the model receives an input $X$ along with another signal that can either be the ground truth label $y$ or a neutral "don't know" signal $\mathbf{0}$. At test time, the additional signal can only be $\mathbf{0}$. When sequentially applying the model, first predicting $y_0 = f(X, \mathbf{0})$ and then $y_1 = f(X, y_0)$, the distance between $y_0$ and $y_1$ measures certainty and indicates an out-of-distribution input $X$ if it is high. We use this distance, which can be expressed as $||f(X, f(X, \mathbf{0})) - f(X, \mathbf{0})||$, as our TTT loss during inference. By carefully optimizing this objective, we effectively train $f(X,\cdot)$ to be idempotent, projecting the internal representation of the input onto the training distribution. We demonstrate the versatility of our approach across various tasks, including corrupted image classification, aerodynamic predictions, tabular data with missing information, and large-scale aerial photo segmentation. Moreover, these tasks span different architectures such as MLPs, CNNs, and GNNs.
[ "idempotence;generalization" ]
Reject
https://openreview.net/pdf?id=0hyShAPeBj
https://openreview.net/forum?id=0hyShAPeBj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xE4vbVQpl6", "v5r60iDYon", "ukHfPkk9Ra", "tJsnRp6eJr", "lHtLQwfaVi", "horuFpOLas", "gOMADdGK5P", "fViIBRlsoA", "ewMnXdtvby", "dFSDyNQiKi", "boDwkkrynF", "TjkaMzbhqw", "RVzDT1UPHQ", "Q2DWCyvN9i", "PF2O38awG1", "E44EzmI13g", "Cy23chRUZl", "9QOQR3UzKU", "8x2nHKOJtv", "5pa3XuUGFS", "5EtqModTt4", "204QLYc7zO", "1wtrwAWZy0", "0iW8a8LYDk", "0fllOyMYXi" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732555894620, 1732453917764, 1732381141741, 1732555812063, 1732381382326, 1737523613932, 1733002695006, 1732382895734, 1730601940479, 1730467660760, 1732395097105, 1730678646030, 1732789491127, 1732790932499, 1732395244641, 1730120841164, 1732395210769, 1734885182490, 1732453881930, 1733134268653, 1733002613032, 1732382859936, 1733002714681, 1732555859269, 1732555925641 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_Tinn" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_ALWg" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_hxXL" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_ALWg" 
], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_x3JD" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_x3JD" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Area_Chair_TXXj" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Reviewer_x3JD" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ], [ "ICLR.cc/2025/Conference/Submission4015/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer ALWg,\\n\\nThank you for dedicating your time to reviewing our paper and providing valuable feedback.\\n\\nWe have thoughtfully addressed each of your comments, offering detailed responses to clarify and resolve the concerns you raised. We hope our explanations have provided a clearer perspective on our work and its contributions.\\n\\nIf you feel that we have adequately addressed your concerns, we would be grateful if you would consider reassessing your rating.\\n\\nWe would be happy to clarify or elaborate on any part of our paper while the discussion period is still open.\\n\\nThank you!\"}", "{\"comment\": \"* **The main weakness of this paper the lack of baselines in the experimental section. Compare against TTT, TTT++, ActMAD. How would the performance be when evaluated under computational constraints?**\\n\\nWe thank the reviewer for raising questions about computational costs and additional baselines. Test-time training (TTT) naturally introduces computational overhead due to the optimization steps performed during inference. In our approach, while we include an additional inference step (two forward passes), this remains computationally efficient because forward passes are generally 3\\u20134x faster than backward passes. 
As a result, the extra inference step contributes relatively little to the overall cost compared to standard TTT methods. We have added a discussion on efficiency to the revised paper.\\n\\nOur method typically requires only 1\\u20133 optimization steps, ensuring its overall cost remains comparable to other established TTT methods. Below, we provide a comparison of inference times and performance metrics with other requested baselines on the largest datasets used in our experiments:\\n\\n**Table 2.** Inference Time and Performance Comparison (OOD Airfoils)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time | 1\\u00d7 | 3x | 4x |\\n| MAE ($\\\\downarrow$) | 38.67 | 38.61 | **37.5** |\\n\\n**Table 3.** Inference Time and Performance Comparison (OOD Cars)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time | 1\\u00d7 | 4x | 5x |\\n| MAE ($\\\\downarrow$) | 0.501 | 0.502 | **0.424** |\\n\\n\\n**Table 4.** Inference Time and Performance Comparison (Roads)\\n| | Base Model | Standard TTT | ActMAD | $IT^{3}$ | \\n|------------|------------|--------------|---------|--------|\\n| Inference Time | 1x | 3x | 4.5x | 6x |\\n| Quality ($\\\\uparrow$) | 39.5 | 40.0 | 45.9 | **69.8** |\\n\\n\\nAs can be seen, our method outperformed the considered baselines. We will include these tables in the revised version. We recognize that our method, like other TTT approaches, introduces computational overhead during inference. This is a common limitation across existing TTT methods.
Tackling this issue and developing TTT methods that eliminate such overhead entirely is an important research direction that holds potential for significant advancements in the field.\\n\\n* **Another line of work that is directly comparable is Test-Time Adaptation (TTA): SOTA TTA method EATA [C] or more closely the dataset distillation method from [D]**\\n\\nThank you for pointing out these relevant papers. We distinguish between TTT and TTA, a distinction also made by prior works, e.g., [1], [2]. In the TTT regime, there is no access to any data other than the instance or batch currently being processed. Furthermore, each input is treated as a separate test with no correlation or shared information with other inputs. We have added a discussion about this distinction in the related work section.\\n\\n* **Benchmarks used in this work are somewhat small scale.**\\n\\nOur method performs effectively with larger models, as shown in our experiments on aerial segmentation models with millions of parameters. By requiring only two forward passes and a minimal number of optimization steps, our approach remains computationally efficient even for large-scale architectures, making it competitive with traditional TTT methods, which often require multiple backward passes.\\n\\nAdditionally, our experiments on road segmentation (measured by the number of pixels) and aerodynamics with cars (measured by the number of nodes) can be considered large-scale, comparable in size to ImageNet.\\n\\n\\n* **Why zeroing out features is a good way of modeling distribution shift instead of adding random noise to the features? Can you please comment on this and provide justification for their choice of distribution shift.**\\n\\nOur goal in making this choice was to showcase a diversity of corruptions and OOD types across the experiments. Noise was demonstrated with the CIFAR data. Additionally, it is not always straightforward to add noise to tabular data. 
We note that missing information represents a divergence from the training distribution and can therefore also be considered a form of distribution shift.\\n\\n* **Missing references: [B, C, D, E, F, G].**\\n\\nThank you, we have added all of these references to the revised version.\\n\\n[1] Sun, Yu, et al. \\\"Test-time training with self-supervision for generalization under distribution shifts.\\\" ICML 2020.\\n\\n[2] Gandelsman, Yossi, et al. \\\"Test-time training with masked autoencoders.\\\" NeurIPS 2022.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely thank the reviewer for their detailed and thoughtful feedback on our work. We deeply appreciate their recognition of the strengths of our approach, including the clarity of our presentation and the robustness of our experimental evaluation. We value their constructive suggestions and have carefully addressed all raised concerns, as outlined below.\\n\\n---\\n\\n* **Test-time training requires undesirable extra compute for test-time optimization of the whole model. How much additional compute is needed compared to running inference on the base model? How does this method scale with model size?**\\n\\nWe thank the reviewer for their question about computational costs. Test-time training (TTT) inherently induces additional computational overhead because optimization steps are performed during inference. In our method, while we require an additional inference step (two forward passes), this is computationally efficient since forward passes are generally 3\\u20134x faster than backward passes. Thus, the extra inference step adds relatively little overhead compared to the total cost of TTT methods. We added a discussion about efficiency to the revised paper.\\n\\nOur method typically requires only 1\\u20133 optimization steps, keeping the overall cost comparable to other well-known TTT methods. 
Below, we provide a comparison of inference times and performance improvements on out-of-distribution data for three approaches: the base model without optimization, the state-of-the-art TTT method ActMAD, and $IT^{3}$. As shown, while our method introduces no significant overhead compared to ActMAD, it delivers a substantial improvement in prediction accuracy (measured by MAE in the Airfoils and Cars experiments and by Quality in the Roads experiment), whereas ActMAD achieves only marginal gains.\\n\\n**Table 1.** Inference Time and Performance Comparison (OOD Airfoils)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time ($\\\\downarrow$) | 1\\u00d7 | 3x | 4x |\\n| MAE ($\\\\downarrow$) | 38.67 | 38.61 | 37.5 |\\n\\n**Table 2.** Inference Time and Performance Comparison (OOD Cars)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time ($\\\\downarrow$) | 1\\u00d7 | 4x | 5x |\\n| MAE ($\\\\downarrow$) | 0.501 | 0.502 | 0.424 |\\n\\n**Table 3.** Inference Time and Performance Comparison (OOD Roads)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time ($\\\\downarrow$) | 1x | 4.5x | 6x |\\n| Quality ($\\\\uparrow$) | 39.5 | 45.9 | 69.8 |\\n\\n\\nWe will add these tables to the revised version. We acknowledge that both our method and TTT methods in general introduce computational overhead during inference. This is a shared limitation across existing TTT approaches. Addressing this challenge and developing TTT methods that avoid such overhead entirely is an important research direction and could lead to significant advancements in the field. \\n\\nRegarding scalability with model size, our method works effectively with larger models, as demonstrated on aerial segmentation models with millions of parameters.
By requiring only two forward passes and a small number of optimization steps, our approach remains computationally efficient even for such large-scale architectures, making it competitive with traditional TTT methods that often involve several backward passes.\\n\\n* **The paper lacks analysis that explains how or why idempotent training is expected to improve out-of-distribution analysis. Further investigation and ablations that provide intuition for how this method works would be valuable.**\\n\\nIn the revised version, we will add a more concrete explanation of the reasons we expect idempotence to improve OOD performance. In the introduction (lines 77-86), we build the intuition behind the motivation to enforce idempotence. There are two perspectives from which to view this relation. The first is that the idempotence term arises when minimizing the measure of OOD-ness from [1]. Then, having idempotence as the objective makes sense according to [2]: the joint internal representation of (x,y) is projected onto the manifold of representations of (x,y) pairs that are in the data distribution. (A more detailed explanation is in the response to your last comment below.)\"}", "{\"comment\": \"Dear Reviewer hxXL,\\n\\nThank you for dedicating your time to reviewing our paper and providing valuable feedback.\\n\\nWe have thoughtfully addressed each of your comments, offering detailed responses to clarify and resolve the concerns you raised.
We hope our explanations have provided a clearer perspective on our work and its contributions.\\n\\nIf you feel that we have adequately addressed your concerns, we would be grateful if you would consider reassessing your rating.\\n\\nWe would be happy to clarify or elaborate on any part of our paper while the discussion period is still open.\\n\\nThank you!\"}", "{\"comment\": \"* **The proposed method has not been evaluated on standard real-world out-of-distribution generalization benchmarks such as DomainBed [Gulrajani and Lopez-Paz 2020] and WILDS [Koh et al 2020]. The presented experiments are on smaller models/datasets.**\\n\\nWe make a distinction between TTT and Test-Time Adaptation. This distinction was also made by prior works, e.g., [3], [4]. In the TTT regime, there is no access to any data other than the instance/batch currently being processed. Furthermore, each input is a separate test and has no correlation or information about other inputs. Prior works did not evaluate on the mentioned datasets, and our model is demonstrated on various modalities, not just visual inputs. At the same time, our experiments on road segmentation (in terms of number of pixels) and aerodynamics with cars (in terms of number of nodes) can be considered large-scale, comparable in size to ImageNet. \\n\\n* **Does the test time objective result in networks that are idempotent on OOD samples? Presumably once the function drifts from the fixed anchor function that test time loss no longer reflects idempotence? It would be good to see measures of idempotence on training and test samples.**\\n\\nThis is a valid point. The second application of the model does remain frozen, which means that the objective diverges from idempotence. However, typically 1-3 steps are applied and this divergence is negligible.
We have in fact experimented with making the second application use an updated model (not with gradients, which would be wrong, but by copying the parameters from the updated model after each iteration, as done in [2]). We found no significant difference in the performance. \\n\\nIn [this plot](https://imgur.com/a/xuKEKXF), we show the distributions of the idempotence loss for different subsets of the airfoils dataset: train, test, and OOD (both optimized and non-optimized versions). As seen, the train and test subsets exhibit lower idempotence loss values, while OOD samples have significantly larger losses. After optimization on OOD data, the idempotence loss values shift closer to those of the train and test subsets, supporting our point that the optimization helps align OOD samples with the in-distribution behavior.\\n\\n* **How does the intuition about idempotence being a generalization of orthogonal projection hold up? The proposed method considers idempotence only in the y-variable, not the entire function.**\\n\\nIdempotence is enforced over the y-variable given a specific x-variable. While $x$ itself is static throughout the TTT optimization, its representation in the network activations shifts as the network weights are updated. Internally, the network has a joint representation of (x,y) at every layer, which can be considered the input to the next layers. The objective of idempotence for y means that the model is trained to make those internal representations remain the same when applying the model again. So the representations of x,y are projected onto the manifold of the representations of x,y pairs that are in the data distribution. We will add this clarification to the paper.\\n\\n[1] Durasov, Nikita, et al. \\\"Zigzag: Universal sampling-free uncertainty estimation through two-step inference.\\\" TMLR 2024.\\n\\n[2] Shocher, Assaf, et al. \\\"Idempotent generative network.\\\" ICLR 2024.\\n\\n[3] Sun, Yu, et al.
\\\"Test-time training with self-supervision for generalization under distribution shifts.\\\" ICML 2020.\\n\\n[4] Gandelsman, Yossi, et al. \\\"Test-time training with masked autoencoders.\\\" NeurIPS 2022.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"* **ActMAD Results and Inference Time Comparison**\\n\\nWe appreciate your suggestion to analyze computational constraints rigorously. To address this, we conducted an additional experiment where ActMAD was allowed more optimization steps, resulting in an inference time **8.5x, 7x, and 5x the inference time of the base model**, compared to IT$^3$\\u2019s **6x, 5x, and 4x** for the aerial imaging, car aerodynamics, and airfoils tasks, respectively. Despite this extended budget, IT$^3$ achieves significantly better results.\\n\\n**Table 2.** Inference Time and Performance Comparison (OOD Airfoils)\\n| Base Model | Base Model | ActMAD | ActMAD (higher complexity) |$IT^{3}$|\\n|------------|------------|------------|---------|-|\\n| Inference Time | 1\\u00d7 | 3x | 5x |4x|\\n| MAE ($\\\\downarrow$) | 38.67 | 38.61 | 38.60 |37.5|\\n\\n**Table 3.** Inference Time and Performance Comparison (OOD Cars)\\n| Base Model | Base Model | ActMAD | ActMAD (higher complexity) |$IT^{3}$|\\n|------------|------------|------------|---------|-|\\n| Inference Time | 1\\u00d7 | 4x | 7x |5x|\\n| MAE ($\\\\downarrow$) | 0.501 | 0.502 | 0.502 |0.424|\\n\\n\\n**Table 4.** Inference Time and Performance Comparison (Roads)\\n| | Base Model | Standard TTT | ActMAD | ActMAD (higher complexity) | $IT^{3}$ | \\n|----------------------|------------|--------------|---------|----------------------------|--------|\\n| Inference Time | 1x | 3x | 4.5x | 8.5x | 6x |\\n| Quality ($\\\\uparrow$)| 39.5 | 40.0 | 45.9 | 46.3 | 69.8 |\\n\\nWe also analyzed ActMAD\\u2019s limited gains in specific settings. 
ActMAD relies on batch statistics for optimization, making it less effective with small or single-instance batches where these statistics cannot be computed reliably. This limitation underscores IT$^3$\\u2019s ability to handle diverse data scenarios more robustly.\\n\\n* **Comparison with TTA Methods**\\n\\nThank you for raising the importance of TTA comparisons. We acknowledge that TTA and TTT have overlapping goals but operate under different assumptions. From one perspective, TTA can be seen as more constrained, as it assumes no control over the training process. On the other hand, TTT imposes its own form of strictness by resetting the model after every instance (outside the online variant) and treating each input as entirely independent.\\n\\nThis independence in TTT creates a unique challenge: the method cannot retain information between test examples, even when they might share correlations or dependencies. In contrast, TTA benefits from access to data streams or batches during inference, allowing it to exploit temporal or structural relationships between inputs. IT$^3$\\u2019s online variant leverages such relationships, leading to higher performance in that setting, but we emphasize that it addresses a different scenario.\\n\\nTo date, no prior TTT work has compared directly to TTA methods, largely because these paradigms are tailored to distinct challenges. However, we recognize the importance of acknowledging these distinctions and will add clarifications to this effect in the revised paper.\\n\\n* **Adaptation of Visual TTT Methods**\\n\\nWe appreciate the suggestion to adapt visual TTT methods like TTT++ to non-visual tasks. While technically feasible, such adaptations require extensive domain-specific engineering, including selecting and optimizing self-supervised tasks. Past TTT works demonstrate how critical these choices are, as they directly impact performance.
Without intrinsic motivation or deep familiarity with the target domain, it is challenging to ensure that such adaptations are both meaningful and competitive.\\n\\nOur focus in this work was to develop a universal framework that eliminates the need for task-specific customization. This differentiates IT$^3$ from methods that rely on carefully tuned auxiliary tasks for success.\"}", "{\"comment\": \"* **The right panel of Figure 2 and Figure 16 both appear to represent the car data. Could there be a potential mistake or duplication here?**\\n\\nThank you for pointing this out. There is indeed a labeling mistake in the right panel of Figure 2. It represents results for the UCI dataset, but it was incorrectly described as being for the cars dataset. We will correct this in the revised version.\\n\\n[1] Durasov, Nikita, et al. \\\"Zigzag: Universal sampling-free uncertainty estimation through two-step inference.\\\" TMLR 2024.\\n\\n[2] Morales-Brotons, D., Vogels, T., & Hendrikx, H. (2024). Exponential moving average of weights in deep learning: Dynamics and benefits. TMLR 2024.\"}", "{\"summary\": \"This paper introduces $IT^3$, a generic method for addressing distribution shift challenges. By enforcing idempotence, $IT^3$ sequentially adapts to potential distribution shifts during inference. The authors show the performance of $IT^3$ across diverse scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors extend the concept of TTT (Sun et al., 2020) by incorporating idempotence, offering a simple yet elegant solution. The approach is intuitive and Figure 1 is particularly helpful for quickly grasping the core idea, even for those who are not experts in distribution shift.\", \"weaknesses\": \"1. 
(cf Figure 1) During training, in addition to feeding the standard $(x, y)$ pair to train the model, the authors also input the $(x, 0)$ pair to ensure the model satisfies the property of idempotence, referring to the zero as a neutral \\u201cdon't know\\u201d signal. While this approach may work in classification tasks where zero could represent a new \\u201cdon\\u2019t know\\u201d class, in regression tasks, it is unclear how the model differentiates this zero from an actual label of 0.\\n2. (Line 211) Online $IT^3$ appears to rely significantly on the use of Exponential Moving Average (EMA). However, the authors did not provide a citation for this technique. \\n3. In the first experiment (Section 4.1 on tabular data), the method of randomly substituting features with zeros in the test set may resemble a missing data scenario rather than a true distribution shift. In other experiments, the authors simulate distribution shift by splitting the training and testing data based on a specific covariate to create distinct distributions. It is unclear why the same method was not applied for the first experiment. If the authors prefer using the zeroing method, they should include figures or statistical tests to substantiate that the training and testing data are distributionally different, rather than relying solely on intuition.\\n4. There are unnecessary abbreviations throughout the manuscript. For instance, \\u201csuch that\\u201d is shortened to \\u201cs.t.\\u201d in line 490. The proposed method, $IT^3$, is inconsistently referred to as ITTT in parts of the manuscript, such as in Figure 13.\\n5. Figures 14 to 16 are not mentioned or referred to in the main text. This omission is unusual and may confuse readers as to the purpose or relevance of these figures.\\n6. Figures 12 and 15 have the exact same title.\\n\\nSome minor improvements and spelling corrections for clarity:\\n1. (Line 22) $x$ (not $\\\\mathbf{x}$) is not mentioned before.\\n2.
(Line 77) $y_2$ is not mentioned before.\\n3. (Line 85) \\\"th input\\\" should be corrected to \\\"the input\\\".\\n4. (Line 131) Missing right parenthesis: $f(f(z))$.\\n5. (Line 490) \\u201cs.t. that\\u201d should be replaced with \\u201csuch that\\u201d.\\n6. (Line 495) Remove the extra colon (\\\":\\\").\\n7. In Table 1, the title references \\u201cqualitative\\u201d results, but the data presented are numerical and should be described as \\u201cquantitative\\u201d results.\", \"questions\": \"1. Do you use a special encoding or placeholder value instead of 0 for regression tasks?\\n2. If the purpose of the neutral \\\"don't know\\\" zero is purely for contrast and its specific value is not important, will it be more computationally efficient to use a representative value such as the median of $y_i$ where $i \\\\in \\\\text{training set}$?\\n3. In EMA (Morales-Brotons et al., 2024), when $\\\\alpha = 1$, the online $IT^3$ aligns with the offline version, and when $\\\\alpha = 0$, it encounters the collapse issue described in Section 3.2. Could you provide guidance on selecting the value of $\\\\alpha$ or share experimental results demonstrating performance across different $\\\\alpha$ values?\\n4. The right panel of Figure 2 and Figure 16 both appear to represent the car data. Could there be a potential mistake or duplication here?\\n\\nMorales-Brotons, D., Vogels, T., & Hendrikx, H. (2024). Exponential moving average of weights in deep learning: Dynamics and benefits. Transactions on Machine Learning Research.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
During training, the authors train the neural network to be able to predict the ground truth label by either conditioning on it (by concatenating it to the input) or by assuming a placeholder value for it (by concatenating a $\\\\mathbf{0}$ value to the input). At test time, the authors then fine-tune the model on unlabelled data by matching the predictions of the model when using $\\\\mathbf{0}$ as the label with that of the model when conditioning on its own output at the $\\\\mathbf{0}$ label. This essentially leads to a soft idempotence constraint which, according to the authors, allows the model to move out-of-distribution data closer to in-distribution ones and thus improve performance when distribution shifts happen at test time. The authors then evaluate IT$^3$ on a variety of tasks and architectures.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is mostly well written; the ideas are explained clearly and reasonable intuitions are being given. Same goes for the experimental evaluation and discussion of the settings.\", \"The idea is simple and I find it quite interesting. While the main architecture of the method relies heavily on the prior work Durasov et al., the application on the TTT setting is novel, since, as far as I know, idempotence hasn\\u2019t been explored in such a scenario.\", \"The tasks considered in the experiments are quite diverse, which is nice to see. They range from simple tabular data, to standard image classification, to regression and dense prediction with various architectures.\", \"The authors demonstrate improvements with IT$^3$ on all settings considered upon (admittedly very simple) baselines.\"], \"weaknesses\": [\"The main weakness I see in this work is the (almost complete) lack of proper baselines. For example, on only the CIFAR task there is another TTT baseline but on all of the others the authors just compare against not doing anything. 
This makes it hard to gauge the significance of the method against other prior art.\", \"The work makes claims about how idempotence can be seen as a generalisation of a projection and that it allows mapping the OOD data to the in-distribution. Thus, while the authors do spend some time to explain why their method would intuitively work, they do not have any ablation studies to verify that these exact intuitions hold in practice.\", \"IT$^3$ as a method requires two sequential forward passes through the model, so in practice it can be slow and the authors do not discuss the efficiency of IT$^3$ relative to other works.\"], \"questions\": [\"As mentioned before, I find this work quite interesting but the lack of proper baselines pushes me towards a weak reject opinion. As for specific questions and ways that the authors can improve their work:\", \"More baselines are needed on all tasks; even if the method does not translate exactly to the setup considered, the authors could perform minor adaptations so that it does. For example, why not consider additional self-supervised tasks as discussed at Sun et al. (2020)? In the case where simple rotation prediction might not apply, something simple like denoising an artificially corrupted image could still work as a self-supervised task. Another example would be on the online setup; there, methods from the TTA literature could be applied, such as the work of [1] which works even on the instance level and without batch normalisation.\", \"Apart from the CIFAR-C case, most other distribution shifts are generated by just partitioning the datasets according to some rule and then training on a subset while considering the other as OOD. This is a bit of a constrained setting and I would encourage the authors to consider more diverse shifts, as that would better highlight the usefulness of IT$^3$. For example, why not add noise to the road segmentation task, in the same way that it was done for CIFAR 10?
This could be a plausible real-world setting where there is a fault in the sensor, thus the images come out corrupted.\", \"How is the label encoded in the input in the various settings considered? This is important information for reproducibility. Furthermore, is the loss at Eq. 2 used for all settings (even when it might not make much sense, such as classification)?\", \"In the online setting, the authors consider a smooth transition between the distribution shifts, which might not be practically realistic. How does the method behave when the transition between distribution shifts is not smooth?\", \"How many update steps on each datapoint do the authors do? Does the test time optimization reach a fully idempotent model and does \\u201cmore idempotence\\u201d provide better performance?\", \"[1] Towards Stable Test-Time Adaptation in Dynamic Wild World, Niu et al., ICLR 2023\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely thank the reviewer for their detailed and thoughtful feedback on our work. We deeply appreciate their recognition of the strengths of our approach, including the clarity of our presentation, the robustness of our experimental evaluation, and the novelty of applying idempotence to the TTT setting. We value their constructive suggestions and have carefully addressed all raised concerns, as outlined below.\\n\\n---\\n\\n* **The main weakness I see in this work is the lack of proper baselines.**\\n\\nWe appreciate the reviewer\\u2019s concern and acknowledge that this is an important point. To address this, we have included comparisons in **Tables 2, 3, and 4**, which evaluate performance on several datasets and provide a broader perspective on baselines. Our goal was to develop a TTT method that works universally across architectures and data types.
Most prior TTT methods rely on task-specific self-supervised objectives, which limit their generality. For example, the original TTT [1] depends on the image rotation task, which is not only unsuitable for non-image domains (like graph or text data) but also inapplicable to certain image-based tasks, such as aerial segmentation, where orientation has no clear meaning. Similarly, TTT++ [2] uses a SimCLR-style loss, which faces similar limitations across various domains. These tables highlight that while some methods can be adapted, their reliance on task-specific objectives often limits their generalization, reinforcing the need for a universal approach like ours.\\n\\nThe only truly general TTT approach to date is ActMAD [3], which operates on feature-level statistics and is rightly identified as our main competitor. However, ActMAD is not directly applicable to tasks with variable input sizes, such as graphs with varying numbers of nodes or text tasks with varying token lengths. In such cases, modifications to ActMAD are necessary, such as using aggregated features (e.g., average pooling) instead of pixel-level feature statistics. While this is a straightforward adaptation, we observed in our experiments that it significantly deteriorates ActMAD's performance.\\n\\nThis limitation led us to conclude that there are currently no universally appropriate baselines for TTT tasks. Nonetheless, we understand the value of such comparisons for a more comprehensive evaluation. Below, we provide additional comparisons as requested by the reviewer, which we believe will contribute meaningfully to the discussion.\\n\\n* **The work makes claims about how idempotence can be seen as a generalisation of a projection and that it allows to map the OOD data to the in-distribution. 
Thus, while the authors do spend some time to explain why would intuitively their method works, they do not have any ablation studies to verify that these exact intuitions hold in practice.**\\n\\nIn [this plot](https://imgur.com/a/xuKEKXF), we show the distributions of idempotence loss for different subsets of the airfoils dataset: train, test, and OOD (both optimized and non-optimized versions). As seen, the train and test subsets exhibit lower idempotence loss values, while OOD samples have significantly larger losses. After optimization on OOD data, the idempotence loss values shift closer to those of the train and test subsets. This supports our claim that the optimization helps align OOD samples with in-distribution behavior, effectively demonstrating the role of idempotence as a projection-like mechanism in practice.\\n\\nFrom a more theoretical perspective, idempotence can indeed be seen as a projection mapping, as originally observed in ZigZag [4], iterative models [5], and generative models [6]. In short, idempotence holds on in-distribution (ID) data but does not hold on OOD data. This behavior, similar to a projection mapping, is evident in the plot above and further supported by the table below.\\n\\nFor this table, we used the model trained on the data reported in our paper and computed the idempotence loss for each sample in the ID test set and OOD set, without performing any optimization. To demonstrate that idempotence losses for ID and OOD data are indeed distinct, we used the idempotence losses as an \\\"out-of-distributionness\\\" score and evaluated them on standard OOD detection metrics ROC-AUC and PR-AUC. Higher values of these metrics indicate better separation between in- and out-of-distribution samples, which implies larger differences in idempotence losses for OOD data. The results, shown in the table below, further validate our point.\\n\\n**Table 1. 
OOD Detection Performance Across Datasets.** ROC-AUC and PR-AUC (in %) for CIFAR, Age Prediction, Airfoils, and Cars. Higher values (>80-90%) indicate better OOD detection and support the observation that idempotence holds for in-distribution samples but not for OOD samples, similar to projection mapping.\\n\\n| **Metrics** | **CIFAR** | **Age Pred.** | **Airfoils** | **Cars** |\\n|-|-|-|-|-|\\n| **ROC-AUC, %** |90.1|77.3|99.2|95.6|\\n| **PR-AUC, %**| 93.3| 95.9| 98.7| 97.4|\"}", "{\"summary\": [\"This paper proposes a test-time-training based approach to address the distribution shift or OOD generalization problem by learning models that are idempotent.\", \"In particular, this paper proposes a method where models $f_{\\\\theta}: \\\\mathcal{X} \\\\times \\\\mathcal{Y} \\\\to \\\\mathcal{Y}$ are trained by minimizing both the difference between $f_{\\\\theta}(x, y)$ and $y$ as well as the difference between $f_{\\\\theta}(x, 0)$ and $y$, resulting in a model that is idempotent on the training set. Then at test time, models are optimized on the test data before running inference to make them idempotent on test inputs, by minimizing the difference between $f_{\\\\theta}(x, 0)$ and $f_{\\\\theta}(x, f_{\\\\theta}(x, 0))$ for an OOD input $x$.\", \"The paper shows empirically for several different settings that idempotent test-time-training improves classification accuracy on out-of-distribution samples.\"], \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Idempotent training is an interesting and novel approach for addressing the out-of-distribution generalization problem.\", \"The paper shows successful application of the considered approach on a large number of experimental settings, including in online-learning settings.\"], \"weaknesses\": [\"Test-time training requires undesirable extra compute for test-time optimization of the whole model. 
How much additional compute is needed compared to running inference on the base model? How does this method scale with model size?\", \"The paper lacks analysis that explains how or why idempotent training is expected to improve out-of-distribution analysis. Further investigation and ablations that provide intuition for how this method works would be valuable.\", \"The proposed method has not been evaluated on standard real-world out-of-distribution generalization benchmarks such as DomainBed [Gulrajani and Lopez-Paz 2020] and WILDS [Koh et al 2020]. The presented experiments are on smaller models/datasets.\"], \"references\": \"Gulrajani, Ishaan, and David Lopez-Paz. \\\"In search of lost domain generalization.\\\" arXiv preprint arXiv:2007.01434 (2020).\\n\\nKoh, Pang Wei, et al. \\\"Wilds: A benchmark of in-the-wild distribution shifts.\\\" International conference on machine learning. PMLR, 2021.\", \"questions\": [\"Does the test time objective result in networks that are idempotent on OOD samples? Presumably once the function drifts from the fixed anchor function that test time loss no longer reflects idempotence? It would be good to see measures of idempotence on training and test samples.\", \"How does the intuition about idempotence being a generalization of orthogonal projection hold up? The proposed method considers idempotence only in the y-variable, not the entire function.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I would like to thank the authors for the their response which addresses most of my concerns. I appreciate the additional clarity and intuitions behind idempotence (the plot is helpful and I would encourage the authors to add it to the manuscript) and model design (i.e., how the label is encoded). 
However, my main concern still stands and thus I will retain my score.\\n\\nThe new results against ActMAD are appreciated and do point to improvements with $IT^3$. However, it seems that they come at the cost of extra inference time (i.e., $IT^3$ is slower than ActMAD across the board). I would encourage the authors to \\\"equalize\\\" the inference time between the two methods (by, e.g., doing more iterations with ActMAD or fewer iterations with $IT^3$) as that would better highlight improvements at, roughly, the same computational cost. Furthermore, I would encourage the authors to also compare against TTA methods (e.g., those mentioned by reviewer x3JD) in the online setup, especially given that during inference the model is not reset during iterations (which is the main difference between TTT and TTA) and an unsupervised loss is optimized in both settings.\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Dear Authors\\n\\nFirst, I would like to thank you for the efforts put in the rebuttal. While the rebuttal addressed *some* of my concerns, other concerns remain unresolved. References follow the original review.\\n\\nRegarding the comparison with TTT/TTT++: While indeed TTT optimizes for the orientation at test-time, this baseline is trivially extended to other data-types by replacing the rotation augmentation with the proper augmentation for this data-type.\\n\\nRegarding the computational requirements: I suggest a rigorous computational comparison between baselines in a computational constraint evaluation as in [E]. This would fairly compare different Test-time training methods not only based on performance gain, but also based on their efficiency.\\n\\nRegarding the comparison with ActMAD: I thank the authors for providing this experiment.
While the results show the performance superiority of the proposed method over ActMAD, I would suggest providing an explanation of why ActMAD does not provide any performance gains (in Tables 1 and 2) in this setting.\\n\\nRegarding the comparison with TTA: I am not entirely sure about the provided distinction between TTT and TTA. In the online setting, and to my understanding, the main difference between TTT and TTA is that TTT assumes control over the training process (thus including a self-supervised loss function during training) unlike TTA - Please refer to Table 1 in [C]. This makes TTA strictly a harder setting than TTT. Thus, I believe that a comparison against strong TTA baselines that fit this task is necessary.\\n\\nLastly, when checking the current pdf, I did not find the new results from the rebuttal nor the suggested references. It might be an openreview issue from my end.\\n\\nBased on this, I will keep my score, suggesting the authors improve their paper following the provided reviews.\"}", "{\"comment\": \"* **How is the label encoded in the input in the various settings considered?**\\n\\nThe label is incorporated into the input using a straightforward mechanism. For classification-related tasks, including segmentation, categorical outputs (e.g., class labels for classification or pixel classes for segmentation) are encoded in an additional \\\"blank\\\" channel, which contains only the label value. For regression tasks, additional placeholder values significantly outside the range of possible outputs (e.g., -100 for the age prediction task) are used to clearly distinguish between actual labels and placeholder values. These labels are concatenated with the input features, allowing the model to effectively utilize them.\\n\\nThis approach is consistent with the methodology described in ZigZag [4], where the integration of labels into input features is discussed in detail.
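To make the encoding concrete, here is a toy sketch with list-based stand-ins for real tensors, using the -100 regression placeholder quoted above (function names and shapes are ours, not the authors' code):

```python
# Sketch of the label-in-input encoding described above: regression labels are
# concatenated to the features, with a far-out-of-range placeholder (-100)
# standing in for the "don't know" state; classification/segmentation labels
# travel in an extra "blank" channel. Toy lists stand in for real tensors.

PLACEHOLDER = -100.0  # regression "don't know" value quoted in the rebuttal

def encode_regression(x_features, y=None):
    """Concatenate the label (or the placeholder) to the feature vector."""
    return x_features + [PLACEHOLDER if y is None else y]

def encode_classification(image_channels, label=None):
    """Append a constant extra channel holding the class id (0 when unknown)."""
    h, w = len(image_channels[0]), len(image_channels[0][0])
    value = 0.0 if label is None else float(label)
    return image_channels + [[[value] * w for _ in range(h)]]

x = [0.3, 1.2, -0.7]
print(encode_regression(x, y=4.2))  # conditioned pass: label appended
print(encode_regression(x))         # blank pass: placeholder appended
```

Choosing the placeholder far outside the output range is what lets a regression model distinguish "no label" from a genuine label of 0, which addresses the ambiguity raised in the review above.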
We followed their experimental setups to ensure consistency and clarity across tasks.\\n\\n* **Furthermore, is the loss at Eq. 2 used for all settings (even when it might not make much sense, such as classification)?**\\n\\nWe will revise the text following Eq.2 and Eq.5 to specify that $||\\\\cdot||$ denotes a task-appropriate metric. In Eq.2 this metric is the standard one used to train for the task, so for classification it is cross-entropy. In Eq.5 it is a metric / distance taken between predictions. For classification this is the Jensen-Shannon divergence between class probabilities (network output after softmax). For regression it is the L2 distance. Thank you for pointing out this inaccuracy.\\n\\n* **In the online setting, the authors consider a smooth transition between the distribution shifts, which might not be practically realistic. How does the method behave when the transition between distribution shifts is not smooth?**\\n\\nThe online version differs from the base version because they aim at different scenarios. The main difference is that in the base version the network weights are reset back to the state they were in at the end of the pre-training phase. The assumption is that each input is a separate test and carries no correlation or information about other inputs. This makes it the appropriate approach for non-smoothly transitioning data. The online version, in contrast, assumes correlated and somewhat smoothly transitioning inputs arriving in a stream. This is a continual learning regime. Therefore, in this case, instead of resetting after each input, we leave the weights updated from the previous inputs. Of course, the online scenario may have discontinuities. As usually happens in online learning, we expect some drop in performance after such a discontinuity, followed by a recovery. We have added such an experiment to the revised paper. \\n\\n* **How many update steps on each datapoint do the authors do?
Does the test time optimization reach a fully idempotent model and does \\u201cmore idempotence\\u201d provide better performance?**\\n\\nWe will include parameter specifications in the revised appendix. Typically, 1\\u20133 update steps are performed for each data point. While $f(x, \\\\cdot)$ moves closer to idempotence during optimization, it does not achieve perfect idempotence. Over-optimizing for idempotence can lead to undesirable effects, such as degrading the model\\u2019s prior performance. As this is a fine-tuning regime, maintaining a balance is crucial to preserving the original capabilities of the model while improving its test-time performance.\\n\\n[1] Sun, Yu, et al. \\\"Test-time training with self-supervision for generalization under distribution shifts.\\\" ICML 2020.\\n\\n[2] Liu, Yuejiang, et al. \\\"Ttt++: When does self-supervised test-time training fail or thrive?.\\\" NeurIPS 2021.\\n\\n[3] Mirza, Muhammad Jehanzeb, et al. \\\"Actmad: Activation matching to align distributions for test-time-training.\\\" CVPR 2023.\\n\\n[4] Durasov, Nikita, et al. \\\"Zigzag: Universal sampling-free uncertainty estimation through two-step inference.\\\" TMLR 2024.\\n\\n[5] Durasov, Nikita, et al. \\\"Enabling Uncertainty Estimation in Iterative Neural Networks.\\\" ICML 2024.\\n\\n[6] Shocher, Assaf, et al. \\\"Idempotent generative network.\\\" ICLR 2024.\"}", "{\"summary\": \"This paper presents a new method for test-time training: at training time the model is trained to be Idempotent, and at test-time two copies of the models are leveraged (one frozen and one adapted) to encourage the model to be idempotent under distribution shifts. 
Experiments are carried out on different tasks to demonstrate how versatile the proposed method is.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main strengths of this paper:\\n\\n1- Simplicity of the approach: the method is easy to understand and to implement.\\n\\n2- The paper is generally well-written and easy to follow.\\n\\n3- The breadth of the experiments: The authors are commended for the variety of different tasks that they tested their proposed methods on.\", \"weaknesses\": \"The main weaknesses of this paper are:\\n\\n1- The method section, while simple, is a bit confusing. Why does the paper present two variants of IT$^3$? The experiments in section 4.6 and Table 1 clearly favor the online version of the method over the one with the frozen predictor. This raises the question of why the online version is not leveraged in all the experiments. Is there an experimental setup where the offline one is much better than the online one? Why doesn't the paper present one method and treat the other as a special case/variant that is less powerful?\\n\\n2- The main weakness of this paper is the lack of baselines in the experimental section. In all of the presented results, a very suboptimal version of TTT is compared against in **one experiment** only. This really questions how strong IT$^3$/IT$^3$-online is when compared against strong baselines. Here are some suggestions for necessary experiments:\\n\\n2a. Since TTT/TTT++[A] are directly comparable with IT$^3$, I suggest *at least* having them in all of the experiments with a one-to-one comparison in terms of batch size and model architecture. Also, consider adding the performance of IT$^3$ under batch size=1 in the Figure 3 results.\\n\\n2b. Another strong baseline that is suitable for both classification and regression tasks is ActMAD [B]. A direct comparison against this baseline is also necessary in all the presented experiments.\\n\\n2c.
Another line of work that is directly comparable is Test-Time Adaptation (TTA). TTA works under a more conservative setup where no control over the training process is assumed. It is also important to compare against the current SOTA TTA method EATA [C] or, more closely, the dataset distillation method from [D] to further demonstrate the superiority of the proposed method.\\n\\n2d. Since this is a 'test-time' method, a discussion on its computational requirements is necessary. How would the performance be when evaluated under computational constraints [E]?\\n\\n2e. Benchmarks used in this work are somewhat small scale. Experiments on larger benchmarks such as ImageNet-C [F] and ImageNet-3DCC [G] in the classification setting are necessary. Similar arguments follow for regression tasks where, for example, one can follow the object detection experiments from ActMAD. \\n\\n3- In section 4.1, I am not sure about the Distribution Shift introduced in this experiment. For instance, the performance of the non-optimized model does not constantly degrade. Why is zeroing out features a good way of modeling distribution shift instead of adding random noise to the features? Can you please comment on this and provide justification for your choice of distribution shift.\\n\\n4- Missing references: [B, C, D, E, F, G].\\n\\n[A] TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?, NeurIPS 2021\\n\\n[B] ActMAD: Activation Matching to Align Distributions for Test-Time-Training, CVPR 2023\\n\\n[C] Efficient Test-Time Model Adaptation without Forgetting, ICML 2022\\n\\n[D] Leveraging Proxy of Training Data for Test-Time Adaptation, ICML 2023\\n\\n[E] Evaluation of Test-Time Adaptation Under Computational Time Constraints, ICML 2024\\n\\n[F] Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, ICLR 2019\\n\\n[G] 3D Common Corruptions and Data Augmentation, CVPR 2022\", \"questions\": \"Please refer to the weaknesses section (especially point 2).
I am happy to raise my score if my concerns are resolved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"* **IT$^3$ as a method requires two sequential forward passes through the model, so in practice it can be slow and the authors do not discuss the efficiency of IT$^3$ relative to other works.**\\n\\nWe thank the reviewer for their question about computational costs. Test-time training (TTT) inherently induces additional computational overhead because optimization steps are performed during inference. In our method, while we require an additional inference step (two forward passes), this is computationally efficient since forward passes are generally 3\\u20134x faster than backward passes. Thus, the extra inference step adds relatively little overhead compared to the total cost of TTT methods. We added a discussion about efficiency to the revised paper.\\n\\nOur method typically requires only 1\\u20133 optimization steps, keeping the overall cost comparable to other well-known TTT methods. 
Below, we present a comparison of inference times and performance metrics with other popular baselines on the largest dataset used in our experiments:\\n\\n**Table 2.** Inference Time and Performance Comparison (OOD Airfoils)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time | 1\\u00d7 | 3\\u00d7 | 4\\u00d7 |\\n| MAE ($\\\\downarrow$) | 38.67 | 38.61 | 37.5 |\\n\\n**Table 3.** Inference Time and Performance Comparison (OOD Cars)\\n| | Base Model | ActMAD | $IT^{3}$ | \\n|------------|------------|------------|---------|\\n| Inference Time | 1\\u00d7 | 4\\u00d7 | 5\\u00d7 |\\n| MAE ($\\\\downarrow$) | 0.501 | 0.502 | 0.424 |\\n\\n**Table 4.** Inference Time and Performance Comparison (Roads)\\n| | Base Model | Standard TTT | ActMAD | $IT^{3}$ | \\n|------------|------------|--------------|---------|--------|\\n| Inference Time | 1\\u00d7 | 3\\u00d7 | 4.5\\u00d7 | 6\\u00d7 |\\n| Quality ($\\\\uparrow$) | 39.5 | 40.0 | 45.9 | 69.8 |\\n\\nWe will include these tables in the revised version. We recognize that our method, like other TTT approaches, introduces computational overhead during inference. This is a common limitation across existing TTT methods. Tackling this issue and developing TTT methods that eliminate such overhead entirely is an important research direction that holds potential for significant advancements in the field.\\n\\n* **For example, why not consider additional self-supervised tasks as discussed in Sun et al. (2020)?**\\n\\nWe appreciate the suggestion but believe it may not be the most appropriate approach, as the main distinction between different TTT methods lies precisely in the type of self-supervision loss they compute. While the core idea of TTT\\u2014to use some form of self-supervision\\u2014is generic, different TTT methods compete to design self-supervised losses that work more broadly and deliver better performance. 
Nevertheless, we took the reviewer's suggestion into account and, in some experiments below, slightly adapted baselines (e.g., modifying ActMAD for graph data) to ensure they are meaningful in the given context.\\n\\n* **The considered setting is rather constrained and I would encourage the authors to consider more diverse shifts, as that would better highlight the usefulness of $IT^3$. Why not add noise to the road segmentation task, in the same way that it was done for CIFAR 10? This could be a plausible real-world setting where there is a fault in the sensor, and thus the images come out corrupted.**\\n\\nIt is true that noise would be a realistic scenario. Our goal in making this choice was to have a diversity of corruptions / OOD types to showcase across the experiments. Noise was exemplified for the CIFAR data. For the road segmentation task, we leveraged a different dataset as the OOD source. Changing the dataset represents more realistic OOD scenarios, as different datasets differ in both image distributions\\u2014due to variations in sensors and environmental conditions\\u2014and in the geographical locations captured in satellite images. Such variations are more representative of practical challenges in road segmentation tasks than synthetic noise, providing a more meaningful evaluation of our approach's robustness.\"}", "{\"metareview\": \"This paper addresses the test-time training problem to handle distribution shifts. The paper proposes a method to learn a model that is idempotent. This is an interesting idea which the reviewers appreciated. 
The authors' rebuttal was considered and some of the reviewers also engaged in discussions with the authors.\\n\\nHowever, after the discussions, the reviewers still expressed several concerns, such as the lack of analysis regarding why idempotent training is helpful for out-of-distribution settings, as pointed out by Reviewer hxXL.\\n\\nReviewer ALWg raised some concerns about the extra inference time, and the authors reported some additional experiments and promised to include these results in the revised manuscript.\\n\\nReviewer x3JD expressed concerns regarding the lack of strong baselines and comparisons with TTA methods.\\n\\nI appreciate the authors' efforts to report some additional results in response to the reviewers' comments. However, the reviewers still had their reservations and, in its current state, even with these results considered, the paper does not appear ready to be accepted. The paper's idea is interesting, and incorporating the suggestions from the reviewers will definitely strengthen the paper. The authors are advised to address the issues and consider resubmitting to another venue.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta-review.\"}
The main difference is that in the base version, the network weights are reset to the state they were in at the end of the pre-training phase. This assumes that each input is an independent test with no correlation or shared information with other inputs. In this scenario, using updated weights from previous examples would be considered cheating, as also noted in prior work (e.g., [1]). \\n\\nIn contrast, the online version assumes some correlation and smooth transitions between inputs arriving in a stream, representing a continual learning regime. Here, instead of resetting after each input, we allow the weights to remain updated from previous inputs. We will add a clarification to the paper to address this distinction.\\n\\n* **Is there an experimental setup where the offline one is much better than the online one? Why doesn't the paper present one method and treat the other as a special case/variant that is less powerful?**\\n\\nPlease see the explanation for the above comment\\u2014the two versions are designed for two different scenarios. The online version is almost always better in terms of performance, as also noted in [1]. However, the online setup is not always feasible in real-world applications, particularly in cases where data arrives in isolated batches or where it is crucial to ensure that no information leaks from one input to another. In such cases, the offline version must be used to maintain these constraints.\\n\\nFor this reason, both versions are relevant and widely used in practice. In our experiments, we consider both setups as they correspond to two distinct real-world scenarios, allowing us to demonstrate the versatility and applicability of our method in varied contexts.\\n\\n* **TTT/TTT++[A] are directly comparable with IT$^3$, then I suggest to at least have them in all of the experiments with one-to-one comparison in terms of batch size and model architecture. 
Also, consider adding the performance of IT$^3$ under batch size=1 in Figure 3 results.**\\n\\nTTT and TTT++ are both domain-specific. Nonetheless, we include new experiments with these datasets in the next comment. Both TTT and TTT++ are designed for image data and technically cannot be applied to other types. Even within the image domain, they are unsuitable for some data types. For instance, for aerial photos, they may not work well as they rely on predicting orientation, which is poorly defined for aerial photography.\\n\\nThank you for suggesting adding batch-size=1 to the CIFAR figure. We have openly stated in the experiments section, the figure caption, and the limitations section that batch-size=1 for CIFAR is currently unsuccessful. Additionally, we note that in the original TTT, while only a single example is used, a batch of 32 augmentations is constructed to update the weights.\"}", "{\"title\": \"Thank you for the clarification\", \"comment\": \"Dear authors\\n\\nI would like to thank you once again for the efforts put into addressing my comments. The additional experiments on ActMAD, along with the reasoning on its limited capability, address (partially) that concern. One extra ablation needed for this concern is to assess ActMAD under a large batch size to avoid its failure mode.\", \"regarding_the_comparison_with_tta_methods\": \"I still disagree in this regard. Under the online TTT setup, TTA is strictly a more conservative setup than online TTT, making it necessary to compare under the online setup.\", \"regarding_the_adaptation_of_visual_ttt_methods\": \"while indeed this requires additional effort, I believe that constructing strong baselines is an essential part of making a strong paper. The current experimental setup, under the lack of strong baselines, cannot show how superior and generalizable the proposed method is.\\n\\nThat being said, I am inclined to raise my score from 3 to 5. 
I am still not in favor of accepting this paper in the current version due to the unresolved concerns, such as the lack of strong baselines and the comparison against TTA methods (under the online setup).\"}", "{\"comment\": \"* **Inference Time and ActMAD Comparison**\\n\\nWe appreciate your suggestion to analyze inference time across methods. Following your recommendation, we conducted an additional experiment with ActMAD, increasing its number of optimization steps. This adjustment led to ActMAD requiring **8.5\\u00d7, 7\\u00d7, and 5\\u00d7 the inference time of the base model**, compared to IT$^3$\\u2019s **6\\u00d7, 5\\u00d7, and 4\\u00d7** for the aerial imaging, car aerodynamics, and airfoils tasks, respectively. Even under these conditions, IT$^3$ significantly outperforms ActMAD in accuracy and robustness. These results will be included in the updated version of the paper. This further emphasizes IT$^3$\\u2019s balance of efficiency and effectiveness, even when accounting for computational costs.\\n\\n**Table 2.** Inference Time and Performance Comparison (OOD Airfoils)\\n| | Base Model | ActMAD | ActMAD (higher complexity) | $IT^{3}$ |\\n|------------|------------|------------|---------|-|\\n| Inference Time | 1\\u00d7 | 3\\u00d7 | 5\\u00d7 | 4\\u00d7 |\\n| MAE ($\\\\downarrow$) | 38.67 | 38.61 | 38.60 | 37.5 |\\n\\n**Table 3.** Inference Time and Performance Comparison (OOD Cars)\\n| | Base Model | ActMAD | ActMAD (higher complexity) | $IT^{3}$ |\\n|------------|------------|------------|---------|-|\\n| Inference Time | 1\\u00d7 | 4\\u00d7 | 7\\u00d7 | 5\\u00d7 |\\n| MAE ($\\\\downarrow$) | 0.501 | 0.502 | 0.502 | 0.424 |\\n\\n**Table 4.** Inference Time and Performance Comparison (Roads)\\n| | Base Model | Standard TTT | ActMAD | ActMAD (higher complexity) | $IT^{3}$ | \\n|----------------------|------------|--------------|---------|----------------------------|--------|\\n| Inference Time | 1\\u00d7 | 3\\u00d7 | 4.5\\u00d7 | 8.5\\u00d7 | 6\\u00d7 |\\n| Quality ($\\\\uparrow$) | 39.5 | 40.0 | 45.9 | 46.3 | 69.8 |\\n\\n* **Comparison with TTA 
Methods**\\n\\nThank you for raising the importance of TTA comparisons. We acknowledge that TTA and TTT have overlapping goals but operate under different assumptions. From one perspective, TTA can be seen as more constrained, as it assumes no control over the training process. On the other hand, TTT imposes its own form of strictness by resetting the model after every instance (outside the online variant) and treating each input as entirely independent.\", \"this_independence_in_ttt_creates_a_unique_challenge\": \"the method cannot retain information between test examples, even when they might share correlations or dependencies. In contrast, TTA benefits from access to data streams or batches during inference, allowing it to exploit temporal or structural relationships between inputs. IT$^3$\\u2019s online variant leverages such relationships, leading to higher performance in that setting, but we emphasize that it addresses a different scenario.\\n\\nTo date, no prior TTT work has compared directly to TTA methods, largely because these paradigms are tailored to distinct challenges. However, we recognize the importance of acknowledging these distinctions and will add clarifications to this effect in the revised paper.\\n\\n* **\\\"A Variety of Different Tasks\\\" \\u2013 IT$^3$\\u2019s Generality**\\n\\nWe appreciate your observation of IT$^3$\\u2019s ability to operate across \\\"a variety of different tasks.\\\" This reflects its **core strength**: IT$^3$ is a method designed to function across all modalities, architectures, and tasks without requiring task-specific adaptations.\\n\\nWe do not claim that no domain-specific method could outperform IT$^3$ in specific tasks; indeed, highly tailored approaches may excel in certain settings (as noted in the limitations section in the paper). However, IT$^3$ offers unmatched **general applicability**, eliminating the need for task-specific tuning. 
This universal applicability sets IT$^3$ apart, functioning out of the box while maintaining robustness and consistency across diverse datasets and architectures.\\n\\nAmong TTT works, ActMAD is indeed the most general approach. However, it relies on batch statistics, which limits its applicability in small or single-instance batch scenarios. Even within this constraint, we have directly compared to ActMAD wherever possible and achieved favorable results.\", \"this_generality_is_a_fundamental_distinction\": \"IT$^3$ should not be evaluated as simply another method competing for top results in specific benchmarks but as a **general framework** for test-time adaptation across all modalities, architectures, and tasks.\\n\\n* **Paper Updates**\\n\\nWe will incorporate the new results, along with all suggested revisions and additional clarifications, in the camera-ready version. Thank you for helping us improve the clarity and rigor of our work.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We sincerely thank the reviewer for their thoughtful feedback on our work. We appreciate their recognition of the simplicity and elegance of our approach, as well as its intuitive incorporation of idempotence into TTT. We value the reviewer\\u2019s insights and have carefully addressed all comments and concerns, as outlined below.\\n\\n---\\n\\n* **In regression tasks, it is unclear how the model differentiates this zero from an actual label of 0.**\\n\\nWe thank the reviewer for their important comment. The special 0 notation used does not represent an actual zero. Instead, it is a unique signal with the same dimensions as the labels, specifically chosen to differentiate it from actual label values. Our approach builds upon the ZigZag method from [1], where the choice of this signal is extensively discussed and justified. 
We understand that this notation may appear confusing, so we will revise the paper to include a clearer explanation to avoid any ambiguity.\\n\\n* **Online appears to rely significantly on the use of Exponential Moving Average (EMA). However, the authors did not provide a citation for this technique.**\\n\\nThank you for pointing this out. We will add a citation for the mentioned paper [2]. It is indeed related and analyzes EMA. The origin of the technique is much older. We also want to point out that the reliance on EMA is not as strong as it may appear. The most important difference is not resetting the weights after processing each input. Actually, in some cases we did set $\\\\alpha=1$.\\n\\n* **In the first experiment (Section 4.1 on tabular data), the method of randomly substituting features with zeros in the test set may resemble a missing data scenario rather than a true distribution shift.**\\n\\nThank you for making this distinction. Our goal in making this choice was to have a diversity of corruptions to showcase across the experiments. While acknowledging the distinction you make, we note that missing information constitutes a divergence from the training distribution and can therefore also be thought of as a distribution shift.\\n\\n* **There are unnecessary abbreviations throughout the manuscript. Figures 14 to 16 are not mentioned or referred to in the main text. Figures 12 and 15 have the exact same title. Some minor improvements and spelling corrections**\\n\\nThank you for carefully reviewing and helping us improve our paper. These issues will be fixed in the revised paper.\\n\\n* **Do you use a special encoding or placeholder value instead of 0 for regression tasks?**\", \"please_see_the_explanation_above\": \"it is a special signal rather than an actual zero. 
For example, for tasks like aerodynamic predictions, we use placeholder values significantly different from the possible range of lift-to-drag and drag values to ensure clear differentiation.\\n\\n* **If the purpose of the neural \\\"don't know\\\" zero is purely for contrast and its specific value is not important, will it be more computationally efficient to use a representative value such as the median?**\\n\\nThank you for the suggestion. However, the specific value is actually important. The model needs to clearly differentiate between the two inference modes: with and without additional information provided. For this reason, we use values that are significantly different from potential predictions. Additionally, this question pertains to the ZigZag [1] approach rather than the $IT^{3}$ method. We follow the guidelines outlined in the ZigZag paper, where this choice is extensively discussed.\\n\\n* **Could you provide guidance on selecting the value of or share experimental results demonstrating performance across different values?**\\n\\nThank you for the suggestion. The ZigZag [1] paper provides detailed guidance on selecting placeholder values, emphasizing the importance of using values significantly different from valid predictions to distinguish inference modes effectively. Our experiments align with their findings, and we refer readers to their discussion for further details.\\n\\n* **In EMA [2], when $\\\\alpha=1$, the online IT$^3$ aligns with the offline version, and when $\\\\alpha=0$, it encounters the collapse issue described in Section 3.2. Could you provide guidance on selecting the value of $\\\\alpha$ or share experimental results demonstrating performance across different values?**\\n\\nThe online version differs from the base version because they are designed for different scenarios. The main difference is that in the base version the network weights are reset back to the state they were in at the end of the pre-training phase. 
The assumption is that each input is a separate test and carries no correlation with or information about other inputs. The online version, by contrast, is made for the scenario of correlated inputs and somewhat smooth transitions between inputs arriving in a stream. This is a continual learning regime. Therefore, in this case, instead of resetting after each input, we leave the weights updated from the previous inputs. We have added a clarification to the paper.\"}", "{\"comment\": \"* **Extending Generality to New Tasks**\\n\\nThis emphasis on adaptability relates directly to your observation regarding the challenges of adapting domain-specific methods to new tasks. IT$^3$ represents a significant departure: it is designed to function seamlessly across tasks, modalities, and architectures without requiring substantial adaptation.\\n\\nWe do not claim that no domain-specific method could outperform IT$^3$ in specific tasks; indeed, highly tailored approaches may excel in certain settings (as noted in the limitations section of the paper). However, IT$^3$ offers unmatched **general applicability**, eliminating the need for task-specific tuning. This universal applicability sets IT$^3$ apart, functioning out of the box while maintaining robustness and consistency across diverse datasets and architectures.\\n\\nAmong TTT works, ActMAD is indeed the most general approach. However, it relies on batch statistics, which limits its applicability in small or single-instance batch scenarios. Even within this constraint, we have directly compared to ActMAD wherever possible and achieved favorable results.\", \"this_generality_is_a_fundamental_distinction\": \"IT$^3$ should not be evaluated as simply another method competing for top results in specific benchmarks but as a **general framework** for test-time adaptation across all modalities, architectures, and tasks.\\n\\n* **Paper Updates**\\n\\nWe apologize for the delay in incorporating updates into the current PDF. 
All new results, references, and clarifications will be included in the camera-ready version. Thank you for your constructive feedback, which has been instrumental in refining our work.\"}", "{\"comment\": \"Dear Reviewer Tinn,\\n\\nThank you for dedicating your time to reviewing our paper and providing valuable feedback.\\n\\nWe have thoughtfully addressed each of your comments, offering detailed responses to clarify and resolve the concerns you raised. We hope our explanations have provided a clearer perspective on our work and its contributions.\\n\\nIf you feel that we have adequately addressed your concerns, we would be grateful if you would consider reassessing your rating.\\n\\nWe would be happy to clarify or elaborate on any part of our paper while the discussion period is still open.\\n\\nThank you!\"}", "{\"comment\": \"Dear Reviewer x3JD,\\n\\nThank you for dedicating your time to reviewing our paper and providing valuable feedback.\\n\\nWe have thoughtfully addressed each of your comments, offering detailed responses to clarify and resolve the concerns you raised. We hope our explanations have provided a clearer perspective on our work and its contributions.\\n\\nIf you feel that we have adequately addressed your concerns, we would be grateful if you would consider reassessing your rating.\\n\\nWe would be happy to clarify or elaborate on any part of our paper while the discussion period is still open.\\n\\nThank you!\"}" ] }
0hc7iQLhCt
HessianGrad: Optimizing AI Systems with Hessian-Aware Textual Gradients
[ "Peiyan Zhang", "Haibo Jin", "Leyang Hu", "Xinnuo Li", "Liying Kang", "Man Luo", "Yangqiu Song", "Haohan Wang" ]
Recent advancements in large language models (LLMs) have significantly enhanced the ability of LLM-based systems to perform complex tasks through natural language processing and tool interaction. However, optimizing these LLM-based systems for specific tasks remains challenging, often requiring manual interventions like prompt engineering and hyperparameter tuning. Existing automatic optimization methods, such as textual feedback-based techniques (e.g., TextGrad), tend to focus on immediate feedback, analogous to using first-order derivatives in traditional numerical gradient descent. However, relying solely on first-order derivatives can be limited when the gradient is either very small or fluctuates irregularly, which may slow down or stall optimization. To address these limitations, better adaptation in regions with small or fluctuating gradients is necessary. Second-order gradient methods, which incorporate the Hessian matrix, offer a promising solution by enabling more precise adjustments. Inspired by this, in this paper, we introduce HessianGrad, a novel optimization method that leverages textual feedback and tracks the iterative evolution of LLM system responses across iterations, leading to more dynamic and adaptive optimization. We evaluate the effectiveness of HessianGrad on three tasks: prompt optimization, solution optimization, and code optimization. Experimental results demonstrate that HessianGrad consistently improves performance across all three tasks, achieving a **7.8%** improvement in prompt optimization, a **20.72%** gain in solution refinement, and a **29.17%** increase in code optimization compared to baselines, highlighting its adaptability and effectiveness in optimizing LLM-based systems.
[ "LLM", "Prompt Optimization", "Gradient Descent" ]
Reject
https://openreview.net/pdf?id=0hc7iQLhCt
https://openreview.net/forum?id=0hc7iQLhCt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yWvi4uheQf", "xJoDEH2gX3", "ttMHBGBiCN", "rxU6Dzk2DS", "inAkPqrzpn", "aq1WiySfX3", "Wzy1JcRPED", "VGPORiMw0V", "Rw8e7cgLiE", "2OpEtBZA3P", "124rhp913v" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730632685447, 1732541465439, 1730684759454, 1732541144529, 1734756914301, 1732538298663, 1737524052840, 1730695977579, 1732539345552, 1732540763068, 1732540108109 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10426/Reviewer_eYUA" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ], [ "ICLR.cc/2025/Conference/Submission10426/Reviewer_2jwz" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ], [ "ICLR.cc/2025/Conference/Submission10426/Area_Chair_HQ8P" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10426/Reviewer_adza" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ], [ "ICLR.cc/2025/Conference/Submission10426/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a new LLM-based textual optimization method that takes the evolution of LLM systems responses across iterations into account. Improvements are achieved on multiple textual optimization tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper writing is clear and easy to understand.\\n2. The proposed optimization method achieves considerable improvements on a variety of tasks.\", \"weaknesses\": \"1. Although the authors have written about the difference between momentum-based methods and HessianGrad, the novelty of transferring the focus from feedback similarity to response similarity is somewhat weak. 
The authors should include more convincing ablation experiments to verify that tracking the dynamics of responses is more effective.\\n\\n2. The second-order Hessian formulation cannot provide sufficient theoretical support for the optimization framework. The relationship between tracking feedback and tracking responses is not comparable to the relationship between first-order and second-order optimization.\", \"questions\": \"Please respond to the concerns in the \\\"Weaknesses\\\" part.\\n\\nQ1. The motivation (line 223~226) of introducing the similarity function does not match well with the second-order optimization theory. How can the similarity of the responses provide second-order information?\\n\\nQ2. The concrete definition of the similarity function on line 244 is meant to connect with the formulation of second-order optimization. However, this definition is contrary to the motivation in line 226 (\\\"more gradual and thoughtful evolution of the response over multiple iterations\\\"), since larger similarity means larger fluctuation between successive steps, according to this definition. Moreover, this definition is actually focusing on changes in feedback ($L(r(p_t))$) instead of response ($r(p_t)$), and this is a point that contradicts the motivation of this paper. Please clarify how the definition aligns with the stated motivation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
These trends, we believe, serve a similar purpose to second-order information by reflecting the optimization trajectory\\u2019s progression, conceptually akin to how second-order derivatives capture curvature and directionality in numerical optimization.\\n\\nBy tracking response similarity, the function identifies stagnation (low similarity changes) or instability (large fluctuations), enabling HessianGrad to make thoughtful adjustments. We think this richer signal helps prevent premature convergence or erratic updates, stabilizing the optimization process. Empirical results demonstrate smoother trajectories, the ability to escape local optima, and improved task performance, validating the practical utility of this approach even if its connection to second-order theory is conceptual rather than strict.\\n\\n**4. The concrete definition of the similarity function on line 244 is meant to connect with the formulation of second-order optimization. However, this definition is contrary to the motivation in line 226 (\\\"more gradual and thoughtful evolution of the response over multiple iterations\\\"), since larger similarity means larger fluctuation between successive steps, according to this definition.**\\n\\nThe similarity function measures the degree of change between successive responses, where larger values indicate greater fluctuations. We believe this design does not contradict the motivation of promoting gradual and thoughtful evolution. Instead, it enables adaptive updates: larger similarity values signal instability, prompting stabilization, while smaller values indicate stagnation, encouraging further refinement.\\n\\nIn our view, this approach aligns conceptually with second-order principles by capturing cumulative trends in response evolution, similar to how second-order derivatives evaluate curvature and directional changes. This allows for controlled, iterative refinement of responses throughout the optimization process.\\n\\n**5. 
Moreover, this definition is actually focusing on changes in feedback ($L(r(p_{t}))$) instead of response ($r(p_t)$), and this is a point that contradicts the motivation of this paper. Please clarify how the definition aligns with the stated motivation.**\\n\\nThe similarity function evaluates the textual loss $L(r(p_t))$, which measures the quality of the response $r(p_t)$ in aligning with task objectives. Since $L(r(p_t))$ is directly dependent on $r(p_t)$, changes in $L(r(p_t))$ reflect changes in the response. We believe this ensures the similarity function inherently tracks response evolution over iterations, supporting thoughtful and meaningful adjustments in line with the paper\\u2019s motivation.\\n\\nIn contrast, feedback refers to the gradient or adjustment direction ($\\\\frac{\\\\partial L(r(p_t))}{\\\\partial p_t}$) provided by the evaluator LLM. Feedback describes how prompts should change but does not directly measure how responses evolve. For instance, feedback may remain constant even if responses improve, or responses could stagnate while feedback fluctuates due to minor prompt adjustments.\\n\\nIn our view, by focusing on response evolution rather than feedback, the similarity function aligns more closely with the goal of gradual and thoughtful optimization, which we consider central to our framework.
The authors conduct empirical experiments on three tasks: prompt optimization, solution optimization, and code optimization. The conclusion is that the proposed new method outperforms the naive TextGrad method and the Momentum-Enhanced TextGrad method across four mainstream models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well written with a clear presentation of the new algorithm, and the experiments are rigorous and repeatable.\", \"weaknesses\": [\"Overall, this paper may have an algorithmic contribution, but supplemental details are required on both theoretical and experimental aspects.\", \"1. Theoretical aspect:\", \"When defining the similarity $\\\\mathcal{S}$ of the prompt before and after each iteration, you do not multiply it by any hyperparameters and directly add it to the original TextGrad to obtain the HessianGrad. Consider adding two hyperparameters, $\\\\beta_1$ and $\\\\beta_2$, to the two terms of HessianGrad, similar to AdamW. Could you conduct an ablation study on the effect of adding these hyperparameters, or provide justification for why they were not included in the current formulation?\", \"You use the $L_1$ norm to define $\\\\mathcal{S}$; however, when measuring semantic similarity, cosine similarity is more commonly used, as it signifies that the loss $\\\\mathcal{L}$ before and after an iteration is closer in direction, while $L_1$ primarily signifies that $\\\\mathcal{L}$ is closer in numerical value. It is recommended to provide a rationale for choosing the $L_1$ norm as a similarity metric. (Simple reasons are acceptable, such as \\\"the $L_1$ norm is easier to compute\\\"). Could you compare the performance of your method using the $L_1$ norm versus cosine similarity, or provide empirical justification for why the $L_1$ norm was chosen over other common similarity metrics?\", \"2. 
Experimental aspect:\", \"Please provide the loss curves for HessianGrad, M-TextGrad, TextGrad, and CoT of their iterative processes for a representative example from each of the three tasks (prompt optimization, solution optimization, and code optimization) to demonstrate the effect of \\\"HessianGrad Escaping Local Optima\\\" as shown in Figure 1.\", \"Calculating HessianGrad typically requires more computational resources. Could you provide a detailed comparison of computational resources (GPU memory, runtime) for HessianGrad versus the baseline methods across all three tasks.\"], \"questions\": \"1. In general computational frameworks, HessianGrad is computed by adding a small quantity in the iteration direction and recalculating the gradient once, followed by taking the finite difference of the gradients. From a practical standpoint, can the direct finite difference version of similarity $\\\\mathcal{S}(r(p_t), r(p_{t-1})) = \\\\frac{|| \\\\mathcal{L}(r(p_t)) - \\\\mathcal{L}(r(p_{t-1}))||}{||p_t - p_{t-1}||}$ save computational costs (since $\\\\mathcal{L}(r(p_{t-1}))$ and $p_{t-1}$ are both values computed from the previous iteration) and achieve similar effects?\\n2. There is a typo in Equation (3) in Section 3. The second-order partial derivative should be denoted as $\\\\frac{\\\\overset{\\\\sim}{\\\\partial}^2\\\\mathcal{L}}{\\\\overset{\\\\sim}{\\\\partial}p_t^2}$ rather than $\\\\frac{\\\\overset{\\\\sim}{\\\\partial}^2\\\\mathcal{L}}{\\\\overset{\\\\sim}{\\\\partial}^2p_t}$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1/2\", \"comment\": \"Thank you for your careful reading and thoughtful reviews. Let us address your comments below.\\n\\n**1. 
Although the authors have written about the difference between momentum-based methods and HessianGrad, the novelty of transferring the focus from feedback similarity to response similarity is somewhat weak. The authors should include more convincing ablation experiments to verify that tracking the dynamics of responses is more effective.**\\n\\nThank you for your thoughtful comments. Below, we clarify how HessianGrad differs from momentum-based methods like M-TextGrad and why tracking response dynamics is more effective.\\n\\nHessianGrad fundamentally focuses on response dynamics rather than feedback similarity. Momentum-based methods adjust step sizes based on feedback similarity, which can lead to instability in complex response landscapes. In contrast, HessianGrad tracks response evolution over iterations, enabling stable and controlled updates without relying on abrupt step-size adjustments.\\n\\nTracking response dynamics provides a richer signal by capturing subtle changes in responses, even when feedback appears stagnant. This approach allows HessianGrad to achieve smoother optimization trajectories, escape local optima, and perform robustly in challenging tasks.\\n\\nMoreover, we have added an empirical analysis of loss curves on solution optimization tasks (Google-proof QA, MMLU-Machine Learning, and MMLU-College Physics) to further validate this approach. Please refer to the blue part in Section 4.2 for details (the text shown in blue color in the revised manuscript is our revision). 
With six iterations, the textual loss curves (using test accuracy as a proxy) reveal:\\n* Escaping Local Optima: As shown in Figure 2 (b), HessianGrad surpasses performance plateaus through cumulative response dynamics.\\n* Stabilizing Updates: Unlike baselines that exhibit oscillations and instability (e.g., M-TextGrad in Figure 2 (c)), HessianGrad achieves smoother optimization trajectories.\\n* Improved Performance in Challenging Scenarios: Proxy loss curves in all figures highlight HessianGrad\\u2019s ability to navigate meaningful adjustments, achieving higher test accuracy over iterative refinements.\\n\\nThese results demonstrate that tracking response dynamics offers significant advantages over feedback-driven methods, addressing complex optimization challenges effectively.\\n\\n\\n**2. The second-order Hessian formulation can not provide sufficient theoretical support for the optimization framework. The relationship between tracking feedback and tracking responses is not comparable to first-order and second-order optimization.**\\n\\nWe appreciate the reviewer\\u2019s comments and would like to clarify our perspective. HessianGrad does not aim to replicate numerical second-order derivatives. Instead, we believe it is inspired by second-order optimization principles and operationalizes their practical effects\\u2014such as escaping local optima and stabilizing updates\\u2014within the context of textual optimization.\\n\\nWe think the relationship between feedback and response tracking in HessianGrad is conceptual rather than strict. Feedback tracking reacts to immediate gradients, which is similar to first-order methods, while response tracking captures broader trends in optimization, akin to the cumulative information associated with second-order derivatives. This richer signal, in our view, allows HessianGrad to guide updates that are both precise and stable.\\n\\nWe believe the empirical results support this approach. 
Across tasks, HessianGrad achieves smoother optimization trajectories and outperforms momentum-based methods, demonstrating that tracking response dynamics is practically effective and aligned with second-order-inspired principles, even if not strictly theoretical.\"}", "{\"metareview\": \"The paper introduces HessianGrad, a new optimization method inspired by second-order derivatives, to improve textual optimization in LLMs. It focuses on leveraging response trajectory tracking to refine outputs iteratively. While the idea is promising and shows empirical improvements over baselines, the reviewers raised significant concerns. These include limited novelty, as the method appears to refine existing meta-prompt strategies without substantial innovation. The theoretical justification for using response similarity as a proxy for second-order optimization is weak and lacks rigor. Additionally, the experimental evaluation is narrow, with insufficient benchmarks and limited exploration of computational trade-offs, such as runtime and scalability. I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, the authors clarified that HessianGrad is not a numerical second-order derivative but a response tracking framework, addressing concerns about novelty and theoretical rigor. They provided additional experiments, including comparisons with ProTeGi, and demonstrated computational efficiency through faster convergence. These responses partially address the reviewers' concerns.\"}", "{\"title\": \"Response 1/2\", \"comment\": \"Thank you for your careful reading and thoughtful reviews. Let us address your comments below.\\n\\n**1. Besides the analogy provided in Eq. 2 and Eq. 3, the core implementation of HessianGrad is to include a meta-prompt that encourages LLM to reflect over multiple turns as shown in Pg. 12. 
However, this is arguably analogous to the real Hessian matrix that the work is trying to deliver.**\\n\\nThanks for noticing this. First of all, it is important to clarify that the HessianGrad framework introduced in our work is not a numerical computation of the Hessian matrix, as seen in traditional optimization. Instead, it represents a textual analogy within the framework of textual optimization. Our goal is not to compute second-order derivatives in the mathematical sense but to simulate second-order effects in iterative response optimization, such as capturing cumulative response dynamics and improving stability.\\n\\n\\nMoreover, we believe the characterization of HessianGrad as merely a meta-prompt for reflection oversimplifies its core implementation. HessianGrad explicitly integrates response trajectory tracking via a similarity function, combining the structured first-order feedback (TextGrad) with trajectory insights. This enables it to identify stagnation or instability and refine responses effectively, distinguishing it from basic prompting strategies.\\n\\n**2. Moreover, whether LLM is able to capture the second-order phenomenon is also questionable and lacks justification in this work.**\\n\\nWe appreciate the opportunity to clarify the role of HessianGrad in achieving second-order effects. HessianGrad does not rely on the intrinsic ability of LLMs to numerically compute second-order derivatives. Instead, we believe these effects are achieved through the framework\\u2019s design, which integrates structured feedback and trajectory tracking to simulate second-order-inspired behavior in textual optimization. This design enables HessianGrad to capture cumulative response dynamics and guide stable, thoughtful updates.\\n\\n\\nTo address the concern about justification, we think empirical evidence supports the practical realization of second-order-inspired behavior. 
We have added an empirical analysis of loss curves on solution optimization tasks across three datasets\\u2014Google-proof QA, MMLU-Machine Learning, and MMLU-College Physics. Please refer to the blue part in Section 4.2 for details (the text shown in blue color in the revised manuscript is our revision).\", \"we_observed_the_following_key_effects\": \"* Escaping Local Optima: As shown in Figure 2 (b), HessianGrad surpasses performance plateaus through cumulative response dynamics.\\n* Stabilizing Updates: Unlike baselines that exhibit oscillations and instability (e.g., M-TextGrad in Figure 2 (c)), HessianGrad achieves smoother optimization trajectories.\\n* Improved Performance in Challenging Scenarios: Proxy loss curves in all figures highlight HessianGrad\\u2019s ability to navigate meaningful adjustments, achieving higher test accuracy over iterative refinements.\\n\\n\\nWe believe these results validate that the second-order effects are simulated and operationalized by the framework\\u2019s design, rather than being dependent on the LLM\\u2019s intrinsic numerical second-order capabilities. \\n\\n\\nWe welcome further discussion and feedback to improve this understanding. \\n\\n**3. The actual technical contribution of this work is to provide a more refined meta-prompt to the original TextGrad\\u2019s meta-prompt. First, the contribution of the refined meta-prompt appears to be limited. the novelty of this work is limited and appears more as an incremental improvement on TextGrad.**\\n\\nWe appreciate the reviewer\\u2019s perspective and would like to clarify that HessianGrad goes beyond being a refined meta-prompt for TextGrad. In our view, the primary technical contribution of HessianGrad lies in its explicit focus on tracking response dynamics across iterations. 
While TextGrad relies primarily on immediate feedback to guide updates, HessianGrad models response evolution trends over time, enabling thoughtful adjustments that are essential for escaping stagnation and ensuring stability during optimization.\\n\\nAlthough the framework is inspired by second-order principles, we believe the novelty of HessianGrad lies in leveraging response trajectory analysis as a practical and scalable alternative to conventional second-order techniques. This approach is specifically tailored for textual optimization tasks and addresses challenges that TextGrad alone may not fully resolve.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"Existing automatic optimization methods only focus on immediate feedback, which can be easily trapped by the local optima. HessianGrad is analogous to second-order derivative methods by taking into account the feedback over multiple iterations. Experimental results show consistent gains of HessianGrad in prompt optimization, solution refinement, and code optimization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The core idea addresses a critical limitation of iterative/reflective methods that only focus on immediate feedback.\\n\\n2. This work covers a wide array of recent literature, and the presentation is clear.\", \"weaknesses\": \"1. Besides the analogy provided in Eq. 2 and Eq. 3, the core implementation of HessianGrad is to include a meta-prompt that encourages LLM to reflect over multiple turns as shown in Pg. 12. However, this is arguably analogous to the real Hessian matrix that the work is trying to deliver. Moreover, whether LLM is able to capture the second-order phenomenon is also questionable and lacks justification in this work.\\n\\n2. The actual technical contribution of this work is to provide a more refined meta-prompt to the original TextGrad\\u2019s meta-prompt. 
First, the contribution of the refined meta-prompt appears to be limited. Second, the OPRO work [1] has also included similar meta-prompts to reflect over multiple turns by feeding the iterative optimization trajectory into the context of LLM. Therefore, the novelty of this work is limited and appears more as an incremental improvement on TextGrad.\\n\\n3. The selected baselines in main experiments are also questionable. Most baselines are TextGrad and M-TextGrad. However, for each task, competitive baselines, e.g. ProTeGi [2] in prompt optimization, are not compared.\", \"questions\": \"1. Could you provide justifications for the difference between your optimizer prompt and OPRO\\u2019s meta-prompt [1] for prompt optimization?\\n\\n[1] Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q. V., Zhou, D., & Chen, X. (2023). Large Language Models as Optimizers (No. arXiv:2309.03409). arXiv. http://arxiv.org/abs/2309.03409\\n\\n[2] Pryzant, R., Iter, D., Li, J., Lee, Y. T., Zhu, C., & Zeng, M. (2023). Automatic Prompt Optimization with \\u201cGradient Descent\\u201d and Beam Search (No. arXiv:2305.03495). arXiv. http://arxiv.org/abs/2305.03495\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 2/2\", \"comment\": \"**4. Second, the OPRO work [1] has also included similar meta-prompts to reflect over multiple turns by feeding the iterative optimization trajectory into the context of LLM. Could you provide justifications for the difference between your optimizer prompt and OPRO\\u2019s meta-prompt [1] for prompt optimization?**\\n\\nWe appreciate the reviewer\\u2019s request for clarification. Below, we detail the differences between HessianGrad\\u2019s optimizer prompt and OPRO\\u2019s meta-prompt:\\n\\n1. Task Specificity\\n\\nOPRO\\u2019s meta-prompt is task-specific, designed exclusively for prompt optimization. 
For instance, it instructs the model to generate instructions with higher scores by feeding solution-score pairs into the LLM:\\n\\n> Generate an instruction that is different from all the instructions <INS> above and has a higher score than all the instructions <INS> above.\\n\\nIn contrast, HessianGrad\\u2019s optimizer prompt is task-agnostic and explicitly designed to iteratively refine any variable (not limited to prompt optimization) by modeling response evolution. For example:\\n> Focus on adjusting the variable in a way that each step introduces thoughtful, measured changes based on past iterations, rather than drastic shifts.\\n\\nThis task-agnostic design enables HessianGrad to handle a broader range of textual optimization tasks, such as solution optimization and code generation, beyond just prompt optimization.\\n\\n\\n2. Explicit Modeling of Iterative Dynamics\\n\\nWhile OPRO\\u2019s meta-prompt incorporates solution-score pairs into the context, reflecting an iterative process, it does not explicitly analyze response evolution dynamics across iterations. Instead, OPRO relies on mechanisms like temperature tuning to balance exploration and refinement, which introduces stochasticity and potential instability in updates.\\n\\nHessianGrad, on the other hand, explicitly tracks how responses evolve across iterations using a <PAST_ITERATIONS> section:\\n> Reflect on how the responses to this variable have evolved across iterations: <PAST_ITERATIONS>{past_values}</PAST_ITERATIONS>.\\n\\nThis explicit modeling allows HessianGrad to refine updates based on cumulative trends, stabilizing optimization or escaping stagnation as needed. By integrating response trajectory analysis and textual gradients, HessianGrad achieves a deterministic balance between refinement and exploration, ensuring stability and coherence.\\n\\n3. 
Fine-Grained Output Examples\\n\\nIn prompt optimization tasks, OPRO generates concise instructions such as:\\n> Let\\u2019s solve this problem step-by-step.\\n\\nWhile effective for basic tasks, these outputs lack the detailed, structured reasoning necessary for more complex scenarios. In contrast, HessianGrad generates fine-grained, structured prompts tailored to the task\\u2019s complexity. For example, in BBH_Object Counting, HessianGrad produces:\\n> You will answer a reasoning question. If the task is complex or unclear, begin by restating it to confirm understanding. Explicitly list each item and its quantity in a structured format like .... Ensure clarity by stating the number of each type of object before summing them up. Perform calculations in ..., explaining the addition process ..., Verify your results by ..., and consider using ... for confirmation. Provide the final answer in .... Avoid unnecessary details and ensure ... relevant to the question.\\n\\nThis level of detail showcases HessianGrad\\u2019s ability to refine prompts iteratively, ensuring higher-quality outputs that go beyond the capabilities of OPRO\\u2019s meta-prompt.\\n\\n\\n**5. The selected baselines in main experiments are also questionable. Most baselines are TextGrad and M-TextGrad. However, for each task, competitive baselines, e.g. ProTeGi [2] in prompt optimization, are not compared.**\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the inclusion of additional competitive baselines. ProTeGi primarily focuses on prompt optimization and does not explicitly address optimizing prompts for smaller models using feedback from larger models. 
To adapt ProTeGi for comparison, we followed this approach:\\n\\n* Optimization Engine: We used GPT-4o as the optimization engine to perform inference on the training set.\\n* Few-Shot Example Selection: High-quality inference examples from GPT-4o were selected as few-shot examples for the smaller model, Llama-3.1-8B.\\n\\nWe incorporated ProTeGi into the prompt optimization task. The results (accuracy %) are shown in the following table:\\n| Dataset | CoT | ProTeGi | TextGrad | M-TextGrad | HessianGrad |\\n| --- | --- | --- | --- | --- | --- |\\n| Object Counting | 65.0 | 68.0 | 77.0 | 80.0 | 83.0 |\\n| GSM8k | 84.6 | 84.6 | 84.6 | 84.6 | 84.6 |\\n\\nWe can observe that on GSM8k with Llama-3.1, both ProTeGi and HessianGrad show limited performance gains, likely due to model saturation. However, on the Object Counting dataset, HessianGrad outperforms ProTeGi by effectively leveraging feedback from stronger models like GPT-4o, demonstrating superior optimization for weaker, cost-effective models such as GPT-3.5-turbo-0125.\"}", "{\"title\": \"Response 2/2\", \"comment\": \"**4. Calculating HessianGrad typically requires more computational resources. Could you provide a detailed comparison of computational resources (GPU memory, runtime) for HessianGrad versus the baseline methods across all three tasks.**\\n\\nWe appreciate the reviewer\\u2019s suggestion. We have added a detailed comparison of computational resources for HessianGrad versus baseline methods across all three tasks. Please refer to Section 4.5 for the full result table. (The text shown in blue color in the revised manuscript highlights our revisions.)\\n\\nWe observe that while HessianGrad involves slightly higher per-iteration runtime due to its second-order optimization-inspired design, it converges in fewer iterations, resulting in significant overall savings. 
The detailed findings are as follows:\\n* On the Object Counting dataset, HessianGrad reduces total runtime by 50\\\\% compared to TextGrad by converging in fewer iterations despite slightly higher per-iteration costs.\\n* For the solution optimization task, HessianGrad achieves 26.14\\\\% lower total runtime than TextGrad, while M-TextGrad incurs 77.65\\\\% higher runtime due to instability.\\n* For the code optimization task, HessianGrad reduces total runtime by 16.67\\\\% compared to baselines.\\n* For GPU memory usage, HessianGrad demonstrates similar requirements to baseline methods, indicating no significant increase in computational resources.\\n\\nThese findings demonstrate that, despite higher per-iteration runtime, HessianGrad\\u2019s faster convergence ensures practical efficiency and stability, making it computationally advantageous across tasks.\\n\\n**5. In general computational frameworks, HessianGrad is computed by adding a small quantity in the iteration direction and recalculating the gradient once, followed by taking the finite difference of the gradients. From a practical standpoint, can the direct finite difference version of similarity $S$ save computational costs (since $L(r(p_{t-1}))$ and $p_{t-1}$ are both values computed from the previous iteration) and achieve similar effects?**\\n\\nWe thank the reviewer for this insightful suggestion. We would like to clarify that recalculating gradients using finite differences is not applicable within HessianGrad\\u2019s framework. HessianGrad operates without explicit numerical gradients or directional updates, relying instead on the LLM\\u2019s implicit processing capabilities to handle updates.\\n\\nThis design choice ensures computational efficiency by avoiding additional recalculations while still achieving second-order optimization-inspired effects. 
Empirical results validate the effectiveness of this approach, demonstrating that HessianGrad delivers robust optimization outcomes without requiring finite difference methods.\\n\\n**6. There is a typo in Equation (3) in Section 3.**\\n\\nWe thank the reviewer for the careful reading. We have updated it in the revision.\"}", "{\"title\": \"Response 1/2\", \"comment\": \"Thank you for your careful reading and thoughtful reviews. Let us address your comments below.\\n\\n**1. When defining the similarity function of the prompt before and after each iteration, you do not multiply it by any hyperparameters and directly add it to the original TextGrad to obtain the HessianGrad. Consider adding two hyperparameters, $\\\\beta_1$ and $\\\\beta_2$, to the two terms of HessianGrad, similar to AdamW. Could you conduct an ablation study on the effect of adding these hyperparameters, or provide justification for why they were not included in the current formulation?**\\n\\nWe thank the reviewer for the insightful suggestion regarding the inclusion of hyperparameters $\\\\beta_{1}$ and $\\\\beta_{2}$ in HessianGrad. We would like to clarify that HessianGrad operates entirely within the textual optimization domain, where updates are implicitly handled by the LLM. Unlike numerical optimizers such as AdamW, HessianGrad does not involve explicit numerical calculations or parameterized updates. Introducing hyperparameters like $\\\\beta_{1}$ and $\\\\beta_{2}$ would require determining their magnitudes and fine-tuning them, which could introduce subjectivity and complexity to the framework.\\n\\nOur approach avoids explicit hyperparameters because HessianGrad leverages the LLM\\u2019s contextual understanding to implicitly balance feedback (TextGrad) with response trajectory tracking. 
Empirical results demonstrate that HessianGrad achieves effective optimization in its current formulation, validating the framework\\u2019s robustness without requiring additional hyperparameters.\\n\\n**2. You use the $L_1$ norm to define $S$, however, when measuring semantic similarity, cosine similarity is more commonly used as it signifies that the loss L before and after iteration is closer in direction, while $L_{1}$ primarily signifies that L is closer in numerical value. It is recommended to provide a rationale for choosing the $L_{1}$ norm as a similarity metric. (Simple reasons are acceptable, such as \\\"$L_{1}$ norm is easier to compute\\\"). Could you compare the performance of your method using $L_1$ norm versus cosine similarity, or provide empirical justification for why $L_1$ norm was chosen over other common similarity metrics?**\\n\\nWe appreciate the reviewer\\u2019s question regarding the use of the $L_1$ norm versus cosine similarity. HessianGrad does not explicitly compute numerical similarity metrics, such as $L_1$ or cosine similarity, as all comparisons and updates are implicitly handled within the LLM\\u2019s textual framework. This design eliminates the need for explicit numerical calculations, and as a result, a direct performance comparison between $L_1$ and cosine similarity is not applicable to our implementation.\\n\\nIn the theoretical framework, we chose the $L_1$ norm to describe $S$ for its conceptual clarity and simplicity. The $L_1$ norm intuitively represents the magnitude of change in textual loss across iterations, which aligns with the goal of tracking response evolution. On the other hand, cosine similarity focuses on directional alignment, which is less relevant in the context of textual optimization tasks, where maintaining semantic and contextual coherence is a higher priority.\\n\\nWe hope this explanation clarifies our rationale and are happy to provide further details if needed.\\n\\n**3. 
Please provide the loss curves for HessianGrad, M-TextGrad, TextGrad, and CoT of their iterative processes for a representative example from each of the three tasks (prompt optimization, solution optimization, and code optimization) to demonstrate the effect of \\\"HessianGrad Escaping Local Optima\\\" as shown in Figure 1.**\\n\\nWe appreciate the reviewer's suggestion to provide the loss curves to demonstrate the simulation of second-order effects in HessianGrad. We have added an empirical analysis of loss curves on solution optimization tasks across three datasets\\u2014Google-proof QA, MMLU-Machine Learning, and MMLU-College Physics. Please refer to the blue part in Section 4.2 for details (the text shown in blue color in the revised manuscript is our revision).\", \"we_observed_the_following_key_effects\": [\"Escaping Local Optima: As shown in Figure 2 (b), HessianGrad surpasses performance plateaus through cumulative response dynamics.\", \"Stabilizing Updates: Unlike baselines that exhibit oscillations and instability (e.g., M-TextGrad in Figure 2 (c)), HessianGrad achieves smoother optimization trajectories.\", \"Improved Performance in Challenging Scenarios: Proxy loss curves in all figures highlight HessianGrad\\u2019s ability to navigate meaningful adjustments, achieving higher test accuracy over iterative refinements.\", \"We believe these results validate that the second-order effects are simulated and operationalized by the framework\\u2019s design, demonstrating the effect of \\\"HessianGrad Escaping Local Optima\\\" as shown in Figure 1.\"]}" ] }
0h6v4SpLCY
Universal generalization guarantees for Wasserstein distributionally robust models
[ "Tam Le", "Jerome Malick" ]
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that generalization guarantees of robust models based on the Wasserstein distance have generalization guarantees that do not suffer from the curse of dimensionality. However, these results are either approximate, obtained in specific cases, or based on assumptions difficult to verify in practice. In contrast, we establish exact generalization guarantees that cover a wide range of cases, with arbitrary transport costs and parametric loss functions, including deep learning objectives with nonsmooth activations. We complete our analysis with an excess bound on the robust objective and an extension to Wasserstein robust models with entropic regularizations.
[ "generalization guarantees", "optimal transport", "distributionally robust optimization", "nonsmooth analysis" ]
Accept (Spotlight)
https://openreview.net/pdf?id=0h6v4SpLCY
https://openreview.net/forum?id=0h6v4SpLCY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yjamuzixsx", "qcAdYXq5Qs", "UB7x1jbnJK", "TFMMA6w7RY", "SF5HnFQRqZ", "NecMRzUaX9", "IBgU0FT6ZI", "AJk2wGzK5J", "5UxCNRgZh6" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "decision", "official_review" ], "note_created": [ 1732185370229, 1732184910111, 1730716014309, 1732185021100, 1734558280941, 1730698003097, 1732184949384, 1737524065187, 1730512700429 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10606/Authors" ], [ "ICLR.cc/2025/Conference/Submission10606/Authors" ], [ "ICLR.cc/2025/Conference/Submission10606/Reviewer_tSiX" ], [ "ICLR.cc/2025/Conference/Submission10606/Authors" ], [ "ICLR.cc/2025/Conference/Submission10606/Area_Chair_VJTG" ], [ "ICLR.cc/2025/Conference/Submission10606/Reviewer_tsmX" ], [ "ICLR.cc/2025/Conference/Submission10606/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10606/Reviewer_hWjP" ] ], "structured_content_str": [ "{\"title\": \"Response to minor points\", \"comment\": \"We now continue with your minor points. We have addressed all of them in the revision. Please find our answer to the most important ones below.\\n\\n\\n### Minor points\\n> The definition of Wasserstein distance circa (1) is incorrect, \\n$\\\\Xi$ must be Polish, and $c$ must be a power $p \\\\geq 1$\\n of a metric topologizing $\\\\Xi$\\n; what you write is just some transport problem [...]\\n\\n**On the terminology ''Wasserstein distance'' (1):** Your are right, writing \\\"Wasserstein distance\\\" in our case is a slight abuse of terminology. WDRO may be studied for general costs (see e.g. [2]), under our Assumption 2.1. Note that we also require $c(\\\\xi,\\\\zeta)$ to be zero if and only if $\\\\xi = \\\\zeta$. As you say, this setting does not imply $W_c$ to be a metric. 
We now use the terminology \\\"optimal transport cost\\\" as in [2] to avoid any confusion.\\n\\n\\n> **Assumption 1 vs. Line 145** You say that $\\\\Xi$ is just a measurable space, then later you say it's a compact metric space [...]\\n\\n\\nThe section notation was indeed a bit general for the main text. We modified it to make it simpler, and we wrote a general notation part in the appendix for the proofs. In the main text we now define the spaces $\\\\Xi$, $\\\\mathcal{F}$ and the cost $c$ right from the beginning in the notation part. We kept their specificities (such as compactness or continuity) in Assumption 2.1 to facilitate the comparison of our setting with the literature, and make the presentation as transparent as possible. \\n\\n\\n> Line 69: \\\"This theoretical feature is specific to WDRO and highlights its potential to give more resilient models.\\\" This can be much less hand-wavy. Please explain more precisely/mathematically. \\n\\n\\nThis relates to the previous sentence, which already describes the inequality (3). We reorganized both sentences to make it clearer.\\n\\n\\n\\n\\n\\n> Line 176: Why jointly Lipschitz? [...]\\n\\nThis is a good remark, there was an omission here. Joint Lipschitz continuity ensures $\\\\mathcal{I}\\\\_{\\\\mathcal{F}}$ is finite; this is now added in the revision. $\\\\mathcal{I}\\\\_{\\\\mathcal{F}} < \\\\infty$ can also be satisfied when assuming $f(\\\\cdot,\\\\xi)$ $L$-Lipschitz for all $\\\\xi \\\\in \\\\Xi$. For simplicity we assume joint Lipschitz continuity since we cannot think of a relevant situation where $\\\\Xi$, $\\\\Theta$ would be compact but $f$ not jointly Lipschitz continuous.\\n\\n\\n> Line 223: There are many more references of the use of this type of metric\\n\\nThis is indeed a natural metric, which exists in many contexts other than WDRO. \\n\\n> Assumption 1: Why call (2) jointly continuous [...]\\n\\nYes, but this is continuity on the product space $\\\\Xi \\\\times \\\\Xi$. 
We use the terminology joint continuity to avoid any confusion with continuity with respect to each variable.\\n\\n\\n\\n[2] Jose Blanchet and Karthyek Murthy. Quantifying distributional model risk via optimal transport. Mathematics of Operations Research, 44(2):565\\u2013600, 2019.\"}", "{\"comment\": \"Thank you very much for your positive feedback, all your comments/remarks, and your two main questions above. It helps us improve our presentation, especially regarding the positioning w.r.t. Azizian et al. (2023a), which is the closest work.\\n\\nIn the revision, we now provide a discussion, just below our main assumptions (Assumption 2.1), where we compare our setting to Azizian et al. (2023a) in detail. This complements the key differences, already mentioned in the related work section (lines 120 to 130). \\n\\n\\n**Let us recall here our improvements, compared to Azizian et al. (2023a):**\\n- The setting of Azizian et al. (2023a) restricts to smooth functions $f \\\\in \\\\mathcal{F}$ (twice differentiable with uniformly bounded derivatives on a convex sample space). We only require each $f \\\\in \\\\mathcal{F}$ to be continuous on a metric space. In addition to nonsmooth functions, this allows us to consider distributions on sample spaces with discrete and continuous variables (as for e.g. classification tasks). \\n\\n\\n\\n- Their proof requires taking $c$ as the squared norm and the reference distribution $\\\\pi_0(\\\\cdot|\\\\xi)$ as a Gaussian distribution. We consider instead general costs $c$, continuous with respect to a distance on $\\\\Xi$ and an arbitrary reference probability distribution.\\n\\n\\n\\nFor instance, this setting is captured by us but not by Azizian et al. 
(2023a):\\n\\n> (i) The sample space $\\\\Xi = B(0,R) \\\\times \\\\{0,1\\\\}$ where $R > 0$\\n>\\n> (ii) The loss family $\\\\mathcal{F} = \\\\\\\\{ f(\\\\theta, \\\\cdot) \\\\ : \\\\ \\\\theta \\\\in \\\\Theta \\\\\\\\}$ with the cross entropy loss $$f(\\\\theta, x,y) = - y \\\\log(h(\\\\theta,x)) - (1 - y) \\\\log(1 - h(\\\\theta,x))$$ \\n where $h(\\\\theta, \\\\cdot)$ is a feedforward network with ReLU activations and $\\\\Theta$ is compact. \\n>\\n> (iii) The cost function: $c((x,y), (x',y')) = \\\\|x-x'\\\\|_{p}^{q} + \\\\kappa \\\\mathbb{1}\\\\_{y \\\\neq y'}$\\n\\n\\n\\n\\n- Moreover, in their proof, to overcome nonsmoothness of WDRO (which poses concerns for applying concentration results), they require two technical assumptions: a compactness condition (1) and growth conditions around maximizers (2); this is their **Assumption 5**:\\n\\n\\n(1) For any $R > 0$, there exists $\\\\Delta > 0$ such that\\n $$\\\\forall f \\\\in \\\\mathcal{F}, \\\\; \\\\forall \\\\zeta \\\\in \\\\Xi, \\\\; d\\\\left(\\\\zeta, \\\\operatorname{argmax} f\\\\right) \\\\geq R \\\\implies f(\\\\zeta) - \\\\max f \\\\leq -\\\\Delta.$$\\n \\n\\n(2) There exist $\\\\mu > 0$ and $L > 0$ such that, for all $f \\\\in \\\\mathcal{F}$, $\\\\xi \\\\in \\\\Xi$ and $\\\\xi^*$ a projection of $\\\\xi$ on $\\\\operatorname{argmax} f$,\\n $$f(\\\\xi^*) \\\\geq f(\\\\xi) + \\\\frac{\\\\mu}{2} \\\\|\\\\xi - \\\\xi^*\\\\|^2 - \\\\frac{L}{6} \\\\|\\\\xi - \\\\xi^*\\\\|^3.$$\\n \\nWe do not rely on these conditions. They are rather strong and difficult to verify since maximizers over the sample space are hard to control in general and both depend on the sample space and the function class geometries. In particular, (1) requires $\\\\mathcal{F}$ to be compact with respect to a distance defined by summing the sup norm and the Hausdorff distance between maximizer sets. 
Equivalently, this can be seen as the continuity of $f \\\\mapsto \\\\operatorname{argmax} f$ on $\\\\mathcal{F}$ (see our Proposition F.4 in appendix), which is hard to verify.\\n\\n\\n\\nThe main technical difficulty in our proof was thus to get rid of Assumption 5 from Azizian et al. (2023a) and to deal with the nonsmooth aspect of the WDRO objective. To this purpose, we simplified the proof and used nonsmooth analysis tools. We highlight this aspect in the sketch of proof, Section 4.2 \\\"Definition of the lower bound\\\", where we present a maximal radius function.\\n\\n\\n\\n**About your smaller question:** Indeed, the assumption is vacuous in this case and we may take $\\\\omega = 1$. We decided to keep the constant $\\\\omega$ in the results in order to highlight the dependence of $\\\\lambda_{\\\\text{low}}$ on the hypothesis domain. To make our statements more precise, we added the condition $\\\\omega \\\\geq 1$ in Assumption 3.1.1.\"}", "{\"summary\": \"This paper presents novel bounds for the DRO loss using the Wasserstein distance. In particular, they address the question of finding the minimal $\\\\rho$ used by the empirical robust loss such that the loss is an upper bound for the actual population loss. The main challenge is to overcome the dependency on the distance $W(P_n, P) \\\\sim n^{-1/d}$ between the empirical and true measures. While this problem has been studied in the literature, and dimension free bounds exist, this paper presents a proof requiring weaker assumptions.\\n\\n\\n\\n\\n---------- after the rebuttal ------------\\n\\nI thank the authors for their response and increased the score accordingly.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important problem in generalization bounds/theoretic ML. In particular:\", \"The paper is well written and the results are nicely presented.\", \"The proof sketch in Section 4 is excellent. 
It is very easy to follow and often neglected in these types of papers\", \"The proof idea is smart, non-trivial and interesting.\"], \"weaknesses\": \"Given that this is a more traditional field, I would expect a clearer comparison with the existing works. While the authors do a very good job in presenting the proof idea, it is not so clear how the proof fundamentally differs from existing works.\", \"questions\": \"I am happy to increase my score and support this paper with high confidence if the authors can provide an extensive discussion during the rebuttal on the assumptions in Azizian et al. (2023a). In particular, my two major questions are: can the authors be more precise in which cases their assumptions are weaker than the ones in Azizian et al. (2023a)? In particular, can you give an example for a class of distributions that are covered by this paper but not by Azizian et al. (2023a)? Moreover, can the authors explain why the proof in Azizian et al. (2023a) breaks for your assumptions and why it is not trivial to extend the proof?\", \"smaller_question\": \"Isn't assumption 3.1 (1) always satisfied by w<=1? Is it possible that this is a typo?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"-\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to main questions\", \"comment\": \"Thank you for your reading and positive feedback. Your several remarks helped us improve the paper, and we addressed each of them in our revision. Please find below our answers to the important points from your review.\\n\\n\\n### Questions\\n\\n> Why not submit to JMLR? The paper is very rigorous and rather long and technical for an ML conference? You also examine the problem in good detail.\\n\\nYes, we tried to propose a thorough study. We acknowledge that certain parts of our work are technical. 
We have made an effort to structure them rigorously and have placed them in the appendices. In the main text, we have instead focused on the context, the results, and the key ideas that underpin their derivation. We believe that this material will be valuable to a significant portion of the community, whether they work on theoretical or practical aspects.\\n\\n\\n> Could you provide a simple example in Theorem 3.2, where the optimal coupling is known under (say) Gaussianity assumptions?\", \"this_is_not_evident_even_for_simple_examples\": \"the optimal coupling depends on $Q$ which can be any distribution satisfying $W_c^\\\\tau(P,Q) \\\\leq \\\\rho$. In order to gain more interpretable results, approximation results might be established to quantify how close the right-hand side is to the exact risk. Considering existing works on this aspect, see e.g. [1], we believe this would require more structure on $c$. This is a relevant and nontrivial question.\\n\\n> I'm a bit confused. What does $\\\\operatorname{argmax}_\\\\Xi f$ mean in (5) a sup norm or something?\\n\\nThis is the set of maximizers of $f$ on $\\\\Xi$, the points attaining the maximum of $f$ on $\\\\Xi$, the ``argmax''. This now appears in the notation.\\n\\n> Why is $\\\\operatorname{min} c(\\\\xi,\\\\zeta) ...$ measurable? \\n\\nAs you say, this can be given by the measurable Maximum Theorem. In our proofs, we did not discuss measurability of every function because it was often satisfied through stronger notions such as continuity or semicontinuity (for instance to prove Lemma 4.1 in appendix D.1).\\n\\n\\n> Each result assumes that the (difficult to interpret) $\\\\rho\\\\_{\\\\text{crit}}$ is \\\"large enough\\\". Can you please provide a general set of conditions ensuring that $\\\\rho\\\\_{\\\\text{crit}}$ can be bounded away from 0.\\n\\nThis is quite abstract, indeed. 
We interpret it as follows: when $P$ is supported on $\\\\Xi$, $\\\\rho_{\\\\text{crit}} > 0$ if and only if $\\\\mathcal{F}$ contains no constant function. We have a proposition in the appendix establishing this; we have added a pointer to it in the main text, for more clarity.\\n\\n\\n> In theorem 3.2, why is $\\\\pi^{P,Q} \\\\ll \\\\pi_0$? \\n\\nThis is due to the definition of the KL divergence. This is now mentioned as a footnote.\\n\\n\\n\\n\\n> Is it fair to compare, verbally, our results to those of Fournier et al [...] ?\\n\\n\\nFournier and Guillin's concentration result can indeed be improved when some structure can be leveraged. In this paper, we consider instead the general case, with no restriction to specific distributions; this is the setting of the seminal work of Mohajerin Esfahani and Kuhn (2018), which relies on Fournier and Guillin for the statistical results. In our work, our take is that, in this general setting, we can still gain in sample complexity, not by using the structure of the distributions, but rather by leveraging the structure of the WDRO optimization problem itself.\\n\\n\\n\\n[1] Waiss Azizian, Franck Iutzeler, and Jerome Malick. Regularization for Wasserstein distributionally robust optimization. ESAIM: Control, Optimisation and Calculus of Variations, 29:33, 2023b.\"}", "{\"metareview\": \"This paper analyzes distributionally robust optimization, which effectively addresses data uncertainty and distribution shifts in training robust machine learning models. Existing generalization guarantees are often approximate or limited to specific cases with hard-to-verify assumptions. 
This paper establishes exact generalization guarantees that apply to a broad range of scenarios, including deep learning models with nonsmooth activations, and provides an excess bound on the robust objective along with an extension to Wasserstein robust models with entropic regularizations.\\n\\nThe paper considers an important problem, and is generally well written (including the proofs and theorem statements). The rebuttal helped a lot in clarifying the advantage of the new result over the existing ones. Please polish the revised PDF and make sure that the discussions from the rebuttal and discussion period are properly incorporated.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal has been noted by the reviewers and has been taken into account by the AC in the recommendation of acceptance/rejection.\"}", "{\"summary\": \"This paper provides exact generalization guarantees for Wasserstein Distributionally Robust Optimization (WDRO) for a wide variety of models with compactness and finite Dudley's entropy assumptions. The results apply to radius $\\\\rho$ scaling as $O(1/\\\\sqrt{n})$, which does not suffer from the curse of dimensionality. The results also cover the double regularization case.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The generalization guarantees of this work do not rely on restrictive assumptions like smoothness compared to the previous work (Azizian et al. 2023a).\", \"This paper is well-structured, and the theoretical results and proof sketches are clearly presented.\"], \"weaknesses\": [\"In Section 3.2, the authors discussed how their results on generalization guarantees apply to linear regression and logistic regression. However, more complicated models such as neural networks with ReLU or with smooth activation functions (e.g. GELU) are not discussed.\", \"The theoretical results require a lower bound on $n$, while Theorem 3.4 of Azizian et al. 
(2023a) applies to all $n \\\\ge 1$. The implications of this requirement should be discussed.\"], \"questions\": [\"What are the practical implications of the generalization guarantees compared to Azizian et al. (2023a)? Can you provide some numerical results analogous to Appendix H of Azizian et al. (2023a)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive feedback. Please find our response to your several questions below.\\n\\n\\n> In Section 3.2, the authors discussed how their results on generalization guarantees apply to linear regression and logistic regression. However, more complicated models such as neural networks with ReLU or other smooth activation functions (e.g. GELU) are not discussed.\\n\\n\\nWe underline that more complicated models, e.g. neural networks with ReLU, are covered by our general analysis, in contrast to the previous ones.\\n\\n This being said, we agree that obtaining theoretical or empirical insights into the constants $\\\\lambda_{\\\\text{low}}$ and $\\\\rho_{\\\\text{crit}}$ in deep learning contexts would be interesting. This question remains open and deserves a dedicated study.\\n\\n---\\n> The theoretical results require a lower bound on $n$, while Theorem 3.4 of Azizian et al. (2023a) applies to all \\n$n \\\\geq 1$. The implications of this requirement should be discussed.\\n\\n\\nThis is a good technical point to raise, thank you. Theorems 3.1 and 3.4 of Azizian et al. (2023a) also (implicitly) require such lower bounds. Let us explain why.\\n\\nTheorems 3.1 and 3.4 from Azizian et al. (2023a) require choosing $\\\\rho$ as \\n$$\\\\frac{\\\\alpha}{\\\\sqrt{n}}\\\\leq \\\\rho \\\\leq \\\\frac{\\\\rho_{\\\\text{crit}}}{2} - \\\\frac{\\\\beta}{\\\\sqrt{n}}$$\\n(here we replaced their big O notation by some constants $\\\\alpha >0$ and $\\\\beta > 0$). 
However, this range can be empty with no further condition on $n$. Only for $n$ large enough can $\\\\rho$ be chosen in the interval $[\\\\frac{\\\\alpha}{\\\\sqrt{n}}, \\\\frac{\\\\rho_{\\\\text{crit}}}{2} - \\\\frac{\\\\beta}{\\\\sqrt{n}}]$. This gives the condition $\\\\frac{\\\\alpha}{\\\\sqrt{n}} \\\\leq \\\\frac{\\\\rho_{\\\\text{crit}}}{2} - \\\\frac{\\\\beta}{\\\\sqrt{n}}$ and then the lower bound $n \\\\geq 4 (\\\\alpha + \\\\beta)^2/\\\\rho_{\\\\text{crit}}^2$. In this case, as we did, we may also remove the upper bound on $\\\\rho$ by monotonicity of the robust loss with respect to $\\\\rho$. We added a comment on this point in the appendix. \\n\\n \\n---\\n> What are the practical implications of the generalization guarantees compared to Azizian et al. (2023a)? Can you provide some numerical results analogous to Appendix H of Azizian et al. (2023a)?\\n\\n\\nIt would indeed be great to have numerical illustrations of generalization properties in the general setting we consider. This is however a difficult point that would require complete and rigorous work. This is out of the scope of this theoretical paper. Among the many challenges, efficient practical procedures to compute WDRO models in deep learning are still missing. This is how we conclude the paper, insisting on future work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"summary\": \"The proposed paper provides lower bounds on the robust empirical risk under unorthodox but interesting scaling limits on the radius of the Wasserstein ball around the empirical measure. The paper uses some cool techniques which are not often seen in machine learning.\\n\\nThe paper is relatively clearly written. However, I think there are a few little things here and there which are either difficult to justify (in the current form) or perhaps not well-defined (see below). 
Also, the introduction is excessively general while the setting rapidly collapses to a much more specific setting shortly after.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well written, interesting, and theoretical and provides very nice lower bounds on the robust empirical risk. The results are nice, and so is the use of set-valued analysis to derive them. Several relevant examples are considered, making a large portion of how these results can be used nearly transparent.\", \"weaknesses\": [\"Nevertheless, I think some of the assumptions are a bit opaque (see below), and I'm not certain some quantities are well-defined.\", \"*Minor*\", \"Citing Cuturi and Peyr\\u00e9's book is odd when mentioning the Wasserstein distance. Perhaps the original source or a book on optimal transport such as Villani's book would be more natural, IMO.\", \"The definition of Wasserstein distance circa (1) is incorrect, $\\\\Xi$ must be Polish, and c *must* be a power $p\\\\in [1,\\\\infty)$ of a *metric* topologizing $\\\\Xi$; what you write is just some transport problem. E.g. if c is not symmetric, then $W_c$ is not a metric in general, or if $c(x,y)=0$ for all $x,y$ then $W_c$ cannot separate points.\", \"Perhaps \\\"suitable\\\" distributions is more appropriate before (1), since the distance explodes if these have no finite moment.\", \"Line 65: bad grammar: \\\"it does not introduce approximate term\\\" also imprecise.\", \"Line 66: Is Wainwright's book and Boucheron the best reference? Perhaps older papers, e.g. on VC dimension, Bartlett's old papers on Rademacher complexity, or old papers on chaining are more natural references?\", \"Line 69: \\\"This theoretical feature is specific to WDRO and highlights its potential to give more resilient models.\\\" This can be **much** less hand-wavy. Please explain more precisely/mathematically.\", \"Line 149: In a metric space $(x,..)$ not \\\"In (X,..) 
a metric space\\\".\", \"Assumption 1 vs. Line 145: You say that $\\\\Xi$ is just a measurable space, then later you say it's a compact metric space. Why not be forthright and say it's a metric space on line 145. Similarly, why is $\\\\mathcal{F}$ an arbitrary family of functions, then straightaway after it is actually a compact set of continuous functions.\", \"Line 176: Why jointly Lipschitz? If $\\\\Theta$ is compact, then since you already assumed $\\\\Xi$ is compact, then it is enough for $\\\\Theta\\\\times\\\\Xi\\\\ni (\\\\theta,\\\\xi)\\\\mapsto f(\\\\theta,\\\\xi)\\\\in \\\\mathbb{R}$ to be continuous; to deduce the compactness of $\\\\{f(\\\\theta,\\\\cdot):\\\\,\\\\theta \\\\in \\\\Theta\\\\}$ by the currying Lemma.\", \"Line 176: Not sure why you say \\\"if $\\\\Xi$ is compact\\\", since this was assumed a few lines earlier on the same page.\", \"Maybe more natural examples come from Arzela-Ascoli...\", \"Should the definition of the Dudley entropy integral really be in a footnote, while more basic ideas are in the main text?\", \"Line 221: The words \\\"the metric\\\" are missing.\", \"Line 223: There are many more references of the use of this type of metric, especially in exponential convergence rate results for Markov chains (wrt $W_1$ over countable metric spaces with this distance).\", \"Line 245: \\\"sample randomness\\\" (I know what you mean...but the word independent is misleading as this\", \"Assumption 1: Why call (2) jointly continuous, it is just standard continuity (actually uniform continuity by compactness).\"], \"questions\": [\"Why not submit to JMLR? The paper is very rigorous and rather long and technical for an ML conference? You also examine the problem in good detail.\", \"Could you provide a simple example in Theorem 3.2, where the optimal coupling is known under (say) Gaussianity assumptions?\", \"I'm a bit confused. 
What does $\\\\operatorname{argmax}_{\\\\Xi}\\\\,f$ mean in (5)? A sup norm or something?\", \"Why is $\\\\min\\\\{ c(\\\\xi,\\\\zeta): ... \\\\}$ measurable? In particular, (independent of the meaning of the argmax, above question), why is there a measurable selection $\\\\xi\\\\mapsto \\\\zeta$? Without this, it's not clear that $\\\\rho_{\\\\operatorname{crit}}$ is well defined. I'm guessing this is Berge's theorem (which is in Aliprantis & Border) somehow, but please spell it out for us :)\", \"Each result assumes that the (difficult to interpret) $\\\\rho_{\\\\operatorname{crit}}$ is \\\"large enough\\\". Can you please provide a general set of conditions ensuring that $\\\\rho_{\\\\operatorname{crit}}$ can be bounded away from $0$.\", \"Is it fair to compare, verbally, our results to those of Fournier et al. (and similar bounds, say, found in [1])? Since you are considering a small ball around the empirical measure while their results guarantee a minimal radius such that the ball around the empirical measure contains the true measure whp. Furthermore, those rates are only tight (afaik) when the measure is very spread out; more precisely, it is Ahlfors $d$-regular, see e.g. [3] for a nice clean proof.\", \"In theorem 3.2, why is $\\\\pi^{P,Q}\\\\ll \\\\pi_0$? To me this isn't directly evident... I.e.\\\\ why is the RHS not trivially $-\\\\infty$ in general?\", \"[1] Graf, Siegfried, and Harald Luschgy. Foundations of quantization for probability distributions. Springer Science & Business Media, 2000.\", \"[2] Otto, Felix, and C\\u00e9dric Villani. \\\"Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality.\\\" Journal of Functional Analysis 173.2 (2000): 361-400.\", \"[3] Kloeckner, Benoit. 
\\\"Approximation by finitely supported measures.\\\" ESAIM: Control, Optimisation and Calculus of Variations 18.2 (2012): 343-359.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0gqCIaBRQ9
Regularized DeepIV with Model Selection
[ "Zihao Li", "Hui Lan", "Vasilis Syrgkanis", "Mengdi Wang", "Masatoshi Uehara" ]
In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. While recent advancements in machine learning have introduced flexible methods for IV estimation, they often encounter one or more of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) requiring a minimax computation oracle, which is highly unstable in practice; (3) the absence of a model selection procedure. In this paper, we analyze a Tikhonov-regularized variant of the seminal DeepIV method, called Regularized DeepIV (RDIV) regression, that can converge to the least-norm IV solution and overcome all three limitations. RDIV consists of two stages: first, we learn the conditional distribution of covariates, and by utilizing the learned distribution, we learn the estimator by minimizing a Tikhonov-regularized loss function. We further show that RDIV allows model selection procedures that can achieve the oracle rates in the misspecified regime. When extended to an iterative estimator, we prove that RDIV matches the current state-of-the-art convergence rate. Furthermore, we conducted numerical experiments to justify the efficiency of RDIV empirically. Our results provide the first rigorous guarantees for the empirically well-established DeepIV method, showcasing the importance of regularization, which was absent from the original work.
[ "Nonparametric estimator", "instrumental variables", "model selection", "causal inference." ]
Reject
https://openreview.net/pdf?id=0gqCIaBRQ9
https://openreview.net/forum?id=0gqCIaBRQ9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x326fvyNvk", "x2G3vGuFws", "lAuWleJ5Bi", "epyZtG50iA", "bMXnxptYot", "Xh8gQA7lzn", "X482y2M8d4", "VDFTiahgSK", "OpWsVsrLZI", "BzZkVf73fy", "9tBqjry3vb" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "meta_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732142731516, 1732742292121, 1729973432333, 1730812254279, 1734760768528, 1737523669215, 1732142295005, 1730749518797, 1732142340655, 1732142182900, 1730351385486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4902/Authors" ], [ "ICLR.cc/2025/Conference/Submission4902/Authors" ], [ "ICLR.cc/2025/Conference/Submission4902/Reviewer_P9wu" ], [ "ICLR.cc/2025/Conference/Submission4902/Reviewer_MFbt" ], [ "ICLR.cc/2025/Conference/Submission4902/Area_Chair_YW95" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4902/Authors" ], [ "ICLR.cc/2025/Conference/Submission4902/Reviewer_q5qB" ], [ "ICLR.cc/2025/Conference/Submission4902/Authors" ], [ "ICLR.cc/2025/Conference/Submission4902/Authors" ], [ "ICLR.cc/2025/Conference/Submission4902/Reviewer_HEwd" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your review!\", \"comment\": \">**Q1**: The transformation between one-stage and two-stage algorithms and analysis is in general only a technical problem and has been discussed in different places like DualIV, and the L2 regularization itself makes the model selection easier (with the strong convexity).\\n\\n**A1**: Thank you for your comment! We agree that in one-stage minimax optimization problems, such as DualIV, the population loss shares the same estimator as two-stage methods, as the former is a dual transformation of the latter. 
However, we respectfully disagree with the assertion that \\u201cthe transformation between one-stage and two-stage algorithms is redundant for general function approximation\\u201d.\\n\\nConverting IV estimation to its dual form (e.g., DualIV [1], AGMM [2]) introduces a minimax optimization framework, which is **theoretically intractable in computation and empirically challenging** for important function classes such as random forests or neural networks. Note that DualIV only studies kernel function approximation. Our contribution is that we rigorously prove that RDIV achieves comparable statistical convergence rates to minimax methods while being easily **applicable to general function classes**, since it only requires single-level optimization, which has been widely studied and is well known to be tractable for function classes such as neural networks, thereby enjoying both theoretical rigor and practical feasibility.\\n\\nWe also confirm with the reviewer that $L_2$ regularization plays a crucial role in our analysis for both the RDIV method and its model selection procedure. However, we would like to disagree that such regularization makes the model selection trivial. **Previous literature ([2][3]) studying minimax estimators for IV regression also incorporates $L_2$ regularization, but no guarantee for model selection has been established.** This is because it is hard to characterize the error caused by the model misspecification of the dual function in the minimax optimization. On the other hand, RDIV only relies on single-level optimization and thus enjoys a more tractable model selection. To our knowledge, RDIV is the first method that enjoys a provably guaranteed model selection procedure. \\n\\n>**Q2**: To convert the MLE guarantee into an L2 guarantee, the authors assumed a minimum density on the conditional density. 
What are the benefits/drawbacks compared with the conditional mean embedding-based methods (although it also requires some assumptions like HS operators)?\\n\\n**A2**:\\nThank you for bringing this to our attention! In the following, we will explain the motivation behind this technical assumption, provide a detailed comparison with mean embedding-based methods in [6], and show that our method is more general. \\n\\n(a) We confirm that we do assume a lower bound for the density when establishing the convergence rate of MLE in Assumption 4. However, such an assumption does not directly affect our analysis of the bias-variance tradeoff in our final result, see Lemma 1 and the proof of Theorem 5. We humbly point out that such an assumption is conventional when analyzing the convergence rate of MLE, and refer the reviewer to [4, Chpt.14.4.1] for details. This assumption can be easily relaxed by applying $\\\\chi^2$-MLE instead of MLE as the density estimator, which we also discussed in Appendix C. A more recent line of work [5] provides a similar bound that does not require a lower bound of the density and can be easily incorporated into our proof. \\n\\n(b) The conditional mean embedding learned by the first stage in [6] is defined by the conditional expectation of the kernel function of $X$ under a given $Z$, thus is only defined for a kernel function class. When it comes to general function classes such as neural networks, such a function is usually undefined or intractable when the underlying Hilbert space is unknown. Moreover, [6] assumes realizability for the mean embedding, thus providing no results on how to perform model selection for the kernel classes when model misspecification exists. Compared with the mean embedding, RDIV (i) allows convergence guarantee for **general function classes beyond kernel classes**, (ii) uses MLE to estimate the conditional density in the first stage, thus **allowing establishing model selection results**. 
Therefore, we believe that the MLE method RDIV is more general than the mean-embedding-based method adopted in [6].\\n\\n[1] Dual Instrumental Variable Regression, Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj\\n\\n[2] Minimax Estimation of Conditional Moment Models, Nishanth Dikkala and Greg Lewis and Lester Mackey and Vasilis Syrgkanis. NeurIPS 2020\\n\\n[3] Source Condition Double Robust Inference on Functionals of Inverse Problems, Bennett et al.\\n\\n[4] High-dimensional statistics: A non-asymptotic viewpoint, Martin J. Wainwright.\\n\\n[5] Minimax Rates for Conditional Density Estimation via Empirical Entropy, Bilodeau et al.\\n\\n[6] Kernel Instrumental Variable Regression, Singh et al.\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. During the initial phase of the rebuttal process, we carefully addressed your concerns and provided an updated version of the manuscript. We would like to reiterate our response in the following:\\n\\n>Q1. The technical novelty of this paper, and the contribution it makes to IV regression\\n\\n**A1**. (1) Our technique for bounding the $L_1$ norm instead of the $L_2$ norm of $(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})$ is essential for the **fast rate result**. Particularly, directly applying the Hellinger distance to the $L_2$-norm results in a bound of $||(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})||_2 \\\\leq \\\\delta_n^{1/2} \\\\cdot ||h - \\\\hat{h}||_2$, which further results in a slower convergence rate of $|| h_0 - h^*||_2 \\\\leq O(\\\\delta_n^{\\\\frac{\\\\min(\\\\beta,2)}{\\\\min(\\\\beta,2) + 2}})$. 
Our approach in Lemma 1 and Theorem 5 manages to bound $||(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})||_1 \\\\leq O(\\\\delta_n \\\\cdot || \\\\hat{h} - h ||_2)$, and carefully applying this result to the second-stage guarantee results in a rate of $|| h_0 - h^*||_2 \\\\leq O(\\\\delta_n^{\\\\frac{2\\\\min(\\\\beta,2)}{\\\\min(\\\\beta,2) + 2}})$, which is much faster than the trivial result obtained with the $L_2$ norm and comparable to the existing state of the art. \\n\\n(2) Our work establishes the **first theoretical guarantee for the popular DeepIV type method** [2], which has been lacking for years, and establishes a statistical convergence rate that **almost matches the SOTA convergence rate** of existing minimax approaches while remaining **computationally tractable**. We also point to a crucial modification that is required for that method to be made rigorously efficient (regularization) from both theoretical and empirical studies. Moreover, we establish a **theoretically grounded model selection** procedure for RDIV in Sec. 7, namely the Best-ERM and the Convex-ERM methods. Although widely discussed in the machine learning literature, a provably efficient model selection guarantee has been lacking in existing works such as DeepIV, AGMM [4], and DFIV. \\n\\n>Q2. Comparison with TOSG-IVaR [2] and DualIV [3] \\n\\n**A2**. In our paper, we studied RDIV under **general function approximation**, and how to perform the model selection procedure, with both guarantees from theoretical and empirical perspectives. In contrast, [2] only studies stochastic optimization for IV regression under **parametric function classes**, including linear and generalized linear function classes, which are less general than our setting. Based on such a setting, the optimization scheme proposed in [2] cannot be directly extended to nonparametric function classes. 
When our function class is restricted to parametric function classes, RDIV allows a convergence rate of $O(1/ \\\\sqrt{n})$, identical to [2].\\n\\n[3] studies minimax optimization within a **kernel function class**. However, their method (i) provides no convergence guarantee, (ii) suffers from computational intractability when transferred to the general function class due to the absence of a closed-form solution for the dual problem, and (iii) does not provide a theoretically grounded model selection procedure. Notably, [3] is a special case of the AGMM method [4]; the latter studies minimax methods under general function approximation and is discussed in detail in our related work section. \\n\\n>Q3. Computational efficiency for MLE \\n\\n**A3**. We believe that the application of MLE does not hurt the efficiency of our method. In our study, we have already conducted experiments on low-dimensional and medium-dimensional data, in which MLE is naturally tractable by function approximation such as Gaussian mixture models. For more specific high-dimensional data such as images, since our method only requires sampling from $P(x|z)$, Step 1 could be easily conducted by flow-based models [5] or diffusion models [6]. \\n\\nAs we approach the midpoint of the rebuttal period, we are eager to hear your feedback on the revised paper. We are committed to addressing any remaining questions or concerns you may have and further improving the work based on your valuable insights.\\n\\nThank you again for your thoughtful and constructive feedback.\\n\\n[1] Deep IV: A Flexible Approach for Counterfactual Prediction, Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy. 
ICML 2017\\n\\n[2] Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data, Xuxing Chen, Abhishek Roy, Yifan Hu, Krishnakumar Balasubramanian\\n\\n[3] Dual Instrumental Variable Regression, Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj\\n\\n[4] Minimax Estimation of Conditional Moment Models, Nishanth Dikkala and Greg Lewis and Lester Mackey and Vasilis Syrgkanis. NeurIPS 2020\\n\\n[5] Danilo Jimenez Rezende, and Shakir Mohamed. \\u201cVariational inference with normalizing flows.\\u201d ICML 2015.\\n\\n[6] Denoising Diffusion Probabilistic Models, Jonathan Ho, Ajay Jain, Pieter Abbeel.\"}", "{\"summary\": \"This manuscript is basically a technical paper, discussed about two-stage non-parametric IV and model-selection in the second stage when equipped with an additional $L_2$ regularization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors discussed lots of the aspects for non-parametric clearly.\"], \"weaknesses\": [\"I feel the problem studied in this paper is with limited novelty. The transformation between one-stage and two-stage algorithms and analysis is in general only a technical problem and have been discussed in different places like DualIV, and the $L_2$ regularization itself makes the model-selection easier (with the strongly convexity).\"], \"questions\": [\"To convert the MLE guarantee into an $L_2$ guarantee, the authors assumed a minimum density on the conditional density. What\\u2019s the benefits/drawbacks compared with the conditional mean embedding based methods (although it also requires some assumptions like HS operators).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies a two stage procedure for regression in the scenario where the errors are not conditionally independent. 
They first learn a conditional density to make use of instrumental variables and consequently solve a squared-loss ERM problem weighted by the learned conditional density. They show that this procedure attains mostly standard nonparametric rates.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"With the (unfortunate) exception of the introduction, I found the paper mostly well-written and clear.\", \"The paper studies an interesting problem, proposes a natural solution, and proceeds to analyze said solution. While I am not familiar with the immediately preceding related work (in IV), this seems clean to me.\"], \"weaknesses\": [\"The organization of the paper is hard to follow and the introduction is way too terse. As someone well-versed in nonparametric statistics but not necessarily IV methods, I had to skip ahead to section 4 to really understand what was going on. Stating that you are trying to solve some fixed point equation in the introduction is not conducive to most people's understanding of the problem you are solving.\", \"My overall feeling is that the result is somewhat incremental. To my understanding the main difficulty lies in making standard guarantees for MLE in Hellinger^2 compatible with the square loss. I could not entirely follow why this is so challenging and would encourage the authors to further explain why this is the case (for instance, in the very last paragraph of section 1, you mention this difficulty but do not really expand on it, nor do you reference the lemmata which might be useful for understanding this difficulty).\"], \"questions\": [\"is it really fair to say that your algorithm is more computationally tractable when it is based on MLE?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This study represents a solid and good contribution. 
However, the novelty and usefulness of the approach have not been fully appreciated, at least by the reviewers. The paper's improvements in norm selection and error evaluation are not presented in a form that is easily accepted by the majority of the machine learning community. The lack of experiments with real and large data sets is also noted as a problem. Since this is solid and good research, it would be necessary to make it more appealing to the community.\", \"additional_comments_on_reviewer_discussion\": \"MFbt commented on the limited contribution in addition to writing problems; P9wu also pointed out the lack of novelty; q5qB pointed out the need for comparison with previous studies; HEwd insisted on the need for experiments with real data; and MFbt suggested that the study should be expanded to include more data.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \">**Q1**: It is unclear how the method compares for example to recently developed methods (see arxiv:2405.19463; to appear at NeurIPS 2024) that completely avoid minimax formulations, as well as avoid the need for two-stage procedures.\\n\\n**A1**: Thank you for bringing the paper to our attention! We would like to highlight a few key differences between our works and [1]:\\nOur work studies the case where both the model class $\\\\mathcal{H}$ and conditional density function class $\\\\mathcal{G}$ are **general function classes**, which includes random forest, neural network, and parametric models as special examples, and establishes **a provable statistical convergence rate**, while [1] studies parametric function classes, including linear function class and general linear function class with a known link function $g$, and derived convergence rate based on their specific optimization scheme. However, their techniques are not directly transferable to general function approximation. 
Moreover, no guarantee or procedure for model selection for function $g$ is discussed in [1]. \\nWhen facing a nonlinear link function $g$ in TOSG-IVaR (Alg.1 of [1]), [1] assumes that we have access to a two-sample oracle that outputs two samples $X$ and $X$\\u2032 that are independent conditioned on the same instrument $Z$. Meanwhile, our results for general nonlinear function classes do not need such an assumption, which is often impractical. \\nWhen restricting our study to a parametric function class as in [1], our result in Theorem 5 provides a guarantee of $$||\\\\hat{h} - h_0||_2 \\\\leq \\\\tilde{O}(\\\\sqrt{1/n}),$$ since $\\\\beta = +\\\\infty$ and $\\\\delta_n = O(\\\\sqrt{1/n})$. This aligns with Theorem 2 in [1]. \\n\\nWe will upload an updated version of our work and summarize the comparison between [1] and our work in the Related Work section.\\n\\n[1] Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data, Xuxing Chen, Abhishek Roy, Yifan Hu, Krishnakumar Balasubramanian\"}", "{\"summary\": \"This paper addresses the problem of nonparametric instrumental variable (IV) regression, a framework with wide applications across fields such as causal inference, handling missing data, and reinforcement learning. The objective in IV regression is to solve the conditional moment equation $E[Y - h(X) | Z] = 0$, where $Z$ serves as the instrument. The authors introduce RDIV, a regularized variant of the DeepIV method, marking the first work to provide rigorous theoretical guarantees for DeepIV. RDIV enhances generalization by incorporating Tikhonov regularization. Methodologically, RDIV follows a two-stage approach. 
The first stage involves learning the conditional distribution of covariates, while the second stage refines the estimator by minimizing a Tikhonov-regularized loss function.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"RDIV offers several key advantages over existing methods. It addresses three significant limitations of prior literature: it eliminates the need for unique IV regression identification, avoids reliance on the often unstable minimax computation oracle, and supports model selection procedures.\", \"weaknesses\": \"It is unclear how the method compares for example to recently developed methods (see arxiv:2405.19463; to appear at NeurIPS 2024) that completely avoid minimax formulations, as well as avoid the need for two-stage procedures.\", \"questions\": \"please see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \">**Q1**: How is Tikhonov regularization related to a function space parametrized by the neural network? It seems not straightforward to relate it to weight decay.\\n\\n**A1**: Thanks for bringing this to our attention! We confirm that Tikhonov regularization is different from weight decay, the latter being an $L_2$ regularization on the neural network parameters instead of the function class. In our experiments, we calculated the empirical mean of $h_\\\\theta(X)^2$ by forward propagation and incorporated this into our loss function when training the model. \\n\\n>**Q2**: Is there a computational gain when minimax optimization is no longer needed?\\n\\n**A2**: We confirm that there is an obvious computational gain when compared with classical minimax methods such as AGMM because our method is easy to tune, allows model selection, and is thus more tractable. 
We humbly refer the reviewer to our Section 9 and Appendix J for more details.\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \">**Q1**: The result is somewhat incremental. To my understanding, the main difficulty lies in making standard guarantees for MLE in Hellinger^2 compatible with the square loss. I could not entirely understand why this is so challenging, and I would encourage the authors to explain why this is the case further.\\n\\n**A1**: Thanks for bringing this to our attention! In the following, we will first elaborate on the technical challenges we tackled in our proof, and then we will highlight the contribution of our result to IV regression.\\n\\n(1) Our technique for bounding the $L_1$ norm instead of the $L_2$ norm of $(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})$ is essential for the fast rate result. The standard guarantee of MLE ensures that $H(\\\\hat{p}, p)^2 \\\\leq \\\\delta_n^2$; directly applying this to the $L_2$-norm will result in a bound of \\n$$||(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})||_2 \\\\leq \\\\delta_n^{1/2} ||h - \\\\hat{h}||_2, $$ \\n\\nor \\n\\n$$||(\\\\mathcal{T} - \\\\hat{\\\\mathcal{T}})(h - \\\\hat{h})||_2 \\\\leq \\\\delta_n ||h - \\\\hat{h}||_\\\\infty. $$\\n\\n\\n Plugging either bound back into our proof for Theorem 5 on page 19 will bring in a **slower rate** in the bias-variance tradeoff, or further assumptions on the relative smoothness between $||\\\\cdot||_\\\\infty$ and $||\\\\cdot||_2$ (e.g. $\\\\gamma$-smoothness, see Remark 2 in [1]). On the contrary, **we derived a tighter bound on the $L_1$ norm of the error and incorporated this with our strong convexity analysis of the Tikhonov estimator**. This is non-trivial, as the standard analysis in nonparametric statistics usually only provides bounds in $L_2$-norm. 
\\n\\n(2) We would like to reiterate our main contribution from two perspectives:\\nAs introduced in our introduction, our work establishes **the first theoretical guarantee for the popular DeepIV type method [2]**, which has been lacking for years. We also point to a crucial modification that is required for that method to be made rigorously efficient (regularization) from both theoretical and empirical studies. Although simple, such a modification allows RDIV to enjoy a provable convergence guarantee that almost matches the SOTA convergence rate, while not introducing an intractable minimax optimization.\\nThanks to its theoretical guarantee, we establish a **theoretically grounded model selection procedure** for RDIV in Sec. 7, namely the Best-ERM and the Convex-ERM methods. Those procedures enable us to select the function class or the training hyperparameter for RDIV, and are widely discussed in statistics and machine learning [5][6]. Notably, existing methods such as the original DeepIV, DFIV [3], or AGMM [4] **do not have any guarantee** for model selection. Moreover, we empirically show its effectiveness with proximal causal inference as an example and summarize our result in Section 9 and Appendix J. \\n\\n> **Q2**: Is it really fair to say that your algorithm is more computationally tractable when it is based on MLE?\\n\\n**A2**: Thanks for your helpful comment! We believe that the application of MLE does not hurt the tractability of our method. In our study, we have already conducted experiments on low-dimensional and medium-dimensional data, in which MLE is naturally tractable by function approximation such as Gaussian mixture models. For more specific high-dimensional data such as images, since our method only requires sampling from $P(x|z)$, Step 1 could be easily conducted by flow-based models [7] or diffusion models [8]. 
\\n\\n\\n[1] Source Condition Double Robust Inference on Functionals of Inverse Problems, Bennett et al.\\n\\n[2] Deep IV: A Flexible Approach for Counterfactual Prediction, Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy. ICML 2017\\n\\n[3] Learning Deep Features in Instrumental Variable Regression, Liyuan Xu and Yutian Chen and Siddarth Srinivasan and Nando de Freitas and Arnaud Doucet and Arthur Gretton. ICLR 2021\\n\\n[4] Minimax Estimation of Conditional Moment Models, Nishanth Dikkala and Greg Lewis and Lester Mackey and Vasilis Syrgkanis. NeurIPS 2020\\n\\n[5] Peter L Bartlett, St\\u00e9phane Boucheron, and G\\u00e1bor Lugosi. Model selection and error estimation. 353 Machine Learning\\n\\n[6] Guillaume Lecu\\u00e9 and Philippe Rigollet. Optimal learning with q-aggregation. Annals of Statistics 2014. \\n\\n[7] Danilo Jimenez Rezende, and Shakir Mohamed. \\u201cVariational inference with normalizing flows.\\u201d ICML 2015.\\n\\n[8] Denoising Diffusion Probabilistic Models, Jonathan Ho, Ajay Jain, Pieter Abbeel.\"}", "{\"summary\": \"The paper studies the nonparametric instrumental variable regression with Tikhonov regularization (RDIV), and proves that RDIV allows model selection procedures and matches the SOTA convergence rate. I agree with the author's claim that this work is the first attempt to provide rigorous guarantees for DeepIV. With Tikhonov regularization, the model selection procedure achieves the oracle rate and iterative RDIV matches the SOTA rate.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, and the results are motivated well. I didn't go through the proofs, but the explanations after each result are insightful, and ease the reading. The theoretical contribution is solid.\", \"weaknesses\": \"The numerical experiments are only based on simulated data. 
It would be better to have some results from real data to demonstrate the strength of the proposal.\", \"questions\": \"How is Tikhonov regularization related to a function space parametrized by the neural network? It seems not straightforward to relate it to weight decay.\\n\\nIs there a computational gain when minimax optimization is no longer needed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0gVatTOgEv
Glider: Global and Local Instruction-Driven Expert Router
[ "Pingzhi Li", "Prateek Yadav", "Jaehong Yoon", "Jie Peng", "Yi-Lin Sung", "Mohit Bansal", "Tianlong Chen" ]
The availability of performant pre-trained models has led to a proliferation of fine-tuned expert models that are specialized to a particular domain or task. This has enabled the creation of powerful and adaptive routing-based “Model MoErging” methods with the goal of using expert modules to create an aggregate system with improved performance or generalization. However, existing MoErging methods often prioritize generalization to unseen tasks at the expense of performance on held-in tasks. This limitation adversely impacts practical applicability, as real-world deployments require robust performance across both known and novel tasks. We observe that current token-level routing mechanisms neglect the global semantic context of the input task. This token-wise independence hinders effective expert selection, particularly for held-in tasks, as routing decisions fail to incorporate the holistic semantic properties of the task. To address this, we propose a novel method, Global and Local Instruction Driven Expert Router (GLIDER), which integrates a multi-scale routing mechanism, encompassing a semantic global router and a learned local router. As recent LLMs demonstrate advanced reasoning capabilities for semantic-related contexts, the global router leverages this ability to enhance expert selection. By utilizing the input query and an LLM, the router generates semantic task instructions that guide the retrieval of the most relevant experts across all layers. This global guidance is complemented by a local router that facilitates token-level routing decisions within each module, enabling finer control and enhanced performance on unseen and challenging tasks. Our experiments using T5-based expert models for T0 and FLAN tasks demonstrate that GLIDER achieves substantially improved held-in performance while maintaining strong generalization on held-out tasks. 
Additionally, we perform ablation experiments to dive deeper into the components of GLIDER and plot routing distributions to show that GLIDER can effectively retrieve the correct expert for held-in tasks while also demonstrating compositional capabilities for held-out tasks. Our experiments highlight the importance of our multi-scale routing that leverages LLM-driven semantic reasoning for MoErging methods. Our code is attached as supplementary material.
[ "Parameter Efficient Fine-Tuning", "LoRA", "Cross-Task Generalization" ]
https://openreview.net/pdf?id=0gVatTOgEv
https://openreview.net/forum?id=0gVatTOgEv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t7OGLZHcU0", "fdyRgshy7R", "faduwKmugv", "YlGtdYeQAC", "RCd8zwkOrb", "GHLvixhM3c", "C8lDDLfnVI", "2pC6Rftc5T" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1729422546225, 1730585741720, 1730693551232, 1730366500704, 1732614994680, 1732639908024, 1732614773004, 1732639802021 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_kpST" ], [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_S6dB" ], [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_yu6t" ], [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_rzEk" ], [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_kpST" ], [ "ICLR.cc/2025/Conference/Submission3242/Authors" ], [ "ICLR.cc/2025/Conference/Submission3242/Reviewer_rzEk" ], [ "ICLR.cc/2025/Conference/Submission3242/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose GLIDER, which is a token-level and module-level hierarchical routing strategy for combining pools of parameter-efficient finetuned expert models, incorporating both local and global information over the specialization of the expert models. The local component learns a gating between LoRA modules which selects the best layer module from the pool of experts for each token. The global component uses an LLM to encode the overall task expertise of each expert, which is then incorporated into the routing scheme to enhance the routing such that it is sensitive both to local module-wise expertise and overall global expertise of the aggregated models. 
This scheme hopes to maintain strong generalizability to unseen tasks without sacrificing performance on held-in tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1)\\nThe core idea -- that incorporating global information of the specialization of finetuned expert models into local routing schemes can improve expert aggregation algorithms -- is intuitive and persuasive.\\n\\n\\n2) \\nThe use of an LLM to encode global semantic information of the overall expert specialization is a creative method for effectively integrating the required global context\", \"weaknesses\": \"1)\\nMy first concern relates to the overall problem setting of the paper and its core motivation of improving performance on held-in tasks. The authors claim that existing MoErging methods often prioritize generalization to unseen tasks at the expense of performance on held-in tasks, and indeed in Table 1 the authors report as one of their main results that GLIDER significantly outperforms baselines on held-in tasks. However, performance on held-in tasks is deemed unimportant precisely because we already have access to the specialized expert trained on that exact held-in task, and so we can always retain performance on any given held-in task by simply using the expert specialized to that task. Indeed, this is the reason why Phatgoose does not report results for held-in tasks. For this reason, the 'Oracle Expert' recorded in Table 1 for held-in tasks is not an Oracle but an attainable result to which we always have access - it is just the expert specialized to that given task. \\n\\nSo in this sense, given that for held-in tasks GLIDER still underperforms the expert specialized to that given task, I'm not yet convinced of one of the proposed main benefits of GLIDER, since for any given held-in task we could easily get better performance by just selecting the corresponding expert specialized to that task. 
\\n\\nFundamentally, I'm not yet persuaded that the performance gains on held-in tasks justify the claim that GLIDER is an overall superior model, since by the problem setting these are not tasks we need to optimize for. If the authors can provide justification for why performance on held-in tasks is indeed important and why selecting GLIDER over the corresponding specialized expert for a given held-in task would be preferable, then I would be happy to change my score, but as it stands I'm concerned that a large proportion of GLIDER's performance gains are on tasks that we need not optimize for.\\n\\n2) \\nA second concern is that GLIDER's architecture, by the authors' own acknowledgement, is basically identical to Phatgoose for the local component of the hierarchical router. This being the case, the contribution of the paper is more so appending a global context component to Phatgoose, rather than an entirely new model. It would therefore be informative to consider alternative backbones for the local component of the router, for example Arrow. This could help isolate the contribution of the global routing component and help to demonstrate the robustness of potential improvements brought about by the proposed inclusion of global information.\\n\\n3) \\nSome grammar / spelling related issues:\", \"lines_53_54\": \"'However, MoE methods that train experts from scratch while MoErging utilizes a decentralized...' 
-> delete 'that'\", \"line_72\": \"'retrieve the correct expert for all token at every module' -> tokens should be plural\", \"line_264\": \"'our goal is to build a MoErging method that dynamically routing queries' -> should be 'routes queries'\", \"line_286\": \"'this work specifically focuses in LoRA' -> focuses 'on' LoRA\\n\\nLines 288-289 'Given the $t^{th}$ input token activation $u_i$ -> should be $u_t$ I'm assuming?\", \"line_332\": \"'added before the model to independently process the fully query' -> process the 'full' query\", \"line_348\": \"'the output of the module for token activation $u_i$ is computed as $Wu_i + \\\\sum_{k \\\\in \\\\xi_{top}} w_k * B_kA_ku_i$ -> It looks like you've forgotten to actually define $w_k$, I'm assuming it's the softmaxed affinity score, but you've left it undefined.\", \"questions\": \"My first question relates to the core motivation of improving held-in performance and the necessity of doing so given that for any held-in task, we always have access to the expert specialized to that task. Could the authors explain scenarios in which using GLIDER is preferable to simply selecting the specialized expert for a known held-in task?\\n\\nMy second question is to what extent is GLIDER more of a novel component to local routing schemes that aims to encode global context, as opposed to an entirely new model. If GLIDER is more a novel component than an entire model, then I think the authors should include ablation studies on the local router choice, in particular using Arrow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents GLIDER, a method that combines global semantic and local token-level routing for LLMs. The key innovation is using an LLM to generate task instructions that guide expert model selection globally, while using token-level routing locally. 
Tested on T5-based models with T0 and FLAN benchmarks, GLIDER improves performance on held-in tasks by 6.6% while maintaining strong generalization on held-out tasks. This shows that incorporating LLM-driven semantic understanding helps achieve better routing compared to pure token-level approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper leverages LLMs to generate semantic task descriptions, providing global context for routing decisions, which is a unique approach not explored in previous routing methods\", \"The paper addresses the limitations of current approaches (focusing on either held-in or held-out tasks) well and provides a novel solution integrating both.\"], \"weaknesses\": [\"The experiments focus solely on T5, an older encoder-decoder architecture. The effectiveness of GLIDER on modern decoder-only models (like the GPT family, LLaMA, etc.) remains unproven, which is crucial given these are now the mainstream architectures for LLMs.\", \"Table 1 lacks clarity on evaluation metrics and methodological details. Without clear metric definitions and evaluation protocols, it's difficult to fully assess and compare the reported improvements.\", \"The routing design will bring extra computational overhead; how will GLIDER's inference latency change compared with normal LoRA decoding methods (vLLM's LoRA inference module)?\"], \"questions\": [\"Could you explain why T5 was chosen as the primary architecture for evaluation? Have you conducted any preliminary experiments with decoder-only models (e.g. LLaMA3-8B)?\", \"Could you provide more details about the evaluation metrics used in Table 1? What exactly do these numbers represent?\", \"Since this design integrates routing into the LoRA inference procedure, could you provide a detailed analysis of the additional computational overhead? 
How much will the inference latency be affected by such a design?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on addressing the trade-off between performance improvement and generalization ability in expert modules. It assumes that this issue arises from the lack of global semantic context in token-level routing strategies; it then seeks to resolve this problem by combining global semantics with token-level information through the use of both global and local expert routers during routing.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The insights of the paper are to be praised.\\n2. Very interesting topic and focus on the router optimization. \\n3. Use the big model and small model together to solve the problem \\n4. Experimental results are good.\", \"weaknesses\": \"1. Writing and Presentation: The paper could benefit from some polishing. There are a number of typos and semantic issues, and the overall formatting could be improved for better readability. Additionally, some figures are a bit challenging to interpret. For instance, Figure 1 is only referenced in Appendix B but appears as the first figure in the Related Works section, which can disrupt the flow and clarity for the reader.\\n\\n2. Clarity of Background and Concepts: The background and explanation of key concepts in the paper could be clearer. While there are many references to ideas and works by Yadav, the connections and explanations aren\\u2019t sufficiently detailed, which may leave readers a bit confused. In my initial reading, I found myself questioning whether the discussion pertained to a Mixture of Experts (MoE) scenario or Model Merging.\\n\\n3. Logical Flow and Mathematical Details: The paper seems to lack some logical coherence, especially regarding mathematical descriptions and derivations. 
There are no thorough mathematical proofs provided, and the modeling of the scenario feels a bit scattered across different sections. Some variable explanations are incomplete, which can be frustrating. Moreover, discussions around problems and solutions could be more precise. For instance, when mentioning that routing strategy issues stem from a lack of global semantics, it would be helpful to have more rigorous mathematical reasoning or experimental evidence to support this claim.\\n\\n4. Inconsistencies: There are some notable inconsistencies in the paper. For example, in line 85, it mentions that the Global Router selects the top-2 experts based on global semantics. However, the description of the Global Router algorithm starting from line 329 doesn\\u2019t reference any top-2 (or top-k) selection process. The top-k expert selection is only brought up later around line 347, based on the final score calculated from the weighted sum of global and local affinity scores. Clarifying these points would enhance the overall coherence of the paper.\\n\\n5. The experimental design could use some improvement. The main experiments lack detailed explanations, and Figure 3 is somewhat unclear. Many of the experimental configurations seem to mirror those from Muqeeth's work without providing enough context, which might raise questions about the originality of this study. Additionally, the ablation experiments focus on relatively trivial variables, while more significant factors\\u2014such as the differences between excluding and including the global semantics generated by GPT-4-turbo\\u2014are overlooked. Addressing these points could enhance the depth and rigor of the research.\", \"questions\": \"Innovation and contribution:\\n1. The core of this paper combines the scores from the Global Router and the Local Router, then performs a top-k operation based on the combined scores. a. 
Line 85: The descriptions of the algorithm in the preceding and following contexts are clearly conflicting, making it contradictory to describe the algorithm effectively. b. Relying on GPT-4 turbo to gather information from the full text means the process heavily depends on GPT-4's capabilities. Furthermore, as seen in line 346 and the subsequent ablation experiments, the value of $\\\\alpha$ is significant, indicating a high weight assigned to the Global score.\\n2. The paper extensively references the work of Muqeeth but fails to explain the rationale behind these references, raising concerns about the originality of the article.\\n3. Relying on GPT-4-turbo for global semantic information raises some questions about the overall workload and originality of the research. From the formula in line 346 and the subsequent ablation experiments, it appears that the global score heavily influences the final affinity score, making the expert selection largely dependent on GPT-4-turbo. Additionally, the potential latency issues introduced by using GPT-4-turbo during inference are not addressed, which is a significant concern. Overall, placing so much emphasis on GPT in the paper could weaken its persuasive impact.\", \"format_and_typo_issues\": \"1. 60) Formatting issue: The left parenthesis is missing.\\n2. 67-68) The sentences are semantically repetitive.\\n3. 140) What does the arrow ($\\\\rightarrow$) signify here?\\n4. 216) This figure is drawn in a hard model; it\\u2019s unclear what it is trying to convey.\\n5. 279) The explanation of $\\\\Psi$ is missing.\\n6. 289-290) The notation used here is confusing.\\n7. 301) There is no explanation for the newly introduced $\\\\sigma$.\\n8. 319) Suddenly introducing the variable $t$ from line 289 is confusing.\\n9. 323) Why isn\\u2019t the normalization process presented in a formula? Section 4.2 is poorly written, relying solely on descriptive language, lacking organization.\\n10. 
354) The T0 held-in dataset is omitted.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"the paper studies ensembles of LLM models (model MoErging), proposing a technique for selecting experts to route tokens to at global (for selection of experts for in-profile tasks) and local levels (to have more flexibility to handle out-of-distribution tasks)\\n\\nthe paper is incremental in nature (with small differences with respect to Phatgoose, from architectural design choice, to experimental settings and way too many details, to the point it feels it should be named Phatgoose++ instead of Glider)\\n\\nalthough appealing, the approach is heuristic in nature: given this, one would have expected a significantly larger experimental part, including a wider range of tasks (and possible comparison points beyond those adopted in Phatgoose), and a statistically relevant comparison of improvements\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"this work introduces a routing mechanism to trade off between local and global experts, to increase performance on held in tasks, without compromising capability to handle held out tasks.\\n\\nthe goal is clear and the approach is simple (as it is heuristic in nature).\", \"weaknesses\": \"# evaluation\\nas this work has no theoretical basis, one would have expected a significantly larger experimental part to convince the reader of the generality of the approach on \\n- a significantly wider range of tasks (and possible comparison points beyond those adopted in Phatgoose),\\n- further exhibiting a statistically relevant comparison of improvements\\n\\nthis is not the case, so the paper execution is far from being convincing.\\n\\nAdditionally, while the main advantage of this work is to increase performance of held-in tasks, authors additionally point out advantages that are too thin to be worth 
noting; and they do so in a disturbingly biased manner. For instance, authors claim 0.9% over held out tasks over Phatgoose (in bold), but the 0.9% (actually 0.88%) is the maximum observed across 5 held out datasets in Tab 3 (for the other 4 it is 0.39%, 0.16%, -0.53% and 0.3%) \\n\\n# empirical evidence of global router\\n\\nthe work is motivated to find *semantical* resemblance beyond tasks. however, the approach on held out tasks seems to leverage *syntactical* resemblance. Fig 1 shows held out tasks to systematically select two experts (one of which seems to be further common to a couple of tasks). Yet appendix B just shows the tasks to be syntactically similar: i.e., the Q&A pair has a cloze format, which is rather typical of simple benchmarks. As such, I am little reassured that the performance generalization will be maintained on more complex tasks, and this work is far from fully elucidating the robustness of the proposed method.\\n\\nQuantitative assessment of global experts over a wider range of diverse tasks (say a few tens of datasets per type of answer) would have allowed us to get true insights about the nature of global experts (e.g., whether the identified expert triggers \\\"cloze\\\" type answers, irrespective of the semantics of the question)\", \"questions\": \"# incremental addition of new experts\\n\\nOne question that is not addressed is the incremental addition of new expert/dataset pairs, and what are the consequences (on normalization, etc.). It seems it should be trivial to do this incrementally, but a discussion in the appendix (and an example use-case when, say, a previously held-out task becomes held-in so that the router now uses global instead of local weights) would certainly improve the quality of the paper\\n\\n# limit of global router\\n\\nthe global router is constructed using a sample of 3 questions, which may be ok for very simple and low-diversity datasets, but not for more complex and diverse tasks. 
a more in-depth study of global router across a wider range of datasets with quantitative assessment of global router policies seems necessary\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Any response from the authors?\", \"comment\": \"If the authors have any aspects of the review they'd like to address then let me know.\\n\\nLooking forward to your responses.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"rebuttal?\", \"comment\": \"I haven't seen any author rebuttal ?\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers & ACs & SACs & PCs,\\n\\nThank you for taking the time to review our submission and for providing your valuable feedback. We deeply appreciate your thoughtful comments and insights, which have helped us better understand the strengths and weaknesses of our work.\\n\\nAfter careful consideration, we have decided to withdraw the paper from ICLR this year. We intend to use your feedback to improve the paper further and hope to resubmit it to a future venue once the necessary revisions have been made.\\n\\nThank you once again for your time and effort in reviewing our work.\\n\\n\\\\\\nWarmest regards,\\n\\nAuthors\"}" ] }
0gOQeSHNX1
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
[ "Wenhao Li", "Yudong Xu", "Scott Sanner", "Elias Boutros Khalil" ]
The Abstraction and Reasoning Corpus (ARC) is a popular benchmark focused on *visual reasoning* in the evaluation of Artificial Intelligence systems. In its original framing, an ARC task requires solving a program synthesis problem over small 2D images using a few input-output training pairs. In this work, we adopt the recently popular *data-driven* approach to the ARC and ask whether a Vision Transformer (ViT) can learn the implicit mapping, from input image to output image, that underlies the task. We show that a ViT—otherwise a state-of-the-art model for images—fails dramatically on most ARC tasks even when trained on one million examples per task. This points to an inherent representational deficiency of the ViT architecture that makes it incapable of uncovering the simple structured mappings underlying the ARC tasks. Building on these insights, we propose ViTARC, a ViT-style architecture that unlocks some of the visual reasoning capabilities required by the ARC. Specifically, we use a pixel-level input representation, design a spatially-aware tokenization scheme, and introduce a novel object-based positional encoding that leverages automatic segmentation, among other enhancements. Our task-specific ViTARC models achieve a test solve rate close to 100% on more than half of the 400 public ARC tasks strictly through supervised learning from input-output grids. This calls attention to the importance of imbuing the powerful (Vision) Transformer with the correct inductive biases for abstract visual reasoning that are critical even when the training data is plentiful and the mapping is noise-free. Hence, ViTARC provides a strong foundation for future research in visual reasoning using transformer-based architectures.
[ "Abstraction and Reasoning Corpus", "Abstract Visual Reasoning", "Transformers", "Vision Transformers" ]
Reject
https://openreview.net/pdf?id=0gOQeSHNX1
https://openreview.net/forum?id=0gOQeSHNX1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wqOd5sDxtb", "q7fbt2iwYZ", "oaUXCuckQv", "mVsdUIq2gP", "lqp1uQzGv6", "h8UGXortA5", "g7HaH0dtAz", "caKFIgzimO", "ZMxgsnHiaR", "Y9N30LPJ5U", "XOAc1PeMLJ", "SFIMvDzIlO", "LZVEGAG5yr", "HH4k7jkeOi", "EtazOz9ZoE", "9FXeqgZK03", "7haBcRNHjF", "7VV9O4aWod" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732133834654, 1732133874787, 1729104632392, 1732134358086, 1732559557857, 1732645949504, 1732132854075, 1734661750055, 1732133801883, 1730566071648, 1732575482464, 1732559360116, 1737523611161, 1732135266390, 1732134329790, 1729535905595, 1732487043963, 1731116150059 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_znVN" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_znVN" ], [ "ICLR.cc/2025/Conference/Submission3976/Area_Chair_Luoa" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Area_Chair_Luoa" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_tj8k" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_tj8k" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_k3pK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_k3pK" ], [ "ICLR.cc/2025/Conference/Submission3976/Authors" ], [ "ICLR.cc/2025/Conference/Submission3976/Reviewer_NewP" ] ], "structured_content_str": [ 
"{\"comment\": \"*Weaknesses (part 1)*\\n\\n**Unclear that this is Vision Modeling**\\n\\nWe appreciate the reviewer\\u2019s concern that some of our adaptations to the ARC domain implemented for ViTARC make the model less \\\"vision-specific.\\\" However, we argue that these adaptations are necessary for the ARC domain and do not shift our model away from vision modeling. Specifically:\\n\\n- ARC tasks demand pixel-level precision, which makes Conv-oriented setups (e.g., local attention or patching) less suitable. By using 1x1 patches, ViTARC ensures that every pixel detail is captured. This adaptation doesn\\u2019t alter ViTARC\\u2019s core as a Vision Transformer, since a larger patch size can be implemented for tasks with greater tolerance (e.g., n x n pixel precision).\\n\\n- Unlike typical convolutional vision models where the image size and the number of patches are fixed, ViTARC must handle the generation of variable-sized grids. Since resizing or cropping is infeasible due to ARC's need for pixel-perfect precision, the End-of-Grid (EOS) token is a natural design choice. ViTARC-VT\\u2019s performance further demonstrates its importance.\\n\\n- We note that without positional encodings, a transformer inherently processes and outputs unordered sets. Thus, the 2D positional encodings, including OPE, explored in ViTARC reinforce our approach as being more vision-specific, not less, as the reviewer suggests. \\n\\nWe agree with the reviewer that in certain vision tasks, token content is more important than spatial relationships. However, in abstract visual reasoning tasks, such as those in ARC, spatial relationships sometimes hold primary importance. 
For instance, in tasks where same-color pixels move (e.g., ARC task #025d127b https://kts.github.io/arc-viewer/page1/#025d127b), the meaningful information lies in the positional shifts rather than in the token content itself.\\n\\n**Unclear Significance of Contributions**\\n\\nThank you for raising concerns about disconnects between the different contributions. We\\u2019ll clarify each point below and update our paper accordingly.\\n\\n- PEmixer vs APE: PEmixer is primarily used to adjust the balance between positional and token information; it\\u2019s applied as an additional layer on top of APE.\\n\\n- RPE vs APE: As the reviewer pointed out, RPE was introduced to enhance spatial awareness that APE alone struggled with, as illustrated in the Figure 6 example. We acknowledge that the final encoding, which combines 2DAPE, 2DRPE, and OPE, may appear complex. We will clarify the progression and motivation behind each component in the updated version (**ChangeLog #2.2**).\\n\\n- PEmixer and OPE without RPE worsen performance: As discussed in section 5.1, this decrease in performance is due to OPE occupying space in the concatenated encoding and reducing the dimensionality available for APE. The inclusion of RPE recovers the positional signals at the attention level, thus improving the overall performance.\\n\\nWe appreciate the reviewer\\u2019s concern about a lack of global understanding of the problems and the suggestions for restructuring the paper. Our paper introduces many improvements to the ViT architecture motivated by different failure analyses and aims to address very different challenges. Therefore, we structured our paper with the goal that readers can easily follow the progression of our various contributions, which we are grateful the reviewer acknowledges as part of the paper\\u2019s strength. \\n\\n**Necessity of Padding Contributions**\\n\\nWe thank the reviewer for asking for additional clarification on the need for paddings. 
We address them in the following, and will update the paper accordingly:\", \"sequential_padding_at_the_end_where_the_padding_tokens_are_ignored_by_the_attention_should_be_the_correct_one\": \"While we can define 2D coordinates (x,y) for the encoder, where grid size is known in advance, the decoder lacks this spatial reference unless it operates on a fixed-size 2D grid. The <arc_pad> tokens serve as placeholders to support this fixed 2D template and therefore require attention. This setup was critical in enhancing performance from ViT to ViTARC-VT.\\n\\nPer-row EOS tokens should be enough and <arc_endxgrid> are not needed: The need for border tokens arose after introducing 2D padding. Without them, the model would have to count tokens to determine boundaries, the precision of which quickly becomes an issue\\u2014especially in ARC tasks with dynamically defined output grid sizes (e.g., task C in Figure 2). Additionally, inner grid borders differ from maximum grid boundaries, so separate tokens are used to prevent the model from having to learn this distinction implicitly. We will add these clarifications to the updated version (**ChangeLog #2.3**).\"}", "{\"comment\": \"*Weaknesses (part 2)*\\n\\n**No Baselines or Comparisons**\\n\\nWe thank the reviewer for their concerns regarding a lack of comparison with other ARC models. We address each point below:\\n\\n- Performance on Private ARC Set: The goal of our paper is to reveal limitations in the current vision transformer architecture for performing abstract visual reasoning. Therefore, ViTARC is not yet a general ARC solver and cannot be applied directly to solve the overall ARC benchmark. 
We aim to extend ViTARC to a general ARC solver in future work, but resolving these critical deficiencies in within-task learning was our focus in this paper and a requisite stepping stone to future work on cross-task generalization with ViTs.\\n\\n- Baseline Comparisons: AVR models currently rely on discriminative architectures rather than generative models. To our knowledge, there is no previous model positioned within the AVR + variable-sized grid + generative model regime. Therefore, we used the vanilla ViT setup as the baseline for this comparison.\\n\\n- Comparison to Sequence-Based ARC Models Using LLMs:\\nEach pixel in our setup is treated as a token, similar to many LLM-based ARC solver techniques. This design allows the vanilla ViT in our setup to function as a small-scale LLM **trained from scratch**, serving as a meaningful baseline to showcase the contributions of ViTARC's vision-specific architectural enhancements.\\n\\n We acknowledge that comparing ViTARC with **pretrained** sequence-based LLMs could be an interesting direction for future work. However, such comparisons are beyond the scope of this paper, as their performance gains may stem from scaling laws rather than architectural advancements.\\n\\n*Minor Weaknesses:*\\n\\n**Placement of Figure 8**: We appreciate the reviewer\\u2019s recognition of Figure 8; however, we decided to move it to the Appendix as it contributes only partly to the motivation, and we did not want to distract readers from the main motivation, which is the example shown in Figure 4.\\n\\n**Purpose of PEmixer**: As noted earlier, the primary purpose of PEmixer is to control the ratio between contextual and positional information. We further opted for a vector PEmixer over a scalar for greater flexibility by allowing the model to learn when certain coordinates are more significant for a given task. 
For example, in ARC tasks with horizontal transformations, x-coordinates may hold greater importance.\\n\\n**ChangeLog 2.1**: Refined Section 5 on 2D-RPE to clarify the slope design.\\n\\n**ChangeLog 2.2**: Refined Sections 5 and 5.1 to better illustrate the different positional encodings (PEs) and provide a clearer discussion of the results.\\n\\n**ChangeLog 2.3**: Section 4, clarified the role of border tokens in 2D padding and dynamic grids.\"}", "{\"summary\": \"The paper studies the ARC benchmark, assessing the potential of ViTs for this task when trained with supervision on a single task at a time with plenty of (procedurally-generated) data (which is different from the original few-shot setting).\\n\\nThe paper first shows that standard ViTs fail. It then shows several modifications to the architecture that make them perform much better.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper studies a popular architecture (ViTs), so it should be of broad interest.\", \"The paper shows a domain where ViTs fail dramatically. The task itself is not interesting (ARC in a new, supervised, large-data setting), but the finding is interesting because it points at intrinsic deficiencies of ViTs.\", \"Several modifications are described that improve the performance. It's kind of the opposite of an ablation study, but it does the job of demonstrating that novel methods imbue ViTs with novel capabilities.\"], \"weaknesses\": [\"I don't see significant flaws in this paper. Potential weaknesses:\", \"(W1) The proposed modifications (in particular the visual tokens) are quite straightforward. This can be seen as a good thing. I am actually surprised that the absolute PE and padding (to handle images of different sizes or aspect ratios) have not been used before. 
Are the authors certain this hasn't been described in the existing literature?\", \"(W2) There is a valid argument that this paper solves a task that is extremely uninteresting in itself. It has no analogue in the real world and it completely defeats the purpose of the original ARC challenge (because of the supervised, large-data setting, focusing on a single task at a time). This paper makes absolutely no progress towards the goal of the ARC challenge. I still think that the findings are interesting for the reasons mentioned in my \\\"Summary\\\" above, i.e. that the proposed improvements give ViTs new abilities. The main issue now is that the contributions of this paper will only have value if/when the abilities are demonstrated to be useful for another task/setting.\"], \"questions\": \"What do the authors think about adding a caveat in the paper about W2 above? (i.e. the fact that this makes zero progress on the ARC challenge, and that the benefits of the proposed methods still need to be demonstrated on a meaningful task)\\n\\nI can't solve Task A (6e02f1e3) in Fig. 2. I guess other readers would benefit from an explanation? (I'm afraid this sort of makes a Vanilla ViT superhuman on this task!)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"*Weaknesses (part 2)*\\n\\n**No other baselines than ViT are explored, though there have been many works proposed for ARC & related reasoning tasks.**\\n\\nThe goal of our paper is to reveal limitations in the current vision transformer architecture for performing abstract visual reasoning. Therefore, ViTARC is not yet a general ARC solver and cannot be compared directly with other ARC solvers. We aim to extend ViTARC to a general ARC solver in future work.\\n\\nFurthermore, AVR models currently rely on discriminative architectures rather than generative models. 
To our knowledge, there is no previous model positioned within the AVR + variable-sized grid + generative model regime. Therefore, we used the vanilla ViT setup as the baseline for this comparison. We would greatly appreciate any suggestions from the reviewer for relevant models or previous work to consider for comparison.\\n\\n\\n*Questions*\\n\\n**Can you show that the model works on other complex reasoning tasks?**\\n\\nThe diverse reasoning required across ARC tasks, ranging from spatial transformations to abstract rule discovery, already offers valuable insights into Abstract Visual Reasoning (AVR). A major strength of ViTARC lies in its dual capability as both a reasoning and generative model. \\n\\nWhile adapting ViTARC to other visual reasoning tasks is a promising future direction, it is beyond the scope of this paper. Notably, if benchmarks like RPM or odd-one-out were reframed as generative tasks, a significant portion would align with subsets of ARC. \\n\\nAdditionally, our findings with ViTARC could also extend to reasoning tasks requiring n x n pixel precision, such as physical reasoning in vision generation, where spatial accuracy is equally critical. We have updated the paper to emphasize these broader contributions (**ChangeLog #3.2**).\\n\\n**Does the model work without strong priors, e.g., on input and output grid shape?**\\n\\nAs previously described, we do not explicitly inject strong priors, such as predefined input or output grid shapes. 
However, if we were to remove the existing priors\\u2014such as 2D positional encoding, border tokens, and 2D padding\\u2014the setup would resemble a vanilla ViT more closely, and we expect the model\\u2019s performance would be similar to it.\\n\\n**Additional Baselines Trained on the Same Amount of Data**\\n\\nAddressed in response to weakness 4.\\n\\n**ChangeLog 3.1**: Updated the terminology and description of vision tokens to ensure clarity and domain-agnostic applicability.\\n\\n**ChangeLog 3.2**: Updated the conclusion to discuss broader contributions.\\n\\n**ChangeLog 3.3**: Clarified OPE\\u2019s role in handling complex shapes, its synergy with PEmixer, and its novelty as a method for injecting external objectness knowledge.\"}", "{\"comment\": \"We thank the reviewer for their detailed review.\\n\\n*Weaknesses*\\n\\n**Clarity on Training/Evaluation Protocol and Generalization Ability**\\n\\n\", \"our_main_goal_is_to_study_the_fundamental_limitations_of_vit_in_the_simplest_setting\": \"a ViT trained for a single simple AVR task (i.e., a single ARC task) from scratch. We observe that ViTs fail to learn on individual ARC tasks even when given 1,000,000 examples. 
Hence our primary objective in this paper is to understand why ViTs do not learn to generalize in-task and to resolve the sources of these learning deficiencies, as we do in this paper. Addressing this critical gap in in-task ViT learning deficiencies is a requisite stepping stone for future extensions to cross-task generalization, an interesting direction we aim to explore in future work.\\n\\n**Novelty of Techniques and Limited Contribution**\\n\\n\\nWhile we acknowledge that 2DAPE has been explored in prior ViT work, the effect of positional encodings has been largely underestimated due to the use of larger patch sizes, which results in fewer patches and reduces the influence of positional context per patch. For instance, the original ViT paper [1] reported minimal performance differences between 1DAPE, 2DAPE, and learned APE under these conditions. In contrast, we are the first to demonstrate that positional encodings become crucial in fine-grained reasoning settings, such as ARC with 1x1 patches, where positional context may dominate patch-level information\\u2014especially in tasks involving movement of same-color shapes (e.g., ARC#025d127b). This finding reveals a limitation in current vision reasoning approaches, as demonstrated by ViTARC\\u2019s improved performance.\\n\\nThe necessity for enhanced 2D representations and an initial design similar to ViTARC has been independently highlighted by Fran\\u00e7ois Chollet, the creator of the ARC, in a recent Twitter post (Chollet, 2024) [2]. \\n\\nMoreover, we believe this insight on the significance of positional encoding in visual reasoning tasks has implications beyond ARC, potentially informing fields such as physical reasoning in vision generation tasks. We have updated the paper to emphasize these broader contributions (**ChangeLog #1.1**).\\n\\n[1] Dosovitskiy et al. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. 
International Conference on Learning Representations. \\n\\n[2] Chollet, F. (2024). Twitter post. Retrieved from https://twitter.com/fchollet/status/1846263128801378616\\n\\n*Questions*\\n\\n**Variance in ViTARC-VT Performance (Figure 7)**\\n\\nThe large variance in ViTARC-VT\\u2019s performance comes from the variety of difficulties across ARC tasks. Some tasks are almost always solved perfectly, while others tend to hover around 70% accuracy, which increases overall variance. Figure 2 (left side) shows this distribution more clearly, where a strong performance on many tasks is balanced by lower scores on a smaller group. While more training could possibly reduce this, our goal here was to compare different architectures under the same conditions rather than fine-tune for each task.\\n\\n**Key Technique for Performance Improvement**\\n\\nRegarding the main driver of improvement from ViT-Vanilla to ViTARC-VT, the key technique is indeed the use of 2D padding rather than BorderTokens. Applying 2DAPE directly is challenging with ARC\\u2019s variable grid sizes. While we could adjust 2DAPE to accommodate different input grid sizes in the encoder, the output grid would still be generated relative to unknown 2D positions, making it difficult to map accurately. By using 2D padding, we create a fixed generation schema\\u2014essentially a template that maintains consistent spatial alignment\\u2014which greatly improves performance. We appreciate the reviewer highlighting this, and we will update the experiment discussion in the paper to emphasize it (**ChangeLog #1.2**). \\n\\n**Clarification on Equation (12)**\\n\\nThank you for this feedback. We will update the paper to clarify that $r_{left}$ and $r_{right}$ are fixed slopes following the original ALiBi setup, with adjusted starting values. 
We will update the paper to include the full equation and additional description for clarity (**ChangeLog #1.3**).\\n\\n**Tuning of Hyper-parameters \\u03b1 and \\u03b2**\\n\\n\\u03b1 and \\u03b2 are not manually tuned but are learned parameters. We\\u2019ve updated the paper to clarify this point (**ChangeLog #1.4**).\\n\\n\\n\\n\\n\\n**ChangeLog 1.1**: Updated the conclusion to discuss broader contributions\\n\\n**ChangeLog 1.2**: Updated Section 4.1 to highlight 2D padding as the key driver of ViTARC-VT's performance boost and explain its role.\\n\\n**ChangeLog 1.3**: Clarified in Section 5 that $r_{left}$ and $r_{right}$\\u200b follow a geometric sequence with different starting values, inspired by the ALiBi setup.\\n\\n**ChangeLog 1.4**: Updated section 5 PEmixer.\"}", "{\"metareview\": \"This is an interesting paper that designs a specific approach to tackle the ARC-AGI benchmark using object-centric priors and other task-specific components on top of a vision transformer architecture.\\n\\nWhile the method is interesting and the results look encouraging, reviewers raised concerns about the experimental evaluation and the generality of the approach, as it is specifically designed for a single visual reasoning benchmark task. It would be valuable to test whether the method contributions provide value beyond ARC-AGI on other established visual reasoning tasks, such as Raven's Progressive Matrices etc.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer consensus was to reject the paper.\"}", "{\"comment\": \"We thank the reviewer for the thorough and thoughtful review.\\n\\n*Questions:*\\n\\n**RoPE vs RPE?**\\n\\nWe share this curiosity and agree that RoPE (or even 2DRoPE) could be an interesting future adaptation for ViTARC. However, it wasn\\u2019t included in the current experimental setup for the following reasons: a) We prioritized PE methods with more established compatibility since RoPE is not yet a standard approach for ViTs. 
b) We hypothesized that RoPE\\u2019s multiplicative nature doesn\\u2019t integrate smoothly with certain components in our design, such as PEmixer, which uses additive interactions. This creates challenges in adjusting the context-to-position ratio without compromising RoPE\\u2019s key property of position-difference sensitivity through dot product alignment. We will incorporate this discussion into our paper as future work.\\n\\n**Encoding Directions in RPE, why not use explicit encodings for all four diagonal directions?**\\n\\nSince the 2D image generation occurs in raster order, the pixels in the image can be related using only 2 directions without losing the 2D nature. This is reflected in our current 2-slope approach, and we will clarify this in the updated paper (**ChangeLog #2.1**). \\n\\nHowever, we appreciate the reviewer\\u2019s suggestion of adding 2 additional slopes, which can potentially enhance precision by providing explicitly 2D representational capacity. We are conducting some experiments to test this four-slope approach and will aim to provide the results in a future update.\\n\\n**How do your architectural changes influence other vision tasks?**\\n\\nOne of the main benefits of our proposed architectural changes is the enhanced focus on positional information, which we have found to be crucial for abstract visual reasoning tasks. This finding aligns with our intuition on vision-based tasks: spatial relations in higher dimensions carry more importance than in the 1D sequence setting. While we have not conducted experiments on other vision tasks, we hypothesize that this enhanced focus on positional information will be beneficial, especially in high-precision settings. We will incorporate this discussion into our paper as future work.\\n\\n**Starting from a Pre-trained ViT?**\\n\\nWe agree that starting from a pre-trained ViT could be beneficial since fine-tuning a pre-trained (non-vision) transformer is currently state-of-the-art [1]. 
However, we did not conduct experiments using pre-trained ViTs, since the main goal of our study is to uncover and address the fundamental limitations within the current ViT setup, which we believe are shown most prominently when studied in the simplest setting: a ViT trained for a single simple AVR task from scratch.\\n\\n[1] Jack Cole and Mohamed Osman. Dataset-induced meta-learning (and other tricks): Improving model efficiency on arc. https://lab42.global/community-model-efficiency/, 2023.\\n\\n**Did you try training jointly for all tasks?**\\n\\nWe are very interested in running this experiment, but are currently unable to do so given our computational resource limits. \\n\\n**Overfitting, Underfitting, and Data Scaling.**\\n\\nDuring development, our preliminary experiments suggested that 1M samples are indeed essential for the model size we used. For example, in ARC#007bbfb7 (https://kts.github.io/arc-viewer/page1/#007bbfb7), ViTARC-VT\\u2019s performance dropped significantly when using fewer samples: with 1M training samples, the model solved 99% of test instances, but reducing to 100k samples caused test performance to drop to 0.7%. As for additional data, while we haven\\u2019t tested beyond 1M samples, we have observed that training for more epochs improves performance. For instance, in ARC#1cf80156, increasing from 1 to 2 epochs improved ViTARC-VT\\u2019s performance from 9.6% to 17.5%. This suggests that increasing the training size could likely further enhance performance. We limited our experiments to 1 epoch due to limitations in our computational resources.\\n\\n**Pixel Prediction Details**\\n\\nFor the domain of ARC, each pixel takes only 1 of 9 possible colors. 
Therefore, in ViTARC, each pixel corresponds to a single token in the vocabulary space and is handled as a standard transformer generation task, where an exact match is required.\"}", "{\"summary\": \"This paper addresses the ARC benchmark from a vision perspective, and answers the question of what changes are required in current SOTA vision architectures to deal with the tasks in ARC.\\n\\nThe paper suggests that the bad results of vanilla ViTs on these tasks are due to their poor encoding of spatial and positional information, and proposes approaches to deal with this limitation.\\n\\nFinally, the paper shows results that indicate that the proposed changes were useful for the performance on the ARC benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 - The paper addresses an interesting question, which is: if the tasks in ARC are visual tasks, how can we use the current vision tools to deal with them?\\n\\n2 - The reasoning behind every contribution in the paper is well explained. The paper is easy to follow. \\n\\n3 - Related to the previous point, I particularly like the analysis in Figure 6. It helps understand what the model is (not) paying attention to.\\n\\n4 - The results in the paper show a clear improvement with respect to the original ViT vanilla baseline, meaning the proposed contributions, overall, are helpful.\", \"weaknesses\": \"The paper has some weaknesses that I believe can be addressed, but also should be addressed.\\n\\n**1 - Unclear that this is vision modeling**.\\n\\nThe main argument of the paper is that vision transformers should be improved to work on ARC. But the proposed changes mean the model is no longer a vision model. While the paper mentions that \\\"Transformer-based LLM approaches convert the images into strings, which does not fully capture all relevant structural information\\u201d, this is not too different from what this paper does. 
\\n - By construction, vanilla Transformers model inputs as sequences, but there are some aspects that make them more \\\"vision\\\" specific. Some of them are not included in the original ViT (e.g. local attention), and the others (e.g. patching) are removed in this paper (see next points). A non-Transformer-based architecture (e.g. a U-Net in a diffusion model) would make the connection to vision clearer. (I'm not suggesting such a model should be used, just exemplifying the point).\\n - The pixels are encoded in a 1x1 grid, effectively making it a sequence.\\n - There is an end-of-sequence token, just as there would be if the task was modeled as a sequence. \\n - Even object positional encodings are added, abstracting away the low-level vision from the tasks. \\n\\nAn emphasis on vision is given in the paper more than once, e.g. \\\"However, in vision tasks, especially those requiring detailed visual reasoning, spatial relationships often carry as much importance as, if not more than, the content of the tokens\\\". I believe, however, that a lot of the learned lessons are not about vision, but about structured prediction. In general vision tasks, for example, the content of tokens *is* more important than the spatial relationships.\\n\\n**2 - Unclear significance of contributions**.\\n\\nOverall, the paper reads as a sequence of contributions the authors tried, one after another, each building on the previous one, without any global understanding of the problems. I believe the process to get to a solution should not be part of the paper. The paper should just present the final results and justify them. This makes the presentation confusing, for example when showing results in section 4, before going back to explaining some more technical contributions in section 5. \\n\\nBut the main problem is not about presentation. I believe there is some disconnect between the different contributions. 
Some of the latter ones should make the former ones unnecessary, and these are not ablated properly. The following are some examples:\\n\\n - First, learnable embeddings are removed in favor of positional embeddings. But then a PEmixer is necessary to learn position-specific embeddings. And also, RPE is required because APE encodes spatial changes very slowly. All of this is confusing. I believe the paper should directly start by explaining the final encoding and its reasoning, not present two different encoding techniques before the final one and show how they are not ideal.\\n - The paper presents quantitative results showing that PEmixer and OPE, on top of ViTARC-VT, actually *decrease* the performance of the model. \\n - The padding is added at the beginning, but then it results in problems that need to be corrected. I address this in more detail in the next weakness.\\n\\n**3 - It is unclear that all the padding contributions are necessary**.\\n\\nWhy are padding tokens necessary? The original formulation (sequential padding at the end), where the padding tokens are ignored by the attention, should be the correct one (computationally it also makes more sense). The per-row padding (without being attended to) is effectively the same as the per-sequence padding, so I am not sure why it is being mentioned. \\n\\nI understand that there is a problem about the output not understanding boundaries that needs to be corrected. But the per-row EOS tokens should be enough to address this issue. For the model, it should be as hard to predict an \\\"end of row\\\" token as it is to predict the current _<arc endxgrid>_ token. Would it be possible to ablate this? This is another example of weakness #2. The final solution should not include previous iterations of the solution, if these are not necessary. No (attended-to) padding would imply: \\n - More efficient forward and backward passes.\\n - No need for three different kinds of end-of-grid tokens. 
Only a single one would be enough.\\n\\n**4 - No baselines or comparisons**.\\n\\nThere are no baselines or comparisons to other approaches, only to their own vanilla ViT. Also, I could not find the paper's performance on the private set of the ARC benchmark, so it is hard to compare to SOTA approaches. Especially interesting comparisons would be to approaches that model ARC as a sequence using an LLM, as I believe the approaches are not very different (see weakness #1).\\n\\n---\", \"other_minor_weaknesses\": [\"Figure 8 in Appendix A seems to motivate the first main technical contributions of the paper. It should not be in the appendix.\", \"It is unclear why the PEMixer is helping. If every example in the task has different coordinates that are important, I don't understand why it would learn a global (across all samples) weighting. What would be the benefit of giving one position more weight than another one, if every example has different important positions?\", \"The paper could contain more clarifications and ablations. See \\\"Questions\\\" section.\"], \"questions\": \"1 - Did you try using RoPE embeddings instead of RPE, in the second point of Section 5? I am curious about the differences.\", \"2___about_rpe\": \"the paper mentions there is an encoding for left and above, and another one for right and below. How are \\\"above and to the right\\\" patches encoded? Why not use an explicit \\\"above left\\\", \\\"above right\\\", \\\"bottom left\\\", \\\"bottom right\\\"? It seems like the approach is re-using the 1D sequential approach without taking into account that we are modeling 2D inputs.\\n\\n3 - How do your architectural changes influence other vision tasks? E.g. classification, detection, VQA, etc. The SOTA ViT models are nowadays used for many tasks (at least as part of many tasks as vision encoders). 
It would be great if the changes proposed in the paper did not hurt performance in other tasks.\\n\\n4 - Related to the previous question, did you try starting from a pre-trained ViT?\\n\\n5 - Did you try training jointly for all tasks? Adding a task ID as context for example.\\n\\n6 - Did you notice any overfitting or underfitting on the final models? Any scaling properties with the data? On the final models, is 1M samples necessary, or would 100k be enough? Would the models perform better with even more samples?\\n\\n7 - When generating pixels, during evaluation does the pixel value have to be exact? Is the prediction done as a classification in the RGB [256 x 256 x 256] space? Or is it a regression in the [0-1](x3) range? If it is a regression, how is the correctness evaluated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"I keep my rating\", \"comment\": \"I appreciate the extensive rebuttal by the authors. They address the points that I raised in detail.\\n\\nHowever, I am still not convinced that the contributions in this paper can be helpful for the community. The weaknesses that I raised in the initial review I believe still hold. A cleaner version of the approach and the paper, with more comparisons to existing methods, a clearer vision of what the contribution of the paper really is, and showing application to other tasks would greatly benefit the paper.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks to the authors for clarifying their method. 
I have updated my score accordingly, though I still note the limited evaluation on ARC only, instead of other complex reasoning tasks (e.g., RPM and OOO as generative tasks) or physical reasoning tasks as the authors mentioned.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s encouraging comments.\\n\\n*Questions*\\n\\n**Adding caveat regarding the divergence from the ARC challenge, and that the real-world relevance is yet to be demonstrated.**\\n\\nWe appreciate this point and will include this caveat in the updated version to clarify that ViTARC\\u2019s contributions are complementary to existing methods and that further testing on broader tasks would better demonstrate its utility (**ChangeLog #4.1**).\\n\\nViTARC\\u2019s contributions are intended to complement current ARC solutions and support the challenge as a whole. Since the current SOTA in ARC relies on LLM-based transduction models that handle tasks through supervised input-output transformations [1], integrating the 2D inductive bias from ViTARC could provide an orthogonal benefit. Prior studies indicate that the sequential nature of 1D methods in LLMs can limit ARC performance; for example, because the input grid is processed in raster order, LLMs experience a significant drop in success rates when horizontal movement/filling tasks are rotated 90 degrees [2].\\n\\nAs we will discuss in **Weakness #1**, the insights from ViTARC may also extend to reasoning tasks that require n x n pixel precision, with potential applications in physics-aware image or video generation in real-world scenarios.\\n\\n\\n[1] Jack Cole and Mohamed Osman. Dataset-induced meta-learning (and other tricks): Improving model efficiency on arc. https://lab42.global/community-model-efficiency/, 2023.\\n\\n[2] Xu, Y., Li, W., Vaezipoor, P., Sanner, S., & Khalil, E. B. (2024). 
LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations. Transactions on Machine Learning Research (TMLR).\\n\\n**Explanation of Task A (6e02f1e3) in Fig. 2**\\n\\nThank you for pointing this out. Task A in Fig. 2 follows a rule based on color count: if the input grid has two distinct colors, the output contains a grey diagonal from the top-left to the bottom-right. Conversely, if the input grid has three colors, the grey diagonal is from the top-right to the bottom-left. We agree that additional explanations or demonstrations for such tasks would be beneficial and have incorporated these clarifications in the updated version (**ChangeLog #4.2**).\\n\\n**Weaknesses #1: Straightforwardness of Modifications and Novelty in Existing Literature.**\\n\\nTreating each ARC task as an Abstract Visual Reasoning (AVR) task places us in a specific setup that combines reasoning, variable-sized grids, and a generative approach with pixel-level precision. Unlike standard vision tasks, we can\\u2019t use resizing or cropping to handle variable sizes, as ARC\\u2019s precision requirements make these methods infeasible.\\n\\nARC tasks are also inherently more complex than benchmarks like RPM or odd-one-out (o3), which only require selecting a correct answer instead of generating a complete output. VQA, by contrast, involves vision input with NLP generation. 
Given these differences, to the best of our knowledge, there is no prior work addressing this same combination of requirements.\\n\\nNotably, the findings in ViTARC could be extended to support reasoning tasks requiring n x n pixel precision, including emerging discussions on physical reasoning in vision generation, where accurate spatial relationships are essential.\\n\\n**Weakness #2**\\n\\nAddressed in **Question #1**\\n\\n**ChangeLog 4.1**: Updated the conclusion to discuss broader contributions.\\n\\n**ChangeLog 4.2**: Added a description for Task A in Figure 2.\"}", "{\"comment\": \"We thank the reviewer for the thorough and thoughtful review.\\n\\nBefore addressing your questions in detail, we would like to reiterate the main goal of this work. While vision transformers are being touted as a basic piece in vision and multimodal foundation models, we show that a vanilla vision transformer cannot solve ARC tasks even when trained on 1,000,000 examples. This finding indicates a clear representational deficiency in vision transformers when applied to visual reasoning. We diagnose this deficiency and address it in this work. Ultimately, the architecture and results are not limited to the ARC, as we will develop next. We focus on the ARC as a representative domain for visual reasoning with growing research interest from the community and reasonable computational resource needs that we can meet. \\n\\n*Weaknesses (part 1)*\\n\\n**Many of the architectural designs in the proposed model are made for solving ARC specifically.**\\n\\nWe acknowledge that the naming of the padding tokens *<arc_pad>* and border tokens *<arc_endxgrid>* unintentionally suggests they are tailored specifically for the ARC domain. To clarify this, we will revise the terminology in the paper accordingly (**ChangeLog #3.1**).\\n\\nThat said, the underlying implementation of 2D padding and border tokens is inherently generic and can be adapted to process any variable-sized images. 
While ARC demands pixel-level precision (1x1 patches), the approach has the potential to generalize to visual reasoning tasks involving n\\u00d7n patches, where standard resizing or cropping techniques\\u2014typical in conventional vision tasks\\u2014are unsuitable.\\n\\nViTARC stands out as the first approach to expose the representational deficiencies in current ViT structures under these constraints and to propose necessary adaptations for addressing such challenges. We will discuss the inductive bias issue and the potential to extend this work to other visual reasoning datasets in response to **Question #1**.\\n\\n\\n**Strong inductive biases including a priori knowledge of what the final grid shape should look like, and also what the initial grid shape looks like.**\\n\\nWe appreciate this concern, as the core challenge of ARC indeed involves inferring the output dimensions.\", \"the_inductive_biases_we_introduce_are_minimal_and_not_specific_to_the_arc_domain\": \"a) The maximum grid size, a common practice in NLP tasks where padding requires a maximum length. Notably, most vision modeling imposes stricter constraints, such as fixed image sizes after resizing or cropping.\\n\\nb) The assumption that the maximum grid shape is rectangular, a typical prior in image processing.\\n\\nImportantly, these biases apply only to the **maximum** grids. We make no assumptions about the actual input or output grid shapes, which remain unbounded\\u2014even with the introduction of border tokens and 2D padding. This is especially true since ViTARC treats each pixel as a token.\\n\\nWe hope these clarifications demonstrate that our injected biases are modest and consistent with standard practices in related fields. Regarding the comment, \\\"Giving the model knowledge that one token should represent one block in ARC is a strong prior to be injecting,\\\" we are unclear about this interpretation. 
In ViTARC, each token represents one pixel, so we assume the reviewer is referring to a pixel as a \\u201cblock\\u201d? If so, we see the pixel simply as the atomic representation of ARC tasks, which is the most prior-free representation we can conceive. If the reviewer was referring to a different meaning of \\u201cblock\\u201d, we would appreciate it if they would consider clarifying their interpretation. \\n\\n**Object-Based Positional Encodings**\\n\\nWe agree with the reviewer that, ideally, the model would learn to infer objectness implicitly through attention among pixels. However, as shown in our failure case analysis (Figure 6), the model sometimes requires a stronger signal to handle complex shapes, such as multi-colored or concave shapes. OPE serves as a demonstration of how architectural design can allow for the injection of external knowledge as \\u201cpositional embeddings\\u201d within ViT models.\\n\\nMoreover, OPE works synergistically with PEmixer. If OPE\\u2019s signal is unreliable, the learned vector weighting can adapt to focus more on the original x and y coordinates. We will explain this more clearly in the updated version (**ChangeLog #3.2**).\"}", "{\"summary\": \"The authors focus on a data-driven model to solve ARC. They first establish that vanilla ViTs fail on ARC despite being trained on a million examples. They then show that a 2D visual representation with ViT improves performance, and that object-based positional information further yields significant performance gains on ARC.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I appreciate that the authors explore the limits of a data-driven approach to ARC, as well as propose potential inductive biases to encode into a reasoning model for ARC. General priors for reasoning tasks are indeed important. The quantitative results compared to a naive ViT are promising for ARC.\", \"weaknesses\": \"1. 
Many of the architectural designs in the proposed model are made for solving ARC specifically. I believe ARC is a great intermediate proxy task for complex reasoning, but should not be an end goal in and of itself. With enough inductive biases, I believe that solving ARC with a million examples is reasonable, but is not particularly enlightening for the community. For example, 2D padding with <arc_pad> tokens and border tokens <arc endxgrid> that define grid conditions, etc., are very much defined for ARC itself. I would like to see if this model generalizes to other reasoning tasks, for example Raven's progressive matrices, odd one out challenges, etc.\\n2. In addition, I'm not convinced that these are indeed the best inductive biases. For example, I believe that by using a \\\"2D template\\\" indicated by padding, border, and new-line tokens, the method is endowed with a priori knowledge of what the final grid shape should look like (and also, what the initial grid shape looks like). One core challenge of ARC is precisely that it needs to infer what the output dimensions are (how the shape transforms). Giving the model knowledge that one token should represent one block in ARC is a strong prior to be injecting. \\n3. The object-based positional encodings based on OpenCV's contour detections will struggle on more complex or different shapes. This \\u201cobjectness\\u201d should be captured by the visual tokens implicitly. \\n4. No other baselines than ViT are explored, though there have been many works proposed for ARC & related reasoning tasks.\", \"questions\": \"1. Can you show that the model works on other complex reasoning tasks outside of ARC as mentioned in point 1 above?\\n2. Does the model work without strong priors, e.g., on input and output grid shape?\\n3. 
Can you add in more baselines trained on the same amount of data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Follow-Up 2D RPE Experiment Results:**\\n\\nWe conducted experiments to evaluate the reviewer\\u2019s suggestion of incorporating four diagonal directions into the relative positional encoding (RPE) design. These experiments were performed on a randomly selected subset of 50 tasks from the original 400 training tasks. The setup for the four diagonal directions was based on a 2D Manhattan distance approach, using the following slopes for the four directions:\\n- Top-right: $ 1/2 $\\n- Top-left: $ 1/2^{0.25} $\\n- Down-right: $ 1/2^{0.5} $\\n- Down-left: $ 1/2^{0.75} $\\n\\nAdditionally, for comparison, we tested an alternative \\\"4-direction\\\" version that used slopes for only the up, down, left, and right directions. In this setup, pixels above and to the right received slope additions for their respective directions, divided by 2 to preserve normalization.\\n\\n### Results Table\", \"below_is_the_performance_summary_for_the_three_configurations\": \"| Model (on 50 subset) | Mean | Median | 25th Pctl. | 75th Pctl. | Delta (Mean) |\\n|---------------------------|--------|--------|------------|------------|--------------|\\n| ViTARC (2 directions) | 79.51 | 96.90 | 83.12 | 99.88 | base |\\n| 2D RPE - 4 diag directions| **79.74** | 94.20 | 77.72 | 99.80 | +0.23 |\\n| 2D RPE - 4 directions | 78.10 | 96.50 | 71.70 | 99.83 | -1.41 |\\n\\n\\n### Key Observations:\\n\\n1. The 4-diagonal-direction RPE slightly outperformed the original 2-direction RPE in mean performance, supporting the reviewer's suggestion that explicitly modeling all four diagonal directions enhances precision.\\n\\n2. While the performance gaps were not significant, both 2D Manhattan-based designs outperformed the cardinal-direction-based RPE (up, down, left, right). 
This indicates that incorporating diagonal relative distances provides additional spatial information, contributing positively to performance.\\n\\nThese findings will be incorporated into future revisions and experiments as part of our continued refinement of RPE design.\"}", "{\"summary\": \"This paper studies vision transformers in abstract visual reasoning ARC tasks, which do not include any text or background knowledge, and focus purely on visual abstraction and pattern recognition. However, directly training a ViT with one million examples per task fails on most ARC tasks. Then, several techniques are proposed to improve model performance, including improved 2D positional encodings, and object-based positional encoding. This work highlights the importance of positional information for utilizing Vision Transformers in visual reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The Abstract Visual Reasoning (AVR) task is interesting and important to study because it requires strong reasoning capability of ViTs. The paper also has very interesting findings, highlighting the importance of positional encodings in solving pure vision-based visual reasoning tasks.\\n2. This work provides very detailed model improvements they have tried to improve the performance, from ViT-Vanilla to ViTARC-VT and ViTARC. This is very useful to the community to reproduce the experiments and improve further.\\n3. The final model ViTARC achieves very strong performance in most of the tasks, which is also a significant improvement over ViT-Vanilla.\", \"weaknesses\": \"1. The training/evaluation protocol is not clearly defined. The paper does not show clearly the generalization ability on unseen tasks. All the tasks they use for evaluation have some training examples during training. It would be very interesting to use some tasks purely for evaluation that are not seen during training.\\n2. 
Some of the key techniques used in this work are not new, like 2D (Relative) Positional Encoding, which has been discussed in the original ViT/Swin Transformer papers and plays a key role in the performance improvement in this work. Though some new techniques are introduced in this work, like Positional Encoding Mixer (PEmixer) and Object-based Positional Encoding (OPE), the overall contribution and novelty are marginal.\", \"questions\": \"1. In figure 7, ViTARC-VT has very large variance in terms of performance. Any reason for it? Also what is the key technique leading to the significant performance improvement from ViT-Vanilla to ViTARC-VT? BorderTokens do not seem to be important from this figure. Is this because of the 2D positional encodings in ViTARC-VT, instead of 1D positional encodings in ViT-Vanilla?\\n2. The Equation (12) is not quite clear. I did not find how to calculate the values of $r_{left}$ and $r_{right}$.\\n3. It is unclear how to tune the hyper-parameters $\\\\alpha$ and $\\\\beta$ in Equation (10), and no ablation studies are provided.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
 ]
}
0gGPVbRqOE
Splitted Wavelet Differential Inclusion for neural signal processing
[ "Xinwei Sun", "Xuanjun Guo", "Yanwei Fu", "Shouyan Wang" ]
Wavelet shrinkage is a powerful tool in neural signal processing. It has been applied to various types of neural signals, such as non-invasive signals and extracellular recordings. For example, in Parkinson's disease (PD), $\beta$ burst activities in local field potential (LFP) signals indicate pathological information, which corresponds to a \emph{strong signal} with higher wavelet coefficients. However, it has been found that there also exists a \emph{weak signal} that should not be ignored. This weak signal refers to the set of small coefficients, which corresponds to the non-burst/tonic activity in PD. While it lacks the interpretability of the strong signal, neglecting it may result in the omission of movement-related information during signal reconstruction. However, most existing methods have mainly focused on strong signals while ignoring weak signals. In this paper, we propose \emph{Splitted Wavelet Differential Inclusion}, which provably achieves better estimation of both the strong signal and the whole signal. Equipped with an $\ell_2$ splitting mechanism, we derive the solution path of a pair of parameters in a newly proposed differential inclusion, of which the sparse one can remove bias in estimating the strong signal and the dense parameter can additionally capture the weak signal with the $\ell_2$ shrinkage. The utility of our method is demonstrated by the improved accuracy in a numerical experiment and additional findings of tonic activity in PD.
[ "Wavelet smoothing", "differential inclusion", "weak signal", "signal reconstruction", "Parkinson's disease", "burst activity" ]
https://openreview.net/pdf?id=0gGPVbRqOE
https://openreview.net/forum?id=0gGPVbRqOE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "n3eDkiBHCq", "lKfS9Yksjf", "RF8Y2abpXL", "PP1bUTzqMk", "BRKjUw3qMU" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1729581039581, 1731992487189, 1730296602075, 1730560430442, 1730651854991 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12724/Reviewer_owsp" ], [ "ICLR.cc/2025/Conference/Submission12724/Authors" ], [ "ICLR.cc/2025/Conference/Submission12724/Reviewer_BHCx" ], [ "ICLR.cc/2025/Conference/Submission12724/Reviewer_Yxeg" ], [ "ICLR.cc/2025/Conference/Submission12724/Reviewer_BPQZ" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes the Splitted Wavelet Differential Inclusion (SWDI) method for neural signal processing, achieving better strong and weak signal estimation in Parkinson's disease analysis.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces the Splitted Wavelet Differential Inclusion (SWDI) method, which improves the estimation of both strong and weak neural signals by utilizing an \\u21132 splitting mechanism. It demonstrates better accuracy than traditional wavelet shrinkage, particularly in Parkinson's disease signal analysis, capturing non-burst activity alongside stronger signal components.\", \"weaknesses\": \"To better demonstrate the proposed method's practicality, I suggest including comparisons with non-wavelet methods, particularly those that have seen recent success in this field. This could provide a clearer perspective on how the method performs in a wider range of real-world applications. Specific examples of non-wavelet techniques, such as deep learning-based methods, would strengthen the evaluation. While wavelet techniques have been widely used in the past, it would be helpful if the authors could justify their choice of wavelets in this context and explain how their approach advances the state-of-the-art. 
Additionally, comparisons with more recent works in the field would help to clarify the method's relevance and novelty. The paper's content seems more suitable for signal processing journals or conferences, such as TSP, INDIN, or ICASSP.\", \"questions\": \"It would be beneficial for the authors to discuss the tradeoffs between wavelet techniques and more recent methods, such as deep learning-based approaches, in this specific application. A comparison or discussion of how their method performs relative to recent non-wavelet techniques would provide valuable context for evaluating the method's effectiveness. The distinction between weak signals and noise could be clarified further. I suggest that the authors provide more detailed criteria for how they distinguish between the two, and discuss whether there are alternative approaches. The \\\"differential\\\" aspect of the proposed method requires clearer explanation. A step-by-step description of how the differential inclusion is applied, ideally with a simple example, would greatly improve the clarity of this concept and make it more accessible to readers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper considers the longstanding problem of recovering a temporal univariate signal from its noisy observations, which is a fundamental problem in signal processing. The authors approach this challenge using wavelet analysis, where they propose to partition the signal into strong and weak components based on the magnitudes of the wavelet coefficients. The paper presents a new method, termed Splitted Wavelet Differential Inclusion (SWDI), which is designed to recover the strong component by employing a differential inclusion framework. 
It is shown theoretically and empirically in a simulation that the proposed method recovers the strong signal more accurately than other methods based on wavelet shrinkage. Additionally, the method is demonstrated in application to neural signals, where the goal is to identify medication effects on Parkinson\\u2019s disease.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The considered problem is central and important.\", \"The application to neural signals, specifically in the context of Parkinson\\u2019s disease, is interesting.\"], \"weaknesses\": [\"The **presentation** of the entire paper, and particularly of the technical aspects, is challenging to follow, which hinders comprehension of the core ideas and derivations.\", \"Due to the presentation style, it is difficult to clearly appreciate the novelty of the paper. The mix of formal and informal statements complicates rigorous validation.\", \"The **problem setting** lacks clarity, particularly its statistical model. While the strong signal is defined based on the noise standard deviation $\\\\sigma$, the method\\u2019s dependence on the signal-to-noise ratio (SNR) or $\\\\sigma$ is unclear. Additionally, it is not specified whether the derivations and results assume Gaussian noise or if the true signal $f$ is deterministic.\", \"**Numerical results** are limited. Figures are of low resolution with small fonts, making them hard to interpret, especially Figure 1. Expanding the numerical experiments to encompass a broader range of cases and a more comprehensive comparison with alternative shrinkage methods would enhance the paper. 
The justification for the selected baselines is unclear, considering the prevalence of other methods addressing the same problem.\"], \"questions\": \"Clarification is needed regarding the problem setting, statistical assumptions, and the choice of baseline methods (see weaknesses above).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper improves on the well-known wavelet shrinkage approach, introducing a novel method coined \\\"Splitted Wavelet Differential Inclusion (SWDI)\\\". As opposed to wavelet shrinkage, it also takes weak components of the signal to be analyzed into consideration. The effectiveness of SWDI is shown by numerical experiments on data from Parkinson patients.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"It is a nice idea to go beyond considering only large wavelet coefficients, i.e., the strong part of the signal.\", \"The author substantiated her/his novel approach with a theoretical foundation (see, e.g., Theorem 4.6).\"], \"weaknesses\": [\"At present, we have powerful methods based on learning. It is not clear and not even discussed why those are not taken into consideration. There might be good reasons, but this requires a careful discussion.\", \"The numerical experiments only compare to other model-based methods, mainly wavelet-based approaches. Again, in particular, learning-based methods (DNNs, etc.) 
need to be used for comparison.\"], \"questions\": \"Please see the weaknesses and reply to those.\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"details_of_ethics_concerns\": \"Medical data from Parkinson patients is used.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel method, the Splitted Wavelet Differential Inclusion (SWDI), for enhancing neural signal analysis, particularly for applications related to Parkinson\\u2019s disease. SWDI introduces a dual-parameter approach that estimates both the strong and whole signals simultaneously, addressing limitations of previous wavelet shrinkage techniques. The authors demonstrate that their closed-form solution path improves estimation accuracy for both signal components. This work contributes to the field of neural signal processing by offering a robust framework for analyzing complex neural data, which could have significant clinical applications in neurodegenerative disease research.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality\\nThe proposed SWDI method creatively combines wavelet analysis with differential inclusion to address limitations in current shrinkage methods by focusing on both strong and weak signals, thus enhancing the detection of signal features important for clinical applications.\\n\\nQuality\\nThe paper is well-founded with rigorous theoretical analysis that supports the authors' claims.\\n\\nClarity\\nOverall, the paper is clearly written, although some improvements can be made (see 'Weaknesses' section).\", \"weaknesses\": \"The clarity of presentation of this paper can be improved. There are some English mistakes, subject-verb disagreement, missing conjunction 'and', etc. 
These should be carefully addressed prior to the publication of this paper.\", \"examples\": \"\", \"line_052\": \"Add 'and' before 'non-parametric shrinkage'.\", \"line_053\": \"Change 'contains in the signal' to 'contained in the signal'.\", \"line_056\": \"Change 'composed by' to 'composed of'.\", \"line_073\": \"Change 'On the other' to 'On the other hand'.\", \"line_107\": \"Add 'and' before 'non-parametric shrinkage'.\", \"line_114\": \"Change 'have' to 'has'.\", \"line_123\": \"Add 'and' before 'the non-burst component'.\", \"line_373\": \"Change 'includes' to 'include'.\", \"line_377\": \"Change 'the same ... with' to 'the same ... as'.\", \"line_382\": \"Change 'as a contrast' to 'in contrast'.\", \"line_386\": \"Change 'compare to' to 'compared to'.\", \"line_402\": \"Add 'and' before 'then increases'.\", \"line_500\": \"Insert 'be' between 'may' and 'due to'.\", \"line_512\": \"Change 'Fig. 3,. 4' to 'Figs. 3 and 4'.\", \"questions\": \"How does the SWDI method compare to deep learning-based approaches or other adaptive wavelet techniques in terms of accuracy and computational efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
0fwJMANq9P
Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models
[ "Xuan Wu", "Di Wang", "Zhiguang Cao", "Chunguo Wu", "Lijie Wen", "Chunyan Miao", "Yubin Xiao", "You Zhou" ]
Recent studies exploited Large Language Models (LLMs) to autonomously generate heuristics for solving Combinatorial Optimization Problems (COPs), by prompting LLMs to first provide search directions and then derive heuristics accordingly. However, the absence of task-specific knowledge in prompts often leads LLMs to provide unspecific search directions, obstructing the derivation of well-performing heuristics. Moreover, evaluating the derived heuristics remains resource-intensive, especially for those semantically equivalent ones, often requiring unnecessary resource expenditure. To enable LLMs to provide specific search directions, we propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite heuristics and incorporate them as prior knowledge in prompts. We theoretically prove the effectiveness of CAP in reducing unspecificity and provide empirical results in this work. To reduce the required computing resources for evaluating the derived heuristics, we propose few-shot Performance Prediction Prompting (PPP), a first-of-its-kind method for the Heuristic Generation (HG) task. PPP leverages LLMs to predict the fitness values of newly derived heuristics by analyzing their semantic similarity to previously evaluated ones. We further develop two tailored mechanisms for PPP to enhance predictive accuracy and determine unreliable predictions, respectively. The use of PPP makes Hercules more resource-efficient and we name this variant Hercules-P. Extensive experiments across various HG tasks, COPs, and LLMs demonstrate that Hercules outperforms the state-of-the-art LLM-based HG algorithms, while Hercules-P excels at minimizing computing resources. In addition, we illustrate the effectiveness of CAP, PPP, and the other proposed mechanisms by conducting relevant ablation studies.
[ "Heuristic Generation", "Large Language Models", "Combinatorial Optimization Problem" ]
Reject
https://openreview.net/pdf?id=0fwJMANq9P
https://openreview.net/forum?id=0fwJMANq9P
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uT18DbPALb", "sdbDclS7zM", "scFxa8difI", "riiRSvnRZ1", "oizMoaNT0F", "kK1xaevCZP", "hGQrqm1lPi", "bzFm0p2KhW", "RdV9ntzYiR", "RDCJ33o7SL", "KrmnegcvpO", "BUs5IZiCoQ", "6mkWdBMzCJ" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730484503148, 1732622685188, 1730680576517, 1732512233201, 1732512172973, 1730685424040, 1737524216786, 1732512083884, 1732677216582, 1734689904313, 1733141096174, 1732512359695, 1730102082668 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12808/Reviewer_QjjZ" ], [ "ICLR.cc/2025/Conference/Submission12808/Reviewer_KGdN" ], [ "ICLR.cc/2025/Conference/Submission12808/Reviewer_KGdN" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Reviewer_jJ2Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Area_Chair_kw1y" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Authors" ], [ "ICLR.cc/2025/Conference/Submission12808/Reviewer_f4Hd" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a framework to use LLMs to generate heuristics for solving\\noptimization problems. The authors describe their framework and evaluate it\\nempirically, comparing to other approaches in the literature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed framework is interesting and seems to work well in practice.\", \"weaknesses\": \"The choice of KGLS as seed heuristics should be justified as it was not designed\\nfor the general TSP. 
Why not LKH? This should also be considered in the\\nempirical evaluation; in particular to answer the question of whether KGLS is a\\nreasonable heuristic to start with in this case (improving over a weak heuristic\\nis easier than improving over a strong heuristic).\\n\\nFigure 5 has no axis labels.\", \"questions\": \"The differences are small in some cases and it would be great if the authors could\\nprovide error bounds or confidence intervals for the empirical results.\\n\\nWhy is KGLS a reasonable base heuristic?\", \"update_after_responses\": \"Thank you for your responses. I have updated my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
\\nThe connection with Information Gain is an excellent addition\\nFrom a practical perspective, the paper considers several details into account such as reducing costs via LLM predictors.\", \"weaknesses\": \"As rightly noted in the paper, the idea of mimicking evolutionary computation via LLMs is not new. In fact, most (all?) crossover and mutation operators are from Ye et. al. 2024. On the one hand, the experiments and the ablation study show that the proposed modifications might offer some benefit in the results, and on the other hand, they can be regarded as incremental, and it is not clear what's the main takeaway.\\n\\nRegarding the presentation, I found it difficult/confusing that many moving parts are introduced as large components with several acronyms Hercules, CAP, PPP, EXEMPLAR, Cons --but after all, the provided pseudocode shows the overall algorithm, so I am not sure what these abstractions add to the presentation. Also, the paper claims our \\\"propriety\\\" CAP algorithm --what does that mean? \\n\\nThe idea of adding more specificity to the prompts seems reasonable at a high level, but the paper overindexes too much into the example in Figure 1. The information gain analysis is interesting (and is borrowed from a Hu et. al. 2024) but at the end what happens is we select top-k core components. And that's also not uniform, we do that only some number of iterations (denoted by \\\\lambda in the paper), all of which remain as more hyper-parameters to deal with.\\n\\nThe experiments cover TSP, CVPR, Binpacking, Multi-Knapsacks. Importantly, the starting seed function seems critical to the approach. The method generates heuristics but the overall approach to solve these problems are meta-heuristics. (please correct me if I understand this correctly). For TSP, we use guided local search. For BinPacking and Knapsacks we use Ant-Colony Optimization. 
One might argue that the settings of the outer meta-heuristics and their performance are crucial to the overall results and not just the heuristics (generated by LLMs here.) The experiments do not discuss or study any of this. \\n\\nAdditionally, all comparisons are with other LLM-based heuristics generations. Note that this is quite a costly approach (hence some effort with performance predictors to save time etc.). According to the tables in the appendix, we are consuming many many minutes upto 5 hours. Then, it is not clear to me how to fairly evaluate these results. How does the same GLS and ACO without the advanced heuristics found by LLM but with standard heuristics perform given the same amount of time? (Btw, does this time include LLM queries or only running the heuristics after the LLM generates them against the instances?) \\n\\nThis might not be surprising that the choice of LLM quite affects the results (Table 1; LLama vs GPT-4o). But then it makes one wonder how much of the value comes from the many moving components proposed here vs. plain and simple, the underlying LLM.\", \"questions\": \"Could you provide details on the outer meta-heuristics (GLS, ACO etc.)? How much of the results are due to the LLM integration with CAP, PPP etc. vs. the meta-heuristics leading the search into good solutions.\\nIt would be interesting to know the comparison between default GLS, ACO or even other baselines for TSP, BinPacking, MKP to position the results in this paper. As is, it is hard to evaluate the significance\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer KGdN (Part 3)\", \"comment\": \"> **W4:** The experiments cover TSP, CVPR, Binpacking, Multi-Knapsacks. Importantly, the starting seed function seems critical to the approach. The method generates heuristics but the overall approach to solve these problems are meta-heuristics. 
(please correct me if I understand this correctly). For TSP, we use guided local search. For BinPacking and Knapsacks we use Ant-Colony Optimization. One might argue that the settings of the outer meta-heuristics and their performance are crucial to the overall results and not just the heuristics (generated by LLMs here.) The experiments do not discuss or study any of this.\\n\\n**Response 2.4 (Choices of meta-heuristics):**\\n\\nWe agree with you that the settings and performance of the outer meta-heuristics are crucial to the overall results. Therefore, we adopt most HG tasks introduced in the closely relevant prior study (Ye et al., 2024a), e.g., deriving penalty heuristics for GLS to solve TSP and deriving heuristic measures for ACO to solve BPP and MKP. In addition, the parameter configurations for all seed functions are set identically to those in the prior study (Ye et al., 2024a), as stated in Appendix D of the previously submitted manuscript:\\n\\n> \\u201c*To ensure a fair comparison, we adopt the parameter configurations of all seed functions (e.g., KGLS parameters) as specified in the prior study (Ye et al., 2024a)*\\u201d.\\n\\n**Nonetheless, we believe the primary focus of our work is not to compare the performance of GLS and ACO on TSP, but rather to find out how much improvement the Hercules-derived heuristics may lead to when compared against those derived by other LLM-based HG algorithms, all using these established seed functions.** We believe that our choices and parameter configurations of seed functions provide a robust and fair demonstration of Hercules\\u2019 advantages over EOH (ICML\\u201924) and ReEvo (NeurIPS\\u201924), which are the most relevant and state-of-the-art (SOTA) baselines regarding our work.\\n\\nTo further address your concern, we have conducted an ablation study on the parameters of ACO to comprehensively evaluate Hercules\\u2019 robustness. 
The following experimental results clearly demonstrate that Hercules consistently outperforms the other LLM-based HG algorithms.\\n\\n| Algorithm | Gain (%) | Time (s) |\\n| --- | --- | --- |\\n| ACO | - | 261 |\\n| ACO+Random | -0.60 | 263 |\\n| ACO+EoH (ICML'24) | 0.25 | 264 |\\n| ACO+ReEvo (NeurIPS'24) | 0.20 | 268 |\\n| ACO+Hercules-P (ours) | _0.46_ | 264 |\\n| ACO+Hercules (ours) | **0.59** | 267 |\\n\\n> **W5:** Additionally, all comparisons are with other LLM-based heuristics generations. Note that this is quite a costly approach (hence some effort with performance predictors to save time etc.). According to the tables in the appendix, we are consuming many many minutes upto 5 hours. Then, it is not clear to me how to fairly evaluate these results. How does the same GLS and ACO without the advanced heuristics found by LLM but with standard heuristics perform given the same amount of time? (Btw, does this time include LLM queries or only running the heuristics after the LLM generates them against the instances?)\\n\\n**Response 2.5 (Performance of meta-heuristics):**\", \"as_mentioned_on_page_8_of_the_previously_submitted_manuscript\": \"> \\u201c*The gain measure is calculated as 1-(the performance of the LLM-produced heuristics)/(the performance of the original KGLS)*.\\u201d\\n\\nThis metric ensures that our experimental results always compare the performance of the LLM-derived heuristics against the original seed function. Additionally, as repeatedly stated in the previously submitted manuscript, the reported time refers to the overall search time of the LLM-based HG algorithms, including both the LLM response time and the execution time of all derived heuristics. 
For example, on Page 6 of the previously submitted manuscript, we stated:\\n\\n> \\\"*Hercules-P reduces the overall search time to 77% (23.6/30.6) of that required by Hercules.*\\u201d\\n\\nIt is generally recognized that LLM-derived heuristics do not significantly increase the execution time of the seed function. In response to your comment, we have now included execution time experiments in Appendix E.2 of the revised manuscript, comparing the original ACO with the final LLM-derived ACO variants. The experimental results (see the table presented in Response 2.4) demonstrate that the LLM-derived heuristics significantly improve the performance of the original ACO algorithm while only taking a marginally longer execution time.\"}", "{\"title\": \"Response to Reviewer KGdN (Part 2)\", \"comment\": \"> **W3:** The idea of adding more specificity to the prompts seems reasonable at a high level, but the paper overindexes too much into the example in Figure 1. The information gain analysis is interesting (and is borrowed from a Hu et. al. 2024) but at the end what happens is we select top-k core components. And that's also not uniform, we do that only some number of iterations (denoted by \\\\lambda in the paper), all of which remain as more hyper-parameters to deal with.\\n\\n**Response 2.3 (Significance of Figure 1 & purpose of $\\\\lambda$):**\\n\\nFigure 1 shows a representative example to further explain the challenge faced by RP (i.e., unspecificity in LLM responses) and our motivation (i.e., enhancing the quality of the produced search directions by incorporating core components as prior knowledge), aiming to help readers better understand the distinction between our proposed CAP method and the RP method (Ye et al., 2024a). While Figure 1 is cited five times throughout the main text, most references (e.g., on Page 4) serve to explain the challenge faced by RP (three times). 
We deem these cross-references necessary to enhance reader comprehension, and they are not overly indexed.\\n\\nAlthough the concept of analyzing information gain is inspired by Hu et al. (2024), **our work introduces extensive extensions**. Specifically, Hu et al. (2024) did not define the value range of $IG(\\\\Omega_t)$, while we have rigorously proven that $IG(\\\\Omega_t)$ can decrease through core component abstraction and eventually fall within the $(0,\\\\log(k+1)]$ range. Furthermore, whereas Hu et al. (2024) restricted $\\\\Omega_t$ to two subsets (i.e., \\\"yes\\\" and \\\"no\\\" subsets), we expanded it to accommodate $k+1$ subsets, broadening its applicability. For example, in Appendix A of the previously submitted manuscript, we stated:\\n\\n> \\\"*Therefore, by abstracting core components, the unspecificity (entropy) can decrease within the $(0,\\\\log(k+1)]$ interval.*\\\"\\n\\nIn addition, we select the top-$k$ core components based on their fitness values, **which is a standard practice in evolutionary computation and aligns with the principles of elitism** (Zhang et al., 2015).\\n\\nRegarding your concern about hyperparameter $\\\\lambda$, we would like to emphasize that \\u03bb is primarily introduced to better balance between exploitation and exploration, as mentioned on Page 6 of the previously submitted manuscript:\\n\\n> \\u201c*In addition, Hercules adopts the core components of the top-$k$ heuristics as prior knowledge during the first $\\\\lambda$ percent of iterations ($\\\\lambda \\\\in$ [0,1]). 
In the later iterations, to better preserve population diversity, Hercules directly applies the core components of the parent heuristics as prior knowledge to provide search directions, bypassing the abstraction process of elite heuristics*\\u201d.\\n\\nIn addition, we would like to further clarify that except for ablation studies, all parameters (including $\\\\lambda$) are set consistently across all experiments, as detailed on Page 18 of the previously submitted manuscript. Finally, it is worth noting that most evolutionary computation methods, e.g., (Zhan et al., 2009), (Yang et al., 2018), and (Zhang et al., 2021), utilize adaptive hyperparameters to effectively balance between exploitation and exploration. **Therefore, the incorporation of the hyperparameter $\\\\lambda$ is consistent with established practices in the field of evolutionary computation.**\\n\\nJianming Zhang, Weifeng Pan, Jingjing Wu, and Jing Wang. Top-$k$ elites based oppositional differential evolution. IJWMC, 2015.\\n\\nZhi-Hui Zhan, Jun Zhang, Yun Li, and Henry Shu-Hung Chung. Adaptive particle swarm optimization. IEEE TCYB, 2009.\\n\\nQiang Yang, Wei-Neng Chen, Jeremiah Da Deng, Yun Li, Tianlong Gu, and Jun Zhang. A level-based learning swarm optimizer for large-scale optimization. IEEE TEVC, 2018.\\n\\nFangfang Zhang, Yi Mei, Su Nguyen, and Mengjie Zhang. Correlation coefficient-based recombinative guidance for genetic programming hyperheuristics in dynamic flexible job shop scheduling.\\nIEEE TEVC, 2021.\"}", "{\"summary\": \"The paper presents Hercules, an LLM-based algorithm for generating heuristics for combinatorial problems. The paper seems to extend the framework in Ye et al., 2024 with a more advanced direction generation (based on identifying core components in heuristics) as well as an LLM-based fitness calculation. 
The experiments show gains over the baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Strengths:\", \"The topic is interesting and of recent interest\", \"The approach (CAP and PPP) seems novel.\", \"The experiments show significant gains over the baselines in deriving penalty heuristics for guided local search, as well as more moderate gains on constructive heuristics for TSP, heuristic measures for ant colony optimization, and reshaping of attention scores in neural combinatorial optimization,\"], \"weaknesses\": [\"Weaknesses:\", \"I found the claim about information gain to be quite confusing.\", \"First, a lot of information is missing: why the number of core components corresponds to the number of heuristics (can we not have multiple core components per heuristic or the same core component in multiple heuristics)? why do we assume that the set of all possible directions can be partitioned into mutually exclusive subsets that correspond to components (can we not have the same direction for multiple core components)?\", \"Second, it is really not clear why the information gain means we get better heuristics (as indicated in lines 284-285)? If the components generated are of low-quality the directions may be of lower quality as well.\", \"Experimental evaluation:\", \"It is not clear what is being reported under gain: the definition is based on \\\"the performance of ...\\\" but it is not clear how performance is measured.\", \"Writing: the writing could improve as a lot of information is not clearly presented. For example there are no clear definitions for a range of terms like parent heuristics, elite heuristics, etc.\", \"The paper does not provide significant insight into the impact of the proposed techniques (CAP and PPP) beyond the experimental results. 
For example, it would be interesting to show an analysis of the correlation between predicted fitness values and quality of heuristics.\"], \"questions\": \"I would appreciate the authors' response and clarification on the points listed under \\\"weaknesses\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer KGdN (Part 1)\", \"comment\": \"Thank you for your appraisal. Here are our detailed responses to your comments. If anything remains unclear, we would be more than happy to provide further clarification.\\n\\n---\\n> **W1:** As rightly noted in the paper, the idea of mimicking evolutionary computation via LLMs is not new. In fact, most (all?) crossover and mutation operators are from Ye et. al. 2024. On the one hand, the experiments and the ablation study show that the proposed modifications might offer some benefit in the results, and on the other hand, they can be regarded as incremental, and it is not clear what's the main takeaway.\\n\\n**Response 2.1 (Doubt on contribution & takeaway):**\\n\\nIndeed, most evolutionary computation algorithms rely on crossover and mutation operators. The reason we adopt the crossover and mutation operators introduced by Ye et al. (2024a) is **to ensure a fair demonstration of the effectiveness of our proposed CAP.** As shown in Figure 1 of the previously submitted manuscript, **compared with RP proposed by Ye et al. (2024a), our proprietary CAP enhances the quality of the produced search directions by first prompting the LLMs to abstract the core components as prior knowledge.** Extensive experimental results further validate the advancement of CAP over RP, even when identical crossover and mutation operators are employed. 
**Additionally, we introduce a novel LLM-based heuristic performance predictor, named PPP, to mitigate the excessively long search times required by ReEvo (Ye et al., 2024a) for certain HG tasks.** Furthermore, PPP incorporates two tailored mechanisms, EXEMPLAR and Cons, to enhance predictive accuracy and identify unreliable predictions, respectively. **These contributions in our work (i.e., CAP, PPP, EXEMPLAR, and Cons) are unique, significantly enhancing the quality of the derived heuristics and reducing unnecessary resource expenditure compared to ReEvo. Therefore, we argue that our contributions should not be regarded as \\\"incremental\\\".**\\n\\n**We believe the main takeaways of our work have been clearly stated throughout.** For example, in the Abstract of the previously submitted manuscript, we highlighted:\\n\\n> \\\"*To enable LLMs to provide specific search directions, we propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite heuristics and incorporate them as prior knowledge in prompts*\\\",\\n\\nand\\n\\n> \\\"*To reduce the required computing resources for evaluating the derived heuristics, we propose few-shot Performance Prediction Prompting (PPP), a first-of-its-kind method for the Heuristic Generation (HG) task. PPP leverages LLMs to predict the fitness values of newly derived heuristics by analyzing their semantic similarity to previously evaluated ones. We further develop two tailored mechanisms for PPP to enhance predictive accuracy and determine unreliable predictions, respectively.*\\\"\\n\\nFurthermore, in Section 1 of the previously submitted manuscript, we have summarized our contributions comprehensively. 
**In summary, our work centers around our proprietary CAP and PPP methods, which are the key contributions of this work.**\\n> **W2:** Regarding the presentation, I found it difficult/confusing that many moving parts are introduced as large components with several acronyms Hercules, CAP, PPP, EXEMPLAR, Cons --but after all, the provided pseudocode shows the overall algorithm, so I am not sure what these abstractions add to the presentation. Also, the paper claims our \\\"propriety\\\" CAP algorithm --what does that mean?\\n\\n**Response 2.2 (Excessive acronyms & meaning of proprietary):**\\n\\n**We respectfully disagree with your observation that the mentioned acronyms make the presentation confusing. Instead, we properly defined all the acronyms of our proposed components in the previously submitted manuscript.** The use of acronyms is commonly adopted in most research publications to facilitate reference to the respective proposed models or components. For example, Liu et al. (2024a) defined five prompt strategies (E1, E2, M1, M2, and M3) to enhance readability and highlight their contributions. We are more than happy to provide additional explanations or engage in further discussion to clarify any points and ensure there is no misunderstanding.\\n\\nThere appears to be a misunderstanding: we never used the phrase \\\"*our **propriety** CAP algorithm*\\\". Rather, the term **proprietary** in \\\"*our **proprietary** CAP algorithm*\\\" indicates that CAP is a novel method proposed by us and, to the best of our knowledge, the first of its kind to address the issue of unspecificity in LLM responses within the field of LLM-based HG.\\n\\nFei Liu, Xialiang Tong, Mingxuan Yuan, et al., Evolution of heuristics: Towards efficient automatic algorithm design using large language model. In ICML, 2024a.\"}
We sincerely hope we have adequately addressed your concerns regarding our work. Please do let us know if you have any additional concerns, questions, or suggestions. We are more than happy to engage in further discussions to improve our research. We greatly appreciate your understanding and support.\"}", "{\"metareview\": \"The paper presents prompt and other refinements operating on top of an LLM to guide a meta-heuristic approach dedicated to combinatorial optimization.\\nThe approach continues the work done by Fei Liu et al., 2024 (ICML) and Haoran Ye et al., 2024 (NeurIPS). \\nThe improvements come from the proposed core abstraction prompting (CAP) and the performance prediction prompting (PPP). Reminiscent of NAS, the authors also provide a generation of heuristics samples (EXEMPLAR) and a confidence stratification module (ConS) enhancing PPP.\", \"additional_comments_on_reviewer_discussion\": \"The many ingredients in the approach were diversely appreciated by the reviewers, despite the authors' rebuttals.\\n\\nThe area chair encourages the authors to revise the paper, notably clarifying the claims (CAP being proprietary while the approach is made publicly available), and hopes to see this revised version soon.\"}", "{\"comment\": \"Dear Reviewer KGdN,\\n\\nThank you once again for your insightful comments and helpful suggestions. As the author-reviewer discussion will end soon (< 24 hours from now), we would greatly appreciate it if you could take a moment to review our rebuttal. We sincerely hope our efforts and improvements are taken into consideration. Please let us know if you have any further questions or concerns.\\n\\nBest regards, \\nAuthors\"}", "{\"title\": \"Response to Reviewer KGdN (Part 4)\", \"comment\": \"> **W6:** It might not be surprising that the choice of LLM quite affects the results (Table 1; Llama vs GPT-4o). But then it makes one wonder how much of the value comes from the many moving components proposed here vs. 
plain and simple, the underlying LLM.\\n\\n**Response 2.6 (Ablation studies & underlying LLM):**\\n\\nWe do agree with you that the choice of LLM greatly affects the results. That is exactly why we conducted various experiments (see Tables 1, 3, and 4) to demonstrate that **no matter which LLM is in use, Hercules consistently outperforms the baseline models**. This success can be attributed to the contributions of each proposed component, with Hercules-P achieving comparatively strong results for the same reason. **As presented in Section 4.5 of the previously submitted manuscript, we had conducted extensive ablation studies to demonstrate the effectiveness of CAP, PPP, and the other tailored mechanisms.** These experimental results clearly demonstrate the value-adds of all proposed methods and mechanisms.\\n\\nFurthermore, we believe the underlying LLM you refer to closely resembles the **Random** algorithm compared in our manuscript. As mentioned on Page 8 of the previously submitted manuscript:\\n\\n> \\u201c*Random is a straightforward method that derives heuristics directly using LLMs without incorporating search directions and is commonly used as a baseline model in NAS studies (Li&Talwalkar, 2020).*\\u201d\\n\\nTherefore, we did make a direct comparison with the underlying LLM (i.e., the Random algorithm), and the experimental results showed that Hercules outperforms the underlying LLM.\\n\\n> **Q1:** Could you provide details on the outer meta-heuristics (GLS, ACO etc.)?\\n\\n**Response 2.7 (Details of outer meta-heuristics):**\\n\\nThe rationale behind selecting specific seed functions for different COPs is elaborated in Response 2.4. For specific choices of GLS and ACO, we generally follow the closely relevant prior study (Ye et al., 2024a). If you would like to know more about the parameter configurations and algorithm descriptions of all selected seed functions, you may refer to Appendices C and D of the prior study (Ye et al., 2024a) for more details. 
This experimental design had been clearly explained in Appendix D of the previously submitted manuscript:\\n\\n> \\u201c*To ensure a fair comparison, we adopt the parameter configurations of all seed functions (e.g., KGLS parameters) as specified in the prior study (Ye et al., 2024a), which also documented the definitions of all HG tasks used in this paper.*\\u201d\\n\\n\\n\\n> **Q2:** How much of the results are due to the LLM integration with CAP, PPP etc. vs. the meta-heuristics leading the search into good solutions. It would be interesting to know the comparison between default GLS, ACO or even other baselines for TSP, BinPacking, MKP to position the results in this paper. As is, it is hard to evaluate the significance\\n\\n**Response 2.8 (Ablation studies & performance of meta-heuristics):**\\n\\nFor a detailed discussion on the effectiveness of CAP and PPP, please refer to Response 2.6. Additionally, the rationale for not directly comparing the performance of different seed functions, such as GLS and ACO, is explained in Response 2.4. In summary, we believe our work explicitly and comprehensively demonstrates the contributions of CAP, PPP, EXEMPLAR, and ConS, and presents the performance of the meta-heuristics.\"}", "{\"summary\": \"This paper explores the application of LLMs in autonomously generating heuristics to solve COPs and proposes a novel algorithm named Hercules to address the two main challenges of existing approaches.\\n\\nHercules utilizes the Core Abstraction Prompting (CAP) method to abstract core components from elite heuristics and incorporate them as prior knowledge in prompts, thereby improving the specificity of search directions. This paper further introduces Hercules-P, an efficient variant of Hercules that integrates CAP with the novel Performance Prediction Prompting (PPP) method. 
PPP leverages LLMs to predict the fitness values of newly derived heuristics based on their semantic similarity to previously evaluated ones, significantly reducing the required computing resources.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is laudable for its well-structured and logical presentation, providing a comprehensive understanding of the research topic.\\n\\n2. The article is praiseworthy for its extensive experimental data and significant findings. The authors have selected a number of baselines for comparative experiments on different benchmarks.\\n\\n3. The supplement provided in this article is adequate. It explains in detail for the reader what is not expanded in detail in the paper, including specific experimental data, hyperparameter settings, Critical Difference Analysis, etc.\", \"weaknesses\": \"1. In the literature of TSP and CVRP, it is known that conventional heuristic algorithms, such as LKH [1] and EAX [2], exhibit robust performance. It appears, however, that this submission does not address LKH and EAX, nor does it provide a comparative analysis of the proposed algorithm against these established methods.\\n\\n2. In lines 85-100 of the Introduction section, the authors describe two challenges to LLM-based HG methods, mentioning in the second challenge that these methods introduce numerous linear operations and conditional branches that make the GPU less efficient for these algorithms. In lines 113-128, the authors claim to have proposed Hercules-P in order to better address the second challenge, but I don't seem to have read in the manuscript how Hercules-P reduces linear operations and conditional branches, making GPUs more efficient in processing these algorithms. May I ask if the authors have solved this challenge? If not, these representations are inappropriate.\\n\\n3. Does the appearance of CVRP in Section 4.4 stand for Capacitated Vehicle Routing Problem? 
The authors do not explain what CVRP stands for in the body of the manuscript, and the only explanation appears in the code comments in the Appendix section (line 1337). This is unclear to the reader and hampers understanding of the manuscript.\\n\\n4. In Section 2.3, the authors mention two challenges for NCO solvers: improving generalisation capabilities and large-scale COPs performance. In Table 5, for LEHD, the performance improvement of either Hercules or Hercules-P gradually decreases as the problem size of TSP or CVRP increases. Does this mean that Hercules also fails to address the challenges faced by NCO solvers? Can Hercules still provide performance gains when the problem size is larger? Further discussion is requested from the authors.\\n\\n\\n## References\\n[1] Keld Helsgaun. General k-opt submoves for the Lin-Kernighan TSP heuristic. Mathematical Programming Computation 1(2-3): 119-163 (2009)\\n\\n[2] Yuichi Nagata, Shigenobu Kobayashi. A Powerful Genetic Algorithm Using Edge Assembly Crossover for the Traveling Salesman Problem. INFORMS Journal on Computing 25(2): 346-363 (2013)\", \"questions\": \"Please reply to my comments in \\\"Weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"This submission does not have ethics concerns.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
0fhzSFsGUT
PETRA: Parallel End-to-end Training with Reversible Architectures
[ "Stephane Rivaud", "Louis Fournier", "Thomas Pumir", "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ]
Reversible architectures have been shown to be capable of performing on par with their non-reversible counterparts, being applied in deep learning for memory savings and generative modeling. In this work, we show how reversible architectures can solve challenges in parallelizing deep model training. We introduce PETRA, a novel alternative to backpropagation for parallelizing gradient computations. PETRA facilitates effective model parallelism by enabling stages (i.e., a set of layers) to compute independently on different devices, while only needing to communicate activations and gradients between each other. By decoupling the forward and backward passes and keeping a single updated version of the parameters, the need for weight stashing is also removed. We develop a custom autograd-like training framework for PETRA, and we demonstrate its effectiveness on standard computer vision benchmarks, achieving competitive accuracies comparable to backpropagation using ResNet-18, ResNet-34, and ResNet-50 models.
[ "Model parallelism", "Delayed gradient", "Reversible architectures" ]
Accept (Spotlight)
https://openreview.net/pdf?id=0fhzSFsGUT
https://openreview.net/forum?id=0fhzSFsGUT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qCibJSzdOP", "p8Oyutl0WC", "nMco5TP85H", "jTcfCw3NK3", "jIExoxBv3v", "dVx1EpIp4W", "YCk9sxlC7g", "T5hGsFAPG5", "Q8UL1eppFY", "OXbNMmz8ZF", "MNCS4Dyc7L", "GZYAVCOJay", "AudJgzxw5X", "8RNNiqYmW6", "8RHsZ13UsX", "5qAaWQRgDQ" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730661342032, 1732937162723, 1732467890786, 1732400441327, 1733075947032, 1737523789951, 1732717334118, 1730719988772, 1734314083991, 1732400751757, 1730669107746, 1732425263798, 1732697097874, 1730713771479, 1732400144662, 1732400606876 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_ggqx" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_hEA1" ], [ "ICLR.cc/2025/Conference/Submission6763/Area_Chair_v4xE" ], [ "ICLR.cc/2025/Conference/Submission6763/Authors" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_A9EF" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6763/Area_Chair_v4xE" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_A9EF" ], [ "ICLR.cc/2025/Conference/Submission6763/Area_Chair_v4xE" ], [ "ICLR.cc/2025/Conference/Submission6763/Authors" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_md3p" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_md3p" ], [ "ICLR.cc/2025/Conference/Submission6763/Authors" ], [ "ICLR.cc/2025/Conference/Submission6763/Reviewer_hEA1" ], [ "ICLR.cc/2025/Conference/Submission6763/Authors" ], [ "ICLR.cc/2025/Conference/Submission6763/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes to utilize the concept of reversible architectures to improve parallelization in DNN training. 
A model is split into multiple stages that are trained asynchronously, i.e., in a model parallel fashion. Leveraging reversibility, the training of the different stages is effectively decoupled. This scheme offers a linear speedup in the number of stages relative to end-to-end backprop, while reducing the memory footprint. The method is evaluated using ResNets/RevNets with three different image classification benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and easy to follow. The idea of utilizing reversibility for parallelization is a nice, simple, and novel idea! Consequently, I find myself sufficiently convinced that the method works --- albeit the empirical evaluation is somewhat limited. The novelty and applicability of the method mostly outweigh my concerns about the evaluation.\", \"weaknesses\": \"My only objection to this work is the limited number of experiments. They are limited to ResNet/RevNet 18/34/50 and CIFAR10, ImageNet-32, and ImageNet. It would definitely improve the paper to have at least a few more architectures included.\", \"questions\": \"How did you partition the architectures for your experiments? How many layers/blocks in each stage? Were they all the same size? And if so, would that not bring them out of sync during training such that top layers/stages were idle a lot of the time? The size of the feature maps is decreasing in the layer index, no? Thus, the lower layers/stages would consume more memory and compute than the top ones?\", \"perhaps_you_could_add_some_information_about_this_in_the_appendix\": \"-)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I thank the authors for the thorough effort they put into their rebuttal. 
Some of my concerns were addressed, and I will update my scores accordingly.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}", "{\"comment\": \"We thank the reviewer for his dedicated time and feedback in this review process. We would like to comment on the reported weaknesses first:\\n- **Dependency on reversible architectures:** the authors acknowledge that many existing architectures may not easily be made fully reversible. However, we believe that the favorable memory consumption of PETRA should motivate further research in developing fully reversible architectures.\\n- **Increased Communication Overhead:** The communication overhead stems from the transition from a non-reversible architecture to its reversible counterpart in our paper. While this is a valid concern when dealing with very large models, we want to emphasize that the increase is by a fixed factor, and does not depend on the network architecture. Therefore, we do not believe this would be a significant limiting factor of PETRA.\\n- **Scalability Constraints with Non-Reversible Layers:** While it is true that non-reversible layers induce a memory overhead in PETRA, all other model parallel training techniques in the literature providing acceleration do require activation buffers and suffer from the same memory constraint. 
To the best of our knowledge, PETRA offers the best compromise in terms of memory consumption and acceleration.\\n\\nThe authors will now attempt to answer each of the reviewer\\u2019s questions:\\n### **How does PETRA perform on large models and more complex tasks, such as pretraining language models?**\\n\\nThe implementation used to compute the accuracy figures reported in the experiment section relied on a simulated environment to assess the numerical stability of PETRA, and did not allow us to perform efficient distributed training. Only recently did we succeed in producing an efficient distributed implementation of PETRA, from which we were able to derive runtime estimates to assess speedups for RevNets. While LLMs are not yet fully integrated into our framework for complete benchmarking, we have promising developments towards this use case with Llama 2 on OpenWebText.\\n\\nStill, LLM pre-training requires substantial engineering to be performed efficiently. Thus, we consider that such large-scale studies are out of the scope of this paper. With the resources at our disposal, we focused our experiments toward an academic-friendly setup, as we personally think that the optimization properties must be studied in detail as we scale up the complexity of the task.\\n\\n[Edit]: One related concern, which another reviewer has raised explicitly, is the extent to which the depth, or more precisely the number of stages (since it is related to the induced delay), affects the scalability potential of PETRA. To this end, we have performed experiments in Appendix B to ablate the effect of depth on gradient quality compared to backpropagation, and we have included them in the appendix of our current paper.\\n\\n### **Is the reversible architecture necessary for PETRA?**\\n\\nPETRA does necessitate a reversible architecture. However, reversible architectures can be derived from many existing architectures, with similar optimization properties as shown in the literature; see [1, 2]. 
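To make the coupling structure concrete, such reversible architectures typically rely on RevNet-style coupling blocks. Below is a minimal NumPy sketch, for illustration only and not our actual implementation; the functions F and G stand in for arbitrary sub-networks:

```python
import numpy as np

# Illustrative stand-ins for arbitrary sub-networks inside a coupling block.
def F(x):
    return 0.5 * np.tanh(x)

def G(x):
    return 0.5 * np.sin(x)

def forward(x1, x2):
    # y1 = x1 + F(x2), y2 = x2 + G(y1): invertible by construction,
    # so x1 and x2 need not be stored for the backward pass.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact reconstruction of the inputs from the outputs.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 8))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

This exact input reconstruction is the property that lets the reversible stages avoid activation buffers.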
PETRA aims to democratize the development of reversible architectures by providing acceleration with limited memory overhead when scaling model size.\\n\\n### **For models that integrate both reversible and non-reversible layers, how does PETRA manage memory savings and efficiency, and could these hybrid architectures affect its scalability benefits?**\\n\\nThe presented paper proposes an implementation of PETRA able to handle non-reversible layers by using an activation buffer between the forward and backward pass. This induces a memory overhead similar to any other delayed gradient approach in the literature, which is quantified theoretically in the storage column of Table 1. Resorting to buffers is necessary for our experiments, which use the reversible counterparts of ResNets: these include non-shape-preserving layers that cannot be made invertible canonically, as presented in the paper. We tried to specifically deal with the memory issue of non-reversible layers by deferring the buffer to the CPU memory. However, we did not succeed in obtaining an efficient implementation of this mechanism, and left this aspect for further optimization. This would require advanced profiling tools that we are trying to integrate within our code base. As a workaround, we also experimented with quantization as a way to decrease buffer size with promising results, but did not fully investigate the impact of such numerical approximation. We will add an experiment in the appendix quantifying the impact of buffer quantization to a lower precision on final model performances and report corresponding memory usage.\\n\\n[1] - Jacobsen, J. H., Smeulders, A., & Oyallon, E. (2018). i-RevNet: Deep invertible networks. ICLR 2018.\\n\\n[2] - Kitaev, N., Kaiser, \\u0141., & Levskaya, A. (2020). Reformer: The efficient transformer. ICLR 2020.\"}", "{\"comment\": \"I thank the authors for their rebuttal, which addressed my concerns. 
I am updating my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe wanted to let you know that the discussion period has been extended to December 2nd. If you haven't had the opportunity yet, we kindly encourage you to read the rebuttal as soon as possible and verify whether your questions and comments have been fully addressed.\\n\\nWe sincerely appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest,\\n\\nAC\"}", "{\"summary\": \"The authors propose a new algorithm for training reversible models. Compared to backpropagation, it can be run on each layer in parallel and with a reduced memory cost. They show empirically the advantages of their algorithm on RevNet models for image classification.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well written, clear, and has helpful illustrations.\", \"The algorithm seems simple, natural and intuitive.\", \"While the algorithm relies on reversible layers, it can still be mixed with standard non-reversible layers, for which a standard backpropagation is performed.\", \"The authors validate their algorithm with thorough experiments and analyses.\"], \"weaknesses\": \"1. Invertible networks are currently not very used. This limits the direct applications of the algorithm. However, I am aware that PETRA could motivate the use of such architectures.\\n2. The experiments are only performed on RevNet models for image classification. As mentioned in the conclusion, it would be very nice to see experiments on more tasks and models. Indeed, as PETRA is applicable to only a subset of models (reversible models), it is frustrating to only see experiments on a single architecture.\\n3. Lines 509-510: I think you meant RevNet instead of ResNet.\", \"questions\": \"- How is the approximated gradient influenced by the depth of the model? 
I would expect the error to increase as the model gets deeper.\\n\\nI find the paper very interesting and am ready to increase my grade should my remarks be addressed by the authors.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors propose a new algorithm for training reversible models. Compared to backpropagation, it can be run on each layer in parallel and with a reduced memory cost. They show empirically the advantages of their algorithm on RevNet models for image classification.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers agree this is a good paper and it should be accepted.\"}", "{\"comment\": \"We thank the reviewer for his enthusiastic review and would like to address his questions.\\n\\n### **How did you partition the architectures for your experiments? How many layers/blocks in each stage?**\\n\\nWe partitioned the architectures at the residual block level, as standardly done for RevNets to obtain a fine-grained partitioning. The first convolutional block and the final projection each count as a separate stage; each residual block then also counts as a stage.\\n\\n\\n### **Were they all the same size? And if so, would that not bring them out of sync during training such that top layers/stages were idle a lot of the time?**\\n\\nWe did not fully investigate idling time caused by the non-uniform distribution of FLOPs across stages. In this paper, we wanted to focus on the optimization properties of PETRA and the effect of crucial parameters. Therefore, we left the topic of fully optimizing resource efficiency for future work, which requires additional development efforts for automation. We also found it difficult to obtain a reasonable balance by hand, each configuration being hardware-specific. Ideally, we\\u2019d like every worker to have the same FLOPs to fully exploit the parallelization potential. 
We plan to investigate this load balancing problem on transformers, which maintain constant dimensionality and FLOPs across depth, in future work.\\n\\n\\n### **The size of the feature maps is decreasing in the layer index, no? Thus, the lower layers/stages would consume more memory and compute than the top ones?**\\n\\nIn order to properly answer this question, we need to separate compute from memory.\\n- As the size of the feature maps is decreasing with respect to layer index, early layers consume more memory than the top ones.\\n- The ResNet architectures are designed to approximately keep a constant number of FLOPs across layers. The non-reversible layers have more FLOPs than the other ones due to the extra downsampling operation, and the FLOPs of the different downsampling operations are not equal. The residual blocks that do not employ downsampling keep approximately the same number of FLOPs, which is around 28 Giga-FLOPs for a ResNet-50.\"}", "{\"summary\": \"The paper proposes to perform model parallel training using reversible architectures. Compared to delayed gradients, the proposed method is more memory efficient since it does not need to stash weights. It is shown that on shallower architectures the performance is slightly better than regular backprop, and on deeper architectures such as ResNet-50 there is a slight but not significant drop. Overall, the work is likely to have a big impact as a way to scale up model parallel training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper demonstrated that activation reconstruction can work well with out-of-sync backward weights, and the reconstructed activations can be used to update weights.\", \"The paper has shown real computation and memory savings.\"], \"weaknesses\": [\"It would be nice to see at what scale the method starts to break down (say when there is more and more delay in reconstruction). 
And show a plot on reconstruction error and final performance as a function of the number of delay steps. The model depth can be another variable to explore, aside from the few standard model architectures, perhaps sweeping a wider range of depths.\", \"Algorithm 1 is a little hard to process.\", \"The method relies on gradient accumulation to fully match with the baseline. It is unclear to me how gradient accumulation would have any impact when a large batch / data parallel is employed. This may not be a concern for LLMs, but for ImageNet and SSL training, many use very large batch sizes.\"], \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. I look forward to your updated results.\"}", "{\"comment\": \"We would like to thank the reviewer once again for their suggestion and take this opportunity to address a few points.\\n\\n## Invertible networks are currently not very used. This limits the direct applications of the algorithm.\\nInvertible networks are currently not widely used, but they have been shown to be extendable to all major architectures without loss of accuracy; see, for example, [1], [2], [3], and [4]. Each of these studies highlights that invertibility is an intrinsic property of the model that does not degrade performance. On the contrary, it enables reversibility through simple adaptations of the model, often resulting in improvements, computational savings, or enhanced interpretability.\\n\\n## The experiments are only performed on RevNet models for image classification.\\nThe experiments in our work are conducted solely on RevNet models. We understand the reviewers' concerns; this paper serves primarily as a proof of concept, demonstrating that PETRA represents a promising research direction. 
In Appendix B, we have included additional experiments analyzing gradient approximation quality, emphasizing the potential for this work to extend beyond its initial scope, which we discuss below.\\n\\n## How is the approximated gradient influenced by the depth of the model?\\nWe have provided a detailed analysis of this in Appendix B. It demonstrates that, when comparing PETRA during training with standard backpropagation and delayed gradients (as used in [5]), PETRA produces consistent gradients. Notably, deeper layers exhibit gradients that are closer to the true end-to-end gradients, whereas delays introduce discrepancies. However, our experiments in Section 4 show that these discrepancies do not negatively impact training.\\n\\n**References**\\n\\n[1] - Kitaev, N., Kaiser, \\u0141., & Levskaya, A. (2020). Reformer: The efficient transformer. ICLR 2020.\\n\\n[2] - Gomez, A. N., Ren, M., Urtasun, R., & Grosse, R. B. (2017). The reversible residual network: Backpropagation without storing activations. NeurIPS 2017.\\n\\n[3] - Behrmann, J., Grathwohl, W., Chen, R. T., Duvenaud, D., & Jacobsen, J. H. (2019). Invertible residual networks. ICML 2019.\\n\\n[4] - Jacobsen, J. H., Smeulders, A., & Oyallon, E. (2018). i-RevNet: Deep invertible networks. ICLR 2018.\\n\\n[5] - Zhuang, H., Wang, Y., Liu, Q., Zhang, S., & Lin, Z. (2021). Fully Decoupled Neural Network Learning Using Delayed Gradients. IEEE Transactions on Neural Networks and Learning Systems.\"}", "{\"summary\": \"PETRA is a model-parallel training method for reversible neural networks that decouples forward and backward passes, eliminating the need for activation or parameter buffers. This enables efficient parallel computation across devices with reduced memory overhead. 
PETRA matches backpropagation in accuracy on datasets like CIFAR-10 and ImageNet, while also achieving notable speed and memory savings, making it a potential alternative for large-model training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The PETRA paper presents a new alternative for large-scale neural network training, offering efficient parallelization by decoupling forward and backward passes, which enables stages to compute independently across devices. Utilizing reversible architectures, PETRA removes the need for activation and parameter storage, achieving up to 54.3% memory savings, making it especially valuable for training large models. It demonstrates accuracy comparable with backpropagation on datasets like CIFAR-10 and ImageNet.\", \"weaknesses\": \"Dependency on Reversible Architectures: The approach is designed specifically for reversible architectures, which may limit its application to models that can be easily adapted to this structure. Non-reversible architectures, such as standard ResNets or some types of transformers, may not benefit as fully from PETRA\\u2019s memory and efficiency gains.\", \"increased_communication_overhead\": \"While PETRA reduces memory usage, its reversible stages require additional communication overhead during the backward pass, which could affect scalability on very large, distributed systems.\", \"scalability_constraints_with_non_reversible_layers\": \"Although PETRA performs well on reversible architectures, any non-reversible stages still require stored activations, potentially increasing memory use and complicating scalability for models that include such layers.\", \"questions\": \"How does PETRA perform on large models and more complex tasks, such as pretraining language models? The experiments in the paper are weak. The scalability of PETRA cannot be verified by the current empirical results. 
Experiments on distributed pretraining for LLMs are necessary to validate the efficiency of PETRA, for example experiments on the Pile dataset with varying model sizes.\\n\\nIs the reversible architecture necessary for PETRA? For models that integrate both reversible and non-reversible layers, how does PETRA manage memory savings and efficiency, and could these hybrid architectures affect its scalability benefits?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Experiments for further insights\", \"comment\": \"We thank the reviewer for his feedback and would like to add some clarifications regarding his concerns. We highly appreciate that the reviewer acknowledges the scaling potential of reversible architectures within current memory constraints, given suitable training procedures. We also acknowledge that experiments on other architectures would be very valuable, and are actively working on integrating transformers for both computer vision and NLP tasks. We also thank the reviewer for pointing out the typo at lines 509-510.\\n\\nWe are currently running experiments to track metrics quantifying approximation quality throughout training. We expect to report them within the rebuttal period, but are still waiting for the results so that we can support our answer with empirical measurements. We are also running experiments with a fully reversible convolutional architecture where we are able to increase depth without memory overhead; although the architecture is intended to be a toy example, this will give some insights into PETRA\\u2019s performance on non-standard architectures. 
We will come back to the reviewer once all results have been gathered.\"}", "{\"comment\": \"We thank the reviewer for his effort in this reviewing process, and would like to comment on the weaknesses reported.\\n\\n### **It would be nice to see at what scale the method starts to break down (say when there is more and more delay in reconstruction). And show a plot on reconstruction error and final performance as a function of the number of delay steps. The model depth can be another variable to explore, aside from the few standard model architectures, perhaps sweeping a wider range of depths.**\\n\\nAs the reviewer correctly noticed, the delay $\\\\tau$ is the main source of numerical instabilities. Therefore, we chose to explore the largest setting up to a RevNet50, by partitioning the model at the residual block level. A coarser partitioning would decrease the delay and make the problem easier in our case. \\n\\nTo address the concern of the reviewer, we designed a simple fully-reversible convolutional architecture by stacking reversible residual blocks. We train such architectures with PETRA while increasing the depth of the network and splitting the architecture at the residual block level, thus effectively increasing the delay during training. We will report the results in the rebuttal if they arrive in the following days, and include an appendix with the tables and plots suggested by the reviewer.\\n\\n\\n### **Algorithm 1 is a little hard to process.**\\n\\nWe apologize for this. We deliberately chose a precise formulation of the algorithm to avoid any misinterpretation of the training process.\\n\\n\\n### **The method relies on gradient accumulation to fully match with the baseline. It is unclear to me how gradient accumulation would have any impact when a large batch / data parallel is employed. 
This may not be a concern for LLMs, but for ImageNet and SSL training, many use very large batch sizes.**\\n\\nWe did investigate the effect of large batch sizes on ImageNet training. The available training recipes do not scale to effective batch sizes larger than 2048 without performance degradation; deterioration starts at 4096 in our experiments. Since LLMs are trained with significantly larger batch sizes, gradient accumulation would not be an issue in this scenario. It is indeed often used to handle significant model sizes when the maximum supported batch size on a given hardware configuration is too restricted. We however note that the training recipes allowing us to scale synchronous SGD to large batch sizes are also effective with PETRA according to our experiments.\"}" ] }
0fXcrl35V0
Second-order finite-time and fixed-time systems for sparse recovery and dynamic sparse recovery
[ "Yihua Huang" ]
In the rapidly advancing field of healthcare, efficient processing of sparse data is essential for applications such as medical imaging and personalized medicine. This paper introduces innovative second-order finite-time and fixed-time systems tailored for sparse recovery in healthcare data, incorporating control laws into the second-order derivative. We validate the stability and convergence of these systems within finite and fixed times using the Lyapunov method. Furthermore, we examine the tracking performance and assess both practical finite-time and fixed-time convergence. The effectiveness of our systems is highlighted through comparative analyses with existing methods, with numerical experiments demonstrating superior accuracy and dynamic tracking capabilities of sparse biomedical signals.
[ "frame of defining penalty functions", "noise", "accelerated distributed generalized reweighted noise filtering consensus algorithm", "accelerated distributed robust generalized reweighted denoise consensus algorithm", "$l_p$-norm minimization", "multi-target tracking" ]
https://openreview.net/pdf?id=0fXcrl35V0
https://openreview.net/forum?id=0fXcrl35V0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "JX0KYppxc5" ], "note_type": [ "comment" ], "note_created": [ 1727880077889 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7723/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0fJfVOSUra
ThunderKittens: Simple, Fast, and $\textit{Adorable}$ Kernels
[ "Benjamin Frederick Spector", "Simran Arora", "Aaryan Singhal", "Arjun Parthasarathy", "Daniel Y Fu", "Christopher Re" ]
The challenge of mapping AI architectures to GPU hardware is creating a critical bottleneck in AI progress. Despite substantial efforts, hand-written custom kernels fail to meet their theoretical performance thresholds, even on well-established operations like linear attention. The diverse capabilities of GPUs suggest that we might need a wide variety of techniques to achieve high performance. However, our work explores whether a small number of key abstractions can drastically simplify the process. We present ThunderKittens (TK), a framework for writing performant AI kernels while remaining easy to use. Our abstractions map to the three levels of the GPU hierarchy: (1) at the warp level, we provide 16x16 matrix tiles as basic data structures and PyTorch-like operations, (2) at the thread-block level, we provide templates for asynchronously overlapping operations, and (3) at the grid level, TK helps hide block launch, tear-down, and memory costs. We show the value of TK by providing simple & diverse kernels that match or outperform prior art. We match CuBLAS and FlashAttention-3 on GEMM and attention inference performance and outperform the strongest baselines by $10-40$\% on attention backwards, $8\times$ on state space models, and $14\times$ on linear attention.
[ "Systems", "Kernels", "Efficiency", "Efficient Models", "IO Awareness", "GPUs" ]
Accept (Spotlight)
https://openreview.net/pdf?id=0fJfVOSUra
https://openreview.net/forum?id=0fJfVOSUra
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwjUEECK4e", "yCW0mMWNu7", "tuyzcPcWux", "sUiVk6sFoM", "rx8VtsHol3", "pWJTtQTpEL", "oXJzc2e2RT", "mpEuAZNR52", "lWMAnryuEJ", "kK1epCZp0Y", "jhvjLZsUIu", "iOBcibUeJ4", "hU2lNy6Zpk", "h91FyU3AHs", "e5jt5BEgRN", "b3ms3b27aO", "XrGj1YkI1z", "VZU3NDQytd", "Tl47Pt4DbS", "T1Nt0ZMchc", "R1mEaXgIHt", "PdLlKqPdIp", "GdYdU7fGMB", "BoZ4IuQGE5", "4pvmZekjHk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732257731849, 1732564604962, 1732564638358, 1732859703810, 1730451752577, 1730403686237, 1732335793837, 1737523986373, 1732859132883, 1732307782352, 1732595374376, 1730711144068, 1732254249605, 1732257387253, 1732256084692, 1732685161556, 1734265955508, 1732256506960, 1732257111799, 1733007244844, 1730353355693, 1732255720913, 1732255344701, 1732256738165, 1732685086889 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_nd8y" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_ANEJ" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_ANEJ" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_kgD5" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_kgD5" ], [ 
"ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_nd8y" ], [ "ICLR.cc/2025/Conference/Submission9493/Area_Chair_2ZdH" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_nd8y" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_5GXe" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Authors" ], [ "ICLR.cc/2025/Conference/Submission9493/Reviewer_nd8y" ] ], "structured_content_str": [ "{\"title\": \"Response to review [continued]\", \"comment\": \"## Addressing weakness 3. TK\\u2019s performance at scale and in parallelized scenarios.\\nWe appreciate this suggestion. While multi-GPU parallelization is indeed important for large-scale training, our current focus is on optimizing single-GPU kernel performance. Users of our framework can currently handle multi-GPU communication through established libraries like NCCL, which provides highly optimized primitives for inter-GPU collective operations.\\n\\nWe agree that extending the producer-consumer model to multi-GPU contexts is an interesting direction. We are excited to explore such directions in future work\\u2013navigating both the interaction between our kernel scheduling and inter-GPU communication patterns, as well as how to balance computation and communication overhead across different GPU topologies.\\n\\n## Addressing additional questions \\n\\n**Balancing memory overheads and computational efficiency:**\", \"we_provide_a_number_of_opportunities_to_help_the_user_balance_memory_and_compute_efficiency\": \"1. *Tile sizes and managed memory.* Kernels operate on data in small tiles, due to the limited amount of fast shared and register memory. 
Developers generally need to tune the tile size to balance the memory use and compute per tile. TK templates all library functions and memory layouts around the tile size, making it easier for users to tune the sizes without requiring significant kernel rewriting. \\n2. *Tuning pipelines and occupancy.* More pipeline stages and higher occupancy increase the kernel\\u2019s memory demand, but can increase hardware utilization. We allow users to tune the number of pipeline stages and the occupancy level by modifying a single number. \\n\\n**Debugging process for TK:** The TK development and debugging process includes the following:\\n\\n1. *Debugging correctness.* First, users write a test case in Python for the kernel that they want, which is used to test kernel correctness. To help users, we provide several examples in our repository for how to hook these Python test cases into the kernel development process. \\n2. *Debugging compilation errors.* For each library function, we consistently provide static checks for the inputs and outputs that the user passes in. These informative checks tell the user where their mistakes occur. We discuss these checks in lines 245-248 of our original submission. \\n3. *Debugging performance.* Once we have a correct, compiled kernel, we typically use NVIDIA Nsight Compute, a profiling tool that helps users understand where performance bottlenecks occur. \\n\\n\\nWe hope our response has addressed your remaining questions. Please let us know if we can provide anything else that would be helpful!\"}", "{\"title\": \"Following up\", \"comment\": \"Dear Reviewer nd8y,\\n\\nThank you again for your review! We are wondering if our response has addressed your concerns? Please let us know if we can provide anything more and we look forward to your response!\"}", "{\"title\": \"Following up\", \"comment\": \"Dear Reviewer 5GXe,\\n\\nThank you again for your review! 
We are wondering if our response has addressed your concerns? Please let us know if we can provide anything more and we look forward to your response!\"}", "{\"title\": \"Response to review (2/2)\", \"comment\": \"### Technical Details\\n\\nThe TC:ALU ratio comes from both the datasheet and our own microbenchmarks. Regarding the datasheet, https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet affirms that for an H100 SXM, there are 1979 TFLOPs of sparse BF16 TC compute, which corresponds to 989 TFLOPs of dense BF16 compute (see more here https://developer.nvidia.com/blog/structured-sparsity-in-the-nvidia-ampere-architecture-and-applications-in-search-engines/). Additionally, there are 67 TFLOPs of FP32 ALU compute. Thus the ratio is 989/67 = 14.8x. Separately, here is a simple benchmark to affirm that BF16 does not run at double-rate relative to FP32, even though NVIDIA\\u2019s Hopper Architecture In Depth documentation (https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/) suggests that it would (with the caveat that NVIDIA admits those numbers are preliminary): https://drive.google.com/file/d/1QaUgHjTpt9Ljb4_3yoniZ6AOx0p8kQKC/view?usp=sharing\\n\\nRegarding the discussion of register spilling: first, we apologize, we thought we had updated that text to mention further spills up the cache hierarchy (though, as you note, this only understates our point more modestly) -- it must have gotten lost somewhere. We\\u2019ll be sure to put that back in. However, it\\u2019s worth noting that it\\u2019s also somewhat inaccurate to describe this as spilling to global memory, since the PTX model is explicit that the local address space is distinct from the global address space. 
Part of the reason for our elision in this discussion was that we felt that a full treatment of the half-dozen address spaces present in NVIDIA\\u2019s model was well beyond an appropriate introduction of GPUs to an AI audience.\\n\\nBelieve it or not, the operations of min/max and FMA are not computed on the same units. On an H100, FMA runs through the FMA pipeline (which is actually composed of two sub-pipelines, FMA heavy and FMA lite), but floating point min/max are actually computed on the ALU pipeline. This can be seen with a simple dummy kernel that runs only min/max or only FMA\\u2019s, and profiling in ncu. It\\u2019s further confirmed by NVIDIA\\u2019s kernel profiling guide, https://docs.nvidia.com/nsight-compute/ProfilingGuide/index.html. We bring this up in the paper specifically because it can be counterintuitive, but understanding which operations can and cannot be overlapped can meaningfully impact kernel performance.\\n\\nThroughput differing across pipelines is indeed accurate; this is not mere resource contention, and operations do not all run at one operation per clock per data lane. The clearest documentation of this can be found within NVIDIA\\u2019s CUDA C++ Programming Guide at https://docs.nvidia.com/cuda/cuda-c-programming-guide/#arithmetic-instructions. To illustrate: from that chart, one can see that FP32 FMA runs at 8x the rate of FP32 transcendental functions (e.g. expf), and as an FMA is generally counted as two operations, it runs at a full 16x the FLOPs. This is a particularly extreme example, and it aligns with our own microbenchmarks.\\n\\nFixed instruction latency is actually a distinct modality of thread stall from resource contention. Generally speaking, resource contention is observed as a \\u201cpipe throttle\\u201d stall of some kind, depending on the operation. 
For example, contention on load units is observed as a \\u201cload global throttle\\u201d, contention on transcendental functions usually appears as an \\u201cMIO throttle\\u201d, and contention on ALU resources appears as a \\u201cmath pipe throttle\\u201d. In contrast, a fixed instruction latency (for example, an FMA needs to return a result to a register before it can be used in a store to global memory) is observed as a \\u201cstall wait:\\u201d. Different workloads can consume identical resources and yet have different performance, depending on the degree to which instruction-level parallelism allows threads to issue new instructions before previous instructions have finished.\\n\\nRegarding the instruction cache: we added a brief discussion of it, per your request. However, although it is indeed possible to construct kernels which suffer from terrible instruction cache misses, we have not found this to be a significant effect in any of our kernels, whereas register spilling is a frequent, important, and tricky problem to address. Accordingly, we do feel it\\u2019s important to maintain the focus of the background on the key problems we\\u2019ve observed.\"}", "{\"summary\": \"This paper proposes a framework to facilitate easy writing of efficient CUDA kernels. The authors leverage the asynchronous compute capabilities of the Hopper series GPUs by following a producer-consumer paradigm to efficiently overlap different kernel operations. Additionally, the authors investigate the impact of various memory ordering strategies, demonstrating that relatively simple strided patterns offer the best tradeoffs. Lastly, the authors demonstrate performance that is comparable to or exceeds existing methods, including Triton.\\n\\nOverall, the work provides a significant contribution to improving computational efficiency for common operations, though the application appears limited in scope. Additionally, minor technical and structural errors impact readability. 
These issues could be addressed in a revision, at which point I would be inclined to raise my score.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors demonstrate significant improvements to computational efficiency within a clearly defined framework that appears relatively straightforward to adapt. Their framework also provides functionality for more complex resource management, which is often challenging to manage directly in CUDA. Additionally, the authors demonstrate the impact of varying hyperparameters for several key kernel operations, most of which match or exceed standard baselines. Lastly, the results show a surprising contrast with Triton implementations, positioning their approach within the CUDA domain while achieving a similar level of complexity to Triton.\", \"weaknesses\": [\"The application appears limited in scope, which should be explicitly addressed. For example, is the framework limited to Hopper GPUs and above? And the focus on 16x16 register blocks may limit extensibility to other common cases such as GEMV and sparse computations.\", \"The paper contains many issues with presentation, including caption errors, grammatical and awkward wording, and typos, all of which impair readability.\", \"The paper overlooks relevant computer architecture literature regarding performance modeling, specifically in the context of balancing compute and memory (e.g. roofline analysis). Many of the findings presented in the paper are expected from the existing literature.\"], \"questions\": \"1)\\tIs your framework limited to the Hopper series? Can it be applied to A100s, or other GPUs such as the A40/L40?\\n2)\\tYou focus on the 16x16 register block level, but how can your framework be extended to smaller blocks, such as with GEMV, sparse operations, and masked operations (e.g. 
non-power-of-two dimensions and strided masking, such as in Natten).\\n3)\\tThroughout the paper, you focus on BF16 precision (with the exception of softmax); have you considered other data types, such as integer types or floating-point formats like FP8?\\n4)\\tHow could your framework be extended to handle multi-GPU operations, such as Fully Sharded Data Parallel (FSDP) for split operations? This seems like a natural extension of the producer-consumer model.\\n5)\\tYou compare yourself against Triton, which also supports AMD GPUs. Can you address this as a potential tradeoff in the paper? Alternatively, if your framework can be trivially extended to ROCm, this should be included in the paper with a demonstration, otherwise it represents a tradeoff between efficiency and portability.\\n6)\\tYour cost model in Section 2.2 is effectively a Roofline model; could you contextualize this in the existing literature? The results in Table 3 are expected, as reordering increases the arithmetic intensity (FLOPs/Byte) of the inner loops.\\n7)\\tThroughout the paper, the emphasis on industry versus academic adoption (including the use by undergraduates) feels extraneous and detracts from the main narrative. The paper\\u2019s contributions should stand on their own without reliance on external endorsements or applications.\\n8)\\tFigures 2 and 5 present a simplified sketch for softmax, whereas the true implementation is significantly more complex, potentially leading to a misleading comparison with PyTorch. Furthermore, Figure 2 led me to question why you are using C at all for the API, when the listing could easily have been captured by a python trace (e.g. Triton). This design choice is only clarified upon reviewing the implementation details provided in the appendix and supplementary material.\\n\\nTo build on these questions, the feedback below addresses specific technical details and aims to enhance overall clarity. 
While this paper presents a strong contribution toward improving kernel efficiency, addressing these points will better showcase the authors\\u2019 contributions.\", \"minor_technical_errors\": [\"044: The H100 datasheet shows a 7.4x ratio between TCs and ALUs, not 16x. Additionally, my understanding is that the TCs necessarily require bubbles as the Register path cannot keep up with the TC I/O for full throughput.\", \"136: This should be \\\"can load\\\" or \\\"may load\\\" instead of \\\"loads.\\\" In general, a kernel does not necessarily need to load data from memory. Kernels can rely solely on arguments (loaded into registers at startup) to generate new data. For example, a kernel might generate a pseudo-random noise tensor without accessing memory.\", \"148: The 32 threads must be within the same quadrant, where \\u201cconsecutive\\u201d or \\u201cadjacent\\u201d would be more appropriate than \\u201cnearby\\u201d.\", \"150: In Ampere, a warp cannot simultaneously occupy different functional units, though separate warps can. For accuracy, please verify this claim against the Hopper documentation or micro-benchmarking paper, otherwise consider omitting if verification is unavailable.\", \"167: Excess registers spill over into Global Memory, not L1. They can appear in L1 due to the memory hierarchy, but this is at the discretion of the hardware cache manager.\", \"171: Multiple thread blocks can only schedule on the same SM if there is sufficient space (e.g. SMem), otherwise they would clobber each other.\", \"173: This statement should be more precise to mention \\u201call thread blocks\\u201d and that the L2 is hardware managed, making it distinct from the software managed SMem.\", \"179: The tail-effect cost mentioned only applies to singular kernels. 
Ideally the GPU should have multiple kernels in flight, which can run concurrently.\", \"It would also be relevant to mention that kernels which contain too many instructions can cause slowdown as they will incur ICache misses.\"], \"presentation_issues\": [\"The abstract should be revised for clarity, with suggested improvements like \\u201ccreates a\\u201d, \\u201csuggest that\\u201d, and \\u201cresembling PyTorch.\\u201d\", \"The paper could benefit from clarity revisions in several sections, where phrasing and word choice could make technical details easier to follow. Lines: 073, 170, 178, 205, 278, 299, 301, 328, 370, 397\", \"325: You should not use \\\"[1]\\\" and \\\"[2]\\\" to enumerate concepts as they are easily confused with reference indicators.\", \"Table 2 and Table 3 should probably be Figures like Figure 6. It is also unclear why these stop at 4 stages, K=1024, and what K is. (MxN)x(NxK)?\", \"Figure 7 and 8 should use subfig captions rather than plot titles. If parameters are common among subfigures, then they should be stated in the figure caption, otherwise in the subfig caption. The fontsize for the axis and labels is too small. Finally, the batch size does not match with the titles and caption.\", \"The table in Section 4.2 is missing a caption and column (TK is listed twice).\", \"The reference links are broken in Appendix B.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents ThunderKittens (TK), a C++ embedded library for writing high-performance CUDA kernels for NVIDIA GPUs. It introduces warp-, thread-block-, and grid-level abstractions to facilitate mapping of kernels to the GPU hierarchy. 
Experimental results indicate that TK can outperform strong industrial baselines, achieving superior performance for GEMM and attention kernels.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The TK library provides a useful abstraction for writing high-performance asynchronous kernels on GPU.\\n2. The presentation is clear and accessible, especially the introductory sections on GPU architecture, which provide a helpful overview for ML researchers who may lack in-depth experience with GPU programming.\\n3. The experimental results are compelling, showing performance on par or better than highly optimized kernels, such as FlashAttention3. The paper also demonstrates significant speedups across different kernel types compared to state-of-the-art frameworks like Triton and PyTorch.\", \"weaknesses\": \"1. The TK library is still too low-level with too many details, which requires users to manage synchronization carefully and does not simplify the programming burden.\\n2. The novelty and advantages of TK over CUTLASS are unclear. Many functionalities seem achievable with CUTLASS as well. The authors mention that TK addresses bank conflicts, but the evidence presented is minimal. There appear to be no inherent limitations in CUTLASS that would prevent it from avoiding bank conflicts.\\n3. Similarly, the benefits of TK over Triton are not well established. Triton, embedded in Python with a PyTorch-like API, may offer a more accessible interface. By contrast, TK, embedded in C++, still requires explicit handling of communication with mbarrier operations like expect and arrive. No user study or lines of code comparisons are provided to demonstrate that TK improves programmer productivity.\\n4. Experimental results are good, but still missing comparisons in some important cases like quantized kernels and causal attention.\\n5. 
The work reads more like a system paper, with limited ML-focused insights, raising questions about its fit for ICLR.\", \"minor\": [\"P4: \\\"Since the frameworks are not C++ embedded, it can be challenging to use specialized hardware instructions\\\" This statement is inaccurate; TVM provides mechanisms to incorporate low-level TensorCore instructions, and Triton also has [inline](https://triton-lang.org/main/python-api/triton.language.html#inline-assembly) operation to include PTX code.\", \"Section 2 does not discuss the Tensor Memory Accelerator (TMA) on Hopper, which is essential for asynchronous optimizations mentioned in Contribution 2.\", \"Appendix B labels appear broken (??).\"], \"questions\": \"1. What are the fundamental challenges preventing CUTLASS from avoiding bank conflicts? Could it be that the FlashAttention3 kernel simply did not select the optimal layout?\\n2. CUTLASS has implemented both ping-pong and cooperative kernel variants for GEMM, with varying performance across different scenarios. How does TK support ping-pong and cooperative kernels, and could you include a comparison with CUTLASS in Figure 7\\u2019s GEMM kernel results?\\n3. TK appears designed specifically for the Hopper architecture with asynchronous features. Is it also compatible with Ampere or other GPU generations? How does TK\\u2019s performance on an A100 compare to Triton?\\n4. Following Q3, if Blackwell GPUs were released, would TK\\u2019s abstractions remain applicable? How do you plan to ensure extensibility across GPU generations?\\n5. What's the usage of the cost model in Section 2.2? This formula is highly simplified and does not guide any optimization or automatic search later.\\n6. Section 3.1 discusses various layouts \\u2014 do users need to manually manage data organization and specify layouts in TK?\\n7. Figure 5 is just some wrappers of mbarriers. Any insights here?\\n8. 
Can TK effectively handle quantized kernels, where data layout is crucial for efficient transfers from TMA and WGMMA computation? How does it perform on FP8 GEMM and FlashAttention kernels?\\n9. What is TK's performance on causal attention kernels?\\n10. Please provide detailed experimental configurations in the Appendix. For example, which versions of PyTorch and Triton were used? Was `torch.compile` employed to optimize those network layers? For cuBLAS, was the latest [cuBLASLt](https://developer.nvidia.com/blog/introducing-grouped-gemm-apis-in-cublas-and-more-performance-updates/) autotuning enabled? Since PyTorch also uses Triton as a backend, what distinguishes the two baselines in Figure 8?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you again for your review and help in improving the paper! We appreciate the score update and will be sure to fix these additional points.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to review (1/2)\", \"comment\": \"Thank you very much for your response! We\\u2019ve done our best to address your questions below, and will also take them into full account for our next revision.\\n\\n### Demonstrating Broader Scope\\n\\nWe have addressed your comments on scope below. We are grateful that you appreciated the advantage of a consistent framework, including: TK is running on multiple AI workloads and hardware platforms, competing with the best baselines on each of those settings--with the same abstractions. Each of TK\\u2019s abstractions can of course be further optimized if the need arises \\u2013 as a C++ embedded framework, the user can incorporate the full power of CUDA when using TK. 
However, we demonstrate that with few abstractions, TK already outperforms major projects from prior work \u2013 there are full research projects for each individual baseline kernel we compare to (e.g., flash attention). \n\n**FA2 Performance**: As you note, we do find about a 5% performance difference on a 4090 relative to FlashAttention-2. We do know the source of the issue, which has to do with shared memory. In the case of the 4090 kernel, which does not have the benefit of TMA acceleration to compute addresses and load data into tensor cores, the kernel consumes additional resources computing these shared memory addresses. We do not believe this is a fundamental limitation and can address this for the camera ready. Specifically, the additional cost we incur could be amortized through TK internally caching certain address offsets in a few extra registers; we were not able to implement this in time for rebuttals. But in any case, we are excited that our na\u00efve port of the TK primitives transfers with a relatively minor performance penalty, even within the rebuttal timeframe. \n\n**FA3 Performance**: In the case of FA3, the reason is actually not related to 16x16 blocks, but instead an algorithmic difference. The kernel we implement in ThunderKittens is not FA3's algorithm, but rather a modified version of FA2, updated for the H100 GPU. In our causal forward attention kernel pass, we therefore pay an additional cost from the fact that our kernel launches three consumer warpgroups instead of two, which means more threads sit idle at the end of the kernel. We do this in order to show that most of the performance improvements of FA3 actually come from a relatively straightforward update into the Hopper architecture, without complex ping-pong scheduling. 
This is core to our research thesis that a small number of primitives can go a long way for AI kernels.\n\n**4090 and A100**: We actually focused on adaptation to the 4090 because (a) we have 4090 GPUs more easily available to us, and (b) we felt it was more different from the H100 in terms of performance characteristics. We are happy to include A100 for the camera ready. We hypothesize that the A100 will face a similar address generation cost to the 4090, though as we wrote above, we do believe this gap could be closed without breaking any tile abstractions or even adjusting any memory layouts.\n\n**GEMV**: Regarding GEMV, we have not yet optimized its performance, as even in non-batched LLM inference, techniques such as speculative decoding are now ubiquitous in production scenarios. And indeed: the 16x cost penalty of padding is about the same as one incurs simply by not using the tensor cores, so one might as well put something in those columns. (Furthermore: it is probably still preferable to use the tensor cores in most such scenarios in order to save instruction issue slots, so that other pipelines can be used in parallel.) But there is no reason whatsoever that our framework could not support an ALU-based GEMV; it would just be another primitive like mma or row_max or whatnot.\n\n### Connections to Computer Architecture\nWe concur that our cost model is not novel, and we have zero interest in claiming it as a contribution of the work. We don\u2019t think it\u2019s all that interesting, so we\u2019d actually be pretty sad if anyone thought of it as a contribution. But we do believe it\u2019s important for a non-systems audience to provide a broad framing of how one might think of cost on the GPU. 
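For readers outside systems, a minimal sketch of what such a framing looks like in practice (the max-plus combination below is an illustrative assumption for this response, not the exact formula from Section 2.2):

```python
# Hedged sketch: a rule-of-thumb kernel cost estimate in the spirit of the
# Section 2.2 terms (C_HBM, C_L2, C_SHARED, C_SETUP, C_SYNC). The exact
# combination is an assumption made for illustration only.
def kernel_cost(c_hbm, c_l2, c_shared, c_setup, c_sync):
    # Assume memory traffic at different hierarchy levels can overlap, so
    # only the slowest level matters, while setup and synchronization
    # overheads are paid on top and cannot be hidden.
    return max(c_hbm, c_l2, c_shared) + c_setup + c_sync

# Example: an HBM-bound kernel; L2 and shared-memory traffic are hidden.
estimate = kernel_cost(c_hbm=100.0, c_l2=40.0, c_shared=20.0,
                       c_setup=5.0, c_sync=2.0)  # 107.0
```

A real kernel's bottleneck term can of course shift with problem size and data type, which is exactly why such a model is a framing device rather than an optimization target.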
We will certainly further emphasize this in a camera ready (e.g., removing \u201cwe show\u201d, changing \u201cinspired by\u201d).\n\nYour idea of adding a third column to Table 3 with the empirical arithmetic intensity is a good one -- apologies that we didn\u2019t understand it last time around. We agree it would help connect to the original model, and we\u2019ll be sure to update that.\n\n### FSDP\n\nRespectfully, we believe this request is beyond the scope of our work. Gluing two TK kernels together with PyTorch would not really add to the contribution of the work, which is to form a basis for writing individual kernels, and would in fact distract from the main body of the work. We certainly concur that tensor-parallel is an important regime for many, many workloads, but in our view networking is best treated as a meaningfully distinct problem from kernel development.\"}", "{\"comment\": \"Thanks for the detailed response! I truly appreciate the authors\u2019 efforts to conduct additional experiments, clarify misunderstandings, and enhance the paper\u2019s presentation. I will accordingly raise my score. A few additional suggestions:\n1. Please incorporate the Feature-TK-CUTLASS-Triton comparison table into the paper. This table is both important and valuable for helping readers understand the differences among these frameworks.\n2. Please include the discussion on \u201chighlighting TK\u2019s value to the ML audience\u201d in the paper, which effectively explains why the ML audience should care about this work.\n3. In Figure 7, the text for the CuBLASLt-2048 case overlaps with the legend. Please adjust for clarity.\n4. Regarding the mention of \u201can example ThunderKittens ping-pong GEMM kernel in the appendix,\u201d is this referring to Figure 15? If so, please explicitly clarify this in the text.\n5. In Appendix B.1, please provide a specific commit ID for the Triton compiler. 
As Triton evolves rapidly, the performance of different commits\u2014even under the same version number (currently 3.0.0)\u2014can vary significantly. Including this information would ensure better reproducibility.\"}", "{\"comment\": \"Thanks to the authors for the response. I have read it and it has addressed all my concerns.\"}", "{\"summary\": \"The paper proposes a new programming library for implementing efficient CUDA kernels. The paper contains three ideas at three different levels of CUDA kernel implementation: (1) at the warp level, the authors propose organizing tiles as multiples of 16; (2) at the thread-block level, the authors devise template libraries to overlap work between different asynchronous warps; (3) at the grid level, the authors propose methods for managing kernel launch overheads. As a result, the proposed library can achieve performance on par with the existing state-of-the-art implementations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes methods at different levels that simplify CUDA kernel implementations\", \"The paper achieves performance similar to state-of-the-art implementations\"], \"weaknesses\": [\"The paper has not discussed the tuning overhead of the proposed techniques.\"], \"questions\": \"Thanks for submitting the excellent paper to ICLR. While in general I enjoyed reading the paper, I have a few thoughts on the extension of the paper. Specifically, this paper proposes a new CUDA abstraction that allows users to write new kernels. However, it seems that it is built on top of the fact that all the dimensions should be a multiple of 16. This could be problematic in the context of dynamic shapes where a dimension is not divisible by 16. Could you please elaborate on how the proposed technique could be extended to such cases?\n\nBesides, the paper uses auto-tuning to further adjust the hyperparameters for better performance. 
Could you elaborate on how much the tuning overhead is?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Common response to all reviewers\", \"comment\": \"We thank all the reviewers for their time and effort reviewing our work and for their constructive comments, which have made our paper stronger. Reviewers consistently appreciated our \u201ccompelling\u201d, \u201cimpressive\u201d and \u201csignificant\u201d performance improvements compared to prior state-of-the-art kernels across numerous AI workloads [kgD5, nd8y, ANEJ, 5GXe], and the \u201csimplicity\u201d, \u201cfreshness\u201d, and \u201cusefulness\u201d of the ThunderKittens (TK) programming framework [kgD5, nd8y, ANEJ, 5GXe]. Reviewers also appreciated that our work can help make hardware-aware AI accessible to a broader set of developers [ANEJ, 5GXe].\n\nIn this common response, we provide: (1) a recap of our contributions and the changes we made in our revisions, and (2) details on important new results relevant to all reviewers. Please find our comments for individual reviewers in their respective threads. \n\n## Summary of contributions\nWe explore whether extracting peak performance from AI hardware requires complex kernels, or whether simple programming primitives can capture most of the optimization patterns needed in AI. We are inspired by widely used programming frameworks: CUTLASS, which supports every hardware optimization edge case through layers of templating, and Triton, which exposes fewer hardware optimizations but provides a drastically simpler programming experience. Our research takes initial steps to explore the large tradeoff space between programming complexity and the accessibility of peak hardware performance. Our contributions are:\n\n1. 
**Programming primitives**: At each level of GPU parallelism, we identify opportunities to simplify the programming experience, without sacrificing the accessibility of peak performance. To help ML researchers use these opportunities, we release the ThunderKittens (TK) programming library. Our optimizations include:\n- **Warp-level**: We automatically select and manage memory layouts for the user. We provide optimized library functions inspired by PyTorch (mma, exp, cumsum).\n- **Block-level**: We provide a kernel template that helps users utilize the core asynchronous execution patterns that are common in AI workloads. \n- **Grid-level**: We provide persistent grid support and a template function to help users manage L2 cache reuse for the kernel.\n\n2. **Optimized kernels**: We evaluate using a **broad range of popular AI workloads** (attention variants, convolutions, state space models, GEMM, positional encodings, layernorms), **4 data types** (BF16, FP16, FP32, FP8), **3 different AI hardware platforms** (NVIDIA 4090, H100, and Apple M2). We compare kernels written in TK to the strongest baselines available: state-of-the-art, carefully engineered kernels written in frameworks \u2013 CUTLASS, Triton \u2013 that are supported by large industry teams over multiple years. 
Excitingly, we find that TK kernels consistently compete with state-of-the-art, despite using **<200 lines of code per kernel** on average.\n\n\n\n## Common results and responses\nHere we provide results and responses to two points that were of interest to multiple reviewers:\n- Breadth of features: Reviewers asked about the breadth of features supported in ThunderKittens and wanted to understand the performance on edge cases.\n- Comparison to other frameworks: Reviewers wanted to further understand how TK compares to popular frameworks like CUTLASS and Triton.\n\n### Breadth of features supported in ThunderKittens: \nReviewers were curious about whether certain edge cases are supported in ThunderKittens. We show that they are indeed supported and add discussion to explain the implementations: \n1. **Precision levels:** The reviews mention that our paper focuses on BF16 and that it would be useful to highlight other data types. Our previously demonstrated kernels rely on FP16, FP32, and BF16. We now provide an **FP8 GEMM kernel**, which achieves up to 1570 TFLOPS on NVIDIA H100s. Please find the results for the FP8 TK kernel in Appendix B2. \n2. **Padded tiles:** Reviews mention that the 16x16 tiles may be restrictive for applications requiring dimensions offset from multiples of 16. We now also provide results for a TK attention kernel that does not assume multiples of 16. Please find a description of the TK unaligned attention kernel in Appendix B2. We also add Appendix D1 to our revised submission, which details how **TK manages tiles when dimensions are offset from multiples of 16**.\n3. **Range of hardware platforms:** Our original evaluations focused on the top-of-line data center GPU, NVIDIA H100s. 
**We now provide new evaluations for the top-of-line consumer GPU, NVIDIA 4090s, and personal hardware, Apple M2 chips.** We entirely converted TK to Apple hardware in the span of the rebuttal period and are including this in our revised supplementary materials. Please find performance benchmarks for our new TK kernels in Appendix B. TK consistently provides high performance and is extensible.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your positive review of our work! We are glad that you find our approach \\\"fresh\\\" and \\\"valuable\\\" and our results \\\"impressive\\\". We carefully address your outstanding questions in our response.\n\n## Addressing weakness 1. TK\u2019s flexibility for complex AI workloads.\nWe focus on a research question: What are the tradeoffs between programming complexity and the accessibility of peak hardware performance? We view the fact that TK uses minimal primitives, yet provides performance competitive with or higher than prior kernels, as a strength rather than a limitation. We provide the summary table below (and in Appendix B.3), showing that:\n1. TK competes with or outperforms the baselines across settings. \n2. 
TK uses a similar number of lines of code per kernel as Triton, and fewer lines than the other CUDA reference kernels.\n\n**Table: Lines of code across TK H100 kernels, state of the art non TK kernels, and the TK speed up over the reference kernels.**\n\n| Workload | TK kernel (lines of code) | Reference kernel (lines of code) | Speed up (max-min) |\n|----------|--------------------------|--------------------------------|-------------------|\n| Attention forwards | 217 | 2325 (CUTLASS FA3) | 0.87-1.14x |\n| GEMM | 84 | 463 (CUTLASS) | 0.98-2.05x |\n| Convolution (N=4096) | 131 | 642 (CUDA FlashFFTConv) | 4.7x |\n| Based linear attention | 282 | 89 (Triton) | 3.7-14.5x |\n| Hedgehog linear attention | 316 | 104 (Triton) | 4.0-6.5x |\n| Mamba-2 | 192 | 532 (Triton) | 3.0-3.7x |\n| Rotary | 101 | 119 (Triton) | 1.1-2.3x |\n| Fused layer norm | 146 | 124 (Triton) | 1.0-2.2x |\n\nOverall, our core abstractions handle the optimization patterns needed for most AI kernels (as demonstrated in our case studies), and TK is explicitly designed to be extensible when specialized optimizations are needed. Users can seamlessly mix TK's high-level primitives with custom CUDA code, as we show in our Mamba-2 implementation. This hybrid approach means users get the benefits of TK's abstractions for the bulk of their kernel (typically 95% of the effort), while retaining the flexibility to implement specialized operations where needed. \n\nPhilosophically, TK focuses on providing well-chosen defaults that compose well with both each other and custom code. This design ensures TK remains practical and maintainable (ThunderKittens\u2019 source is just a few thousand lines of code) while supporting the full spectrum of AI workloads.\n\n## Addressing weakness 2. 
TK\\u2019s portability across hardware platforms.\\nTo address this concern, we provide 2 new kernels for NVIDIA 4090 GPUs and 3 new kernels for Apple M2 chips, to demonstrate that the TK framework ports across hardware platforms. We have added results and discussion for these kernels in Appendix B.\\n\\n**Table: 4090 Attention FWD Performance (non-causal, batch=16, heads=16)**\\n\\n| Sequence Length | TK Attention FWD (head dim 64) | TK Attention FWD (non-causal, head dim 128) |\\n|----------------|-------------------------------------------------------|--------------------------------------------------------|\\n| 1024 | 150 TFLOPs | 141 TFLOPs |\\n| 2048 | 154 TFLOPs | 145 TFLOPs |\\n| 4096 | 157 TFLOPs | 156 TFLOPs |\\n| 8192 | 160 TFLOPs | 148 TFLOPs |\\n\\n**Table: Apple M2 Standard Attention FWD vs Apple MLX (non-causal, batch=16, heads=16)**\\n\\n| Sequence Length | TK Attention FWD (head dim 64) | TK Attention FWD (head dim 128) |\\n|----------------|--------------------------------|--------------------------------|\\n| 512 | 3523.46 vs 3088.41 GFLOPS | 3263.38 vs 3770.52 GFLOPS |\\n| 1024 | 3723.83 vs 3276.87 GFLOPS | 3435.89 vs 3977.23 GFLOPS |\\n| 1536 | 3761.81 vs 3313.16 GFLOPS | 3490.66 vs 4053.37 GFLOPS |\\n| 2048 | 3784.12 vs 3309.63 GFLOPS | 3488.09 vs 4005.99 GFLOPS |\\n| 2560 | 3793.42 vs 3329.78 GFLOPS | 3483.83 vs 4047.90 GFLOPS |\\n\\n**Table: Apple M2 GEMM TK vs Apple MLX Performance**\\n\\n| Sequence Length | Performance (GFLOPS) |\\n|----------------|--------------------------------|\\n| 1024 | 3830.83 vs 3444.65 |\\n| 2048 | 5238.84 vs 4839.45 |\\n| 4096 | 5600.58 vs 5190.06 |\\n| 8192 | 5675.82 vs 5182.69 |\\n| 16384 | 5266.97 vs 5117.65 |\\n\\nWe originally highlighted NVIDIA H100s since they represent the trend of AI hardware growing more complex, with an increasing number of specialized instructions, levels of parallel execution, and opportunities for asynchronous execution.\"}", "{\"title\": \"Response to review\", \"comment\": 
\"Thank you for your review, we are very grateful that you took the time to provide the thoughtful detailed feedback, which has helped us significantly strengthen our work. We carefully address your feedback in our response and hope that it helps.\\n\\n## Demonstrating the broad scope of ThunderKittens\\nThank you for the question! We provide new results beyond the submission for FP8 GEMM, causal attention, NVIDIA 4090 kernels, and Apple M2 kernels to further emphasize the scope of our framework. We also show that TK supports tile dimensions that are not multiples of 16; we provide demo attention kernels that use this feature. Please find these results in the common response and in Appendix B of our revision. We also added new discussion of our implementations for these features in Appendix D.1.\\n\\n## Connections to the computer architecture literature\\nThank you for raising this concern! We have contextualized both the cost model in Section 2.2 and the results in Table 3 to highlight the existing literature. We did not aim for Section 2.2 to appear as our contribution and appreciate the suggestions to further establish that this is fundamental background knowledge. We correspondingly added citations to computer architecture works in this section. Please let us know if any other changes are helpful!\\n\\n## Extending to Fully Shared Data Parallel (FSDP)\\nWe appreciate this suggestion. While multi-GPU operations like FSDP are indeed important for large-scale training, our current focus is on optimizing single-GPU kernel performance. Users of our framework can currently handle multi-GPU communication through established libraries like NCCL, which provides highly optimized primitives for inter-GPU collective operations. We agree that extending the producer-consumer model to multi-GPU contexts is an interesting direction. 
We are excited to explore such directions in future work.\n\n## Presentation improvements\nWe have made several presentation improvements for readability. We apologize for the presentation errors. Additionally: \n- We removed the mention of external endorsements from the paper, per your recommendation. \n\n- The review notes that Figures 2 and 5 are too simplified compared to the full implementation. We have updated the paper to include more precise captions, to highlight that the figures represent specific aspects of the library: Figure 2 is only meant to show that the library functions and tiles are PyTorch-like. Figure 5 is only meant to improve intuition on separation of responsibility across workers. \n\nThank you for your feedback on presentation, which has helped improve the paper. \n\n## Technical details\nWe have updated the paper to reflect your feedback, and appreciate your help in tightening the technical writing. We also provide specific responses below: \n- With respect to the TC:ALU ratio: We were specifically referring to the BF16 tensor core to ALU compute ratio, while your citation refers to FP16 FMA performance. Nonetheless, there was a slight error in our writing for the H100; we've adjusted the number to match NVIDIA's official 15x figure. (The A100 GPU officially has 16x.)\n- Regarding kernel memory access: Kernels can operate without memory access. We\u2019ve updated the wording to \\\"typically loads\\\" for accuracy, as it describes the typical pattern for deep learning kernels, which is our focus. \n- On warp execution units: We've revised this section to be more precise. You're correct that individual warps cannot simultaneously occupy different functional units. We've clarified that we were referring to instruction-level parallelism (e.g., overlapping memory loads with computation) and concurrent execution across different warps.\n- Concerning register spills: We've updated this to be more precise. 
Register spills occur to local memory, which may be cached in L1 depending on hardware policies. Thank you for helping us make this clearer.\\n- On SM thread block scheduling: We've added explicit mention of resource constraints as a factor in thread block collocation.\\n- As to kernel tail effects: While concurrent kernel execution is possible across multiple asynchronous CUDA streams, our discussion focuses on the common case in deep learning where kernels execute sequentially within a single CUDA stream, and for these workloads tail effects are an important consideration. We've added a note acknowledging advanced multi-stream techniques while maintaining our emphasis on typical AI workloads. We are excited to explore overlapping kernel execution in future work, such as in the recently-released async TP workload.\\n- We've also added a brief note about instruction cache considerations, though this wasn't a significant factor in our evaluated kernels.\\n\\nPlease let us know if anything else would be helpful in your review!\"}", "{\"title\": \"Response to Authors (2/2)\", \"comment\": \"## Technical Details\\n\\nThank you for addressing these corrections. However, a few issues remain:\\n\\n5. Regarding the TC:ALU ratio, where are you finding this number? According to the H100 datasheet, NVIDIA reports a 7.4x increase, not 16x. This is the case for both FP16 and BF16, as they utilize the same hardware units, likely to support TF32. Are you perhaps considering the FLOPs with sparse matmul? \\n\\n6. The discussion of register spilling to L1 still seems imprecise. It would be more accurate to describe this as \\u201cspills into ~~global memory~~ HBM.\\u201d The subsequent sentence effectively explains how SMEM and L1 can be traded to provide additional L1 capacity for these spilled registers, where I believe the local memory L1 policy is write-back but evictions can occur during global memory load / stores. 
However, it is worth noting that, in the most general case, spilling registers may incur a full hierarchy cache miss, potentially causing significant performance degradation. As currently written, this performance issue may be understated.\n\n7. There are a few minor technical inaccuracies in Section 2.1 that I\u2019d like to highlight:\n - The ALU operations of min/max and FMA are computed on the same units, but the text seems to suggest they are distinct. While speculative, these operations likely share significant overlap within the datapaths of the ALU/CUDA Core.\n - Could you confirm whether the statement \u201cthroughput differs across pipelines\u201d is accurate? Or is the observed throughput difference due to resource contention rather than variations in individual unit throughput? My understanding is that all compute units, including the XFU, have a consistent throughput of 1 op/clk.\n - Regarding thread stalling, the term \u201cfixed instruction latencies\u201d may not be sufficiently precise. I suggest replacing it with \u201cresource contention,\u201d which provides a broader and more accurate explanation.\n\nAdditional Comments\n\n- Regarding the instruction cache, this is particularly relevant to your discussion on reducing the number of registers per thread. An increase in operations could lead to more instructions, potentially resulting in I$ misses.\n- Thank you for your response regarding kernel tail effects. I had not considered the limitation of sequential kernel launches, and this would indeed be an excellent direction for follow-up TK!\"}", "{\"metareview\": \"The paper presents a new framework to simplify writing AI kernels for GPUs while still allowing for high performance. All the reviewers liked the paper for its practical significance. 
Hence, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttals further demonstrated the practical utility of the proposed method.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your thorough review of our work! We are grateful for your detailed feedback, which significantly helped us improve our work. We provide our response to your feedback below.\n\n## Addressing weakness 1: Simplicity of programming with TK, and Addressing weakness 3: Lines of code comparisons.\nThe review mentions that TK is too low-level, which may not simplify the programming burden. Our results and analysis suggest that TK is quite helpful. We provide the summary table below (and in Appendix B.3), showing that:\n1. TK competes with or outperforms the baselines across settings. \n2. TK uses a similar number of lines of code per kernel as Triton, and fewer lines than the other CUDA reference kernels.\n\n**Table: Lines of code across TK H100 kernels, state of the art non TK kernels, and the TK speed up over the reference kernels**\n| Workload | TK kernel (lines of code) | Reference kernel (lines of code) | Speed up (max-min) |\n|----------|--------------------------|--------------------------------|-------------------|\n| Attention forwards | 217 | 2325 (CUTLASS FA3) | 0.87-1.14x |\n| GEMM | 84 | 463 (CUTLASS) | 0.98-2.05x |\n| Convolution (N=4096) | 131 | 642 (CUDA FlashFFTConv) | 4.7x |\n| Based linear attention | 282 | 89 (Triton) | 3.7-14.5x |\n| Hedgehog linear attention | 316 | 104 (Triton) | 4.0-6.5x |\n| Mamba-2 | 192 | 532 (Triton) | 3.0-3.7x |\n| Rotary | 101 | 119 (Triton) | 1.1-2.3x |\n| Fused layer norm | 146 | 124 (Triton) | 1.0-2.2x |\n\n\nAdditionally, we reiterate that TK manages the following for the user:\n1. **Warp-level:** We automatically manage memory layouts for the user. We provide optimized library functions inspired by PyTorch (mma, exp, cumsum) to uplevel the programming experience. 
We provide tile data structures for both shared and register memory (and we manage data type conversions, broadcasts, templating functions around the tile sizes, and loads and stores across the memory hierarchy) to help users manage their memory resource utilization.\n2. **Block-level:** We provide a general kernel template that helps users utilize two important asynchronous execution patterns in AI workloads: (1) multi-stage pipelining and (2) producer-consumer worker specialization. We set these patterns up for the user. \nWe think that giving users the option to set barriers is important for enabling peak performance. Triton does not offer this, and we show in our paper that asynchrony is important. Our goal is to simplify the primitives, without sacrificing performance. \n3. **Grid-level:** We provide persistent grid support and a function to help users set the thread block launch pattern, which influences L2 reuse.\n\n\n## Addressing weakness 2: Novelty of TK compared to prior frameworks\n\nWe focus on a research question: What are the tradeoffs between programming complexity and the accessibility of peak hardware performance? Our novelty is in showing that a new point in the tradeoff space exists. We can use fewer primitives to achieve higher performance on AI workloads, compared to what\u2019s been demonstrated in prior frameworks. Identifying the primitives is non-trivial given the complexity of GPUs, and this is core to our contribution.\n\nTo address your feedback, we have included new analysis in Appendix B.3; we believe it has helped the paper, and we thank you for your suggestion. Please also see this new content below and in our common response. 
We compare the features supported by different frameworks:\n\n| Feature | TK | CUTLASS | Triton |\n|---------|----|---------| -------|\n| Direct register memory control | YES | YES | NO |\n| Fine-grained asynchronous execution | YES | YES | NO |\n| PyTorch-like library functions | YES | NO | YES |\n| Supports multiple hardware platforms | YES | NO | YES |\n\nWe recognize that the following is an imperfect metric. However, we include the sizes of various CUDA libraries below as a rough proxy. For CUTLASS and TK we report the size of the \u201cinclude/\u201d directory, and for Triton we report the combined size of the \u201cinclude/\u201d directories in Triton plus the \u201cinclude/\u201d in the core MLIR compiler dependency.\n\n| Library | Size | Date/Version |\n|----------|--------------|--------------|\n| CUTLASS | 22 MB | 10/22/2024 |\n| Triton | 12.6 MB | 10/22/2024 |\n| TK | < 1.0 MB | 10/22/2024 |\n\n\n## Addressing weakness 4. Additional kernels highlighting the scope of TK's features\nThe review notes that our experiments miss some useful kernels. We provide new results for the requested kernels: quantized FP8 GEMM and causal attention. Please find these results in Appendix B2 and in the common response. We hope that these results address any remaining concerns on experimental validation.\"}", "{\"title\": \"Response to review [continued]\", \"comment\": \"## Addressing question 5: Section 2.2 helps summarize the kernel costs\nThank you for your feedback about Section 2.2. The goal of Section 2.2 is to summarize the various costs in a concise way. As you mention, Section 2 \u201chelps ML researchers who may lack in-depth experience with GPU programming\u201d. 
It is correct that we do not use Section 2.2 for automation \u2013 our research focus is on identifying simple but performant programming primitives \u2013 not automation.\n\nEach part of Section 3 corresponds to terms in Section 2.2:\n- Section 3.1 addresses $C_{SHARED}$ \n- Section 3.2 addresses $C_{SYNC}$\n- Section 3.3 addresses $C_{HBM}$, $C_{L2}$, and $C_{SETUP}$\nThis organizational structure is reflected at lines 195-196, 220-221, 238-240, 276-278, 285-287, 359-361, using line numbers from our revised submission.\n\nWe also added new citations to point to prior work in the computer architecture literature that articulates this cost model to highlight that it is commonly used as a rule of thumb. \n\n## Addressing question 6: TK manages layouts.\nThe review asks \u201cdo users need to manually manage data organization and specify layouts in TK?\u201d We reiterate that TK manages the layouts for the user (lines 88-94 and 275-278 state that we choose layouts automatically). \n\n## Addressing question 7: Role of Figure 5\nThe goals of Figure 5 are to (1) show the reader how work is partitioned between load workers and compute workers and (2) show the reader a broader set of TK library functions (warpgroup, TMA). We have clarified the goal of the figure in the caption.\n\n## Addressing question 8: Quantized kernels. \nYes, TK handles quantized kernels. We include new results for a quantized FP8 GEMM in Appendix B2 and compare to CuBLAS. Our GEMM kernel performance is shown below: \n\n**Table: TK FP8 GEMM vs CuBLAS Performance**\n\n| Sequence Length | Performance (TFLOPs) |\n|----------------|---------------------|\n| 4096 | 1457 vs 1439 |\n| 6144 | 1507 vs 1509 |\n| 8192 | 1532 vs 1534 |\n| 12288 | 1570 vs 1429 |\n| 16384 | 1447 vs 1396 |\n\n## Addressing question 9: Causal attention kernels. \nWe add new results for causal attention in Section 4. 
Our kernel performance is shown below:\\n\\n**Attention Performance Comparison (D = 128, B = 16, H = 16)**\\n\\n| Sequence Length | TK Attention Inference Causal (vs FA3) | TK Attention Backwards Causal (vs FA3) |\\n|----------------|---------------------------------------|---------------------------------------|\\n| 768 | 290 TFLOPs (vs 286 TFLOPs) | 185 TFLOPs (vs 153 TFLOPs) |\\n| 1536 | 417 TFLOPs (vs 465 TFLOPs) | 330 TFLOPs (vs 264 TFLOPs) |\\n| 3072 | 519 TFLOPs (vs 581 TFLOPs) | 400 TFLOPs (vs 362 TFLOPs) |\\n| 6144 | 537 TFLOPs (vs 617 TFLOPs) | 468 TFLOPs (vs 422 TFLOPs) |\\n| 12288 | 550 TFLOPs (vs 598 TFLOPs) | 494 TFLOPs (vs 449 TFLOPs) |\\n\\n## Addressing question 10: Experimental configurations. \\nWe have added Appendix B1 to recap our experimental configurations. \\n- Our PyTorch results are compiled with torch.compile and we have noted this in the paper. \\n- We added the versions of PyTorch, torch compile, and CUDA that we use. \\n- We added the cuBLASLt baseline and CUTLASS to our GEMM plots. We use cuBLASLt autotuning for each dimension. \\n- PyTorch vs. Triton: While both torch-compiled PyTorch and explicit Triton kernels can use Triton as the backend, our PyTorch baselines are written using PyTorch library functions. \\n\\n## Minor details and presentation\\nWe have updated our paper to discuss opportunities for using specialized hardware with TVM and Triton, thank you for pointing this out! Broadly, we are very excited about TVM and Triton and their value to the community. The goal of our work is not to promote the use of ThunderKittens over other frameworks. Our work simply explores whether there exists a small set of programming primitives that is sufficient for high performance.\\n\\nRegarding extensibility, indeed, Triton allows for inline PTX. However, the mechanism for this, inline_asm_elementwise, supports a restricted subset of PTX\\u2019s capabilities, which represents simple element-wise maps over tensors. 
Furthermore, while TVM can access tensor cores, there are frequently important low-level optimizations that cannot be expressed within its scope. For example, FlashAttention-3 accomplishes the transpose of the V matrix through two low-level optimizations, one of which performs a partial transpose but also induces a permutation on the elements of the matrix. These sorts of optimizations are not accessible without full and broad PTX support, which is possible in embedded frameworks like CUTLASS and TK, but not in TVM and Triton.\\n\\nFinally, we have addressed the presentation issues -- we added mention of TMA to Section 2, and have fixed the broken links in Appendix B. Thank you very much for the feedback!\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for addressing my additional questions and investigating the performance discrepancies with FA2 and FA3. I also appreciate the clarification that TK is applicable to the older Ampere generation.\\n\\n---\\n\\n**GEMV:** Regarding GEMV, are there any current implementations successfully utilizing 16x speculative decoding? My understanding is that 4x is standard, which could still result in a significant performance penalty. However, I had not considered that padding might still be more efficient. If true, this would indeed be an interesting insight.\\n\\n**TC:ALU Ratio:** Comparing BF16 TC FLOPs to FP32 FMA FLOPs seems somewhat misleading. A more appropriate comparison would be against BF16 FMA FLOPs for a fair evaluation.\\n\\n**Register Spilling:** I believe that simply connecting to the cache hierarchy (DRAM/HBM) is sufficient. Discussing the various memory regions would introduce unnecessary complexity.\\n\\n**ALUs vs FMA:** Thank you for the clarification; this is both interesting and surprising. It is also consistent with the throughput table. 
In hindsight, this makes sense as it unifies integer and float min/max operations without requiring explicit FSUB support.\\n\\n**Throughput:** The throughput tables appear to confirm my original statement. The example of expf is attributable to the XU count: 16 per SM versus 128 FMA units per SM. While each unit maintains a throughput of 1/clk, XU instructions must be partially serialized within the warp. \\n\\n**Instruction Latency:** Thank you for clarifying the observed cases. While I agree that FMA has a fixed latency, the 'stall wait' described does not result from this latency alone. It arises due to data hazards (RAW dependencies) in low-ILP scenarios, where dependent instructions must wait for the result. In contrast, in high-ILP workloads with independent instructions, the same fixed latency would not cause a stall. This differs from what is typically considered 'fixed instruction latency,' such as a division operation requiring N cycles to complete and exhibiting a 1/N throughput. In such cases, even independent division instructions must wait the full latency. However, I believe this distinction may introduce unnecessary complexity, which is why I suggested avoiding the issue by listing 'resource contention' instead.\\n\\n---\\n\\nI thank the authors again for their detailed responses and for addressing the majority of my concerns. I am satisfied with their efforts and will raise my score accordingly.\"}", "{\"summary\": \"The paper introduces THUNDERKITTENS (TK), a framework that simplifies writing AI kernels for GPUs while still allowing for high performance. Using a few key abstractions, TK provides tools for developers to create efficient kernels without deep expertise in GPU programming. Through benchmarking, the authors show that TK performs on par with or better than other leading frameworks like CuBLAS and FlashAttention-3 for various AI tasks. 
TK\\u2019s accessible design, inspired by PyTorch and NumPy, aims to make high-performance kernel development more straightforward and accessible to a wider audience.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper offers a fresh and practical approach to GPU kernel programming, using only a handful of essential abstractions to make high-performance kernel writing accessible to a wider range of developers. This simplicity-oriented approach can reduce the complexity typically associated with GPU development, which could be particularly valuable for those without extensive CUDA experience. In terms of performance, THUNDERKITTENS shows impressive results, even surpassing established libraries like CuBLAS and FlashAttention-3 in several tasks, especially in backward pass operations for attention mechanisms and linear attention. The results strongly suggest that TK\\u2019s design strikes a good balance between simplicity and performance optimization. Furthermore, by aligning its design with PyTorch and NumPy, TK makes it easier for non-specialists to adopt, potentially expanding the accessibility of efficient GPU programming.\", \"weaknesses\": \"1- While the minimalistic design is a key strength, it may also limit TK\\u2019s flexibility for more specialized AI tasks that require tailored optimization strategies. As demands grow for handling complex and emerging AI workloads, the current set of abstractions could potentially fall short.\\n\\n2- The focus on NVIDIA\\u2019s H100 GPUs raises questions about how well TK can transfer to other platforms, such as AMD or Apple GPUs. Expanding on cross-platform compatibility would provide more clarity about TK\\u2019s broader usability.\\n\\n3- Though the paper demonstrates strong performance on medium-sized data, it is less clear how TK handles scalability with very large datasets or highly parallelized scenarios. 
Addressing its limitations in these settings could further support TK\\u2019s value in real-world applications.\", \"questions\": \"Could the authors elaborate on the potential for cross-platform compatibility? Given the focus on NVIDIA\\u2019s H100 GPUs, it would be helpful to understand whether TK\\u2019s abstractions could be adapted to other GPU architectures, like AMD or Apple, and what challenges might arise.\\n\\nThe paper demonstrates TK\\u2019s strong performance on medium-sized data blocks, but could the authors provide more insights into how well TK scales with very large datasets? Are there specific limitations to consider for applications requiring high parallelization or extensive data handling?\\n\\nCould the authors expand on their design choice to limit TK to a few key abstractions? Are there specific reasons why additional templates or adaptive features were not incorporated, and would doing so have risked undermining the framework\\u2019s simplicity?\\n\\nIn scenarios with high memory demands, how does TK manage the balance between memory overhead and computational efficiency? Further detail on this balance could clarify TK\\u2019s suitability for applications with varied memory and compute requirements.\\n\\nLastly, could the authors clarify TK\\u2019s debugging process, especially for users who may not be familiar with GPU optimization? Since GPU kernel errors can be challenging to diagnose, any insights into how TK might support error handling and debugging would be valuable for potential adopters.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your positive and thoughtful review! We are glad that you appreciate TK\\u2019s simplicity and performance, including providing kernels that far outperform the prior state-of-the-art kernels by up to 14x. 
Here, we address your remaining questions.\\n\\n## Simplicity of tuning kernels within TK\\n\\nIn our work, we de-emphasize autotuning, as we find that tuning kernels is usually straightforward once the right primitives are in place. This is, in a sense, a limitation of our framework compared to compiler-based frameworks like Triton, which automatically block and tile. However, our core argument is that for most AI workloads, sophisticated ML compilers are not needed to concisely, performantly, and portably express kernels. \\n\\nTo the extent that tuning is necessary, it can usually be done by twiddling occupancy and pipeline depth, for which there are usually only a few values that make sense, anyways. **TK makes it easy to twiddle these two values** by changing a single number in the kernel, and recompiling.\\n\\nFor tile size optimizations, users may need to consider hardware constraints such as shared memory and register limitations, requiring more careful analysis. However, **TK's templated operation design minimizes the code changes needed to implement these adjustments**.\\n\\n\\n## TK supports dynamic shapes beyond multiples of 16\\n\\nThank you for the insightful question with respect to the potential limitations of 16x16 tiles. Yes, TK does support workloads where the dimensions are not multiples of 16. To address your feedback, we include new results in Appendix B.2, where we report that TK gives high performance on attention kernels that require tiles that are not multiples of 16. \\n\\nWe also describe how we support these dimensions in Appendix D.1. The common solution to support any dimension is to use *padding*. ThunderKittens supports padding within the kernel, and other frameworks, such as CUTLASS and Triton also pad data into tiles. Padding is the solution the hardware prefers, too, since tensor cores generally operate in multiples of some base size (usually 16). 
\\n\\n\\nWe hope these responses help address your feedback, and please let us know if you have any additional questions!\"}", "{\"title\": \"Common response to all reviewers [continued]\", \"comment\": \"# Performance Benchmarks\\n\\nPlease also find these results discussed in Appendix B of our revised submission.\\n\\n## TK extends across hardware platforms\\nTo further highlight the extensibility of TK, we provide new kernels for the NVIDIA 4090 and Apple M2 platforms. \\n\\n**Table: 4090 Attention FWD (non-causal, batch=16, heads=16)**\\n\\n| Sequence Length | TK Attention FWD (head dim 64) | TK Attention FWD (head dim 128) |\\n|----------------|-------------------------------------------------------|--------------------------------------------------------|\\n| 1024 | 150 TFLOPs | 141 TFLOPs |\\n| 2048 | 154 TFLOPs | 145 TFLOPs |\\n| 4096 | 157 TFLOPs | 156 TFLOPs |\\n| 8192 | 160 TFLOPs | 148 TFLOPs |\\n\\n**Table: Apple M2 Pro Attention FWD vs Apple MLX (non-causal, batch=16, heads=16)**\\n\\n| Sequence Length | TK Attention FWD (head dim 64) | TK Attention FWD (head dim 128) |\\n|----------------|--------------------------------|--------------------------------|\\n| 512 | 3523.5 vs 3088.4 GFLOPS | 3263.38 vs 3770.5 GFLOPS |\\n| 1024 | 3723.8 vs 3276.9 GFLOPS | 3435.89 vs 3977.2 GFLOPS |\\n| 1536 | 3761.8 vs 3313.2 GFLOPS | 3490.66 vs 4053.4 GFLOPS |\\n| 2048 | 3784.1 vs 3309.6 GFLOPS | 3488.09 vs 4006.0 GFLOPS |\\n| 2560 | 3793.4 vs 3329.8 GFLOPS | 3483.83 vs 4047.9 GFLOPS |\\n\\n**Table: Apple M2 Pro GEMM TK vs Apple MLX Performance**\\n\\n| Sequence Length | Performance (GFLOPS) |\\n|----------------|--------------------------------|\\n| 1024 | 3830.8 vs 3444.7 |\\n| 2048 | 5238.8 vs 4839.5 |\\n| 4096 | 5600.6 vs 5190.1 |\\n| 8192 | 5675.8 vs 5182.7 |\\n| 16384 | 5267.0 vs 5117.7 |\\n\\n## Comparing TK to CUTLASS and Triton:\", \"we_focus_on_a_research_question\": \"*What are the tradeoffs between programming complexity and the accessibility of peak 
hardware performance?* Our novelty is in showing that a **new point in the tradeoff space exists**. We can use fewer primitives to achieve higher performance on AI workloads, compared to what\\u2019s been demonstrated in prior frameworks.\\n\\n**Overview:** CUTLASS provides many layers of templated C++ primitives. Compiler approaches like Triton expose fewer optimizations, but simplify the programming experience. Researchers often turn to Triton, pointing to its relative simplicity. Our novelty is showing a small set of C++ primitives that captures most of what we need and consistently unlocks state-of-the-art kernels across hardware platforms. Identifying this small set is not trivial given the complexity of GPUs. \\n\\nBelow, we summarize how Triton, CUTLASS, and TK support key features needed for high performance:\\n\\n**Table: Framework Feature Comparison**\\n\\n| Feature | TK | CUTLASS | Triton |\\n|----------------------------------------|--------|---------|---------|\\n| Direct register memory control | YES | YES | NO |\\n| Fine-grained asynchronous execution | YES | YES | NO |\\n| PyTorch-like library functions for usability | YES | NO | YES |\\n| Supports multiple hardware platforms | YES | NO | YES |\\n\\nAs proxies for the simplicity versus efficiency resulting from different frameworks, we measure: (1) the size of various frameworks and (2) the lines of code used across kernels. We validate two claims: \\n1. TK compares to Triton in lines of code, and both outperform CUTLASS\\n2. 
TK competes with or outperforms Triton and CUTLASS/raw CUDA baselines\\n\\n**Table: Library Size Comparison**\\n\\n| Library | Size (Bytes) | Date/Version |\\n|----------|-------------|--------------|\\n| CUTLASS | 22 MB | 10/22/2024 |\\n| Triton | 12.6 MB | 10/22/2024 |\\n| TK | < 1.0 MB | 10/22/2024 |\\n\\n\\n**Table: Lines of code across TK H100 kernels, state of the art non TK kernels, and the TK speed up over the reference kernel.**\\n\\n| Workload | TK kernel (lines of code) | Reference kernel (lines of code) | Speed up (max-min) |\\n|----------|--------------------------|--------------------------------|-------------------|\\n| Attention forwards | 217 | 2325 (CUTLASS FA3) | 0.87-1.14x |\\n| GEMM | 84 | 463 (CUTLASS) | 0.98-2.05x |\\n| Convolution (N=4096) | 131 | 642 (CUDA FlashFFTConv) | 4.7x |\\n| Based linear attention | 282 | 89 (Triton) | 3.7-14.5x |\\n| Hedgehog linear attention | 316 | 104 (Triton) | 4.0-6.5x |\\n| Mamba-2 | 192 | 532 (Triton) | 3.0-3.7x |\\n| Rotary | 101 | 119 (Triton) | 1.1-2.3x |\\n| Fused layer norm | 146 | 124 (Triton) | 1.0-2.2x |\"}", "{\"title\": \"Response to review [continued]\", \"comment\": \"## Addressing weakness 5. Highlighting TK\\u2019s value to the ML audience\\n\\nWhile TK advances systems capabilities, its impact lies in enabling the next wave of ML innovation. As recognized in the ICLR 2025 call for papers, implementation efficiency and hardware optimization are no longer secondary concerns - they are fundamental bottlenecks in advancing AI research.\", \"our_work_offers_three_key_contributions_that_directly_address_these_bottlenecks\": \"1. **Democratizing Hardware-Efficient ML Research.** The complexity of hardware optimization has become a critical barrier to ML architecture innovation. TK dramatically lowers this barrier, enabling researchers to rapidly prototype and evaluate novel neural architectures that would previously have required months of specialized CUDA engineering. 
For instance, our framework makes tensor core optimization accessible through simple interfaces (stated on lines 88-91, 270-280 of our submission).\\n\\n2. **Making New ML Architectures Practical.** History shows that major ML advances come from scaling - but scaling requires efficiency. Our framework has already revealed that architectures previously dismissed as impractical can be transformed into state-of-the-art approaches through proper optimization. For example, our implementation shows that linear attention can outperform standard attention in wall-clock time at much shorter sequences than previously thought possible, fundamentally changing the calculus of architecture design decisions. This is just one example of how TK can unlock new families of architectures that were previously considered computationally infeasible.\\n\\n3. **Quantitative Impact on ML Workloads.** We demonstrate substantial improvements across core ML operations:\\n- $10-40\\\\%$ faster attention computation\\n- $8\\\\times$ acceleration for long-range convolutions\\n- $6\\\\times-14\\\\times$ speedup for linear attention variants These improvements directly enable research into larger contexts, alternative attention mechanisms, and novel architecture designs.\\n\\nIn our view, these optimizations represent the difference between architectures that can scale and those that cannot. Just as FlashAttention's impact went far beyond its speedup numbers to enable the current wave of large language models, TK aims to unlock the next generation of ML architectures by removing critical implementation barriers that currently constrain innovation.\\n\\n## Addressing question 1. Bank conflicts in FlashAttention-3. \\nCUTLASS and TK support the exact same capabilities, which is stated on lines 200-202 of our submission. We did not state that CUTLASS faces fundamental challenges causing bank conflicts. 
We simply observe that bank conflicts are avoidable and the user does not need to be burdened in thinking about layouts, as stated on lines 206-208 of our submission. TK handles the layout selection for the user to help minimize such issues.\\n\\n## Addressing question 2. CUTLASS ping pong and cooperative kernels. \\nThunderKittens supports ping-pong kernels like CUTLASS -- our work has not emphasized ping-pong style kernels, as we have found that it adds considerable complexity for marginal performance improvements. Our GEMM kernel is also cooperative in nature; we also have threads exchange data through shared memory before performing coalesced stores to global memory. We have added comparison to CUTLASS GEMM in Figure 7, and also provide an example ThunderKittens ping-pong GEMM kernel in the appendix which makes further use of the asynchronous tensor core instructions. We find it achieves an additional 0.4% performance above the kernel proposed in the main paper.\\n\\n## Addressing question 3. Extensibility of TK\\u2019s ideas across hardware platforms\\nThe review mentions that our original set of kernels are for H100 GPUs. We provide 2 new kernels for NVIDIA 4090 GPUs and 3 new kernels for Apple M2 chips, to demonstrate that the TK framework ports across hardware platforms. We have added results and discussion for these kernels in Appendix B3.\\n\\n## Addressing question 4. Extensibility to future GPU generations\\nThank you for this thoughtful question! While it is difficult to exactly know what changes will be required, since we do not have Blackwell GPUs, we are optimistic that TK\\u2019s approach will extend. Our primitives are designed around fundamental hardware properties \\u2013 e.g., for memory, shared memory is banked, we need coalesced HBM loads, register utilization needs to be carefully managed; e.g., for compute, hardware has multiple execution units that benefit from occupancy, latencies need to be hidden via asynchronous execution. 
These are not platform specific properties; these fundamentals have already cleanly transferred across hardware platforms and vendors (NVIDIA 4090; NVIDIA H100; Apple M2). We expect the trend to continue.\"}", "{\"title\": \"Response to Authors (1/2)\", \"comment\": \"I thank the authors for their response and commend their efforts in extending TK support to both the 4090 and Apple M2.\\n\\n---\\n\\n## Demonstrating Broader Scope\\n\\nThank you for including these cases; the competitive performance across the added conditions is impressive and further supports the generality of your method. However, the new results raise a few questions.\\n\\n1. Flash-Attention2 appears to perform better on the 4090. Have you identified the cause of this? While pinpointing the reason would be helpful, I consider this a secondary concern, as the advantages of a consistent framework and greater adaptability can outweigh minor performance variations.\\n\\n2. Similarly, Figure 8 indicates that FA3 outperforms TK in the causal attention forward pass. Have you identified the reason for this? Could it be related to the use of 16x16 blocks?\\n\\n3. You explored adaptation to the 4090, which is an Ada GPU incorporating many architectural changes introduced by Hopper. Is this focus due to available testing hardware, or is there a fundamental restriction preventing testing on Ampere? While Ampere is no longer state-of-the-art, it remains widely used in academic clusters (e.g., A100, A6000) and among consumers (e.g., 30 series).\\n\\n4. Thank you for discussing block padding with masks. However, this does not fully address the concern with GEMV. While GEMV is rarely used during training, it is the primary operation for non-batched LLM inference. As it stands, your framework implies GEMV would incur a 16x performance reduction due to padding. 
Could you clarify or address this concern?\\n\\n---\\n\\n## Connections to Computer Architecture Literature \\n\\nThank you for adding the reference to the Roofline model in Section 2.2. However, the wording in this paragraph may still need refinement to avoid implying that your cost model is novel. Perhaps it could be described as a \\u201creformulation\\u201d of the Roofline model rather than inspired by it. Notably, roofline considers FLOPs, which is equivalent to $1/\\\\mathrm{cost} \\\\sim 1/\\\\mathrm{time}$ in your model. \\n\\nRegarding the contextualization of Table 3, I did not find this addressed in your revision. I recommend framing the reordering as increasing the arithmetic intensity (FLOPs/byte), which aligns with the insights provided by the Roofline framework. This should suffice to contextualize the result in Table 3. Currently, the text may be interpreted as offering a fundamental new insight.\\n\\n---\\n\\n## Extension to FDSP\\n\\nThank you for your response. Perhaps this could be included in the Camera-Ready version (in the appendix). Alternatively, you could provide an example in your code release that demonstrates computing GEMM across two GPUs with TK using PyTorch as an intermediary.\\n\\nNotably, this extension is important not only for training but also for inference. Many LLM frameworks, such as vLLM, support tensor-parallel computations to efficiently distribute weights and increase parallelism.\\n\\n---\\n\\n## Presentation Improvements\\n\\nThank you for addressing the suggested changes. I noticed that some issues highlighted in my original review remain unresolved. However, these remaining issues appear to have only a minor impact on overall readability.\"}" ] }
0fD3iIBhlV
Emergence of a High-Dimensional Abstraction Phase in Language Transformers
[ "Emily Cheng", "Diego Doimo", "Corentin Kervadec", "Iuri Macocco", "Lei Yu", "Alessandro Laio", "Marco Baroni" ]
A language model (LM) is a mapping from a linguistic context to an output token. However, much remains to be known about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures.
[ "interpretability", "intrinsic dimension", "large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=0fD3iIBhlV
https://openreview.net/forum?id=0fD3iIBhlV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVG38qCTec", "yZxEhQm1mU", "wD6SNZbXbU", "ttYcoW2vwp", "qT3p9qyINA", "pW0e4YmJTZ", "nK8ydTTJTa", "nA9857nzAk", "l9BesoDPsq", "kMhJYBCCba", "jJPrYHBtJq", "j1L05MopND", "iHGvoNsVDf", "hBKTORbxU0", "dhpGY1b8Fq", "aAyKAq0FE7", "ZQ22QKOkSp", "YBZCaskXBT", "XALYWHEXmk", "X6qmtjyoyq", "URlt0GRKpk", "SAAc8hIRLi", "NbdKNKwUwC", "MpAWrAmpRT", "MdRSmWWKbT", "MRAVragrNV", "M3ZA0r5L8J", "L9e0c9D3LM", "FbBG5ompEK", "FQeWfZiSzt", "CdH7fa9aOs", "B4aPjnPUBZ", "7D1Az2fXVj", "4pq2H5PHri", "4eIPdRhcdc", "2TYcedUdNZ", "20jhFcS6j6" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731582539121, 1733644159360, 1731028228103, 1731305779527, 1731578599903, 1731582563468, 1731581253484, 1731581018044, 1731578553197, 1731666798899, 1731582584866, 1731673487725, 1731673454250, 1731580870550, 1733127593519, 1730327224942, 1731666930076, 1731578575685, 1731673777445, 1731673550780, 1731667156906, 1732273746333, 1731673677188, 1730719307107, 1731582510850, 1731112070157, 1731582611849, 1731673391973, 1731578759741, 1737523779226, 1731667065433, 1731580965815, 1731580922040, 1731582846042, 1731581134008, 1731666823791, 1731580830193 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Area_Chair_kRYK" ], [ 
"ICLR.cc/2025/Conference/Submission6601/Reviewer_PyB7" ], [ "ICLR.cc/2025/Conference/Submission6601/Reviewer_1CPH" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Reviewer_5snc" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Reviewer_61Jw" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Reviewer_j48a" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ], [ "ICLR.cc/2025/Conference/Submission6601/Authors" ] ], "structured_content_str": [ "{\"comment\": \"__I was not able to identify which 
variable is corresponding to the intrinsic dimension without going through the cited paper. It seems to be the variable $d$ which authors did not explain what it is__\\n\\nThanks for pointing this out, that was our oversight! We have added a line in Section 3.4 that defines d to be the estimated ID.\"}", "{\"metareview\": \"The paper employs intrinsic dimension (ID) estimation as a technique to analyze the properties of different layers in transformer-based LLMs. While inspired by previous work, this study expands the scope by including five LLMs and introducing more extensive probing and downstream tasks on defined datasets to analyze ID profiles across layers. The paper presents several interesting findings. Most of the concerns raised by the reviewers were addressed during the authors' rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 5snc provided the lowest review score. However, the questions raised primarily concern writing issues, which can be addressed in the camera-ready version.\"}", "{\"summary\": \"The paper explored how transformer-based language models evolve their internal representations across layers, revealing a distinct high-dimensional abstraction phase. The authors observe the same findings across multiple LMs and datasets, and they provide a foundation for better understanding and optimizing language model architectures. This work bridges the gap between geometric representation and linguistic function in transformer-based LMs. Also, it highlights the potential of intrinsic dimensionality as a tool for analyzing and evaluating LMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work conducts experiments on various LMs (e.g., OPT-6.7B, Llama-3-8B, Pythia-6.9B, OLMo-7B, and Mistral-7B) using multiple datasets, providing a comprehensive analysis. It also observes how representational intrinsic dimensionality (ID) varies across layers and proposes insightful hypotheses. 
Furthermore, this work inspires the research community to explore the utilization of ID information in transformer-based LM applications.\", \"weaknesses\": \"While the paper combines two methods, GRIDE (Denti et al.) and Information Imbalance (Glielmo et al., 2020), to analyze four large language models (LLMs), this combination may fall short in terms of novelty. In Section 4.1, the choice of pre-training datasets for evaluation is also a limitation. Since these datasets have likely been encountered by the models during training, the results may not provide a fully accurate picture of the models\\u2019 generalization capabilities. Testing on unseen datasets would be crucial to evaluate the robustness and generalizability of the observed patterns, especially in real-world applications where unseen data is the norm. The study is limited to a narrow range of LLMs in terms of scale. Evaluating models of varying sizes (e.g., smaller models alongside large ones) would offer a more comprehensive understanding of how model size impacts intrinsic dimensionality and representation overlap across layers.\", \"questions\": \"1. There seems to be a second ID peak in the later layers across LLMs. Do you think this second ID peak might reveal additional insights?\\n2. In your analysis (Figure 4), you observed that Pythia and OPT exhibit very similar representations. Could this similarity be attributed to pre-training on similar datasets? If so, how might this influence your findings, and have you considered controlling for dataset overlap to isolate structural factors more effectively?\\n3. The work focuses on classification tasks to analyze representation spaces in language models. Could you explain why generative tasks were not included? 
Do you expect the observed ID peaks and representation patterns to differ in generative contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper uses the technique of intrinsic dimension estimation as a tool for analyzing properties of different transformer LLM layers. 5 different LLMs are analyzed on textual inputs from 3 different public-domain corpora. In addition to computing the intrinsic dimensionality (ID) (using the generalized ratios intrinsic dimension estimator) for different layers, the ID is correlated with performance of different layers' representations on syntactic and semantic probing tasks. Furthermore, the difference in representational power between different layers is measured using an Information Imbalance criterion. The authors find that middle layers in LLMs have the highest ID; ID peaks seem to be an indicator of linguistic structure being learnt; early onset of peaks in ID across layers is correlated with better next token prediction performance; and high ID peak layers are representationally equivalent across different LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper conducts a broad analysis across 5 different LLMs and considers a range of questions and ablation studies (e.g., estimating ID on shuffled data, comparing layers across different models); altogether an impressively broad set of experiments. The paper is clearly written and presents a few new insights (e.g., correlation between peak onset and performance). 
Code and data would be made available, which would be valuable for the community.\", \"weaknesses\": \"The use of ID as an analysis tool for LLM layers is not an entirely new idea (e.g., https://arxiv.org/pdf/2402.18048).\\nMost of the results (e.g., the peaking of ID at middle layers, emergence of linguistically informative representations in those layers) have been shown before by means of other methods (e.g., mutual information or canonical correlation analysis). These should have been discussed in more detail under prior work.\", \"questions\": \"While the analyses show some interesting trends, it is difficult to tell how meaningful or significant the numerical differences are. Methods for analyzing LLM layers other than through ID could have been discussed in a prior work section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"__Most of the results (e.g., the peaking of ID at middle layers, emergence of linguistically informative representations in those layers) have been shown before by means of other methods__\\n\\nWe are happy to extend the related work with any relevant prior work you point out.\"}", "{\"comment\": \"__wrong claim in line 177~178 that \\mu has a generalised pareto distribution. I cannot find any resources claiming this specific distribution is a (generalized) pareto distribution including the original cited paper.__\\n\\nTheorem 2.2 in Denti et al, 2022 establishes that the $\\mu_i$ follow a generalized Pareto distribution. In particular, please see Eq (9) in Denti et al, 2022 and their Supplementary Materials for the proof. If the reviewer suggests, we can restate Theorems 2.2 and 2.3 from Denti et al., 2022 in our paper to make that more clear.\"}", "{\"comment\": \"__Could you explain why generative tasks were not included?__\\n\\nAnother good question. 
Our goal with the probing tasks was to find out what kind of information about the inputs is present in LM layer activations. Classification tasks are ideal for this. \\nA classification task on, e.g., sentiment, predicts a label from the layer representations $X_t$, ($t$ is the layer index). If sentiment classification achieves high accuracy on $X_t$, then we can say that information about sentiment is encoded in layer $t$.\\n\\nWhy not generative tasks? Again, we want to investigate the kind of information contained in each layer. Generating text from intermediate layers has been shown to be tricky and uninterpretable (Belrose, 2023); also, it raises the question, what kind of information would need to be generated for us to interpret each layer\\u2019s function? Instead, classification via probing can be performed straightforwardly in any layer. Hope this answers your question! We will clarify this point in the final version.\"}", "{\"title\": \"Responses to Questions\", \"comment\": \"__There seems to have second ID peak in the later layers over LLMs. Do you think this second ID peak might reveal additional insights?__\\n\\nWe have preliminary evidence that suggests that the second peak coincides with a phase in which the model is preparing to generate the next output token. For example, for some models performance in a surface-probing task starts increasing again under the second peak, suggesting that the model is processing actual \\\"tokens\\\" again. However, for space reasons, we chose to focus on the first, larger ID peak phase that is more consistent across all models.\"}", "{\"comment\": \"Thanks so much for your feedback, especially the references! We respond in detail to your comments below. In particular, we would greatly appreciate it if you could point us to any more missing work that you\\u2019re aware of.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thanks so much for your careful review. 
We\\u2019re glad you found our recommendations for future work clear, and our contributions valuable to the community. We\\u2019ll respond to each comment below.\"}", "{\"comment\": \"__In the data section, it is not very clear to me what does it mean to \\\"extract 5 non-overlapping partitions of 10k 20-token sequences\\\" and how the shuffled version is generated, can authors explain more about this?__\\n\\nFor each corpus, we sampled without replacement a total of 50k distinct 20-token sequences. We then divided these into partitions of 10k sequences each, which we used for the experiments. For the shuffled version, we first randomized the order of the tokens in each corpus, and then proceeded as with the non-randomized versions. We will clarify this in the paper.\"}", "{\"comment\": \"__The analysis focuses only on fully trained models and does not provide insights into how ID changes over time.__\\n\\nPlease see Fig 1 (right) for ID over training time for Pythia, the only model besides the slightly anomalous OLMo, to release intermediate checkpoints. We find here that ID increases as a result of learning. This lends support to our findings that higher ID corresponds to learned abstractions over the input data.\"}", "{\"comment\": \"__the submission does not explain for what else the gained insights can be used for or wether they are more useful than that at all.__\\n\\nThanks for this point; we respectfully disagree. In the Conclusion lines 511-515, we detail a path forward using our insights: \\\"ID profiles emerge as significant blueprints of model behaviour that could be used as proxies of model quality. ID information can be used for model pruning, or to choose which layers to fine-tune, or for model stitching and other model-interfacing operations.\\\"\\n\\n\\nWe can elaborate on the latter. Our results linking the ID peak to feature richness, abstraction, and inter-LM representational similarity provides a guideline for model interfacing. 
For instance, this would recommend ID peak representations as the first viable ones for semantic/syntactic downstream tasks and model stitching. And, it recommends ID peak representations for building CLIP-like models. In addition, an exciting real example of our results\\u2019 usefulness is in building encoding models of the brain. In this domain, follow-up work to our paper has already found ID peak layers to best model fMRI responses to natural language. We will discuss these important points more extensively in the final version.\"}", "{\"comment\": \"__GRIDE (Denti et al.) and Information Imbalance (Glielmo et al., 2020), to analyze four large language models (LLMs), this combination may fall short in terms of novelty.__\\n\\nTo the best of our knowledge, our paper is the first to apply both GRIDE and Information Imbalance to studying neural network representations (and in particular those of LLMs).\"}", "{\"comment\": \"Thank you again for taking the time to review our paper. As the discussion period ends today, we kindly ask that you confirm whether all your questions (especially concerning methodology) have been resolved. Please let us know if anything remains unclear!\"}", "{\"summary\": \"This work analyzes properties of representations of several LLMs through a few approaches: downstream probing, intrinsic dimensionality and information imbalance. The analysis is mainly developed around intrinsic dimension and they show LLMs typically have a few intrinsic dimension peaks across layers. Additionally, they suggest that those peaks indicate transition to abstract linguistic processing through a variety of analysis\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I like that authors combine evidence from a few different perspectives to demonstrate the relation between intrinsic dimension peak and transition to abstract processing. 
They also conduct experiments on a few corpora and a few models as well, which makes the claims more general and robust\", \"weaknesses\": \"The method section is weak and the explanation of intrinsic dimension computing is not enough given its importance in this work. I was not able to identify which variable is corresponding to the intrinsic dimension without going through the cited paper. It seems to be the variable $d$ which authors did not explain what it is.\\n\\nAdditionally, author made a wrong claim in line 177~178 that \\mu has a generalised pareto distribution. I cannot find any resources claiming this specific distribution is a (generalized) pareto distribution including the original cited paper.\", \"questions\": \"In the data section, it is not very clear to me what does it mean to \"extract 5 non-overlapping partitions of 10k 20-token sequences\" and how the shuffled version is generated, can authors explain more about this?\", \"in_section\": \"The ID peak marks a transition in layer function, I think the relation between ID peak and \\delta(l_i \\to l_first) is not very clear. It has very similar shape in OPT and somewhat in pythia, but LLAMA has a completely different curve. It is maximizing towards the end of the layer instead of the center of layer.\\n\\nIn section 4.2, authors also claim the relation between ID-peak and a few tasks. However, Figure (a) and (b) do not have very clear co-related trend between ID peaks and tasks' performance. In particular, task performance in Figure 5(b) seems to be monotonically increasing instead of peaking in the middle. 
Can authors justify more about this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"__(2) Please provide rationale for critical algorithmic designs, for example, please clarify why GRIDE is selected, and why the three alternative measures for comparing layer\\u2019s representation spaces are chosen.__\\n\\nThanks for the suggestion. We\\u2019ll justify our choices in the manuscript as follows:\\n\\nWe selected GRIDE because it allows probing, in a rigorous framework, the dependence of the ID on the scale. In a complex data landscape the ID at short distances is dominated by noise, while at large distances is affected by curvature effects and density variation. GRIDE allows selecting on a rigorous basis such an intermediate scale. It is a generalization of the TwoNN estimator (Facco et al 2017), which has enjoyed a lot of popularity in the ID estimation literature (Valeriani et al, 2023; Cheng et al, 2023; Chen et al, 2024; Doimo et al, 2024, among others). While TwoNN assumes local uniformity up to the 2nd nearest neighbor, GRIDE relaxes this assumption to produce unbiased ID estimates up to the $2^k$th nearest neighbor ($k$ being the scale). \\n\\nWe chose to test RSA and linear CKA also because they are broadly used in the analysis of the representations of DNNs (see Sucholutsky et al, 2023; Williams, 2024 for surveys). On the other hand, we are the first to use Information Imbalance to analyze neural net embeddings. This, as we underline in the paper, allows measuring _asymmetric_ information containment (RSA/CKA are symmetric). Such information asymmetries are related to the non-linearity of the manifold, and are not captured by linear methods such as RSA and CKA.\"}", "{\"comment\": \"__The use of ID as an analysis tool for LLM layers is not an entirely new idea (e.g., https://arxiv.org/pdf/2402.18048).__\\n\\nThanks for the reference. 
We were not aware of this paper and we will include it in the related work. Their emphasis is on using the local ID of specific inputs in specific layers as a truthfulness cue, which is very different from our emphasis on global ID as an indicator of general processing features of multiple LMs across layers. Like us, they also report ID profiles across layers for a specific model (confirming the overall pattern we detected), but the link we establish, for multiple LMs, between per-layer ID and properties such as linguistic processing, downstream performance and cross-layer and cross-model similarity is novel and, in our opinion, useful to understand how LLMs work.\"}", "{\"comment\": \"__The first paragraph of Asset Section C.1 (lines 809-836, in particular 828-829) mentions sensitivity of ID estimation w.r.t. to noise, small scales, density variations and curvature. That analysis suggests some sort of frequency decomposition integrated with the ID estimation.__\\n\\nGRIDE, the algorithm we are using, is one of the few capable of estimating the ID across multiple scales efficiently and robustly, making it well-suited for analyzing natural representations.\\n\\nIn general, though, yours is an excellent suggestion, which we will forward to our colleagues working specifically on the development of ID estimators. In the paper, we will discuss in more detail the criticalities related to the ID estimate, as also suggested by another reviewer. In particular, we will underline that other studies address the problem of estimating the ID on multiple scales in the presence of noise and curvature. This is a fundamentally difficult problem: as shown by the classical work of [Hein and Audibert], a key challenge is that \\u201cfor small sample sizes, it is impossible to distinguish between noise and high curvature\\u201d. 
\\n\\nSome works address this by using a multiscale SVD [Little et al] or identifying gaps in the spectrum of the local covariance matrix estimated on hyperspheres of increasing radii [Recanatesi et al.]. However, these spectral methods are computationally intensive, which limits their applicability to the representation of neural networks. \\n\\n\\nRecanatesi et al., A scale-dependent measure of system dimensionality \\\\\\nAnna V. Little et al, Multiscale Geometric Methods for Data Sets I: Multiscale SVD, Noise, and Curvature \\\\\\nHein and Audibert, Intrinsic Dimensionality Estimation of Submanifolds in R^d\"}", "{\"comment\": \"__The submission wrongly mentions PCA being linear (line 061, applies only to its original form) which leads to the quick conclusion to discard it.__\\n\\nHere, we are referring to linear (standard) PCA. We included this as an example of a popular dimensionality estimation method that is familiar to readers, to then bridge into the discussion on nonlinear estimators. To clarify that we\\u2019re not talking about nonlinear variants of PCA, we can change \\u201cPCA\\u201d -> \\u201clinear PCA\\u201d in the text. Or, if the reviewer prefers, we can remove this point altogether as it\\u2019s not essential to our paper.\"}", "{\"comment\": \"__(4) It is interesting that OLMo seems a bit of an outlier compared to the other 4 LMs, although it also exhibits the ID peak and other related properties. It would be useful to provide insights on why OLMo behaves differently from the other models, and shed light on patterns of any potential \\u201coutlier\\u201d LM.__\\n\\nWe agree that understanding the causes for OLMo's slightly outlying behaviour should be a priority. 
We did not find any obvious difference in data or architecture that could easily explain it, and we plan to dedicate future work to a more causal approach to observing the emergence of ID peaks (along the lines of what we do in the paper by considering the effect of size and training data amount).\"}", "{\"title\": \"Thanks + remaining questions\", \"comment\": \"Hi all, thanks again for your constructive reviews. As the discussion period is coming to an end, please let us know if anything remains unclear in our response.\"}", "{\"comment\": \"__A correlation analysis to other work would add more value. My first thought was a correlation to the IB method (e.g. Tishby et al. 2000 and Schwartz-Ziv & Tishby 2017), but this may not be the only or best choice.__\\n\\nThis is a good suggestion. Indeed the information imbalance has been shown to be an upper bound to the transfer entropy (see Del Tatto 2024), and estimating the relative information between the representations is at the basis of the analysis framework which brought to the formulation of the IB hypothesis. However, the representations analyzed in our work are those of the last token, and the sequence of these representations is not a Markov chain (only the concatenation of the representations of all the tokens would be a Markov blanket). Therefore, it is not straightforward to compare directly our results to those discussed in the literature on the IB. We will add a sentence mentioning this topic as worthy of detailed and dedicated investigation. \\n\\nNevertheless, in light of your feedback we\\u2019ve prepared an \\u201cIB-like\\u201d plot, which you can see here: https://osf.io/p5z8q?view_only=913370ebe056498791f3616fb65fbee6 . 
Likely due to the above differences, our plots don\\u2019t display a similar trend to the final model configuration in IB; rather they present the complementary view that, through the layers, informativity (inputs -> layer) decreases, and informativity (layer -> outputs) increases, with the ID peak marking a changepoint in processing patterns. This is in support of our current results. Please let us know whether you think this would be a good addition to the Appendix!\\n\\nAs for comparison to other methods, Fig H.1 reproduces trends in representational similarity using linearCKA. In terms of correlation analysis of our findings to work beyond the IB and representational similarity methods, we cannot think of other papers that are sufficiently similar\\u2013 this speaks to the novelty of our contribution. Let us know if you find anything we can compare to; we\\u2019d be happy to investigate!\"}", "{\"summary\": \"The submission is analytic work trying to correlate intrinsic dimension of intermediate NN/LLM representations with linguistic targets. Comparison of the method applied to different models allows insights into some of their learned structural differences.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The findings in bold letters at lines 406-407 and lines 425-426 may be useful to some researchers that need to train and/or select models. The fact that ID seems to change gradually over layers is interesting, but may have a simple explanation in the extreme averaging scale of these models.\", \"weaknesses\": \"Except for the few strengths mentioned above, the submission does not explain for what else the gained insights can be used for or wether they are more useful than that at all.\\n\\nThe analysis focuses only on fully trained models and does not provide insights into how ID changes over time. A correlation analysis to other work would add more value. My first thought was a correlation to the IB method (e.g. 
Tishby et al. 2000 and Schwartz-Ziv & Tishby 2017), but this may not be the only or best choice.\\n\\nThe submission wrongly mentions PCA being linear (line 061, applies only to its original form) which leads to the quick conclusion to discard it. This is puzzling as the research on non-linear PCA is quite diverse based on very different techniques and there's even early work using neural networks dating back to 1991 (Mark Kramer: \\\"Nonlinear PCA Using Autoassociative NNs\\\").\", \"questions\": \"I strongly advice to improve the submission w.r.t. the mentioned weaknesses. That helps both quality and reach.\\n\\nThe first paragraph of Asset Section C.1 (lines 809-836, in particular 828-829) mentions sensitivity of ID estimation w.r.t. to noise, small scales, density variations and curvature. That analysis suggests some sort of frequency decomposition integrated with the ID estimation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Weaknesses\", \"comment\": \"Thanks for your careful review! Here are some clarifications that resolve the points raised in the Weaknesses. Please let us know if there are more specific things we can do to clarify the ID computation, such as restate the Theorems in Denti et al., 2022.\"}", "{\"summary\": \"This work takes a high-level geometric approach to analyze intrinsic dimension (ID) of the representational manifold at each layer of a decoder-only Transformer LLM to understand how layer geometry relates to layer function. Although inspired by the earlier work of (Valeriani et al., 2023), this work greatly extends the models investigated to include five mainstay decoder-only LLMs, and added more extensive probing and downstream tasks on defined datasets to analyze ID profiles across layers. The resulting observations are different from those from (Valeriani et al., 2023). 
This work made quite a few interesting findings on detecting broad qualitative patterns, and provides useful guidance for future research towards interpretability, analysis of model behavior and quality, and model pruning and layer-specific fine-tuning etc.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1)\\tAlthough inspired by the earlier work of (Valeriani et al., 2023), this work greatly extends the models investigated to 5 distinct mainstay transformer-decoder-only LLMs and added more extensive probing and downstream tasks on defined datasets to analyze ID profiles across layers. Hence, the conclusions drawn in this work are verified across various models, datasets, and tasks, making the findings more convincing.\\n\\n(2)\\tThe comparisons to related works, esp. (Valeriani et al., 2023) which inspires this work, are clearly presented, hence the contributions of this work are clear and solid.\\nThe verification of the emergence of a central high-dimensionality phase, and analysis of language processing behavior and performance during the high-dimensionality phase are quite thorough.\\n\\n(3)\\tThe analysis in Conclusion demonstrates that many findings in this work align with prior works and concurrent works. The paper clearly summarizes insights of guidance for future research. The Appendix provides detailed experimental setup and additional results. 
And finally, the analysis of potential applications of the findings is valuable to the research community.\\n\\n(4)\\tOverall, the paper is clearly written and easy to follow.\", \"weaknesses\": \"(1)\\tAlthough the paper is overall clearly written, please make sure that every symbol used needs to be clearly defined when it first appears, e.g., d in Section 3.4.\\n\\n(2)\\tPlease provide rationale for critical algorithmic designs, for example, please clarify why GRIDE is selected, and why the three alternative measures for comparing layer\\u2019s representation spaces are chosen. \\n\\n(3)\\tCurrently, k is still selected based on visual inspection. It would be useful to propose methods that can automatically select k.\\n\\n(4)\\tIt is interesting that OLMo seems a bit of an outlier compared to the other 4 LMs, although it also exhibits the ID peak and other related properties. It would be useful to provide insights on why OLMo behaves differently from the other models, and shed light on patterns of any potential \\u201coutlier\\u201d LM.\", \"questions\": \"(1) Please check the questions listed under Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"__In section: The ID peak marks a transition in layer function, I think the relation between ID peak and \\\\delta(l_i \\\\to l_first) is not very clear. It has very similar shape in OPT and somewhat in pythia, but LLAMA has a completely different curve. It is maximizing towards the end of the layer instead of the center of layer.__\\n\\nAre you referring here to Figure 2? In this figure, for all models, \\\\delta(l_i \\\\to l_first) reaches a (local) peak within the ID peak span, followed by a local minimum either at the very end of the ID peak span or right after it. 
We would be grateful for further clarifications about the difference you see.\"}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"Thanks so much for your thoughtful feedback! Please find below our responses to your comments in the Weaknesses:\"}", "{\"comment\": \"__While the analyses show some interesting trends, it is difficult to tell how meaningful or significant the numerical differences are.__\\n\\nThe trends that we show are meaningful with high statistical confidence. We have added confidence intervals based on multiple iterations (5 random data splits) of all experiments and significance scores where appropriate. We emphasize however in the paper that our interpretation of the results is essentially qualitative in nature and supported by the fact that a number of different models, datasets, experiments and measures provide converging quantitative evidence for the same high-level picture of how LMs process language. Still, we welcome ideas for further quantitative tests we'd be happy to run.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"__(3) Currently, k is still selected based on visual inspection. It would be useful to propose methods that can automatically select k__.\\n\\nWe selected a value of $k$ that is at the same time not too large (to retain the local nature of the estimator), and that minimizes the variation of the ID with respect to this parameter. Visually, this corresponded to the lowest $k$ for which the ID reaches a maximum. We also verified that the ID profiles were robust to the choice of this parameter, if varied around the reference value (fig C.2). We will consider automatic selection of $k$ as an important direction for future work.\"}", "{\"comment\": \"__The study is limited to a narrow range of LLMs in terms of scale. 
Evaluating models of varying sizes (e.g., smaller models alongside large ones) would offer a more comprehensive understanding__\\n\\nWe agree that it's important to evaluate different-sized models. This experiment is in \\\"Influence of model size on ID and probing tasks\\\", Appendix D, and Fig D.1, and a pointer to the Appendix section is in line 260.\\n\\nWe replicated the finding that the ID peak corresponds to semantic/syntactic processing in one larger and one smaller Pythia model (Fig D.1 middle, right). The differences in ID by model size is in Fig D.1 (left). For different sizes, the ID profile over layers is correlated, but the magnitude is larger for larger models.\"}", "{\"comment\": \"__In Section 4.1, the choice of pre-training datasets for evaluation is also a limitation\\u2026 Testing on unseen datasets would be crucial to evaluate the robustness and generalizability of the observed patterns, especially in real-world applications where unseen data is the norm.__\\n\\nThanks for this point. Our question in this paper concerns how LMs process in-distribution data. This is a key feature of our setup, allowing us to make statements about the LMs\\u2019 learned behavior on the generic language they were trained on. Crucially, we do not claim that the empirical ID profiles we found generalize to out-of-distribution data. We can make this statement more clear in the Discussion. \\n\\nIn addition, note that it\\u2019s difficult to tell what would be unseen data for these models. At this time, only the OLMo training data are fully publicly available. What comes closest to unseen data in our experiments could be the bookcorpus, which is a unique textual typology. Intriguingly, we observe that bookcorpus is also the dataset where ID peaks tend to be the least pronounced. 
This observation, together with the fact that the peaks almost disappear for shuffled data, suggests that peak ID size correlates with the degree to which processing data follow training distributional statistics.\"}", "{\"comment\": \"__In section 4.2, authors also claim the relation between ID-peak and a few tasks. However, Figure (a) and (b) do not have very clear co-related trend between ID peaks and tasks' performance. In particular, task performance in Figure 5(b) seems to be monotonically increasing instead of peaking in the middle. Can authors justify more about this?__\\n\\nWe agree that the patterns we observe can be slightly noisy; still, _for all tasks and models_, we see in Figure 5(b) a phase of fast increase in task performance that ends under the ID peak. After this phase, task performance tends to plateau: it remains constant, or it increases or decreases very slowly. This is in accordance with our interpretation that the ID-peak-span is where the model performs a full linguistic processing of the input, resulting in representations that contain useful information for the syntactic/semantic probing tasks, as well as downstream tasks (sentiment/toxicity).\\n\\nNote that we do not claim that, once this linguistic processing is performed, the model gets rid of the information it provided: indeed, we expect it to use it in the subsequent layers to further refine its analysis and make its guess about the continuation. This is fully compatible with the patterns we observe: we would indeed be surprised if the probing-task performance dramatically decreased after the peak.\"}", "{\"comment\": \"__In your analysis (Figure 4), you observed that Pythia and OPT exhibit very similar representations. Could this similarity be attributed to pre-training on similar datasets? If so, how might this influence your findings, and have you considered controlling for dataset overlap to isolate structural factors more effectively?__\\n\\nGood question. 
Our work gives a robust characterization of consistent representational profiles in different models tested on different datasets. We leave to future work a _causal_ understanding of the sources of this consistency (training data, objective, architecture). As LLMs are pre-trained using large computational resources and their training data often are not public, this is not entirely trivial. Interestingly, concurrent work has shown that representations of models are converging (Huh et al, 2024) not only in language, but in other modalities as well. Our work links this convergence to the high-ID layers where linguistic abstraction happens. But, in the general case, the cause of representational similarity between different models is still unknown and is an active area of research (Moschella et al, 2023; Huh et al, 2024; see UniReps and Re-align workshops).\"}", "{\"comment\": \"(1) Although the paper is overall clearly written, please make sure that every symbol used needs to be clearly defined when it first appears, e.g., d in Section 3.4.\\n\\nThank you for pointing out this oversight, we are adding this in Section 3.4.\"}", "{\"title\": \"Response to Weaknesses\", \"comment\": \"Thanks for your valuable feedback! Please find below our responses to the points raised in Weaknesses:\"}" ] }
0eu837jdBD
Autoencoder-Based Hybrid Replay for Class-Incremental Learning
[ "Milad Khademi Nori", "IL MIN KIM", "Guanghui Wang" ]
In class-incremental learning (CIL), effective incremental learning strategies are essential to mitigate task confusion and catastrophic forgetting, especially as the number of tasks $t$ increases. Current exemplar replay strategies impose $\mathcal{O}(t)$ memory/compute complexities. We propose an autoencoder-based hybrid replay (AHR) strategy that leverages our new hybrid autoencoder (HAE) to function as a compressor to alleviate the requirement for large memory, achieving $\mathcal{O}(0.1 t)$ at the worst case with the computing complexity of $\mathcal{O}(t)$ while accomplishing state-of-the-art performance. The decoder later recovers the exemplar data stored in the latent space, rather than in raw format. Additionally, HAE is designed for both discriminative and generative modeling, enabling classification and replay capabilities, respectively. HAE adopts the charged particle system energy minimization equations and repulsive force algorithm for the incremental embedding and distribution of new class centroids in its latent space. Our results demonstrate that AHR consistently outperforms recent baselines across multiple benchmarks while operating with the same memory/compute budgets.
[ "Catastrophic Forgetting", "Class-Incremental Learning", "Continual Learning", "Task Confusion." ]
https://openreview.net/pdf?id=0eu837jdBD
https://openreview.net/forum?id=0eu837jdBD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nWjPvaLKH1", "aiP4CsAKRo", "Envel01gpp", "3mapljrvs2", "0WXOnfIYnP" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730596017795, 1730552434360, 1729243281723, 1730040637339, 1731640362544 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2656/Reviewer_zWHK" ], [ "ICLR.cc/2025/Conference/Submission2656/Reviewer_ykiH" ], [ "ICLR.cc/2025/Conference/Submission2656/Reviewer_BFBo" ], [ "ICLR.cc/2025/Conference/Submission2656/Reviewer_R7ia" ], [ "ICLR.cc/2025/Conference/Submission2656/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel approach to CIL called Autoencoder-based Hybrid Replay (AHR). This method combines exemplar and generative replay techniques to address key challenges in CIL, such as task confusion and catastrophic forgetting (CF). The hybrid autoencoder (HAE) serves as both a discriminative and generative model, storing data in a compressed latent space with minimal memory (O(0.1t)) compared to traditional exemplar methods (O(t)). The use of charged particle system energy minimization (CPSEM) and a repulsive force algorithm (RFA) aids in optimal placement of class centroids in the latent space. 
The experimental results indicate that AHR consistently outperforms existing baselines across five benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The combination of generative and exemplar replay in a single system that minimizes memory while maintaining high performance is novel.\", \"The proposed method achieves a significant reduction in memory requirements (O(0.1t)), which is crucial for scalability in CIL.\", \"Comprehensive experiments across multiple benchmarks and comparisons with state-of-the-art (SOTA) methods demonstrate the robustness of AHR.\"], \"weaknesses\": \"-The paper lacks exploration of real-world applications or more complex, dynamic scenarios beyond standard benchmarks.\\n-Performance could be impacted if the autoencoder's compression and reconstruction capabilities are not well-optimized.\\n- While memory reduction is emphasized, the impact of this method on significantly larger-scale datasets or more diverse data distributions is not detailed.\", \"questions\": [\"What are the specific challenges encountered when integrating CPSEM and RFA into existing architectures?\", \"Have you considered applying AHR to non-vision data, such as text or audio?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an autoencoder-based hybrid replay (AHR) strategy that leverages our new hybrid autoencoder (HAE) to function as\\na compressor to alleviate the requirement for large memory, achieving O(0.1t) at the worst case with the computing complexity of O(t) while accomplishing state-of-the-art performance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well organized.\\n\\n2. I appreciate the extensive experiments.\\n\\n3. 
The idea of modeling the energy dynamics within the system akin to charged particles is interesting.\", \"weaknesses\": \"1. The writing of Section 2 (\\\"OUR STRATEGY: AUTOENCODER-BASED HYBRID REPLAY (AHR)\\\") is confusing. Please outline the motivations of each step and explain why it makes sense.\\n\\n2. The technique contribution is fuzzy. I want to know what technique used in this paper and what technical challenge or a novel idea in the technique of proposed method.\", \"questions\": \"I have read this paper carefully. Unfortunately, this paper is totally out of my research area. Therefore, I cannot capture the brilliance of this paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of Class-Incremental Learning by introducing a Hybrid Autoencoder (HAE). The proposed model is designed for both exemplar replay during incremental training and classification at inference. The autoencoder consists of two components: an encoder, which is trained to minimize the Euclidean distance between the latent representation and the corresponding class centroid, and a decoder, which is trained to minimize the reconstruction error between the input and output images. Both the encoder and decoder are trained with a distillation loss on the previous task to mitigate activation drift.\\n\\nAt the encoder level, the class centroids serve as anchors in the latent space, guiding the latent representations towards their respective classes. These centroids are initialized before training using the Charged Particle System Energy Minimization (CPSEM) method, which ensures that the centroids are well-separated. 
After training, a nearest-mean classification rule is applied to classify test images based on the proximity of their latent representations to these centroids.\\n\\nIn the post-training phase, the encoder's latent space output is used to populate a replay buffer with latent representations of the current task samples, following a herding strategy. These representations are replayed in the next task by feeding them into the decoder trained on the previous task, which generates the corresponding images. These generated images are then used for training on the new task. Instead of storing images, as is typically done in incremental learning, the proposed method reduces memory requirements by storing only the latent space representations.\\n\\nThe authors compare their approach to several incremental learning methods on benchmarks such as MNIST, Balanced SVHN, CIFAR-10, and miniImageNet. They also analyze the method's resource consumption, evaluate different decoder sizes, and provide an ablation study by comparing its performance when real images are used during the replay phase instead of the generated ones.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The Charged Particle System Energy Minimization (CPSEM) method for initializing class centroids for the encoder training is novel and interesting.\", \"Several comparisons are carried out in the experimental section, and the method shows good performance on the benchmarks and methodologies used for comparison.\", \"Detailed analysis on resource consumption is provided.\", \"The ablation study comparing the proposed AHR with AHR using original images (AHR-lossless) highlights that the quality of the images generated by the encoder is sufficiently good for replay, as AHR with original images achieves similar performance. I appreciated this analysis.\"], \"weaknesses\": \"Overall, I believe the presentation of the paper requires significant improvement. 
Below are my major concerns regarding the presentation:\\n\\n- (a) The complexity analysis in the introduction needs to be clarified and expanded. The notation $O(0.1t)$ is incorrect according to the definition of Big-O notation. What does this represent? If the authors intend to convey that memory is saved by storing the latent space representation, I suggest incorporating both the latent space dimension and image size into the complexity analysis. Additionally, the term $e$ in $O(cte)$ is not defined and requires clarification.\\n\\n - (b) The notation throughout the paper is difficult to follow, with multiple indices used unnecessarily (e.g., in Equation 1). This excessive use of notation makes the paper hard to read. I suggest simplifying the notation wherever possible.\\n\\n- (c) Equation 1 seems to imply that all examples from previous tasks are needed to minimize the reconstruction error. However, I understand that this is not the case\\u2014some examples are real images, while others are generated. The replay buffer should be explicitly highlighted to clarify this in the equation.\\n\\n- (d) The three pseudocode blocks on page 4 make the methodology difficult to follow. Including all three pseudocode blocks on a single page compresses the accompanying methodology description into less than half a page. As a result, some LaTeX formulas in the main text break across two lines, further increasing the difficulty of reading.\\n\\n - (e) The organization of the paper should be reconsidered. Given the limited space for submission, dedicating more than two pages to the literature review while allocating just over one page to the methodology does not allow for a proper description of the proposed approach. 
I suggest moving the extended literature review to the appendix and presenting a more concise version in the main paper.\\n\\nRegarding the methodology and experimental section, my major concerns are as follows:\\n\\n- (f) The introduction of the Charged Particle System Energy Minimization (CPSEM) for initializing class centroids is interesting but requires additional explanation. It is unclear why this type of initialization benefits the autoencoder and how it relates to the Coulomb interaction energy . While I do not expect a full background on the calculus of variations for minimizing energy, more mathematical details\\u2014even in the appendix\\u2014would be helpful. An analysis of how centroids are distributed in the latent space is required to underline why the proposed strategy is effective. Furthermore, the CCE (class centroid embedding) placement is explained only through pseudocode, with no accompanying description. At a minimum, the operations performed in Algorithm 2 should be explained in words to provide intuition, especially for readers unfamiliar with the physics-based intuition behind this algorithm.\\n\\n- (g) Regarding CCE, why is this initialization considered effective? If the goal is to initialize centroids such that the class centroids are distant from each other, why not simply use the K-means algorithm? Alternatively, why not select class centroids as the latent vectors for each class that are the most distant from each other in terms of Euclidean distance, in a similar way as performed with hard negative sampling?\\n\\n- (h) The usage of the latent space for memory reduction and the decoder for latent replay is not novel. For example, Ayub et al. (ICLR 2021) [1] employ the encoder for storing latent representations and replay these latent representations in subsequent incremental learning steps. When the memory budget is reached, the latent representations are compressed into centroids and covariances. 
A comparison with their approach is necessary. The storage of latent representations for incremental learning and efficient memory replay with autoencoder is also explored in [2].\\n\\n- (i) Comparison in Table 2. 1) Some comparisons are unnecessary since the methods the authors compare to perform different tasks. For instance, Prediction Error Based Classification (PEC) [3] is designed for online continual learning (single-pass data), while the authors address the problem of offline incremental learning. It is clear that the harder setting for which PEC is designed results in lower performance compared to the authors' method. The authors should compare their approach only with offline class incremental learning methods under the same conditions. 2) Since the authors' method operates in an offline incremental learning setting, they should compare it with recent exemplar-based class incremental approaches, such as X-DER [4] and MEMO [5]. Additionally, the comparison should consider using a larger and more realistic backbone, such as ResNet-18, which is now commonly evaluated with more parameters and a larger latent space [4][5]. The paper should also evaluate how the method performs with higher-resolution images (e.g., 224x224) on a dataset like ImageNet100 [5].\\n\\n- (l) In Table 3, the authors report the wall-clock time for training their methods. They state that training takes about 8 hours on CIFAR-100 with ResNet-32, which seems excessive. In FACIL [6], joint training on a not particularly novel GPU requires less time, as only about 400k parameters need to be optimized. What are the timings for joint training with the same epoch budget? An incremental learning method should be more efficient than joint training. Additionally, the authors should specify the device used for the experiments when reporting training time.\\n\\n[1] A. Ayub and A. 
Wagner, \\u201c{EEC}: Learning to encode and regenerate images for continual learning,\\u201d in International\\nConference on Learning Representations, 2021.\\n\\n[2] Caccia, L., Belilovsky, E., Caccia, M. &amp; Pineau, J.. (2020). Online Learned Continual Compression with Adaptive Quantization Modules. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research.\\n\\n[3] Micha\\u0142 Zaj \\u02dbac, Tinne Tuytelaars, and Gido M van de Ven. Prediction error-based classification for\\nclass-incremental learning, in ICLR 2024\\n\\n[4]Zhou, Da-Wei and Wang, Qi-Wei and Ye, Han-Jia and Zhan, De-Chuan, A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning, in ICLR 2023 \\n\\n[5] Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, Angelo Porrello, Simone Calderara. Class-Incremental Continual Learning into the eXtended DER-verse, in TPAMI 2022 \\n\\n\\n[6] Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost van de Weijer Class-incremental learning: survey and performance evaluation, TPAMI 2022\", \"questions\": \"Overall, I believe the paper has significant issues with presentation (as outlined in points [a-e] of the weakness section), which necessitate a major reformatting. Additionally, regarding other concerns, the novelty appears limited to performing classification at the encoder level and the introduction of CPSEM for initializing class centroids. While the latter is a novel contribution, it is neither well-explained nor well-motivated (as noted in points [f-g] of the weakness section). The experimental section, particularly the method comparison, needs refinement by considering recent related work on the proposed approach, utilizing higher-resolution benchmarks, and employing architectures with a larger latent space (as indicated in points [h-i] of the weakness section). 
Moreover, the details about the training resources should be clarified (point [l] in the weakness section).\\n\\nI believe the paper has the potential for significant improvement for a future submission. To enhance its novelty, the authors should focus more on the description and proposal of the class centroid initialization, which could provide both theoretical and empirical insights. However, as previously mentioned, these insights are lacking in the current version. Additionally, improving the experimental section would further strengthen the submission.\\n\\nConsidering all the above, I recommend rejecting the current submission. The paper is not yet ready for publication and requires significant revisions. I suggest making these improvements and submitting it to a different venue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Hybrid Autoencoder (HAE) and Autoencoder-Based Hybrid Replay (AHR) strategies to reduce the memory burden for CIL, especially for replay-based approaches. HAE combines both discriminative and generative modeling to handle classification and replay tasks. It employs Charged Particle System Energy Minimization (CPSEM) equations and the Repulsive Force Algorithm (RFA) to manage class separation within its latent space, enabling class identification using Euclidean distance. AHR integrates exemplar and generative replay strategies by storing samples in the latent space, which significantly reduces memory usage. Its decoder is designed to memorize training data, allowing for effective replay without the typical issues of hazy pseudo-data found in other generative approaches. 
Simulations in various benchmark datasets also validate the hypothesis.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) Easy to follow and addresses the important topic of computational burden for continual learning algorithms\n2) The motivation behind the Hybrid replay is well written by contextualizing the current works of literature along with their gaps\", \"weaknesses\": \"1) Inconsistent notation\n: In the explanation of AHR (184) you refer to T_l, but in the algorithm it is T_i; the same holds for P\n : what does the * refer to in algorithm 1?\n : What is J_l?\n : ^ in the explanation and ' in Figure 1 are used interchangeably for the generated output\n : what is \\mathcal{T} in Figure 1?\n 2) Is there any explanation for how the memory is reduced to 0.1t?\n\n 3) It is claimed that the complexity reduces to 10%, but no empirical evidence is provided to validate that hypothesis\n4) Evaluation metric: It is unclear to readers (line 373) whether the accuracy represents the accuracy on only the last task after training on all tasks or the average over all previous tasks.\n5) How does the number of exemplars decrease over time? You are representing the exemplars in latent space? Does it mean reducing the number of classes?\n(Table 3) Why the different numbers of epochs for AHR and others? Please be clear on the size of latent and raw: is 150 (latent) better than 20 (raw)?\n6) It would be clearer to the readers if there was some explanation of how CPSEM and RFA create incremental embeddings\n7) The main objective of this paper is to reduce the size of exemplars in memory. In the related work section, the authors focus mainly on describing current replay mechanisms without mentioning how the current strategies fall short in reducing the size and their relation to AHR\n\n\n8) There is no comparative analysis of the work with state-of-the-art replay methods such as:\n I) Rolnick, David, et al. 
\"Experience replay for continual learning.\" Advances in neural information processing systems 32 (2019).\n\n II) Buzzega, Pietro, et al. \"Dark experience for general continual learning: a strong, simple baseline.\" Advances in neural information processing systems 33 (2020): 15920-15930.\n\nwhich makes it challenging to assess the significance of the work in the literature.\n\n### Comments on evaluation: \nI am mainly concerned about the accuracies for CIFAR100 and miniImageNet. There are various works [FeTrIL [1] by Petit, Gr\u00e9goire, et al., FeCAM [2] by Goswami et al.] utilizing ResNet-18/32 that achieve accuracy higher than 65% even in exemplar-free settings. I wonder why, with exemplars, the model is not able to maintain that accuracy.\n\n[1] Petit, Gr\u00e9goire, et al. \"Fetril: Feature translation for exemplar-free class-incremental learning.\" Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2023.\n\n[2] Goswami, Dipam, et al. \"Fecam: Exploiting the heterogeneity of class distributions in exemplar-free continual learning.\" Advances in Neural Information Processing Systems 36 (2024).\", \"questions\": \"please refer to weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
0er6aOyXUD
Evaluating Robustness of Reward Models for Mathematical Reasoning
[ "Sunghwan Kim", "Dongjin Kang", "Taeyoon Kwon", "Hyungjoo Chae", "Jungsoo Won", "Dongha Lee", "Jinyoung Yeo" ]
Reward models are key in reinforcement learning from human feedback (RLHF) systems, aligning the model behavior with human preferences. Particularly in the math domain, there have been plenty of studies using reward models to align policies for improving reasoning capabilities. Recently, as the importance of reward models has been emphasized, RewardBench is proposed to understand their behavior. However, we figure out that the math subset of RewardBench has different representations between chosen and rejected completions, and relies on a single comparison, which may lead to unreliable results as it considers only an isolated case. Therefore, it fails to accurately present the robustness of reward models, leading to a misunderstanding of its performance and potentially resulting in reward hacking. In this work, we propose a direction for designing benchmarks that reliably evaluate reward models in mathematical reasoning. We conduct comprehensive analyses to validate whether our design effectively reflects the robustness of reward models. The results underscore that the benchmark designed to reduce the possibility of reward hacking and employ one-to-many comparisons strongly correlate with the results of optimized policy, whereas the existing benchmark shows almost no correlation. Furthermore, by analyzing through the lens of reward overoptimization, we show that the design involving multiple comparisons results in a significantly more reliable benchmark. We make our code and data publicly available.
[ "mathematical reasoning", "RLHF", "reward models", "reward overoptimization", "language models", "benchmark" ]
Reject
https://openreview.net/pdf?id=0er6aOyXUD
https://openreview.net/forum?id=0er6aOyXUD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wLYTI5rfO3", "vIJg819M7U", "vHz9bQAQ4F", "vEJPfjVbE0", "v5cN5w7cIh", "smT1g1dEKz", "skNacAJdtF", "rfx8jdYskD", "ps23xP10hY", "ofn3ua2rVe", "nGpr293YY8", "mFtvwsRzpZ", "m6rSWq4n0H", "lQeMuR5PnA", "jfHFVpLRNk", "jWEEYHmwXM", "j8LLREvVjF", "gOMQ9bRHRp", "gGywYVjZnw", "fS02EotDku", "ekHcgCbKm9", "eCMJCB0ilm", "dg5hYafwsm", "abuztpNDsq", "Z3smjyhQji", "YjFRLfsyJF", "Y4m8jY9s3M", "T5lgiTMXLp", "T41QyEN3Zk", "Rdro1F55CA", "QMXHya6Q69", "PtFI3waByp", "PAKxM08cKQ", "NFfHKVB7G0", "KAziwtViRe", "JpNvkSq585", "IqbXL9usru", "Ef66mGWvrc", "Dl0YQZR9ew", "CKAJ8I4c2b", "AqnK2Gbcxe", "ANLE6WyqFg", "5orKVV6ely", "44wuSQVLKa", "2wp5YW6quH", "0Z7G9Cz9Ek" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732799246715, 1731858003009, 1732799471048, 1731859056784, 1733152443090, 1731861499248, 1732382047630, 1732246988405, 1731858592461, 1730718921725, 1732215824404, 1732216299265, 1733152315026, 1732756325005, 1732799194311, 1731860336847, 1731858956334, 1734791342433, 1732177022017, 1730958567795, 1730484120244, 1731862976826, 1732375667877, 1731860114276, 1731859674457, 1733152410131, 
1733152469951, 1732604123127, 1732216350138, 1737524038657, 1732644539903, 1730373685466, 1731860674547, 1731859404837, 1732216190002, 1731862684586, 1731860440789, 1732799132882, 1732220316781, 1733152365262, 1731859305213, 1731860229398, 1730530201600, 1733168174886, 1732381732939, 1732216241884 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_BzTe" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_1Mfa" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_BzTe" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Area_Chair_bsE6" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_1Mfa" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_bxC4" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_K97G" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_K97G" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_TTuB" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_bxC4" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_BzTe" ], [ "ICLR.cc/2025/Conference/Submission10281/Reviewer_BzTe" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ], [ "ICLR.cc/2025/Conference/Submission10281/Authors" ] ], "structured_content_str": [ "{\"title\": \"Sincere Request for Review of Our Responses and Revised Paper\", \"comment\": \"Dear Reviewer 1Mfa,\\n\\nThank you once again for your time and effort in providing insightful feedback on our paper. As you highlighted, we have addressed the expandability to other domains by including additional details in Appendix A of the updated PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))** . Furthermore, we have provided responses to your additional questions.\\n\\nWe hope that our responses sufficiently address your concerns. Should there be an opportunity for further discussion during the rebuttal period, we would be delighted to engage and provide any clarifications or further insights.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"title\": \"General Response to All Reviewers\", \"comment\": \"We greatly appreciate all reviewers for their time in reviewing our paper and for providing thoughtful suggestions to make our paper stronger. 
We would like to address comments and suggestions commonly raised by reviewers.\\n\\n\\n___\\n## Clarification for Our Main Contributions \\nThrough the reviewers\\u2019 comments, we realized that the paper placed too much emphasis on the new benchmark, RewardMATH. However, the purpose of RewardMATH\\u2014a simplified benchmark representing our proposed design\\u2014was to validate the reliability of our benchmark design in comparison to RewardBench, rather than to serve as a widely adopted benchmark in future studies. To clarify this point, we slightly revised Title, Abstract and Introduction to reduce the emphasis on RewardMATH, aligning more closely with our original intent.\", \"the_contributions_of_the_paper_are_as_follows\": [\"We identify the issues with existing benchmark for reward models (i.e. RewardBench)\\u2014such as poor quality, vulnerability to reward hacking, and the risk of misjudgement due to isolated cases\\u2014and introduce a new design for reliable benchmark for reward models, focusing on reducing the risks of reward hacking and employing multiple comparisons to effectively estimate the degree of reward overoptimization.\", \"We conduct extensive experiments validating that the scores on RewardMATH strongly correlates with the performance of optimized policy and effectively estimates the degree of reward overoptimization. 
These results pave the way for future directions in evaluating reward models more reliably.\", \"Furthermore, **_our key insights_** can be described as follows:\", \"**In benchmarks for reward models, a significant difference between the chosen and rejected responses shows low correlation with downstream tasks due to the potential for reward hacking.**\", \"**One-to-one comparisons may yield inaccurate results depending on the preference pairs, which in turn results in low correlation with downstream tasks.**\", \"**A benchmark employing multiple comparisons can effectively capture reward overoptimization, indicating its ability to assess the robustness of reward models.**\", \"___\", \"## A Scope Limited to the Mathematical Reasoning\", \"We believe this concern stems from a misunderstanding of our contribution. If our main goal were to propose a well-crafted new benchmark, focusing solely on mathematics might limit the scope of the research; however, our goal is to provide insights into future directions for constructing reliable benchmarks for reward models. And it is fairly intuitive that a reliable benchmark should not be vulnerable to reward hacking and that conducting multiple comparisons, rather than one-to-one-comparisons, provides a more reliable evaluation of reward models. 
So, we trust that it is important to thoroughly validate our design through at least one specific domain that allows for in-depth experiments and analysis.\", \"Here, the reasons for choosing mathematical reasoning are summarized.\", \"**One of the tasks where reward models are most extensively used is mathematics reasoning.** In mathematical reasoning tasks, reward models are widely utilized during training to enhance reasoning capabilities and during inference-time using reward-based techniques such as best-of-$n$ (BoN) sampling or Monte Carlo Tree Search (MCTS).\", \"**Mathematical reasoning includes a clear human preference.** In mathematical reasoning, human preference can be easily defined as correctness, allowing us to focus effectively on the analysis without the need to deliberate over true preferences.\", \"___\", \"## Updates in the Revised Draft\", \"The updated draft (click to see the pdf) also includes the following enhancements:\", \"We have updated the Title, Abstract and Introduction to clarify our main contributions.\", \"A detailed explanation of MATH500 has been added to line 200 and Appendix B.1.\", \"A comprehensive explanation of why PPO experiments were not conducted in a non-synthetic setup is discussed in Appendix B.4.\", \"The experiments of policy fine-tuning methods beyond BoN sampling have been featured in Appendix C.5.\", \"Additional experimental results based on factors considered in benchmark design have been integrated into Table 12, with analysis details added to Appendix C.4.\"]}", "{\"title\": \"Sincere Request for Review of Our Responses and Revised Paper\", \"comment\": \"Dear Reviewer TTuB,\\n\\nThank you once again for your time and effort in providing insightful feedback on our paper. As you mentioned, we conduct additional experiments and incorporated the results into Table 12 of the updated PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))** . 
Furthermore, we have provided responses to your additional questions.\\n\\nWe hope that our responses sufficiently address your concerns. Should there be an opportunity for further discussion during the rebuttal period, we would be delighted to engage and provide any clarifications or further insights.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"title\": \"Response to Reviewer bxC4 [3/3]\", \"comment\": \"___\\n### **Q7.**\\n> Does the benchmark evaluate cases where both the rejected and chosen response arrive at the same answer, but the rejected answer has the wrong steps?\\n\\nNo, we initially considered including rejected solutions with correct answers but wrong reasoning steps; however, we decided not to include them for the following reasons:\\n\\n* **Challenges in Data Collection:** Our primary focus was to analyze the design of reliable benchmarks for reward models. Therefore, when collecting data, we filtered based on the correctness of the final answer, followed by a manual inspection conducted by humans (authors) to ensure data quality. If we were to include such cases, however, this process would become more costly. \\n* **Rarity of Such Cases:** In the MATH dataset, True/False questions are rare, as are cases where the answer is correct but the reasoning is incorrect. For instance, in the problem where the task is to count the possible values of $n$, the correct solution may involve $n=4,5,6$, resulting in a final answer of 3\\\\. However, a model generated solution consisting of $n=2,3,4$ could still arrive at the same final answer of 3\\\\.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer K97G,\\n\\nThank you again for your thoughtful feedback on our paper. Your feedback has been invaluable in improving our work. As today is the final day for discussion, we would be delighted to provide any clarifications or further insights if needed. 
Please let us know if there are any remaining concerns we can address.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"title\": \"Response to Reviewer TTuB [1/2]\", \"comment\": \"Dear Reviewer TTuB,\\n\\nWe appreciate your comments and feedback on our work. We will address the questions raised by the reviewer below.\\n___\\n### **W1.**\\n> ... However, is the comparison in terms of mathematical reasoning appropriate, given that RewardMATH is specifically designed for this purpose? If RewardBench is indeed the most comprehensive eval set even for mathematical reasoning tasks prior to RewardMATH, it would be helpful to clarify this point.\\n\\nThank you for highlighting the points that required clarification. While reward models are used in a wide range of studies, there have been relatively few attempts to analyze or interpret them. So, RewardBench was the only available benchmark for reward models, and no benchmark existed that focused on specific domains, including mathematics. Consequently, the research community primarily relied on RewardBench to assess reward models. However, we argue that top-ranked models on the math subset of RewardBench may be vulnerable to reward hacking, and its one-to-one comparisons are unreliable. To address these limitations, we start with a focus on the analysis of reward models for mathematical reasoning tasks, as it provides a clear definition of human preference, without the need for deliberation on true preferences. We then propose guidelines for designing and structuring benchmarks tailored to reward models and validate them through comprehensive experiments.\\n\\n___\\n\\n### **W2.**\\n\\n> The benchmark appears to lack cases where reward models must distinguish between correct solutions of varying quality, such as those missing reasoning steps. \\n\\nAs you mentioned, many-to-many comparisons are indeed preferable to one-to-many. 
However, as we outlined in Section 3.2 and the Limitations (Appendix A), gathering correct solutions requires significant human resources, so we focused on collecting a variety of rejected solutions. While many-to-many comparisons are undoubtedly the final goal, we emphasize that the most efficient next step, given the current reliance on RewardBench (i.e. one-to-one) as the primary benchmark, is to adopt a one-to-many comparison. We believe that this approach should advance consistently across all domains, not just in mathematics.\\n\\n> It is also unclear whether 500 samples is sufficient to cover diverse mathematical reasoning tasks.\\n\\nRewardBench was constructed from PRM800K, which is based on MATH500, so we used MATH500 as well to ensure a fair comparison. Furthermore, rather than constructing a comprehensive benchmark that covers all tasks, our goal is to propose the next step for reward model benchmarks and to validate our design through a thorough analysis.\"}", "{\"comment\": \"___\\n### **\\\\[W3\\\\]**\\n\\n> ... evaluating RMs is not limited to reward over-optimization. Other attributes fundamentally matter, such as alignment to human preference and being able to distinguish between correct answers.\\n\\nAs you commented, we agree that the evaluation of a reward model should include its alignment with human preference as well as its ability to distinguish between correct answers. However, this work focuses on **the benchmark** for evaluating reward models. Fundamentally, a benchmark for reward models is designed to evaluate how effectively the reward model aligns with human preferences. However, the existing benchmark (i.e. RewardBench) has assessed how well a reward model aligns with human preferences by using a single predefined preference pair (a single chosen vs. a single rejected). 
\\n\\nThus, this work proposes a multiple comparison design for a reliable benchmark to better capture human preferences and validates our design through the following two perspectives:\\n\\n1. Correlation with BoN sampling and DPO (included in the updated draft) \\n2. Reward overoptimization\\n\\nThe results of BoN sampling involve selecting the best sample from numerous responses, which includes the reward model\\u2019s ability to distinguish between correct answers. Additionally, the results of reward overoptimization experiments reflect how robustly the reward model provides useful signals for policy learning. Thus, we believe that we have taken into account the attributes you mentioned.\\n\\n&nbsp;\\n\\n> ... paper is still limited in scope \\u2026\\n\\nAs you know, mathematical reasoning is not a simple task. Mathematical reasoning serves as a cornerstone for deeper reasoning abilities, and many studies leverage reward models to enhance this capability \\\\[1-5\\\\]. However, the analysis of reward models remains under-explored. While the existing benchmark for reward models, RewardBench, has shown a weak correlation with downstream tasks, our design exhibits a strong correlation. We believe this represents a valuable contribution to future research on reward models.\\n\\nFurthermore, we assume that preferences are well-defined for the target tasks. Previous work has simply assessed human preferences through an isolated case (i.e. one-to-one comparisons) without considering two critical aspects: (1) **how well the benchmark correlates with downstream tasks** and (2) **how effectively the benchmark reflects the robustness of the reward model**. 
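For concreteness, the best-of-$n$ selection underlying the first validation axis reduces to taking the argmax of the reward over the sampled responses; a minimal sketch, with `reward` as a hypothetical stand-in for a trained reward model's scorer:

```python
# Sketch of best-of-n (BoN) sampling: among the n responses sampled from a
# policy, keep the one the reward model scores highest. `reward` is a
# hypothetical placeholder for a trained reward model's scalar scorer.
def best_of_n(responses, reward):
    return max(responses, key=reward)

# Toy usage with response length as a placeholder reward.
best = best_of_n(["a", "abcd", "ab"], reward=len)  # "abcd"
```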
Hence, the findings\\u2014that a benchmark for reward models should capture their correlations with downstream tasks and their robustness against reward overoptimization\\u2014can reasonably be considered applicable to other domains.\\n\\nTherefore, we believe our work can provide valuable insights not only for mathematical reasoning but also for the broader evaluation of reward models across diverse tasks. We hope this response addresses your concerns and highlights the contributions and potential impact of our work. \\n\\nBest regards,\\n\\nThe Authors of Paper 10281\\n\\n&nbsp;\\n\\n**[Updates]** We provide additional clarification on **the key insights** of this work in the general response.\\n\\n&nbsp;\\n\\n\\n**References**\\n\\n[1] Lightman, Hunter, et al. \\\"Let's Verify Step by Step.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[2] Luo, Liangchen, et al. \\\"Improve Mathematical Reasoning in Language Models by Automated Process Supervision.\\\" arXiv preprint arXiv:2406.06592 (2024).\\n\\n[3] Wang, Peiyi, et al. \\\"Math-shepherd: Verify and reinforce llms step-by-step without human annotations.\\\" Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024.\\n\\n[4] Setlur, Amrith, et al. \\\"Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning.\\\" arXiv preprint arXiv:2410.08146 (2024).\\n\\n[5] Wang, Chaojie, et al. \\\"Q*: Improving multi-step reasoning for llms with deliberative planning.\\\" arXiv preprint arXiv:2406.14283 (2024).\"}", "{\"title\": \"Official Response from Reviewer\", \"comment\": \"Thank the author for their response.\\n\\nI understand the paper's new direction towards providing insights into future directions for constructing reliable benchmarks for reward models. However, I believe that if this is indeed the authors' goal, then the benchmark built in the paper must be also sound. \\n\\nW1. 
What were the details behind this experiment? It is unclear from the paper which dataset the experiments on self-enhancement bias in LLM-as-Judge utilized. How was correctness determined? Was it MATH? If so, there is a distribution difference between the experiment's dataset and the benchmark dataset. \\n\\nW2. Thanks for the explanation. Without access to a true oracle reward, using a \\\"gold\\\" RM is the only feasible alternative for most researchers. Nonetheless, I am still skeptical about the justification of using Internlm2-7b-reward, and feel the authors should conduct additional validation. However, this seems fine to me for now.\\n\\nW3. To answer the authors' questions, I was referring to the following:\\n\\nFirst, evaluating RMs is not limited to reward over-optimization. Other attributes fundamentally matter, such as alignment to human preference and being able to distinguish between correct answers.\\n\\nSecond, the paper is still limited in scope. The author stated in the meta comment:\\n> Mathematical reasoning includes a clear human preference. In mathematical reasoning, human preference can be easily defined as correctness, allowing us to focus effectively on the analysis without the need to deliberate over true preferences.\\n\\nWhile I agree with the authors that correctness might be the primary attribute of human preference in the context of mathematics, it is not going to be true for other domains, such as creative writing and social studies. Even among technical fields, correctness is not the only attribute for open-ended tasks, such as web development and UI/UX design. On these tasks, human preference no longer consists only of correctness, but also includes other complex attributes like helpfulness (in the case of UI/UX, the visual result of the code generated by the model matters). The paper is limited to mathematics, and also close-ended tasks. 
I remain skeptical whether the methods proposed in the paper can be truly adopted by someone who is trying to build a reward model benchmark in a domain outside close-ended mathematical problems. It is not clear to me how to approach building reward benchmarks for other domains after reading the methods in the paper. \\n\\nI agree with Reviewer 1Mfa that to demonstrate the method is indeed generalizable as the authors claim, applying the method in another domain, ideally with more open-ended queries, will be necessary.\\n\\nThanks for answering my questions; the authors' answers are very helpful.\"}", "{\"title\": \"Response to Reviewer bxC4 [1/3]\", \"comment\": \"Dear Reviewer bxC4,\\n\\nWe appreciate your thoughtful review and recognition of our efforts to advance benchmarks for reward models. We hope that the following responses help clarify any ambiguities in the paper and make our work more comprehensive:\\n___\\n### **Q1-2. About MATH500 dataset**\\nWe apologize for any confusion caused by the lack of detailed explanation regarding MATH500 in our initial submission. We have now included a citation and comprehensive information in Section 3 and Appendix B of the updated PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\n\\nMATH500 is derived from the original MATH dataset [1], which comprises a 7.5K training set and a 5K test set. During the development of PRM800K [2], the initial 7.5K training set was insufficient for training a robust Process Reward Model (PRM) on step-by-step solution data. Consequently, 4.5K problems from the MATH test set were incorporated into the training set, leaving a remaining subset of 500 problems now referred to as MATH500. Since the release of PRM800K, MATH500 has been widely adopted to prevent overlap between training and test sets. \nTo clarify, MATH500 is part of the original MATH dataset and is not a new contribution of our paper. 
Moreover, only PRM800K contains mis-annotated samples, while the MATH500 derived from the original MATH dataset remains unaffected.\\n\\n**References**\\n\\n[1] Hendrycks, Dan, et al. \\\"Measuring Mathematical Problem Solving With the MATH Dataset.\\\" Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track\\n\\n[2] Lightman, Hunter, et al. \\\"Let's Verify Step by Step.\\\" The Twelfth International Conference on Learning Representations.\\n___\\n\\n### **Q3. Evaluations and ablations on GSM8K and MATH**\\nAs previously mentioned, MATH500 is a subset of the MATH dataset, and our experiments already include evaluations on this dataset. While we appreciate the suggestion to include evaluations on GSM8K, we chose to exclude it from our current study for the following reasons:\", \"**Simplicity and established performance:** GSM8K is a relatively simple dataset on which many studies have already achieved high scores, potentially limiting the meaningful insights gained from further evaluation.\", \"**Data distribution concerns:** In our experiments, we classify mathematical reasoning test sets based on the data learned by the RM as either in-distribution (e.g., MATH500) or out-of-distribution (e.g., Gaokao-math). Given GSM8K's widespread use and its potential inclusion in training data, it is challenging to classify it definitively as either in-distribution or out-of-distribution.\\n___\\n\\n### **Q4. An ablation study on the score based on the number of steps.**\\nThank you for your insightful question. Since both RewardBench and RewardMATH are based on MATH500, with RewardBench using human-annotated chosen solutions and RewardMATH using machine-generated ones, we ablate the scores of chosen solutions across each benchmark. The first table below shows the average number of steps for chosen solutions in each benchmark, with RewardBench\\u2014which comprises human chosen solutions\\u2014having a lower average step count. 
The second table below presents the average scores of chosen solutions generated by reward models, revealing that many models tend to assign higher scores to shorter-step answers and suggesting that these models were optimized to score highly on RewardBench.\n| RewardBench mean length | RewardMATH mean length |\n| ----- | ----- |\n| 4.11 | 5.99 |\n\n| Model | Mean score of chosen solution | |\n| ----- | ----- | ----- |\n| | **RewardBench** | **RewardMATH** |\n| ArmoRM-Llama3-8B-v0.1 | 8.21 | 8.04 |\n| Skywork-Reward-Llama3.1-8B | 202.22 | 47.13 |\n| Oasst-rm-2.1-pythia-1.4b | 17.97 | \\-25.65 |\n| Internlm2-20b-reward | 25.78 | 10.75 |\n| Internlm2-7b-reward | 61.17 | 39.81 |\n| GRM-llama3-8B | \\-10.77 | 86.50 |\n| GRM-gemma-2B | 4.61 | 45.80 |\n| Eurus-RM-7b | \\-5908.98 | \\-5172.88 |\n| Beaver-7b-v2.0-reward | 26.62 | 20.14 |\"}", "{\"summary\": \"This paper proposes RewardMATH, a reward model evaluation benchmark focused on the math domain. This benchmark adopts a one-to-many comparison to evaluate reward models robustly. Specifically, it provides 10 responses for each prompt, where only 1 response is the answer. The evaluated reward model is considered accurate only when the answer response is given the highest score among all 10 responses.\nThe authors also provide sufficient empirical evidence that the RewardMATH benchmark yields reliable estimates: reward models that score highly on it produce BoN-optimized policies that are indeed robust on math benchmarks.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": ["The paper provides clear and sufficient empirical evidence that their RewardMATH benchmark is more reliable than the math subset of RewardBench [1]. 
The empirical results are also clear: policies obtained via BoN with reward models that score highly on RewardBench show little to no correlation with the performance increase on math benchmarks (r-square = 0-0.1), while RewardMATH shows a much stronger correlation\", \"(r-square = 0.6-0.8) in Figure 3.\", \"The authors have evaluated diverse reward models on RewardMATH, including LLMs (generative reward models), classifier-based reward models, and process reward models.\", \"The paper considers the problem of over-optimization using a synthetic setup of gold RMs and proxy RMs.\", \"[1] https://arxiv.org/abs/2403.13787\"], \"weaknesses\": [\"The work would be more interesting if the authors showed that reward model benchmarks in other domains (such as coding, text summarisation, or safety) can be improved by the framework proposed here (by adopting multiple responses and using diverse LLMs to generate outputs). Any initial or limited experiments would be helpful.\", \"The lack of PPO (or DPO) usage for policy fine-tuning in experiments seems like a major weakness. The main contribution of this paper is using policy fine-tuning methods to verify if the RewardMATH benchmark scores correlate with the signals it provides during policy fine-tuning. I agree with this approach and am impressed by the number of experiments conducted to verify this using mainly Best-of-N sampling. However, Best-of-N sampling is an inference-time method to generate better model outputs using reward models, whereas PPO (or possibly DPO) is the main fine-tuning method researchers use. Although Figure 5 does show a PPO experiment under a synthetic setup, the number of checkpoints and whether the dots follow the findings from Gao et al [2] are not clear to me. Without solid PPO results, Best-of-N sampling seems insufficient to verify the benchmark's capability of measuring the robustness of reward models. The work will be much more convincing if the authors show more PPO-trained policy evaluations. 
Or at least, it would be helpful if the authors provided more context as to why PPO is hard to train in their non-synthetic setup. Also, I suspect high-scoring reward models on RewardMATH have the ability to find the best response from multiple responses, and Best-of-N works in a very similar way, as it picks the response with the highest reward, resulting in a high correlation of results. Whether this ability will generalize even in PPO setups is not clear to me at this point.\", \"Experiment results in Figure 6 compare diverse RMs on both RewardBench and RewardMATH benchmarks with gold or oracle rewards. It would be nice if the authors not only provided the numbers but also a statistical analysis (such as Kendall's tau) that measures the agreement between RewardMATH (or RewardBench) and oracle (or gold) reward scores in Figure 6.\", \"[2] https://arxiv.org/abs/2210.10760\"], \"questions\": [\"As the proxy reward model trained from synthetic data shows far-from-optimal performance in Table 3 (only around 13% for RewardMATH and 69% for RewardBench), can you consider using better proxy RMs? The increase from 12.68 to 13.51 is not convincing enough to indicate a strong trend.\", \"In Figure 6, the gold reward (or even the oracle reward) does not drop in most cases, even with the maximum KL distance considered. If a larger N is considered for BoN sampling, will the graph drop down as in Gao et al [2]? For a larger N, is RewardMATH still successful in detecting more robust reward models regarding the overoptimization problem?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
However, we believe that the following findings are applicable beyond the specific domain of mathematics:\\n\\n* In various domains beyond mathematics, it is important that benchmarks for reward models are validated both **based on** **their correlation with the performance on downstream tasks** and **through the lens of reward overoptimization.** \\n* While we demonstrated **the effectiveness of one-to-many comparisons over one-to-one comparisons in assessing the robustness of reward models** in mathematics, accurately ranking multiple responses, rather than merely comparing two, is a design that can also be applied to other domains.\\n\\nWhile we did not explore multiple domains, these findings can be considered applicable to other domains. Thank you once again for your constructive suggestion for improvement.\\n\\n&nbsp;\\n\\n**[Updates]** We provide additional clarification on **the key insights** of this work in the general response.\\n\\n___\\n\\n### **[W2, W3]**\", \"the_primary_distinction_lies_in_the_reward_signals_used_for_optimization\": \"BoN focuses solely on selecting the response with the highest reward, while DPO leverages both the highest-reward and lowest-reward responses during training. Consequently, a reward model that assigns high rewards to correct solutions performs well in BoN, whereas one that also avoids assigning low rewards to correct solutions stands out in DPO.\\n\\nGiven these differences, achieving a strong correlation for DPO through one-to-many comparisons is challenging. We believe that adopting many-to-many comparisons, as outlined in the future work (Appendix A), will better capture these and lead to stronger correlations.\\n\\n___\\n### **[Q2]**\\n\\nThank you for your question. The differences in graph shapes stem from variations in the experimental design. In [1], a gold reward model with a 6B parameter size was used, while the proxy reward model was of the same architecture but with a different size. 
As observed in Figure 1 of [1], a very small model (3M) produces a flipped U-shape curve, whereas a larger model, closer in size to the gold reward model (3B), shows a nearly linear graph. This suggests that when the performance of the proxy reward model closely matches that of the gold reward model, the graph of the results tends to appear linear. Similarly, in Figure 6 of our paper, the reward models analyzed do not exhibit significant differences in size or structure, which leads to a more linear graph. \\n\\nHowever, for oracle rewards, where the performance is poor and reward overoptimization occurs rapidly, we observe reward collapse in certain models. The lack of a flipped U-shape in such cases is also due to experimental design differences. In [1], extensive resources were used to conduct experiments with $N=60,000$, resulting in a smooth curve with densely sampled points. In contrast, our experiments were conducted with fewer $N$, providing discrete points that were connected linearly, which likely contributes to the differences you mentioned. Notably, Figure 2 of [2] and Figure 7 of [3] present similar discrepancies, suggesting that such discrepancies are common in related studies.\\n\\nFor the DPO experiments, KL divergences are calculated based on specific checkpoints. To achieve the curve seen in [1], KL needs to be sampled at very fine intervals, which is challenging to obtain from the checkpoints. A similar pattern is also observed in Figure 1 of [4].\\n\\n**References**\\n\\n\\\\[1\\\\] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" Proceedings of the 40th International Conference on Machine Learning. 2023\\\\. \\n\\\\[2\\\\] Yang, Rui, et al. \\\"Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs.\\\" arXiv preprint arXiv:2406.10216 (2024). \\n\\\\[3\\\\] Rame, Alexandre, et al. 
\\\"WARM: On the Benefits of Weight Averaged Reward Models.\\\" Forty-first International Conference on Machine Learning. \\n\\\\[4\\\\] Rafailov, Rafael, et al. \\\"Scaling laws for reward model overoptimization in direct alignment algorithms.\\\" arXiv preprint arXiv:2406.02900 (2024).\"}", "{\"title\": \"Sincere Request for Review of Our Responses, New Experiments, and Revised Paper\", \"comment\": \"Dear Reviewer K97G,\\n\\nThank you again for your time and effort to provide your insightful feedback on our paper. \\n\\nWe have addressed your comments and clarified the challenges of PPO in a non-synthetic setup in Appendix B.4 of the updated draft **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\nIf you have any remaining questions or require further clarification, we would be happy to address them before the time window closes.\\n\\nThank you so much for your time and valuable feedback!\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"comment\": \"Dear Reviewer bxC4,\\n\\nThank you again for your thoughtful feedback on our paper. Your feedback has been invaluable in improving our work. As today is the final day for discussion, we would be delighted to provide any clarifications or further insights if needed. Please let us know if there are any remaining concerns we can address.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\", \"title\": \"Gentle Reminder\"}", "{\"title\": \"Official Response from Reviewer\", \"comment\": \"Thanks for the response.\\n\\nW1. Thank you for the information, could authors point the necessary experiments and clarifications regarding bias to the said experiments in main text of the paper. Furthermore, mentioning only employing more models other than GPT-4 for crafting answer might be more ideal in limitation section. Also the limitation section is not linked to the main text of the paper, which makes it hard to find.\\n\\nW3. Thanks for the explanation. 
I still believe human preference is an important attribute regardless. Could the authors also address this point in the limitation section?\\n\\n> Therefore, we believe our work can provide valuable insights not only for mathematical reasoning but also for the broader evaluation of reward models across diverse tasks. We hope this response addresses your concerns and highlights the contributions and potential impact of our work.\\n\\nCould the authors provide a detailed example of how their approach could be applied to a domain outside of mathematics, preferably a non-technical one? Specifically, it would be helpful to see:\\n\\n1. A concrete example in another domain\\n2. How to mitigate reward hacking in that context\\n3. How to collect unbiased rejected and preferred pairs for that domain\\n\\nA specific example would help readers better understand how to apply this meta-approach more broadly.\\n\\nDepending on the authors' response, I am ready to re-evaluate my score accordingly. However, I will not change my score at the current time. I will not ask the authors to revise their paper in this short period of time, but they should describe how they would revise their paper if they were accepted.\"}", "{\"title\": \"Sincere Request for Review of Our Responses and Revised Paper\", \"comment\": \"Dear Reviewer bxC4,\\n\\nThank you once again for your time and effort in providing insightful feedback on our paper. As you pointed out, the results on GSM8K would be invaluable, and we are pleased to inform you that the ReasonEval-34B experiment has now been completed, and the results have been incorporated into the updated PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**. The table below presents the correlation between the BoN sampling results on GSM8K and the performance on each benchmark. 
As demonstrated, our design continues to show a strong correlation for GSM8K, whereas RewardBench exhibits a weaker correlation.\\n\\n| | RewardBench | RewardMATH |\\n| :---- | ----- | ----- |\\n| GSM8K (BoN) | 0.209 | 0.797 |\\n| MATH (BoN) | 0.187 | 0.902 |\\n\\nWe hope these additional results address your concerns. If there is an opportunity for further discussion within the rebuttal period, we would be happy to engage and provide any clarifications or further insights.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"title\": \"Response to Reviewer BzTe [3/4]\", \"comment\": \"___\\n### **Q1.**\\n> Are the prompts used to evaluate the LLM judge on REWARDMATH the same as the prompt used to evaluate the LLM judge on the Reward Bench? Different prompting strategy (eg. difference system prompt) raises concerns regarding fair comparison between the two benchmarks.\\n\\nRewardBench was evaluated using pairwise comparisons, whereas RewardMATH was assessed with both pairwise comparisons and direct assessment. The same prompt was applied for pairwise comparisons in both benchmarks (Figure 17), while a separate prompt was used for direct assessment in RewardMATH (Figure 16), following \\\\[9\\\\]. As noted in the Appendix B.3, Prometheus-2 utilizes the prompts proposed by \\\\[10\\\\] applying criteria specific to reasoning tasks (Figure 18, 19).\\n\\n**References**\\n\\n\\\\[9\\\\] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" Advances in Neural Information Processing Systems 36 (2023): 46595-46623.\\n\\n\\\\[10\\\\] Seungone Kim et al. Prometheus 2: An open source language model specialized in evaluating other language models. arXiv preprint arXiv:2405.01535, 2024\\\\.\\n\\n___\\n\\n### **Q2.** \\n> What is MATH500? Were there any steps taken to ensure the dataset is not contaminated with the models being evaluated? 
If the dataset is used during training on any of the evaluated RMs, the benchmark\\u2019s reliability will be undermined.\\n\\nWe apologize for any confusion resulting from the initial lack of detail on MATH500. We have now provided a citation and comprehensive information in Section 3 and Appendix B of the revised PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\n\\nMATH500 is drawn from the original MATH dataset \\\\[11\\\\], which consists of a 7.5K training set and a 5K test set. During the construction of PRM800K \\\\[12\\\\], the original 7.5K training set was insufficient for training a robust Process Reward Model (PRM) on step-by-step solution data. Therefore, 4.5K problems from the MATH test set were incorporated into the training set, leaving a final subset of 500 problems now identified as MATH500. Since the release of PRM800K, MATH500 has been widely adopted to prevent overlap between training and test sets. \\nTo clarify, MATH500 originates from the original MATH dataset and is not a new contribution of our work. Additionally, only PRM800K contains mis-annotated samples, while the MATH500 subset, directly sourced from the original MATH dataset, remains accurate and unaffected.\\n\\n**References**\\n\\n\\\\[11\\\\] Hendrycks, Dan, et al. \\\"Measuring Mathematical Problem Solving With the MATH Dataset.\\\" Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track\\n\\n\\\\[12\\\\] Lightman, Hunter, et al. \\\"Let's Verify Step by Step.\\\" The Twelfth International Conference on Learning Representations.\"}", "{\"title\": \"Response to Reviewer bxC4 [2/3]\", \"comment\": \"___\\n### **Q5.**\\n> If one is trying to do preference learning using a llama model as the base model, is it important for the reward model to know the rejected response generated by a non-llama model should be worse than the accepted response?\\n\\nThank you for your insightful question. 
If we only consider reward models for preference learning, it may not be important for the reward model to recognize rejected responses from non-Llama models (i.e. different base models). However, creating a benchmark based solely on responses from one model could limit its applicability and generalizability across different models and scenarios. In this work, we collected responses from various models to ensure that the reward model can be applied across multiple scenarios (e.g. inference-time optimization with the reward model, PPO, and dataset construction with the reward model), allowing us to evaluate its performance. In this context, we also conducted an analysis of LLM-as-a-judge. \\nIn our experiments (Section 4), we used MetaMATH-Mistral-7B and WizardMATH-7B-v1.1 as policy models and observed a correlation between the results of rejected responses from these models (the 4th and 5th rows from the bottom in Table 12\\\\) and the Best-of-N (BoN) results. Additionally, Table 11 (Appendix C.4) presents the correlations between the performance of optimized policy on the downstream tasks and the performance of reward models on a dataset where the policy\\u2019s solutions were removed from RewardMATH. From these results, we observe that one-to-one comparisons can be significantly influenced by the policy models used in the experiments, as noted by the reviewer. In contrast, one-to-many comparisons remain unaffected by this influence and exhibit a strong correlation.\\n\\n___\\n### **Q6.**\\n> Do reward model-free alignment methods like DPO experience overfitting issues, and what advantages do reward models offer over such methods for reasoning tasks?\\n\\nThank you for pointing this fundamental question out, as it addresses key considerations in understanding the limitations of reward model-free alignment methods like DPO and the potential advantages of using reward models, especially for complex reasoning tasks. 
\\\\[3\\\\] demonstrates that reward overoptimization can occur even in direct alignment algorithms (e.g. DPO). \\nAdditionally, using reward models is advantageous over model-free alignment methods for the following reasons:\\n\\n* **PPO improves over DPO for reasoning:** \\\\[4\\\\] empirically shows that PPO achieves a larger performance improvement over DPO in reasoning tasks. And \\\\[5\\\\] investigates the limitations of DPO through theoretical and experimental analysis, finding that DPO is sensitive to distribution shifts between base model outputs and preference data, highlighting a fundamental limitation of DPO. The proposed PPO method in \\\\[5\\\\] shows performance improvements in reasoning tasks, and the paper also notes the critical role of the reward model during training. \\n* **Using reward models for inference-time scaling:** \\\\[6\\\\] shows the importance of inference-time scaling, demonstrating that applying inference-time scaling with an oracle verifier yields very high performance. \\\\[7\\\\] and \\\\[8\\\\] also demonstrate performance improvements by using reward models in different inference-time methods.\\n\\n**References**\\n\\n\\\\[3\\\\] Rafailov, Rafael, et al. \\\"Scaling laws for reward model overoptimization in direct alignment algorithms.\\\" *arXiv preprint arXiv:2406.02900* (2024).\\n\\n\\\\[4\\\\] Ivison, Hamish, et al. \\\"Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback.\\\" arXiv preprint arXiv:2406.09279 (2024).\\n\\n\\\\[5\\\\] Xu, Shusheng, et al. \\\"Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study.\\\" Forty-first International Conference on Machine Learning.\\n\\n\\\\[6\\\\] Brown, Bradley, et al. \\\"Large language monkeys: Scaling inference compute with repeated sampling.\\\" arXiv preprint arXiv:2407.21787 (2024).\\n\\n\\\\[7\\\\] Kang, Jikun, et al. 
\\\"Mindstar: Enhancing math reasoning in pre-trained llms at inference time.\\\" arXiv preprint arXiv:2405.16265 (2024).\\n\\n\\\\[8\\\\] Wu, Yangzhen, et al. \\\"An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models.\\\" arXiv preprint arXiv:2408.00724 (2024).\"}", "{\"metareview\": \"**summary**\\n\\nThe paper introduces RewardMATH, a new benchmark for evaluating reward models in mathematical reasoning tasks, designed to address limitations in the existing RewardBench. RewardMATH uses a more accurate dataset (MATH500), multiple incorrect examples for robust evaluations, and a mean-based evaluation metric to avoid length bias, ensuring a fair comparison. It demonstrates improvements over RewardBench by showing higher correlation with downstream evaluations and validating the robustness of reward models through a one-to-many comparison approach. This approach enhances the reliability and effectiveness of reward models in identifying the most accurate responses in mathematical reasoning tasks.\\n\\n**strengths**\\n\\n* The authors successfully identify weaknesses in current benchmarks and introduce substantial improvements.\\n* The paper addresses the increasingly important topic of improving LLM reasoning capabilities by focusing on creating robust benchmarks for evaluating these efforts.\\n* Diverse reward models were evaluated on RewardMATH, demonstrating its versatility and wide applicability.\\n\\n**weaknesses**\\n\\n* The study's focus is narrowly tailored to mathematical problems, potentially overlooking other critical aspects of reward model. To validate the authors' claims about the generalizability of their method, applying the benchmark to other domains or more open-ended queries would be necessary, ideally moving beyond strictly mathematical reasoning.\\n* The paper's primary weakness lies in its limited experimental validation, relying mainly on correlation with best-of-n sampling. 
Broadening the scope to include other fine-tuning techniques like SFT, DPO, and PPO, and testing in other domains such as coding or open-ended generation tasks, could provide a more comprehensive assessment of the benchmarks' effectiveness across diverse applications.\\n\\n**decision**\\n\\nAlthough the authors emphasized that the main contribution of this paper extends beyond the new benchmark, it is crucial that the validation of key findings be further improved. I recommend that the authors address the concerns raised by reviewers and consider resubmitting to another venue after making necessary improvements.\", \"additional_comments_on_reviewer_discussion\": \"The authors have successfully addressed several concerns raised by reviewers, including the incorporation of additional fine-tuning methods and evaluations on other math benchmarks, as well as addressing bias from specific LLM models. They have also revised the title, abstract, and main draft to better clarify the contributions of the paper. However, despite these improvements, several reviewers, including myself, remain skeptical about the overall soundness of the paper.\"}", "{\"comment\": \"Thank you for your response.\\n\\n[W1] I understand that this work does not feature a new benchmark and it proposes a new method that researchers may use when creating benchmarks. However, I think this raises the natural question: will this approach (multi-to-one comparisons) generalize to other domains than Math, on more complex domains such as coding or safety or helpfulness? I understand that for validation, a clear human preference is needed, which I think is definitely possible for simple coding domains (utilize unit tests provided by HumanEvalPack [1]), and likely possible for safety (one could use Llama-Guard [2]). If this work indeed proposes a new method, I think it should be verified on at least another domain. 
The math dataset used seems too simple for me to truly acknowledge this method's capability.\\n\\nI understand that this may not be possible within the rebuttal period due to time constraints. But I believe this remains a weakness.\\n\\n--- \\n\\n[W2, W3]\\nThank you for providing additional experiments and the statistical analysis. I think (although limited compared to BoN), the results do confirm that this method does work on other RLHF methods. Were there any differences observed between the results of BoN and DPO?\\n\\n---\\n\\n[Q2] My question was regarding whether the graph will drop down after the peak. For example, in Figure (6) (a) of your original manuscript, the Intern LM-2-7B and other RMs show a rather linear graph whereas for Gao et al [3], the curve is a flipped U-shape, even for most competent RMs. My question is whether there is a reason for this seemingly different behavior of the two curves. For example, in the DPO experiment you have conducted, the fine-tuned LLM will likely have a larger KL divergence than BoN sampling. Have you drawn this curve using your version of DPO? And does the shape follow that of Gao et al? If there is a difference, why so?\\n\\n[1] https://huggingface.co/datasets/bigcode/humanevalpack \\n\\n[2] https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/\\n\\n[3] Gao, Leo, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. Proceedings of the 40th International Conference on Machine Learning. 2023.\"}", "{\"summary\": [\"Authors aim to design a better benchmark for evaluating reward models in reasoning tasks. Authors identify problems with the previous benchmark RewardBench and propose RewardMath:\", \"RewardBench is based on PRM800K which contains many wrong annotations, RewardMath instead is based on MATH500.\", \"RewardBench uses pairs of (human annotated example, LLM annotated incorrect example). 
RewardMath includes more than one incorrect example.\", \"RewardBench\\u2019s accepted and rejected examples have different number of steps, this could be a spurious correlation that leads to reward hacking. RewardMath fixes this.\", \"RewardBench\\u2019s PRM evals uses product instead of mean which biases shorter responses, authors fix this by using mean instead.\", \"To demonstrate the improvements of rewardmath 1) authors compare performances of different reward models on both RewardBench and RewardMath, and show rewardmath has higher correlation with downstream evals 2) authors show rewardmath exhibits the common trend of larger dataset -> better performance while rewardbench does not.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The topic on how to Improve LLM reasoning capabilities has recently gained a lot of attention. This paper focuses on having good benchmarks for evaluating these efforts, and this could be very impactful if done correctly.\", \"Authors identify flaws of existing benchmarks and make good efforts to fix them.\", \"Paper has good results, specifically Figure 4 is very cool showing RewardMath has stronger correlation with downstream tasks.\"], \"weaknesses\": \"See questions I have below\", \"questions\": [\"RewardMath is based on the dataset MATH500, where does the dataset MATH500 come from? Is MATH500 prior work (and if yes the citation is missing) or is this a contribution of the paper (in this case it should be made clear).\", \"Does MATH500 address the incorrect annotation problem found in PRM800K?\", \"Can authors also show evaluations and ablations on gsm8k [1] and MATH [2] which are the most common eval tasks for LLM reasoning capabilities?\", \"Authors identify that in RewardBench the accepted response often has less steps than rejected ones, which could give a chance for models to reward hack (i.e. reward relies on the number of steps instead of the actual response quality). 
Did the authors ablate this? I.e. Does the reward-hacking model predict lower reward if we make the accepted response in RewardMath longer? And vice versa, does it predict higher reward if we make the rejected response shorter?\", \"Regarding RewardMath giving more than one rejected responses: If one is trying to do preference learning using a llama model as the base model, is it important for the reward model to know the rejected response generated by a non-llama model should be worse than the accepted response? I.e. the distribution could be very different that it never encounters it during preference learning. I.e. for Figure 4, if we use a llama model as the policy, does one-to-many RewardMath still do better than one-to-one RewardMath chosen & Llama rejection?\", \"Does reward-model free alignment methods like DPO also suffer from reward model overfitting problem? What is the advantage of using reward models over reward model-free methods for reasoning tasks?\", \"Does the benchmark evaluate cases where both the rejected and chosen response arrive at the same answer, but the rejected answer has the wrong steps? I.e. this is common for truth or false questions.\", \"[1] Training Verifiers to Solve Math Word Problems\", \"[2] Measuring mathematical problem solving with the math dataset\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces REWARDMATH, a benchmark for evaluating reward models in mathematical reasoning tasks, arguing that it provides a more reliable evaluation than the existing RewardBench by using one-to-many comparisons instead of pairwise comparisons. 
The authors validate their approach by showing correlation between benchmark performance and best-of-n sampling results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper provides a good correlation study between the proposed benchmark and best of N downstream performance in datasets like MATH and Gaokao etc. The proposed one-to-many comparison seems to be on the right direction for better correlation with downstream performance.\", \"weaknesses\": \"I have the following major concerns:\\n\\n1. Technical Contribution & Novelty: The primary contribution seems to be replacing pairwise comparisons in reward bench with best of N comparisons, which is an incremental modification rather than a substantial methodological advance. I do not think the change only is sufficient for a publication in top machine learning conference. The correlation between N-way comparison performance and best-of-N sampling results is somewhat expected and doesn't provide deep insights into reward model behavior. \\n\\n\\n2. Unclear Definition of Robustness: The paper uses \\\"robustness\\\" throughout but never provides a precise definition. The authors seem to equate robustness with correlation to best-of-n sampling results, but this is circular reasoning since the benchmark itself uses similar methodology. There's no clear framework for what properties a \\\"robust\\\" reward model should have beyond empirical correlation with certain metrics. \\n\\n\\n3. Limited Experimental Validation: The paper relies heavily on correlation with best-of-n sampling as validation, but doesn't explore other important aspects of reward model behavior. To make the paper deeper and broader, it would be great if the authors could try comparing the correlation of different real down-stream fine-tuning techniques like BoN SFT, DPO, PPO etc. and see whether how RewardBench and RewardMath correlate with downstream performance there. 
It would also be interesting to see if such observation extends to other domain like coding, and perhaps even open-ended generations without ground truth.\", \"questions\": \"Why do the authors choose 10 generations with 9 incorrect and 1 correct answer in the k-way comparisons? How does the choice of k and numbers of correct and incorrect answers affect the resulting correlations and reward overoptimization? The relationship between reward overoptimization and the proposed benchmark needs more rigorous analysis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer K97G [2/2]\", \"comment\": \"___\\n### **W3. Limited Experimental Validation**\\n\\nThank you for the insightful comments. We understand the concern regarding the lack of PPO or DPO in our experiment setup and appreciate the opportunity to address this. Below, we would like to address any concerns and clarify our experimental setup:\\n\\n**Challenges of PPO in a Non-Synthetic Setup** \\nMany previous studies have used the responses of an SFT model to train the same pretrained model as the reward model to achieve stable RLHF (PPO) training \\\\[1-2\\\\]. In particular, \\\\[2\\\\] highlights that initializing the reward model with the same pretrained model helps prevent information mismatches with the policy model, contributing to a consistent and accurate reward signal. Additionally, \\\\[2\\\\] and \\\\[3\\\\] suggest that as the policy model improves, the data distribution shifts, and if the reward model is not exposed to this new distribution, its accuracy may be limited. \\nIn our case, the reward models we evaluate are trained on different backbone (i.e. pretrained) models and are also different from the policy model, making stable PPO training challenging in a non-synthetic setup. 
Indeed, when we attempted training with several reward models, the training process was highly unstable. Due to these reasons, it was difficult to perform comprehensive PPO experiments with various reward models. \\nWe have now clarified the challenges of PPO in a non-synthetic setup in Appendix B.4 of the updated PDF **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\n\\n**Concern that BoN Sampling Involves Circular Reasoning** \\nAs the reviewer mentioned in weakness 2, the methodology for evaluating RewardMATH\\u2014specifically, identifying the correct solution from multiple options\\u2014seems similar to the approach used in best-of-$n$ sampling. To clarify, however, BoN sampling involves generating multiple responses from a single model (i.e. a policy) and selecting the one with the highest reward, whereas RewardMATH utilizes responses generated by a wide range of models. Thus, the two approaches do not employ the same methodology, and it does not constitute circular reasoning.\\n\\n**Correlation with Downstream Tasks Other than BoN** \\nWe understand the reviewer\\u2019s concern regarding the need to verify the effectiveness of RewardMATH beyond BoN sampling, through methods such as DPO or PPO. As previously mentioned, due to the instability of PPO experiments in our setup, we focused on conducting experiments where the reward model can effectively provide learning signals.\\n\\n* **Preference data for DPO constructed using the reward model:** We created a preference dataset for DPO by selecting a response with the highest reward as the \\u201cchosen\\u201d sample and a response with the lowest reward as the \\u201crejected\\u201d sample.\\n\\nIn our experiments, we used MetaMATH-Mistral-7B as the SFT model and selected a 32K subset of data from the MetaMATH dataset as the training dataset, considering the short discussion (rebuttal) period. 
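For illustration only, the chosen/rejected selection rule in the bullet above can be sketched as below; this is a minimal sketch with hypothetical names, not our actual training code:

```python
# Minimal sketch of building DPO preference pairs from reward-model scores.
# `scored_samples` maps each problem to a list of (solution, reward) tuples
# obtained by sampling the SFT model; all names here are hypothetical.

def build_dpo_pairs(scored_samples):
    pairs = []
    for problem, scored in scored_samples.items():
        rewards = [r for _, r in scored]
        # Skip problems where the reward model assigns identical scores,
        # since no meaningful chosen/rejected pair can be formed.
        if max(rewards) == min(rewards):
            continue
        chosen = max(scored, key=lambda s: s[1])[0]    # highest reward
        rejected = min(scored, key=lambda s: s[1])[0]  # lowest reward
        pairs.append({"prompt": problem, "chosen": chosen, "rejected": rejected})
    return pairs
```

Each reward model under evaluation produces its own set of pairs in this way, which then serves as its DPO training set.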
We performed n=32 sampling with the SFT model and removed instances that were entirely correct or incorrect to reduce noise and better assess whether the reward model provides meaningful learning signals. Finally, we obtained rewards from each reward model for a final dataset of 13.5K responses and conducted training with DPO.\\n\\nThe table below presents the correlation between the results of the optimized policies on MATH500 and the benchmark results. As a result, we reconfirm that the results on DPO also show a stronger correlation than RewardBench.\\n\\n| | RewardBench | RewardMATH |\\n| :---- | ----- | ----- |\\n| DPO with reward model | 0.156 | 0.725 |\\n| BoN | 0.187 | 0.902 |\\n\\n**References**\\n\\n\\\\[1\\\\] Ouyang, Long, et al. \\\"Training language models to follow instructions with human feedback.\\\" Advances in neural information processing systems 35 (2022): 27730-27744.\\n\\n\\\\[2\\\\] Touvron, Hugo, et al. \\\"Llama 2: Open foundation and fine-tuned chat models.\\\" arXiv preprint arXiv:2307.09288 (2023).\\n\\n\\\\[3\\\\] LeVine, Will, et al. \\\"A Baseline Analysis of Reward Models' Ability To Accurately Analyze Foundation Models Under Distribution Shift.\\\" arXiv preprint arXiv:2311.14743 (2023).\\n\\n___\\n### **Q1.**\\n> Why do the authors choose 10 generations with 9 incorrect and 1 correct answer in the k-way comparisons? How does the choice of k and numbers of correct and incorrect answers affect the resulting correlations and reward overoptimization?\\n\\nAs noted in Limitation (Appendix A), we consider finding the optimal $k$ to be beyond the scope of our study, given that our goal is not to propose a well-crafted new benchmark. As the number of solutions ($k$) increases, both the inference cost and the reliability of the results rise. 
Therefore, identifying the optimal trade-off point is crucial, and future work on the construction of benchmark should take this into account.\"}", "{\"comment\": \"Thank you for your response!\\n\\n> Why not just use the full MATH dataset since this is the standard practice, instead of just the MATH500 subset?\\n\\nAs previously mentioned, many studies aiming to improve mathematical reasoning capabilities through reward models primarily use MATH500 instead of the full MATH test set to prevent training set overlap with PRM800K [1-4]. Since PRM800K incorporates parts of the MATH test set into its training set, using MATH as a test set could compromise evaluation reliability. To address this, MATH500 [1], designed to exclude such overlap, has become the standard for testing. Even the OpenAI-O1 report includes the performance of MATH500 [5]. Finally, [2] said that \\u201cThe subset consists of 500 representative problems, and we find that the subset evaluation produces similar results to the full-set evaluation\\u201d.\\n\\nTherefore, MATH500 serves as a reliable and unbiased test set for evaluating mathematical reasoning capabilities within the community.\\n\\n**References** \\n\\\\[1\\\\] Lightman, Hunter, et al. \\\"Let's Verify Step by Step.\\\" The Twelfth International Conference on Learning Representations. \\n\\\\[2\\\\] Wang, Peiyi, et al. \\\"Math-shepherd: Verify and reinforce llms step-by-step without human annotations.\\\" Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024\\\\. \\n\\\\[3\\\\] Sun, Zhiqing, et al. \\\"Easy-to-hard generalization: Scalable alignment beyond human supervision.\\\" arXiv preprint arXiv:2403.09472 (2024). \\n\\\\[4\\\\] Setlur, Amrith, et al. \\\"Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning.\\\" arXiv preprint arXiv:2410.08146 (2024). 
\\n\\\\[5\\\\] [https://openai.com/index/learning-to-reason-with-llms/](https://openai.com/index/learning-to-reason-with-llms/) \\n___\\n> I see that classifier-based reward models models (in Table 2) as well as the base models (in Figure 4) used in the paper are mostly on the scale of under 8B parameters. On this scale, the models usually are not \\\"achieving high scores\\\" on gsm8k (i.e. <80%). So I still think it's valuable to include those results.\\n\\n> I don't know if it's that important whether we can give a hard classification of it's \\\"in\\\" or \\\"out of distribution\\\". Rather I think it would be more important to include both of these evals because they are the most commonly used benchmark for math-related tasks.\\n\\nWe appreciate your feedback and understand your suggestions that including the results on GSM8K would enhance this work. Accordingly, we also examine the results on the GSM8K dataset as you commented. The first table below exhibits the performance of Mistral-MetaMATH-7B, used as the policy model in Figure 4, on GSM8K. Moreover, another policy model we used, WizardMath-7B-v1.1, has already achieved a score of 83.2 (pass@1), and Llama-3.1-8B-Instruct, though not a math-specialized model, has reached 84.5 (pass@1) on GSM8K. \\n\\n| N | pass@N |\\n| :---: | :---: |\\n| 1 | 78.772 |\\n| 2 | 79.909 |\\n| 4 | 82.942 |\\n| 8 | 85.747 |\\n| 16 | 88.779 |\\n| 32 | 91.812 |\\n| 64 | 94.011 |\\n| 128 | 96.361 |\\n| 256 | 98.939 |\\n\\nHowever, as you pointed out, the results on GSM8K would be invaluable; therefore, we further conducted an ablation study on it. The second table below presents the correlation between the results of BoN sampling on GSM8K and the performance on each benchmark. Due to limited GPU resources, the ReasonEval-34B experiment is still in progress and will be included in the updated PDF once it is completed. To ensure a fair comparison, we have also provided correlations for the MATH dataset without ReasonEval-34B. 
The results demonstrate that our design also presents a strong correlation for GSM8K, while RewardBench has a weak correlation.\\n\\n| | RewardBench | RewardMATH |\\n| :---- | ----- | ----- |\\n| GSM8K (BoN, w/o ReasonEval-34B) | 0.221 | 0.818 |\\n| MATH (BoN, w/o ReasonEval-34B) | 0.368 | 0.946 |\\n| MATH (BoN) | 0.187 | 0.902 |\"}", "{\"title\": \"Response to Reviewer BzTe [1/4]\", \"comment\": \"Dear Reviewer BzTe,\\n\\nWe appreciate your comments and feedback on our work. We will address the clarification points you raised (i.e. weakness 1) at the end of our response.\\n___\\n### **W2. Benchmark Biases**\\n\\n**1\\\\) Dataset construction**\\n\\nWe understand the concern regarding potential bias introduced by using GPT-4 to convert human-written solutions into machine-generated step-by-step solutions for RewardMATH. \\nHowever, we have already addressed this by conducting experiments on self-enhancement bias in LLM-as-Judge, with the detailed results provided in Appendix C.2. Figure 8 and Table 8 demonstrate that GPT-4o and other LLM judges exhibit a mild preference for their own rejected and correct solutions, but the bias is not significant. For example, when we used GPT-4o as LLM-as-Judge and performed a pairwise comparison (win or lose) between its own correct solution and the chosen solution from LLaMA3-70B, it selected its own correct solution only 48% of the time. Nonetheless, we agree on the importance of collecting diverse correct solutions and have noted this as a limitation in our paper (Appendix A). Furthermore, as shown in Figure 2b, the incorrect solutions modified by GPT-4 account for only 9% of the total incorrect solutions, which we believe minimizes any potential bias.\\n\\n**2\\\\) Selecting Internlm2-7B as the gold RM and using KL Divergence to capture optimization degree** \\nPrior studies on reward overoptimization often use a gold RM with more parameters than the proxy RM \\\\[1, 2\\\\] or even GPT-4 as the gold RM \\\\[3, 4\\\\]. 
However, conducting experiments with a larger model as the gold RM is challenging, as we need to assess a variety of reward models. Moreover, GPT-4 did not outperform other classifier-based reward models, making it unsuitable as a gold RM. Therefore, we chose Internlm2-7B, which demonstrated high performance on both RewardBench and RewardMATH. We (and Reviewer BzTe) recognized that the previous studies\\u2019 approach of considering only gold reward may not be optimal. To address this, we introduce an oracle reward based on human preferences, which, in the case of mathematics, aligns with accuracy, as detailed in Section 5.2.1. \\nAdditionally, although expanding the optimization metrics could offer further insights, we believe KL divergence is a sufficient measure here, as it has been widely used in prior research \\\\[1-4\\\\] to observe reward overoptimization.\\n\\n**References**\\n\\n\\\\[1\\\\] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" Proceedings of the 40th International Conference on Machine Learning. 2023\\\\.\\n\\n\\\\[2\\\\] Coste, Thomas, et al. \\\"Reward Model Ensembles Help Mitigate Overoptimization.\\\" The Twelfth International Conference on Learning Representations.\\n\\n\\\\[3\\\\] Rafailov, Rafael, et al. \\\"Scaling laws for reward model overoptimization in direct alignment algorithms.\\\" arXiv preprint arXiv:2406.02900 (2024).\\n\\n\\\\[4\\\\] Rame, Alexandre, et al. \\\"WARM: On the Benefits of Weight Averaged Reward Models.\\\" Forty-first International Conference on Machine Learning.\"}", "{\"title\": \"Response to Reviewer 1Mfa [3/3]\", \"comment\": \"___\\n### **W3. The agreement between RewardMATH or RewardBench and oracle reward scores in Figure 6.**\\n\\nThank you for your constructive suggestion regarding the statistical analysis for measuring agreement. Typically, reward overoptimization is illustrated with a graph, as shown in Figure 6. 
However, unlike previous studies, we examined a wide range of reward models, which may obscure clear trends. Therefore, we agree that a statistical analysis, such as Kendall's tau, would be beneficial for providing deeper insights. We calculate Kendall\\u2019s tau for the results in Figure 6, comparing the reward scores (gold and oracle reward) with the benchmark performance (RewardBench and RewardMATH) at specific KL divergence (i.e. specific $n$).\\n\\n| $n$ | KL | RewardBench (gold) | RewardMATH (gold) | RewardBench (oracle) | RewardMATH (oracle) |\\n| :---- | :---- | :---- | :---- | :---- | :---- |\\n| 64 | 3.17 | 0.400 | 0.718 | 0.116 | 0.692 |\\n| 128 | 3.86 | 0.348 | 0.718 | 0.182 | 0.761 |\\n| 256 | 4.55 | 0.322 | 0.692 | 0.156 | 0.761 |\\n\\n___\\n\\n### **Q1.**\\n> As the proxy reward model trained from synthetic data shows far from optimal performance in Table 3 (only around 13% for RewardMATH and 69% for RewardBench), can you consider using better proxy RMs? The increase in value from 12.68 to 13.51 is not very convincing to me that this is a strong trend.\\n\\nWe agree that the proxy reward model\\u2019s performance, as shown in Table 3, is not optimal, particularly with results of approximately 13% for RewardMATH and 69% for RewardBench. However, we believe this is not critical, as our experiment follows the original experimental setups in \\\\[6\\\\] and \\\\[7\\\\]. The primary goal of this experiment is to observe reward overoptimization as data size increases in a synthetic setup and to verify whether this trend is well-reflected by the score on a benchmark designed for reward models. Therefore, using a more advanced proxy RM was not considered in this work. Testing with a stronger proxy RM would require a broader dataset and diverse training approaches, which would fall outside of a synthetic setup.\\n\\n**References**\\n\\n\\\\[6\\\\] Gao, Leo, John Schulman, and Jacob Hilton. 
\\\"Scaling laws for reward model overoptimization.\\\" Proceedings of the 40th International Conference on Machine Learning. 2023\\\\.\\n\\n\\\\[7\\\\] Coste, Thomas, et al. \\\"Reward Model Ensembles Help Mitigate Overoptimization.\\\" The Twelfth International Conference on Learning Representations.\\n\\n___\\n### **Q2.** \\n> If a larger N is considered for BoN sampling, will the graph drop down as in Gao et al \\\\[2\\\\]? For a larger N, is RewardMATH still successful in detecting more robust reward models regarding the overoptimization problem?\\n\\nWe appreciate your question regarding the impact of larger N on the trend in Figure 6\\\\. Due to constraints on available computational resources, we evaluate BoN sampling using only $n=256$. As N continues to increase, a robust reward model is likely to either converge or exhibit minimal overopimization at higher KL divergence. However, since reward models that overoptimize at low KL divergence or have lower peaks tend to perform poorly on RewardMATH, we expect that RewardMATH will remain effective at detecting such models even with large N.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer BzTe,\\n\\nThank you again for your thoughtful feedback on our paper. Your feedback has been invaluable in improving our work. As today is the final day for discussion, we would be delighted to provide any clarifications or further insights if needed. Please let us know if there are any remaining concerns we can address.\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer TTuB,\\n\\nThank you again for your thoughtful feedback on our paper. Your feedback has been invaluable in improving our work. As today is the final day for discussion, we would be delighted to provide any clarifications or further insights if needed. 
Please let us know if there are any remaining concerns we can address.

Best regards,

The Authors of Paper 10281

---

**Comment**

Thank you for the response. I appreciate the efforts the authors have made to verify the effectiveness of the reward bench in downstream RLHF performance with new DPO experiments, and the investigation of the new benchmark scores. I have updated my score accordingly.

However, I'm not fully convinced that the contribution is enough for an ICLR conference paper, for the following reasons:

1. The core technical contribution is still the change from [1 to 1] to [1 to many] and, as the authors conjectured, perhaps [many to many].
2. The main focus is only math performance, while RewardBench does focus on a wide range of chat capabilities including hard, reasoning, safety, etc.

---

**Sincere Request for Review of Our Responses, New Experiments, and Revised Paper**

Dear Reviewer TTuB,

Thank you again for your time and effort in providing your insightful feedback on our paper.

We have addressed your comments and incorporated the additional results into Table 12 of the updated PDF **([click to see the PDF](https://openreview.net/pdf?id=0er6aOyXUD))**.
If you have any remaining questions or require further clarification, we would be happy to address them before the time window closes.

Thank you so much for your time and valuable feedback!

Best regards,

The Authors of Paper 10281

---

**Paper Decision:** Reject

---

**Comment**

Dear Reviewer K97G,

Thank you for your response and for reconsidering the score. In addition to the clarification provided in the general response and W1, we would like to provide further clarification regarding the points you raised.
___
> ...core technical contribution is ... the change from [1 to 1] to [1 to many] ...

We understand that our contribution might be seen as merely a change from one-to-one to one-to-many comparisons. However, we would like to emphasize that this change is not our only contribution. We describe our contributions below:

- **We identify the limitations of the existing benchmark.** We experimentally verified that scores on RewardBench show a weak correlation with the performance of policies on downstream tasks, using BoN sampling and DPO. Moreover, we identified several limitations of RewardBench, including poor quality, vulnerability to reward hacking, and unreliable one-to-one comparisons (i.e. evaluation based on single, isolated cases).
- **We propose a better design for a reliable benchmark.** RewardBench has a significant difference in step counts between chosen and rejected solutions. We therefore argue that a benchmark for reward models should avoid large differences between chosen and rejected solutions to prevent reward hacking. Moreover, since evaluating a reward model using a single pair of solutions is highly prone to misjudgement, we underscore the need for multiple comparisons.
- **We thoroughly validate the design through correlation with downstream tasks and through the lens of reward overoptimization.** Due to the limitations of RewardBench, its scores show a weak correlation with performance on downstream tasks. However, our design, which mitigates reward hacking factors (i.e. the huge difference between chosen and rejected responses) and incorporates multiple comparisons, achieves a strong correlation. Furthermore, reward overoptimization is a critical challenge in RLHF, where models may overfit to specific reward signals, leading to degraded generalization. It is therefore important to effectively estimate the degree of reward overoptimization via scores on the benchmark. Through the lens of reward overoptimization, we show that the limitations of RewardBench raise critical issues that compromise its role as a reliable benchmark.

(As we mentioned in the paper, we refer to the math subset of RewardBench simply as RewardBench. So, there may be domains within RewardBench that do not suffer from issues like poor quality or a huge difference between chosen and rejected responses.)

To summarize, we describe our key insights below:
1. **In benchmarks for reward models, a significant difference between the chosen and rejected responses shows low correlation with downstream tasks due to the potential for reward hacking.**
2. **One-to-one comparisons may yield inaccurate results depending on the preference pairs, which in turn results in low correlation with downstream tasks.**
3. **A benchmark employing multiple comparisons can effectively capture reward overoptimization, indicating its ability to assess the robustness of reward models.**
___
> The main focus is only math performance, while reward bench does focus on a wide range of chat capabilities including hard, reasoning, safety etc.

We believe it would be unfair to compare this work with RewardBench in terms of the scope of the domains they address. The authors of RewardBench propose a new benchmark, which consists of a preference set to assess reward models across various domains, but it lacks a comprehensive investigation into the reliability of its results, including vulnerabilities to reward hacking, insufficient analysis of correlations with downstream tasks, and an inability to effectively estimate reward overoptimization.

In contrast, although our focus was exclusively on mathematics, we emphasized delivering profound insights into the considerations for benchmarks used to assess reward models. We also believe that our insights, namely validation against reward hacking and reward overoptimization, can be applied to a variety of domains in the future.

&nbsp;

Why mathematics?
As we mentioned in the general response, the reasons we chose mathematics are as follows:

- **One of the tasks where reward models are most extensively used is mathematical reasoning.** To enhance mathematical reasoning capabilities, reward models are widely utilized during training (e.g. PPO) and at inference time through reward-based techniques such as BoN sampling or Monte Carlo Tree Search (MCTS).
- **Mathematical reasoning includes a clear human preference.** In mathematical reasoning, human preference can be easily defined as correctness, allowing us to focus effectively on the analysis without the need to deliberate over true preferences.

&nbsp;

Thank you once again for your time and thoughtful feedback, and for engaging with our submission.

&nbsp;

Best regards,

The Authors of Paper 10281

---

**Official Review**

**Summary:** The paper addresses a specific limitation of RewardBench, a widely used benchmark for reward model evaluation, in its assessment of mathematical reasoning capabilities. To this end, the authors introduce RewardMATH, a new benchmark that employs one-to-many comparisons of chosen and rejected responses to mathematical questions to enhance evaluation robustness. Experiments show that RewardMATH correlates well with policy performance and is more effective at identifying potential reward overoptimization and the reliability of reward signals.

**Soundness:** 2 | **Presentation:** 3 | **Contribution:** 2

**Strengths:**
1. The paper identifies a particular limitation of RewardBench, a popular benchmark for evaluating reward models, in assessing mathematical reasoning and introduces a new benchmark that addresses this issue by including one-to-many comparison data.
2. The authors provide extensive experiments and analyses across different reward model types, including both proprietary and open models, and assess various performance metrics.

**Weaknesses:**
1. The benchmark comparison primarily involves RewardBench, which is designed to evaluate reward models more holistically across various domains. However, is the comparison in terms of mathematical reasoning appropriate, given that RewardMATH is specifically designed for this purpose? If RewardBench is indeed the most comprehensive eval set even for mathematical reasoning tasks prior to RewardMATH, it would be helpful to clarify this point.
2. The benchmark appears to lack cases where reward models must distinguish between correct solutions of varying quality, such as those missing reasoning steps. It is also unclear whether 500 samples is sufficient to cover diverse mathematical reasoning tasks.
3. Tables 1 and 2 report performance comparisons of various LLMs on RewardBench and RewardMATH. The results seem to merely suggest that the two benchmarks differ significantly. Can we conclude from these results that "high scores on RewardBench do not guarantee **robustness** in reward models"?

**Questions:**
Q. How much of the relatively poor results for RewardBench is due to the noisy annotations inherited from PRM800K, as mentioned in Sec. 3.1? In other words, could simply fixing these annotations significantly change the comparison?

**Flag for ethics review:** No ethics review needed. | **Rating:** 5 | **Confidence:** 3 | **Code of conduct:** Yes

---

**Response to Reviewer K97G [1/2]**

Dear Reviewer K97G,

First of all, we appreciate your constructive feedback on our work. We address the key issue raised by the reviewer in the comments below:
___
### **W1. Technical Contribution & Novelty**

As you mentioned, if we aimed to propose a new benchmark simply by changing the one-to-one comparisons in the existing benchmark (i.e. RewardBench) to one-to-many, we agree that this would be considered an incremental modification. However, what we aim to propose is not a new benchmark (i.e. RewardMATH), but rather insights into the future direction of reliable benchmarks for reward models. So, we would like to provide a brief motivation and summary of our work.

Despite many studies achieving sufficiently high scores on RewardBench, few have questioned the quality or evaluation methodology of RewardBench. Moreover, it has recently been recognized that models performing well on RewardBench do not necessarily deliver strong results when applied in an RLHF system. Therefore, we first highlight the issues with RewardBench (poor quality, susceptibility to reward hacking, and unreliable one-to-one comparisons) and argue for a benchmark design that employs multiple comparisons for a more reliable evaluation, using RewardMATH as an example.

The most notable aspect of RewardBench is the significant difference in step counts between chosen and rejected solutions, underscoring the importance of preventing reward hacking by ensuring that models do not exploit a preference for shorter-step solutions over solution correctness (Figure 4). We also emphasize that evaluating models through one-to-one comparisons based on isolated solutions risks misjudging the actual performance of the models, and we believe that many-to-many comparisons represent the ideal approach for a reward model benchmark. However, since gathering a variety of correct solutions demands substantial human resources, we adopted a one-to-many comparison instead (Section 3.2 and Limitation).
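To make the distinction concrete, here is a minimal sketch of how a one-to-many comparison can be scored; the scoring rule (the chosen solution must outscore every rejected one) and the toy reward values are illustrative assumptions of ours, not necessarily the paper's exact metric:

```python
def one_to_many_accuracy(scores):
    """scores: list of (chosen_score, [rejected_scores]) per problem.

    A problem counts as correct only if the chosen solution's reward
    exceeds the reward of every rejected solution. This illustrative
    rule is stricter than a single one-to-one comparison, where a
    lucky pairing could hide a weak reward model.
    """
    correct = sum(
        1 for chosen, rejected in scores
        if all(chosen > r for r in rejected)
    )
    return correct / len(scores)


# Toy rewards: the first problem's chosen solution loses to one of
# its rejected solutions, so only the second problem counts.
toy = [(0.9, [0.2, 0.95, 0.4]),
       (0.8, [0.1, 0.3, 0.5])]
print(one_to_many_accuracy(toy))  # 0.5
```

A one-to-one benchmark that happened to pair the first chosen solution with the 0.2-reward rejected solution would credit the model, while the one-to-many rule does not.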
Using RewardMATH, a benchmark representing this one-to-many design, we conducted a thorough analysis through the lens of reward overoptimization, demonstrating that a design involving multiple comparisons results in a significantly more reliable benchmark.

To summarize, we underscore the issues with RewardBench and demonstrate, through RewardMATH, that a benchmark design involving multiple comparisons is significantly more reliable. As a result, we argue that adopting many-to-many comparisons across all domains, not just mathematics, is the optimal path for constructing a reliable benchmark for reward models.
___
### **W2. Unclear Definition of Robustness**

In Section 2.3, we discussed robustness, stating, "we argue that the robustness of a reward model should be evaluated based on how effectively it provides signals from which a policy can learn". In other words, we consider a robust reward model to be one that is resistant to reward hacking, consistently assigning high scores to good answers and low scores to poor ones.

---

**Response to Reviewer 1Mfa [2/3]**

___
### **W2. Concerns about the lack of PPO experiments and its impact on benchmark validation**

Thank you for the insightful comments and for recognizing the extensive experiments conducted using Best-of-N (BoN) sampling. We understand the concern regarding the lack of PPO or DPO in our experimental setup and appreciate the opportunity to address this. Below, we address these concerns and clarify our experimental setup:

1. **Details of the synthetic setup for PPO:**
   In response to your question regarding the synthetic setup in Figure 5, we provide further details here. We trained on a 12K MATH dataset for 2000 steps and saved a total of 10 checkpoints at 200-step intervals. For each checkpoint, we computed the KL divergence, oracle reward, and gold reward. Following [1], the fitted dotted curves utilize the scaling laws proposed in [2], following the formula $R_{\mathrm{RL}}(d) = d(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}} \log d)$.
   Through the fitted dotted curve, we observed reward overoptimization relative to data scale, a phenomenon experimentally demonstrated in many studies. From the results of this experiment (Figure 5 and Table 3), we found that RewardBench does not reflect the degree of overoptimization.
2. **Challenges of PPO in a non-synthetic setup:**
   Many previous studies have used the responses of an SFT model to train the same pretrained model as the reward model to achieve stable RLHF (PPO) training [3-4]. In particular, [4] highlights that initializing the reward model with the same pretrained model helps prevent information mismatches with the policy model, contributing to a consistent and accurate reward signal. Additionally, [4] and [5] suggest that as the policy model improves, the data distribution shifts, and if the reward model is not exposed to this new distribution, its accuracy may be limited.
   In our case, the reward models we evaluate are trained on different backbone (i.e. pretrained) models and are also different from the policy model, making stable PPO training challenging in a non-synthetic setup. Indeed, when we attempted training with several reward models, the training process was highly unstable. For these reasons, it was difficult to perform comprehensive PPO experiments with various reward models.
   We have now clarified the challenges of PPO in a non-synthetic setup in Appendix B.4 of the updated PDF **([click to see the PDF](https://openreview.net/pdf?id=0er6aOyXUD))**.
3. **Correlation with downstream tasks other than BoN:**
   We understand the reviewer's concern regarding the need to verify the effectiveness of RewardMATH beyond BoN sampling, through methods such as DPO or PPO. As previously mentioned, due to the instability of PPO experiments in our setup, we focused on conducting experiments where the reward model can effectively provide learning signals.
   * **Preference data for DPO constructed using the reward model:** We created a preference dataset for DPO by selecting the response with the highest reward as the "chosen" sample and the response with the lowest reward as the "rejected" sample.

   In our experiments, we used MetaMATH-Mistral-7B as the SFT model and selected a 32K subset of the MetaMATH dataset as the training dataset, considering the short discussion (rebuttal) period. We performed n=32 sampling with the SFT model and removed instances that were entirely correct or incorrect to reduce noise and better assess whether the reward model provides meaningful learning signals. Finally, we obtained rewards from each reward model for a final dataset of 13.5K responses and conducted training with DPO.

   The table below presents the correlation between the results of the optimized policies on MATH500 and the benchmark results. The DPO results reconfirm that RewardMATH correlates more strongly with downstream performance than RewardBench.

| | RewardBench | RewardMATH |
| :---- | ----- | ----- |
| DPO with reward model | 0.156 | 0.725 |
| BoN | 0.187 | 0.902 |

**References**

[1] Rafailov, Rafael, et al. "Scaling laws for reward model overoptimization in direct alignment algorithms." arXiv preprint arXiv:2406.02900 (2024).

[2] Gao, Leo, John Schulman, and Jacob Hilton. "Scaling laws for reward model overoptimization." Proceedings of the 40th International Conference on Machine Learning. 2023.

[3] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022): 27730-27744.

[4] Touvron, Hugo, et al. "Llama 2: Open foundation and fine-tuned chat models." arXiv preprint arXiv:2307.09288 (2023).

[5] LeVine, Will, et al. "A Baseline Analysis of Reward Models' Ability To Accurately Analyze Foundation Models Under Distribution Shift." arXiv preprint arXiv:2311.14743 (2023).

---

**Sincere Request for Review of Our Responses, New Experiments, and Revised Paper**

Dear Reviewer bxC4,

Thank you again for your time and effort in providing your insightful feedback on our paper.

We have addressed your comments and added comprehensive details about MATH500 in Section 3 and Appendix B of the updated draft **([click to see the PDF](https://openreview.net/pdf?id=0er6aOyXUD))**.
If you have any remaining questions or require further clarification, we would be happy to address them before the time window closes.

Thank you so much for your time and valuable feedback!

Best regards,

The Authors of Paper 10281

---

**Response to Reviewer TTuB [2/2]**

___
### **W3.**

> Tables 1 and 2 report performance comparisons of various LLMs on RewardBench and RewardMATH. The results seem to merely suggest that the two benchmarks differ significantly. Can we conclude from these results that "high scores on RewardBench do not guarantee **robustness** in reward models"?

As mentioned earlier, both RewardBench and RewardMATH are constructed based on MATH500, and thus are not significantly different. Additionally, the rejected solutions in RewardBench were generated with an unaligned GPT-4, and GPT-family models were also used to produce the rejected solutions in RewardMATH. Consequently, the model rankings in RewardBench should not differ significantly from those in RewardMATH.
However, for example, Oasst-rm-2.1-pythia-1.4b ranks among the top 3 in RewardBench but falls to second-to-last in RewardMATH. This discrepancy suggests that models achieving high scores on RewardBench may not actually be robust.

___
### **Q1.**

> How much of the relatively poor results for RewardBench is due to the noisy annotations inherited from PRM800K, as mentioned in Sec. 3.1? In other words, could simply fixing these annotations significantly change the comparison?

We find this question intriguing and consider it a valuable point. Instead of manually inspecting and correcting all misannotations, we can infer their impact from the table below. Similar to Figure 4 and Table 12, the table below presents the correlation between the results of a one-to-one benchmark, comparing RewardBench's chosen solutions with each model's rejected solutions in RewardMATH, and the performance of the optimized policy on the downstream tasks. We believe that even if the misannotations in PRM800K were fully corrected, the results would remain similar to those in this table, as other critical issues, such as susceptibility to reward hacking and the limitation of one-to-one comparisons, would still persist.

Thanks to your insightful question, we believe we can make our work stronger, further supporting the importance of both reducing the possibility of reward hacking and employing multiple comparisons. From the table below, we can observe that significant representation gaps between chosen and rejected responses make reward models more susceptible to reward hacking. Moreover, regardless of which rejected solutions are used, the correlations remain substantially lower than those of one-to-many comparisons. This reconfirms the effectiveness of our design for a reliable benchmark.
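For reference, rank correlations like those we report can be computed with a simple pairwise count (Kendall's tau-a, which matches tau-b when there are no ties); the score vectors below are hypothetical placeholders, not our actual benchmark numbers:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.

    x, y: equal-length score lists, e.g. benchmark accuracy vs.
    downstream policy performance for each reward model. A pair of
    models is concordant when both metrics rank them the same way.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Hypothetical benchmark scores vs. downstream scores for 5 models;
# the two lists rank the models identically, so tau is 1.0.
bench = [0.70, 0.55, 0.80, 0.40, 0.65]
downstream = [0.30, 0.25, 0.35, 0.10, 0.28]
print(kendall_tau(bench, downstream))  # 1.0
```

A benchmark whose scores reorder the models relative to downstream performance drives tau toward 0 (or negative values, as for some rejected-solution sets in the table below).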
We have now incorporated the table below into Table 12 of the updated PDF **([click to see the PDF](https://openreview.net/pdf?id=0er6aOyXUD))**, with the detailed results provided in Appendix C.4.

| chosen (RewardBench) vs. | MetaMATH-Mistral-7B | | | | WizardMATH-7B-v1.1 | | |
| ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **rejected** | **MATH500** | **Gaokao** | **SAT** | | **MATH500** | **Gaokao** | **SAT** |
| GPT-4o-2024-05-13 | -0.174 | -0.157 | -0.245 | | -0.118 | -0.041 | -0.146 |
| GPT-3.5-turbo-0125 | 0.135 | 0.126 | 0.124 | | 0.264 | 0.256 | 0.247 |
| Claude-3-sonnet-20240229 | 0.234 | 0.247 | 0.143 | | 0.341 | 0.373 | 0.308 |
| Meta-LLaMA3-70B | 0.058 | 0.066 | 0.096 | | 0.165 | 0.185 | 0.236 |
| Mixtral-8x7B | 0.193 | 0.187 | 0.124 | | 0.291 | 0.317 | 0.236 |
| Gemma-2-27b-it | 0.008 | 0.011 | -0.022 | | 0.099 | 0.124 | 0.082 |
| DeepSeek-V2 | 0.292 | 0.313 | 0.286 | | 0.434 | 0.446 | 0.463 |
| Phi-3-medium | 0.074 | 0.071 | 0.069 | | 0.209 | 0.196 | 0.187 |
| Meta-LLama3-8B | 0.025 | 0.055 | 0.055 | | 0.159 | 0.160 | 0.225 |
| Qwen1.5-7B-chat | 0.316 | 0.330 | 0.259 | | 0.434 | 0.446 | 0.396 |
| Gemma-7b-it | 0.311 | 0.335 | 0.259 | | 0.439 | 0.463 | 0.434 |
| WizardMath-7B-v1.1 | 0.275 | 0.324 | 0.195 | | 0.429 | 0.446 | 0.341 |
| MetaMATH-Mistral-7B | 0.308 | 0.237 | 0.204 | | 0.341 | 0.397 | 0.269 |
| RewardMATH (random choice) | 0.162 | 0.170 | 0.107 | | 0.264 | 0.287 | 0.247 |

---

**Response to Reviewer BzTe [4/4]**

___
### **Q3.**
> What was the motivation behind different parts of the Synthetic Data experiment?

In the synthetic setup, we examined how well performance on the benchmark estimates reward overoptimization within a controlled environment.
Following \\\\[13\\\\], preference data was labeled by a gold RM serving as a substitute for human annotators, and this data was used to train proxy RMs with varying amounts of training data, ranging from 8K to 65K. Table 3 presents the scores of proxy RMs on RewardBench and RewardMATH, and Figure 5 illustrates how the gold reward changes for each proxy RM as KL divergence increases. In conclusion, this synthetic setup demonstrates, under controlled conditions with all other variables held constant, that scores on RewardMATH offer a more accurate estimate of reward overoptimization.\\n\\n**References**\\n\\n\\\\[13\\\\] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" Proceedings of the 40th International Conference on Machine Learning. 2023\\\\.\\n\\n&nbsp;\\n> What was the reasoning behind using the MetaMATH dataset? Why was only 80K out of the 155K data points augmented from MATH used for training?\\n\\nWe appreciate your interest in the details of our work. We agree that this information is important, thus we detailed it in Appendix B.4. \\nWe can not train the policy model solely on the MATH dataset to generate synthetic preference data, as the limited amount of MATH was not sufficient for the policy to learn effectively, preventing the generation of meaningful synthetic data. So, we used 80K of the 155K MetaMATH dataset to train the policy model, with the remaining 75K used to generate synthetic preference data that reflects the policy\\u2019s preferences. We generated 16 samples per problem using the policy model and, after excluding samples that could not form preference pairs, created a final synthetic preference dataset of 65K pairs.\\n\\n___\\n### **Q4.**\\n> Were there steps taken to validate the said incorrect solutions are indeed incorrect?\\n\\nYes, we were also very mindful of the point you raised. Thus, we provide the details in Appendix B.1. 
\\nFirst, we filtered out the solutions with correct answers and selected only those with incorrect answers. As shown in the solution examples in Figure 1a, the final step of each solution includes the reasoning that leads to the answer. So, an incorrect final answer generally indicates that the solution is incorrect. However, the final answer is parsed from the solution using \\\\[14\\\\], which can sometimes lead to misclassifying the correct solutions as incorrect due to parsing errors, leading to their unintended inclusion. Hence, we conducted a manual inspection of all chosen and rejected solutions.\\n\\n**References**\\n\\n\\\\[14\\\\] Boning Zhang et al. MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit. arXiv preprint arXiv:2404.13925 (2024)\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer BzTe,\\n\\nThank you for your suggestions that could further enhance our work. We believe your comments addressed potential concerns that readers might have, and we have updated the draft accordingly **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**. The updates made to the draft are as follows:\\n\\n- **Section 4.2 & Appendix C.3** \\n In Section 4.2, we added a short explanation about the self-enhancement bias in LLM judges and further clarification in Appendix C.3. \\n- **Section 5.3** \\n We added a footnote regarding the explanation of our key insights and application to other domains. \\n- **Appendix A.1 (Limitation)** \\n We provide further explanation on the considerations that should be carefully considered when applying our key insights to other domains in the future, as well as self-enhancement bias and future directions. 
- **Appendix A.2**
  We elaborate on the reasons for selecting the mathematics domain, provide a summary of our key insights, explain how these insights can be applied to other domains, and give an example of their application in the chat domain.
- **Appendix C.5**
  Although it is not part of your suggestions, we additionally examine the correlations between performance on the benchmark and the scores of the optimized policy using BoN sampling on GSM8K.

Thank you once again for your time and thoughtful feedback, and for engaging with our submission.

Best Regards,

The Authors of Paper 10281

---

**Thank you Authors**

Dear Authors,

Thank you for your detailed rebuttal and your detailed explanations of my questions and concerns.

Some remaining concerns I have regarding evaluations:

> As previously mentioned, MATH500 is a subset of the MATH dataset, and our experiments already include evaluations on this dataset.

Why not just use the full MATH dataset, since this is the standard practice, instead of just the MATH500 subset?

> GSM8K is a relatively simple dataset on which many studies have already achieved high scores, potentially limiting the meaningful insights gained from further evaluation.

I see that the classifier-based reward models (in Table 2) as well as the base models (in Figure 4) used in the paper are mostly on the scale of under 8B parameters. On this scale, the models usually are not "achieving high scores" on GSM8K (i.e. <80%). So I still think it's valuable to include those results.

> Given GSM8K's widespread use and its potential inclusion in training data, it is challenging to classify it definitively as either in-distribution or out-of-distribution.

I don't know if it's *that* important whether we can give a hard classification of its "in" or "out of distribution" status. Rather, I think it would be more important to include both of these evals because they are the most commonly used benchmarks for math-related tasks.

---

**Gentle Reminder**

Dear Reviewer 1Mfa,

Thank you again for your thoughtful feedback on our paper. Your feedback has been invaluable in improving our work. As today is the final day for discussion, we would be delighted to provide any clarifications or further insights if needed. Please let us know if there are any remaining concerns we can address.

Best regards,

The Authors of Paper 10281

---

**Response to Reviewer 1Mfa [1/3]**

Dear Reviewer 1Mfa,

Thank you for your deep understanding of our work, and we appreciate your suggestions that can make our work stronger. We address the key concern raised in the review below.

___
### **W1. Expansion to additional domains**

We sincerely thank you for your suggestion, which can further enhance our work. However, it seems there are some misunderstandings about what we intended to convey. If our main goal were to propose a well-crafted new benchmark, focusing solely on mathematics might limit the scope of the research; however, our goal is to provide insights into future directions for constructing reliable benchmarks for reward models. It is fairly intuitive that a reliable benchmark should not be vulnerable to reward hacking and that conducting multiple comparisons, rather than one-to-one comparisons, provides a more reliable evaluation of reward models. So, we believe it is important to thoroughly validate our design through at least one specific domain that allows for in-depth experiments and analysis. We outline the reasons behind choosing the mathematics domain for this work below:

* **One of the tasks where reward models are most extensively used is mathematical reasoning.** Since the success of RLHF, many studies have utilized reward models extensively.
In particular, mathematical reasoning tasks have increasingly employed reward models both during training to enhance reasoning capabilities and during inference using reward-based techniques such as Best-of-N (BoN) sampling or Monte Carlo Tree Search (MCTS). This justifies the analysis of reward models within the context of mathematical reasoning tasks. \\n* **Mathematical reasoning includes a clear human preference.** By selecting the math domain, where human preferences can be relatively easily defined by correctness, we were able to focus more effectively on the analysis.\"}", "{\"title\": \"Response to Reviewer BzTe [2/4]\", \"comment\": \"___\\n### **W3. Comprehensiveness**\\n\\n> \\u2026 scope is notably narrow, focusing solely on evaluating reward models' performance on mathematical problems \\u2026\\n\\nThank you for highlighting these concerns. However, there seems to be a slight misunderstanding regarding our work, so we would like to clarify these points. First, we want to make it clear that our work does not aim to introduce a new, meticulously designed benchmark but rather to provide insights into the future direction for developing reliable benchmarks for reward models.\\n\\nSince the success of RLHF, research utilizing reward models has been steadily growing. In this context, we observed that reward models are increasingly applied during both training and inference-time for reasoning tasks, particularly to enhance mathematical reasoning through Process Reward Models (PRM) or similar approaches \\\\[5-8\\\\]. Despite their importance, there has been limited analysis of reward models themselves. Furthermore, mathematical reasoning tasks allow human preferences to be clearly defined (i.e. correctness), enabling more focused and in-depth analyses. 
For these reasons, we chose to analyze reward models specifically within the context of mathematical reasoning.\\n\\nAt the beginning of our research, the math subset of RewardBench was the only benchmark available for evaluating reward models on mathematical reasoning tasks. However, as we used this benchmark, we identified several limitations: (1) a significant distribution gap between chosen and rejected responses, (2) unexpectedly high performance for certain reward models, and (3) the potential to fail in accurately evaluating a model\\u2019s actual capabilities due to the limitations of one-to-one comparisons. This observation led us to consider how to design a reliable benchmark and what analytical perspectives could be used to validate it.\\n\\nTo address these issues, we found that reducing distributional discrepancies helps prevent the possibility of reward hacking, while one-to-many comparisons provide more reliable results. Additionally, by analyzing through the lens of reward overoptimization, we confirmed why reward models that perform well on RewardBench (i.e., the math subset) may still lack robustness. We hope our findings offer valuable insights to the research community, contributing to the development of more reliable benchmarks for reward models in the future. \\n\\n&nbsp;\\n> \\u2026 primarily concentrates on reward over-optimization, overlooking other potential vulnerabilities ... Additionally, \\u2026 comparing one correct solution against multiple incorrect ones limits its thoroughness. \\n\\nApologies, but we are unclear about the specifics of these comments. Our confusion stems from not understanding what other potential vulnerabilities we may have overlooked, as well as why comparing a single correct solution with multiple incorrect solutions would limit the thoroughness of our approach. Thus, we kindly request that you provide more detailed explanations. We eagerly anticipate further clarification or discussion on this matter.
\\n \\n&nbsp;\\n> the author's assumption that MATH500 adequately represents mathematical reasoning tasks may be oversimplified.\\n\\nYes, we agree that the problems in MATH500 do not fully represent all mathematical reasoning tasks. However, as previously mentioned, defining the scope of mathematics lies beyond the focus of our work, which aims to explore the future direction of reliable benchmark for reward models. Moreover, since RewardBench was constructed based on MATH500, we also utilize MATH500 as the foundation for RewardMATH to ensure clarity in our experiments and analyses.\\n\\n&nbsp;\\n\\n**References**\\n\\n\\\\[5\\\\] Lightman, Hunter, et al. \\\"Let's Verify Step by Step.\\\" The Twelfth International Conference on Learning Representations.\\n\\n\\\\[6\\\\] Luo, Liangchen, et al. \\\"Improve Mathematical Reasoning in Language Models by Automated Process Supervision.\\\" arXiv preprint arXiv:2406.06592 (2024).\\n\\n\\\\[7\\\\] Wang, Chaojie, et al. \\\"Q\\\\*: Improving multi-step reasoning for llms with deliberative planning.\\\" arXiv preprint arXiv:2406.14283 (2024).\\n\\n\\\\[8\\\\] Zhang, Dan, et al. \\\"Rest-mcts\\\\*: Llm self-training via process reward guided tree search.\\\" arXiv preprint arXiv:2406.03816 (2024).\"}", "{\"summary\": \"The paper proposes a new benchmark, REWARDMATH, to improve the robustness evaluation of reward models in mathematical reasoning tasks. It highlights limitations in the existing RewardBench benchmark, which relies on single comparisons between chosen and rejected solutions, potentially leading to reward hacking and misjudgments of model robustness. REWARDMATH addresses this by using one-to-many comparisons with multiple incorrect solutions to better capture robustness and reduce the risk of reward over-optimization. Experimental results indicate that scores on REWARDMATH strongly correlate with policy optimization outcomes and provide a more reliable measure of reward model robustness compared to RewardBench. 
This benchmark aims to enhance the development and reliability of RLHF systems, with the findings underscoring the potential of REWARDMATH to serve as a trustworthy evaluation tool in this domain\\u200b.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. **Thoroughness**: The paper presents detailed implementations, including training hyperparameters and experimental protocols. This ensures that other researchers can accurately reproduce the experiments and validate the findings.\\n2. **Relevance**: This work addresses a critical gap in the field by focusing on reward model evaluation, a crucial area of research that has significant implications for the development of more reliable AI systems.\\n3. **Motivation**: The paper presents a compelling critique of the existing Reward Bench evaluation metric, establishing a strong foundation for their work. The authors make a persuasive case for developing benchmarks that minimize over-optimization risks, backing their arguments with experimental evidence. This dual focus on improving metrics while addressing practical concerns demonstrates clear motivation for the research.\", \"weaknesses\": \"1. **Clarity**: The paper is generally well written, however, it has some clarity issues, especially in section 5, which is hard to follow. Clarification questions are asked in the question section, marked with [Clarification]. The authors should address those questions.\\n2. **Benchmark Biases**: The paper has several biases, raising concerns on the claimed robustness and reliability. Examples and comments below:\\n\\n> Line 206: Hence, we first convert the human-annotated solutions from MATH500 into step-by-step machine-generated solutions. We prompt GPT-4, using 4 carefully crafted exemplars for each math subject as part of the prompt.\\n\\nAll correct solutions in the benchmark are generated via GPT-4, raising concerns regarding biases towards GPT-series models. 
Even though the authors manually inspect the solutions, the solutions were still mainly generated using GPT-4. Notably, the authors observe LLM judges from the GPT-4 Series to perform significantly higher than other models (Line 286), which is likely due to this oversight (since it is known LLM judges tend to bias their own response, eg. GPT-4 family judge favors responses from GPT-4 family). The authors should use a diverse set of LLMs to curate the correct solutions to avoid potential biases.\\n\\n> Line 805: Secondly, we instruct GPT-4-0125-preview to select a specific step from the correct solution, transform it into an erroneous step, and then prompt again to continue generating the solutions from the erroneous step.\\n\\nSimilar to the previous point, employing GPT-4-0125-preview as editor to insert errors into other LLMs\\u2019 answers may introduce biases. Additional validation is needed to ensure the benchmark does not exhibit any bias towards GPT family models. \\n\\n> Line 402: We assume Internlm2-7B-reward, which performs well on both RewardBench and REWARDMATH, as the gold RM.\\n\\nThe use of Internlm2-7b-reward as the gold standard lacks sufficient justification and raises several concerns about experimental validity. The author relies primarily on performance metrics from RewardBench and REWARDMATH, but this approach is problematic for multiple reasons. First, the authors themselves criticized RewardBench for containing incorrect ground truth data and failing to adequately assess reward models. Second, using REWARDMATH as a benchmark is circular since it's the very dataset being studied. High scores on these benchmarks alone don't necessarily indicate that a reward model can reliably approximate human preferences. 
To establish Internlm2-7b-reward as a legitimate gold standard, the author should conduct additional validation studies specifically demonstrating its ability to align with human judgment on mathematical tasks.\\n\\n> Line 426: We find that proxy reward models trained on smaller datasets reach peak rewards at lower KL divergences, indicating faster over-optimization.\\n\\nThe author assumes KL divergence adequately captures optimization degree without proper justification. KL may not account for other important aspects of policy change. Further study will strengthen the experimental results. \\n\\n3. **Comprehensiveness**: This paper's scope is notably narrow, focusing solely on evaluating reward models' performance on mathematical problems. While it attempts to address limitations in a small subset of the Reward Bench dataset, its improvements remain constrained. The study primarily concentrates on reward over-optimization, overlooking other potential vulnerabilities in reward model benchmarking. Additionally, the benchmark's methodology of comparing one correct solution against multiple incorrect ones limits its thoroughness. Furthermore, the author's assumption that MATH500 adequately represents mathematical reasoning tasks may be oversimplified. These limitations collectively suggest a need for a more comprehensive approach to reward model evaluation.\", \"questions\": \"1. [Clarification] Are the prompts used to evaluate the LLM judge on REWARDMATH the same as the prompt used to evaluate the LLM judge on the Reward Bench? Different prompting strategy (eg. difference system prompt) raises concerns regarding fair comparison between the two benchmarks.\\n2. [Clarification] What is MATH500? The author did not mention the details behind this dataset which they used for their benchmark. Were there any steps taken to ensure the dataset is not contaminated with the models being evaluated? 
If the dataset is used during training on any of the evaluated RMs, the benchmark\\u2019s reliability will be undermined. \\n3. [Clarification] What was the motivation behind different parts of the Synthetic Data experiment? What was the reasoning behind using the MetaMATH dataset? Why was only 80K out of the 155K data points augmented from MATH used for training? \\n4. The authors did not mention how the incorrect solutions are ensured to be actually incorrect. Were there steps taken to validate that the said incorrect solutions are indeed incorrect?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer BzTe\", \"comment\": \"I thank the authors for making these changes for the revision, I think it helps readability, and I like the discussion in the appendix. I have raised my score by 1 point.\"}", "{\"comment\": \"Thank you for your response!\\n\\n### **\\\\[W1\\\\]**\\n\\n> What was the detail behind this experiment? It is unclear from the paper which dataset did the experiments of self-enhancement bias in LLM-as-Judge utilized. How was the correctness determined? Was it MATH? If so there is a distribution difference between the experiment's dataset versus the benchmark dataset.\\n\\nThank you for your thoughtful question, and we apologize for any confusion caused by the lack of detailed explanation regarding the experiments on self-enhancement bias in LLM-as-Judge. We conducted these experiments with two research questions to analyze self-enhancement bias in LLM-as-Judge:\\n\\n1. **Does the model prefer its own incorrect answers over correct answers?** In this experiment, we compared the model\\u2019s own rejected (incorrect) solutions from RewardMATH with correct solutions from RewardMATH.
Note that this experiment involves not only preference bias but also the model's judgment ability, which must be considered when interpreting the results. \\n2. **When given two correct solutions, which one does the model prefer?** For this experiment, we collected correct solutions from the LLM-as-a-Judge model itself across 100 problems in MATH500 where all LLMs we used generated correct solutions. We examined the model's preference between its own correct solutions and correct solutions generated by other models. Since both solutions are correct, we evaluated them under two settings: (1) when a tie is an available option (w/ tie), and (2) when a tie is not an available option (w/o tie), to analyze which solution the model prefers more strongly in each setting.\\n\\nFor the first experiment, we used a subset of RewardMATH's dataset that aligns with the rejected and correct solutions relevant to the LLM-as-a-Judge. For the second experiment, we collected correct solutions directly from the respective models, which is different from RewardMATH. All problems in these experiments are part of the MATH500 dataset.\\n\\nFurthermore, we conducted an additional experiment to address concerns about potential bias in the GPT-4 judge, as the correct solutions in RewardMATH were modified into machine-generated solutions using GPT-4, not GPT-4o. Our previous experiments of self-enhancement bias were evaluated by the GPT-4o judge; therefore, to ensure a more thorough analysis, we further validate using the GPT-4 judge. Based on the second research question mentioned earlier, we analyze which solution is preferred: the chosen solution from RewardMATH or the correct solutions from each model. The table below demonstrates that GPT-4 does not exhibit a bias toward preferring its own solutions. 
The results indicate that there is no bias, as the solutions were not directly generated by GPT-4 but were instead modified versions of human solutions in MATH dataset, which demonstrates that the benchmark is free from potential bias and affirms the fairness of the experiments.\\n\\nWe have also incorporated the table below into Table 8 of the updated draft **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\n\\n| | GPT-4o | GPT-3.5-turbo | Llama3-70B | Llama3-8B | Claude-3-Sonnet | Gemma2-27B |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| w/o tie (self / others) | 30.09 / 69.91 | 33.03 / 66.97 | 33.63 / 66.37 | 27.35 / 72.65 | 24.35 / 75.65 | 29.2 / 70.8 |\\n| w/ tie (self / tie / others) | 35.79 / 18.95 / 45.26 | 37.89 / 14.74 / 47.37 | 40.0 / 18.95 / 41.05 | 33.68 / 23.16 / 43.16 | 29.47 / 21.05 / 49.47 | 34.74 / 18.95 / 46.32 |\"}", "{\"title\": \"Sincere Request for Review of Our Responses, New Experiments, and Revised Paper\", \"comment\": \"Dear Reviewer BzTe,\\n\\nThank you again for your time and effort to provide your insightful feedback on our paper. \\n\\nWe have addressed your comments and added comprehensive details about MATH500 in Section 3 and Appendix B of the updated draft **([click to see the pdf \\\\[link\\\\]](https://openreview.net/pdf?id=0er6aOyXUD))**.\\nIf you have any remaining questions or require further clarification, we would be happy to address them before the time window closes.\\n\\nThank you so much for your time and valuable feedback!\\n\\nBest regards,\\n\\nThe Authors of Paper 10281\"}" ] }
0eRJRbVG95
Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference
[ "Hao Yin", "Guangzong Si", "Zilei Wang" ]
Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. In this paper, a shift in the dominant flow of visual information is uncovered: (1) in shallow layers, strong interactions are observed between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to optimize semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference. Code is released at https://anonymous.4open.science/r/HiMAP.
[ "Multimodal Large Language Models", "Visual Information Flow", "Inference Acceleration" ]
https://openreview.net/pdf?id=0eRJRbVG95
https://openreview.net/forum?id=0eRJRbVG95
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rEL4hsk2xY", "jMO4gByAqy", "fmH56kifEM", "aETr98jzKE", "Y18LZ4IUfG", "OOJIVrQN2S" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730793818895, 1730718261445, 1730860016066, 1730950934480, 1731658229004, 1730293776712 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9526/Reviewer_Qbnk" ], [ "ICLR.cc/2025/Conference/Submission9526/Reviewer_kQ13" ], [ "ICLR.cc/2025/Conference/Submission9526/Reviewer_nG12" ], [ "ICLR.cc/2025/Conference/Submission9526/Reviewer_yfzG" ], [ "ICLR.cc/2025/Conference/Submission9526/Authors" ], [ "ICLR.cc/2025/Conference/Submission9526/Reviewer_Vzxh" ] ], "structured_content_str": [ "{\"summary\": \"This paper examines the significance of image tokens in different layers of MLLMs, suggesting that image tokens tend to facilitate modality injection in shallow layers and engage in more internal interactions in deeper layers. Based on this analysis, the paper proposes HiMAP, an algorithm for dynamically pruning image tokens to accelerate inference, which has been validated for its effectiveness across various multimodal tasks.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Universality: As a universal image token pruning algorithm, HiMAP can be easily applied to different architectures of MLLMs to achieve accelerated inference.\\n2. Usability: The method is straightforward, with low transfer costs.\\n3. The saliency scores and dynamic pruning approach used in the paper can provide inspiration for the field of accelerated inference.\", \"weaknesses\": \"1. The paper lacks line numbers, which seems to deviate from ICLR's submission standards and may hinder reviewers in accurately pinpointing issues within the document.\\n2. The experiments are not comprehensive enough, with validation only on a limited number of tasks. 
As a universal solution, it should be tested on common multimodal benchmarks, such as LLava-Bench, MMBench, etc.\", \"questions\": \"1. As mentioned in Weakness #1.\\n2. In Section 2.2, the authors derive two main factors based on insights 1) \\\"As the model depth exceeds...\\\" and 2) \\\"Instruction tokens exert the most...\\\". The first conclusion is undoubtedly correct, but the second remains questionable. Although much work has validated the redundancy of image tokens in MLLMs, the two insights provided in this paper do not directly lead to this conclusion. The \\\"limited impact of image tokens\\\" mentioned in the paper only supports the first conclusion, while the argument for the second conclusion would require a computation of saliency for each image token (assuming a length of N), and if the authors conducted this experiment, they would find that only some image tokens have high significance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes that in MLLMs, image tokens primarily convey visual information to instruction tokens in shallow layers, while deeper layers consolidate the remaining visual data. Based on this insight, a plug-and-play visual pruning method, HiMAP, is proposed to reduce the computational costs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"From the novel perspective of information flow, the authors have analyzed the fusion patterns of visual features in MLLMs. 
Building upon this analysis, they proposed an adaptive approach for visual redundancy elimination.\", \"The structural design of HiMAP is intuitive and demonstrates robust performance across a range of tasks, exemplified by image captioning and VQA.\", \"The article is highly readable, featuring a well-defined and clear structure.\"], \"weaknesses\": [\"In analyzing the information flow between visual tokens and textual tokens, it is essential to thoroughly examine the flow in both directions. Hypothesis H1 is valid only if it is determined that the primary flow of information occurs from the visual modality to the textual modality, rather than in the opposite direction. This requires conducting an experiment to compare the magnitudes of $S_{vt}$ in Equation 6 with $S_{tv}$.\", \"Furthermore, merely showing a decline in performance by restricting the interaction between image tokens and instruction tokens in shallow layers does not sufficiently support Hypothesis H1. It is essential to complement this with an experiment that specifically limits the flow of intra-visual information within shallow layers (the current IMG2RND experiment in Figure 4 is not direct). Only when the resulting performance degradation is considerably less pronounced can Hypothesis H1 be adequately substantiated.\"], \"questions\": [\"Does the type of task potentially influence the conclusion regarding the minimal importance of visual tokens, which account for only 0.03% of the significance of textual tokens? For example, the proportion of visual tokens may considerably decrease in multi-turn dialogue tasks. At the same time, their relative significance could increase due to the normalization of sequence length reflected in Equations 2, 3, and 4.\", \"The insight presented in line 7 on page 4 looks\\u00a0weird. 
If the contribution of tokens from deeper layers to response prediction is low, why not leverage tokens from the shallow layer with the most significant contribution to generating responses? In this reviewer's opinion, only the comparison within each layer in Figure 2 carries practical significance.\", \"The reviewer suggests evaluating the performance of HiMAP against the baseline on fine-grained perception tasks, such as document understanding and OCR (e.g., Chartqa[1] and Docvqa[2]). This would provide a more solid demonstration of HiMAP's efficacy in reducing redundant image tokens.\", \"[1] Chartqa: A benchmark for question answering about charts with visual and logical reasoning. \\\\\", \"[2] Docvqa: A dataset for vqa on document images.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper identifies the minimal role of image tokens in MLLM predictions, uncovers patterns in visual-textual interactions, introduces HiMAP as a pruning technique to reduce inference latency without compromising performance, and demonstrates its effectiveness across diverse vision-language tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, with clear explanations and helpful visuals.\\n\\n2. This paper introduces intriguing hypotheses and includes extensive, detailed studies to support them.\\n\\n3. Extensive experiments confirm HiMAP\\u2019s effectiveness, showing reduced computational costs while preserving performance.\", \"weaknesses\": \"1. Lack of ablation studies. For example, additional experiments could be included to examine the impact of various pruning strategies on the model and to assess the effects of different hyperparameter settings, such as K1, K2 and the ratio.\\n\\n2. 
The authors might consider including additional benchmarks, such as MME and AI2D, and presenting fine-grained performance scores. Additionally, it would be helpful to include metrics such as GPU memory and total time in the comparisons to provide a more comprehensive evaluation.\", \"questions\": \"I noticed that VTM suggests that visual tokens are not essential in the deeper layers of MLLMs and strategically withdraws them at a certain layer. I'm very curious about the advantages of HiMAP compared to VTM.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper dives into how multimodal large language models (MLLMs) process and utilize visual information. Based on the widely used saliency technique for interpretability, information flow among different tokens across different layers is analyzed. The authors find that visual information injection dominates in shallow layers while intra-visual aggregation dominates in deeper layers. Finally, hierarchical image token pruning is proposed to prune at both shallow and deep layers with layer-specific criteria.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well written and easy to understand. Figures 1 and 6 are intuitive for understanding the overall framework.\\n2.\\tThe saliency technique used for analyzing the information flow among various tokens is interesting and intuitive.
The conclusion that visual information injection dominates in shallow layers while intra-visual aggregation dominates in deeper layers makes sense.\\n3.\\tExperimental results demonstrate the effectiveness of the proposed method to some extent.\", \"weaknesses\": \"1.\\tThe phenomenon analyzed in the paper is not surprising, and previous works [1][2] have pointed out similar findings: that information from vision tokens has migrated to the following text tokens within the first few layers of MLLMs. Thus, I think this paper has limited novelty, as it employs a commonly used technique to analyze a phenomenon that has already been identified.\\n2.\\tI wonder how the parameters K1 and K2 are determined. For different datasets and tasks, the parameters may be different. Directly setting K1 and K2 to a pre-defined value may not be suitable. Could K1 and K2 be dynamically adjusted based on the input samples?\\n3.\\tThe evaluation datasets used in the paper are quite limited. I suggest the authors evaluate on other commonly used datasets, especially OCR-related or fine-grained datasets, to demonstrate the effectiveness, e.g., TextVQA, GQA, DocVQA, ChartQA, SEED-Bench. For the efficiency evaluation, I suggest the authors include inference time and GPU memory.\\n4.\\tTwo different criteria are used in shallow and deep layers. I wonder about the performance if the same criterion is used. If the performance is similar, the analysis of different information flows in shallow and deep layers is not very convincing.
\\n\\n[1] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-PLay Acceleration for VLLM Inference.\\n\\n[2] BOOSTING MULTIMODAL LARGE LANGUAGE MODELS WITH VISUAL TOKENS WITHDRAWAL FOR RAPID INFERENCE\", \"questions\": \"Please refer to the 'weaknesses' part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this paper, authors begin by studying how visual tokens interact in MLLMs and observe that 1) image tokens interact strongly with text instruction tokens to form cross-modal representations in shallow layers; 2) image tokens aggregate remaining visual information in deeper layers. Upon this, they propose a token pruning inference strategy, HiMAP, for MLLM, by selecting the most important image tokens by image-text-attention scores in shallow layers and image-self-attention scores in deeper layers.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1) The paper is well motivated, authors design the HiMAP strategy corresponding to the behavior of how visual information is utilized in MLLM layers.\\n\\n2) HiMAP manages to reduce the computational costs by approximately 65% without sacrificing performance.\", \"weaknesses\": \"1) The ablation studies are insufficient, e.g., different choices of K1, K2 , R1, R2; ablation for using different importance criteria on shallow layers and deep layers.\\n\\n2) The finding that LLMs may well process visual tokens in the early layers has already been proposed in previous works[1-3]. The stagewise token pruning strategy has also been proposed for efficient MLLM [3]. 
Consequently, the novelty of this paper is somewhat limited.\\n\\n3) More benchmarks for MLLM performance evaluation should be included to demonstrate the effectiveness of HiMAP, e.g., the widely used GQA, MME, MM-Vet, VQAv2, etc.\\nThe paper should be drafted with line numbers on the page, and the writing of the paper should be improved.\\n\\n\\n[1] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Acceleration for VLLM Inference, in ECCV24.\\n\\n[2] DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs, in NeurIPS24.\\n\\n[3] LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression, in NeurIPS24.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0eMsrRMmCw
Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM
[ "Zheng Wei Lim", "Nitish Gupta", "Honglin Yu", "Trevor Cohn" ]
Multilingual large language models (LLMs) are great translators, but this is largely limited to high-resource languages. For many LLMs, translating in and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a postediting one, and seek to harness the LLM’s reasoning capability with auxiliary translation candidates, from which the model is required to assess the input quality, align the semantics cross-lingually, copy from relevant inputs and override instances that are incorrect. Our experiments on En-XX translations over the Flores-200 dataset show LLMs finetuned against Mufu-style prompts are robust to poor quality auxiliary translation candidates, achieving performance superior to NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining on average 3.1 chrF improvement over finetune-only baseline in low-resource translations.
[ "translation", "low-resource", "large language model" ]
Accept (Poster)
https://openreview.net/pdf?id=0eMsrRMmCw
https://openreview.net/forum?id=0eMsrRMmCw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vVAU8ml6Hp", "t2n7UcV7s9", "rtUosudebR", "nxVQmCDWFc", "mbUUnqK0A3", "jjsS6R3aXd", "iqbYhV2PPZ", "cCSebIqhkS", "aO9luCAM2X", "GCcnGvE7Aw", "FZVEb6rgDR", "8UeDZ9p95b" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733147214269, 1731729208382, 1731729176093, 1732302202190, 1734600316423, 1730656155353, 1730313175007, 1737523443648, 1731729128764, 1732331140755, 1730719258092, 1732280967535 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_uix1" ], [ "ICLR.cc/2025/Conference/Submission1255/Authors" ], [ "ICLR.cc/2025/Conference/Submission1255/Authors" ], [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_uix1" ], [ "ICLR.cc/2025/Conference/Submission1255/Area_Chair_nnPe" ], [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_Xmqh" ], [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_uix1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1255/Authors" ], [ "ICLR.cc/2025/Conference/Submission1255/Authors" ], [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_84wT" ], [ "ICLR.cc/2025/Conference/Submission1255/Reviewer_Xmqh" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your clarification. After carefully going through the paper again, I appreciate the novel approach, but I have some reservations about the latency-accuracy trade-offs. While the concept is innovative, I believe there might be more promising areas to explore, such as developing more distilled models that could potentially address these concerns.\\n\\nAt this stage, I'm inclined to keep the scores same. I look forward to potential future improvements in the approach. 
Thank you and all the best!\"}", "{\"title\": \"We will consider expanding our analysis of multi-head attention to more languages, and improve cross-domain adaptability with 5-shot prompting.\", \"comment\": \"Thanks for the positive feedback.\\n\\n1. Our analyses include 201 languages spoken in 19 regions (e.g., Middle Africa, Central Asia, Northern Europe, etc.), of which 113 languages are considered low- and very-low-resource. While this is only a fraction of the 7000+ languages in the world, our setup is unfortunately limited by the availability of high-quality parallel datasets in the other languages. However, we will consider expanding the set of languages in our analysis of self-attention (Section 4.2), and including in the final draft analyses for other languages analogous to Figure 3 in the Appendix.\\n\\n2. We highlighted the contributions of auxiliary languages by comparing mufu5 against mufu5hrl, which consists of 5 HRLs (Dutch, Russian, French, Chinese and Spanish) as auxiliary languages; and mufu5tr, which removes the postediting target in the context. We showed mufu5 to be superior to both mufu5hrl and mufu5tr, indicating the importance of the relevance of auxiliary languages to the target language and postediting candidates in the context. In Section 4.2, we further corroborated the impact of auxiliary candidates, showing mufu finetuned models to be capable of inferring correct translations from relevant languages by cross-lingual lexical alignment in multihead attention. Some languages are missing in URIEL; we therefore include random auxiliary languages in context. Our preliminary analysis with controlled language resource levels shows target languages with related auxiliary candidates to have on average a higher chrF improvement ratio ((chrF_mufu - chrF_baseline) / chrF_baseline) across models and mufu{5, 10, 20}. 
We will consider adding in the final manuscript an ablation with the inclusion of random auxiliary candidates for all target languages if required.\\n\\n3. Thanks for this suggestion. We will report in the final manuscript the translation performance of the distilled models on NTREX with 5-shot prompting to improve cross-domain adaptability.\"}", "{\"title\": \"We now include mufu20+5hrl in ablation and report BLEU scores in the appendix.\", \"comment\": \"Thanks for the review and suggestions.\\n1. We reported the initial prompt selection in Appendix A.1\\u2014unfortunately we are unable to move the description to the main text due to page limit. We have tried mufu20+5hrl, which includes 5 HRLs candidates in addition to the same set of auxiliary languages as mufu20, and found it to be less performant than the latter. This result is now reported in Table 2. We are also happy to consider any other suggestions on the mix of auxiliary languages for further ablation studies. \\n\\n2. We primarily report our results in chrF rather than BLEU as the latter heavily relies on tokenization that is underdeveloped for many low-resource languages. Nonetheless, we now report BLEU scores in Appendix A.4, which are consistent with the positive results shown in Table 2 in the main text. Prior to mufu finetuning, PaLM2 XXS\\u2013NTL has been continued-pretrained on a corpora derived from Next-Thousand-Language effort (Caswell et al., 2020), which contain monolingual and parallel sentences in 1000+ languages. This results in significant improvement of translation performance when the model is finetuned with mufu prompts, as compared to PaLM2 XXS. 
Given the gap in performance, we speculate further improvement as well in other models with similar monolingual training.\"}", "{\"comment\": [\"Thank you for your comments, I have some questions from your rebuttal:\", \"You have mentioned that PaLM2 XXS is comparable to NLLB 1.3B, however, performance of PaLM2 XXS (and PaLM2 XS) is lower than NLLB 1.3B. Only when you pretrained it on corpora derived from NTL, some gains were observed.\", \"Thank you for throwing more light on Win% vs teacher. It is now much clear to me.\", \"\\\"However, improving high-quality translations in HRLs is harder and requires the student model to also learn the subtle differences between model- and human-generated output.\\\" do you have any specific paper to support your claim?\"]}", "{\"metareview\": \"The submission introduces \\\"Mufu\\\", a method for improving low-resource language translation using a multilingual fused learning approach, tested on large language models (LLMs). The methodology turns translation tasks into post-editing tasks, enhancing translation quality by leveraging the reasoning capabilities of LLMs while using auxiliary translation candidates. Experimental results show that Mufu-style prompts improve translation performance for low- and very-low-resource languages, outperforming NLLB 1.3B distilled model in most language pairs.\", \"pros\": \"1. The paper introduces a compelling approach that combines in-context learning and fine-tuning, significantly improving translation performance in low-resource languages, an area that remains a key challenge in the NLP field. \\n2. The novel methodology of transforming the translation task into a post-editing task enhances the usability and reasoning capabilities of LLMs. \\n3. The strong experimental results presented across various language pairs demonstrate the robustness of the approach. 
Additionally, the research provides thorough ablation studies and includes performance evaluations on both in-domain and out-of-domain datasets, making it a substantial contribution to multilingual NLP.\", \"suggestions\": \"1. Provide comparative latency metrics across different models to clarify trade-offs between accuracy and efficiency.\\n2. Consider increasing the diversity of the low-resource languages analyzed to enhance the generalization of findings. \\n3. Include evaluations using metrics such as sacreBLEU to offer a broader performance perspective.\\n4. Clarify the size and training details of the models used, particularly for non-public models like PaLM2.\\n5. Explore alternative distillation techniques beyond the current approach, assessing their effectiveness for LLMs.\\n6. Delve into overfitting concerns, particularly in high-resource language training, and consider curriculum learning approaches.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers generally agreed on the novelty and technical merits of the paper, with specific praise for its innovative approach and detailed experimental analysis.\\n\\nConcerns revolved around clarity regarding model details, evaluation metrics, and real-world applicability concerning latency-accuracy trade-offs. The authors responded comprehensively to many queries, committing to additional analyses and clarifications. However, certain reservations about latency and model comparisons were not completely alleviated, leaving room for further exploration and validation.\"}", "{\"summary\": \"This paper tackles low-resource translation quality improvement in LLMs. To maximize data efficiency in the low-resource setting, the authors introduce a new approach called Mufu, including automatic selection of multilingual translation candidates and instruction tuning to correct inaccurate translations via the prompt. 
Experimental results on the Flores-200 dataset for English-XX directions show robustness, achieving better performance than the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"experimental results show some effectiveness of the proposed approach\", \"the idea of leveraging multilinguality via the prompt sounds technically good\"], \"weaknesses\": [\"unclear about the experimental results; how to decide the best prompt template for mufu; any impacts of language combination used in the prompt template - for example, have you ever tried adding high-resource language translation pairs during training to enhance multilingual training with high and low-resource language pairs?\", \"results are not convincing enough, maybe due to low-resource setting with limited improvement in ChrF. Can you report other metrics such as sacreBLEU scores? Have you tried finetuning LLM with low-resource monolingual data so that the LLM can more effectively enhance Mufu?\"], \"questions\": \"Please see the weaknesses for the questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Mufu, which turns translation into a post-editing task by providing auxiliary translations and a target translation from a teacher model. The student model learns in-context to produce the correct target translation and is then fine-tuned against references. Languages for auxiliary translations are chosen from URIEL and they evaluate using PaLM S family models along with Gemma 2B, 7B on FLORES 200 (iid), and NTREX (ood). 
The paper contains thorough ablation studies as well as cross-lingual attention alignment, which helps in understanding how the model learns in context.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is very clearly written and easy to follow.\", \"They combine 2 interesting learning paradigms, ICL and parameter tuning, and their core focus is on very-low- and low-resource languages, which I really liked.\", \"They perform evaluation on NTREX which is important for ood evaluation.\", \"The experiments performed by authors are quite extensive. I especially liked mufu5hrl, mufu5tr, distilled, and lora which corroborate their approach of selecting 5,10,20 related languages from URIEL.\", \"Quantitative evidence provided in Figure 3 is quite helpful in knowing how language transfer is taking place. Moreover, the attention patterns further help in understanding why mufu models perform better.\"], \"weaknesses\": [\"No model sizes available for PaLM2 family of models. I\u2019m not sure how to compare them with Gemma or NLLB.\", \"If I were to just compare on the basis of chrF score, only PaLM2 XXS - NTL and PaLM2 S are able to beat NLLB 1.3B distilled model in both FLORES 200 and NTREX (and Gemma 7B on FLORES 200). All the rest are inferior to NLLB 1.3B distilled. One suggestion for the authors in this case will be to add a `Latency` column for all models (higher for mufu and lower for distilled models) to show the trade-off between accuracy and latency, which will help readers understand how competitive other models are.\", \"The authors have mentioned this but finetuning an LLM (or even NLLB with 1B+ param) with just 787 sentences and in-context learning will definitely lead to overfitting which is evident by the fact that mufu20lora performed better than full finetuning. 
I wonder if that is the case for other models too?\", \"It\u2019s great they used Gemma 2, an open-weight model, but I\u2019m slightly disappointed that the majority of their experiments use PaLM2 models which are not public like Gemma 2.\", \"The two-iteration process (teacher model followed by student model) is quite expensive. The authors have mentioned that distillation helps to alleviate the problem but it only worked for NTREX in PaLM2 XXS - NTL (not for Gemma 7B); performance on FLORES 200 for both distilled models is lower than NLLB 1.3B.\", \"The authors experiment with one learning paradigm, i.e., in-context learning for LLMs, for distillation. Did they try distillation from model outputs (not the one fine-tuned with mufu20)? How much better or worse is in-context learning compared to vanilla distillation?\"], \"questions\": [\"Were there any accidental translations in a different language for Mufu{5,10,20}?\", \"What exactly is Win% vs teacher? For instance, for NLLB 1.3B distilled, its chrF is 46.0 whereas that of teacher is 43.7, still its win% is 41.3? It means NLLB 1.3B was less than 50% correct when compared to teacher model still its chrF score is higher? Another example, Win% vs teacher is 56.2 for NLLB 54B MoE (48.9 chrF) whereas for mufulora20 with PaLM2 S it is 99% with chrF less than NLLB 54B MoE on FLORES 200. It would be great if the authors could formalise what Win% vs teacher is.\", \"Can the authors explain In theory\u2026 model outputs (line 207-211)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"The manuscript is now much improved. We will consider vanilla distillation and ablation with the best performing Gemma model in the final manuscript.\", \"comment\": \"Thanks for the positive and detailed feedback.\\n1. Unfortunately the sizes of PaLM2 models are not public. 
Barring the differences in model size, however, the gap between PaLM2 and Gemma is also driven by differences in their pre-training recipes (e.g., PaLM2 are highly multilingual LLMs, Gemma models are more monolingual); as well as the fact that PaLM2 are relatively older models than Gemma. We speculate PaLM2 XXS to be comparable to NLLB 1.3B due to their similarity in compute requirements. To roughly align these models nonetheless, we suggest comparing their baseline performance in Table 2.\\n\\n2. Admittedly, mufu-finetuned models have substantially higher latency than NLLB, which are encoder-decoder models trained specifically for translation. Low-resource languages are however notoriously difficult for LLMs [Robinson et al., 2023; Zhu et al., 2024] due to the huge disparity in resource levels between languages. We now further elaborate this point in lines 500 and 535.\\n\\n3. Thanks for pointing out the possibility of overfitting in the other models. Despite seemingly superior performance of mufu20lora based on mean chrF, we find lora to be worse than full finetuning in translations to very-low-resource languages (Figure 2a). In Table 2, PaLM2 XS and PaLM2 S finetuned with the baseline method overfit and perform worse than PaLM2 XXS. This is not the case for Mufu finetuned models, as we see improvement with increasing model capacity. It is also possible that the model overfits to translations in HRLs, but not in LRLs \\u2014 in which case, a reasonable approach might be to terminate HRL training early (i.e., a form of curriculum learning). We leave this experiment to future work, and now address the point in Section 4.3, line 426.\\n\\n4. We conducted most experiments with PaLM2 XXS\\u2013NTL as it was the best performing model. Should the work be accepted we will include the corresponding ablation results for the best performing Gemma model in the Appendix.\\n\\n5. Thanks for suggesting to compare the distilled mufu20 model against a vanilla distillation setup. 
We note, however, that the current teacher model (PaLM2 S) is poor in translations to low-resource languages. We are nonetheless happy to consider other teacher models, and include this result in the Appendix of the final manuscript if required.\\n\\n6. Thanks for raising that there might be accidental translations in the wrong language. We manually inspected a number of Indonesian and Chinese languages, and found no absolute incorrect language in the translations. It is however difficult to assess the accuracy systematically as many of these languages share the same scripts, have similar vocabulary, and borrow words from one another (see for example, Table 4). For other language pairs with <20 chrF (e.g., Fon, Tamasheq, Tigrinya), we could only perform sanity checks for accidental translation in the wrong script, as we are unfamiliar with these languages. \\n\\n7. Win rate is the percentage of language pairs where the model outperforms a benchmark. For example, NLLB 54B MoE outperforms the teacher in 113/201 \\u2248 56.2% language pairs based on chrF; whereas PaLM2 S finetuned with mufu20lora outscores the teacher in 199/201 \\u2248 99% language pairs. We now clarify this in Section 4, line 197.\\n\\n8. Thanks for raising that lines 207-211 could be further improved. In Mufu we finetune the models against the gold-standard translations, and expect improvement from the postediting targets. This is effective for LRLs with low-quality postediting candidates. However, improving high-quality translations in HRLs is harder and requires the student model to also learn the subtle differences between model- and human-generated output. It is also possible that the LLM teacher surpasses human for some translations in HRLs, in which case, learning from the human output could be detrimental. We now elaborate this cause of decline in performance relative to teacher in Section 4, line 263.\"}", "{\"comment\": \"Thank you for the questions.\\n\\n1. 
NLLB 1.3B was distilled from NLLB 54B MoE. The latter was trained in hundreds of translation directions, on large-scale mined bitext and monolingual data augmented with backtranslation [1]. Thus it is not surprising that PaLM2 XXS requires further NTL pretraining and gains from Mufu to achieve comparable performance.\\n\\n2. Thanks for raising that the statement needs further support. LLMs closely resemble human translations in their use of lexical and linguistic features [2]. LLM output also becomes increasingly difficult to be distinguished from human translations [3], given the current evaluation method that identifies similar errors in both systems [4]. We now include these citations following the claim in line 265. More importantly, however, we note in the manuscript that the key reason for decline is that finetuning on human-generated translations is suboptimal compared to finetuning on high-quality model-generated data [5]. \\n\\n[1] Costa-juss\\u00e0, Marta R., et al. \\\"No language left behind: Scaling human-centered machine translation.\\\" arXiv preprint arXiv:2207.04672 (2022). \\n\\n[2] Sizov, Fedor, et al. \\\"Analysing Translation Artifacts: A Comparative Study of LLMs, NMTs, and Human Translations.\\\" Proceedings of the Ninth Conference on Machine Translation. 2024.\\n\\n[3] Kocmi, Tom, et al. \\\"Findings of the WMT24 general machine translation shared task: the LLM era is here but mt is not solved yet.\\\" Proceedings of the Ninth Conference on Machine Translation. 2024. \\n\\n[4] Zhang, Ran, Wei Zhao, and Steffen Eger. \\\"How Good Are LLMs for Literary Translation, Really? Literary Translation Evaluation with Humans and LLMs.\\\" arXiv preprint arXiv:2410.18697 (2024).\\n\\n[5] Finkelstein, Mara, and Markus Freitag. 
\\\"MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods.\\\" The Twelfth International Conference on Learning Representations.\"}", "{\"summary\": \"This paper introduces \\\"Mufu\\\" , which is a method for low-resource language translation using a multilingual fused learning approach, specifically targeting large language models (LLMs).\\nThe Mufu method, which aims to address the challenge that large language models (LLMs) perform well in translating high-resource languages but still struggle with low-resource languages. The Mufu prompting approach turns the translation task into a post-editing task, leveraging the reasoning capabilities of LLMs with auxiliary translation candidates, requiring the model to assess input quality, align semantics cross-lingually, copy from relevant inputs, and override incorrect instances. Experiments show that LLMs fine-tuned with Mufu-style prompts achieve better performance than the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs on the Flores-200 dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Interesting research, Introduces Mufu, a novel approach leveraging multilingual context and post-editing for low-resource language translation.\\n2. Employs automatically generated candidates and instructions to correct translations, enhancing LLM's reasoning capability.\\n3. Demonstrates robustness against poor-quality auxiliary translations, outperforming specialized NMT systems in many low-resource pairs.\\n4. Proposes a hybrid learning paradigm, combining in-context learning and finetuning for improved translation quality.\\n5. Implements knowledge distillation to reduce inference costs while maintaining performance gains in low-resource translations.\", \"weaknesses\": \"1. 
Experiment Method Optimization: Consider incorporating a more diverse set of low-resource languages in the experimental dataset to better generalize the findings and evaluate the model's performance across a wider linguistic spectrum.\n\n2. Experiment Conclusion Enhancement: Suggest conducting ablation studies to isolate the specific contributions of different components of Mufu, such as the impact of various auxiliary languages, to fine-tune the approach and maximize translation accuracy.\n\n3. 5-shot Prompting Improvement: Explore the use of meta-learning strategies in 5-shot prompting to enhance the model's ability to quickly adapt to new translation tasks with limited examples, potentially improving the efficiency of the learning process.\", \"questions\": \"1. A more diverse set of low-resource languages in the experimental dataset would be helpful.\n2. The impact of various auxiliary languages can be analyzed more deeply.\n3. Prompt analysis can be improved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my questions and comments. I have updated my score accordingly.\"}" ] }
0e2pcSxQJS
PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations
[ "Qiang Liu", "Huiqiao Fu", "Kaiqiang Tang", "Chunlin Chen", "Daoyi Dong" ]
Imitation learning aims at constructing an optimal policy by emulating expert demonstrations. However, the prevailing approaches in this domain typically presume that the demonstrations are optimal, an assumption that seldom holds true in the complexities of real-world applications. The data collected in practical scenarios often contains imperfections, encompassing both optimal and non-optimal examples. In this study, we propose Positive-Negative Generative Adversarial Imitation Learning (PN-GAIL), a novel approach that falls within the framework of Generative Adversarial Imitation Learning (GAIL). PN-GAIL innovatively leverages non-optimal information from imperfect demonstrations, allowing the discriminator to comprehensively assess the positive and negative risks associated with these demonstrations. Furthermore, it requires only a small subset of labeled confidence scores. Theoretical analysis indicates that PN-GAIL deviates from the non-optimal data while mimicking imperfect demonstrations. Experimental results demonstrate that PN-GAIL surpasses conventional baseline methods in dealing with imperfect demonstrations, thereby significantly augmenting the practical utility of imitation learning in real-world contexts. Our codes are available at https://github.com/QiangLiuT/PN-GAIL.
[ "Generative adversarial imitation learning", "imperfect demonstrations", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=0e2pcSxQJS
https://openreview.net/forum?id=0e2pcSxQJS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qxWuLDMxAy", "qtWUYybDeK", "qf6KHu2PBg", "phg8J7OfIS", "pCHVvEnlhp", "i7pKwm8QFP", "gSJ076hs9H", "d5BXByQM74", "bwJROH4Kvd", "bCyPAIff8T", "a0IE0qLUCC", "UHBqNG8U5a", "JdCDTJbgYZ", "HLG9qyTtzc", "AhnCeUsoge", "9qwnApCqfk", "2D4NFr2hW8", "1cokPYDtFh", "1ZN01gqVRD" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732073973711, 1732758697286, 1732128763356, 1730401405807, 1732074374354, 1734719891392, 1730794712913, 1732150596298, 1732093697882, 1732074524779, 1730183527041, 1732094749825, 1737524253521, 1730443316470, 1732074180885, 1732759660317, 1732074583861, 1732311599170, 1732250622232 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_eg39" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_1f3M" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_XEhc" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Area_Chair_GPhv" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_H316" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_H316" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_1f3M" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_eg39" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13343/Authors" ], [ "ICLR.cc/2025/Conference/Submission13343/Reviewer_XEhc" ] ], "structured_content_str": [ "{\"title\": \"Common Response\", \"comment\": \"We sincerely appreciate the insightful feedback from the reviewers. **The revised paper based on the feedback has been uploaded.** We first respond to the common concerns and present the results of the experiments that we have supplemented.\\n\\n**1.** Extreme optimal and non-optimal demonstration rate experiments.\\n\\nTo highlight the strength of PN-GAIL in an extreme optimal or non-optimal demonstration rate setting, in the Ant-v2 environment, we collect demonstrations with ratio of $\\\\pi_\\\\mathrm{opt}:\\\\pi_1=1:10$. We then evaluate the performance of PN-GAIL and other baseline methods using the collected demonstrations. As shown in **Appendix B.3 Figure 8**, under this extreme demonstration ratio, all other methods fail, resulting in outcomes that are close to GAIL. In contrast, PN-GAIL successfully learns valuable information from the extreme demonstrations.\\n\\n**2.** Experiments when optimal demonstrations are dominant.\\n\\nTo illustrate that PN-GAIL consistently performs well across a range of dataset quality distributions, in the Ant-v2 and Pendulum-v1 environments, we conduct the experiments at the demonstration ratio of $\\\\pi_\\\\mathrm{opt}:\\\\pi_1=2:1$. As shown in **Appendix B.3 Figure 9**, it can be seen that when the optimal demonstrations are dominant, PN-GAIL still shows robust and excellent performance.\\n\\n**3.** Experiments under different numbers of non-optimal demonstrations.\\n\\nIn order to understand how PN-GAIL can take advantage of additional non-optimal demonstrations, we conduct experiments with different numbers of non-optimal demonstrations in the Ant-v2 environment, and the results are presented in **Appendix B.3 Figure 10**. 
We find that when the number of non-optimal demonstrations decreases, the performance of PN-GAIL does not decrease significantly.\\n\\n**4.** Comparative experiments with methods based on expert preference.\\n\\nWe compare the performance of PN-GAIL with CAIL, TREX, and DREX (typical expert preference-based methods) in the Ant-v2 and Pendulum-v1 environments. PN-GAIL outperforms the other methods, achieving the highest returns. More details can be seen in **Appendix B.3 Table 4**.\\n\\n| Methods | Pendulum-v1 | Ant-v2 |\\n|:--------:| :---------:|:--------:| \\n| PN-GAIL | **-465.978\\u00b1132.710** | **2960.458\\u00b1851.465** |\\n| CAIL | -697.821\\u00b191.459 | 2336.546\\u00b1735.378 |\\n| T-REX | -1074.435\\u00b1258.078 | -1858.025\\u00b1217.228 | \\n| D-REX | -1532.958\\u00b1114.13 | -2495.473\\u00b1220.703 | \\n| Optimal Policy | -116.81 | 4271.79 | \\n\\nTable R1. Average returns of PN-GAIL, CAIL, T-REX and D-REX. Based on the implementation here: https://github.com/Stanford-ILIAD/Confidence-Aware-Imitation-Learning.\\n\\n**5.** Comparative experiments with other advanced IL algorithms.\\n\\nWe compare the performance of PN-GAIL with f-IRL [1] in the Ant-v2 and Pendulum-v1 environments. As shown in the table below, PN-GAIL performs better than FKL(f-IRL), RKL(f-IRL) and JS(f-IRL). More details can be seen in **Appendix B.3 Table 4**.\\n\\n| Methods | Pendulum-v1 | Ant-v2 |\\n|:--------:| :---------:|:--------:| \\n| PN-GAIL | **-465.978\\u00b1132.710** | **2960.458\\u00b1851.465** |\\n| FKL(f-IRL) | -698.064\\u00b1300.793 | 1563.796\\u00b11372.482 |\\n| RKL(f-IRL) | -603.760\\u00b1189.727 | 962.785\\u00b1816.547 | \\n| JS(f-IRL) | -581.602\\u00b1185.096 | 882.026\\u00b1690.371 | \\n| Optimal Policy | -116.81 | 4271.79 | \\n\\nTable R2. Average returns of PN-GAIL and f-IRL. 
Based on the implementation here: https://github.com/twni2016/f-IRL .\\n\\n**6.** The variance of the estimator $\\\\hat{R}_{BSC,\\\\ell}(g)$ before and after inclusion $\\\\alpha$ and $\\\\beta$.\\n\\n We validate the appropriateness of our chosen values of $\\\\alpha$ and $\\\\beta$ by comparing the variance of the estimator $\\\\hat{R}_{BSC,\\\\ell}(g)$ before and after the inclusion of $\\\\alpha$ and $\\\\beta$.\\n\\n| Var | Ant-v2 | HalfCheetah-v2 | Hopper-v2 | Pendulum-v1 | Swimmer-v2 | Walker2d-v2 |\\n|:--------:| :---------:|:--------:| :---------:| :---------:| :---------:| :---------:|\\n| Origin | 0.126\\u00b10.054 | 0.033\\u00b10.009 | 1.476\\u00b10.574 | 0.00064\\u00b10.00024 | 1.664\\u00b11.475 | 0.093\\u00b10.029 |\\n| PN-GAIL(Ours) | **0.100\\u00b10.050** | **0.025\\u00b10.006** | **1.412\\u00b10.566** | **0.00013\\u00b10.00004** | **0.007\\u00b10.015** | **0.084\\u00b10.028** |\\n\\nOrigin indicates the lack of $\\\\alpha$ and $\\\\beta$ (i.e., $\\\\alpha$=0 and $\\\\beta$=0). The results in the above table show that the variance of PN-GAIL is consistently smaller, demonstrating the validity of the chosen values for $\\\\alpha$ and $\\\\beta$.\\n\\n**7.** Modification of the content of the article:\\n- We have redrawn the figures of the article to provide a clearer explanation and to enhance readability.\\n- We have added explanations for Figures 1 and 3 in Sections 4.1 and 5.1, respectively, for improving understanding.\\n- We have modified the undefined or unclearly defined parts such as the definitions of $\\\\delta$ and PN-GAIL$\\\\backslash$BSC.\\n\\n\\n[1] Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Ben Eysenbach. f-irl: Inverse reinforcement learning via state marginal matching. In Conference on Robot Learning, pp. 529\\u2013551. PMLR, 2021.\"}", "{\"comment\": \"Apologies for the delayed response. The author made valid points, so I am raising my score to 6. 
Good luck!\"}", "{\"comment\": \"Thank you for your response to my questions. I appreciate the incremental experiments you conducted and the modifications you made. I confirm rating score 8 as the authors did great work regarding initial submission and revision.\"}", "{\"summary\": \"The authors propose PN-GAIL, an extension of the GAIL framework designed to handle imperfect expert demonstrations. By predicting confidence scores for unlabeled data, PN-GAIL allows for more accurate imitation learning without relying solely on optimal examples. Theoretical analysis is provided for the output of the optimal discriminator in the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a new algorithm (based on 2IWIL and IC-GAIL) supported by theoretical analysis and demonstrates its effectiveness across multiple tasks. Experiments are extensive, including different benchmarks, different $\\\\pi_{OPT} : \\\\pi_{1}$ ratios, and different standard deviations of Gaussian noise.\", \"weaknesses\": \"Although the proposed method is based on [1], e.g., some techniques to prove Theorem 4.1, Theorem 4.2, have been explored in [1], this study still broadens the scope of [1]. One potential weakness is: although $\\\\alpha$ and $\\\\beta$ are intended to play distinct roles in Theorem 4.2, they are selected to be identical in Algorithm 1, which may affect the overall applicability of the algorithm.\\n\\n[1] Wu, Yueh-Hua, et al. \\\"Imitation learning from imperfect demonstration.\\\" International Conference on Machine Learning. PMLR, 2019.\", \"questions\": \"1. Could the authors highlight expert performance more prominently in the figures to enhance clarity and interpretability?\\n\\n2. Would varying values of $n_{u}$ and $n_{c}$ significantly impact the performance of the proposed method? 
Additionally, could the authors provide guidelines or criteria for selecting optimal values for these parameters in practice?\\n\\n3. Figure 1 offers a valuable comparison of the proposed method against GAIL and 2IWIL. To further support this comparison, it would be better if the authors could also provide more intuition of why GAIL and 2IWIL fail but the proposed method succeeds. E.g., gradient colors for different confidence scores, and why 2IWIL fails to predict them. Additionally, why does GAIL predict 5.0 for some of these data points?\\n\\n4. To further contextualize this work within imitation learning (specifically, driver behavior imitation), it would be beneficial to incorporate additional relevant studies, such as:\\n\\n[2] Ruan, Kangrui, and Xuan Di. \\\"Learning human driving behaviors with sequential causal imitation learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022.\\n\\n[3] Hawke, Jeffrey, et al. \\\"Urban driving with conditional imitation learning.\\\" 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.\\n\\nand so on.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
The results show that PN-GAIL outperforms other methods, achieving the highest rewards.\\n\\n\\n\\n> **Q2.** Related to my disagree on the claim that 'GAIL treats non-optimal demonstrations as if they were optimal' in the weaknesses section, please could you provide a counterargument?\\n\\n**A2.** Thank you for this helpful comment. We have re-written this statement to avoid any potential confusion and also provided additional experimental results to further demonstrate this point. When expert demonstrations include non-optimal examples, GAIL aims to reproduce the overall distribution of these demonstrations, causing both optimal and non-optimal instances to be weighted equally. Consequently, if non-optimal demonstrations dominate, the learned policy will also reflect most non-optimal behaviors. In contrast, PN-GAIL mimics optimal demonstrations while minimizing the influence of non-optimal ones by assigning different weights to each type of demonstration and considering the associated positive and negative risks. This is also reflected in the poor performance of GAIL and the best performance of PN-GAIL in our additional experimental results in **Appendix B.3 Figure 8**. \\n\\n\\n> **Q3.** Lacks comparison with other advanced IL algorithms, such as f-IRL.\\n\\n**A3.** Following your constructive comments, we have demonstrated the performance of f-IRL [1] in **Common Response point 5**. However, since f-IRL is not designed for imperfect demonstrations, it does not work well because it reproduces all demonstrations with the same weight.\\n\\n\\n> **Q4.** Theorem 1's bound relies on knowing the variances, how practically useful is the bound in scenarios where variance values are unknown or hard to assess?\\n\\n**A4.** We are not sure if you are referring to Theorem 4.4 (If not, please let us know). For Theorem 4.4, the bound given is related to $\\\\alpha$ and $\\\\beta$. 
For computational convenience, we assume that these covariances are sufficiently small, and take a fixed approximation of $\\\\alpha$ and $\\\\beta$. The experiments in **Common Response point 6** verify the rationality of the approximation. When $\\\\alpha$ and $\\\\beta$ use a fixed approximation, the bound given is independent of variance.\\n\\n\\n\\n\\n> **Q5.** Theorem 2's bound may become quite loose if the Rademacher complexity is high.\\n\\n**A5.** Thank you for your insights. This phenomenon is indeed possible, and we believe that overfitting is one of the contributing factors. However, it can be mitigated through techniques such as regularization or dropout. Additionally, we can select different classification models based on the size of the dataset. For larger datasets, a more complex model may be appropriate, as the abundance of data helps reduce the risk of overfitting. Conversely, for smaller datasets, it is advisable to opt for a simpler model.\\n\\n\\n\\n[1] Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Ben Eysenbach. f-irl: Inverse reinforcement learning via state marginal matching. In Conference on Robot Learning, pp. 529\\u2013551. PMLR, 2021.\"}
Given the strong theoretical grounding, practical improvements, and overall clarity of the paper, we recommend acceptance. The authors are encouraged to consider expanding the experimental evaluation to more definitively highlight the advantages of their method in scenarios where existing methods struggle.\", \"additional_comments_on_reviewer_discussion\": \"nothing concerning\"}", "{\"summary\": \"This work addresses imitation learning from imperfect demonstrations, utilizing both confidence-labeled noisy demonstrations and unlabeled noisy demonstrations. It aims to overcome two primary limitations of prior work, specifically 2IWIL [A], a representative approach to this problem that employs a two-step learning process: (i) semi-confidence labeler training on the unlabeled dataset, and (ii) confidence-based generative imitation learning.\\n\\nThe proposed method, PN-GAIL (Positive-Negative GAIL), tackles the limitations of 2IWIL as follows:\\n\\n1. **Incorporating Negative Risk to Objective**: 2IWIL overlooks the negative risk associated with imperfect demonstrations, leading the discriminator to disproportionately prioritize the positive risk of frequent samples. PN-GAIL addresses this by incorporating both positive and negative risks into the confidence-based imitation learning objective, ensuring a more reliable evaluation regarding to demonstration quality.\\n2. **Balanced Semi-Confidence (BSC) Classification**: In 2IWIL, semi-confidence (SC) classification is used to train a confidence labeler for unlabeled demonstrations. However, SC classification tends to overestimate the confidence of labeled data and underestimate the confidence of unlabeled data. 
To address this, PN-GAIL introduces a balanced semi-confidence (BSC) objective and further suggests near-optimal values for hyperparameters $\\\\alpha$ and $\\\\beta$, enhancing practical applicability.\\n\\n[A] Wu et al., \\\"Imitation Learning from Imperfect Demonstration,\\\" ICML 2019.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This work is well-motivated and effectively addresses the challenges presented in the previous study using sound methods.\\n\\nA notable strength of this work is its practice-oriented design of the objective functions, which enhances applicability in real-world scenarios.\\nThis study removes the dependence on $\\\\eta$, the class prior $p(y=0)$ for imperfect demonstration datasets, from the primary objective. \\nSince $\\\\eta$ is generally unknown and challenging to estimate, prior work treated it as a hyperparameter, requiring practitioners to invest substantial effort in tuning it. \\nBy eliminating this reliance, the proposed approach reduces the overhead associated with hyperparameter optimization.\\n\\nAdditionally, the authors introduce near-optimal and practical choices for the parameters $\\\\alpha$ and $\\\\beta$ in the Balanced Semi-Confidence (BSC) objective, which can be straightforwardly calculated based on the dataset sizes $n_c$ (confidence-labeled) and $n_u$ (unlabeled). 
\\nThis adjustment simplifies the implementation process and supports the broader applicability of imitation learning with imperfect demonstrations in practical settings.\\n\\nFurthermore, the manuscript includes theoretical analyses showing that (i) the proposed objective helps avoid the imitation of non-optimal data and (ii) derives a sample complexity bound for the BSC method, providing a rigorous foundation for the proposed improvements.\", \"weaknesses\": \"Despite the many strengths of this work, the empirical results presented in the manuscript do not stand out as particularly impressive.\\n\\nSpecifically, Figure 1 shows that the performance difference between PN-GAIL and the most competitive baseline across tasks is not significant. In my opinion, as discussed in Section 2, since baseline methods typically assume a dominant proportion of $\\\\pi_{opt}$, exploring scenarios with a more skewed ratio (e.g., $\\\\pi_{opt}:\\\\pi_1=1:10$) might provide a more notable results where conventional methods fail while PN-GAIL successes. \\nI think conducting experiments with more extreme demonstration ratios could more clearly demonstrate the scenarios in which PN-GAIL offers a distinct advantage over baseline methods.\\n\\n**[Minor Comments]**\\n1. For Figures 2, 3, and 4, using distinct line colors for different methods would enhance readability.\\n2. In the ablation study presented in Figure 3, it would be advantageous to include results from 2IWIL\\u2014even though they are already provided in Figure 2. Since 2IWIL represents a variant of PN-GAIL that excludes both the PN objective and the BSC, its direct comparison within the same figure would clarify the incremental benefits of these components.\", \"questions\": \"Q1. In contrast to scenarios where $\\\\pi_1$ dominates, how would the results in Figure 2 be affected if $\\\\pi_{opt}$ were dominant, a condition under which existing methods are known to perform well? 
Results for PN-GAIL in the Pendulum-v1 task with various $\\\\pi_{opt}:\\\\pi_1$ ratios are presented in Figure 7 of the supplementary material, but a more systematic comparison between PN-GAIL and baseline methods could strengthen the manuscript. If PN-GAIL demonstrates robust performance and outperforms the baseline methods in such scenarios, it would confirm the method's reliability. This evidence would support the assertion that PN-GAIL performs consistently well across a range of dataset quality distributions, thus setting it apart from existing methods.\\n\\nQ2. Instead of modifying the $\\\\pi_{opt}:\\\\pi_1$ ratio while maintaining a fixed total number of demonstrations, what would occur if the optimal demonstrations was kept invariant while the number of suboptimal demonstrations was varied? This setup would illustrate how PN-GAIL effectively utilizes additional suboptimal demonstrations, isolating the influence of optimal demonstrations on imitation performance. It would offer valuable insights into PN-GAIL\\u2019s ability to adapt to and leverage diverse demonstration qualities effectively.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have no ethical concerns on this work.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"Thank you so much for taking the time and effort to review our paper and read the follow-up rebuttal! We are pleased to hear that you recognize our work and would like to express our sincere appreciation for your feedback.\"}", "{\"comment\": \"Thank you for addressing my questions. 
I appreciate the inclusion of new experimental results, which now appear clearly convincing and align well with the arguments presented in this research.\\nI have no further concerns and update my rating to 8, as the quality of the revision is strong enough to make a valuable contribution to this community.\"}", "{\"title\": \"Author Response to Reviewer XEhc\", \"comment\": \"> **Q1.** Although $\\\\alpha$ and $\\\\beta$ are intended to play distinct roles in Theorem 4.2, they are selected to be identical in Algorithm 1, which may affect the overall applicability of the algorithm.\\n\\n**A1.** We do not directly choose $\\\\alpha$ and $\\\\beta$ as the same value. In fact, after assuming that the covariances are small enough relative to the variances, the final approximations of $\\\\alpha$ and $\\\\beta$ happen to be the same. In addition, we empirically validate the appropriateness of our chosen values of $\\\\alpha$ and $\\\\beta$ in **Common Response point 6**. The results demonstrate the rationality of this setting.\\n\\n\\n> **Q2.** Could the authors highlight expert performance more prominently in the figures to enhance clarity and interpretability?\\n\\n**A2.** Thank you for your valuable suggestion. Following your suggestion, we have redrawn Figures 2, 3, 4 and added a blue dotted line to indicate the performance of the optimal policy.\\n\\n\\n> **Q3.** Would varying values of $n_u$ and $n_c$ significantly impact the performance of the proposed method?\\n\\n**A3.** We show the PN-GAIL performance at different $n_u$ and $n_c$ ratios in Figures 4(a) and 4(b), and the PN-GAIL performance when $n_u$ is reduced in **Appendix B.3 Figure 6**. The results show that decreasing $n_u$ has a greater impact on method performance than changing the ratio of $n_u$ and $n_c$. In practice, when choosing the dataset size ($n_u+n_c$), it is better to ensure that the optimal demonstration information contained in the dataset is sufficient to learn an optimal policy. 
When selecting the label ratio ($\\\\frac{n_c}{n_c + n_u}$), we need to make the distribution of the labeled dataset as consistent as possible with the distribution of the whole dataset.\\n\\n\\n> **Q4.** Provide more intuition of why GAIL and 2IWIL fail, but the proposed method succeeds. Why does GAIL predict 5.0 for some of these data points?\\n\\n**A4.** Thank you for your valuable comments. We have redrawn Figure 1 and used gradient colors for different confidence scores. We explain Figure 1 with an example in Section 4.1, where the data points represent the equivalent confidence level at the time of training. Specifically, we consider the goals of GAIL, 2IWIL:\\n\\n$\\\\min_\\\\theta \\\\max_w E_{x\\\\sim p_\\\\theta}\\\\left[\\\\log D_w(x)\\\\right]+ E_{x\\\\sim p}\\\\left[\\\\frac{r(x)}{\\\\eta}\\\\log(1-D_w(x))\\\\right].$\\n\\nExpand the second term, which is $\\\\sum p(x) \\\\frac{r(x)}{\\\\eta}\\\\log(1-D_w(x))$, and since GAIL has no confidence scores, $\\\\frac{r(x)}{\\\\eta}$ is always equal to $1$. The coefficient for each state-action pair $x$ in given demonstrations at the time of training is $p(x) \\\\frac{r(x)}{\\\\eta}$. Therefore, if there is a higher probability of $x_1$ appearing in imperfect demonstrations, such as $p(x_1) = 5p(x_{other})$, then its coefficient is $p(x_1)\\\\frac{r(x_1)}{\\\\eta} = p(x_{other})\\\\frac{5r(x_1)}{\\\\eta}$, which equates to the confidence score of $x_1$, five times that of the original. For GAIL, since the confidence scores are all considered to be $1.0$, it is $1.0\\\\times5=5.0$. \\n\\nFor PN-GAIL, which incorporates the non-optimal information of the demonstrations, the higher the probability of a demonstration, the more the third term of Eq. 11 balances the influence of the second term through the weight change, so that the score given by the discriminator $D$ is more accurate. 
We have added the above explanations to the revised version.\\n\\n\\n> **Q5.** To further contextualize this work within imitation learning (specifically, driver behavior imitation), it would be beneficial to incorporate additional relevant studies.\\n\\n**A5.** In fact, we have already attempted the proposed algorithm in the field of autonomous driving and achieved preliminary results. Due to time and space limitations, we will publish such results in our future work.\"}", "{\"summary\": \"Motivated by the limitation of 2IWIL in a type of scenario where certain non-optimal demonstrations have high probabilities of appearing in a set of imperfect (unlabeled) demonstrations, the paper proposes a new method named PN-GAIL, better leveraging optimal and non-optimal information from imperfect demonstrations to learn optimal reward and policy by assessing both positive and negative risks. Moreover, the paper modifies semi-conf classification in 2IWIL to establish balanced semi-conf classification to handle better the cases where certain demonstrations only exist in the confidence (known, labeled) data.\\n\\nThe authors conduct experiments, comparing their PN-GAIL to four baseline methods across six environments. The results show that PN-GAIL alleviates the impact of the unbalanced frequency in impact demonstrations, outperforms other methods, and maintains relatively good performances given the decreasing number of labels. Also, the outcomes demonstrate that the balanced semi-conf classifier improves performances, particularly in three out of six environments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The main objective/purpose of this paper is articulated explicitly with the existing methods and their limitations highlighted clearly.\\n2. The literature review on IL, GAIL, and IL with imperfect information is comprehensive. The preliminaries provide the essential information of 2IWIL.\\n3. 
The theoretical derivations regarding the discriminator modification and classification refinement are straightforward and concise in the main body, which makes it easy for the audience to follow; meanwhile, the supplemental materials in the appendices provide necessary, detailed explanations.\\n4. The experiments with a three-goal setup are tightly related to the main achievements that the paper wants to claim. The experiments are conducted with representative benchmarks across various environments, effectively showing the performances with well-formatted figures and a table in the main context.\\n5. Overall, the authors identify a potential challenge of 2IWIL: certain non-optimal data with high frequencies in an unlabeled demonstration set significantly affect reward and policy generation. The topic is likely to be of interest to a large proportion of the community. The proposed PN-GAIL creatively updates the previous methods to successfully remove limitations of prior IL results to a certain extent.\", \"weaknesses\": \"1. Missing definition: in Section 3, what \\u03b4 represents. Please define \\u03b4 when it is first introduced in the paper.\\n\\n2. Needing clarification: PN-GAIL\\\\BSC: PN-GAIL without balanced semi-conf (BSC) classification--does this mean no classification used or SC used? Without a probabilistic classifier, how to obtain confidence scores? Please explicitly state whether SC classification is used, and explain how confidence scores are obtained if no classifier is used. \\n\\n3. Lack of analysis for experimental outcomes: it is necessary to provide more detailed explanations and discussions regarding those figures. For example, in Figure 3, what possible reasons (e.g., characteristics of each environment? limited number of demonstrations?) result in the varying performance patterns across six environments. 
We can observe that in some cases three colors (methods) are similar; in some cases blue and green are close; in some cases, blue and orange are close; while in some cases, blue works best. Please provide a systematic analysis of how different factors might contribute to these patterns.\", \"questions\": \"1. About clarification: PN-GAIL\\\\BSC: PN-GAIL without balanced semi-conf (BSC) classification--does this mean no classification used or SC used? Without a probabilistic classifier, how to obtain confidence scores?\\n\\n2. To verify that BSC outperforms SC in 2IWIL, straightforward comparisons are PN-GAIL(BSC) vs. PN-GAIL(switch to SC) vs. 2IWIL(switch to BSC) vs. 2IWIL(SC). Why did you choose the comparisons that are presented in the paper?\\n\\n3. It is better to provide more detailed explanations and discussions regarding those figures. For example, the statement about Figure 3 is quite short. The audience would like to learn about what possible reasons result in the varying performance patterns across six environments in Figure 3. We can observe that in some cases three colors (methods) are similar; in some cases blue and green are close; in some cases, blue and orange are close; while in some cases, blue works best. Could you provide a systematic analysis?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
The goal is to improve policy learning when demos aren\\u2019t perfectly optimal. The method assigns weights to optimal and non-optimal examples through a semi-supervised confidence classifier. Experiments on six control tasks show that PN-GAIL performs better than standard GAIL and other baselines under these imperfect conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality: The use of positive and negative risks to manage imperfect demos tackles some of the real-world problems of non-optimal data in a practical way.\", \"Quality: The paper provides theoretical analysis for the positive-negative risk approach and shows experimental results across multiple control tasks.\", \"Clarity: Most theoretical ideas are clearly presented with good notation.\", \"Significance: The approach is relevant for real-world applications. It might have a real impact on the usability of IL in practical scenarios.\"], \"weaknesses\": [\"The method\\u2019s reliance on confidence scores may limit its applicability when it\\u2019s difficult to assign confidence levels directly. Human annotation of preferences over trajectories, for instance, might be more feasible than assigning explicit confidence scores.\", \"The fundamental motivation of this paper is based on a claim that 'GAIL treats non-optimal demonstrations as if they were optimal'. I disagree with this claim. GAIL\\u2019s objective is to minimize the JS divergence between the expert and agent trajectory distributions, aiming to reproduce the overall distribution of expert demonstrations. When the agent policy is close to the expert policy, the discriminator's output tends to be 0.5 everywhere. If expert demos include a mix of optimal and sub-optimal trajectories, GAIL should naturally capture this mixture without necessarily assuming optimality. 
Could you provide a counterargument or clarification on why PN-GAIL\\u2019s approach is necessary, given this perspective?\", \"Lacks comparison with other advanced IL algorithms, such as f-IRL.\"], \"questions\": [\"Theorem 1\\u2019s bound relies on knowing the variances. However, the variances can be difficult to estimate in real-world applications. Given this dependency, how practically useful is the bound in scenarios where variance values are unknown or hard to assess?\", \"Theorem 2\\u2019s bound may become quite loose if the Rademacher complexity is high, as is typical with deep and wide neural networks. Could this have negative implications for the method's reliability when using complex models?\", \"Related to my disagree on the claim that 'GAIL treats non-optimal demonstrations as if they were optimal' in the weaknesses section, please could you provide a counterargument?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to Reviewer H316\", \"comment\": \"> **Q1.** Conducting experiments with more extreme demonstration ratios could more clearly demonstrate the scenarios in which PN-GAIL offers a distinct advantage over baseline methods.\\n\\n**A1.** In the Pendulum-v1 environment, the ratio of the demonstrations is $\\\\pi_{\\\\mathrm{opt}}:\\\\pi_{1}=1:4$, in which case the advantage of PN-GAIL over other baseline methods is clearly visible. In addition, following your advice, we have added experiments at the ratio of $\\\\pi_{\\\\mathrm{opt}}:\\\\pi_{1}=1:10$ in **Common Response point 1**. We notice that WGAIL performs even worse than GAIL, which we attribute to the fact that WGAIL incorrectly assigns high confidence scores to non-optimal demonstrations, resulting in a worse policy.\\n\\n\\n> **Q2.** Modifications of Figures 2, 3, 4.\\n\\n**A2.** Thank you for your valuable comments. 
We have redrawn Figures 2, 3, 4 and also included the results of 2IWIL in Figure 3, following your advice. The redrawn plots have been placed in the revised version.\\n\\n\\n> **Q3.** How would the results in Figure 2 be affected if $\\\\pi_\\\\mathrm{opt}$ were dominant? \\n\\n**A3.** In the Ant-v2 and Pendulum-v1 environments, we supplement the experiments at the demonstration ratio of $\\\\pi_\\\\mathrm{opt}:\\\\pi_1=2:1$ in **Common Response point 2**.\\n\\n\\n> **Q4.** What would occur if the optimal demonstrations were kept invariant while the number of suboptimal demonstrations varied?\\n\\n**A4.** We have conducted experiments with different numbers of non-optimal demonstrations in **Common Response point 3**. We find that when the number of non-optimal demonstrations decreases, the performance of PN-GAIL does not decrease significantly.\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"We really appreciate your time for reviewing our paper and reading the follow-up rebuttal! If you had any additional questions, please feel free to ask any time.\"}", "{\"title\": \"Author Response to Reviewer 1f3M\", \"comment\": \"> **Q1.** Missing definition: in Section 3, what $\\\\delta$ represents. Please define $\\\\delta$ when it is first introduced in the paper.\\n\\n**A1.** Thank you for pointing it out! $\\\\delta$ represents Dirac delta function. We have added the above explanation to the revised version.\\n\\n> **Q2.** Needing clarification: PN-GAIL$\\\\backslash$BSC: PN-GAIL without balanced semi-conf (BSC) classification--does this mean no classification used or SC used? \\n\\n**A2.** We apologize for the inaccurate statement. PN-GAIL$\\\\backslash$BSC stands for using SC classification. 
We have clarified the relevant explanations in the revised version.\\n\\n\\n> **Q3.** Lack of analysis for experimental outcomes: it is necessary to provide more detailed explanations and discussions regarding those figures.\\n\\n**A3.** Thank you for your suggestion. In Figure 3, the difference between the performance of PN-GAIL and PN-GAIL$\\\\backslash$PN indicates that there is a preference in the imperfect demonstrations, resulting in the poor performance of the 2IWIL follow-up method. The performance gap between the performance of PN-GAIL and PN-GAIL$\\\\backslash$BSC indicates that the prediction confidence of SC classification is not accurate enough, which affects the subsequent training. If the performance gap is not significant, it means that the above problems are not obvious or do not affect the final results. For example, in the Pendulum-v1 environment, we designed a high percentage of non-optimal demonstrations. In Figure 3, the performance of PN-GAIL and PN-GAIL$\\\\backslash$BSC are close to and higher than that of PN-GAIL$\\\\backslash$PN, indicating that there is little difference between the BSC classification and the SC classification, and there is a preference in imperfect demonstrations. This is also in line with the design of our dataset and the results of the confidence classifier. We have added the above explanation to the revised version.\\n\\n\\n> **Q4.** To verify BSC outperform SC in 2IWIL, straightforward comparisons are PN-GAIL(BSC) vs. PN-GAIL(switch to SC) vs. 2IWIL(switch to BSC) vs. 2IWIL(SC). Why do you consider the way of comparisons that are presented in the paper?\\n\\n**A4.** This is because a direct comparison of the difference between the classification results and the true value is more accurate, convincing, and quantifiable. We have compared the performance of PN-GAIL(BSC) with PN-GAIL$\\\\backslash$BSC (which means using SC) in Figure 3. 
In addition, we have also added comparison between PN-GAIL$\\\\backslash$PN (2IWIL with BSC) and 2IWIL (SC) in Figure 3. The results of both comparisons suggest that BSC is superior to SC.\"}", "{\"comment\": \"Thank you for your valuable feedback and pointing out these relevant references. We have included these references and further strengthened the comparisons and discussions to related work and literature review in the revised paper.\", \"title\": \"Thanks for your response!\"}", "{\"title\": \"Official Comment by Reviewer XEhc\", \"comment\": \"Thanks to the authors for providing a detailed response. Based on the revisions and clarifications, I believe this paper meets the standards for acceptance. As such, I decide to maintain my score.\\n\\nFor the final camera-ready version, I encourage the authors to consider citing additional relevant references (e.g., the one we reviewers suggested [2][3][4][5] and some recently published works) to further strengthen the paper\\u2019s impact and completeness. While I totally understand that conducting additional experiments may be challenging at this stage, it would still be valuable to include comparisons and discussions in the related work section.\", \"driver_behavior_imitation\": \"[2] Ruan, Kangrui, and Xuan Di. \\\"Learning human driving behaviors with sequential causal imitation learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022.\\n\\n[3] Hawke, Jeffrey, et al. \\\"Urban driving with conditional imitation learning.\\\" 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.\", \"imperfect_demonstrations\": \"[4] Li, Ziniu, et al. \\\"Imitation learning from imperfection: Theoretical justifications and algorithms.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[5] Yang, Hanlin, Chao Yu, and Siji Chen. 
\\\"Hybrid policy optimization from Imperfect demonstrations.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}" ] }
0e26yMOCbd
CHARGE DIRICHLET ENERGY: Geometric Perspectives on Over-smoothing in Deep Graph Neural Networks
[ "ZhongYu Li", "Geng Zhao" ]
Over-smoothing is regarded as a key issue affecting the performance of deep Graph Neural Networks (GNNs). As the number of GNN layers increases, model performance degrades significantly, due to node embeddings converging into indistinguishable vectors. This phenomenon stems from the recursive aggregation of neighbor node representations, which impairs the distinguishability of node embeddings. From an energy perspective, this is associated with the convergence of node embeddings to a fixed point solution during the minimization of Dirichlet energy, hindering the model's ability to learn underlying geometric structures. While Graph Convolutional Networks (GCNs) have achieved success in modeling graph-structured data, there is still insufficient understanding of how the underlying geometry contributes to the trainability of deep GCNs. In this paper, we present a novel geometric perspective to understand the poor performance of deep GCNs during training, a method called Charge Dirichlet Energy (\model). We argue that maintaining a healthy geometric structure can significantly enhance the trainability of GCNs and enable state-of-the-art performance, even in base GCN architectures. Subsequently, we analyze the importance and feasibility of learning geometric shapes, demonstrating the critical role of geometric information in training deep GNNs. Extensive empirical validation on multiple benchmark datasets shows that our method improves the geometric shape of deep base GCNs, significantly enhancing their performance and outperforming many state-of-the-art methods in competitive settings. Our contributions include not only a new approach to mitigating over-smoothing and over-compression but also comprehensive theoretical and empirical verification of the importance of geometric structures for the trainability of deep GNNs.
[ "Graph Neural Network", "Over-smoothing", "Dirichlet energy" ]
Reject
https://openreview.net/pdf?id=0e26yMOCbd
https://openreview.net/forum?id=0e26yMOCbd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vGnjSgK8kY", "puleWJx8xP", "LC7FLr9o4J", "Fb56PWKo4M", "8ICIfz3R2y", "5dcpZOS6O1", "1WNTxhmMLd" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1729106333486, 1737523805395, 1730949646868, 1730156051497, 1731276065353, 1734694937622, 1730834851293 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6949/Reviewer_JWcE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6949/Reviewer_Wyan" ], [ "ICLR.cc/2025/Conference/Submission6949/Reviewer_z88i" ], [ "ICLR.cc/2025/Conference/Submission6949/Reviewer_DPwo" ], [ "ICLR.cc/2025/Conference/Submission6949/Area_Chair_gmW7" ], [ "ICLR.cc/2025/Conference/Submission6949/Reviewer_iAkJ" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a new understanding of Dirichlet energy in the context of GNNs, revealing the relationship between Dirichlet energy decay and edge space collapse.\\n\\nThe paper also introduces a new message updating scheme, which prevents over-smoothing by incorporating a residual term weighted by the minimum Dirichlet energy. And the authors conducted extensive comparison experiments demonstrating the effectiveness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors revealed that the Dirichlet energy decay in deep GNNs is linearly proportional to the edge space sum with a constant $c$.\\n\\nThe authors provided that, within the Dirichlet energy analystic framework, it is crucial for the activation function $\\\\phi$ to satisfy $\\\\phi(0)=0$ in Proposition 3.2. 
This may indirectly explains why adding a shift $b$ in SReLU is effective.\\n\\nThe minimum Dirichlet energy constrained message updating scheme showed a good performance.\\nExtensive benchmark performance comparisons.\", \"weaknesses\": \"**Lack of novelty**\\n\\nThis paper is a direct follow-up of [1].\\n\\nConsider equation(8) in [1] : \\n\\n$X^{(k)}=\\\\sigma\\\\left(\\\\left[\\\\left(1-c_{\\\\min }\\\\right) \\\\tilde{P} X^{(k-1)}+\\\\alpha X^{(k-1)}+\\\\beta X^{(0)}\\\\right] W^{(k)}\\\\right)$,\\n\\nwhere $\\\\alpha+\\\\beta=c_{\\\\text{min}}$.\\n\\nWhen set $\\\\beta=0$ and rephrase the equation as \\n\\n$X^{(k)}=\\\\sigma\\\\left(\\\\left[ \\\\tilde{P} X^{(k-1)}+ \\\\frac{c_{\\\\text{min}}}{\\\\left(1-c_{\\\\min }\\\\right)} X^{(k-1)} \\\\right]W^{(k)}\\\\right)$.\\n\\nReplacement the symbol $\\\\frac{1}{1-c_{\\\\text{min}}} \\\\to \\\\alpha$ and $c_{\\\\text{min}} \\\\to E_{init}$ , we immediately obtain:\\n\\n$X^{(k+1)}=\\\\sigma( \\\\tilde{P} X^{(k)}W^{(k)} + \\\\alpha E_{init} X^{(k)}W^{(k)}))$.\\n\\nCompare to Equation(8) in this paper:\\n\\n$X^{(l+1)}=\\\\sigma\\\\left(\\\\tilde{\\\\mathbf{L}} X^{(l)} \\\\mathbf{W}^{(l)}+\\\\alpha E_{\\\\text {init }} X^{(l)}\\\\right)$\\n\\nThese two update functions are remarkably similar, except for a weight matrix.\\n\\nIt has been demonstrated in [1] that, utilizing two distinct forms of residual connections can effectively constrain the lower bound of the DIRICHLET energy, and the constraint's intensity can be modulated by a gating parameter.\\n\\nThe approach presented in this paper appears to gracefully fit within this previously established framework, representing a specific instance when $\\\\beta=0$.\\n\\nThe main contribution is addressing the issue that researchers do not know how to choose an appropriate lower bound $c_{\\\\text{min}}$ for different datasets. 
While in this paper, the authors suggest that simply using the initial DIRICHLET energy $E_{\\\\text{init}}$ as the lower bound works very well.\\n\\nThe authors need to clarify how their method distinguishes itself from or enhances the previous approach.\\nIt would be more valuable if the authors could elaborate on the significance of omitting the weight matrix in the update function and the rationale behind selecting the initial DIRICHLET energy as the lower bound among various initial value choices.\\n\\n[1] K. Zhou *et al.*, \\u201cDirichlet energy constrained learning for deep graph neural networks,\\u201d in *Advances in Neural Information Processing Systems*, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. W. Vaughan, Eds., Curran Associates, Inc., 2021, pp. 21834\\u201321846. \\n\\n \\n\\n**Lack of experiment**\\n\\n- Dirichelet energy visualization. Plotting the Dirichlet energy of each layer with and without the $E_{init}$ term would be more persuasive. As stated in Section 4.2, 'The initial Dirichlet energy serves as a lower bound for the Dirichlet energy,' it is expected that $E_{\\\\text{Dirichlet}}$ will converge to the lower bound $E_{init}$ as the number of layers increases.\\n\\n- The claim that $E_{init}$ preventing topological collapse in Section 4.2 should be supported by experimental evidence. Visualizing the node representations in the final layer and comparing them to the initial topology would be helpful. Consider using commonly employed techniques, such as t-SNE visualization\\\\[2\\\\]\\\\[3\\\\] or a color-propagation test[4].\\n\\n\\n\\n\\n\\\\[2\\\\]D. Shen, C. Qin, Q. Zhang, H. Zhu, and H. Xiong, \\u201cHandling over-smoothing and over-squashing in graph convolution with maximization operation,\\u201d *IEEE Trans. Neural Netw. Learn. Syst.*, pp. 1\\u201314, 2024, doi: [10.1109/TNNLS.2024.3442270](https://doi.org/10.1109/TNNLS.2024.3442270).\\n\\n\\\\[3\\\\]M. Liu, H. Gao, and S. 
Ji, \\u201cTowards Deeper Graph Neural Networks,\\u201d in *KDD*, Aug. 2020, pp. 338\\u2013348. doi: [10.1145/3394486.3403076](https://doi.org/10.1145/3394486.3403076).\\n\\n\\\\[4\\\\]K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka, \\u201cRepresentation learning on graphs with jumping knowledge networks,\\u201d in *International conference on machine learning*, PMLR, 2018, pp. 5453\\u20135462.\", \"questions\": \"**General Concerns**\\n\\n- Is $\\\\alpha$ in Equation(8) a learnable parameter or a manually defined hyper-parameter? The authors did not clarify this.\\n- What are the exact values of $\\\\alpha$ and $E_{init}$ for each dataset in the experiments? These details are not mentioned in either the main text or the supplementary materials.\\n\\n- If $\\\\alpha$ is a hyper-parameter, the model's performance under different $\\\\alpha$ settings should be reported. Additionally, a discussion on how to select an appropriate $\\\\alpha$ would provide valuable insights for the community.\\n- Why not include a comparison with EGNN in Table 1?\\n- Why not use SReLU as the activation function in Section 5.2, given that it has been demonstrated in EGNN[1] for preserving Dirichlet energy, especially on large datasets like OGBN-arxiv?\\n- It would be helpful if the authors could clarify the source of the baseline model performances in Table 1. Specifically, did they conduct all the experiments themselves, or were some of the results sourced from other papers?\\n- Given that the benchmark results are based on 10 random splits, would it be possible to provide the standard deviation in addition to the mean? This could offer a more comprehensive understanding of the results.\\n- It would be better to include publicly available code to ensure reproducibility;\\n\\n\\\\[1\\\\] K. Zhou et al., \\u201cDirichlet energy constrained learning for deep graph neural networks,\\u201d in Advances in Neural Information Processing Systems, M. Ranzato, A. 
Beygelzimer, Y. Dauphin, P. S. Liang, and J. W. Vaughan, Eds., Curran Associates, Inc., 2021, pp. 21834\\u201321846.\\n\\n\\\\[2\\\\] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra, \\u201cBeyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs,\\u201d in *Advances in Neural Information Processing Systems*, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., Curran Associates, Inc., 2020, pp. 7793\\u20137804. \\n\\n\\n**Ambiguous statement**\\n\\n- In Eqaution(8), $E_{init}$ is **multiplied by $X^{(l)}$** . While in Line 304, it states **multiplied by $X^{(0)}$** .\\n\\n > Line 304\\n > The initial Dirichlet energy Einit captures the geometric information of the original graph and, when multiplied by the initial node embeddings X(0), ensures that each layer\\u2019s update process retains the topological features of the original graph.\\n\\n- data split question. What are the actual splits used for the Cora, Citeseer, and PubMed datasets? If Yang's split was adopted, why not use the same split as Geom-GCN for consistency?\\n > Line 326\\n >\\n > For this study, we use the **Cora, Citeseer, and Pubmed** datasets Sen et al. (2008), **following the standard training/validation/test splits established by Yang et al. (2016)**\\n >\\n > Line 367\\n >\\n > We apply our model to datasets including **Cora, Citeseer, Pubmed**, Chameleon Rozemberczki et al.(2021), Film, Cornell, Texas, and Wisconsin, **following consistent splits of 48%, 32%, and 20%** for training, validation, and testing, respectively.\\n\\n\\n**Minor comments**\\n\\n- Table1, Table 2, Table 3 out of page width;\\n- In the introduction section, authors use the notion `a minimum Dirichlet energy \\u03c9` . But in the following text, $\\\\omega$ is no longer used; instead, $E_{init}$ is used. 
A consistent notation across the whole paper would be better;\\n- There is a mistake in Equation (5): $\\\\mathcal{E}(f)$ is already a summation over the (i,j) pairs and cannot be summed again.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper studies the over-smoothing problem of deep graph neural networks and proposes a geometric perspective for addressing over-smoothing. Specifically, the authors analyze the Dirichlet energy minimized by the feed-forward computation process of GCNs and propose a new method based on Dirichlet energy for resolving over-smoothing when the layer number increases. Experiments on small datasets demonstrate the effectiveness of the model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper is well motivated and studies an important problem from an interesting perspective\\n\\n2. The paper is generally well written and easy to follow\", \"weaknesses\": \"1. The proposed method has limited novelty given existing works that have explored similar ideas and model designs, e.g., [1,2]. Adding self-loop or residual link or strengthening the information of the centered nodes have been extensively used by existing GNN models, such as the well-known ones [1, 2]\\n\\n2. The theoretical results are not new and have been derived in the literature, e.g., [3, 4]. The result of Lemma 1 has been proved in [3] and [4]. Besides, the analysis presented in this paper only shows the result that is already well-known, i.e., over-smoothing will happen when the layer increases. There lacks analysis in why and how the propose model can address the over-smoothing.\\n\\n3. The experimental evaluation is limited in small datasets and comparison with state-of-the-arts is insufficient. 
More experiments on large datasets such as ogbn-products and ogbn-proteins are suggested. More comparison with state-of-the-art GNNs, especially the ones that can overcome over-smoothing, e.g., GCNII, are needed.\\n\\n[1] Simple and Deep Graph Convolutional Networks, ICML 2020\\n\\n[2] Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR 2019\\n\\n[4] A note on over-smoothing for graph neural networks. Arxiv 2020\\n\\n[5] Dirichlet Energy Constrained Learning for Deep Graph Neural Networks, NeurIPS 2022\", \"questions\": \"1. How does the model perform on large graph datasets, such as ogbn-products and ogbn-proteins?\\n\\n2. Can the authors provide validation on whether the over-smoothing problem is indeed addressed in practice on the experimental datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Inspired by previous work on over-smoothing and Dirichlet energy, the authors propose a simple, intuitive, and generally applicable method named CDE-GNN to address the over-smoothing problem in Graph Neural Networks. The proposed approach is validated by theoretical and empirical results.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposed a very intuitive and generally applicable idea to effectively alleviate the over-smoothing problem for GNNs.\\n2. The proposed idea is well-motivated by solid theoretical insights and results.\\n3. The paper is easy to read.\", \"weaknesses\": \"1. Some theoretical terms in Section 3 are not sufficiently introduced and explained. The authors are suggested to provide more detailed background and solid definition of the quantities involved.\\n\\n2. Several parts are repetitive or even inconsistent. 
For example, Section 4.1 is redundant and repetitive of the earlier content, as well as the Hyperparameter Analysis with its three following paragraphs in Section 5.2. Please refer to the Questions for details.\\n\\n3. The hyperparameter analysis is not insightful. The study of different activation functions, hidden dimensions, and dropout rates is old-fashioned and not unique to the proposed approach.\", \"questions\": \"1. In Equation (5), why summing over the edges twice?\\n\\n2. In Line 229-230, should Equation 9 actually refer to Equation 7?\\n\\n3. In Line 080, the authors state that the energy lower bound is learnable. However, as in Section 4.2.1, it is fixed as the initial energy. How is it learnable?\\n\\n4. In Line 304, it says the initial energy is multiplied by the initial embeddings, whereas in Equation 8 it is multiplied by the embeddings per layer. Which one is correct?\\n\\n5. In Table 1, why are there two bolded results for Citeseer and Film? Also, are those results for the semi-supervised or fully-supervised setting? It says in Line 334 and 370 that both settings\\u2019 results are in Table 1 but there are no marks for different settings.\\n\\n6. Several results in Table 2 and 3 don\\u2019t match. For example, on the Physics dataset with 32 layers, the optimal accuracy is 94.2 and 94.4, respectively. Why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of over-smoothing in deep GNNs, where node embeddings become indistinguishable as network depth increases, leading to degraded performance. The authors present a novel geometric perspective on this issue and propose a method called Charge Dirichlet Energy (CDE-GNN). 
The authors validate their method through comprehensive experiments across various datasets and network depths, showing consistent performance improvements over baseline models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Well-structured presentation progressing from problem motivation to theoretical analysis to practical solution.\\n2. Comprehensive empirical validation across multiple datasets.\", \"weaknesses\": \"1. Limited Novelty: using Dirichlet energy to overcome oversmoothness has been extensively studied.\\n2. lack of detailed analysis of computational overhead compared to baseline methods\", \"questions\": \"1. The layer propagation rule shows strong similarity to EGNN's Lower-bounded Residual Connection [1]. The paper needs to better elaborate on the key differences between these approaches.\\n\\n2. While EGNN appears as a baseline in Table 2, it is missing from the comprehensive comparison in Table 1. This makes it difficult to fully assess CDE-GNN's performance against this closely related method.\\n\\n3. Figure 1 analyzes the Dirichlet energy and edge space length for GAT, but lacks a corresponding visualization showing how CDE-GNN's Dirichlet energy behaves across different layers. Adding this visualization would help demonstrate the effectiveness of the proposed method in preventing energy decay.\\n\\n[1] Zhou et al. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34, 2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces Charge Dirichlet Energy, a geometric approach that improves deep GCN training by preserving geometric structure. 
Experiments show the model mitigates over-smoothing, enhances performance, and outperforms state-of-the-art methods, with strong theoretical and empirical support for the importance of geometry in GNNs.\\n\\n### Strengths:\\n\\n1. The problem addressed is important.\\n2. Empirical validation is provided across multiple datasets.\\n\\n### Weaknesses:\\n\\n1. The proposed method offers limited novelty.\\n2. The theoretical results are not new and have been previously derived in the literature.\\n3. The presentation requires improvement.\\n4. Key experimental evaluations are missing.\\n\\n### Overall:\\n\\nThe paper exhibits significant weaknesses in terms of novelty, significance, and clarity. A rejection is therefore recommended.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not provide any feedback in the rebuttal period.\"}", "{\"summary\": \"This paper discusses an approach to alleviate over-smoothing of deep graph neural networks through the lens of Dirichlet energy. The idea lies in adding one additional term in the layer-wise propagation which takes into account the Dirichlet energy of the initial graph. Experiments have been conducted on several node classification benchmarks showing that the model can yield better performance with increasing depth of the graph neural network.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The problem tackled is important and this paper approaches it through the concept of Dirichlet energy.\\n\\n2. The method seems to offer empirical enhancements on various benchmarks.\", \"weaknesses\": \"1. Though a lot of efforts have been paid on discussing DIrichlet energy, how the proposed approach is linked to preserving Dirichlet energy still remains very unclear. Eq. (8) was introduced alone while more theoretical investigations and experimental observations should be incorporated.\\n\\n2. The presentation needs improvement. 
Wordy sentences present at times with many of them constantly repeated, e.g., the contents in Sec. 4.1 has been discussed multiple times in the previous sections and should be simplified for more informative contents.\\n\\n3. Some concepts were introduced with confusion and did not exhibit strong correlation to the proposed approach. For instance, how Proposition 3.2 is related to Eq. (8) (e.g., how Eq. (8) help address the vanishing problem) is unclear. There is also no clear reason of introducing the edge space (Definition 3.2) and Corollary 3.1 conveys limited information.\\n\\n4. Experiment settings are not convincing. For example, the reported performance of the baselines are remarkably lower than the official leaderboard of obgn-arxiv (https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv).\", \"minor\": \"Misuse of citet vs citep in multiple places hinders the readability. The authors are encouraged to correct these presentation issues.\\n\\nOverall, the paper in its current shape is unsatisfactory in justifying the rationale of the proposed approach, which is simply Eq. (8), and relevant discussions in both theory and experiment are missing, making it less self-consistent.\", \"questions\": \"1. How Eq. (8) could help preserve Dirichlet energy of each layer? By simply multiplying $X^{(l)}$ with a scalar and adding that to before applying nonlinearity, how would it guarantee Dirichlet energy is not vanishing?\\n\\n2. How is the approach compared with others like GCNII in terms of preserving DIrichlet energy? Intuitively adding initial residuals can already effectively preserve Dirichlet energy. Why the proposed approach adds the additional term before the nonlinearity and why the embedding of the previous layer instead of the initial layer is used?\\n\\n3. Why is the performance of all models on ogbn-arxiv much lower than officially reported results?\\n\\n4. 
Are there any plots of the layer-wise Dirichlet energy of the proposed model as well as some baselines (e.g., GCNII [1]) on these benchmarks? How does Dirichlet energy connect to actual performance? This would be important to help readers gain more intuition and also help understand the efficacy of the proposed approach.\\n\\n5. What is the rationale of introducing edge space (Definition 3.2) and how does it play a role in justifying the proposed method?\\n\\n\\n[1] Chen et al. Simple and Deep Graph Convolutional Networks. ICML'20.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
0dELcFHig2
Multi-modal brain encoding models for multi-modal stimuli
[ "SUBBA REDDY OOTA", "Khushbu Pahwa", "mounika marreddy", "Maneesh Kumar Singh", "Manish Gupta", "Bapi Raju Surampudi" ]
Despite participants engaging in unimodal stimuli, such as watching images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged in multi-modal stimuli. As these models grow increasingly popular, their use in studying neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where the brain separates and integrates information across modalities through a hierarchy of early sensory regions to higher cognition (language regions). We investigate this question by using multiple unimodal and two types of multi-modal models—cross-modal and jointly pretrained—to determine which type of model is more relevant to fMRI brain activity when participants are engaged in watching movies (videos with audio). We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps in identifying which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from multi-modal representations, and find that there is additional information beyond the unimodal embeddings that is processed in the visual and language regions. Based on this investigation, we find that for cross-modal models, brain alignment is partially attributed to the video modality, while for jointly pretrained models, it is partially attributed to both the video and audio modalities. These findings serve as strong motivation for the neuroscience community to investigate the interpretability of these models for deepening our understanding of multi-modal information processing in the brain.
[ "brain encoding", "fMRI", "multi-modal models", "multi-modal stimuli", "Transformers", "videos", "speech", "language" ]
Accept (Poster)
https://openreview.net/pdf?id=0dELcFHig2
https://openreview.net/forum?id=0dELcFHig2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zK3eEDjIzR", "yWeHYfnUBT", "yN0v10hxJe", "xqGXrLG6V9", "vY9PA2MsE6", "tNniRnkOtq", "t1RFjIW9w6", "oYq7s00A1T", "n6A0eiRZsg", "lwRBeisL3I", "j2UIsd8nvB", "h9f24fG949", "fXSTARzEIO", "dJgAMHrX0U", "WPesJsUBwL", "Vf9NYfypef", "USMMrzUrPe", "QEKkmot3FM", "Nldx0hOlsS", "KqI4zs7UCx", "HdikBWZ1zW", "FrU1sgOPDH", "ACkHGU4L8j", "9gH3zm3ZZx", "8Rx1efoVl8", "4VMTdqM0xe" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730370221570, 1732106393471, 1732289618537, 1730703198616, 1732119801870, 1729798182822, 1732117277844, 1732379807310, 1737523662596, 1732109662453, 1732105064717, 1732380477823, 1732117581781, 1732315043317, 1732220062663, 1732226458860, 1732309196273, 1734733075187, 1732110715069, 1732309364518, 1732107553298, 1732110219850, 1732115812764, 1732519179683, 1732110689345, 1732513174870 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_cWB1" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_8Uj4" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_gD8j" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_8Uj4" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_8Uj4" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_cWB1" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Area_Chair_dY76" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Authors" ], [ "ICLR.cc/2025/Conference/Submission4793/Reviewer_gD8j" ] ], "structured_content_str": [ "{\"summary\": \"This paper compares different multimodal AI models to human brain responses while participants view audiovisual movies. They compare two different multimodal models, one cross-modal model that learns separate visual/audio embeddings and projects them into a shared representational space, and one jointly pretrained multimodal model, and three unimodal (vision or speech) models. The results show that in most brain regions multimodal training improves encoding model performance of voxel activity, particularly compared to unimodal speech models. The authors do additional residual analysis to understand the unique contribution of multimodal models (over unimodal models) to brain predictivity.\\n\\nOverall, this is an interesting approach and the paper has many strengths. The link between results and overall conclusions was not always clear. In particular, the small number of highly varied models makes it difficult to draw strong conclusions about parallels between multimodal processing in the models and brains. 
Finally, there were several clarification/presentation questions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"There are many interesting and novel aspects to this paper. First, while there have been extensive encoding model studies on visual or audio stimuli, few have looked at model comparison to multimodal movies. The comparison of audiovisual models to this data is particularly novel. Further the comparison of different types of multimodal models is interesting (though it is difficult to draw strong conclusions about what their comparison tells you about multimodal processing in the brain, see below), and particularly the attempts to quantify what additional explanatory power a multimodal model has over unimodal models. The encoding analyses were also robust, particularly the cross-movie train/test splits.\", \"weaknesses\": \"The biggest issues stem from the comparison of the performance of a relatively small number of models that differ along many factors. This limitation makes it difficult to attribute any model differences to multimodality or different cross-modal training schemes, as the model architecture and training sets vary from model to model.\\n\\nThe fMRI dataset uses a small-n, data-rich design, which is good, but given this, it would help to see the results at the individual subjects\\u2019 level (in the appendix). On bar plots, it would be nice to plot each of the six subjects as a point overlaid on the average bar (rather than error bars which can obscure differences across the small number of subjects).\\n\\nThe residual analyses and results are somewhat confusing. The residual correlation with unimodal features was 0.56, which is still quite high. Given this, it is not clear that unimodal information was removed from multimodal models. 
Alternatively, the authors could do the residual analysis on the brain instead of the models (i.e., fit a model with both unimodal and cross modal and predict with just cross modal). Relatedly they could calculate the product measure or unique variance explained by multimodal models above unimodal models from this joint model. \\n\\nOverall, the language and visual region responses look largely the same in most analyses. There are some quantitative/statistical differences, but the pattern is extremely similar. The authors should address this.\\n\\nPerhaps related to the above point, all regions of interest are defined anatomically, but there is a fair amount of subject-wise variability in the anatomy of high-level functional regions, such as language. The authors should address this limitation.\\n\\nAcronyms and plots were difficult to follow. Would help to spell out throughout vision versus lang regions for example, and clarify the legend in figure 4 (e.g., it was not clear on first read that \\u201c-\\u201c indicates \\u201cminus\\u201d).\\n\\nFigure 5 is difficult to read and interpret. In terms of clarification, the authors should specify hemispheres/views (it looks like a lateral view on top, medial on bottom, but I\\u2019m not certain). The results look extremely noisy and seem to show random patterns, with as much red as blue. Blue are areas the unimodal models perform better? How should that be interpreted? The legend says the colorbar indicates \\u201cpercentage increase/decrease. Does this mean 0.5% or 50%? If the former, these are very tiny differences, which perhaps explains the above confusion, but I believe makes it difficult to draw any strong conclusions about these results. \\n\\nI had questions about two of the conclusions listed in the discussion. I was unsure what the second part of conclusion 2 (\\u201cThis variance can be partially attributed to the removal of video features alone, rather than auditory features.\\u201d) meant. 
I am also unconvinced of conclusion #4 given the overall similarity between language and visual brain regions described above.\", \"typos\": [\"Line 224: \\u201caudo\\u201d --> \\u201caudio\\u201d\", \"Line 276: \\u201cwmploy\\u201d --> \\u201cemploy\\u201d\"], \"questions\": [\"It would help to have additional methodological details in some sections:\", \"Was cross-subject prediction generated by using all of the predictor subjects voxels, to predict the target subject\\u2019s voxel-wise responses?\", \"What are the six different modalities in the image-bind modality? I thought it was an audio-visual model (which I would consider two modalities)\", \"The layer-wise predictions for models are shown in the appendix, but do the main text figures use all layers or best layer? If the latter, how is this layer selected.\", \"Are the whole brain results averaged across all cortical voxels? Or only those with above-threshold cross-subject prediction? Or some other criteria?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**Q4. For the procedure described in the Figure 1b caption on removing unimodal influence: why subtract out the unimodal contribution? 
Why not learn a regression directly from the unimodal contribution itself, i.e., the predictions of r?**\", \"Thank you for this question.\", \"We would like to clarify that in this analysis, we did not directly subtract the unimodal contribution from multimodal models.\", \"Instead, we first learn a linear mapping from unimodal to multimodal features using ridge regression (r).\", \"This function (r) estimates the component of the multimodal representations (CM(X)) that is linearly predictable from the unimodal features.\", \"Now, we compute the residuals, |CM(X) - r(VM(X))|, where CM(X) represents the multimodal contribution and VM(X) represents the unimodal contribution.\", \"The residuals allow us to characterize any changes in brain alignment performance when unimodal representations are removed. The subtraction operation here conceptually stands for the removal of unimodal contributions from multimodal representations and should not be interpreted as a direct subtraction.\", \"Further, if we perform regression of r(VM(X)) with the brain responses, as the reviewer suggests, this would be essentially identical to looking at unimodal brain alignments.\", \"These results are already presented in Fig. 3 (Orange and Blue bars for Video and Speech unimodal models, respectively).\", \"**Q5. Can it be said that the IB-audio and IB-video unimodal representations described on lines 223-224 are not truly unimodal, since they are extracted from a model that was trained with multimodal inputs? Then, they reflect correspondences between language and vision.**\", \"Thank you for this question.\", \"Yes, it is true that IB-audio and IB-video are also multimodal representations. These embeddings are derived from the pretrained ImageBind model, which inherently generates multimodal representations. By \\\"modality-specific embeddings,\\\" we refer to the extraction of audio and video embeddings from ImageBind. 
While these embeddings are modality-specific in their origin (audio or video), they are still multimodal in nature due to the model's multimodal pretraining, as the reviewer points out.\", \"Just to add, Unimodal VM (ViT-B, VideoMAE, ViViT) and Unimodal SM (Wav2Vec2.0, AST) are the pure unimodal models.\", \"**Q6. Figure 3 second row, middle two columns: Why does the green bar not have ^\\\\* for SV? It seems significantly higher than both light green bars. Why does the green bar have ^\\\\* for MT? It only seems significantly higher than one light green bar.**\", \"Thank you for raising this question.\", \"To clarify, the * symbol indicates cases where multi-modal embeddings are significantly better than unimodal video models. For the SV region (renamed as PPA, Parahippocampal Place Area), IB Concat (green bar) is not significantly better than unimodal video models (orange bar).\", \"\\u2227 indicates cases where multi-modal embeddings are significantly better than unimodal speech models. For the PPA region, IB Concat (green bar) is significantly better than unimodal speech models. As a result, we use only the ^ symbol to denote this relationship.\", \"We have already included this information in the caption for clarity. A similar explanation applies to the MT region, where the multi-modal embeddings show a similar pattern of significance.\"]}", "{\"comment\": \"*Q2: Model variability*\\n\\nOverall, I find this explanation by the authors to somewhat mitigate my concern. However, I want to emphasize that belonging to the same general class of architecture (e.g., ViT) is not the same as having the same architecture. The authors need to address this as a limitation in the main text of the paper. \\n\\n*Q4: Conwell et al. findings*\\n\\nI am not satisfied by the authors\\u2019 explanation of the Conwell findings. In controlled comparisons of models with and without language alignment, Wang et al. 
(2023) also did not find an increase in performance using an encoding approach as a result of language alignment throughout most of high-level visual cortex (Figure 5c). I will also emphasize here that my primary concern related to the Conwell and Wang findings is that when models are well controlled for architecture and training data, multimodality training may not have a meaningful effect. The authors can mitigate my concern here by just acknowledging that not all studies have found multimodal effects in the Related Work section and acknowledging that tightly controlled comparisons of models are needed in future work in the Limitations section (see *Q2*). \\n\\n*Q6: Multimodal effects*\\n\\nI appreciate the clarifications that the authors provided here. I would additionally like to see these clarifications reflected in the text.\"}", "{\"summary\": [\"Existing work uses unimodal models to identify language and vision processing pathways in the brain. This paper studies multimodal processing in the brain. Multimodal networks are aligned to the brain, and regions with better alignment are identified as the sites of multimodal processing. To verify that the alignment is actually due to multimodality, unimodal influence is removed from multimodal representations by subtracting out the multimodal target, as predicted by the unimodal input, from the multimodal features.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work is novel in that it is the first to use fMRI. 
But other works also take similar approaches to identifying multimodal processing (see weaknesses).\", \"Findings of the difference between cross-modal and jointly-trained models with respect to regions are novel\", \"These findings have neuroscientific significance\", \"Appropriate random baselines are used to give context to alignment numbers\"], \"weaknesses\": \"- Relationship with previous work [1] and similar concurrent work [2,3] that uses multimodal models to probe for multimodal processing in the brain should be discussed. [1] studies cross-modal and jointly trained networks and uses naturalistic multi-modal stimuli.\\n- The random baseline described in 6.1 is good to make sure that the trained model weights matter. To better get a sense of whether the alignment actually reflects processing in the brain, another good sanity check would be to run a permutation test in which you keep the trained weights, but give the movie stimulus inputs to the model in scrambled order. This would give a floor for the scale of alignments that we see in subsequent results.\\n\\n## Small things\\n- Line 276 typo: wmploy -> employ\\n\\n## References\\n[1] Subramaniam, V., et al. \\\"Revealing Vision-Language Integration in the Brain with Multimodal Networks.\\\" International Conference on Machine Learning (ICML), 2024.\\n\\n[2] Kewenig, Viktor, et al. \\\"Evidence of Human-Like Visual-Linguistic Integration in Multimodal Large Language Models During Predictive Language Processing.\\\" arXiv preprint arXiv:2308.06035 (2023).\\n\\n[3] Dong, Dota Tianai, and Mariya Toneva. \\\"Vision-Language Integration in Multimodal Video Transformers (Partially) Aligns with the Brain.\\\" arXiv preprint arXiv:2311.07766 (2023).\", \"questions\": [\"For the procedure described in the Figure 1b caption on removing unimodal influence: why subtract out the unimodal contribution? 
Why not learn a regression directly from the unimodal contribution itself, i.e., the predictions of $r$?\", \"Can it be said that the IB-audio and IB-video unimodal representations described on lines 223-224 are not truly unimodal, since they are extracted from a model that was trained with multimodal inputs? Then, they reflect correspondences between language and vision.\", \"Figure 3 second row, middle two columns: Why does the green bar not have $\\\\wedge *$ for SV? It seems significantly higher than both light green bars. Why does the green bar have $\\\\wedge *$ for MT? It only seems significantly higher than one light green bar.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of our responses and revision\", \"comment\": \"**We are grateful to all reviewers for their strong positive feedback, time, and constructive suggestions, which will further strengthen the impact of our work.**\\n\\n**Summary of Reviewer Strengths:**\\n\\n1. Novelty of Approach: This work is the first to use fMRI to investigate differences in cross-modal and jointly-trained multimodal models with respect to brain regions, a novel contribution with neuroscientific significance. **(reviewers: gD8J, cWB1)**\\n2. Significance of Findings: This study provides novel insights into the differences between cross-modal and jointly-trained models and their explanatory power for multimodal processing in the brain. **(reviewers: gD8J, cWB1)**\\n3. 
Well designed experiments: **(reviewers: gD8J, cWB1, 8Uj4)**\\n - Standard and well-validated methods were employed effectively, providing confidence in the study's findings.\\n - The use of random baselines contextualizes alignment numbers, improving interpretability.\\n - Cross-movie train/test splits and residual analyses add robustness to the encoding analyses, offering a more comprehensive evaluation of multimodal effects beyond unimodal contributions.\\n4. Innovative Comparisons: **(reviewers: gD8J, cWB1)**\\n\\t- The comparison between different types of multimodal models (e.g., audiovisual and unimodal) is interesting\\n\\n**Additional changes to the draft during the rebuttal process**\\n\\nWe have updated the main manuscript and the appendix to address the following comments. The changes made in the manuscript are highlighted in blue. The major additional changes are listed below.\\n\\n1. **Extended Related Works** (Reviewer gD8j, 8Uj4): We discuss how our current study is different from previous studies [Subramaniam et al. (2024)], [Kewenig et al. (2024)], [Dong et al. (2023)] in the following aspects:\\n - Subramaniam et al. (2024) employed vision-language models based on image-text pairs, potentially overlooking the temporal dynamics of continuous movies, and used SEEG, which is limited in spatial resolution and coverage. Their study focused on cross-modal integration without exploring jointly pretrained models, with each participant viewing different movie stimuli, leading to varied inputs across participants.\\n - Kewenig et al. (2024) focused on behavioral evidence, demonstrating that multimodal attention models can leverage contextual information to predict upcoming words in a manner aligned with human behavior. Notably, the study did not involve brain recordings; instead, the authors collected behavioral ratings and focused on human attention as indexed through eye-tracking data.\\n - Dong et al. 
(2023) compared brain alignment performance before and after fine-tuning but did not explore jointly-pretrained multimodal models vs. multiple unimodal models, leaving open questions about which multimodal approaches best predict brain activity.\\n - Conwell et al. (2022) and Wang et al. (2022): These studies are discussed in the context of their findings that contrastive image-language training does not always lead to performance improvements, particularly in tightly controlled experiments.\\nWe have added this extended discussion in **Appendix J** of the revised paper.\\n\\n2. **Baseline Analysis: Scrambling Inputs to Multimodal Models** (Reviewer gD8j): \\n - We conducted an additional baseline experiment where we kept the trained weights unchanged and shuffled the movie stimuli into a scrambled order as input to the two multimodal models: cross-modal and jointly-pretrained models. \\n - The results demonstrate that embeddings from multimodal models exhibit significantly better brain alignment compared to both randomly initialized models and when passing scrambled clips as input.\\n\\n3. **Whole Brain, Language and Visual ROIs analysis: Shared and Unique variance between Multimodal and Unimodal models** (Reviewer cWB1): To empirically verify residual analysis, we now build two voxelwise joint encoding models: (1) that includes both unimodal and cross-modal features, (2) that includes both unimodal and jointly-pretrained features. \\n - Using these joint encoding models and prior encoding models, we compute the unique variance explained by each model, i.e., the unimodal, cross-modal, and jointly-pretrained models. \\n - The results, presented in Appendix N, Figures 12, 13 and 14, for the whole brain and the language network, reveal that the shared variance between the jointly pretrained (TVLT) model and the unimodal models is significantly lower than that observed between the cross-modal (IBConcat) model and the unimodal models.\\n\\n4. 
**Updated Brainmap Plots and Figures in Appendix** (Reviewer cWB1): The revised appendix includes updates to Brainmap plots and figures (Figs. 7, 8, 9 and 10), featuring standard ROI names. Additionally, subject values are now represented as individual points on bar plots instead of using error bars, ensuring clearer visualization of subject-specific data.\"}", "{\"summary\": \"The authors aim to address whether there is a difference in brain-alignment based on whether multimodal models had cross-modality pretraining (separate encoders for two modalities) or joint pretraining (shared encoder across modalities). They compare one model of each type to video and speech models and evaluate the prediction in a large-scale open access movie dataset. Using residual analyses, they investigate whether there are multi-modal representations in visual and language regions of the brain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"For the most part, the authors use standard and well validated methods to answer their question. I particularly like the approach of evaluating the performance on entire held-out videos and using the residual analysis to investigate multi-modal effects above and beyond unimodal effects.\", \"weaknesses\": \"The authors only evaluate two multi-modal models and a small number of unimodal models. However, the chosen models differ on many factors (e.g. architecture, training data) in addition to the input modality and as a result, it is premature to draw conclusions about semantic representations in the brain that may be attributable to any of these factors. To my mind, there are two ways to mitigate this concern: 1) controlled model training and evaluation so that only one factor varies at a time, or 2) testing many different models of a given class such that even across significant model variations there is a robust effect of modality regardless of particular model factors. 
I think that this is a serious concern because not all prior work has found that multi-modal models are more brain aligned. In controlled comparisons between visual models with the same architecture and training data, there was no performance increase as a result of contrastive image-language training (Conwell et al., 2023). These authors suggest that the higher alignment of CLIP relative to unimodal models in other work may be due to training set size.\\n\\nConwell, C., Prince, J. S., Kay, K. N., Alvarez, G. A., & Konkle, T. (2023). What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? (p. 2022.03.28.485868). bioRxiv. https://doi.org/10.1101/2022.03.28.485868\\n\\nA minor weakness of the paper is that the authors use custom, non-standard acronyms and names for brain regions (e.g., scene visual area, SV, and object visual processing region, OV). It is confusing as a reader, but more critically, it is difficult to understand what has been found for particular regions across the field, making the status of the literature more tenuous. I would suggest that the authors adopt standard acronyms throughout (e.g., PPA instead of SV). \\n\\nAlthough the paper overall is fairly clear, section 6.3 and the corresponding figures (4, 9, and 10) are difficult to follow. I welcome clarification because, outside of a few lines in the discussion, I am having a hard time understanding which regions do show a multi-modal effect. Additionally, I think that the authors should emphasize whether they uncover expected unimodal effects in primary sensory cortices. In particular, we would expect that EVC would be predicted by visual models with no additional multi-modal contribution and similarly in AC but for auditory models. 
I am having trouble determining whether that is the case from the figures, but if it is not the case, it would lend more weight to my major concern about differences between models beyond modality.\", \"questions\": \"Why did the authors choose to use parametric statistics? To my knowledge non-parametric statistics are more common in NeuroAI to estimate a baseline performance rather than assuming one.\\n\\nAre the brains on the bottom in Figure 5 the medial view? It is difficult to see why there are four brains in each box. Outside of labeling, showing the sulci on the inflated brains would help to orient the reader to what is being shown.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**Q4. In controlled comparisons between visual models with the same architecture and training data, there was no performance increase as a result of contrastive image-language training (Conwell et al., 2023).**\", \"Thank you for pointing this out and for giving us the opportunity to clarify our findings in the context of existing research.\", \"**Addressing Conwell et al. (2023) Findings:**\", \"Conwell et al. (2022) conducted controlled comparisons between visual models with identical architecture and training data, finding no performance improvement from contrastive image-language training. This outcome might be attributed to the evaluation metrics used in their analysis.\", \"Conwell et al. (2022) employed distributional similarity measures like Representational Similarity Analysis (RSA) even for voxel-wise encoding models. 
While these metrics are useful for comparing the overall statistical properties of neural and model representations, they may not capture detailed functional correspondences between specific model features and neural responses.\", \"In contrast, correlation-based metrics such as Pearson Correlation Coefficient (PCC) and explained variance are designed to assess the direct relationship between model predictions and neural activity on a voxel-by-voxel basis. These metrics are more sensitive to data nuances and can detect subtle alignments that distributional measures might miss *[Soni et al. 2024]*.\", \"For instance, *[Soni et al. 2024]* noted that using correlation-based metrics might emphasize linear relationships, while distance-based metrics could capture non-linear patterns. This sensitivity underscores the importance of carefully choosing the metric to ensure that the analysis aligns with the research objectives and accurately reflects the underlying data structures.\", \"The difference in evaluation metrics may partly explain why Conwell et al. (2022) did not observe a performance increase with contrastive image-language training, whereas other studies, including ours, have found that multimodal models show improved brain alignment by using correlation-based metrics that are sensitive to fine-grained functional correspondences.\", \"*[Soni et al. 2024] Conclusions about Neural Network to Brain Alignment are Profoundly Impacted by the Similarity Measure, Arxiv 2024*\", \"**Q5. A minor weakness of the paper is that the authors use custom, non-standard acronyms and names for brain regions**\", \"Thank you for raising this important concern.\", \"We have updated the plots in our manuscript to include standard brain region names commonly used in the literature. 
Specifically, we have labeled the regions as follows:\", \"PPA (Parahippocampal Place Area) for scene visual areas\", \"OFA (Occipital Face Area) for face-related areas\", \"LOC (Lateral Occipital Complex) for object-related regions\", \"These updates have been made both in the text and in the figures to enhance clarity and readability for readers. We believe that using these standard nomenclatures will make our findings more accessible and understandable to the scientific community.\", \"All these changes have been incorporated into the revised draft of the manuscript.\"]}", "{\"comment\": \"We appreciate the reviewer's feedback and are confident that it has enhanced the paper's quality.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": [\"*We thank the reviewer for their valuable comments and suggestions which are crucial for further strengthening our manuscript.*\", \"**Q1. Challenges in comparing relatively small number of models. Impact of diverse model architectures on performance comparison**\", \"Thank you for this interesting question.\", \"We have provided a detailed discussion in our responses to **CQ1 and CQ2**. Kindly refer to those responses for more comprehensive information.\", \"**Q2. The fMRI dataset uses a small-n, data-rich design, which is good, but given this, it would help to see the results at the individual subjects\\u2019 level (in the appendix). On bar plots, it would be nice to plot each of the six subjects as a point overlaid on the average bar**\", \"We thank the reviewer for this suggestion.\", \"For brain encoding studies using naturalistic fMRI datasets, we follow the common practices in this area of research in that the number of samples per participant is more important than the number of subjects (six) because the predictive models are trained independently for each participant. Thus, having more samples per participant helps us learn a better predictive model. 
The Movie10 fMRI dataset is one of the largest datasets in terms of samples per participant (~12,950 samples), and that is the main reason for its frequent use. Our results also clearly show that this dataset is sufficient to learn a good predictive model, as we can predict up to 50% of the explainable variance for held-out brain recordings that were not used for training (e.g., Fig 2). Hence, our fMRI dataset is not small.\", \"Based on the reviewer\\u2019s suggestion, **we have updated the bar plots in Appendix Figs. 7, 8, 9 and 10** to display each subject\\u2019s predictivity score as individual points instead of using error bars. This visualization provides a clearer representation of variability across subjects, allowing readers to assess individual differences more effectively. We believe these changes address the reviewer's concerns and enhance the clarity and interpretability of our results.\", \"**Q3. The residual analyses and results are somewhat confusing. Alternatively, the authors could do the residual analysis on the brain instead of the models. Relatedly, they could calculate the product measure or unique variance explained by multimodal models above unimodal models from this joint model.**\", \"Thank you for this interesting question.\", \"**Residual Analysis on the Brain:**\", \"The features can be removed from the model representations (as we do), or from the brain recordings (as suggested by the reviewer).\", \"Conceptually, the results of these approaches should be the same because when the feature is removed completely from either the input and/or the target, it would not be able to further impact the observed alignment.\", \"However, practically, brain recordings are noisier than model representations, and so estimating the removal regression model will be more difficult, especially with fMRI data of low SNR.\", \"Thus, residual analysis on the brain is less effective. 
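To make the model-side alternative concrete, here is a minimal sketch of the residualization step. The features are synthetic stand-ins; the shapes, the ridge penalty, and the variable names are hypothetical, not the actual embeddings or hyperparameters used in the paper:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: VM = unimodal (video) features, CM = multimodal
# features, one row per stimulus sample (hypothetical shapes).
VM = rng.standard_normal((1000, 128))
CM = VM @ rng.standard_normal((128, 256)) + 0.5 * rng.standard_normal((1000, 256))

# Step 1: learn the ridge map r from unimodal to multimodal features.
r = Ridge(alpha=1.0).fit(VM, CM)

# Step 2: residualize CM, removing the part linearly predictable from VM.
CM_residual = CM - r.predict(VM)

# The residuals (not CM itself) are then regressed onto the fMRI responses;
# a large drop in voxel-wise prediction indicates shared unimodal content.
print(CM_residual.shape)  # (1000, 256)
```

Note that with ordinary least squares the residuals are exactly orthogonal to the (centered) unimodal features; ridge shrinkage makes the removal approximate, so the regularization strength matters.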
Therefore, we opt to remove features from the model representations where we can exercise more control, similar to *[Toneva et al., 2022; Oota et al., 2023; Oota et al., 2024]*. We will clarify this reasoning in the main paper.\", \"**Unique Variance Calculation:**\", \"Based on the reviewer\\u2019s suggestion, we now build two voxelwise joint encoding models:\", \"(i) that includes both unimodal and cross-modal features,\", \"(ii) that includes both unimodal and jointly-pretrained features.\", \"Using these joint encoding models and prior encoding models, we compute the unique variance explained by each model, i.e., the unimodal, cross-modal, and jointly-pretrained models.\", \"Also, we compute the shared variance between these models.\", \"The results, presented in **Appendix N, Figures 12, 13 and 14**, for the whole brain and the language network, reveal that the shared variance between the jointly pretrained (TVLT) model and the unimodal models is significantly lower than that observed between the cross-modal (IBConcat) model and the unimodal models.\", \"This finding aligns with earlier residual analysis results, where IBConcat showed a larger performance drop when unimodal information was removed compared to the TVLT model (see Figure 4). Specifically, we observe a more pronounced performance drop when video features are removed compared to when speech features are removed.\", \"For the visual network, as shown in Figure 14, we observe that the unique explained variance between TVLT and unimodal VMs is comparable, while the cross-modal model IB Concat exhibits a higher unique explained variance compared to unimodal VMs. In contrast to unimodal VMs, both cross-modal and jointly pre-trained models demonstrate higher unique explained variance compared to unimodal SMs.\", \"*[Toneva et al. 
2022], Combining computational controls with natural text reveals aspects of meaning composition, Nature Computational Science 2022*\"]}", "{\"comment\": [\"*We thank the reviewer for their strong positive, insightful and valuable comments and suggestions, which are crucial for further strengthening our manuscript.*\", \"**Q1. Relationship with previous works that use multimodal models to probe for multimodal processing in the brain should be discussed.**\", \"Thank you for this question. We would like to clarify that our study is different from previous studies in the following aspects:\", \"**Subramaniam et al. (2024)** utilized vision-language models based on image frame-text pairs, despite the stimuli being continuous movies. While effective for certain tasks, this approach may overlook the temporal dynamics inherent in videos. Additionally, their use of stereoencephalography (SEEG), although it offers high temporal resolution, is limited in spatial resolution and typically restricted to specific brain regions where the electrodes are implanted. Finally, their study focused on multimodal integration in a cross-modal model setting and did not explore jointly pretrained settings. Additionally, each participant watched a different movie while their SEEG activity was recorded; therefore, the input stimuli varied widely across participants.\", \"In contrast, our study leverages video-audio models that capture the temporal events in videos, providing richer and more dynamic representations of the stimuli. By incorporating audio data, we preserve acoustic information that may be lost in text-based transcriptions. 
Moreover, we utilize fMRI data, which offers whole-brain coverage and higher spatial resolution, enabling a more comprehensive analysis of brain activity.\", \"Our approach also considers both cross-modal and jointly pretrained multimodal models, offering a more nuanced understanding of how different modalities interact and integrate information in the brain.\", \"**Dong et al. (2023)** utilized the Friends web series fMRI dataset to investigate the effectiveness of pretrained versus fine-tuned multimodal video transformers using video+text stimuli-based brain activity. Their study specifically examined the impact of fine-tuning a multimodal model on brain alignment, comparing performance before and after fine-tuning. However, they did not explore cross-modal vs. jointly pretrained model analysis or the comparison between multimodal and unimodal models, leaving it unclear which types of multimodal models perform best for predicting brain activity. In contrast, our study focuses on video+audio stimuli and includes a comprehensive residual analysis, providing deeper insights into the contributions of different modalities.\", \"**Kewenig et al. (2024)** conducted a study involving 200 human participants who provided ratings while watching short audio-visual clips (6 seconds each) to estimate the predictability of upcoming words. This study specifically focused on behavioral evidence, demonstrating that multimodal attention models can leverage contextual information to predict upcoming words in a manner more aligned with human behavior. 
Notably, this study did not involve brain recordings; instead, the authors collected behavioral ratings and focused on human attention as indexed through eye-tracking data.\", \"These differences suggest that our study is better placed to provide deeper insights into how the brain processes and integrates multimodal information, leveraging the strengths of video-audio models and the comprehensive spatial coverage of fMRI.\", \"We have included this extended related work in **Appendix J** of the revised paper.\", \"**Q2. Baseline Analysis: Scrambling Inputs to Multimodal Models**\", \"Thank you for suggesting this new experiment.\", \"Based on the reviewer\\u2019s suggestion, we conducted an additional baseline experiment where we kept the trained weights unchanged and shuffled the movie stimuli into a scrambled order as input to the two multimodal models: cross-modal and jointly-pretrained models. The results of these experiments have been added to **Appendix K, Fig. 12** of the revised paper.\", \"The results demonstrate that embeddings from multimodal models exhibit significantly better brain alignment compared to both randomly initialized models and when passing scrambled clips as input. Furthermore, when comparing scrambled input to pretrained models with randomly initialized models, the scrambled input shows improved alignment over random initialization.\", \"Overall, this sanity check confirms that representations from multimodal models maintain meaningful alignment with brain activity, even when the stimulus order is scrambled, highlighting their robustness and effectiveness.\", \"**Q3. Line 276 typo: wmploy -> employ**\", \"Thank you for pointing this out. We have corrected the identified typos in the revised draft.\"]}", "{\"comment\": \"Dear Reviewer gD8j,\\n\\nWe appreciate your feedback and the effort you have invested in evaluating our work.\\n\\nIn response to your insightful comments, we have addressed the issues you highlighted. 
We believe these revisions significantly contribute to the clarity and completeness of the paper. We kindly request you to verify our response and consider updating your evaluation based on the revisions made.\\n\\nShould you have any further questions or suggestions, we are ready to provide additional information or clarification as needed.\\n\\nThanks for your help\"}", "{\"comment\": [\"**Q6. How do the authors clarify the multimodal effects in Section 6.3 and Figures 4, 9, and 10, particularly regarding whether expected unimodal effects are observed in primary sensory cortices**\", \"Thank you for this question. We understand the need for clarity as suggested by the reviewer. In the following, we summarize the main results for the multi-modal and unimodal models.\", \"Multi-modal effects:\", \"In general, multimodal models have better predictivity in the language regions (see Fig. 2).\", \"Unimodal effects:\", \"Unimodal models have higher predictivity in the early sensory regions (visual and auditory). Such patterns of results are to be expected, as the reviewer pointed out.\", \"Critical Differences:\", \"However, critical differences are also observed. While multimodal models achieve around 50% alignment (Fig. 3) in high-level visual regions (PPA, MT), they perform worse (around 40% alignment) in the early visual region (EVC). These patterns are also seen in the auditory regions (AC versus PTL). Thus, there seem to be both patterns of similarity as well as critical dissimilarities observed in the comparative analyses.\", \"Residual analysis:\", \"For cross-modality models, the alignment in regions AG and MT is extremely high, and this alignment is only partially explained by video features (Fig 4). This implies that significant unexplained alignment remains after the removal of video features. 
Conversely, the removal of speech features does not lead to a drop in brain alignment, indicating that there is additional information beyond speech features that is processed in these regions.\", \"We agree with the reviewer that the EVC region is well-predicted by unimodal video models, showing similar levels of predictivity with multimodal models. A recent study by [Oota et al. 2024] explored the type of information in language models that predicts brain activity and found that low-level features in these models drive predictivity, such as speech-based language models predicting the visual cortex or text models predicting the auditory cortex.\", \"When unimodal video model (VM) features are regressed out of multimodal models, there is no performance drop in EVC but a drop in regions like PPA and LOC, suggesting that multimodal models do not solely rely on corresponding unimodal features but also contain unexplained variance in EVC. Additionally, removing low-level features like motion energy may impact EVC performance. Interestingly, regressing out unimodal VM features does not affect speech-related information in multimodal models, as speech models also exhibit brain predictivity in EVC.\", \"**Q7. Why did the authors choose to use parametric statistics? To my knowledge non-parametric statistics are more common in NeuroAI to estimate a baseline performance rather than assuming one.**\", \"Thank you for this question. As explained in the Statistical Significance sub-section (lines 298-309 above Section 6), we describe the types of statistical testing employed. We employed a combination of both parametric and non-parametric methods to ensure robust and reliable results.\", \"We use non-parametric tests such as permutation testing and the Wilcoxon signed-rank test. Further, we also applied the Benjamini-Hochberg False Discovery Rate (FDR) correction for multiple comparisons.
While FDR correction is often associated with parametric statistics, it is appropriate in our context because fMRI data is considered to have positive dependence, as established in previous research.\", \"**Q8. Are the brains on the bottom in Figure 5 the medial view? It is difficult to see why there are four brains in each box.**\", \"Thank you for this valuable feedback. We appreciate the opportunity to clarify the points regarding Figure 5: Hemispheres/Views: The top row represents lateral views, while the bottom row shows medial views of the brain.\", \"We have now updated the figure caption in the revised draft to explicitly specify the views for clarity.\", \"We have updated the brainmaps with one colormap (Reds) and colorbar to clearly indicate the percentage of decrease, with a range from 0% to 80%.\", \"The colorbar shows the percentage decrease in brain alignment, where darker shade red voxels indicate a higher percentage decrease and white voxels indicate areas where unimodal video features do not contribute any shared information within the multi-modal context (i.e., no percentage drop).\", \"We observe that the removal of unimodal video features from the cross-modal model (IB Concat) results in a significant performance drop of 40-50% in the visual regions. Additionally, a higher percentage drop is observed in the language regions (PTL and MFG) for the TVLT Joint model.\", \"Overall these edits ensure that the brainmaps now provide a more intuitive and clear visualization of the results.\"]}", "{\"comment\": \"Thank you for addressing these additional concerns! I have updated my score to reflect the incorporation of these changes.\"}", "{\"comment\": \"Thank you for these responses and updates. 
I think the revisions help to clarify the paper, particularly the figures, and I appreciate the additional variance partitioning analysis the authors conducted.\n\nMy primary concern, however, still holds: the comparison of the performance of a relatively small number of models that differ along many factors. This limitation makes it difficult to attribute any model differences to multimodality or different cross-modal training schemes, as the model architecture and training sets vary from model to model. \n\nI appreciate the approach of treating individual models as data points, and that any one such model factor may not make a difference across a large set of models. However, the difference between any pair of models in a set can be large and due to a variety of factors we don\u2019t fully understand. I think this is an important limitation to acknowledge.\n\nThat said, I think this is an interesting paper, and in particular the use of and analysis of different components of multimodal models is an important advance in the field. I have adjusted my score accordingly. \n\nFinally, I want to clarify my comment on the fMRI dataset: I agree the dataset is not small, and I did not say this in my review. In my review I referred to it as a small-N dataset (meaning few subjects, many samples). I think this is preferable to a large-N, low-sample dataset, however, it often lends itself better to individual analysis and plots. Thank you for including these.\"}", "{\"comment\": \"Dear Reviewer cWB1,\\n\\nWe appreciate the reviewer\\u2019s positive feedback and are confident that it has contributed to enhancing the quality of our paper.\\n\\nWe acknowledge the point raised regarding the differences between the pairs of models in terms of variability in architectures. As suggested by the reviewer, we will include this point in the limitations section.
Additionally, we suggest that future work could focus on more controlled comparisons to better isolate the effects of these factors, as model-controlled experiments remain a longstanding question for the linguistics community.\\n\\nRegards,\\nThe Authors\"}
Otherwise, we would appreciate it if you could support the paper by increasing the score.\\n\\nregards,\\n\\nAuthors\"}", "{\"metareview\": \"The paper considers the problem of finding alignments between active regions of the brain from fMRI signals when a person watches multimodal content (audiovisual movies) and multimodal features extracted from state-of-the-art deep learning models. Correlation results using the Movie-10 dataset present many interesting conclusions, especially regarding better alignment of multimodal models against unimodal ones, including better alignment of video models than speech/audio.\\n\\nThe paper received favorable reviews, with one accept and two borderline accepts. The reviewers appreciated the novelty of finding correlations between fMRI data and multimodal/unimodal features and the extensive experiments.\", \"additional_comments_on_reviewer_discussion\": \"There were three major points on which the reviewers raised concerns.\\n1. On the small number of multi-modal models used in the study, each of these models differing in their capabilities over diverse factors, thus questioning the reliability/significance of the correlations derived (cWB1, 8Uj4), \\n2. Inconsistencies / lack of clarity in some of the provided results (gD8j, cWB1)\\n3. Inconsistencies with the conclusions derived in prior methods that also explored multi-modal models for brain alignment (8Uj4).\\n\\nFor 1., authors argued that in the setting that is explored in this paper, the provided study is by far the largest cohort of models being explored, albeit relatively smaller. Authors have acknowledged this point in their limitations.\\n\\nFor 2., authors provided additional details clarifying the issues pointed out, including providing new results when scrambling video inputs as the reviewer suggested, and clarifying the differences to prior works.
\\n\\nFor 3., authors clarify that they are extending the study in previous works examining brain region activations for multi-modal stimuli. They also agree to consider tightly-controlled experiments that are needed to derive similarities to previous studies. \\n\\nOverall, the paper is well-written and makes a valuable contribution to the field. While there are questions that are difficult to address within the scope of the current work, AC thinks the conclusions derived in the paper are sufficiently novel and thus recommends acceptance.\"}", "{\"comment\": [\"**Q13. Are the whole brain results averaged across all cortical voxels? Or only those with above-threshold cross-subject prediction? Or some other criteria?**\", \"Thank you for this question.\", \"We considered the whole-brain voxels based on the following process for obtaining normalized brain alignment:\", \"Voxel Selection: We initially selected voxels with a cross-subject predictivity > 0.05 Pearson correlation, in line with previous works (*[Popham et al., 2021], [La Tour et al., 2022], [La Tour et al., 2023]*).\", \"Normalization: Each voxel prediction was divided by its corresponding cross-subject predictivity.\", \"Averaging: The normalized predictions were then averaged across all selected voxels to compute the whole-brain results.\"]}", "{\"comment\": \"Dear Reviewer cWB1,\\n\\nWe appreciate the reviewer\\u2019s positive feedback and are confident that it has contributed to enhancing the quality of our paper.\\n\\nBased on the reviewer's suggestion, we have addressed this limitation in the revised draft.
The Limitations section now highlights the importance of conducting tightly controlled model comparisons to more effectively evaluate the impact of architecture and training data.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Common Responses to Reviewers cWB1 and 8Uj4:\", \"comment\": \"*We are grateful to all reviewers for their strong positive feedback, time and their constructive suggestions, which will further strengthen the impact of our work.*\\n\\n**CQ1: Challenges in comparing relatively small number of models (Reviewers: cWB1, 8Uj4)**\\n\\nThank you for this interesting question. We would like to provide a clear discussion as follows.\\n\\n* We appreciate the concern raised by the reviewer about performance comparison across a small cohort of models. \\n* It is to be noted that there exist a relatively large number of vision-language or vision-alone or language-alone or speech-alone models as compared to video-audio models that is the primary focus of the current investigation. Therefore the current effort needs to take into consideration the sparsity of video-audio model availability. \\n* Despite this limitation, the current paper reports comparison across three video-only, two speech-only, one jointly-pretrained video-audio, and one cross-modal multi-modal model \\u2013 to the best of our knowledge, this is by far the largest cohort of comparative analysis of multi-modal models and their alignment with brain representations resulting from multi-modal stimuli. \\n* Now, let us discuss the state-of-the-art practices in brain alignment experiments in CQ2.\\n\\nWe have added this discussion in **Appendix L** of the revised paper. \\n\\n**CQ2. Impact of diverse model architectures on performance comparison (Reviewers: cWB1, 8Uj4)**\\n\\nSeveral prior brain encoding studies in the literature compared a variety of language/speech models (differing in their size, architecture, training dataset, etc.) and their brain alignment.\\n\\n* *[Schrimpf et al. 
2021]* investigated 43 language models ranging from distributed word embeddings models like Word2Vec, GloVe, FastText, to Sequence models such as RNN, LSTM, Contextualized models like ELMo, Transformer models like BERT, GPT-2 and Transformer-XL with its variations such as base, small, and larger models. Although all these models have different architectures, training datasets, *[Schrimpf et al. 2021]* considered each model as a subject and computed normalized brain predictivity, i.e., what percentage the model explains the variance given a ceiling value for each voxel.\\n* Similarly, *[Toneva et al. 2019]* used four different language models such as ELMo, BERT, USE and Transformer-XL and compared the explained variance of each model while doing brain alignment. \\n* Further, *[Aw et al. 2023]* use four longer-context language models such as BART, Big-Bird, Longformer and Long-T5 and verify the deeper understanding of language models and brain alignment by the amount of variance explained by each model.\\n* Similarly, *[Antonello et al. 2022]* used 101 language models including both pretrained and task-based language models and compared the amount of explained variance in the brain by extracting the semantic representations and whether these representations are closer to brain-level semantics. \\n* Recently, *[Oota et al. 2023] [Oota et al. 
2024]* used four text-based language models such as BERT, GPT-2, FLAN-T5 and BART, speech-based language models such as Wav2Vec2.0 and Whisper and verified the amount of explained variance in the brain at different language regions.\\n\\nIt is important to observe that all the above studies utilize a number of language models that are different in training architecture and training datasets, however the primary goal of all these studies is to investigate how close the semantic representations captured by each model aligns with brain-relevant semantics.\\n\\nThe extensive precedent in the literature, from studies comparing 43 models *[Schrimpf et al. 2021]* to those examining 101 models *[Antonello et al. 2022]*, demonstrates that this approach is both valid and valuable for understanding the relationship between artificial and biological language processing.\\n\\nWe have added this discussion in **Appendix M** of the revised paper.\\n\\n*[Schrimpf et al. 2021], The neural architecture of language: Integrative modeling converges on predictive processing. PNAS 2021*\\n\\n*[Toneva et al. 2019], Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). NeurIPS 2019*\\n\\n*[Antonello et al. 2022], Low-dimensional structure in the space of language representations is reflected in brain responses, NeurIPS 2022*\\n\\n*[Aw et al. 2023], Training language models to summarize narratives improves brain alignment, ICLR 2023*\\n\\n*[Oota et al. 2024], Speech language models lack important brain relevant semantics, ACL 2024*\"}", "{\"comment\": \"**Q4. Overall, the language and visual region responses look largely the same in most analyses. There are some quantitative/statistical differences, but the pattern is extremely similar. 
The authors should address this.**\n\n* It is true that the overall pattern of results with respect to how multimodal versus unimodal models behave across the language and visual regions is similar. \n* In general, multimodal models seem to have better predictivity in the language regions (see Fig. 2). \n* Similarly, unimodal models have higher predictivity in the early sensory regions (visual and auditory). Such patterns of results are to be expected. \n* However, critical differences are also observed. While multimodal models perform with around 50% alignment, they seem to perform less (around 40% alignment) in the early visual region (EVC) but seem to improve in higher visual areas (SV (renamed as PPA), MT). \n* These patterns are also seen in the auditory regions (AC versus PTL). Thus, there seem to be both patterns of similarity as well as critical dissimilarities observed in the comparative analyses. \n\n**Q5. All regions of interest are defined anatomically, but there is a fair amount of subject-wise variability in the anatomy of high-level functional regions, such as language.**\n\nThank you for this interesting question. The reviewer is concerned with subject-wise variability in the anatomy of high-level functional regions. To address this limitation, we follow several important steps:\n\n**Dataset Size:** The dataset we use, Movie10, is one of the largest in terms of samples per participant (~12,950 samples), which is the main reason for its frequent use. In comparison, naturalistic fMRI datasets used in linguistic brain encoding studies typically have around 9,000 samples (e.g.
Moth-Radio-Hour) with six subjects, which is still fewer than the Movie10 dataset.\\n\\n**High-Spatial Resolution:** We use fMRI datasets that provide high-spatial resolution and more detailed, individualized maps of functional regions, unlike EEG datasets where source estimation is more challenging.\\n\\n**Cross-Subject Predictivity:** Our analyses focus on cross-subject predictivity, which provides shared explained variance among participants. The number of samples per participant is crucial because our predictive models are trained independently for each participant. More samples per participant help us learn better predictive models.\\n\\n**Aggregated Results:** We present results at the whole brain, language, and visual ROIs levels by aggregating normalized brain predictivity results. This approach helps us account for individual differences more effectively.\\n\\n* Overall, the normalized brain alignment computed per participant helps in assessing how closely our model predictions explain variance through brain predictivity. Our results clearly show that this dataset is sufficient to learn a good predictive model, as we can predict up to 50% of the explainable variance for held-out brain recordings when averaged across participants. \\n* Despite all these, we do agree with the reviewer that the inherent uncertainties in individual functional localization need to be acknowledged as a limitation in all such brain alignment studies.\\n\\n**Q6. Acronyms and plots were difficult to follow. Would help to spell out throughout vision versus lang regions**\\n\\nThank you for this valuable suggestion. \\n* We have updated the plots to clearly specify the Regions of Interest (ROIs) as follows: Language: ROI, Visual: ROI, and Auditory: ROI. We hope this distinction helps clarify the figures.\\n* Additionally, we have revised Figures 4, 9, and 10 to explicitly indicate that the \\\"\\u2013\\\" symbol represents residuals. 
These updates aim to improve the readability and interpretability of the plots.\"}", "{\"comment\": [\"*We thank the reviewer for their valuable comments and suggestions which are crucial for further strengthening our manuscript.*\", \"**Q1. Challenges in comparing relatively small number of models. Impact of diverse model architectures on performance comparison**\", \"Thank you for this interesting question.\", \"We have provided a detailed discussion in our responses to **CQ1 and CQ2**. Kindly refer to those responses for more comprehensive information.\", \"**Q2. To my mind, there are two ways to mitigate this concern: 1) controlled model training and evaluation so that only one factor varies at a time, or 2) testing many different models of a given class such that even across significant model variations there is a robust effect of modality regardless of particular model factors.**\", \"Thank you for this valuable suggestion. We also acknowledge that testing a larger number of models within a given class can help determine if the effect of modality is robust across various model configurations.\", \"In our study, we follow the second alternative focusing on models that belong to the same class, particularly those based on the Vision Transformer (ViT) architecture and the Audio Spectrogram Transformer (AST).\", \"**Unimodal Video Models:**\", \"VideoMAE, ViViT, and ViT-H are all built upon the ViT architecture. 
These models share the same foundational structure, with variations primarily in training strategies or specific implementation details.\", \"VideoMAE: Employs masked autoencoders for video representation learning based on ViT.\", \"ViViT: Extends ViT to video by processing spatio-temporal tokens.\", \"ViT-H: A larger version of ViT with more parameters but the same architectural backbone.\", \"**Multimodal Models:**\", \"ImageBind and TVLT are also based on the ViT architecture for the vision component and utilize AST for the audio component.\", \"ImageBind: Binds multiple modalities by projecting them into a shared embedding space using ViT and AST encoders.\", \"TVLT: Integrates visual and auditory information through ViT and AST, focusing on temporal alignment.\", \"Thus, we believe that our current approach addresses your concern by utilizing multiple models within the same class, thereby reducing the impact of confounding factors related to architecture and training differences. By focusing on models based on ViT and AST, we provide a more controlled comparison that highlights the effect of input modality on brain alignment.\", \"**Q3. I think that this is a serious concern because not all prior work has found that multi-modal models are more brain aligned.**\", \"Thank you for pointing this out and for giving us the opportunity to clarify our findings in the context of existing research.\", \"**Multimodal Models and Brain Alignment:**\", \"Recent brain encoding research has shown that multimodal models align more closely with brain activity than unimodal models, even when subjects are exposed to single-modality stimuli.\", \"*[Tang et al. 2023]* investigated encoding models trained using representations from a multimodal transformer (BridgeTower) and a unimodal transformer (RoBERTa). They found that the multimodal transformer learned more aligned representations of concepts in both language and vision than unimodal ones.\", \"*[Wang et al. 
2023]* reported that CLIP, a multimodal model trained on image-text pairs, explains greater unique variance in higher-level visual areas compared to models trained only with image/label pairs (e.g., ResNet) or text-only models.\", \"*[Oota et al. 2022]* examined four multimodal models\\u2014LXMERT, VisualBERT, CLIP, and ViLBERT\\u2014and found that these multimodal models showed improvements over the unimodal models in explaining brain activity.\", \"*[Nakagi et al. 2024]* examined multimodal vision-semantic LLMs (GIT, BridgeTower and LLaVA-v1.5] that predict brain activity and find that these models uniquely capture representations in the association cortex better than unimodal models [BERT, DEiT, ResNet] combined.\", \"Furthermore, **reviewer gD8j** suggested the following works: *Subramaniam et al. (2024)* and *Dong et al. (2023)*. These studies also demonstrate that multimodal models outperform both unimodal models and language-vision models with linearly integrated features, such as the concatenation of vision and language features.\", \"Our study extends this line of inquiry by examining brain encoding during multimodal stimuli (simultaneous audio and visual input), which more closely reflects real-world sensory experiences.\", \"*[Tang et al. 2023], Brain encoding models based on multimodal transformers can transfer across language and vision, NeurIPS-2023*\", \"*[Nakagi et al. 2024], Unveiling Multi-level and Multi-modal Semantic Representations in the Human Brain using Large Language Models, EMNLP-2024*\", \"*[Wang et al. 2023], Incorporating natural language into vision models improves prediction and understanding of higher visual cortex, Nature Machine Intelligence 2023*\", \"*[Oota et al. 2022], Visio-Linguistic Brain Encoding, COLING-2022*\"]}", "{\"comment\": \"We appreciate the reviewer's positive feedback and are confident that it has enhanced the paper's quality.\"}", "{\"comment\": \"**Q7. Figure 5 is difficult to read and interpret. 
In terms of clarification, the authors should specify hemispheres/views. The results look extremely noisy. How should that be interpreted?**\\n\\nThank you for this valuable feedback. We appreciate the opportunity to clarify the points regarding Figure 5:\\n**Hemispheres/Views:** The top row represents lateral views, while the bottom row shows medial views of the brain.\\n* We have now updated the figure caption in the revised draft to explicitly specify the views for clarity.\\n* We have updated the brainmaps with one colormap (Reds) and colorbar to clearly indicate the percentage of decrease, with a range from 0% to 80%.\\n* The colorbar shows the percentage decrease in brain alignment, where darker red voxels indicate a higher percentage decrease and white voxels indicate areas where unimodal video features do not contribute any shared information within the multi-modal context (i.e., no percentage drop). \\n* We observe that the removal of unimodal video features from the cross-modal model (IB Concat) results in a significant performance drop of 40-50% in the visual regions. Additionally, a higher percentage drop is observed in the language regions (PTL and MFG) for the TVLT Joint model.\\n\\n* Overall, these edits ensure that the brainmaps now provide a more intuitive and clear visualization of the results.\\n\\n**Q8. I had questions about two of the conclusions listed in the discussion.**\\n\\n* Conclusion 2 is based on our observation made in lines 460-468, reproduced here:\\n\\u201cFor cross-modality models, the alignment in regions AG and MT is extremely high, and this alignment is only partially explained by video features. This implies that significant unexplained alignment remains after the removal of video features.
\\n* Conversely, the removal of speech features does not lead to a drop in brain alignment, indicating that there is additional information beyond speech features that is processed in these regions.\\u201d\\n\\nPlease see the response to Q5 above for patterns of dissimilarity observed for the language and visual regions.\\n\\n**Q9. Typos: - Line 224: \\u201caudo\\u201d --> \\u201caudio\\u201d - Line 276: \\u201cwmploy\\u201d --> \\u201cemploy\\u201d**\\n\\nThank you for pointing this out. We have corrected the identified typos in the revised draft.\\n\\n**Q10. Was cross-subject prediction generated by using all of the predictor subjects' voxels, to predict the target subject\\u2019s voxel-wise responses?**\\n\\nThank you for this question. \\n* Yes, similar to previous studies *[Schrimpf et al. 2021] [Oota et al. 2024] [Alkhamissi et al. 2024]*, cross-subject predictions were generated by using all possible combinations of s participants (s\\u2208[2,6]), where voxel-wise responses from s\\u22121 predictor participants were used to predict the target participant's voxel-wise responses.\\n\\n*[Schrimpf et al. 2021] The neural architecture of language: Integrative modeling converges on predictive processing. PNAS, 2021*\\n\\n*[Oota et al. 2024] Speech language models lack important brain relevant semantics, ACL 2024*\\n\\n*[Alkhamissi et al. 2024] Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network, Arxiv 2024*\\n\\n**Q11. What are the six different modalities in the image-bind modality?
I thought it was an audio-visual model (which I would consider two modalities) -**\\n\\nThank you for this question.\", \"the_six_modalities_in_the_imagebind_model_are\": [\"Image/Video: Visual modalities, including static images and videos.\", \"Text: Natural language data.\", \"Audio: Sound and acoustic signals.\", \"Inertial Measurement Units (IMU): Motion and activity data captured via sensors.\", \"Depth: Spatial depth information from sources like LiDAR or stereo cameras.\", \"Thermal: Heat or temperature patterns from infrared sensors.\", \"During the training of the ImageBind model: Image and text encoders are kept frozen. Audio, depth, thermal, and IMU encoders are updated.\", \"This architecture results in a model with a total of 132 million parameters, enabling cross-modal embeddings across these six diverse modalities.\", \"**Q12. The layer-wise predictions for models are shown in the appendix, but do the main text figures use all layers or best layer? If the latter, how is this layer selected.**\", \"Thank you for this question.\", \"The figures in the main text present the average normalized brain alignment across subjects, layers, and voxels. This approach ensures that the results are representative of overall model performance, without bias toward any specific layer. Note that we are only averaging across voxels which have a statistically significant brain alignment.\"]}", "{\"comment\": \"I thank the authors for their thorough response. I am more convinced that the work covers novel ground, based on their explanations of prior work. I have upgraded my score accordingly.\"}" ] }
0ctvBgKFgc
ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids
[ "Hannes Stark", "Bowen Jing", "Tomas Geffner", "Jason Yim", "Tommi Jaakkola", "Arash Vahdat", "Karsten Kreis" ]
We develop ProtComposer to generate protein structures conditioned on spatial protein layouts that are specified via a set of 3D ellipsoids capturing substructure shapes and semantics. At inference time, we condition on ellipsoids that are hand-constructed, extracted from existing proteins, or from a statistical model, with each option unlocking new capabilities. Hand-specifying ellipsoids enables users to control the location, size, orientation, secondary structure, and approximate shape of protein substructures. Conditioning on ellipsoids of existing proteins enables redesigning their substructure's connectivity or editing substructure properties. By conditioning on novel and diverse ellipsoid layouts from a simple statistical model, we improve protein generation with expanded Pareto frontiers between designability, novelty, and diversity. Further, this enables sampling designable proteins with a helix-fraction that matches PDB proteins, unlike existing generative models that commonly oversample conceptually simple helix bundles. Code is available at https://github.com/NVlabs/protcomposer.
[ "protein design", "diffusion model", "controllable generation", "drug discovery", "proteins", "biology" ]
Accept (Oral)
https://openreview.net/pdf?id=0ctvBgKFgc
https://openreview.net/forum?id=0ctvBgKFgc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ycvW6gOZmc", "x9VGug8IoD", "vxyoCfha5p", "sMAAZt7XQ0", "q9SPqgwx6i", "puwC3ERpM1", "lo4cTLkK3N", "lBlUl3xbrw", "l5TsTKA2Gq", "cUa7Zu8Sav", "aFl2mFLnKW", "ZqKxhZbSMA", "Y7zjF9X49U", "UwUyneDVe4", "Nc6O8jGSvk", "Ml5GBCljMH", "IiFGJthtsQ", "CHOXKi41RG", "AalLxI7L1J", "A1jrejawF4", "13o6oyKulD" ], "note_type": [ "decision", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737523663037, 1732544748681, 1734722206750, 1731875192692, 1730647900158, 1731874830483, 1732641190868, 1732309930919, 1732246596550, 1731974385123, 1730664029033, 1732564765627, 1731875346570, 1731875091719, 1731874890426, 1731875289593, 1733104493575, 1730689244542, 1732544574141, 1730508844901, 1732544839906 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Area_Chair_UYkf" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_D7Wa" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_7wor" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_679L" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_eiq5" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_679L" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_eiq5" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4802/Reviewer_D7Wa" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_7wor" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ], [ "ICLR.cc/2025/Conference/Submission4802/Reviewer_eiq5" ], [ "ICLR.cc/2025/Conference/Submission4802/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Response by Authors 2\", \"comment\": \"With the discussion period ending tomorrow, we thank you for the work together toward a better paper!\\n\\nSince we are excited that you, as well as all other reviewers, consider our work a good submission, please let us know if there is anything we can do to further improve the paper and make you consider raising your paper rating from \\u201c8: accept, good paper\\u201d to \\u201c10: strong accept, should be highlighted at the conference\\u201d, or if you think such a score increase is already warranted.\"}", "{\"metareview\": \"The paper introduces ProtComposer, a framework for generating protein structures conditioned on 3D ellipsoids that encode spatial layout information. The reviewers commend the paper for being well-written and addressing a relevant, well-defined problem with practical applications. The reviewers also found the experimental evaluation to be strong and the results to be impressive. Concerns were raised regarding the performance on natural proteins, and the introduction of a new compositionality metric also raised concerns about its validity. However, most of the reviewers' concerns about the paper were addressed with the rebuttal. 
The reviewers unanimously agree that this is a very strong paper, and therefore, I recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The discussion phase mainly consisted of the rebuttal from the authors and the reviewers acknowledging the comments and changes from the authors:\\n\\n* Reviewer 7wor requested co-designability results, and the authors added Table 3.\\n* Reviewers sought clarifications on the number of ellipsoids, and the authors added Figure 14 to illustrate segmentation radius effects and a histogram of ellipsoid counts in Figure 11.\\n* Reviewers D7Wa and 679L requested ablation studies on self-conditioning approaches, and the authors added Section C.2 and Figure 9.\\n* Reviewer eiq5 expressed concerns about the introduced metric's validity, and the authors included comparisons to other models in Table 4 and added an explanation to Appendix B.\\n\\nOverall, reviewers appreciated the thoroughness of the responses.\"}", "{\"title\": \"Response by Authors Part 2/2\", \"comment\": \"**3: Results in Table 2, the impact of guidance on the tradeoff between designability vs. helicity+diversity+novelty, and comparisons to RFDiffusion, Chroma, and Multiflow.**\\n\\nWe removed our mention of \\\"natural proteins,\\\" which may have been confusing and could leave the impression that there is a notion of comparing models on synthetic vs. natural proteins, which is not the case. All models (ProtComposer, RFDiffusion, Chroma, Multiflow) can sample a distribution of protein structures p(X) without any further inputs. In ProtComposer, this distribution is decomposed as p(X)=P(X|E)p(E) into a distribution of ellipsoids p(E) and a distribution of structures conditioned on ellipsoids P(X|E). \\n\\nIn Figure 4 we generate proteins from scratch with all methods. Table 2 shows numbers for ProtComposer when generating from a special type of ellipsoids - ellipsoids extracted from PDB proteins. 
This would not be available when attempting to generate a protein from scratch.\\n\\n**Regarding the guidance and the tradeoff:** We introduce the guidance mechanism as a control knob for trading off designability vs. helicity, diversity, and novelty. In other models, such as RFDiffusion, this tradeoff can be controlled via the sampling temperature. We compare the strength of the tradeoffs that can be achieved for all models via their tradeoff curves in Figure 4. It shows ProtComposer's improved tradeoff between designability and the other metrics, including helicity if lower helicity is the goal (note the minus sign in \\\"1 - helicity\\\" on the y-axis).\\n\\n**4: Self-conditioning via interpolation vs. other variants**\\n\\nWe added Section C.2 with Figure 9 to the revised manuscript, which shows the designability vs. ellipsoid adherence frontier under our 5 different possibilities of performing self-conditioning, showing that the interpolation variant performs best under all settings of guidance strength.\\n\\n---\\nWe hope the discussions and additions address your concerns! When appropriate, we also reference our additions and clarifications in the main text. 
Please let us know if there are any further opportunities to improve the score.\\n\\n[1] Adding Conditional Control to Text-to-Image Diffusion Models\\\\\\n[2] GLIGEN: Open-Set Grounded Text-to-Image Generation\\\\\\n[3] Compositional Text-to-Image Generation with Dense Blob Representations\\\\\\n[4] Self-Attention with Relative Position Representations\"}", "{\"summary\": \"This paper presents ProtComposer, which is a fine-tuned model from MultiFlow to support conditional generation based on 3D ellipsoids, showing success at controllable design and achieving SOTA on the Pareto frontier of designability and diversity.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper is well written with clean visualizations demonstrating methods and results. The concept of utilizing ellipsoids as conditions for protein generation is interesting and novel, providing a bridge between protein-level conditioning and atom-level conditioning. In addition, the authors propose an effective Invariant Cross Attention module for integrating ellipsoid conditioning and demonstrate success at achieving SOTA performance on the Pareto frontier of designability and diversity.\", \"weaknesses\": \"I have no major concerns about this paper. However, it would be helpful if the authors could elaborate on the applicability of the ellipsoid-based conditioning approach on practical protein design tasks. How would it help with or facilitate the protein design process?\", \"questions\": [\"Line 355: How is the length between ellipsoids determined/sampled? 
Also, consider an ellipsoid with beta strand annotation, how is the length between each stranded segment determined (particularly if a strand ellipsoid is formed by segments that are distant in sequence but close in structure)?\", \"It would be helpful if the authors could provide ablation study results on the effect of self-conditioning, particularly the two self-conditioning schemes described in line 290.\", \"Figure 9: What is the linear fit and statistical significance in both cases?\", \"During training, is the ellipsoid conditioning information always provided, or only provided for a percentage of time?\", \"Line 367: Why is a structured residue considered as \\u201ccovered\\u201d if it is inside at least one ellipsoid instead of inside the ellipsoid it is assigned?\", \"Table 1: Could the authors provide some intuition on why over-guidance (\\\\lambda > 1) performs better than the conditional model itself (\\\\lambda = 1)?\", \"Table 2: Does \\u201cPDB proteins\\u201d correspond to the validation dataset or does it include the training dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Overall Response by Authors\", \"comment\": [\"# Overall Response by Authors\", \"We thank all reviewers for their constructive feedback and their time taken to review!\", \"Next to the individual responses, we updated the manuscript that can be downloaded above with **new figures and results** and writing improvements. Changes are highlighted in blue (the final version will not contain the coloring). The main additions include:\", \"Section C.2 with an ablation study and Figure 9 which shows the designability vs. ellipsoid adherence frontier under our 5 different possibilities of performing self-conditioning as suggested by reviewers **D7Wa** and **679L**. 
Our chosen \\\"interpolate\\\" option is best under all guidance strengths.\", \"Table 3 with results for sequence-structure co-design as suggested by reviewer **7wor**. As in the Multiflow paper, co-generation does not improve designability.\", \"Figure 14 to visualize the effect of different segmentation radius cutoffs on the resulting ellipsoids, showing that 5A provides a good level of granularity.\", \"Figure 17 showing proteins generated for different $K$ (numbers of ellipsoids) from our statistical model.\", \"Table 4 with compositionality results for Multiflow, Chroma, and RFDiffusion as suggested by reviewer **eiq5**.\", \"All suggested clarity improvements such as the PosEmbed explanation (**679L**), pseudo-frames for ellipsoids (**679L**, **7wor**), user guidelines (**eiq5**), number of ellipsoids/residues per ellipsoid (**679L**), ...\"]}", "{\"comment\": \"Thank you for the thorough response to the questions. Figure 14 and your response on the residue token explain the design choice well. Regarding the score, I second the response from Reviewer eiq5. 
I appreciate the theoretical grounding of this work, the effort to demonstrate use cases, and how this conditioning can increase the diversity of designs, but I will respectfully keep my score.\"}", "{\"title\": \"Response by Authors 2\", \"comment\": \"It is great to hear that all your concerns are resolved - thank you for your help in improving the paper and for the thoughtful points!\\n\\nSince we are excited that you, as well as the other reviewers, consider our work excellent and a strong submission, please let us know if there is anything we can do to further improve the paper and make you consider raising your paper rating from **\\u201c8: accept, good paper\\u201d** to **\\u201c10: strong accept, should be highlighted at the conference\\u201d**, or if you think such a score increase is already warranted.\"}", "{\"comment\": \"I would like to thank the authors for their excellent work and thorough response. My concerns have been resolved and I have raised my score accordingly.\"}", "{\"title\": \"Consideration of Updates\", \"comment\": \"Thank you for the excellent work and updates.\\n\\nMy main concern was the justification of the metric's validity, given that it is introduced in this paper. I am wary of new, custom metrics which are used to show how well the new technique/model performs. However, the comparison with the Diversity Index as well as (to a lesser degree) the added table alleviate my concerns about the metric's origins. I am satisfied with this. \\n\\nI had concerns over how to choose K in practice. The results provided in the appendix help give a user a good starting point for experimentation, so I am also satisfied with the response to this concern. \\n\\nOverall, I think that this is a strong submission. 
I am happy to give it a high score!\"}", "{\"summary\": \"This work extends Multiflow to accept spatial conditioning of secondary structures via 3D ellipsoids, aiming to improve control in protein generation and reduce the overrepresentation of alpha-helices in current generative models. Building on Multiflow\\u2019s architecture, the authors address two main challenges:\\n\\n1. Integrating and updating ellipsoid conditioning with structure embeddings with minimal modifications: they introduced an *Invariant Cross Attention* module to update residue embeddings while preserving local SE3 invariance.\\n2. Implementing an effective conditioning approach for flow-matching models: they used classifier-free guidance to interpolate flow vectors across translation, rotation, and amino acid spaces, and employ self-conditioning to refine predicted structures\\n \\nExtensive experiments show that the proposed model can faithfully follow spatial conditioning, resulting in greater diversity, novelty, and improved secondary structure composition. This improves Multiflow by generating proteins with secondary structures more similar to natural proteins. \\nOverall, this work presents a straightforward approach to control protein generation, enhancing Multiflow's diversity, novelty, and secondary structure accuracy.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"**[Clarity & Quality]**\", \"The manuscript is well-written, with a thorough introduction and background information on protein structure generation, spatial conditioning, and flow matching for data generation. 
Overall, it provides a smooth reading experience.\", \"The paper is of good quality, presenting clear mathematical foundations grounded in current techniques for protein modeling, diffusion-based generation, and guided sampling.\", \"The problem is well-defined, and the authors designed several experiments to evaluate model performance in 1) following conditioning, 2) improving general performance, and 3) demonstrating practical use in flexible conditioning, with both quantitative and qualitative comparisons.\", \"**[Significance]**\", \"The spatial conditioning approach using ellipsoids is intuitive for practical applications and has potential implications for the utility of protein generation models.\", \"The proposed methods appear generalizable to various spatial conditioning scenarios. (*However, this paper focuses solely on secondary-structure conditioning.*)\", \"**[Originality]**\", \"The authors introduce a novel conditioning modality using spatial ellipsoids for protein generation, along with a new layer, \\\"invariant cross-attention,\\\" to integrate this information.\"], \"weaknesses\": \"**[Clarity]**\\n- Some design choices and model details lack clear explanations or in-depth examination (see Q1, 2, and 4).\\n\\n**[Soundness]**\\n- Performance on Natural Proteins: While the authors demonstrate high designability at fixed helicity levels on synthetic data, it isn't as clear if these benefits hold for natural proteins (see Q3).\\n\\n**[Significance]**\\n- Scope: The current methods are examined only on Multiflow and for secondary structure guidance. Their practical impact on other protein generation models and types of spatial conditioning (e.g., domains, hydrophobic cores) is not extensively explored.\", \"questions\": \"1. **Ellipsoid Representation**\\n\\n a. Choosing the Number of Ellipsoids (k):\\n - *Training:* Is k determined by the structure of the training protein? 
If so, what is the distribution of k in natural proteins?\\n - *Evaluation based on the statistical model:* The authors appear to have used a fixed k=5 in the experiments. How was this number chosen? Have the authors tested other k values?\\n\\n b. Number of Residues per Ellipsoid:\\n - The current representation specifies the number of residues in each ellipsoid, but the authors show that this number directly depends on ellipsoid volume. Could specifying the number of residues be redundant, and might removing this constraint provide the model more flexibility in generation? Have the authors examined the impact of residue count on amino acid (AA) prediction?\\n\\n2. **Invariance Cross-Attention (ICA) Layer Design**\\n\\n a. The authors separately model the SE3 features (E_k) and scalar features (e_k) of ellipsoids using the proposed ICA and transformer to achieve SE3 invariance in the local frame. Has the team considered alternative approaches, such as modeling ellipsoids as \\u201cpseudo frames\\u201d with SE3 and scalar features and simply using IPA to update ellipsoid and residue features together?\\n\\n b. In Algorithm 1:\\n - Could the authors clarify the *PosEmbed* used in line 223? Does it include distance, angles, or local coordinates?\\n - Could they also explain why the query uses un-updated $s$, while the key and value use $a$, which incorporates current ellipsoid information?\\n\\n3. **Results in Table 2 (natural proteins):** The model, even with the strongest guidance, tends to overestimate helices in proteins. Additionally, the authors did not present designability results, which, based on Figure 16, may be compromised with strong guidance. Could the authors elaborate on the model's performance in addressing the \\u201coverrepresented helix problem,\\u201d the trade-offs with other metrics, and its overall comparison to models like *RFDiffusion*?\\n\\n4. 
In self-conditioning (line 290), they propose supplying interpolated conditions to both conditional and unconditional models, suggesting this improves \\u201cdesignability and ellipsoid adherence for all $\\\\lambda$ values.\\u201d However, no ablation studies were provided to verify this claim.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Consideration of Score\", \"comment\": \"I appreciate the enthusiasm of the authors with regards to the revision process. I think the paper is very interesting and a strong addition to Comp Bio literature. I reserve a score of 10 for papers I believe will become instrumental throughout the field and lead to the creation of a new standard set of practices in the field. I think further fine-grained control over the generation process would be needed for me to change the score from an 8 to a 10. Nonetheless, I think it is a very strong paper and represents excellent research.\"}", "{\"title\": \"Response by Authors\", \"comment\": \"Thank you for the review! To address your questions and concerns (our updated manuscript can be downloaded at the top of this page):\\n\\n---\\n\\n**Regarding introduced metric: \\\"other algorithms under this metric, or stronger domain justification\\\"**\\n\\nWe assume that you refer to our compositionality metric and are glad you find it intuitive! We added Table 4 to the revised manuscript, where we evaluate additional protein structure generative models' compositionality. 
We note that an ellipsoid-conditioned model could be built out of any protein structure generative model; the base model does not necessarily have to be Multiflow.\\n\\n| | ProtComposer ($\\\\lambda=2.0$) | Multiflow | Chroma | RFDiffusion |\\n|--------------------|----------------------------|-----------|--------|-------------|\\n| Compositionality | 3.2 | 1.9 | 2.5 | 3.3 |\\n\\nFurther, we added an explanation to Appendix B that our compositionality metric is inspired by the \\\"Diversity Index\\\" (https://en.wikipedia.org/wiki/Diversity_index), which is often used in the domains of ecology or demography.\\n\\n**Formatting and white space in the appendix**\\n\\nThank you for pointing it out! We fixed it in our revision.\\n\\n**User guidelines for choosing the number of ellipsoids $K$**\\n\\nThat is a nice suggestion! We added Figure 11 with a histogram of $K$ for PDB proteins and this guidance to Section B: all specifications of numbers of ellipsoids $K<10$ can be expected to yield successful generations. To see which $K$ are the most in-distribution, please see the histogram in Figure 11, where we visualize the frequency of different $K$.\\n\\n---\\n\\nWe hope the discussions and results address your concerns! Please let us know if there are any further opportunities to improve the score.\"}", "{\"title\": \"Response by Authors Part 1/2\", \"comment\": \"Thank you for the review! To address your questions and concerns (our updated manuscript can be downloaded at the top of this page):\\n\\n---\\n\\n**Limitation to secondary structure conditioning**\\n\\nWe acknowledge that conditioning on ellipsoids with functional annotations beyond secondary structure would be highly interesting. We considered the feasibility of this direction during the project using data from function annotation databases such as InterPro or using predicted annotations from models such as DeepFRI. 
However, the data quality is insufficient for training a model conditioned on ellipsoids with function annotations. For instance, in InterPro, many location-specific annotations are assigned to the whole protein instead of the functionally relevant site, and DeepFRI location predictions for different functions conflict with each other. Thus, we provide a general framework that allows for arbitrary annotation types but only implement it for secondary structure annotations - once data of sufficient quality with broader annotations is available, the more general model can be trained.\\n\\n**1.a.1: \\\"Is k determined by the structure of the training protein? If so, what is the distribution of k in natural proteins?\\\"**\\n\\nWe now clarify in A.1 that during training, the data determines the number of ellipsoids, and we added Figure 11 with a histogram of $K$.\\n\\n**1.a.2: $K=5$ for the synthetic ellipsoid experiments**\\n\\nWe added a clarification in A.4 that we chose a fixed $K$ for consistency across protein lengths and the specific value of $K=5$ since it is the most frequent number of ellipsoids in PDB proteins (see Figure 11).\\nWe added Figure 17 to the paper where we show proteins generated with different $K$.\\n\\n**1.b: \\\"The current representation specifies the number of residues in each ellipsoid, but the authors show that this number directly depends on ellipsoid volume. Could specifying the number of residues be redundant, and might removing this constraint provide the model more flexibility in generation?\\\"**\\n\\nWe added to the discussion in Appendix B that this is a reasonable alternative and that we chose to specify the number of residues per ellipsoid to give the user the option of controlling it (or having the number be determined via linear fit if preferred).\\n\\n**2.a: The authors separately model the SE3 features (E_k) and scalar features (e_k) of ellipsoids using the proposed ICA and transformer to achieve SE3 invariance in the local frame. 
Has the team considered alternative approaches, such as modeling ellipsoids as \\u201cpseudo frames\\u201d with SE3 and scalar features and simply using IPA to update ellipsoid and residue features together?**\\n\\nWe added this discussion to Appendix B: We indeed considered this. However, a canonical assignment of a frame to an ellipsoid is not possible. To see this, we consider the worst-case scenario of a round ellipsoid - clearly, no canonical assignment of a 3D orientation is possible. In the best-case scenario of an ellipsoid with three distinctly sized principal components, we could choose, e.g., the two largest to construct a frame from. However, their sign is arbitrary, leading to 4 options among which no canonical choice exists. Thus, we opted for our ICA without ellipsoid token updates, which is empirically sufficient for strong ellipsoid adherence and, together with our transformer layers, aligns with the mechanisms commonly employed in computer vision [1,2,3].\\n\\n**2.b.1: Clarifying PosEmbed in Alg. 1**\\n\\nWe added to A.1 in the revised manuscript that PosEmbed is a sinusoidal positional encoding of the relative positions (the vectors between ellipsoid means and residue positions). Each component of the 3D offset vector is encoded into 64 dimensions, and all 3 are concatenated.\\n\\n**2.b.2: On why the query uses un-updated s, while the key and value use a, which incorporates relative positional information**\\n\\nWe added to Appendix B that this approach to injecting relative positional encoding was our default choice since it is how relative positional encodings are used in language model transformers [4] or in geometric transformers such as the \\\"SE3-Transformer\\\".\"}", "{\"title\": \"Response by Authors\", \"comment\": \"Thank you for the review! 
To address your questions and concerns (our updated manuscript can be downloaded at the top of this page):\\n\\n---\\n\\n**Sequence-structure co-generation results**\\n\\nWe added Table 3, where we select statistical model parameters with high designability ($\\\\nu = 50, \\\\sigma=5$) and draw 400 structures together with sequences from ProtComposer. As in the Multiflow paper, the joint generation does not improve designability. Interestingly, the median self-consistency RMSDs (scRMSD) of the jointly generated sequences are worse, while their mean scRMSDs are better.\\n\\n1-seq Designability refers to generating 1 sequence per structure, while 8-seq Designability uses the best out of 8 sequences per structure. Self-consistency RMSD is abbreviated as scRMSD.\\n\\n| Approach | 1-seq Designability \\u2191 | Median scRMSD \\u2193 | Mean scRMSD \\u2193 | 8-seq Designability \\u2191 |\\n|------------------|-----------------------|-----------------|---------------|-----------------------|\\n| Joint Generation | 0.75 | 1.76 | 2.15 | -- |\\n| ProteinMPNN | 0.81 | 1.65 | 2.41 | 0.98 |\\n\\n**Ellipsoid segmentation cutoff**\\n\\nWe added Figure 14 to the paper, which visualizes the effect of different radius cutoffs on ellipsoids. We observe that a 5A cutoff offers reasonable 3D ellipsoid segmentations that are neither too coarse nor too fine-grained.\\n\\n**Communication between ellipsoid and residue tokens**\\n\\nWe added this discussion to Appendix B: In our transformer layers (Algorithm 2), the residue tokens update all ellipsoid tokens, and ellipsoid tokens update all residue tokens. In Invariant Cross Attention (Algorithm 1), we inject ellipsoid position and geometry information into the residue tokens without updating ellipsoid tokens.\\n\\nThe reason: ICA updates a residue token based on transforming the ellipsoid means and covariance matrices into the local coordinate frame of the residue (where residue frames are defined as in AlphaFold2). 
The same mechanism is not applicable to update ellipsoid tokens based on residue positions since a canonical assignment of a frame to an ellipsoid is not possible. To see this, we consider the worst-case scenario of a round ellipsoid - clearly, no canonical assignment of a 3D orientation is possible. In the best-case scenario of an ellipsoid with three distinctly sized principal components, we could choose, e.g., the two largest to construct a frame from. However, their sign is arbitrary, leading to 4 options among which no canonical choice exists. Thus, we opted for our ICA without ellipsoid token updates, which is empirically sufficient for strong ellipsoid adherence and, together with our transformer layers, aligns with the mechanisms commonly employed in computer vision [1,2,3].\\n\\n**\\\"Is it possible to set the order of ellipsoids?\\\"**\\n\\nA simple approach to allow for setting the order would be retraining a model with an additional ellipsoid feature that encodes the order in which each ellipsoid is \\\"hit\\\". \\n\\n**Further ablation results**\\n\\nSince you advised adding ablations as a last avenue to improve the paper further, we also added an investigation of different self-conditioning approaches. We added Section C.2 with Figure 9 to the revised manuscript, which shows the designability vs. ellipsoid adherence frontier under our 5 different possibilities of performing self-conditioning.\\n\\n---\\nWe hope the discussions and results address your concerns! Please let us know if there are any further opportunities to improve the score.\\n\\n[1] Adding Conditional Control to Text-to-Image Diffusion Models\\\\\\n[2] GLIGEN: Open-Set Grounded Text-to-Image Generation\\\\\\n[3] Compositional Text-to-Image Generation with Dense Blob Representations\"}", "{\"title\": \"Response by Authors\", \"comment\": \"Thank you for the review! 
To address your questions and concerns (our updated manuscript can be downloaded at the top of this page):\\n\\n---\\n\\n**On the applicability of the ellipsoid conditioning to practical protein design tasks**\\n\\nAn important question! We added the following to Appendix B:\\n- Example use case: We aim to scaffold a therapeutically relevant functional site. The protein requires a certain shape to fit into a delivery mechanism. With ProtComposer, we can specify the rough shape and size of the scaffold to still fit into the delivery mechanism.\\n- ProtComposer can redesign the connectivity of secondary structure elements: biologists aim to escape the existing space of protein topologies and discover new ones that can be used as scaffolds or for other design tasks. \\n- Example use case: We aim to design a binder for a target at a flat beta-sheet region. With ProtComposer, we can specify that a beta-sheet of the right size and shape should interface with the target's beta-sheet to increase the probability of success in generating a strong binder.\\n- We often know how much flexibility/rigidity we want in certain areas of the protein. With ProtComposer, we can place a rigid helix bundle, a beta-barrel, or more loosely connected substructures in those regions. \\n\\n**How is the length between ellipsoids determined/sampled?**\\n\\nWe add the following clarification to A.1:\\n\\nAt inference time, ProtComposer (like Multiflow, RFDiffusion, and Chroma) takes a protein length $L$ as input, which specifies the number of residues in the generated protein. In ProtComposer, each ellipsoid $\\\\mathbf{E}_k$ is associated with a number of residues $n_k$ that is supposed to end up in that ellipsoid. The sum of $n_k$ does not have to match the total length $L$. The $n_k$ are just an additional conditioning input which the model can adhere to but does not have to. 
Even if the sum of $n_k$ is larger than $L$, the model can still (and does) use some of the residues for strands and to connect the ellipsoids. When generating from synthetic ellipsoids, we set $L$ to be equal to the sum of $n_k$. When generating from data-extracted ellipsoids, we set $L$ to be the length of the original protein which the ellipsoids were extracted from, so $L$ > $\\sum n_k$.\\n\\n**\\\"Provide ablation study results on the effect of self-conditioning, particularly the two self-conditioning schemes described in line 290.\\\"**\\n\\nWe added Section C.2 with Figure 9 to the revised manuscript, which shows the designability vs. ellipsoid adherence frontier under our 5 different possibilities of performing self-conditioning, showing that the interpolation variant performs best under all settings of guidance strength.\\n\\n**Figure 9: What is the linear fit and statistical significance in both cases?**\\n\\nWe added to the figure's caption that the linear fit for alpha-helices is 0.97 and for beta-sheets 0.93, and that for both, the statistical significance is high, with a p-value so close to 0 that it lies below numerical precision.\\n\\n**\\\"During training, is the ellipsoid conditioning information always provided, or only provided for a percentage of time?\\\"**\\n\\nFor classifier-free guidance we combine a conditional and an unconditional model. During pretraining, to obtain the unconditional Multiflow, there is never any ellipsoid conditioning. When fine-tuning Multiflow to obtain the conditional model, ellipsoid conditioning is always provided.\\n\\n**Why is a structured residue considered as \\u201ccovered\\u201d if it is inside at least one ellipsoid instead of inside the ellipsoid it is assigned?**\\n\\nThe notion of a residue being assigned to an ellipsoid does not exist at inference time. The ellipsoids only have an assigned number of residues, no assignment of which residue will go into them. 
The model can freely choose which residue ends up in which ellipsoid.\\n\\n**Intuition on why \\\"over-guidance\\\" with $\\\\lambda > 1$ can yield improvements (Table 1)**\\n\\nA very interesting question! This is a long-standing and not fully understood phenomenon of diffusion/flow models. Figure 2 in the classifier-free guidance paper (https://arxiv.org/pdf/2207.12598) provides intuition by showing what happens for a Gaussian mixture - with \\\"over-guidance\\\" we make \\\"extra-sure\\\" that the samples are close to the conditioning distribution, and far from other possibilities. Most justification is empirical and, e.g., this blog post's Section 3 (https://sander.ai/2022/05/26/guidance.html) shows how image quality is improved with \\\"over-guidance\\\" where $\\\\lambda = 3$. Lastly, the autoguidance paper (https://arxiv.org/pdf/2406.02507v1) observes that improvements are even possible when guiding a model with a worse, less trained version of itself. \\n\\n**Whether our \\\"PDB protein\\\" evaluation set is included in the training data**\\n\\nThis set of proteins is separate from the training data.\\n\\n---\\nWe hope the discussions and results address your concerns! Please let us know if there are any further opportunities to improve the score.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for your response; all my questions are addressed. I will keep my score since I reserve 10 for fundamental work that can have significant impacts across multiple fields.\"}", "{\"summary\": \"This paper proposes a framework to generate protein structures by conditioning on layouts specified through 3D ellipsoids. The conditions include location, size, orientation, and secondary structure. These conditions are injected into flow-based protein generative models via the proposed cross-attention and update modules. 
It shows greatly improved controllability and designability over baselines.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper is well motivated, formulated, written, and evaluated.\\n\\n1. The injection of ellipsoid information is achieved through cross attention. This allows the ellipsoids to be an unordered set, i.e., the user doesn\\u2019t have to specify the order of ellipsoids; the model also decides the order of ellipsoids. \\n2. The formulation of the ellipsoid token is effective and is easy to extend to different conditions other than secondary structure (such as hydrophobicity).\\n3. Thorough analysis of ellipsoid consistency, including both geometric and probabilistic metrics.\\n4. Fig 4: The comparison on designability/diversity with baseline methods is done by comparing Pareto frontiers with varying sampling temperatures/guidances. This clearly shows the tradeoff between the methods and the performance improvement from baselines. I found this analysis insightful and believe other papers arguing increased performance could benefit from a similar evaluation scheme.\\n5. The practical use-case of the method is shown in Section 4.3, flexible conditioning.\\n6. The controllability is greatly improved from Chroma (Table 1). Accuracy and coverage are very impressive.\", \"weaknesses\": \"How is sequence design performance? What I understand is that the designability is solely based on the generated structure (i.e. the generated sequence is discarded). Can you also present the co-design designability value as in the MultiFlow paper?\\n\\nOther than that, I did not find any major weaknesses in the paper. However, the ablation study can be improved. Most of the model ablations are based on guidance strength, but I am also curious about an ablation study on (i) the ellipsoid segmentation cutoff (5A currently), (ii) allowing the residue token to update the ellipsoid token or not. 
For (ii), explaining the reasoning behind the design choice may suffice.\", \"questions\": \"Is it possible to set the order of ellipsoids? Or how complicated would it be to extend this framework to allow the user to set the order of ellipsoids?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors 3\", \"comment\": \"With the discussion period ending tomorrow, we thank you for the work together toward a better paper! We would be excited to hear any further feedback on our second response above if you can find the time!\"}", "{\"summary\": \"Introduces ProtComposer, a generative model for proteins. ProtComposer seeks better control over the shape of generated proteins, as well as allowing for greater novelty in the generation of proteins. Notably, the user has to choose between control and novelty; ProtComposer does not achieve both simultaneously. These tasks are accomplished by the introduction of 3D ellipsoid frames to guide the generation of proteins by the pre-existing Multiflow algorithm.\\n\\nLacking a pre-existing metric that they feel sufficiently quantifies compositionality, the authors introduce their own metric. It is shown that ProtComposer outperforms existing methods in this metric.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"It addresses an important and relevant area. The results are overall quite impressive as well. Additionally, showing the ability to work with handcrafted ellipsoid frames or the more easily scalable ML-generated frames shows the practicality of ProtComposer.\\n\\nI really like the point about using simple ML models for ellipsoid sampling as compared to NNs. 
I would like to see some stronger analytical or experimental justification of the claim though.\", \"weaknesses\": \"Since a custom metric is introduced in this paper and then used as justification for the performance of the model, a section (either in the main paper or the appendix) justifying this metric would be nice. Showing comparisons to other pre-existing metrics, performance of many other algorithms under this metric, or stronger domain justification would strengthen the meaning of the results. I did not reject over this since the metric intuitively looks good, but further empirical or analytical support would be nice.\\n\\nThe formatting of the extra results in the appendices results in very odd page layouts, where some pages are blank, others have odd whitespace gaps, etc. Adjustment to make these pages more presentable would be nice.\", \"questions\": \"I am assuming that the user can redefine K during each trial as they see fit, though studies on guidelines in choosing K would be helpful. In situations where the user has only a vague idea of what they are looking for (an unfortunately common occurrence), having a guide on where to start would be beneficial.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by authors 2\", \"comment\": \"With the discussion period ending tomorrow, we thank you for the work together toward a better paper!\\n\\nSince we are excited that you, as well as all other reviewers, consider our work as a good submission, please let us know if there is anything we can do to further improve the paper and make you consider modifying and raising your paper rating from \\u201c8: accept, good paper\\u201d to \\u201c10: strong accept, should be highlighted at the conference\\u201d, or if you think such a score increase is already warranted.\"}" ] }
0cadcLKbt7
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
[ "Zonghang Li", "WenjiaoFeng", "Mohsen Guizani", "Hongfang Yu" ]
Large model inference is shifting from cloud to edge due to concerns about the privacy of user interaction data. However, edge devices often struggle with limited computing power, memory, and bandwidth, requiring collaboration across multiple devices to run and speed up LLM inference. Pipeline parallelism, the mainstream solution, is inefficient for single-user scenarios, while tensor parallelism struggles with frequent communications. In this paper, we argue that tensor parallelism can be more effective than pipeline on low-resource devices, and present a compute- and memory-efficient tensor parallel inference system, named TPI-LLM, to serve 70B-scale models. TPI-LLM keeps sensitive raw data local in the users' devices and introduces a sliding window memory scheduler to dynamically manage layer weights during inference, with disk I/O latency overlapped with the computation and communication. This allows larger models to run smoothly on memory-limited devices. We analyze the communication bottleneck and find that link latency, not bandwidth, emerges as the main issue, so a star-based allreduce algorithm is implemented. Through extensive experiments on both emulated and real testbeds, TPI-LLM demonstrated over 80\% less time-to-first-token and token latency compared to Accelerate, and over 90\% compared to Transformers and Galaxy, while cutting the peak memory footprint of Llama 2-70B by 90\%, requiring only 3.1 GB of memory for 70B-scale models.
[ "DML Systems", "Edge LLM Serving", "Tensor Parallelism", "Memory Scheduling" ]
https://openreview.net/pdf?id=0cadcLKbt7
https://openreview.net/forum?id=0cadcLKbt7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oEbSiYQ7mO", "ZihNtJYXT0", "YeIB0tSnEt", "9OnkZsGo5u", "7cqPB1Sdxb" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730682578511, 1730622151182, 1730767533447, 1731994652175, 1730602854184 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8847/Reviewer_rKWH" ], [ "ICLR.cc/2025/Conference/Submission8847/Reviewer_3Ruh" ], [ "ICLR.cc/2025/Conference/Submission8847/Reviewer_sPoF" ], [ "ICLR.cc/2025/Conference/Submission8847/Authors" ], [ "ICLR.cc/2025/Conference/Submission8847/Reviewer_jo28" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel tensor parallel inference system named TPI-LLM, designed to efficiently serve large-scale language models (LLMs) on low-resource edge devices. The system addresses the challenges of limited computing power, memory, and bandwidth on edge devices by introducing a sliding window memory scheduler that dynamically manages layer weights during inference, overlapping disk I/O latency with computation and communication. This approach allows larger models to run smoothly on memory-limited devices while keeping sensitive raw data local to the user's devices, enhancing privacy.\", \"the_key_contributions_of_the_paper_are_as_follows\": \"1. Tensor Parallelism on Edge Devices: The paper argues for the effectiveness of tensor parallelism over pipeline parallelism on low-resource edge devices and presents TPI-LLM as a compute- and memory-efficient solution for serving 70B-scale models.\\n \\n2. Sliding Window Memory Scheduler: A memory scheduler is introduced to asynchronously load and unload layer weights, which enables the inference of larger models on devices with limited memory by overlapping disk I/O with computations and communications.\\n \\n3. 
Star-Based AllReduce Algorithm: The paper identifies link latency as the main issue in communication and implements a star-based allreduce algorithm to reduce latency, outperforming ring- and tree-based methods commonly used in high-latency networks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper addresses the emerging challenge of deploying large language models (LLMs) on edge devices, which is a novel and increasingly relevant problem in the era of edge computing and privacy concerns. It proposes a new approach to tensor parallelism that is specifically tailored for low-resource environments, which is an original contribution to the field.\\n\\nThe authors combine concepts from distributed computing, memory management, and parallelism to create a system that is both memory and compute-efficient. The sliding window memory scheduler and the star-based allreduce algorithm are creative solutions that address specific pain points in edge device inference.\\n\\nThe work has significant implications for the future of edge computing, as it enables the deployment of LLMs on devices that were previously considered incapable due to resource constraints.\", \"weaknesses\": \"I think that although this paper is a bold attempt at the edge of LLM serving, most of the solutions provided are based on the application of past works. Firstly, whether it is tensor parallelism, sliding window, or the star-based algorithm, they are all proposed in existing and relatively mature works. The author's approach to using these methods to optimize edge LLM serving is similar to theirs, hence the paper's contribution lacks a certain level of innovation.\\n\\nFurthermore, I think there are some flaws in the author's logic when explaining the motivation and opportunities for the research. I only learned from the first sentence of the abstract that the significance of serving large models at the network edge lies in data privacy-preserving. 
In the first paragraph of the introduction, I learned that if edge devices must be used for LLM serving to ensure user privacy and security, then more edge devices will have to be used in a distributed collaborative serving due to the limitations of computing and storage resources. However, the title of the second paragraph is \\\"Observations and Motivations,\\\" but the content only contains observations, without the motivations. Therefore, I suggest that the author optimize the logic of explaining the research motivation and opportunities. The scope of privacy security issues is too large, so that the research motivation seems somewhat weak. Can it be combined with some more specific downstream tasks or application scenarios?\\n\\nFinally, the experimental results shown in Table 1 show that the latency of serving large models on edge devices is much higher than that in the cloud, with TTFT reaching the level of seconds, and the throughput is far from comparable to that of the cloud. Does this indicate that the research motivation of the paper only considered privacy protection and neglected performance issues, although the experimental results are already much better than Galaxy's? This issue is also a huge challenge that all edge LLM serving inevitably faces.\", \"questions\": \"When the author discusses opportunities, I have several points of confusion:\\n\\n1. Why does the author believe that the communication proportion of tensor parallelism is high, but the overall inference time is reduced due to parallel computation? Is this conclusion drawn from the results in Figure 1-(b)? Has the author considered the synchronization issues among multiple edge devices in tensor parallelism?\\n \\n2. Why is the total time of tensor parallelism less than 100% in Figure 1-(b)? Does the author want to use pipeline parallelism as a baseline to illustrate the superiority of tensor parallelism among edge devices?\\n \\n3. 
In Figure 1-(c), why is the memory footprint of each device in the TPI-LLM framework proposed in this paper the same? If tensor parallelism is used, it should decrease as the number of devices increases.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new framework called TPI-LLM for model serving on low-resource edge devices using tensor parallelism and memory scheduling. The proposed framework shows better performance over the SOTA frameworks in terms of latency to predict the first token and the overall token latency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Following are the strengths:\", \"The paper is well-written and easy to follow, and the concepts presented are easy to understand.\", \"The paper tries to address the critical problem of LLM inference on edge devices.\", \"The paper discusses the existing frameworks in this space and positions itself within the body of knowledge.\", \"Achieving faster TTFT latency using 3k LOC and for large models is always interesting and of use for a broader audience.\", \"Although the paper uses multiple devices connected in the same network (Wi-Fi/cable connected), it is understandable that there are use-cases where a house will have multiple edge devices all of which might operate in tandem, and there may be business orgs that have edge devices that don't have permission issues to run LLMs on multiple of such devices. 
For such use-cases this is a significant contribution.\", \"The paper proposed an easy-to-use technique of overlapping the data fetch with the communication step in LLMs in the proposed sliding window strategy.\"], \"weaknesses\": \"Please follow the questions section; there is a cohesive list of weaknesses and the corresponding questions that are to be addressed.\", \"questions\": [\"This technique might also be useful even in the cloud server settings, especially when there are not enough GPUs: can the sliding window memory management help avoid the OOMs? Any thoughts in that direction or future recommendations maybe?\", \"However, following are the concerns/questions.\", \"### Major concerns\", \"The proposed framework has two major operational changes that are applied on the pre-trained LLM, which are i) distribution of attention heads across nodes, ii) the memory management through the sliding window approach. Given these two changes, the paper does not discuss the performance implications with and without the proposed framework. It is important to guarantee that the performance remains the same.\", \"Given that, it is recommended to show that the performance (at least in the best-case settings on at least one model) remains unchanged with or without TPI-LLM.\", \"Figure 5 is more concerning from the following perspectives.\", \"No impact on bandwidth: There is no surprise on the lack of impact of increasing the bandwidth since the sliding window size is defaulted to 2. However, we can only learn about the impact by increasing the window size. There is no such ablation in the paper that shows a best combination of the number of devices, maximum possible sliding window size, bandwidth, and available memory on each of the devices.\", \"Recommendation is to provide a thorough feasibility study on the combination of the above variables to clearly understand the impact of the bandwidth. 
In fact on that note, there is no clear analysis on the maximum possible sliding window size for a given hardware configuration of the master/worker nodes. Therefore, the feasibility study can be preceded by the maximum window size in order to limit the number of combinations to be studied.\", \"`Increasing the number of devices/cores reduces the token latency` (from the first two sub-figures in Figure 5) is not a true statement and is really vague. Those plots are shown simply for 8 devices; if the number of devices is kept increasing, after a point we see diminishing returns of parallelism. That is where the communication dominates, hence the diminishing returns. Without a proper study, claims like `increasing the devices reduces the latency` are not valid.\", \"The recommendation is to conduct a quick roofline analysis to substantiate the claims or remove the controversial parts.\", \"There are a few limitations on what the proposed framework can offer. They are as follows.\", \"There is a security and privacy concern: this framework should not run on any device connected to the same Wi-Fi network unless there is prior consent. This is not stated or addressed in the paper, so please clearly state the limitation or the constraints under which the framework can operate.\", \"The star configuration cannot scale to a large number of nodes/devices. It probably can be extended to scale in a hierarchical-star configuration, etc., but that is not the scope of the paper and hence this needs to be stated clearly as a limitation. There are real-time use-cases in the resource-constrained edge scenarios where the number of devices is high, which leads to failures of a single master node in star config.\", \"There is probably an unstated assumption in the paper that the data gets generated on the master or stays centrally on the master node/device. However, there is a high chance of the worker nodes/devices having user-specific data. 
It appears that the proposed framework does not handle data that is generated on all the other devices. If that is not the case, please clarify how that data is handled on each of the workers. Otherwise, this limitation should be stated.\", \"### Minor concerns\", \"In Figure 4, Time steps 7 and 8 have the same memory window; why is that the case? It is a bit confusing to understand. Has the memory peaked, so the window does not slide, or is it something else? Please provide clarifications or amend the figure to make it clear.\", \"Tables 1 & 2 provide comparisons for TTFT, latency, memory usage, etc., between with and without the use of the memory scheduler. However, when the memory scheduler is disabled, it is not clear how those numbers are attained. How were they measured? By using Accelerate, vanilla Klonet, Galaxy, llama.cpp, or others?\", \"Please add those details on how the stats were measured and the underlying frameworks. Ideally, benchmark comparisons against the best possible frameworks/methods are a common practice.\", \"Section 4.3 states the `comparison with benchmarks`. Ideally those (a. Standalone, b. Model Parallelism (MP) and c. Galaxy) are SOTA methods/frameworks (in this case). Are they not? Why are they called benchmarks? Clearly there is benchmarking of TPI-LLM against those other things.\", \"Recommendation is to please rephrase in order to convey the message so that confusing the reader can be avoided.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a technique to run 70B LLMs on CPU-based (edge) devices. The system uses a tensor parallel framework to distribute attention heads across multiple nodes. The authors then perform an All-Reduce latency analysis, claiming that latency, not bandwidth, is the main bottleneck for all-reduce. 
The authors then describe a sliding memory scheduler, followed by experiments where they show the performance of their system.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper aims to solve a very important problem, how to run very large models with billions of parameters on CPUs or machines with no CUDA.\", \"weaknesses\": \"Thank you for submitting your work to ICLR. I believe the current version of the paper has many shortcomings which I will describe here in detail:\\n\\n1. The paper needs thorough proof-reading. Some examples:\\n- by adaptively partitioning model--->a model or models\\n- with token latency increases-->increasing\\n- Write in Active voice to avoid odd sentence structures such as :\\n- \\\"Constrained by the high link latency, a star-based allreduce algorithm is implemented\\\"\\n- \\\"We design a TPI-LLM\\\"--->We design TPI-LLM\\n- \\\"power collaborate\\\"--->to collaborate\\n- \\\"which dominate inference time\\\"---> \\\"what dominate\\\"\\n\\n\\n2. I did not really understand the example on p.5 \\\"For example, the path from device h2 to h1 via h8\\nfollows the route h2 \\u2192 r2 \\u2192 r9 \\u2192 r8 \\u2192 h8 \\u2192 r8 \\u2192 r9 \\u2192 r1 \\u2192 h1, resulting in a total link latency of 16\\u03c4 , where \\u03c4 is the per-hop link latency\\\"\\n\\n\\n3. The Sliding Window Memory Scheduling is very similar to PipeSwitch (https://www.usenix.org/conference/osdi20/presentation/bai). The only main difference being that you are swapping in/out from Disk to/from device memory. This is in many ways also similar to memory page to disk swapping.\\n\\n4. Starting with the results for the swapping. Having an OOM is probably better than having 26.1s/token. For a 100 tokens output, you need to wait for roughly 30 minutes. This is an output of about 75 words as per OpenAI (https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them), i.e., one paragraph. 
Why would a user want to tolerate this? If the user chooses to use the fastest model (the Llama2-3B), they will wait for a bit less, about 3.3 minutes. I am not sure if there is a use-case for such a slow-running LLM.\\n\\n5. Moving to the networking results in 4.2, I think the authors are drawing the wrong conclusions. The computations are just so slow in this case that the network is not really a bottleneck in the computations. I think a better experiment would be to try the system with proper edge GPUs/TPUs, e.g., Google's Coral, NVidia's Orin, or NVidia's Nano. Right now, I believe that what we are seeing is the result of just a very slow computation. You can already see that in a real-world scenario, things are much worse: in the real case study, the latency in Table 3 is multiple times that in Table 1.\", \"questions\": \"1. What is the use-case for this work if it will take minutes to hours to generate an output?\\n2. How do these results change if you have a faster edge device compared to a CPU?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We don't have the specialized edge devices with CUDA/GPUs required by the reviewer, but only general mobile devices that poor/normal users might have at home, so it's impossible for us to provide the additional experiments and data.\"}", "{\"summary\": \"The paper proposes TPI-LLM, a tensor parallelism-based distributed LLM inference system tailored to low-resource edge devices. Unlike inference tasks on servers, TPI-LLM keeps user data securely on user (edge) devices during inference. 
By analyzing the network latency bottleneck and memory limits, TPI-LLM leverages a star-based all-reduce algorithm to facilitate distributed inference and employs a sliding window-based memory scheduler to reduce inference memory footprints. Experiments on both emulations and real-world testbeds show that TPI-LLM outperforms existing baselines by providing lower TTFT, token latency, and peak memory footprints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1 - The paper provides extensive theoretical analysis.\\n\\n2 - The proposed approach is evaluated in both emulations and real-world testbeds against state-of-the-art baselines.\\n\\n3 - The performance is significant, especially on edge devices with limited resources.\", \"weaknesses\": \"1 - In Section 2, Q1 is somewhat ambiguous. First, isn't tensor parallelism already a form of model parallelism? And second, even on low-resource edge devices, can we combine and use both parallelism techniques instead of simply abandoning one?\\n\\n2 - In Section 3.2, the example network topology here is star-based (Appendix A.7). Given this star-based topology, a star-based all-reduce scheme indeed would be most efficient. Is this a common network topology for all edge scenarios?\\n\\n3 - Figure 4 may need further detailed descriptions. The sliding window is not very clear in the figure. For example, in Time 7 and 8, why would you prefetch Blocks 6, 7, and 8 so early when it's still far away from actually using them? Isn't that prefetching too early causing memory waste?\\n\\n4 - Note that the layer-wise parameter offloading is not new. Many popular frameworks, such as DeepSpeed-Inference, Mixtral Offload, and Llama.cpp, support this offloading scheme. How does the proposed TPI-LLM differ from existing offloading techniques?\\n\\n5 - Evaluation lacks sensitivity analysis on the memory sliding window size. 
Why would you pick a window size of 2?\", \"questions\": \"Please see questions from weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
0cBttXaOUK
Multi-aspect Knowledge Distillation with Large Language Model
[ "Taegyeong Lee", "Jinsik Bang", "Soyeong Kwon", "Taehwan Kim" ]
Recent advancements in deep learning have significantly improved performance on computer vision tasks. Previous image classification methods primarily modify model architectures or add features, and they optimize models using cross-entropy loss on class logits. Since they focus on classifying images with considering class labels, these methods may struggle to learn various aspects of classes (e.g., natural positions and shape changes). In contrast, humans classify images by naturally referring to multi-aspects such as context, shape, color, and other features. Inspired by this, rethinking the previous approach from a novel view, we propose a multi-aspect knowledge distillation method using Multimodal Large Language Models (MLLMs). Our approach involves: 1) querying Large Language Model with multi-aspect questions relevant to the knowledge we want to transfer to the model, 2) extracting corresponding logits from MLLM, and 3) expanding the model's output dimensions to distill these multi-aspect logits. We then apply cross-entropy loss to class logits and binary cross-entropy loss to multi-aspect logits. Through our method, the model can learn not only the knowledge about visual aspects but also the abstract and complex aspects that require a deeper understanding. We primarily apply our method to image classification, and to explore the potential for extending our model, we expand it to other tasks, such as object detection. In all experimental results, our method improves the performance of the baselines. Additionally, we analyze the effect of multi-aspect knowledge distillation. These results demonstrate that our method can transfer knowledge about various aspects to the model and the aspect knowledge can enhance model performance in computer vision tasks. This paper demonstrates the great potential of multi-aspect knowledge distillation, and we believe it offers a promising direction for future research in computer vision and beyond.
[ "Multi-aspect Knowledge Distillation", "LLM", "MLLM" ]
Reject
https://openreview.net/pdf?id=0cBttXaOUK
https://openreview.net/forum?id=0cBttXaOUK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zZbpf44HbU", "xOAOihvgZg", "uysJltLN2M", "rMGNgi73Tw", "qinCQCPhpZ", "qPIjMpmaw1", "ojzSTKs6r7", "ntYT6Nk9QJ", "mH78UotEQh", "m8Bfcp3iFC", "lwxp9w2QIf", "gHEqQVjkm3", "UYCTkQQSJ6", "NK6JDrHCdW", "MmkOMiWZzG", "M7fO0IVTlh", "KKcFfIu9IS", "HxpGO2qrla", "HsQQw5ab8R", "GirHgMlgMB", "6HbarhlyK5", "5JepTBuOsG", "3hlsR5jOYc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1733302534462, 1732953494508, 1733302613134, 1732305006158, 1734509678909, 1732304993507, 1732305110636, 1732305028700, 1733302523626, 1732305067136, 1730437801761, 1737523706180, 1732305047262, 1732774930212, 1729853831417, 1732620354826, 1732615347135, 1732304936712, 1732304973505, 1732305084467, 1730197241258, 1733302581986, 1730563327095 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_2mpd" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Area_Chair_Kw7u" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_2mpd" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_RKH6" ], [ 
"ICLR.cc/2025/Conference/Submission5430/Reviewer_gXGp" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_gXGp" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_xDGy" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_xDGy" ], [ "ICLR.cc/2025/Conference/Submission5430/Authors" ], [ "ICLR.cc/2025/Conference/Submission5430/Reviewer_RKH6" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your feedback. Our method differs from [1][2] by leveraging the generalized features learned by the MLLM, allowing the student model not only to predict classes but also to respond to multi-aspects. This makes the student model to learn more features even with a small dataset and significantly improves performance. As shown in Table 5 of the main paper, reducing the StanfordCars dataset by 40% still results in a 24.01% performance improvement compared to the baseline. We believe that our method has significant potential to achieve effective results in vision tasks, even in challenging low-resourced settings, and can be applied to real-world scenarios, such as in the medical domain.\"}", "{\"comment\": \"Thank you for your response and the additional experiments, which have filled in some missing details. I think the experiments with larger scale of datasets is neccessery and I encourage the authors to continue improving this project. I have decided to maintain my current score.\"}", "{\"comment\": \"Thank you for your feedback. Our method leverages the generalized features learned by the MLLM, allowing the student model to learn more features even with a small dataset. As shown in Table 5, reducing the StanfordCars dataset by 40% still results in a 24.01% performance improvement compared to the baseline. 
We believe that our method has significant potential to achieve effective results in vision tasks, even in challenging low-resourced settings, and can be applied to real-world scenarios, such as in the medical domain.\"}", "{\"comment\": \"**Weakness 5.** Missing the training curve of MaKD Loss with the number of iterations. The visualization of t-SNE embeddings and the model's multi-aspect responses to a single image are presented in Fig 4 and 5. There is no overall evaluation of the model's responses to multi-aspect on the test dataset.\\n\\n\\n| Dataset | L1 | KL |\\n|:-------------:|:------:|:------:|\\n| StanfordCars | 0.1174 | 0.1328 |\\n| Mini-ImageNet | 0.0773 | 0.2562 |\\n| Caltech101 | 0.0881 | 0.2548 |\\n| OxfordPets | 0.0972 | 0.0969 |\\n| DTD | 0.1312 | 0.1463 |\\n| FGVC-Aircraft | 0.0876 | 0.0423 |\\n\\n**Table 4. L1 distance and KL Divergence in aspect prediction between MLLM and ResNet18 on the test dataset.**\\n\\nThank you for your valuable comments. We included a figure illustrating the training curve of MaKD Loss with the number of iterations in Appendix A in our revised version. As shown in Figure 6 of Appendix A, MaKD Loss becomes close to 0 as training progresses. Additionally, we provide the overall evaluation of the model\\u2019s responses to multi-aspect on the test dataset using L1 distance and KL Divergence in Table 4. In the overall evaluation on the test dataset, the differences in L1 distance range between 7% and 13%. Also, regarding KL Divergence, we observed that the distributions of fine-grained datasets are closer compared to those of coarse-grained datasets. **These results demonstrate that our method predicts multi-aspect features more effectively on fine-grained datasets than on coarse-grained datasets.**\\n\\n**Question 2.** There are questions regarding the task details when extending to object detection: should the input to the MLLM be the object within the box or the entire image? 
The entire image may contain multiple objects, and the MLLM's response may not be accurate. \\n\\nThank you for your insightful feedback. MLLM may struggle to provide accurate responses to aspects when multiple objects within an image have differing features or states. To address this, we believe incorporating visual grounding using bounding boxes for distinct objects in the image can enhance performance. By providing MLLM with a specific grounding target, it would be able to respond more accurately to the aspects related to the designated object.\"}", "{\"metareview\": \"The paper studies knowledge distillation for visual recognition models by querying the teacher models for multiple specific concepts. The authors demonstrate that this technique leads to better performance than vanilla knowledge distillation. The experiments in this paper are conducted on datasets - StanfordCars, OxfordPets, DTD, 102Flowers, FGVCAirCraft, CUB200, Caltech101 and miniImageNet using ResNet34 and ResNet18, EffiNet and Mb-N1 models.\\n\\nStrengths\\n1. The paper is easy to follow and well written. The authors have provided a lot of details about the experiments and the method.\\n2. The authors have studied a lot of different datasets and model combinations in this work\\n\\nWeaknesses\\n1. No comparisons to SOTA KD methods\\n2. Object detection results are limited\\n3. Datasets used in this work are relatively small scale, and thus the results may not hold on larger datasets + models\\n4. While the proposed method is different from existing KD work, there is limited explanation/insights into why this method works.\\n\\nJustification\\nThe AC read the paper and the reviews, and concluded that while interesting, the paper does have severe limitations in its experimental setup. The use of small scale datasets is concerning, and the lack of fine-grained visual tasks like object detection on LVIS style benchmarks limits the impact of this work. 
The reviewers also remain unconvinced about the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns about\\n1. Using Convolution architectures\\n2. Usage of small datasets like StanfordCars\\n3. Not comparing to SOTA KD methods\\n4. Limited explanation about why the method works\\n\\n\\nThe authors have tried to address all four concerns by showing follow-up experiments for (1,2,3) and offering explanations for (4)\\n\\nThe AC agrees with all the concerns raised by the reviewers. Points (2) and (4) are still not addressed and these are critical for the paper.\"}", "{\"comment\": \"**Weakness 3.** The improvement in object detection tasks is very limited in Tab 7, and there is no comparison done on currently well-performing object detection methods. Object detection is inherently a more fine-grained visual task than classification. Still, the experiments in this paper do not demonstrate the method's effectiveness of multi-aspect knowledge distillation in detection.\\n\\nOur paper mainly focuses on fine-grained image classification. Moreover, our method is not limited to image classification but demonstrates potential for broader applicability, including KD and object detection, as a promising direction for future work. Even though achieving state-of-the-art performance in object detection is not our focus, we would like to illustrate our potential and have included results in Table 3 showing performance improvements achieved by tuning a hyperparameter, the alpha value (weight) of the MaKD Loss.\\n|ResNet18|AP|AP50|AP75|\\n|:----------------:|:-----:|:-----:|:-----:|\\n|Baseline|33.18|53.54|35.31|\\n|Ours (alpha=1.0)|33.35|53.90|35.58|\\n|Ours (alpha=2.0)|**33.58**|**54.09**|**35.97**|\\n\\n**Table 3. 
Additional object detection experiments with alpha values.** We run each experiment three times and report the average results.\\n\\nBased on these results, we believe that the multi-aspect knowledge from MLLM can also be effective in object detection tasks and has the potential to be further developed through the application of various visual grounding techniques utilizing MLLM in future work.\\n\\n**Weakness 4 + Question 1.** The explanation for the poor zero-shot classification performance of MLLMs is missing in Tab 1. Incorrect knowledge could also be distilled to the student model. As shown in Tab 1, MLLMs perform badly in zero-shot classification on fine-grained image test datasets, how do we ensure that MLLMs provide correct answers across multiple aspects? \\n\\nThank you for kindly pointing out the areas where our paper could be improved. In Sec 5.1, we briefly mentioned the poor zero-shot classification performance, but the reasons for it were missing in Tab 1. We observed that MLLM demonstrates poor zero-shot classification performance on fine-grained datasets. This suggests that classifying subclasses within similar superclasses can be challenging if subclass-specific names are not adequately represented in the dataset used for MLLM training. However, when it comes to aspects, MLLM can effectively provide responses regarding the features of the objects visible in the image. For instance, if the object is a car such as \\u201cRolls-Royce Phantom Drophead Coupe Convertible\\u201d or \\u201cRolls-Royce Phantom Sedan\\u201d, and the aspect question is \\\"Does the car have a convertible roof?\\\" MLLM can answer whether the car has a convertible roof or not, even without knowing the exact class of the car. As shown in Figure 5 of the main paper, the model effectively responds to the aspect \\u201cDoes the animal have striking blue eyes?\\u201d for a Birman(cat) without relying on class information. 
Therefore, despite poor zero-shot classification performance, the multi-aspect questions allow MLLM to focus on and respond to specific features of the image or object. These aspects can be distilled to student model as seen in Figure 4 and 5 in the main paper. We will revise Tab 1 to include this information and also update the explanation in Section 5.1 of the main paper.\"}", "{\"comment\": \"Dear reviewer gXGp,\\nThank you for your insightful comments. We address your concerns below.\\n\\n**Weaknesses + Question 1.** The approach to utilizing knowledge distillation is somewhat unclear. When are the multi-aspect logits extracted from the MLLM, and how are they incorporated into the model's training or inference objective? \\n\\nThe multi-aspect logits are first extracted from the training dataset using MLLM before training the student model. These logits are then incorporated into the training process alongside the original training dataset. As a result, during inference, the image classification model is not only able to perform classification but also gains the capability to respond to aspects in a manner similar to MLLM.\\n\\n**Weaknesses + Question 2.** Given that GPT-4o generates the multi-aspect questions and that the MLLM has not seen images from each category, especially considering these categories are often long-tailed and fine-grained. Do you have any validation or filtering steps in place for the generated questions and responses, or have you considered comparing the generated questions to human-curated ones? Which types of generated questions contribute most to performance improvements. \\n\\nAs mentioned in Sec 4.1, we manually reviewed the multi-aspects generated by GPT-4o and confirmed that all responses were valid for each dataset. Using the process detailed in the same section, GPT-4o initially generated 100 questions. These were then filtered to select the 50 most relevant ones, and we ranked them accordingly using GPT-4o. 
\\nFigure 2 in the main paper shows that as the number of multi-aspects increases, the magnitude of performance improvement decreases. As shown in Figure 2, we empirically observed that higher-ranked aspects contribute more significantly to performance improvement than lower-ranked aspects.\\n\\n**Question 3.** When generating responses to the multi-aspect questions for each image, has the potential hallucination issue within the MLLM been considered? How accurately can the MLLM (InternVL) answer these generated questions, and to what extent does the hallucination issue in InternVL affect the accuracy of its responses? \\n\\nThank you for the insightful feedback. InternVL demonstrates sufficiently good performance on the VLM benchmark (InfoVQA). As shown in Figure 5 of the main paper, we empirically found that it responds well to multi-aspect given a single image. InternVL effectively answers to the aspect \\u201cDoes the animal have striking blue eyes?\\u201d for a Birman(cat) without relying on class information. Additionally, to ensure accurate answers to the generated questions, our method uses a prompt for MLLM that provides a single image and asks it to answer only with \\\"Yes\\\" or \\\"No.\\\" Empirically, we observed a hallucination issue where MLLM generates irrelevant sentences instead of answering \\\"Yes\\\" or \\\"No\\\" when the image resolution is extremely low, such as below 100x100. To address this, we added a prompt stating, \\\"Ignore the low resolution problem\\\", for images with a resolution smaller than 128x128. As a result, InternVL accurately answers with \\\"Yes\\\" or \\\"No\\\" for all images in the datasets specified in the paper. \\n\\n**Question 4.** For the object detection task, have you attempted to use other datasets, such as the larger-scale LVIS? \\n\\nUnfortunately, we could not perform experiments on LVIS due to time constraints. 
Instead, we provide experiments on the scalability of tuning the MaKD Loss to further improve performance in Table 3.\\n| ResNet18 | AP | AP50 | AP75 |\\n|:----------------:|:-----:|:-----:|:-----:|\\n| Baseline | 33.18 | 53.54 | 35.31 |\\n| Ours (alpha=1.0) | 33.35 | 53.90 | 35.58 |\\n| Ours (alpha=2.0) | **33.48** | **53.98** | **35.76** |\\n\\n**Table 3. Additional object detection experiments with alpha values.** We run each experiment three times and report the average results. \\n\\nAlso, our method's ability to extend to object detection, and incorporate multi-aspect questioning through visual grounding suggests its potential for improving performance in large-scale object detection datasets like LVIS. We believe this approach can lead to effective results in future studies.\"}", "{\"comment\": \"Dear reviewer 2mpd,\\nThank you for your insightful comments. We address your concerns below.\\n\\n**Weakness 1.** Limited Experimental Setting\\uff1aThe experimental setting is narrow, which restricts the generalizability of the findings. The scale of datasets is small and may not be sufficient to demonstrate the robustness of the proposed method across different scenarios. Expanding the experimental scope to include more varied or challenging datasets such as the full ImageNet would significantly strengthen the paper. \\n\\nThank you for your valuable feedback. The method proposed in our paper focuses on improving image classification performance, particularly when working with small datasets, including fine-grained datasets, which is challenging and practical. Our approach leverages MLLM to represent various visual features as aspects, distilling these aspect responses to enhance performance. Unfortunately, due to time constraints, we were unable to conduct experiments on large datasets such as the full ImageNet. 
To demonstrate greater robustness, we have included in Table 1 additional results showing performance improvements using larger parameter models like ViT.\\n| | StanfordCars | | | Mini-ImageNet | | |\\n|:---------:|:------------:|:-----:|:-----:|:-------------:|:-----:|:-----:|\\n| Model | Base | Ours | Gap | Base | Ours | Gap |\\n| ViT-B/16 | 10.88 | **11.62** | +0.74 | 43.41 | **44.18** | +0.77 |\\n| ViT-B/16* | 85.72 | **86.13** | +0.41 | 96.64 | **96.87** | +0.23 |\\n| ViT-B/32 | 9.32 | **9.73** | +0.41 | 41.42 | **42.73** | +1.31 |\\n| ViT-B/32* | 79.00 | **79.98** | +0.98 | 93.41 | **93.50** | +0.09 |\\n\\n**Table 1. Additional experiments with the ViT-based model on StanfordCars and Mini-ImageNet.** * indicates that the model was trained using a pretrained model from ImageNet-1K. We run each experiment three times and report the average results. \\n\\n**Weakness 2.** Lack of novelty: The proposed method directly adopts the MLLM\\u2019s output logit to perform distillation. The principle behind this design is not fully demonstrated. Why MLLM can help improve the performance of student and what features support this? \\n\\nTo the best of our knowledge, we are the first to distill multi-aspect knowledge from an MLLM, which is simple yet effective. Our method is different from traditional KD as it expands prediction logits instead of directly distilling class logits. Since MLLM has been trained on large datasets, it generates responses based on generalized knowledge. One of the key reasons MLLM improves the student's performance is that it trains the student model to predict the same responses as the MLLM for the given image features. Through this process, we believe our method achieves significant performance gains in image classification tasks. Additionally, our approach is not limited to image classification. 
It can also be extended to traditional knowledge distillation and object detection tasks, where it demonstrates further performance improvements.\"}", "{\"comment\": \"Thank you for your feedback. Our method leverages the generalized features learned by the MLLM, allowing the student model not only to predict classes but also to respond to multi-aspects. This makes our method novel and simple, yet effective. Additionally, our experiments demonstrated notable performance improvements in fine-grained image classification, as well as scalability in ViT, KD, and object detection. We believe that our method has significant potential to achieve effective results in vision tasks and can be applied to real-world scenarios, such as in the medical domain.\"}", "{\"comment\": \"Dear reviewer xDGy,\\nThank you for your insightful comments. We address your concerns below.\\n\\n**Weakness 1.** The evaluation datasets in the paper are relatively small, and the model parameters appear insufficient in 2024. Using ResNet18/34 as the primary model limits the assessment of the framework\\u2019s scalability. It would be valuable to test the framework on a larger dataset, such as ImageNet, and with a more complex model like ResNet101, to assess its effectiveness in a more challenging setting. \\n\\nThank you for your valuable feedback. Unfortunately, due to time constraints, we were unable to conduct experiments on large datasets like the full ImageNet. To better assess the effectiveness of our approach, we included results for both ViT models and ResNet101 in Table 5. 
\\n| | StanfordCars | | | Mini-ImageNet | | |\\n|:---------:|:------------:|:-----:|-------|---------------|-------|-------|\\n| Model | Base | Ours | Gap | Base | Ours | Gap |\\n| ViT-B/16 | 10.88 | **11.62** | +0.74 | 43.41 | **44.18** | +0.77 |\\n| ViT-B/16* | 85.72 | **86.13** | +0.41 | 96.64 | **96.87** | +0.23 |\\n| ViT-B/32 | 9.32 | **9.73** | +0.41 | 41.42 | **42.73** | +1.31 |\\n| ViT-B/32* | 79.00 | **79.98** | +0.98 | 93.41 | **93.50** | +0.09 |\\n| ResNet101 | 81.36 | **83.97** | +2.62 | 77.00 | **78.11** | +1.11 |\\n\\n**Table 5. Additional experiments with the ViT-based model and ResNet101 on StanfordCars and Mini-ImageNet.** * indicates that the model was trained using a pretrained model from ImageNet-1K. We run each experiment three times and report the average results. \\n\\n**These results demonstrate that our method is effective not only for ViT-based models but also for more complex CNN-based models like ResNet101.**\\n\\n**Weakness 2.** The paper lacks comparisons with other knowledge distillation (KD) baselines, which would provide a clearer benchmark for evaluating the proposed method\\u2019s relative performance. \\n\\nAs pointed out by reviewers RKH6 and 2mpd, we conducted experiments with the latest KD methods [1, 2] and included the results in Table 2. 
\\n| StanfordCars | | | Caltech101 | |\\n|:---------------:|:---------------:|:-------------------:|:---------------:|:-------------------:|\\n| Teacher | ResNet34(80.93) | EfficientNet(86.41) | ResNet34(75.36) | EfficientNet(80.05) |\\n| Student | ResNet18(77.53) | MobileNetV1(82.84) | ResNet18(73.35) | MobileNetV1(76.64) |\\n| Ours | 83.38 | 85.43 | 75.76 | 79.14 |\\n| KD | 79.62 | 85.11 | 74.53 | 78.71 |\\n| KD + LS | 82.56 | 85.96 | 76.52 | 80.15 |\\n| KD + Ours | **83.44** | 86.34 | 76.70 | 79.70 |\\n| KD + LS + Ours | **83.44** | **86.69** | 77.24 | 80.68 |\\n| DKD | 82.55 | 85.93 | 76.37 | 79.95 |\\n| DKD + LS | 82.82 | 86.13 | 76.57 | 80.39 |\\n| DKD + Ours | 83.23 | 86.43 | 77.41 | 80.95 |\\n| DKD + LS + Ours | 82.99 | 86.63 | **77.43** | **80.99** |\\n\\n**Table 2. Comparisons with DKD and Logit Standardization in knowledge distillation methods on StanfordCars and Caltech101.** We run each experiment three times and report the average results. \\n\\nOur method demonstrates performance improvements when extended with [1, 2]. Specifically, on the StanfordCars dataset, where the teacher model is ResNet34 and the student model is ResNet18, our method outperforms other knowledge distillation methods [1, 2] (Ours: 83.38 > DKD [1] + Logit Standardization [2]: 82.82). **Also, other results show that extending our method further improves performance overall.**\\n\\nReferences \\n[1] Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" CVPR 2022. \\n[2] Sun, Shangquan, et al. \\\"Logit standardization in knowledge distillation.\\\" CVPR 2024.\"}", "{\"summary\": \"This paper proposes a new knowledge distillation method. It performs multi-aspect knowledge distillation with the LLM and the MLLM. LLM is utilized to generate multi-aspect questions by using the class and prompt. It further adopts the MLLM to extract the logit for multi-aspect questions and obtain the probabilities corresponding to yes token. 
The student is optimized by the original cross-entropy loss and the distilled binary cross-entropy loss. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method. It also extends to the object detection task to evaluate its potential.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written, with a clear and logical flow from the introduction through to the conclusion. The authors present simple ideas in a straightforward manner, making the paper accessible to readers from diverse backgrounds. The experimental setup is meticulously organized, with each step of the process described in a way that facilitates reproducibility. The authors outline the methodologies, datasets, and evaluation metrics in clear subsections, allowing readers to follow the experimental design intuitively.\", \"weaknesses\": \"1\\u3001\\tLimited Experimental Setting\\uff1aThe experimental setting is narrow, which restricts the generalizability of the findings. The scale of datasets is small and may not be sufficient to demonstrate the robustness of the proposed method across different scenarios. Expanding the experimental scope to include more varied or challenging datasets such as the full ImageNet would significantly strengthen the paper.\\n\\n2\\u3001\\tLack of novelty: The proposed method directly adopts the MLLM\\u2019s output logit to perform distillation. The principle behind this design is not fully demonstrated. Why MLLM can help improve the performance of student and what features support this? \\n\\n3\\u3001\\tSome details are missing, and some experimental comparisons are not fair. The parameter number of the MLLM is larger than the teacher model in the traditional KD. It is questionable whether the improvement is due to the large number of parameters or the inherent properties of the MLLM itself. 
What will happen to the performance of the student model if only adopting the large vision encoder in MLLM. Some comparison to the traditional methods is not fair. The basic KD adopted in experiment in classification is too old and many improved versions should be used.\\n\\n4\\u3001\\tThere is not the comparison to SOTA KD method in object detection and the baseline should also adopt the powerful setting.\", \"questions\": \"I hope the author can better explain the novelty of the paper and the principle of how the algorithm is really effective.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**Weakness 3.** Some details are missing, and some experimental comparisons are not fair. The parameter number of the MLLM is larger than the teacher model in the traditional KD. It is questionable whether the improvement is due to the large number of parameters or the inherent properties of the MLLM itself. What will happen to the performance of the student model if only adopting the large vision encoder in MLLM. Some comparison to the traditional methods is not fair. The basic KD adopted in experiment in classification is too old and many improved versions should be used.\\n\\nTab. 4 shows that our proposed method is more effective than performing KD based on class logits using MLLM as a teacher. Furthermore, Tab. 6 shows that our method can enhance performance even when applied in combination with existing teacher models. This highlights the distinction in approach, as our method builds upon the inherent properties of MLLM to achieve these improvements. As noted by Reviewer RKH6, we have also conducted experiments with two latest KD methods, [1, 2], and included the results in Table 2. 
\\n| StanfordCars | | | Caltech101 | |\\n|:---------------:|:---------------:|:-------------------:|:---------------:|:-------------------:|\\n| Teacher | ResNet34(80.93) | EfficientNet(86.41) | ResNet34(75.36) | EfficientNet(80.05) |\\n| Student | ResNet18(77.53) | MobileNetV1(82.84) | ResNet18(73.35) | MobileNetV1(76.64) |\\n| Ours | 83.38 | 85.43 | 75.76 | 79.14 |\\n| KD | 79.62 | 85.11 | 74.53 | 78.71 |\\n| KD + LS | 82.56 | 85.96 | 76.52 | 80.15 |\\n| KD + Ours | **83.44** | 86.34 | 76.70 | 79.70 |\\n| KD + LS + Ours | **83.44** | **86.69** | 77.24 | 80.68 |\\n| DKD | 82.55 | 85.93 | 76.37 | 79.95 |\\n| DKD + LS | 82.82 | 86.13 | 76.57 | 80.39 |\\n| DKD + Ours | 83.23 | 86.43 | 77.41 | 80.95 |\\n| DKD + LS + Ours | 82.99 | 86.63 | **77.43** | **80.99** |\\n\\n**Table 2. Comparisons with DKD and Logit Standardization in knowledge distillation methods on StanfordCars and Caltech101.** We run each experiment three times and report the average results. \\n\\nOur method demonstrates performance improvements when extended with [1, 2]. Specifically, on the StanfordCars dataset, where the teacher model is ResNet34 and the student model is ResNet18, our method outperforms other knowledge distillation methods [1, 2] (Ours: 83.38 > DKD [1] + Logit Standardization [2]: 82.82). Also, other results show that extending our method further improves performance overall. **These results demonstrate the potential of our method to be effectively extended to other KD methods and improve their performance**.\\n\\n**Weakness 4.** There is not the comparison to SOTA KD method in object detection and the baseline should also adopt the powerful setting. \\n\\nThank you for your valuable feedback. Our method mainly focuses on image classification while also demonstrating its potential to be extended to tasks such as KD and object detection. 
Unfortunately, due to time constraints, we only include additional object detection results with a tuned hyperparameter, the alpha value (MaKD Loss weight), in Table 3.\\n| ResNet18 | AP | AP50 | AP75 |\\n|:----------------:|:-----:|:-----:|:-----:|\\n| Baseline | 33.18 | 53.54 | 35.31 |\\n| Ours (alpha=1.0) | 33.35 | 53.90 | 35.58 |\\n| Ours (alpha=2.0) | **33.58** | **54.09** | **35.97** |\\n\\n**Table 3. Additional object detection experiments with alpha values.** We run each experiment three times and report the average results. \\n\\nThe object detection results from tuning the alpha parameter in the MaKD loss show further performance improvements. These findings suggest that our method is also effective for object detection and can be applied to broader tasks in the future.\\n\\nReferences \\n[1] Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" CVPR 2022. \\n[2] Sun, Shangquan, et al. \\\"Logit standardization in knowledge distillation.\\\" CVPR 2024.\"}", "{\"summary\": \"This paper introduces a novel approach to solving computer vision tasks, such as image classification and object detection, by enhancing conventional models' classification capabilities through knowledge distillation from Multimodal Large Language Models (MLLMs). The method involves expanding the dimensionality of the model's original logits, which improves classification accuracy. 
The paper provides numerous ablation experiments and conducts a thorough analysis of the results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper combines traditional network models, such as ResNet, with MLLMs to enhance accuracy in classification and detection tasks.\\n\\n2. It uses multi-aspect questions to extract knowledge from MLLMs, leveraging this knowledge to support classification. \\n\\n3. The experiments are comprehensive.\", \"weaknesses\": \"The approach to utilizing knowledge distillation is a bit unclear\\u2014are you applying this strategy during training, or is it only used in inference? Additionally, there seems to be a lack of consideration for hallucination issues that may arise with GPT-4o during the generation of questions and responses.\", \"questions\": \"1. The approach to utilizing knowledge distillation is somewhat unclear. When are the multi-aspect logits extracted from the MLLM, and how are they incorporated into the model's training or inference objective?\\n\\n2. Given that GPT-4o generates the multi-aspect questions and that the MLLM has not seen images from each category, especially considering these categories are often long-tailed and fine-grained. Do you have any validation or filtering steps in place for the generated questions and responses, or have you considered comparing the generated questions to human-curated ones? Which types of generated questions contribute most to performance improvements.\\n\\n3. When generating responses to the multi-aspect questions for each image, has the potential hallucination issue within the MLLM been considered? How accurately can the MLLM (InternVL) answer these generated questions, and to what extent does the hallucination issue in InternVL affect the accuracy of its responses?\\n\\n4. 
For the object detection task, have you attempted to use other datasets, such as the larger-scale LVIS?\\n\\nI may reconsider my score based on your response to these issues.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comprehensive response and the supplementary experiments, which have helped address several of my initial concerns. However, I believe that the experimental setup still requires some further completion to ensure a more thorough evaluation. Additionally, the level of novelty could be further emphasized. I appreciate the efforts made to refine the work, but I believe further development is needed. Therefore, I have decided to keep my original rating.\"}", "{\"comment\": \"Thank you for your detailed response and the additional experiments, which have addressed some of my concerns and strengthened the conclusions. However, after reviewing the comments from other reviewers, I believe it is necessary for the authors to complete the full experimental setup, such as including evaluations on the ImageNet dataset. This is particularly important given that related works cited in [1][2] also performed experiments on this dataset. Additionally, the novelty of the proposed approach remains somewhat limited. Furthermore, it appears that the performance improvement diminishes when evaluated on larger model settings. I encourage the authors to continue refining this project, and I have decided to maintain my current score.\"}", "{\"comment\": \"We thank the reviewers for their thoughtful and valuable feedback.\\n\\nOur method provides a novel and simple yet effective approach that uses MLLM to infer multi-aspect knowledge, distill diverse information, and improve image classification on both fine-grained and coarse-grained datasets. 
It also shows potential for extending to tasks like KD and object detection. \\n\\nWe are encouraged that reviewers find our method **novel** (gXGp), **straightforward and easy to follow** (RKH6, 2mpd), **insightful** (xDGy), and **reproducible** (2mpd), and that it offers a **simple but effective way** (2mpd, xDGy) to apply multi-aspect knowledge to different tasks. We also believe our approach can be extended to more challenging tasks, such as classification in the medical field.\\nFollowing the reviewers' feedback, we conducted the following experiments: \\n1. Based on the comments from reviewers RKH6, 2mpd, and xDGy regarding the lack of KD baselines, we added experiments using the latest knowledge distillation methods [1, 2]. \\n2. As suggested by reviewers RKH6 and xDGy, we included experiments applying our method to larger models in the classification task. \\n3. To address reviewer RKH6's comment on object detection, we extended our experiments to demonstrate further applicability.\\n4. We incorporated missing figures and addressed specific comments raised by reviewer RKH6. \\n5. In response to reviewer xDGy\\u2019s feedback, we analyzed the additional annotation time required for MLLM to respond to aspects. \\n6. For the hallucination issues pointed out by reviewer gXGp, we explained how our method addresses and resolves these challenges. \\n \\nWe thank all reviewers again for their thoughtful and insightful comments, which will make our paper clearer and more convincing. Please feel free to let us know if you have any additional questions or need further clarifications. \\n\\nReferences \\n[1] Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" CVPR 2022. \\n[2] Sun, Shangquan, et al. \\\"Logit standardization in knowledge distillation.\\\" CVPR 2024.\"}", "{\"comment\": \"Dear reviewer RKH6,\\nThank you for your insightful comments. 
We address your concerns below.\\n\\n**Weakness 1.** The proposed method shows some improvement on some classic CNN-based models but lacks experiments on ViT-based models. \\n\\nFollowing your suggestion, we conducted additional experiments by incorporating the ViT-B/16 and ViT-B/32 models, and have added new results to Table 1.\\n\\n|| StanfordCars ||| Mini-ImageNet |||\\n|:---------:|:------------:|:-----:|:-----:|:-------------:|:-----:|:-----:|\\n| Model | Base | Ours | Gap | Base | Ours | Gap |\\n| ViT-B/16 | 10.88 | **11.62** | +0.74 | 43.41 | **44.18** | +0.77 |\\n| ViT-B/16* | 85.72 | **86.13** | +0.41 | 96.64 | **96.87** | +0.23 |\\n| ViT-B/32 | 9.32 | **9.73** | +0.41 | 41.42 | **42.73** | +1.31 |\\n| ViT-B/32* | 79.00 | **79.98** | +0.98 | 93.41 | **93.50** | +0.09 |\\n\\n**Table 1. Additional experiments with ViT-based models on StanfordCars and Mini-ImageNet**. * indicates that the model was trained using a pretrained model from ImageNet-1K. We run each experiment three times and report the average results. \\n\\nThe hyperparameters were referenced from the ImageNet-1K training results in [3]. We applied our method to both models trained from scratch and models pretrained on ImageNet-1K, and evaluated them on StanfordCars and Mini-ImageNet. The experimental results for the ViT-based models are as follows:\\n- For models trained from scratch, applying our method led to performance improvements (StanfordCars: 9.32 \\u2192 **9.73**, Mini-ImageNet: 41.42 \\u2192 **42.73** on the ViT-B/32 model). 
However, since ViT models trained from scratch on small datasets, unless pretrained, do not benefit from the extensive regularization and data augmentation settings of [3], their performance remains lower compared to the CNN-based models reported in our paper.\\n- For StanfordCars trained from ImageNet-1K pretrained models, applying our method to ViT-B/16 and ViT-B/32 resulted in performance gains of 0.41 and 0.98, respectively.\\n- For Mini-ImageNet trained from ImageNet-1K pretrained models, applying our method to ViT-B/16 and ViT-B/32 resulted in performance gains of 0.23 and 0.09, respectively. \\n\\n**These experimental results demonstrate that our method enhances performance not only for CNN-based models but also for large models such as ViT.**\\n\\n**Weakness 2.** In the knowledge distillation task, the comparison is only done with KD, lacking comparisons with other knowledge distillation methods [1,2]. \\n\\nWe conducted experiments on the other knowledge distillation methods you suggested [1, 2] following the settings described in our paper. The results have been included in Table 2. \\n||StanfordCars||Caltech101||\\n|:---------------:|:---------------:|:-------------------:|:---------------:|:-------------------:|\\n| Teacher | ResNet34(80.93) | EfficientNet(86.41) | ResNet34(75.36) | EfficientNet(80.05) |\\n| Student | ResNet18(77.53) | MobileNetV1(82.84) | ResNet18(73.35) | MobileNetV1(76.64) |\\n| Ours | 83.38 | 85.43 | 75.76 | 79.14 |\\n| KD | 79.62 | 85.11 | 74.53 | 78.71 |\\n| KD + LS | 82.56 | 85.96 | 76.52 | 80.15 |\\n| KD + Ours | **83.44** | 86.34 | 76.70 | 79.70 |\\n| KD + LS + Ours | **83.44** | **86.69** | 77.24 | 80.68 |\\n| DKD | 82.55 | 85.93 | 76.37 | 79.95 |\\n| DKD + LS | 82.82 | 86.13 | 76.57 | 80.39 |\\n| DKD + Ours | 83.23 | 86.43 | 77.41 | 80.95 |\\n| DKD + LS + Ours | 82.99 | 86.63 | **77.43** | **80.99** |\\n\\n**Table 2. Comparisons with DKD and Logit Standardization in knowledge distillation methods on StanfordCars and Caltech101**. 
We run each experiment three times and report the average results.\\n\\nOur method demonstrates performance improvements when extended with [1, 2]. Specifically, on the StanfordCars dataset, where the teacher model is ResNet34 and the student model is ResNet18, our method outperforms other knowledge distillation methods [1, 2] (Ours: 83.38 > DKD [1] + Logit Standardization [2]: 82.82). Also, other results show that extending our method further improves performance overall. **These results demonstrate the potential of our method to be effectively extended to other KD methods and improve their performance**.\\n\\nReferences \\n[1] Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" CVPR 2022. \\n[2] Sun, Shangquan, et al. \\\"Logit standardization in knowledge distillation.\\\" CVPR 2024. \\n[3] Steiner, A., et al. \\\"How to train your vit? data, augmentation, and regularization in vision transformers.\\\" TMLR 2022.\"}
This makes it not only effective for traditional image classification models but also adaptable for tasks such as KD and object detection.\\n\\n**Weakness 4.** While Section 5.5 discusses training time and computational cost, the analysis might be incomplete. The time required for MLLMs to annotate the training dataset should also be considered to provide a more comprehensive assessment of computational demands. \\n\\nThank you for your valuable feedback. We additionally provide the time required for MLLMs to extract aspect responses from the training dataset. For the StanfordCars dataset, it takes approximately 0.83 s \\u00b1 174 ms per image for a single aspect, while for Mini-ImageNet, it takes about 0.942 s \\u00b1 169 ms. The annotation time varies depending on the number of aspects and the size of the dataset.\\n\\n**Question 1.** I noticed that even using random logits leads to performance improvements (Table 3(b)). Could you clarify the underlying reason for this result? \\n\\nThe clarification regarding Table 3(b) on distillation using random logits is as follows. According to studies on Teacher-free KD methods [4], teacher-free logits can also improve student performance when distilled. Similarly, using our extended logits as noise appears to impact performance, potentially as part of dark knowledge or through additional regularization effects. \\n\\nReferences \\n[4] Yuan, Li, et al. \\\"Revisiting knowledge distillation via label smoothing regularization.\\\" CVPR 2020.\"}", "{\"summary\": \"This paper presents a multi-aspect knowledge distillation framework that uses MLLMs to improve model performance in visual understanding and detection tasks. By expanding the model\\u2019s output dimensions, the method distills multi-aspect logits that encapsulate diverse visual and contextual features beyond standard class labels. 
Extensive experiments on various image classification datasets, complemented by thorough ablation studies, underscore the framework's effectiveness and robustness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The core idea is simple but looks effective.\\n2. The paper writing is fluent and easy to follow.\\n3. The paper conducts experiments on six different fine-grained datasets and two different coarse-grained datasets. The results show that the proposed method achieves stable performance improvement, especially on the fine-grained datasets.\\n4. The ablation studies and related visualization are comprehensive and insightful.\", \"weaknesses\": \"1. The evaluation datasets in the paper are relatively small, and the model parameters appear insufficient in 2024. Using ResNet18/34 as the primary model limits the assessment of the framework\\u2019s scalability. It would be valuable to test the framework on a larger dataset, such as ImageNet, and with a more complex model like ResNet101, to assess its effectiveness in a more challenging setting.\\n\\n2. The paper lacks comparisons with other knowledge distillation (KD) baselines, which would provide a clearer benchmark for evaluating the proposed method\\u2019s relative performance.\\n\\n3. The framework could explore additional ways to leverage the knowledge in MLLMs. For instance, distilling logits from the last token output by the MLLM after processing the input image may capture different aspects of visual representation.\\n\\n4. While Section 5.5 discusses training time and computational cost, the analysis might be incomplete. The time required for MLLMs to annotate the training dataset should also be considered to provide a more comprehensive assessment of computational demands.\", \"questions\": \"1. I noticed that even using random logits leads to performance improvements (Table 3(b)). 
Could you clarify the underlying reason for this result?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your constructive feedback. As pointed out, MLLM may not answer some questions correctly. However, its responses are expressed in probabilities, indicating the confidence it has regarding the question. Additionally, our method trains the student model to predict multi-aspect response probabilities similar to those of MLLM, ensuring that it achieves a similar level of knowledge. Furthermore, as illustrated in Figure 3, the average probabilities of aspect questions showed reasonable correctness regarding each aspect in the corresponding question. Our experiments demonstrated notable performance improvements in fine-grained image classification, as well as scalability to ViT, KD, and object detection. We believe our method has significant potential to achieve effective results in vision tasks and can be applied to real-world scenarios, such as in the medical domain.\"}", "{\"summary\": \"This paper starts from the perspective of how humans classify images, where humans typically consider multiple aspects such as context, shape, color, and other features. Motivated by this, the author proposes a multi-aspect knowledge distillation method that utilizes Multimodal Large Language Models (MLLMs) to improve image classification performance. By querying, extracting relevant logits, and expanding the model's output dimensions, the method achieves knowledge learning of both visual aspects and abstract concepts. 
This method enhances the performance of baseline models across numerous experiments, demonstrating the potential of multi-aspect knowledge distillation in computer vision and other tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is written in a clear and straightforward manner, making it easy to quickly grasp the method's approach.\\n2. The paper conducted a lot of experiments, and the figures and tables are well-organized.\\n3. The authors claimed they are the first to offer a novel perspective on distilling multi-aspect knowledge regarding abstract and complex concepts. I have seen the authors' efforts in the design of knowledge transfer.\", \"weaknesses\": \"1. The proposed method shows some improvement on some classic CNN-based models but lacks experiments on ViT-based models.\\n2. In the knowledge distillation task, the comparison is only done with KD, lacking comparisons with other knowledge distillation methods [1,2].\\n3. The improvement in object detection tasks is very limited in Tab 7, and there is no comparison done on currently well-performing object detection methods. Object detection is inherently a more fine-grained visual task than classification. Still, the experiments in this paper do not demonstrate the effectiveness of multi-aspect knowledge distillation in detection.\\n4. The explanation for the poor zero-shot classification performance of MLLMs is missing in Tab 1. Incorrect knowledge could also be distilled to the student model.\\n5. The training curve of the MaKD loss with the number of iterations is missing. The visualization of t-SNE embeddings and the model's multi-aspect responses to a single image are presented in Fig 4 and 5. 
There is no overall evaluation of the model's responses to multiple aspects on the test dataset.\\n\\n[1] Decoupled Knowledge Distillation\\n[2] Logit Standardization in Knowledge Distillation\", \"questions\": \"The paper develops a simple way to distill the multi-aspect knowledge of MLLM to perform image classification using a student model. The experiments show some improvements; however, I still believe that the contributions of this paper are quite limited. Additionally, the baseline models selected in this paper are quite outdated. In summary, the overall technical novelty of the direct injection of knowledge from large models seems incremental.\\n\\n1. As shown in Tab 1, MLLMs perform badly in zero-shot classification on fine-grained image test datasets; how do we ensure that MLLMs provide correct answers across multiple aspects? \\n2. There are questions regarding the task details when extending to object detection: should the input to the MLLM be the object within the box or the entire image? The entire image may contain multiple objects, and the MLLM's response may not be accurate.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
0bswm093Yl
GeneBench: Systematic Evaluation of Genomic Foundation Models and Beyond
[ "Zicheng Liu", "Jiahui Li", "Lei Xin", "Siyuan Li", "Chang Yu", "Zelin Zang", "Cheng Tan", "Yufei Huang", "yajingbai", "Jun Xia", "Stan Z. Li" ]
The Genomic Foundation Model (GFM) paradigm is expected to facilitate the extraction of generalizable representations from massive genomic data, thereby enabling their application across a spectrum of downstream applications. Despite advancements, the lack of an evaluation framework makes it difficult to ensure equitable assessment due to differences in experimental settings, model intricacy, benchmark datasets, and reproducibility challenges. In the absence of standardization, comparative analyses risk becoming biased and unreliable. To surmount this impasse, we introduce GeneBench, a comprehensive benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models. GeneBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies. It systematically evaluates datasets spanning diverse biological domains, with a particular emphasis on both short-range and long-range genomic tasks, and is the first to include the three most important DNA task categories, covering Coding Region, Non-Coding Region, Genome Structure, etc. Our results on GeneBench have led to an interesting discovery: regardless of the number of parameters, the noticeable variation in preference between attention-based and convolution-based models for short- and long-range tasks could offer valuable insights for the future development of GFMs. As a result, we propose a straightforward modified model called Genhybrid, which is an effective and efficient convolution-attention hybrid model suitable for all tasks.
[ "genetic foundation model", "benchmark", "hybrid model" ]
https://openreview.net/pdf?id=0bswm093Yl
https://openreview.net/forum?id=0bswm093Yl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vsAtkM4FMm", "n5snwF0V9A", "ms9UvnNvpr", "mdsmAMvSqX", "k3PxjLkMiZ", "fFElhaKnYS", "KSLXmiuUXX", "Ha2l5tdBZG", "EkW7fjPQpf" ], "note_type": [ "official_comment", "comment", "official_review", "official_review", "official_review", "comment", "official_review", "comment", "official_comment" ], "note_created": [ 1731463331861, 1731506581494, 1730670340693, 1730745191352, 1730440865692, 1737342357638, 1730694599566, 1731437615420, 1731548980133 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3577/Authors" ], [ "~Yu_Bo1" ], [ "ICLR.cc/2025/Conference/Submission3577/Reviewer_jV9G" ], [ "ICLR.cc/2025/Conference/Submission3577/Reviewer_MdoR" ], [ "ICLR.cc/2025/Conference/Submission3577/Reviewer_u9EG" ], [ "ICLR.cc/2025/Conference/Submission3577/Authors" ], [ "ICLR.cc/2025/Conference/Submission3577/Reviewer_HAWR" ], [ "~Yu_Bo1" ], [ "ICLR.cc/2025/Conference/Submission3577/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear friend,\\n\\nThank you for your feedback. We\\u2019re sorry to hear about the difficulties reproducing DeepSTARR\\u2019s results. Could you provide more details regarding the specific issues or errors encountered during your experiments? This will help us identify potential discrepancies and support you more effectively. For reference, our experiments were conducted primarily on NVIDIA L40 GPUs, so any information on your setup and configurations might also be helpful in diagnosing the issue.\"}", "{\"comment\": \"Thank you for your response. To clarify, while I was able to reproduce the results in the current paper using the provided scripts, the result was different from the original DeepSTARR paper. Otherwise, I encountered no additional issues.\"}", "{\"summary\": \"The paper introduces GeneBench, a benchmarking suite specifically designed for evaluating Genomic Foundation Models across a wide range of genomic tasks. 
GeneBench includes evaluations of eleven GFMs on forty-four datasets, with tasks spanning various genomic regions and functions. This systematic benchmarking reveals insights based on the performance of GFMs across short- and long-range tasks. Furthermore, the paper proposes a new model that incorporates advantages from two types of models and demonstrates effective performance across all tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper offers a wide-ranging and detailed evaluation of GFMs and also provides concrete guidance for users on how to select models based on different tasks.\", \"The paper provides a clear classification of GFMs and benchmarking tasks.\", \"Beyond benchmarking, the paper proposes a new model, Genhybrid, based on the insights from the experiments, which achieves the best performance on most tasks.\"], \"weaknesses\": [\"This is a benchmarking paper, but it does not include comparisons with other existing genomic benchmarks (e.g., the length of input sequences, types of benchmarked methods, etc.). This limits the motivation for why the research area needs this new benchmark.\", \"While the benchmark focuses on GFMs, it would be better to have simpler baselines without pretraining (e.g., CNNs). Including such models would provide a deeper understanding of the advantages or limitations of GFMs relative to classical models.\", \"The paper doesn't provide sufficient descriptions of several tasks (e.g., Genomic Structure Prediction).\"], \"questions\": [\"Could the authors discuss how GeneBench differs from or improves upon existing genomics benchmarks?\", \"Could the authors provide detailed input-output descriptions for some tasks (e.g., Genomic Structure Prediction)?\", \"For the visualization in Figure 8, it would be helpful if the authors added x-axis and y-axis labels to the heatmaps. 
Similarly, for Figures 10 and 11, what is the range of the y-axis?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a benchmark suite for genomic foundation models (gFMs) called GeneBench that systematically evaluates gFMs on a wide array of datasets across a range of tasks for both short- and long-range sequence prediction. This work also presents a new method called GenHybrid that leverages both SSM and attention-based model architectures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Thorough and comprehensive dataset curation for gLM evaluation that covers both short- and long-range tasks.\\n2. A wide range of gFMs is benchmarked across various model architectures and parameter sizes. \\n3. The introduction of a new hybrid method that leverages both attention-based models and state-space models and outperforms existing models in most of the datasets evaluated. \\n4. In-depth analysis of the benchmarking results, providing insights into the current state of gFMs and their performance differential in various tasks.\", \"weaknesses\": \"1. Lacking tasks beyond classification and regression. Among the most promising applications of gFMs are zero-shot mutation-effect prediction and generative modeling of genomic sequences. This benchmark effort misses both aspects. Many mutation effect databases are available, and a comprehensive curation of a benchmarking dataset will be of vast interest to the community.\\n2. Lacking vertical comparison of different model architectures across model sizes and pre-training schemes. I understand this will be computationally costly, but as a benchmarking effort, this is necessary to paint a more complete picture of the model landscape. \\n3. Missing naive benchmarks and ab initio models for comparison. 
It has been shown in many recent studies that gFMs do not outperform ab initio models trained on task-specific datasets. Adding both ab initio models and naive benchmarks will be very important for a benchmark suite.\", \"questions\": \"1. What\\u2019s the rationale behind the collection of tasks used in this study? They seem to be very similar in terms of task type, and it would be more interesting to see more variation in tasks, such as zero-shot mutational effect prediction and generative sequence modeling.\\n2. The paper is presented as a benchmark suite, but the introduction of GenHybrid seems to be the main emphasis throughout the results section. However, the details of this model are missing from the main text of the paper. What\\u2019s the main focus of this paper? \\n3. Why not include ab initio models trained on task-specific data and naive benchmarks in this suite? The numbers presented in the paper, without context, can hardly be used as a standardized benchmark for future methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study introduces a comprehensive benchmark suite, GeneBench, for evaluating the efficacy of Genomics Foundation Models. The authors systematically evaluated several DNA tasks including coding region, non-coding region, and genome structure. They also provided some insights into the model design and model training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The manuscript is well-organised and the experiments are relatively comprehensive.\", \"weaknesses\": \"Lack of biological insights\", \"questions\": \"1. The authors are encouraged to state the advantages of their method. For example, why should a bioinformatician use their proposed method instead of others?\\n2. The authors should provide some biological insights.\\n3. Some case studies can be provided. 
For example, how the proposed method can be used to facilitate biological findings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces a benchmark framework for evaluating genomic foundation models. The authors gathered a large number of tasks from multiple existing papers for benchmarking. The tasks are classified into either long-range tasks or short-range tasks. A study that compares several existing genomic foundation models using the gathered tasks was performed. In addition, the authors proposed a hybrid approach that is supposed to work well for both short-range tasks and long-range tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Benchmarked a large number of tasks.\", \"Compared all major genomic foundation models developed recently.\"], \"weaknesses\": [\"The training in Pre-training is not clear. Should the optimization in Eq. (1) actually be involved in pre-training? The two categories of targets described in fine-tuning do not appear in Eq. (1) at all.\", \"The manuscript appears to be prepared in a hurry, needing major cleaning up. For example, there is no mention of what the abbreviations used in Figure 4 stand for, and in the caption of the same figure, (a), (b), and (c) are mentioned, but based on the content I believe only (c) is there. As another example, in line 406, the authors mentioned they studied NT with different choices of the number of parameters. However, there is no description of the results and respective discussion to offer any insights. 
As the last example, in Table 7, Caduceus is the second-best performing model, but the authors said it was HyenaDNA in the text.\", \"There is no description of how the proposed Genhybrid was trained.\", \"The value of their main findings may be limited. Since attention-based models were trained using short-length sequences while convolution-based models were trained using long sequences (to consider long context), it is expected that the former is better in short-range tasks and the latter has a potential advantage in long-range tasks.\", \"The effort in data curation is minimal. It looks to me like simply pulling from previous works. Could the authors clarify whether there is any additional processing or validation of the data?\", \"Due to the limitations of the work pointed out by the authors themselves, I do not see how they can answer the last two questions summarized in the second paragraph of their introduction. Could the authors explain?\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
0bmGL4q7vJ
Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage
[ "Zhi Gao", "Bofei Zhang", "Pengxiang Li", "Xiaojian Ma", "Tao Yuan", "Yue Fan", "Yuwei Wu", "Yunde Jia", "Song-Chun Zhu", "Qing Li" ]
The advancement of large language models (LLMs) prompts the development of multi-modal agents, which are used as controllers to call external tools, providing a feasible way to solve practical tasks. In this paper, we propose a multi-modal agent tuning method that automatically generates multi-modal tool-usage data and tunes a vision-language model (VLM) as the controller for powerful tool-usage reasoning. To preserve the data quality, we prompt the GPT-4o mini model to generate queries, files, and trajectories, followed by query-file and trajectory verifiers. Based on the data synthesis pipeline, we collect the MM-Traj dataset that contains 20K tasks with trajectories of tool usage. Then, we develop the T3-Agent via Trajectory Tuning on VLMs for Tool usage using MM-Traj. Evaluations on the GTA and GAIA benchmarks show that the T3-Agent consistently achieves improvements on two popular VLMs: MiniCPM-V-8.5B and Qwen2-VL-7B, outperforming untrained VLMs by 20% and showing the effectiveness of the proposed data synthesis pipeline, which yields high-quality data for tool-usage capabilities.
[ "Multimodal Agents", "Vision-language Model", "Tool usage" ]
Accept (Spotlight)
https://openreview.net/pdf?id=0bmGL4q7vJ
https://openreview.net/forum?id=0bmGL4q7vJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zsYDDa6a8C", "vHwSFkuI2I", "ubWpKHf5Gz", "sUJc3fuXIi", "o89RSKgVSQ", "nEv2GUxa5U", "mw69OYJyo7", "m3IV3kP23J", "lKbzXFBqf9", "jbzGOiaBLB", "dmh5j9SLaU", "dCXeoOCm5Q", "cAbTtGeaSB", "c53A1ea6y3", "ZKtKUv0PgP", "ScGCrs01oH", "QYLKs5UX3X", "LdWA3Mn02z", "KX6llHLmEL", "BPcmgjbKf1", "8y2GyhRquU", "7cAo8OIIK3", "1trkhguxj8", "0eMCRQxj2n" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732619620701, 1732114856723, 1732113154818, 1732117889364, 1732165821397, 1730787622963, 1734462232926, 1732117714700, 1730411505359, 1730570870372, 1732781373612, 1737523619294, 1732115524956, 1732113561395, 1732117987802, 1730489281716, 1732780436544, 1732852515122, 1732200629931, 1732658533551, 1732783585129, 1732739524844, 1732862911311, 1732682391852 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_uszN" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_4Mmx" ], [ "ICLR.cc/2025/Conference/Submission4107/Area_Chair_RsPK" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_snR1" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_uszN" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_uszN" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_zCvz" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_zCvz" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_snR1" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Reviewer_4Mmx" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ], [ "ICLR.cc/2025/Conference/Submission4107/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": \"Dear Area Chairs and Reviewers,\\n\\nWe sincerely thank the Area Chairs and all the Reviewers for their insightful comments and recognition of our work. Your feedback has been invaluable in helping us improve our paper. We are particularly grateful for the acknowledgment of the strengths of our work, including:\\n\\n\\n1. Our multi-modal agent tuning method is **neat** (Reviewer 4Mmx), **novel** (Reviewer uszN), **well-structured** (Reviewer uszN), and **scalable** (Reviewer zCvz).\\n2. Our data synthesis pipeline produces **diverse** and **expressive** tasks (Reviewer snR1).\\n3. The dataset is **robust**(Reviewer zCvz), **comprehensive**, (Reviewer zCvz) and **novel** (Reviewer snR1) with **in-depth statistical analysis** (Reviewer snR1).\\n4. The experiments are **thorough** (Reviewer 4Mmx), use **diverse valuation metrics** (Reviewer uszN), have **significant performance gains** (Reviewer zCvz) and **detailed visualizations** (Reviewer zCvz). \\n\\nBased on your feedback, we have made revisions to our paper. Below is a summary of the major updates.\\n\\n**1. We conduct several new experiments to strengthen our contributions.**\\n\\n(1) We compare the T3-Agent with multiple agents driven by open-source models. 
Results show significant improvements in the GTA and GAIA benchmarks, validating the effectiveness of our dataset and data synthesis pipeline.\\n\\n(2) We tune another model: Qwen2-VL-7B as the controller using the MM-Traj dataset, and it achieves consistent improvements on both benchmarks.\\n\\n(3) We conduct ablation studies on input modalities and dataset sizes to evaluate their individual impacts.\\n\\n(4) We conduct more comprehensive user studies for generated tasks, trajectories, and agent outputs. We recruit more people (30 (for data quality) + 20 (for agent outputs) = 50 persons in total) and assess the quality of more data points (1000 data points in total). Results show the effectiveness of the used verifiers and the obtained dataset.\\n\\n(5) Replacing GPT-4o-mini with Qwen2.5-72B, we verify that our data synthesis pipeline remains effective, and the generated data can improve agent performance as well.\\n\\n**2. We improve the presentation of our paper to make it more clear and readable.**\\n\\n(1) We clarify that the two used verifiers effectively filter low-quality synthetic data, as demonstrated in prior works, our experiments, and user studies.\\n\\n(2) We discuss that the performance gap between T3-Agent and GPT-4o-driven agents primarily arises from differences in model size and training data scale of the controller.\\n\\n(3) We add explanations on how our method scales to more modalities by incorporating additional tools and advanced multi-modal models. Furthermore, synthesizing larger datasets consistently enhances performance.\\n\\n(4) We fix typos and correct an erroneous figure for accuracy.\\n\\nThanks again to the Area Chairs and all the Reviewers! We look forward to any further discussions or questions during the rebuttal phase.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response To Reviewer uszN [1/2]\", \"comment\": \"Thanks for your insightful and thorough review. We will address your concerns below.\\n\\n> **W1**. 
Interpretability: While the approach demonstrates performance gains, the interpretability of results remains limited. Additional analyses, such as ablation studies or attention maps, would be beneficial to understand how each modality contributes to the decision-making process. For example paper first generates queries first without files before relevant queries, what is the impact if we don't do that, or this is based on some past work/observations?\\n\\n**R:** Thanks for your comments. For the interpretability of our method, we add ablation experiments to show the contributions of different modalities in decision-making. Ablation results on the GTA dataset are shown in Table E, where removing the image modality reduces the performance by 40%, highlighting the importance of input images. \\n\\n**Table E: Ablation on the GTA Benchmark**\\n| Method | AnsAcc | ToolAcc | CodeExec | \\n|-------|------|---|---|\\n|T3 Agent w/o image|10.67 | 25.32 | 20.09 |\\n|T3 Agent w/ image | **52.56** | **65.85** | **80.49** |\\n\\n\\nRegarding your concern \\\"first generate queries without files before relevant queries\\\", we adopted this strategy based on our observations, as it leads to more natural synthetic tasks. For complex tasks involving multiple files, first generating files and then queries often results in less natural tasks, as the generated files within one query may have limited or unrelated connections. By generating queries first and then creating files accordingly, we can ensure stronger coherence and relevance among the files based on the information in queries.\\n\\n> **W2**. Scalability: The paper does not thoroughly address the scalability of the proposed method, particularly as the number of modalities or the dataset size increases. It would be beneficial to test how the method's performance and computational requirements scale with additional modalities or larger datasets. 
For example, experiments that measure latency, memory usage, and accuracy as more data is introduced could illustrate the framework's robustness and its viability in resource-constrained or high-throughput environments.\\n\\n**R:** For the scalability of modalities, our method is able to extend to additional modalities by incorporating more tools and leveraging advanced multimodal models. For example, to extend our method to the video modality, we can integrate a video search model into the data synthesis pipeline and replace the MiniCPM-V model with a video-language model for the agent. This approach ensures seamless adaptation to new modalities while maintaining efficiency and coherence.\\n\\nFor the scalability of dataset size, we add experiments in Table F to show the agent's performance on the GTA benchmark as the dataset size increases. As the amount of data increases, the agent achieves better performance, memory consumption remains constant, and training time increases linearly. Compared with the accuracy improvements, we think that the consumption of memory and time is acceptable. \\n\\n**Table F: Ablation on the GTA Benchmark**\\n| Dataset size | 6K | 12K | 20K | \\n|--|--|--|--|\\n|Accuracy|43.59%|48.08%|52.56%|\\n|Memory|214 GB|214 GB|214 GB|\\n|Training Time|276 mins|532 mins|946 mins|\\n\\nWe agree with the reviewer about the importance of the scalability of modalities and dataset size. However, our current work mainly focuses on demonstrating the effectiveness of the multimodal agent tuning method. We will expand to additional modalities and explore engineering optimizations in the future, to enhance the applicability of our method by using fewer resources but for more tasks.\\n\\n> **W3**. User Study: To evaluate the practical usability of the framework, a small user study or qualitative feedback from users would provide valuable insights into the query handling experience. 
Specifically, gathering feedback on aspects like ease of use, perceived accuracy, responsiveness to complex queries, and the intuitiveness of the tool-usage process could highlight areas for refinement in real-world settings.\\n\\n**R:** Thanks for your comments. We add a user study about agent outputs on the GTA benchmark. We recruited 20 participants, each evaluating 20 tasks with agent outputs, where the agent is with or without fine-tuning. The outputs of the agent (w/ or w/o tuning) were shuffled for each task, and the participants were not informed about the source, ensuring an unbiased assessment. The participants were asked to indicate their preference between the two agent outputs, based on accuracy, helpfulness, and relevance. We measured the percentage of preferences, as shown in Table G. Outputs from the tuned agent received a significantly higher preference, indicating its better performance in solving practical tasks.\\n\\n**Table G: User study for agent outputs on the GTA benchmark.**\\n| Agent w/o tuning is better | Tie | Agent w/ tuning is better|\\n|--|---|--|\\n|21%|13%|66%|\"}", "{\"title\": \"Response To Reviewer 4Mmx [1/2]\", \"comment\": \"Thanks for your insightful and thorough review. We will address your concerns one by one.\\n\\n> **W1**. Verifying the output of an LLM by the LLM itself does not seem accurate. I am skeptical about the quality of the generated MM-Traj dataset.\\n\\n**R:** Using LLMs to verify the quality of LLM outputs has shown effectiveness in multiple works, such as [a] verifying the generated instructions and [b] verifying the generated plans. Inspired by them, we argue that using LLMs can also verify the synthetic tasks and trajectories. The ablation experiments about the used verifiers are shown in Table 5, where the verifiers lead to about 2% improvements on both GTA and GAIA benchmarks. 
Meanwhile, a more solid user study (involving more people and more data points) is presented in the response to W2, which also confirms that the verifiers can filter out low-quality data. \\n\\n\\nTo evaluate the quality of the remaining data in the MM-Traj dataset, we have compared models with and without using the MM-Traj dataset, as shown in Table 2 and Table 3 of the manuscript. Using our dataset leads to 18.59% and 7.88% improvements on the GTA and GAIA benchmarks, respectively, demonstrating the effectiveness of the MM-Traj dataset.\\n\\n\\n[a] Self-Instruct: Aligning Language Models with Self-Generated Instructions. ACL 2023.\\n\\n[b] APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets. NeurIPS 2024. \\n\\n\\n\\n> **W2**. You need quantitative verification for the dataset. (It is not clear whether a user study involving a few people on 100 data points out of 15K would provide sufficient confidence.)\\n\\n\\n**R:** To address your concern, we conduct a more extensive and robust user study during rebuttal. Specifically, we recruited 30 participants with rich experience in programming and AI to evaluate the quality of the generated tasks and trajectories. Each participant was tasked with assessing 20 tasks and 20 trajectories, randomly selected and mixed from both the MM-Traj dataset and the filtered-out data. There are 600 tasks and 600 trajectories in total. Notably, the participants were blinded to the source of the data to ensure unbiased evaluations. Participants rated the quality of tasks and trajectories on a range from 1 to 10, where a higher score indicates better quality. The average scores, as shown in Table A, show that the MM-Traj dataset significantly outperforms the filtered-out data in both task and trajectory quality. 
These results confirm the effectiveness of our verification process.\\n\\n\\n\\n\\n**Table A: User study for the verification process, where 30 persons were recruited and each one was assigned with 20 tasks and 20 trajectories.**\\n| MM-Traj Task | MM-Traj Trajectory | Filtered out Task | Filtered out Trajectory |\\n|--|-|-|-|\\n|8.32|8.67|6.36|6.38|\\n\\n\\n> **W3**. Experimental results on the GTA benchmark are more promising than those on the GAIA benchmark. However, the overall performance of T3-agent is not superior to that of the other agents in comparison. Specifically, on the GAIA benchmark, the HF agent with GPT-4o performs twice as well as the T3-Agent.\\n\\n**R:** Compared to the GTA benchmark, the GAIA benchmark is more challenging due to its more complex tasks and longer trajectories, which demand stronger context understanding and reasoning capabilities. Although the HF agent using GPT-4o achieves higher performance than ours, it is important to note that GPT-4o benefits from significantly larger-scale training data and larger model size. In contrast, our agent is built on the MiniCPM-V-8.5B model, an open-source model with 8.5B parameters, fine-tuned on the MM-Traj dataset of 20K samples. The scales are considerably smaller than those of GPT-4o in both model size and data volume, which understandably lead to performance differences. Therefore, directly comparing our agent with agents using GPT-4o is not entirely fair.\\n\\nTo provide more comprehensive and fair comparisons on the GAIA benchmark, we include results from agents built with open-source models of similar sizes, such as LLaVA-NeXT-8B, InterVL2-8B, and Qwen2-VL-7B, as shown in Table B. These comparisons demonstrate the competitive performance of T3-Agent within the open-source ecosystem. 
We will add the comparisons in a revised version.\\n\\n**Table B: Comparisons on the GAIA benchmarks with open-source models**\\n| Method | Controller | AnsAcc | Level 1 | Level 2 | Level 3 |\\n|--|-|-|-|--|-|\\n|HF Agent|LLaVA-NeXT-8B|3.64|9.43|1.16|0.00|\\n|HF Agent|InternVL2-8B|4.85|7.55|4.65|0.00|\\n|HF Agent|Qwen2-VL-7B|9.70|16.98|8.14|0.00|\\n|HF Agent|MiniCPM-V-8.5B|7.27|13.21|5.81|0.00|\\n|T3 Agent|Tuned MiniCPM-V-8.5B|**15.15**|**26.42**|**11.63**|**3.84**|\"}", "{\"title\": \"Response To Reviewer snR1 [1/2]\", \"comment\": \"Thanks for your insightful and thorough review. We will address your concerns below.\\n\\n> **W1**. In Section 3.3, this work states, \\u201cFor other needed files, we prompt GPT-4o mini to extend the file content and generate Python code to produce the files.\\u201d However, the methodology for file generation remains unclear. For example, if a file is required to be an MP4 video of Taylor Swift\\u2019s latest album, it\\u2019s uncertain how this content could be generated through Python code alone. Furthermore, if GPT-4o mini generates the Python code to produce such files, it raises concerns about data quality and how the model ensures that the generated content is not hallucinated.\\n\\n**R:** In this work, we do not generate video files (e.g., MP4, MOV), and the current T3-Agent is not designed to handle video-based tasks. The primary reason is the lack of sufficiently reliable video tools. While we have explored video tasks, we found that even the most advanced tools (e.g., Gemini 1.5 Pro) struggle with challenges requiring fine-grained video understanding, complex video reasoning over long video sequences, and deep semantic interpretation. \\n\\nIn future work, we will address the challenges by exploring more advanced video tools and integrating them into the T3-Agent. We also acknowledge that hallucination remains a significant challenge. 
To address this, rather than using GPT-4o to generate Python codes for video creation, we will adopt a retrieval-based strategy (similar to the generation strategy of image tasks) to mitigate such risks in video tasks.\\n\\n\\n> **W2**. While including a human study to assess dataset quality is commendable, having only five experienced raters for a subset of the data may be too limited, potentially introducing biases based on individual preferences. Gathering feedback from a larger pool of participants, even with fewer data points per person, could strengthen claims about the dataset's effectiveness. Additionally, comparing MM-Traj to filtered-out data may not yield meaningful insight. Instead, comparisons with other established tool-usage datasets would likely provide more meaningful insights.\\n\\n**R:** Thanks for your comments. We add a more comprehensive user study during rebuttal, where we recruit 30 persons with rich programming and AI experience to evaluate the generated tasks and corresponding trajectories. Each person is tasked with assessing 20 tasks with trajectories, which are randomly selected and mixed from both MM-Traj and filtered-out data. We ask each one to provide scores (1-10) for the task quality and trajectory quality, and they do not know whether the data is from MM-Traj or filtered-out data. A low score means the quality is bad and a high score means its quality is good. The average scores are shown in Table J. The results clearly show that the used verifiers remove low-quality data. Due to the time limitation, we will further expand the scale of the user study in a revised version.\\n\\n**Table J: User study on the MM-Traj dataset**\\n| MM-Traj Task | MM-Traj Trajectory | Filtered out Task | Filtered out Trajectory |\\n|-------------------|-------------------|-------------------|-------------------|\\n|8.32|8.67|6.36|6.38|\\n\\nThank you for your suggestion to compare MM-Traj with other established tool-usage datasets. 
However, it is difficult to make direct comparisons, due to the domain, complexity, and evaluation differences among these datasets. In the future, we will bridge this gap by introducing a unified inference formulation among these datasets and further conducting fair comparisons.\"}", "{\"title\": \"Response to Authors comment\", \"comment\": \"Thank you for addressing my concern and I am happy to see that there is a constant memory use even with increasing the dataset size. Regarding W3 (user study), what are the details of these 20 participants, as in the paper I see that 5 persons were used to evaluate the tasks and trajectories generated. What is the reason for this difference? Are these participants familiar with the whole setup beforehand? Suppose the evaluators are all from the same lab as the authors this has a potential of creating bias in user study.\\n\\nApart from user study my other concerns are addressed.\"}", "{\"summary\": \"This paper proposes a method for multi-modal agent tuning for tool usage and presents a dataset designed to train an agent for tool usage. The authors claim that their T3-Agent achieves significant improvements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The idea of using an LLM to separately generate queries, files, and trajectories, followed by a query-file verifier and trajectory verifier is neat.\\n2) The paper addresses the problem of using the appropriate array of tools corresponding to the information relevant to a query well. \\n3) The experiments are thorough.\", \"weaknesses\": \"1) Verifying the output of an LLM by the LLM itself does not seem accurate. I am skeptical about the quality of the generated MM-Traj dataset.\\n2) You need quantitative verification for the dataset. 
(It is not clear whether a user study involving a few people on 100 data points out of 15K would provide sufficient confidence.)\\n3) Experimental results on the GTA benchmark are more promising than those on the GAIA benchmark. However, the overall performance of T3-agent is not superior to that of the other agents in comparison. Specifically, on the GAIA benchmark, the HF agent with GPT-4o performs twice as well as the T3-Agent.\\n4) If the T3-Agent\\u2019s performance were clearly superior, I would be more optimistic about the generated dataset. However, the results seem to support my doubts about the dataset.\", \"minor_comment\": \"In Tables 2 and 3, the best results should be highlighted in bold.\", \"questions\": \"Please address the concerns raised in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents an approach to improve the tool-use capability of vision-language models using synthetic tool-use dataset. The pipeline for synthetic data consists of three steps. First, LLMs generate query tasks using a tool-aware prompt. Second, task relevant images are retrieved or generated. Finally, using the ReACT framework, trajectories containing steps, thought, and code are generated for each generated task. Finetuning a VLM on this dataset shows considerable improvements over off-the-shelf open-weights VLM on the GTA and GAIA benchmarks.\\n\\nReviewers agreed that the synthetic data generation pipeline proposed in the paper is novel and scalable. Having a verification mechanism integrated into the pipeline was also seen as a strength by the reviewers. Reviewers were also happy with the experimental setup, and significant performance gains on the benchmarks.\", \"additional_comments_on_reviewer_discussion\": \"During the reviewer discussion period, all reviewers were satisfied with the authors responses. 
I am summarising the discussion here:\\n\\nThe authors conducted a more thorough user study (as requested by Reviewers 4Mmx, uszN, snR1) and found that trajectories and tasks that were filtered out scored lower than those that were part of the final dataset, indicating the effectiveness of the proposed verification process. \\n\\nThe authors also provided more baselines of the same size as the proposed approach for the GAIA benchmark to put the work in perspective. Adding this to the final manuscript will be useful. \\n\\nReviewer 4Mmx had concerns about the quality of the generated dataset and its effectiveness. In response, the authors conducted additional experiments with Qwen2-VL along with MiniCPM, and versions finetuned on the synthetic dataset showed improvements on both GTA and GAIA, showing the effectiveness of the dataset.\\n\\nAuthors should also add the discussion with uszN about reasons for improvement. Additional ablations shown in Table E and Table F could be added to the appendix. \\n\\n\\nIt is also encouraging to see that the proposed data pipeline also works with open-weights models. The authors show that replacing GPT-4o-mini with Qwen2.5-72B during the synthetic data pipeline also leads to improvement (albeit a smaller improvement).\"}", "{\"title\": \"Response To Reviewer zCvz\", \"comment\": \"Thanks for your insightful and thorough review. We will address your concerns below.\\n\\n\\n> **W1**. The T3-Agent exhibits a gap in programming capabilities, which leads to lower accuracy in predicted answers compared to GPT-4o.\\n\\n**R:** The performance gap in programming can likely be attributed to the differences in model size and training data of the agent controller. \\nCompared with the MiniCPM-V model in the T3-Agent, GPT-4o benefits from a larger model size and richer training corpus. \\n\\n\\n> **W2**. 
While the paper acknowledges the T3-Agent\\u2019s limited programming capabilities, it does not suggest potential improvements or outline future directions to strengthen this aspect.\\n\\n**R:** Thanks for your comment. In the future, we could improve the programming capabilities of the agent in two ways: (1) Use VLMs that are pre-trained for coding as the controller. (2) Combine code data (such as the PyTraceBugs dataset [d]) with our trajectory data in fine-tuning.\\n\\n[d] PyTraceBugs: A Large Python Code Dataset for Supervised Machine Learning in Software Defect Prediction\\n\\n\\n> **W3**. The reliance on GPT-4o mini throughout the pipeline raises questions about biases and limitations from this closed-source model. Exploring alternative methods or open-source models could enhance transparency and address these limitations.\\n\\n**R:** We chose GPT-4o mini due to its wide recognition within the research community. However, we emphasize that our method is not tied to specific models and can flexibly transfer to open-source alternatives. To demonstrate this point, we add an experiment where the GPT-4o mini model in the data synthesis pipeline is replaced by the open-source Qwen2.5-72B model. Using this setup, we collect 12K new data and fine-tune the MiniCPM-V-8.5B model. The resulting performance is evaluated on the GAIA dataset, with the results presented in Table I. These results demonstrate that the 12K data produces about 6% improvements over the untuned baseline, highlighting the model-agnostic nature of our approach. 
We will include these findings in the revised manuscript to emphasize the flexibility and compatibility of our method with open-source models.\\n\\n\\n**Table I: Agent on the GAIA benchmark.**\\n| Data num | models in data synthesis | AnsAcc | Level 1 | Level 2 | Level 3 |\\n|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|\\n|Untuned agent| N/A | 7.27 | 13.21 | 5.81 | 0.00 |\\n|12K| Qwen2.5-72B | 13.94 | 24.53 | 11.63 | 0.00 |\\n|20K| GPT-4o-mini | 15.15 | 26.42 | 11.63 | 3.84 |\\n\\n\\n> **W4**. What is $p_i$ in Equation 2?\\n\\n**R:** Thanks for pointing it out. $p_i$ should be corrected to $t_i$, denoting the generated thought in the $i$-th step, and Equation 2 is\\n\\n\\\\begin{equation}\\n\\\\min \\\\mathbb{E}_{(F^{\\\\star}, Q, T, C, O, A)\\\\sim \\\\mathbb{D} } [ -\\\\sum^{n}_i P(t_i , c_i| F^{\\\\star}, Q, h_i)],\\n\\\\end{equation}\\n\\nwhere we train the agent controller to fit the thought $t_i$ and code $c_i$. The equation will be revised in a new version.\\n\\n> **Q1**. I lean towards acceptance because the paper introduces a promising approach to enhancing the reasoning capabilities of multi-modal agents. The proposed data synthesis pipeline and the resulting MM-Traj dataset are significant contributions to the field, potentially advancing the state of multi-modal learning. However, the paper lacks a discussion on the T3-Agent's robust programming capabilities and does not explore how these might be improved. I would like the authors to comment on this aspect.\\n\\n**R:** Thanks for your positive comment. To improve the programming capabilities of agents, there are two ways: (1) using VLMs pre-trained for coding and (2) combining code data with our trajectory data in fine-tuning. 
We will add the above discussion in the revised manuscript.\"}", "{\"summary\": \"This work introduces a multi-modal tool-usage data generation pipeline designed to finetune vision-language models (VLMs) for tasks requiring tool-usage reasoning. The pipeline consists of three primary steps. First, a large language model (LLM) is prompted to generate query tasks. Next, relevant images or files are retrieved or generated based on the specified query tasks. Finally, ReAct agents are employed to generate trajectories that address the query task problem, followed by an additional LLM prompt to verify the generated data.\\n\\nThis study also introduces the MM-Traj dataset generated through the proposed scheme and uses it to finetune the MiniCPM-V model to create the T3-agent. The T3-agent's effectiveness is subsequently assessed on the GTA and GAIA benchmarks, showcasing a 20% improvement in performance over untrained VLMs and achieving results comparable to other baseline agents, such as the Lego Agent, Warm-up Act Agent, and HF Agent.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed data generation pipeline first generates queries independently of any specific files, followed by producing relevant files to align with these queries. This approach allows the pipeline to create more diverse and expressive tasks, unrestricted by file format or quantity limitations.\\n2. This work introduces the novel MM-Traj dataset, containing 20k diverse tool-usage data points, supported by a human study to demonstrate dataset quality. \\n3. This work also performs an in-depth statistical analysis of the dataset and shows that MM-Traj offers broad coverage across various data modalities and knowledge requirements, as shown in Figure 3.\", \"weaknesses\": \"1. 
In Section 3.3, this work states, \\u201cfor other needed files, we prompt GPT-4o mini to extend the file content and generate Python code to produce the files.\\u201d However, the methodology for file generation remains unclear. For example, if a file is required to be an MP4 video of Taylor Swift\\u2019s latest album, it\\u2019s uncertain how this content could be generated through Python code alone. Furthermore, if GPT-4o mini generates the Python code to produce such files, it raises concerns about data quality and how the model ensures that the generated content is not hallucinated.\\n2. While including a human study to assess dataset quality is commendable, having only five experienced raters for a subset of the data may be too limited, potentially introducing biases based on individual preferences. Gathering feedback from a **larger pool of participants**, even with fewer data points per person, could strengthen claims about the dataset's effectiveness. Additionally, comparing MM-Traj to filtered-out data may not yield meaningful insight. Instead, comparisons with other established tool-usage datasets would likely provide more meaningful insights.\\n3. The evaluation results in Table 3 reveal mixed outcomes, with the T3-agent performing significantly worse than other methods, such as HF Agent and Sibyl Agent, on the GAIA benchmark. What could lead to this performance discrepancy on the GAIA benchmark?\", \"questions\": \"1. In Figure 3, the sum of trajectories\\u2014214 + 14,273 + 8,740 + 2,520 + 1,242 + 697 + 199 + 202\\u2014totals 28,087, which exceeds the stated 20k tasks in the abstract. In addition, the paper mentions that only 15k files remain after passing the query-file and trajectory verifiers. What is the final size of the generated dataset?.\\n2. 
Why does GPT-4o mini outperform GPT-4o in Table 2, specifically in the row with HF Agents on AnsACC and CodeExec, given that GPT-4o is expected to be more powerful than GPT-4o mini?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel approach for multi-modal agent tuning, aimed at enhancing agent performance in complex environments through better utilization of multiple data modalities. The authors propose a tuning framework designed to leverage cross-modal data (e.g., visual and text info) to improve agent task performance, with specific emphasis on tool usage within the agent's capabilities. Their T3 agent is a multi-modal agent that can efficiently use tools to solve practical tasks by tuning a VLM as the controller. Evaluations over various datasets show significant improvements using their agent with closed- as well as open-source models. Additionally, they curated a dataset using multimodal information with trajectories of various lengths for a broader study. In this work, the focus is on correct tool selection, and code is given more importance than the widely used JSON schema. In summary, they generate data, tune the VLM, and create a dataset, followed by leveraging the tool agent to make better use of tools.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Innovation in Multi-Modal Interaction: The approach shows potential in pushing the boundaries of how agents interact with cross-modal data sources. By focusing on practical applications of tool usage, this work could offer useful insights into building agents that understand and respond to complex queries across various media.\\n2. Comprehensive Methodology: The paper describes a well-structured experimental setup and provides a clear description of the multi-modal tuning process. 
This includes thoughtful considerations on data processing, model architecture, and task-specific tuning steps, making it easy to follow.\\n3. Evaluation Metrics: The authors employ a diverse set of metrics to evaluate agent performance. This choice not only validates the model\\u2019s accuracy in the tasks but also emphasizes the practical utility of the proposed framework in real-world applications.\", \"weaknesses\": \"1. Interpretability: While the approach demonstrates performance gains, the interpretability of results remains limited. Additional analyses, such as ablation studies or attention maps, would be beneficial to understand how each modality contributes to the decision-making process. For example paper first generate queries first without files before relevant queries, what is the impact if we don't do that, or this is based on some past work/observations?\\n\\n2. Scalability: The paper does not thoroughly address the scalability of the proposed method, particularly as the number of modalities or the dataset size increases. It would be beneficial to test how the method's performance and computational requirements scale with additional modalities or larger datasets. For example, experiments that measure latency, memory usage, and accuracy as more data is introduced could illustrate the framework's robustness and its viability in resource-constrained or high-throughput environments.\\n\\n3. User Study: To evaluate the practical usability of the framework, a small user study or qualitative feedback from users would provide valuable insights into the query handling experience. Specifically, gathering feedback on aspects like ease of use, perceived accuracy, responsiveness to complex queries, and the intuitiveness of the tool-usage process could highlight areas for refinement in real-world settings.\", \"minor_hints\": \"1. sec5.6 typo.. \\\"wen based\\\"--> web based.\\n2. 
sec 3.4- Author(s) mention details can be found in but missed cross-referencing it.\\n3. cross reference missing at end of sec 3.4\", \"questions\": \"Q: Could you please elaborate on how not using the final answer A aligns with your goal? Specifically, how does this choice benefit your approach to enhancing tool-usage capability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors comment (round 2)\", \"comment\": \"Thank you for your responses. My queries have been resolved, and with more participants in the user study, the findings are clearer for the readers. I will maintain my rating and recommend accepting the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response To Reviewer uszN [2/2]\", \"comment\": \"> **Q1**. Could you please elaborate on how not using the final answer $A$ aligns with your goal? Specifically, how does this choice benefit your approach to enhancing tool-usage capability?\\n\\n**R:** We do not use the final answer $A$ in the training objective of the agent controller, as we encourage the controller to leverage tools in solving given tasks, instead of directly producing an answer based on its internal knowledge.\\n\\n> **Minor hints** \\n> 1. sec5.6 typo.. \\\"wen based\\\"--> web based.\\n> 2. sec 3.4- Author(s) mention details can be found in but missed cross-referencing it.\\n> 3. cross reference missing at end of sec 3.4 \\n\\n**R:** Thank you again for your detailed feedback and valuable suggestions. We have corrected these typos and conducted a thorough review of the manuscript to ensure that no other typos remain.\"}", "{\"title\": \"Response To Reviewer 4Mmx [2/2]\", \"comment\": \"> **W4**. If the T3-Agent\\u2019s performance were clearly superior, I would be more optimistic about the generated dataset. 
However, the results seem to support my doubts about the dataset.\\n\\n**R:** In fact, generating high-quality data for multimodal agents is extremely challenging due to the requirements for naturalness, diversity, and complexity of both the tasks and their corresponding trajectories. Our paper proposes an effective pipeline for data generation, resulting in the MM-Traj dataset. Although the T3-Agent does not outperform all agents derived from closed-source models, this does not imply that the MM-Traj dataset is of low quality. Below are the supporting arguments.\\n\\n\\n(1) We have compared the MiniCPM-V model with and without using the MM-Traj dataset, as shown in Table 2 and Table 3. Using our dataset yields improvements of 18.59% and 7.88% on the GTA and GAIA benchmarks, respectively, highlighting the effectiveness of both the MM-Traj dataset and the proposed data synthesis pipeline.\\n\\n(2) We add a more extensive and robust user study that shows the employed verifiers can indeed discard low-quality data. (Details in the response to W2)\\n\\n\\n(3) These agents that achieve better performance than the T3-Agent are based on closed-source models (e.g., GPT-4o) with larger model size and more training data. These factors may primarily contribute to the performance differences. To make more comprehensive comparisons, we compare the T3-Agent with agents driven by open-source models (details are shown in the response to W3). T3-Agent achieves better performance, underscoring the effectiveness of the MM-Traj dataset.\\n\\n\\n(4) We evaluate the MM-Traj datasets on more VLMs. Concretely, we use the MM-Traj dataset to fine-tune another VLM: Qwen2-VL-7B. Results on the GTA and GAIA benchmarks are shown in Table C and Table D, respectively. Compared with the untuned Qwen2-VL-7B model, the tuned model brings 19% and 7% improvements on the two benchmarks. 
The performance improvements across **multiple VLMs** further validate the effectiveness of our dataset.\\n\\n\\nWe will include the above discussion and experiments in the revised version.\\n\\n**Table C: Comparisons on the GTA dataset**\\n| Controller | AnsAcc | ToolAcc | CodeExec | \\n|-------------------|-------------------|-------------------|-------------------|\\n|MiniCPM-V-8.5B|33.97|36.59|56.10|\\n| Tuned MiniCPM-V-8.5B | **52.56** | **65.85** | **80.49** |\\n| Qwen2-VL-7B | 42.31 | 44.85 | 65.19|\\n| Tuned Qwen2-VL-7B | **53.85** | **64.63** | **84.32** |\\n\\n\\n**Table D: Comparisons on the GAIA dataset**\\n| Controller | AnsAcc | Level 1 | Level 2 | Level 3 |\\n|-------------------|-------------------|-------------------|-------------------|-------------------|\\n |MiniCPM-V-8.5B|7.27 | 13.21 | 5.81 | 0.0 |\\n | Tuned MiniCPM-V-8.5B | **15.15** | **26.42** | **11.63** | **3.84** |\\n | Qwen2-VL-7B |9.70|16.98|8.14|0.00|\\n | Tuned Qwen2-VL-7B | **16.97** | **26.42** | **15.12** | **3.84** |\"}", "{\"title\": \"Response To Reviewer snR1 [2/2]\", \"comment\": \"> **W3**. The evaluation results in Table 3 reveal mixed outcomes, with the T3-agent performing significantly worse than other methods, such as HF Agent and Sibyl Agent, on the GAIA benchmark. What could lead to this performance discrepancy on the GAIA benchmark?\\n\\n**R:** The GAIA benchmark is extremely challenging, since it has more complex tasks and longer trajectories, requiring stronger context understanding and reasoning capabilities. It is worth noting that the HF Agent and Sibyl Agent use the GPT-4o model as the controller, which is a closed-source model benefitting from larger-scale training data and larger model size. In contrast, the T3-Agent is developed with the MiniCPM-V model which is an 8.5B open-source model and fine-tuned using 20K data, which are much smaller than the model size and data volume of GPT-4o. Thus, it is not fair to directly compare the T3-Agent with these agents. 
\\n\\nTo make comprehensive comparisons on the GAIA dataset, we compare the T3-Agent with agents using open-source models, including LLaVA-NeXT-8B, InternVL2-8B, and Qwen2-VL-7B, as shown in Table K. The T3-agent has obviously better performance than these agents. This experiment demonstrates the effectiveness of our method. We will add the above discussion and experiments in a revised version.\\n\\n\\n**Table K: Comparisons on the GAIA benchmarks with open-source models**\\n| Method | Controller | AnsAcc | Level1 | Level 2 | Level 3 |\\n|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|\\n|HF Agent|LLaVA-NeXT-8B|3.64|9.43|1.16|0.00|\\n|HF Agent|InternVL2-8B|4.85|7.55|4.65|0.00|\\n|HF Agent|Qwen2-VL-7B|9.70|16.98|8.14|0.00|\\n|HF Agent|MiniCPM-V-8.5B|7.27|13.21|5.81|0.00|\\n|T3 Agent|Tuned MiniCPM-V-8.5B|**15.15**|**26.42**|**11.63**|**3.84**|\\n\\n\\n> **Q1**. In Figure 3, the sum of trajectories\\u2014214 + 14,273 + 8,740 + 2,520 + 1,242 + 697 + 199 + 202\\u2014totals 28,087, which exceeds the stated 20k tasks in the abstract. In addition, the paper mentions that only 15k files remain after passing the query-file and trajectory verifiers. What is the final size of the generated dataset?\\n\\n**R:** Thank you for pointing this out. We put an incorrect Figure 3(d), where some data was inadvertently duplicated in the statistics. We correct it in the revised version. After double-checking, we confirm that there are a total of 20K tasks with 15K files after the two verifiers. This discrepancy between '20K' and '15K' arises because some tasks do not have assigned files and instead, they require searching for information from external web sources.\\n\\n\\n\\n\\n> **Q2**. 
Why does GPT-4o mini outperform GPT-4o in Table 2, specifically in the row with HF Agents on AnsACC and CodeExec, given that GPT-4o is expected to be more powerful than GPT-4o mini?\\n\\n**R:** The reason for the performance discrepancy is that GPT-4o sometimes uses its own knowledge to directly answer the question instead of adhering to the format of \\\"Thought: ..., Code: ...\\\" to call tools. This causes the code parsing error and slightly inferior performance to GPT-4o-mini.\"}", "{\"summary\": \"This paper introduces a new approach for improving tool usage in multi-modal agents by fine-tuning a vision-language model (VLM) controller with synthesized tool-usage data. To overcome the limitations of traditional LLM-driven agents\\u2014such as reliance on prompt engineering and limited reasoning for tool usage across modalities\\u2014the authors create a three-step data synthesis pipeline. First, *Query Generation* uses GPT-4o mini to generate diverse, tool-aware prompts. Next, *File Generation* retrieves images from similar datasets and creates other files programmatically. Finally, *Trajectory Generation* employs a zero-shot agent using GPT-4o mini to solve tasks, capturing thought processes, code, and observations for each step. Quality is controlled through query-file and trajectory verifiers, also based on GPT-4o mini, producing a dataset called MM-Traj. The resulting agent, T3-Agent, uses the ReAct framework and the MiniCPM-V model trained on MM-Traj, enabling versatile tool usage across categories like web search, visual perception, image editing, and file handling. 
Benchmarks on GTA and GAIA demonstrate the T3-Agent\\u2019s significant improvements in tool usage over both untrained VLMs and other state-of-the-art LLM-driven agents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The data synthesis pipeline introduces a scalable, automated approach to generating diverse, complex multi-modal data for tool usage scenarios.\", \"Verification mechanisms embedded within the pipeline enhance data quality, resulting in a robust, comprehensive dataset.\", \"With training on the MM-Traj dataset, the T3-Agent demonstrates significant performance gains, surpassing agents built on closed-source models like GPT-4 in certain benchmarks.\", \"Ablation studies underscore the critical role of data verification in achieving top performance.\", \"The paper includes detailed visualizations of the T3-Agent\\u2019s reasoning process.\"], \"weaknesses\": [\"The T3-Agent exhibits a gap in programming capabilities, which leads to lower accuracy in predicted answers compared to GPT-4o.\", \"While the paper acknowledges the T3-Agent\\u2019s limited programming capabilities, it does not suggest potential improvements or outline future directions to strengthen this aspect.\", \"The reliance on GPT-4o mini throughout the pipeline raises questions about biases and limitations from this closed-source model. Exploring alternative methods or open-source models could enhance transparency and address these limitations.\", \"What is $p_i$ in Equation 2?\"], \"questions\": \"I lean towards acceptance because the paper introduces a promising approach to enhancing the reasoning capabilities of multi-modal agents. The proposed data synthesis pipeline and the resulting MM-Traj dataset are significant contributions to the field, potentially advancing the state of multi-modal learning. However, the paper lacks a discussion on the T3-Agent's robust programming capabilities and does not explore how these might be improved. 
I would like the authors to comment on this aspect.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for your thoughtful review and for raising the score! We\\u2019re thrilled to know that our response addressed your concerns effectively. Your constructive comments have been essential in improving this work, and we truly appreciate the suggestions you\\u2019ve provided.\"}", "{\"comment\": \"Thank you for the updated paper! My concerns have been addressed. I will maintain my positive score.\"}", "{\"title\": \"Response To Reviewer uszN\", \"comment\": \"> Thank you for addressing my concern and I am happy to see that there is a constant memory use even with increasing the dataset size.\\n\\n> Regarding W3 (user study), what are the details of these 20 participants?\\n\\nThank you for your response. We are also glad to see that your other concerns have been addressed.\\n\\nRegarding the 20 participants, they are from multiple universities and research institutions (not in the same lab). While they are all members of the AI community, they come from different research backgrounds and are not familiar with our project beforehand.\", \"the_criteria_for_selecting_participants_were_as_follows\": \"- Participants are required to have a certain level of programming proficiency, as the controller's output consists of Thought and Python Code. Assessing the quality of the Python Code necessitates programming skills.\\n- Participants are unaware of our method, including the data synthesis pipeline and the agent architecture. We only introduce the structure of the trajectories and the meaning of the agent outputs, expecting the participants to evaluate the quality of trajectories and agent outputs objectively.\\n\\n> As in the paper I see that 5 persons were used to evaluate the tasks and trajectories generated. 
What is the reason for this difference? \\n\\n**R:** The 5 participants mentioned in the paper (increased to 30 during the rebuttal period) and the 20 participants referenced here are not the same group. This distinction is made to avoid potential biases in evaluations, such as additional priors about our agent.\\n\\nThe former group primarily evaluated the quality of generated tasks and trajectories via our data synthesis pipeline, while the latter assessed the execution results of the agent.\\n\\n> Are these participants familiar with the whole setup beforehand? \\n\\n**R:** No, the participants are not familiar with the setup beforehand. We only informed them about the agent's current functionality and asked them to rate its performance based on their preferences in terms of accuracy, helpfulness, and relevance. We did not disclose any details about the implementation of our agent or the data synthesis process to avoid evaluation biases caused by prior knowledge.\\n\\n> Suppose the evaluators are all from the same lab as the authors this has a potential of creating bias in user study.\\n\\n**R:** No. They are not from the same lab to reduce the biases in user study. Our participants are from multiple universities and research institutions.\"}", "{\"comment\": \"Thanks for the clarifications. I will raise my scores.\"}", "{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for your thoughtful review and positive feedback! We sincerely appreciate your valuable suggestions on interpretability, scalability, and the user study, which have been instrumental in enhancing this work!\"}", "{\"title\": \"Concerns addressed\", \"comment\": \"Thanks for your comments, I will raise my score.\"}", "{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for your thoughtful review! We are pleased to hear that our response addressed your concerns. 
Your invaluable feedback has played a crucial role in improving this work!\"}", "{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for raising the score! We are delighted to hear that our response could clarify your concerns. Your invaluable feedback has been instrumental in enhancing this work. We deeply appreciate your thoughtful and constructive comments!\"}" ] }
0bcUyy2vdY
Multi-play Multi-armed Bandit Model with Scarce Sharable Arm Capacities
[ "Hanyang LI", "Hong Xie", "Defu Lian", "Enhong Chen" ]
This paper revisits the multi-play multi-armed bandit with shareable arm capacities problem (MP-MAB-SAC), for the purpose of revealing fundamental insights on the statistical limits and data efficient learning. The MP-MAB-SAC is tailored for resource allocation problems arising from LLM inference serving, edge intelligence, etc. It consists of $K$ arms and each arm $k$ is associated with an unknown but deterministic capacity $m_k$ and per-unit capacity reward with mean $\mu_k$ and $\sigma$ sub-Gaussian noise. The aggregate reward mean of an arm scales linearly with the number of plays assigned to it until the number of plays hits the capacity limit $m_k$, and then the aggregate reward mean is fixed to $m_k \mu_k$. At each round only the aggregate reward is revealed to the learner. Our contributions are threefold. 1) \textit{Sample complexity:} we prove a minmax lower bound for the sample complexity of learning the arm capacity $\Omega(\frac{\sigma^2}{\mu^2_k} \log \delta^{-1})$, and propose an algorithm to exactly match this lower bound. This result closes the sample complexity gap of Wang et al. (2022a), whose lower and upper bounds are $\Omega(\log \delta^{-1})$ and $O (\frac{m^2_k \sigma^2}{\mu^2_k} \log \delta^{-1})$ respectively. 2) \textit{Regret lower bounds:} we prove an instance-independent regret lower bound $\Omega( \sigma \sqrt{TK} )$ and an instance-dependent regret lower bound $\Omega(\sum_{k=1}^K\frac{c\sigma^2}{\mu_k^2} \log T)$. This result provides the first instance-independent regret lower bound and strengthens the instance-dependent regret lower bound of Wang et al. (2022a) $\Omega(\sum_{k=1}^K \log T)$. 3) \textit{Data efficient exploration:} we propose an algorithm named \texttt{PC-CapUL}, in which we use prioritized coordination of arm capacities upper/lower confidence bound (UCB/LCB) to efficiently balance the exploration vs. exploitation trade-off. 
We prove both instance-dependent and instance-independent upper bounds for \texttt{PC-CapUL}, which match the lower bounds up to some acceptable model-dependent factors. This result provides the first instance-independent upper bound, and has the same dependency on $m_k$ and $\mu_k$ as Wang et al. (2022a) with respect to the instance-dependent upper bound, even though less information about arm capacity is available in our aggregate reward setting. Numerical experiments validate the data efficiency of \texttt{PC-CapUL}.
[ "Multi-play multi-armed bandit", "scarce sharable arm capacity", "regret bounds" ]
Reject
https://openreview.net/pdf?id=0bcUyy2vdY
https://openreview.net/forum?id=0bcUyy2vdY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xfMXqqtQGb", "xVgtN52fbi", "tMJPptxeuM", "nezYedLnrc", "mFztg83Pmw", "jL1CT2cWD9", "eaKeJ2CNn1", "byR9n8Cuyl", "YPCIll57pU", "WTc6A71UxE", "UQNm82Ql7F", "RZj0oIbxnD", "DSLtpor5Ru", "Cer1ir917T", "ArVEqKYpgw", "01G8mzGPhv" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737524300588, 1732376206423, 1732374755163, 1732804097656, 1732680665143, 1730842014084, 1732374943607, 1730407456155, 1732802733112, 1734857045476, 1732850186492, 1729175277311, 1732758283610, 1732948156039, 1730710026542, 1732377734450 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_fizY" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_P6Qi" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_HpxS" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Area_Chair_fdFv" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_HpxS" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_Xyas" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_Xyas" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ], [ "ICLR.cc/2025/Conference/Submission14140/Reviewer_fizY" ], [ "ICLR.cc/2025/Conference/Submission14140/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your comments and suggestions, and your questions will be answered one by one.\\n\\n\\n1. 
The capacity $m_k$ is not known in advance. There is no requirement on the dependence between $m_k$ and $c$ in this article. The only requirement on the movement cost $c$ is that $c<\\\\mu_k$ for all $k$, so that every unit of occupied resource is expected to increase rather than reduce the utility. This requirement is necessary in our proof, especially when considering the regret lower bound, because it is demanded that both overestimating and underestimating the capacity $m_k$ should generate a regret. \\n\\n\\n2. The main reason is that there are not sufficient plays for every arm to be played with UE freely. Once there are enough plays to arrange UE on all arms, we can use Theorem 2 on each arm separately. When there are not enough plays, some arms are forced to be played with IE. According to the regret lower bound in Theorem 3 and Theorem 4, the larger $\\\\mu_k$ is, the fewer time slots we have to spend on that arm to get $m_k$. Moreover, if the arm with large $\\\\mu_k$ is forced to be played with IE because of the lack of plays, the regret is likely to be large as well (we do not know $m_k$ in advance, so all we can do is use $\\\\hat{\\\\mu}\\\\_{k,t}$ or $\\\\hat{\\\\mu}\\\\_{k,t}m\\\\_{k,t}^u$ or other estimators to predict the regret). In this algorithm and the proof of its regret upper bound, it is shown that $\\\\hat{\\\\mu}\\\\_{k,t}$ is a qualified estimator.\\n\\n3. The only reason for setting $c=0.1$ is to meet the requirement that $c<\\\\mu_k$ for all $k$. The movement cost $c$ is known in advance in a practical scenario, and $c$ is independent of $\\\\mu_k$. Additional experiments with different values of $c$ are conducted, and results are presented in Appendix A.4.\\n\\n4. There are changes in the range of $m_k$ in Appendix A.1. It is shown that the larger $m_k$'s range is, the larger the regret is. 
Additional experiments are done as you required, and you can check the figures depicting the impact of changing $c$ in the Appendix A.\"}", "{\"comment\": \"Thank you for your comment, and your questions will be answered one by one.\\n\\n1. As for the mentioned confusing notations in the first weakness: (1) $\\\\epsilon^{UE}$ is the correct notation; (2) the full name of the cited article is ''Tightening exploration in upper confidence reinforcement learning'';(3) the lemma used in the proof of Theorem 1 is the Theorem 14.2 in Lattimore$\\\\&$Szepesvari(2020). This lemma is also mentioned in the proof of Theorem 3 and it is cited properly there;(4)The mentioned \\\"MP-MAP-SA\\\" should be \\\"MP-MA-SE\\\" actually;(5)\\nA rebuttal version of the article is submitted and the writing quality is improved.\\n\\n\\n2. The question mentioned in the second term in the weaknesses part:(1) Sorry that we are not sure whether more examples or more concrete details on the LLM application are expected. We would appreciate it if you can give a clue on this. (2) Running an LLM service has cost on computing resource, IT operations of the system, system maintenance, etc. Transmitting an end user's query to the commercial LLM server has a communication cost, especially when a query has a long prompt input. The cost $c$ is an aggregate abstraction of the above cost. (3) More detailed analysis of the experimental results is proposed in Section 6 in the rebuttal version and Section A in the appendix.\\n\\n3. The question mentioned in the third term in the weaknesses part: (1) The comparison is fair for the following three reasons. First, the comparison between the results of the example complexity is fair. The example complexity is focused on one particular arm, and the rank of arms is not included. Second, as for the regret bounds, consider the case when $M=N$, which satisfies the conditions in both Wang et al.(2022a) and this article. 
Then the optimal action is the same, and the rank of the arms is not required to be learned. It should be noted that the regret lower and upper bounds in Wang et al. (2022a) are reduced to $\\Omega\\left(\\sum_{k}\\log T\\right)$ and $O\\left(\\sum_{k}\\left(w_k\\sigma^2 m_k^2/\\mu_k^2\\right)\\log T\\right)$. Compared to the bounds in this article, their lower bound is less informative than our $\\Omega\\left(\\sum_k\\log T/\\mu_{k}^2\\right)$ bound, while their upper bound is the same. Third, Wang et al. calculate the regret by decomposing it into several parts. The part that is related to estimating the capacities of the ''optimal arms'' is not related to the ranking of the arms, and the part of regret related to estimating the capacities is bounded as shown above. So it is fair to compare the part that estimates the capacities in the regret bounds. (2) The word ''unstable'' might mislead you about the robustness of the algorithm. The actual meaning is that the denominators of the confidence bounds are preferred to be positive as soon as possible. Otherwise $N-\\sum_{i=1,i\\neq k}^K m_{i,t}^l$ will be the upper bound of the capacity, and this value is not a good estimation. Additional experiments are conducted to compare the convergence speeds of the two estimators.\\n\\n4. The question mentioned in the questions part: The article shows that the sample complexity gap is closed by UE and IE, in which the arms are assigned with UCB and LCB plays respectively. The closed sample complexity gap serves as a clue that the regret bound gap in this problem can be closed by UE and IE as well.\"}", "{\"comment\": \"Thanks a lot for your further suggestion.\\n\\nAdditional explanation about the ordering of arm selection has been added at the beginning of the proof of the regret upper bound (Theorem 6) in the appendix. 
Since the rank plays a significant role in the proof of subsequent Lemma 7 and Theorem 6, with further explanation here, it will be easier for readers to comprehend why the arms should be ranked according to $\\\\hat{\\\\mu}\\\\_{k,t}$ and why this ranking can improve the regret upper bound of the algorithm.\"}", "{\"comment\": \"Thank you for the response. I went through all the answers and reviews. I have no further questions at this point. However, to improve clarity and to avoid ambiguity regarding the ordering of arm selection, the authors could incorporate the relevant information from the Q2 comments into their next version of this paper.\"}", "{\"summary\": \"This paper revisits multi-play multi-armed bandit with shareable arm capacities problem. Improved on previous work Wang et al. (2022a), the paper proposes refined lower and upper bounds for both sample complexity and regret. For sample complexity, the authors propose a minmax lower bound, and give an algorithm that matches the bound. For regret, the authors provide both instance dependent and instance independent regret lower bounds, and find algorithms that match the bounds up to some model-dependent factors.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The work closes the sample complexity gap and narrow the regret gap for the MP-MAB problem. Although the techniques used in the proof are not particularly unique (mostly based on regular UCB and LCB), the conclusions are still very interesting and make sense.\\n2. The work propose numerical simulation to show the advantages of their algorithms.\", \"weaknesses\": \"1. The writing is a bit poor. The paper contains many colloquial expressions, i.e., line 383 \\\"But if\\\", line 390, 403, 405 \\\"And furthermore\\\" \\\"And this\\\".\\n2. The author states in the introduction that the algorithm has applications to LLM inference serving. 
I believe it\\u2019s necessary to provide some LLM-related experiments to support this statement.\", \"questions\": \"1. should we assume that $\\\\mu_k\\\\ge c$ for all $k$? The authors state that the optimal action is always $(m_1,\\\\dots, m_K)$ in line 211. It seems that this only holds when $\\\\mu_k\\\\ge c$ for all $k$.\\n2. what is \\\\mu in Theorem 1. I did not find the definition.\\n3. I am curious about how could the sample complexity in Theorem 2 gets rid of the dependence of N. Intuitively, even there is no noise (sigma = 0), for any algorithm, it still need at least $\\\\log N$ rounds to find the true $m_k$ by binary search. Is the dependence on $N$ hidden in $\\\\xi$?\", \"typos\": \"\", \"line_224\": \"$a_{k,t}$ instead of $a_k$\", \"line_289\": \"for large probability -> with high probability\", \"line_383\": \"to played with -> to play with\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comment, and your questions will be answered one by one.\\n\\n1. For the first question, the answer is yes. If for some $k$ there is $\\\\mu_k<c$, then the arm $k$ can not generate reward. In most real world cases, it is rare to see people paying to occupy resources and get nothing. So we can set the movement cost as a small value. \\n\\n2. That is a mistake in notation. $\\\\mu_k$ is set as $\\\\mu$ here, which is the per unit reward of the arm $k$.\\n\\n3. Sorry that I do not understand the question because there should be no dependence of the play number $N$ in the sample complexity, but I think you may be curious about the absence of $m_k$ in the upper bound of the sample complexity. Sample complexity here is the number of time slots demanded to calculate the true capacity $m_k$. The estimated UCB is not searched from $[N]$ randomly, but calculated with previous information generated by previous actions in the MAB. 
If the confidence intervals in Wang et al. (2022a) are used in this article, there will always be $m_k$ in the upper bound of the sample complexity. But with the refined confidence intervals in this article, the gap between upper and lower bounds is closed up. The sample complexity is different from the regret bound, and the regret upper bound depends on $N$. As for the cases when there is no noise, the upper bound of the sample complexity is actually $2$: one UE and one IE should suffice to get the correct capacity via dividing. So the complexity for all non-negative $\\\\sigma$ can be modified by adding $2$, and this new upper bound is shown in the rebuttal version.\"}", "{\"summary\": \"This paper considers the problem of multi-play multi-armed bandits with scarce shareable arm capacities. Specifically, different from [Wang et al., 2022a], this paper considers the problem where $N\\\\geq \\\\sum_k m_k$ where $m_k$ is the capacity of action $k$. With a modification on the reward function, this paper proposes new sample complexity lower/upper bound that is tight as well as regret lower/upper bound for this problem. Specifically, the author claims that the sample complexity lower bound proven in this paper improves upon the one shown in [Wang et al., 2022a]. Empirical results are also shown to strengthen their theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper first considers this problem with scarce shareable arm capacities and proposes both lower and upper bound for both sample complexity and the regret bound.\", \"Based on the parts that I checked, the proofs look correct to me.\", \"The experiments are also conducted to show superior performance compared to the previous work.\"], \"weaknesses\": [\"One main concern is the motivation of this paper to consider the case where $N\\\\geq \\\\sum m_k$. 
In this case, the problem seems to be easier (in the sense of algorithm design) since you will definitely explore each action sufficiently enough to figure out the exact $m_k$ while in the opposite case $N< \\\\sum m_k$, the problem seems to be harder since you need to decide the exploration amount for the suboptimal $k$. Can the authors explicitly justify the choice of studying the $N\\\\geq \\\\sum m_k$ case and why it is challenging compared to the previous case?\", \"This also leads to the question about the comparison between the lower/upper bound shown in this paper and [Wang et al., 2022a]. While the authors claim better lower bound, I wonder whether the upper/lower bound are comparable in these two cases? Can the algorithm that is derived in this setting adapted to the other? Moreover, I am not sure why equation (5) is more reasonable since it makes sense to me to have the noise's variance larger when $m_k$ or $a_k$ is large.\", \"As for the upper bound, the bounds in Theorem 5 seems to be suboptimal since it seems to be dependent on $\\\\frac{\\\\max_i \\\\mu_i}{\\\\min_i \\\\mu_i}$, which can be large.\", \"I do not understand the lower bound argument shown in Theorem 4. When the cost $c=0$, then this ratio becomes 0, which is surely not informative. In addition, why is the ratio independent of $m_k$? Can the authors explain more on this?\", \"Typos:\", \"Line 223: it -> if\", \"Line 224: a_k -> a_{t,k}?\", \"Line 471: depended -> dependent\", \"Line 751: missing lemma reference.\", \"missing periods at the end of many theorem statements (e.g. 
Theorem 4,5,6..)\"], \"questions\": \"See \\\"Weakness\\\" Section for questions.\\n\\n- I wonder whether the dependency on the cost parameter $c$ can be improved for the regret lower bound.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your further comment, and your questions will be answered one by one.\\n\\n1. An improved version of this article has been submitted already.\\n\\n2. We make a careful balance between generality and customization in developing the model. As for the customization, our model aims to capture the core resources allocation challenge in the scarce resources scenarios, and our model can be applied to LLM, edge computing, cognitive ratio applications, online advertisement placement etc. As for the generality, our model serves as a backbone, which could be further adapted to specific applications with the reward function extended.\\n\\n3. There is not enough time for additional experiments, considering the time your reply was sent to us. Your requirement of error bars in the figures is reasonable and they will be added in the future version.\"}", "{\"metareview\": \"This paper addresses the multi-play multi-armed bandit with shareable capacities problem, presenting results on improved sample complexity, regret lower bounds, algorithms, and regret upper bounds. The primary concern with this paper lies in the subtle differences between the scenarios it addresses and those in prior work, raising questions about the fairness and validity of comparisons with existing results and lower bounds.\\n\\nSpecifically, the paper focuses on cases where the number of plays $N$ exceeds the total amount of capacities $M$. However, this restriction might simplify the problem, and the paper does not provide a convincing explanation to justify this aspect. 
Additionally, there are several areas where the clarity and rigor of the writing, both in terms of narrative and mathematical descriptions, are lacking. \\n\\nFor these reasons, I cannot support the acceptance of this paper at this time.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns regarding the subtle differences between the scenarios addressed in this paper and those in prior work, questioning whether comparisons with existing results and lower bounds are fair and justifiable. Specifically, the paper focuses on cases where the number of plays $N$ exceeds the total amount of capacities $M$, but this restriction might simplify the problem, and the paper fails to provide a convincing explanation to address this issue.\\n\\nAdditionally, the reviewers noted that many parts of the paper lack clarity and rigor, both in terms of textual expression and mathematical descriptions. The authors' rebuttal did not sufficiently alleviate these concerns.\"}", "{\"comment\": \"I thank the authors for their efforts. With the new Theorem 7, the authors improve upon the upper bound rate. However, I still have questions about my other concerns.\\n\\n1. The reason that the regret can be constant when c=0 (in the lower bound construction) still does not make sense to me, since when you double the number of plays, you may violate the condition that \\\\sum a_k \\\\leq N. From another perspective, the proven regret upper bound does have additional leading terms when c=0, showing that the lower bound can be loose. I also feel that m_k should appear in the lower bound, and the current lower bound result that is m_k-free seems to ignore this m_k constraint.\\n\\n2. I also do not understand why the lower bound in Wang et al., 2022 applies here, since the reward model (the noise scale) is different. A more rigorous comparison is needed, I think. 
In addition, a smaller proven lower bound does not show that one problem is easier since a trivial regret lower bound for all problems is 0. A smaller upper bound instead shows this.\"}", "{\"summary\": \"This paper discusses the problem of the Multi-play Multi-armed Bandit Model with Shareable Arm Capacities (MP-MAB-SAC). It tightens the lower bounds for both sample complexity and the cumulative regret compared to the previous work. Besides, this paper proposes corresponding algorithms to match the lower bounds. Finally, the numerical experiments show that the proposed algorithms outperform the existing ones.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The theoretical contributions are nontrivial. This paper shows tighter lower bounds, and then proposes new algorithms to match them. Furthermore, the experiments verify the theories.\", \"weaknesses\": \"I have the following concerns:\\n\\n1. The writing quality of this paper falls below the standards required for publication in ICLR. Issues such as clarity, rigor, and basic grammatical correctness are prevalent. It appears that the authors did not thoroughly review the paper before submission. From a writing perspective, the paper remains in draft form: numerous typos, confusing notations, and grammatical errors hinder readability. For example, (1) in Lemma 2, $\\\\epsilon^{uE}$ should be $\\\\epsilon^{UE}$; (2) in the proof of Lemma 2, \\u201cBourel et al., 2020\\u201d is not even cited; (3) in the proof of Theorem 1, which lemma is used here? Besides, this theorem should be proved more formally; (4) What is the first baseline \\u201cMP-MAB-SA\\u201d in the experiments?\\n\\n2. The explanations provided in the paper are insufficient. (1) In Section 1, more concrete examples of the model's practical applications are needed. (2) The claim that certain changes in settings make the model more suitable for LLMs requires stronger evidence. 
For instance, the movement cost $c$ (which is known to the learner) seems irrelevant. (3) The paper should provide a more in-depth analysis of the experimental results, going beyond mere statements of fact.\\n\\n3. The comparison with the previous work does not seem fair. (1) Since $N \\ge M$ means the learner only needs to learn the capacity $m_k$, without needing to learn the rank of the arms, the learning task seems easier. (2) In lines 307~310, is there any evidence to show stability is getting better? Besides, I\\u2019m kind of confused about this result because there is usually a robustness vs. regret trade-off, which means that increasing stability may (though not always) lead to decreasing performance.\", \"questions\": \"Please try to address the problems in the weaknesses. In addition, since the improvement in results achieved in this paper mainly comes from the careful selection of UCB, I would like to know what kind of inspiration it will bring to future work. This is not necessary, as the theoretical improvement itself is interesting.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thanks for the authors' rebuttal. Generally speaking, my concerns still remain. Here are some follow-up comments:\", \"When reading the updated version, I still find some simple mistakes in the Appendix. I hope the authors will thoroughly read and revise the entire paper to prevent hindrance to the reader's comprehension.\", \"The authors' reference to LLM applications in Lines 96-104 lacks sufficient introduction and explanation. The authors do not give a reasonable argument in the subsequent content, which makes me feel that they are just trying to introduce some unrelated but popular topics.\", \"I appreciate the authors' explanation of the experiments. 
Could you please provide error bars to support the statistical significance of the results?\", \"Considering the limitations of the current version, I've decided to keep my score unchanged.\"]}", "{\"comment\": \"Thanks a lot for your further comment; your questions will be answered one by one.\\n\\n1. You might be confused by the difference between the sample complexity and the regret. Sample complexity measures the number of explorations that a strategy requires to estimate the capacity correctly, and it is proven to be $m_k$-free here. It is mentioned in this article that the sample complexity gap is closed, and the sample complexity lower bound is tight. However, the regret lower bound is not asserted to be tight in this article. Closing the gap between the regret upper and lower bounds is one of the targets of our future work. It is not clear whether the regret can be bounded by a constant if there is no movement cost. The optimal action is not unique when $c=0$, and $a\\_t $ with $a_{k,t}\\geq m_k$ are all optimal actions. The strategy mentioned in our last reply serves as an illustration that some strategies can find the optimal action set without learning the exact $m_k$. Though the doubling strategy is only valid for large $N$ ($N=O(T^{100}2^T)$, for instance), some clues imply the existence of efficient strategies even in conventional settings.\\n\\n2. There are mainly two reasons why the lower bound of Wang et al. (2022a) applies here. (1) The comparison between the lower bound in this article and the one in Wang et al. (2022a) is based on the comparison between the information contained in the reward functions. With a less informative reward function, a more informative regret lower bound is proposed in this article, implying that the lower bound of Wang et al. (2022a) is somewhat loose. 
Considering the case when $\\mu=0,c=0$ on an arm, learning the capacity $m_k$ is impossible in our setting, but possible in Wang et al. (2022a)'s setting by checking the variance. (2) The lower bounds derived with information-theoretic methods are usually estimates of the problem difficulty, and this view is shared by both this article and Wang et al. (2022a). One can check that with the same information-theoretic methods used in this article to get the complexity lower bound, one may get a complexity lower bound modified by multiplying a term of $O(\\exp(-T))$ in Wang et al. (2022a)'s setting.\"}", "{\"summary\": \"This paper studies the multi-play multi-armed bandit problem with shared arm capacity where, in each round, the learner gets to select each arm for a number of pulls capped by the capacity limit, with the goal of maximizing the total reward at the end of the play. The authors propose a new reward function and develop a new algorithm, PC-CapUL, for this problem setting. The developed algorithm provides tighter bounds on sample complexity and regret in comparison to the existing works and efficiently balances exploration and exploitation. The work is applicable to resource allocation problems with capacity constraints, such as LLM inference and many other real-world scenarios.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"\\u2022\\tThe multi-play multi-armed bandit problem is an interesting setting to study and improve the foundations of, as it pertains to many real-world settings including LLM inference serving. The work re-establishes that with an emphasis on theoretical guarantees.\\n\\n\\u2022\\tThe work provides theoretical improvements in sample complexity compared to the existing work on MP-MAB-SAC. 
It tends to close the sample complexity gap found in the previous work in Reference A.\\n\\n\\u2022\\tThe authors also provide a new improved algorithm, PC-CapUL, that performs much better than other existing algorithms and has solid theoretical backing with proven regret bound guarantees.\\n\\n\\u2022\\tThe experiments cover the regimes where the number of arms is larger, which predominantly requires more exploration to take place. The developed algorithm provides much better performance in terms of regret compared to other existing algorithms in this experimental setting.\", \"reference\": \"[A] Xuchuang Wang, Hong Xie, and John C. S. Lui. Multiple-play stochastic bandits with shareable finite-capacity arms. International Conference on Machine Learning, ICML 2022.\", \"weaknesses\": \"\\u2022\\tThe experimental design could have been done much better with the inclusion of better baseline comparisons in addition to the algorithm found in Reference A. Also, utilizing a real-world dataset for evaluation would have further complemented these theoretical results.\\n\\n\\u2022\\tThe readability of the paper could be much improved. Also, a brief intuitive explanation like a proof sketch could be added in the main text to help the reader get the intuitive logic and understanding of the proof techniques. \\n\\n\\u2022\\tA more detailed theoretical comparative analysis, such as how the regret fares against the regret of other algorithms, would make the argument much stronger for the developed PC-CapUL algorithm. Moreover, having such a discussion would also help us uncover insights like how the regret bound behaves in different regimes.\", \"questions\": \"1.\\t$m_k$ is deterministic and well known beforehand as to how many pulls can be made in a round. However, there is a constant movement cost $c$ associated with an arm. 
In the case of an LLM query, the number of pulls corresponds to the number of queries a server instance can handle.\\n\\nAre $m_k$ and the moving cost $c$ dependent in this scenario? If so, how does this implication sit with the theoretical proofs, or do they have to be independent? A clarification on this would help the readers utilize the developed algorithms in many scenarios where the dependencies are crucial. \\n\\n2.\\tWhy is the ordering of plays in the arm selection $a_t$ important? Providing some details on it would avoid ambiguity about whether the objective is to maximize resource utilization or to reach the maximum capacity of the arm.\\n\\n3.\\tAlso, with respect to the movement cost, in the experimental setting it has been assigned an arbitrary value of 0.1. Is there any fundamental reason for that? Also, how can it be evaluated in a practical scenario when it is also coupled with the reward formulation? Adding some details around this can greatly improve the clarity of the work. \\n\\n4.\\tIt would be nice to see how the experiments scale up with varying parameters, such as changing $m_k$, changing the movement cost, etc. This will help us understand the empirical performance of the algorithm much better.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comments, and your questions will be answered one by one.\\n\\n1. First, the choice of studying the $N\\\\geq \\\\sum m_k$ case is mainly attributed to the fact that it is common in the real world that resources are scarce and the competition for these resources is intense. Second, it is not mentioned in the article that the $N\\\\geq \\\\sum m_k$ case is more challenging than the $N< \\\\sum m_k$ case. You might be curious about the fairness of the comparison between the results in both cases, and this question will be answered below.\\n\\n2. (1) The comparison is fair for the following three reasons. 
First, the comparison between the sample complexity results is fair. The sample complexity is focused on one particular arm, and the rank of the arms is not included. Second, as for the regret bounds, consider the case when $M=N$, which satisfies the conditions in both Wang et al. (2022a) and this article. Then the optimal action is the same, and the rank of the arms is not required to be learned. It should be noted that the regret lower and upper bounds in Wang et al. (2022a) are reduced to $\\\\Omega\\\\left(\\\\sum_{k}\\\\log T\\\\right)$ and $O\\\\left(\\\\sum_{k}\\\\left(w_k\\\\sigma^2 m_k^2/\\\\mu_k^2\\\\right)\\\\log T\\\\right)$. Compared to the bounds in this article, their lower bound is less informative than the $\\\\Omega\\\\left(\\\\sum_k\\\\log T/\\\\mu_{k}^2\\\\right)$ bound here, while their upper bound is the same. Third, Wang et al. calculate the regret by decomposing it into several parts. The part that is related to estimating the capacities of the ``optimal arms'' is not related to the ranking of the arms, and the part of the regret related to estimating the capacities is bounded as shown above. So it is fair to compare the part that estimates the capacities in the regret bounds. (2) As for equation (5), the returned reward of equation (5) contains less information, as explained in lines 79-85. It is a reward model of conventional linear bandits with a one-dimensional feature, which is as common as the one with increased noise variance. Moreover, we can get the corresponding regret lower bound with the reward function of equation (1) in the same way, and the lower bound is modified by multiplying the term $\\\\exp({-9TK/2})$. This implies that the problem setting with equation (5) is more difficult than that with equation (1). Consequently, if we can get closed regret upper and lower bounds in the setting of equation (5), the same techniques or insights might work in the setting of equation (1).\\n\\n3. That's correct. 
This bound is also somewhat loose, and we list it as a theorem so that readers can see the similarity between this upper bound and the one in Wang et al. (2022a). Your concern is addressed in the rebuttal version: a new bound is shown explicitly as Theorem 7 in the last part of the appendix.\\n\\n4. When the movement cost $c=0$, consider the case where there are almost infinitely many plays compared with the capacities. We can assign every arm one play in the first round, and double the number of plays assigned to each arm after every round. Because of the absence of movement cost, once the number of plays assigned to arm $k$ exceeds the capacity $m_k$, there would be no regret. So the regret of this strategy remains constant after only a few rounds. This implies that there might be other strategies which optimize the action in much fewer rounds without learning the exact $m_k$. As for the next question, the sample complexity results in Theorem 1 and Theorem 2 show that the sample complexity is independent of $m_k$. And in the proof of the lower bound, we can only assume that the regret generated by a single exploration is $\\\\min(\\\\mu_k-c,c)$. So it is reasonable for the regret lower bound to be independent of $m_k$.\"}" ] }
0bcRCD7YUx
VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers
[ "Sanyuan Chen", "Shujie LIU", "Long Zhou", "Eric Liu", "Xu Tan", "Jinyu Li", "sheng zhao", "Yao Qian", "Furu Wei" ]
This paper introduces VALL-E 2, the latest advancement in neural codec language models that marks a milestone in zero-shot text-to-speech synthesis (TTS), achieving human parity for the first time. Based on its predecessor, VALL-E, this work introduces two significant enhancements: Repetition Aware Sampling refines the original nucleus sampling process by accounting for token repetition in the decoding history. It not only stabilizes the decoding but also circumvents the infinite loop issue. Grouped Code Modeling organizes codec codes into groups to effectively shorten the sequence length, which not only boosts inference speed but also addresses the challenges of long sequence modeling. Our experiments on the LibriSpeech and VCTK datasets show that VALL-E 2 surpasses previous systems in speech robustness, naturalness, and speaker similarity. It is the first of its kind to reach human parity on these benchmarks. Moreover, VALL-E 2 consistently synthesizes high-quality speech, even for sentences that are traditionally challenging due to their complexity or repetitive phrases. The advantages of this work could contribute to valuable endeavors, such as generating speech for individuals with aphasia or people with amyotrophic lateral sclerosis. See https://anonymous/valle2 for demos of VALL-E 2.
[ "Zero-shot Text to Speech Synthesis", "Speech Generation", "Voice Cloning", "Language Modeling", "In-Context Learning" ]
Reject
https://openreview.net/pdf?id=0bcRCD7YUx
https://openreview.net/forum?id=0bcRCD7YUx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qfWhMcEIWF", "qRTy09pQLR", "anVke5LuFV", "Ne39HPnDfh", "HKxBK4VxmM", "8hx2FFRbcc" ], "note_type": [ "official_review", "meta_review", "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1730206412513, 1734539785953, 1730127072913, 1729662112422, 1737524254793, 1729241754912 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13358/Reviewer_DjJg" ], [ "ICLR.cc/2025/Conference/Submission13358/Area_Chair_SmMy" ], [ "ICLR.cc/2025/Conference/Submission13358/Reviewer_cpdk" ], [ "ICLR.cc/2025/Conference/Submission13358/Reviewer_gHCZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13358/Reviewer_rff7" ] ], "structured_content_str": [ "{\"summary\": \"VALL-E 2 is an LM-based TTS model based on VALL-E. It proposes two new methods:\\n\\n1. **Repetition Aware Sampling**: In this method, during the sampling process, the repetition ratio is calculated based on the number of times a token has been generated. If this value exceeds a threshold, tokens are generated randomly from the original distribution.\\\\\\\\\\n2. **Grouped Code Modeling**: This method reduces sequence length by grouping adjacent tokens into fixed-size groups.\\n\\nThanks to these contributions, VALL-E 2 achieves significantly higher performance than the baseline VALL-E, particularly yielding better subjective evaluation results than the ground truth on LibriSpeech test-clean and VCTK.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is written in a clear and accessible manner, enhancing readability and comprehension. 
Notably, VALL-E 2 demonstrates superior performance over ground truth in subjective evaluations, achieving higher scores in both CMOS and SMOS on both the LibriSpeech test-clean and VCTK datasets.\", \"weaknesses\": \"The authors have made an effort to present this work as a promising study; however, upon closer examination, there are numerous concerns that require substantial improvement.\\n\\n- **On Subjective Evaluation Results**: \\n The results discussed as a strength in Figure 1 are fundamentally flawed due to the dataset disparity with other studies (e.g., NaturalSpeech3 [1] uses the Librilight dataset). This undermines fairness in comparison. To ensure meaningful comparison, the authors should replicate NaturalSpeech3, which currently appears to be a good model, and train on the same dataset.\\n\\n- **On Grouped Code Modeling**: \\n While this approach has some merit as a method to reduce the sequence length given the codec model\\u2019s high 75Hz frequency, it is rather na\\u00efve and cannot be considered innovative. In fact, similar efforts have already been undertaken in existing research, such as [2], which the authors should have cited at minimum. Additionally, the method does not lead to significant improvements in either objective or subjective evaluations, suggesting that further refinements are needed.\\n\\n- **On Repetition Aware Sampling**: \\n Although this method appears to address the traditional issue of repetition in models like VALL-E effectively, it is not particularly innovative. In language modeling (LLM) contexts, penalties for repetition have long been in use [3], making the lack of reference to these approaches surprising. While the authors\\u2019 method differs slightly from these established approaches, it would be necessary to compare with them to clarify the method\\u2019s effectiveness. 
Moreover, the existing application of repetition penalties in TTS contexts, as seen in [4], further accentuates this concern.\\n\\n- **On Ablation Studies**: \\n There is a significant lack of ablation studies. The paper includes excessive unnecessary information; for example, the equations related to the model are redundant, and condensing this information would allow the inclusion of ablation studies directly in the main text. The limited experiments in the appendix also lack relevance. Ablations such as the presence or absence of prompts and dataset size variations are not particularly noteworthy, and their results are self-evident. More critical studies, such as comparisons with traditional repetition penalties or ablations involving Vocos (a major change from VALL-E), would have been more appropriate.\\n\\n- **On Baseline Comparisons**: \\n Changing the decoder from VALL-E\\u2019s original to Vocos represents a major shift and warrants stronger emphasis in comparative experiments. Additionally, the fact that subjective evaluation is best when the group size is 1 makes it very challenging to establish differentiation from the baseline.\\n\\n- **Contribution to the Field**: \\n The lack of code and weight release significantly diminishes the contribution of this study to the field.\\n\\n\\n[1]: Ju, Zeqian, et al. \\\"Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models.\\\" ICML 2024.\\\\\\n[2]: Tang, Changli, et al. \\\"Salmonn: Towards generic hearing abilities for large language models.\\\" ICLR 2024.\\\\\\n[3]: Keskar, Nitish Shirish, et al. \\\"Ctrl: A conditional transformer language model for controllable generation.\\\" arXiv preprint arXiv:1909.05858 (2019).\\\\\\n[4]: Casanova, Edresson, et al. 
\\\"XTTS: a Massively Multilingual Zero-Shot Text-to-Speech Model.\\\" INTERSPEECH 2024.\", \"questions\": [\"**Why was Vocos selected?** Given that other Vocoders with potentially better performance are available, why was Vocos specifically chosen? Additionally, what was the necessity of switching the decoder from Encodec\\u2019s original model?\", \"**Reason for the Significant Improvement in SIM**: It is understandable that Repetition Aware Sampling could lead to an improvement in WER; however, it is less clear how this would directly impact SIM. Furthermore, why does the subjective evaluation show significant improvement despite relatively poor objective metrics?\", \"**Lack of Evaluation on Difficult Cases**: The introduction references challenging cases, yet no evaluation related to these cases is provided. Why is this evaluation absent?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes to incorporate a new sampling procedure and a different grouping of codes to improve VALL-E for zero-shot TTS.\\n\\nI recommend a rejection because all reviewers unanimously agree that paper lacks novelty despite the strong performance. \\n\\nThere are various other suggestions provided by the reviewers that the authors should consider to improve the paper. In particular, the comparison across multiple systems should be done carefully to avoid comparing apples and oranges.\", \"additional_comments_on_reviewer_discussion\": \"There was not rebuttal and no discussion between the reviewers and the authors.\"}", "{\"summary\": \"The authors proposed a neural codec language model for zero-shot text-to-speech, enhancing robustness by refining sampled tokens through repetition-aware sampling. 
They further improved robustness and efficiency by applying grouped code modeling, effectively reducing sequence length.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"They enhanced the baseline model, VALL-E, the first neural codec language model, by introducing repetition-aware sampling and grouped code modeling. While the baseline models are prone to issues like word repetition or omission, the proposed methods mitigate these problems and further improve model efficiency.\", \"weaknesses\": \"1.\\t[Grouped Code Modeling1] From an engineering perspective for neural codec language models, the proposed grouped code modeling could improve model performance and efficiency. However, grouped code modeling is already a well-known technique in language models, as seen in works like [MegaByte], [RQ-Transformer], and [Block Transformer].\\n\\n[MegaByte] Yu, Lili, et al. \\\"Megabyte: Predicting million-byte sequences with multiscale transformers.\\\" Advances in Neural Information Processing Systems 36 (2023): 78808-78823.\\n\\n[RQ-Transformer] Lee, Doyup, et al. \\\"Autoregressive image generation using residual quantization.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[Block Transformer] Ho, Namgyu, et al. \\\"Block Transformer: Global-to-Local Language Modeling for Fast Inference.\\\" arXiv preprint arXiv:2406.02657 (2024).\\n\\n2.\\t[Grouped Code Modeling2] Additionally, [UniAudio] and [GPST] have already adopted a similar structure to sample RVQ tokens more efficiently. While there may be slight differences in implementation, they have the same goal\\n\\n[UniAudio] Yang, Dongchao, et al. \\\"UniAudio: Towards Universal Audio Generation with Large Language Models.\\\" Forty-first International Conference on Machine Learning.\\n\\n[GPST] Zhu, Yongxin, et al. 
\\\"Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer.\\\" ACL, 2024.\\n\\n3.\\t[Grouped Code Modeling3] Notably, the classic sequence-to-sequence text-to-speech Tacotron also utilized the grouped spectrogram sampling through a reduction factor.\\n\\n[Tacotron] Wang, Yuxuan, et al. \\\"Tacotron: Towards end-to-end speech synthesis.\\\" arXiv preprint arXiv:1703.10135 (2017).\\n\\n4.\\t[Grouped Code Modeling4] The recently proposed model [MELLE] also claims to predict multiple frames per step, accelerating inference and mitigating robustness issues associated with long-sequence modeling while maintaining strong performance. Moreover, MELLE has been shown to outperform VALL-E2. This hurts the contribution of the proposed method.\\n\\n5.\\t[Repetition Aware Sampling] Recently, Flow-matching and MaskGIT-based text-to-speech models have adopted iterative sampling methods similar to repetition-aware sampling. It would be beneficial for the authors to discuss and compare repetition-aware sampling with these iterative methods, particularly against models like VoiceBox and E2-TTS.Specifically, I hope to see the comparison with VoiceBox and E2-TTS. \\n\\n6.\\t[Weak Baseline] The authors only compared the model with VALL-E. However, VALL-E underperforms compared to VoiceBox, E2-TTS, DiTTo-TTS, and CosyVoice. \\n\\nI sincerely acknowledge the novel contribution of VALL-E in opening the door for neural codec language models; however, the novelty of VALL-E2 does not meet the standards expected for ICLR.\", \"questions\": \"[Q1. Comparison with Low-bitrate Codec] Have you compared the grouped Code Modeling with low-bitrate Codec?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"VALL-E represents a breakthrough in neural codec language modeling for zero-shot text-to-speech synthesis. 
It can synthesize personalized speech from just a 3-second recording, while preserving the speaker's voice, emotion, and acoustic environment. VALL-E uses an autoregressive transformer to model coarse codec codes (1st group of EnCodec) and a non-autoregressive transformer to generate fine codec codes (2nd-8th groups of EnCodec). However, VALL-E faces two key limitations: 1) Stability: Random sampling during inference can cause instability, while small top-p nucleus sampling risks infinite loops. 2) Efficiency: Its autoregressive architecture is constrained by a fixed high frame rate, slowing inference.\\n\\nThe paper introduces VALL-E 2, which addresses the aforementioned issues with two innovations: Repetition Aware Sampling, which stabilizes decoding without increasing computational costs, and Grouped Code Modeling, which reduces sequence length and speeds up inference. These improvements make VALL-E 2 more robust, natural, and efficient in zero-shot TTS, achieving human parity for the first time on benchmarks including LibriSpeech and VCTK. VALL-E 2 can stably generate high-quality speech for complex sentences that are hard to read or contain many repeated phrases.\\n\\nConsidering that VALL-E (pre-printed on January 5, 2023) has not been published in any conference or journal, yet has already garnered 542 citations and opened the field for neural codec models, and that VALL-E 2 builds on this foundation by achieving human parity for the first time, I believe the combining contributions makes it deserving of acceptance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"VALL-E 2 achieving human parity in zero-shot TTS is a promising advancement, marking a new benchmark for text-to-speech systems. 
Its potential applications are particularly promising in assistive technologies for individuals with speech impairments.\", \"The introduction of repetition-aware sampling and grouped code modeling is a simple but effective approach that enhances the model's stability and efficiency in generating speech. These methods could be easily adapted to speech-to-speech language models.\", \"The paper demonstrates strong experimental validation with comprehensive evaluations on datasets including LibriSpeech and VCTK, showing clear improvements in robustness, naturalness, and speaker similarity.\", \"The technical explanations and results are clearly presented, making the contributions and performance enhancements easy to understand.\"], \"weaknesses\": \"See above\", \"questions\": [\"Why does this paper choose Byte-Pair Encoding (BPE) for text tokenization instead of using phonemes? How many BPE tokens are used in the model? Given that large datasets like LibriHeavy typically require thousands of BPE classes, while phoneme-based tokenization usually involves only a few dozen classes, how do you anticipate this choice impacts the model\\u2019s performance?\", \"Including punctuation marks in modeling units could benefit text-to-speech (TTS) systems. I\\u2019m interested to know if the BPE units in this work incorporate punctuation marks. How might this decision impact the model\\u2019s performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work is an extension of VALL-E, which aims to solve two of its problems: 1. inference repetitions --> degraded performance; 2. too long a codec sequence for modeling --> degraded speed.\\n\\nSpecifically, they propose 1. 
Repetition Aware Sampling to remove repetitions by accounting for token repetition in the decoding history, thus improving the synthesis quality, and 2. Grouped Code Modeling to re-organize the codec sequence into groups to shorten the length, thus improving the modeling efficiency.\\n\\nFrom my side, it is a good extension of the VALL-E series that solves its practical issues. But from a research perspective, this work does not convey much novelty or insight, especially given the high requirements of the ICLR conference.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"see above\", \"weaknesses\": \"see above\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }